block: lift setting the readahead size into the block layer
Drivers shouldn't really mess with the readahead size, as that is a VM
concept. Instead set it based on the optimal I/O size by lifting the
algorithm from the md driver when registering the disk. Also set
bdi->io_pages there by applying the same scheme based on max_sectors.
To ensure the limits work well for stacking drivers, a new helper is
added to update the readahead limits from the block limits; it is also
called from disk_stack_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
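The message describes the new helper but does not show it. Below is a
minimal sketch of what it describes, assuming the helper is named
blk_queue_update_readahead, that the md-derived heuristic is "read
ahead at least twice the optimal I/O size, never less than the VM
default", and that the queue still carries its bdi in
q->backing_dev_info (as it did in kernels of this era); name and
details are assumptions, not quoted from the diff:

#include <linux/blkdev.h>
#include <linux/mm.h>	/* VM_READAHEAD_PAGES */

/*
 * Recompute the readahead limits from the current queue limits
 * (sketch; helper name assumed).
 */
void blk_queue_update_readahead(struct request_queue *q)
{
	/*
	 * md heuristic: for readahead of large files to be effective,
	 * read ahead at least twice the optimal I/O size, but never
	 * less than the VM default.
	 */
	q->backing_dev_info->ra_pages =
		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);

	/* io_pages follows max_sectors, converted from 512-byte sectors to pages */
	q->backing_dev_info->io_pages =
		queue_max_sectors(q) >> (PAGE_SHIFT - 9);
}

Because the message says the helper is also called from
disk_stack_limits, a stacking driver such as dm or md would get its
readahead limits recomputed from the combined io_opt and max_sectors of
its component devices, instead of each driver setting ra_pages by hand.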
parent 16ef5101
Showing 11 changed files:
- block/blk-settings.c: 16 additions, 2 deletions
- block/blk-sysfs.c: 2 additions, 0 deletions
- drivers/block/aoe/aoeblk.c: 0 additions, 1 deletion
- drivers/block/drbd/drbd_nl.c: 1 addition, 9 deletions
- drivers/md/bcache/super.c: 0 additions, 3 deletions
- drivers/md/dm-table.c: 1 addition, 2 deletions
- drivers/md/raid0.c: 0 additions, 16 deletions
- drivers/md/raid10.c: 1 addition, 23 deletions
- drivers/md/raid5.c: 1 addition, 12 deletions
- drivers/nvme/host/core.c: 1 addition, 0 deletions
- include/linux/blkdev.h: 1 addition, 0 deletions