blk-mq-sched: fix performance regression of mq-deadline
When mq-deadline is used, IOPS of sequential read and sequential write
drops by more than 20% on SATA (scsi-mq) devices compared with the
'none' scheduler. The reason is that the default nr_requests for the
scheduler is too big for small-queue-depth devices, which increases
latency substantially.

Since the principle behind taking 256 requests for the mq scheduler is
based on a queue depth of 128, this patch changes the default to double
of min(hw queue_depth, 128).

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This commit is contained in:

parent 431b17f9d5
commit 32825c45ff

1 changed file with 5 additions and 3 deletions
block/blk-mq-sched.c
@@ -515,10 +515,12 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
 	}
 
 	/*
-	 * Default to 256, since we don't split into sync/async like the
-	 * old code did. Additionally, this is a per-hw queue depth.
+	 * Default to double of smaller one between hw queue_depth and 128,
+	 * since we don't split into sync/async like the old code did.
+	 * Additionally, this is a per-hw queue depth.
 	 */
-	q->nr_requests = 2 * BLKDEV_MAX_RQ;
+	q->nr_requests = 2 * min_t(unsigned int, q->tag_set->queue_depth,
+				   BLKDEV_MAX_RQ);
 
 	queue_for_each_hw_ctx(q, hctx, i) {
 		ret = blk_mq_sched_alloc_tags(q, hctx, i);
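For illustration only (not part of the commit): a minimal, self-contained C
sketch of the new sizing rule, assuming a hypothetical SATA queue depth of 32
and the kernel's BLKDEV_MAX_RQ value of 128. The helper name
sched_default_nr_requests is invented for this example.

/* Illustrative sketch only; the helper name is invented for this example. */
#include <stdio.h>

#define BLKDEV_MAX_RQ	128	/* kernel default at the time of this commit */

static unsigned int sched_default_nr_requests(unsigned int hw_queue_depth)
{
	/* New rule: double the smaller of hw queue depth and BLKDEV_MAX_RQ. */
	unsigned int depth = hw_queue_depth < BLKDEV_MAX_RQ ?
			     hw_queue_depth : BLKDEV_MAX_RQ;
	return 2 * depth;
}

int main(void)
{
	/* Shallow SATA/scsi-mq queue (depth 32): 64 requests, not 256. */
	printf("depth 32  -> nr_requests %u\n", sched_default_nr_requests(32));
	/* Deep queue (depth 256): still capped at 2 * 128 = 256, as before. */
	printf("depth 256 -> nr_requests %u\n", sched_default_nr_requests(256));
	return 0;
}

With a shallow queue, requests no longer pile up in the scheduler far beyond
what the hardware can dispatch at once, which is where the added latency and
the observed IOPS drop came from.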