
Commit

Revert "block/mq-deadline: use correct way to throttling write requests"
The code "max(1U, 3 * (1U << shift)  / 4)" comes from the Kyber I/O
scheduler. The Kyber I/O scheduler maintains one internal queue per hwq
and hence derives its async_depth from the number of hwq tags. Using
this approach for the mq-deadline scheduler is wrong since the
mq-deadline scheduler maintains one internal queue for all hwqs
combined. Hence this revert.

Cc: [email protected]
Cc: Damien Le Moal <[email protected]>
Cc: Harshit Mogalapalli <[email protected]>
Cc: Zhiguo Niu <[email protected]>
Fixes: d47f971 ("block/mq-deadline: use correct way to throttling write requests")
Signed-off-by: Bart Van Assche <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
bvanassche authored and axboe committed Mar 13, 2024
1 parent b874d4a commit 256aab4
Showing 1 changed file with 1 addition and 2 deletions.
block/mq-deadline.c (1 addition, 2 deletions)

--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -646,9 +646,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
+	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
