block: remove legacy rq tagging
It's now unused, kill it.

Reviewed-by: Hannes Reinecke <[email protected]>
Tested-by: Ming Lei <[email protected]>
Reviewed-by: Omar Sandoval <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
axboe committed Nov 7, 2018
1 parent 2cdf2ca commit 7ca0192
Showing 8 changed files with 3 additions and 517 deletions.
88 changes: 0 additions & 88 deletions Documentation/block/biodoc.txt
@@ -65,7 +65,6 @@ Description of Contents:
3.2.3 I/O completion
3.2.4 Implications for drivers that do not interpret bios (don't handle
multiple segments)
3.2.5 Request command tagging
3.3 I/O submission
4. The I/O scheduler
5. Scalability related changes
@@ -708,93 +707,6 @@ is crossed on completion of a transfer. (The end*request* functions should
be used only if the request has come down the block/bio path, not for
direct access requests which only specify rq->buffer without a valid rq->bio)

3.2.5 Generic request command tagging

3.2.5.1 Tag helpers

Block now offers some simple generic functionality to help support command
queueing (typically known as tagged command queueing), i.e. managing more than
one outstanding command on a queue at any given time.

blk_queue_init_tags(struct request_queue *q, int depth)

Initialize internal command tagging structures for a maximum
depth of 'depth'.

blk_queue_free_tags(struct request_queue *q)

Teardown tag info associated with the queue. This will be done
automatically by block if blk_queue_cleanup() is called on a queue
that is using tagging.

The above handle initialization and teardown; the main helpers during
normal operation are:

blk_queue_start_tag(struct request_queue *q, struct request *rq)

Start tagged operation for this request. A free tag number between
0 and 'depth' is assigned to the request (rq->tag holds this number),
and 'rq' is added to the internal tag management. If the maximum depth
for this queue has already been reached (or if the tag wasn't started for
some other reason), 1 is returned. Otherwise 0 is returned.

blk_queue_end_tag(struct request_queue *q, struct request *rq)

End tagged operation on this request. 'rq' is removed from the internal
book keeping structures.

To minimize struct request and queue overhead, the tag helpers utilize some
of the same request members that are used for normal request queue management.
This means that a request cannot both be an active tag and be on the queue
list at the same time. blk_queue_start_tag() will remove the request, but
the driver must remember to call blk_queue_end_tag() before signalling
completion of the request to the block layer. This means ending tag
operations before calling end_that_request_last()! For an example of a user
of these helpers, see the IDE tagged command queueing support.
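
The start/end contract described above can be modeled in plain user-space C. This is an illustration only: `tag_model`, `tag_init`, `tag_start` and `tag_end` are hypothetical stand-ins that mirror the return convention of blk_queue_start_tag()/blk_queue_end_tag(), not the kernel API itself.

```c
#include <assert.h>
#include <stdlib.h>

/* User-space model of the legacy tag lifecycle: tag_start() assigns the
 * lowest free tag, or fails when the configured depth is exhausted;
 * tag_end() releases the tag for reuse. */
struct tag_model {
	unsigned char *in_use;	/* one byte per tag; models the tag_map */
	int depth;
};

static struct tag_model *tag_init(int depth)
{
	struct tag_model *t = malloc(sizeof(*t));

	t->in_use = calloc(depth, 1);
	t->depth = depth;
	return t;
}

/* Returns 0 and stores the assigned tag on success, 1 if the queue is
 * full -- mirroring blk_queue_start_tag()'s return convention. */
static int tag_start(struct tag_model *t, int *tag)
{
	for (int i = 0; i < t->depth; i++) {
		if (!t->in_use[i]) {
			t->in_use[i] = 1;
			*tag = i;
			return 0;
		}
	}
	return 1;
}

static void tag_end(struct tag_model *t, int tag)
{
	t->in_use[tag] = 0;
}
```

As in the real helpers, a completed request must release its tag (tag_end) before the slot can be handed to a new command.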

3.2.5.2 Tag info

Some block functions exist to query current tag status or to go from a
tag number to the associated request. These are, in no particular order:

blk_queue_tagged(q)

Returns 1 if the queue 'q' is using tagging, 0 if not.

blk_queue_tag_request(q, tag)

Returns a pointer to the request associated with tag 'tag'.

blk_queue_tag_depth(q)

Return current queue depth.

blk_queue_tag_queue(q)

Returns 1 if the queue can accept a new queued command, 0 if we are
at the maximum depth already.

blk_queue_rq_tagged(rq)

Returns 1 if the request 'rq' is tagged.

3.2.5.3 Internal structure

Internally, block manages tags in the blk_queue_tag structure:

struct blk_queue_tag {
struct request **tag_index; /* array of pointers to rq */
unsigned long *tag_map; /* bitmap of free tags */
struct list_head busy_list; /* fifo list of busy tags */
int busy; /* queue depth */
int max_depth; /* max queue depth */
};

Most of the above is simple and straightforward; however, busy_list may need
a bit of explaining. Normally we don't care too much about request ordering,
but in the event of any barrier requests in the tag queue we need to ensure
that requests are restarted in the order they were queued.
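
The tag_map member above is a free-tag bitmap: allocation is a find-first-zero scan, and freeing clears the bit. A minimal user-space sketch of that bookkeeping (the helper names here are stand-ins, not the kernel's bitmap API):

```c
#include <assert.h>
#include <limits.h>

/* Model of the tag_map bitmap in struct blk_queue_tag: bit i set means
 * tag i is busy; a free tag is found with a find-first-zero scan. */
#define BITS_PER_LONG_ (sizeof(unsigned long) * CHAR_BIT)

static int find_first_zero_tag(const unsigned long *map, int depth)
{
	for (int i = 0; i < depth; i++)
		if (!(map[i / BITS_PER_LONG_] & (1UL << (i % BITS_PER_LONG_))))
			return i;
	return -1;			/* all tags busy */
}

static void mark_tag_busy(unsigned long *map, int tag)
{
	map[tag / BITS_PER_LONG_] |= 1UL << (tag % BITS_PER_LONG_);
}

static void mark_tag_free(unsigned long *map, int tag)
{
	map[tag / BITS_PER_LONG_] &= ~(1UL << (tag % BITS_PER_LONG_));
}
```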

3.3 I/O Submission

The routine submit_bio() is used to submit a single io. Higher level i/o
2 changes: 1 addition & 1 deletion block/Makefile
@@ -3,7 +3,7 @@
# Makefile for the kernel block layer
#

-obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \
+obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \
blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
6 changes: 0 additions & 6 deletions block/blk-core.c
@@ -1658,9 +1658,6 @@ void blk_requeue_request(struct request_queue *q, struct request *rq)
trace_block_rq_requeue(q, rq);
rq_qos_requeue(q, rq);

if (rq->rq_flags & RQF_QUEUED)
blk_queue_end_tag(q, rq);

BUG_ON(blk_queued_rq(rq));

elv_requeue_request(q, rq);
@@ -3174,9 +3171,6 @@ void blk_finish_request(struct request *req, blk_status_t error)
if (req->rq_flags & RQF_STATS)
blk_stat_add(req, now);

if (req->rq_flags & RQF_QUEUED)
blk_queue_end_tag(q, req);

BUG_ON(blk_queued_rq(req));

if (unlikely(laptop_mode) && !blk_rq_is_passthrough(req))
2 changes: 0 additions & 2 deletions block/blk-mq-debugfs.c
@@ -112,7 +112,6 @@ static int queue_pm_only_show(void *data, struct seq_file *m)

#define QUEUE_FLAG_NAME(name) [QUEUE_FLAG_##name] = #name
static const char *const blk_queue_flag_name[] = {
QUEUE_FLAG_NAME(QUEUED),
QUEUE_FLAG_NAME(STOPPED),
QUEUE_FLAG_NAME(DYING),
QUEUE_FLAG_NAME(BYPASS),
@@ -318,7 +317,6 @@ static const char *const cmd_flag_name[] = {
static const char *const rqf_name[] = {
RQF_NAME(SORTED),
RQF_NAME(STARTED),
RQF_NAME(QUEUED),
RQF_NAME(SOFTBARRIER),
RQF_NAME(FLUSH_SEQ),
RQF_NAME(MIXED_MERGE),
6 changes: 2 additions & 4 deletions block/blk-mq-tag.c
@@ -530,10 +530,8 @@ u32 blk_mq_unique_tag(struct request *rq)
struct blk_mq_hw_ctx *hctx;
int hwq = 0;

-	if (q->mq_ops) {
-		hctx = blk_mq_map_queue(q, rq->mq_ctx->cpu);
-		hwq = hctx->queue_num;
-	}
+	hctx = blk_mq_map_queue(q, rq->mq_ctx->cpu);
+	hwq = hctx->queue_num;

return (hwq << BLK_MQ_UNIQUE_TAG_BITS) |
(rq->tag & BLK_MQ_UNIQUE_TAG_MASK);
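
The return expression above packs the hardware queue index into the upper bits and the per-queue tag into the lower BLK_MQ_UNIQUE_TAG_BITS bits. A self-contained sketch of that encoding; the constants mirror BLK_MQ_UNIQUE_TAG_BITS/BLK_MQ_UNIQUE_TAG_MASK from the kernel headers, while the helper names are stand-ins for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the constants used by blk_mq_unique_tag(). */
#define BLK_MQ_UNIQUE_TAG_BITS 16
#define BLK_MQ_UNIQUE_TAG_MASK ((1U << BLK_MQ_UNIQUE_TAG_BITS) - 1)

/* Pack hardware queue number and per-queue tag into one 32-bit value. */
static uint32_t unique_tag(uint32_t hwq, uint32_t tag)
{
	return (hwq << BLK_MQ_UNIQUE_TAG_BITS) |
		(tag & BLK_MQ_UNIQUE_TAG_MASK);
}

/* Recover the two halves again. */
static uint16_t unique_tag_to_hwq(uint32_t ut)
{
	return ut >> BLK_MQ_UNIQUE_TAG_BITS;
}

static uint16_t unique_tag_to_tag(uint32_t ut)
{
	return ut & BLK_MQ_UNIQUE_TAG_MASK;
}
```

This round-trips: decoding an encoded value yields the original hwq and tag, which is what lets a driver map a completion back to the hardware queue it was issued on.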
3 changes: 0 additions & 3 deletions block/blk-sysfs.c
@@ -849,9 +849,6 @@ static void __blk_release_queue(struct work_struct *work)

blk_exit_rl(q, &q->root_rl);

if (q->queue_tags)
__blk_queue_free_tags(q);

blk_queue_free_zone_bitmaps(q);

if (!q->mq_ops) {
