Merge tag 'for-5.9/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:

 - DM core fix for incorrect double bio splitting. Keep "fixing" this
   because past attempts didn't fully appreciate the liability relative
   to recursive bio splitting. This fix limits DM's bio splitting to a
   single method and does _not_ use blk_queue_split() for normal IO
   (see the splitting sketch after the commit details below).

 - DM crypt Documentation updates for features added during the 5.9
   merge window.

* tag 'for-5.9/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm crypt: document encrypted keyring key option
  dm crypt: document new no_workqueue flags
  dm: fix comment in dm_process_bio()
  dm: fix bio splitting and its bio completion order for regular IO
torvalds committed Sep 23, 2020
2 parents bffac4b + 4c07ae0 commit a969324
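
The single-method splitting this fix settles on can be pictured with a
minimal userspace C sketch (not kernel code; the function name, the
8-sector limit and the sector numbers are all illustrative). One splitter
walks the request front to back and carves off target-bounded pieces, so
the pieces are submitted, and therefore completed, in order:

    /* Toy model of single-pass bio splitting -- not kernel code. */
    #include <stdio.h>

    #define TARGET_MAX_SECTORS 8    /* illustrative per-target I/O limit */

    static void split_and_process(unsigned int sector, unsigned int nr_sectors)
    {
        while (nr_sectors) {
            unsigned int len = nr_sectors < TARGET_MAX_SECTORS ?
                               nr_sectors : TARGET_MAX_SECTORS;

            /* DM proper would map and submit a clone bio here. */
            printf("process sectors %u..%u\n", sector, sector + len - 1);
            sector += len;
            nr_sectors -= len;
        }
    }

    int main(void)
    {
        /* A 20-sector "bio" is processed as 8 + 8 + 4, strictly in order. */
        split_and_process(0, 20);
        return 0;
    }

Roughly, the bug was that regular IO could be split twice: once by the
(now removed) dm_queue_split(), which re-queued the remainder via
submit_bio_noacct(), and again by __split_and_process_bio(), so fragments
of one request could complete out of order. Doing all the splitting in
__split_and_process_bio() keeps submission and completion ordering intact.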
Showing 2 changed files with 14 additions and 23 deletions.
Documentation/admin-guide/device-mapper/dm-crypt.rst (9 additions, 1 deletion)

@@ -67,7 +67,7 @@ Parameters::
     the value passed in <key_size>.
 
 <key_type>
-    Either 'logon' or 'user' kernel key type.
+    Either 'logon', 'user' or 'encrypted' kernel key type.
 
 <key_description>
     The kernel keyring key description crypt target should look for
@@ -121,6 +121,14 @@ submit_from_crypt_cpus
     thread because it benefits CFQ to have writes submitted using the
     same context.
 
+no_read_workqueue
+    Bypass dm-crypt internal workqueue and process read requests synchronously.
+
+no_write_workqueue
+    Bypass dm-crypt internal workqueue and process write requests synchronously.
+    This option is automatically enabled for host-managed zoned block devices
+    (e.g. host-managed SMR hard-disks).
+
 integrity:<bytes>:<type>
     The device requires additional <bytes> metadata per-sector stored
     in per-bio integrity structure. This metadata must by provided
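
As a usage note for the two new flags documented above: they are optional
boolean parameters in the crypt target's table line, preceded by the
optional-parameter count. A hypothetical invocation (device, cipher, key
size and key description are made-up examples, not taken from this
commit) might look like:

    # aes-xts over /dev/sdb with a 32-byte 'user' keyring key named
    # "mykey", bypassing both dm-crypt workqueues.
    dmsetup create cryptdev --table "0 $(blockdev --getsz /dev/sdb) crypt \
        aes-xts-plain64 :32:user:mykey 0 /dev/sdb 0 \
        2 no_read_workqueue no_write_workqueue"

The leading "2" is the <#opt_params> count required by the crypt target's
table syntax.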
drivers/md/dm.c (5 additions, 22 deletions)

@@ -1724,23 +1724,6 @@ static blk_qc_t __process_bio(struct mapped_device *md, struct dm_table *map,
 	return ret;
 }
 
-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
-{
-	unsigned len, sector_count;
-
-	sector_count = bio_sectors(*bio);
-	len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
-
-	if (sector_count > len) {
-		struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
-
-		bio_chain(split, *bio);
-		trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
-		submit_bio_noacct(*bio);
-		*bio = split;
-	}
-}
-
 static blk_qc_t dm_process_bio(struct mapped_device *md,
 			       struct dm_table *map, struct bio *bio)
 {
@@ -1761,21 +1744,21 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 	}
 
 	/*
-	 * If in ->queue_bio we need to use blk_queue_split(), otherwise
+	 * If in ->submit_bio we need to use blk_queue_split(), otherwise
 	 * queue_limits for abnormal requests (e.g. discard, writesame, etc)
 	 * won't be imposed.
+	 * If called from dm_wq_work() for deferred bio processing, bio
+	 * was already handled by following code with previous ->submit_bio.
 	 */
 	if (current->bio_list) {
 		if (is_abnormal_io(bio))
 			blk_queue_split(&bio);
-		else
-			dm_queue_split(md, ti, &bio);
+		/* regular IO is split by __split_and_process_bio */
 	}
 
 	if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
 		return __process_bio(md, map, bio, ti);
-	else
-		return __split_and_process_bio(md, map, bio);
+	return __split_and_process_bio(md, map, bio);
 }
 
 static blk_qc_t dm_submit_bio(struct bio *bio)
