When a bio has an encryption context, its size must be aligned to its
crypto data unit size. A bio must not be split in the middle of a data
unit. Currently, bios are split at logical block boundaries, but a crypto
data unit size might be larger than the logical block size - e.g. a machine
could be using fscrypt (which uses 4K crypto data units) with an eMMC block
device with inline encryption hardware that has a logical block size of 512
bytes. So we need to support cases where the data unit size is larger than
the logical block size.
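To illustrate the invariant this series enforces, here's a minimal
sketch (not part of the series; the helper name is hypothetical) of
the check that a proposed split point must pass when a bio has an
encryption context:

	/*
	 * With 4096-byte data units on a device with a 512-byte logical
	 * block size, each data unit spans 8 sectors, so a split is only
	 * acceptable at a multiple of 8 sectors; anything else would cut
	 * a data unit in half.
	 */
	static bool split_keeps_data_units_intact(unsigned int split_sectors,
						  unsigned int data_unit_size)
	{
		unsigned int du_sectors = data_unit_size >> SECTOR_SHIFT;

		return IS_ALIGNED(split_sectors, du_sectors);
	}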
Patch 1 introduces blk_ksm_is_empty(), which checks whether a keyslot
manager advertises any crypto capabilities at all (and returns true if
it advertises none). This function helps clean up code a little.
Patch 2 handles the return value of blk_ksm_register() in all callers.
This return value was previously ignored by all callers because the
function could only fail if the request_queue had integrity support,
which the callers ensured would not be the case. But the later patches
in this series will add more cases where this function might fail, so
it's better to just handle the return value properly in all the
callers.
Patches 3 and 4 introduce blk_crypto_bio_sectors_alignment() and
bio_required_sector_alignment() respectively. The former returns the
required sector alignment due to any crypto requirements the bio has.
The latter returns the required sector alignment for any reason. The
number of sectors in any bio (and in particular, the number of sectors
passed to bio_split()) *must* be aligned to the value returned by the
latter function (which, of course, calls the former function to decide
what to return).
Patch 5 updates blk-crypto-fallback.c to respect
bio_required_sector_alignment() when calling bio_split(), so that any split
bio's size has the required alignment.
Patch 6 introduces restrictions on the data unit sizes advertised by a
keyslot manager. These restrictions come about due to the request_queue's
queue_limits, and are required to ensure that blk_bio_segment_split() can
always split a bio so that it has a limited number of sectors and segments,
and that the number of sectors it has is non-zero and aligned to
bio_required_sector_alignment().
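As a concrete example of the computation patch 6 performs (the queue
limits here are hypothetical):

	/*
	 * With max_segments = 4 and no virt_boundary_mask or
	 * chunk_sectors, each segment holds at least SECTOR_SIZE bytes,
	 * so only 4 * 512 = 2048 contiguous bytes are guaranteed to fit
	 * in one bio. On a system with 4K pages:
	 *
	 *   largest_dus = min(PAGE_SIZE,
	 *                     1UL << (fls(4) - 1 + SECTOR_SHIFT));
	 *               = min(4096, 1 << (3 - 1 + 9)) = 2048
	 *
	 * so 512-, 1024- and 2048-byte data unit sizes stay supported,
	 * while 4096 is cleared from the ksm's capabilities.
	 */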
Patch 7 updates get_max_io_size() and blk_bio_segment_split() to
respect bio_required_sector_alignment(). get_max_io_size() now always
returns a value that is aligned to bio_required_sector_alignment(), and
together with Patch 6, this is enough to ensure that if a bio is split,
it is split at a crypto data unit size boundary.
Since all callers of bio_split() should have been updated by the
previous patches, Patch 8 adds a WARN_ON() to bio_split() when sectors
isn't aligned to bio_required_sector_alignment() (the one exception is
bounce.c, which is legacy code that won't interact with inline
encryption).
This patch series was tested by running android xfstests on the SDM630
chipset (which has eMMC inline encryption hardware with a logical block
size of 512 bytes) with test_dummy_encryption, both with and without
the 'inlinecrypt' mount option.
Satya Tangirala (8):
block: introduce blk_ksm_is_empty()
dm,mmc,ufshcd: handle error from blk_ksm_register()
block: blk-crypto: introduce blk_crypto_bio_sectors_alignment()
block: introduce bio_required_sector_alignment()
block: respect bio_required_sector_alignment() in blk-crypto-fallback
block: keyslot-manager: introduce
blk_ksm_restrict_dus_to_queue_limits()
blk-merge: Ensure bios aren't split in middle of a crypto data unit
block: add WARN() in bio_split() for sector alignment
block/bio.c | 1 +
block/blk-crypto-fallback.c | 3 ++
block/blk-crypto-internal.h | 18 ++++++++
block/blk-merge.c | 49 ++++++++++++++--------
block/blk.h | 12 ++++++
block/keyslot-manager.c | 72 ++++++++++++++++++++++++++++++++
drivers/md/dm-table.c | 14 ++-----
drivers/mmc/core/crypto.c | 6 ++-
drivers/scsi/ufs/ufshcd-crypto.c | 6 ++-
include/linux/keyslot-manager.h | 2 +
10 files changed, 150 insertions(+), 33 deletions(-)
--
2.31.0.291.g576ba9dcdaf-goog
Handle any error from blk_ksm_register() in the callers. Previously,
the callers ignored the return value because blk_ksm_register() wouldn't
fail as long as the request_queue didn't have integrity support too, but
as this is no longer the case, it's safer for the callers to just handle
the return value appropriately.
Signed-off-by: Satya Tangirala <[email protected]>
---
drivers/md/dm-table.c | 3 ++-
drivers/mmc/core/crypto.c | 6 ++++--
drivers/scsi/ufs/ufshcd-crypto.c | 6 ++++--
3 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index db18a58adad7..1225b9050f29 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1372,7 +1372,8 @@ static void dm_update_keyslot_manager(struct request_queue *q,
/* Make the ksm less restrictive */
if (!q->ksm) {
- blk_ksm_register(t->ksm, q);
+ if (WARN_ON(!blk_ksm_register(t->ksm, q)))
+ dm_destroy_keyslot_manager(t->ksm);
} else {
blk_ksm_update_capabilities(q->ksm, t->ksm);
dm_destroy_keyslot_manager(t->ksm);
diff --git a/drivers/mmc/core/crypto.c b/drivers/mmc/core/crypto.c
index 419a368f8402..616103393557 100644
--- a/drivers/mmc/core/crypto.c
+++ b/drivers/mmc/core/crypto.c
@@ -21,8 +21,10 @@ void mmc_crypto_set_initial_state(struct mmc_host *host)
void mmc_crypto_setup_queue(struct request_queue *q, struct mmc_host *host)
{
- if (host->caps2 & MMC_CAP2_CRYPTO)
- blk_ksm_register(&host->ksm, q);
+ if (host->caps2 & MMC_CAP2_CRYPTO) {
+ if (WARN_ON(!blk_ksm_register(&host->ksm, q)))
+ host->caps2 &= ~MMC_CAP2_CRYPTO;
+ }
}
EXPORT_SYMBOL_GPL(mmc_crypto_setup_queue);
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
index d70cdcd35e43..f47a72fefe9e 100644
--- a/drivers/scsi/ufs/ufshcd-crypto.c
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -233,6 +233,8 @@ void ufshcd_init_crypto(struct ufs_hba *hba)
void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
struct request_queue *q)
{
- if (hba->caps & UFSHCD_CAP_CRYPTO)
- blk_ksm_register(&hba->ksm, q);
+ if (hba->caps & UFSHCD_CAP_CRYPTO) {
+ if (WARN_ON(!blk_ksm_register(&hba->ksm, q)))
+ hba->caps &= ~UFSHCD_CAP_CRYPTO;
+ }
}
--
2.31.0.291.g576ba9dcdaf-goog
The size of any bio must be aligned to the data unit size of the bio crypt
context (if it exists) of that bio. This must also be ensured whenever a
bio is split. Introduce blk_crypto_bio_sectors_alignment() that returns
the required alignment in sectors. The number of sectors passed to
any call of bio_split() should be aligned to
blk_crypto_bio_sectors_alignment().
Signed-off-by: Satya Tangirala <[email protected]>
---
block/blk-crypto-internal.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 0d36aae538d7..304e90ed99f5 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -60,6 +60,19 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
return rq->crypt_ctx;
}
+/*
+ * Returns the alignment requirement for the number of sectors in this bio based
+ * on its bi_crypt_context. Any bios split from this bio must follow this
+ * alignment requirement as well.
+ */
+static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio)
+{
+ if (!bio_has_crypt_ctx(bio))
+ return 1;
+ return bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size >>
+ SECTOR_SHIFT;
+}
+
#else /* CONFIG_BLK_INLINE_ENCRYPTION */
static inline bool bio_crypt_rq_ctx_compatible(struct request *rq,
@@ -93,6 +106,11 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
return false;
}
+static inline unsigned int blk_crypto_bio_sectors_alignment(struct bio *bio)
+{
+ return 1;
+}
+
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
--
2.31.0.291.g576ba9dcdaf-goog
This function returns the required alignment for the number of sectors in
a bio. In particular, the number of sectors passed to bio_split() must be
aligned to this value.
Signed-off-by: Satya Tangirala <[email protected]>
---
block/blk.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/block/blk.h b/block/blk.h
index 3b53e44b967e..5ef207a6d34c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -261,6 +261,18 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
}
+/*
+ * The required sector alignment for a bio. The number of sectors in any bio
+ * that's constructed/split must be aligned to this value.
+ */
+static inline unsigned int bio_required_sector_alignment(struct bio *bio)
+{
+ struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+
+ return max(queue_logical_block_size(q) >> SECTOR_SHIFT,
+ blk_crypto_bio_sectors_alignment(bio));
+}
+
/*
* The max bio size which is aligned to q->limits.discard_granularity. This
* is a hint to split large discard bio in generic block layer, then if device
--
2.31.0.291.g576ba9dcdaf-goog
Not all crypto data unit sizes might be supported by the block layer due to
certain queue limits. This new function checks the queue limits and
appropriately modifies the keyslot manager to reflect only the supported
crypto data unit sizes. blk_ksm_register() runs any given ksm through this
function before actually registering the ksm with a queue.
Signed-off-by: Satya Tangirala <[email protected]>
---
block/keyslot-manager.c | 59 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 59 insertions(+)
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index 2a2b1a9785d2..fad6d9c4b649 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -450,12 +450,71 @@ bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm)
}
EXPORT_SYMBOL_GPL(blk_ksm_is_empty);
+/*
+ * Restrict the supported data unit sizes of the ksm based on the request queue
+ * limits
+ */
+void blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
+ struct queue_limits *limits)
+{
+ /* The largest possible data unit size we support is PAGE_SIZE. */
+ unsigned long largest_dus = PAGE_SIZE;
+ unsigned int dus_allowed_mask;
+ int i;
+ bool dus_was_restricted = false;
+
+ /*
+ * If the queue doesn't support SG gaps, a bio might get split in the
+ * middle of a data unit. So require SG gap support for inline
+ * encryption for any data unit size larger than a single sector.
+ */
+ if (limits->virt_boundary_mask)
+ largest_dus = SECTOR_SIZE;
+
+ /*
+ * If the queue has chunk_sectors, the bio might be split within a data
+ * unit if the data unit size is larger than a single sector. So only
+ * support a single sector data unit size in this case.
+ */
+ if (limits->chunk_sectors)
+ largest_dus = SECTOR_SIZE;
+
+ /*
+ * Any bio sent to the queue must be allowed to contain at least a
+ * data_unit_size worth of data. Since each segment in a bio contains
+ * at least a SECTOR_SIZE worth of data, it's sufficient that
+ * queue_max_segments(q) * SECTOR_SIZE >= data_unit_size. So disable
+ * all data_unit_sizes not satisfiable.
+ */
+ largest_dus = min(largest_dus,
+ 1UL << (fls(limits->max_segments) - 1 + SECTOR_SHIFT));
+
+ /* Clear all unsupported data unit sizes. */
+ dus_allowed_mask = (largest_dus << 1) - 1;
+ for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
+ if (ksm->crypto_modes_supported[i] & (~dus_allowed_mask))
+ dus_was_restricted = true;
+ ksm->crypto_modes_supported[i] &= dus_allowed_mask;
+ }
+
+ if (dus_was_restricted) {
+ pr_warn("Disallowed use of encryption data unit sizes above %lu bytes with inline encryption hardware because of device request queue limits.\n",
+ largest_dus);
+ }
+}
+
bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
{
if (blk_integrity_queue_supports_integrity(q)) {
pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
return false;
}
+
+ blk_ksm_restrict_dus_to_queue_limits(ksm, &q->limits);
+
+ if (blk_ksm_is_empty(ksm))
+ return false;
+
q->ksm = ksm;
return true;
}
--
2.31.0.291.g576ba9dcdaf-goog
Make blk_crypto_split_bio_if_needed() respect
bio_required_sector_alignment() when calling bio_split().
Signed-off-by: Satya Tangirala <[email protected]>
---
block/blk-crypto-fallback.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index c322176a1e09..85c813ef670b 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -19,6 +19,7 @@
#include <linux/module.h>
#include <linux/random.h>
+#include "blk.h"
#include "blk-crypto-internal.h"
static unsigned int num_prealloc_bounce_pg = 32;
@@ -225,6 +226,8 @@ static bool blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
if (num_sectors < bio_sectors(bio)) {
struct bio *split_bio;
+ num_sectors = round_down(num_sectors,
+ bio_required_sector_alignment(bio));
split_bio = bio_split(bio, num_sectors, GFP_NOIO,
&crypto_bio_split);
if (!split_bio) {
--
2.31.0.291.g576ba9dcdaf-goog
This function checks if a given keyslot manager supports any encryption
mode/data unit size combination (and returns true if there is no such
supported combination). Helps clean up code a little.
Signed-off-by: Satya Tangirala <[email protected]>
---
block/keyslot-manager.c | 13 +++++++++++++
drivers/md/dm-table.c | 11 +----------
include/linux/keyslot-manager.h | 2 ++
3 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
index 2c4a55bea6ca..2a2b1a9785d2 100644
--- a/block/keyslot-manager.c
+++ b/block/keyslot-manager.c
@@ -437,6 +437,19 @@ void blk_ksm_destroy(struct blk_keyslot_manager *ksm)
}
EXPORT_SYMBOL_GPL(blk_ksm_destroy);
+bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
+ if (ksm->crypto_modes_supported[i])
+ return false;
+ }
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(blk_ksm_is_empty);
+
bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
{
if (blk_integrity_queue_supports_integrity(q)) {
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 95391f78b8d5..db18a58adad7 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1313,7 +1313,6 @@ static int dm_table_construct_keyslot_manager(struct dm_table *t)
struct blk_keyslot_manager *ksm;
struct dm_target *ti;
unsigned int i;
- bool ksm_is_empty = true;
dksm = kmalloc(sizeof(*dksm), GFP_KERNEL);
if (!dksm)
@@ -1350,15 +1349,7 @@ static int dm_table_construct_keyslot_manager(struct dm_table *t)
* If the new KSM doesn't actually support any crypto modes, we may as
* well represent it with a NULL ksm.
*/
- ksm_is_empty = true;
- for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
- if (ksm->crypto_modes_supported[i]) {
- ksm_is_empty = false;
- break;
- }
- }
-
- if (ksm_is_empty) {
+ if (blk_ksm_is_empty(ksm)) {
dm_destroy_keyslot_manager(ksm);
ksm = NULL;
}
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
index a27605e2f826..5bf0cea20c81 100644
--- a/include/linux/keyslot-manager.h
+++ b/include/linux/keyslot-manager.h
@@ -117,4 +117,6 @@ bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
struct blk_keyslot_manager *reference_ksm);
+bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm);
+
#endif /* __LINUX_KEYSLOT_MANAGER_H */
--
2.31.0.291.g576ba9dcdaf-goog
Update get_max_io_size() to return a value aligned to
bio_required_sector_alignment(). With this change, and the changes
to blk_ksm_register() that restrict the supported data unit sizes
based on the queue's limits, blk_bio_segment_split() won't split bios
in the middle of a data unit anymore.
Signed-off-by: Satya Tangirala <[email protected]>
---
block/blk-merge.c | 49 ++++++++++++++++++++++++++++++-----------------
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index ffb4aa0ea68b..2903de62aaca 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -135,27 +135,39 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
/*
* Return the maximum number of sectors from the start of a bio that may be
- * submitted as a single request to a block device. If enough sectors remain,
- * align the end to the physical block size. Otherwise align the end to the
- * logical block size. This approach minimizes the number of non-aligned
- * requests that are submitted to a block device if the start of a bio is not
- * aligned to a physical block boundary.
+ * submitted as a single request to a block device. Tries to align the end to
+ * the physical block size, while also aligning the returned number of sectors
+ * to bio_required_sector_alignment(). This approach minimizes the number of
+ * non-aligned requests that are submitted to a block device if the start of a
+ * bio is not aligned to a physical block boundary.
+ *
+ * More clearly, there are two conditions we're interested in satisfying.
+ *
+ * Condition 1) We absolutely must have @return divisible by the
+ * bio_required_sector_alignment(bio).
+ *
+ * Condition 2) *If possible*, while still satisfying Condition 1, we would like
+ * to have start_offset + @return divisible by physical block size in sectors
+ * (pbs).
*/
static inline unsigned get_max_io_size(struct request_queue *q,
struct bio *bio)
{
- unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector, 0);
- unsigned max_sectors = sectors;
- unsigned pbs = queue_physical_block_size(q) >> SECTOR_SHIFT;
- unsigned lbs = queue_logical_block_size(q) >> SECTOR_SHIFT;
- unsigned start_offset = bio->bi_iter.bi_sector & (pbs - 1);
-
- max_sectors += start_offset;
- max_sectors &= ~(pbs - 1);
- if (max_sectors > start_offset)
- return max_sectors - start_offset;
-
- return sectors & ~(lbs - 1);
+ unsigned int start_offset = bio->bi_iter.bi_sector;
+ unsigned int sectors = blk_max_size_offset(q, start_offset, 0);
+ unsigned int pbs = queue_physical_block_size(q) >> SECTOR_SHIFT;
+ unsigned int req_sector_align = bio_required_sector_alignment(bio);
+ unsigned int pbs_aligned_sector = round_down(start_offset + sectors, pbs);
+
+ /*
+ * If we can return a pbs aligned endpoint while satisfying Condition 1,
+ * then do so.
+ */
+ if (pbs_aligned_sector > start_offset &&
+ IS_ALIGNED(pbs_aligned_sector - start_offset, req_sector_align))
+ return pbs_aligned_sector - start_offset;
+
+ return round_down(sectors, req_sector_align);
}
static inline unsigned get_max_segment_size(const struct request_queue *q,
@@ -235,6 +247,7 @@ static bool bvec_split_segs(const struct request_queue *q,
* following is guaranteed for the cloned bio:
* - That it has at most get_max_io_size(@q, @bio) sectors.
* - That it has at most queue_max_segments(@q) segments.
+ * - That the number of sectors is aligned to bio_required_sector_alignment(@bio).
*
* Except for discard requests the cloned bio will point at the bi_io_vec of
* the original bio. It is the responsibility of the caller to ensure that the
@@ -286,7 +299,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
* big IO can be trival, disable iopoll when split needed.
*/
bio->bi_opf &= ~REQ_HIPRI;
-
+ sectors = round_down(sectors, bio_required_sector_alignment(bio));
return bio_split(bio, sectors, GFP_NOIO, bs);
}
--
2.31.0.291.g576ba9dcdaf-goog
The number of sectors passed to bio_split() should be aligned to
bio_required_sector_alignment(). All callers (other than bounce.c) have
been updated to ensure this, so add a WARN() if the number of sectors is
not aligned. (bounce.c was not updated since it's legacy code that
won't interact with inline encryption).
Signed-off-by: Satya Tangirala <[email protected]>
---
block/bio.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/bio.c b/block/bio.c
index 26b7f721cda8..cb348f134a15 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1458,6 +1458,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
BUG_ON(sectors <= 0);
BUG_ON(sectors >= bio_sectors(bio));
+ WARN_ON(!IS_ALIGNED(sectors, bio_required_sector_alignment(bio)));
/* Zone append commands cannot be split */
if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND))
--
2.31.0.291.g576ba9dcdaf-goog
On 3/25/21 2:26 PM, Satya Tangirala wrote:
> When a bio has an encryption context, its size must be aligned to its
> crypto data unit size. A bio must not be split in the middle of a data
> unit. Currently, bios are split at logical block boundaries [...]
Hi Satya,
Are you sure that the block layer core splits bios at logical block
boundaries? Commit 9cc5169cd478 ("block: Improve physical block
alignment of split bios") should have changed the behavior from
splitting at logical block boundaries into splitting at physical block
boundaries. Without having looked at this patch series, can the same
effect be achieved by reporting the crypto data unit size as the
physical block size?
Thanks,
Bart.
On Thu, Mar 25, 2021 at 02:51:31PM -0700, Bart Van Assche wrote:
> On 3/25/21 2:26 PM, Satya Tangirala wrote:
> > When a bio has an encryption context, its size must be aligned to its
> > crypto data unit size. A bio must not be split in the middle of a data
> > unit. Currently, bios are split at logical block boundaries [...]
>
> Hi Satya,
>
> Are you sure that the block layer core splits bios at logical block
> boundaries? Commit 9cc5169cd478 ("block: Improve physical block
> alignment of split bios") should have changed the behavior from
> splitting at logical block boundaries into splitting at physical block
> boundaries.
Ah, what I really meant with that sentence was "Currently, if a bio is
split, the size of the resulting bio is guaranteed to be aligned to
the lbs. The endpoint of the bio might also be aligned to a physical
block boundary (which 9cc5169cd478 tries to achieve if possible), but
the bio's size (and hence also its endpoint since the start of the bio
is always lbs aligned) is *at least* lbs aligned". Does that sound
accurate?
With inline encryption, that lbs alignment guarantee is no longer
enough - the bio now needs to have a size that's aligned to the bio's
data unit size (which may be larger than the logical block size, but
is also still a power of 2). For example, with a 512-byte lbs and
4096-byte data units, every possible split size is lbs aligned, but
only multiples of 8 sectors keep data units intact.
> Without having looked at this patch series, can the same
> effect be achieved by reporting the crypto data unit size as the
> physical block size?
That would've been awesome, but I don't think we can do that :(
1) There isn't only one crypto data unit size. A device can support,
and upper layers are free to use, many different data unit sizes.
2) IIUC 9cc5169cd478 (or more accurately get_max_io_size() since the
function has been changed slightly since your original patch)
doesn't align the size of the bio to the pbs - it only aligns the
endpoint of the bio to the pbs (and it may actually not even do
that if it turns out to not be possible). Is that right? If so,
that means that if the startpoint of the bio isn't pbs aligned, the
size of the bio won't be pbs aligned either.
>
> Thanks,
>
> Bart.
On 3/25/21 6:39 PM, Satya Tangirala wrote:
> On Thu, Mar 25, 2021 at 02:51:31PM -0700, Bart Van Assche wrote:
>> Are you sure that the block layer core splits bios at logical block
>> boundaries? Commit 9cc5169cd478 ("block: Improve physical block
>> alignment of split bios") should have changed the behavior from
>> splitting at logical block boundaries into splitting at physical block
>> boundaries.
>
> Ah, what I really meant with that sentence was "Currently, if a bio is
> split, the size of the resulting bio is guaranteed to be aligned to
> the lbs. The endpoint of the bio might also be aligned to a physical
> block boundary (which 9cc5169cd478 tries to achieve if possible), but
> the bio's size (and hence also its endpoint since the start of the bio
> is always lbs aligned) is *at least* lbs aligned". Does that sound
> accurate?
That sounds better to me :-)
>> Without having looked at this patch series, can the same
>> effect be achieved by reporting the crypto data unit size as the
>> physical block size?
>
> That would've been awesome, but I don't think we can do that :(
> 1) There isn't only one crypto data unit size. A device can support,
> and upper layers are free to use, many different data unit sizes.
> 2) IIUC 9cc5169cd478 (or more accurately get_max_io_size() since the
> function has been changed slightly since your original patch)
> doesn't align the size of the bio to the pbs - it only aligns the
> endpoint of the bio to the pbs (and it may actually not even do
> that if it turns out to not be possible). Is that right? If so,
> that means that if the startpoint of the bio isn't pbs aligned, the
> size of the bio won't be pbs aligned either.
Hmm ... if the start of a bio is not aligned to the physical block size
I don't think that the block layer can do anything about the start of
the bio. Anyway, I have taken a quick look at this patch series and the
patch series looks pretty clean to me. I will let Christoph review this
patch series since he already reviewed the previous version of this series.
Bart.
On 3/25/21 2:26 PM, Satya Tangirala wrote:
> @@ -1458,6 +1458,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
>
> BUG_ON(sectors <= 0);
> BUG_ON(sectors >= bio_sectors(bio));
> + WARN_ON(!IS_ALIGNED(sectors, bio_required_sector_alignment(bio)));
Please change this WARN_ON() into WARN_ON_ONCE() to prevent a storm of
call traces in the kernel log if this warning statement would be hit
repeatedly.
Thanks,
Bart.
Hi Satya,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on block/for-next]
[also build test WARNING on dm/for-next mkp-scsi/for-next scsi/for-next linux/master linus/master v5.12-rc4 next-20210325]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Satya-Tangirala/ensure-bios-aren-t-split-in-middle-of-crypto-data-unit/20210326-053016
base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
config: arm-randconfig-r033-20210325 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project f490a5969bd52c8a48586f134ff8f02ccbb295b3)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
# https://github.com/0day-ci/linux/commit/9b8b677bfdba70695b8d01ee318ef552fcc0392e
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Satya-Tangirala/ensure-bios-aren-t-split-in-middle-of-crypto-data-unit/20210326-053016
git checkout 9b8b677bfdba70695b8d01ee318ef552fcc0392e
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>
All warnings (new ones prefixed by >>):
>> block/keyslot-manager.c:457:6: warning: no previous prototype for function 'blk_ksm_restrict_dus_to_queue_limits' [-Wmissing-prototypes]
void blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
^
block/keyslot-manager.c:457:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
^
static
1 warning generated.
vim +/blk_ksm_restrict_dus_to_queue_limits +457 block/keyslot-manager.c
452
453 /*
454 * Restrict the supported data unit sizes of the ksm based on the request queue
455 * limits
456 */
> 457 void blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
458 struct queue_limits *limits)
459 {
460 /* The largest possible data unit size we support is PAGE_SIZE. */
461 unsigned long largest_dus = PAGE_SIZE;
462 unsigned int dus_allowed_mask;
463 int i;
464 bool dus_was_restricted = false;
465
466 /*
467 * If the queue doesn't support SG gaps, a bio might get split in the
468 * middle of a data unit. So require SG gap support for inline
469 * encryption for any data unit size larger than a single sector.
470 */
471 if (limits->virt_boundary_mask)
472 largest_dus = SECTOR_SIZE;
473
474 /*
475 * If the queue has chunk_sectors, the bio might be split within a data
476 * unit if the data unit size is larger than a single sector. So only
477 * support a single sector data unit size in this case.
478 */
479 if (limits->chunk_sectors)
480 largest_dus = SECTOR_SIZE;
481
482 /*
483 * Any bio sent to the queue must be allowed to contain at least a
484 * data_unit_size worth of data. Since each segment in a bio contains
485 * at least a SECTOR_SIZE worth of data, it's sufficient that
486 * queue_max_segments(q) * SECTOR_SIZE >= data_unit_size. So disable
487 * all data_unit_sizes not satisfiable.
488 */
489 largest_dus = min(largest_dus,
490 1UL << (fls(limits->max_segments) - 1 + SECTOR_SHIFT));
491
492 /* Clear all unsupported data unit sizes. */
493 dus_allowed_mask = (largest_dus << 1) - 1;
494 for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
495 if (ksm->crypto_modes_supported[i] & (~dus_allowed_mask))
496 dus_was_restricted = true;
497 ksm->crypto_modes_supported[i] &= dus_allowed_mask;
498 }
499
500 if (dus_was_restricted) {
501 pr_warn("Disallowed use of encryption data unit sizes above %lu bytes with inline encryption hardware because of device request queue limits.\n",
502 largest_dus);
503 }
504 }
505
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]
On Thu, Mar 25, 2021 at 08:46:20PM -0700, Bart Van Assche wrote:
> On 3/25/21 6:39 PM, Satya Tangirala wrote:
> > On Thu, Mar 25, 2021 at 02:51:31PM -0700, Bart Van Assche wrote:
> >> Are you sure that the block layer core splits bios at logical block
> >> boundaries? Commit 9cc5169cd478 ("block: Improve physical block
> >> alignment of split bios") should have changed the behavior from
> >> splitting at logical block boundaries into splitting at physical block
> >> boundaries.
> >
> > Ah, what I really meant with that sentence was "Currently, if a bio is
> > split, the size of the resulting bio is guaranteed to be aligned to
> > the lbs. The endpoint of the bio might also be aligned to a physical
> > block boundary (which 9cc5169cd478 tries to achieve if possible), but
> > the bio's size (and hence also its endpoint since the start of the bio
> > is always lbs aligned) is *at least* lbs aligned". Does that sound
> > accurate?
> That sounds better to me :-)
>
> >> Without having looked at this patch series, can the same
> >> effect be achieved by reporting the crypto data unit size as the
> >> physical block size?
> >
> > That would've been awesome, but I don't think we can do that :(
> > 1) There isn't only one crypto data unit size. A device can support,
> > and upper layers are free to use, many different data unit sizes.
> > 2) IIUC 9cc5169cd478 (or more accurately get_max_io_size() since the
> > function has been changed slightly since your original patch)
> > doesn't align the size of the bio to the pbs - it only aligns the
> > endpoint of the bio to the pbs (and it may actually not even do
> > that if it turns out to not be possible). Is that right? If so,
> > that means that if the startpoint of the bio isn't pbs aligned, the
> > size of the bio won't be pbs aligned either.
>
> Hmm ... if the start of a bio is not aligned to the physical block size
> I don't think that the block layer can do anything about the start of
> the bio. Anyway, I have taken a quick look at this patch series and the
> patch series looks pretty clean to me. I will let Christoph review this
> patch series since he already reviewed the previous version of this series.
Sounds good. Thanks for looking through the series!
>
> Bart.
On Thu, Mar 25, 2021 at 09:26:02PM +0000, Satya Tangirala wrote:
> This function checks if a given keyslot manager supports any encryption
> mode/data unit size combination (and returns true if there is no such
> supported combination). Helps clean up code a little.
The name sounds a little strange to me, but the functionality looks
ok. A kerneldoc comment might be useful to describe what it does so
that we don't have to rely on the name alone.
On Thu, Mar 25, 2021 at 09:26:05PM +0000, Satya Tangirala wrote:
> +/*
> + * The required sector alignment for a bio. The number of sectors in any bio
> + * that's constructed/split must be aligned to this value.
> + */
> +static inline unsigned int bio_required_sector_alignment(struct bio *bio)
> +{
> + struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> +
> + return max(queue_logical_block_size(q) >> SECTOR_SHIFT,
> + blk_crypto_bio_sectors_alignment(bio));
> +}
It might make more sense to just have a field in the request queue
for the max alignment so that the fast path just looks at one field.
Then the various setup time functions would update it to the maximum
required.
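E.g. something like this (the field name is hypothetical):

	/* Set once at queue/ksm setup time to the max required value. */
	static inline unsigned int bio_required_sector_alignment(struct bio *bio)
	{
		return bio->bi_bdev->bd_disk->queue->required_sector_alignment;
	}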
On Thu, Mar 25, 2021 at 09:26:02PM +0000, Satya Tangirala wrote:
> This function checks if a given keyslot manager supports any encryption
> mode/data unit size combination (and returns true if there is no such
> supported combination). Helps clean up code a little.
>
> Signed-off-by: Satya Tangirala <[email protected]>
> ---
> block/keyslot-manager.c | 13 +++++++++++++
> drivers/md/dm-table.c | 11 +----------
> include/linux/keyslot-manager.h | 2 ++
> 3 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> index 2c4a55bea6ca..2a2b1a9785d2 100644
> --- a/block/keyslot-manager.c
> +++ b/block/keyslot-manager.c
> @@ -437,6 +437,19 @@ void blk_ksm_destroy(struct blk_keyslot_manager *ksm)
> }
> EXPORT_SYMBOL_GPL(blk_ksm_destroy);
>
> +bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm)
> +{
I agree with Christoph that this could use a kerneldoc comment.
> diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
> index a27605e2f826..5bf0cea20c81 100644
> --- a/include/linux/keyslot-manager.h
> +++ b/include/linux/keyslot-manager.h
> @@ -117,4 +117,6 @@ bool blk_ksm_is_superset(struct blk_keyslot_manager *ksm_superset,
> void blk_ksm_update_capabilities(struct blk_keyslot_manager *target_ksm,
> struct blk_keyslot_manager *reference_ksm);
>
> +bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm);
> +
It's easier to read if declarations are kept in the same order as the
definitions.
- Eric
On Thu, Mar 25, 2021 at 09:26:03PM +0000, Satya Tangirala wrote:
> Handle any error from blk_ksm_register() in the callers. Previously,
> the callers ignored the return value because blk_ksm_register() wouldn't
> fail as long as the request_queue didn't have integrity support too, but
> as this is no longer the case, it's safer for the callers to just handle
> the return value appropriately.
>
> Signed-off-by: Satya Tangirala <[email protected]>
> ---
> drivers/md/dm-table.c | 3 ++-
> drivers/mmc/core/crypto.c | 6 ++++--
> drivers/scsi/ufs/ufshcd-crypto.c | 6 ++++--
> 3 files changed, 10 insertions(+), 5 deletions(-)
This probably should be 3 patches, one for each subsystem.
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index db18a58adad7..1225b9050f29 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1372,7 +1372,8 @@ static void dm_update_keyslot_manager(struct request_queue *q,
>
> /* Make the ksm less restrictive */
> if (!q->ksm) {
> - blk_ksm_register(t->ksm, q);
> + if (WARN_ON(!blk_ksm_register(t->ksm, q)))
> + dm_destroy_keyslot_manager(t->ksm);
> } else {
> blk_ksm_update_capabilities(q->ksm, t->ksm);
> dm_destroy_keyslot_manager(t->ksm);
> diff --git a/drivers/mmc/core/crypto.c b/drivers/mmc/core/crypto.c
> index 419a368f8402..616103393557 100644
> --- a/drivers/mmc/core/crypto.c
> +++ b/drivers/mmc/core/crypto.c
> @@ -21,8 +21,10 @@ void mmc_crypto_set_initial_state(struct mmc_host *host)
>
> void mmc_crypto_setup_queue(struct request_queue *q, struct mmc_host *host)
> {
> - if (host->caps2 & MMC_CAP2_CRYPTO)
> - blk_ksm_register(&host->ksm, q);
> + if (host->caps2 & MMC_CAP2_CRYPTO) {
> + if (WARN_ON(!blk_ksm_register(&host->ksm, q)))
> + host->caps2 &= ~MMC_CAP2_CRYPTO;
> + }
> }
> EXPORT_SYMBOL_GPL(mmc_crypto_setup_queue);
>
> diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
> index d70cdcd35e43..f47a72fefe9e 100644
> --- a/drivers/scsi/ufs/ufshcd-crypto.c
> +++ b/drivers/scsi/ufs/ufshcd-crypto.c
> @@ -233,6 +233,8 @@ void ufshcd_init_crypto(struct ufs_hba *hba)
> void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
> struct request_queue *q)
> {
> - if (hba->caps & UFSHCD_CAP_CRYPTO)
> - blk_ksm_register(&hba->ksm, q);
> + if (hba->caps & UFSHCD_CAP_CRYPTO) {
> + if (WARN_ON(!blk_ksm_register(&hba->ksm, q)))
> + hba->caps &= ~UFSHCD_CAP_CRYPTO;
> + }
It would be helpful to add a comment in each case to explain why the WARN_ON
should never trigger.
Also, clearing UFSHCD_CAP_CRYPTO or MMC_CAP2_CRYPTO doesn't really make sense
here because those capabilities apply to the whole UFS or MMC host controller,
not just to the individual request_queue which failed. (A host controller can
control multiple devices, each of which has its own request_queue.) Isn't
blk_ksm_register() failing already enough to ensure that the inline crypto
support isn't used on that particular request_queue? What is the benefit of
clearing UFSHCD_CAP_CRYPTO and MMC_CAP2_CRYPTO too?
- Eric
On Tue, Mar 30, 2021 at 07:06:53PM +0100, Christoph Hellwig wrote:
> On Thu, Mar 25, 2021 at 09:26:05PM +0000, Satya Tangirala wrote:
> > +/*
> > + * The required sector alignment for a bio. The number of sectors in any bio
> > + * that's constructed/split must be aligned to this value.
> > + */
> > +static inline unsigned int bio_required_sector_alignment(struct bio *bio)
> > +{
> > + struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> > +
> > + return max(queue_logical_block_size(q) >> SECTOR_SHIFT,
> > + blk_crypto_bio_sectors_alignment(bio));
> > +}
>
> It might make more sense to just have a field in the request queue
> for the max alignment so that the fast path just looks at one field.
> Then the various setup time functions would update it to the maximum
> required.
I don't see how that would work here, as the required alignment is a per-bio
thing which depends on whether the bio has an encryption context or not, and (if
it does) also the data_unit_size the submitter of the bio selected.
We could just always assume the worst-case scenario, but that seems
unnecessarily pessimistic?
- Eric
On Thu, Mar 25, 2021 at 09:26:05PM +0000, Satya Tangirala wrote:
> This function returns the required alignment for the number of sectors in
> a bio. In particular, the number of sectors passed to bio_split() must be
> aligned to this value.
>
> Signed-off-by: Satya Tangirala <[email protected]>
> ---
> block/blk.h | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/block/blk.h b/block/blk.h
> index 3b53e44b967e..5ef207a6d34c 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -261,6 +261,18 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
> return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
> }
>
> +/*
> + * The required sector alignment for a bio. The number of sectors in any bio
> + * that's constructed/split must be aligned to this value.
What does "constructed" mean in this context?
> + */
> +static inline unsigned int bio_required_sector_alignment(struct bio *bio)
> +{
> + struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> +
> + return max(queue_logical_block_size(q) >> SECTOR_SHIFT,
> + blk_crypto_bio_sectors_alignment(bio));
> +}
It would be slightly simpler to use bdev_logical_block_size(bio->bi_bdev)
instead of queue_logical_block_size(bio->bi_bdev->bd_disk->queue).
Also, if there's interest in avoiding the branch implied by the max() here, it
would be possible to take advantage of the values being powers of 2 as follows:
static inline unsigned int bio_required_sector_alignment(struct bio *bio)
{
	unsigned int alignmask =
		(bdev_logical_block_size(bio->bi_bdev) >> SECTOR_SHIFT) - 1;

	alignmask |= blk_crypto_bio_sectors_alignment(bio) - 1;
	return alignmask + 1;
}
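This works because both quantities are powers of two: for powers of
two a >= b, (a - 1) | (b - 1) == a - 1, so alignmask + 1 recovers the
larger of the two. A quick worked example (values are hypothetical):

	/* lbs = 4096 bytes = 8 sectors, crypto alignment = 2 sectors */
	alignmask = (8 - 1) | (2 - 1);	/* == 7 | 1 == 7 */
	/* alignmask + 1 == 8 == max(8, 2) */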
On Thu, Mar 25, 2021 at 09:26:06PM +0000, Satya Tangirala wrote:
> Make blk_crypto_split_bio_if_needed() respect
> bio_required_sector_alignment() when calling bio_split().
>
A bit more explanation would be helpful here. Does this fix something, and if
so what is it and under what circumstances?
- Eric
On Thu, Mar 25, 2021 at 09:26:04PM +0000, Satya Tangirala wrote:
> The size of any bio must be aligned to the data unit size of the bio crypt
> context (if it exists) of that bio. This must also be ensured whenever a
> bio is split. Introduce blk_crypto_bio_sectors_alignment() that returns
> the required alignment in sectors. The number of sectors passed to
> any call of bio_split() should be aligned to
> blk_crypto_bio_sectors_alignment().
"should be aligned" => "must be aligned"?
> +/*
> + * Returns the alignment requirement for the number of sectors in this bio based
> + * on its bi_crypt_context. Any bios split from this bio must follow this
> + * alignment requirement as well.
> + */
It would be helpful to expand this comment a bit to explicitly mention that the
size of the bio must be a multiple of the crypto data unit size that was
selected by the submitter of the bio, which is the granularity of
encryption/decryption. Keep in mind that people reading this code won't
necessarily be familiar with inline encryption.
- Eric
On Thu, Mar 25, 2021 at 09:26:07PM +0000, Satya Tangirala wrote:
> Not all crypto data unit sizes might be supported by the block layer due to
> certain queue limits. This new function checks the queue limits and
> appropriately modifies the keyslot manager to reflect only the supported
> crypto data unit sizes. blk_ksm_register() runs any given ksm through this
> function before actually registering the ksm with a queue.
>
> Signed-off-by: Satya Tangirala <[email protected]>
> ---
> block/keyslot-manager.c | 59 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 59 insertions(+)
>
> diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
> index 2a2b1a9785d2..fad6d9c4b649 100644
> --- a/block/keyslot-manager.c
> +++ b/block/keyslot-manager.c
> @@ -450,12 +450,71 @@ bool blk_ksm_is_empty(struct blk_keyslot_manager *ksm)
> }
> EXPORT_SYMBOL_GPL(blk_ksm_is_empty);
>
> +/*
> + * Restrict the supported data unit sizes of the ksm based on the request queue
> + * limits
> + */
> +void blk_ksm_restrict_dus_to_queue_limits(struct blk_keyslot_manager *ksm,
> + struct queue_limits *limits)
As the kernel test robot hinted at, this function needs to be 'static'.
> +{
> + /* The largest possible data unit size we support is PAGE_SIZE. */
> + unsigned long largest_dus = PAGE_SIZE;
> + unsigned int dus_allowed_mask;
> + int i;
> + bool dus_was_restricted = false;
> +
> + /*
> + * If the queue doesn't support SG gaps, a bio might get split in the
> + * middle of a data unit. So require SG gap support for inline
> + * encryption for any data unit size larger than a single sector.
> + */
> + if (limits->virt_boundary_mask)
> + largest_dus = SECTOR_SIZE;
> +
> + /*
> + * If the queue has chunk_sectors, the bio might be split within a data
> + * unit if the data unit size is larger than a single sector. So only
> + * support a single sector data unit size in this case.
> + */
> + if (limits->chunk_sectors)
> + largest_dus = SECTOR_SIZE;
So in practice, this means that inline encryption will be disabled on any disk
that declares a virt_boundary_mask or chunk_sectors.
What are the real-world consequences of that? Will that have any consequences
for UFS or eMMC, or are those things never applicable to UFS or eMMC?
It would also be helpful if the comments explained why these restrictions are
necessary. They kind of do, but they don't explicitly give an example --
presumably the issue is that a crypto data unit could cross a virt_boundary_mask
or chunk_sectors boundary?
> + /*
> + * Any bio sent to the queue must be allowed to contain at least a
> + * data_unit_size worth of data. Since each segment in a bio contains
> + * at least a SECTOR_SIZE worth of data, it's sufficient that
> + * queue_max_segments(q) * SECTOR_SIZE >= data_unit_size. So disable
> + * all data_unit_sizes not satisfiable.
> + */
> + largest_dus = min(largest_dus,
> + 1UL << (fls(limits->max_segments) - 1 + SECTOR_SHIFT));
> +
> + /* Clear all unsupported data unit sizes. */
> + dus_allowed_mask = (largest_dus << 1) - 1;
> + for (i = 0; i < ARRAY_SIZE(ksm->crypto_modes_supported); i++) {
> + if (ksm->crypto_modes_supported[i] & (~dus_allowed_mask))
> + dus_was_restricted = true;
> + ksm->crypto_modes_supported[i] &= dus_allowed_mask;
> + }
So again in practice, this effectively disables inline encryption on any disk
that doesn't declare max_segments >= 8. What are the real-world consequences of
that -- will this ever be a problem for UFS or eMMC?
Also, why is it necessary to assume the worst case of 512 bytes per segment?
> + if (dus_was_restricted) {
> + pr_warn("Disallowed use of encryption data unit sizes above %lu bytes with inline encryption hardware because of device request queue limits.\n",
> + largest_dus);
> + }
Could this message include the disk that it is talking about?
> bool blk_ksm_register(struct blk_keyslot_manager *ksm, struct request_queue *q)
> {
> if (blk_integrity_queue_supports_integrity(q)) {
> pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n");
> return false;
> }
> +
> + blk_ksm_restrict_dus_to_queue_limits(ksm, &q->limits);
> +
> + if (blk_ksm_is_empty(ksm))
> + return false;
> +
> q->ksm = ksm;
> return true;
> }
Adding a kerneldoc comment to this function would be helpful. Especially to
explain what a return value of false means, exactly.
- Eric
On Thu, Mar 25, 2021 at 09:26:09PM +0000, Satya Tangirala wrote:
> The number of sectors passed to bio_split() should be aligned to
> bio_required_sector_alignment(). All callers (other than bounce.c) have
> been updated to ensure this, so add a WARN() if the number of sectors is
> not aligned. (bounce.c was not updated since it's legacy code that
> won't interact with inline encryption).
What does bounce.c "won't interact with inline encryption" mean, exactly?
Devices that enable bounce buffering won't declare inline encryption support, so
bounce.c will never see a bio that has an encryption context?
> diff --git a/block/bio.c b/block/bio.c
> index 26b7f721cda8..cb348f134a15 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1458,6 +1458,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
>
> BUG_ON(sectors <= 0);
> BUG_ON(sectors >= bio_sectors(bio));
> + WARN_ON(!IS_ALIGNED(sectors, bio_required_sector_alignment(bio)));
If this warning triggers, shouldn't the function return NULL to indicate a
failure rather than proceeding on?
- Eric