2024-04-22 20:36:08

by Eric Biggers

Subject: [PATCH v2 0/8] Optimize dm-verity and fsverity using multibuffer hashing

On many modern CPUs, it is possible to compute the SHA-256 hash of two
equal-length messages in about the same time as a single message, if all
the instructions are interleaved. This is because each SHA-256 computation
(like most other cryptographic hash functions) is inherently serial and
therefore can't always take advantage of the CPU's full throughput.

An earlier attempt to support multibuffer hashing in Linux was based
around the ahash API. That approach had some major issues. This
patchset instead takes a much simpler approach of just adding a
synchronous API for hashing two equal-length messages.

This works well for dm-verity and fsverity, which use Merkle trees and
therefore hash large numbers of equal-length messages.

This patchset is organized as follows:

- Patches 1-3 add crypto_shash_finup2x() and tests for it.
- Patches 4-5 implement finup2x on x86_64 and arm64.
- Patches 6-8 update fsverity and dm-verity to use crypto_shash_finup2x()
to hash pairs of data blocks when possible. Note: the patch
"dm-verity: hash blocks with shash import+finup when possible" is
revived from its original submission
(https://lore.kernel.org/dm-devel/[email protected]/)
because this new work provides a new motivation for it.

On CPUs that support multiple concurrent SHA-256 computations (all arm64 CPUs I
tested, and AMD Zen CPUs), raw SHA-256 hashing throughput increases by
70-98%, and the throughput of cold-cache reads from dm-verity and
fsverity increases by very roughly 35%.

I consider the crypto patches (patches 1-5) to be more or less ready,
subject to any feedback. I plan to work more on cleaning up and testing
the fsverity and dm-verity patches (patches 6-8), which should go in
separately later through their respective trees.

Changed in v2:
- Rebase onto cryptodev/master
- Add more comments to assembly
- Reorganize some of the assembly slightly
- Fix the claimed throughput improvement on arm64
- Fix incorrect kunmap order in fs/verity/verify.c
- Adjust testmgr generation logic slightly
- Explicitly check for INT_MAX before casting unsigned int to int
- Mention SHA3 based parallel hashes
- Mention AVX512-based approach

Eric Biggers (8):
crypto: shash - add support for finup2x
crypto: testmgr - generate power-of-2 lengths more often
crypto: testmgr - add tests for finup2x
crypto: x86/sha256-ni - add support for finup2x
crypto: arm64/sha256-ce - add support for finup2x
fsverity: improve performance by using multibuffer hashing
dm-verity: hash blocks with shash import+finup when possible
dm-verity: improve performance by using multibuffer hashing

arch/arm64/crypto/sha2-ce-core.S | 281 +++++++++++++-
arch/arm64/crypto/sha2-ce-glue.c | 41 ++
arch/x86/crypto/sha256_ni_asm.S | 368 ++++++++++++++++++
arch/x86/crypto/sha256_ssse3_glue.c | 41 ++
crypto/testmgr.c | 72 +++-
drivers/md/dm-verity-fec.c | 31 +-
drivers/md/dm-verity-fec.h | 7 +-
drivers/md/dm-verity-target.c | 560 ++++++++++++++++++++--------
drivers/md/dm-verity.h | 43 +--
fs/verity/fsverity_private.h | 5 +
fs/verity/hash_algs.c | 31 +-
fs/verity/open.c | 6 +
fs/verity/verify.c | 177 +++++++--
include/crypto/hash.h | 34 ++
14 files changed, 1450 insertions(+), 247 deletions(-)


base-commit: 543ea178fbfadeaf79e15766ac989f3351349f02
--
2.44.0



2024-04-22 20:36:11

by Eric Biggers

Subject: [PATCH v2 1/8] crypto: shash - add support for finup2x

From: Eric Biggers <[email protected]>

Most cryptographic hash functions are serialized, in the sense that they
have an internal block size and the blocks must be processed serially.
(BLAKE3 is a notable exception that has tree-based hashing built-in, but
all the more common choices such as the SHAs and BLAKE2 are serialized.
ParallelHash and Sakura are parallel hashes based on SHA3, but SHA3 is
much slower than SHA256 in software even with the ARMv8 SHA3 extension.)

This limits the performance of computing a single hash. Yet, computing
multiple hashes simultaneously does not have this limitation. Modern
CPUs are superscalar and often can execute independent instructions in
parallel. As a result, on many modern CPUs, it is possible to hash two
equal-length messages in about the same time as a single message, if all
the instructions are interleaved.

Meanwhile, a very common use case for hashing in the Linux kernel is
dm-verity and fs-verity. Both use a Merkle tree that has a fixed block
size, usually 4096 bytes with an empty or 32-byte salt prepended. The
hash algorithm is usually SHA-256. Usually, many blocks need to be
hashed at a time. This is an ideal scenario for multibuffer hashing.

Linux actually used to support SHA-256 multibuffer hashing on x86_64,
before it was removed by commit ab8085c130ed ("crypto: x86 - remove SHA
multibuffer routines and mcryptd"). However, it was integrated with the
crypto API in a weird way, where it behaved as an asynchronous hash that
queued up and executed all requests on a global queue. This made it
very complex, buggy, and virtually unusable.

This patch takes a new approach of just adding an API
crypto_shash_finup2x() that synchronously computes the hash of two
equal-length messages, starting from a common state that represents the
(possibly empty) common prefix shared by the two messages.

The new API is part of the "shash" algorithm type, as it does not make
sense in "ahash". It does a "finup" operation rather than a "digest"
operation in order to support the salt that is used by dm-verity and
fs-verity. There is no fallback implementation that does two regular
finups if the underlying algorithm doesn't support finup2x, since users
probably will want to avoid the overhead of queueing up multiple hashes
when multibuffer hashing won't actually be used anyway.
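
To make the intended calling pattern concrete, here is a minimal,
illustrative sketch (not part of this patch) of how a caller with a
pre-salted starting state might use the new functions; hash_block_pair()
and its parameters are hypothetical names, and error handling is
abbreviated:

	#include <crypto/hash.h>

	static int hash_block_pair(struct crypto_shash *tfm,
				   const void *salted_state, unsigned int len,
				   const u8 *block1, const u8 *block2,
				   u8 *out1, u8 *out2)
	{
		SHASH_DESC_ON_STACK(desc, tfm);
		int err;

		desc->tfm = tfm;

		/* Start from the common (possibly salted) prefix state. */
		err = salted_state ? crypto_shash_import(desc, salted_state)
				   : crypto_shash_init(desc);
		if (err)
			return err;

		/* Fast path: hash both blocks with interleaved instructions. */
		if (crypto_shash_supports_finup2x(tfm))
			return crypto_shash_finup2x(desc, block1, block2, len,
						    out1, out2);

		/*
		 * No finup2x: fall back to two regular finups, re-importing
		 * the prefix state for the second message.
		 */
		err = crypto_shash_finup(desc, block1, len, out1);
		if (err)
			return err;
		err = salted_state ? crypto_shash_import(desc, salted_state)
				   : crypto_shash_init(desc);
		return err ?: crypto_shash_finup(desc, block2, len, out2);
	}

Whether and how a caller batches its work into pairs is its own decision;
as noted above, the API intentionally provides no internal fallback.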

For now the API only supports 2-way interleaving, as the usefulness and
practicality seem to drop off dramatically after 2. The arm64 CPUs I
tested don't support more than 2 concurrent SHA-256 hashes. On x86_64,
AMD's Zen 4 can do 4 concurrent SHA-256 hashes (at least based on a
microbenchmark of the sha256rnds2 instruction), and it's been reported
that the highest SHA-256 throughput on Intel processors comes from using
AVX512 to compute 16 hashes in parallel. However, higher interleaving
factors would involve tradeoffs such as no longer being able to cache
the round constants in registers, further increasing the code size (both
source and binary), further increasing the amount of state that users
need to keep track of, and causing there to be more "leftover" hashes.

Signed-off-by: Eric Biggers <[email protected]>
---
include/crypto/hash.h | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 0014bdd81ab7..66d93c940861 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -177,10 +177,13 @@ struct shash_desc {
* @finup: see struct ahash_alg
* @digest: see struct ahash_alg
* @export: see struct ahash_alg
* @import: see struct ahash_alg
* @setkey: see struct ahash_alg
+ * @finup2x: **[optional]** Finish calculating the digests of two equal-length
+ * messages, interleaving the instructions to potentially achieve
+ * better performance than hashing each message individually.
* @init_tfm: Initialize the cryptographic transformation object.
* This function is called only once at the instantiation
* time, right after the transformation context was
* allocated. In case the cryptographic hardware has
* some special requirements which need to be handled
@@ -208,10 +211,12 @@ struct shash_alg {
unsigned int len, u8 *out);
int (*export)(struct shash_desc *desc, void *out);
int (*import)(struct shash_desc *desc, const void *in);
int (*setkey)(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen);
+ int (*finup2x)(struct shash_desc *desc, const u8 *data1,
+ const u8 *data2, unsigned int len, u8 *out1, u8 *out2);
int (*init_tfm)(struct crypto_shash *tfm);
void (*exit_tfm)(struct crypto_shash *tfm);
int (*clone_tfm)(struct crypto_shash *dst, struct crypto_shash *src);

unsigned int descsize;
@@ -749,10 +754,15 @@ static inline unsigned int crypto_shash_digestsize(struct crypto_shash *tfm)
static inline unsigned int crypto_shash_statesize(struct crypto_shash *tfm)
{
return crypto_shash_alg(tfm)->statesize;
}

+static inline bool crypto_shash_supports_finup2x(struct crypto_shash *tfm)
+{
+ return crypto_shash_alg(tfm)->finup2x != NULL;
+}
+
static inline u32 crypto_shash_get_flags(struct crypto_shash *tfm)
{
return crypto_tfm_get_flags(crypto_shash_tfm(tfm));
}

@@ -842,10 +852,34 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
* Return: 0 on success; < 0 if an error occurred.
*/
int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
unsigned int len, u8 *out);

+/**
+ * crypto_shash_finup2x() - finish hashing two equal-length messages
+ * @desc: the hash state that will be forked for the two messages. This
+ * contains the state after hashing a (possibly-empty) common prefix of
+ * the two messages.
+ * @data1: the first message (not including any common prefix from @desc)
+ * @data2: the second message (not including any common prefix from @desc)
+ * @len: length of @data1 and @data2 in bytes
+ * @out1: output buffer for first message digest
+ * @out2: output buffer for second message digest
+ *
+ * Users must check crypto_shash_supports_finup2x(tfm) before calling this.
+ *
+ * Context: Any context.
+ * Return: 0 on success; a negative errno value on failure.
+ */
+static inline int crypto_shash_finup2x(struct shash_desc *desc,
+ const u8 *data1, const u8 *data2,
+ unsigned int len, u8 *out1, u8 *out2)
+{
+ return crypto_shash_alg(desc->tfm)->finup2x(desc, data1, data2, len,
+ out1, out2);
+}
+
/**
* crypto_shash_export() - extract operational state for message digest
* @desc: reference to the operational state handle whose state is exported
* @out: output buffer of sufficient size that can hold the hash state
*
--
2.44.0


2024-04-22 20:36:14

by Eric Biggers

Subject: [PATCH v2 2/8] crypto: testmgr - generate power-of-2 lengths more often

From: Eric Biggers <[email protected]>

Implementations of hash functions often have special cases when lengths
are a multiple of the hash function's internal block size (e.g. 64 for
SHA-256, 128 for SHA-512). Currently, when the fuzz testing code
generates lengths, it doesn't prefer any particular value of the length
mod 64 over any other.
This limits the coverage of these special cases.

Therefore, this patch updates the fuzz testing code to generate
power-of-2 lengths and divide messages exactly in half a bit more often.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/testmgr.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 00f5a6cf341a..2c57ebcaf368 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -901,18 +901,24 @@ static unsigned int generate_random_length(struct rnd_state *rng,
{
unsigned int len = prandom_u32_below(rng, max_len + 1);

switch (prandom_u32_below(rng, 4)) {
case 0:
- return len % 64;
+ len %= 64;
+ break;
case 1:
- return len % 256;
+ len %= 256;
+ break;
case 2:
- return len % 1024;
+ len %= 1024;
+ break;
default:
- return len;
+ break;
}
+ if (prandom_u32_below(rng, 4) == 0)
+ len = rounddown_pow_of_two(len);
+ return len;
}

/* Flip a random bit in the given nonempty data buffer */
static void flip_random_bit(struct rnd_state *rng, u8 *buf, size_t size)
{
@@ -1004,10 +1010,12 @@ static char *generate_random_sgl_divisions(struct rnd_state *rng,
unsigned int this_len;
const char *flushtype_str;

if (div == &divs[max_divs - 1] || prandom_bool(rng))
this_len = remaining;
+ else if (prandom_u32_below(rng, 4) == 0)
+ this_len = (remaining + 1) / 2;
else
this_len = prandom_u32_inclusive(rng, 1, remaining);
div->proportion_of_total = this_len;

if (prandom_u32_below(rng, 4) == 0)
--
2.44.0


2024-04-22 20:36:17

by Eric Biggers

Subject: [PATCH v2 3/8] crypto: testmgr - add tests for finup2x

From: Eric Biggers <[email protected]>

Update the shash self-tests to test the new finup2x method when
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/testmgr.c | 56 +++++++++++++++++++++++++++++++++++++++---------
1 file changed, 46 insertions(+), 10 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 2c57ebcaf368..b49fa88c95e1 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -227,10 +227,12 @@ enum flush_type {

/* finalization function for hash algorithms */
enum finalization_type {
FINALIZATION_TYPE_FINAL, /* use final() */
FINALIZATION_TYPE_FINUP, /* use finup() */
+ FINALIZATION_TYPE_FINUP2X_BUF1, /* use 1st buffer of finup2x() */
+ FINALIZATION_TYPE_FINUP2X_BUF2, /* use 2nd buffer of finup2x() */
FINALIZATION_TYPE_DIGEST, /* use digest() */
};

/*
* Whether the crypto operation will occur in-place, and if so whether the
@@ -1109,19 +1111,28 @@ static void generate_random_testvec_config(struct rnd_state *rng,
if (prandom_bool(rng)) {
cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
p += scnprintf(p, end - p, " may_sleep");
}

- switch (prandom_u32_below(rng, 4)) {
+ switch (prandom_u32_below(rng, 8)) {
case 0:
+ case 1:
cfg->finalization_type = FINALIZATION_TYPE_FINAL;
p += scnprintf(p, end - p, " use_final");
break;
- case 1:
+ case 2:
cfg->finalization_type = FINALIZATION_TYPE_FINUP;
p += scnprintf(p, end - p, " use_finup");
break;
+ case 3:
+ cfg->finalization_type = FINALIZATION_TYPE_FINUP2X_BUF1;
+ p += scnprintf(p, end - p, " use_finup2x_buf1");
+ break;
+ case 4:
+ cfg->finalization_type = FINALIZATION_TYPE_FINUP2X_BUF2;
+ p += scnprintf(p, end - p, " use_finup2x_buf2");
+ break;
default:
cfg->finalization_type = FINALIZATION_TYPE_DIGEST;
p += scnprintf(p, end - p, " use_digest");
break;
}
@@ -1346,11 +1357,14 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
return -EINVAL;
}
goto result_ready;
}

- /* Using init(), zero or more update(), then final() or finup() */
+ /*
+ * Using init(), zero or more update(), then either final(), finup(), or
+ * finup2x().
+ */

if (cfg->nosimd)
crypto_disable_simd_for_test();
err = crypto_shash_init(desc);
if (cfg->nosimd)
@@ -1358,28 +1372,50 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
err = check_shash_op("init", err, driver, vec_name, cfg);
if (err)
return err;

for (i = 0; i < tsgl->nents; i++) {
+ const u8 *data = sg_virt(&tsgl->sgl[i]);
+ unsigned int len = tsgl->sgl[i].length;
+
if (i + 1 == tsgl->nents &&
- cfg->finalization_type == FINALIZATION_TYPE_FINUP) {
+ (cfg->finalization_type == FINALIZATION_TYPE_FINUP ||
+ cfg->finalization_type == FINALIZATION_TYPE_FINUP2X_BUF1 ||
+ cfg->finalization_type == FINALIZATION_TYPE_FINUP2X_BUF2)) {
+ const u8 *unused_data = tsgl->bufs[XBUFSIZE - 1];
+ u8 unused_result[HASH_MAX_DIGESTSIZE];
+ const char *op;
+
if (divs[i]->nosimd)
crypto_disable_simd_for_test();
- err = crypto_shash_finup(desc, sg_virt(&tsgl->sgl[i]),
- tsgl->sgl[i].length, result);
+ if (cfg->finalization_type == FINALIZATION_TYPE_FINUP ||
+ !crypto_shash_supports_finup2x(tfm)) {
+ err = crypto_shash_finup(desc, data, len,
+ result);
+ op = "finup";
+ } else if (cfg->finalization_type ==
+ FINALIZATION_TYPE_FINUP2X_BUF1) {
+ err = crypto_shash_finup2x(
+ desc, data, unused_data, len,
+ result, unused_result);
+ op = "finup2x_buf1";
+ } else { /* FINALIZATION_TYPE_FINUP2X_BUF2 */
+ err = crypto_shash_finup2x(
+ desc, unused_data, data, len,
+ unused_result, result);
+ op = "finup2x_buf2";
+ }
if (divs[i]->nosimd)
crypto_reenable_simd_for_test();
- err = check_shash_op("finup", err, driver, vec_name,
- cfg);
+ err = check_shash_op(op, err, driver, vec_name, cfg);
if (err)
return err;
goto result_ready;
}
if (divs[i]->nosimd)
crypto_disable_simd_for_test();
- err = crypto_shash_update(desc, sg_virt(&tsgl->sgl[i]),
- tsgl->sgl[i].length);
+ err = crypto_shash_update(desc, data, len);
if (divs[i]->nosimd)
crypto_reenable_simd_for_test();
err = check_shash_op("update", err, driver, vec_name, cfg);
if (err)
return err;
--
2.44.0


2024-04-22 20:36:21

by Eric Biggers

Subject: [PATCH v2 4/8] crypto: x86/sha256-ni - add support for finup2x

From: Eric Biggers <[email protected]>

Add an implementation of finup2x to sha256-ni. finup2x interleaves a
finup operation for two equal-length messages that share a common
prefix. dm-verity and fs-verity will take advantage of this for
significantly improved performance on capable CPUs.

This increases the throughput of SHA-256 hashing 4096-byte messages by
the following amounts on the following CPUs:

AMD Zen 1: 84%
AMD Zen 4: 98%
Intel Ice Lake: 4%
Intel Sapphire Rapids: 20%

For now, this seems to benefit AMD much more than Intel. This seems to
be because current AMD CPUs support concurrent execution of the SHA-NI
instructions, but unfortunately current Intel CPUs don't, except for the
sha256msg2 instruction. Hopefully future Intel CPUs will support SHA-NI
on more execution ports. Zen 1 supports 2 concurrent sha256rnds2, and
Zen 4 supports 4 concurrent sha256rnds2, which suggests that even better
performance may be achievable on Zen 4 by interleaving more than two
hashes; however, doing so poses a number of trade-offs.

It's been reported that the method that achieves the highest SHA-256
throughput on Intel CPUs is actually computing 16 hashes simultaneously
using AVX512. That method would be quite different from the SHA-NI method
used in this patch. However, such a high interleaving factor isn't
practical for the use cases being targeted in the kernel.

Signed-off-by: Eric Biggers <[email protected]>
---
arch/x86/crypto/sha256_ni_asm.S | 368 ++++++++++++++++++++++++++++
arch/x86/crypto/sha256_ssse3_glue.c | 41 ++++
2 files changed, 409 insertions(+)

diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
index d515a55a3bc1..5e97922a24e4 100644
--- a/arch/x86/crypto/sha256_ni_asm.S
+++ b/arch/x86/crypto/sha256_ni_asm.S
@@ -172,10 +172,378 @@ SYM_TYPED_FUNC_START(sha256_ni_transform)
.Ldone_hash:

RET
SYM_FUNC_END(sha256_ni_transform)

+#undef DIGEST_PTR
+#undef DATA_PTR
+#undef NUM_BLKS
+#undef SHA256CONSTANTS
+#undef MSG
+#undef STATE0
+#undef STATE1
+#undef MSG0
+#undef MSG1
+#undef MSG2
+#undef MSG3
+#undef TMP
+#undef SHUF_MASK
+#undef ABEF_SAVE
+#undef CDGH_SAVE
+
+// parameters for __sha256_ni_finup2x()
+#define SCTX %rdi
+#define DATA1 %rsi
+#define DATA2 %rdx
+#define LEN %ecx
+#define LEN8 %cl
+#define LEN64 %rcx
+#define OUT1 %r8
+#define OUT2 %r9
+
+// other scalar variables
+#define SHA256CONSTANTS %rax
+#define COUNT %r10
+#define COUNT32 %r10d
+#define FINAL_STEP %r11d
+
+// rbx is used as a temporary.
+
+#define MSG %xmm0 // sha256rnds2 implicit operand
+#define STATE0_A %xmm1
+#define STATE1_A %xmm2
+#define STATE0_B %xmm3
+#define STATE1_B %xmm4
+#define TMP_A %xmm5
+#define TMP_B %xmm6
+#define MSG0_A %xmm7
+#define MSG1_A %xmm8
+#define MSG2_A %xmm9
+#define MSG3_A %xmm10
+#define MSG0_B %xmm11
+#define MSG1_B %xmm12
+#define MSG2_B %xmm13
+#define MSG3_B %xmm14
+#define SHUF_MASK %xmm15
+
+#define OFFSETOF_STATE 0 // offsetof(struct sha256_state, state)
+#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count)
+#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf)
+
+// Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a and m0_b
+// contain the current 4 message schedule words for the first and second message
+// respectively.
+//
+// If not all the message schedule words have been computed yet, then this also
+// computes 4 more message schedule words for each message. m1_a-m3_a contain
+// the next 3 groups of 4 message schedule words for the first message, and
+// likewise m1_b-m3_b for the second. After consuming the current value of
+// m0_a, this macro computes the group after m3_a and writes it to m0_a, and
+// likewise for *_b. This means that the next (m0_a, m1_a, m2_a, m3_a) is the
+// current (m1_a, m2_a, m3_a, m0_a), and likewise for *_b, so the caller must
+// cycle through the registers accordingly.
+.macro do_4rounds_2x i, m0_a, m1_a, m2_a, m3_a, m0_b, m1_b, m2_b, m3_b
+ movdqa (\i-32)*4(SHA256CONSTANTS), TMP_A
+ movdqa TMP_A, TMP_B
+ paddd \m0_a, TMP_A
+ paddd \m0_b, TMP_B
+.if \i < 48
+ sha256msg1 \m1_a, \m0_a
+ sha256msg1 \m1_b, \m0_b
+.endif
+ movdqa TMP_A, MSG
+ sha256rnds2 STATE0_A, STATE1_A
+ movdqa TMP_B, MSG
+ sha256rnds2 STATE0_B, STATE1_B
+ pshufd $0x0E, TMP_A, MSG
+ sha256rnds2 STATE1_A, STATE0_A
+ pshufd $0x0E, TMP_B, MSG
+ sha256rnds2 STATE1_B, STATE0_B
+.if \i < 48
+ movdqa \m3_a, TMP_A
+ movdqa \m3_b, TMP_B
+ palignr $4, \m2_a, TMP_A
+ palignr $4, \m2_b, TMP_B
+ paddd TMP_A, \m0_a
+ paddd TMP_B, \m0_b
+ sha256msg2 \m3_a, \m0_a
+ sha256msg2 \m3_b, \m0_b
+.endif
+.endm
+
+//
+// void __sha256_ni_finup2x(const struct sha256_state *sctx,
+// const u8 *data1, const u8 *data2, int len,
+// u8 out1[SHA256_DIGEST_SIZE],
+// u8 out2[SHA256_DIGEST_SIZE]);
+//
+// This function computes the SHA-256 digests of two messages |data1| and
+// |data2| that are both |len| bytes long, starting from the initial state
+// |sctx|. |len| must be at least SHA256_BLOCK_SIZE.
+//
+// The instructions for the two SHA-256 operations are interleaved. On many
+// CPUs, this is almost twice as fast as hashing each message individually due
+// to taking better advantage of the CPU's SHA-256 and SIMD throughput.
+//
+SYM_FUNC_START(__sha256_ni_finup2x)
+ // Allocate 128 bytes of stack space, 16-byte aligned.
+ push %rbx
+ push %rbp
+ mov %rsp, %rbp
+ sub $128, %rsp
+ and $~15, %rsp
+
+ // Load the shuffle mask for swapping the endianness of 32-bit words.
+ movdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK
+
+ // Set up pointer to the round constants.
+ lea K256+32*4(%rip), SHA256CONSTANTS
+
+ // Initially we're not processing the final blocks.
+ xor FINAL_STEP, FINAL_STEP
+
+ // Load the initial state from sctx->state.
+ movdqu OFFSETOF_STATE+0*16(SCTX), STATE0_A // DCBA
+ movdqu OFFSETOF_STATE+1*16(SCTX), STATE1_A // HGFE
+ movdqa STATE0_A, TMP_A
+ punpcklqdq STATE1_A, STATE0_A // FEBA
+ punpckhqdq TMP_A, STATE1_A // DCHG
+ pshufd $0x1B, STATE0_A, STATE0_A // ABEF
+ pshufd $0xB1, STATE1_A, STATE1_A // CDGH
+
+ // Load sctx->count. Take the mod 64 of it to get the number of bytes
+ // that are buffered in sctx->buf. Also save it in a register with LEN
+ // added to it.
+ mov LEN, LEN // zero-extend LEN into LEN64
+ mov OFFSETOF_COUNT(SCTX), %rbx
+ lea (%rbx, LEN64, 1), COUNT
+ and $63, %ebx
+ jz .Lfinup2x_enter_loop // No bytes buffered?
+
+ // %ebx bytes (1 to 63) are currently buffered in sctx->buf. Load them
+ // followed by the first 64 - %ebx bytes of data. Since LEN >= 64, we
+ // just load 64 bytes from each of sctx->buf, DATA1, and DATA2
+ // unconditionally and rearrange the data as needed.
+
+ movdqu OFFSETOF_BUF+0*16(SCTX), MSG0_A
+ movdqu OFFSETOF_BUF+1*16(SCTX), MSG1_A
+ movdqu OFFSETOF_BUF+2*16(SCTX), MSG2_A
+ movdqu OFFSETOF_BUF+3*16(SCTX), MSG3_A
+ movdqa MSG0_A, 0*16(%rsp)
+ movdqa MSG1_A, 1*16(%rsp)
+ movdqa MSG2_A, 2*16(%rsp)
+ movdqa MSG3_A, 3*16(%rsp)
+
+ movdqu 0*16(DATA1), MSG0_A
+ movdqu 1*16(DATA1), MSG1_A
+ movdqu 2*16(DATA1), MSG2_A
+ movdqu 3*16(DATA1), MSG3_A
+ movdqu MSG0_A, 0*16(%rsp,%rbx)
+ movdqu MSG1_A, 1*16(%rsp,%rbx)
+ movdqu MSG2_A, 2*16(%rsp,%rbx)
+ movdqu MSG3_A, 3*16(%rsp,%rbx)
+ movdqa 0*16(%rsp), MSG0_A
+ movdqa 1*16(%rsp), MSG1_A
+ movdqa 2*16(%rsp), MSG2_A
+ movdqa 3*16(%rsp), MSG3_A
+
+ movdqu 0*16(DATA2), MSG0_B
+ movdqu 1*16(DATA2), MSG1_B
+ movdqu 2*16(DATA2), MSG2_B
+ movdqu 3*16(DATA2), MSG3_B
+ movdqu MSG0_B, 0*16(%rsp,%rbx)
+ movdqu MSG1_B, 1*16(%rsp,%rbx)
+ movdqu MSG2_B, 2*16(%rsp,%rbx)
+ movdqu MSG3_B, 3*16(%rsp,%rbx)
+ movdqa 0*16(%rsp), MSG0_B
+ movdqa 1*16(%rsp), MSG1_B
+ movdqa 2*16(%rsp), MSG2_B
+ movdqa 3*16(%rsp), MSG3_B
+
+ sub $64, %rbx // rbx = buffered - 64
+ sub %rbx, DATA1 // DATA1 += 64 - buffered
+ sub %rbx, DATA2 // DATA2 += 64 - buffered
+ add %ebx, LEN // LEN += buffered - 64
+ movdqa STATE0_A, STATE0_B
+ movdqa STATE1_A, STATE1_B
+ jmp .Lfinup2x_loop_have_data
+
+.Lfinup2x_enter_loop:
+ sub $64, LEN
+ movdqa STATE0_A, STATE0_B
+ movdqa STATE1_A, STATE1_B
+.Lfinup2x_loop:
+ // Load the next two data blocks.
+ movdqu 0*16(DATA1), MSG0_A
+ movdqu 0*16(DATA2), MSG0_B
+ movdqu 1*16(DATA1), MSG1_A
+ movdqu 1*16(DATA2), MSG1_B
+ movdqu 2*16(DATA1), MSG2_A
+ movdqu 2*16(DATA2), MSG2_B
+ movdqu 3*16(DATA1), MSG3_A
+ movdqu 3*16(DATA2), MSG3_B
+ add $64, DATA1
+ add $64, DATA2
+.Lfinup2x_loop_have_data:
+ // Convert the words of the data blocks from big endian.
+ pshufb SHUF_MASK, MSG0_A
+ pshufb SHUF_MASK, MSG0_B
+ pshufb SHUF_MASK, MSG1_A
+ pshufb SHUF_MASK, MSG1_B
+ pshufb SHUF_MASK, MSG2_A
+ pshufb SHUF_MASK, MSG2_B
+ pshufb SHUF_MASK, MSG3_A
+ pshufb SHUF_MASK, MSG3_B
+.Lfinup2x_loop_have_bswapped_data:
+
+ // Save the original state for each block.
+ movdqa STATE0_A, 0*16(%rsp)
+ movdqa STATE0_B, 1*16(%rsp)
+ movdqa STATE1_A, 2*16(%rsp)
+ movdqa STATE1_B, 3*16(%rsp)
+
+ // Do the SHA-256 rounds on each block.
+.irp i, 0, 16, 32, 48
+ do_4rounds_2x (\i + 0), MSG0_A, MSG1_A, MSG2_A, MSG3_A, \
+ MSG0_B, MSG1_B, MSG2_B, MSG3_B
+ do_4rounds_2x (\i + 4), MSG1_A, MSG2_A, MSG3_A, MSG0_A, \
+ MSG1_B, MSG2_B, MSG3_B, MSG0_B
+ do_4rounds_2x (\i + 8), MSG2_A, MSG3_A, MSG0_A, MSG1_A, \
+ MSG2_B, MSG3_B, MSG0_B, MSG1_B
+ do_4rounds_2x (\i + 12), MSG3_A, MSG0_A, MSG1_A, MSG2_A, \
+ MSG3_B, MSG0_B, MSG1_B, MSG2_B
+.endr
+
+ // Add the original state for each block.
+ paddd 0*16(%rsp), STATE0_A
+ paddd 1*16(%rsp), STATE0_B
+ paddd 2*16(%rsp), STATE1_A
+ paddd 3*16(%rsp), STATE1_B
+
+ // Update LEN and loop back if more blocks remain.
+ sub $64, LEN
+ jge .Lfinup2x_loop
+
+ // Check if any final blocks need to be handled.
+ // FINAL_STEP = 2: all done
+ // FINAL_STEP = 1: need to do count-only padding block
+ // FINAL_STEP = 0: need to do the block with 0x80 padding byte
+ cmp $1, FINAL_STEP
+ jg .Lfinup2x_done
+ je .Lfinup2x_finalize_countonly
+ add $64, LEN
+ jz .Lfinup2x_finalize_blockaligned
+
+ // Not block-aligned; 1 <= LEN <= 63 data bytes remain. Pad the block.
+ // To do this, write the padding starting with the 0x80 byte to
+ // &sp[64]. Then for each message, copy the last 64 data bytes to sp
+ // and load from &sp[64 - LEN] to get the needed padding block. This
+ // code relies on the data buffers being >= 64 bytes in length.
+ mov $64, %ebx
+ sub LEN, %ebx // ebx = 64 - LEN
+ sub %rbx, DATA1 // DATA1 -= 64 - LEN
+ sub %rbx, DATA2 // DATA2 -= 64 - LEN
+ mov $0x80, FINAL_STEP // using FINAL_STEP as a temporary
+ movd FINAL_STEP, MSG0_A
+ pxor MSG1_A, MSG1_A
+ movdqa MSG0_A, 4*16(%rsp)
+ movdqa MSG1_A, 5*16(%rsp)
+ movdqa MSG1_A, 6*16(%rsp)
+ movdqa MSG1_A, 7*16(%rsp)
+ cmp $56, LEN
+ jge 1f // will COUNT spill into its own block?
+ shl $3, COUNT
+ bswap COUNT
+ mov COUNT, 56(%rsp,%rbx)
+ mov $2, FINAL_STEP // won't need count-only block
+ jmp 2f
+1:
+ mov $1, FINAL_STEP // will need count-only block
+2:
+ movdqu 0*16(DATA1), MSG0_A
+ movdqu 1*16(DATA1), MSG1_A
+ movdqu 2*16(DATA1), MSG2_A
+ movdqu 3*16(DATA1), MSG3_A
+ movdqa MSG0_A, 0*16(%rsp)
+ movdqa MSG1_A, 1*16(%rsp)
+ movdqa MSG2_A, 2*16(%rsp)
+ movdqa MSG3_A, 3*16(%rsp)
+ movdqu 0*16(%rsp,%rbx), MSG0_A
+ movdqu 1*16(%rsp,%rbx), MSG1_A
+ movdqu 2*16(%rsp,%rbx), MSG2_A
+ movdqu 3*16(%rsp,%rbx), MSG3_A
+
+ movdqu 0*16(DATA2), MSG0_B
+ movdqu 1*16(DATA2), MSG1_B
+ movdqu 2*16(DATA2), MSG2_B
+ movdqu 3*16(DATA2), MSG3_B
+ movdqa MSG0_B, 0*16(%rsp)
+ movdqa MSG1_B, 1*16(%rsp)
+ movdqa MSG2_B, 2*16(%rsp)
+ movdqa MSG3_B, 3*16(%rsp)
+ movdqu 0*16(%rsp,%rbx), MSG0_B
+ movdqu 1*16(%rsp,%rbx), MSG1_B
+ movdqu 2*16(%rsp,%rbx), MSG2_B
+ movdqu 3*16(%rsp,%rbx), MSG3_B
+ jmp .Lfinup2x_loop_have_data
+
+ // Prepare a padding block, either:
+ //
+ // {0x80, 0, 0, 0, ..., count (as __be64)}
+ // This is for a block aligned message.
+ //
+ // { 0, 0, 0, 0, ..., count (as __be64)}
+ // This is for a message whose length mod 64 is >= 56.
+ //
+ // Pre-swap the endianness of the words.
+.Lfinup2x_finalize_countonly:
+ pxor MSG0_A, MSG0_A
+ jmp 1f
+
+.Lfinup2x_finalize_blockaligned:
+ mov $0x80000000, %ebx
+ movd %ebx, MSG0_A
+1:
+ pxor MSG1_A, MSG1_A
+ pxor MSG2_A, MSG2_A
+ ror $29, COUNT // ror(shl(COUNT, 3), 32)
+ movq COUNT, MSG3_A
+ pslldq $8, MSG3_A
+ movdqa MSG0_A, MSG0_B
+ pxor MSG1_B, MSG1_B
+ pxor MSG2_B, MSG2_B
+ movdqa MSG3_A, MSG3_B
+ mov $2, FINAL_STEP
+ jmp .Lfinup2x_loop_have_bswapped_data
+
+.Lfinup2x_done:
+ // Write the two digests with all bytes in the correct order.
+ movdqa STATE0_A, TMP_A
+ movdqa STATE0_B, TMP_B
+ punpcklqdq STATE1_A, STATE0_A // GHEF
+ punpcklqdq STATE1_B, STATE0_B
+ punpckhqdq TMP_A, STATE1_A // ABCD
+ punpckhqdq TMP_B, STATE1_B
+ pshufd $0xB1, STATE0_A, STATE0_A // HGFE
+ pshufd $0xB1, STATE0_B, STATE0_B
+ pshufd $0x1B, STATE1_A, STATE1_A // DCBA
+ pshufd $0x1B, STATE1_B, STATE1_B
+ pshufb SHUF_MASK, STATE0_A
+ pshufb SHUF_MASK, STATE0_B
+ pshufb SHUF_MASK, STATE1_A
+ pshufb SHUF_MASK, STATE1_B
+ movdqu STATE0_A, 1*16(OUT1)
+ movdqu STATE0_B, 1*16(OUT2)
+ movdqu STATE1_A, 0*16(OUT1)
+ movdqu STATE1_B, 0*16(OUT2)
+
+ mov %rbp, %rsp
+ pop %rbp
+ pop %rbx
+ RET
+SYM_FUNC_END(__sha256_ni_finup2x)
+
.section .rodata.cst256.K256, "aM", @progbits, 256
.align 64
K256:
.long 0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5
.long 0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index e04a43d9f7d5..de027bb240e2 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -331,10 +331,15 @@ static void unregister_sha256_avx2(void)

#ifdef CONFIG_AS_SHA256_NI
asmlinkage void sha256_ni_transform(struct sha256_state *digest,
const u8 *data, int rounds);

+asmlinkage void __sha256_ni_finup2x(const struct sha256_state *sctx,
+ const u8 *data1, const u8 *data2, int len,
+ u8 out1[SHA256_DIGEST_SIZE],
+ u8 out2[SHA256_DIGEST_SIZE]);
+
static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return _sha256_update(desc, data, len, sha256_ni_transform);
}
@@ -355,17 +360,53 @@ static int sha256_ni_digest(struct shash_desc *desc, const u8 *data,
{
return sha256_base_init(desc) ?:
sha256_ni_finup(desc, data, len, out);
}

+static noinline_for_stack int
+sha256_finup2x_fallback(struct sha256_state *sctx, const u8 *data1,
+ const u8 *data2, unsigned int len, u8 *out1, u8 *out2)
+{
+ struct sha256_state sctx2 = *sctx;
+
+ sha256_update(sctx, data1, len);
+ sha256_final(sctx, out1);
+ sha256_update(&sctx2, data2, len);
+ sha256_final(&sctx2, out2);
+ return 0;
+}
+
+static int sha256_ni_finup2x(struct shash_desc *desc,
+ const u8 *data1, const u8 *data2,
+ unsigned int len, u8 *out1, u8 *out2)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+
+ if (unlikely(!crypto_simd_usable() || len < SHA256_BLOCK_SIZE ||
+ len > INT_MAX))
+ return sha256_finup2x_fallback(sctx, data1, data2, len,
+ out1, out2);
+
+ /* __sha256_ni_finup2x() assumes the following offsets. */
+ BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
+ BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32);
+ BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40);
+
+ kernel_fpu_begin();
+ __sha256_ni_finup2x(sctx, data1, data2, len, out1, out2);
+ kernel_fpu_end();
+ return 0;
+}
+
static struct shash_alg sha256_ni_algs[] = { {
.digestsize = SHA256_DIGEST_SIZE,
.init = sha256_base_init,
.update = sha256_ni_update,
.final = sha256_ni_final,
.finup = sha256_ni_finup,
.digest = sha256_ni_digest,
+ .finup2x = sha256_ni_finup2x,
.descsize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "sha256-ni",
.cra_priority = 250,
--
2.44.0


2024-04-22 20:36:24

by Eric Biggers

Subject: [PATCH v2 5/8] crypto: arm64/sha256-ce - add support for finup2x

From: Eric Biggers <[email protected]>

Add an implementation of finup2x to sha256-ce. finup2x interleaves a
finup operation for two equal-length messages that share a common
prefix. dm-verity and fs-verity will take advantage of this for
significantly improved performance on capable CPUs.

On an ARM Cortex-X1, this increases the throughput of SHA-256 hashing
4096-byte messages by 70%.

Signed-off-by: Eric Biggers <[email protected]>
---
arch/arm64/crypto/sha2-ce-core.S | 281 ++++++++++++++++++++++++++++++-
arch/arm64/crypto/sha2-ce-glue.c | 41 +++++
2 files changed, 316 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index fce84d88ddb2..fb5d5227e585 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -68,22 +68,26 @@
.word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5
.word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3
.word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208
.word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2

+ .macro load_round_constants tmp
+ adr_l \tmp, .Lsha2_rcon
+ ld1 { v0.4s- v3.4s}, [\tmp], #64
+ ld1 { v4.4s- v7.4s}, [\tmp], #64
+ ld1 { v8.4s-v11.4s}, [\tmp], #64
+ ld1 {v12.4s-v15.4s}, [\tmp]
+ .endm
+
/*
* int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
* int blocks)
*/
.text
SYM_FUNC_START(__sha256_ce_transform)
- /* load round constants */
- adr_l x8, .Lsha2_rcon
- ld1 { v0.4s- v3.4s}, [x8], #64
- ld1 { v4.4s- v7.4s}, [x8], #64
- ld1 { v8.4s-v11.4s}, [x8], #64
- ld1 {v12.4s-v15.4s}, [x8]
+
+ load_round_constants x8

/* load state */
ld1 {dgav.4s, dgbv.4s}, [x0]

/* load sha256_ce_state::finalize */
@@ -153,5 +157,270 @@ CPU_LE( rev32 v19.16b, v19.16b )
/* store new state */
3: st1 {dgav.4s, dgbv.4s}, [x0]
mov w0, w2
ret
SYM_FUNC_END(__sha256_ce_transform)
+
+ .unreq dga
+ .unreq dgav
+ .unreq dgb
+ .unreq dgbv
+ .unreq t0
+ .unreq t1
+ .unreq dg0q
+ .unreq dg0v
+ .unreq dg1q
+ .unreq dg1v
+ .unreq dg2q
+ .unreq dg2v
+
+ // parameters for __sha256_ce_finup2x()
+ sctx .req x0
+ data1 .req x1
+ data2 .req x2
+ len .req w3
+ out1 .req x4
+ out2 .req x5
+
+ // other scalar variables
+ count .req x6
+ final_step .req w7
+
+ // x8-x9 are used as temporaries.
+
+ // v0-v15 are used to cache the SHA-256 round constants.
+ // v16-v19 are used for the message schedule for the first message.
+ // v20-v23 are used for the message schedule for the second message.
+ // v24-v31 are used for the state and temporaries as given below.
+ // *_a are for the first message and *_b for the second.
+ state0_a_q .req q24
+ state0_a .req v24
+ state1_a_q .req q25
+ state1_a .req v25
+ state0_b_q .req q26
+ state0_b .req v26
+ state1_b_q .req q27
+ state1_b .req v27
+ t0_a .req v28
+ t0_b .req v29
+ t1_a_q .req q30
+ t1_a .req v30
+ t1_b_q .req q31
+ t1_b .req v31
+
+#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count)
+#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf)
+// offsetof(struct sha256_state, state) is assumed to be 0.
+
+ // Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a
+ // and m0_b contain the current 4 message schedule words for the first
+ // and second message respectively.
+ //
+ // If not all the message schedule words have been computed yet, then
+ // this also computes 4 more message schedule words for each message.
+ // m1_a-m3_a contain the next 3 groups of 4 message schedule words for
+ // the first message, and likewise m1_b-m3_b for the second. After
+ // consuming the current value of m0_a, this macro computes the group
+ // after m3_a and writes it to m0_a, and likewise for *_b. This means
+ // that the next (m0_a, m1_a, m2_a, m3_a) is the current (m1_a, m2_a,
+ // m3_a, m0_a), and likewise for *_b, so the caller must cycle through
+ // the registers accordingly.
+ .macro do_4rounds_2x i, k, m0_a, m1_a, m2_a, m3_a, \
+ m0_b, m1_b, m2_b, m3_b
+ add t0_a\().4s, \m0_a\().4s, \k\().4s
+ add t0_b\().4s, \m0_b\().4s, \k\().4s
+ .if \i < 48
+ sha256su0 \m0_a\().4s, \m1_a\().4s
+ sha256su0 \m0_b\().4s, \m1_b\().4s
+ sha256su1 \m0_a\().4s, \m2_a\().4s, \m3_a\().4s
+ sha256su1 \m0_b\().4s, \m2_b\().4s, \m3_b\().4s
+ .endif
+ mov t1_a.16b, state0_a.16b
+ mov t1_b.16b, state0_b.16b
+ sha256h state0_a_q, state1_a_q, t0_a\().4s
+ sha256h state0_b_q, state1_b_q, t0_b\().4s
+ sha256h2 state1_a_q, t1_a_q, t0_a\().4s
+ sha256h2 state1_b_q, t1_b_q, t0_b\().4s
+ .endm
+
+ .macro do_16rounds_2x i, k0, k1, k2, k3
+ do_4rounds_2x \i + 0, \k0, v16, v17, v18, v19, v20, v21, v22, v23
+ do_4rounds_2x \i + 4, \k1, v17, v18, v19, v16, v21, v22, v23, v20
+ do_4rounds_2x \i + 8, \k2, v18, v19, v16, v17, v22, v23, v20, v21
+ do_4rounds_2x \i + 12, \k3, v19, v16, v17, v18, v23, v20, v21, v22
+ .endm
+
+//
+// void __sha256_ce_finup2x(const struct sha256_state *sctx,
+// const u8 *data1, const u8 *data2, int len,
+// u8 out1[SHA256_DIGEST_SIZE],
+// u8 out2[SHA256_DIGEST_SIZE]);
+//
+// This function computes the SHA-256 digests of two messages |data1| and
+// |data2| that are both |len| bytes long, starting from the initial state
+// |sctx|. |len| must be at least SHA256_BLOCK_SIZE.
+//
+// The instructions for the two SHA-256 operations are interleaved. On many
+// CPUs, this is almost twice as fast as hashing each message individually due
+// to taking better advantage of the CPU's SHA-256 and SIMD throughput.
+//
+SYM_FUNC_START(__sha256_ce_finup2x)
+ sub sp, sp, #128
+ mov final_step, #0
+ load_round_constants x8
+
+ // Load the initial state from sctx->state.
+ ld1 {state0_a.4s-state1_a.4s}, [sctx]
+
+ // Load sctx->count. Take the mod 64 of it to get the number of bytes
+ // that are buffered in sctx->buf. Also save it in a register with len
+ // added to it.
+ ldr x8, [sctx, #OFFSETOF_COUNT]
+ add count, x8, len, sxtw
+ and x8, x8, #63
+ cbz x8, .Lfinup2x_enter_loop // No bytes buffered?
+
+ // x8 bytes (1 to 63) are currently buffered in sctx->buf. Load them
+ // followed by the first 64 - x8 bytes of data. Since len >= 64, we
+ // just load 64 bytes from each of sctx->buf, data1, and data2
+ // unconditionally and rearrange the data as needed.
+ add x9, sctx, #OFFSETOF_BUF
+ ld1 {v16.16b-v19.16b}, [x9]
+ st1 {v16.16b-v19.16b}, [sp]
+
+ ld1 {v16.16b-v19.16b}, [data1], #64
+ add x9, sp, x8
+ st1 {v16.16b-v19.16b}, [x9]
+ ld1 {v16.4s-v19.4s}, [sp]
+
+ ld1 {v20.16b-v23.16b}, [data2], #64
+ st1 {v20.16b-v23.16b}, [x9]
+ ld1 {v20.4s-v23.4s}, [sp]
+
+ sub len, len, #64
+ sub data1, data1, x8
+ sub data2, data2, x8
+ add len, len, w8
+ mov state0_b.16b, state0_a.16b
+ mov state1_b.16b, state1_a.16b
+ b .Lfinup2x_loop_have_data
+
+.Lfinup2x_enter_loop:
+ sub len, len, #64
+ mov state0_b.16b, state0_a.16b
+ mov state1_b.16b, state1_a.16b
+.Lfinup2x_loop:
+ // Load the next two data blocks.
+ ld1 {v16.4s-v19.4s}, [data1], #64
+ ld1 {v20.4s-v23.4s}, [data2], #64
+.Lfinup2x_loop_have_data:
+ // Convert the words of the data blocks from big endian.
+CPU_LE( rev32 v16.16b, v16.16b )
+CPU_LE( rev32 v17.16b, v17.16b )
+CPU_LE( rev32 v18.16b, v18.16b )
+CPU_LE( rev32 v19.16b, v19.16b )
+CPU_LE( rev32 v20.16b, v20.16b )
+CPU_LE( rev32 v21.16b, v21.16b )
+CPU_LE( rev32 v22.16b, v22.16b )
+CPU_LE( rev32 v23.16b, v23.16b )
+.Lfinup2x_loop_have_bswapped_data:
+
+ // Save the original state for each block.
+ st1 {state0_a.4s-state1_b.4s}, [sp]
+
+ // Do the SHA-256 rounds on each block.
+ do_16rounds_2x 0, v0, v1, v2, v3
+ do_16rounds_2x 16, v4, v5, v6, v7
+ do_16rounds_2x 32, v8, v9, v10, v11
+ do_16rounds_2x 48, v12, v13, v14, v15
+
+ // Add the original state for each block.
+ ld1 {v16.4s-v19.4s}, [sp]
+ add state0_a.4s, state0_a.4s, v16.4s
+ add state1_a.4s, state1_a.4s, v17.4s
+ add state0_b.4s, state0_b.4s, v18.4s
+ add state1_b.4s, state1_b.4s, v19.4s
+
+ // Update len and loop back if more blocks remain.
+ sub len, len, #64
+ tbz len, #31, .Lfinup2x_loop // len >= 0?
+
+ // Check if any final blocks need to be handled.
+ // final_step = 2: all done
+ // final_step = 1: need to do count-only padding block
+ // final_step = 0: need to do the block with 0x80 padding byte
+ tbnz final_step, #1, .Lfinup2x_done
+ tbnz final_step, #0, .Lfinup2x_finalize_countonly
+ add len, len, #64
+ cbz len, .Lfinup2x_finalize_blockaligned
+
+ // Not block-aligned; 1 <= len <= 63 data bytes remain. Pad the block.
+ // To do this, write the padding starting with the 0x80 byte to
+ // &sp[64]. Then for each message, copy the last 64 data bytes to sp
+ // and load from &sp[64 - len] to get the needed padding block. This
+ // code relies on the data buffers being >= 64 bytes in length.
+ sub w8, len, #64 // w8 = len - 64
+ add data1, data1, w8, sxtw // data1 += len - 64
+ add data2, data2, w8, sxtw // data2 += len - 64
+ mov x9, 0x80
+ fmov d16, x9
+ movi v17.16b, #0
+ stp q16, q17, [sp, #64]
+ stp q17, q17, [sp, #96]
+ sub x9, sp, w8, sxtw // x9 = &sp[64 - len]
+ cmp len, #56
+ b.ge 1f // will count spill into its own block?
+ lsl count, count, #3
+ rev count, count
+ str count, [x9, #56]
+ mov final_step, #2 // won't need count-only block
+ b 2f
+1:
+ mov final_step, #1 // will need count-only block
+2:
+ ld1 {v16.16b-v19.16b}, [data1]
+ st1 {v16.16b-v19.16b}, [sp]
+ ld1 {v16.4s-v19.4s}, [x9]
+ ld1 {v20.16b-v23.16b}, [data2]
+ st1 {v20.16b-v23.16b}, [sp]
+ ld1 {v20.4s-v23.4s}, [x9]
+ b .Lfinup2x_loop_have_data
+
+ // Prepare a padding block, either:
+ //
+ // {0x80, 0, 0, 0, ..., count (as __be64)}
+ // This is for a block aligned message.
+ //
+ // { 0, 0, 0, 0, ..., count (as __be64)}
+ // This is for a message whose length mod 64 is >= 56.
+ //
+ // Pre-swap the endianness of the words.
+.Lfinup2x_finalize_countonly:
+ movi v16.2d, #0
+ b 1f
+.Lfinup2x_finalize_blockaligned:
+ mov x8, #0x80000000
+ fmov d16, x8
+1:
+ movi v17.2d, #0
+ movi v18.2d, #0
+ ror count, count, #29 // ror(lsl(count, 3), 32)
+ mov v19.d[0], xzr
+ mov v19.d[1], count
+ mov v20.16b, v16.16b
+ movi v21.2d, #0
+ movi v22.2d, #0
+ mov v23.16b, v19.16b
+ mov final_step, #2
+ b .Lfinup2x_loop_have_bswapped_data
+
+.Lfinup2x_done:
+ // Write the two digests with all bytes in the correct order.
+CPU_LE( rev32 state0_a.16b, state0_a.16b )
+CPU_LE( rev32 state1_a.16b, state1_a.16b )
+CPU_LE( rev32 state0_b.16b, state0_b.16b )
+CPU_LE( rev32 state1_b.16b, state1_b.16b )
+ st1 {state0_a.4s-state1_a.4s}, [out1]
+ st1 {state0_b.4s-state1_b.4s}, [out2]
+ add sp, sp, #128
+ ret
+SYM_FUNC_END(__sha256_ce_finup2x)
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 0a44d2e7ee1f..c77d75395cc4 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -31,10 +31,15 @@ extern const u32 sha256_ce_offsetof_count;
extern const u32 sha256_ce_offsetof_finalize;

asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
int blocks);

+asmlinkage void __sha256_ce_finup2x(const struct sha256_state *sctx,
+ const u8 *data1, const u8 *data2, int len,
+ u8 out1[SHA256_DIGEST_SIZE],
+ u8 out2[SHA256_DIGEST_SIZE]);
+
static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
int blocks)
{
while (blocks) {
int rem;
@@ -122,10 +127,45 @@ static int sha256_ce_digest(struct shash_desc *desc, const u8 *data,
{
sha256_base_init(desc);
return sha256_ce_finup(desc, data, len, out);
}

+static noinline_for_stack int
+sha256_finup2x_fallback(struct sha256_state *sctx, const u8 *data1,
+ const u8 *data2, unsigned int len, u8 *out1, u8 *out2)
+{
+ struct sha256_state sctx2 = *sctx;
+
+ sha256_update(sctx, data1, len);
+ sha256_final(sctx, out1);
+ sha256_update(&sctx2, data2, len);
+ sha256_final(&sctx2, out2);
+ return 0;
+}
+
+static int sha256_ce_finup2x(struct shash_desc *desc,
+ const u8 *data1, const u8 *data2,
+ unsigned int len, u8 *out1, u8 *out2)
+{
+ struct sha256_ce_state *sctx = shash_desc_ctx(desc);
+
+ if (unlikely(!crypto_simd_usable() || len < SHA256_BLOCK_SIZE ||
+ len > INT_MAX))
+ return sha256_finup2x_fallback(&sctx->sst, data1, data2, len,
+ out1, out2);
+
+ /* __sha256_ce_finup2x() assumes the following offsets. */
+ BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
+ BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32);
+ BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40);
+
+ kernel_neon_begin();
+ __sha256_ce_finup2x(&sctx->sst, data1, data2, len, out1, out2);
+ kernel_neon_end();
+ return 0;
+}
+
static int sha256_ce_export(struct shash_desc *desc, void *out)
{
struct sha256_ce_state *sctx = shash_desc_ctx(desc);

memcpy(out, &sctx->sst, sizeof(struct sha256_state));
@@ -162,10 +202,11 @@ static struct shash_alg algs[] = { {
.init = sha256_base_init,
.update = sha256_ce_update,
.final = sha256_ce_final,
.finup = sha256_ce_finup,
.digest = sha256_ce_digest,
+ .finup2x = sha256_ce_finup2x,
.export = sha256_ce_export,
.import = sha256_ce_import,
.descsize = sizeof(struct sha256_ce_state),
.statesize = sizeof(struct sha256_state),
.digestsize = SHA256_DIGEST_SIZE,
--
2.44.0


2024-04-22 20:36:27

by Eric Biggers

Subject: [PATCH v2 6/8] fsverity: improve performance by using multibuffer hashing

From: Eric Biggers <[email protected]>

When supported by the hash algorithm, use crypto_shash_finup2x() to
interleave the hashing of pairs of data blocks. On some CPUs this
nearly doubles hashing performance. The increase in overall throughput
of cold-cache fsverity reads that I'm seeing on arm64 and x86_64 is
roughly 35% (though this metric is hard to measure as it jumps around a
lot).

For now this is only done on the verification path, and only for data
blocks, not Merkle tree blocks. We could use finup2x on Merkle tree
blocks too, but that is less important as there aren't as many Merkle
tree blocks as data blocks, and that would require some additional code
restructuring. We could also use finup2x to accelerate building the
Merkle tree, but verification performance is more important.
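
To illustrate the approach (the real code is in the verify.c changes
below, which also handle kmap/kunmap, the non-multibuffer case, and error
cleanup), the pairing logic amounts to roughly the following sketch;
fsverity_queue_block() is a hypothetical helper name:

	static bool
	fsverity_queue_block(struct fsverity_verification_context *ctx,
			     const void *data, u64 pos)
	{
		if (!ctx->pending_data) {
			/* First block of a pair: just remember it for now. */
			ctx->pending_data = data;
			ctx->pending_pos = pos;
			return true;
		}
		/* Second block of a pair: hash both at once, then verify. */
		if (fsverity_hash_2_blocks(&ctx->vi->tree_params, ctx->inode,
					   ctx->pending_data, data,
					   ctx->hash1, ctx->hash2) != 0)
			return false;
		ctx->pending_data = NULL;
		return verify_data_block(ctx->inode, ctx->vi, ctx->hash1,
					 ctx->pending_pos, ctx->max_ra_pages) &&
		       verify_data_block(ctx->inode, ctx->vi, ctx->hash2,
					 pos, ctx->max_ra_pages);
	}

Any odd trailing block is hashed by itself when the verification context
is finished.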

Signed-off-by: Eric Biggers <[email protected]>
---
fs/verity/fsverity_private.h | 5 +
fs/verity/hash_algs.c | 31 +++++-
fs/verity/open.c | 6 ++
fs/verity/verify.c | 177 +++++++++++++++++++++++++++++------
4 files changed, 186 insertions(+), 33 deletions(-)

diff --git a/fs/verity/fsverity_private.h b/fs/verity/fsverity_private.h
index b3506f56e180..b72c03f8f416 100644
--- a/fs/verity/fsverity_private.h
+++ b/fs/verity/fsverity_private.h
@@ -27,10 +27,11 @@ struct fsverity_hash_alg {
/*
* The HASH_ALGO_* constant for this algorithm. This is different from
* FS_VERITY_HASH_ALG_*, which uses a different numbering scheme.
*/
enum hash_algo algo_id;
+ bool supports_finup2x; /* crypto_shash_supports_finup2x(tfm) */
};

/* Merkle tree parameters: hash algorithm, initial hash state, and topology */
struct merkle_tree_params {
const struct fsverity_hash_alg *hash_alg; /* the hash algorithm */
@@ -65,10 +66,11 @@ struct merkle_tree_params {
*/
struct fsverity_info {
struct merkle_tree_params tree_params;
u8 root_hash[FS_VERITY_MAX_DIGEST_SIZE];
u8 file_digest[FS_VERITY_MAX_DIGEST_SIZE];
+ u8 zero_block_hash[FS_VERITY_MAX_DIGEST_SIZE];
const struct inode *inode;
unsigned long *hash_block_verified;
};

#define FS_VERITY_MAX_SIGNATURE_SIZE (FS_VERITY_MAX_DESCRIPTOR_SIZE - \
@@ -82,10 +84,13 @@ const struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
unsigned int num);
const u8 *fsverity_prepare_hash_state(const struct fsverity_hash_alg *alg,
const u8 *salt, size_t salt_size);
int fsverity_hash_block(const struct merkle_tree_params *params,
const struct inode *inode, const void *data, u8 *out);
+int fsverity_hash_2_blocks(const struct merkle_tree_params *params,
+ const struct inode *inode, const void *data1,
+ const void *data2, u8 *out1, u8 *out2);
int fsverity_hash_buffer(const struct fsverity_hash_alg *alg,
const void *data, size_t size, u8 *out);
void __init fsverity_check_hash_algs(void);

/* init.c */
diff --git a/fs/verity/hash_algs.c b/fs/verity/hash_algs.c
index 6b08b1d9a7d7..0be2903fa8f3 100644
--- a/fs/verity/hash_algs.c
+++ b/fs/verity/hash_algs.c
@@ -82,12 +82,15 @@ const struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
if (WARN_ON_ONCE(alg->digest_size != crypto_shash_digestsize(tfm)))
goto err_free_tfm;
if (WARN_ON_ONCE(alg->block_size != crypto_shash_blocksize(tfm)))
goto err_free_tfm;

- pr_info("%s using implementation \"%s\"\n",
- alg->name, crypto_shash_driver_name(tfm));
+ alg->supports_finup2x = crypto_shash_supports_finup2x(tfm);
+
+ pr_info("%s using implementation \"%s\"%s\n",
+ alg->name, crypto_shash_driver_name(tfm),
+ alg->supports_finup2x ? " (multibuffer)" : "");

/* pairs with smp_load_acquire() above */
smp_store_release(&alg->tfm, tfm);
goto out_unlock;

@@ -195,10 +198,34 @@ int fsverity_hash_block(const struct merkle_tree_params *params,
if (err)
fsverity_err(inode, "Error %d computing block hash", err);
return err;
}

+int fsverity_hash_2_blocks(const struct merkle_tree_params *params,
+ const struct inode *inode, const void *data1,
+ const void *data2, u8 *out1, u8 *out2)
+{
+ SHASH_DESC_ON_STACK(desc, params->hash_alg->tfm);
+ int err;
+
+ desc->tfm = params->hash_alg->tfm;
+
+ if (params->hashstate)
+ err = crypto_shash_import(desc, params->hashstate);
+ else
+ err = crypto_shash_init(desc);
+ if (err) {
+ fsverity_err(inode, "Error %d importing hash state", err);
+ return err;
+ }
+ err = crypto_shash_finup2x(desc, data1, data2, params->block_size,
+ out1, out2);
+ if (err)
+ fsverity_err(inode, "Error %d computing block hashes", err);
+ return err;
+}
+
/**
* fsverity_hash_buffer() - hash some data
* @alg: the hash algorithm to use
* @data: the data to hash
* @size: size of data to hash, in bytes
diff --git a/fs/verity/open.c b/fs/verity/open.c
index fdeb95eca3af..4ae07c689c56 100644
--- a/fs/verity/open.c
+++ b/fs/verity/open.c
@@ -206,10 +206,16 @@ struct fsverity_info *fsverity_create_info(const struct inode *inode,
if (err) {
fsverity_err(inode, "Error %d computing file digest", err);
goto fail;
}

+ err = fsverity_hash_block(&vi->tree_params, inode,
+ page_address(ZERO_PAGE(0)),
+ vi->zero_block_hash);
+ if (err)
+ goto fail;
+
err = fsverity_verify_signature(vi, desc->signature,
le32_to_cpu(desc->sig_size));
if (err)
goto fail;

diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index 4fcad0825a12..f6e899eaa271 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -77,29 +77,33 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage,
SetPageChecked(hpage);
return false;
}

/*
- * Verify a single data block against the file's Merkle tree.
+ * Verify the hash of a single data block against the file's Merkle tree.
+ *
+ * @real_dblock_hash specifies the hash of the data block, and @data_pos
+ * specifies the byte position of the data block within the file.
*
* In principle, we need to verify the entire path to the root node. However,
* for efficiency the filesystem may cache the hash blocks. Therefore we need
* only ascend the tree until an already-verified hash block is seen, and then
* verify the path to that block.
*
* Return: %true if the data block is valid, else %false.
*/
static bool
verify_data_block(struct inode *inode, struct fsverity_info *vi,
- const void *data, u64 data_pos, unsigned long max_ra_pages)
+ const u8 *real_dblock_hash, u64 data_pos,
+ unsigned long max_ra_pages)
{
const struct merkle_tree_params *params = &vi->tree_params;
const unsigned int hsize = params->digest_size;
int level;
u8 _want_hash[FS_VERITY_MAX_DIGEST_SIZE];
const u8 *want_hash;
- u8 real_hash[FS_VERITY_MAX_DIGEST_SIZE];
+ u8 real_hblock_hash[FS_VERITY_MAX_DIGEST_SIZE];
/* The hash blocks that are traversed, indexed by level */
struct {
/* Page containing the hash block */
struct page *page;
/* Mapped address of the hash block (will be within @page) */
@@ -125,11 +129,12 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
* doesn't cover data blocks fully past EOF. But the entire
* page spanning EOF can be visible to userspace via a mmap, and
* any part past EOF should be all zeroes. Therefore, we need
* to verify that any data blocks fully past EOF are all zeroes.
*/
- if (memchr_inv(data, 0, params->block_size)) {
+ if (memcmp(vi->zero_block_hash, real_dblock_hash,
+ hsize) != 0) {
fsverity_err(inode,
"FILE CORRUPTED! Data past EOF is not zeroed");
return false;
}
return true;
@@ -200,13 +205,14 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
struct page *hpage = hblocks[level - 1].page;
const void *haddr = hblocks[level - 1].addr;
unsigned long hblock_idx = hblocks[level - 1].index;
unsigned int hoffset = hblocks[level - 1].hoffset;

- if (fsverity_hash_block(params, inode, haddr, real_hash) != 0)
+ if (fsverity_hash_block(params, inode, haddr,
+ real_hblock_hash) != 0)
goto error;
- if (memcmp(want_hash, real_hash, hsize) != 0)
+ if (memcmp(want_hash, real_hblock_hash, hsize) != 0)
goto corrupted;
/*
* Mark the hash block as verified. This must be atomic and
* idempotent, as the same hash block might be verified by
* multiple threads concurrently.
@@ -219,55 +225,145 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
want_hash = _want_hash;
kunmap_local(haddr);
put_page(hpage);
}

- /* Finally, verify the data block. */
- if (fsverity_hash_block(params, inode, data, real_hash) != 0)
- goto error;
- if (memcmp(want_hash, real_hash, hsize) != 0)
+ /* Finally, verify the hash of the data block. */
+ if (memcmp(want_hash, real_dblock_hash, hsize) != 0)
goto corrupted;
return true;

corrupted:
fsverity_err(inode,
"FILE CORRUPTED! pos=%llu, level=%d, want_hash=%s:%*phN, real_hash=%s:%*phN",
data_pos, level - 1,
params->hash_alg->name, hsize, want_hash,
- params->hash_alg->name, hsize, real_hash);
+ params->hash_alg->name, hsize,
+ level == 0 ? real_dblock_hash : real_hblock_hash);
error:
for (; level > 0; level--) {
kunmap_local(hblocks[level - 1].addr);
put_page(hblocks[level - 1].page);
}
return false;
}

+struct fsverity_verification_context {
+ struct inode *inode;
+ struct fsverity_info *vi;
+ unsigned long max_ra_pages;
+
+ /*
+ * pending_data and pending_pos are used when the selected hash
+ * algorithm supports multibuffer hashing. They're used to temporarily
+ * store the virtual address and position of a mapped data block that
+ * needs to be verified. If we then see another data block, we hash the
+ * two blocks simultaneously using the fast multibuffer hashing method.
+ */
+ const void *pending_data;
+ u64 pending_pos;
+
+ /* Buffers to temporarily store the calculated data block hashes */
+ u8 hash1[FS_VERITY_MAX_DIGEST_SIZE];
+ u8 hash2[FS_VERITY_MAX_DIGEST_SIZE];
+};
+
+static inline void
+fsverity_init_verification_context(struct fsverity_verification_context *ctx,
+ struct inode *inode,
+ unsigned long max_ra_pages)
+{
+ ctx->inode = inode;
+ ctx->vi = inode->i_verity_info;
+ ctx->max_ra_pages = max_ra_pages;
+ ctx->pending_data = NULL;
+}
+
+static bool
+fsverity_finish_verification(struct fsverity_verification_context *ctx)
+{
+ int err;
+
+ if (ctx->pending_data == NULL)
+ return true;
+ /*
+ * Multibuffer hashing is enabled but there was an odd number of data
+ * blocks. Hash and verify the last block by itself.
+ */
+ err = fsverity_hash_block(&ctx->vi->tree_params, ctx->inode,
+ ctx->pending_data, ctx->hash1);
+ kunmap_local(ctx->pending_data);
+ ctx->pending_data = NULL;
+ return err == 0 &&
+ verify_data_block(ctx->inode, ctx->vi, ctx->hash1,
+ ctx->pending_pos, ctx->max_ra_pages);
+}
+
+static inline void
+fsverity_abort_verification(struct fsverity_verification_context *ctx)
+{
+ if (ctx->pending_data) {
+ kunmap_local(ctx->pending_data);
+ ctx->pending_data = NULL;
+ }
+}
+
static bool
-verify_data_blocks(struct folio *data_folio, size_t len, size_t offset,
- unsigned long max_ra_pages)
+fsverity_add_data_blocks(struct fsverity_verification_context *ctx,
+ struct folio *data_folio, size_t len, size_t offset)
{
- struct inode *inode = data_folio->mapping->host;
- struct fsverity_info *vi = inode->i_verity_info;
- const unsigned int block_size = vi->tree_params.block_size;
- u64 pos = (u64)data_folio->index << PAGE_SHIFT;
+ struct inode *inode = ctx->inode;
+ struct fsverity_info *vi = ctx->vi;
+ const struct merkle_tree_params *params = &vi->tree_params;
+ const unsigned int block_size = params->block_size;
+ const bool use_finup2x = params->hash_alg->supports_finup2x;
+ u64 pos = ((u64)data_folio->index << PAGE_SHIFT) + offset;

if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offset, block_size)))
return false;
if (WARN_ON_ONCE(!folio_test_locked(data_folio) ||
folio_test_uptodate(data_folio)))
return false;
do {
- void *data;
- bool valid;
-
- data = kmap_local_folio(data_folio, offset);
- valid = verify_data_block(inode, vi, data, pos + offset,
- max_ra_pages);
- kunmap_local(data);
- if (!valid)
- return false;
+ const void *data = kmap_local_folio(data_folio, offset);
+ int err;
+
+ if (use_finup2x) {
+ if (ctx->pending_data) {
+ /* Hash and verify two data blocks. */
+ err = fsverity_hash_2_blocks(params,
+ inode,
+ ctx->pending_data,
+ data,
+ ctx->hash1,
+ ctx->hash2);
+ kunmap_local(data);
+ kunmap_local(ctx->pending_data);
+ ctx->pending_data = NULL;
+ if (err != 0 ||
+ !verify_data_block(inode, vi, ctx->hash1,
+ ctx->pending_pos,
+ ctx->max_ra_pages) ||
+ !verify_data_block(inode, vi, ctx->hash2,
+ pos, ctx->max_ra_pages))
+ return false;
+ } else {
+ /* Wait and see if there's another block. */
+ ctx->pending_data = data;
+ ctx->pending_pos = pos;
+ }
+ } else {
+ /* Hash and verify one data block. */
+ err = fsverity_hash_block(params, inode, data,
+ ctx->hash1);
+ kunmap_local(data);
+ if (err != 0 ||
+ !verify_data_block(inode, vi, ctx->hash1,
+ pos, ctx->max_ra_pages))
+ return false;
+ }
+ pos += block_size;
offset += block_size;
len -= block_size;
} while (len);
return true;
}
@@ -284,11 +380,19 @@ verify_data_blocks(struct folio *data_folio, size_t len, size_t offset,
*
* Return: %true if the data is valid, else %false.
*/
bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset)
{
- return verify_data_blocks(folio, len, offset, 0);
+ struct fsverity_verification_context ctx;
+
+ fsverity_init_verification_context(&ctx, folio->mapping->host, 0);
+
+ if (!fsverity_add_data_blocks(&ctx, folio, len, offset)) {
+ fsverity_abort_verification(&ctx);
+ return false;
+ }
+ return fsverity_finish_verification(&ctx);
}
EXPORT_SYMBOL_GPL(fsverity_verify_blocks);

#ifdef CONFIG_BLOCK
/**
@@ -305,10 +409,12 @@ EXPORT_SYMBOL_GPL(fsverity_verify_blocks);
* filesystems) must instead call fsverity_verify_page() directly on each page.
* All filesystems must also call fsverity_verify_page() on holes.
*/
void fsverity_verify_bio(struct bio *bio)
{
+ struct inode *inode = bio_first_folio_all(bio)->mapping->host;
+ struct fsverity_verification_context ctx;
struct folio_iter fi;
unsigned long max_ra_pages = 0;

if (bio->bi_opf & REQ_RAHEAD) {
/*
@@ -321,17 +427,26 @@ void fsverity_verify_bio(struct bio *bio)
* reduces the number of I/O requests made to the Merkle tree.
*/
max_ra_pages = bio->bi_iter.bi_size >> (PAGE_SHIFT + 2);
}

+ fsverity_init_verification_context(&ctx, inode, max_ra_pages);
+
bio_for_each_folio_all(fi, bio) {
- if (!verify_data_blocks(fi.folio, fi.length, fi.offset,
- max_ra_pages)) {
- bio->bi_status = BLK_STS_IOERR;
- break;
+ if (!fsverity_add_data_blocks(&ctx, fi.folio, fi.length,
+ fi.offset)) {
+ fsverity_abort_verification(&ctx);
+ goto ioerr;
}
}
+
+ if (!fsverity_finish_verification(&ctx))
+ goto ioerr;
+ return;
+
+ioerr:
+ bio->bi_status = BLK_STS_IOERR;
}
EXPORT_SYMBOL_GPL(fsverity_verify_bio);
#endif /* CONFIG_BLOCK */

/**
--
2.44.0


2024-04-22 20:36:30

by Eric Biggers

[permalink] [raw]
Subject: [PATCH v2 7/8] dm-verity: hash blocks with shash import+finup when possible

From: Eric Biggers <[email protected]>

Currently dm-verity computes the hash of each block by using multiple
calls to the "ahash" crypto API. While the exact sequence depends on
the chosen dm-verity settings, in the vast majority of cases it is:

1. crypto_ahash_init()
2. crypto_ahash_update() [salt]
3. crypto_ahash_update() [data]
4. crypto_ahash_final()

This is inefficient for two main reasons:

- It makes multiple indirect calls, which is expensive on modern CPUs
especially when mitigations for CPU vulnerabilities are enabled.

Since the salt is the same across all blocks on a given dm-verity
device, a much more efficient sequence would be to do an import of the
pre-salted state, then a finup.

- It uses the ahash (asynchronous hash) API, despite the fact that
CPU-based hashing is almost always used in practice, and therefore it
experiences the overhead of the ahash-based wrapper for shash. This
also means that the new function crypto_shash_finup2x(), which is
specifically designed for fast CPU-based hashing, is unavailable.

Since dm-verity was intentionally converted to ahash to support
off-CPU crypto accelerators, wholesale conversion to shash (reverting
that change) might not be acceptable. Yet, we should still provide a
fast path for shash with the most common dm-verity settings.

Therefore, this patch adds a new shash import+finup based fast path to
dm-verity. It is used automatically when appropriate, i.e. when the
ahash and shash APIs resolve to the same underlying algorithm, the
dm-verity version is not 0 (so that the salt is hashed before the data),
and the data block size is not greater than the page size.

This makes dm-verity optimized for what the vast majority of users want:
CPU-based hashing with the most common settings, while still retaining
support for rarer settings and off-CPU crypto accelerators.
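
For reference, here is a minimal sketch of the import+finup sequence
(the function names below are illustrative, not the ones used by the
patch): the salted state is computed and exported once at dm-verity
table setup time, and each block is then hashed with a single
import+finup pair instead of init+update(salt)+update(data)+final.

#include <crypto/hash.h>

/* Done once at table setup time (sketch). */
static int example_export_salted_state(struct crypto_shash *tfm,
				       const u8 *salt, unsigned int salt_size,
				       u8 *salted_state)
{
	SHASH_DESC_ON_STACK(desc, tfm);

	desc->tfm = tfm;
	/* salted_state must be crypto_shash_statesize(tfm) bytes. */
	return crypto_shash_init(desc) ?:
	       crypto_shash_update(desc, salt, salt_size) ?:
	       crypto_shash_export(desc, salted_state);
}

/* Done for each data or hash block (sketch). */
static int example_hash_block(struct crypto_shash *tfm,
			      const u8 *salted_state,
			      const u8 *data, unsigned int block_size,
			      u8 *digest)
{
	SHASH_DESC_ON_STACK(desc, tfm);

	desc->tfm = tfm;
	return crypto_shash_import(desc, salted_state) ?:
	       crypto_shash_finup(desc, data, block_size, digest);
}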

In benchmarks with veritysetup's default parameters (SHA-256, 4K data
and hash block sizes, 32-byte salt), which also match the parameters
that Android currently uses, this patch improves block hashing
performance by about 15% on an x86_64 system that supports the SHA-NI
instructions, or by about 5% on an arm64 system that supports the ARMv8
SHA2 instructions. This was with CONFIG_CRYPTO_STATS disabled; an even
larger improvement can be expected if that option is enabled.

Note that another benefit of using "import" to handle the salt is that
if the salt size is equal to the input size of the hash algorithm's
compression function, e.g. 64 bytes for SHA-256, then the performance is
exactly the same as no salt. (This doesn't seem to be much better than
veritysetup's current default of 32-byte salts, due to the way SHA-256's
finalization padding works, but it should be marginally better.)

In addition to the benchmarks mentioned above, I've tested this patch
with cryptsetup's 'verity-compat-test' script. I've also lightly tested
this patch with Android, where the new shash-based code gets used.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/md/dm-verity-fec.c | 13 +-
drivers/md/dm-verity-target.c | 336 ++++++++++++++++++++++++----------
drivers/md/dm-verity.h | 27 ++-
3 files changed, 263 insertions(+), 113 deletions(-)

diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index e46aee6f932e..b436b8e4d750 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -184,13 +184,14 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
* Locate data block erasures using verity hashes.
*/
static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io,
u8 *want_digest, u8 *data)
{
- if (unlikely(verity_hash(v, verity_io_hash_req(v, io),
- data, 1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io), true)))
+ if (unlikely(verity_compute_hash_virt(v, io, data,
+ 1 << v->data_dev_block_bits,
+ verity_io_real_digest(v, io),
+ true)))
return 0;

return memcmp(verity_io_real_digest(v, io), want_digest,
v->digest_size) != 0;
}
@@ -386,13 +387,13 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io,

pos += fio->nbufs << DM_VERITY_FEC_BUF_RS_BITS;
}

/* Always re-validate the corrected block against the expected hash */
- r = verity_hash(v, verity_io_hash_req(v, io), fio->output,
- 1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io), true);
+ r = verity_compute_hash_virt(v, io, fio->output,
+ 1 << v->data_dev_block_bits,
+ verity_io_real_digest(v, io), true);
if (unlikely(r < 0))
return r;

if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io),
v->digest_size)) {
diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index bb5da66da4c1..2dd15f5e91b7 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -44,12 +44,16 @@

static unsigned int dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;

module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, 0644);

+/* Is at least one dm-verity instance using the bh workqueue? */
static DEFINE_STATIC_KEY_FALSE(use_bh_wq_enabled);

+/* Is at least one dm-verity instance using ahash_tfm instead of shash_tfm? */
+static DEFINE_STATIC_KEY_FALSE(ahash_enabled);
+
struct dm_verity_prefetch_work {
struct work_struct work;
struct dm_verity *v;
unsigned short ioprio;
sector_t block;
@@ -100,13 +104,13 @@ static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
int level)
{
return block >> (level * v->hash_per_block_bits);
}

-static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
- const u8 *data, size_t len,
- struct crypto_wait *wait)
+static int verity_ahash_update(struct dm_verity *v, struct ahash_request *req,
+ const u8 *data, size_t len,
+ struct crypto_wait *wait)
{
struct scatterlist sg;

if (likely(!is_vmalloc_addr(data))) {
sg_init_one(&sg, data, len);
@@ -133,16 +137,16 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
}

/*
* Wrapper for crypto_ahash_init, which handles verity salting.
*/
-static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
+static int verity_ahash_init(struct dm_verity *v, struct ahash_request *req,
struct crypto_wait *wait, bool may_sleep)
{
int r;

- ahash_request_set_tfm(req, v->tfm);
+ ahash_request_set_tfm(req, v->ahash_tfm);
ahash_request_set_callback(req,
may_sleep ? CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG : 0,
crypto_req_done, (void *)wait);
crypto_init_wait(wait);

@@ -153,22 +157,22 @@ static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
DMERR("crypto_ahash_init failed: %d", r);
return r;
}

if (likely(v->salt_size && (v->version >= 1)))
- r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
+ r = verity_ahash_update(v, req, v->salt, v->salt_size, wait);

return r;
}

-static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
- u8 *digest, struct crypto_wait *wait)
+static int verity_ahash_final(struct dm_verity *v, struct ahash_request *req,
+ u8 *digest, struct crypto_wait *wait)
{
int r;

if (unlikely(v->salt_size && (!v->version))) {
- r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
+ r = verity_ahash_update(v, req, v->salt, v->salt_size, wait);

if (r < 0) {
DMERR("%s failed updating salt: %d", __func__, r);
goto out;
}
@@ -178,27 +182,47 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
r = crypto_wait_req(crypto_ahash_final(req), wait);
out:
return r;
}

-int verity_hash(struct dm_verity *v, struct ahash_request *req,
- const u8 *data, size_t len, u8 *digest, bool may_sleep)
+int verity_compute_hash_virt(struct dm_verity *v, struct dm_verity_io *io,
+ const u8 *data, size_t len, u8 *digest,
+ bool may_sleep)
{
int r;
- struct crypto_wait wait;

- r = verity_hash_init(v, req, &wait, may_sleep);
- if (unlikely(r < 0))
- goto out;
+ if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
+ struct ahash_request *req = verity_io_hash_req(v, io);
+ struct crypto_wait wait;

- r = verity_hash_update(v, req, data, len, &wait);
- if (unlikely(r < 0))
- goto out;
+ r = verity_ahash_init(v, req, &wait, may_sleep);
+ if (unlikely(r))
+ goto error;

- r = verity_hash_final(v, req, digest, &wait);
+ r = verity_ahash_update(v, req, data, len, &wait);
+ if (unlikely(r))
+ goto error;

-out:
+ r = verity_ahash_final(v, req, digest, &wait);
+ if (unlikely(r))
+ goto error;
+ } else {
+ struct shash_desc *desc = verity_io_hash_req(v, io);
+
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_import(desc, v->initial_hashstate);
+ if (unlikely(r))
+ goto error;
+
+ r = crypto_shash_finup(desc, data, len, digest);
+ if (unlikely(r))
+ goto error;
+ }
+ return 0;
+
+error:
+ DMERR("Error hashing block from virt buffer: %d", r);
return r;
}

static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
sector_t *hash_block, unsigned int *offset)
@@ -323,13 +347,14 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
if (skip_unverified) {
r = 1;
goto release_ret_r;
}

- r = verity_hash(v, verity_io_hash_req(v, io),
- data, 1 << v->hash_dev_block_bits,
- verity_io_real_digest(v, io), !io->in_bh);
+ r = verity_compute_hash_virt(v, io, data,
+ 1 << v->hash_dev_block_bits,
+ verity_io_real_digest(v, io),
+ !io->in_bh);
if (unlikely(r < 0))
goto release_ret_r;

if (likely(memcmp(verity_io_real_digest(v, io), want_digest,
v->digest_size) == 0))
@@ -403,14 +428,17 @@ int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io,

return r;
}

/*
- * Calculates the digest for the given bio
+ * Update the ahash_request of @io with the next data block from @iter, and
+ * advance @iter accordingly.
*/
-static int verity_for_io_block(struct dm_verity *v, struct dm_verity_io *io,
- struct bvec_iter *iter, struct crypto_wait *wait)
+static int verity_ahash_update_block(struct dm_verity *v,
+ struct dm_verity_io *io,
+ struct bvec_iter *iter,
+ struct crypto_wait *wait)
{
unsigned int todo = 1 << v->data_dev_block_bits;
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
struct scatterlist sg;
struct ahash_request *req = verity_io_hash_req(v, io);
@@ -445,10 +473,71 @@ static int verity_for_io_block(struct dm_verity *v, struct dm_verity_io *io,
} while (todo);

return 0;
}

+static int verity_compute_hash(struct dm_verity *v, struct dm_verity_io *io,
+ struct bvec_iter *iter, u8 *digest,
+ bool may_sleep)
+{
+ int r;
+
+ if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
+ struct ahash_request *req = verity_io_hash_req(v, io);
+ struct crypto_wait wait;
+
+ r = verity_ahash_init(v, req, &wait, may_sleep);
+ if (unlikely(r))
+ goto error;
+
+ r = verity_ahash_update_block(v, io, iter, &wait);
+ if (unlikely(r))
+ goto error;
+
+ r = verity_ahash_final(v, req, digest, &wait);
+ if (unlikely(r))
+ goto error;
+ } else {
+ struct shash_desc *desc = verity_io_hash_req(v, io);
+ struct bio *bio =
+ dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+ struct bio_vec bv = bio_iter_iovec(bio, *iter);
+ const unsigned int len = 1 << v->data_dev_block_bits;
+ const void *virt;
+
+ if (unlikely(len > bv.bv_len)) {
+ /*
+ * Data block spans pages. This should not happen,
+ * since this code path is not used if the data block
+ * size is greater than the page size, and all I/O
+ * should be data block aligned because dm-verity sets
+ * logical_block_size to the data block size.
+ */
+ DMERR_LIMIT("unaligned io (data block spans pages)");
+ return -EIO;
+ }
+
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_import(desc, v->initial_hashstate);
+ if (unlikely(r))
+ goto error;
+
+ virt = bvec_kmap_local(&bv);
+ r = crypto_shash_finup(desc, virt, len, digest);
+ kunmap_local(virt);
+ if (unlikely(r))
+ goto error;
+
+ bio_advance_iter(bio, iter, len);
+ }
+ return 0;
+
+error:
+ DMERR("Error hashing block from bio iter: %d", r);
+ return r;
+}
+
/*
* Calls function process for 1 << v->data_dev_block_bits bytes in the bio_vec
* starting from iter.
*/
int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
@@ -516,13 +605,12 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT);
r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
if (unlikely(r))
goto free_ret;

- r = verity_hash(v, verity_io_hash_req(v, io), buffer,
- 1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io), true);
+ r = verity_compute_hash_virt(v, io, buffer, 1 << v->data_dev_block_bits,
+ verity_io_real_digest(v, io), true);
if (unlikely(r))
goto free_ret;

if (memcmp(verity_io_real_digest(v, io),
verity_io_want_digest(v, io), v->digest_size)) {
@@ -569,11 +657,10 @@ static int verity_verify_io(struct dm_verity_io *io)
bool is_zero;
struct dm_verity *v = io->v;
struct bvec_iter start;
struct bvec_iter iter_copy;
struct bvec_iter *iter;
- struct crypto_wait wait;
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
unsigned int b;

if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
/*
@@ -586,11 +673,10 @@ static int verity_verify_io(struct dm_verity_io *io)
iter = &io->iter;

for (b = 0; b < io->n_blocks; b++) {
int r;
sector_t cur_block = io->block + b;
- struct ahash_request *req = verity_io_hash_req(v, io);

if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
likely(test_bit(cur_block, v->validated_blocks))) {
verity_bv_skip_block(v, io, iter);
continue;
@@ -613,21 +699,14 @@ static int verity_verify_io(struct dm_verity_io *io)
return r;

continue;
}

- r = verity_hash_init(v, req, &wait, !io->in_bh);
- if (unlikely(r < 0))
- return r;
-
start = *iter;
- r = verity_for_io_block(v, io, iter, &wait);
- if (unlikely(r < 0))
- return r;
-
- r = verity_hash_final(v, req, verity_io_real_digest(v, io),
- &wait);
+ r = verity_compute_hash(v, io, iter,
+ verity_io_real_digest(v, io),
+ !io->in_bh);
if (unlikely(r < 0))
return r;

if (likely(memcmp(verity_io_real_digest(v, io),
verity_io_want_digest(v, io), v->digest_size) == 0)) {
@@ -1031,15 +1110,20 @@ static void verity_dtr(struct dm_target *ti)
if (v->bufio)
dm_bufio_client_destroy(v->bufio);

kvfree(v->validated_blocks);
kfree(v->salt);
+ kfree(v->initial_hashstate);
kfree(v->root_digest);
kfree(v->zero_digest);

- if (v->tfm)
- crypto_free_ahash(v->tfm);
+ if (v->ahash_tfm) {
+ static_branch_dec(&ahash_enabled);
+ crypto_free_ahash(v->ahash_tfm);
+ } else {
+ crypto_free_shash(v->shash_tfm);
+ }

kfree(v->alg_name);

if (v->hash_dev)
dm_put_device(ti, v->hash_dev);
@@ -1081,33 +1165,33 @@ static int verity_alloc_most_once(struct dm_verity *v)
}

static int verity_alloc_zero_digest(struct dm_verity *v)
{
int r = -ENOMEM;
- struct ahash_request *req;
+ struct dm_verity_io *io;
u8 *zero_data;

v->zero_digest = kmalloc(v->digest_size, GFP_KERNEL);

if (!v->zero_digest)
return r;

- req = kmalloc(v->ahash_reqsize, GFP_KERNEL);
+ io = kmalloc(sizeof(*io) + v->hash_reqsize, GFP_KERNEL);

- if (!req)
+ if (!io)
return r; /* verity_dtr will free zero_digest */

zero_data = kzalloc(1 << v->data_dev_block_bits, GFP_KERNEL);

if (!zero_data)
goto out;

- r = verity_hash(v, req, zero_data, 1 << v->data_dev_block_bits,
- v->zero_digest, true);
-
+ r = verity_compute_hash_virt(v, io, zero_data,
+ 1 << v->data_dev_block_bits,
+ v->zero_digest, true);
out:
- kfree(req);
+ kfree(io);
kfree(zero_data);

return r;
}

@@ -1224,10 +1308,109 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
} while (argc && !r);

return r;
}

+static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name)
+{
+ struct dm_target *ti = v->ti;
+ struct crypto_ahash *ahash;
+ struct crypto_shash *shash = NULL;
+ const char *driver_name;
+
+ v->alg_name = kstrdup(alg_name, GFP_KERNEL);
+ if (!v->alg_name) {
+ ti->error = "Cannot allocate algorithm name";
+ return -ENOMEM;
+ }
+
+ ahash = crypto_alloc_ahash(alg_name, 0,
+ v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0);
+ if (IS_ERR(ahash)) {
+ ti->error = "Cannot initialize hash function";
+ return PTR_ERR(ahash);
+ }
+ driver_name = crypto_ahash_driver_name(ahash);
+ if (v->version >= 1 /* salt prepended, not appended? */ &&
+ 1 << v->data_dev_block_bits <= PAGE_SIZE) {
+ shash = crypto_alloc_shash(alg_name, 0, 0);
+ if (!IS_ERR(shash) &&
+ strcmp(crypto_shash_driver_name(shash), driver_name) != 0) {
+ /*
+ * ahash gave a different driver than shash, so probably
+ * this is a case of real hardware offload. Use ahash.
+ */
+ crypto_free_shash(shash);
+ shash = NULL;
+ }
+ }
+ if (!IS_ERR_OR_NULL(shash)) {
+ crypto_free_ahash(ahash);
+ ahash = NULL;
+ v->shash_tfm = shash;
+ v->digest_size = crypto_shash_digestsize(shash);
+ v->hash_reqsize = sizeof(struct shash_desc) +
+ crypto_shash_descsize(shash);
+ DMINFO("%s using shash \"%s\"", alg_name, driver_name);
+ } else {
+ v->ahash_tfm = ahash;
+ static_branch_inc(&ahash_enabled);
+ v->digest_size = crypto_ahash_digestsize(ahash);
+ v->hash_reqsize = sizeof(struct ahash_request) +
+ crypto_ahash_reqsize(ahash);
+ DMINFO("%s using ahash \"%s\"", alg_name, driver_name);
+ }
+ if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
+ ti->error = "Digest size too big";
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int verity_setup_salt_and_hashstate(struct dm_verity *v, const char *arg)
+{
+ struct dm_target *ti = v->ti;
+
+ if (strcmp(arg, "-") != 0) {
+ v->salt_size = strlen(arg) / 2;
+ v->salt = kmalloc(v->salt_size, GFP_KERNEL);
+ if (!v->salt) {
+ ti->error = "Cannot allocate salt";
+ return -ENOMEM;
+ }
+ if (strlen(arg) != v->salt_size * 2 ||
+ hex2bin(v->salt, arg, v->salt_size)) {
+ ti->error = "Invalid salt";
+ return -EINVAL;
+ }
+ }
+ /*
+ * If the "shash with import+finup sequence" method has been selected
+ * (see verity_setup_hash_alg()), then create the initial hash state.
+ */
+ if (v->shash_tfm) {
+ SHASH_DESC_ON_STACK(desc, v->shash_tfm);
+ int r;
+
+ v->initial_hashstate = kmalloc(
+ crypto_shash_statesize(v->shash_tfm), GFP_KERNEL);
+ if (!v->initial_hashstate) {
+ ti->error = "Cannot allocate initial hash state";
+ return -ENOMEM;
+ }
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_init(desc) ?:
+ crypto_shash_update(desc, v->salt, v->salt_size) ?:
+ crypto_shash_export(desc, v->initial_hashstate);
+ if (r) {
+ ti->error = "Cannot set up initial hash state";
+ return r;
+ }
+ }
+ return 0;
+}
+
/*
* Target parameters:
* <version> The current format is version 1.
* Vsn 0 is compatible with original Chromium OS releases.
* <data device>
@@ -1348,42 +1531,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
r = -EINVAL;
goto bad;
}
v->hash_start = num_ll;

- v->alg_name = kstrdup(argv[7], GFP_KERNEL);
- if (!v->alg_name) {
- ti->error = "Cannot allocate algorithm name";
- r = -ENOMEM;
- goto bad;
- }
-
- v->tfm = crypto_alloc_ahash(v->alg_name, 0,
- v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0);
- if (IS_ERR(v->tfm)) {
- ti->error = "Cannot initialize hash function";
- r = PTR_ERR(v->tfm);
- v->tfm = NULL;
- goto bad;
- }
-
- /*
- * dm-verity performance can vary greatly depending on which hash
- * algorithm implementation is used. Help people debug performance
- * problems by logging the ->cra_driver_name.
- */
- DMINFO("%s using implementation \"%s\"", v->alg_name,
- crypto_hash_alg_common(v->tfm)->base.cra_driver_name);
-
- v->digest_size = crypto_ahash_digestsize(v->tfm);
- if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
- ti->error = "Digest size too big";
- r = -EINVAL;
+ r = verity_setup_hash_alg(v, argv[7]);
+ if (r)
goto bad;
- }
- v->ahash_reqsize = sizeof(struct ahash_request) +
- crypto_ahash_reqsize(v->tfm);

v->root_digest = kmalloc(v->digest_size, GFP_KERNEL);
if (!v->root_digest) {
ti->error = "Cannot allocate root digest";
r = -ENOMEM;
@@ -1395,25 +1549,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
r = -EINVAL;
goto bad;
}
root_hash_digest_to_validate = argv[8];

- if (strcmp(argv[9], "-")) {
- v->salt_size = strlen(argv[9]) / 2;
- v->salt = kmalloc(v->salt_size, GFP_KERNEL);
- if (!v->salt) {
- ti->error = "Cannot allocate salt";
- r = -ENOMEM;
- goto bad;
- }
- if (strlen(argv[9]) != v->salt_size * 2 ||
- hex2bin(v->salt, argv[9], v->salt_size)) {
- ti->error = "Invalid salt";
- r = -EINVAL;
- goto bad;
- }
- }
+ r = verity_setup_salt_and_hashstate(v, argv[9]);
+ if (r)
+ goto bad;

argv += 10;
argc -= 10;

/* Optional parameters */
@@ -1512,11 +1654,11 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv)
r = -ENOMEM;
goto bad;
}

ti->per_io_data_size = sizeof(struct dm_verity_io) +
- v->ahash_reqsize + v->digest_size * 2;
+ v->hash_reqsize + v->digest_size * 2;

r = verity_fec_ctr(v);
if (r)
goto bad;

diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
index 20b1bcf03474..15ffb0881cc9 100644
--- a/drivers/md/dm-verity.h
+++ b/drivers/md/dm-verity.h
@@ -37,13 +37,15 @@ struct dm_verity {
struct dm_dev *data_dev;
struct dm_dev *hash_dev;
struct dm_target *ti;
struct dm_bufio_client *bufio;
char *alg_name;
- struct crypto_ahash *tfm;
+ struct crypto_ahash *ahash_tfm; /* either this or shash_tfm is set */
+ struct crypto_shash *shash_tfm; /* either this or ahash_tfm is set */
u8 *root_digest; /* digest of the root block */
u8 *salt; /* salt: its size is salt_size */
+ u8 *initial_hashstate; /* salted initial state, if shash_tfm is set */
u8 *zero_digest; /* digest for a zero block */
unsigned int salt_size;
sector_t data_start; /* data offset in 512-byte sectors */
sector_t hash_start; /* hash start in blocks */
sector_t data_blocks; /* the number of data blocks */
@@ -54,11 +56,11 @@ struct dm_verity {
unsigned char levels; /* the number of tree levels */
unsigned char version;
bool hash_failed:1; /* set if hash of any block failed */
bool use_bh_wq:1; /* try to verify in BH wq before normal work-queue */
unsigned int digest_size; /* digest size for the current hash algorithm */
- unsigned int ahash_reqsize;/* the size of temporary space for crypto */
+ unsigned int hash_reqsize; /* the size of temporary space for crypto */
enum verity_mode mode; /* mode for handling verification errors */
unsigned int corrupted_errs;/* Number of errors for corrupted blocks */

struct workqueue_struct *verify_wq;

@@ -92,45 +94,50 @@ struct dm_verity_io {
char *recheck_buffer;

/*
* Three variably-size fields follow this struct:
*
- * u8 hash_req[v->ahash_reqsize];
+ * u8 hash_req[v->hash_reqsize];
* u8 real_digest[v->digest_size];
* u8 want_digest[v->digest_size];
*
* To access them use: verity_io_hash_req(), verity_io_real_digest()
* and verity_io_want_digest().
+ *
+ * hash_req is either a struct ahash_request or a struct shash_desc,
+ * depending on whether ahash_tfm or shash_tfm is being used.
*/
};

-static inline struct ahash_request *verity_io_hash_req(struct dm_verity *v,
- struct dm_verity_io *io)
+static inline void *verity_io_hash_req(struct dm_verity *v,
+ struct dm_verity_io *io)
{
- return (struct ahash_request *)(io + 1);
+ return io + 1;
}

static inline u8 *verity_io_real_digest(struct dm_verity *v,
struct dm_verity_io *io)
{
- return (u8 *)(io + 1) + v->ahash_reqsize;
+ return (u8 *)(io + 1) + v->hash_reqsize;
}

static inline u8 *verity_io_want_digest(struct dm_verity *v,
struct dm_verity_io *io)
{
- return (u8 *)(io + 1) + v->ahash_reqsize + v->digest_size;
+ return (u8 *)(io + 1) + v->hash_reqsize + v->digest_size;
}

extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
struct bvec_iter *iter,
int (*process)(struct dm_verity *v,
struct dm_verity_io *io,
u8 *data, size_t len));

-extern int verity_hash(struct dm_verity *v, struct ahash_request *req,
- const u8 *data, size_t len, u8 *digest, bool may_sleep);
+extern int verity_compute_hash_virt(struct dm_verity *v,
+ struct dm_verity_io *io,
+ const u8 *data, size_t len, u8 *digest,
+ bool may_sleep);

extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io,
sector_t block, u8 *digest, bool *is_zero);

extern bool dm_is_verity_target(struct dm_target *ti);
--
2.44.0


2024-04-22 20:36:35

by Eric Biggers

[permalink] [raw]
Subject: [PATCH v2 8/8] dm-verity: improve performance by using multibuffer hashing

From: Eric Biggers <[email protected]>

When supported by the hash algorithm, use crypto_shash_finup2x() to
interleave the hashing of pairs of data blocks. On some CPUs this
nearly doubles hashing performance. The increase in overall throughput
of cold-cache dm-verity reads that I'm seeing on arm64 and x86_64 is
roughly 35% (though this metric is hard to measure as it jumps around a
lot).
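
The per-bio pairing logic can be sketched roughly as follows (the names
and error handling are simplified and illustrative; they are not the
ones in the patch): one block is buffered until a second one arrives,
both are then hashed from the same imported pre-salted state with
crypto_shash_finup2x(), and an odd trailing block (or every block, if
the algorithm lacks finup2x support) falls back to crypto_shash_finup().

#include <crypto/hash.h>

static int example_hash_block_pairs(struct crypto_shash *tfm,
				    const u8 *salted_state,
				    const u8 *blocks[], int nr_blocks,
				    unsigned int block_size,
				    unsigned int digest_size, u8 *digests)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	int i = 0, err;

	desc->tfm = tfm;
	if (crypto_shash_supports_finup2x(tfm)) {
		for (; i + 1 < nr_blocks; i += 2) {
			/* Hash two equal-length blocks, interleaved. */
			err = crypto_shash_import(desc, salted_state) ?:
			      crypto_shash_finup2x(desc, blocks[i],
						   blocks[i + 1], block_size,
						   &digests[i * digest_size],
						   &digests[(i + 1) * digest_size]);
			if (err)
				return err;
		}
	}
	for (; i < nr_blocks; i++) {
		/* Leftover block, or no multibuffer support: hash singly. */
		err = crypto_shash_import(desc, salted_state) ?:
		      crypto_shash_finup(desc, blocks[i], block_size,
					 &digests[i * digest_size]);
		if (err)
			return err;
	}
	return 0;
}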

For now this is only done on data blocks, not Merkle tree blocks. We
could use finup2x on Merkle tree blocks too, but that is less important
as there aren't as many Merkle tree blocks as data blocks, and that
would require some additional code restructuring.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/md/dm-verity-fec.c | 24 +--
drivers/md/dm-verity-fec.h | 7 +-
drivers/md/dm-verity-target.c | 360 ++++++++++++++++++++++------------
drivers/md/dm-verity.h | 28 +--
4 files changed, 261 insertions(+), 158 deletions(-)

diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index b436b8e4d750..c1677137a682 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -184,18 +184,18 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
* Locate data block erasures using verity hashes.
*/
static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io,
u8 *want_digest, u8 *data)
{
+ u8 real_digest[HASH_MAX_DIGESTSIZE];
+
if (unlikely(verity_compute_hash_virt(v, io, data,
1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io),
- true)))
+ real_digest, true)))
return 0;

- return memcmp(verity_io_real_digest(v, io), want_digest,
- v->digest_size) != 0;
+ return memcmp(real_digest, want_digest, v->digest_size) != 0;
}

/*
* Read data blocks that are part of the RS block and deinterleave as much as
* fits into buffers. Check for erasure locations if @neras is non-NULL.
@@ -362,14 +362,15 @@ static void fec_init_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio)
* (indicated by @offset) in fio->output. If @use_erasures is non-zero, uses
* hashes to locate erasures.
*/
static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io,
struct dm_verity_fec_io *fio, u64 rsb, u64 offset,
- bool use_erasures)
+ const u8 *want_digest, bool use_erasures)
{
int r, neras = 0;
unsigned int pos;
+ u8 real_digest[HASH_MAX_DIGESTSIZE];

r = fec_alloc_bufs(v, fio);
if (unlikely(r < 0))
return r;

@@ -389,16 +390,15 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io,
}

/* Always re-validate the corrected block against the expected hash */
r = verity_compute_hash_virt(v, io, fio->output,
1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io), true);
+ real_digest, true);
if (unlikely(r < 0))
return r;

- if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io),
- v->digest_size)) {
+ if (memcmp(real_digest, want_digest, v->digest_size)) {
DMERR_LIMIT("%s: FEC %llu: failed to correct (%d erasures)",
v->data_dev->name, (unsigned long long)rsb, neras);
return -EILSEQ;
}

@@ -419,12 +419,12 @@ static int fec_bv_copy(struct dm_verity *v, struct dm_verity_io *io, u8 *data,
/*
* Correct errors in a block. Copies corrected block to dest if non-NULL,
* otherwise to a bio_vec starting from iter.
*/
int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
- enum verity_block_type type, sector_t block, u8 *dest,
- struct bvec_iter *iter)
+ enum verity_block_type type, sector_t block,
+ const u8 *want_digest, u8 *dest, struct bvec_iter *iter)
{
int r;
struct dm_verity_fec_io *fio = fec_io(io);
u64 offset, res, rsb;

@@ -463,13 +463,13 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
/*
* Locating erasures is slow, so attempt to recover the block without
* them first. Do a second attempt with erasures if the corruption is
* bad enough.
*/
- r = fec_decode_rsb(v, io, fio, rsb, offset, false);
+ r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, false);
if (r < 0) {
- r = fec_decode_rsb(v, io, fio, rsb, offset, true);
+ r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, true);
if (r < 0)
goto done;
}

if (dest)
diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h
index 8454070d2824..57c3f674cae9 100644
--- a/drivers/md/dm-verity-fec.h
+++ b/drivers/md/dm-verity-fec.h
@@ -68,11 +68,12 @@ struct dm_verity_fec_io {

extern bool verity_fec_is_enabled(struct dm_verity *v);

extern int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
enum verity_block_type type, sector_t block,
- u8 *dest, struct bvec_iter *iter);
+ const u8 *want_digest, u8 *dest,
+ struct bvec_iter *iter);

extern unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz,
char *result, unsigned int maxlen);

extern void verity_fec_finish_io(struct dm_verity_io *io);
@@ -97,12 +98,12 @@ static inline bool verity_fec_is_enabled(struct dm_verity *v)
return false;
}

static inline int verity_fec_decode(struct dm_verity *v,
struct dm_verity_io *io,
- enum verity_block_type type,
- sector_t block, u8 *dest,
+ enum verity_block_type type, sector_t block,
+ const u8 *want_digest, u8 *dest,
struct bvec_iter *iter)
{
return -EOPNOTSUPP;
}

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 2dd15f5e91b7..3dd127c23de2 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -300,16 +300,16 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type,

/*
* Verify hash of a metadata block pertaining to the specified data block
* ("block" argument) at a specified level ("level" argument).
*
- * On successful return, verity_io_want_digest(v, io) contains the hash value
- * for a lower tree level or for the data block (if we're at the lowest level).
+ * On successful return, want_digest contains the hash value for a lower tree
+ * level or for the data block (if we're at the lowest level).
*
* If "skip_unverified" is true, unverified buffer is skipped and 1 is returned.
* If "skip_unverified" is false, unverified buffer is hashed and verified
- * against current value of verity_io_want_digest(v, io).
+ * against current value of want_digest.
*/
static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
sector_t block, int level, bool skip_unverified,
u8 *want_digest)
{
@@ -318,10 +318,11 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
u8 *data;
int r;
sector_t hash_block;
unsigned int offset;
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+ u8 real_digest[HASH_MAX_DIGESTSIZE];

verity_hash_at_level(v, block, level, &hash_block, &offset);

if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
data = dm_bufio_get(v->bufio, hash_block, &buf);
@@ -349,27 +350,26 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
goto release_ret_r;
}

r = verity_compute_hash_virt(v, io, data,
1 << v->hash_dev_block_bits,
- verity_io_real_digest(v, io),
- !io->in_bh);
+ real_digest, !io->in_bh);
if (unlikely(r < 0))
goto release_ret_r;

- if (likely(memcmp(verity_io_real_digest(v, io), want_digest,
- v->digest_size) == 0))
+ if (likely(!memcmp(real_digest, want_digest, v->digest_size)))
aux->hash_verified = 1;
else if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
/*
* Error handling code (FEC included) cannot be run in a
* tasklet since it may sleep, so fallback to work-queue.
*/
r = -EAGAIN;
goto release_ret_r;
} else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_METADATA,
- hash_block, data, NULL) == 0)
+ hash_block, want_digest,
+ data, NULL) == 0)
aux->hash_verified = 1;
else if (verity_handle_err(v,
DM_VERITY_BLOCK_TYPE_METADATA,
hash_block)) {
struct bio *bio =
@@ -473,71 +473,10 @@ static int verity_ahash_update_block(struct dm_verity *v,
} while (todo);

return 0;
}

-static int verity_compute_hash(struct dm_verity *v, struct dm_verity_io *io,
- struct bvec_iter *iter, u8 *digest,
- bool may_sleep)
-{
- int r;
-
- if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
- struct ahash_request *req = verity_io_hash_req(v, io);
- struct crypto_wait wait;
-
- r = verity_ahash_init(v, req, &wait, may_sleep);
- if (unlikely(r))
- goto error;
-
- r = verity_ahash_update_block(v, io, iter, &wait);
- if (unlikely(r))
- goto error;
-
- r = verity_ahash_final(v, req, digest, &wait);
- if (unlikely(r))
- goto error;
- } else {
- struct shash_desc *desc = verity_io_hash_req(v, io);
- struct bio *bio =
- dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
- struct bio_vec bv = bio_iter_iovec(bio, *iter);
- const unsigned int len = 1 << v->data_dev_block_bits;
- const void *virt;
-
- if (unlikely(len > bv.bv_len)) {
- /*
- * Data block spans pages. This should not happen,
- * since this code path is not used if the data block
- * size is greater than the page size, and all I/O
- * should be data block aligned because dm-verity sets
- * logical_block_size to the data block size.
- */
- DMERR_LIMIT("unaligned io (data block spans pages)");
- return -EIO;
- }
-
- desc->tfm = v->shash_tfm;
- r = crypto_shash_import(desc, v->initial_hashstate);
- if (unlikely(r))
- goto error;
-
- virt = bvec_kmap_local(&bv);
- r = crypto_shash_finup(desc, virt, len, digest);
- kunmap_local(virt);
- if (unlikely(r))
- goto error;
-
- bio_advance_iter(bio, iter, len);
- }
- return 0;
-
-error:
- DMERR("Error hashing block from bio iter: %d", r);
- return r;
-}
-
/*
* Calls function process for 1 << v->data_dev_block_bits bytes in the bio_vec
* starting from iter.
*/
int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
@@ -581,41 +520,42 @@ static int verity_recheck_copy(struct dm_verity *v, struct dm_verity_io *io,
io->recheck_buffer += len;

return 0;
}

-static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
- struct bvec_iter start, sector_t cur_block)
+static int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
+ struct bvec_iter start, sector_t blkno,
+ const u8 *want_digest)
{
struct page *page;
void *buffer;
int r;
struct dm_io_request io_req;
struct dm_io_region io_loc;
+ u8 real_digest[HASH_MAX_DIGESTSIZE];

page = mempool_alloc(&v->recheck_pool, GFP_NOIO);
buffer = page_to_virt(page);

io_req.bi_opf = REQ_OP_READ;
io_req.mem.type = DM_IO_KMEM;
io_req.mem.ptr.addr = buffer;
io_req.notify.fn = NULL;
io_req.client = v->io;
io_loc.bdev = v->data_dev->bdev;
- io_loc.sector = cur_block << (v->data_dev_block_bits - SECTOR_SHIFT);
+ io_loc.sector = blkno << (v->data_dev_block_bits - SECTOR_SHIFT);
io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT);
r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
if (unlikely(r))
goto free_ret;

r = verity_compute_hash_virt(v, io, buffer, 1 << v->data_dev_block_bits,
- verity_io_real_digest(v, io), true);
+ real_digest, true);
if (unlikely(r))
goto free_ret;

- if (memcmp(verity_io_real_digest(v, io),
- verity_io_want_digest(v, io), v->digest_size)) {
+ if (memcmp(real_digest, want_digest, v->digest_size)) {
r = -EIO;
goto free_ret;
}

io->recheck_buffer = buffer;
@@ -647,22 +587,84 @@ static inline void verity_bv_skip_block(struct dm_verity *v,
struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);

bio_advance_iter(bio, iter, 1 << v->data_dev_block_bits);
}

+static noinline int
+__verity_handle_data_hash_mismatch(struct dm_verity *v, struct dm_verity_io *io,
+ struct bio *bio, struct bvec_iter *start,
+ sector_t blkno, const u8 *want_digest)
+{
+ if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
+ /*
+ * Error handling code (FEC included) cannot be run in the
+ * BH workqueue, so fallback to a standard workqueue.
+ */
+ return -EAGAIN;
+ }
+ if (verity_recheck(v, io, *start, blkno, want_digest) == 0) {
+ if (v->validated_blocks)
+ set_bit(blkno, v->validated_blocks);
+ return 0;
+ }
+#if defined(CONFIG_DM_VERITY_FEC)
+ if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, blkno,
+ want_digest, NULL, start) == 0)
+ return 0;
+#endif
+ if (bio->bi_status)
+ return -EIO; /* Error correction failed; Just return error */
+
+ if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, blkno)) {
+ dm_audit_log_bio(DM_MSG_PREFIX, "verify-data", bio, blkno, 0);
+ return -EIO;
+ }
+ return 0;
+}
+
+static __always_inline int
+verity_check_data_block_hash(struct dm_verity *v, struct dm_verity_io *io,
+ struct bio *bio, struct bvec_iter *start,
+ sector_t blkno,
+ const u8 *real_digest, const u8 *want_digest)
+{
+ if (likely(memcmp(real_digest, want_digest, v->digest_size) == 0)) {
+ if (v->validated_blocks)
+ set_bit(blkno, v->validated_blocks);
+ return 0;
+ }
+ return __verity_handle_data_hash_mismatch(v, io, bio, start, blkno,
+ want_digest);
+}
+
/*
* Verify one "dm_verity_io" structure.
*/
static int verity_verify_io(struct dm_verity_io *io)
{
- bool is_zero;
struct dm_verity *v = io->v;
+ const unsigned int block_size = 1 << v->data_dev_block_bits;
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+ u8 want_digest[HASH_MAX_DIGESTSIZE];
+ u8 real_digest[HASH_MAX_DIGESTSIZE];
struct bvec_iter start;
struct bvec_iter iter_copy;
struct bvec_iter *iter;
- struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+ /*
+ * The pending_* variables are used when the selected hash algorithm
+ * supports multibuffer hashing. They're used to temporarily store the
+ * virtual address and position of a mapped data block that needs to be
+ * verified. If we then see another data block, we hash the two blocks
+ * simultaneously using the fast multibuffer hashing method.
+ */
+ const void *pending_data = NULL;
+ sector_t pending_blkno;
+ struct bvec_iter pending_start;
+ u8 pending_want_digest[HASH_MAX_DIGESTSIZE];
+ u8 pending_real_digest[HASH_MAX_DIGESTSIZE];
unsigned int b;
+ int r;

if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
/*
* Copy the iterator in case we need to restart
* verification in a work-queue.
@@ -671,82 +673,174 @@ static int verity_verify_io(struct dm_verity_io *io)
iter = &iter_copy;
} else
iter = &io->iter;

for (b = 0; b < io->n_blocks; b++) {
- int r;
- sector_t cur_block = io->block + b;
+ sector_t blkno = io->block + b;
+ bool is_zero;

if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
- likely(test_bit(cur_block, v->validated_blocks))) {
+ likely(test_bit(blkno, v->validated_blocks))) {
verity_bv_skip_block(v, io, iter);
continue;
}

- r = verity_hash_for_block(v, io, cur_block,
- verity_io_want_digest(v, io),
- &is_zero);
+ r = verity_hash_for_block(v, io, blkno, want_digest, &is_zero);
if (unlikely(r < 0))
- return r;
+ goto error;

if (is_zero) {
/*
* If we expect a zero block, don't validate, just
* return zeros.
*/
r = verity_for_bv_block(v, io, iter,
verity_bv_zero);
if (unlikely(r < 0))
- return r;
+ goto error;

continue;
}

start = *iter;
- r = verity_compute_hash(v, io, iter,
- verity_io_real_digest(v, io),
- !io->in_bh);
- if (unlikely(r < 0))
- return r;
-
- if (likely(memcmp(verity_io_real_digest(v, io),
- verity_io_want_digest(v, io), v->digest_size) == 0)) {
- if (v->validated_blocks)
- set_bit(cur_block, v->validated_blocks);
- continue;
- } else if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
- /*
- * Error handling code (FEC included) cannot be run in a
- * tasklet since it may sleep, so fallback to work-queue.
- */
- return -EAGAIN;
- } else if (verity_recheck(v, io, start, cur_block) == 0) {
- if (v->validated_blocks)
- set_bit(cur_block, v->validated_blocks);
- continue;
-#if defined(CONFIG_DM_VERITY_FEC)
- } else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
- cur_block, NULL, &start) == 0) {
- continue;
-#endif
+ if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
+ /* Hash and verify one data block using ahash. */
+ struct ahash_request *req = verity_io_hash_req(v, io);
+ struct crypto_wait wait;
+
+ r = verity_ahash_init(v, req, &wait, !io->in_bh);
+ if (unlikely(r))
+ goto hash_error;
+
+ r = verity_ahash_update_block(v, io, iter, &wait);
+ if (unlikely(r))
+ goto hash_error;
+
+ r = verity_ahash_final(v, req, real_digest, &wait);
+ if (unlikely(r))
+ goto hash_error;
+
+ r = verity_check_data_block_hash(v, io, bio, &start,
+ blkno, real_digest,
+ want_digest);
+ if (unlikely(r))
+ goto error;
} else {
- if (bio->bi_status) {
+ struct shash_desc *desc = verity_io_hash_req(v, io);
+ struct bio_vec bv = bio_iter_iovec(bio, *iter);
+ const void *data;
+
+ if (unlikely(bv.bv_len < block_size)) {
/*
- * Error correction failed; Just return error
+ * Data block spans pages. This should not
+ * happen, since this code path is not used if
+ * the data block size is greater than the page
+ * size, and all I/O should be data block
+ * aligned because dm-verity sets
+ * logical_block_size to the data block size.
*/
- return -EIO;
+ DMERR_LIMIT("unaligned io (data block spans pages)");
+ r = -EIO;
+ goto error;
}
- if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
- cur_block)) {
- dm_audit_log_bio(DM_MSG_PREFIX, "verify-data",
- bio, cur_block, 0);
- return -EIO;
+
+ data = bvec_kmap_local(&bv);
+
+ if (v->use_finup2x) {
+ if (pending_data) {
+ /* Hash and verify two data blocks. */
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_import(desc,
+ v->initial_hashstate) ?:
+ crypto_shash_finup2x(desc,
+ pending_data,
+ data,
+ block_size,
+ pending_real_digest,
+ real_digest);
+ kunmap_local(data);
+ kunmap_local(pending_data);
+ pending_data = NULL;
+ if (unlikely(r))
+ goto hash_error;
+ r = verity_check_data_block_hash(
+ v, io, bio,
+ &pending_start,
+ pending_blkno,
+ pending_real_digest,
+ pending_want_digest);
+ if (unlikely(r))
+ goto error;
+ r = verity_check_data_block_hash(
+ v, io, bio,
+ &start,
+ blkno,
+ real_digest,
+ want_digest);
+ if (unlikely(r))
+ goto error;
+ } else {
+ /* Wait and see if there's another block. */
+ pending_data = data;
+ pending_blkno = blkno;
+ pending_start = start;
+ memcpy(pending_want_digest, want_digest,
+ v->digest_size);
+ }
+ } else {
+ /* Hash and verify one data block. */
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_import(desc,
+ v->initial_hashstate) ?:
+ crypto_shash_finup(desc, data, block_size,
+ real_digest);
+ kunmap_local(data);
+ if (unlikely(r))
+ goto hash_error;
+ r = verity_check_data_block_hash(
+ v, io, bio, &start, blkno,
+ real_digest, want_digest);
+ if (unlikely(r))
+ goto error;
}
+
+ bio_advance_iter(bio, iter, block_size);
}
}

+ if (pending_data) {
+ /*
+ * Multibuffer hashing is enabled but there was an odd number of
+ * data blocks. Hash and verify the last block by itself.
+ */
+ struct shash_desc *desc = verity_io_hash_req(v, io);
+
+ desc->tfm = v->shash_tfm;
+ r = crypto_shash_import(desc, v->initial_hashstate) ?:
+ crypto_shash_finup(desc, pending_data, block_size,
+ pending_real_digest);
+ kunmap_local(pending_data);
+ pending_data = NULL;
+ if (unlikely(r))
+ goto hash_error;
+ r = verity_check_data_block_hash(v, io, bio,
+ &pending_start,
+ pending_blkno,
+ pending_real_digest,
+ pending_want_digest);
+ if (unlikely(r))
+ goto error;
+ }
+
return 0;
+
+hash_error:
+ DMERR("Error hashing block from bio iter: %d", r);
+error:
+ if (pending_data)
+ kunmap_local(pending_data);
+ return r;
}

/*
* Skip verity work in response to I/O error when system is shutting down.
*/
@@ -1321,10 +1415,34 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name)
if (!v->alg_name) {
ti->error = "Cannot allocate algorithm name";
return -ENOMEM;
}

+ /*
+ * Allocate the hash transformation object that this dm-verity instance
+ * will use. We have a choice of two APIs: shash and ahash. Most
+ * dm-verity users use CPU-based hashing, and for this shash is optimal
+ * since it matches the underlying algorithm implementations and also
+ * allows the use of fast multibuffer hashing (crypto_shash_finup2x()).
+ * ahash adds support for off-CPU hash offloading. It also provides
+ * access to shash algorithms, but does so less efficiently.
+ *
+ * Meanwhile, hashing a block in dm-verity in general requires an
+ * init+update+final sequence with multiple updates. However, usually
+ * the salt is prepended to the block rather than appended, and the data
+ * block size is not greater than the page size. In this very common
+ * case, the sequence can be optimized to import+finup, where the first
+ * step imports the pre-computed state after init+update(salt). This
+ * can reduce the crypto API overhead significantly.
+ *
+ * To provide optimal performance for the vast majority of dm-verity
+ * users while still supporting off-CPU hash offloading and the rarer
+ * dm-verity settings, we therefore have two code paths: one using shash
+ * where we use import+finup or import+finup2x, and one using ahash
+ * where we use init+update(s)+final. We use the former code path when
+ * it's possible to use and shash gives the same algorithm as ahash.
+ */
ahash = crypto_alloc_ahash(alg_name, 0,
v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0);
if (IS_ERR(ahash)) {
ti->error = "Cannot initialize hash function";
return PTR_ERR(ahash);
@@ -1345,14 +1463,16 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name)
}
if (!IS_ERR_OR_NULL(shash)) {
crypto_free_ahash(ahash);
ahash = NULL;
v->shash_tfm = shash;
+ v->use_finup2x = crypto_shash_supports_finup2x(shash);
v->digest_size = crypto_shash_digestsize(shash);
v->hash_reqsize = sizeof(struct shash_desc) +
crypto_shash_descsize(shash);
- DMINFO("%s using shash \"%s\"", alg_name, driver_name);
+ DMINFO("%s using shash \"%s\"%s", alg_name, driver_name,
+ v->use_finup2x ? " (multibuffer)" : "");
} else {
v->ahash_tfm = ahash;
static_branch_inc(&ahash_enabled);
v->digest_size = crypto_ahash_digestsize(ahash);
v->hash_reqsize = sizeof(struct ahash_request) +
diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
index 15ffb0881cc9..8040ba1c0a53 100644
--- a/drivers/md/dm-verity.h
+++ b/drivers/md/dm-verity.h
@@ -55,10 +55,11 @@ struct dm_verity {
unsigned char hash_per_block_bits; /* log2(hashes in hash block) */
unsigned char levels; /* the number of tree levels */
unsigned char version;
bool hash_failed:1; /* set if hash of any block failed */
bool use_bh_wq:1; /* try to verify in BH wq before normal work-queue */
+ bool use_finup2x:1; /* use crypto_shash_finup2x() */
unsigned int digest_size; /* digest size for the current hash algorithm */
unsigned int hash_reqsize; /* the size of temporary space for crypto */
enum verity_mode mode; /* mode for handling verification errors */
unsigned int corrupted_errs;/* Number of errors for corrupted blocks */

@@ -92,42 +93,23 @@ struct dm_verity_io {
struct work_struct bh_work;

char *recheck_buffer;

/*
- * Three variably-size fields follow this struct:
- *
- * u8 hash_req[v->hash_reqsize];
- * u8 real_digest[v->digest_size];
- * u8 want_digest[v->digest_size];
- *
- * To access them use: verity_io_hash_req(), verity_io_real_digest()
- * and verity_io_want_digest().
- *
- * hash_req is either a struct ahash_request or a struct shash_desc,
- * depending on whether ahash_tfm or shash_tfm is being used.
+ * This struct is followed by a variable-sized hash request of size
+ * v->hash_reqsize, either a struct ahash_request or a struct shash_desc
+ * (depending on whether ahash_tfm or shash_tfm is being used). To
+ * access it, use verity_io_hash_req().
*/
};

static inline void *verity_io_hash_req(struct dm_verity *v,
struct dm_verity_io *io)
{
return io + 1;
}

-static inline u8 *verity_io_real_digest(struct dm_verity *v,
- struct dm_verity_io *io)
-{
- return (u8 *)(io + 1) + v->hash_reqsize;
-}
-
-static inline u8 *verity_io_want_digest(struct dm_verity *v,
- struct dm_verity_io *io)
-{
- return (u8 *)(io + 1) + v->hash_reqsize + v->digest_size;
-}
-
extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
struct bvec_iter *iter,
int (*process)(struct dm_verity *v,
struct dm_verity_io *io,
u8 *data, size_t len));
--
2.44.0


2024-05-03 15:28:19

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] crypto: shash - add support for finup2x

On Fri, May 03, 2024 at 06:18:32PM +0800, Herbert Xu wrote:
> Eric Biggers <[email protected]> wrote:
> >
> > For now the API only supports 2-way interleaving, as the usefulness and
> > practicality seems to drop off dramatically after 2. The arm64 CPUs I
> > tested don't support more than 2 concurrent SHA-256 hashes. On x86_64,
> > AMD's Zen 4 can do 4 concurrent SHA-256 hashes (at least based on a
> > microbenchmark of the sha256rnds2 instruction), and it's been reported
> > that the highest SHA-256 throughput on Intel processors comes from using
> > AVX512 to compute 16 hashes in parallel. However, higher interleaving
> > factors would involve tradeoffs such as no longer being able to cache
> > the round constants in registers, further increasing the code size (both
> > source and binary), further increasing the amount of state that users
> > need to keep track of, and causing there to be more "leftover" hashes.
>
> I think the lack of extensibility is the biggest problem with this
> API. Now I confess I too have used the magic number 2 in the
> lskcipher patch-set, but there I think at least it was more
> justifiable based on the set of algorithms we currently support.
>
> Here I think the evidence for limiting this to 2 is weak. And the
> amount of work to extend this beyond 2 would mean ripping this API
> out again.
>
> So let's get this right from the start. Rather than shoehorning
> this into shash, how about we add this to ahash instead where an
> async return is a natural part of the API?
>
> In fact, if we do it there we don't need to make any major changes
> to the API. You could simply add an optional flag to the
> request flags to indicate that more requests will be forthcoming
> immediately.
>
> The algorithm could then either delay the current request if it
> is supported, or process it immediately as is the case now.
>

The kernel already had ahash-based multibuffer hashing years ago. It failed
spectacularly, as it was extremely complex, buggy, slow, and potentially
insecure as it mixed requests from different contexts. Sure, it could have been
improved slightly by adding flush support, but most issues would have remained.

Synchronous hashing really is the right model here. One of the main performance
issues we are having with dm-verity and fs-verity is the scheduling hops
associated with the workqueues on which the dm-verity and fs-verity work runs.
If there was another scheduling hop from the worker task to another task to do
the actual hashing, that would be even worse and would defeat the point of doing
multibuffer hashing. And with the ahash based API this would be difficult to
avoid, as when an individual request gets submitted and put on a queue somewhere
it would lose the information about the original submitter, so when it finally
gets hashed it might be by another task (which the original task would then have
to wait for). I guess the submitter could provide some sort of tag that makes
the request be put on a dedicated queue that would eventually get processed only
by the same task (which might also be needed for security reasons anyway, due to
all the CPU side channels), but that would add lots of complexity to add tag
support to the API and support an arbitrary number of queues.

And then there's the issue of request lengths. With one at a time submission
via 'ahash_request', each request would have its own length. Having to support
multibuffer hashing of different length requests would add a massive amount of
complexity and edge cases that are difficult to get correct, as was shown by the
old ahash based code. This suggests that either the API needs to enforce that
all the lengths are the same, or it needs to provide a clean API (my patch)
where the caller just provides a single length that applies to all messages.

So the synchronous API really seems like the right approach, whereas shoehorning
it into the asynchronous hash API would result in something much more complex
and not actually useful for the intended use cases.

If you're concerned about the hardcoding to 2x specifically, how about the
following API instead:

int crypto_shash_finup_mb(struct shash_desc *desc,
const u8 *datas[], unsigned int len,
u8 *outs[], int num_msgs)

This would allow extension to higher interleaving factors.
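
For example, a caller with two equal-length blocks and a common imported
starting state might use the proposed function like this (hypothetical
usage sketch, by analogy with crypto_shash_finup2x(); the helper name is
made up):

static int example_finup_mb_pair(struct shash_desc *desc,
				 const u8 *salted_state,
				 const u8 *block_a, const u8 *block_b,
				 unsigned int block_size,
				 u8 *digest_a, u8 *digest_b)
{
	const u8 *datas[2] = { block_a, block_b };
	u8 *outs[2] = { digest_a, digest_b };

	return crypto_shash_import(desc, salted_state) ?:
	       crypto_shash_finup_mb(desc, datas, block_size, outs, 2);
}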

I do suspect that anything higher than 2x isn't going to be very practical for
in-kernel use cases, where code size, latency, and per-request memory usage tend
to be very important. Regardless, this would make the API able to support
higher interleaving factors.

- Eric

2024-05-07 00:33:18

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] crypto: shash - add support for finup2x

On Fri, May 03, 2024 at 08:28:10AM -0700, Eric Biggers wrote:
> On Fri, May 03, 2024 at 06:18:32PM +0800, Herbert Xu wrote:
> > Eric Biggers <[email protected]> wrote:
> > >
> > > For now the API only supports 2-way interleaving, as the usefulness and
> > > practicality seems to drop off dramatically after 2. The arm64 CPUs I
> > > tested don't support more than 2 concurrent SHA-256 hashes. On x86_64,
> > > AMD's Zen 4 can do 4 concurrent SHA-256 hashes (at least based on a
> > > microbenchmark of the sha256rnds2 instruction), and it's been reported
> > > that the highest SHA-256 throughput on Intel processors comes from using
> > > AVX512 to compute 16 hashes in parallel. However, higher interleaving
> > > factors would involve tradeoffs such as no longer being able to cache
> > > the round constants in registers, further increasing the code size (both
> > > source and binary), further increasing the amount of state that users
> > > need to keep track of, and causing there to be more "leftover" hashes.
> >
> > I think the lack of extensibility is the biggest problem with this
> > API. Now I confess I too have used the magic number 2 in the
> > lskcipher patch-set, but there I think at least it was more
> > justifiable based on the set of algorithms we currently support.
> >
> > Here I think the evidence for limiting this to 2 is weak. And the
> > amount of work to extend this beyond 2 would mean ripping this API
> > out again.
> >
> > So let's get this right from the start. Rather than shoehorning
> > this into shash, how about we add this to ahash instead where an
> > async return is a natural part of the API?
> >
> > In fact, if we do it there we don't need to make any major changes
> > to the API. You could simply add an optional flag that to the
> > request flags to indicate that more requests will be forthcoming
> > immediately.
> >
> > The algorithm could then either delay the current request if it
> > is supported, or process it immediately as is the case now.
> >
>
> The kernel already had ahash-based multibuffer hashing years ago. It failed
> spectacularly, as it was extremely complex, buggy, slow, and potentially
> insecure as it mixed requests from different contexts. Sure, it could have been
> improved slightly by adding flush support, but most issues would have remained.
>
> Synchronous hashing really is the right model here. One of the main performance
> issues we are having with dm-verity and fs-verity is the scheduling hops
> associated with the workqueues on which the dm-verity and fs-verity work runs.
> If there was another scheduling hop from the worker task to another task to do
> the actual hashing, that would be even worse and would defeat the point of doing
> multibuffer hashing. And with the ahash based API this would be difficult to
> avoid, as when an individual request gets submitted and put on a queue somewhere,
> it would lose the information about the original submitter, so when it finally
> gets hashed it might be by another task (which the original task would then have
> to wait for). I guess the submitter could provide some sort of tag that makes
> the request be put on a dedicated queue that would eventually get processed only
> by the same task (which might also be needed for security reasons anyway, due to
> all the CPU side channels), but adding tag support to the API and supporting an
> arbitrary number of queues would add a lot of complexity.
>
> And then there's the issue of request lengths. With one at a time submission
> via 'ahash_request', each request would have its own length. Having to support
> multibuffer hashing of different length requests would add a massive amount of
> complexity and edge cases that are difficult to get correct, as was shown by the
> old ahash based code. This suggests that either the API needs to enforce that
> all the lengths are the same, or it needs to provide a clean API (my patch)
> where the caller just provides a single length that applies to all messages.
>
> So the synchronous API really seems like the right approach, whereas shoehorning
> it into the asynchronous hash API would result in something much more complex
> and not actually useful for the intended use cases.
>
> If you're concerned about the hardcoding to 2x specifically, how about the
> following API instead:
>
> int crypto_shash_finup_mb(struct shash_desc *desc,
>                           const u8 *datas[], unsigned int len,
>                           u8 *outs[], int num_msgs)
>
> This would allow extension to higher interleaving factors.
>
> I do suspect that anything higher than 2x isn't going to be very practical for
> in-kernel use cases, where code size, latency, and per-request memory usage tend
> to be very important. Regardless, this would make the API able to support
> higher interleaving factors.

I've sent out a new version that makes the change to crypto_shash_finup_mb().

- Eric