Subject: [PATCH 0/3] crypto: yield at end of operations

Call crypto_yield() consistently in the skcipher, aead, and shash
helper functions so that even generic implementations don't hog the
CPU and trigger RCU stall warnings and soft lockups.

Add a cond_resched() call in tcrypt's do_test() so back-to-back
tests yield as well.
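
For reference, crypto_yield() only reschedules when the transform
has been flagged as allowed to sleep. A sketch of its v6.1-era
definition in include/crypto/algapi.h:

    /* No-op unless the caller set CRYPTO_TFM_REQ_MAY_SLEEP,
     * so callers in atomic context are unaffected.
     */
    static inline void crypto_yield(u32 flags)
    {
            if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                    cond_resched();
    }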

Robert Elliott (3):
crypto: skcipher - always yield at end of walk
crypto: aead/shash - yield at end of operations
crypto: tcrypt - yield at end of test

crypto/aead.c | 4 ++++
crypto/shash.c | 32 ++++++++++++++++++++++++--------
crypto/skcipher.c | 15 +++++++++++----
crypto/tcrypt.c | 1 +
4 files changed, 40 insertions(+), 12 deletions(-)

--
2.38.1


Subject: [PATCH 3/3] crypto: tcrypt - yield at end of test

Call cond_resched() to let the scheduler reschedule the
CPU at the end of each test pass.

If the kernel is configured with CONFIG_PREEMPT_NONE=y (or
preempt=none is used on the kernel command line), kernel code
is only preempted at explicit rescheduling points such as
cond_resched() calls. So, repeated calls to

    modprobe tcrypt mode=<something>

can hold the CPU for a long time.
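
The fix follows the usual pattern: offer a voluntary preemption
point after each long-running unit of work. A minimal illustrative
sketch (run_one_test() is a hypothetical helper, not code from
this patch):

    static void run_all_tests(int nr_tests)
    {
            int i;

            for (i = 0; i < nr_tests; i++) {
                    run_one_test(i);  /* may run for a long time */
                    cond_resched();   /* voluntary preemption point */
            }
    }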

Signed-off-by: Robert Elliott <[email protected]>
---
crypto/tcrypt.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 3e9e4adeef02..916bddbf4e75 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -3027,6 +3027,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 
         }
 
+        cond_resched();
         return ret;
 }

--
2.38.1

Subject: [PATCH 2/3] crypto: aead/shash - yield at end of operations

Add crypto_yield() calls at the end of all the encrypt and decrypt
functions to let the scheduler use the CPU after a possibly long
tenure by the crypto driver.

This reduces RCU stalls and soft lockups when crypto functions
that lack their own yield calls (e.g., generic implementations
invoked via the aligned path) run back-to-back.

Signed-off-by: Robert Elliott <[email protected]>
---
crypto/aead.c | 4 ++++
crypto/shash.c | 32 ++++++++++++++++++++++++--------
2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index 16991095270d..f88378f4d4f5 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -93,6 +93,8 @@ int crypto_aead_encrypt(struct aead_request *req)
         else
                 ret = crypto_aead_alg(aead)->encrypt(req);
         crypto_stats_aead_encrypt(cryptlen, alg, ret);
+
+        crypto_yield(crypto_aead_get_flags(aead));
         return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_aead_encrypt);
@@ -112,6 +114,8 @@ int crypto_aead_decrypt(struct aead_request *req)
         else
                 ret = crypto_aead_alg(aead)->decrypt(req);
         crypto_stats_aead_decrypt(cryptlen, alg, ret);
+
+        crypto_yield(crypto_aead_get_flags(aead));
         return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_aead_decrypt);
diff --git a/crypto/shash.c b/crypto/shash.c
index 868b6ba2b3b7..6fea17a50048 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -114,11 +114,15 @@ int crypto_shash_update(struct shash_desc *desc, const u8 *data,
         struct crypto_shash *tfm = desc->tfm;
         struct shash_alg *shash = crypto_shash_alg(tfm);
         unsigned long alignmask = crypto_shash_alignmask(tfm);
+        int ret;
 
         if ((unsigned long)data & alignmask)
-                return shash_update_unaligned(desc, data, len);
+                ret = shash_update_unaligned(desc, data, len);
+        else
+                ret = shash->update(desc, data, len);
 
-        return shash->update(desc, data, len);
+        crypto_yield(crypto_shash_get_flags(tfm));
+        return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_update);
 
@@ -155,11 +159,15 @@ int crypto_shash_final(struct shash_desc *desc, u8 *out)
         struct crypto_shash *tfm = desc->tfm;
         struct shash_alg *shash = crypto_shash_alg(tfm);
         unsigned long alignmask = crypto_shash_alignmask(tfm);
+        int ret;
 
         if ((unsigned long)out & alignmask)
-                return shash_final_unaligned(desc, out);
+                ret = shash_final_unaligned(desc, out);
+        else
+                ret = shash->final(desc, out);
 
-        return shash->final(desc, out);
+        crypto_yield(crypto_shash_get_flags(tfm));
+        return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_final);
 
@@ -176,11 +184,15 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
         struct crypto_shash *tfm = desc->tfm;
         struct shash_alg *shash = crypto_shash_alg(tfm);
         unsigned long alignmask = crypto_shash_alignmask(tfm);
+        int ret;
 
         if (((unsigned long)data | (unsigned long)out) & alignmask)
-                return shash_finup_unaligned(desc, data, len, out);
+                ret = shash_finup_unaligned(desc, data, len, out);
+        else
+                ret = shash->finup(desc, data, len, out);
 
-        return shash->finup(desc, data, len, out);
+        crypto_yield(crypto_shash_get_flags(tfm));
+        return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
 
@@ -197,14 +209,18 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
         struct crypto_shash *tfm = desc->tfm;
         struct shash_alg *shash = crypto_shash_alg(tfm);
         unsigned long alignmask = crypto_shash_alignmask(tfm);
+        int ret;
 
         if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                 return -ENOKEY;
 
         if (((unsigned long)data | (unsigned long)out) & alignmask)
-                return shash_digest_unaligned(desc, data, len, out);
+                ret = shash_digest_unaligned(desc, data, len, out);
+        else
+                ret = shash->digest(desc, data, len, out);
 
-        return shash->digest(desc, data, len, out);
+        crypto_yield(crypto_shash_get_flags(tfm));
+        return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_digest);

--
2.38.1

From: Herbert Xu
Date: 2022-12-20 03:57:09
Subject: Re: [PATCH 2/3] crypto: aead/shash - yield at end of operations

On Mon, Dec 19, 2022 at 02:37:32PM -0600, Robert Elliott wrote:
> Add crypto_yield() calls at the end of all the encrypt and decrypt
> functions to let the scheduler use the CPU after a possibly long
> tenure by the crypto driver.
>
> This reduces RCU stalls and soft lockups when crypto functions
> that lack their own yield calls (e.g., generic implementations
> invoked via the aligned path) run back-to-back.
>
> Signed-off-by: Robert Elliott <[email protected]>
> ---
> crypto/aead.c | 4 ++++
> crypto/shash.c | 32 ++++++++++++++++++++++++--------
> 2 files changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/crypto/aead.c b/crypto/aead.c
> index 16991095270d..f88378f4d4f5 100644
> --- a/crypto/aead.c
> +++ b/crypto/aead.c
> @@ -93,6 +93,8 @@ int crypto_aead_encrypt(struct aead_request *req)
>          else
>                  ret = crypto_aead_alg(aead)->encrypt(req);
>          crypto_stats_aead_encrypt(cryptlen, alg, ret);
> +
> +        crypto_yield(crypto_aead_get_flags(aead));

This is the wrong place to do it. It should be done by the code
that's actually doing the work, just like skcipher.
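
For comparison, a rough sketch of the v6.1-era logic at the end of
skcipher_walk_done() in crypto/skcipher.c, where the walk itself
yields between chunks:

    /* Between chunks of the walk, yield if the request may
     * sleep, then continue with the next chunk.
     */
    if (nbytes) {
            crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
                         CRYPTO_TFM_REQ_MAY_SLEEP : 0);
            return skcipher_walk_next(walk);
    }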

> diff --git a/crypto/shash.c b/crypto/shash.c
> index 868b6ba2b3b7..6fea17a50048 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -114,11 +114,15 @@ int crypto_shash_update(struct shash_desc *desc, const u8 *data,
>          struct crypto_shash *tfm = desc->tfm;
>          struct shash_alg *shash = crypto_shash_alg(tfm);
>          unsigned long alignmask = crypto_shash_alignmask(tfm);
> +        int ret;
>
>          if ((unsigned long)data & alignmask)
> -                return shash_update_unaligned(desc, data, len);
> +                ret = shash_update_unaligned(desc, data, len);
> +        else
> +                ret = shash->update(desc, data, len);
>
> -        return shash->update(desc, data, len);
> +        crypto_yield(crypto_shash_get_flags(tfm));
> +        return ret;
>  }
>  EXPORT_SYMBOL_GPL(crypto_shash_update);

Ditto.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

From: Herbert Xu
Date: 2022-12-20 03:59:10
Subject: Re: [PATCH 3/3] crypto: tcrypt - yield at end of test

On Mon, Dec 19, 2022 at 02:37:33PM -0600, Robert Elliott wrote:
> Call cond_resched() to let the scheduler reschedule the
> CPU at the end of each test pass.
>
> If the kernel is configured with CONFIG_PREEMPT_NONE=y (or
> preempt=none is used on the kernel command line), kernel code
> is only preempted at explicit rescheduling points such as
> cond_resched() calls. So, repeated calls to
>
>     modprobe tcrypt mode=<something>
>
> can hold the CPU for a long time.
>
> Signed-off-by: Robert Elliott <[email protected]>
> ---
> crypto/tcrypt.c | 1 +
> 1 file changed, 1 insertion(+)

I don't really see the point of this.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt