2023-10-22 08:19:01

by Eric Biggers

Subject: [PATCH 00/30] crypto: reduce ahash API overhead

This patch series first removes the alignmask support from ahash. As is
the case with shash, the alignmask support of ahash has no real point.
Removing it reduces API overhead and complexity.

Second, this patch series optimizes the common case where the ahash API
uses an shash algorithm, by eliminating unnecessary indirect calls.
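
For readers less familiar with the layering: when an ahash tfm is backed
by an shash algorithm, the data can be hashed by calling into the shash
code directly instead of bouncing through the ahash algorithm's function
pointers. The following is only a rough sketch of that dispatch idea,
not the code added by patch 30; ahash_is_shash_wrapper() is a
hypothetical predicate, while shash_ahash_digest() is the existing
helper that feeds a request's scatterlist into an shash:

    /* Illustrative sketch only -- not the actual crypto/ahash.c code. */
    static int ahash_do_digest(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

            if (ahash_is_shash_wrapper(tfm)) {      /* hypothetical check */
                    /* Call straight into the shash code: one fewer
                     * indirect call per operation.
                     */
                    struct shash_desc *desc = ahash_request_ctx(req);

                    return shash_ahash_digest(req, desc);
            }

            /* A genuine ahash algorithm: use its own ->digest(). */
            return crypto_ahash_alg(tfm)->digest(req);
    }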

This series can be retrieved from git at
https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git
tag "crypto-ahash-2023-10-22". Note that it depends on my other series
"crypto: stop supporting alignmask in shash"
(https://lore.kernel.org/r/[email protected]).

Patch 1 cleans up after removal of alignmask support from shash.

Patches 2-13 make drivers stop setting an alignmask for ahashes.

Patch 14 removes alignmask support from ahash.

Patches 15-23 remove uses and checks of the ahash alignmask that became unnecessary.

Patches 24-25 are other ahash related cleanups.

Patches 26-29 prepare for optimizing the ahash-using-shash case.

Patch 30 optimizes the ahash-using-shash case.

Eric Biggers (30):
crypto: shash - remove crypto_shash_ctx_aligned()
crypto: sun4i-ss - remove unnecessary alignmask for ahashes
crypto: sun8i-ce - remove unnecessary alignmask for ahashes
crypto: sun8i-ss - remove unnecessary alignmask for ahashes
crypto: atmel - remove unnecessary alignmask for ahashes
crypto: artpec6 - stop setting alignmask for ahashes
crypto: mxs-dcp - remove unnecessary alignmask for ahashes
crypto: s5p-sss - remove unnecessary alignmask for ahashes
crypto: talitos - remove unnecessary alignmask for ahashes
crypto: omap-sham - stop setting alignmask for ahashes
crypto: rockchip - remove unnecessary alignmask for ahashes
crypto: starfive - remove unnecessary alignmask for ahashes
crypto: stm32 - remove unnecessary alignmask for ahashes
crypto: ahash - remove support for nonzero alignmask
crypto: authenc - stop using alignmask of ahash
crypto: authencesn - stop using alignmask of ahash
crypto: testmgr - stop checking crypto_ahash_alignmask
net: ipv4: stop checking crypto_ahash_alignmask
net: ipv6: stop checking crypto_ahash_alignmask
crypto: ccm - stop using alignmask of ahash
crypto: chacha20poly1305 - stop using alignmask of ahash
crypto: gcm - stop using alignmask of ahash
crypto: ahash - remove crypto_ahash_alignmask
crypto: ahash - remove struct ahash_request_priv
crypto: ahash - improve file comment
crypto: chelsio - stop using crypto_ahash::init
crypto: talitos - stop using crypto_ahash::init
crypto: hash - move "ahash wrapping shash" functions to ahash.c
crypto: ahash - check for shash type instead of not ahash type
crypto: ahash - optimize performance when wrapping shash

Documentation/crypto/devel-algos.rst | 4 +-
crypto/ahash.c | 406 +++++++++++-------
crypto/authenc.c | 12 +-
crypto/authencesn.c | 20 +-
crypto/ccm.c | 3 +-
crypto/chacha20poly1305.c | 3 +-
crypto/gcm.c | 3 +-
crypto/hash.h | 14 +-
crypto/shash.c | 205 +--------
crypto/testmgr.c | 9 +-
.../crypto/allwinner/sun4i-ss/sun4i-ss-core.c | 2 -
.../crypto/allwinner/sun8i-ce/sun8i-ce-core.c | 6 -
.../crypto/allwinner/sun8i-ss/sun8i-ss-core.c | 5 -
drivers/crypto/atmel-sha.c | 2 -
drivers/crypto/axis/artpec6_crypto.c | 3 -
drivers/crypto/chelsio/chcr_algo.c | 9 +-
drivers/crypto/mxs-dcp.c | 2 -
drivers/crypto/omap-sham.c | 16 +-
drivers/crypto/rockchip/rk3288_crypto_ahash.c | 3 -
drivers/crypto/s5p-sss.c | 6 -
drivers/crypto/starfive/jh7110-hash.c | 13 +-
drivers/crypto/stm32/stm32-hash.c | 20 -
drivers/crypto/talitos.c | 17 +-
include/crypto/algapi.h | 5 -
include/crypto/hash.h | 74 +---
include/crypto/internal/hash.h | 9 +-
include/linux/crypto.h | 27 +-
net/ipv4/ah4.c | 17 +-
net/ipv6/ah6.c | 17 +-
29 files changed, 339 insertions(+), 593 deletions(-)


base-commit: a2786e8bdd0242d7f00abf452a572de7464d177b
prerequisite-patch-id: e447f81a392f2f3955206357d72032cf691c7e11
prerequisite-patch-id: 71947e05e23fb176da3ca898720b9e3332e891d7
prerequisite-patch-id: 98d070bdaf3cfaf88553ab707cc3bfe85371c006
prerequisite-patch-id: 9e4287b71c1129edb1ba162e2a1f641a9ac4385f
prerequisite-patch-id: 22a4cda4ae529854e55627c55d3f35b035871f3b
prerequisite-patch-id: f67b194e37338a4715850686b2f02bbf0a47cbe1
prerequisite-patch-id: bcb547f4c9be4b022b824f9bff6b919b2d37d60f
prerequisite-patch-id: 20a8c2663a94c2d49217c5158a6bc588881fb9ad
prerequisite-patch-id: e45e43c487d75c87fd713a5ef57a584cf947950e
prerequisite-patch-id: bb211c1b59f73b22319aee6fafd14b07bc5d1460
prerequisite-patch-id: 5f033ce643ba7d1f219dee490abd21e1e0958a51
prerequisite-patch-id: 2173122570085246d5f4e5d3c4a920f7b7c528f9
prerequisite-patch-id: 3fe1bc3b93e9502f874c485c5f2e39eec4899222
prerequisite-patch-id: 982ed5e31a6616f9788d4641c3757342e9f15576
prerequisite-patch-id: 6a207af4a7044cc47ab3f797e9c865fdbdb5d20c
prerequisite-patch-id: f34ad579025354af65a73c1497dc967e2e834a55
prerequisite-patch-id: 5ad384179da558ff3359baabda588731ed2e90a4
prerequisite-patch-id: d3d243977afb4f574fb289eddf0e71becda1ae2b
--
2.42.0


2023-10-22 08:19:01

by Eric Biggers

Subject: [PATCH 02/30] crypto: sun4i-ss - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the sun4i-ss driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in sun4i_hash(), already
using the unaligned access helpers. And this driver only supports
unkeyed hash algorithms, so the key buffer need not be considered.
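
As background for this and the other driver conversions in this series:
with cra_alignmask = 3 the API guaranteed that req->result was 4-byte
aligned, so a driver could store digest words directly. Without that
guarantee, stores must tolerate any alignment, which is what the
unaligned access helpers provide. A minimal sketch of the pattern
(illustrative only, not code copied from sun4i_hash(); "word" stands for
a 32-bit value read back from the hardware):

    #include <asm/unaligned.h>

    /* Store one 32-bit digest word into req->result without assuming
     * that the buffer is 4-byte aligned.
     */
    static void store_digest_word(u8 *result, unsigned int i, u32 word)
    {
            put_unaligned_le32(word, result + i * sizeof(u32));
    }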

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
index 3bcfcfc370842..e23a020a64628 100644
--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
+++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
@@ -42,21 +42,20 @@ static struct sun4i_ss_alg_template ss_algs[] = {
.digest = sun4i_hash_digest,
.export = sun4i_hash_export_md5,
.import = sun4i_hash_import_md5,
.halg = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct md5_state),
.base = {
.cra_name = "md5",
.cra_driver_name = "md5-sun4i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun4i_req_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun4i_hash_crainit,
.cra_exit = sun4i_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
@@ -69,21 +68,20 @@ static struct sun4i_ss_alg_template ss_algs[] = {
.digest = sun4i_hash_digest,
.export = sun4i_hash_export_sha1,
.import = sun4i_hash_import_sha1,
.halg = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct sha1_state),
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-sun4i-ss",
.cra_priority = 300,
- .cra_alignmask = 3,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun4i_req_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun4i_hash_crainit,
.cra_exit = sun4i_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_SKCIPHER,
--
2.42.0

2023-10-22 08:19:03

by Eric Biggers

Subject: [PATCH 11/30] crypto: rockchip - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the rockchip driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in rk_hash_run(),
already using put_unaligned_le32(). And this driver only supports
unkeyed hash algorithms, so the key buffer need not be considered.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/rockchip/rk3288_crypto_ahash.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
index 8c143180645e5..1b13b4aa16ecc 100644
--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
@@ -386,21 +386,20 @@ struct rk_crypto_tmp rk_ahash_sha1 = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct sha1_state),
.base = {
.cra_name = "sha1",
.cra_driver_name = "rk-sha1",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
},
.alg.hash.op = {
.do_one_request = rk_hash_run,
},
};

struct rk_crypto_tmp rk_ahash_sha256 = {
@@ -419,21 +418,20 @@ struct rk_crypto_tmp rk_ahash_sha256 = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "rk-sha256",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
},
.alg.hash.op = {
.do_one_request = rk_hash_run,
},
};

struct rk_crypto_tmp rk_ahash_md5 = {
@@ -452,19 +450,18 @@ struct rk_crypto_tmp rk_ahash_md5 = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct md5_state),
.base = {
.cra_name = "md5",
.cra_driver_name = "rk-md5",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct rk_ahash_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
}
},
.alg.hash.op = {
.do_one_request = rk_hash_run,
},
};
--
2.42.0

2023-10-22 08:19:05

by Eric Biggers

Subject: [PATCH 12/30] crypto: starfive - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the starfive driver no longer use it. This driver did actually
rely on it, but only for storing to the result buffer using int stores
in starfive_hash_copy_hash(). This patch makes
starfive_hash_copy_hash() use put_unaligned() instead. (It really
should use a specific endianness, but that's an existing bug.)

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/starfive/jh7110-hash.c | 13 ++-----------
1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfive/jh7110-hash.c
index cc7650198d703..b6d1808012ca7 100644
--- a/drivers/crypto/starfive/jh7110-hash.c
+++ b/drivers/crypto/starfive/jh7110-hash.c
@@ -202,21 +202,22 @@ static int starfive_hash_copy_hash(struct ahash_request *req)
int count, *data;
int mlen;

if (!req->result)
return 0;

mlen = rctx->digsize / sizeof(u32);
data = (u32 *)req->result;

for (count = 0; count < mlen; count++)
- data[count] = readl(ctx->cryp->base + STARFIVE_HASH_SHARDR);
+ put_unaligned(readl(ctx->cryp->base + STARFIVE_HASH_SHARDR),
+ &data[count]);

return 0;
}

void starfive_hash_done_task(unsigned long param)
{
struct starfive_cryp_dev *cryp = (struct starfive_cryp_dev *)param;
int err = cryp->err;

if (!err)
@@ -621,21 +622,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha224",
.cra_driver_name = "sha224-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -651,21 +651,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "hmac(sha224)",
.cra_driver_name = "sha224-hmac-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -680,21 +679,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "sha256-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -710,21 +708,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "sha256-hmac-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -739,21 +736,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -769,21 +765,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "hmac(sha384)",
.cra_driver_name = "sha384-hmac-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -798,21 +793,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -828,21 +822,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "hmac(sha512)",
.cra_driver_name = "sha512-hmac-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -857,21 +850,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sm3_state),
.base = {
.cra_name = "sm3",
.cra_driver_name = "sm3-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SM3_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
}, {
.base.init = starfive_hash_init,
.base.update = starfive_hash_update,
.base.final = starfive_hash_final,
@@ -887,21 +879,20 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
.statesize = sizeof(struct sm3_state),
.base = {
.cra_name = "hmac(sm3)",
.cra_driver_name = "sm3-hmac-starfive",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SM3_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct starfive_cryp_ctx),
- .cra_alignmask = 3,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = starfive_hash_one_request,
},
},
};

int starfive_hash_register_algs(void)
--
2.42.0

2023-10-22 08:19:07

by Eric Biggers

Subject: [PATCH 09/30] crypto: talitos - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the talitos driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
common_nonsnoop_hash_unmap(), simply using memcpy(). And this driver's
"ahash_setkey()" function does not assume any alignment for the key
buffer.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/talitos.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 4ca4fbd227bce..e8f710d87007b 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -3252,21 +3252,21 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
dev_err(dev, "unknown algorithm type %d\n", t_alg->algt.type);
devm_kfree(dev, t_alg);
return ERR_PTR(-EINVAL);
}

alg->cra_module = THIS_MODULE;
if (t_alg->algt.priority)
alg->cra_priority = t_alg->algt.priority;
else
alg->cra_priority = TALITOS_CRA_PRIORITY;
- if (has_ftr_sec1(priv))
+ if (has_ftr_sec1(priv) && t_alg->algt.type != CRYPTO_ALG_TYPE_AHASH)
alg->cra_alignmask = 3;
else
alg->cra_alignmask = 0;
alg->cra_ctxsize = sizeof(struct talitos_ctx);
alg->cra_flags |= CRYPTO_ALG_KERN_DRIVER_ONLY;

t_alg->dev = dev;

return t_alg;
}
--
2.42.0

2023-10-22 08:19:08

by Eric Biggers

Subject: [PATCH 05/30] crypto: atmel - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the atmel driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
atmel_sha_copy_ready_hash(), simply using memcpy(). And this driver
didn't set an alignmask for any keyed hash algorithms, so the key buffer
need not be considered.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/atmel-sha.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 3622120add625..6cd3fc493027a 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -1293,29 +1293,27 @@ static struct ahash_alg sha_224_alg = {
.halg.base.cra_blocksize = SHA224_BLOCK_SIZE,

.halg.digestsize = SHA224_DIGEST_SIZE,
};

static struct ahash_alg sha_384_512_algs[] = {
{
.halg.base.cra_name = "sha384",
.halg.base.cra_driver_name = "atmel-sha384",
.halg.base.cra_blocksize = SHA384_BLOCK_SIZE,
- .halg.base.cra_alignmask = 0x3,

.halg.digestsize = SHA384_DIGEST_SIZE,
},
{
.halg.base.cra_name = "sha512",
.halg.base.cra_driver_name = "atmel-sha512",
.halg.base.cra_blocksize = SHA512_BLOCK_SIZE,
- .halg.base.cra_alignmask = 0x3,

.halg.digestsize = SHA512_DIGEST_SIZE,
},
};

static void atmel_sha_queue_task(unsigned long data)
{
struct atmel_sha_dev *dd = (struct atmel_sha_dev *)data;

atmel_sha_handle_queue(dd, NULL);
--
2.42.0

2023-10-22 08:19:09

by Eric Biggers

Subject: [PATCH 10/30] crypto: omap-sham - stop setting alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the omap-sham driver no longer use it. This driver did actually
rely on it, but only for storing to the result buffer using __u32 stores
in omap_sham_copy_ready_hash(). This patch makes
omap_sham_copy_ready_hash() use put_unaligned() instead. (It really
should use a specific endianness, but that's an existing bug.)

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/omap-sham.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index a6b4a0b3ace30..c4d77d8533313 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -349,24 +349,24 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
break;
case FLAGS_MODE_SHA512:
d = SHA512_DIGEST_SIZE / sizeof(u32);
break;
default:
d = 0;
}

if (big_endian)
for (i = 0; i < d; i++)
- hash[i] = be32_to_cpup((__be32 *)in + i);
+ put_unaligned(be32_to_cpup((__be32 *)in + i), &hash[i]);
else
for (i = 0; i < d; i++)
- hash[i] = le32_to_cpup((__le32 *)in + i);
+ put_unaligned(le32_to_cpup((__le32 *)in + i), &hash[i]);
}

static void omap_sham_write_ctrl_omap2(struct omap_sham_dev *dd, size_t length,
int final, int dma)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
u32 val = length << 5, mask;

if (likely(ctx->digcnt))
omap_sham_write(dd, SHA_REG_DIGCNT(dd), ctx->digcnt);
@@ -1428,21 +1428,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
.base.halg.digestsize = SHA1_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "sha1",
.cra_driver_name = "omap-sha1",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1451,21 +1450,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
.base.halg.digestsize = MD5_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "md5",
.cra_driver_name = "omap-md5",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1476,21 +1474,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
.base.halg.base = {
.cra_name = "hmac(sha1)",
.cra_driver_name = "omap-hmac-sha1",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha1_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1501,21 +1498,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
.base.halg.base = {
.cra_name = "hmac(md5)",
.cra_driver_name = "omap-hmac-md5",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_md5_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
}
};

/* OMAP4 has some algs in addition to what OMAP2 has */
static struct ahash_engine_alg algs_sha224_sha256[] = {
@@ -1528,21 +1524,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
.base.halg.digestsize = SHA224_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "sha224",
.cra_driver_name = "omap-sha224",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1551,21 +1546,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
.base.halg.digestsize = SHA256_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "sha256",
.cra_driver_name = "omap-sha256",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1576,21 +1570,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
.base.halg.base = {
.cra_name = "hmac(sha224)",
.cra_driver_name = "omap-hmac-sha224",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha224_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1601,21 +1594,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
.base.halg.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "omap-hmac-sha256",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha256_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
};

static struct ahash_engine_alg algs_sha384_sha512[] = {
{
@@ -1627,21 +1619,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.base.halg.digestsize = SHA384_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "sha384",
.cra_driver_name = "omap-sha384",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1650,21 +1641,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.base.halg.digestsize = SHA512_DIGEST_SIZE,
.base.halg.base = {
.cra_name = "sha512",
.cra_driver_name = "omap-sha512",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1675,21 +1665,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.base.halg.base = {
.cra_name = "hmac(sha384)",
.cra_driver_name = "omap-hmac-sha384",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha384_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
{
.base.init = omap_sham_init,
.base.update = omap_sham_update,
.base.final = omap_sham_final,
@@ -1700,21 +1689,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.base.halg.base = {
.cra_name = "hmac(sha512)",
.cra_driver_name = "omap-hmac-sha512",
.cra_priority = 400,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct omap_sham_ctx) +
sizeof(struct omap_sham_hmac_ctx),
- .cra_alignmask = OMAP_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = omap_sham_cra_sha512_init,
.cra_exit = omap_sham_cra_exit,
},
.op.do_one_request = omap_sham_hash_one_req,
},
};

static void omap_sham_done_task(unsigned long data)
{
--
2.42.0

2023-10-22 08:19:09

by Eric Biggers

Subject: [PATCH 17/30] crypto: testmgr - stop checking crypto_ahash_alignmask

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in testmgr.

As a result of this change,
test_sg_division::offset_relative_to_alignmask and
testvec_config::key_offset_relative_to_alignmask no longer have any
effect on ahash (or shash) algorithms. Therefore, also stop setting
these flags in default_hash_testvec_configs[].

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/testmgr.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 48a0929c7a158..335449a27f757 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -401,31 +401,29 @@ static const struct testvec_config default_hash_testvec_configs[] = {
}, {
.name = "digest aligned buffer",
.src_divs = { { .proportion_of_total = 10000 } },
.finalization_type = FINALIZATION_TYPE_DIGEST,
}, {
.name = "init+update+final misaligned buffer",
.src_divs = { { .proportion_of_total = 10000, .offset = 1 } },
.finalization_type = FINALIZATION_TYPE_FINAL,
.key_offset = 1,
}, {
- .name = "digest buffer aligned only to alignmask",
+ .name = "digest misaligned buffer",
.src_divs = {
{
.proportion_of_total = 10000,
.offset = 1,
- .offset_relative_to_alignmask = true,
},
},
.finalization_type = FINALIZATION_TYPE_DIGEST,
.key_offset = 1,
- .key_offset_relative_to_alignmask = true,
}, {
.name = "init+update+update+final two even splits",
.src_divs = {
{ .proportion_of_total = 5000 },
{
.proportion_of_total = 5000,
.flush_type = FLUSH_TYPE_FLUSH,
},
},
.finalization_type = FINALIZATION_TYPE_FINAL,
@@ -1451,54 +1449,53 @@ static int check_nonfinal_ahash_op(const char *op, int err,

/* Test one hash test vector in one configuration, using the ahash API */
static int test_ahash_vec_cfg(const struct hash_testvec *vec,
const char *vec_name,
const struct testvec_config *cfg,
struct ahash_request *req,
struct test_sglist *tsgl,
u8 *hashstate)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- const unsigned int alignmask = crypto_ahash_alignmask(tfm);
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
const unsigned int statesize = crypto_ahash_statesize(tfm);
const char *driver = crypto_ahash_driver_name(tfm);
const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
const struct test_sg_division *divs[XBUFSIZE];
DECLARE_CRYPTO_WAIT(wait);
unsigned int i;
struct scatterlist *pending_sgl;
unsigned int pending_len;
u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
int err;

/* Set the key, if specified */
if (vec->ksize) {
err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
- cfg, alignmask);
+ cfg, 0);
if (err) {
if (err == vec->setkey_error)
return 0;
pr_err("alg: ahash: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
driver, vec_name, vec->setkey_error, err,
crypto_ahash_get_flags(tfm));
return err;
}
if (vec->setkey_error) {
pr_err("alg: ahash: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
driver, vec_name, vec->setkey_error);
return -EINVAL;
}
}

/* Build the scatterlist for the source data */
- err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+ err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
if (err) {
pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
driver, vec_name, cfg->name);
return err;
}

/* Do the actual hashing */

testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);
--
2.42.0

2023-10-22 08:19:10

by Eric Biggers

Subject: [PATCH 15/30] crypto: authenc - stop using alignmask of ahash

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
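
For reference, the reqoff change below is just the old ALIGN()
expression collapsing once the mask is known to be zero (a sketch of the
arithmetic; ALIGN(x, a) rounds x up to a multiple of a):

    ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
          auth_base->cra_alignmask + 1)
      == ALIGN(2 * auth->digestsize + 0, 1)    /* mask == 0 */
      == 2 * auth->digestsize                  /* rounding up to a multiple
                                                  of 1 is a no-op */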

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/authenc.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/crypto/authenc.c b/crypto/authenc.c
index fa896ab143bdf..3aaf3ab4e360f 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -134,23 +134,20 @@ static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags)
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
struct aead_instance *inst = aead_alg_instance(authenc);
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
struct authenc_instance_ctx *ictx = aead_instance_ctx(inst);
struct crypto_ahash *auth = ctx->auth;
struct authenc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
u8 *hash = areq_ctx->tail;
int err;

- hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
- crypto_ahash_alignmask(auth) + 1);
-
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->dst, hash,
req->assoclen + req->cryptlen);
ahash_request_set_callback(ahreq, flags,
authenc_geniv_ahash_done, req);

err = crypto_ahash_digest(ahreq);
if (err)
return err;

@@ -279,23 +276,20 @@ static int crypto_authenc_decrypt(struct aead_request *req)
unsigned int authsize = crypto_aead_authsize(authenc);
struct aead_instance *inst = aead_alg_instance(authenc);
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
struct authenc_instance_ctx *ictx = aead_instance_ctx(inst);
struct crypto_ahash *auth = ctx->auth;
struct authenc_request_ctx *areq_ctx = aead_request_ctx(req);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
u8 *hash = areq_ctx->tail;
int err;

- hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
- crypto_ahash_alignmask(auth) + 1);
-
ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, req->src, hash,
req->assoclen + req->cryptlen - authsize);
ahash_request_set_callback(ahreq, aead_request_flags(req),
authenc_verify_ahash_done, req);

err = crypto_ahash_digest(ahreq);
if (err)
return err;

@@ -393,40 +387,38 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
goto err_free_inst;
auth = crypto_spawn_ahash_alg(&ctx->auth);
auth_base = &auth->base;

err = crypto_grab_skcipher(&ctx->enc, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[2]), 0, mask);
if (err)
goto err_free_inst;
enc = crypto_spawn_skcipher_alg_common(&ctx->enc);

- ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
- auth_base->cra_alignmask + 1);
+ ctx->reqoff = 2 * auth->digestsize;

err = -ENAMETOOLONG;
if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
"authenc(%s,%s)", auth_base->cra_name,
enc->base.cra_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"authenc(%s,%s)", auth_base->cra_driver_name,
enc->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
- inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
- enc->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_ctx);

inst->alg.ivsize = enc->ivsize;
inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;

inst->alg.init = crypto_authenc_init_tfm;
inst->alg.exit = crypto_authenc_exit_tfm;

inst->alg.setkey = crypto_authenc_setkey;
--
2.42.0

2023-10-22 08:19:13

by Eric Biggers

Subject: [PATCH 08/30] crypto: s5p-sss - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the s5p-sss driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in
s5p_hash_copy_result(), simply using memcpy(). And this driver only
supports unkeyed hash algorithms, so the key buffer need not be
considered.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/s5p-sss.c | 6 ------
1 file changed, 6 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index fe8cf9ba8005c..43b840c7b743f 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -217,23 +217,20 @@
#define HASH_FLAGS_FINAL 1
#define HASH_FLAGS_DMA_ACTIVE 2
#define HASH_FLAGS_OUTPUT_READY 3
#define HASH_FLAGS_DMA_READY 4
#define HASH_FLAGS_SGS_COPIED 5
#define HASH_FLAGS_SGS_ALLOCED 6

/* HASH HW constants */
#define BUFLEN HASH_BLOCK_SIZE

-#define SSS_HASH_DMA_LEN_ALIGN 8
-#define SSS_HASH_DMA_ALIGN_MASK (SSS_HASH_DMA_LEN_ALIGN - 1)
-
#define SSS_HASH_QUEUE_LENGTH 10

/**
* struct samsung_aes_variant - platform specific SSS driver data
* @aes_offset: AES register offset from SSS module's base.
* @hash_offset: HASH register offset from SSS module's base.
* @clk_names: names of clocks needed to run SSS IP
*
* Specifies platform specific configuration of SSS module.
* Note: A structure for driver specific platform data is used for future
@@ -1739,21 +1736,20 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
.halg.digestsize = SHA1_DIGEST_SIZE,
.halg.base = {
.cra_name = "sha1",
.cra_driver_name = "exynos-sha1",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
}
},
{
.init = s5p_hash_init,
.update = s5p_hash_update,
.final = s5p_hash_final,
.finup = s5p_hash_finup,
@@ -1764,21 +1760,20 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
.halg.digestsize = MD5_DIGEST_SIZE,
.halg.base = {
.cra_name = "md5",
.cra_driver_name = "exynos-md5",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
}
},
{
.init = s5p_hash_init,
.update = s5p_hash_update,
.final = s5p_hash_final,
.finup = s5p_hash_finup,
@@ -1789,21 +1784,20 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
.halg.digestsize = SHA256_DIGEST_SIZE,
.halg.base = {
.cra_name = "sha256",
.cra_driver_name = "exynos-sha256",
.cra_priority = 100,
.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = HASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s5p_hash_ctx),
- .cra_alignmask = SSS_HASH_DMA_ALIGN_MASK,
.cra_module = THIS_MODULE,
.cra_init = s5p_hash_cra_init,
.cra_exit = s5p_hash_cra_exit,
}
}

};

static void s5p_set_aes(struct s5p_aes_dev *dev,
const u8 *key, const u8 *iv, const u8 *ctr,
--
2.42.0

2023-10-22 08:19:14

by Eric Biggers

Subject: [PATCH 19/30] net: ipv6: stop checking crypto_ahash_alignmask

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in ah6.c.
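
For orientation, the temporary buffer that ah_alloc_tmp() lays out looks
roughly like this after the change (a sketch derived from the hunks
below; previously the ICV area carried extra padding so it could be
realigned to the ahash alignmask, which is what goes away here):

    [ caller data: 'size' bytes ][ ICV: crypto_ahash_digestsize() bytes ]
    ... rounded up to crypto_tfm_ctx_alignment() ...
    [ struct ahash_request + crypto_ahash_reqsize() bytes ]
    ... rounded up to __alignof__(struct scatterlist) ...
    [ nfrags * sizeof(struct scatterlist) ]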

Signed-off-by: Eric Biggers <[email protected]>
---
net/ipv6/ah6.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 56f9282ec5df4..2016e90e6e1d2 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -44,23 +44,21 @@ struct ah_skb_cb {
void *tmp;
};

#define AH_SKB_CB(__skb) ((struct ah_skb_cb *)&((__skb)->cb[0]))

static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
unsigned int size)
{
unsigned int len;

- len = size + crypto_ahash_digestsize(ahash) +
- (crypto_ahash_alignmask(ahash) &
- ~(crypto_tfm_ctx_alignment() - 1));
+ len = size + crypto_ahash_digestsize(ahash);

len = ALIGN(len, crypto_tfm_ctx_alignment());

len += sizeof(struct ahash_request) + crypto_ahash_reqsize(ahash);
len = ALIGN(len, __alignof__(struct scatterlist));

len += sizeof(struct scatterlist) * nfrags;

return kmalloc(len, GFP_ATOMIC);
}
@@ -68,24 +66,23 @@ static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
static inline struct tmp_ext *ah_tmp_ext(void *base)
{
return base + IPV6HDR_BASELEN;
}

static inline u8 *ah_tmp_auth(u8 *tmp, unsigned int offset)
{
return tmp + offset;
}

-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
- unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
{
- return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+ return tmp + offset;
}

static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
u8 *icv)
{
struct ahash_request *req;

req = (void *)PTR_ALIGN(icv + crypto_ahash_digestsize(ahash),
crypto_tfm_ctx_alignment());

@@ -292,21 +289,21 @@ static void ah6_output_done(void *data, int err)
struct ipv6hdr *top_iph = ipv6_hdr(skb);
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
struct tmp_ext *iph_ext;

extlen = skb_network_header_len(skb) - sizeof(struct ipv6hdr);
if (extlen)
extlen += sizeof(*iph_ext);

iph_base = AH_SKB_CB(skb)->tmp;
iph_ext = ah_tmp_ext(iph_base);
- icv = ah_tmp_icv(ahp->ahash, iph_ext, extlen);
+ icv = ah_tmp_icv(iph_ext, extlen);

memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
memcpy(top_iph, iph_base, IPV6HDR_BASELEN);

if (extlen) {
#if IS_ENABLED(CONFIG_IPV6_MIP6)
memcpy(&top_iph->saddr, iph_ext, extlen);
#else
memcpy(&top_iph->daddr, iph_ext, extlen);
#endif
@@ -355,21 +352,21 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
seqhi_len = sizeof(*seqhi);
}
err = -ENOMEM;
iph_base = ah_alloc_tmp(ahash, nfrags + sglists, IPV6HDR_BASELEN +
extlen + seqhi_len);
if (!iph_base)
goto out;

iph_ext = ah_tmp_ext(iph_base);
seqhi = (__be32 *)((char *)iph_ext + extlen);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;

ah = ip_auth_hdr(skb);
memset(ah->auth_data, 0, ahp->icv_trunc_len);

top_iph = ipv6_hdr(skb);
top_iph->payload_len = htons(skb->len - sizeof(*top_iph));

@@ -461,21 +458,21 @@ static void ah6_input_done(void *data, int err)
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
int hdr_len = skb_network_header_len(skb);
int ah_hlen = ipv6_authlen(ah);

if (err)
goto out;

work_iph = AH_SKB_CB(skb)->tmp;
auth_data = ah_tmp_auth(work_iph, hdr_len);
- icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);

err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
goto out;

err = ah->nexthdr;

skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, hdr_len);
__skb_pull(skb, ah_hlen + hdr_len);
@@ -569,21 +566,21 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)

work_iph = ah_alloc_tmp(ahash, nfrags + sglists, hdr_len +
ahp->icv_trunc_len + seqhi_len);
if (!work_iph) {
err = -ENOMEM;
goto out;
}

auth_data = ah_tmp_auth((u8 *)work_iph, hdr_len);
seqhi = (__be32 *)(auth_data + ahp->icv_trunc_len);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;

memcpy(work_iph, ip6h, hdr_len);
memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
memset(ah->auth_data, 0, ahp->icv_trunc_len);

err = ipv6_clear_mutable_options(ip6h, hdr_len, XFRM_POLICY_IN);
if (err)
--
2.42.0

2023-10-22 08:19:14

by Eric Biggers

Subject: [PATCH 16/30] crypto: authencesn - stop using alignmask of ahash

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authencesn accordingly.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/authencesn.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/crypto/authencesn.c b/crypto/authencesn.c
index 60e9568f023f6..2cc933e2f7901 100644
--- a/crypto/authencesn.c
+++ b/crypto/authencesn.c
@@ -80,25 +80,22 @@ static int crypto_authenc_esn_setkey(struct crypto_aead *authenc_esn, const u8 *
err = crypto_skcipher_setkey(enc, keys.enckey, keys.enckeylen);
out:
memzero_explicit(&keys, sizeof(keys));
return err;
}

static int crypto_authenc_esn_genicv_tail(struct aead_request *req,
unsigned int flags)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
- struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
- struct crypto_ahash *auth = ctx->auth;
- u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *hash = areq_ctx->tail;
unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
struct scatterlist *dst = req->dst;
u32 tmp[2];

/* Move high-order bits of sequence number back. */
scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);
scatterwalk_map_and_copy(tmp, dst, 0, 8, 1);
@@ -115,22 +112,21 @@ static void authenc_esn_geniv_ahash_done(void *data, int err)
aead_request_complete(req, err);
}

static int crypto_authenc_esn_genicv(struct aead_request *req,
unsigned int flags)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct crypto_ahash *auth = ctx->auth;
- u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *hash = areq_ctx->tail;
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn);
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
struct scatterlist *dst = req->dst;
u32 tmp[2];

if (!authsize)
return 0;

@@ -217,22 +213,21 @@ static int crypto_authenc_esn_encrypt(struct aead_request *req)
static int crypto_authenc_esn_decrypt_tail(struct aead_request *req,
unsigned int flags)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
unsigned int authsize = crypto_aead_authsize(authenc_esn);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct skcipher_request *skreq = (void *)(areq_ctx->tail +
ctx->reqoff);
struct crypto_ahash *auth = ctx->auth;
- u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *ohash = areq_ctx->tail;
unsigned int cryptlen = req->cryptlen - authsize;
unsigned int assoclen = req->assoclen;
struct scatterlist *dst = req->dst;
u8 *ihash = ohash + crypto_ahash_digestsize(auth);
u32 tmp[2];

if (!authsize)
goto decrypt;

/* Move high-order bits of sequence number back. */
@@ -265,22 +260,21 @@ static void authenc_esn_verify_ahash_done(void *data, int err)
}

static int crypto_authenc_esn_decrypt(struct aead_request *req)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
unsigned int authsize = crypto_aead_authsize(authenc_esn);
struct crypto_ahash *auth = ctx->auth;
- u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
- crypto_ahash_alignmask(auth) + 1);
+ u8 *ohash = areq_ctx->tail;
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
u8 *ihash = ohash + crypto_ahash_digestsize(auth);
struct scatterlist *dst = req->dst;
u32 tmp[2];
int err;

cryptlen -= authsize;

if (req->src != dst) {
@@ -337,22 +331,21 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm)

null = crypto_get_default_null_skcipher();
err = PTR_ERR(null);
if (IS_ERR(null))
goto err_free_skcipher;

ctx->auth = auth;
ctx->enc = enc;
ctx->null = null;

- ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth),
- crypto_ahash_alignmask(auth) + 1);
+ ctx->reqoff = 2 * crypto_ahash_digestsize(auth);

crypto_aead_set_reqsize(
tfm,
sizeof(struct authenc_esn_request_ctx) +
ctx->reqoff +
max_t(unsigned int,
crypto_ahash_reqsize(auth) +
sizeof(struct ahash_request),
sizeof(struct skcipher_request) +
crypto_skcipher_reqsize(enc)));
@@ -424,22 +417,21 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
goto err_free_inst;

if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"authencesn(%s,%s)", auth_base->cra_driver_name,
enc->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
auth_base->cra_priority;
inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
- inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
- enc->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_esn_ctx);

inst->alg.ivsize = enc->ivsize;
inst->alg.chunksize = enc->chunksize;
inst->alg.maxauthsize = auth->digestsize;

inst->alg.init = crypto_authenc_esn_init_tfm;
inst->alg.exit = crypto_authenc_esn_exit_tfm;

inst->alg.setkey = crypto_authenc_esn_setkey;
--
2.42.0

2023-10-22 08:19:15

by Eric Biggers

Subject: [PATCH 13/30] crypto: stm32 - remove unnecessary alignmask for ahashes

From: Eric Biggers <[email protected]>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and
result buffers. The drivers that happen to be specifying an alignmask
for ahash rarely actually need it. When they do, it's easily fixable,
especially considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch
makes the stm32 driver no longer use it. This driver didn't actually
rely on it; it only writes to the result buffer in stm32_hash_finish(),
simply using memcpy(). And stm32_hash_setkey() does not assume any
alignment for the key buffer.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/stm32/stm32-hash.c | 20 --------------------
1 file changed, 20 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index 2b2382d4332c5..34e0d7e381a8c 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -1276,21 +1276,20 @@ static struct ahash_engine_alg algs_md5[] = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "md5",
.cra_driver_name = "stm32-md5",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1306,21 +1305,20 @@ static struct ahash_engine_alg algs_md5[] = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(md5)",
.cra_driver_name = "stm32-hmac-md5",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
}
};
@@ -1338,21 +1336,20 @@ static struct ahash_engine_alg algs_sha1[] = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha1",
.cra_driver_name = "stm32-sha1",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1368,21 +1365,20 @@ static struct ahash_engine_alg algs_sha1[] = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha1)",
.cra_driver_name = "stm32-hmac-sha1",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
};
@@ -1400,21 +1396,20 @@ static struct ahash_engine_alg algs_sha224[] = {
.digestsize = SHA224_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha224",
.cra_driver_name = "stm32-sha224",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1430,21 +1425,20 @@ static struct ahash_engine_alg algs_sha224[] = {
.digestsize = SHA224_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha224)",
.cra_driver_name = "stm32-hmac-sha224",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
};
@@ -1462,21 +1456,20 @@ static struct ahash_engine_alg algs_sha256[] = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "stm32-sha256",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1492,21 +1485,20 @@ static struct ahash_engine_alg algs_sha256[] = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "stm32-hmac-sha256",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
};
@@ -1524,21 +1516,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.digestsize = SHA384_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "stm32-sha384",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1554,21 +1545,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.digestsize = SHA384_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha384)",
.cra_driver_name = "stm32-hmac-sha384",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1583,21 +1573,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.digestsize = SHA512_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "stm32-sha512",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1613,21 +1602,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
.digestsize = SHA512_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha512)",
.cra_driver_name = "stm32-hmac-sha512",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
};
@@ -1645,21 +1633,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_224_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha3-224",
.cra_driver_name = "stm32-sha3-224",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1675,21 +1662,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_224_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha3-224)",
.cra_driver_name = "stm32-hmac-sha3-224",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1704,21 +1690,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_256_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha3-256",
.cra_driver_name = "stm32-sha3-256",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1734,21 +1719,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_256_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha3-256)",
.cra_driver_name = "stm32-hmac-sha3-256",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1763,21 +1747,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_384_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha3-384",
.cra_driver_name = "stm32-sha3-384",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1793,21 +1776,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_384_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha3-384)",
.cra_driver_name = "stm32-hmac-sha3-384",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1822,21 +1804,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_512_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "sha3-512",
.cra_driver_name = "stm32-sha3-512",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
},
{
@@ -1852,21 +1833,20 @@ static struct ahash_engine_alg algs_sha3[] = {
.digestsize = SHA3_512_DIGEST_SIZE,
.statesize = sizeof(struct stm32_hash_state),
.base = {
.cra_name = "hmac(sha3-512)",
.cra_driver_name = "stm32-hmac-sha3-512",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_KERN_DRIVER_ONLY,
.cra_blocksize = SHA3_512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct stm32_hash_ctx),
- .cra_alignmask = 3,
.cra_init = stm32_hash_cra_sha3_hmac_init,
.cra_exit = stm32_hash_cra_exit,
.cra_module = THIS_MODULE,
}
},
.op = {
.do_one_request = stm32_hash_one_request,
},
}
};
--
2.42.0

2023-10-22 08:19:16

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 14/30] crypto: ahash - remove support for nonzero alignmask

From: Eric Biggers <[email protected]>

Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.

This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than, for example, that of
the "skcipher" alignmask. Second, the key and result buffers are passed
as virtual addresses and cannot (in general) be DMA'ed into, so drivers
end up having to copy to/from them in software anyway. As a result it
is easy to use memcpy() or the unaligned access helpers instead.
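
For instance, a driver that previously relied on the alignmask to get a
32-bit-aligned result buffer can store its digest with the unaligned
access helpers instead (a hypothetical sketch; the function name is
illustrative):

	#include <asm/unaligned.h>

	/*
	 * Store a digest computed as little-endian 32-bit words (e.g. MD5)
	 * into a result buffer of any alignment.
	 */
	static void example_store_digest(u8 *result, const u32 *state,
					 unsigned int words)
	{
		unsigned int i;

		for (i = 0; i < words; i++)
			put_unaligned_le32(state[i], result + 4 * i);
	}

Plain memcpy() works just as well when the digest is already held as a
byte array.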

The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But, with one exception, they are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in that case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code for handling "misaligned" buffers in the ahash API
goes away, reducing the API's overhead.

This follows the same change that was made to shash.

Signed-off-by: Eric Biggers <[email protected]>
---
Documentation/crypto/devel-algos.rst | 4 +-
crypto/ahash.c | 117 ++-------------------------
crypto/shash.c | 8 +-
include/crypto/internal/hash.h | 4 +-
include/linux/crypto.h | 27 ++++---
5 files changed, 28 insertions(+), 132 deletions(-)

diff --git a/Documentation/crypto/devel-algos.rst b/Documentation/crypto/devel-algos.rst
index 3506899ef83e3..9b7782f4f6e0a 100644
--- a/Documentation/crypto/devel-algos.rst
+++ b/Documentation/crypto/devel-algos.rst
@@ -228,13 +228,11 @@ Note that it is perfectly legal to "abandon" a request object:
In other words implementations should mind the resource allocation and clean-up.
No resources related to request objects should remain allocated after a call
to .init() or .update(), since there might be no chance to free them.


Specifics Of Asynchronous HASH Transformation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some of the drivers will want to use the Generic ScatterWalk in case the
implementation needs to be fed separate chunks of the scatterlist which
-contains the input data. The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 213bb3e9f2451..744fd3b8ea258 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -28,35 +28,26 @@ static const struct crypto_type crypto_ahash_type;
struct ahash_request_priv {
crypto_completion_t complete;
void *data;
u8 *result;
u32 flags;
void *ubuf[] CRYPTO_MINALIGN_ATTR;
};

static int hash_walk_next(struct crypto_hash_walk *walk)
{
- unsigned int alignmask = walk->alignmask;
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);

walk->data = kmap_local_page(walk->pg);
walk->data += offset;
-
- if (offset & alignmask) {
- unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-
- if (nbytes > unaligned)
- nbytes = unaligned;
- }
-
walk->entrylen -= nbytes;
return nbytes;
}

static int hash_walk_new_entry(struct crypto_hash_walk *walk)
{
struct scatterlist *sg;

sg = walk->sg;
walk->offset = sg->offset;
@@ -66,37 +57,22 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)

if (walk->entrylen > walk->total)
walk->entrylen = walk->total;
walk->total -= walk->entrylen;

return hash_walk_next(walk);
}

int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
- unsigned int alignmask = walk->alignmask;
-
walk->data -= walk->offset;

- if (walk->entrylen && (walk->offset & alignmask) && !err) {
- unsigned int nbytes;
-
- walk->offset = ALIGN(walk->offset, alignmask + 1);
- nbytes = min(walk->entrylen,
- (unsigned int)(PAGE_SIZE - walk->offset));
- if (nbytes) {
- walk->entrylen -= nbytes;
- walk->data += walk->offset;
- return nbytes;
- }
- }
-
kunmap_local(walk->data);
crypto_yield(walk->flags);

if (err)
return err;

if (walk->entrylen) {
walk->offset = 0;
walk->pg++;
return hash_walk_next(walk);
@@ -114,115 +90,85 @@ EXPORT_SYMBOL_GPL(crypto_hash_walk_done);
int crypto_hash_walk_first(struct ahash_request *req,
struct crypto_hash_walk *walk)
{
walk->total = req->nbytes;

if (!walk->total) {
walk->entrylen = 0;
return 0;
}

- walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
walk->sg = req->src;
walk->flags = req->base.flags;

return hash_walk_new_entry(walk);
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);

-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen)
-{
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int ret;
- u8 *buffer, *alignbuffer;
- unsigned long absize;
-
- absize = keylen + alignmask;
- buffer = kmalloc(absize, GFP_KERNEL);
- if (!buffer)
- return -ENOMEM;
-
- alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
- memcpy(alignbuffer, key, keylen);
- ret = tfm->setkey(tfm, alignbuffer, keylen);
- kfree_sensitive(buffer);
- return ret;
-}
-
static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}

static void ahash_set_needkey(struct crypto_ahash *tfm)
{
const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

if (tfm->setkey != ahash_nosetkey &&
!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
}

int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int err;
-
- if ((unsigned long)key & alignmask)
- err = ahash_setkey_unaligned(tfm, key, keylen);
- else
- err = tfm->setkey(tfm, key, keylen);
+ int err = tfm->setkey(tfm, key, keylen);

if (unlikely(err)) {
ahash_set_needkey(tfm);
return err;
}

crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);

static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
bool has_state)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
unsigned int ds = crypto_ahash_digestsize(tfm);
struct ahash_request *subreq;
unsigned int subreq_size;
unsigned int reqsize;
u8 *result;
gfp_t gfp;
u32 flags;

subreq_size = sizeof(*subreq);
reqsize = crypto_ahash_reqsize(tfm);
reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
subreq_size += reqsize;
subreq_size += ds;
- subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);

flags = ahash_request_flags(req);
gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
subreq = kmalloc(subreq_size, gfp);
if (!subreq)
return -ENOMEM;

ahash_request_set_tfm(subreq, tfm);
ahash_request_set_callback(subreq, flags, cplt, req);

result = (u8 *)(subreq + 1) + reqsize;
- result = PTR_ALIGN(result, alignmask + 1);

ahash_request_set_crypt(subreq, req->src, result, req->nbytes);

if (has_state) {
void *state;

state = kmalloc(crypto_ahash_statesize(tfm), gfp);
if (!state) {
kfree(subreq);
return -ENOMEM;
@@ -244,114 +190,67 @@ static void ahash_restore_req(struct ahash_request *req, int err)

if (!err)
memcpy(req->result, subreq->result,
crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));

req->priv = NULL;

kfree_sensitive(subreq);
}

-static void ahash_op_unaligned_done(void *data, int err)
-{
- struct ahash_request *areq = data;
-
- if (err == -EINPROGRESS)
- goto out;
-
- /* First copy req->result into req->priv.result */
- ahash_restore_req(areq, err);
-
-out:
- /* Complete the ORIGINAL request. */
- ahash_request_complete(areq, err);
-}
-
-static int ahash_op_unaligned(struct ahash_request *req,
- int (*op)(struct ahash_request *),
- bool has_state)
-{
- int err;
-
- err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
- if (err)
- return err;
-
- err = op(req->priv);
- if (err == -EINPROGRESS || err == -EBUSY)
- return err;
-
- ahash_restore_req(req, err);
-
- return err;
-}
-
-static int crypto_ahash_op(struct ahash_request *req,
- int (*op)(struct ahash_request *),
- bool has_state)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- unsigned long alignmask = crypto_ahash_alignmask(tfm);
- int err;
-
- if ((unsigned long)req->result & alignmask)
- err = ahash_op_unaligned(req, op, has_state);
- else
- err = op(req);
-
- return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
-}
-
int crypto_ahash_final(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

if (IS_ENABLED(CONFIG_CRYPTO_STATS))
atomic64_inc(&hash_get_stat(alg)->hash_cnt);

- return crypto_ahash_op(req, tfm->final, true);
+ return crypto_hash_errstat(alg, tfm->final(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_final);

int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
struct crypto_istat_hash *istat = hash_get_stat(alg);

atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}

- return crypto_ahash_op(req, tfm->finup, true);
+ return crypto_hash_errstat(alg, tfm->finup(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);

int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ int err;

if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
struct crypto_istat_hash *istat = hash_get_stat(alg);

atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}

if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return crypto_hash_errstat(alg, -ENOKEY);
+ err = -ENOKEY;
+ else
+ err = tfm->digest(req);

- return crypto_ahash_op(req, tfm->digest, false);
+ return crypto_hash_errstat(alg, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);

static void ahash_def_finup_done2(void *data, int err)
{
struct ahash_request *areq = data;

if (err == -EINPROGRESS)
return;

diff --git a/crypto/shash.c b/crypto/shash.c
index 409b33f9c97cc..359702c2cd02b 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -534,40 +534,40 @@ struct crypto_shash *crypto_clone_shash(struct crypto_shash *hash)
EXPORT_SYMBOL_GPL(crypto_clone_shash);

int hash_prepare_alg(struct hash_alg_common *alg)
{
struct crypto_istat_hash *istat = hash_get_stat(alg);
struct crypto_alg *base = &alg->base;

if (alg->digestsize > HASH_MAX_DIGESTSIZE)
return -EINVAL;

+ /* alignmask is not useful for hashes, so it is not supported. */
+ if (base->cra_alignmask)
+ return -EINVAL;
+
base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;

if (IS_ENABLED(CONFIG_CRYPTO_STATS))
memset(istat, 0, sizeof(*istat));

return 0;
}

static int shash_prepare_alg(struct shash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
int err;

if (alg->descsize > HASH_MAX_DESCSIZE)
return -EINVAL;

- /* alignmask is not useful for shash, so it is not supported. */
- if (base->cra_alignmask)
- return -EINVAL;
-
if ((alg->export && !alg->import) || (alg->import && !alg->export))
return -EINVAL;

err = hash_prepare_alg(&alg->halg);
if (err)
return err;

base->cra_type = &crypto_shash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_SHASH;

diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 8d0cd0c591a09..59c707e4dea46 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -11,29 +11,27 @@
#include <crypto/algapi.h>
#include <crypto/hash.h>

struct ahash_request;
struct scatterlist;

struct crypto_hash_walk {
char *data;

unsigned int offset;
- unsigned int alignmask;
+ unsigned int flags;

struct page *pg;
unsigned int entrylen;

unsigned int total;
struct scatterlist *sg;
-
- unsigned int flags;
};

struct ahash_instance {
void (*free)(struct ahash_instance *inst);
union {
struct {
char head[offsetof(struct ahash_alg, halg.base)];
struct crypto_instance base;
} s;
struct ahash_alg alg;
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index f3c3a3b27facd..b164da5e129e8 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -103,21 +103,20 @@
* chunk can cross a page boundary or a scatterlist element boundary.
* aead:
* - The IV buffer and all scatterlist elements must be aligned to the
* algorithm's alignmask.
* - The first scatterlist element must contain all the associated data,
* and its pages must be !PageHighMem.
* - If the plaintext/ciphertext were to be divided into chunks of size
* crypto_aead_walksize() (with the remainder going at the end), no chunk
* can cross a page boundary or a scatterlist element boundary.
* ahash:
- * - The result buffer must be aligned to the algorithm's alignmask.
* - crypto_ahash_finup() must not be used unless the algorithm implements
* ->finup() natively.
*/
#define CRYPTO_ALG_ALLOCATES_MEMORY 0x00010000

/*
* Mark an algorithm as a service implementation only usable by a
* template and never by a normal user of the kernel crypto API.
* This is intended to be used by algorithms that are themselves
* not FIPS-approved but may instead be used to implement parts of
@@ -271,32 +270,34 @@ struct compress_alg {
* of the smallest possible unit which can be transformed with
* this algorithm. The users must respect this value.
* In case of HASH transformation, it is possible for a smaller
* block than @cra_blocksize to be passed to the crypto API for
* transformation, in case of any other transformation type, an
* error will be returned upon any attempt to transform smaller
* than @cra_blocksize chunks.
* @cra_ctxsize: Size of the operational context of the transformation. This
* value informs the kernel crypto API about the memory size
* needed to be allocated for the transformation context.
- * @cra_alignmask: Alignment mask for the input and output data buffer. The data
- * buffer containing the input data for the algorithm must be
- * aligned to this alignment mask. The data buffer for the
- * output data must be aligned to this alignment mask. Note that
- * the Crypto API will do the re-alignment in software, but
- * only under special conditions and there is a performance hit.
- * The re-alignment happens at these occasions for different
- * @cra_u types: cipher -- For both input data and output data
- * buffer; ahash -- For output hash destination buf; shash --
- * For output hash destination buf.
- * This is needed on hardware which is flawed by design and
- * cannot pick data from arbitrary addresses.
+ * @cra_alignmask: For cipher, skcipher, lskcipher, and aead algorithms this is
+ * 1 less than the alignment, in bytes, that the algorithm
+ * implementation requires for input and output buffers. When
+ * the crypto API is invoked with buffers that are not aligned
+ * to this alignment, the crypto API automatically utilizes
+ * appropriately aligned temporary buffers to comply with what
+ * the algorithm needs. (For scatterlists this happens only if
+ * the algorithm uses the skcipher_walk helper functions.) This
+ * misalignment handling carries a performance penalty, so it is
+ * preferred that algorithms do not set a nonzero alignmask.
+ * Also, crypto API users may wish to allocate buffers aligned
+ * to the alignmask of the algorithm being used, in order to
+ * avoid the API having to realign them. Note: the alignmask is
+ * not supported for hash algorithms and is always 0 for them.
* @cra_priority: Priority of this transformation implementation. In case
* multiple transformations with same @cra_name are available to
* the Crypto API, the kernel will use the one with highest
* @cra_priority.
* @cra_name: Generic name (usable by multiple implementations) of the
* transformation algorithm. This is the name of the transformation
* itself. This field is used by the kernel when looking up the
* providers of particular transformation.
* @cra_driver_name: Unique name of the transformation provider. This is the
* name of the provider of the transformation. This can be any
--
2.42.0

2023-10-22 08:19:19

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 20/30] crypto: ccm - stop using alignmask of ahash

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_ccm_create_common() accordingly.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ccm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/crypto/ccm.c b/crypto/ccm.c
index dd7aed63efc93..36f0acec32e19 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -497,22 +497,21 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
goto err_free_inst;

if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"ccm_base(%s,%s)", ctr->base.cra_driver_name,
mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

inst->alg.base.cra_priority = (mac->base.cra_priority +
ctr->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = mac->base.cra_alignmask |
- ctr->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
inst->alg.ivsize = 16;
inst->alg.chunksize = ctr->chunksize;
inst->alg.maxauthsize = 16;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
inst->alg.init = crypto_ccm_init_tfm;
inst->alg.exit = crypto_ccm_exit_tfm;
inst->alg.setkey = crypto_ccm_setkey;
inst->alg.setauthsize = crypto_ccm_setauthsize;
inst->alg.encrypt = crypto_ccm_encrypt;
inst->alg.decrypt = crypto_ccm_decrypt;
--
2.42.0

2023-10-22 08:19:19

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 23/30] crypto: ahash - remove crypto_ahash_alignmask

From: Eric Biggers <[email protected]>

crypto_ahash_alignmask() no longer has any callers, and it always
returns 0 now that neither ahash nor shash algorithms support nonzero
alignmasks anymore. Therefore, remove it.

Signed-off-by: Eric Biggers <[email protected]>
---
include/crypto/hash.h | 6 ------
1 file changed, 6 deletions(-)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index d3a380ae894ad..b00a4a36a8ec3 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -335,26 +335,20 @@ int crypto_has_ahash(const char *alg_name, u32 type, u32 mask);
static inline const char *crypto_ahash_alg_name(struct crypto_ahash *tfm)
{
return crypto_tfm_alg_name(crypto_ahash_tfm(tfm));
}

static inline const char *crypto_ahash_driver_name(struct crypto_ahash *tfm)
{
return crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
}

-static inline unsigned int crypto_ahash_alignmask(
- struct crypto_ahash *tfm)
-{
- return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm));
-}
-
/**
* crypto_ahash_blocksize() - obtain block size for cipher
* @tfm: cipher handle
*
* The block size for the message digest cipher referenced with the cipher
* handle is returned.
*
* Return: block size of cipher
*/
static inline unsigned int crypto_ahash_blocksize(struct crypto_ahash *tfm)
--
2.42.0

2023-10-22 08:19:19

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 22/30] crypto: gcm - stop using alignmask of ahash

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_gcm_create_common() accordingly.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/gcm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index 91ce6e0e2afc1..84f7c23d14e48 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -622,22 +622,21 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,

if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"gcm_base(%s,%s)", ctr->base.cra_driver_name,
ghash->base.cra_driver_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

inst->alg.base.cra_priority = (ghash->base.cra_priority +
ctr->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = ghash->base.cra_alignmask |
- ctr->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
inst->alg.ivsize = GCM_AES_IV_SIZE;
inst->alg.chunksize = ctr->chunksize;
inst->alg.maxauthsize = 16;
inst->alg.init = crypto_gcm_init_tfm;
inst->alg.exit = crypto_gcm_exit_tfm;
inst->alg.setkey = crypto_gcm_setkey;
inst->alg.setauthsize = crypto_gcm_setauthsize;
inst->alg.encrypt = crypto_gcm_encrypt;
inst->alg.decrypt = crypto_gcm_decrypt;
--
2.42.0

2023-10-22 08:19:20

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 24/30] crypto: ahash - remove struct ahash_request_priv

From: Eric Biggers <[email protected]>

struct ahash_request_priv is unused, so remove it.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ahash.c | 8 --------
1 file changed, 8 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 744fd3b8ea258..556c950100936 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -18,28 +18,20 @@
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>

#include "hash.h"

#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e

static const struct crypto_type crypto_ahash_type;

-struct ahash_request_priv {
- crypto_completion_t complete;
- void *data;
- u8 *result;
- u32 flags;
- void *ubuf[] CRYPTO_MINALIGN_ATTR;
-};
-
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);

walk->data = kmap_local_page(walk->pg);
walk->data += offset;
walk->entrylen -= nbytes;
return nbytes;
--
2.42.0

2023-10-22 08:19:21

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 18/30] net: ipv4: stop checking crypto_ahash_alignmask

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in ah4.c.
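
With the alignmask gone, the temporary buffer returned by ah_alloc_tmp()
has a simple sequential layout (a rough sketch of the code in the diff
below, not an exact byte map):

	/*
	 * tmp
	 *  +- caller-specified 'size' bytes (IP header copy, seqhi, ...)
	 *  +- ICV (crypto_ahash_digestsize() bytes)
	 *  +- padding up to crypto_tfm_ctx_alignment()
	 *  +- struct ahash_request + crypto_ahash_reqsize() bytes
	 *  +- padding up to __alignof__(struct scatterlist)
	 *  +- nfrags scatterlists
	 */

so ah_tmp_icv() can simply return 'tmp + offset' with no PTR_ALIGN().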

Signed-off-by: Eric Biggers <[email protected]>
---
net/ipv4/ah4.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index bc0f968c5d5b4..a2e6e1fdf82be 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -20,43 +20,40 @@ struct ah_skb_cb {
void *tmp;
};

#define AH_SKB_CB(__skb) ((struct ah_skb_cb *)&((__skb)->cb[0]))

static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
unsigned int size)
{
unsigned int len;

- len = size + crypto_ahash_digestsize(ahash) +
- (crypto_ahash_alignmask(ahash) &
- ~(crypto_tfm_ctx_alignment() - 1));
+ len = size + crypto_ahash_digestsize(ahash);

len = ALIGN(len, crypto_tfm_ctx_alignment());

len += sizeof(struct ahash_request) + crypto_ahash_reqsize(ahash);
len = ALIGN(len, __alignof__(struct scatterlist));

len += sizeof(struct scatterlist) * nfrags;

return kmalloc(len, GFP_ATOMIC);
}

static inline u8 *ah_tmp_auth(void *tmp, unsigned int offset)
{
return tmp + offset;
}

-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
- unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
{
- return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+ return tmp + offset;
}

static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
u8 *icv)
{
struct ahash_request *req;

req = (void *)PTR_ALIGN(icv + crypto_ahash_digestsize(ahash),
crypto_tfm_ctx_alignment());

@@ -122,21 +119,21 @@ static void ah_output_done(void *data, int err)
u8 *icv;
struct iphdr *iph;
struct sk_buff *skb = data;
struct xfrm_state *x = skb_dst(skb)->xfrm;
struct ah_data *ahp = x->data;
struct iphdr *top_iph = ip_hdr(skb);
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
int ihl = ip_hdrlen(skb);

iph = AH_SKB_CB(skb)->tmp;
- icv = ah_tmp_icv(ahp->ahash, iph, ihl);
+ icv = ah_tmp_icv(iph, ihl);
memcpy(ah->auth_data, icv, ahp->icv_trunc_len);

top_iph->tos = iph->tos;
top_iph->ttl = iph->ttl;
top_iph->frag_off = iph->frag_off;
if (top_iph->ihl != 5) {
top_iph->daddr = iph->daddr;
memcpy(top_iph+1, iph+1, top_iph->ihl*4 - sizeof(struct iphdr));
}

@@ -175,21 +172,21 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)

if (x->props.flags & XFRM_STATE_ESN) {
sglists = 1;
seqhi_len = sizeof(*seqhi);
}
err = -ENOMEM;
iph = ah_alloc_tmp(ahash, nfrags + sglists, ihl + seqhi_len);
if (!iph)
goto out;
seqhi = (__be32 *)((char *)iph + ihl);
- icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+ icv = ah_tmp_icv(seqhi, seqhi_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;

memset(ah->auth_data, 0, ahp->icv_trunc_len);

top_iph = ip_hdr(skb);

iph->tos = top_iph->tos;
iph->ttl = top_iph->ttl;
@@ -272,21 +269,21 @@ static void ah_input_done(void *data, int err)
struct ah_data *ahp = x->data;
struct ip_auth_hdr *ah = ip_auth_hdr(skb);
int ihl = ip_hdrlen(skb);
int ah_hlen = (ah->hdrlen + 2) << 2;

if (err)
goto out;

work_iph = AH_SKB_CB(skb)->tmp;
auth_data = ah_tmp_auth(work_iph, ihl);
- icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);

err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
goto out;

err = ah->nexthdr;

skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, ihl);
__skb_pull(skb, ah_hlen + ihl);
@@ -367,21 +364,21 @@ static int ah_input(struct xfrm_state *x, struct sk_buff *skb)

work_iph = ah_alloc_tmp(ahash, nfrags + sglists, ihl +
ahp->icv_trunc_len + seqhi_len);
if (!work_iph) {
err = -ENOMEM;
goto out;
}

seqhi = (__be32 *)((char *)work_iph + ihl);
auth_data = ah_tmp_auth(seqhi, seqhi_len);
- icv = ah_tmp_icv(ahash, auth_data, ahp->icv_trunc_len);
+ icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
req = ah_tmp_req(ahash, icv);
sg = ah_req_sg(ahash, req);
seqhisg = sg + nfrags;

memcpy(work_iph, iph, ihl);
memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
memset(ah->auth_data, 0, ahp->icv_trunc_len);

iph->ttl = 0;
iph->tos = 0;
--
2.42.0

2023-10-22 08:19:22

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 27/30] crypto: talitos - stop using crypto_ahash::init

From: Eric Biggers <[email protected]>

The function pointer crypto_ahash::init is an internal implementation
detail of the ahash API that exists to help it support both ahash and
shash algorithms. With an upcoming refactoring of how the ahash API
supports shash algorithms, this field will be removed.

Some drivers are invoking crypto_ahash::init to call into their own
code, which is unnecessary and inefficient. The talitos driver is one
of those drivers. Make it just call its own code directly.
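
Condensed from the diff below, the change boils down to:

	/* Before: bounce through the ahash API's internal function pointer. */
	static int ahash_digest(struct ahash_request *areq)
	{
		struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
		struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);

		ahash->init(areq);
		req_ctx->last = 1;

		return ahash_process_req(areq, areq->nbytes);
	}

	/* After: call the driver's own init and finup paths directly. */
	static int ahash_digest(struct ahash_request *areq)
	{
		ahash_init(areq);
		return ahash_finup(areq);
	}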

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/talitos.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index e8f710d87007b..e39fc46a0718e 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2112,27 +2112,28 @@ static int ahash_finup(struct ahash_request *areq)
{
struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);

req_ctx->last = 1;

return ahash_process_req(areq, areq->nbytes);
}

static int ahash_digest(struct ahash_request *areq)
{
- struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
- struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
-
- ahash->init(areq);
- req_ctx->last = 1;
+ ahash_init(areq);
+ return ahash_finup(areq);
+}

- return ahash_process_req(areq, areq->nbytes);
+static int ahash_digest_sha224_swinit(struct ahash_request *areq)
+{
+ ahash_init_sha224_swinit(areq);
+ return ahash_finup(areq);
}

static int ahash_export(struct ahash_request *areq, void *out)
{
struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
struct talitos_export_state *export = out;
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
struct device *dev = ctx->dev;
dma_addr_t dma;
@@ -3235,20 +3236,22 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,

if (!(priv->features & TALITOS_FTR_HMAC_OK) &&
!strncmp(alg->cra_name, "hmac", 4)) {
devm_kfree(dev, t_alg);
return ERR_PTR(-ENOTSUPP);
}
if (!(priv->features & TALITOS_FTR_SHA224_HWINIT) &&
(!strcmp(alg->cra_name, "sha224") ||
!strcmp(alg->cra_name, "hmac(sha224)"))) {
t_alg->algt.alg.hash.init = ahash_init_sha224_swinit;
+ t_alg->algt.alg.hash.digest =
+ ahash_digest_sha224_swinit;
t_alg->algt.desc_hdr_template =
DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU |
DESC_HDR_SEL0_MDEUA |
DESC_HDR_MODE0_MDEU_SHA256;
}
break;
default:
dev_err(dev, "unknown algorithm type %d\n", t_alg->algt.type);
devm_kfree(dev, t_alg);
return ERR_PTR(-EINVAL);
--
2.42.0

2023-10-22 08:19:23

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 25/30] crypto: ahash - improve file comment

From: Eric Biggers <[email protected]>

Improve the file comment for crypto/ahash.c.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ahash.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 556c950100936..1ad402f4dac6c 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -1,16 +1,20 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Asynchronous Cryptographic Hash operations.
*
- * This is the asynchronous version of hash.c with notification of
- * completion via a callback.
+ * This is the implementation of the ahash (asynchronous hash) API. It differs
+ * from shash (synchronous hash) in that ahash supports asynchronous operations,
+ * and it hashes data from scatterlists instead of virtually addressed buffers.
+ *
+ * The ahash API provides access to both ahash and shash algorithms. The shash
+ * API only provides access to shash algorithms.
*
* Copyright (c) 2008 Loc Ho <[email protected]>
*/

#include <crypto/scatterwalk.h>
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>
--
2.42.0

2023-10-22 08:19:24

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 26/30] crypto: chelsio - stop using crypto_ahash::init

From: Eric Biggers <[email protected]>

The function pointer crypto_ahash::init is an internal implementation
detail of the ahash API that exists to help it support both ahash and
shash algorithms. With an upcoming refactoring of how the ahash API
supports shash algorithms, this field will be removed.

Some drivers are invoking crypto_ahash::init to call into their own
code, which is unnecessary and inefficient. The chelsio driver is one
of those drivers. Make it just call its own code directly.

Signed-off-by: Eric Biggers <[email protected]>
---
drivers/crypto/chelsio/chcr_algo.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 16298ae4a00bf..177428480c7d1 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1913,39 +1913,46 @@ static int chcr_ahash_finup(struct ahash_request *req)
set_wr_txq(skb, CPL_PRIORITY_DATA, req_ctx->txqidx);
chcr_send_wr(skb);
return -EINPROGRESS;
unmap:
chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req);
err:
chcr_dec_wrcount(dev);
return error;
}

+static int chcr_hmac_init(struct ahash_request *areq);
+static int chcr_sha_init(struct ahash_request *areq);
+
static int chcr_ahash_digest(struct ahash_request *req)
{
struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req);
struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req);
struct chcr_dev *dev = h_ctx(rtfm)->dev;
struct uld_ctx *u_ctx = ULD_CTX(h_ctx(rtfm));
struct chcr_context *ctx = h_ctx(rtfm);
struct sk_buff *skb;
struct hash_wr_param params;
u8 bs;
int error;
unsigned int cpu;

cpu = get_cpu();
req_ctx->txqidx = cpu % ctx->ntxq;
req_ctx->rxqidx = cpu % ctx->nrxq;
put_cpu();

- rtfm->init(req);
+ if (is_hmac(crypto_ahash_tfm(rtfm)))
+ chcr_hmac_init(req);
+ else
+ chcr_sha_init(req);
+
bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm));
error = chcr_inc_wrcount(dev);
if (error)
return -ENXIO;

if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0],
req_ctx->txqidx) &&
(!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))) {
error = -ENOSPC;
goto err;
--
2.42.0

2023-10-22 08:19:26

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 29/30] crypto: ahash - check for shash type instead of not ahash type

From: Eric Biggers <[email protected]>

Since the previous patch made crypto_shash_type visible to ahash.c,
change checks for '->cra_type != &crypto_ahash_type' to '->cra_type ==
&crypto_shash_type'. This makes more sense and avoids having to
forward-declare crypto_ahash_type. The result is still the same, since
the type is either shash or ahash here.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ahash.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 74be1eb26c1aa..96fec0ca202af 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -20,22 +20,20 @@
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>

#include "hash.h"

#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e

-static const struct crypto_type crypto_ahash_type;
-
static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
struct crypto_shash **ctx = crypto_ahash_ctx(tfm);

return crypto_shash_setkey(*ctx, key, keylen);
}

static int shash_async_init(struct ahash_request *req)
{
@@ -504,21 +502,21 @@ static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)

static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);

hash->setkey = ahash_nosetkey;

crypto_ahash_set_statesize(hash, alg->halg.statesize);

- if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
+ if (tfm->__crt_alg->cra_type == &crypto_shash_type)
return crypto_init_shash_ops_async(tfm);

hash->init = alg->init;
hash->update = alg->update;
hash->final = alg->final;
hash->finup = alg->finup ?: ahash_def_finup;
hash->digest = alg->digest;
hash->export = alg->export;
hash->import = alg->import;

@@ -528,21 +526,21 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
}

if (alg->exit_tfm)
tfm->exit = crypto_ahash_exit_tfm;

return alg->init_tfm ? alg->init_tfm(hash) : 0;
}

static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)
{
- if (alg->cra_type != &crypto_ahash_type)
+ if (alg->cra_type == &crypto_shash_type)
return sizeof(struct crypto_shash *);

return crypto_alg_extsize(alg);
}

static void crypto_ahash_free_instance(struct crypto_instance *inst)
{
struct ahash_instance *ahash = ahash_instance(inst);

ahash->free(ahash);
@@ -753,19 +751,19 @@ int ahash_register_instance(struct crypto_template *tmpl,
return err;

return crypto_register_instance(tmpl, ahash_crypto_instance(inst));
}
EXPORT_SYMBOL_GPL(ahash_register_instance);

bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
struct crypto_alg *alg = &halg->base;

- if (alg->cra_type != &crypto_ahash_type)
+ if (alg->cra_type == &crypto_shash_type)
return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

return __crypto_ahash_alg(alg)->setkey != NULL;
}
EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
--
2.42.0

2023-10-22 08:19:58

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 28/30] crypto: hash - move "ahash wrapping shash" functions to ahash.c

From: Eric Biggers <[email protected]>

The functions involved in implementing the ahash API on top of an
shash algorithm belong in ahash.c rather than in shash.c, where they
currently live. Move them.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ahash.c | 186 ++++++++++++++++++++++++++++++++++++++++++++++++
crypto/hash.h | 4 +-
crypto/shash.c | 189 +------------------------------------------------
3 files changed, 188 insertions(+), 191 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 1ad402f4dac6c..74be1eb26c1aa 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -22,20 +22,206 @@
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>

#include "hash.h"

#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e

static const struct crypto_type crypto_ahash_type;

+static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
+
+ return crypto_shash_setkey(*ctx, key, keylen);
+}
+
+static int shash_async_init(struct ahash_request *req)
+{
+ struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct shash_desc *desc = ahash_request_ctx(req);
+
+ desc->tfm = *ctx;
+
+ return crypto_shash_init(desc);
+}
+
+int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
+{
+ struct crypto_hash_walk walk;
+ int nbytes;
+
+ for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
+ nbytes = crypto_hash_walk_done(&walk, nbytes))
+ nbytes = crypto_shash_update(desc, walk.data, nbytes);
+
+ return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_update);
+
+static int shash_async_update(struct ahash_request *req)
+{
+ return shash_ahash_update(req, ahash_request_ctx(req));
+}
+
+static int shash_async_final(struct ahash_request *req)
+{
+ return crypto_shash_final(ahash_request_ctx(req), req->result);
+}
+
+int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
+{
+ struct crypto_hash_walk walk;
+ int nbytes;
+
+ nbytes = crypto_hash_walk_first(req, &walk);
+ if (!nbytes)
+ return crypto_shash_final(desc, req->result);
+
+ do {
+ nbytes = crypto_hash_walk_last(&walk) ?
+ crypto_shash_finup(desc, walk.data, nbytes,
+ req->result) :
+ crypto_shash_update(desc, walk.data, nbytes);
+ nbytes = crypto_hash_walk_done(&walk, nbytes);
+ } while (nbytes > 0);
+
+ return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_finup);
+
+static int shash_async_finup(struct ahash_request *req)
+{
+ struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct shash_desc *desc = ahash_request_ctx(req);
+
+ desc->tfm = *ctx;
+
+ return shash_ahash_finup(req, desc);
+}
+
+int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
+{
+ unsigned int nbytes = req->nbytes;
+ struct scatterlist *sg;
+ unsigned int offset;
+ int err;
+
+ if (nbytes &&
+ (sg = req->src, offset = sg->offset,
+ nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
+ void *data;
+
+ data = kmap_local_page(sg_page(sg));
+ err = crypto_shash_digest(desc, data + offset, nbytes,
+ req->result);
+ kunmap_local(data);
+ } else
+ err = crypto_shash_init(desc) ?:
+ shash_ahash_finup(req, desc);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_digest);
+
+static int shash_async_digest(struct ahash_request *req)
+{
+ struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct shash_desc *desc = ahash_request_ctx(req);
+
+ desc->tfm = *ctx;
+
+ return shash_ahash_digest(req, desc);
+}
+
+static int shash_async_export(struct ahash_request *req, void *out)
+{
+ return crypto_shash_export(ahash_request_ctx(req), out);
+}
+
+static int shash_async_import(struct ahash_request *req, const void *in)
+{
+ struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct shash_desc *desc = ahash_request_ctx(req);
+
+ desc->tfm = *ctx;
+
+ return crypto_shash_import(desc, in);
+}
+
+static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
+{
+ struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_shash(*ctx);
+}
+
+static int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *calg = tfm->__crt_alg;
+ struct shash_alg *alg = __crypto_shash_alg(calg);
+ struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
+ struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+ struct crypto_shash *shash;
+
+ if (!crypto_mod_get(calg))
+ return -EAGAIN;
+
+ shash = crypto_create_tfm(calg, &crypto_shash_type);
+ if (IS_ERR(shash)) {
+ crypto_mod_put(calg);
+ return PTR_ERR(shash);
+ }
+
+ *ctx = shash;
+ tfm->exit = crypto_exit_shash_ops_async;
+
+ crt->init = shash_async_init;
+ crt->update = shash_async_update;
+ crt->final = shash_async_final;
+ crt->finup = shash_async_finup;
+ crt->digest = shash_async_digest;
+ if (crypto_shash_alg_has_setkey(alg))
+ crt->setkey = shash_async_setkey;
+
+ crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
+
+ crt->export = shash_async_export;
+ crt->import = shash_async_import;
+
+ crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+
+ return 0;
+}
+
+static struct crypto_ahash *
+crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
+ struct crypto_ahash *hash)
+{
+ struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
+ struct crypto_shash **ctx = crypto_ahash_ctx(hash);
+ struct crypto_shash *shash;
+
+ shash = crypto_clone_shash(*ctx);
+ if (IS_ERR(shash)) {
+ crypto_free_ahash(nhash);
+ return ERR_CAST(shash);
+ }
+
+ *nctx = shash;
+
+ return nhash;
+}
+
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);

walk->data = kmap_local_page(walk->pg);
walk->data += offset;
walk->entrylen -= nbytes;
return nbytes;
diff --git a/crypto/hash.h b/crypto/hash.h
index 7e6c1a948692f..de2ee2f4ae304 100644
--- a/crypto/hash.h
+++ b/crypto/hash.h
@@ -24,17 +24,15 @@ static inline int crypto_hash_report_stat(struct sk_buff *skb,

strscpy(rhash.type, type, sizeof(rhash.type));

rhash.stat_hash_cnt = atomic64_read(&istat->hash_cnt);
rhash.stat_hash_tlen = atomic64_read(&istat->hash_tlen);
rhash.stat_err_cnt = atomic64_read(&istat->err_cnt);

return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash);
}

-int crypto_init_shash_ops_async(struct crypto_tfm *tfm);
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
- struct crypto_ahash *hash);
+extern const struct crypto_type crypto_shash_type;

int hash_prepare_alg(struct hash_alg_common *alg);

#endif /* _LOCAL_CRYPTO_HASH_H */
diff --git a/crypto/shash.c b/crypto/shash.c
index 359702c2cd02b..28092ed8415a7 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -9,22 +9,20 @@
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>

#include "hash.h"

-static const struct crypto_type crypto_shash_type;
-
static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
{
return hash_get_stat(&alg->halg);
}

static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
{
return crypto_hash_errstat(&alg->halg, err);
}

@@ -186,205 +184,20 @@ int crypto_shash_import(struct shash_desc *desc, const void *in)
return -ENOKEY;

if (shash->import)
return shash->import(desc, in);

memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(tfm));
return 0;
}
EXPORT_SYMBOL_GPL(crypto_shash_import);

-static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
-
- return crypto_shash_setkey(*ctx, key, keylen);
-}
-
-static int shash_async_init(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return crypto_shash_init(desc);
-}
-
-int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
-{
- struct crypto_hash_walk walk;
- int nbytes;
-
- for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
- nbytes = crypto_hash_walk_done(&walk, nbytes))
- nbytes = crypto_shash_update(desc, walk.data, nbytes);
-
- return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_update);
-
-static int shash_async_update(struct ahash_request *req)
-{
- return shash_ahash_update(req, ahash_request_ctx(req));
-}
-
-static int shash_async_final(struct ahash_request *req)
-{
- return crypto_shash_final(ahash_request_ctx(req), req->result);
-}
-
-int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
-{
- struct crypto_hash_walk walk;
- int nbytes;
-
- nbytes = crypto_hash_walk_first(req, &walk);
- if (!nbytes)
- return crypto_shash_final(desc, req->result);
-
- do {
- nbytes = crypto_hash_walk_last(&walk) ?
- crypto_shash_finup(desc, walk.data, nbytes,
- req->result) :
- crypto_shash_update(desc, walk.data, nbytes);
- nbytes = crypto_hash_walk_done(&walk, nbytes);
- } while (nbytes > 0);
-
- return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_finup);
-
-static int shash_async_finup(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_finup(req, desc);
-}
-
-int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
-{
- unsigned int nbytes = req->nbytes;
- struct scatterlist *sg;
- unsigned int offset;
- int err;
-
- if (nbytes &&
- (sg = req->src, offset = sg->offset,
- nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
- void *data;
-
- data = kmap_local_page(sg_page(sg));
- err = crypto_shash_digest(desc, data + offset, nbytes,
- req->result);
- kunmap_local(data);
- } else
- err = crypto_shash_init(desc) ?:
- shash_ahash_finup(req, desc);
-
- return err;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_digest);
-
-static int shash_async_digest(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_digest(req, desc);
-}
-
-static int shash_async_export(struct ahash_request *req, void *out)
-{
- return crypto_shash_export(ahash_request_ctx(req), out);
-}
-
-static int shash_async_import(struct ahash_request *req, const void *in)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return crypto_shash_import(desc, in);
-}
-
-static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
-{
- struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
-
- crypto_free_shash(*ctx);
-}
-
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
-{
- struct crypto_alg *calg = tfm->__crt_alg;
- struct shash_alg *alg = __crypto_shash_alg(calg);
- struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
- struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
- struct crypto_shash *shash;
-
- if (!crypto_mod_get(calg))
- return -EAGAIN;
-
- shash = crypto_create_tfm(calg, &crypto_shash_type);
- if (IS_ERR(shash)) {
- crypto_mod_put(calg);
- return PTR_ERR(shash);
- }
-
- *ctx = shash;
- tfm->exit = crypto_exit_shash_ops_async;
-
- crt->init = shash_async_init;
- crt->update = shash_async_update;
- crt->final = shash_async_final;
- crt->finup = shash_async_finup;
- crt->digest = shash_async_digest;
- if (crypto_shash_alg_has_setkey(alg))
- crt->setkey = shash_async_setkey;
-
- crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
- CRYPTO_TFM_NEED_KEY);
-
- crt->export = shash_async_export;
- crt->import = shash_async_import;
-
- crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
-
- return 0;
-}
-
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
- struct crypto_ahash *hash)
-{
- struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
- struct crypto_shash **ctx = crypto_ahash_ctx(hash);
- struct crypto_shash *shash;
-
- shash = crypto_clone_shash(*ctx);
- if (IS_ERR(shash)) {
- crypto_free_ahash(nhash);
- return ERR_CAST(shash);
- }
-
- *nctx = shash;
-
- return nhash;
-}
-
static void crypto_shash_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_shash *hash = __crypto_shash_cast(tfm);
struct shash_alg *alg = crypto_shash_alg(hash);

alg->exit_tfm(hash);
}

static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
{
@@ -449,21 +262,21 @@ static void crypto_shash_show(struct seq_file *m, struct crypto_alg *alg)
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "digestsize : %u\n", salg->digestsize);
}

static int __maybe_unused crypto_shash_report_stat(
struct sk_buff *skb, struct crypto_alg *alg)
{
return crypto_hash_report_stat(skb, alg, "shash");
}

-static const struct crypto_type crypto_shash_type = {
+const struct crypto_type crypto_shash_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_shash_init_tfm,
.free = crypto_shash_free_instance,
#ifdef CONFIG_PROC_FS
.show = crypto_shash_show,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_USER)
.report = crypto_shash_report,
#endif
#ifdef CONFIG_CRYPTO_STATS
--
2.42.0

2023-10-22 08:20:05

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 30/30] crypto: ahash - optimize performance when wrapping shash

From: Eric Biggers <[email protected]>

The "ahash" API provides access to both CPU-based and hardware
offload-based implementations of hash algorithms. Typically the former
are implemented as "shash" algorithms under the hood, while the latter
are implemented as "ahash" algorithms. Various kernel subsystems use
the ahash API because they want to support hashing hardware offload
without using a separate API for it.

Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.

This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.

It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.
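
To illustrate the shape of the change, here is a simplified sketch of the
new fast path for one operation (paraphrasing the diff below; the
statistics accounting is omitted):

int crypto_ahash_update(struct ahash_request *req)
{
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

	/*
	 * Common case: the tfm wraps an shash.  The shash_desc in the
	 * request context was set up by crypto_ahash_init(), so this
	 * goes straight into the shash code with no per-tfm indirect
	 * call.
	 */
	if (likely(tfm->using_shash))
		return shash_ahash_update(req, ahash_request_ctx(req));

	/* Otherwise dispatch to the real ahash driver. */
	return crypto_ahash_alg(tfm)->update(req);
}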

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/ahash.c | 285 +++++++++++++++++++++---------------------
crypto/hash.h | 10 ++
crypto/shash.c | 8 +-
include/crypto/hash.h | 68 +---------
4 files changed, 167 insertions(+), 204 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 96fec0ca202af..deee55f939dc8 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -20,61 +20,67 @@
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>

#include "hash.h"

#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e

-static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen)
+static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg)
{
- struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
+ return hash_get_stat(&alg->halg);
+}
+
+static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err)
+{
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;

- return crypto_shash_setkey(*ctx, key, keylen);
+ if (err && err != -EINPROGRESS && err != -EBUSY)
+ atomic64_inc(&ahash_get_stat(alg)->err_cnt);
+
+ return err;
}

-static int shash_async_init(struct ahash_request *req)
+/*
+ * For an ahash tfm that is using an shash algorithm (instead of an ahash
+ * algorithm), this returns the underlying shash tfm.
+ */
+static inline struct crypto_shash *ahash_to_shash(struct crypto_ahash *tfm)
{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
+ return *(struct crypto_shash **)crypto_ahash_ctx(tfm);
+}

- desc->tfm = *ctx;
+static inline struct shash_desc *prepare_shash_desc(struct ahash_request *req,
+ struct crypto_ahash *tfm)
+{
+ struct shash_desc *desc = ahash_request_ctx(req);

- return crypto_shash_init(desc);
+ desc->tfm = ahash_to_shash(tfm);
+ return desc;
}

int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;

for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
nbytes = crypto_hash_walk_done(&walk, nbytes))
nbytes = crypto_shash_update(desc, walk.data, nbytes);

return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_update);

-static int shash_async_update(struct ahash_request *req)
-{
- return shash_ahash_update(req, ahash_request_ctx(req));
-}
-
-static int shash_async_final(struct ahash_request *req)
-{
- return crypto_shash_final(ahash_request_ctx(req), req->result);
-}
-
int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
{
struct crypto_hash_walk walk;
int nbytes;

nbytes = crypto_hash_walk_first(req, &walk);
if (!nbytes)
return crypto_shash_final(desc, req->result);

do {
@@ -82,30 +88,20 @@ int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
crypto_shash_finup(desc, walk.data, nbytes,
req->result) :
crypto_shash_update(desc, walk.data, nbytes);
nbytes = crypto_hash_walk_done(&walk, nbytes);
} while (nbytes > 0);

return nbytes;
}
EXPORT_SYMBOL_GPL(shash_ahash_finup);

-static int shash_async_finup(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_finup(req, desc);
-}
-
int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
{
unsigned int nbytes = req->nbytes;
struct scatterlist *sg;
unsigned int offset;
int err;

if (nbytes &&
(sg = req->src, offset = sg->offset,
nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
@@ -116,110 +112,54 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
req->result);
kunmap_local(data);
} else
err = crypto_shash_init(desc) ?:
shash_ahash_finup(req, desc);

return err;
}
EXPORT_SYMBOL_GPL(shash_ahash_digest);

-static int shash_async_digest(struct ahash_request *req)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return shash_ahash_digest(req, desc);
-}
-
-static int shash_async_export(struct ahash_request *req, void *out)
-{
- return crypto_shash_export(ahash_request_ctx(req), out);
-}
-
-static int shash_async_import(struct ahash_request *req, const void *in)
-{
- struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
- struct shash_desc *desc = ahash_request_ctx(req);
-
- desc->tfm = *ctx;
-
- return crypto_shash_import(desc, in);
-}
-
-static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
+static void crypto_exit_ahash_using_shash(struct crypto_tfm *tfm)
{
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);

crypto_free_shash(*ctx);
}

-static int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm)
{
struct crypto_alg *calg = tfm->__crt_alg;
- struct shash_alg *alg = __crypto_shash_alg(calg);
struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
struct crypto_shash *shash;

if (!crypto_mod_get(calg))
return -EAGAIN;

shash = crypto_create_tfm(calg, &crypto_shash_type);
if (IS_ERR(shash)) {
crypto_mod_put(calg);
return PTR_ERR(shash);
}

+ crt->using_shash = true;
*ctx = shash;
- tfm->exit = crypto_exit_shash_ops_async;
-
- crt->init = shash_async_init;
- crt->update = shash_async_update;
- crt->final = shash_async_final;
- crt->finup = shash_async_finup;
- crt->digest = shash_async_digest;
- if (crypto_shash_alg_has_setkey(alg))
- crt->setkey = shash_async_setkey;
+ tfm->exit = crypto_exit_ahash_using_shash;

crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
CRYPTO_TFM_NEED_KEY);
-
- crt->export = shash_async_export;
- crt->import = shash_async_import;
-
crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);

return 0;
}

-static struct crypto_ahash *
-crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
- struct crypto_ahash *hash)
-{
- struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
- struct crypto_shash **ctx = crypto_ahash_ctx(hash);
- struct crypto_shash *shash;
-
- shash = crypto_clone_shash(*ctx);
- if (IS_ERR(shash)) {
- crypto_free_ahash(nhash);
- return ERR_CAST(shash);
- }
-
- *nctx = shash;
-
- return nhash;
-}
-
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int offset = walk->offset;
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);

walk->data = kmap_local_page(walk->pg);
walk->data += offset;
walk->entrylen -= nbytes;
return nbytes;
@@ -283,44 +223,68 @@ int crypto_hash_walk_first(struct ahash_request *req,
return hash_walk_new_entry(walk);
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);

static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}

-static void ahash_set_needkey(struct crypto_ahash *tfm)
+static void ahash_set_needkey(struct crypto_ahash *tfm, struct ahash_alg *alg)
{
- const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
- if (tfm->setkey != ahash_nosetkey &&
- !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+ if (alg->setkey != ahash_nosetkey &&
+ !(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
}

int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
- int err = tfm->setkey(tfm, key, keylen);
+ if (likely(tfm->using_shash)) {
+ struct crypto_shash *shash = ahash_to_shash(tfm);
+ int err;

- if (unlikely(err)) {
- ahash_set_needkey(tfm);
- return err;
+ err = crypto_shash_setkey(shash, key, keylen);
+ if (unlikely(err)) {
+ crypto_ahash_set_flags(tfm,
+ crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
+ return err;
+ }
+ } else {
+ struct ahash_alg *alg = crypto_ahash_alg(tfm);
+ int err;
+
+ err = alg->setkey(tfm, key, keylen);
+ if (unlikely(err)) {
+ ahash_set_needkey(tfm, alg);
+ return err;
+ }
}
-
crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);

+int crypto_ahash_init(struct ahash_request *req)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_init(prepare_shash_desc(req, tfm));
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+ return crypto_ahash_alg(tfm)->init(req);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_init);
+
static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
bool has_state)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
unsigned int ds = crypto_ahash_digestsize(tfm);
struct ahash_request *subreq;
unsigned int subreq_size;
unsigned int reqsize;
u8 *result;
gfp_t gfp;
@@ -370,67 +334,92 @@ static void ahash_restore_req(struct ahash_request *req, int err)

if (!err)
memcpy(req->result, subreq->result,
crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));

req->priv = NULL;

kfree_sensitive(subreq);
}

-int crypto_ahash_final(struct ahash_request *req)
+int crypto_ahash_update(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;

+ if (likely(tfm->using_shash))
+ return shash_ahash_update(req, ahash_request_ctx(req));
+
+ alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS))
- atomic64_inc(&hash_get_stat(alg)->hash_cnt);
+ atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen);
+ return crypto_ahash_errstat(alg, alg->update(req));
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_update);
+
+int crypto_ahash_final(struct ahash_request *req)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ahash_alg *alg;
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_final(ahash_request_ctx(req), req->result);

- return crypto_hash_errstat(alg, tfm->final(req));
+ alg = crypto_ahash_alg(tfm);
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+ atomic64_inc(&ahash_get_stat(alg)->hash_cnt);
+ return crypto_ahash_errstat(alg, alg->final(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_final);

int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;
+
+ if (likely(tfm->using_shash))
+ return shash_ahash_finup(req, ahash_request_ctx(req));

+ alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
- struct crypto_istat_hash *istat = hash_get_stat(alg);
+ struct crypto_istat_hash *istat = ahash_get_stat(alg);

atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}
-
- return crypto_hash_errstat(alg, tfm->finup(req));
+ return crypto_ahash_errstat(alg, alg->finup(req));
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);

int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+ struct ahash_alg *alg;
int err;

+ if (likely(tfm->using_shash))
+ return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
+
+ alg = crypto_ahash_alg(tfm);
if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
- struct crypto_istat_hash *istat = hash_get_stat(alg);
+ struct crypto_istat_hash *istat = ahash_get_stat(alg);

atomic64_inc(&istat->hash_cnt);
atomic64_add(req->nbytes, &istat->hash_tlen);
}

if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
err = -ENOKEY;
else
- err = tfm->digest(req);
+ err = alg->digest(req);

- return crypto_hash_errstat(alg, err);
+ return crypto_ahash_errstat(alg, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);

static void ahash_def_finup_done2(void *data, int err)
{
struct ahash_request *areq = data;

if (err == -EINPROGRESS)
return;

@@ -441,21 +430,21 @@ static void ahash_def_finup_done2(void *data, int err)

static int ahash_def_finup_finish1(struct ahash_request *req, int err)
{
struct ahash_request *subreq = req->priv;

if (err)
goto out;

subreq->base.complete = ahash_def_finup_done2;

- err = crypto_ahash_reqtfm(req)->final(subreq);
+ err = crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq);
if (err == -EINPROGRESS || err == -EBUSY)
return err;

out:
ahash_restore_req(req, err);
return err;
}

static void ahash_def_finup_done1(void *data, int err)
{
@@ -478,59 +467,68 @@ static void ahash_def_finup_done1(void *data, int err)

static int ahash_def_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
int err;

err = ahash_save_req(req, ahash_def_finup_done1, true);
if (err)
return err;

- err = tfm->update(req->priv);
+ err = crypto_ahash_alg(tfm)->update(req->priv);
if (err == -EINPROGRESS || err == -EBUSY)
return err;

return ahash_def_finup_finish1(req, err);
}

+int crypto_ahash_export(struct ahash_request *req, void *out)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_export(ahash_request_ctx(req), out);
+ return crypto_ahash_alg(tfm)->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_export);
+
+int crypto_ahash_import(struct ahash_request *req, const void *in)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+ if (likely(tfm->using_shash))
+ return crypto_shash_import(prepare_shash_desc(req, tfm), in);
+ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+ return crypto_ahash_alg(tfm)->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_import);
+
static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);

alg->exit_tfm(hash);
}

static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);

- hash->setkey = ahash_nosetkey;
-
crypto_ahash_set_statesize(hash, alg->halg.statesize);

if (tfm->__crt_alg->cra_type == &crypto_shash_type)
- return crypto_init_shash_ops_async(tfm);
-
- hash->init = alg->init;
- hash->update = alg->update;
- hash->final = alg->final;
- hash->finup = alg->finup ?: ahash_def_finup;
- hash->digest = alg->digest;
- hash->export = alg->export;
- hash->import = alg->import;
-
- if (alg->setkey) {
- hash->setkey = alg->setkey;
- ahash_set_needkey(hash);
- }
+ return crypto_init_ahash_using_shash(tfm);
+
+ ahash_set_needkey(hash, alg);

if (alg->exit_tfm)
tfm->exit = crypto_ahash_exit_tfm;

return alg->init_tfm ? alg->init_tfm(hash) : 0;
}

static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)
{
if (alg->cra_type == &crypto_shash_type)
@@ -634,33 +632,35 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
return ERR_CAST(tfm);

return hash;
}

nhash = crypto_clone_tfm(&crypto_ahash_type, tfm);

if (IS_ERR(nhash))
return nhash;

- nhash->init = hash->init;
- nhash->update = hash->update;
- nhash->final = hash->final;
- nhash->finup = hash->finup;
- nhash->digest = hash->digest;
- nhash->export = hash->export;
- nhash->import = hash->import;
- nhash->setkey = hash->setkey;
nhash->reqsize = hash->reqsize;
nhash->statesize = hash->statesize;

- if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
- return crypto_clone_shash_ops_async(nhash, hash);
+ if (likely(hash->using_shash)) {
+ struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
+ struct crypto_shash *shash;
+
+ shash = crypto_clone_shash(ahash_to_shash(hash));
+ if (IS_ERR(shash)) {
+ err = PTR_ERR(shash);
+ goto out_free_nhash;
+ }
+ *nctx = shash;
+ return nhash;
+ }

err = -ENOSYS;
alg = crypto_ahash_alg(hash);
if (!alg->clone_tfm)
goto out_free_nhash;

err = alg->clone_tfm(nhash, hash);
if (err)
goto out_free_nhash;

@@ -680,20 +680,25 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
if (alg->halg.statesize == 0)
return -EINVAL;

err = hash_prepare_alg(&alg->halg);
if (err)
return err;

base->cra_type = &crypto_ahash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;

+ if (!alg->finup)
+ alg->finup = ahash_def_finup;
+ if (!alg->setkey)
+ alg->setkey = ahash_nosetkey;
+
return 0;
}

int crypto_register_ahash(struct ahash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
int err;

err = ahash_prepare_alg(alg);
if (err)
@@ -754,16 +759,16 @@ int ahash_register_instance(struct crypto_template *tmpl,
}
EXPORT_SYMBOL_GPL(ahash_register_instance);

bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
struct crypto_alg *alg = &halg->base;

if (alg->cra_type == &crypto_shash_type)
return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

- return __crypto_ahash_alg(alg)->setkey != NULL;
+ return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}
EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
diff --git a/crypto/hash.h b/crypto/hash.h
index de2ee2f4ae304..93f6ba0df263e 100644
--- a/crypto/hash.h
+++ b/crypto/hash.h
@@ -5,20 +5,30 @@
* Copyright (c) 2023 Herbert Xu <[email protected]>
*/
#ifndef _LOCAL_CRYPTO_HASH_H
#define _LOCAL_CRYPTO_HASH_H

#include <crypto/internal/hash.h>
#include <linux/cryptouser.h>

#include "internal.h"

+static inline struct crypto_istat_hash *hash_get_stat(
+ struct hash_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+ return &alg->stat;
+#else
+ return NULL;
+#endif
+}
+
static inline int crypto_hash_report_stat(struct sk_buff *skb,
struct crypto_alg *alg,
const char *type)
{
struct hash_alg_common *halg = __crypto_hash_alg_common(alg);
struct crypto_istat_hash *istat = hash_get_stat(halg);
struct crypto_stat_hash rhash;

memset(&rhash, 0, sizeof(rhash));

diff --git a/crypto/shash.c b/crypto/shash.c
index 28092ed8415a7..d5194221c88cb 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -16,21 +16,27 @@

#include "hash.h"

static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
{
return hash_get_stat(&alg->halg);
}

static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
{
- return crypto_hash_errstat(&alg->halg, err);
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;
+
+ if (err && err != -EINPROGRESS && err != -EBUSY)
+ atomic64_inc(&shash_get_stat(alg)->err_cnt);
+
+ return err;
}

int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}
EXPORT_SYMBOL_GPL(shash_no_setkey);

static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index b00a4a36a8ec3..c7bdbece27ccb 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -243,30 +243,21 @@ struct shash_alg {

union {
struct HASH_ALG_COMMON;
struct hash_alg_common halg;
};
};
#undef HASH_ALG_COMMON
#undef HASH_ALG_COMMON_STAT

struct crypto_ahash {
- int (*init)(struct ahash_request *req);
- int (*update)(struct ahash_request *req);
- int (*final)(struct ahash_request *req);
- int (*finup)(struct ahash_request *req);
- int (*digest)(struct ahash_request *req);
- int (*export)(struct ahash_request *req, void *out);
- int (*import)(struct ahash_request *req, const void *in);
- int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
- unsigned int keylen);
-
+ bool using_shash; /* Underlying algorithm is shash, not ahash */
unsigned int statesize;
unsigned int reqsize;
struct crypto_tfm base;
};

struct crypto_shash {
unsigned int descsize;
struct crypto_tfm base;
};

@@ -506,109 +497,60 @@ int crypto_ahash_digest(struct ahash_request *req);
* crypto_ahash_export() - extract current message digest state
* @req: reference to the ahash_request handle whose state is exported
* @out: output buffer of sufficient size that can hold the hash state
*
* This function exports the hash state of the ahash_request handle into the
* caller-allocated output buffer out which must have sufficient size (e.g. by
* calling crypto_ahash_statesize()).
*
* Return: 0 if the export was successful; < 0 if an error occurred
*/
-static inline int crypto_ahash_export(struct ahash_request *req, void *out)
-{
- return crypto_ahash_reqtfm(req)->export(req, out);
-}
+int crypto_ahash_export(struct ahash_request *req, void *out);

/**
* crypto_ahash_import() - import message digest state
* @req: reference to ahash_request handle the state is imported into
* @in: buffer holding the state
*
* This function imports the hash state into the ahash_request handle from the
* input buffer. That buffer should have been generated with the
* crypto_ahash_export function.
*
* Return: 0 if the import was successful; < 0 if an error occurred
*/
-static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
- if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return -ENOKEY;
-
- return tfm->import(req, in);
-}
+int crypto_ahash_import(struct ahash_request *req, const void *in);

/**
* crypto_ahash_init() - (re)initialize message digest handle
* @req: ahash_request handle that already is initialized with all necessary
* data using the ahash_request_* API functions
*
* The call (re-)initializes the message digest referenced by the ahash_request
* handle. Any potentially existing state created by previous operations is
* discarded.
*
* Return: see crypto_ahash_final()
*/
-static inline int crypto_ahash_init(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
- if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
- return -ENOKEY;
-
- return tfm->init(req);
-}
-
-static inline struct crypto_istat_hash *hash_get_stat(
- struct hash_alg_common *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
- return &alg->stat;
-#else
- return NULL;
-#endif
-}
-
-static inline int crypto_hash_errstat(struct hash_alg_common *alg, int err)
-{
- if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
- return err;
-
- if (err && err != -EINPROGRESS && err != -EBUSY)
- atomic64_inc(&hash_get_stat(alg)->err_cnt);
-
- return err;
-}
+int crypto_ahash_init(struct ahash_request *req);

/**
* crypto_ahash_update() - add data to message digest for processing
* @req: ahash_request handle that was previously initialized with the
* crypto_ahash_init call.
*
* Updates the message digest state of the &ahash_request handle. The input data
* is pointed to by the scatter/gather list registered in the &ahash_request
* handle
*
* Return: see crypto_ahash_final()
*/
-static inline int crypto_ahash_update(struct ahash_request *req)
-{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
- if (IS_ENABLED(CONFIG_CRYPTO_STATS))
- atomic64_add(req->nbytes, &hash_get_stat(alg)->hash_tlen);
-
- return crypto_hash_errstat(alg, tfm->update(req));
-}
+int crypto_ahash_update(struct ahash_request *req);

/**
* DOC: Asynchronous Hash Request Handle
*
* The &ahash_request data structure contains all pointers to data
* required for the asynchronous cipher operation. This includes the cipher
* handle (which can be used by multiple &ahash_request instances), pointer
* to plaintext and the message digest output buffer, asynchronous callback
* function, etc. It acts as a handle to the ahash_request_* API calls in a
* similar way as ahash handle to the crypto_ahash_* API calls.
--
2.42.0

2023-10-22 08:32:22

by Eric Biggers

[permalink] [raw]
Subject: [PATCH 21/30] crypto: chacha20poly1305 - stop using alignmask of ahash

From: Eric Biggers <[email protected]>

Now that the alignmask for ahash and shash algorithms is always 0,
simplify chachapoly_create() accordingly.

Signed-off-by: Eric Biggers <[email protected]>
---
crypto/chacha20poly1305.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c
index 0e2e208d98f94..9e4651330852b 100644
--- a/crypto/chacha20poly1305.c
+++ b/crypto/chacha20poly1305.c
@@ -603,22 +603,21 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
poly->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;
if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
"%s(%s,%s)", name, chacha->base.cra_driver_name,
poly->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_free_inst;

inst->alg.base.cra_priority = (chacha->base.cra_priority +
poly->base.cra_priority) / 2;
inst->alg.base.cra_blocksize = 1;
- inst->alg.base.cra_alignmask = chacha->base.cra_alignmask |
- poly->base.cra_alignmask;
+ inst->alg.base.cra_alignmask = chacha->base.cra_alignmask;
inst->alg.base.cra_ctxsize = sizeof(struct chachapoly_ctx) +
ctx->saltlen;
inst->alg.ivsize = ivsize;
inst->alg.chunksize = chacha->chunksize;
inst->alg.maxauthsize = POLY1305_DIGEST_SIZE;
inst->alg.init = chachapoly_init;
inst->alg.exit = chachapoly_exit;
inst->alg.encrypt = chachapoly_encrypt;
inst->alg.decrypt = chachapoly_decrypt;
inst->alg.setkey = chachapoly_setkey;
--
2.42.0

2023-10-25 13:16:18

by Corentin Labbe

[permalink] [raw]
Subject: Re: [PATCH 02/30] crypto: sun4i-ss - remove unnecessary alignmask for ahashes

On Sun, Oct 22, 2023 at 01:10:32AM -0700, Eric Biggers wrote:
> From: Eric Biggers <[email protected]>
>
> The crypto API's support for alignmasks for ahash algorithms is nearly
> useless, as its only effect is to cause the API to align the key and
> result buffers. The drivers that happen to be specifying an alignmask
> for ahash rarely actually need it. When they do, it's easily fixable,
> especially considering that these buffers cannot be used for DMA.
>
> In preparation for removing alignmask support from ahash, this patch
> makes the sun4i-ss driver no longer use it. This driver didn't actually
> rely on it; it only writes to the result buffer in sun4i_hash(), already
> using the unaligned access helpers. And this driver only supports
> unkeyed hash algorithms, so the key buffer need not be considered.
>
> Signed-off-by: Eric Biggers <[email protected]>
> ---
> drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
> index 3bcfcfc370842..e23a020a64628 100644
> --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
> +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
> @@ -42,21 +42,20 @@ static struct sun4i_ss_alg_template ss_algs[] = {
> .digest = sun4i_hash_digest,
> .export = sun4i_hash_export_md5,
> .import = sun4i_hash_import_md5,
> .halg = {
> .digestsize = MD5_DIGEST_SIZE,
> .statesize = sizeof(struct md5_state),
> .base = {
> .cra_name = "md5",
> .cra_driver_name = "md5-sun4i-ss",
> .cra_priority = 300,
> - .cra_alignmask = 3,
> .cra_blocksize = MD5_HMAC_BLOCK_SIZE,
> .cra_ctxsize = sizeof(struct sun4i_req_ctx),
> .cra_module = THIS_MODULE,
> .cra_init = sun4i_hash_crainit,
> .cra_exit = sun4i_hash_craexit,
> }
> }
> }
> },
> { .type = CRYPTO_ALG_TYPE_AHASH,
> @@ -69,21 +68,20 @@ static struct sun4i_ss_alg_template ss_algs[] = {
> .digest = sun4i_hash_digest,
> .export = sun4i_hash_export_sha1,
> .import = sun4i_hash_import_sha1,
> .halg = {
> .digestsize = SHA1_DIGEST_SIZE,
> .statesize = sizeof(struct sha1_state),
> .base = {
> .cra_name = "sha1",
> .cra_driver_name = "sha1-sun4i-ss",
> .cra_priority = 300,
> - .cra_alignmask = 3,
> .cra_blocksize = SHA1_BLOCK_SIZE,
> .cra_ctxsize = sizeof(struct sun4i_req_ctx),
> .cra_module = THIS_MODULE,
> .cra_init = sun4i_hash_crainit,
> .cra_exit = sun4i_hash_craexit,
> }
> }
> }
> },
> { .type = CRYPTO_ALG_TYPE_SKCIPHER,
> --
> 2.42.0
>
Acked-by: Corentin Labbe <[email protected]>

Thanks
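
For context on the "unaligned access helpers" mentioned in the quoted
commit message above: the point is that a driver can store its digest into
the caller's result buffer without assuming any alignment, so no
cra_alignmask is needed.  A hypothetical sketch (illustrative only, not the
actual sun4i_hash() code):

#include <asm/unaligned.h>

/*
 * Copy a hardware digest (as 32-bit words) into a possibly unaligned
 * result buffer using the unaligned access helpers.
 */
static void copy_digest_to_result(u8 *result, const u32 *hw_digest,
				  unsigned int words)
{
	unsigned int i;

	for (i = 0; i < words; i++)
		put_unaligned_le32(hw_digest[i], result + 4 * i);
}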

2023-10-27 10:56:31

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 00/30] crypto: reduce ahash API overhead

Eric Biggers <[email protected]> wrote:
> This patch series first removes the alignmask support from ahash. As is
> the case with shash, the alignmask support of ahash has no real point.
> Removing it reduces API overhead and complexity.
>
> Second, this patch series optimizes the common case where the ahash API
> uses an shash algorithm, by eliminating unnecessary indirect calls.
>
> This series can be retrieved from git at
> https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git
> tag "crypto-ahash-2023-10-22". Note that it depends on my other series
> "crypto: stop supporting alignmask in shash"
> (https://lore.kernel.org/r/[email protected]).
>
> Patch 1 cleans up after removal of alignmask support from shash.
>
> Patches 2-12 make drivers stop setting an alignmask for ahashes.
>
> Patch 13 removes alignmask support from ahash.
>
> Patches 14-23 remove checks of ahash alignmasks that became unnecessary.
>
> Patches 24-25 are other ahash related cleanups.
>
> Patches 26-29 prepare for optimizing the ahash-using-shash case.
>
> Patch 30 optimizes the ahash-using-shash case.
>
> Eric Biggers (30):
> crypto: shash - remove crypto_shash_ctx_aligned()
> crypto: sun4i-ss - remove unnecessary alignmask for ahashes
> crypto: sun8i-ce - remove unnecessary alignmask for ahashes
> crypto: sun8i-ss - remove unnecessary alignmask for ahashes
> crypto: atmel - remove unnecessary alignmask for ahashes
> crypto: artpec6 - stop setting alignmask for ahashes
> crypto: mxs-dcp - remove unnecessary alignmask for ahashes
> crypto: s5p-sss - remove unnecessary alignmask for ahashes
> crypto: talitos - remove unnecessary alignmask for ahashes
> crypto: omap-sham - stop setting alignmask for ahashes
> crypto: rockchip - remove unnecessary alignmask for ahashes
> crypto: starfive - remove unnecessary alignmask for ahashes
> crypto: stm32 - remove unnecessary alignmask for ahashes
> crypto: ahash - remove support for nonzero alignmask
> crypto: authenc - stop using alignmask of ahash
> crypto: authencesn - stop using alignmask of ahash
> crypto: testmgr - stop checking crypto_ahash_alignmask
> net: ipv4: stop checking crypto_ahash_alignmask
> net: ipv6: stop checking crypto_ahash_alignmask
> crypto: ccm - stop using alignmask of ahash
> crypto: chacha20poly1305 - stop using alignmask of ahash
> crypto: gcm - stop using alignmask of ahash
> crypto: ahash - remove crypto_ahash_alignmask
> crypto: ahash - remove struct ahash_request_priv
> crypto: ahash - improve file comment
> crypto: chelsio - stop using crypto_ahash::init
> crypto: talitos - stop using crypto_ahash::init
> crypto: hash - move "ahash wrapping shash" functions to ahash.c
> crypto: ahash - check for shash type instead of not ahash type
> crypto: ahash - optimize performance when wrapping shash
>
> Documentation/crypto/devel-algos.rst | 4 +-
> crypto/ahash.c | 406 +++++++++++-------
> crypto/authenc.c | 12 +-
> crypto/authencesn.c | 20 +-
> crypto/ccm.c | 3 +-
> crypto/chacha20poly1305.c | 3 +-
> crypto/gcm.c | 3 +-
> crypto/hash.h | 14 +-
> crypto/shash.c | 205 +--------
> crypto/testmgr.c | 9 +-
> .../crypto/allwinner/sun4i-ss/sun4i-ss-core.c | 2 -
> .../crypto/allwinner/sun8i-ce/sun8i-ce-core.c | 6 -
> .../crypto/allwinner/sun8i-ss/sun8i-ss-core.c | 5 -
> drivers/crypto/atmel-sha.c | 2 -
> drivers/crypto/axis/artpec6_crypto.c | 3 -
> drivers/crypto/chelsio/chcr_algo.c | 9 +-
> drivers/crypto/mxs-dcp.c | 2 -
> drivers/crypto/omap-sham.c | 16 +-
> drivers/crypto/rockchip/rk3288_crypto_ahash.c | 3 -
> drivers/crypto/s5p-sss.c | 6 -
> drivers/crypto/starfive/jh7110-hash.c | 13 +-
> drivers/crypto/stm32/stm32-hash.c | 20 -
> drivers/crypto/talitos.c | 17 +-
> include/crypto/algapi.h | 5 -
> include/crypto/hash.h | 74 +---
> include/crypto/internal/hash.h | 9 +-
> include/linux/crypto.h | 27 +-
> net/ipv4/ah4.c | 17 +-
> net/ipv6/ah6.c | 17 +-
> 29 files changed, 339 insertions(+), 593 deletions(-)
>
>
> base-commit: a2786e8bdd0242d7f00abf452a572de7464d177b
> prerequisite-patch-id: e447f81a392f2f3955206357d72032cf691c7e11
> prerequisite-patch-id: 71947e05e23fb176da3ca898720b9e3332e891d7
> prerequisite-patch-id: 98d070bdaf3cfaf88553ab707cc3bfe85371c006
> prerequisite-patch-id: 9e4287b71c1129edb1ba162e2a1f641a9ac4385f
> prerequisite-patch-id: 22a4cda4ae529854e55627c55d3f35b035871f3b
> prerequisite-patch-id: f67b194e37338a4715850686b2f02bbf0a47cbe1
> prerequisite-patch-id: bcb547f4c9be4b022b824f9bff6b919b2d37d60f
> prerequisite-patch-id: 20a8c2663a94c2d49217c5158a6bc588881fb9ad
> prerequisite-patch-id: e45e43c487d75c87fd713a5ef57a584cf947950e
> prerequisite-patch-id: bb211c1b59f73b22319aee6fafd14b07bc5d1460
> prerequisite-patch-id: 5f033ce643ba7d1f219dee490abd21e1e0958a51
> prerequisite-patch-id: 2173122570085246d5f4e5d3c4a920f7b7c528f9
> prerequisite-patch-id: 3fe1bc3b93e9502f874c485c5f2e39eec4899222
> prerequisite-patch-id: 982ed5e31a6616f9788d4641c3757342e9f15576
> prerequisite-patch-id: 6a207af4a7044cc47ab3f797e9c865fdbdb5d20c
> prerequisite-patch-id: f34ad579025354af65a73c1497dc967e2e834a55
> prerequisite-patch-id: 5ad384179da558ff3359baabda588731ed2e90a4
> prerequisite-patch-id: d3d243977afb4f574fb289eddf0e71becda1ae2b

All applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-29 16:40:00

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH 09/30] crypto: talitos - remove unnecessary alignmask for ahashes



On 22/10/2023 at 10:10, Eric Biggers wrote:
> From: Eric Biggers <[email protected]>
>
> The crypto API's support for alignmasks for ahash algorithms is nearly
> useless, as its only effect is to cause the API to align the key and
> result buffers. The drivers that happen to be specifying an alignmask
> for ahash rarely actually need it. When they do, it's easily fixable,
> especially considering that these buffers cannot be used for DMA.
>
> In preparation for removing alignmask support from ahash, this patch
> makes the talitos driver no longer use it. This driver didn't actually
> rely on it; it only writes to the result buffer in
> common_nonsnoop_hash_unmap(), simply using memcpy(). And this driver's
> "ahash_setkey()" function does not assume any alignment for the key
> buffer.

I can't really see the link between your explanation and commit
c9cca7034b34 ("crypto: talitos - Align SEC1 accesses to 32 bits
boundaries.").

Was that commit wrong ?

Christophe


>
> Signed-off-by: Eric Biggers <[email protected]>
> ---
> drivers/crypto/talitos.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
> index 4ca4fbd227bce..e8f710d87007b 100644
> --- a/drivers/crypto/talitos.c
> +++ b/drivers/crypto/talitos.c
> @@ -3252,21 +3252,21 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
> dev_err(dev, "unknown algorithm type %d\n", t_alg->algt.type);
> devm_kfree(dev, t_alg);
> return ERR_PTR(-EINVAL);
> }
>
> alg->cra_module = THIS_MODULE;
> if (t_alg->algt.priority)
> alg->cra_priority = t_alg->algt.priority;
> else
> alg->cra_priority = TALITOS_CRA_PRIORITY;
> - if (has_ftr_sec1(priv))
> + if (has_ftr_sec1(priv) && t_alg->algt.type != CRYPTO_ALG_TYPE_AHASH)
> alg->cra_alignmask = 3;
> else
> alg->cra_alignmask = 0;
> alg->cra_ctxsize = sizeof(struct talitos_ctx);
> alg->cra_flags |= CRYPTO_ALG_KERN_DRIVER_ONLY;
>
> t_alg->dev = dev;
>
> return t_alg;
> }

2023-11-29 22:44:40

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 09/30] crypto: talitos - remove unnecessary alignmask for ahashes

On Wed, Nov 29, 2023 at 03:00:48PM +0000, Christophe Leroy wrote:
>
>
> On 22/10/2023 at 10:10, Eric Biggers wrote:
> > From: Eric Biggers <[email protected]>
> >
> > The crypto API's support for alignmasks for ahash algorithms is nearly
> > useless, as its only effect is to cause the API to align the key and
> > result buffers. The drivers that happen to be specifying an alignmask
> > for ahash rarely actually need it. When they do, it's easily fixable,
> > especially considering that these buffers cannot be used for DMA.
> >
> > In preparation for removing alignmask support from ahash, this patch
> > makes the talitos driver no longer use it. This driver didn't actually
> > rely on it; it only writes to the result buffer in
> > common_nonsnoop_hash_unmap(), simply using memcpy(). And this driver's
> > "ahash_setkey()" function does not assume any alignment for the key
> > buffer.
>
> I can't really see the link between your explanation and commit
> c9cca7034b34 ("crypto: talitos - Align SEC1 accesses to 32 bits
> boundaries.").
>
> Was that commit wrong ?
>
> Christophe

Commit c9cca7034b34 ("crypto: talitos - Align SEC1 accesses to 32 bits
boundaries.") added an alignmask to all algorithm types: skcipher, aead, and
ahash. The commit did not explain why it was needed for each algorithm type,
and its true necessity may have varied by algorithm type. In the case of
ahashes, the alignmask may have been needed originally, but commit 7a6eda5b8d9d
("crypto: talitos - fix hash result for VMAP_STACK") made it unnecessary.

- Eric