2019-07-30 12:09:26

by Iuliana Prodan

Subject: [PATCH v4 00/14] crypto: caam - fixes for kernel v5.3

The series solves:
- the failures found with fuzz testing;
- resources clean-up on caampkc/caamrng exit path.

The first 10 patches solve the issues found with
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled.
They modify the drivers to provide a valid error (and not the hardware
error ID) to the user, via completion callbacks.
They check key length, assoclen, authsize and input size, returning
-EINVAL when these values are not valid, as the fuzz tests expect.
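
For reference, the shape of these checks is roughly the following
(illustrative sketch only, not the exact driver code; the function
name is made up):

static int example_skcipher_setkey(struct crypto_skcipher *skcipher,
                                   const u8 *key, unsigned int keylen)
{
        /* reject unsupported key lengths up front, as the tests expect */
        if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 &&
            keylen != AES_KEYSIZE_256)
                return -EINVAL;

        /* ... program the shared descriptors with the new key ... */
        return 0;
}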

The next 4 patches check the algorithm registration for the caampkc
module and unregister the algorithm only if the registration was
successful. Also, for caampkc/caamrng, the exit point function is
executed only if the registration succeeded, to avoid double freeing
of resources in case the initialization function failed.

This series depends on:
https://patchwork.kernel.org/project/linux-crypto/list/?series=152649

Changes since v3:
- use the newly renamed helper functions from the crypto API to
validate the inputs;
- update the rfc4106 shared descriptor by moving the erratum workaround.

Horia Geantă (5):
crypto: caam/qi - fix error handling in ERN handler
crypto: caam - fix return code in completion callbacks
crypto: caam - update IV only when crypto operation succeeds
crypto: caam - keep both virtual and dma key addresses
crypto: caam - fix MDHA key derivation for certain user key lengths

Iuliana Prodan (9):
crypto: caam - check key length
crypto: caam - check authsize
crypto: caam - check assoclen
crypto: caam - check zero-length input
crypto: caam - update rfc4106 sh desc to support zero length input
crypto: caam - free resources in case caam_rng registration failed
crypto: caam - execute module exit point only if necessary
crypto: caam - unregister algorithm only if the registration succeeded
crypto: caam - change return value in case CAAM has no MDHA

drivers/crypto/caam/Kconfig | 2 +
drivers/crypto/caam/caamalg.c | 227 +++++++++++++++----------
drivers/crypto/caam/caamalg_desc.c | 47 ++++--
drivers/crypto/caam/caamalg_desc.h | 2 +-
drivers/crypto/caam/caamalg_qi.c | 225 +++++++++++++++----------
drivers/crypto/caam/caamalg_qi2.c | 320 +++++++++++++++++++++++-------------
drivers/crypto/caam/caamhash.c | 114 ++++++++-----
drivers/crypto/caam/caamhash_desc.c | 5 +-
drivers/crypto/caam/caamhash_desc.h | 2 +-
drivers/crypto/caam/caampkc.c | 80 ++++++---
drivers/crypto/caam/caamrng.c | 17 +-
drivers/crypto/caam/desc_constr.h | 34 ++--
drivers/crypto/caam/error.c | 61 ++++---
drivers/crypto/caam/error.h | 2 +-
drivers/crypto/caam/key_gen.c | 14 +-
drivers/crypto/caam/qi.c | 10 +-
drivers/crypto/caam/regs.h | 1 +
17 files changed, 748 insertions(+), 415 deletions(-)

--
2.1.0


2019-07-30 12:09:51

by Iuliana Prodan

Subject: [PATCH v4 13/14] crypto: caam - unregister algorithm only if the registration succeeded

To know whether the registration succeeded, add a new struct,
caam_akcipher_alg, that also keeps the registration status.
This status is set in caam_pkc_init and checked in caam_pkc_exit,
so the algorithm is unregistered only if it was actually registered.

Fixes: 1b46c90c8e00 ("crypto: caam - convert top level drivers to libraries")
Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caampkc.c | 49 ++++++++++++++++++++++++++-----------------
1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index e00a470..5b12b23 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -35,6 +35,11 @@ static u8 *zero_buffer;
*/
static bool init_done;

+struct caam_akcipher_alg {
+ struct akcipher_alg akcipher;
+ bool registered;
+};
+
static void rsa_io_unmap(struct device *dev, struct rsa_edesc *edesc,
struct akcipher_request *req)
{
@@ -1058,22 +1063,24 @@ static void caam_rsa_exit_tfm(struct crypto_akcipher *tfm)
caam_jr_free(ctx->dev);
}

-static struct akcipher_alg caam_rsa = {
- .encrypt = caam_rsa_enc,
- .decrypt = caam_rsa_dec,
- .set_pub_key = caam_rsa_set_pub_key,
- .set_priv_key = caam_rsa_set_priv_key,
- .max_size = caam_rsa_max_size,
- .init = caam_rsa_init_tfm,
- .exit = caam_rsa_exit_tfm,
- .reqsize = sizeof(struct caam_rsa_req_ctx),
- .base = {
- .cra_name = "rsa",
- .cra_driver_name = "rsa-caam",
- .cra_priority = 3000,
- .cra_module = THIS_MODULE,
- .cra_ctxsize = sizeof(struct caam_rsa_ctx),
- },
+static struct caam_akcipher_alg caam_rsa = {
+ .akcipher = {
+ .encrypt = caam_rsa_enc,
+ .decrypt = caam_rsa_dec,
+ .set_pub_key = caam_rsa_set_pub_key,
+ .set_priv_key = caam_rsa_set_priv_key,
+ .max_size = caam_rsa_max_size,
+ .init = caam_rsa_init_tfm,
+ .exit = caam_rsa_exit_tfm,
+ .reqsize = sizeof(struct caam_rsa_req_ctx),
+ .base = {
+ .cra_name = "rsa",
+ .cra_driver_name = "rsa-caam",
+ .cra_priority = 3000,
+ .cra_module = THIS_MODULE,
+ .cra_ctxsize = sizeof(struct caam_rsa_ctx),
+ },
+ }
};

/* Public Key Cryptography module initialization handler */
@@ -1101,13 +1108,15 @@ int caam_pkc_init(struct device *ctrldev)
if (!zero_buffer)
return -ENOMEM;

- err = crypto_register_akcipher(&caam_rsa);
+ err = crypto_register_akcipher(&caam_rsa.akcipher);
+
if (err) {
kfree(zero_buffer);
dev_warn(ctrldev, "%s alg registration failed\n",
- caam_rsa.base.cra_driver_name);
+ caam_rsa.akcipher.base.cra_driver_name);
} else {
init_done = true;
+ caam_rsa.registered = true;
dev_info(ctrldev, "caam pkc algorithms registered in /proc/crypto\n");
}

@@ -1119,6 +1128,8 @@ void caam_pkc_exit(void)
if (!init_done)
return;

+ if (caam_rsa.registered)
+ crypto_unregister_akcipher(&caam_rsa.akcipher);
+
kfree(zero_buffer);
- crypto_unregister_akcipher(&caam_rsa);
}
--
2.1.0

2019-07-30 12:10:01

by Iuliana Prodan

Subject: [PATCH v4 14/14] crypto: caam - change return value in case CAAM has no MDHA

To be consistent with other CAAM modules, caamhash should return 0
instead of -ENODEV in case CAAM has no MDHA.

Since commit 1b46c90c8e00 ("crypto: caam - convert top level drivers to
libraries"), the value returned by the entry point is never checked and
the exit point is always executed.

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamhash.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 2ce5a79..262be3a 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -2007,7 +2007,7 @@ int caam_algapi_hash_init(struct device *ctrldev)
* is not present.
*/
if (!md_inst)
- return -ENODEV;
+ return 0;

/* Limit digest size based on LP256 */
if (md_vid == CHA_VER_VID_MD_LP256)
--
2.1.0

2019-07-30 12:10:01

by Iuliana Prodan

Subject: [PATCH v4 12/14] crypto: caam - execute module exit point only if necessary

Commit 1b46c90c8e00 ("crypto: caam - convert top level drivers to libraries")
changed entry and exit points behavior for caamalg,
caamalg_qi, caamalg_qi2, caamhash, caampkc, caamrng.

For example, caam_pkc_init() and caam_pkc_exit() were previously the
module entry/exit points, meaning that if caam_pkc_init() failed,
caam_pkc_exit() was never called. After the mentioned commit,
caam_pkc_init() and caam_pkc_exit() are called manually, from jr.c,
and caam_pkc_exit() is called unconditionally, even if caam_pkc_init()
failed.

Add a global variable that keeps track of whether the algorithm
registration (and resource allocation) succeeded.
The exit point of the caampkc/caamrng modules is executed only if the
registration was successful, thus avoiding a double free of resources
in case the algorithm registration failed.

Fixes: 1b46c90c8e00 ("crypto: caam - convert top level drivers to libraries")
Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caampkc.c | 11 +++++++++++
drivers/crypto/caam/caamrng.c | 14 +++++++++++++-
2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index e05d975..e00a470 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -29,6 +29,12 @@
/* buffer filled with zeros, used for padding */
static u8 *zero_buffer;

+/*
+ * variable used to avoid double free of resources in case
+ * algorithm registration was unsuccessful
+ */
+static bool init_done;
+
static void rsa_io_unmap(struct device *dev, struct rsa_edesc *edesc,
struct akcipher_request *req)
{
@@ -1076,6 +1082,7 @@ int caam_pkc_init(struct device *ctrldev)
struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
u32 pk_inst;
int err;
+ init_done = false;

/* Determine public key hardware accelerator presence. */
if (priv->era < 10)
@@ -1100,6 +1107,7 @@ int caam_pkc_init(struct device *ctrldev)
dev_warn(ctrldev, "%s alg registration failed\n",
caam_rsa.base.cra_driver_name);
} else {
+ init_done = true;
dev_info(ctrldev, "caam pkc algorithms registered in /proc/crypto\n");
}

@@ -1108,6 +1116,9 @@ int caam_pkc_init(struct device *ctrldev)

void caam_pkc_exit(void)
{
+ if (!init_done)
+ return;
+
kfree(zero_buffer);
crypto_unregister_akcipher(&caam_rsa);
}
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 54c32d5..7fbda1b 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -80,6 +80,12 @@ struct caam_rng_ctx {

static struct caam_rng_ctx *rng_ctx;

+/*
+ * Variable used to avoid double free of resources in case
+ * algorithm registration was unsuccessful
+ */
+static bool init_done;
+
static inline void rng_unmap_buf(struct device *jrdev, struct buf_data *bd)
{
if (bd->addr)
@@ -296,6 +302,9 @@ static struct hwrng caam_rng = {

void caam_rng_exit(void)
{
+ if (!init_done)
+ return;
+
caam_jr_free(rng_ctx->jrdev);
hwrng_unregister(&caam_rng);
kfree(rng_ctx);
@@ -307,6 +316,7 @@ int caam_rng_init(struct device *ctrldev)
u32 rng_inst;
struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
int err;
+ init_done = false;

/* Check for an instantiated RNG before registration */
if (priv->era < 10)
@@ -335,8 +345,10 @@ int caam_rng_init(struct device *ctrldev)
dev_info(dev, "registering rng-caam\n");

err = hwrng_register(&caam_rng);
- if (!err)
+ if (!err) {
+ init_done = true;
return err;
+ }

free_rng_ctx:
kfree(rng_ctx);
--
2.1.0

2019-07-30 12:10:28

by Iuliana Prodan

Subject: [PATCH v4 09/14] crypto: caam - keep both virtual and dma key addresses

From: Horia Geantă <[email protected]>

Update alginfo struct to keep both virtual and dma key addresses,
so that descriptors have them at hand.
One example where this is needed is the xcbc(aes) shared descriptors,
which are updated in the current patch.
Another example is the upcoming fix for DKP.

Signed-off-by: Horia Geantă <[email protected]>
---
drivers/crypto/caam/caamhash.c | 37 +++++++++++++++++--------------------
drivers/crypto/caam/caamhash_desc.c | 5 ++---
drivers/crypto/caam/caamhash_desc.h | 2 +-
drivers/crypto/caam/desc_constr.h | 10 ++++------
4 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index f3fef79..2c2e378 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -95,7 +95,6 @@ struct caam_hash_ctx {
dma_addr_t sh_desc_update_first_dma;
dma_addr_t sh_desc_fin_dma;
dma_addr_t sh_desc_digest_dma;
- dma_addr_t key_dma;
enum dma_data_direction dir;
struct device *jrdev;
int ctx_len;
@@ -282,13 +281,10 @@ static int axcbc_set_sh_desc(struct crypto_ahash *ahash)
struct device *jrdev = ctx->jrdev;
u32 *desc;

- /* key is loaded from memory for UPDATE and FINALIZE states */
- ctx->adata.key_dma = ctx->key_dma;
-
/* shared descriptor for ahash_update */
desc = ctx->sh_desc_update;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_UPDATE,
- ctx->ctx_len, ctx->ctx_len, 0);
+ ctx->ctx_len, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_update_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("axcbc update shdesc@" __stringify(__LINE__)" : ",
@@ -298,7 +294,7 @@ static int axcbc_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for ahash_{final,finup} */
desc = ctx->sh_desc_fin;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_FINALIZE,
- digestsize, ctx->ctx_len, 0);
+ digestsize, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_fin_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("axcbc finup shdesc@" __stringify(__LINE__)" : ",
@@ -311,7 +307,7 @@ static int axcbc_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for first invocation of ahash_update */
desc = ctx->sh_desc_update_first;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_INIT, ctx->ctx_len,
- ctx->ctx_len, ctx->key_dma);
+ ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_update_first_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("axcbc update first shdesc@" __stringify(__LINE__)
@@ -321,7 +317,7 @@ static int axcbc_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for ahash_digest */
desc = ctx->sh_desc_digest;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_INITFINAL,
- digestsize, ctx->ctx_len, 0);
+ digestsize, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_digest_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("axcbc digest shdesc@" __stringify(__LINE__)" : ",
@@ -340,7 +336,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for ahash_update */
desc = ctx->sh_desc_update;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_UPDATE,
- ctx->ctx_len, ctx->ctx_len, 0);
+ ctx->ctx_len, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_update_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("acmac update shdesc@" __stringify(__LINE__)" : ",
@@ -350,7 +346,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for ahash_{final,finup} */
desc = ctx->sh_desc_fin;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_FINALIZE,
- digestsize, ctx->ctx_len, 0);
+ digestsize, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_fin_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("acmac finup shdesc@" __stringify(__LINE__)" : ",
@@ -360,7 +356,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for first invocation of ahash_update */
desc = ctx->sh_desc_update_first;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_INIT, ctx->ctx_len,
- ctx->ctx_len, 0);
+ ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_update_first_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("acmac update first shdesc@" __stringify(__LINE__)
@@ -370,7 +366,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
/* shared descriptor for ahash_digest */
desc = ctx->sh_desc_digest;
cnstr_shdsc_sk_hash(desc, &ctx->adata, OP_ALG_AS_INITFINAL,
- digestsize, ctx->ctx_len, 0);
+ digestsize, ctx->ctx_len);
dma_sync_single_for_device(jrdev, ctx->sh_desc_digest_dma,
desc_bytes(desc), ctx->dir);
print_hex_dump_debug("acmac digest shdesc@" __stringify(__LINE__)" : ",
@@ -507,7 +503,8 @@ static int axcbc_setkey(struct crypto_ahash *ahash, const u8 *key,
}

memcpy(ctx->key, key, keylen);
- dma_sync_single_for_device(jrdev, ctx->key_dma, keylen, DMA_TO_DEVICE);
+ dma_sync_single_for_device(jrdev, ctx->adata.key_dma, keylen,
+ DMA_TO_DEVICE);
ctx->adata.keylen = keylen;

print_hex_dump_debug("axcbc ctx.key@" __stringify(__LINE__)" : ",
@@ -1831,11 +1828,11 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
ctx->adata.algtype = OP_TYPE_CLASS1_ALG | caam_hash->alg_type;
ctx->ctx_len = 48;

- ctx->key_dma = dma_map_single_attrs(ctx->jrdev, ctx->key,
- ARRAY_SIZE(ctx->key),
- DMA_BIDIRECTIONAL,
- DMA_ATTR_SKIP_CPU_SYNC);
- if (dma_mapping_error(ctx->jrdev, ctx->key_dma)) {
+ ctx->adata.key_dma = dma_map_single_attrs(ctx->jrdev, ctx->key,
+ ARRAY_SIZE(ctx->key),
+ DMA_BIDIRECTIONAL,
+ DMA_ATTR_SKIP_CPU_SYNC);
+ if (dma_mapping_error(ctx->jrdev, ctx->adata.key_dma)) {
dev_err(ctx->jrdev, "unable to map key\n");
caam_jr_free(ctx->jrdev);
return -ENOMEM;
@@ -1859,7 +1856,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
dev_err(ctx->jrdev, "unable to map shared descriptors\n");

if (is_xcbc_aes(caam_hash->alg_type))
- dma_unmap_single_attrs(ctx->jrdev, ctx->key_dma,
+ dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
ARRAY_SIZE(ctx->key),
DMA_BIDIRECTIONAL,
DMA_ATTR_SKIP_CPU_SYNC);
@@ -1895,7 +1892,7 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
offsetof(struct caam_hash_ctx, key),
ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
if (is_xcbc_aes(ctx->adata.algtype))
- dma_unmap_single_attrs(ctx->jrdev, ctx->key_dma,
+ dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
ARRAY_SIZE(ctx->key), DMA_BIDIRECTIONAL,
DMA_ATTR_SKIP_CPU_SYNC);
caam_jr_free(ctx->jrdev);
diff --git a/drivers/crypto/caam/caamhash_desc.c b/drivers/crypto/caam/caamhash_desc.c
index 71d0183..78383d7 100644
--- a/drivers/crypto/caam/caamhash_desc.c
+++ b/drivers/crypto/caam/caamhash_desc.c
@@ -83,10 +83,9 @@ EXPORT_SYMBOL(cnstr_shdsc_ahash);
* @state: algorithm state OP_ALG_AS_{INIT, FINALIZE, INITFINALIZE, UPDATE}
* @digestsize: algorithm's digest size
* @ctx_len: size of Context Register
- * @key_dma: I/O Virtual Address of the key
*/
void cnstr_shdsc_sk_hash(u32 * const desc, struct alginfo *adata, u32 state,
- int digestsize, int ctx_len, dma_addr_t key_dma)
+ int digestsize, int ctx_len)
{
u32 *skip_key_load;

@@ -136,7 +135,7 @@ void cnstr_shdsc_sk_hash(u32 * const desc, struct alginfo *adata, u32 state,
LDST_SRCDST_BYTE_CONTEXT);
if (is_xcbc_aes(adata->algtype) && state == OP_ALG_AS_INIT)
/* Save K1 */
- append_fifo_store(desc, key_dma, adata->keylen,
+ append_fifo_store(desc, adata->key_dma, adata->keylen,
LDST_CLASS_1_CCB | FIFOST_TYPE_KEY_KEK);
}
EXPORT_SYMBOL(cnstr_shdsc_sk_hash);
diff --git a/drivers/crypto/caam/caamhash_desc.h b/drivers/crypto/caam/caamhash_desc.h
index 6947ee1..4f369b8 100644
--- a/drivers/crypto/caam/caamhash_desc.h
+++ b/drivers/crypto/caam/caamhash_desc.h
@@ -25,5 +25,5 @@ void cnstr_shdsc_ahash(u32 * const desc, struct alginfo *adata, u32 state,
int digestsize, int ctx_len, bool import_ctx, int era);

void cnstr_shdsc_sk_hash(u32 * const desc, struct alginfo *adata, u32 state,
- int digestsize, int ctx_len, dma_addr_t key_dma);
+ int digestsize, int ctx_len);
#endif /* _CAAMHASH_DESC_H_ */
diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h
index 5988a26..8154174 100644
--- a/drivers/crypto/caam/desc_constr.h
+++ b/drivers/crypto/caam/desc_constr.h
@@ -457,8 +457,8 @@ do { \
* functions where it is used.
* @keylen: length of the provided algorithm key, in bytes
* @keylen_pad: padded length of the provided algorithm key, in bytes
- * @key: address where algorithm key resides; virtual address if key_inline
- * is true, dma (bus) address if key_inline is false.
+ * @key_dma: dma (bus) address where algorithm key resides
+ * @key_virt: virtual address where algorithm key resides
* @key_inline: true - key can be inlined in the descriptor; false - key is
* referenced by the descriptor
*/
@@ -466,10 +466,8 @@ struct alginfo {
u32 algtype;
unsigned int keylen;
unsigned int keylen_pad;
- union {
- dma_addr_t key_dma;
- const void *key_virt;
- };
+ dma_addr_t key_dma;
+ const void *key_virt;
bool key_inline;
};

--
2.1.0

2019-07-30 12:10:28

by Iuliana Prodan

Subject: [PATCH v4 10/14] crypto: caam - fix MDHA key derivation for certain user key lengths

From: Horia Geantă <[email protected]>

Fuzz testing uncovered an issue when |user key| > |derived key|.
Derived key generation has to be fixed in two cases:

1. Era >= 6 (DKP is available)
DKP cannot be used with immediate input key if |user key| > |derived key|,
since the resulting descriptor (after DKP execution) would be invalid,
leaving a few bytes of the user key in the descriptor buffer, where they
are interpreted as incorrect opcodes.

Fix DKP usage both in standalone hmac and in authenc algorithms.
For authenc, the logic is simplified by always storing both virtual
and dma key addresses.

2. Era < 6
The same case (|user key| > |derived key|) fails when DKP
is not available.
Make sure gen_split_key() dma maps max(|user key|, |derived key|),
since this is an in-place (bidirectional) operation.
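
To make the first case concrete, a rough illustration (with assumed
sizes, not taken from the patch itself):

/*
 * Assumed example: hmac(md5), whose padded split (derived) key is
 * 32 bytes, programmed with a 40-byte user key.
 *
 * DKP<imm,imm>: the 40-byte user key is inlined in the shared
 * descriptor (10 words) and DKP overwrites it in place with the
 * 32-byte derived key (8 words); the last 2 words still hold user
 * key bytes and are later fetched as (invalid) opcodes.
 *
 * DKP<ptr,imm>: the user key is referenced by its dma address via
 * append_ptr(), so only the derived key ends up in the descriptor
 * buffer.
 */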

Signed-off-by: Horia Geantă <[email protected]>
---
drivers/crypto/caam/caamalg.c | 42 +++++++-----------------
drivers/crypto/caam/caamalg_qi.c | 42 +++++++-----------------
drivers/crypto/caam/caamalg_qi2.c | 67 +++++++++++++++++++++++++++------------
drivers/crypto/caam/caamhash.c | 53 ++++++++++++++++++++++---------
drivers/crypto/caam/desc_constr.h | 24 ++++++++++----
drivers/crypto/caam/key_gen.c | 9 +++---
6 files changed, 132 insertions(+), 105 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 9f83b55..f785f55 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -205,6 +205,18 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ctx->cdata.keylen - CTR_RFC3686_NONCE_SIZE);
}

+ /*
+ * In case |user key| > |derived key|, using DKP<imm,imm>
+ * would result in invalid opcodes (last bytes of user key) in
+ * the resulting descriptor. Use DKP<ptr,imm> instead => both
+ * virtual and dma key addresses are needed.
+ */
+ ctx->adata.key_virt = ctx->key;
+ ctx->adata.key_dma = ctx->key_dma;
+
+ ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+ ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
data_len[0] = ctx->adata.keylen_pad;
data_len[1] = ctx->cdata.keylen;

@@ -221,16 +233,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -253,16 +255,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -287,16 +279,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index 5ba620d..7e9c3a0 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -105,6 +105,18 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ctx->cdata.keylen - CTR_RFC3686_NONCE_SIZE);
}

+ /*
+ * In case |user key| > |derived key|, using DKP<imm,imm> would result
+ * in invalid opcodes (last bytes of user key) in the resulting
+ * descriptor. Use DKP<ptr,imm> instead => both virtual and dma key
+ * addresses are needed.
+ */
+ ctx->adata.key_virt = ctx->key;
+ ctx->adata.key_dma = ctx->key_dma;
+
+ ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+ ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
data_len[0] = ctx->adata.keylen_pad;
data_len[1] = ctx->cdata.keylen;

@@ -118,16 +130,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -143,16 +145,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -171,16 +163,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index d5f6517..aea0d05 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -198,6 +198,18 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ctx->cdata.keylen - CTR_RFC3686_NONCE_SIZE);
}

+ /*
+ * In case |user key| > |derived key|, using DKP<imm,imm> would result
+ * in invalid opcodes (last bytes of user key) in the resulting
+ * descriptor. Use DKP<ptr,imm> instead => both virtual and dma key
+ * addresses are needed.
+ */
+ ctx->adata.key_virt = ctx->key;
+ ctx->adata.key_dma = ctx->key_dma;
+
+ ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+ ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
data_len[0] = ctx->adata.keylen_pad;
data_len[1] = ctx->cdata.keylen;

@@ -209,16 +221,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -247,16 +249,6 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ARRAY_SIZE(data_len)) < 0)
return -EINVAL;

- if (inl_mask & 1)
- ctx->adata.key_virt = ctx->key;
- else
- ctx->adata.key_dma = ctx->key_dma;
-
- if (inl_mask & 2)
- ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
- else
- ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
-
ctx->adata.key_inline = !!(inl_mask & 1);
ctx->cdata.key_inline = !!(inl_mask & 2);

@@ -2998,6 +2990,7 @@ enum hash_optype {
/**
* caam_hash_ctx - ahash per-session context
* @flc: Flow Contexts array
+ * @key: authentication key
* @flc_dma: I/O virtual addresses of the Flow Contexts
* @dev: dpseci device
* @ctx_len: size of Context Register
@@ -3005,6 +2998,7 @@ enum hash_optype {
*/
struct caam_hash_ctx {
struct caam_flc flc[HASH_NUM_OP];
+ u8 key[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
dma_addr_t flc_dma[HASH_NUM_OP];
struct device *dev;
int ctx_len;
@@ -3305,6 +3299,19 @@ static int ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
ctx->adata.key_virt = key;
ctx->adata.key_inline = true;

+ /*
+ * In case |user key| > |derived key|, using DKP<imm,imm> would result
+ * in invalid opcodes (last bytes of user key) in the resulting
+ * descriptor. Use DKP<ptr,imm> instead => both virtual and dma key
+ * addresses are needed.
+ */
+ if (keylen > ctx->adata.keylen_pad) {
+ memcpy(ctx->key, key, keylen);
+ dma_sync_single_for_device(ctx->dev, ctx->adata.key_dma,
+ ctx->adata.keylen_pad,
+ DMA_TO_DEVICE);
+ }
+
ret = ahash_set_sh_desc(ahash);
kfree(hashed_key);
return ret;
@@ -4535,11 +4542,27 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)

ctx->dev = caam_hash->dev;

+ if (alg->setkey) {
+ ctx->adata.key_dma = dma_map_single_attrs(ctx->dev, ctx->key,
+ ARRAY_SIZE(ctx->key),
+ DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
+ if (dma_mapping_error(ctx->dev, ctx->adata.key_dma)) {
+ dev_err(ctx->dev, "unable to map key\n");
+ return -ENOMEM;
+ }
+ }
+
dma_addr = dma_map_single_attrs(ctx->dev, ctx->flc, sizeof(ctx->flc),
DMA_BIDIRECTIONAL,
DMA_ATTR_SKIP_CPU_SYNC);
if (dma_mapping_error(ctx->dev, dma_addr)) {
dev_err(ctx->dev, "unable to map shared descriptors\n");
+ if (ctx->adata.key_dma)
+ dma_unmap_single_attrs(ctx->dev, ctx->adata.key_dma,
+ ARRAY_SIZE(ctx->key),
+ DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
return -ENOMEM;
}

@@ -4565,6 +4588,10 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)

dma_unmap_single_attrs(ctx->dev, ctx->flc_dma[0], sizeof(ctx->flc),
DMA_BIDIRECTIONAL, DMA_ATTR_SKIP_CPU_SYNC);
+ if (ctx->adata.key_dma)
+ dma_unmap_single_attrs(ctx->dev, ctx->adata.key_dma,
+ ARRAY_SIZE(ctx->key), DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
}

static struct caam_hash_alg *caam_hash_alloc(struct device *dev,
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 2c2e378..2ce5a79 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -96,6 +96,7 @@ struct caam_hash_ctx {
dma_addr_t sh_desc_fin_dma;
dma_addr_t sh_desc_digest_dma;
enum dma_data_direction dir;
+ enum dma_data_direction key_dir;
struct device *jrdev;
int ctx_len;
struct alginfo adata;
@@ -476,6 +477,18 @@ static int ahash_setkey(struct crypto_ahash *ahash,
goto bad_free_key;

memcpy(ctx->key, key, keylen);
+
+ /*
+ * In case |user key| > |derived key|, using DKP<imm,imm>
+ * would result in invalid opcodes (last bytes of user key) in
+ * the resulting descriptor. Use DKP<ptr,imm> instead => both
+ * virtual and dma key addresses are needed.
+ */
+ if (keylen > ctx->adata.keylen_pad)
+ dma_sync_single_for_device(ctx->jrdev,
+ ctx->adata.key_dma,
+ ctx->adata.keylen_pad,
+ DMA_TO_DEVICE);
} else {
ret = gen_split_key(ctx->jrdev, ctx->key, &ctx->adata, key,
keylen, CAAM_MAX_HASH_KEY_SIZE);
@@ -1825,40 +1838,50 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)

if (is_xcbc_aes(caam_hash->alg_type)) {
ctx->dir = DMA_TO_DEVICE;
+ ctx->key_dir = DMA_BIDIRECTIONAL;
ctx->adata.algtype = OP_TYPE_CLASS1_ALG | caam_hash->alg_type;
ctx->ctx_len = 48;
-
- ctx->adata.key_dma = dma_map_single_attrs(ctx->jrdev, ctx->key,
- ARRAY_SIZE(ctx->key),
- DMA_BIDIRECTIONAL,
- DMA_ATTR_SKIP_CPU_SYNC);
- if (dma_mapping_error(ctx->jrdev, ctx->adata.key_dma)) {
- dev_err(ctx->jrdev, "unable to map key\n");
- caam_jr_free(ctx->jrdev);
- return -ENOMEM;
- }
} else if (is_cmac_aes(caam_hash->alg_type)) {
ctx->dir = DMA_TO_DEVICE;
+ ctx->key_dir = DMA_NONE;
ctx->adata.algtype = OP_TYPE_CLASS1_ALG | caam_hash->alg_type;
ctx->ctx_len = 32;
} else {
- ctx->dir = priv->era >= 6 ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
+ if (priv->era >= 6) {
+ ctx->dir = DMA_BIDIRECTIONAL;
+ ctx->key_dir = alg->setkey ? DMA_TO_DEVICE : DMA_NONE;
+ } else {
+ ctx->dir = DMA_TO_DEVICE;
+ ctx->key_dir = DMA_NONE;
+ }
ctx->adata.algtype = OP_TYPE_CLASS2_ALG | caam_hash->alg_type;
ctx->ctx_len = runninglen[(ctx->adata.algtype &
OP_ALG_ALGSEL_SUBMASK) >>
OP_ALG_ALGSEL_SHIFT];
}

+ if (ctx->key_dir != DMA_NONE) {
+ ctx->adata.key_dma = dma_map_single_attrs(ctx->jrdev, ctx->key,
+ ARRAY_SIZE(ctx->key),
+ ctx->key_dir,
+ DMA_ATTR_SKIP_CPU_SYNC);
+ if (dma_mapping_error(ctx->jrdev, ctx->adata.key_dma)) {
+ dev_err(ctx->jrdev, "unable to map key\n");
+ caam_jr_free(ctx->jrdev);
+ return -ENOMEM;
+ }
+ }
+
dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update,
offsetof(struct caam_hash_ctx, key),
ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
if (dma_mapping_error(ctx->jrdev, dma_addr)) {
dev_err(ctx->jrdev, "unable to map shared descriptors\n");

- if (is_xcbc_aes(caam_hash->alg_type))
+ if (ctx->key_dir != DMA_NONE)
dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
ARRAY_SIZE(ctx->key),
- DMA_BIDIRECTIONAL,
+ ctx->key_dir,
DMA_ATTR_SKIP_CPU_SYNC);

caam_jr_free(ctx->jrdev);
@@ -1891,9 +1914,9 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma,
offsetof(struct caam_hash_ctx, key),
ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
- if (is_xcbc_aes(ctx->adata.algtype))
+ if (ctx->key_dir != DMA_NONE)
dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
- ARRAY_SIZE(ctx->key), DMA_BIDIRECTIONAL,
+ ARRAY_SIZE(ctx->key), ctx->key_dir,
DMA_ATTR_SKIP_CPU_SYNC);
caam_jr_free(ctx->jrdev);
}
diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h
index 8154174..536f360 100644
--- a/drivers/crypto/caam/desc_constr.h
+++ b/drivers/crypto/caam/desc_constr.h
@@ -533,14 +533,26 @@ static inline void append_proto_dkp(u32 * const desc, struct alginfo *adata)
if (adata->key_inline) {
int words;

- append_operation(desc, OP_TYPE_UNI_PROTOCOL | protid |
- OP_PCL_DKP_SRC_IMM | OP_PCL_DKP_DST_IMM |
- adata->keylen);
- append_data(desc, adata->key_virt, adata->keylen);
+ if (adata->keylen > adata->keylen_pad) {
+ append_operation(desc, OP_TYPE_UNI_PROTOCOL | protid |
+ OP_PCL_DKP_SRC_PTR |
+ OP_PCL_DKP_DST_IMM | adata->keylen);
+ append_ptr(desc, adata->key_dma);
+
+ words = (ALIGN(adata->keylen_pad, CAAM_CMD_SZ) -
+ CAAM_PTR_SZ) / CAAM_CMD_SZ;
+ } else {
+ append_operation(desc, OP_TYPE_UNI_PROTOCOL | protid |
+ OP_PCL_DKP_SRC_IMM |
+ OP_PCL_DKP_DST_IMM | adata->keylen);
+ append_data(desc, adata->key_virt, adata->keylen);
+
+ words = (ALIGN(adata->keylen_pad, CAAM_CMD_SZ) -
+ ALIGN(adata->keylen, CAAM_CMD_SZ)) /
+ CAAM_CMD_SZ;
+ }

/* Reserve space in descriptor buffer for the derived key */
- words = (ALIGN(adata->keylen_pad, CAAM_CMD_SZ) -
- ALIGN(adata->keylen, CAAM_CMD_SZ)) / CAAM_CMD_SZ;
if (words)
(*desc) = cpu_to_caam32(caam32_to_cpu(*desc) + words);
} else {
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index c6f8375..5a851dd 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -48,18 +48,20 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
u32 *desc;
struct split_key_result result;
dma_addr_t dma_addr;
+ unsigned int local_max;
int ret = -ENOMEM;

adata->keylen = split_key_len(adata->algtype & OP_ALG_ALGSEL_MASK);
adata->keylen_pad = split_key_pad_len(adata->algtype &
OP_ALG_ALGSEL_MASK);
+ local_max = max(keylen, adata->keylen_pad);

dev_dbg(jrdev, "split keylen %d split keylen padded %d\n",
adata->keylen, adata->keylen_pad);
print_hex_dump_debug("ctx.key@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key_in, keylen, 1);

- if (adata->keylen_pad > max_keylen)
+ if (local_max > max_keylen)
return -EINVAL;

desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
@@ -70,8 +72,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,

memcpy(key_out, key_in, keylen);

- dma_addr = dma_map_single(jrdev, key_out, adata->keylen_pad,
- DMA_BIDIRECTIONAL);
+ dma_addr = dma_map_single(jrdev, key_out, local_max, DMA_BIDIRECTIONAL);
if (dma_mapping_error(jrdev, dma_addr)) {
dev_err(jrdev, "unable to map key memory\n");
goto out_free;
@@ -117,7 +118,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
adata->keylen_pad, 1);
}

- dma_unmap_single(jrdev, dma_addr, adata->keylen_pad, DMA_BIDIRECTIONAL);
+ dma_unmap_single(jrdev, dma_addr, local_max, DMA_BIDIRECTIONAL);
out_free:
kfree(desc);
return ret;
--
2.1.0

2019-07-30 12:10:31

by Iuliana Prodan

Subject: [PATCH v4 11/14] crypto: caam - free resources in case caam_rng registration failed

Check the return value of the hwrng registration for caam_rng and free
the resources in case of failure.

Fixes: e24f7c9e87d4 ("crypto: caam - hwrng support")
Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
Changes since v3:
- update Fixes tag with 12 characters of the SHA1.
---
drivers/crypto/caam/caamrng.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 561bcb5..54c32d5 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -333,7 +333,10 @@ int caam_rng_init(struct device *ctrldev)
goto free_rng_ctx;

dev_info(dev, "registering rng-caam\n");
- return hwrng_register(&caam_rng);
+
+ err = hwrng_register(&caam_rng);
+ if (!err)
+ return err;

free_rng_ctx:
kfree(rng_ctx);
--
2.1.0

2019-07-30 12:10:31

by Iuliana Prodan

Subject: [PATCH v4 02/14] crypto: caam - fix return code in completion callbacks

From: Horia Geantă <[email protected]>

Modify drivers to provide a valid errno (and not the HW error ID)
to the user, via completion callbacks.

A "valid errno" is currently not explicitly mentioned in the docs,
however the error code is expected to match the one returned by the
generic SW implementation.

Note: in most error cases caam/qi and caam/qi2 returned -EIO; align all
caam drivers to return -EINVAL.

While here, ratelimit prints triggered by fuzz testing, such that
console is not flooded.
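
The resulting callback pattern is roughly the following (simplified
sketch; example_done is a placeholder name, the real callbacks are
per-algorithm):

static void example_done(struct device *jrdev, u32 *desc, u32 err,
                         void *context)
{
        struct aead_request *req = context;
        int ecode = 0;

        if (err)
                /* caam_jr_strstatus() now returns -EBADMSG or -EINVAL */
                ecode = caam_jr_strstatus(jrdev, err);

        /* ... unmap and free the extended descriptor ... */

        aead_request_complete(req, ecode);      /* errno, not HW status */
}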

Signed-off-by: Horia Geantă <[email protected]>
Signed-off-by: Iuliana Prodan <[email protected]>
---
drivers/crypto/caam/caamalg.c | 26 ++++++++--------
drivers/crypto/caam/caamalg_qi.c | 21 ++++---------
drivers/crypto/caam/caamalg_qi2.c | 62 ++++++++++++---------------------------
drivers/crypto/caam/caamhash.c | 20 ++++++++-----
drivers/crypto/caam/caampkc.c | 20 ++++++++-----
drivers/crypto/caam/error.c | 60 +++++++++++++++++++++++--------------
drivers/crypto/caam/error.h | 2 +-
drivers/crypto/caam/key_gen.c | 5 ++--
drivers/crypto/caam/qi.c | 5 ++--
9 files changed, 104 insertions(+), 117 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 43f1825..06b4f2d 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -930,19 +930,20 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
{
struct aead_request *req = context;
struct aead_edesc *edesc;
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

aead_unmap(jrdev, edesc, req);

kfree(edesc);

- aead_request_complete(req, err);
+ aead_request_complete(req, ecode);
}

static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
@@ -950,25 +951,20 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
{
struct aead_request *req = context;
struct aead_edesc *edesc;
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

aead_unmap(jrdev, edesc, req);

- /*
- * verify hw auth check passed else return -EBADMSG
- */
- if ((err & JRSTA_CCBERR_ERRID_MASK) == JRSTA_CCBERR_ERRID_ICVCHK)
- err = -EBADMSG;
-
kfree(edesc);

- aead_request_complete(req, err);
+ aead_request_complete(req, ecode);
}

static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
@@ -978,13 +974,14 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
int ivsize = crypto_skcipher_ivsize(skcipher);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);

if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

skcipher_unmap(jrdev, edesc, req);

@@ -1008,7 +1005,7 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,

kfree(edesc);

- skcipher_request_complete(req, err);
+ skcipher_request_complete(req, ecode);
}

static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
@@ -1018,12 +1015,13 @@ static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
int ivsize = crypto_skcipher_ivsize(skcipher);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);
if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

skcipher_unmap(jrdev, edesc, req);

@@ -1047,7 +1045,7 @@ static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,

kfree(edesc);

- skcipher_request_complete(req, err);
+ skcipher_request_complete(req, ecode);
}

/*
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index 32f0f8a..ab263b1 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -884,20 +884,8 @@ static void aead_done(struct caam_drv_req *drv_req, u32 status)

qidev = caam_ctx->qidev;

- if (unlikely(status)) {
- u32 ssrc = status & JRSTA_SSRC_MASK;
- u8 err_id = status & JRSTA_CCBERR_ERRID_MASK;
-
- caam_jr_strstatus(qidev, status);
- /*
- * verify hw auth check passed else return -EBADMSG
- */
- if (ssrc == JRSTA_SSRC_CCB_ERROR &&
- err_id == JRSTA_CCBERR_ERRID_ICVCHK)
- ecode = -EBADMSG;
- else
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_jr_strstatus(qidev, status);

edesc = container_of(drv_req, typeof(*edesc), drv_req);
aead_unmap(qidev, edesc, aead_req);
@@ -1190,13 +1178,14 @@ static void skcipher_done(struct caam_drv_req *drv_req, u32 status)
struct caam_ctx *caam_ctx = crypto_skcipher_ctx(skcipher);
struct device *qidev = caam_ctx->qidev;
int ivsize = crypto_skcipher_ivsize(skcipher);
+ int ecode = 0;

dev_dbg(qidev, "%s %d: status 0x%x\n", __func__, __LINE__, status);

edesc = container_of(drv_req, typeof(*edesc), drv_req);

if (status)
- caam_jr_strstatus(qidev, status);
+ ecode = caam_jr_strstatus(qidev, status);

print_hex_dump_debug("dstiv @" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
@@ -1215,7 +1204,7 @@ static void skcipher_done(struct caam_drv_req *drv_req, u32 status)
memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize);

qi_cache_free(edesc);
- skcipher_request_complete(req, status);
+ skcipher_request_complete(req, ecode);
}

static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 06bf32c..2681581 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -1227,10 +1227,8 @@ static void aead_encrypt_done(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

aead_unmap(ctx->dev, edesc, req);
qi_cache_free(edesc);
@@ -1250,17 +1248,8 @@ static void aead_decrypt_done(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- /*
- * verify hw auth check passed else return -EBADMSG
- */
- if ((status & JRSTA_CCBERR_ERRID_MASK) ==
- JRSTA_CCBERR_ERRID_ICVCHK)
- ecode = -EBADMSG;
- else
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

aead_unmap(ctx->dev, edesc, req);
qi_cache_free(edesc);
@@ -1352,10 +1341,8 @@ static void skcipher_encrypt_done(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

print_hex_dump_debug("dstiv @" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
@@ -1390,10 +1377,8 @@ static void skcipher_decrypt_done(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

print_hex_dump_debug("dstiv @" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
@@ -3094,10 +3079,7 @@ static void split_key_sh_done(void *cbk_ctx, u32 err)

dev_dbg(res->dev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

- if (err)
- caam_qi2_strstatus(res->dev, err);
-
- res->err = err;
+ res->err = err ? caam_qi2_strstatus(res->dev, err) : 0;
complete(&res->completion);
}

@@ -3282,10 +3264,8 @@ static void ahash_done(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
memcpy(req->result, state->caam_ctx, digestsize);
@@ -3310,10 +3290,8 @@ static void ahash_done_bi(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
switch_buf(state);
@@ -3343,10 +3321,8 @@ static void ahash_done_ctx_src(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
memcpy(req->result, state->caam_ctx, digestsize);
@@ -3371,10 +3347,8 @@ static void ahash_done_ctx_dst(void *cbk_ctx, u32 status)

dev_dbg(ctx->dev, "%s %d: err 0x%x\n", __func__, __LINE__, status);

- if (unlikely(status)) {
- caam_qi2_strstatus(ctx->dev, status);
- ecode = -EIO;
- }
+ if (unlikely(status))
+ ecode = caam_qi2_strstatus(ctx->dev, status);

ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
switch_buf(state);
@@ -4700,7 +4674,7 @@ static void dpaa2_caam_process_fd(struct dpaa2_caam_priv *priv,

fd_err = dpaa2_fd_get_ctrl(fd) & FD_CTRL_ERR_MASK;
if (unlikely(fd_err))
- dev_err(priv->dev, "FD error: %08x\n", fd_err);
+ dev_err_ratelimited(priv->dev, "FD error: %08x\n", fd_err);

/*
* FD[ADDR] is guaranteed to be valid, irrespective of errors reported
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index e4ac5d5..73abefa 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -584,12 +584,13 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
int digestsize = crypto_ahash_digestsize(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
memcpy(req->result, state->caam_ctx, digestsize);
@@ -599,7 +600,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
ctx->ctx_len, 1);

- req->base.complete(&req->base, err);
+ req->base.complete(&req->base, ecode);
}

static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
@@ -611,12 +612,13 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
int digestsize = crypto_ahash_digestsize(ahash);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
switch_buf(state);
@@ -630,7 +632,7 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, req->result,
digestsize, 1);

- req->base.complete(&req->base, err);
+ req->base.complete(&req->base, ecode);
}

static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
@@ -642,12 +644,13 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
int digestsize = crypto_ahash_digestsize(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
memcpy(req->result, state->caam_ctx, digestsize);
@@ -657,7 +660,7 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
ctx->ctx_len, 1);

- req->base.complete(&req->base, err);
+ req->base.complete(&req->base, ecode);
}

static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
@@ -669,12 +672,13 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
int digestsize = crypto_ahash_digestsize(ahash);
+ int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
- caam_jr_strstatus(jrdev, err);
+ ecode = caam_jr_strstatus(jrdev, err);

ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE);
switch_buf(state);
@@ -688,7 +692,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, req->result,
digestsize, 1);

- req->base.complete(&req->base, err);
+ req->base.complete(&req->base, ecode);
}

/*
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 2340f94..e05d975 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -107,9 +107,10 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
+ int ecode = 0;

if (err)
- caam_jr_strstatus(dev, err);
+ ecode = caam_jr_strstatus(dev, err);

edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

@@ -117,7 +118,7 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, err);
+ akcipher_request_complete(req, ecode);
}

static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
@@ -125,9 +126,10 @@ static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
+ int ecode = 0;

if (err)
- caam_jr_strstatus(dev, err);
+ ecode = caam_jr_strstatus(dev, err);

edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

@@ -135,7 +137,7 @@ static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, err);
+ akcipher_request_complete(req, ecode);
}

static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
@@ -143,9 +145,10 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
+ int ecode = 0;

if (err)
- caam_jr_strstatus(dev, err);
+ ecode = caam_jr_strstatus(dev, err);

edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

@@ -153,7 +156,7 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, err);
+ akcipher_request_complete(req, ecode);
}

static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
@@ -161,9 +164,10 @@ static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
{
struct akcipher_request *req = context;
struct rsa_edesc *edesc;
+ int ecode = 0;

if (err)
- caam_jr_strstatus(dev, err);
+ ecode = caam_jr_strstatus(dev, err);

edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

@@ -171,7 +175,7 @@ static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, err);
+ akcipher_request_complete(req, ecode);
}

/**
diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
index 95da6ae..b7fbf1b 100644
--- a/drivers/crypto/caam/error.c
+++ b/drivers/crypto/caam/error.c
@@ -211,8 +211,8 @@ static const char * const rng_err_id_list[] = {
"Secure key generation",
};

-static void report_ccb_status(struct device *jrdev, const u32 status,
- const char *error)
+static int report_ccb_status(struct device *jrdev, const u32 status,
+ const char *error)
{
u8 cha_id = (status & JRSTA_CCBERR_CHAID_MASK) >>
JRSTA_CCBERR_CHAID_SHIFT;
@@ -248,22 +248,27 @@ static void report_ccb_status(struct device *jrdev, const u32 status,
* CCB ICV check failures are part of normal operation life;
* we leave the upper layers to do what they want with them.
*/
- if (err_id != JRSTA_CCBERR_ERRID_ICVCHK)
- dev_err(jrdev, "%08x: %s: %s %d: %s%s: %s%s\n",
- status, error, idx_str, idx,
- cha_str, cha_err_code,
- err_str, err_err_code);
+ if (err_id == JRSTA_CCBERR_ERRID_ICVCHK)
+ return -EBADMSG;
+
+ dev_err_ratelimited(jrdev, "%08x: %s: %s %d: %s%s: %s%s\n", status,
+ error, idx_str, idx, cha_str, cha_err_code,
+ err_str, err_err_code);
+
+ return -EINVAL;
}

-static void report_jump_status(struct device *jrdev, const u32 status,
- const char *error)
+static int report_jump_status(struct device *jrdev, const u32 status,
+ const char *error)
{
dev_err(jrdev, "%08x: %s: %s() not implemented\n",
status, error, __func__);
+
+ return -EINVAL;
}

-static void report_deco_status(struct device *jrdev, const u32 status,
- const char *error)
+static int report_deco_status(struct device *jrdev, const u32 status,
+ const char *error)
{
u8 err_id = status & JRSTA_DECOERR_ERROR_MASK;
u8 idx = (status & JRSTA_DECOERR_INDEX_MASK) >>
@@ -289,10 +294,12 @@ static void report_deco_status(struct device *jrdev, const u32 status,

dev_err(jrdev, "%08x: %s: %s %d: %s%s\n",
status, error, idx_str, idx, err_str, err_err_code);
+
+ return -EINVAL;
}

-static void report_qi_status(struct device *qidev, const u32 status,
- const char *error)
+static int report_qi_status(struct device *qidev, const u32 status,
+ const char *error)
{
u8 err_id = status & JRSTA_QIERR_ERROR_MASK;
const char *err_str = "unidentified error value 0x";
@@ -310,27 +317,33 @@ static void report_qi_status(struct device *qidev, const u32 status,

dev_err(qidev, "%08x: %s: %s%s\n",
status, error, err_str, err_err_code);
+
+ return -EINVAL;
}

-static void report_jr_status(struct device *jrdev, const u32 status,
- const char *error)
+static int report_jr_status(struct device *jrdev, const u32 status,
+ const char *error)
{
dev_err(jrdev, "%08x: %s: %s() not implemented\n",
status, error, __func__);
+
+ return -EINVAL;
}

-static void report_cond_code_status(struct device *jrdev, const u32 status,
- const char *error)
+static int report_cond_code_status(struct device *jrdev, const u32 status,
+ const char *error)
{
dev_err(jrdev, "%08x: %s: %s() not implemented\n",
status, error, __func__);
+
+ return -EINVAL;
}

-void caam_strstatus(struct device *jrdev, u32 status, bool qi_v2)
+int caam_strstatus(struct device *jrdev, u32 status, bool qi_v2)
{
static const struct stat_src {
- void (*report_ssed)(struct device *jrdev, const u32 status,
- const char *error);
+ int (*report_ssed)(struct device *jrdev, const u32 status,
+ const char *error);
const char *error;
} status_src[16] = {
{ NULL, "No error" },
@@ -358,11 +371,14 @@ void caam_strstatus(struct device *jrdev, u32 status, bool qi_v2)
* Otherwise print the error source name.
*/
if (status_src[ssrc].report_ssed)
- status_src[ssrc].report_ssed(jrdev, status, error);
- else if (error)
+ return status_src[ssrc].report_ssed(jrdev, status, error);
+
+ if (error)
dev_err(jrdev, "%d: %s\n", ssrc, error);
else
dev_err(jrdev, "%d: unknown error source\n", ssrc);
+
+ return -EINVAL;
}
EXPORT_SYMBOL(caam_strstatus);

diff --git a/drivers/crypto/caam/error.h b/drivers/crypto/caam/error.h
index d9726e6..16809fa 100644
--- a/drivers/crypto/caam/error.h
+++ b/drivers/crypto/caam/error.h
@@ -12,7 +12,7 @@

#define CAAM_ERROR_STR_MAX 302

-void caam_strstatus(struct device *dev, u32 status, bool qi_v2);
+int caam_strstatus(struct device *dev, u32 status, bool qi_v2);

#define caam_jr_strstatus(jrdev, status) caam_strstatus(jrdev, status, false)
#define caam_qi2_strstatus(qidev, status) caam_strstatus(qidev, status, true)
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 48dd353..c6f8375 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -15,13 +15,14 @@ void split_key_done(struct device *dev, u32 *desc, u32 err,
void *context)
{
struct split_key_result *res = context;
+ int ecode = 0;

dev_dbg(dev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

if (err)
- caam_jr_strstatus(dev, err);
+ ecode = caam_jr_strstatus(dev, err);

- res->err = err;
+ res->err = ecode;

complete(&res->completion);
}
diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
index 19a378b..378f627 100644
--- a/drivers/crypto/caam/qi.c
+++ b/drivers/crypto/caam/qi.c
@@ -577,8 +577,9 @@ static enum qman_cb_dqrr_result caam_rsp_fq_dqrr_cb(struct qman_portal *p,

if (ssrc != JRSTA_SSRC_CCB_ERROR ||
err_id != JRSTA_CCBERR_ERRID_ICVCHK)
- dev_err(qidev, "Error: %#x in CAAM response FD\n",
- status);
+ dev_err_ratelimited(qidev,
+ "Error: %#x in CAAM response FD\n",
+ status);
}

if (unlikely(qm_fd_get_format(fd) != qm_fd_compound)) {
--
2.1.0
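
For reference, a minimal sketch (not code from the driver) of the
completion-callback pattern this change enables: the raw CAAM status word
is translated into a Linux error code via caam_jr_strstatus() and that
code, rather than the hardware status, is reported to the crypto API.
The context type, its req member and the completion helper below are
hypothetical placeholders.

	static void example_done(struct device *jrdev, u32 *desc, u32 err,
				 void *context)
	{
		struct example_req_ctx *rctx = context;	/* hypothetical */
		int ecode = 0;

		/* translate the hardware status into 0, -EBADMSG or -EINVAL */
		if (err)
			ecode = caam_jr_strstatus(jrdev, err);

		/* ... DMA unmap and free the extended descriptor here ... */

		/* hand the errno, not the raw status, to the crypto API */
		example_request_complete(rctx->req, ecode);	/* hypothetical */
	}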

2019-07-30 12:10:39

by Iuliana Prodan

[permalink] [raw]
Subject: [PATCH v4 03/14] crypto: caam - update IV only when crypto operation succeeds

From: Horia Geantă <[email protected]>

skcipher encryption might fail, for example when the (invalid) input
length is smaller than the block size. In such cases the hardware issues
an error, and updating the IV would only be a useless copy.
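
A minimal sketch of the guarded update applied in the hunks below, where
iv_from_desc stands in for the per-backend location of the IV inside the
extended descriptor:

	/* propagate the next IV to the request only on success */
	if (ivsize && !ecode)
		memcpy(req->iv, iv_from_desc, ivsize);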

Signed-off-by: Horia Geantă <[email protected]>
Signed-off-by: Iuliana Prodan <[email protected]>
---
Changes since v3:
- update commit message.
---
drivers/crypto/caam/caamalg.c | 5 ++---
drivers/crypto/caam/caamalg_qi.c | 4 +++-
drivers/crypto/caam/caamalg_qi2.c | 8 ++++++--
3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 06b4f2d..28d55a0 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -990,10 +990,9 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
* ciphertext block (CBC mode) or last counter (CTR mode).
* This is used e.g. by the CTS mode.
*/
- if (ivsize) {
+ if (ivsize && !ecode) {
memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes,
ivsize);
-
print_hex_dump_debug("dstiv @"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
edesc->src_nents > 1 ? 100 : ivsize, 1);
@@ -1030,7 +1029,7 @@ static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
* ciphertext block (CBC mode) or last counter (CTR mode).
* This is used e.g. by the CTS mode.
*/
- if (ivsize) {
+ if (ivsize && !ecode) {
memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes,
ivsize);

diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index ab263b1..66531d6 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -1201,7 +1201,9 @@ static void skcipher_done(struct caam_drv_req *drv_req, u32 status)
* ciphertext block (CBC mode) or last counter (CTR mode).
* This is used e.g. by the CTS mode.
*/
- memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize);
+ if (!ecode)
+ memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes,
+ ivsize);

qi_cache_free(edesc);
skcipher_request_complete(req, ecode);
diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 2681581..bc370af 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -1358,7 +1358,9 @@ static void skcipher_encrypt_done(void *cbk_ctx, u32 status)
* ciphertext block (CBC mode) or last counter (CTR mode).
* This is used e.g. by the CTS mode.
*/
- memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize);
+ if (!ecode)
+ memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes,
+ ivsize);

qi_cache_free(edesc);
skcipher_request_complete(req, ecode);
@@ -1394,7 +1396,9 @@ static void skcipher_decrypt_done(void *cbk_ctx, u32 status)
* ciphertext block (CBC mode) or last counter (CTR mode).
* This is used e.g. by the CTS mode.
*/
- memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize);
+ if (!ecode)
+ memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes,
+ ivsize);

qi_cache_free(edesc);
skcipher_request_complete(req, ecode);
--
2.1.0

2019-07-30 12:10:46

by Iuliana Prodan

[permalink] [raw]
Subject: [PATCH v4 07/14] crypto: caam - check zero-length input

Check for zero-length input, for the skcipher algorithms, to solve the
extra fuzz tests. This is a valid operation, so the API returns no error.
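
A hedged, caller-side illustration of the behaviour the extra tests
expect (not part of the patch; req, src, dst and iv are assumed to be
already set up): a zero-length request completes successfully without
touching the hardware.

	skcipher_request_set_crypt(req, src, dst, 0 /* cryptlen */, iv);
	err = crypto_skcipher_encrypt(req);
	/* expected: err == 0, no data processed */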

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamalg.c | 6 ++++++
drivers/crypto/caam/caamalg_qi.c | 3 +++
drivers/crypto/caam/caamalg_qi2.c | 5 +++++
3 files changed, 14 insertions(+)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 47e61c0..9f83b55 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -1884,6 +1884,9 @@ static int skcipher_encrypt(struct skcipher_request *req)
u32 *desc;
int ret = 0;

+ if (!req->cryptlen)
+ return 0;
+
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
if (IS_ERR(edesc))
@@ -1918,6 +1921,9 @@ static int skcipher_decrypt(struct skcipher_request *req)
u32 *desc;
int ret = 0;

+ if (!req->cryptlen)
+ return 0;
+
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
if (IS_ERR(edesc))
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index 794d155d..5ba620d 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -1445,6 +1445,9 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
int ret;

+ if (!req->cryptlen)
+ return 0;
+
if (unlikely(caam_congested))
return -EAGAIN;

diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 5fd6529..d5f6517 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -1498,6 +1498,9 @@ static int skcipher_encrypt(struct skcipher_request *req)
struct caam_request *caam_req = skcipher_request_ctx(req);
int ret;

+ if (!req->cryptlen)
+ return 0;
+
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req);
if (IS_ERR(edesc))
@@ -1526,6 +1529,8 @@ static int skcipher_decrypt(struct skcipher_request *req)
struct caam_request *caam_req = skcipher_request_ctx(req);
int ret;

+ if (!req->cryptlen)
+ return 0;
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req);
if (IS_ERR(edesc))
--
2.1.0

2019-07-30 12:10:59

by Iuliana Prodan

[permalink] [raw]
Subject: [PATCH v4 05/14] crypto: caam - check authsize

Check authsize to solve the extra tests that expect -EINVAL to be
returned when the authentication tag size is not valid.

The authsize is validated for GCM, RFC4106 and RFC4543.
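
A rough sketch of what the checks amount to (the generic helpers in the
crypto API are authoritative): GCM accepts tag lengths of 4, 8 and
12..16 bytes, RFC4106 accepts 8, 12 and 16, and RFC4543 only 16.

	/* e.g. the GCM case, as enforced by crypto_gcm_check_authsize() */
	switch (authsize) {
	case 4:
	case 8:
	case 12 ... 16:
		break;
	default:
		return -EINVAL;
	}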

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamalg.c | 13 +++++++++++++
drivers/crypto/caam/caamalg_qi.c | 13 +++++++++++++
drivers/crypto/caam/caamalg_qi2.c | 13 +++++++++++++
3 files changed, 39 insertions(+)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index bb38f31..0461bf3 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -376,6 +376,11 @@ static int gcm_set_sh_desc(struct crypto_aead *aead)
static int gcm_setauthsize(struct crypto_aead *authenc, unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_gcm_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
gcm_set_sh_desc(authenc);
@@ -439,6 +444,11 @@ static int rfc4106_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_rfc4106_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
rfc4106_set_sh_desc(authenc);
@@ -503,6 +513,9 @@ static int rfc4543_setauthsize(struct crypto_aead *authenc,
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);

+ if (authsize != 16)
+ return -EINVAL;
+
ctx->authsize = authsize;
rfc4543_set_sh_desc(authenc);

diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index b9c8581..8011a2a 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -371,6 +371,11 @@ static int gcm_set_sh_desc(struct crypto_aead *aead)
static int gcm_setauthsize(struct crypto_aead *authenc, unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_gcm_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
gcm_set_sh_desc(authenc);
@@ -472,6 +477,11 @@ static int rfc4106_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_rfc4106_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
rfc4106_set_sh_desc(authenc);
@@ -578,6 +588,9 @@ static int rfc4543_setauthsize(struct crypto_aead *authenc,
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);

+ if (authsize != 16)
+ return -EINVAL;
+
ctx->authsize = authsize;
rfc4543_set_sh_desc(authenc);

diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 0b672f93..e351ff2 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -719,6 +719,11 @@ static int gcm_set_sh_desc(struct crypto_aead *aead)
static int gcm_setauthsize(struct crypto_aead *authenc, unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_gcm_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
gcm_set_sh_desc(authenc);
@@ -811,6 +816,11 @@ static int rfc4106_setauthsize(struct crypto_aead *authenc,
unsigned int authsize)
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+ int err;
+
+ err = crypto_rfc4106_check_authsize(authsize);
+ if (err)
+ return err;

ctx->authsize = authsize;
rfc4106_set_sh_desc(authenc);
@@ -910,6 +920,9 @@ static int rfc4543_setauthsize(struct crypto_aead *authenc,
{
struct caam_ctx *ctx = crypto_aead_ctx(authenc);

+ if (authsize != 16)
+ return -EINVAL;
+
ctx->authsize = authsize;
rfc4543_set_sh_desc(authenc);

--
2.1.0

2019-07-30 12:23:20

by Iuliana Prodan

[permalink] [raw]
Subject: Re: [PATCH v4 09/14] crypto: caam - keep both virtual and dma key addresses

On 7/30/2019 2:06 PM, Iuliana Prodan wrote:
> From: Horia Geantă <[email protected]>
>
> Update alginfo struct to keep both virtual and dma key addresses,
> so that descriptors have them at hand.
> One example where this is needed is in the xcbc(aes) shared descriptors,
> which are updated in the current patch.
> Another example is the upcoming fix for DKP.
>
> Signed-off-by: Horia Geantă <[email protected]>
> ---
Reviewed-by: Iuliana Prodan <[email protected]>

Thanks,
Iulia
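
For context, a rough sketch of the resulting layout (the authoritative
definition is the desc_constr.h change in the patch itself): the union
previously holding the key address is replaced by two separate members,
so descriptor construction code has both forms at hand.

	struct alginfo {
		u32 algtype;
		unsigned int keylen;
		unsigned int keylen_pad;
		dma_addr_t key_dma;
		const void *key_virt;
		bool key_inline;
	};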

2019-07-30 12:28:58

by Iuliana Prodan

[permalink] [raw]
Subject: Re: [PATCH v4 10/14] crypto: caam - fix MDHA key derivation for certain user key lengths

On 7/30/2019 2:06 PM, Iuliana Prodan wrote:
> From: Horia Geantă <[email protected]>
>
> Fuzz testing uncovered an issue when |user key| > |derived key|.
> Derived key generation has to be fixed in two cases:
>
> 1. Era >= 6 (DKP is available)
> DKP cannot be used with an immediate input key if |user key| > |derived key|,
> since the resulting descriptor (after DKP execution) would be invalid:
> a few bytes of the user key would be left in the descriptor buffer
> as incorrect opcodes.
>
> Fix DKP usage both in standalone hmac and in authenc algorithms.
> For authenc the logic is simplified, by always storing both virtual
> and dma key addresses.
>
> 2. Era < 6
> The same case (|user key| > |derived key|) fails when DKP
> is not available.
> Make sure gen_split_key() dma maps max(|user key|, |derived key|),
> since this is an in-place (bidirectional) operation.
>
> Signed-off-by: Horia Geantă <[email protected]>
> ---
Reviewed-by: Iuliana Prodan <[email protected]>

Thanks,
Iulia
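
For the era < 6 case, a hedged sketch of the mapping-length fix (not the
actual key_gen.c hunk; it reuses gen_split_key()'s parameter names and
assumes adata->keylen_pad holds the derived key length): since the
derivation overwrites the user key buffer in place, the DMA mapping must
cover the larger of the two sizes.

	/* map max(|user key|, |derived key|) for the in-place operation */
	dma_addr_t key_dma;
	size_t map_len = max_t(size_t, keylen, adata->keylen_pad);

	key_dma = dma_map_single(jrdev, key_out, map_len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(jrdev, key_dma))
		return -ENOMEM;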