2021-04-29 15:08:18

by Thara Gopinath

Subject: [Patch v4 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver

Enable support for AEAD algorithms in Qualcomm CE driver. The first three
patches in this series are cleanups and add a few missing pieces required
to add support for AEAD algorithms. Patch 4 introduces supported AEAD
transformations on Qualcomm CE. Patches 5 and 6 implement the h/w
infrastructure needed to enable and run the AEAD transformations on
Qualcomm CE. Patch 7 adds support to queue fallback algorithms in case of
unsupported special inputs.

This patch series has been tested with the in-kernel crypto testing module
tcrypt.ko, with fuzz tests enabled as well.

Thara Gopinath (7):
crypto: qce: common: Add MAC failed error checking
crypto: qce: common: Make result dump optional
crypto: qce: Add mode for rfc4309
crypto: qce: Add support for AEAD algorithms
crypto: qce: common: Clean up qce_auth_cfg
crypto: qce: common: Add support for AEAD algorithms
crypto: qce: aead: Schedule fallback algorithm

drivers/crypto/Kconfig | 15 +
drivers/crypto/qce/Makefile | 1 +
drivers/crypto/qce/aead.c | 841 ++++++++++++++++++++++++++++++++++++
drivers/crypto/qce/aead.h | 56 +++
drivers/crypto/qce/common.c | 196 ++++++++-
drivers/crypto/qce/common.h | 9 +-
drivers/crypto/qce/core.c | 4 +
7 files changed, 1102 insertions(+), 20 deletions(-)
create mode 100644 drivers/crypto/qce/aead.c
create mode 100644 drivers/crypto/qce/aead.h

--
2.25.1


2021-04-29 15:08:51

by Thara Gopinath

Subject: [Patch v4 6/7] crypto: qce: common: Add support for AEAD algorithms

Add register programming sequence for enabling AEAD
algorithms on the Qualcomm crypto engine.

Signed-off-by: Thara Gopinath <[email protected]>
---

v3->v4:
- Initialized auth_ivsize to 0 to avoid a -Wsometimes-uninitialized warning, as
reported by the kernel test robot <[email protected]>.

v2->v3:
- Made qce_be32_to_cpu_array perform a true be32-to-CPU conversion by using
be32_to_cpup instead of cpu_to_be32p. Also removed the (u32 *) typecasting of
arrays obtained as output from qce_be32_to_cpu_array, as per Bjorn's review
comments.
- Wrapped the newly introduced std_iv_sha1, std_iv_sha256 and
qce_be32_to_cpu_array in CONFIG_CRYPTO_DEV_QCE_AEAD to prevent W=1 warnings, as
reported by the kernel test robot <[email protected]>.

v1->v2:
- Minor fixes like removing unneeded initialization of variables and using
bool values in lieu of 0 and 1, as pointed out by Bjorn.
- Introduced qce_be32_to_cpu_array, which converts a u8 string in big-endian
order to an array of u32 and returns the total number of words, as per
Bjorn's review comments (a standalone sketch of the idea follows below).
Presently this function is used only by qce_setup_regs_aead to format keys,
IVs and nonces. Cipher and hash algorithms can be made to use this function
in a separate cleanup patch.
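
For illustration only, a minimal user-space sketch of what such a helper does
(names here are hypothetical; the real implementation is the one added in the
hunk further down):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* ntohl() stands in for the kernel's be32_to_cpup() */

/*
 * Copy a big-endian byte string (key/IV/nonce material) into an array of
 * CPU-order 32-bit words.  Only complete words are converted, but the
 * returned count is rounded up, mirroring the DIV_ROUND_UP() used by the
 * kernel helper.
 */
static unsigned int be32str_to_cpu_words(uint32_t *dst, const uint8_t *src,
                                         unsigned int len)
{
        unsigned int i, nwords = len / sizeof(uint32_t);
        uint32_t be;

        for (i = 0; i < nwords; i++) {
                memcpy(&be, src + i * sizeof(uint32_t), sizeof(be));
                dst[i] = ntohl(be);     /* big endian -> CPU order */
        }

        return (len + sizeof(uint32_t) - 1) / sizeof(uint32_t);
}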

drivers/crypto/qce/common.c | 162 +++++++++++++++++++++++++++++++++++-
1 file changed, 160 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 7b3d6caec1b2..7c612ba5068f 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -15,6 +15,7 @@
#include "core.h"
#include "regs-v5.h"
#include "sha.h"
+#include "aead.h"

static inline u32 qce_read(struct qce_device *qce, u32 offset)
{
@@ -96,7 +97,7 @@ static inline void qce_crypto_go(struct qce_device *qce, bool result_dump)
qce_write(qce, REG_GOPROC, BIT(GO_SHIFT));
}

-#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
+#if defined(CONFIG_CRYPTO_DEV_QCE_SHA) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)
{
u32 cfg = 0;
@@ -139,7 +140,9 @@ static u32 qce_auth_cfg(unsigned long flags, u32 key_size, u32 auth_size)

return cfg;
}
+#endif

+#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
{
struct ahash_request *req = ahash_request_cast(async_req);
@@ -225,7 +228,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
}
#endif

-#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
+#if defined(CONFIG_CRYPTO_DEV_QCE_SKCIPHER) || defined(CONFIG_CRYPTO_DEV_QCE_AEAD)
static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)
{
u32 cfg = 0;
@@ -271,7 +274,9 @@ static u32 qce_encr_cfg(unsigned long flags, u32 aes_key_size)

return cfg;
}
+#endif

+#ifdef CONFIG_CRYPTO_DEV_QCE_SKCIPHER
static void qce_xts_swapiv(__be32 *dst, const u8 *src, unsigned int ivsize)
{
u8 swap[QCE_AES_IV_LENGTH];
@@ -386,6 +391,155 @@ static int qce_setup_regs_skcipher(struct crypto_async_request *async_req)
}
#endif

+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+static const u32 std_iv_sha1[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+ SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4, 0, 0, 0
+};
+
+static const u32 std_iv_sha256[SHA256_DIGEST_SIZE / sizeof(u32)] = {
+ SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+ SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7
+};
+
+static unsigned int qce_be32_to_cpu_array(u32 *dst, const u8 *src, unsigned int len)
+{
+ u32 *d = dst;
+ const u8 *s = src;
+ unsigned int n;
+
+ n = len / sizeof(u32);
+ for (; n > 0; n--) {
+ *d = be32_to_cpup((const __be32 *)s);
+ s += sizeof(u32);
+ d++;
+ }
+ return DIV_ROUND_UP(len, sizeof(u32));
+}
+
+static int qce_setup_regs_aead(struct crypto_async_request *async_req)
+{
+ struct aead_request *req = aead_request_cast(async_req);
+ struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+ struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
+ struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
+ struct qce_device *qce = tmpl->qce;
+ u32 enckey[QCE_MAX_CIPHER_KEY_SIZE / sizeof(u32)] = {0};
+ u32 enciv[QCE_MAX_IV_SIZE / sizeof(u32)] = {0};
+ u32 authkey[QCE_SHA_HMAC_KEY_SIZE / sizeof(u32)] = {0};
+ u32 authiv[SHA256_DIGEST_SIZE / sizeof(u32)] = {0};
+ u32 authnonce[QCE_MAX_NONCE / sizeof(u32)] = {0};
+ unsigned int enc_keylen = ctx->enc_keylen;
+ unsigned int auth_keylen = ctx->auth_keylen;
+ unsigned int enc_ivsize = rctx->ivsize;
+ unsigned int auth_ivsize = 0;
+ unsigned int enckey_words, enciv_words;
+ unsigned int authkey_words, authiv_words, authnonce_words;
+ unsigned long flags = rctx->flags;
+ u32 encr_cfg, auth_cfg, config, totallen;
+ u32 iv_last_word;
+
+ qce_setup_config(qce);
+
+ /* Write encryption key */
+ enckey_words = qce_be32_to_cpu_array(enckey, ctx->enc_key, enc_keylen);
+ qce_write_array(qce, REG_ENCR_KEY0, enckey, enckey_words);
+
+ /* Write encryption iv */
+ enciv_words = qce_be32_to_cpu_array(enciv, rctx->iv, enc_ivsize);
+ qce_write_array(qce, REG_CNTR0_IV0, enciv, enciv_words);
+
+ if (IS_CCM(rctx->flags)) {
+ iv_last_word = enciv[enciv_words - 1];
+ qce_write(qce, REG_CNTR3_IV3, iv_last_word + 1);
+ qce_write_array(qce, REG_ENCR_CCM_INT_CNTR0, (u32 *)enciv, enciv_words);
+ qce_write(qce, REG_CNTR_MASK, ~0);
+ qce_write(qce, REG_CNTR_MASK0, ~0);
+ qce_write(qce, REG_CNTR_MASK1, ~0);
+ qce_write(qce, REG_CNTR_MASK2, ~0);
+ }
+
+ /* Clear authentication IV and KEY registers of previous values */
+ qce_clear_array(qce, REG_AUTH_IV0, 16);
+ qce_clear_array(qce, REG_AUTH_KEY0, 16);
+
+ /* Clear byte count */
+ qce_clear_array(qce, REG_AUTH_BYTECNT0, 4);
+
+ /* Write authentication key */
+ authkey_words = qce_be32_to_cpu_array(authkey, ctx->auth_key, auth_keylen);
+ qce_write_array(qce, REG_AUTH_KEY0, (u32 *)authkey, authkey_words);
+
+ /* Write initial authentication IV only for HMAC algorithms */
+ if (IS_SHA_HMAC(rctx->flags)) {
+ /* Write default authentication iv */
+ if (IS_SHA1_HMAC(rctx->flags)) {
+ auth_ivsize = SHA1_DIGEST_SIZE;
+ memcpy(authiv, std_iv_sha1, auth_ivsize);
+ } else if (IS_SHA256_HMAC(rctx->flags)) {
+ auth_ivsize = SHA256_DIGEST_SIZE;
+ memcpy(authiv, std_iv_sha256, auth_ivsize);
+ }
+ authiv_words = auth_ivsize / sizeof(u32);
+ qce_write_array(qce, REG_AUTH_IV0, (u32 *)authiv, authiv_words);
+ } else if (IS_CCM(rctx->flags)) {
+ /* Write nonce for CCM algorithms */
+ authnonce_words = qce_be32_to_cpu_array(authnonce, rctx->ccm_nonce, QCE_MAX_NONCE);
+ qce_write_array(qce, REG_AUTH_INFO_NONCE0, authnonce, authnonce_words);
+ }
+
+ /* Set up ENCR_SEG_CFG */
+ encr_cfg = qce_encr_cfg(flags, enc_keylen);
+ if (IS_ENCRYPT(flags))
+ encr_cfg |= BIT(ENCODE_SHIFT);
+ qce_write(qce, REG_ENCR_SEG_CFG, encr_cfg);
+
+ /* Set up AUTH_SEG_CFG */
+ auth_cfg = qce_auth_cfg(rctx->flags, auth_keylen, ctx->authsize);
+ auth_cfg |= BIT(AUTH_LAST_SHIFT);
+ auth_cfg |= BIT(AUTH_FIRST_SHIFT);
+ if (IS_ENCRYPT(flags)) {
+ if (IS_CCM(rctx->flags))
+ auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+ else
+ auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+ } else {
+ if (IS_CCM(rctx->flags))
+ auth_cfg |= AUTH_POS_AFTER << AUTH_POS_SHIFT;
+ else
+ auth_cfg |= AUTH_POS_BEFORE << AUTH_POS_SHIFT;
+ }
+ qce_write(qce, REG_AUTH_SEG_CFG, auth_cfg);
+
+ totallen = rctx->cryptlen + rctx->assoclen;
+
+ /* Set the encryption size and start offset */
+ if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+ qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen + ctx->authsize);
+ else
+ qce_write(qce, REG_ENCR_SEG_SIZE, rctx->cryptlen);
+ qce_write(qce, REG_ENCR_SEG_START, rctx->assoclen & 0xffff);
+
+ /* Set the authentication size and start offset */
+ qce_write(qce, REG_AUTH_SEG_SIZE, totallen);
+ qce_write(qce, REG_AUTH_SEG_START, 0);
+
+ /* Write total length */
+ if (IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags))
+ qce_write(qce, REG_SEG_SIZE, totallen + ctx->authsize);
+ else
+ qce_write(qce, REG_SEG_SIZE, totallen);
+
+ /* get little endianness */
+ config = qce_config_reg(qce, 1);
+ qce_write(qce, REG_CONFIG, config);
+
+ /* Start the process */
+ qce_crypto_go(qce, !IS_CCM(flags));
+
+ return 0;
+}
+#endif
+
int qce_start(struct crypto_async_request *async_req, u32 type)
{
switch (type) {
@@ -396,6 +550,10 @@ int qce_start(struct crypto_async_request *async_req, u32 type)
#ifdef CONFIG_CRYPTO_DEV_QCE_SHA
case CRYPTO_ALG_TYPE_AHASH:
return qce_setup_regs_ahash(async_req);
+#endif
+#ifdef CONFIG_CRYPTO_DEV_QCE_AEAD
+ case CRYPTO_ALG_TYPE_AEAD:
+ return qce_setup_regs_aead(async_req);
#endif
default:
return -EINVAL;
--
2.25.1

2021-04-29 15:08:52

by Thara Gopinath

Subject: [Patch v4 3/7] crypto: qce: Add mode for rfc4309

rfc4309 is the specification that covers the use of the aes ccm algorithm
with IPsec security packets. Add a submode to identify the rfc4309 ccm(aes)
algorithm in the crypto driver.

Reviewed-by: Bjorn Andersson <[email protected]>
Signed-off-by: Thara Gopinath <[email protected]>
---

v1->v2:
- Moved up the QCE_ENCRYPT and QCE_DECRYPT bit positions so that the
addition of other algorithms in the future will not affect these
macros.
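
As background for the new submode (illustration only, not part of this patch):
rfc4309 builds the CCM nonce from a 3-byte salt carried at the end of the key
plus an 8-byte per-packet IV. A minimal sketch of how the 16-byte CCM
counter-block IV is conventionally assembled, along the lines of the generic
crypto/ccm.c rfc4309 template:

#include <stdint.h>
#include <string.h>

#define RFC4309_SALT_SIZE       3       /* trailing bytes of the setkey() material */
#define RFC4309_IV_SIZE         8       /* per-packet IV carried in each request */

/*
 * Sketch only: byte 0 holds L' = 3 (a 4-byte length field), followed by the
 * 3-byte salt and the 8-byte per-packet IV; the last four bytes are the CTR
 * block counter and start at zero.
 */
static void rfc4309_build_iv(uint8_t iv[16], const uint8_t *salt,
                             const uint8_t *req_iv)
{
        memset(iv, 0, 16);
        iv[0] = 3;                      /* L' = L - 1, with L = 4 */
        memcpy(iv + 1, salt, RFC4309_SALT_SIZE);
        memcpy(iv + 1 + RFC4309_SALT_SIZE, req_iv, RFC4309_IV_SIZE);
}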

drivers/crypto/qce/common.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qce/common.h b/drivers/crypto/qce/common.h
index 3bc244bcca2d..b135440bf72b 100644
--- a/drivers/crypto/qce/common.h
+++ b/drivers/crypto/qce/common.h
@@ -51,9 +51,11 @@
#define QCE_MODE_CCM BIT(12)
#define QCE_MODE_MASK GENMASK(12, 8)

+#define QCE_MODE_CCM_RFC4309 BIT(13)
+
/* cipher encryption/decryption operations */
-#define QCE_ENCRYPT BIT(13)
-#define QCE_DECRYPT BIT(14)
+#define QCE_ENCRYPT BIT(30)
+#define QCE_DECRYPT BIT(31)

#define IS_DES(flags) (flags & QCE_ALG_DES)
#define IS_3DES(flags) (flags & QCE_ALG_3DES)
@@ -73,6 +75,7 @@
#define IS_CTR(mode) (mode & QCE_MODE_CTR)
#define IS_XTS(mode) (mode & QCE_MODE_XTS)
#define IS_CCM(mode) (mode & QCE_MODE_CCM)
+#define IS_CCM_RFC4309(mode) ((mode) & QCE_MODE_CCM_RFC4309)

#define IS_ENCRYPT(dir) (dir & QCE_ENCRYPT)
#define IS_DECRYPT(dir) (dir & QCE_DECRYPT)
--
2.25.1

2021-04-29 15:09:22

by Thara Gopinath

Subject: [Patch v4 7/7] crypto: qce: aead: Schedule fallback algorithm

The Qualcomm crypto engine does not handle the following scenarios and
will issue an abort. In such cases, pass the transformation on to
a fallback algorithm.

- DES3 algorithms where any two of the three keys are identical (see the
  sketch below for the check used).
- AES192 algorithms.
- 0-length messages.
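
For reference, the DES3 check can be sketched stand-alone as below (illustration
only; the in-driver version operates on the parsed authenc encryption key):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define DES3_EDE_KEY_SIZE       24      /* three 8-byte DES keys */

/*
 * Each DES key is two 32-bit words, so (a0 ^ b0) | (a1 ^ b1) is zero exactly
 * when keys a and b are identical.  Any match (K1 == K2, K2 == K3 or
 * K1 == K3) means the engine cannot process the request and it must be
 * handed to the fallback.
 */
static bool des3_any_two_keys_equal(const uint8_t *key)
{
        uint32_t k[DES3_EDE_KEY_SIZE / sizeof(uint32_t)];

        memcpy(k, key, DES3_EDE_KEY_SIZE);

        return !((k[0] ^ k[2]) | (k[1] ^ k[3])) ||      /* K1 == K2 */
               !((k[2] ^ k[4]) | (k[3] ^ k[5])) ||      /* K2 == K3 */
               !((k[0] ^ k[4]) | (k[1] ^ k[5]));        /* K1 == K3 */
}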

Signed-off-by: Thara Gopinath <[email protected]>
---
drivers/crypto/qce/aead.c | 64 ++++++++++++++++++++++++++++++++-------
drivers/crypto/qce/aead.h | 3 ++
2 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index ef66ae21eae3..6d06a19b48e4 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -512,7 +512,23 @@ static int qce_aead_crypt(struct aead_request *req, int encrypt)
/* CE does not handle 0 length messages */
if (!rctx->cryptlen) {
if (!(IS_CCM(rctx->flags) && IS_DECRYPT(rctx->flags)))
- return -EINVAL;
+ ctx->need_fallback = true;
+ }
+
+ /* If fallback is needed, schedule and exit */
+ if (ctx->need_fallback) {
+ /* Reset need_fallback in case the same ctx is used for another transaction */
+ ctx->need_fallback = false;
+
+ aead_request_set_tfm(&rctx->fallback_req, ctx->fallback);
+ aead_request_set_callback(&rctx->fallback_req, req->base.flags,
+ req->base.complete, req->base.data);
+ aead_request_set_crypt(&rctx->fallback_req, req->src,
+ req->dst, req->cryptlen, req->iv);
+ aead_request_set_ad(&rctx->fallback_req, req->assoclen);
+
+ return encrypt ? crypto_aead_encrypt(&rctx->fallback_req) :
+ crypto_aead_decrypt(&rctx->fallback_req);
}

/*
@@ -553,7 +569,7 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
memcpy(ctx->ccm4309_salt, key + keylen, QCE_CCM4309_SALT_SIZE);
}

- if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256)
+ if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_256 && keylen != AES_KEYSIZE_192)
return -EINVAL;

ctx->enc_keylen = keylen;
@@ -562,7 +578,12 @@ static int qce_aead_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
memcpy(ctx->enc_key, key, keylen);
memcpy(ctx->auth_key, key, keylen);

- return 0;
+ if (keylen == AES_KEYSIZE_192)
+ ctx->need_fallback = true;
+
+ return IS_CCM_RFC4309(flags) ?
+ crypto_aead_setkey(ctx->fallback, key, keylen + QCE_CCM4309_SALT_SIZE) :
+ crypto_aead_setkey(ctx->fallback, key, keylen);
}

static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
@@ -593,20 +614,21 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
* The crypto engine does not support any two keys
* being the same for triple des algorithms. The
* verify_skcipher_des3_key does not check for all the
- * below conditions. Return -EINVAL in case any two keys
- * are the same. Revisit to see if a fallback cipher
- * is needed to handle this condition.
+ * below conditions. Schedule fallback in this case.
*/
memcpy(_key, authenc_keys.enckey, DES3_EDE_KEY_SIZE);
if (!((_key[0] ^ _key[2]) | (_key[1] ^ _key[3])) ||
!((_key[2] ^ _key[4]) | (_key[3] ^ _key[5])) ||
!((_key[0] ^ _key[4]) | (_key[1] ^ _key[5])))
- return -EINVAL;
+ ctx->need_fallback = true;
} else if (IS_AES(flags)) {
/* No random key sizes */
if (authenc_keys.enckeylen != AES_KEYSIZE_128 &&
+ authenc_keys.enckeylen != AES_KEYSIZE_192 &&
authenc_keys.enckeylen != AES_KEYSIZE_256)
return -EINVAL;
+ if (authenc_keys.enckeylen == AES_KEYSIZE_192)
+ ctx->need_fallback = true;
}

ctx->enc_keylen = authenc_keys.enckeylen;
@@ -617,7 +639,7 @@ static int qce_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int
memset(ctx->auth_key, 0, sizeof(ctx->auth_key));
memcpy(ctx->auth_key, authenc_keys.authkey, authenc_keys.authkeylen);

- return 0;
+ return crypto_aead_setkey(ctx->fallback, key, keylen);
}

static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
@@ -632,15 +654,33 @@ static int qce_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
return -EINVAL;
}
ctx->authsize = authsize;
- return 0;
+
+ return crypto_aead_setauthsize(ctx->fallback, authsize);
}

static int qce_aead_init(struct crypto_aead *tfm)
{
- crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx));
+ struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+ ctx->need_fallback = false;
+ ctx->fallback = crypto_alloc_aead(crypto_tfm_alg_name(&tfm->base),
+ 0, CRYPTO_ALG_NEED_FALLBACK);
+
+ if (IS_ERR(ctx->fallback))
+ return PTR_ERR(ctx->fallback);
+
+ crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx) +
+ crypto_aead_reqsize(ctx->fallback));
return 0;
}

+static void qce_aead_exit(struct crypto_aead *tfm)
+{
+ struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+ crypto_free_aead(ctx->fallback);
+}
+
struct qce_aead_def {
unsigned long flags;
const char *name;
@@ -738,11 +778,13 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
alg->encrypt = qce_aead_encrypt;
alg->decrypt = qce_aead_decrypt;
alg->init = qce_aead_init;
+ alg->exit = qce_aead_exit;

alg->base.cra_priority = 300;
alg->base.cra_flags = CRYPTO_ALG_ASYNC |
CRYPTO_ALG_ALLOCATES_MEMORY |
- CRYPTO_ALG_KERN_DRIVER_ONLY;
+ CRYPTO_ALG_KERN_DRIVER_ONLY |
+ CRYPTO_ALG_NEED_FALLBACK;
alg->base.cra_ctxsize = sizeof(struct qce_aead_ctx);
alg->base.cra_alignmask = 0;
alg->base.cra_module = THIS_MODULE;
diff --git a/drivers/crypto/qce/aead.h b/drivers/crypto/qce/aead.h
index 3d1f2039930b..efb8477cc088 100644
--- a/drivers/crypto/qce/aead.h
+++ b/drivers/crypto/qce/aead.h
@@ -19,6 +19,8 @@ struct qce_aead_ctx {
unsigned int enc_keylen;
unsigned int auth_keylen;
unsigned int authsize;
+ bool need_fallback;
+ struct crypto_aead *fallback;
};

struct qce_aead_reqctx {
@@ -39,6 +41,7 @@ struct qce_aead_reqctx {
u8 ccm_nonce[QCE_MAX_NONCE];
u8 ccmresult_buf[QCE_BAM_BURST_SIZE];
u8 ccm_rfc4309_iv[QCE_MAX_IV_SIZE];
+ struct aead_request fallback_req;
};

static inline struct qce_alg_template *to_aead_tmpl(struct crypto_aead *tfm)
--
2.25.1

2021-05-05 23:28:48

by Thara Gopinath

Subject: Re: [Patch v4 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver



On 4/29/21 11:07 AM, Thara Gopinath wrote:
> Enable support for AEAD algorithms in Qualcomm CE driver. The first three
> patches in this series are cleanups and add a few missing pieces required
> to add support for AEAD algorithms. Patch 4 introduces supported AEAD
> transformations on Qualcomm CE. Patches 5 and 6 implement the h/w
> infrastructure needed to enable and run the AEAD transformations on
> Qualcomm CE. Patch 7 adds support to queue fallback algorithms in case of
> unsupported special inputs.
>
> This patch series has been tested with the in-kernel crypto testing module
> tcrypt.ko, with fuzz tests enabled as well.

Hi Herbert,

I have fixed all the review comments on this series and fuzz tests pass
as well. Do you think this can be merged?

--
Warm Regards
Thara
>
> Thara Gopinath (7):
> crypto: qce: common: Add MAC failed error checking
> crypto: qce: common: Make result dump optional
> crypto: qce: Add mode for rfc4309
> crypto: qce: Add support for AEAD algorithms
> crypto: qce: common: Clean up qce_auth_cfg
> crypto: qce: common: Add support for AEAD algorithms
> crypto: qce: aead: Schedule fallback algorithm
>
> drivers/crypto/Kconfig | 15 +
> drivers/crypto/qce/Makefile | 1 +
> drivers/crypto/qce/aead.c | 841 ++++++++++++++++++++++++++++++++++++
> drivers/crypto/qce/aead.h | 56 +++
> drivers/crypto/qce/common.c | 196 ++++++++-
> drivers/crypto/qce/common.h | 9 +-
> drivers/crypto/qce/core.c | 4 +
> 7 files changed, 1102 insertions(+), 20 deletions(-)
> create mode 100644 drivers/crypto/qce/aead.c
> create mode 100644 drivers/crypto/qce/aead.h
>


2021-05-05 23:39:50

by Thara Gopinath

Subject: Re: [Patch v4 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver



On 5/5/21 6:58 PM, Herbert Xu wrote:
> On Wed, May 05, 2021 at 06:54:18PM -0400, Thara Gopinath wrote:
>>
>>
>> On 4/29/21 11:07 AM, Thara Gopinath wrote:
>>> Enable support for AEAD algorithms in Qualcomm CE driver. The first three
>>> patches in this series are cleanups and add a few missing pieces required
>>> to add support for AEAD algorithms. Patch 4 introduces supported AEAD
>>> transformations on Qualcomm CE. Patches 5 and 6 implement the h/w
>>> infrastructure needed to enable and run the AEAD transformations on
>>> Qualcomm CE. Patch 7 adds support to queue fallback algorithms in case of
>>> unsupported special inputs.
>>>
>>> This patch series has been tested with the in-kernel crypto testing module
>>> tcrypt.ko, with fuzz tests enabled as well.
>>
>> Hi Herbert,
>>
>> I have fixed all the review comments on this series and fuzz tests pass as
>> well. Do you think this can be merged?
>
> We're still in the merge window, no new features are accepted
> at this point. Please be patient.

Ok, thanks!

>
> Thanks,
>

--
Warm Regards
Thara

2021-05-05 23:40:58

by Herbert Xu

Subject: Re: [Patch v4 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver

On Wed, May 05, 2021 at 06:54:18PM -0400, Thara Gopinath wrote:
>
>
> On 4/29/21 11:07 AM, Thara Gopinath wrote:
> > Enable support for AEAD algorithms in Qualcomm CE driver. The first three
> > patches in this series are cleanups and add a few missing pieces required
> > to add support for AEAD algorithms. Patch 4 introduces supported AEAD
> > transformations on Qualcomm CE. Patches 5 and 6 implement the h/w
> > infrastructure needed to enable and run the AEAD transformations on
> > Qualcomm CE. Patch 7 adds support to queue fallback algorithms in case of
> > unsupported special inputs.
> >
> > This patch series has been tested with the in-kernel crypto testing module
> > tcrypt.ko, with fuzz tests enabled as well.
>
> Hi Herbert,
>
> I have fixed all the review comments on this series and fuzz tests pass as
> well. Do you think this can be merged?

We're still in the merge window, no new features are accepted
at this point. Please be patient.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2021-05-14 11:36:22

by Herbert Xu

Subject: Re: [Patch v4 0/7] Add support for AEAD algorithms in Qualcomm Crypto Engine driver

On Thu, Apr 29, 2021 at 11:07:00AM -0400, Thara Gopinath wrote:
> Enable support for AEAD algorithms in Qualcomm CE driver. The first three
> patches in this series are cleanups and add a few missing pieces required
> to add support for AEAD algorithms. Patch 4 introduces supported AEAD
> transformations on Qualcomm CE. Patches 5 and 6 implement the h/w
> infrastructure needed to enable and run the AEAD transformations on
> Qualcomm CE. Patch 7 adds support to queue fallback algorithms in case of
> unsupported special inputs.
>
> This patch series has been tested with the in-kernel crypto testing module
> tcrypt.ko, with fuzz tests enabled as well.
>
> Thara Gopinath (7):
> crypto: qce: common: Add MAC failed error checking
> crypto: qce: common: Make result dump optional
> crypto: qce: Add mode for rfc4309
> crypto: qce: Add support for AEAD algorithms
> crypto: qce: common: Clean up qce_auth_cfg
> crypto: qce: common: Add support for AEAD algorithms
> crypto: qce: aead: Schedule fallback algorithm
>
> drivers/crypto/Kconfig | 15 +
> drivers/crypto/qce/Makefile | 1 +
> drivers/crypto/qce/aead.c | 841 ++++++++++++++++++++++++++++++++++++
> drivers/crypto/qce/aead.h | 56 +++
> drivers/crypto/qce/common.c | 196 ++++++++-
> drivers/crypto/qce/common.h | 9 +-
> drivers/crypto/qce/core.c | 4 +
> 7 files changed, 1102 insertions(+), 20 deletions(-)
> create mode 100644 drivers/crypto/qce/aead.c
> create mode 100644 drivers/crypto/qce/aead.h

All applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt