2020-01-03 01:03:51

by Iuliana Prodan

Subject: [PATCH v2 00/10] crypto: caam - backlogging support

Integrate the crypto_engine framework into CAAM, to make use of
the engine queue.
Add support for the SKCIPHER, HASH, RSA and AEAD algorithms.
This is intended to be used for CAAM backlogging support.
Requests with the backlog flag set (e.g. from dm-crypt) are queued in
the crypto-engine queue and processed by CAAM when it is free.

While here, I've also done some refactoring.
Patches #1 - #4 refactor caamalg, caamhash and caampkc.
Patches #5, #6 change the return code of the caam_jr_enqueue function
to -EINPROGRESS on success, -ENOSPC when the CAAM is busy, and
-EIO if it cannot map the caller's descriptor.
Also, to keep per-request information, such as the backlog flag, a new
struct is passed as an argument to the enqueue function.
Patches #7 - #10 integrate crypto_engine into CAAM, for the
SKCIPHER/AEAD/RSA/HASH algorithms.
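
For reference, a minimal sketch of the caller-side pattern after patches
#5 and #6 (my_done_cbk and my_unmap below are placeholder names, not
functions from the driver):

	ret = caam_jr_enqueue(jrdev, desc, my_done_cbk, &edesc->jrentry);
	if (ret != -EINPROGRESS) {
		/*
		 * -ENOSPC (CAAM busy) or -EIO (descriptor mapping failed):
		 * the job was not submitted, so unmap and free the extended
		 * descriptor before propagating the error.
		 */
		my_unmap(jrdev, edesc, req);
		kfree(edesc);
	}
	return ret;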

---
Changes since V1:
- remove helper function - akcipher_request_cast;
- remove any references to crypto_async_request,
use specific request type;
- remove the bypass of the crypto-engine queue when the queue is empty;
- update some commit messages;
- remove unrelated changes, like whitespaces;
- squash some changes from patch #9 to patch #6;
- add Reviewed-by tags.
---

Iuliana Prodan (10):
crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt
functions
crypto: caam - refactor ahash_done callbacks
crypto: caam - refactor ahash_edesc_alloc
crypto: caam - refactor RSA private key _done callbacks
crypto: caam - change return code in caam_jr_enqueue function
crypto: caam - refactor caam_jr_enqueue
crypto: caam - support crypto_engine framework for SKCIPHER algorithms
crypto: caam - add crypto_engine support for AEAD algorithms
crypto: caam - add crypto_engine support for RSA algorithms
crypto: caam - add crypto_engine support for HASH algorithms

drivers/crypto/caam/Kconfig | 1 +
drivers/crypto/caam/caamalg.c | 450 +++++++++++++++++++----------------------
drivers/crypto/caam/caamhash.c | 357 +++++++++++++++++---------------
drivers/crypto/caam/caampkc.c | 205 ++++++++++++-------
drivers/crypto/caam/caampkc.h | 22 ++
drivers/crypto/caam/caamrng.c | 4 +-
drivers/crypto/caam/intern.h | 3 +
drivers/crypto/caam/jr.c | 37 +++-
drivers/crypto/caam/key_gen.c | 2 +-
9 files changed, 597 insertions(+), 484 deletions(-)

--
2.1.0


2020-01-03 01:04:25

by Iuliana Prodan

Subject: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

Add crypto_engine support for RSA algorithms, to make use of
the engine queue.
Requests with the backlog flag set are queued in the crypto-engine
queue and processed by CAAM when it is free. Requests without the
backlog flag are sent directly to CAAM.
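
The dispatch decision added here boils down to the following (simplified
sketch of akcipher_enqueue_req() from the diff below):

	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
		/* backloggable requests go through the crypto-engine queue */
		return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
								  req);

	/* all other requests are submitted to the CAAM job ring directly */
	ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);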

Signed-off-by: Iuliana Prodan <[email protected]>
---
drivers/crypto/caam/caampkc.c | 144 ++++++++++++++++++++++++++++++++----------
drivers/crypto/caam/caampkc.h | 8 +++
2 files changed, 120 insertions(+), 32 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 858cc95..82d5b55 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -118,19 +118,28 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
{
struct caam_akcipher_request_entry *jrentry = context;
struct akcipher_request *req = jrentry->base;
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct rsa_edesc *edesc;
int ecode = 0;

if (err)
ecode = caam_jr_strstatus(dev, err);

- edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+ edesc = req_ctx->edesc;

rsa_pub_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, ecode);
+ /*
+ * If no backlog flag, the completion of the request is done
+ * by CAAM, not crypto engine.
+ */
+ if (!jrentry->bklog)
+ akcipher_request_complete(req, ecode);
+ else
+ crypto_finalize_akcipher_request(jrp->engine, req, ecode);
}

static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
@@ -139,15 +148,17 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
struct caam_akcipher_request_entry *jrentry = context;
struct akcipher_request *req = jrentry->base;
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
struct caam_rsa_key *key = &ctx->key;
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct rsa_edesc *edesc;
int ecode = 0;

if (err)
ecode = caam_jr_strstatus(dev, err);

- edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+ edesc = req_ctx->edesc;

switch (key->priv_form) {
case FORM1:
@@ -163,7 +174,14 @@ static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

- akcipher_request_complete(req, ecode);
+ /*
+ * If no backlog flag, the completion of the request is done
+ * by CAAM, not crypto engine.
+ */
+ if (!jrentry->bklog)
+ akcipher_request_complete(req, ecode);
+ else
+ crypto_finalize_akcipher_request(jrp->engine, req, ecode);
}

/**
@@ -312,6 +330,7 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
edesc->dst_nents = dst_nents;

edesc->jrentry.base = req;
+ req_ctx->edesc = edesc;

if (!sec4_sg_bytes)
return edesc;
@@ -343,6 +362,36 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
return ERR_PTR(-ENOMEM);
}

+static int akcipher_do_one_req(struct crypto_engine *engine, void *areq)
+{
+ struct akcipher_request *req = container_of(areq,
+ struct akcipher_request,
+ base);
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
+ struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_akcipher_request_entry *jrentry;
+ struct device *jrdev = ctx->dev;
+ u32 *desc = req_ctx->edesc->hw_desc;
+ int ret;
+
+ jrentry = &req_ctx->edesc->jrentry;
+ jrentry->bklog = true;
+
+ ret = caam_jr_enqueue(jrdev, desc, req_ctx->akcipher_op_done,
+ jrentry);
+
+ if (ret != -EINPROGRESS) {
+ rsa_pub_unmap(jrdev, req_ctx->edesc, req);
+ rsa_io_unmap(jrdev, req_ctx->edesc, req);
+ kfree(req_ctx->edesc);
+ } else {
+ ret = 0;
+ }
+
+ return ret;
+}
+
static int set_rsa_pub_pdb(struct akcipher_request *req,
struct rsa_edesc *edesc)
{
@@ -606,10 +655,50 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
return -ENOMEM;
}

+static int akcipher_enqueue_req(struct device *jrdev, u32 *desc,
+ void (*cbk)(struct device *jrdev, u32 *desc,
+ u32 err, void *context),
+ struct akcipher_request *req,
+ struct rsa_edesc *edesc)
+{
+ struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_key *key = &ctx->key;
+ int ret;
+
+ if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+ return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
+ req);
+ else
+ ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
+
+ if (ret != -EINPROGRESS) {
+ switch (key->priv_form) {
+ case FORM1:
+ rsa_priv_f1_unmap(jrdev, edesc, req);
+ break;
+ case FORM2:
+ rsa_priv_f2_unmap(jrdev, edesc, req);
+ break;
+ case FORM3:
+ rsa_priv_f3_unmap(jrdev, edesc, req);
+ break;
+ default:
+ rsa_pub_unmap(jrdev, edesc, req);
+ }
+ rsa_io_unmap(jrdev, edesc, req);
+ kfree(edesc);
+ }
+
+ return ret;
+}
+
static int caam_rsa_enc(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct caam_rsa_key *key = &ctx->key;
struct device *jrdev = ctx->dev;
struct rsa_edesc *edesc;
@@ -637,13 +726,9 @@ static int caam_rsa_enc(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done,
- &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
-
- rsa_pub_unmap(jrdev, edesc, req);
-
+ req_ctx->akcipher_op_done = rsa_pub_done;
+ return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_pub_done,
+ req, edesc);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -654,6 +739,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct device *jrdev = ctx->dev;
struct rsa_edesc *edesc;
int ret;
@@ -671,13 +757,9 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
- &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
-
- rsa_priv_f1_unmap(jrdev, edesc, req);
-
+ req_ctx->akcipher_op_done = rsa_priv_f_done;
+ return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ req, edesc);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -688,6 +770,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct device *jrdev = ctx->dev;
struct rsa_edesc *edesc;
int ret;
@@ -705,13 +788,9 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
- &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
-
- rsa_priv_f2_unmap(jrdev, edesc, req);
-
+ req_ctx->akcipher_op_done = rsa_priv_f_done;
+ return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ req, edesc);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -722,6 +801,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
struct device *jrdev = ctx->dev;
struct rsa_edesc *edesc;
int ret;
@@ -739,13 +819,9 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
- &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
-
- rsa_priv_f3_unmap(jrdev, edesc, req);
-
+ req_ctx->akcipher_op_done = rsa_priv_f_done;
+ return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ req, edesc);
init_fail:
rsa_io_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -1037,6 +1113,10 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm)
return -ENOMEM;
}

+ ctx->enginectx.op.do_one_request = akcipher_do_one_req;
+
+ akcipher_set_reqsize(tfm, sizeof(struct caam_rsa_req_ctx));
+
return 0;
}

diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
index e0b1076..8e6d7e0 100644
--- a/drivers/crypto/caam/caampkc.h
+++ b/drivers/crypto/caam/caampkc.h
@@ -13,6 +13,7 @@
#include "compat.h"
#include "intern.h"
#include "pdb.h"
+#include <crypto/engine.h>

/**
* caam_priv_key_form - CAAM RSA private key representation
@@ -88,11 +89,13 @@ struct caam_rsa_key {

/**
* caam_rsa_ctx - per session context.
+ * @enginectx : crypto engine context
* @key : RSA key in DMA zone
* @dev : device structure
* @padding_dma : dma address of padding, for adding it to the input
*/
struct caam_rsa_ctx {
+ struct crypto_engine_ctx enginectx;
struct caam_rsa_key key;
struct device *dev;
dma_addr_t padding_dma;
@@ -104,11 +107,16 @@ struct caam_rsa_ctx {
* @src : input scatterlist (stripped of leading zeros)
* @fixup_src : input scatterlist (that might be stripped of leading zeros)
* @fixup_src_len : length of the fixup_src input scatterlist
+ * @edesc : s/w-extended rsa descriptor
+ * @akcipher_op_done : callback used when operation is done
*/
struct caam_rsa_req_ctx {
struct scatterlist src[2];
struct scatterlist *fixup_src;
unsigned int fixup_src_len;
+ struct rsa_edesc *edesc;
+ void (*akcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
+ void *context);
};

/*
--
2.1.0

2020-01-03 01:04:26

by Iuliana Prodan

Subject: [PATCH v2 10/10] crypto: caam - add crypto_engine support for HASH algorithms

Add crypto_engine support for HASH algorithms, to make use of
the engine queue.
Requests with the backlog flag set are queued in the crypto-engine
queue and processed by CAAM when it is free.
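
On the completion side, the ahash done callbacks now distinguish the two
submission paths; a simplified sketch of the check added in the diff below:

	/*
	 * If the request did not carry the backlog flag, it was submitted
	 * directly to CAAM and is completed here; otherwise it went through
	 * the crypto-engine queue and must be finalized by the engine.
	 */
	if (!jrentry->bklog)
		req->base.complete(&req->base, ecode);
	else
		crypto_finalize_hash_request(jrp->engine, req, ecode);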

Signed-off-by: Iuliana Prodan <[email protected]>
---
drivers/crypto/caam/caamhash.c | 175 ++++++++++++++++++++++++++++++-----------
1 file changed, 127 insertions(+), 48 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index f179d39..93af298 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -65,6 +65,7 @@
#include "sg_sw_sec4.h"
#include "key_gen.h"
#include "caamhash_desc.h"
+#include <crypto/engine.h>

#define CAAM_CRA_PRIORITY 3000

@@ -86,6 +87,7 @@ static struct list_head hash_list;

/* ahash per-session context */
struct caam_hash_ctx {
+ struct crypto_engine_ctx enginectx;
u32 sh_desc_update[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
u32 sh_desc_update_first[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
u32 sh_desc_fin[DESC_HASH_MAX_USED_LEN] ____cacheline_aligned;
@@ -111,9 +113,12 @@ struct caam_hash_state {
int buflen;
int next_buflen;
u8 caam_ctx[MAX_CTX_LEN] ____cacheline_aligned;
- int (*update)(struct ahash_request *req);
+ int (*update)(struct ahash_request *req) ____cacheline_aligned;
int (*final)(struct ahash_request *req);
int (*finup)(struct ahash_request *req);
+ struct ahash_edesc *edesc;
+ void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err,
+ void *context);
};

struct caam_export_state {
@@ -123,6 +128,9 @@ struct caam_export_state {
int (*update)(struct ahash_request *req);
int (*final)(struct ahash_request *req);
int (*finup)(struct ahash_request *req);
+ struct ahash_edesc *edesc;
+ void (*ahash_op_done)(struct device *jrdev, u32 *desc, u32 err,
+ void *context);
};

static inline bool is_cmac_aes(u32 algtype)
@@ -588,6 +596,7 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
{
struct caam_ahash_request_entry *jrentry = context;
struct ahash_request *req = jrentry->base;
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
struct ahash_edesc *edesc;
struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
int digestsize = crypto_ahash_digestsize(ahash);
@@ -597,7 +606,8 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

- edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
+ edesc = state->edesc;
+
if (err)
ecode = caam_jr_strstatus(jrdev, err);

@@ -609,7 +619,14 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
ctx->ctx_len, 1);

- req->base.complete(&req->base, ecode);
+ /*
+ * If no backlog flag, the completion of the request is done
+ * by CAAM, not crypto engine.
+ */
+ if (!jrentry->bklog)
+ req->base.complete(&req->base, ecode);
+ else
+ crypto_finalize_hash_request(jrp->engine, req, ecode);
}

static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
@@ -629,6 +646,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
{
struct caam_ahash_request_entry *jrentry = context;
struct ahash_request *req = jrentry->base;
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
struct ahash_edesc *edesc;
struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
@@ -638,7 +656,7 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

- edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
+ edesc = state->edesc;
if (err)
ecode = caam_jr_strstatus(jrdev, err);

@@ -662,7 +680,15 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
DUMP_PREFIX_ADDRESS, 16, 4, req->result,
digestsize, 1);

- req->base.complete(&req->base, ecode);
+ /*
+ * If no backlog flag, the completion of the request is done
+ * by CAAM, not crypto engine.
+ */
+ if (!jrentry->bklog)
+ req->base.complete(&req->base, ecode);
+ else
+ crypto_finalize_hash_request(jrp->engine, req, ecode);
+
}

static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
@@ -687,6 +713,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
{
struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ struct caam_hash_state *state = ahash_request_ctx(req);
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
GFP_KERNEL : GFP_ATOMIC;
struct ahash_edesc *edesc;
@@ -699,6 +726,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
}

edesc->jrentry.base = req;
+ state->edesc = edesc;

init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc),
HDR_SHARE_DEFER | HDR_REVERSE);
@@ -742,6 +770,56 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx,
return 0;
}

+static int ahash_do_one_req(struct crypto_engine *engine, void *areq)
+{
+ struct ahash_request *req = ahash_request_cast(areq);
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct caam_hash_state *state = ahash_request_ctx(req);
+ struct caam_ahash_request_entry *jrentry;
+ struct device *jrdev = ctx->jrdev;
+ u32 *desc = state->edesc->hw_desc;
+ int ret;
+
+ jrentry = &state->edesc->jrentry;
+ jrentry->bklog = true;
+
+ ret = caam_jr_enqueue(jrdev, desc, state->ahash_op_done,
+ jrentry);
+
+ if (ret != -EINPROGRESS) {
+ ahash_unmap(jrdev, state->edesc, req, 0);
+ kfree(state->edesc);
+ } else {
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static int ahash_enqueue_req(struct device *jrdev, u32 *desc,
+ void (*cbk)(struct device *jrdev, u32 *desc,
+ u32 err, void *context),
+ struct ahash_request *req,
+ struct ahash_edesc *edesc,
+ int dst_len, enum dma_data_direction dir)
+{
+ struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
+ int ret;
+
+ if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+ return crypto_transfer_hash_request_to_engine(jrpriv->engine,
+ req);
+ else
+ ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
+
+ if (ret != -EINPROGRESS) {
+ ahash_unmap_ctx(jrdev, edesc, req, dst_len, dir);
+ kfree(edesc);
+ }
+
+ return ret;
+}
+
/* submit update job descriptor */
static int ahash_update_ctx(struct ahash_request *req)
{
@@ -849,10 +927,9 @@ static int ahash_update_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi,
- &edesc->jrentry);
- if (ret != -EINPROGRESS)
- goto unmap_ctx;
+ state->ahash_op_done = ahash_done_bi;
+ ret = ahash_enqueue_req(jrdev, desc, ahash_done_bi, req, edesc,
+ ctx->ctx_len, DMA_BIDIRECTIONAL);
} else if (*next_buflen) {
scatterwalk_map_and_copy(buf + *buflen, req->src, 0,
req->nbytes, 0);
@@ -923,10 +1000,10 @@ static int ahash_final_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
+ state->ahash_op_done = ahash_done_ctx_src;

+ return ahash_enqueue_req(jrdev, desc, ahash_done_ctx_src, req, edesc,
+ digestsize, DMA_BIDIRECTIONAL);
unmap_ctx:
ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
kfree(edesc);
@@ -999,10 +1076,10 @@ static int ahash_finup_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
- if (ret == -EINPROGRESS)
- return ret;
+ state->ahash_op_done = ahash_done_ctx_src;

+ return ahash_enqueue_req(jrdev, desc, ahash_done_ctx_src, req, edesc,
+ digestsize, DMA_BIDIRECTIONAL);
unmap_ctx:
ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
kfree(edesc);
@@ -1071,13 +1148,10 @@ static int ahash_digest(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- kfree(edesc);
- }
+ state->ahash_op_done = ahash_done;

- return ret;
+ return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc,
+ digestsize, DMA_FROM_DEVICE);
}

/* submit ahash final if it the first job descriptor */
@@ -1121,18 +1195,14 @@ static int ahash_final_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- kfree(edesc);
- }
+ state->ahash_op_done = ahash_done;

- return ret;
+ return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc,
+ digestsize, DMA_FROM_DEVICE);
unmap:
ahash_unmap(jrdev, edesc, req, digestsize);
kfree(edesc);
return -ENOMEM;
-
}

/* submit ahash update if it the first job descriptor after update */
@@ -1232,10 +1302,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
- &edesc->jrentry);
- if (ret != -EINPROGRESS)
- goto unmap_ctx;
+ state->ahash_op_done = ahash_done_ctx_dst;
+ ret = ahash_enqueue_req(jrdev, desc, ahash_done_ctx_dst, req,
+ edesc, ctx->ctx_len, DMA_TO_DEVICE);

state->update = ahash_update_ctx;
state->finup = ahash_finup_ctx;
@@ -1324,13 +1393,10 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- kfree(edesc);
- }
+ state->ahash_op_done = ahash_done;

- return ret;
+ return ahash_enqueue_req(jrdev, desc, ahash_done, req, edesc,
+ digestsize, DMA_FROM_DEVICE);
unmap:
ahash_unmap(jrdev, edesc, req, digestsize);
kfree(edesc);
@@ -1418,11 +1484,9 @@ static int ahash_update_first(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
- &edesc->jrentry);
- if (ret != -EINPROGRESS)
- goto unmap_ctx;
-
+ state->ahash_op_done = ahash_done_ctx_dst;
+ ret = ahash_enqueue_req(jrdev, desc, ahash_done_ctx_dst, req,
+ edesc, ctx->ctx_len, DMA_TO_DEVICE);
state->update = ahash_update_ctx;
state->finup = ahash_finup_ctx;
state->final = ahash_final_ctx;
@@ -1502,6 +1566,8 @@ static int ahash_export(struct ahash_request *req, void *out)
export->update = state->update;
export->final = state->final;
export->finup = state->finup;
+ export->edesc = state->edesc;
+ export->ahash_op_done = state->ahash_op_done;

return 0;
}
@@ -1518,6 +1584,8 @@ static int ahash_import(struct ahash_request *req, const void *in)
state->update = export->update;
state->final = export->final;
state->finup = export->finup;
+ state->edesc = export->edesc;
+ state->ahash_op_done = export->ahash_op_done;

return 0;
}
@@ -1777,7 +1845,9 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
}

dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update,
- offsetof(struct caam_hash_ctx, key),
+ offsetof(struct caam_hash_ctx, key) -
+ offsetof(struct caam_hash_ctx,
+ sh_desc_update),
ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
if (dma_mapping_error(ctx->jrdev, dma_addr)) {
dev_err(ctx->jrdev, "unable to map shared descriptors\n");
@@ -1795,11 +1865,19 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
ctx->sh_desc_update_dma = dma_addr;
ctx->sh_desc_update_first_dma = dma_addr +
offsetof(struct caam_hash_ctx,
- sh_desc_update_first);
+ sh_desc_update_first) -
+ offsetof(struct caam_hash_ctx,
+ sh_desc_update);
ctx->sh_desc_fin_dma = dma_addr + offsetof(struct caam_hash_ctx,
- sh_desc_fin);
+ sh_desc_fin) -
+ offsetof(struct caam_hash_ctx,
+ sh_desc_update);
ctx->sh_desc_digest_dma = dma_addr + offsetof(struct caam_hash_ctx,
- sh_desc_digest);
+ sh_desc_digest) -
+ offsetof(struct caam_hash_ctx,
+ sh_desc_update);
+
+ ctx->enginectx.op.do_one_request = ahash_do_one_req;

crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct caam_hash_state));
@@ -1816,7 +1894,8 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm);

dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma,
- offsetof(struct caam_hash_ctx, key),
+ offsetof(struct caam_hash_ctx, key) -
+ offsetof(struct caam_hash_ctx, sh_desc_update),
ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
if (ctx->key_dir != DMA_NONE)
dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
--
2.1.0

2020-01-03 01:04:43

by Iuliana Prodan

Subject: [PATCH v2 08/10] crypto: caam - add crypto_engine support for AEAD algorithms

Add crypto_engine support for AEAD algorithms, to make use of
the engine queue.
Requests with the backlog flag set are queued in the crypto-engine
queue and processed by CAAM when it is free.
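
The crypto-engine callback added for AEAD follows the same pattern as the
other algorithms; a simplified sketch of the core of aead_do_one_req()
(full version in the diff below):

	/* mark the request so the done callback finalizes it via the engine */
	jrentry = &rctx->edesc->jrentry;
	jrentry->bklog = true;

	ret = caam_jr_enqueue(ctx->jrdev, rctx->edesc->hw_desc,
			      rctx->aead_op_done, jrentry);
	if (ret != -EINPROGRESS) {
		/* submission failed: unmap, free and report the error */
		aead_unmap(ctx->jrdev, rctx->edesc, req);
		kfree(rctx->edesc);
	} else {
		ret = 0;	/* 0 tells the engine the request was accepted */
	}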

Signed-off-by: Iuliana Prodan <[email protected]>
---
drivers/crypto/caam/caamalg.c | 106 ++++++++++++++++++++++++++++++------------
1 file changed, 76 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 8911c04..7cefd0a 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -122,6 +122,12 @@ struct caam_skcipher_req_ctx {
void *context);
};

+struct caam_aead_req_ctx {
+ struct aead_edesc *edesc;
+ void (*aead_op_done)(struct device *jrdev, u32 *desc, u32 err,
+ void *context);
+};
+
static int aead_null_set_sh_desc(struct crypto_aead *aead)
{
struct caam_ctx *ctx = crypto_aead_ctx(aead);
@@ -999,12 +1005,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
{
struct caam_aead_request_entry *jrentry = context;
struct aead_request *req = jrentry->base;
+ struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
+ struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);
struct aead_edesc *edesc;
int ecode = 0;

dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);

- edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
+ edesc = rctx->edesc;

if (err)
ecode = caam_jr_strstatus(jrdev, err);
@@ -1013,7 +1021,14 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,

kfree(edesc);

- aead_request_complete(req, ecode);
+ /*
+ * If no backlog flag, the completion of the request is done
+ * by CAAM, not crypto engine.
+ */
+ if (!jrentry->bklog)
+ aead_request_complete(req, ecode);
+ else
+ crypto_finalize_aead_request(jrp->engine, req, ecode);
}

static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
@@ -1309,6 +1324,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
struct crypto_aead *aead = crypto_aead_reqtfm(req);
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
+ struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
GFP_KERNEL : GFP_ATOMIC;
int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
@@ -1411,6 +1427,9 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
desc_bytes;
edesc->jrentry.base = req;

+ rctx->edesc = edesc;
+ rctx->aead_op_done = aead_crypt_done;
+
*all_contig_ptr = !(mapped_src_nents > 1);

sec4_sg_index = 0;
@@ -1441,6 +1460,28 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
return edesc;
}

+static int aead_enqueue_req(struct device *jrdev, u32 *desc,
+ void (*cbk)(struct device *jrdev, u32 *desc,
+ u32 err, void *context),
+ struct aead_request *req, struct aead_edesc *edesc)
+{
+ struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
+ int ret;
+
+ if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+ return crypto_transfer_aead_request_to_engine(jrpriv->engine,
+ req);
+ else
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done,
+ &edesc->jrentry);
+ if (ret != -EINPROGRESS) {
+ aead_unmap(jrdev, edesc, req);
+ kfree(edesc);
+ }
+
+ return ret;
+}
+
static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
{
struct aead_edesc *edesc;
@@ -1449,7 +1490,6 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
struct device *jrdev = ctx->jrdev;
bool all_contig;
u32 *desc;
- int ret;

edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig,
encrypt);
@@ -1463,13 +1503,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
-
- return ret;
+ return aead_enqueue_req(jrdev, desc, aead_crypt_done, req, edesc);
}

static int chachapoly_encrypt(struct aead_request *req)
@@ -1489,8 +1523,6 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
bool all_contig;
- u32 *desc;
- int ret = 0;

/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN,
@@ -1505,14 +1537,8 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
desc_bytes(edesc->hw_desc), 1);

- desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
-
- return ret;
+ return aead_enqueue_req(jrdev, edesc->hw_desc, aead_crypt_done, req,
+ edesc);
}

static int aead_encrypt(struct aead_request *req)
@@ -1525,6 +1551,30 @@ static int aead_decrypt(struct aead_request *req)
return aead_crypt(req, false);
}

+static int aead_do_one_req(struct crypto_engine *engine, void *areq)
+{
+ struct aead_request *req = aead_request_cast(areq);
+ struct caam_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
+ struct caam_aead_req_ctx *rctx = aead_request_ctx(req);
+ struct caam_aead_request_entry *jrentry;
+ u32 *desc = rctx->edesc->hw_desc;
+ int ret;
+
+ jrentry = &rctx->edesc->jrentry;
+ jrentry->bklog = true;
+
+ ret = caam_jr_enqueue(ctx->jrdev, desc, rctx->aead_op_done, jrentry);
+
+ if (ret != -EINPROGRESS) {
+ aead_unmap(ctx->jrdev, rctx->edesc, req);
+ kfree(rctx->edesc);
+ } else {
+ ret = 0;
+ }
+
+ return ret;
+}
+
static inline int gcm_crypt(struct aead_request *req, bool encrypt)
{
struct aead_edesc *edesc;
@@ -1532,8 +1582,6 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
bool all_contig;
- u32 *desc;
- int ret = 0;

/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig,
@@ -1548,14 +1596,8 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
desc_bytes(edesc->hw_desc), 1);

- desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
- if (ret != -EINPROGRESS) {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
-
- return ret;
+ return aead_enqueue_req(jrdev, edesc->hw_desc, aead_crypt_done, req,
+ edesc);
}

static int gcm_encrypt(struct aead_request *req)
@@ -3385,6 +3427,10 @@ static int caam_aead_init(struct crypto_aead *tfm)
container_of(alg, struct caam_aead_alg, aead);
struct caam_ctx *ctx = crypto_aead_ctx(tfm);

+ crypto_aead_set_reqsize(tfm, sizeof(struct caam_aead_req_ctx));
+
+ ctx->enginectx.op.do_one_request = aead_do_one_req;
+
return caam_init_common(ctx, &caam_alg->caam, !caam_alg->caam.nodkp);
}

--
2.1.0

2020-01-03 01:04:49

by Iuliana Prodan

Subject: [PATCH v2 06/10] crypto: caam - refactor caam_jr_enqueue

Add a new struct - caam_{skcipher, akcipher, ahash, aead}_request_entry -
to keep per-request information. It holds the specific crypto request and
a bool that records whether the request has the backlog flag set.
This struct is passed to CAAM via the enqueue function, caam_jr_enqueue.

This is done for later use, for the backlogging support in CAAM.
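
Taking the aead case as an example, the new per-request entry and the way
the completion callback recovers the request from its context look like
this (see the diff below):

	struct caam_aead_request_entry {
		struct aead_request *base;	/* the crypto API request */
		bool bklog;		/* true if the request needs backlog */
	};

	static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
				    void *context)
	{
		struct caam_aead_request_entry *jrentry = context;
		struct aead_request *req = jrentry->base;
		/* ... unmap, free and complete the request as before ... */
	}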

Signed-off-by: Iuliana Prodan <[email protected]>
---
drivers/crypto/caam/caamalg.c | 44 ++++++++++++++++++++++++++++++++++++------
drivers/crypto/caam/caamhash.c | 40 ++++++++++++++++++++++++++++----------
drivers/crypto/caam/caampkc.c | 20 +++++++++++++------
drivers/crypto/caam/caampkc.h | 14 ++++++++++++++
drivers/crypto/caam/intern.h | 1 +
5 files changed, 97 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 21b6172..34662b4 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -871,6 +871,17 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
}

/*
+ * caam_aead_request_entry - storage for tracking each aead request
+ * that is processed by a ring
+ * @base: common attributes for aead requests
+ * @bklog: stored to determine if the request needs backlog
+ */
+struct caam_aead_request_entry {
+ struct aead_request *base;
+ bool bklog;
+};
+
+/*
* aead_edesc - s/w-extended aead descriptor
* @src_nents: number of segments in input s/w scatterlist
* @dst_nents: number of segments in output s/w scatterlist
@@ -878,6 +889,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
* @mapped_dst_nents: number of segments in output h/w link table
* @sec4_sg_bytes: length of dma mapped sec4_sg space
* @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
* @sec4_sg: pointer to h/w link table
* @hw_desc: the h/w job descriptor followed by any referenced link tables
*/
@@ -888,11 +900,23 @@ struct aead_edesc {
int mapped_dst_nents;
int sec4_sg_bytes;
dma_addr_t sec4_sg_dma;
+ struct caam_aead_request_entry jrentry;
struct sec4_sg_entry *sec4_sg;
u32 hw_desc[];
};

/*
+ * caam_skcipher_request_entry - storage for tracking each skcipher request
+ * that is processed by a ring
+ * @base: common attributes for skcipher requests
+ * @bklog: stored to determine if the request needs backlog
+ */
+struct caam_skcipher_request_entry {
+ struct skcipher_request *base;
+ bool bklog;
+};
+
+/*
* skcipher_edesc - s/w-extended skcipher descriptor
* @src_nents: number of segments in input s/w scatterlist
* @dst_nents: number of segments in output s/w scatterlist
@@ -901,6 +925,7 @@ struct aead_edesc {
* @iv_dma: dma address of iv for checking continuity and link table
* @sec4_sg_bytes: length of dma mapped sec4_sg space
* @sec4_sg_dma: bus physical mapped address of h/w link table
+ * @jrentry: information about the current request that is processed by a ring
* @sec4_sg: pointer to h/w link table
* @hw_desc: the h/w job descriptor followed by any referenced link tables
* and IV
@@ -913,6 +938,7 @@ struct skcipher_edesc {
dma_addr_t iv_dma;
int sec4_sg_bytes;
dma_addr_t sec4_sg_dma;
+ struct caam_skcipher_request_entry jrentry;
struct sec4_sg_entry *sec4_sg;
u32 hw_desc[0];
};
@@ -963,7 +989,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
void *context)
{
- struct aead_request *req = context;
+ struct caam_aead_request_entry *jrentry = context;
+ struct aead_request *req = jrentry->base;
struct aead_edesc *edesc;
int ecode = 0;

@@ -984,7 +1011,8 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
void *context)
{
- struct skcipher_request *req = context;
+ struct caam_skcipher_request_entry *jrentry = context;
+ struct skcipher_request *req = jrentry->base;
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
int ivsize = crypto_skcipher_ivsize(skcipher);
@@ -1364,6 +1392,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
edesc->mapped_dst_nents = mapped_dst_nents;
edesc->sec4_sg = (void *)edesc + sizeof(struct aead_edesc) +
desc_bytes;
+ edesc->jrentry.base = req;
+
*all_contig_ptr = !(mapped_src_nents > 1);

sec4_sg_index = 0;
@@ -1416,7 +1446,7 @@ static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
aead_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -1459,7 +1489,7 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt)
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
aead_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -1502,7 +1532,7 @@ static inline int gcm_crypt(struct aead_request *req, bool encrypt)
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
aead_unmap(jrdev, edesc, req);
kfree(edesc);
@@ -1637,6 +1667,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
desc_bytes);
+ edesc->jrentry.base = req;

/* Make sure IV is located in a DMAable area */
if (ivsize) {
@@ -1717,8 +1748,9 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);

+ ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done,
+ &edesc->jrentry);
if (ret != -EINPROGRESS) {
skcipher_unmap(jrdev, edesc, req);
kfree(edesc);
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index b019d7e..f179d39 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -522,10 +522,22 @@ static int acmac_setkey(struct crypto_ahash *ahash, const u8 *key,
}

/*
+ * caam_ahash_request_entry - storage for tracking each ahash request that
+ * is processed by a ring
+ * @base: common attributes for ahash requests
+ * @bklog: stored to determine if the request needs backlog
+ */
+struct caam_ahash_request_entry {
+ struct ahash_request *base;
+ bool bklog;
+};
+
+/*
* ahash_edesc - s/w-extended ahash descriptor
* @sec4_sg_dma: physical mapped address of h/w link table
* @src_nents: number of segments in input scatterlist
* @sec4_sg_bytes: length of dma mapped sec4_sg space
+ * @jrentry: information about the current request that is processed by a ring
* @hw_desc: the h/w job descriptor followed by any referenced link tables
* @sec4_sg: h/w link table
*/
@@ -533,6 +545,7 @@ struct ahash_edesc {
dma_addr_t sec4_sg_dma;
int src_nents;
int sec4_sg_bytes;
+ struct caam_ahash_request_entry jrentry;
u32 hw_desc[DESC_JOB_IO_LEN_MAX / sizeof(u32)] ____cacheline_aligned;
struct sec4_sg_entry sec4_sg[0];
};
@@ -573,7 +586,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
void *context, enum dma_data_direction dir)
{
- struct ahash_request *req = context;
+ struct caam_ahash_request_entry *jrentry = context;
+ struct ahash_request *req = jrentry->base;
struct ahash_edesc *edesc;
struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
int digestsize = crypto_ahash_digestsize(ahash);
@@ -613,7 +627,8 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
void *context, enum dma_data_direction dir)
{
- struct ahash_request *req = context;
+ struct caam_ahash_request_entry *jrentry = context;
+ struct ahash_request *req = jrentry->base;
struct ahash_edesc *edesc;
struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
@@ -683,6 +698,8 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
return NULL;
}

+ edesc->jrentry.base = req;
+
init_job_desc_shared(edesc->hw_desc, sh_desc_dma, desc_len(sh_desc),
HDR_SHARE_DEFER | HDR_REVERSE);

@@ -832,7 +849,8 @@ static int ahash_update_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi,
+ &edesc->jrentry);
if (ret != -EINPROGRESS)
goto unmap_ctx;
} else if (*next_buflen) {
@@ -905,7 +923,7 @@ static int ahash_final_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

@@ -981,7 +999,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

@@ -1053,7 +1071,7 @@ static int ahash_digest(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
kfree(edesc);
@@ -1103,7 +1121,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
kfree(edesc);
@@ -1214,7 +1232,8 @@ static int ahash_update_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+ &edesc->jrentry);
if (ret != -EINPROGRESS)
goto unmap_ctx;

@@ -1305,7 +1324,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done, &edesc->jrentry);
if (ret != -EINPROGRESS) {
ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
kfree(edesc);
@@ -1399,7 +1418,8 @@ static int ahash_update_first(struct ahash_request *req)
DUMP_PREFIX_ADDRESS, 16, 4, desc,
desc_bytes(desc), 1);

- ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+ ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+ &edesc->jrentry);
if (ret != -EINPROGRESS)
goto unmap_ctx;

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 7f7ea32..858cc95 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -116,7 +116,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
/* RSA Job Completion handler */
static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
{
- struct akcipher_request *req = context;
+ struct caam_akcipher_request_entry *jrentry = context;
+ struct akcipher_request *req = jrentry->base;
struct rsa_edesc *edesc;
int ecode = 0;

@@ -135,7 +136,8 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
void *context)
{
- struct akcipher_request *req = context;
+ struct caam_akcipher_request_entry *jrentry = context;
+ struct akcipher_request *req = jrentry->base;
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
struct caam_rsa_key *key = &ctx->key;
@@ -309,6 +311,8 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
edesc->src_nents = src_nents;
edesc->dst_nents = dst_nents;

+ edesc->jrentry.base = req;
+
if (!sec4_sg_bytes)
return edesc;

@@ -633,7 +637,8 @@ static int caam_rsa_enc(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done,
+ &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

@@ -666,7 +671,8 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

@@ -699,7 +705,8 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

@@ -732,7 +739,8 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done,
+ &edesc->jrentry);
if (ret == -EINPROGRESS)
return ret;

diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
index c68fb4c..e0b1076 100644
--- a/drivers/crypto/caam/caampkc.h
+++ b/drivers/crypto/caam/caampkc.h
@@ -11,6 +11,7 @@
#ifndef _PKC_DESC_H_
#define _PKC_DESC_H_
#include "compat.h"
+#include "intern.h"
#include "pdb.h"

/**
@@ -110,6 +111,17 @@ struct caam_rsa_req_ctx {
unsigned int fixup_src_len;
};

+/*
+ * caam_akcipher_request_entry - Storage for tracking each akcipher request
+ * that is processed by a ring
+ * @base: common attributes for akcipher requests
+ * @bklog: stored to determine if the request needs backlog
+ */
+struct caam_akcipher_request_entry {
+ struct akcipher_request *base;
+ bool bklog;
+};
+
/**
* rsa_edesc - s/w-extended rsa descriptor
* @src_nents : number of segments in input s/w scatterlist
@@ -118,6 +130,7 @@ struct caam_rsa_req_ctx {
* @mapped_dst_nents: number of segments in output h/w link table
* @sec4_sg_bytes : length of h/w link table
* @sec4_sg_dma : dma address of h/w link table
+ * @jrentry : info about the current request that is processed by a ring
* @sec4_sg : pointer to h/w link table
* @pdb : specific RSA Protocol Data Block (PDB)
* @hw_desc : descriptor followed by link tables if any
@@ -129,6 +142,7 @@ struct rsa_edesc {
int mapped_dst_nents;
int sec4_sg_bytes;
dma_addr_t sec4_sg_dma;
+ struct caam_akcipher_request_entry jrentry;
struct sec4_sg_entry *sec4_sg;
union {
struct rsa_pub_pdb pub;
diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index c7c10c9..8ca884b 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -11,6 +11,7 @@
#define INTERN_H

#include "ctrl.h"
+#include "regs.h"

/* Currently comes from Kconfig param as a ^2 (driver-required) */
#define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
--
2.1.0

2020-01-03 01:04:55

by Iuliana Prodan

Subject: [PATCH v2 03/10] crypto: caam - refactor ahash_edesc_alloc

Change the parameters of the ahash_edesc_alloc function:
- remove the gfp flags argument, since the flags can be computed in
ahash_edesc_alloc, the only place they are needed;
- pass the ahash_request instead of caam_hash_ctx, to be
able to compute the gfp flags.
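
The resulting prototype change, with the gfp flags now derived inside the
helper (from the diff below):

	/* old: the caller computed and passed the gfp flags */
	static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
						     int sg_num, u32 *sh_desc,
						     dma_addr_t sh_desc_dma,
						     gfp_t flags);

	/* new: the flags are computed from the request, inside the helper */
	static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
						     int sg_num, u32 *sh_desc,
						     dma_addr_t sh_desc_dma);

	/* inside ahash_edesc_alloc(): */
	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		      GFP_KERNEL : GFP_ATOMIC;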

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamhash.c | 62 +++++++++++++++---------------------------
1 file changed, 22 insertions(+), 40 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 84be6db..844e391 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -666,11 +666,14 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
* Allocate an enhanced descriptor, which contains the hardware descriptor
* and space for hardware scatter table containing sg_num entries.
*/
-static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
+static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req,
int sg_num, u32 *sh_desc,
- dma_addr_t sh_desc_dma,
- gfp_t flags)
+ dma_addr_t sh_desc_dma)
{
+ struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+ GFP_KERNEL : GFP_ATOMIC;
struct ahash_edesc *edesc;
unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry);

@@ -729,8 +732,6 @@ static int ahash_update_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
u8 *buf = state->buf;
int *buflen = &state->buflen;
int *next_buflen = &state->next_buflen;
@@ -784,8 +785,8 @@ static int ahash_update_ctx(struct ahash_request *req)
* allocate space for base edesc and hw desc commands,
* link tables
*/
- edesc = ahash_edesc_alloc(ctx, pad_nents, ctx->sh_desc_update,
- ctx->sh_desc_update_dma, flags);
+ edesc = ahash_edesc_alloc(req, pad_nents, ctx->sh_desc_update,
+ ctx->sh_desc_update_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
@@ -859,8 +860,6 @@ static int ahash_final_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
int buflen = state->buflen;
u32 *desc;
int sec4_sg_bytes;
@@ -872,8 +871,8 @@ static int ahash_final_ctx(struct ahash_request *req)
sizeof(struct sec4_sg_entry);

/* allocate space for base edesc and hw desc commands, link tables */
- edesc = ahash_edesc_alloc(ctx, 4, ctx->sh_desc_fin,
- ctx->sh_desc_fin_dma, flags);
+ edesc = ahash_edesc_alloc(req, 4, ctx->sh_desc_fin,
+ ctx->sh_desc_fin_dma);
if (!edesc)
return -ENOMEM;

@@ -925,8 +924,6 @@ static int ahash_finup_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
int buflen = state->buflen;
u32 *desc;
int sec4_sg_src_index;
@@ -955,9 +952,8 @@ static int ahash_finup_ctx(struct ahash_request *req)
sec4_sg_src_index = 1 + (buflen ? 1 : 0);

/* allocate space for base edesc and hw desc commands, link tables */
- edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
- ctx->sh_desc_fin, ctx->sh_desc_fin_dma,
- flags);
+ edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+ ctx->sh_desc_fin, ctx->sh_desc_fin_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
@@ -1005,8 +1001,6 @@ static int ahash_digest(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
u32 *desc;
int digestsize = crypto_ahash_digestsize(ahash);
int src_nents, mapped_nents;
@@ -1033,9 +1027,8 @@ static int ahash_digest(struct ahash_request *req)
}

/* allocate space for base edesc and hw desc commands, link tables */
- edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? mapped_nents : 0,
- ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
- flags);
+ edesc = ahash_edesc_alloc(req, mapped_nents > 1 ? mapped_nents : 0,
+ ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
@@ -1082,8 +1075,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
u8 *buf = state->buf;
int buflen = state->buflen;
u32 *desc;
@@ -1092,8 +1083,8 @@ static int ahash_final_no_ctx(struct ahash_request *req)
int ret;

/* allocate space for base edesc and hw desc commands, link tables */
- edesc = ahash_edesc_alloc(ctx, 0, ctx->sh_desc_digest,
- ctx->sh_desc_digest_dma, flags);
+ edesc = ahash_edesc_alloc(req, 0, ctx->sh_desc_digest,
+ ctx->sh_desc_digest_dma);
if (!edesc)
return -ENOMEM;

@@ -1141,8 +1132,6 @@ static int ahash_update_no_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
u8 *buf = state->buf;
int *buflen = &state->buflen;
int *next_buflen = &state->next_buflen;
@@ -1195,10 +1184,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
* allocate space for base edesc and hw desc commands,
* link tables
*/
- edesc = ahash_edesc_alloc(ctx, pad_nents,
+ edesc = ahash_edesc_alloc(req, pad_nents,
ctx->sh_desc_update_first,
- ctx->sh_desc_update_first_dma,
- flags);
+ ctx->sh_desc_update_first_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
@@ -1266,8 +1254,6 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
int buflen = state->buflen;
u32 *desc;
int sec4_sg_bytes, sec4_sg_src_index, src_nents, mapped_nents;
@@ -1297,9 +1283,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
sizeof(struct sec4_sg_entry);

/* allocate space for base edesc and hw desc commands, link tables */
- edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
- ctx->sh_desc_digest, ctx->sh_desc_digest_dma,
- flags);
+ edesc = ahash_edesc_alloc(req, sec4_sg_src_index + mapped_nents,
+ ctx->sh_desc_digest, ctx->sh_desc_digest_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
@@ -1352,8 +1337,6 @@ static int ahash_update_first(struct ahash_request *req)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct caam_hash_state *state = ahash_request_ctx(req);
struct device *jrdev = ctx->jrdev;
- gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
- GFP_KERNEL : GFP_ATOMIC;
u8 *buf = state->buf;
int *buflen = &state->buflen;
int *next_buflen = &state->next_buflen;
@@ -1401,11 +1384,10 @@ static int ahash_update_first(struct ahash_request *req)
* allocate space for base edesc and hw desc commands,
* link tables
*/
- edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ?
+ edesc = ahash_edesc_alloc(req, mapped_nents > 1 ?
mapped_nents : 0,
ctx->sh_desc_update_first,
- ctx->sh_desc_update_first_dma,
- flags);
+ ctx->sh_desc_update_first_dma);
if (!edesc) {
dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
return -ENOMEM;
--
2.1.0

2020-01-03 01:04:57

by Iuliana Prodan

Subject: [PATCH v2 02/10] crypto: caam - refactor ahash_done callbacks

Create two common ahash_done_* helpers that take the DMA direction as a
parameter. The existing callbacks then call these helpers with the proper
direction for unmapping.
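
After the change, the per-direction callbacks become thin wrappers around
the common helpers (from the diff below):

	static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
			       void *context)
	{
		/* the digest was mapped DMA_FROM_DEVICE */
		ahash_done_cpy(jrdev, desc, err, context, DMA_FROM_DEVICE);
	}

	static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
				       void *context)
	{
		/* the context was mapped DMA_BIDIRECTIONAL */
		ahash_done_cpy(jrdev, desc, err, context, DMA_BIDIRECTIONAL);
	}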

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamhash.c | 88 +++++++++++-------------------------------
1 file changed, 22 insertions(+), 66 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 50a8852..84be6db 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -570,8 +570,8 @@ static inline void ahash_unmap_ctx(struct device *dev,
ahash_unmap(dev, edesc, req, dst_len);
}

-static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
- void *context)
+static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err,
+ void *context, enum dma_data_direction dir)
{
struct ahash_request *req = context;
struct ahash_edesc *edesc;
@@ -587,7 +587,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
if (err)
ecode = caam_jr_strstatus(jrdev, err);

- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ ahash_unmap_ctx(jrdev, edesc, req, digestsize, dir);
memcpy(req->result, state->caam_ctx, digestsize);
kfree(edesc);

@@ -598,76 +598,20 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
req->base.complete(&req->base, ecode);
}

-static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
- void *context)
+static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ void *context)
{
- struct ahash_request *req = context;
- struct ahash_edesc *edesc;
- struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
- struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
- struct caam_hash_state *state = ahash_request_ctx(req);
- int digestsize = crypto_ahash_digestsize(ahash);
- int ecode = 0;
-
- dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
- edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
- if (err)
- ecode = caam_jr_strstatus(jrdev, err);
-
- ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
- kfree(edesc);
-
- scatterwalk_map_and_copy(state->buf, req->src,
- req->nbytes - state->next_buflen,
- state->next_buflen, 0);
- state->buflen = state->next_buflen;
-
- print_hex_dump_debug("buf@" __stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, state->buf,
- state->buflen, 1);
-
- print_hex_dump_debug("ctx@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
- ctx->ctx_len, 1);
- if (req->result)
- print_hex_dump_debug("result@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, req->result,
- digestsize, 1);
-
- req->base.complete(&req->base, ecode);
+ ahash_done_cpy(jrdev, desc, err, context, DMA_FROM_DEVICE);
}

static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
void *context)
{
- struct ahash_request *req = context;
- struct ahash_edesc *edesc;
- struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
- int digestsize = crypto_ahash_digestsize(ahash);
- struct caam_hash_state *state = ahash_request_ctx(req);
- struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
- int ecode = 0;
-
- dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
- edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
- if (err)
- ecode = caam_jr_strstatus(jrdev, err);
-
- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
- memcpy(req->result, state->caam_ctx, digestsize);
- kfree(edesc);
-
- print_hex_dump_debug("ctx@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
- ctx->ctx_len, 1);
-
- req->base.complete(&req->base, ecode);
+ ahash_done_cpy(jrdev, desc, err, context, DMA_BIDIRECTIONAL);
}

-static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
- void *context)
+static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err,
+ void *context, enum dma_data_direction dir)
{
struct ahash_request *req = context;
struct ahash_edesc *edesc;
@@ -683,7 +627,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
if (err)
ecode = caam_jr_strstatus(jrdev, err);

- ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE);
+ ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, dir);
kfree(edesc);

scatterwalk_map_and_copy(state->buf, req->src,
@@ -706,6 +650,18 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
req->base.complete(&req->base, ecode);
}

+static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
+ void *context)
+{
+ ahash_done_switch(jrdev, desc, err, context, DMA_BIDIRECTIONAL);
+}
+
+static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
+ void *context)
+{
+ ahash_done_switch(jrdev, desc, err, context, DMA_FROM_DEVICE);
+}
+
/*
* Allocate an enhanced descriptor, which contains the hardware descriptor
* and space for hardware scatter table containing sg_num entries.
--
2.1.0

2020-01-03 01:05:10

by Iuliana Prodan

[permalink] [raw]
Subject: [PATCH v2 01/10] crypto: caam - refactor skcipher/aead/gcm/chachapoly {en,de}crypt functions

Create a common crypt function for each of the skcipher/aead/gcm/chachapoly
algorithms and call it for encrypt/decrypt with a boolean flag:
true for encrypt and false for decrypt.

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamalg.c | 268 +++++++++---------------------------------
1 file changed, 53 insertions(+), 215 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 2912006..6e021692 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -960,8 +960,8 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
}

-static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
- void *context)
+static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
+ void *context)
{
struct aead_request *req = context;
struct aead_edesc *edesc;
@@ -981,69 +981,8 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
aead_request_complete(req, ecode);
}

-static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
- void *context)
-{
- struct aead_request *req = context;
- struct aead_edesc *edesc;
- int ecode = 0;
-
- dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
- edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
-
- if (err)
- ecode = caam_jr_strstatus(jrdev, err);
-
- aead_unmap(jrdev, edesc, req);
-
- kfree(edesc);
-
- aead_request_complete(req, ecode);
-}
-
-static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
- void *context)
-{
- struct skcipher_request *req = context;
- struct skcipher_edesc *edesc;
- struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
- int ivsize = crypto_skcipher_ivsize(skcipher);
- int ecode = 0;
-
- dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
-
- edesc = container_of(desc, struct skcipher_edesc, hw_desc[0]);
-
- if (err)
- ecode = caam_jr_strstatus(jrdev, err);
-
- skcipher_unmap(jrdev, edesc, req);
-
- /*
- * The crypto API expects us to set the IV (req->iv) to the last
- * ciphertext block (CBC mode) or last counter (CTR mode).
- * This is used e.g. by the CTS mode.
- */
- if (ivsize && !ecode) {
- memcpy(req->iv, (u8 *)edesc->sec4_sg + edesc->sec4_sg_bytes,
- ivsize);
- print_hex_dump_debug("dstiv @"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, req->iv,
- edesc->src_nents > 1 ? 100 : ivsize, 1);
- }
-
- caam_dump_sg("dst @" __stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, req->dst,
- edesc->dst_nents > 1 ? 100 : req->cryptlen, 1);
-
- kfree(edesc);
-
- skcipher_request_complete(req, ecode);
-}
-
-static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
- void *context)
+static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
+ void *context)
{
struct skcipher_request *req = context;
struct skcipher_edesc *edesc;
@@ -1455,41 +1394,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
return edesc;
}

-static int gcm_encrypt(struct aead_request *req)
-{
- struct aead_edesc *edesc;
- struct crypto_aead *aead = crypto_aead_reqtfm(req);
- struct caam_ctx *ctx = crypto_aead_ctx(aead);
- struct device *jrdev = ctx->jrdev;
- bool all_contig;
- u32 *desc;
- int ret = 0;
-
- /* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, true);
- if (IS_ERR(edesc))
- return PTR_ERR(edesc);
-
- /* Create and submit job descriptor */
- init_gcm_job(req, edesc, all_contig, true);
-
- print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
-
- desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
- if (!ret) {
- ret = -EINPROGRESS;
- } else {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
-
- return ret;
-}
-
-static int chachapoly_encrypt(struct aead_request *req)
+static inline int chachapoly_crypt(struct aead_request *req, bool encrypt)
{
struct aead_edesc *edesc;
struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1500,18 +1405,18 @@ static int chachapoly_encrypt(struct aead_request *req)
int ret;

edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig,
- true);
+ encrypt);
if (IS_ERR(edesc))
return PTR_ERR(edesc);

desc = edesc->hw_desc;

- init_chachapoly_job(req, edesc, all_contig, true);
+ init_chachapoly_job(req, edesc, all_contig, encrypt);
print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
1);

- ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
if (!ret) {
ret = -EINPROGRESS;
} else {
@@ -1522,45 +1427,17 @@ static int chachapoly_encrypt(struct aead_request *req)
return ret;
}

-static int chachapoly_decrypt(struct aead_request *req)
+static int chachapoly_encrypt(struct aead_request *req)
{
- struct aead_edesc *edesc;
- struct crypto_aead *aead = crypto_aead_reqtfm(req);
- struct caam_ctx *ctx = crypto_aead_ctx(aead);
- struct device *jrdev = ctx->jrdev;
- bool all_contig;
- u32 *desc;
- int ret;
-
- edesc = aead_edesc_alloc(req, CHACHAPOLY_DESC_JOB_IO_LEN, &all_contig,
- false);
- if (IS_ERR(edesc))
- return PTR_ERR(edesc);
-
- desc = edesc->hw_desc;
-
- init_chachapoly_job(req, edesc, all_contig, false);
- print_hex_dump_debug("chachapoly jobdesc@" __stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
- 1);
-
- ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
- if (!ret) {
- ret = -EINPROGRESS;
- } else {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
-
- return ret;
+ return chachapoly_crypt(req, true);
}

-static int ipsec_gcm_encrypt(struct aead_request *req)
+static int chachapoly_decrypt(struct aead_request *req)
{
- return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req);
+ return chachapoly_crypt(req, false);
}

-static int aead_encrypt(struct aead_request *req)
+static inline int aead_crypt(struct aead_request *req, bool encrypt)
{
struct aead_edesc *edesc;
struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1572,19 +1449,19 @@ static int aead_encrypt(struct aead_request *req)

/* allocate extended descriptor */
edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN,
- &all_contig, true);
+ &all_contig, encrypt);
if (IS_ERR(edesc))
return PTR_ERR(edesc);

/* Create and submit job descriptor */
- init_authenc_job(req, edesc, all_contig, true);
+ init_authenc_job(req, edesc, all_contig, encrypt);

print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
if (!ret) {
ret = -EINPROGRESS;
} else {
@@ -1595,7 +1472,17 @@ static int aead_encrypt(struct aead_request *req)
return ret;
}

-static int gcm_decrypt(struct aead_request *req)
+static int aead_encrypt(struct aead_request *req)
+{
+ return aead_crypt(req, true);
+}
+
+static int aead_decrypt(struct aead_request *req)
+{
+ return aead_crypt(req, false);
+}
+
+static inline int gcm_crypt(struct aead_request *req, bool encrypt)
{
struct aead_edesc *edesc;
struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -1606,19 +1493,20 @@ static int gcm_decrypt(struct aead_request *req)
int ret = 0;

/* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig, false);
+ edesc = aead_edesc_alloc(req, GCM_DESC_JOB_IO_LEN, &all_contig,
+ encrypt);
if (IS_ERR(edesc))
return PTR_ERR(edesc);

- /* Create and submit job descriptor*/
- init_gcm_job(req, edesc, all_contig, false);
+ /* Create and submit job descriptor */
+ init_gcm_job(req, edesc, all_contig, encrypt);

print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done, req);
if (!ret) {
ret = -EINPROGRESS;
} else {
@@ -1629,48 +1517,24 @@ static int gcm_decrypt(struct aead_request *req)
return ret;
}

-static int ipsec_gcm_decrypt(struct aead_request *req)
+static int gcm_encrypt(struct aead_request *req)
{
- return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req);
+ return gcm_crypt(req, true);
}

-static int aead_decrypt(struct aead_request *req)
+static int gcm_decrypt(struct aead_request *req)
{
- struct aead_edesc *edesc;
- struct crypto_aead *aead = crypto_aead_reqtfm(req);
- struct caam_ctx *ctx = crypto_aead_ctx(aead);
- struct device *jrdev = ctx->jrdev;
- bool all_contig;
- u32 *desc;
- int ret = 0;
-
- caam_dump_sg("dec src@" __stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, req->src,
- req->assoclen + req->cryptlen, 1);
-
- /* allocate extended descriptor */
- edesc = aead_edesc_alloc(req, AUTHENC_DESC_JOB_IO_LEN,
- &all_contig, false);
- if (IS_ERR(edesc))
- return PTR_ERR(edesc);
-
- /* Create and submit job descriptor*/
- init_authenc_job(req, edesc, all_contig, false);
-
- print_hex_dump_debug("aead jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ return gcm_crypt(req, false);
+}

- desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
- if (!ret) {
- ret = -EINPROGRESS;
- } else {
- aead_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
+static int ipsec_gcm_encrypt(struct aead_request *req)
+{
+ return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_encrypt(req);
+}

- return ret;
+static int ipsec_gcm_decrypt(struct aead_request *req)
+{
+ return crypto_ipsec_check_assoclen(req->assoclen) ? : gcm_decrypt(req);
}

/*
@@ -1834,7 +1698,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
return edesc;
}

-static int skcipher_encrypt(struct skcipher_request *req)
+static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
{
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
@@ -1852,14 +1716,14 @@ static int skcipher_encrypt(struct skcipher_request *req)
return PTR_ERR(edesc);

/* Create and submit job descriptor*/
- init_skcipher_job(req, edesc, true);
+ init_skcipher_job(req, edesc, encrypt);

print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
desc_bytes(edesc->hw_desc), 1);

desc = edesc->hw_desc;
- ret = caam_jr_enqueue(jrdev, desc, skcipher_encrypt_done, req);
+ ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done, req);

if (!ret) {
ret = -EINPROGRESS;
@@ -1871,40 +1735,14 @@ static int skcipher_encrypt(struct skcipher_request *req)
return ret;
}

-static int skcipher_decrypt(struct skcipher_request *req)
+static int skcipher_encrypt(struct skcipher_request *req)
{
- struct skcipher_edesc *edesc;
- struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
- struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
- struct device *jrdev = ctx->jrdev;
- u32 *desc;
- int ret = 0;
-
- if (!req->cryptlen)
- return 0;
-
- /* allocate extended descriptor */
- edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
- if (IS_ERR(edesc))
- return PTR_ERR(edesc);
-
- /* Create and submit job descriptor*/
- init_skcipher_job(req, edesc, false);
- desc = edesc->hw_desc;
-
- print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
-
- ret = caam_jr_enqueue(jrdev, desc, skcipher_decrypt_done, req);
- if (!ret) {
- ret = -EINPROGRESS;
- } else {
- skcipher_unmap(jrdev, edesc, req);
- kfree(edesc);
- }
+ return skcipher_crypt(req, true);
+}

- return ret;
+static int skcipher_decrypt(struct skcipher_request *req)
+{
+ return skcipher_crypt(req, false);
}

static struct caam_skcipher_alg driver_algs[] = {
--
2.1.0

2020-01-03 01:05:50

by Iuliana Prodan

[permalink] [raw]
Subject: [PATCH v2 04/10] crypto: caam - refactor RSA private key _done callbacks

Create a common rsa_priv_f_done function which, based on the
private key form, calls the specific unmap function.

Signed-off-by: Iuliana Prodan <[email protected]>
Reviewed-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caampkc.c | 61 +++++++++++++------------------------------
1 file changed, 18 insertions(+), 43 deletions(-)

diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 6619c51..ebf1677 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -132,29 +132,13 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
akcipher_request_complete(req, ecode);
}

-static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
- void *context)
-{
- struct akcipher_request *req = context;
- struct rsa_edesc *edesc;
- int ecode = 0;
-
- if (err)
- ecode = caam_jr_strstatus(dev, err);
-
- edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
-
- rsa_priv_f1_unmap(dev, edesc, req);
- rsa_io_unmap(dev, edesc, req);
- kfree(edesc);
-
- akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
- void *context)
+static void rsa_priv_f_done(struct device *dev, u32 *desc, u32 err,
+ void *context)
{
struct akcipher_request *req = context;
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
+ struct caam_rsa_key *key = &ctx->key;
struct rsa_edesc *edesc;
int ecode = 0;

@@ -163,26 +147,17 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,

edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);

- rsa_priv_f2_unmap(dev, edesc, req);
- rsa_io_unmap(dev, edesc, req);
- kfree(edesc);
-
- akcipher_request_complete(req, ecode);
-}
-
-static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
- void *context)
-{
- struct akcipher_request *req = context;
- struct rsa_edesc *edesc;
- int ecode = 0;
-
- if (err)
- ecode = caam_jr_strstatus(dev, err);
-
- edesc = container_of(desc, struct rsa_edesc, hw_desc[0]);
+ switch (key->priv_form) {
+ case FORM1:
+ rsa_priv_f1_unmap(dev, edesc, req);
+ break;
+ case FORM2:
+ rsa_priv_f2_unmap(dev, edesc, req);
+ break;
+ case FORM3:
+ rsa_priv_f3_unmap(dev, edesc, req);
+ }

- rsa_priv_f3_unmap(dev, edesc, req);
rsa_io_unmap(dev, edesc, req);
kfree(edesc);

@@ -691,7 +666,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f1_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
if (!ret)
return -EINPROGRESS;

@@ -724,7 +699,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f2_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
if (!ret)
return -EINPROGRESS;

@@ -757,7 +732,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
/* Initialize Job Descriptor */
init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);

- ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f3_done, req);
+ ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f_done, req);
if (!ret)
return -EINPROGRESS;

--
2.1.0

2020-01-10 08:31:55

by Horia Geanta

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] crypto: caam - add crypto_engine support for AEAD algorithms

On 1/3/2020 3:04 AM, Iuliana Prodan wrote:
> +struct caam_aead_req_ctx {
> + struct aead_edesc *edesc;
> + void (*aead_op_done)(struct device *jrdev, u32 *desc, u32 err,
> + void *context);
As with skcipher, aead_op_done is not needed since aead_crypt_done
is the only callback used.

> +static int aead_enqueue_req(struct device *jrdev, u32 *desc,
> + void (*cbk)(struct device *jrdev, u32 *desc,
> + u32 err, void *context),
> + struct aead_request *req, struct aead_edesc *edesc)
cbk parameter is not used.

> +{
> + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
> + int ret;
> +
> + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
> + return crypto_transfer_aead_request_to_engine(jrpriv->engine,
> + req);
Resources leak in case of failure.

> + else
> + ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done,
> + &edesc->jrentry);
Need to justify why only some requests are transferred to crypto engine.
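
For illustration, a minimal sketch of the cleanup asked for above, assuming
the caam_aead_req_ctx / edesc->jrentry layout of this v2 series (not the
actual v3 code): fall through to a common error path instead of returning
straight from the engine transfer.

	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
		ret = crypto_transfer_aead_request_to_engine(jrpriv->engine,
							     req);
	else
		ret = caam_jr_enqueue(jrdev, desc, aead_crypt_done,
				      &edesc->jrentry);

	/* a failed transfer (or enqueue) no longer leaks the edesc */
	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
		aead_unmap(jrdev, edesc, req);
		kfree(edesc);
	}

	return ret;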

Horia

2020-01-10 08:46:35

by Horia Geanta

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On 1/3/2020 3:03 AM, Iuliana Prodan wrote:
> +static int akcipher_enqueue_req(struct device *jrdev, u32 *desc,
> + void (*cbk)(struct device *jrdev, u32 *desc,
> + u32 err, void *context),
> + struct akcipher_request *req,
> + struct rsa_edesc *edesc)
> +{
> + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
> + struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
> + struct caam_rsa_key *key = &ctx->key;
> + int ret;
> +
> + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
> + return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
> + req);
Resource leak in case transfer fails.

> + else
> + ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
What's the problem with transferring all requests to crypto engine?

> +
> + if (ret != -EINPROGRESS) {
> + switch (key->priv_form) {
> + case FORM1:
> + rsa_priv_f1_unmap(jrdev, edesc, req);
> + break;
> + case FORM2:
> + rsa_priv_f2_unmap(jrdev, edesc, req);
> + break;
> + case FORM3:
> + rsa_priv_f3_unmap(jrdev, edesc, req);
> + break;
> + default:
> + rsa_pub_unmap(jrdev, edesc, req);
> + }
> + rsa_io_unmap(jrdev, edesc, req);
> + kfree(edesc);
> + }
> +
> + return ret;
> +}
> +
> static int caam_rsa_enc(struct akcipher_request *req)
> {
> struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
> struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
> + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
> struct caam_rsa_key *key = &ctx->key;
> struct device *jrdev = ctx->dev;
> struct rsa_edesc *edesc;
> @@ -637,13 +726,9 @@ static int caam_rsa_enc(struct akcipher_request *req)
> /* Initialize Job Descriptor */
> init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
>
> - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done,
> - &edesc->jrentry);
> - if (ret == -EINPROGRESS)
> - return ret;
> -
> - rsa_pub_unmap(jrdev, edesc, req);
> -
> + req_ctx->akcipher_op_done = rsa_pub_done;
This initialization could be moved into akcipher_enqueue_req().

> + return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_pub_done,
> + req, edesc);
edesc, edesc->hw_desc parameters not needed - can be deduced internally
via req -> req_ctx -> edesc -> hw_desc.
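
Taken together, the comments above suggest a shape roughly like the
following for akcipher_enqueue_req() - a sketch against the v2 structures
(req_ctx->edesc, edesc->jrentry, akcipher_op_done), not the final code:

static int akcipher_enqueue_req(struct device *jrdev,
				void (*cbk)(struct device *jrdev, u32 *desc,
					    u32 err, void *context),
				struct akcipher_request *req)
{
	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
	struct caam_rsa_key *key = &ctx->key;
	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
	struct rsa_edesc *edesc = req_ctx->edesc;
	int ret;

	/* record the completion callback here, not in every caller */
	req_ctx->akcipher_op_done = cbk;

	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
		ret = crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
								 req);
	else
		ret = caam_jr_enqueue(jrdev, edesc->hw_desc, cbk,
				      &edesc->jrentry);

	/* clean up on anything other than an async completion */
	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
		switch (key->priv_form) {
		case FORM1:
			rsa_priv_f1_unmap(jrdev, edesc, req);
			break;
		case FORM2:
			rsa_priv_f2_unmap(jrdev, edesc, req);
			break;
		case FORM3:
			rsa_priv_f3_unmap(jrdev, edesc, req);
			break;
		default:
			rsa_pub_unmap(jrdev, edesc, req);
		}
		rsa_io_unmap(jrdev, edesc, req);
		kfree(edesc);
	}

	return ret;
}

Callers would then reduce to e.g. "return akcipher_enqueue_req(jrdev,
rsa_pub_done, req);".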

Horia

2020-01-13 09:48:28

by Iuliana Prodan

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On 1/10/2020 10:46 AM, Horia Geanta wrote:
> On 1/3/2020 3:03 AM, Iuliana Prodan wrote:
>> +static int akcipher_enqueue_req(struct device *jrdev, u32 *desc,
>> + void (*cbk)(struct device *jrdev, u32 *desc,
>> + u32 err, void *context),
>> + struct akcipher_request *req,
>> + struct rsa_edesc *edesc)
>> +{
>> + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
>> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
>> + struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
>> + struct caam_rsa_key *key = &ctx->key;
>> + int ret;
>> +
>> + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
>> + return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
>> + req);
> Resource leak in case transfer fails.
>
>> + else
>> + ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
> What's the problem with transferring all requests to crypto engine?
>
I'll address all your comments in v3.

Regarding the transfer of requests to crypto-engine: if all requests are
sent to crypto-engine, the multibuffer tests for non-backlogging requests
fail after only 10 requests, since the crypto-engine queue has 10 entries.
Here's an example:
root@imx6qpdlsolox:~# insmod tcrypt.ko mode=422 num_mb=1024
insmod: ERROR: could not insert module tcrypt.ko: Resource temporarily
unavailable
root@imx6qpdlsolox:~#
root@imx6qpdlsolox:~# dmesg
...
testing speed of multibuffer sha1 (sha1-caam)
tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
tcrypt: concurrent request 11 error -28
tcrypt: concurrent request 13 error -28
tcrypt: concurrent request 14 error -28
tcrypt: concurrent request 16 error -28
tcrypt: concurrent request 18 error -28
tcrypt: concurrent request 20 error -28
tcrypt: concurrent request 22 error -28
tcrypt: concurrent request 24 error -28
tcrypt: concurrent request 26 error -28
tcrypt: concurrent request 28 error -28
tcrypt: concurrent request 30 error -28
tcrypt: concurrent request 32 error -28
tcrypt: concurrent request 34 error -28

tcrypt: concurrent request 1020 error -28
tcrypt: concurrent request 1022 error -28
tcrypt: At least one hashing failed ret=-28
root@imx6qpdlsolox:~#

If only the backlog requests are sent to crypto-engine, and the non-backlog
ones go directly to CAAM, these tests have a better chance of passing since
the JR has 1024 entries.
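
For reference, those -28 (-ENOSPC) errors come from the generic request
queue used by crypto-engine, which in this kernel version is created with a
fixed depth of 10 (CRYPTO_ENGINE_MAX_QLEN). Simplified, the enqueue path
behaves roughly like this:

	/* simplified view of crypto_enqueue_request() on the engine queue */
	if (queue->qlen >= queue->max_qlen) {	/* max_qlen == 10 here */
		if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
			return -ENOSPC;		/* the -28 in the log above */
		/* MAY_BACKLOG requests are parked and retried later */
	}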

Crypto-engine will need to be reworked: set the queue length when
initializing crypto-engine, and remove the serialization of requests in
crypto-engine. But, until then, I would like to have a backlogging solution
in the CAAM driver.

Iulia

>> +
>> + if (ret != -EINPROGRESS) {
>> + switch (key->priv_form) {
>> + case FORM1:
>> + rsa_priv_f1_unmap(jrdev, edesc, req);
>> + break;
>> + case FORM2:
>> + rsa_priv_f2_unmap(jrdev, edesc, req);
>> + break;
>> + case FORM3:
>> + rsa_priv_f3_unmap(jrdev, edesc, req);
>> + break;
>> + default:
>> + rsa_pub_unmap(jrdev, edesc, req);
>> + }
>> + rsa_io_unmap(jrdev, edesc, req);
>> + kfree(edesc);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> static int caam_rsa_enc(struct akcipher_request *req)
>> {
>> struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
>> struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
>> + struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
>> struct caam_rsa_key *key = &ctx->key;
>> struct device *jrdev = ctx->dev;
>> struct rsa_edesc *edesc;
>> @@ -637,13 +726,9 @@ static int caam_rsa_enc(struct akcipher_request *req)
>> /* Initialize Job Descriptor */
>> init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
>>
>> - ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done,
>> - &edesc->jrentry);
>> - if (ret == -EINPROGRESS)
>> - return ret;
>> -
>> - rsa_pub_unmap(jrdev, edesc, req);
>> -
>> + req_ctx->akcipher_op_done = rsa_pub_done;
> This initialization could be moved into akcipher_enqueue_req().
>
>> + return akcipher_enqueue_req(jrdev, edesc->hw_desc, rsa_pub_done,
>> + req, edesc);
> edesc, edesc->hw_desc parameters not needed - can be deduced internally
> via req -> req_ctx -> edesc- > hw_desc.
>
> Horia
>

2020-01-13 12:21:57

by Horia Geanta

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On 1/13/2020 11:48 AM, Iuliana Prodan wrote:
> On 1/10/2020 10:46 AM, Horia Geanta wrote:
>> On 1/3/2020 3:03 AM, Iuliana Prodan wrote:
>>> +static int akcipher_enqueue_req(struct device *jrdev, u32 *desc,
>>> + void (*cbk)(struct device *jrdev, u32 *desc,
>>> + u32 err, void *context),
>>> + struct akcipher_request *req,
>>> + struct rsa_edesc *edesc)
>>> +{
>>> + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
>>> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
>>> + struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
>>> + struct caam_rsa_key *key = &ctx->key;
>>> + int ret;
>>> +
>>> + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
>>> + return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
>>> + req);
>> Resource leak in case transfer fails.
>>
>>> + else
>>> + ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
>> What's the problem with transferring all requests to crypto engine?
>>
> I'll address all your comments in v3.
>
> Regarding the transfer request to crypto-engine: if sending all requests
> to crypto-engine, multibuffer tests, for non-backlogging requests fail
> after only 10 requests, since crypto-engine queue has 10 entries.
> Here's an example:
> root@imx6qpdlsolox:~# insmod tcrypt.ko mode=422 num_mb=1024
> insmod: ERROR: could not insert module tcrypt.ko: Resource temporarily
> unavailable
> root@imx6qpdlsolox:~#
> root@imx6qpdlsolox:~# dmesg
> ...
> testing speed of multibuffer sha1 (sha1-caam)
> tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
> tcrypt: concurrent request 11 error -28
> tcrypt: concurrent request 13 error -28
> tcrypt: concurrent request 14 error -28
> tcrypt: concurrent request 16 error -28
> tcrypt: concurrent request 18 error -28
> tcrypt: concurrent request 20 error -28
> tcrypt: concurrent request 22 error -28
> tcrypt: concurrent request 24 error -28
> tcrypt: concurrent request 26 error -28
> tcrypt: concurrent request 28 error -28
> tcrypt: concurrent request 30 error -28
> tcrypt: concurrent request 32 error -28
> tcrypt: concurrent request 34 error -28
>
> tcrypt: concurrent request 1020 error -28
> tcrypt: concurrent request 1022 error -28
> tcrypt: At least one hashing failed ret=-28
> root@imx6qpdlsolox:~#
>
> If sending just the backlog request to crypto-engine, and non-blocking
> directly to CAAM, these tests have a better chance to pass since JR has
> 1024 entries.
>
> Will need to work/update crypto-engine: set queue length when initialize
> crypto-engine, and remove serialization of requests in crypto-engine.
> But, until then, I would like to have a backlogging solution in CAAM driver.
>
My point is you need to add details about the current limitations
in the commit message (even in the source code, it wouldn't hurt),
justifying the choice of not using crypto engine for all requests.

Horia

2020-01-13 13:07:24

by Iuliana Prodan

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On 1/13/2020 2:21 PM, Horia Geanta wrote:
> On 1/13/2020 11:48 AM, Iuliana Prodan wrote:
>> On 1/10/2020 10:46 AM, Horia Geanta wrote:
>>> On 1/3/2020 3:03 AM, Iuliana Prodan wrote:
>>>> +static int akcipher_enqueue_req(struct device *jrdev, u32 *desc,
>>>> + void (*cbk)(struct device *jrdev, u32 *desc,
>>>> + u32 err, void *context),
>>>> + struct akcipher_request *req,
>>>> + struct rsa_edesc *edesc)
>>>> +{
>>>> + struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
>>>> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
>>>> + struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
>>>> + struct caam_rsa_key *key = &ctx->key;
>>>> + int ret;
>>>> +
>>>> + if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
>>>> + return crypto_transfer_akcipher_request_to_engine(jrpriv->engine,
>>>> + req);
>>> Resource leak in case transfer fails.
>>>
>>>> + else
>>>> + ret = caam_jr_enqueue(jrdev, desc, cbk, &edesc->jrentry);
>>> What's the problem with transferring all requests to crypto engine?
>>>
>> I'll address all your comments in v3.
>>
>> Regarding the transfer request to crypto-engine: if sending all requests
>> to crypto-engine, multibuffer tests, for non-backlogging requests fail
>> after only 10 requests, since crypto-engine queue has 10 entries.
>> Here's an example:
>> root@imx6qpdlsolox:~# insmod tcrypt.ko mode=422 num_mb=1024
>> insmod: ERROR: could not insert module tcrypt.ko: Resource temporarily
>> unavailable
>> root@imx6qpdlsolox:~#
>> root@imx6qpdlsolox:~# dmesg
>> ...
>> testing speed of multibuffer sha1 (sha1-caam)
>> tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
>> tcrypt: concurrent request 11 error -28
>> tcrypt: concurrent request 13 error -28
>> tcrypt: concurrent request 14 error -28
>> tcrypt: concurrent request 16 error -28
>> tcrypt: concurrent request 18 error -28
>> tcrypt: concurrent request 20 error -28
>> tcrypt: concurrent request 22 error -28
>> tcrypt: concurrent request 24 error -28
>> tcrypt: concurrent request 26 error -28
>> tcrypt: concurrent request 28 error -28
>> tcrypt: concurrent request 30 error -28
>> tcrypt: concurrent request 32 error -28
>> tcrypt: concurrent request 34 error -28
>>
>> tcrypt: concurrent request 1020 error -28
>> tcrypt: concurrent request 1022 error -28
>> tcrypt: At least one hashing failed ret=-28
>> root@imx6qpdlsolox:~#
>>
>> If sending just the backlog request to crypto-engine, and non-blocking
>> directly to CAAM, these tests have a better chance to pass since JR has
>> 1024 entries.
>>
>> Will need to work/update crypto-engine: set queue length when initialize
>> crypto-engine, and remove serialization of requests in crypto-engine.
>> But, until then, I would like to have a backlogging solution in CAAM driver.
>>
> My point is you need to add details about the current limitations
> in the commit message (even in the source code, it wouldn't hurt),
> justifying the choice of not using crypto engine for all requests.
>
Yes, I understand your point and, as I mentioned above, I'll address all
comments, from all patches, in v3:
- update commit messages;
- handle resource leak in case of crypto-engine transfer;
- remove unnecessary variables, in some structs;
- will remove patch #6.

Iulia

2020-01-14 00:26:24

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On Mon, Jan 13, 2020 at 09:48:11AM +0000, Iuliana Prodan wrote:
>
> Regarding the transfer request to crypto-engine: if sending all requests
> to crypto-engine, multibuffer tests, for non-backlogging requests fail
> after only 10 requests, since crypto-engine queue has 10 entries.

That isn't right. The crypto engine should never refuse to accept
a request unless the hardware queue is really full. Perhaps the
crypto engine code needs to be fixed?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2020-01-14 10:41:18

by Iuliana Prodan

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On 1/14/2020 2:14 AM, Herbert Xu wrote:
> On Mon, Jan 13, 2020 at 09:48:11AM +0000, Iuliana Prodan wrote:
>>
>> Regarding the transfer request to crypto-engine: if sending all requests
>> to crypto-engine, multibuffer tests, for non-backlogging requests fail
>> after only 10 requests, since crypto-engine queue has 10 entries.
>
> That isn't right. The crypto engine should never refuse to accept
> a request
Crypto-engine accepts all requests that have the backlog flag; the
non-backlog ones are accepted only up to the configured limit (of 10).

> unless the hardware queue is really full.
Crypto-engine doesn't check the status of the hardware queue.
The non-backlog requests are dropped after 10 entries.

> Perhaps the
> crypto engine code needs to be fixed?
To me, crypto-engine seems to be made for backlogged requests, which is why
I'm sending the non-backlog ones directly to CAAM. The implicit serialization
of requests in crypto-engine is the bottleneck.

But, as I said before, I want to update crypto-engine to set the queue
length when initializing crypto-engine, and to remove the serialization of
requests in crypto-engine by adding knowledge about the underlying hw
accelerator (the number of requests that can be processed in parallel).
I'll send an RFC with my proposal for crypto-engine enhancements.
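
For illustration only, a hypothetical init variant along those lines (this
API does not exist at this point, and the actual RFC may look different):

	/*
	 * Hypothetical extension: let the driver pick the queue depth at
	 * init time instead of the fixed CRYPTO_ENGINE_MAX_QLEN entries.
	 */
	struct crypto_engine *crypto_engine_alloc_init_qlen(struct device *dev,
							    bool rt, int qlen);

	/* CAAM could then match the job ring depth mentioned above */
	jrpriv->engine = crypto_engine_alloc_init_qlen(jrdev, false, 1024);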

But, until then, I would like to have a backlogging solution in the CAAM driver.

Thanks,
Iulia

2020-01-14 13:54:18

by Corentin Labbe

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] crypto: caam - add crypto_engine support for RSA algorithms

On Tue, Jan 14, 2020 at 10:40:53AM +0000, Iuliana Prodan wrote:
> On 1/14/2020 2:14 AM, Herbert Xu wrote:
> > On Mon, Jan 13, 2020 at 09:48:11AM +0000, Iuliana Prodan wrote:
> >>
> >> Regarding the transfer request to crypto-engine: if sending all requests
> >> to crypto-engine, multibuffer tests, for non-backlogging requests fail
> >> after only 10 requests, since crypto-engine queue has 10 entries.
> >
> > That isn't right. The crypto engine should never refuse to accept
> > a request
> Crypto-engine accepts all request that have the backlog flag, the
> non-backlog are accepted till the configured limit (of 10).
>
> > unless the hardware queue is really full.
> Crypto-engine doesn't check the status of hardware queue.
> The non-backlog requests are dropped after 10 entries.
>
> > Perhaps the
> > crypto engine code needs to be fixed?
> To me, crypto-engine seems to be made for backlogged request, that's why
> I'm sending the non-backlog directly to CAAM. The implicit serialization
> of request in crypto-engine is the bottleneck.
>
> But, as I said before, I want to update crypto-engine to set queue
> length when initialize crypto-engine, and remove serialization of
> requests in crypto-engine by adding knowledge about the underlying hw
> accelerator (number of request that can be processed in parallel).
> I'll send a RFC with my proposal for crypto-engine enhancements.
>
> But, until then, I would like to have a backlogging solution in CAAM driver.
>

Hello

I already have something for queue length and parallel processing.
I will send it soon.

Regards