2020-01-10 08:13:19

by Horia Geanta

Subject: Re: [PATCH v2 07/10] crypto: caam - support crypto_engine framework for SKCIPHER algorithms

On 1/3/2020 3:03 AM, Iuliana Prodan wrote:
> Integrate crypto_engine into CAAM, to make use of the engine queue.
> Add support for SKCIPHER algorithms.
>
> This is intended to be used for CAAM backlogging support.
> The requests, with backlog flag (e.g. from dm-crypt) will be listed
> into crypto-engine queue and processed by CAAM when free.
> This changes the return codes for caam_jr_enqueue:
> -EINPROGRESS if OK, -EBUSY if request is backlogged,
caam_jr_enqueue() is no longer modified to return -EBUSY
(as it was in v1), so the commit message should be updated to match.

> -ENOSPC if the queue is full, -EIO if it cannot map the caller's
> descriptor.
[...]
> +struct caam_skcipher_req_ctx {
> +	struct skcipher_edesc *edesc;
> +	void (*skcipher_op_done)(struct device *jrdev, u32 *desc, u32 err,
> +				 void *context);
skcipher_op_done doesn't seem to be needed, see below.
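
If it is removed, the request context would be left with just the edesc
pointer, i.e. something like:

	struct caam_skcipher_req_ctx {
		struct skcipher_edesc *edesc;
	};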

> @@ -1669,6 +1687,9 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
>  						  desc_bytes);
>  	edesc->jrentry.base = req;
>
> +	rctx->edesc = edesc;
> +	rctx->skcipher_op_done = skcipher_crypt_done;
skcipher_op_done is always set to skcipher_crypt_done...

> +static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
> +{
> +	struct skcipher_request *req = skcipher_request_cast(areq);
> +	struct caam_ctx *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
> +	struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
> +	struct caam_skcipher_request_entry *jrentry;
> +	u32 *desc = rctx->edesc->hw_desc;
> +	int ret;
> +
> +	jrentry = &rctx->edesc->jrentry;
> +	jrentry->bklog = true;
> +
> +	ret = caam_jr_enqueue(ctx->jrdev, desc, rctx->skcipher_op_done,
> +			      jrentry);
...thus skcipher_crypt_done could be used directly here
instead of rctx->skcipher_op_done.
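
i.e. something like the below (untested), which would also allow dropping
the callback pointer from the request context:

	ret = caam_jr_enqueue(ctx->jrdev, desc, skcipher_crypt_done,
			      jrentry);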

> @@ -1742,15 +1789,18 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
>
>  	/* Create and submit job descriptor*/
>  	init_skcipher_job(req, edesc, encrypt);
> +	desc = edesc->hw_desc;
>
>  	print_hex_dump_debug("skcipher jobdesc@" __stringify(__LINE__)": ",
> -			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
> -			     desc_bytes(edesc->hw_desc), 1);
> +			     DUMP_PREFIX_ADDRESS, 16, 4, desc,
> +			     desc_bytes(desc), 1);
>
> -	desc = edesc->hw_desc;
> -
> -	ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done,
> -			      &edesc->jrentry);
> +	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
> +		return crypto_transfer_skcipher_request_to_engine(jrpriv->engine,
> +								  req);
In case the request transfer to the crypto engine fails, some resources
will leak: the return value is passed straight back to the caller, so the
edesc allocated by skcipher_edesc_alloc() (and its DMA mappings) is never
freed. See the sketch further below.

> +	else
> +		ret = caam_jr_enqueue(jrdev, desc, skcipher_crypt_done,
> +				      &edesc->jrentry);
It's worth mentioning in the commit message why only some of the requests,
i.e. those with the CRYPTO_TFM_REQ_MAY_BACKLOG flag set, are sent to the
crypto engine, while the others are enqueued directly via caam_jr_enqueue().
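
Regarding the leak mentioned above, something along these lines could
address it (untested sketch; assumes the existing skcipher_unmap() helper
is usable at this point):

	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
		ret = crypto_transfer_skcipher_request_to_engine(jrpriv->engine,
								 req);
		if (ret != -EINPROGRESS && ret != -EBUSY) {
			/* transfer to the engine failed, release edesc resources */
			skcipher_unmap(jrdev, edesc, req);
			kfree(edesc);
		}
		return ret;
	}

The non-backlog path (direct caam_jr_enqueue()) would stay as it is.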

Horia