2018-01-03 20:13:28

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 0/6] crypto: engine - Permit to enqueue all async requests

Hello

The current crypto_engine supports only ahash and ablkcipher requests.
My first patch, which tried to add skcipher support, was NACKed: it would have added too many
functions, and adding other algorithm types (aead, asymmetric_key) would have made the situation worse.

This patchset removes all algorithm-specific code; the engine now processes only generic crypto_async_requests.

The request handler function pointers are moved out of struct crypto_engine and
are now stored directly in a struct crypto_engine_reqctx.

The original proposal from Herbert [1] cannot be implemented completely, since the crypto engine
can only dequeue a crypto_async_request, and it is impossible to access any request_ctx
without knowing the underlying request type.

So I did something close to what was requested: embedding a struct crypto_engine_reqctx in the TFM context.
Note that the current implementation expects crypto_engine_reqctx to be the
first member of the context.

The first patch attempts to document the crypto engine API.
The second patch converts the crypto engine itself to the new scheme,
while the following patches convert the four existing users of crypto_engine.
Note that this split breaks bisection, so the final commits will probably all be merged into one.

Apart from virtio, the last four patches were compile-tested only,
but the crypto engine itself was tested with my new sun8i-ce driver.

Regards

[1] https://www.mail-archive.com/[email protected]/msg1474434.html

Changes since RFC:
- Added a documentation patch
- Added patch for stm32-cryp
- Changed parameter of all crypto_engine_op functions from
crypto_async_request to void*
- Reintroduced crypto_transfer_xxx_request_to_engine functions

Corentin Labbe (6):
Documentation: crypto: document crypto engine API
crypto: engine - Permit to enqueue all async requests
crypto: omap: convert to new crypto engine API
crypto: virtio: convert to new crypto engine API
crypto: stm32-hash: convert to the new crypto engine API
crypto: stm32-cryp: convert to the new crypto engine API

Documentation/crypto/crypto_engine.rst | 46 ++++++
crypto/crypto_engine.c | 230 +++++++++++++--------------
drivers/crypto/omap-aes.c | 17 +-
drivers/crypto/omap-aes.h | 3 +
drivers/crypto/omap-des.c | 20 ++-
drivers/crypto/stm32/stm32-cryp.c | 21 ++-
drivers/crypto/stm32/stm32-hash.c | 18 ++-
drivers/crypto/virtio/virtio_crypto_algs.c | 10 +-
drivers/crypto/virtio/virtio_crypto_common.h | 3 +-
drivers/crypto/virtio/virtio_crypto_core.c | 3 -
include/crypto/engine.h | 59 ++++---
11 files changed, 263 insertions(+), 167 deletions(-)
create mode 100644 Documentation/crypto/crypto_engine.rst

--
2.13.6


2018-01-03 20:13:30

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 3/6] crypto: omap: convert to new crypto engine API

This patch converts the omap-aes and omap-des drivers to the new crypto engine API.

Signed-off-by: Corentin Labbe <[email protected]>
---
drivers/crypto/omap-aes.c | 17 +++++++++++++----
drivers/crypto/omap-aes.h | 3 +++
drivers/crypto/omap-des.c | 20 ++++++++++++++++----
3 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index fbec0a2e76dd..518b94628166 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -414,8 +414,9 @@ static int omap_aes_handle_queue(struct omap_aes_dev *dd,
}

static int omap_aes_prepare_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
@@ -468,8 +469,9 @@ static int omap_aes_prepare_req(struct crypto_engine *engine,
}

static int omap_aes_crypt_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
struct omap_aes_reqctx *rctx = ablkcipher_request_ctx(req);
struct omap_aes_dev *dd = rctx->dd;

@@ -601,6 +603,11 @@ static int omap_aes_ctr_decrypt(struct ablkcipher_request *req)
return omap_aes_crypt(req, FLAGS_CTR);
}

+static int omap_aes_prepare_req(struct crypto_engine *engine,
+ void *req);
+static int omap_aes_crypt_req(struct crypto_engine *engine,
+ void *req);
+
static int omap_aes_cra_init(struct crypto_tfm *tfm)
{
const char *name = crypto_tfm_alg_name(tfm);
@@ -616,6 +623,10 @@ static int omap_aes_cra_init(struct crypto_tfm *tfm)

tfm->crt_ablkcipher.reqsize = sizeof(struct omap_aes_reqctx);

+ ctx->enginectx.op.prepare_request = omap_aes_prepare_req;
+ ctx->enginectx.op.unprepare_request = NULL;
+ ctx->enginectx.op.do_one_request = omap_aes_crypt_req;
+
return 0;
}

@@ -1119,8 +1130,6 @@ static int omap_aes_probe(struct platform_device *pdev)
goto err_engine;
}

- dd->engine->prepare_cipher_request = omap_aes_prepare_req;
- dd->engine->cipher_one_request = omap_aes_crypt_req;
err = crypto_engine_start(dd->engine);
if (err)
goto err_engine;
diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index 8906342e2b9a..f6ce94907ade 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -13,6 +13,8 @@
#ifndef __OMAP_AES_H__
#define __OMAP_AES_H__

+#include <crypto/engine.h>
+
#define DST_MAXBURST 4
#define DMA_MIN (DST_MAXBURST * sizeof(u32))

@@ -95,6 +97,7 @@ struct omap_aes_gcm_result {
};

struct omap_aes_ctx {
+ struct crypto_engine_reqctx enginectx;
int keylen;
u32 key[AES_KEYSIZE_256 / sizeof(u32)];
u8 nonce[4];
diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index ebc5c0f11f03..c6a3b0490616 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -86,6 +86,7 @@
#define FLAGS_OUT_DATA_ST_SHIFT 10

struct omap_des_ctx {
+ struct crypto_engine_reqctx enginectx;
struct omap_des_dev *dd;

int keylen;
@@ -526,8 +527,9 @@ static int omap_des_handle_queue(struct omap_des_dev *dd,
}

static int omap_des_prepare_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -582,8 +584,9 @@ static int omap_des_prepare_req(struct crypto_engine *engine,
}

static int omap_des_crypt_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct omap_des_dev *dd = omap_des_find_dev(ctx);
@@ -695,12 +698,23 @@ static int omap_des_cbc_decrypt(struct ablkcipher_request *req)
return omap_des_crypt(req, FLAGS_CBC);
}

+static int omap_des_prepare_req(struct crypto_engine *engine,
+ void *areq);
+static int omap_des_crypt_req(struct crypto_engine *engine,
+ void *areq);
+
static int omap_des_cra_init(struct crypto_tfm *tfm)
{
+ struct omap_des_ctx *ctx = crypto_tfm_ctx(tfm);
+
pr_debug("enter\n");

tfm->crt_ablkcipher.reqsize = sizeof(struct omap_des_reqctx);

+ ctx->enginectx.op.prepare_request = omap_des_prepare_req;
+ ctx->enginectx.op.unprepare_request = NULL;
+ ctx->enginectx.op.do_one_request = omap_des_crypt_req;
+
return 0;
}

@@ -1046,8 +1060,6 @@ static int omap_des_probe(struct platform_device *pdev)
goto err_engine;
}

- dd->engine->prepare_cipher_request = omap_des_prepare_req;
- dd->engine->cipher_one_request = omap_des_crypt_req;
err = crypto_engine_start(dd->engine);
if (err)
goto err_engine;
--
2.13.6

2018-01-03 20:13:35

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 6/6] crypto: stm32-cryp: convert to the new crypto engine API

This patch converts the stm32-cryp driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <[email protected]>
---
drivers/crypto/stm32/stm32-cryp.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
index cf1dddbeaa2c..99e0473ef247 100644
--- a/drivers/crypto/stm32/stm32-cryp.c
+++ b/drivers/crypto/stm32/stm32-cryp.c
@@ -91,6 +91,7 @@
#define _walked_out (cryp->out_walk.offset - cryp->out_sg->offset)

struct stm32_cryp_ctx {
+ struct crypto_engine_reqctx enginectx;
struct stm32_cryp *cryp;
int keylen;
u32 key[AES_KEYSIZE_256 / sizeof(u32)];
@@ -494,10 +495,20 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
return 0;
}

+static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
+ void *areq);
+static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
+ void *areq);
+
static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
{
+ struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
+
tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);

+ ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
+ ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
+ ctx->enginectx.op.unprepare_request = NULL;
return 0;
}

@@ -695,14 +706,17 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
}

static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
+
return stm32_cryp_prepare_req(engine, req);
}

static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ void *areq)
{
+ struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
crypto_ablkcipher_reqtfm(req));
struct stm32_cryp *cryp = ctx->cryp;
@@ -1104,9 +1118,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
goto err_engine1;
}

- cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
- cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
-
ret = crypto_engine_start(cryp->engine);
if (ret) {
dev_err(dev, "Could not start crypto engine\n");
--
2.13.6

2018-01-03 20:25:06

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 5/6] crypto: stm32-hash: convert to the new crypto engine API

This patch converts the stm32-hash driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <[email protected]>
---
drivers/crypto/stm32/stm32-hash.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index 4ca4a264a833..9790c2c936c7 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -122,6 +122,7 @@ enum stm32_hash_data_format {
#define HASH_DMA_THRESHOLD 50

struct stm32_hash_ctx {
+ struct crypto_engine_reqctx enginectx;
struct stm32_hash_dev *hdev;
unsigned long flags;

@@ -828,6 +829,11 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
return 0;
}

+static int stm32_hash_one_request(struct crypto_engine *engine,
+ void *areq);
+static int stm32_hash_prepare_req(struct crypto_engine *engine,
+ void *areq);
+
static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
struct ahash_request *req)
{
@@ -835,8 +841,9 @@ static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
}

static int stm32_hash_prepare_req(struct crypto_engine *engine,
- struct ahash_request *req)
+ void *areq)
{
+ struct ahash_request *req = container_of(areq, struct ahash_request, base);
struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
struct stm32_hash_request_ctx *rctx;
@@ -855,8 +862,9 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
}

static int stm32_hash_one_request(struct crypto_engine *engine,
- struct ahash_request *req)
+ void *areq)
{
+ struct ahash_request *req = container_of(areq, struct ahash_request, base);
struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
struct stm32_hash_request_ctx *rctx;
@@ -1033,6 +1041,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
if (algs_hmac_name)
ctx->flags |= HASH_FLAGS_HMAC;

+ ctx->enginectx.op.do_one_request = stm32_hash_one_request;
+ ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
+ ctx->enginectx.op.unprepare_request = NULL;
return 0;
}

@@ -1493,9 +1504,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
goto err_engine;
}

- hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
- hdev->engine->hash_one_request = stm32_hash_one_request;
-
ret = crypto_engine_start(hdev->engine);
if (ret)
goto err_engine_start;
--
2.13.6

2018-01-03 20:25:46

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 4/6] crypto: virtio: convert to new crypto engine API

This patch converts the virtio-crypto driver to the new crypto engine API.

Signed-off-by: Corentin Labbe <[email protected]>
---
drivers/crypto/virtio/virtio_crypto_algs.c | 10 +++++++---
drivers/crypto/virtio/virtio_crypto_common.h | 3 +--
drivers/crypto/virtio/virtio_crypto_core.c | 3 ---
3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
index abe8c15450df..060824a8ab0a 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -29,6 +29,7 @@


struct virtio_crypto_ablkcipher_ctx {
+ struct crypto_engine_reqctx enginectx;
struct virtio_crypto *vcrypto;
struct crypto_tfm *tfm;

@@ -521,6 +522,9 @@ static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm)
tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request);
ctx->tfm = tfm;

+ ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req;
+ ctx->enginectx.op.prepare_request = NULL;
+ ctx->enginectx.op.unprepare_request = NULL;
return 0;
}

@@ -538,9 +542,9 @@ static void virtio_crypto_ablkcipher_exit(struct crypto_tfm *tfm)
}

int virtio_crypto_ablkcipher_crypt_req(
- struct crypto_engine *engine,
- struct ablkcipher_request *req)
+ struct crypto_engine *engine, void *vreq)
{
+ struct ablkcipher_request *req = container_of(vreq, struct ablkcipher_request, base);
struct virtio_crypto_sym_request *vc_sym_req =
ablkcipher_request_ctx(req);
struct virtio_crypto_request *vc_req = &vc_sym_req->base;
@@ -562,7 +566,7 @@ static void virtio_crypto_ablkcipher_finalize_req(
int err)
{
crypto_finalize_cipher_request(vc_sym_req->base.dataq->engine,
- req, err);
+ req, err);
kzfree(vc_sym_req->iv);
virtcrypto_clear_request(&vc_sym_req->base);
}
diff --git a/drivers/crypto/virtio/virtio_crypto_common.h b/drivers/crypto/virtio/virtio_crypto_common.h
index e976539a05d9..72621bd67211 100644
--- a/drivers/crypto/virtio/virtio_crypto_common.h
+++ b/drivers/crypto/virtio/virtio_crypto_common.h
@@ -107,8 +107,7 @@ struct virtio_crypto *virtcrypto_get_dev_node(int node);
int virtcrypto_dev_start(struct virtio_crypto *vcrypto);
void virtcrypto_dev_stop(struct virtio_crypto *vcrypto);
int virtio_crypto_ablkcipher_crypt_req(
- struct crypto_engine *engine,
- struct ablkcipher_request *req);
+ struct crypto_engine *engine, void *vreq);

void
virtcrypto_clear_request(struct virtio_crypto_request *vc_req);
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index ff1410a32c2b..83326986c113 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -111,9 +111,6 @@ static int virtcrypto_find_vqs(struct virtio_crypto *vi)
ret = -ENOMEM;
goto err_engine;
}
-
- vi->data_vq[i].engine->cipher_one_request =
- virtio_crypto_ablkcipher_crypt_req;
}

kfree(names);
--
2.13.6

2018-01-03 20:26:33

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 2/6] crypto: engine - Permit to enqueue all async requests

The crypto engine can currently only enqueue ahash and ablkcipher requests.
This patch permits it to enqueue any type of crypto_async_request.

Signed-off-by: Corentin Labbe <[email protected]>
---
crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------------
include/crypto/engine.h | 59 +++++++------
2 files changed, 148 insertions(+), 141 deletions(-)

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..036270b61648 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -15,7 +15,6 @@
#include <linux/err.h>
#include <linux/delay.h>
#include <crypto/engine.h>
-#include <crypto/internal/hash.h>
#include <uapi/linux/sched/types.h>
#include "internal.h"

@@ -34,11 +33,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
bool in_kthread)
{
struct crypto_async_request *async_req, *backlog;
- struct ahash_request *hreq;
- struct ablkcipher_request *breq;
unsigned long flags;
bool was_busy = false;
- int ret, rtype;
+ int ret;
+ struct crypto_engine_reqctx *enginectx;

spin_lock_irqsave(&engine->queue_lock, flags);

@@ -94,7 +92,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,

spin_unlock_irqrestore(&engine->queue_lock, flags);

- rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
/* Until here we get the request need to be encrypted successfully */
if (!was_busy && engine->prepare_crypt_hardware) {
ret = engine->prepare_crypt_hardware(engine);
@@ -104,57 +101,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
}
}

- switch (rtype) {
- case CRYPTO_ALG_TYPE_AHASH:
- hreq = ahash_request_cast(engine->cur_req);
- if (engine->prepare_hash_request) {
- ret = engine->prepare_hash_request(engine, hreq);
- if (ret) {
- dev_err(engine->dev, "failed to prepare request: %d\n",
- ret);
- goto req_err;
- }
- engine->cur_req_prepared = true;
- }
- ret = engine->hash_one_request(engine, hreq);
- if (ret) {
- dev_err(engine->dev, "failed to hash one request from queue\n");
- goto req_err;
- }
- return;
- case CRYPTO_ALG_TYPE_ABLKCIPHER:
- breq = ablkcipher_request_cast(engine->cur_req);
- if (engine->prepare_cipher_request) {
- ret = engine->prepare_cipher_request(engine, breq);
- if (ret) {
- dev_err(engine->dev, "failed to prepare request: %d\n",
- ret);
- goto req_err;
- }
- engine->cur_req_prepared = true;
- }
- ret = engine->cipher_one_request(engine, breq);
+ enginectx = crypto_tfm_ctx(async_req->tfm);
+
+ if (enginectx->op.prepare_request) {
+ ret = enginectx->op.prepare_request(engine, async_req);
if (ret) {
- dev_err(engine->dev, "failed to cipher one request from queue\n");
+ dev_err(engine->dev, "failed to prepare request: %d\n",
+ ret);
goto req_err;
}
- return;
- default:
- dev_err(engine->dev, "failed to prepare request of unknown type\n");
- return;
+ engine->cur_req_prepared = true;
+ }
+ if (!enginectx->op.do_one_request) {
+ dev_err(engine->dev, "failed to do request\n");
+ ret = -EINVAL;
+ goto req_err;
+ }
+ ret = enginectx->op.do_one_request(engine, async_req);
+ if (ret) {
+ dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
+ goto req_err;
}
+ return;

req_err:
- switch (rtype) {
- case CRYPTO_ALG_TYPE_AHASH:
- hreq = ahash_request_cast(engine->cur_req);
- crypto_finalize_hash_request(engine, hreq, ret);
- break;
- case CRYPTO_ALG_TYPE_ABLKCIPHER:
- breq = ablkcipher_request_cast(engine->cur_req);
- crypto_finalize_cipher_request(engine, breq, ret);
- break;
- }
+ crypto_finalize_request(engine, async_req, ret);
return;

out:
@@ -170,13 +141,12 @@ static void crypto_pump_work(struct kthread_work *work)
}

/**
- * crypto_transfer_cipher_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_request - transfer the new request into the engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
- struct ablkcipher_request *req,
+static int crypto_transfer_request(struct crypto_engine *engine,
+ struct crypto_async_request *req,
bool need_pump)
{
unsigned long flags;
@@ -189,7 +159,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
return -ESHUTDOWN;
}

- ret = ablkcipher_enqueue_request(&engine->queue, req);
+ ret = crypto_enqueue_request(&engine->queue, req);

if (!engine->busy && need_pump)
kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -197,85 +167,97 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
spin_unlock_irqrestore(&engine->queue_lock, flags);
return ret;
}
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_request);

/**
- * crypto_transfer_cipher_request_to_engine - transfer one request to list
+ * crypto_transfer_request_to_engine - transfer one request to list
* into the engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
+static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+ struct crypto_async_request *req)
+{
+ return crypto_transfer_request(engine, req, true);
+}
+
+/**
+ * crypto_transfer_cipher_request_to_engine - transfer one ablkcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ * TODO: Remove this function when skcipher conversion is finished
+ */
int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
struct ablkcipher_request *req)
{
- return crypto_transfer_cipher_request(engine, req, true);
+ return crypto_transfer_request_to_engine(engine, &req->base);
}
EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);

/**
- * crypto_transfer_hash_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
+ * to list into the engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
-int crypto_transfer_hash_request(struct crypto_engine *engine,
- struct ahash_request *req, bool need_pump)
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+ struct skcipher_request *req)
{
- unsigned long flags;
- int ret;
-
- spin_lock_irqsave(&engine->queue_lock, flags);
-
- if (!engine->running) {
- spin_unlock_irqrestore(&engine->queue_lock, flags);
- return -ESHUTDOWN;
- }
-
- ret = ahash_enqueue_request(&engine->queue, req);
-
- if (!engine->busy && need_pump)
- kthread_queue_work(engine->kworker, &engine->pump_requests);
+ return crypto_transfer_request_to_engine(engine, &req->base);
+}
+EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);

- spin_unlock_irqrestore(&engine->queue_lock, flags);
- return ret;
+/**
+ * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
+ * to list into the engine queue
+ * @engine: the hardware engine
+ * @req: the request need to be listed into the engine queue
+ */
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+ struct akcipher_request *req)
+{
+ return crypto_transfer_request_to_engine(engine, &req->base);
}
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);

/**
- * crypto_transfer_hash_request_to_engine - transfer one request to list
- * into the engine queue
+ * crypto_transfer_hash_request_to_engine - transfer one ahash_request
+ * to list into the engine queue
* @engine: the hardware engine
* @req: the request need to be listed into the engine queue
*/
int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
struct ahash_request *req)
{
- return crypto_transfer_hash_request(engine, req, true);
+ return crypto_transfer_request_to_engine(engine, &req->base);
}
EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);

/**
- * crypto_finalize_cipher_request - finalize one request if the request is done
+ * crypto_finalize_request - finalize one request if the request is done
* @engine: the hardware engine
* @req: the request need to be finalized
* @err: error number
*/
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
- struct ablkcipher_request *req, int err)
+void crypto_finalize_request(struct crypto_engine *engine,
+ struct crypto_async_request *req, int err)
{
unsigned long flags;
bool finalize_cur_req = false;
int ret;
+ struct crypto_engine_reqctx *enginectx;

spin_lock_irqsave(&engine->queue_lock, flags);
- if (engine->cur_req == &req->base)
+ if (engine->cur_req == req)
finalize_cur_req = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);

if (finalize_cur_req) {
+ enginectx = crypto_tfm_ctx(req->tfm);
if (engine->cur_req_prepared &&
- engine->unprepare_cipher_request) {
- ret = engine->unprepare_cipher_request(engine, req);
+ enginectx->op.unprepare_request) {
+ ret = enginectx->op.unprepare_request(engine, req);
if (ret)
dev_err(engine->dev, "failed to unprepare request\n");
}
@@ -285,46 +267,64 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
spin_unlock_irqrestore(&engine->queue_lock, flags);
}

- req->base.complete(&req->base, err);
+ req->complete(req, err);

kthread_queue_work(engine->kworker, &engine->pump_requests);
}
-EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);

/**
- * crypto_finalize_hash_request - finalize one request if the request is done
+ * crypto_finalize_cipher_request - finalize one ablkcipher_request if
+ * the request is done
* @engine: the hardware engine
* @req: the request need to be finalized
* @err: error number
*/
-void crypto_finalize_hash_request(struct crypto_engine *engine,
- struct ahash_request *req, int err)
+void crypto_finalize_cipher_request(struct crypto_engine *engine,
+ struct ablkcipher_request *req, int err)
{
- unsigned long flags;
- bool finalize_cur_req = false;
- int ret;
-
- spin_lock_irqsave(&engine->queue_lock, flags);
- if (engine->cur_req == &req->base)
- finalize_cur_req = true;
- spin_unlock_irqrestore(&engine->queue_lock, flags);
+ return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);

- if (finalize_cur_req) {
- if (engine->cur_req_prepared &&
- engine->unprepare_hash_request) {
- ret = engine->unprepare_hash_request(engine, req);
- if (ret)
- dev_err(engine->dev, "failed to unprepare request\n");
- }
- spin_lock_irqsave(&engine->queue_lock, flags);
- engine->cur_req = NULL;
- engine->cur_req_prepared = false;
- spin_unlock_irqrestore(&engine->queue_lock, flags);
- }
+/**
+ * crypto_finalize_skcipher_request - finalize one skcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+ struct skcipher_request *req, int err)
+{
+ return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);

- req->base.complete(&req->base, err);
+/**
+ * crypto_finalize_akcipher_request - finalize one akcipher_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+ struct akcipher_request *req, int err)
+{
+ return crypto_finalize_request(engine, &req->base, err);
+}
+EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);

- kthread_queue_work(engine->kworker, &engine->pump_requests);
+/**
+ * crypto_finalize_hash_request - finalize one ahash_request if
+ * the request is done
+ * @engine: the hardware engine
+ * @req: the request need to be finalized
+ * @err: error number
+ */
+void crypto_finalize_hash_request(struct crypto_engine *engine,
+ struct ahash_request *req, int err)
+{
+ return crypto_finalize_request(engine, &req->base, err);
}
EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);

diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..1ea7cbe92eaf 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -17,7 +17,9 @@
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <crypto/algapi.h>
+#include <crypto/akcipher.h>
#include <crypto/hash.h>
+#include <crypto/skcipher.h>

#define ENGINE_NAME_LEN 30
/*
@@ -37,12 +39,6 @@
* @unprepare_crypt_hardware: there are currently no more requests on the
* queue so the subsystem notifies the driver that it may relax the
* hardware by issuing this call
- * @prepare_cipher_request: do some prepare if need before handle the current request
- * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
- * @cipher_one_request: do encryption for current request
- * @prepare_hash_request: do some prepare if need before handle the current request
- * @unprepare_hash_request: undo any work done by prepare_hash_request()
- * @hash_one_request: do hash for current request
* @kworker: kthread worker struct for request pump
* @pump_requests: work struct for scheduling work to the request pump
* @priv_data: the engine private data
@@ -65,19 +61,6 @@ struct crypto_engine {
int (*prepare_crypt_hardware)(struct crypto_engine *engine);
int (*unprepare_crypt_hardware)(struct crypto_engine *engine);

- int (*prepare_cipher_request)(struct crypto_engine *engine,
- struct ablkcipher_request *req);
- int (*unprepare_cipher_request)(struct crypto_engine *engine,
- struct ablkcipher_request *req);
- int (*prepare_hash_request)(struct crypto_engine *engine,
- struct ahash_request *req);
- int (*unprepare_hash_request)(struct crypto_engine *engine,
- struct ahash_request *req);
- int (*cipher_one_request)(struct crypto_engine *engine,
- struct ablkcipher_request *req);
- int (*hash_one_request)(struct crypto_engine *engine,
- struct ahash_request *req);
-
struct kthread_worker *kworker;
struct kthread_work pump_requests;

@@ -85,19 +68,43 @@ struct crypto_engine {
struct crypto_async_request *cur_req;
};

-int crypto_transfer_cipher_request(struct crypto_engine *engine,
- struct ablkcipher_request *req,
- bool need_pump);
+/*
+ * struct crypto_engine_op - crypto hardware engine operations
+ * @prepare_request: do some preparation, if needed, before handling the current request
+ * @unprepare_request: undo any work done by prepare_request()
+ * @do_one_request: do encryption for current request
+ */
+struct crypto_engine_op {
+ int (*prepare_request)(struct crypto_engine *engine,
+ void *areq);
+ int (*unprepare_request)(struct crypto_engine *engine,
+ void *areq);
+ int (*do_one_request)(struct crypto_engine *engine,
+ void *areq);
+};
+
+struct crypto_engine_reqctx {
+ struct crypto_engine_op op;
+};
+
+int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
+ struct akcipher_request *req);
int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
- struct ablkcipher_request *req);
-int crypto_transfer_hash_request(struct crypto_engine *engine,
- struct ahash_request *req, bool need_pump);
+ struct ablkcipher_request *req);
int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
- struct ahash_request *req);
+ struct ahash_request *req);
+int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
+ struct skcipher_request *req);
+void crypto_finalize_request(struct crypto_engine *engine,
+ struct crypto_async_request *req, int err);
+void crypto_finalize_akcipher_request(struct crypto_engine *engine,
+ struct akcipher_request *req, int err);
void crypto_finalize_cipher_request(struct crypto_engine *engine,
struct ablkcipher_request *req, int err);
void crypto_finalize_hash_request(struct crypto_engine *engine,
struct ahash_request *req, int err);
+void crypto_finalize_skcipher_request(struct crypto_engine *engine,
+ struct skcipher_request *req, int err);
int crypto_engine_start(struct crypto_engine *engine);
int crypto_engine_stop(struct crypto_engine *engine);
struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
--
2.13.6

2018-01-03 20:27:12

by Corentin Labbe

[permalink] [raw]
Subject: [PATCH 1/6] Documentation: crypto: document crypto engine API

Signed-off-by: Corentin Labbe <[email protected]>
---
Documentation/crypto/crypto_engine.rst | 46 ++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
create mode 100644 Documentation/crypto/crypto_engine.rst

diff --git a/Documentation/crypto/crypto_engine.rst b/Documentation/crypto/crypto_engine.rst
new file mode 100644
index 000000000000..b0ed37f9fb0c
--- /dev/null
+++ b/Documentation/crypto/crypto_engine.rst
@@ -0,0 +1,46 @@
+=============
+CRYPTO ENGINE
+=============
+
+Overview
+--------
+The crypto engine (CE) API is a crypto queue manager.
+
+Requirement
+-----------
+You have to put the struct crypto_engine_reqctx at the start of your tfm_ctx::
+struct your_tfm_ctx {
+ struct crypto_engine_reqctx enginectx;
+ ...
+};
+Why: since the CE manages only crypto_async_request, it cannot know the
+underlying request type and only has access to the TFM.
+So using container_of() to access __ctx is impossible.
+Furthermore, the crypto engine cannot know "struct your_tfm_ctx",
+so it must assume that crypto_engine_reqctx is at the start of it.
+
+Order of operations
+-------------------
+You have to obtain a struct crypto_engine via crypto_engine_alloc_init().
+And start it via crypto_engine_start().
+
+Before transferring any request, you have to fill the enginectx.
+- prepare_request: (function pointer) called if some processing is needed before handling the request
+- unprepare_request: (function pointer) undoes what was done in prepare_request()
+- do_one_request: (function pointer) does the encryption for the current request
+
+Note that those three functions get the crypto_async_request associated with the received request.
+So you need to get the original request via container_of(areq, struct yourrequesttype_request, base);
+
+When your driver receives a crypto_request, you have to transfer it to
+the cryptoengine via one of:
+- crypto_transfer_cipher_request_to_engine()
+- crypto_transfer_skcipher_request_to_engine()
+- crypto_transfer_akcipher_request_to_engine()
+- crypto_transfer_hash_request_to_engine()
+
+At the end of request processing, a call to one of the following functions is needed:
+- crypto_finalize_cipher_request
+- crypto_finalize_skcipher_request
+- crypto_finalize_akcipher_request
+- crypto_finalize_hash_request
--
2.13.6

2018-01-10 14:14:14

by Fabien DESSENNE

[permalink] [raw]
Subject: Re: [PATCH 1/6] Documentation: crypto: document crypto engine API

Hi Corentin,


Thank you for this new version, which I have tested successfully with the
stm32 hash & cryp drivers.

As a general comment on this patchset, I would say that it does not
cover all async requests: typically I need (for the pending stm32 cryp
driver upgrade) to use CryptoEngine to process AEAD requests, which are
not covered here.

Could you please consider adding the 'transfer' and 'finalize' EXPORTed
functions for aead requests? (the implementation is quite trivial)

Also have a look at struct acomp_req (acompress.h) and struct
kpp_request (kpp.h), which also use "struct crypto_async_request base".


BR

Fabien



2018-01-10 14:21:06

by Fabien DESSENNE

[permalink] [raw]
Subject: Re: [PATCH 2/6] crypto: engine - Permit to enqueue all async requests


On 03/01/18 21:11, Corentin Labbe wrote:
> The crypto engine could actually only enqueue hash and ablkcipher requests.
> This patch permits it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <[email protected]>
> ---
> crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------------
> include/crypto/engine.h | 59 +++++++------
> 2 files changed, 148 insertions(+), 141 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..036270b61648 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,7 +15,6 @@
> #include <linux/err.h>
> #include <linux/delay.h>
> #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
> #include <uapi/linux/sched/types.h>
> #include "internal.h"
>
> @@ -34,11 +33,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> bool in_kthread)
> {
> struct crypto_async_request *async_req, *backlog;
> - struct ahash_request *hreq;
> - struct ablkcipher_request *breq;
> unsigned long flags;
> bool was_busy = false;
> - int ret, rtype;
> + int ret;
> + struct crypto_engine_reqctx *enginectx;
>
> spin_lock_irqsave(&engine->queue_lock, flags);
>
> @@ -94,7 +92,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>
> spin_unlock_irqrestore(&engine->queue_lock, flags);
>
> - rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
> /* Until here we get the request need to be encrypted successfully */
> if (!was_busy && engine->prepare_crypt_hardware) {
> ret = engine->prepare_crypt_hardware(engine);
> @@ -104,57 +101,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> }
> }
>
> - switch (rtype) {
> - case CRYPTO_ALG_TYPE_AHASH:
> - hreq = ahash_request_cast(engine->cur_req);
> - if (engine->prepare_hash_request) {
> - ret = engine->prepare_hash_request(engine, hreq);
> - if (ret) {
> - dev_err(engine->dev, "failed to prepare request: %d\n",
> - ret);
> - goto req_err;
> - }
> - engine->cur_req_prepared = true;
> - }
> - ret = engine->hash_one_request(engine, hreq);
> - if (ret) {
> - dev_err(engine->dev, "failed to hash one request from queue\n");
> - goto req_err;
> - }
> - return;
> - case CRYPTO_ALG_TYPE_ABLKCIPHER:
> - breq = ablkcipher_request_cast(engine->cur_req);
> - if (engine->prepare_cipher_request) {
> - ret = engine->prepare_cipher_request(engine, breq);
> - if (ret) {
> - dev_err(engine->dev, "failed to prepare request: %d\n",
> - ret);
> - goto req_err;
> - }
> - engine->cur_req_prepared = true;
> - }
> - ret = engine->cipher_one_request(engine, breq);
> + enginectx = crypto_tfm_ctx(async_req->tfm);
> +
> + if (enginectx->op.prepare_request) {
> + ret = enginectx->op.prepare_request(engine, async_req);
> if (ret) {
> - dev_err(engine->dev, "failed to cipher one request from queue\n");
> + dev_err(engine->dev, "failed to prepare request: %d\n",
> + ret);
> goto req_err;
> }
> - return;
> - default:
> - dev_err(engine->dev, "failed to prepare request of unknown type\n");
> - return;
> + engine->cur_req_prepared = true;
> + }
> + if (!enginectx->op.do_one_request) {
> + dev_err(engine->dev, "failed to do request\n");
> + ret = -EINVAL;
> + goto req_err;
> + }
> + ret = enginectx->op.do_one_request(engine, async_req);
> + if (ret) {
> + dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
> + goto req_err;
> }
> + return;
>
> req_err:
> - switch (rtype) {
> - case CRYPTO_ALG_TYPE_AHASH:
> - hreq = ahash_request_cast(engine->cur_req);
> - crypto_finalize_hash_request(engine, hreq, ret);
> - break;
> - case CRYPTO_ALG_TYPE_ABLKCIPHER:
> - breq = ablkcipher_request_cast(engine->cur_req);
> - crypto_finalize_cipher_request(engine, breq, ret);
> - break;
> - }
> + crypto_finalize_request(engine, async_req, ret);
> return;
>
> out:
> @@ -170,13 +141,12 @@ static void crypto_pump_work(struct kthread_work *work)
> }
>
> /**
> - * crypto_transfer_cipher_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_request - transfer the new request into the engine queue
> * @engine: the hardware engine
> * @req: the request need to be listed into the engine queue
> */
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> - struct ablkcipher_request *req,
> +static int crypto_transfer_request(struct crypto_engine *engine,
> + struct crypto_async_request *req,
> bool need_pump)
> {
> unsigned long flags;
> @@ -189,7 +159,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
> return -ESHUTDOWN;
> }
>
> - ret = ablkcipher_enqueue_request(&engine->queue, req);
> + ret = crypto_enqueue_request(&engine->queue, req);
>
> if (!engine->busy && need_pump)
> kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -197,85 +167,97 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
> spin_unlock_irqrestore(&engine->queue_lock, flags);
> return ret;
> }
> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_request);

Do not export this function now that it is static.

>
> /**
> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
> + * crypto_transfer_request_to_engine - transfer one request to list
> * into the engine queue
> * @engine: the hardware engine
> * @req: the request need to be listed into the engine queue
> */
> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> + struct crypto_async_request *req)
> +{
> + return crypto_transfer_request(engine, req, true);
> +}
> +
> +/**
> + * crypto_transfer_cipher_request_to_engine - transfer one ablkcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + * TODO: Remove this function when skcipher conversion is finished
> + */
> int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> struct ablkcipher_request *req)
> {
> - return crypto_transfer_cipher_request(engine, req, true);
> + return crypto_transfer_request_to_engine(engine, &req->base);
> }
> EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>
> /**
> - * crypto_transfer_hash_request - transfer the new request into the
> - * enginequeue
> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
> + * to list into the engine queue
> * @engine: the hardware engine
> * @req: the request need to be listed into the engine queue
> */
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> - struct ahash_request *req, bool need_pump)
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> + struct skcipher_request *req)
> {
> - unsigned long flags;
> - int ret;
> -
> - spin_lock_irqsave(&engine->queue_lock, flags);
> -
> - if (!engine->running) {
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> - return -ESHUTDOWN;
> - }
> -
> - ret = ahash_enqueue_request(&engine->queue, req);
> -
> - if (!engine->busy && need_pump)
> - kthread_queue_work(engine->kworker, &engine->pump_requests);
> + return crypto_transfer_request_to_engine(engine, &req->base);
> +}
> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
>
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> - return ret;
> +/**
> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
> + * to list into the engine queue
> + * @engine: the hardware engine
> + * @req: the request need to be listed into the engine queue
> + */
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> + struct akcipher_request *req)
> +{
> + return crypto_transfer_request_to_engine(engine, &req->base);
> }
> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>
> /**
> - * crypto_transfer_hash_request_to_engine - transfer one request to list
> - * into the engine queue
> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
> + * to list into the engine queue
> * @engine: the hardware engine
> * @req: the request need to be listed into the engine queue
> */
> int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> struct ahash_request *req)
> {
> - return crypto_transfer_hash_request(engine, req, true);
> + return crypto_transfer_request_to_engine(engine, &req->base);
> }
> EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>

Please add this EXPORTed function:

crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
struct aead_request *req)

> /**
> - * crypto_finalize_cipher_request - finalize one request if the request is done
> + * crypto_finalize_request - finalize one request if the request is done
> * @engine: the hardware engine
> * @req: the request need to be finalized
> * @err: error number
> */
> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
> - struct ablkcipher_request *req, int err)
> +void crypto_finalize_request(struct crypto_engine *engine,

shall be static

> + struct crypto_async_request *req, int err)
> {
> unsigned long flags;
> bool finalize_cur_req = false;
> int ret;
> + struct crypto_engine_reqctx *enginectx;
>
> spin_lock_irqsave(&engine->queue_lock, flags);
> - if (engine->cur_req == &req->base)
> + if (engine->cur_req == req)
> finalize_cur_req = true;
> spin_unlock_irqrestore(&engine->queue_lock, flags);
>
> if (finalize_cur_req) {
> + enginectx = crypto_tfm_ctx(req->tfm);
> if (engine->cur_req_prepared &&
> - engine->unprepare_cipher_request) {
> - ret = engine->unprepare_cipher_request(engine, req);
> + enginectx->op.unprepare_request) {
> + ret = enginectx->op.unprepare_request(engine, req);
> if (ret)
> dev_err(engine->dev, "failed to unprepare request\n");
> }
> @@ -285,46 +267,64 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
> spin_unlock_irqrestore(&engine->queue_lock, flags);
> }
>
> - req->base.complete(&req->base, err);
> + req->complete(req, err);
>
> kthread_queue_work(engine->kworker, &engine->pump_requests);
> }
> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>
> /**
> - * crypto_finalize_hash_request - finalize one request if the request is done
> + * crypto_finalize_cipher_request - finalize one ablkcipher_request if
> + * the request is done
> * @engine: the hardware engine
> * @req: the request need to be finalized
> * @err: error number
> */
> -void crypto_finalize_hash_request(struct crypto_engine *engine,
> - struct ahash_request *req, int err)
> +void crypto_finalize_cipher_request(struct crypto_engine *engine,
> + struct ablkcipher_request *req, int err)
> {
> - unsigned long flags;
> - bool finalize_cur_req = false;
> - int ret;
> -
> - spin_lock_irqsave(&engine->queue_lock, flags);
> - if (engine->cur_req == &req->base)
> - finalize_cur_req = true;
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> + return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>
> - if (finalize_cur_req) {
> - if (engine->cur_req_prepared &&
> - engine->unprepare_hash_request) {
> - ret = engine->unprepare_hash_request(engine, req);
> - if (ret)
> - dev_err(engine->dev, "failed to unprepare request\n");
> - }
> - spin_lock_irqsave(&engine->queue_lock, flags);
> - engine->cur_req = NULL;
> - engine->cur_req_prepared = false;
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> - }
> +/**
> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> + struct skcipher_request *req, int err)
> +{
> + return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
>
> - req->base.complete(&req->base, err);
> +/**
> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> + struct akcipher_request *req, int err)
> +{
> + return crypto_finalize_request(engine, &req->base, err);
> +}
> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>
> - kthread_queue_work(engine->kworker, &engine->pump_requests);
> +/**
> + * crypto_finalize_hash_request - finalize one ahash_request if
> + * the request is done
> + * @engine: the hardware engine
> + * @req: the request need to be finalized
> + * @err: error number
> + */
> +void crypto_finalize_hash_request(struct crypto_engine *engine,
> + struct ahash_request *req, int err)
> +{
> + return crypto_finalize_request(engine, &req->base, err);
> }
> EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);

Add
crypto_finalize_aead_request(struct crypto_engine *engine, struct
aead_request *req, int err)

>
> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
> index dd04c1699b51..1ea7cbe92eaf 100644
> --- a/include/crypto/engine.h
> +++ b/include/crypto/engine.h
> @@ -17,7 +17,9 @@
> #include <linux/kernel.h>
> #include <linux/kthread.h>
> #include <crypto/algapi.h>
> +#include <crypto/akcipher.h>
> #include <crypto/hash.h>
> +#include <crypto/skcipher.h>
>
> #define ENGINE_NAME_LEN 30
> /*
> @@ -37,12 +39,6 @@
> * @unprepare_crypt_hardware: there are currently no more requests on the
> * queue so the subsystem notifies the driver that it may relax the
> * hardware by issuing this call
> - * @prepare_cipher_request: do some prepare if need before handle the current request
> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
> - * @cipher_one_request: do encryption for current request
> - * @prepare_hash_request: do some prepare if need before handle the current request
> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
> - * @hash_one_request: do hash for current request
> * @kworker: kthread worker struct for request pump
> * @pump_requests: work struct for scheduling work to the request pump
> * @priv_data: the engine private data
> @@ -65,19 +61,6 @@ struct crypto_engine {
> int (*prepare_crypt_hardware)(struct crypto_engine *engine);
> int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>
> - int (*prepare_cipher_request)(struct crypto_engine *engine,
> - struct ablkcipher_request *req);
> - int (*unprepare_cipher_request)(struct crypto_engine *engine,
> - struct ablkcipher_request *req);
> - int (*prepare_hash_request)(struct crypto_engine *engine,
> - struct ahash_request *req);
> - int (*unprepare_hash_request)(struct crypto_engine *engine,
> - struct ahash_request *req);
> - int (*cipher_one_request)(struct crypto_engine *engine,
> - struct ablkcipher_request *req);
> - int (*hash_one_request)(struct crypto_engine *engine,
> - struct ahash_request *req);
> -
> struct kthread_worker *kworker;
> struct kthread_work pump_requests;
>
> @@ -85,19 +68,43 @@ struct crypto_engine {
> struct crypto_async_request *cur_req;
> };
>
> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
> - struct ablkcipher_request *req,
> - bool need_pump);
> +/*
> + * struct crypto_engine_op - crypto hardware engine operations
> + * @prepare_request: do some preparation if needed before handling the current request
> + * @unprepare_request: undo any work done by prepare_request()
> + * @do_one_request: do encryption for current request
> + */
> +struct crypto_engine_op {
> + int (*prepare_request)(struct crypto_engine *engine,
> + void *areq);
> + int (*unprepare_request)(struct crypto_engine *engine,
> + void *areq);
> + int (*do_one_request)(struct crypto_engine *engine,
> + void *areq);
> +};
> +
> +struct crypto_engine_reqctx {
> + struct crypto_engine_op op;
> +};
> +
> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
> + struct akcipher_request *req);
> int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
> - struct ablkcipher_request *req);
> -int crypto_transfer_hash_request(struct crypto_engine *engine,
> - struct ahash_request *req, bool need_pump);
> + struct ablkcipher_request *req);
> int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
> - struct ahash_request *req);
> + struct ahash_request *req);
> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
> + struct skcipher_request *req);

+ transfer_aead

> +void crypto_finalize_request(struct crypto_engine *engine,
> + struct crypto_async_request *req, int err);

static (+move to  .c file?)

> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
> + struct akcipher_request *req, int err);
> void crypto_finalize_cipher_request(struct crypto_engine *engine,
> struct ablkcipher_request *req, int err);
> void crypto_finalize_hash_request(struct crypto_engine *engine,
> struct ahash_request *req, int err);
> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
> + struct skcipher_request *req, int err);

+ finalize_aead

> int crypto_engine_start(struct crypto_engine *engine);
> int crypto_engine_stop(struct crypto_engine *engine);
> struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);

2018-01-10 14:25:10

by Fabien DESSENNE

[permalink] [raw]
Subject: Re: [PATCH 5/6] crypto: stm32-hash: convert to the new crypto engine API



On 03/01/18 21:11, Corentin Labbe wrote:
> This patch converts the stm32-hash driver to the new crypto engine API.
>
> Signed-off-by: Corentin Labbe <[email protected]>
> ---
> drivers/crypto/stm32/stm32-hash.c | 18 +++++++++++++-----
> 1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
> index 4ca4a264a833..9790c2c936c7 100644
> --- a/drivers/crypto/stm32/stm32-hash.c
> +++ b/drivers/crypto/stm32/stm32-hash.c
> @@ -122,6 +122,7 @@ enum stm32_hash_data_format {
> #define HASH_DMA_THRESHOLD 50
>
> struct stm32_hash_ctx {
> + struct crypto_engine_reqctx enginectx;
> struct stm32_hash_dev *hdev;
> unsigned long flags;
>
> @@ -828,6 +829,11 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
> return 0;
> }
>
> +static int stm32_hash_one_request(struct crypto_engine *engine,
> + void *areq);

merge these two lines in a single one

> +static int stm32_hash_prepare_req(struct crypto_engine *engine,
> + void *areq);

merge these two lines in a single one

> +
> static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
> struct ahash_request *req)
> {
> @@ -835,8 +841,9 @@ static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
> }
>
> static int stm32_hash_prepare_req(struct crypto_engine *engine,
> - struct ahash_request *req)
> + void *areq)

merge these two lines in a single one

> {
> + struct ahash_request *req = container_of(areq, struct ahash_request, base);

> 80 characters (CHECKPATCH)

> struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
> struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
> struct stm32_hash_request_ctx *rctx;
> @@ -855,8 +862,9 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
> }
>
> static int stm32_hash_one_request(struct crypto_engine *engine,
> - struct ahash_request *req)
> + void *areq)

merge these two lines in a single one

> {
> + struct ahash_request *req = container_of(areq, struct ahash_request, base);

> 80 characters (CHECKPATCH)

> struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
> struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
> struct stm32_hash_request_ctx *rctx;
> @@ -1033,6 +1041,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
> if (algs_hmac_name)
> ctx->flags |= HASH_FLAGS_HMAC;
>
> + ctx->enginectx.op.do_one_request = stm32_hash_one_request;
> + ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
> + ctx->enginectx.op.unprepare_request = NULL;
> return 0;
> }
>
> @@ -1493,9 +1504,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
> goto err_engine;
> }
>
> - hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
> - hdev->engine->hash_one_request = stm32_hash_one_request;
> -
> ret = crypto_engine_start(hdev->engine);
> if (ret)
> goto err_engine_start;

2018-01-10 14:27:14

by Fabien DESSENNE

[permalink] [raw]
Subject: Re: [PATCH 6/6] crypto: stm32-cryp: convert to the new crypto engine API



On 03/01/18 21:11, Corentin Labbe wrote:
> This patch converts the stm32-cryp driver to the new crypto engine API.
> Signed-off-by: Corentin Labbe <[email protected]>
> ---
> drivers/crypto/stm32/stm32-cryp.c | 21 ++++++++++++++++-----
> 1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
> index cf1dddbeaa2c..99e0473ef247 100644
> --- a/drivers/crypto/stm32/stm32-cryp.c
> +++ b/drivers/crypto/stm32/stm32-cryp.c
> @@ -91,6 +91,7 @@
> #define _walked_out (cryp->out_walk.offset - cryp->out_sg->offset)
>
> struct stm32_cryp_ctx {
> + struct crypto_engine_reqctx enginectx;
> struct stm32_cryp *cryp;
> int keylen;
> u32 key[AES_KEYSIZE_256 / sizeof(u32)];
> @@ -494,10 +495,20 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
> return 0;
> }
>
> +static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
> + void *areq);

Merge these 2 lines in a single one

> +static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
> + void *areq);
> +
> static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
> {
> + struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
> +
> tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
>
> + ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
> + ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
> + ctx->enginectx.op.unprepare_request = NULL;
> return 0;
> }
>
> @@ -695,14 +706,17 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
> }
>
> static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> + void *areq)
> {
> + struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);

> 80 characters (CHECKPATCH)

> +
> return stm32_cryp_prepare_req(engine, req);
> }
>
> static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> + void *areq)

Merge these 2 lines in a single one

> {
> + struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);

> 80 characters (CHECKPATCH)

> struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct stm32_cryp *cryp = ctx->cryp;
> @@ -1104,9 +1118,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
> goto err_engine1;
> }
>
> - cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
> - cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
> -
> ret = crypto_engine_start(cryp->engine);
> if (ret) {
> dev_err(dev, "Could not start crypto engine\n");

2018-01-10 19:14:59

by Corentin Labbe

[permalink] [raw]
Subject: Re: [PATCH 1/6] Documentation: crypto: document crypto engine API

On Wed, Jan 10, 2018 at 02:13:13PM +0000, Fabien DESSENNE wrote:
> Hi Corentin,
>
>
> Thank you for this new version which I have testes successfully with the
> stm32 hash & cryp drivers.
>
> As a general comment on this patchset, I would say that it does not
> cover all async requests: typically I need (for the pending stm32 cryp
> driver uprade) to use CryptoEngine to process AEAD requests which is not
> covered here.
>
> Could you please consider adding the 'transfer' and 'finalize' EXPORTed
> functions for aead requests? (the implementation is quite trivial)
>
> Have also a look at struct acomp_req (acompress.h) and struct
> kpp_request (kpp.h) which also use "struct crypto_async_request base"
>
>
> BR
>
> Fabien
>

Hello

Thanks for your review and testing (may I add your Tested-by?).
I didn't add AEAD (or kpp/acompress) since I do not have any way to test them.
Since you have a way to test AEAD, I will add it in the next version.

Regards

>
> On 03/01/18 21:11, Corentin Labbe wrote:
> > Signed-off-by: Corentin Labbe <[email protected]>
> > ---
> > Documentation/crypto/crypto_engine.rst | 46 ++++++++++++++++++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> > create mode 100644 Documentation/crypto/crypto_engine.rst
> >
> > diff --git a/Documentation/crypto/crypto_engine.rst b/Documentation/crypto/crypto_engine.rst
> > new file mode 100644
> > index 000000000000..b0ed37f9fb0c
> > --- /dev/null
> > +++ b/Documentation/crypto/crypto_engine.rst
> > @@ -0,0 +1,46 @@
> > +=============
> > +CRYPTO ENGINE
> > +=============
> > +
> > +Overview
> > +--------
> > +The crypto engine API (CE), is a crypto queue manager.
> > +
> > +Requirement
> > +-----------
> > +You have to put at start of your tfm_ctx the struct crypto_engine_reqctx
> > +struct your_tfm_ctx {
> > + struct crypto_engine_reqctx enginectx;
> > + ...
> > +};
> > +Why: Since CE manage only crypto_async_request, it cannot know the underlying
> > +request_type and so have access only on the TFM.
> > +So using container_of for accessing __ctx is impossible.
> > +Furthermore, the crypto engine cannot know the "struct your_tfm_ctx",
> > +so it must assume that crypto_engine_reqctx is at start of it.
> > +
> > +Order of operations
> > +-------------------
> > +You have to obtain a struct crypto_engine via crypto_engine_alloc_init().
> > +And start it via crypto_engine_start().
> > +
> > +Before transferring any request, you have to fill the enginectx.
> > +- prepare_request: (taking a function pointer) If you need to do some processing before doing the request
> > +- unprepare_request: (taking a function pointer) Undoing what's done in prepare_request
> > +- do_one_request: (taking a function pointer) Do encryption for current request
> > +
> > +Note: that those three functions get the crypto_async_request associated with the received request.
> > +So your need to get the original request via container_of(areq, struct yourrequesttype_request, base);
> > +
> > +When your driver receive a crypto_request, you have to transfer it to
> > +the cryptoengine via one of:
> > +- crypto_transfer_cipher_request_to_engine()
> > +- crypto_transfer_skcipher_request_to_engine()
> > +- crypto_transfer_akcipher_request_to_engine()
> > +- crypto_transfer_hash_request_to_engine()
> > +
> > +At the end of the request processing, a call to one of the following functions is needed:
> > +- crypto_finalize_cipher_request
> > +- crypto_finalize_skcipher_request
> > +- crypto_finalize_akcipher_request
> > +- crypto_finalize_hash_request

2018-01-11 07:45:11

by Fabien DESSENNE

Subject: Re: [PATCH 2/6] crypto: engine - Permit to enqueue all async requests

(adding my Tested-by)


On 10/01/18 15:19, Fabien DESSENNE wrote:
> On 03/01/18 21:11, Corentin Labbe wrote:
>> The crypto engine could actually only enqueue hash and ablkcipher request.
>> This patch permit it to enqueue any type of crypto_async_request.
>>
>> Signed-off-by: Corentin Labbe <[email protected]>

Tested-by: Fabien Dessenne <[email protected]>

>> ---
>> crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------------
>> include/crypto/engine.h | 59 +++++++------
>> 2 files changed, 148 insertions(+), 141 deletions(-)
>>
>> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
>> index 61e7c4e02fd2..036270b61648 100644
>> --- a/crypto/crypto_engine.c
>> +++ b/crypto/crypto_engine.c
>> @@ -15,7 +15,6 @@
>> #include <linux/err.h>
>> #include <linux/delay.h>
>> #include <crypto/engine.h>
>> -#include <crypto/internal/hash.h>
>> #include <uapi/linux/sched/types.h>
>> #include "internal.h"
>>
>> @@ -34,11 +33,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>> bool in_kthread)
>> {
>> struct crypto_async_request *async_req, *backlog;
>> - struct ahash_request *hreq;
>> - struct ablkcipher_request *breq;
>> unsigned long flags;
>> bool was_busy = false;
>> - int ret, rtype;
>> + int ret;
>> + struct crypto_engine_reqctx *enginectx;
>>
>> spin_lock_irqsave(&engine->queue_lock, flags);
>>
>> @@ -94,7 +92,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>>
>> spin_unlock_irqrestore(&engine->queue_lock, flags);
>>
>> - rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
>> /* Until here we get the request need to be encrypted successfully */
>> if (!was_busy && engine->prepare_crypt_hardware) {
>> ret = engine->prepare_crypt_hardware(engine);
>> @@ -104,57 +101,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>> }
>> }
>>
>> - switch (rtype) {
>> - case CRYPTO_ALG_TYPE_AHASH:
>> - hreq = ahash_request_cast(engine->cur_req);
>> - if (engine->prepare_hash_request) {
>> - ret = engine->prepare_hash_request(engine, hreq);
>> - if (ret) {
>> - dev_err(engine->dev, "failed to prepare request: %d\n",
>> - ret);
>> - goto req_err;
>> - }
>> - engine->cur_req_prepared = true;
>> - }
>> - ret = engine->hash_one_request(engine, hreq);
>> - if (ret) {
>> - dev_err(engine->dev, "failed to hash one request from queue\n");
>> - goto req_err;
>> - }
>> - return;
>> - case CRYPTO_ALG_TYPE_ABLKCIPHER:
>> - breq = ablkcipher_request_cast(engine->cur_req);
>> - if (engine->prepare_cipher_request) {
>> - ret = engine->prepare_cipher_request(engine, breq);
>> - if (ret) {
>> - dev_err(engine->dev, "failed to prepare request: %d\n",
>> - ret);
>> - goto req_err;
>> - }
>> - engine->cur_req_prepared = true;
>> - }
>> - ret = engine->cipher_one_request(engine, breq);
>> + enginectx = crypto_tfm_ctx(async_req->tfm);
>> +
>> + if (enginectx->op.prepare_request) {
>> + ret = enginectx->op.prepare_request(engine, async_req);
>> if (ret) {
>> - dev_err(engine->dev, "failed to cipher one request from queue\n");
>> + dev_err(engine->dev, "failed to prepare request: %d\n",
>> + ret);
>> goto req_err;
>> }
>> - return;
>> - default:
>> - dev_err(engine->dev, "failed to prepare request of unknown type\n");
>> - return;
>> + engine->cur_req_prepared = true;
>> + }
>> + if (!enginectx->op.do_one_request) {
>> + dev_err(engine->dev, "failed to do request\n");
>> + ret = -EINVAL;
>> + goto req_err;
>> + }
>> + ret = enginectx->op.do_one_request(engine, async_req);
>> + if (ret) {
>> + dev_err(engine->dev, "Failed to do one request from queue: %d\n", ret);
>> + goto req_err;
>> }
>> + return;
>>
>> req_err:
>> - switch (rtype) {
>> - case CRYPTO_ALG_TYPE_AHASH:
>> - hreq = ahash_request_cast(engine->cur_req);
>> - crypto_finalize_hash_request(engine, hreq, ret);
>> - break;
>> - case CRYPTO_ALG_TYPE_ABLKCIPHER:
>> - breq = ablkcipher_request_cast(engine->cur_req);
>> - crypto_finalize_cipher_request(engine, breq, ret);
>> - break;
>> - }
>> + crypto_finalize_request(engine, async_req, ret);
>> return;
>>
>> out:
>> @@ -170,13 +141,12 @@ static void crypto_pump_work(struct kthread_work *work)
>> }
>>
>> /**
>> - * crypto_transfer_cipher_request - transfer the new request into the
>> - * enginequeue
>> + * crypto_transfer_request - transfer the new request into the engine queue
>> * @engine: the hardware engine
>> * @req: the request need to be listed into the engine queue
>> */
>> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
>> - struct ablkcipher_request *req,
>> +static int crypto_transfer_request(struct crypto_engine *engine,
>> + struct crypto_async_request *req,
>> bool need_pump)
>> {
>> unsigned long flags;
>> @@ -189,7 +159,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>> return -ESHUTDOWN;
>> }
>>
>> - ret = ablkcipher_enqueue_request(&engine->queue, req);
>> + ret = crypto_enqueue_request(&engine->queue, req);
>>
>> if (!engine->busy && need_pump)
>> kthread_queue_work(engine->kworker, &engine->pump_requests);
>> @@ -197,85 +167,97 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
>> spin_unlock_irqrestore(&engine->queue_lock, flags);
>> return ret;
>> }
>> -EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
>> +EXPORT_SYMBOL_GPL(crypto_transfer_request);
> Do not export this function which is a static one.
>
>>
>> /**
>> - * crypto_transfer_cipher_request_to_engine - transfer one request to list
>> + * crypto_transfer_request_to_engine - transfer one request to list
>> * into the engine queue
>> * @engine: the hardware engine
>> * @req: the request need to be listed into the engine queue
>> */
>> +static int crypto_transfer_request_to_engine(struct crypto_engine *engine,
>> + struct crypto_async_request *req)
>> +{
>> + return crypto_transfer_request(engine, req, true);
>> +}
>> +
>> +/**
>> + * crypto_transfer_cipher_request_to_engine - transfer one ablkcipher_request
>> + * to list into the engine queue
>> + * @engine: the hardware engine
>> + * @req: the request need to be listed into the engine queue
>> + * TODO: Remove this function when skcipher conversion is finished
>> + */
>> int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
>> struct ablkcipher_request *req)
>> {
>> - return crypto_transfer_cipher_request(engine, req, true);
>> + return crypto_transfer_request_to_engine(engine, &req->base);
>> }
>> EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
>>
>> /**
>> - * crypto_transfer_hash_request - transfer the new request into the
>> - * enginequeue
>> + * crypto_transfer_skcipher_request_to_engine - transfer one skcipher_request
>> + * to list into the engine queue
>> * @engine: the hardware engine
>> * @req: the request need to be listed into the engine queue
>> */
>> -int crypto_transfer_hash_request(struct crypto_engine *engine,
>> - struct ahash_request *req, bool need_pump)
>> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
>> + struct skcipher_request *req)
>> {
>> - unsigned long flags;
>> - int ret;
>> -
>> - spin_lock_irqsave(&engine->queue_lock, flags);
>> -
>> - if (!engine->running) {
>> - spin_unlock_irqrestore(&engine->queue_lock, flags);
>> - return -ESHUTDOWN;
>> - }
>> -
>> - ret = ahash_enqueue_request(&engine->queue, req);
>> -
>> - if (!engine->busy && need_pump)
>> - kthread_queue_work(engine->kworker, &engine->pump_requests);
>> + return crypto_transfer_request_to_engine(engine, &req->base);
>> +}
>> +EXPORT_SYMBOL_GPL(crypto_transfer_skcipher_request_to_engine);
>>
>> - spin_unlock_irqrestore(&engine->queue_lock, flags);
>> - return ret;
>> +/**
>> + * crypto_transfer_akcipher_request_to_engine - transfer one akcipher_request
>> + * to list into the engine queue
>> + * @engine: the hardware engine
>> + * @req: the request need to be listed into the engine queue
>> + */
>> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
>> + struct akcipher_request *req)
>> +{
>> + return crypto_transfer_request_to_engine(engine, &req->base);
>> }
>> -EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
>> +EXPORT_SYMBOL_GPL(crypto_transfer_akcipher_request_to_engine);
>>
>> /**
>> - * crypto_transfer_hash_request_to_engine - transfer one request to list
>> - * into the engine queue
>> + * crypto_transfer_hash_request_to_engine - transfer one ahash_request
>> + * to list into the engine queue
>> * @engine: the hardware engine
>> * @req: the request need to be listed into the engine queue
>> */
>> int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>> struct ahash_request *req)
>> {
>> - return crypto_transfer_hash_request(engine, req, true);
>> + return crypto_transfer_request_to_engine(engine, &req->base);
>> }
>> EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
>>
> Please add this EXPORTed function:
>
> crypto_transfer_aead_request_to_engine(struct crypto_engine *engine,
> struct aead_request *req)
>
>> /**
>> - * crypto_finalize_cipher_request - finalize one request if the request is done
>> + * crypto_finalize_request - finalize one request if the request is done
>> * @engine: the hardware engine
>> * @req: the request need to be finalized
>> * @err: error number
>> */
>> -void crypto_finalize_cipher_request(struct crypto_engine *engine,
>> - struct ablkcipher_request *req, int err)
>> +void crypto_finalize_request(struct crypto_engine *engine,
> shall be static
>
>> + struct crypto_async_request *req, int err)
>> {
>> unsigned long flags;
>> bool finalize_cur_req = false;
>> int ret;
>> + struct crypto_engine_reqctx *enginectx;
>>
>> spin_lock_irqsave(&engine->queue_lock, flags);
>> - if (engine->cur_req == &req->base)
>> + if (engine->cur_req == req)
>> finalize_cur_req = true;
>> spin_unlock_irqrestore(&engine->queue_lock, flags);
>>
>> if (finalize_cur_req) {
>> + enginectx = crypto_tfm_ctx(req->tfm);
>> if (engine->cur_req_prepared &&
>> - engine->unprepare_cipher_request) {
>> - ret = engine->unprepare_cipher_request(engine, req);
>> + enginectx->op.unprepare_request) {
>> + ret = enginectx->op.unprepare_request(engine, req);
>> if (ret)
>> dev_err(engine->dev, "failed to unprepare request\n");
>> }
>> @@ -285,46 +267,64 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
>> spin_unlock_irqrestore(&engine->queue_lock, flags);
>> }
>>
>> - req->base.complete(&req->base, err);
>> + req->complete(req, err);
>>
>> kthread_queue_work(engine->kworker, &engine->pump_requests);
>> }
>> -EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>>
>> /**
>> - * crypto_finalize_hash_request - finalize one request if the request is done
>> + * crypto_finalize_cipher_request - finalize one ablkcipher_request if
>> + * the request is done
>> * @engine: the hardware engine
>> * @req: the request need to be finalized
>> * @err: error number
>> */
>> -void crypto_finalize_hash_request(struct crypto_engine *engine,
>> - struct ahash_request *req, int err)
>> +void crypto_finalize_cipher_request(struct crypto_engine *engine,
>> + struct ablkcipher_request *req, int err)
>> {
>> - unsigned long flags;
>> - bool finalize_cur_req = false;
>> - int ret;
>> -
>> - spin_lock_irqsave(&engine->queue_lock, flags);
>> - if (engine->cur_req == &req->base)
>> - finalize_cur_req = true;
>> - spin_unlock_irqrestore(&engine->queue_lock, flags);
>> + return crypto_finalize_request(engine, &req->base, err);
>> +}
>> +EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
>>
>> - if (finalize_cur_req) {
>> - if (engine->cur_req_prepared &&
>> - engine->unprepare_hash_request) {
>> - ret = engine->unprepare_hash_request(engine, req);
>> - if (ret)
>> - dev_err(engine->dev, "failed to unprepare request\n");
>> - }
>> - spin_lock_irqsave(&engine->queue_lock, flags);
>> - engine->cur_req = NULL;
>> - engine->cur_req_prepared = false;
>> - spin_unlock_irqrestore(&engine->queue_lock, flags);
>> - }
>> +/**
>> + * crypto_finalize_skcipher_request - finalize one skcipher_request if
>> + * the request is done
>> + * @engine: the hardware engine
>> + * @req: the request need to be finalized
>> + * @err: error number
>> + */
>> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
>> + struct skcipher_request *req, int err)
>> +{
>> + return crypto_finalize_request(engine, &req->base, err);
>> +}
>> +EXPORT_SYMBOL_GPL(crypto_finalize_skcipher_request);
>>
>> - req->base.complete(&req->base, err);
>> +/**
>> + * crypto_finalize_akcipher_request - finalize one akcipher_request if
>> + * the request is done
>> + * @engine: the hardware engine
>> + * @req: the request need to be finalized
>> + * @err: error number
>> + */
>> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
>> + struct akcipher_request *req, int err)
>> +{
>> + return crypto_finalize_request(engine, &req->base, err);
>> +}
>> +EXPORT_SYMBOL_GPL(crypto_finalize_akcipher_request);
>>
>> - kthread_queue_work(engine->kworker, &engine->pump_requests);
>> +/**
>> + * crypto_finalize_hash_request - finalize one ahash_request if
>> + * the request is done
>> + * @engine: the hardware engine
>> + * @req: the request need to be finalized
>> + * @err: error number
>> + */
>> +void crypto_finalize_hash_request(struct crypto_engine *engine,
>> + struct ahash_request *req, int err)
>> +{
>> + return crypto_finalize_request(engine, &req->base, err);
>> }
>> EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
> Add
> crypto_finalize_aead_request(struct crypto_engine *engine, struct
> aead_request *req, int err)
>
>>
>> diff --git a/include/crypto/engine.h b/include/crypto/engine.h
>> index dd04c1699b51..1ea7cbe92eaf 100644
>> --- a/include/crypto/engine.h
>> +++ b/include/crypto/engine.h
>> @@ -17,7 +17,9 @@
>> #include <linux/kernel.h>
>> #include <linux/kthread.h>
>> #include <crypto/algapi.h>
>> +#include <crypto/akcipher.h>
>> #include <crypto/hash.h>
>> +#include <crypto/skcipher.h>
>>
>> #define ENGINE_NAME_LEN 30
>> /*
>> @@ -37,12 +39,6 @@
>> * @unprepare_crypt_hardware: there are currently no more requests on the
>> * queue so the subsystem notifies the driver that it may relax the
>> * hardware by issuing this call
>> - * @prepare_cipher_request: do some prepare if need before handle the current request
>> - * @unprepare_cipher_request: undo any work done by prepare_cipher_request()
>> - * @cipher_one_request: do encryption for current request
>> - * @prepare_hash_request: do some prepare if need before handle the current request
>> - * @unprepare_hash_request: undo any work done by prepare_hash_request()
>> - * @hash_one_request: do hash for current request
>> * @kworker: kthread worker struct for request pump
>> * @pump_requests: work struct for scheduling work to the request pump
>> * @priv_data: the engine private data
>> @@ -65,19 +61,6 @@ struct crypto_engine {
>> int (*prepare_crypt_hardware)(struct crypto_engine *engine);
>> int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>>
>> - int (*prepare_cipher_request)(struct crypto_engine *engine,
>> - struct ablkcipher_request *req);
>> - int (*unprepare_cipher_request)(struct crypto_engine *engine,
>> - struct ablkcipher_request *req);
>> - int (*prepare_hash_request)(struct crypto_engine *engine,
>> - struct ahash_request *req);
>> - int (*unprepare_hash_request)(struct crypto_engine *engine,
>> - struct ahash_request *req);
>> - int (*cipher_one_request)(struct crypto_engine *engine,
>> - struct ablkcipher_request *req);
>> - int (*hash_one_request)(struct crypto_engine *engine,
>> - struct ahash_request *req);
>> -
>> struct kthread_worker *kworker;
>> struct kthread_work pump_requests;
>>
>> @@ -85,19 +68,43 @@ struct crypto_engine {
>> struct crypto_async_request *cur_req;
>> };
>>
>> -int crypto_transfer_cipher_request(struct crypto_engine *engine,
>> - struct ablkcipher_request *req,
>> - bool need_pump);
>> +/*
>> + * struct crypto_engine_op - crypto hardware engine operations
> > + * @prepare_request: do some prepare if need before handle the current request
>> + * @unprepare_request: undo any work done by prepare_request()
>> + * @do_one_request: do encryption for current request
>> + */
>> +struct crypto_engine_op {
>> + int (*prepare_request)(struct crypto_engine *engine,
>> + void *areq);
>> + int (*unprepare_request)(struct crypto_engine *engine,
>> + void *areq);
>> + int (*do_one_request)(struct crypto_engine *engine,
>> + void *areq);
>> +};
>> +
>> +struct crypto_engine_reqctx {
>> + struct crypto_engine_op op;
>> +};
>> +
>> +int crypto_transfer_akcipher_request_to_engine(struct crypto_engine *engine,
>> + struct akcipher_request *req);
>> int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
>> - struct ablkcipher_request *req);
>> -int crypto_transfer_hash_request(struct crypto_engine *engine,
>> - struct ahash_request *req, bool need_pump);
>> + struct ablkcipher_request *req);
>> int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
>> - struct ahash_request *req);
>> + struct ahash_request *req);
>> +int crypto_transfer_skcipher_request_to_engine(struct crypto_engine *engine,
>> + struct skcipher_request *req);
> + transfer_aead
>
>> +void crypto_finalize_request(struct crypto_engine *engine,
>> + struct crypto_async_request *req, int err);
> static (+move to  .c file?)
>
>> +void crypto_finalize_akcipher_request(struct crypto_engine *engine,
>> + struct akcipher_request *req, int err);
>> void crypto_finalize_cipher_request(struct crypto_engine *engine,
>> struct ablkcipher_request *req, int err);
>> void crypto_finalize_hash_request(struct crypto_engine *engine,
>> struct ahash_request *req, int err);
>> +void crypto_finalize_skcipher_request(struct crypto_engine *engine,
>> + struct skcipher_request *req, int err);
> + finalize_aead
>
>> int crypto_engine_start(struct crypto_engine *engine);
>> int crypto_engine_stop(struct crypto_engine *engine);
>> struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);

2018-01-11 07:45:40

by Fabien DESSENNE

Subject: Re: [PATCH 5/6] crypto: stm32-hash: convert to the new crypto engine API

(adding my Tested-by)


On 10/01/18 15:24, Fabien DESSENNE wrote:
>
> On 03/01/18 21:11, Corentin Labbe wrote:
>> This patch convert the stm32-hash driver to the new crypto engine API.
>>
>> Signed-off-by: Corentin Labbe <[email protected]>

Tested-by: Fabien Dessenne <[email protected]>

>> ---
>> drivers/crypto/stm32/stm32-hash.c | 18 +++++++++++++-----
>> 1 file changed, 13 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
>> index 4ca4a264a833..9790c2c936c7 100644
>> --- a/drivers/crypto/stm32/stm32-hash.c
>> +++ b/drivers/crypto/stm32/stm32-hash.c
>> @@ -122,6 +122,7 @@ enum stm32_hash_data_format {
>> #define HASH_DMA_THRESHOLD 50
>>
>> struct stm32_hash_ctx {
>> + struct crypto_engine_reqctx enginectx;
>> struct stm32_hash_dev *hdev;
>> unsigned long flags;
>>
>> @@ -828,6 +829,11 @@ static int stm32_hash_hw_init(struct stm32_hash_dev *hdev,
>> return 0;
>> }
>>
>> +static int stm32_hash_one_request(struct crypto_engine *engine,
>> + void *areq);
> merge these two lines in a single one
>
>> +static int stm32_hash_prepare_req(struct crypto_engine *engine,
>> + void *areq);
> merge these two lines in a single one
>
>> +
>> static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
>> struct ahash_request *req)
>> {
>> @@ -835,8 +841,9 @@ static int stm32_hash_handle_queue(struct stm32_hash_dev *hdev,
>> }
>>
>> static int stm32_hash_prepare_req(struct crypto_engine *engine,
>> - struct ahash_request *req)
>> + void *areq)
> merge these two lines in a single one
>
>> {
>> + struct ahash_request *req = container_of(areq, struct ahash_request, base);
> > 80 characters (CHECKPATCH)
>
>> struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
>> struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
>> struct stm32_hash_request_ctx *rctx;
>> @@ -855,8 +862,9 @@ static int stm32_hash_prepare_req(struct crypto_engine *engine,
>> }
>>
>> static int stm32_hash_one_request(struct crypto_engine *engine,
>> - struct ahash_request *req)
>> + void *areq)
> merge these two lines in a single one
>
>> {
>> + struct ahash_request *req = container_of(areq, struct ahash_request, base);
> > 80 characters (CHECKPATCH)
>
>> struct stm32_hash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
>> struct stm32_hash_dev *hdev = stm32_hash_find_dev(ctx);
>> struct stm32_hash_request_ctx *rctx;
>> @@ -1033,6 +1041,9 @@ static int stm32_hash_cra_init_algs(struct crypto_tfm *tfm,
>> if (algs_hmac_name)
>> ctx->flags |= HASH_FLAGS_HMAC;
>>
>> + ctx->enginectx.op.do_one_request = stm32_hash_one_request;
>> + ctx->enginectx.op.prepare_request = stm32_hash_prepare_req;
>> + ctx->enginectx.op.unprepare_request = NULL;
>> return 0;
>> }
>>
>> @@ -1493,9 +1504,6 @@ static int stm32_hash_probe(struct platform_device *pdev)
>> goto err_engine;
>> }
>>
>> - hdev->engine->prepare_hash_request = stm32_hash_prepare_req;
>> - hdev->engine->hash_one_request = stm32_hash_one_request;
>> -
>> ret = crypto_engine_start(hdev->engine);
>> if (ret)
>> goto err_engine_start;

2018-01-11 07:46:07

by Fabien DESSENNE

Subject: Re: [PATCH 6/6] crypto: stm32-cryp: convert to the new crypto engine API

(Adding my Tested-by)


On 10/01/18 15:25, Fabien DESSENNE wrote:
>
> On 03/01/18 21:11, Corentin Labbe wrote:
>> This patch convert the stm32-cryp driver to the new crypto engine API.
>> Signed-off-by: Corentin Labbe <[email protected]>

Tested-by: Fabien Dessenne <[email protected]>

>> ---
>> drivers/crypto/stm32/stm32-cryp.c | 21 ++++++++++++++++-----
>> 1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/crypto/stm32/stm32-cryp.c b/drivers/crypto/stm32/stm32-cryp.c
>> index cf1dddbeaa2c..99e0473ef247 100644
>> --- a/drivers/crypto/stm32/stm32-cryp.c
>> +++ b/drivers/crypto/stm32/stm32-cryp.c
>> @@ -91,6 +91,7 @@
>> #define _walked_out (cryp->out_walk.offset - cryp->out_sg->offset)
>>
>> struct stm32_cryp_ctx {
>> + struct crypto_engine_reqctx enginectx;
>> struct stm32_cryp *cryp;
>> int keylen;
>> u32 key[AES_KEYSIZE_256 / sizeof(u32)];
>> @@ -494,10 +495,20 @@ static int stm32_cryp_cpu_start(struct stm32_cryp *cryp)
>> return 0;
>> }
>>
>> +static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
>> + void *areq);
> Merge these 2 lines in a single one
>
>> +static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
>> + void *areq);
>> +
>> static int stm32_cryp_cra_init(struct crypto_tfm *tfm)
>> {
>> + struct stm32_cryp_ctx *ctx = crypto_tfm_ctx(tfm);
>> +
>> tfm->crt_ablkcipher.reqsize = sizeof(struct stm32_cryp_reqctx);
>>
>> + ctx->enginectx.op.do_one_request = stm32_cryp_cipher_one_req;
>> + ctx->enginectx.op.prepare_request = stm32_cryp_prepare_cipher_req;
>> + ctx->enginectx.op.unprepare_request = NULL;
>> return 0;
>> }
>>
>> @@ -695,14 +706,17 @@ static int stm32_cryp_prepare_req(struct crypto_engine *engine,
>> }
>>
>> static int stm32_cryp_prepare_cipher_req(struct crypto_engine *engine,
>> - struct ablkcipher_request *req)
>> + void *areq)
>> {
>> + struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
> > 80 characters (CHECKPATCH)
>
>> +
>> return stm32_cryp_prepare_req(engine, req);
>> }
>>
>> static int stm32_cryp_cipher_one_req(struct crypto_engine *engine,
>> - struct ablkcipher_request *req)
>> + void *areq)
> Merge these 2 lines in a single one
>
>> {
>> + struct ablkcipher_request *req = container_of(areq, struct ablkcipher_request, base);
> > 80 characters (CHECKPATCH)
>
>> struct stm32_cryp_ctx *ctx = crypto_ablkcipher_ctx(
>> crypto_ablkcipher_reqtfm(req));
>> struct stm32_cryp *cryp = ctx->cryp;
>> @@ -1104,9 +1118,6 @@ static int stm32_cryp_probe(struct platform_device *pdev)
>> goto err_engine1;
>> }
>>
>> - cryp->engine->prepare_cipher_request = stm32_cryp_prepare_cipher_req;
>> - cryp->engine->cipher_one_request = stm32_cryp_cipher_one_req;
>> -
>> ret = crypto_engine_start(cryp->engine);
>> if (ret) {
>> dev_err(dev, "Could not start crypto engine\n");

2018-01-12 07:15:27

by Herbert Xu

Subject: Re: [PATCH 2/6] crypto: engine - Permit to enqueue all async requests

On Wed, Jan 03, 2018 at 09:11:05PM +0100, Corentin Labbe wrote:
> The crypto engine could actually only enqueue hash and ablkcipher request.
> This patch permit it to enqueue any type of crypto_async_request.
>
> Signed-off-by: Corentin Labbe <[email protected]>
> ---
> crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------------
> include/crypto/engine.h | 59 +++++++------
> 2 files changed, 148 insertions(+), 141 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index 61e7c4e02fd2..036270b61648 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -15,7 +15,6 @@
> #include <linux/err.h>
> #include <linux/delay.h>
> #include <crypto/engine.h>
> -#include <crypto/internal/hash.h>
> #include <uapi/linux/sched/types.h>
> #include "internal.h"
>
> @@ -34,11 +33,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
> bool in_kthread)
> {
> struct crypto_async_request *async_req, *backlog;
> - struct ahash_request *hreq;
> - struct ablkcipher_request *breq;
> unsigned long flags;
> bool was_busy = false;
> - int ret, rtype;
> + int ret;
> + struct crypto_engine_reqctx *enginectx;

This all looks very good. Just one minor nit, since you're storing
this in the tfm ctx as opposed to the request ctx (which is indeed
an improvement), you should remove the "req" from its name.

Thanks!
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt