2014-07-18 16:38:48

by Horia Geantă

Subject: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

This patch set adds Run Time Assembler (RTA) SEC descriptor library.

The main reason for replacing the incumbent "inline append" is
to have a single code base for both user space and kernel space.
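
As a rough illustration of the difference (a sketch assembled from the hunks
in patch 06, not complete code), loading an IV and starting a cipher
operation in a shared descriptor looks like this in the two APIs:

	/* "inline append" style */
	init_sh_desc(desc, HDR_SHARE_SERIAL);
	append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
		   LDST_CLASS_1_CCB | ivsize);
	append_operation(desc, class1_alg_type | OP_ALG_AS_INITFINAL |
			 OP_ALG_ENCRYPT);

	/* RTA style */
	PROGRAM_CNTXT_INIT(desc, 0);
	SHR_HDR(SHR_SERIAL, 1, 0);
	SEQLOAD(CONTEXT1, 0, ivsize, 0);
	ALG_OPERATION(class1_alg_type & OP_ALG_ALGSEL_MASK,
		      class1_alg_type & OP_ALG_AAI_MASK,
		      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
	PROGRAM_FINALIZE();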

The patches are based on the latest cryptodev tree, with the following applied on top:
[PATCH 00/10] CAAM - DMA API fixes
http://www.mail-archive.com/[email protected]/msg11381.html
[PATCH] crypto: caam - set DK (Decrypt Key) bit only for AES accelerator
http://www.mail-archive.com/[email protected]/msg11392.html

Patches 01-04 are fixes and clean-ups.

Patch 05 adds the RTA library.

Patch 06 rewrites the "inline append" descriptors using RTA.
The generated descriptors (hex dumps) were verified to be bit-exact,
with a few exceptions (see the commit message).

Patch 07 removes "inline append".

Patch 08 refactors the descriptor-generation code
to make it more comprehensible and maintainable.

Patch 09 adds support for generating kernel-doc for RTA.
It depends on upstream (torvalds/linux.git) commit
cbb4d3e6510b99522719c5ef0cd0482886a324c0
("scripts/kernel-doc: handle object-like macros")

Thanks,
Horia

Horia Geanta (9):
crypto: caam - completely remove error propagation handling
crypto: caam - desc.h fixes
crypto: caam - code cleanup
crypto: caam - move sec4_sg_entry to sg_sw_sec4.h
crypto: caam - add Run Time Library (RTA)
crypto: caam - use RTA instead of inline append
crypto: caam - completely remove inline append
crypto: caam - refactor descriptor creation
crypto: caam - add Run Time Library (RTA) docbook

Documentation/DocBook/Makefile | 3 +-
Documentation/DocBook/rta-api.tmpl | 245 ++
Documentation/DocBook/rta/.gitignore | 1 +
Documentation/DocBook/rta/Makefile | 5 +
Documentation/DocBook/rta/rta_arch.svg | 381 +++
drivers/crypto/caam/Makefile | 4 +-
drivers/crypto/caam/caamalg.c | 850 ++++---
drivers/crypto/caam/caamhash.c | 550 ++---
drivers/crypto/caam/caamrng.c | 53 +-
drivers/crypto/caam/compat.h | 1 +
drivers/crypto/caam/ctrl.c | 98 +-
drivers/crypto/caam/ctrl.h | 2 +-
drivers/crypto/caam/desc.h | 1621 -------------
drivers/crypto/caam/desc_constr.h | 388 ---
drivers/crypto/caam/error.c | 7 +-
drivers/crypto/caam/flib/desc.h | 2541 ++++++++++++++++++++
drivers/crypto/caam/flib/desc/common.h | 39 +
drivers/crypto/caam/flib/desc/jobdesc.h | 75 +
drivers/crypto/caam/flib/rta.h | 926 +++++++
drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h | 319 +++
drivers/crypto/caam/flib/rta/header_cmd.h | 209 ++
drivers/crypto/caam/flib/rta/jump_cmd.h | 181 ++
drivers/crypto/caam/flib/rta/key_cmd.h | 192 ++
drivers/crypto/caam/flib/rta/load_cmd.h | 308 +++
drivers/crypto/caam/flib/rta/math_cmd.h | 388 +++
drivers/crypto/caam/flib/rta/move_cmd.h | 408 ++++
drivers/crypto/caam/flib/rta/nfifo_cmd.h | 164 ++
drivers/crypto/caam/flib/rta/operation_cmd.h | 545 +++++
drivers/crypto/caam/flib/rta/protocol_cmd.h | 595 +++++
drivers/crypto/caam/flib/rta/sec_run_time_asm.h | 755 ++++++
drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h | 168 ++
drivers/crypto/caam/flib/rta/signature_cmd.h | 36 +
drivers/crypto/caam/flib/rta/store_cmd.h | 156 ++
drivers/crypto/caam/jr.c | 6 +-
drivers/crypto/caam/key_gen.c | 36 +-
drivers/crypto/caam/key_gen.h | 2 +-
drivers/crypto/caam/pdb.h | 402 ----
drivers/crypto/caam/sg_sw_sec4.h | 12 +-
38 files changed, 9511 insertions(+), 3161 deletions(-)
create mode 100644 Documentation/DocBook/rta-api.tmpl
create mode 100644 Documentation/DocBook/rta/.gitignore
create mode 100644 Documentation/DocBook/rta/Makefile
create mode 100644 Documentation/DocBook/rta/rta_arch.svg
delete mode 100644 drivers/crypto/caam/desc.h
delete mode 100644 drivers/crypto/caam/desc_constr.h
create mode 100644 drivers/crypto/caam/flib/desc.h
create mode 100644 drivers/crypto/caam/flib/desc/common.h
create mode 100644 drivers/crypto/caam/flib/desc/jobdesc.h
create mode 100644 drivers/crypto/caam/flib/rta.h
create mode 100644 drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/header_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/jump_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/key_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/load_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/math_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/move_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/nfifo_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/operation_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/protocol_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/sec_run_time_asm.h
create mode 100644 drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/signature_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/store_cmd.h
delete mode 100644 drivers/crypto/caam/pdb.h

--
1.8.3.1


2014-07-18 16:38:50

by Horia Geantă

Subject: [PATCH 2/9] crypto: caam - desc.h fixes

1. fix HDR_START_IDX_MASK
Define HDR_START_IDX_MASK consistently with the other masks:
mask = bitmask << offset

2. fix FIFO_STORE output data type value for AFHA S-Box

3. fix OPERATION pkha modular arithmetic source mask

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/desc.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h
index d397ff9d56fd..7a58d6ee801d 100644
--- a/drivers/crypto/caam/desc.h
+++ b/drivers/crypto/caam/desc.h
@@ -80,8 +80,8 @@ struct sec4_sg_entry {
#define HDR_ZRO 0x00008000

/* Start Index or SharedDesc Length */
-#define HDR_START_IDX_MASK 0x3f
#define HDR_START_IDX_SHIFT 16
+#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)

/* If shared descriptor header, 6-bit length */
#define HDR_DESCLEN_SHR_MASK 0x3f
@@ -390,7 +390,7 @@ struct sec4_sg_entry {
#define FIFOST_TYPE_PKHA_N (0x08 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_A (0x0c << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_B (0x0d << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_AF_SBOX_JKEK (0x10 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_E_JKEK (0x22 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_E_TKEK (0x23 << FIFOST_TYPE_SHIFT)
@@ -1237,7 +1237,7 @@ struct sec4_sg_entry {
#define OP_ALG_PKMODE_MOD_PRIMALITY 0x00f

/* PKHA mode copy-memory functions */
-#define OP_ALG_PKMODE_SRC_REG_SHIFT 13
+#define OP_ALG_PKMODE_SRC_REG_SHIFT 17
#define OP_ALG_PKMODE_SRC_REG_MASK (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
#define OP_ALG_PKMODE_DST_REG_SHIFT 10
#define OP_ALG_PKMODE_DST_REG_MASK (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
--
1.8.3.1

2014-07-18 16:38:59

by Horia Geantă

Subject: [PATCH 4/9] crypto: caam - move sec4_sg_entry to sg_sw_sec4.h

The sec4_sg_entry structure is used only by the helper functions in
sg_sw_sec4.h. Since SEC HW S/G entries are meant to be manipulated only
indirectly, via these functions, move sec4_sg_entry to that header.

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/desc.h | 10 ----------
drivers/crypto/caam/sg_sw_sec4.h | 10 +++++++++-
2 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h
index 7a58d6ee801d..9066fdc402fa 100644
--- a/drivers/crypto/caam/desc.h
+++ b/drivers/crypto/caam/desc.h
@@ -8,16 +8,6 @@
#ifndef DESC_H
#define DESC_H

-struct sec4_sg_entry {
- u64 ptr;
-#define SEC4_SG_LEN_FIN 0x40000000
-#define SEC4_SG_LEN_EXT 0x80000000
- u32 len;
- u8 reserved;
- u8 buf_pool_id;
- u16 offset;
-};
-
/* Max size of any CAAM descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64

diff --git a/drivers/crypto/caam/sg_sw_sec4.h b/drivers/crypto/caam/sg_sw_sec4.h
index a6e5b94756d4..e6fa2c226b8f 100644
--- a/drivers/crypto/caam/sg_sw_sec4.h
+++ b/drivers/crypto/caam/sg_sw_sec4.h
@@ -5,7 +5,15 @@
*
*/

-struct sec4_sg_entry;
+struct sec4_sg_entry {
+ u64 ptr;
+#define SEC4_SG_LEN_FIN 0x40000000
+#define SEC4_SG_LEN_EXT 0x80000000
+ u32 len;
+ u8 reserved;
+ u8 buf_pool_id;
+ u16 offset;
+};

/*
* convert single dma address to h/w link table format
--
1.8.3.1

2014-07-18 16:39:03

by Horia Geantă

Subject: [PATCH 1/9] crypto: caam - completely remove error propagation handling

Commit 4464a7d4f53d756101291da26563f37f7fce40f3
("crypto: caam - remove error propagation handling")
removed error propagation handling only from caamalg.

Do the same in the remaining places: caamhash and caamrng.
Update the descriptors' lengths accordingly.
Note that caamrng's shared descriptor length was previously incorrect.
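
With the error-propagation LOAD removed, the RNG shared descriptor should
consist of just three command words (this breakdown is an assumption based
on the existing descriptor code, not stated in the original patch):

	DESC_RNG_LEN = (1 /* SHR HDR */ + 1 /* OPERATION */ +
			1 /* SEQ FIFO STORE */) * CAAM_CMD_SZ
		     = 3 * CAAM_CMD_SZ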

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamhash.c | 5 +----
drivers/crypto/caam/caamrng.c | 9 +++------
2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index b464d03ebf40..56ec534337b3 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -72,7 +72,7 @@
#define CAAM_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE

/* length of descriptors text */
-#define DESC_AHASH_BASE (4 * CAAM_CMD_SZ)
+#define DESC_AHASH_BASE (3 * CAAM_CMD_SZ)
#define DESC_AHASH_UPDATE_LEN (6 * CAAM_CMD_SZ)
#define DESC_AHASH_UPDATE_FIRST_LEN (DESC_AHASH_BASE + 4 * CAAM_CMD_SZ)
#define DESC_AHASH_FINAL_LEN (DESC_AHASH_BASE + 5 * CAAM_CMD_SZ)
@@ -247,9 +247,6 @@ static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)

set_jump_tgt_here(desc, key_jump_cmd);
}
-
- /* Propagate errors from shared to job descriptor */
- append_cmd(desc, SET_OK_NO_PROP_ERRORS | CMD_LOAD);
}

/*
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index ae31e555793c..8b9df8deda67 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -52,7 +52,7 @@

/* length of descriptors */
#define DESC_JOB_O_LEN (CAAM_CMD_SZ * 2 + CAAM_PTR_SZ * 2)
-#define DESC_RNG_LEN (10 * CAAM_CMD_SZ)
+#define DESC_RNG_LEN (3 * CAAM_CMD_SZ)

/* Buffer, its dma address and lock */
struct buf_data {
@@ -90,8 +90,8 @@ static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)
struct device *jrdev = ctx->jrdev;

if (ctx->sh_desc_dma)
- dma_unmap_single(jrdev, ctx->sh_desc_dma, DESC_RNG_LEN,
- DMA_TO_DEVICE);
+ dma_unmap_single(jrdev, ctx->sh_desc_dma,
+ desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);
rng_unmap_buf(jrdev, &ctx->bufs[0]);
rng_unmap_buf(jrdev, &ctx->bufs[1]);
}
@@ -192,9 +192,6 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)

init_sh_desc(desc, HDR_SHARE_SERIAL);

- /* Propagate errors from shared to job descriptor */
- append_cmd(desc, SET_OK_NO_PROP_ERRORS | CMD_LOAD);
-
/* Generate random bytes */
append_operation(desc, OP_ALG_ALGSEL_RNG | OP_TYPE_CLASS1_ALG);

--
1.8.3.1

2014-07-18 16:39:16

by Horia Geantă

Subject: [PATCH 6/9] crypto: caam - use RTA instead of inline append

Update the following components:
caamalg, caamhash, caamrng, keygen, ctrl

The include path in the Makefile is updated accordingly.

Descriptors rewritten using RTA were verified to be bit-exact
(i.e. identical hex dumps) with the ones they replace, with
the following exceptions:
- shared descriptors: the start index is 1 instead of 0; this has
  no functional effect (see the illustration below)
- MDHA split keys differ, since the keys are the pre-computed
  IPAD | OPAD HMAC keys encrypted with the JDKEK (Job Descriptor
  Key-Encryption Key), and the JDKEK changes at every device POR.
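
The start index difference is visible in the shared descriptor header
(illustrative snippet only, taken from the conversion below):

	/* inline append: start index field left at its default of 0 */
	init_sh_desc(desc, HDR_SHARE_SERIAL);

	/* RTA: start index passed explicitly, here 1 */
	SHR_HDR(SHR_SERIAL, 1, 0);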

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/Makefile | 4 +-
drivers/crypto/caam/caamalg.c | 664 +++++++++++++++++++++++------------------
drivers/crypto/caam/caamhash.c | 390 +++++++++++++++---------
drivers/crypto/caam/caamrng.c | 46 ++-
drivers/crypto/caam/compat.h | 1 +
drivers/crypto/caam/ctrl.c | 90 ++++--
drivers/crypto/caam/ctrl.h | 2 +-
drivers/crypto/caam/error.c | 2 +-
drivers/crypto/caam/jr.c | 2 +-
drivers/crypto/caam/key_gen.c | 36 +--
drivers/crypto/caam/key_gen.h | 2 +-
11 files changed, 727 insertions(+), 512 deletions(-)

diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 550758a333e7..10a97a8a8391 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -2,9 +2,11 @@
# Makefile for the CAAM backend and dependent components
#
ifeq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_DEBUG), y)
- EXTRA_CFLAGS := -DDEBUG
+ ccflags-y := -DDEBUG
endif

+ccflags-y += -I$(src)
+
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index c3a845856cd0..ad5ef8c0c179 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -48,7 +48,7 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "jr.h"
#include "error.h"
#include "sg_sw_sec4.h"
@@ -93,59 +93,56 @@
static struct list_head alg_list;

/* Set DK bit in class 1 operation if shared */
-static inline void append_dec_op1(u32 *desc, u32 type)
+static inline void append_dec_op1(struct program *program, uint32_t type)
{
- u32 *jump_cmd, *uncond_jump_cmd;
+ LABEL(jump_cmd);
+ REFERENCE(pjump_cmd);
+ LABEL(uncond_jump_cmd);
+ REFERENCE(puncond_jump_cmd);

/* DK bit is valid only for AES */
if ((type & OP_ALG_ALGSEL_MASK) != OP_ALG_ALGSEL_AES) {
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT);
+ ALG_OPERATION(type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
return;
}

- jump_cmd = append_jump(desc, JUMP_TEST_ALL | JUMP_COND_SHRD);
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT);
- uncond_jump_cmd = append_jump(desc, JUMP_TEST_ALL);
- set_jump_tgt_here(desc, jump_cmd);
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT | OP_ALG_AAI_DK);
- set_jump_tgt_here(desc, uncond_jump_cmd);
-}
-
-/*
- * For aead functions, read payload and write payload,
- * both of which are specified in req->src and req->dst
- */
-static inline void aead_append_src_dst(u32 *desc, u32 msg_type)
-{
- append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH |
- KEY_VLF | msg_type | FIFOLD_TYPE_LASTBOTH);
+ pjump_cmd = JUMP(IMM(jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ puncond_jump_cmd = JUMP(IMM(uncond_jump_cmd), LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(jump_cmd);
+ ALG_OPERATION(type & OP_ALG_ALGSEL_MASK,
+ (type & OP_ALG_AAI_MASK) | OP_ALG_AAI_DK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ SET_LABEL(uncond_jump_cmd);
+
+ PATCH_JUMP(pjump_cmd, jump_cmd);
+ PATCH_JUMP(puncond_jump_cmd, uncond_jump_cmd);
}

/*
* For aead encrypt and decrypt, read iv for both classes
*/
-static inline void aead_append_ld_iv(u32 *desc, int ivsize)
+static inline void aead_append_ld_iv(struct program *program, uint32_t ivsize)
{
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | ivsize);
- append_move(desc, MOVE_SRC_CLASS1CTX | MOVE_DEST_CLASS2INFIFO | ivsize);
+ SEQLOAD(CONTEXT1, 0, ivsize, 0);
+ MOVE(CONTEXT1, 0, IFIFOAB2, 0, IMM(ivsize), 0);
}

/*
* For ablkcipher encrypt and decrypt, read from req->src and
* write to req->dst
*/
-static inline void ablkcipher_append_src_dst(u32 *desc)
+static inline void ablkcipher_append_src_dst(struct program *program)
{
- append_math_add(desc, VARSEQOUTLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS1 |
- KEY_VLF | FIFOLD_TYPE_MSG | FIFOLD_TYPE_LAST1);
- append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
}

/*
@@ -160,15 +157,15 @@ static inline void ablkcipher_append_src_dst(u32 *desc)
*/
struct caam_ctx {
struct device *jrdev;
- u32 sh_desc_enc[DESC_MAX_USED_LEN];
- u32 sh_desc_dec[DESC_MAX_USED_LEN];
- u32 sh_desc_givenc[DESC_MAX_USED_LEN];
+ uint32_t sh_desc_enc[DESC_MAX_USED_LEN];
+ uint32_t sh_desc_dec[DESC_MAX_USED_LEN];
+ uint32_t sh_desc_givenc[DESC_MAX_USED_LEN];
dma_addr_t sh_desc_enc_dma;
dma_addr_t sh_desc_dec_dma;
dma_addr_t sh_desc_givenc_dma;
- u32 class1_alg_type;
- u32 class2_alg_type;
- u32 alg_op;
+ uint32_t class1_alg_type;
+ uint32_t class2_alg_type;
+ uint32_t alg_op;
u8 key[CAAM_MAX_KEY_SIZE];
dma_addr_t key_dma;
unsigned int enckeylen;
@@ -177,38 +174,38 @@ struct caam_ctx {
unsigned int authsize;
};

-static void append_key_aead(u32 *desc, struct caam_ctx *ctx,
+static void append_key_aead(struct program *program, struct caam_ctx *ctx,
int keys_fit_inline)
{
if (keys_fit_inline) {
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- append_key_as_imm(desc, (void *)ctx->key +
- ctx->split_key_pad_len, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 | KEY_DEST_CLASS_REG);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
+ KEY(KEY1, 0,
+ PTR((uintptr_t)(ctx->key + ctx->split_key_pad_len)),
+ ctx->enckeylen, IMMED);
} else {
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- append_key(desc, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, CLASS_1 | KEY_DEST_CLASS_REG);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ KEY(KEY1, 0, PTR(ctx->key_dma + ctx->split_key_pad_len),
+ ctx->enckeylen, 0);
}
}

-static void init_sh_desc_key_aead(u32 *desc, struct caam_ctx *ctx,
+static void init_sh_desc_key_aead(struct program *program, struct caam_ctx *ctx,
int keys_fit_inline)
{
- u32 *key_jump_cmd;
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);

- append_key_aead(desc, ctx, keys_fit_inline);
+ append_key_aead(program, ctx, keys_fit_inline);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(key_jump_cmd);
+ PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
}

static int aead_null_set_sh_desc(struct crypto_aead *aead)
@@ -217,8 +214,19 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
bool keys_fit_inline = false;
- u32 *key_jump_cmd, *jump_cmd, *read_move_cmd, *write_move_cmd;
- u32 *desc;
+ uint32_t *desc;
+ struct program prg;
+ struct program *program = &prg;
+ unsigned desc_bytes;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(nop_cmd);
+ REFERENCE(pnop_cmd);
+ LABEL(read_move_cmd);
+ REFERENCE(pread_move_cmd);
+ LABEL(write_move_cmd);
+ REFERENCE(pwrite_move_cmd);

/*
* Job Descriptor and Shared Descriptors
@@ -230,70 +238,72 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)

/* aead_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
if (keys_fit_inline)
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
else
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- set_jump_tgt_here(desc, key_jump_cmd);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ SET_LABEL(skip_key_load);

/* cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(SEQOUTSZ, SUB, IMM(ctx->authsize), MATH3, CAAM_CMD_SZ, 0);

/*
* NULL encryption; IV is zero
* assoclen = (assoclen + cryptlen) - cryptlen
*/
- append_math_sub(desc, VARSEQINLEN, SEQINLEN, REG3, CAAM_CMD_SZ);
+ MATHB(SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(MSG2, 0 , VLF);

/* Prepare to read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
+ MATHB(ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/*
* MOVE_LEN opcode is not available in all SEC HW revisions,
* thus need to do some magic, i.e. self-patch the descriptor
* buffer.
*/
- read_move_cmd = append_move(desc, MOVE_SRC_DESCBUF |
- MOVE_DEST_MATH3 |
- (0x6 << MOVE_LEN_SHIFT));
- write_move_cmd = append_move(desc, MOVE_SRC_MATH3 |
- MOVE_DEST_DESCBUF |
- MOVE_WAITCOMP |
- (0x8 << MOVE_LEN_SHIFT));
+ pread_move_cmd = MOVE(DESCBUF, 0, MATH3, 0, IMM(6), 0);
+ pwrite_move_cmd = MOVE(MATH3, 0, DESCBUF, 0, IMM(8), WAITCOMP);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* Read and write cryptlen bytes */
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG | FIFOLD_TYPE_FLUSH1);
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
+ SEQFIFOLOAD(MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);

- set_move_tgt_here(desc, read_move_cmd);
- set_move_tgt_here(desc, write_move_cmd);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO_CL | MOVE_DEST_OUTFIFO |
- MOVE_AUX_LS);
+ SET_LABEL(read_move_cmd);
+ SET_LABEL(write_move_cmd);
+ LOAD(IMM(0), DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, 0);
+ MOVE(IFIFOAB1, 0, OFIFO, 0, IMM(0), 0);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(CONTEXT2, 0, ctx->authsize, 0);
+
+ PATCH_JUMP(pskip_key_load, skip_key_load);
+ PATCH_MOVE(pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(pwrite_move_cmd, write_move_cmd);

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -302,8 +312,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"aead null enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -315,78 +324,81 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;

+ /* aead_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- /* aead_decrypt shared descriptor */
- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
if (keys_fit_inline)
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
else
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- set_jump_tgt_here(desc, key_jump_cmd);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ SET_LABEL(skip_key_load);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_DECRYPT | OP_ALG_ICV_ON);
+ ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE,
+ OP_ALG_DECRYPT);

/* assoclen + cryptlen = seqinlen - ivsize - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQINLEN, IMM,
- ctx->authsize + tfm->ivsize);
+ MATHB(SEQINSZ, SUB, IMM(ctx->authsize + tfm->ivsize), MATH3,
+ CAAM_CMD_SZ, 0);
/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, REG2, SEQOUTLEN, REG0, CAAM_CMD_SZ);
- append_math_sub(desc, VARSEQINLEN, REG3, REG2, CAAM_CMD_SZ);
+ MATHB(SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(MSG2, 0 , VLF);

/* Prepare to read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG2, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG2, CAAM_CMD_SZ);
+ MATHB(ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/*
* MOVE_LEN opcode is not available in all SEC HW revisions,
* thus need to do some magic, i.e. self-patch the descriptor
* buffer.
*/
- read_move_cmd = append_move(desc, MOVE_SRC_DESCBUF |
- MOVE_DEST_MATH2 |
- (0x6 << MOVE_LEN_SHIFT));
- write_move_cmd = append_move(desc, MOVE_SRC_MATH2 |
- MOVE_DEST_DESCBUF |
- MOVE_WAITCOMP |
- (0x8 << MOVE_LEN_SHIFT));
+ pread_move_cmd = MOVE(DESCBUF, 0, MATH2, 0, IMM(6), 0);
+ pwrite_move_cmd = MOVE(MATH2, 0, DESCBUF, 0, IMM(8), WAITCOMP);

/* Read and write cryptlen bytes */
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG | FIFOLD_TYPE_FLUSH1);
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
+ SEQFIFOLOAD(MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);

/*
* Insert a NOP here, since we need at least 4 instructions between
* code patching the descriptor buffer and the location being patched.
*/
- jump_cmd = append_jump(desc, JUMP_TEST_ALL);
- set_jump_tgt_here(desc, jump_cmd);
+ pnop_cmd = JUMP(IMM(nop_cmd), LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(nop_cmd);

- set_move_tgt_here(desc, read_move_cmd);
- set_move_tgt_here(desc, write_move_cmd);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO_CL | MOVE_DEST_OUTFIFO |
- MOVE_AUX_LS);
- append_cmd(desc, CMD_LOAD | ENABLE_AUTO_INFO_FIFO);
+ SET_LABEL(read_move_cmd);
+ SET_LABEL(write_move_cmd);
+ LOAD(IMM(0), DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, 0);
+ MOVE(IFIFOAB1, 0, OFIFO, 0, IMM(0), 0);
+ LOAD(IMM(0), DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, 0);

/* Load ICV */
- append_seq_fifo_load(desc, ctx->authsize, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_ICV);
+ SEQFIFOLOAD(ICV2, ctx->authsize, LAST2);
+
+ PATCH_JUMP(pskip_key_load, skip_key_load);
+ PATCH_JUMP(pnop_cmd, nop_cmd);
+ PATCH_MOVE(pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(pwrite_move_cmd, write_move_cmd);

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -395,8 +407,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"aead null dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return 0;
@@ -408,8 +419,12 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
bool keys_fit_inline = false;
- u32 geniv, moveiv;
- u32 *desc;
+ uint32_t geniv, moveiv;
+ uint32_t *desc;
+ struct program prg;
+ struct program *program = &prg;
+ unsigned desc_bytes;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

if (!ctx->authsize)
return 0;
@@ -429,42 +444,52 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(program, ctx, keys_fit_inline);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(SEQOUTSZ, SUB, IMM(ctx->authsize), MATH3, CAAM_CMD_SZ, 0);

/* assoclen + cryptlen = seqinlen - ivsize */
- append_math_sub_imm_u32(desc, REG2, SEQINLEN, IMM, tfm->ivsize);
+ MATHB(SEQINSZ, SUB, IMM(tfm->ivsize), MATH2, CAAM_CMD_SZ, 0);

/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, VARSEQINLEN, REG2, REG3, CAAM_CMD_SZ);
+ MATHB(MATH2, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
- aead_append_ld_iv(desc, tfm->ivsize);
+ SEQFIFOLOAD(MSG2, 0 , VLF);
+ aead_append_ld_iv(program, tfm->ivsize);

/* Class 1 operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* Read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
+ MATHB(ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
+ SEQFIFOLOAD(MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(CONTEXT2, 0, ctx->authsize, 0);
+
+ PROGRAM_FINALIZE();

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -472,8 +497,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -488,39 +512,47 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(program, ctx, keys_fit_inline);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_DECRYPT | OP_ALG_ICV_ON);
+ ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE,
+ OP_ALG_DECRYPT);

/* assoclen + cryptlen = seqinlen - ivsize - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQINLEN, IMM,
- ctx->authsize + tfm->ivsize);
+ MATHB(SEQINSZ, SUB, IMM(ctx->authsize + tfm->ivsize), MATH3,
+ CAAM_CMD_SZ, 0);
/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, REG2, SEQOUTLEN, REG0, CAAM_CMD_SZ);
- append_math_sub(desc, VARSEQINLEN, REG3, REG2, CAAM_CMD_SZ);
+ MATHB(SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(MSG2, 0 , VLF);

- aead_append_ld_iv(desc, tfm->ivsize);
+ aead_append_ld_iv(program, tfm->ivsize);

- append_dec_op1(desc, ctx->class1_alg_type);
+ append_dec_op1(program, ctx->class1_alg_type);

/* Read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG2, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG2, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG);
+ MATHB(ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
+ SEQFIFOLOAD(MSGINSNOOP, 0, VLF | LAST1 | LAST2);

/* Load ICV */
- append_seq_fifo_load(desc, ctx->authsize, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_ICV);
+ SEQFIFOLOAD(ICV2, ctx->authsize, LAST2);
+
+ PROGRAM_FINALIZE();

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -528,8 +560,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -544,67 +575,71 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_givencrypt shared descriptor */
desc = ctx->sh_desc_givenc;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(program, ctx, keys_fit_inline);

/* Generate IV */
geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
NFIFOENTRY_DTYPE_MSG | NFIFOENTRY_LC1 |
NFIFOENTRY_PTYPE_RND | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- append_load_imm_u32(desc, geniv, LDST_CLASS_IND_CCB |
- LDST_SRCDST_WORD_INFO_FIFO | LDST_IMM);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO |
- MOVE_DEST_CLASS1CTX | (tfm->ivsize << MOVE_LEN_SHIFT));
- append_cmd(desc, CMD_LOAD | ENABLE_AUTO_INFO_FIFO);
+ LOAD(IMM(geniv), NFIFO, 0, CAAM_CMD_SZ, 0);
+ LOAD(IMM(0), DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, 0);
+ MOVE(IFIFOABD, 0, CONTEXT1, 0, IMM(tfm->ivsize), 0);
+ LOAD(IMM(0), DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, 0);

/* Copy IV to class 1 context */
- append_move(desc, MOVE_SRC_CLASS1CTX |
- MOVE_DEST_OUTFIFO | (tfm->ivsize << MOVE_LEN_SHIFT));
+ MOVE(CONTEXT1, 0, OFIFO, 0, IMM(tfm->ivsize), 0);

/* Return to encryption */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* ivsize + cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(SEQOUTSZ, SUB, IMM(ctx->authsize), MATH3, CAAM_CMD_SZ, 0);

/* assoclen = seqinlen - (ivsize + cryptlen) */
- append_math_sub(desc, VARSEQINLEN, SEQINLEN, REG3, CAAM_CMD_SZ);
+ MATHB(SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(MSG2, 0, VLF);

/* Copy iv from class 1 ctx to class 2 fifo*/
moveiv = NFIFOENTRY_STYPE_OFIFO | NFIFOENTRY_DEST_CLASS2 |
NFIFOENTRY_DTYPE_MSG | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- append_load_imm_u32(desc, moveiv, LDST_CLASS_IND_CCB |
- LDST_SRCDST_WORD_INFO_FIFO | LDST_IMM);
- append_load_imm_u32(desc, tfm->ivsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_WORD_DATASZ_REG | LDST_IMM);
+ LOAD(IMM(moveiv), NFIFO, 0, CAAM_CMD_SZ, 0);
+ LOAD(IMM(tfm->ivsize), DATA2SZ, 0, CAAM_CMD_SZ, 0);

/* Class 1 operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* Will write ivsize + cryptlen */
- append_math_add(desc, VARSEQOUTLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/* Not need to reload iv */
- append_seq_fifo_load(desc, tfm->ivsize,
- FIFOLD_CLASS_SKIP);
+ SEQFIFOLOAD(SKIP, tfm->ivsize, 0);

/* Will read cryptlen */
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(MSG, 0, 0, VLF);
+ SEQFIFOLOAD(MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(CONTEXT2, 0, ctx->authsize, 0);
+
+ PROGRAM_FINALIZE();

- ctx->sh_desc_givenc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_givenc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_givenc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -612,8 +647,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead givenc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return 0;
@@ -710,8 +744,13 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
struct ablkcipher_tfm *tfm = &ablkcipher->base.crt_ablkcipher;
struct device *jrdev = ctx->jrdev;
int ret = 0;
- u32 *key_jump_cmd;
- u32 *desc;
+ uint32_t *desc;
+ struct program prg;
+ struct program *program = &prg;
+ unsigned desc_bytes;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key in @"__stringify(__LINE__)": ",
@@ -729,31 +768,37 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,

/* ablkcipher_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ SHR_HDR(SHR_SERIAL, 1, 0);
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
- append_key_as_imm(desc, (void *)ctx->key, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 |
- KEY_DEST_CLASS_REG);
+ KEY(KEY1, 0, PTR((uintptr_t)ctx->key), ctx->enckeylen, IMMED);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(key_jump_cmd);

- /* Load iv */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | tfm->ivsize);
+ /* Load IV */
+ SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);

/* Load operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_ENCRYPT);

/* Perform operation */
- ablkcipher_append_src_dst(desc);
+ ablkcipher_append_src_dst(program);
+
+ PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -762,36 +807,40 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ablkcipher enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif
+
/* ablkcipher_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ SHR_HDR(SHR_SERIAL, 1, 0);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
- append_key_as_imm(desc, (void *)ctx->key, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 |
- KEY_DEST_CLASS_REG);
+ KEY(KEY1, 0, PTR((uintptr_t)ctx->key), ctx->enckeylen, IMMED);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(key_jump_cmd);

/* load IV */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | tfm->ivsize);
+ SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);

/* Choose operation */
- append_dec_op1(desc, ctx->class1_alg_type);
+ append_dec_op1(program, ctx->class1_alg_type);

/* Perform operation */
- ablkcipher_append_src_dst(desc);
+ ablkcipher_append_src_dst(program);
+
+ PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -801,8 +850,7 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ablkcipher dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return ret;
@@ -1071,7 +1119,7 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
/*
* Fill in aead job descriptor
*/
-static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
+static void init_aead_job(uint32_t *sh_desc, dma_addr_t ptr,
struct aead_edesc *edesc,
struct aead_request *req,
bool all_contig, bool encrypt)
@@ -1081,9 +1129,12 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
int ivsize = crypto_aead_ivsize(aead);
int authsize = ctx->authsize;
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ uint32_t out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

#ifdef DEBUG
debug("assoclen %d cryptlen %d authsize %d\n",
@@ -1098,25 +1149,27 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->src),
edesc->src_nents ? 100 : req->cryptlen, 1);
print_hex_dump(KERN_ERR, "shrdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, sh_desc,
- desc_bytes(sh_desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, sh_desc, DESC_BYTES(sh_desc),
+ 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, len, ptr, REO | SHR);

if (all_contig) {
src_dma = sg_dma_address(req->assoc);
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += (edesc->assoc_nents ? : 1) + 1 +
(edesc->src_nents ? : 1);
- in_options = LDST_SGF;
+ in_options |= SGF;
}

- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
- in_options);
+ SEQINPTR(src_dma, req->assoclen + ivsize + req->cryptlen, in_options);

if (likely(req->src == req->dst)) {
if (all_contig) {
@@ -1124,7 +1177,7 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = src_dma + sizeof(struct sec4_sg_entry) *
((edesc->assoc_nents ? : 1) + 1);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
} else {
if (!edesc->dst_nents) {
@@ -1133,15 +1186,15 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index *
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}
if (encrypt)
- append_seq_out_ptr(desc, dst_dma, req->cryptlen + authsize,
- out_options);
+ SEQOUTPTR(dst_dma, req->cryptlen + authsize, out_options);
else
- append_seq_out_ptr(desc, dst_dma, req->cryptlen - authsize,
- out_options);
+ SEQOUTPTR(dst_dma, req->cryptlen - authsize, out_options);
+
+ PROGRAM_FINALIZE();
}

/*
@@ -1157,9 +1210,12 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
int ivsize = crypto_aead_ivsize(aead);
int authsize = ctx->authsize;
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ uint32_t out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

#ifdef DEBUG
debug("assoclen %d cryptlen %d authsize %d\n",
@@ -1173,23 +1229,25 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->src),
edesc->src_nents > 1 ? 100 : req->cryptlen, 1);
print_hex_dump(KERN_ERR, "shrdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, sh_desc,
- desc_bytes(sh_desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, sh_desc, DESC_BYTES(sh_desc),
+ 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, len, ptr, REO | SHR);

if (contig & GIV_SRC_CONTIG) {
src_dma = sg_dma_address(req->assoc);
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += edesc->assoc_nents + 1 + edesc->src_nents;
- in_options = LDST_SGF;
+ in_options |= SGF;
}
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
- in_options);
+ SEQINPTR(src_dma, req->assoclen + ivsize + req->cryptlen, in_options);

if (contig & GIV_DST_CONTIG) {
dst_dma = edesc->iv_dma;
@@ -1197,17 +1255,18 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
if (likely(req->src == req->dst)) {
dst_dma = src_dma + sizeof(struct sec4_sg_entry) *
edesc->assoc_nents;
- out_options = LDST_SGF;
+ out_options |= SGF;
} else {
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index *
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}

- append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen + authsize,
- out_options);
+ SEQOUTPTR(dst_dma, ivsize + req->cryptlen + authsize, out_options);
+
+ PROGRAM_FINALIZE();
}

/*
@@ -1221,9 +1280,12 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ uint32_t out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

#ifdef DEBUG
print_hex_dump(KERN_ERR, "presciv@"__stringify(__LINE__)": ",
@@ -1234,18 +1296,21 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
edesc->src_nents ? 100 : req->nbytes, 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, len, ptr, REO | SHR);

if (iv_contig) {
src_dma = edesc->iv_dma;
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += (iv_contig ? 0 : 1) + edesc->src_nents;
- in_options = LDST_SGF;
+ in_options |= SGF;
}
- append_seq_in_ptr(desc, src_dma, req->nbytes + ivsize, in_options);
+ SEQINPTR(src_dma, req->nbytes + ivsize, in_options);

if (likely(req->src == req->dst)) {
if (!edesc->src_nents && iv_contig) {
@@ -1253,7 +1318,7 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = edesc->sec4_sg_dma +
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
} else {
if (!edesc->dst_nents) {
@@ -1261,10 +1326,13 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index * sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}
- append_seq_out_ptr(desc, dst_dma, req->nbytes, out_options);
+
+ SEQOUTPTR(dst_dma, req->nbytes, out_options);
+
+ PROGRAM_FINALIZE();
}

/*
@@ -1406,7 +1474,7 @@ static int aead_encrypt(struct aead_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1449,7 +1517,7 @@ static int aead_decrypt(struct aead_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1612,7 +1680,7 @@ static int aead_givencrypt(struct aead_givcrypt_request *areq)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1755,7 +1823,7 @@ static int ablkcipher_encrypt(struct ablkcipher_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif
desc = edesc->hw_desc;
ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done, req);
@@ -1793,7 +1861,7 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ablkcipher_decrypt_done, req);
@@ -1822,9 +1890,9 @@ struct caam_alg_template {
struct compress_alg compress;
struct rng_alg rng;
} template_u;
- u32 class1_alg_type;
- u32 class2_alg_type;
- u32 alg_op;
+ uint32_t class1_alg_type;
+ uint32_t class2_alg_type;
+ uint32_t alg_op;
};

static struct caam_alg_template driver_algs[] = {
@@ -2389,15 +2457,15 @@ static void caam_cra_exit(struct crypto_tfm *tfm)
if (ctx->sh_desc_enc_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_enc_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_enc_dma,
- desc_bytes(ctx->sh_desc_enc), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_enc), DMA_TO_DEVICE);
if (ctx->sh_desc_dec_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_dec_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_dec_dma,
- desc_bytes(ctx->sh_desc_dec), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_dec), DMA_TO_DEVICE);
if (ctx->sh_desc_givenc_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_givenc_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_givenc_dma,
- desc_bytes(ctx->sh_desc_givenc),
+ DESC_BYTES(ctx->sh_desc_givenc),
DMA_TO_DEVICE);
if (ctx->key_dma &&
!dma_mapping_error(ctx->jrdev, ctx->key_dma))
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 386efb9e192c..ec66e715d825 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -57,7 +57,7 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "jr.h"
#include "error.h"
#include "sg_sw_sec4.h"
@@ -137,7 +137,8 @@ struct caam_hash_state {
/* Common job descriptor seq in/out ptr routines */

/* Map state->caam_ctx, and append seq_out_ptr command that points to it */
-static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+static inline int map_seq_out_ptr_ctx(struct program *program,
+ struct device *jrdev,
struct caam_hash_state *state,
int ctx_len)
{
@@ -148,19 +149,20 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
return -ENOMEM;
}

- append_seq_out_ptr(desc, state->ctx_dma, ctx_len, 0);
+ SEQOUTPTR(state->ctx_dma, ctx_len, EXT);

return 0;
}

/* Map req->result, and append seq_out_ptr command that points to it */
-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
+static inline dma_addr_t map_seq_out_ptr_result(struct program *program,
+ struct device *jrdev,
u8 *result, int digestsize)
{
dma_addr_t dst_dma;

dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
- append_seq_out_ptr(desc, dst_dma, digestsize, 0);
+ SEQOUTPTR(dst_dma, digestsize, EXT);

return dst_dma;
}
@@ -224,28 +226,32 @@ static inline int ctx_map_to_sec4_sg(u32 *desc, struct device *jrdev,
}

/* Common shared descriptor commands */
-static inline void append_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
+static inline void append_key_ahash(struct program *program,
+ struct caam_hash_ctx *ctx)
{
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
}

/* Append key if it has been set */
-static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
+static inline void init_sh_desc_key_ahash(struct program *program,
+ struct caam_hash_ctx *ctx)
{
- u32 *key_jump_cmd;
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(SHR_SERIAL, 1, 0);

if (ctx->split_key_len) {
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE,
+ SHRD);

- append_key_ahash(desc, ctx);
+ append_key_ahash(program, ctx);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(key_jump_cmd);
+
+ PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
}
}

@@ -254,55 +260,55 @@ static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
* and write resulting class2 context to seqout, which may be state->caam_ctx
* or req->result
*/
-static inline void ahash_append_load_str(u32 *desc, int digestsize)
+static inline void ahash_append_load_str(struct program *program,
+ int digestsize)
{
/* Calculate remaining bytes to read */
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);

/* Read remaining bytes */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_LAST2 |
- FIFOLD_TYPE_MSG | KEY_VLF);
+ SEQFIFOLOAD(MSG2, 0, VLF | LAST2);

/* Store class2 context bytes */
- append_seq_store(desc, digestsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(CONTEXT2, 0, digestsize, 0);
}

/*
* For ahash update, final and finup, import context, read and write to seqout
*/
-static inline void ahash_ctx_data_to_out(u32 *desc, u32 op, u32 state,
- int digestsize,
+static inline void ahash_ctx_data_to_out(struct program *program, u32 op,
+ u32 state, int digestsize,
struct caam_hash_ctx *ctx)
{
- init_sh_desc_key_ahash(desc, ctx);
+ init_sh_desc_key_ahash(program, ctx);

/* Import context from software */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_2_CCB | ctx->ctx_len);
+ SEQLOAD(CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
- append_operation(desc, op | state | OP_ALG_ENCRYPT);
+ ALG_OPERATION(op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
+ ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);

/*
* Load from buf and/or src and write to req->result or state->context
*/
- ahash_append_load_str(desc, digestsize);
+ ahash_append_load_str(program, digestsize);
}

/* For ahash firsts and digest, read and write to seqout */
-static inline void ahash_data_to_out(u32 *desc, u32 op, u32 state,
+static inline void ahash_data_to_out(struct program *program, u32 op, u32 state,
int digestsize, struct caam_hash_ctx *ctx)
{
- init_sh_desc_key_ahash(desc, ctx);
+ init_sh_desc_key_ahash(program, ctx);

/* Class 2 operation */
- append_operation(desc, op | state | OP_ALG_ENCRYPT);
+ ALG_OPERATION(op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
+ ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);

/*
* Load from buf and/or src and write to req->result or state->context
*/
- ahash_append_load_str(desc, digestsize);
+ ahash_append_load_str(program, digestsize);
}

static int ahash_set_sh_desc(struct crypto_ahash *ahash)
@@ -311,28 +317,36 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
int digestsize = crypto_ahash_digestsize(ahash);
struct device *jrdev = ctx->jrdev;
u32 have_key = 0;
- u32 *desc;
+ uint32_t *desc;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

if (ctx->split_key_len)
have_key = OP_ALG_AAI_HMAC_PRECOMP;

/* ahash_update shared descriptor */
desc = ctx->sh_desc_update;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(SHR_SERIAL, 1, 0);

/* Import context from software */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_2_CCB | ctx->ctx_len);
+ SEQLOAD(CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
- append_operation(desc, ctx->alg_type | OP_ALG_AS_UPDATE |
- OP_ALG_ENCRYPT);
+ ALG_OPERATION(ctx->alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_UPDATE,
+ ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);

/* Load data and write to result or context */
- ahash_append_load_str(desc, ctx->ctx_len);
+ ahash_append_load_str(program, ctx->ctx_len);
+
+ PROGRAM_FINALIZE();

- ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -341,17 +355,22 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash update shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_update_first shared descriptor */
desc = ctx->sh_desc_update_first;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- ahash_data_to_out(desc, have_key | ctx->alg_type, OP_ALG_AS_INIT,
+ ahash_data_to_out(program, have_key | ctx->alg_type, OP_ALG_AS_INIT,
ctx->ctx_len, ctx);

+ PROGRAM_FINALIZE();
+
ctx->sh_desc_update_first_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_first_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -360,16 +379,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash update first shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_final shared descriptor */
desc = ctx->sh_desc_fin;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- ahash_ctx_data_to_out(desc, have_key | ctx->alg_type,
+ ahash_ctx_data_to_out(program, have_key | ctx->alg_type,
OP_ALG_AS_FINALIZE, digestsize, ctx);

- ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_fin_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -377,17 +401,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ahash final shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_finup shared descriptor */
desc = ctx->sh_desc_finup;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

- ahash_ctx_data_to_out(desc, have_key | ctx->alg_type,
+ ahash_ctx_data_to_out(program, have_key | ctx->alg_type,
OP_ALG_AS_FINALIZE, digestsize, ctx);

- ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ PROGRAM_FINALIZE();
+
+ ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_finup_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -395,18 +423,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ahash finup shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_digest shared descriptor */
desc = ctx->sh_desc_digest;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ ahash_data_to_out(program, have_key | ctx->alg_type,
+ OP_ALG_AS_INITFINAL, digestsize, ctx);

- ahash_data_to_out(desc, have_key | ctx->alg_type, OP_ALG_AS_INITFINAL,
- digestsize, ctx);
+ PROGRAM_FINALIZE();

- ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_digest_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -415,8 +446,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash digest shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

return 0;
@@ -435,10 +465,13 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
u32 *keylen, u8 *key_out, u32 digestsize)
{
struct device *jrdev = ctx->jrdev;
- u32 *desc;
+ uint32_t *desc;
struct split_key_result result;
dma_addr_t src_dma, dst_dma;
int ret = 0;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

desc = kmalloc(CAAM_CMD_SZ * 8 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
if (!desc) {
@@ -446,7 +479,11 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
return -ENOMEM;
}

- init_job_desc(desc, 0);
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_NEVER, 0, 0, 0);

src_dma = dma_map_single(jrdev, (void *)key_in, *keylen,
DMA_TO_DEVICE);
@@ -465,20 +502,21 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
}

/* Job descriptor to perform unkeyed hash on key_in */
- append_operation(desc, ctx->alg_type | OP_ALG_ENCRYPT |
- OP_ALG_AS_INITFINAL);
- append_seq_in_ptr(desc, src_dma, *keylen, 0);
- append_seq_fifo_load(desc, *keylen, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_MSG);
- append_seq_out_ptr(desc, dst_dma, digestsize, 0);
- append_seq_store(desc, digestsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ ALG_OPERATION(ctx->alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
+ SEQINPTR(src_dma, *keylen, EXT);
+ SEQFIFOLOAD(MSG2, *keylen, LAST2);
+ SEQOUTPTR(dst_dma, digestsize, EXT);
+ SEQSTORE(CONTEXT2, 0, digestsize, 0);
+
+ PROGRAM_FINALIZE();

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key_in@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key_in, *keylen, 1);
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

result.err = 0;
@@ -777,13 +815,16 @@ static int ahash_update_ctx(struct ahash_request *req)
int *next_buflen = state->current_buf ? &state->buflen_0 :
&state->buflen_1, last_buflen;
int in_len = *buflen + req->nbytes, to_hash;
- u32 *sh_desc = ctx->sh_desc_update, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_update, *desc;
dma_addr_t ptr = ctx->sh_desc_update_dma;
int src_nents, sec4_sg_bytes, sec4_sg_src_index;
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

last_buflen = *next_buflen;
*next_buflen = in_len & (crypto_tfm_alg_blocksize(&ahash->base) - 1);
@@ -838,10 +879,13 @@ static int ahash_update_ctx(struct ahash_request *req)
SEC4_SG_LEN_FIN;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes,
@@ -851,15 +895,15 @@ static int ahash_update_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len +
- to_hash, LDST_SGF);
+ SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + to_hash, SGF | EXT);
+ SEQOUTPTR(state->ctx_dma, ctx->ctx_len, EXT);

- append_seq_out_ptr(desc, state->ctx_dma, ctx->ctx_len, 0);
+ PROGRAM_FINALIZE();

#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
@@ -898,13 +942,16 @@ static int ahash_final_ctx(struct ahash_request *req)
int buflen = state->current_buf ? state->buflen_1 : state->buflen_0;
int last_buflen = state->current_buf ? state->buflen_0 :
state->buflen_1;
- u32 *sh_desc = ctx->sh_desc_fin, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_fin, *desc;
dma_addr_t ptr = ctx->sh_desc_fin_dma;
int sec4_sg_bytes;
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

sec4_sg_bytes = (1 + (buflen ? 1 : 0)) * sizeof(struct sec4_sg_entry);

@@ -916,9 +963,13 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
@@ -942,19 +993,20 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
- LDST_SGF);
+ SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + buflen, SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
@@ -980,7 +1032,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
int buflen = state->current_buf ? state->buflen_1 : state->buflen_0;
int last_buflen = state->current_buf ? state->buflen_0 :
state->buflen_1;
- u32 *sh_desc = ctx->sh_desc_finup, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_finup, *desc;
dma_addr_t ptr = ctx->sh_desc_finup_dma;
int sec4_sg_bytes, sec4_sg_src_index;
int src_nents;
@@ -988,7 +1040,10 @@ static int ahash_finup_ctx(struct ahash_request *req)
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

src_nents = __sg_count(req->src, req->nbytes, &chained);
sec4_sg_src_index = 1 + (buflen ? 1 : 0);
@@ -1003,9 +1058,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->src_nents = src_nents;
edesc->chained = chained;
@@ -1032,19 +1091,21 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len +
- buflen + req->nbytes, LDST_SGF);
+ SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + buflen + req->nbytes,
+ SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
@@ -1065,7 +1126,7 @@ static int ahash_digest(struct ahash_request *req)
struct device *jrdev = ctx->jrdev;
gfp_t flags = (req->base.flags & (CRYPTO_TFM_REQ_MAY_BACKLOG |
CRYPTO_TFM_REQ_MAY_SLEEP)) ? GFP_KERNEL : GFP_ATOMIC;
- u32 *sh_desc = ctx->sh_desc_digest, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_digest, *desc;
dma_addr_t ptr = ctx->sh_desc_digest_dma;
int digestsize = crypto_ahash_digestsize(ahash);
int src_nents, sec4_sg_bytes;
@@ -1073,8 +1134,11 @@ static int ahash_digest(struct ahash_request *req)
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- u32 options;
- int sh_len;
+ uint32_t options = EXT;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

src_nents = sg_count(req->src, req->nbytes, &chained);
dma_map_sg_chained(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE,
@@ -1094,9 +1158,13 @@ static int ahash_digest(struct ahash_request *req)
edesc->src_nents = src_nents;
edesc->chained = chained;

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);
@@ -1107,23 +1175,24 @@ static int ahash_digest(struct ahash_request *req)
return -ENOMEM;
}
src_dma = edesc->sec4_sg_dma;
- options = LDST_SGF;
+ options |= SGF;
} else {
src_dma = sg_dma_address(req->src);
- options = 0;
}
- append_seq_in_ptr(desc, src_dma, req->nbytes, options);
+ SEQINPTR(src_dma, req->nbytes, options);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1148,12 +1217,15 @@ static int ahash_final_no_ctx(struct ahash_request *req)
CRYPTO_TFM_REQ_MAY_SLEEP)) ? GFP_KERNEL : GFP_ATOMIC;
u8 *buf = state->current_buf ? state->buf_1 : state->buf_0;
int buflen = state->current_buf ? state->buflen_1 : state->buflen_0;
- u32 *sh_desc = ctx->sh_desc_digest, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_digest, *desc;
dma_addr_t ptr = ctx->sh_desc_digest_dma;
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

/* allocate space for base edesc and hw desc commands, link tables */
edesc = kmalloc(sizeof(struct ahash_edesc) + DESC_JOB_IO_LEN,
@@ -1163,9 +1235,13 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, state->buf_dma)) {
@@ -1173,19 +1249,22 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
+ SEQINPTR(state->buf_dma, buflen, EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}
+
+ PROGRAM_FINALIZE();
+
edesc->src_nents = 0;

#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1216,11 +1295,14 @@ static int ahash_update_no_ctx(struct ahash_request *req)
int in_len = *buflen + req->nbytes, to_hash;
int sec4_sg_bytes, src_nents;
struct ahash_edesc *edesc;
- u32 *desc, *sh_desc = ctx->sh_desc_update_first;
+ uint32_t *desc, *sh_desc = ctx->sh_desc_update_first;
dma_addr_t ptr = ctx->sh_desc_update_first_dma;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

*next_buflen = in_len & (crypto_tfm_alg_blocksize(&ahash->base) - 1);
to_hash = in_len - *next_buflen;
@@ -1260,10 +1342,13 @@ static int ahash_update_no_ctx(struct ahash_request *req)
state->current_buf = !state->current_buf;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes,
@@ -1273,16 +1358,18 @@ static int ahash_update_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, to_hash, LDST_SGF);
+ SEQINPTR(edesc->sec4_sg_dma, to_hash, SGF | EXT);

- ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
+ ret = map_seq_out_ptr_ctx(program, jrdev, state, ctx->ctx_len);
if (ret)
return ret;

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
@@ -1325,14 +1412,17 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
int buflen = state->current_buf ? state->buflen_1 : state->buflen_0;
int last_buflen = state->current_buf ? state->buflen_0 :
state->buflen_1;
- u32 *sh_desc = ctx->sh_desc_digest, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_digest, *desc;
dma_addr_t ptr = ctx->sh_desc_digest_dma;
int sec4_sg_bytes, sec4_sg_src_index, src_nents;
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
bool chained = false;
- int sh_len;
+ unsigned sh_len;
int ret = 0;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

src_nents = __sg_count(req->src, req->nbytes, &chained);
sec4_sg_src_index = 2;
@@ -1347,9 +1437,13 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->src_nents = src_nents;
edesc->chained = chained;
@@ -1371,19 +1465,20 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, buflen +
- req->nbytes, LDST_SGF);
+ SEQINPTR(edesc->sec4_sg_dma, buflen + req->nbytes, SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1410,15 +1505,18 @@ static int ahash_update_first(struct ahash_request *req)
CAAM_MAX_HASH_BLOCK_SIZE;
int *next_buflen = &state->buflen_0 + state->current_buf;
int to_hash;
- u32 *sh_desc = ctx->sh_desc_update_first, *desc;
+ uint32_t *sh_desc = ctx->sh_desc_update_first, *desc;
dma_addr_t ptr = ctx->sh_desc_update_first_dma;
int sec4_sg_bytes, src_nents;
dma_addr_t src_dma;
- u32 options;
+ uint32_t options = EXT;
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

*next_buflen = req->nbytes & (crypto_tfm_alg_blocksize(&ahash->base) -
1);
@@ -1462,30 +1560,34 @@ static int ahash_update_first(struct ahash_request *req)
return -ENOMEM;
}
src_dma = edesc->sec4_sg_dma;
- options = LDST_SGF;
+ options |= SGF;
} else {
src_dma = sg_dma_address(req->src);
- options = 0;
}

if (*next_buflen)
sg_copy_part(next_buf, req->src, to_hash, req->nbytes);

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

- append_seq_in_ptr(desc, src_dma, to_hash, options);
+ SEQINPTR(src_dma, to_hash, options);

- ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
+ ret = map_seq_out_ptr_ctx(program, jrdev, state, ctx->ctx_len);
if (ret)
return ret;

+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
@@ -1779,26 +1881,26 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
if (ctx->sh_desc_update_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_update_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_update_dma,
- desc_bytes(ctx->sh_desc_update),
+ DESC_BYTES(ctx->sh_desc_update),
DMA_TO_DEVICE);
if (ctx->sh_desc_update_first_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_update_first_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_update_first_dma,
- desc_bytes(ctx->sh_desc_update_first),
+ DESC_BYTES(ctx->sh_desc_update_first),
DMA_TO_DEVICE);
if (ctx->sh_desc_fin_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_fin_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_fin_dma,
- desc_bytes(ctx->sh_desc_fin), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_fin), DMA_TO_DEVICE);
if (ctx->sh_desc_digest_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_digest_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_digest_dma,
- desc_bytes(ctx->sh_desc_digest),
+ DESC_BYTES(ctx->sh_desc_digest),
DMA_TO_DEVICE);
if (ctx->sh_desc_finup_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_finup_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_finup_dma,
- desc_bytes(ctx->sh_desc_finup), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_finup), DMA_TO_DEVICE);

caam_jr_free(ctx->jrdev);
}
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 5b288082e6ac..5bcfb1a1d584 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -39,7 +39,7 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "jr.h"
#include "error.h"

@@ -91,7 +91,7 @@ static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)

if (ctx->sh_desc_dma)
dma_unmap_single(jrdev, ctx->sh_desc_dma,
- desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc), DMA_TO_DEVICE);
rng_unmap_buf(jrdev, &ctx->bufs[0]);
rng_unmap_buf(jrdev, &ctx->bufs[1]);
}
@@ -188,17 +188,26 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
{
struct device *jrdev = ctx->jrdev;
- u32 *desc = ctx->sh_desc;
+ uint32_t *desc = ctx->sh_desc;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ SHR_HDR(SHR_SERIAL, 1, 0);

/* Generate random bytes */
- append_operation(desc, OP_ALG_ALGSEL_RNG | OP_TYPE_CLASS1_ALG);
+ ALG_OPERATION(OP_ALG_ALGSEL_RNG, OP_ALG_AAI_RNG, 0, 0, 0);

/* Store bytes */
- append_seq_fifo_store(desc, RN_BUF_SIZE, FIFOST_TYPE_RNGSTORE);
+ SEQFIFOSTORE(RNG, 0, RN_BUF_SIZE, 0);
+
+ PROGRAM_FINALIZE();

- ctx->sh_desc_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ ctx->sh_desc_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -206,7 +215,7 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "rng shdesc@: ", DUMP_PREFIX_ADDRESS, 16, 4,
- desc, desc_bytes(desc), 1);
+ desc, DESC_BYTES(desc), 1);
#endif
return 0;
}
@@ -215,11 +224,17 @@ static inline int rng_create_job_desc(struct caam_rng_ctx *ctx, int buf_id)
{
struct device *jrdev = ctx->jrdev;
struct buf_data *bd = &ctx->bufs[buf_id];
- u32 *desc = bd->hw_desc;
- int sh_len = desc_len(ctx->sh_desc);
+ uint32_t *desc = bd->hw_desc;
+ unsigned sh_len = DESC_LEN(ctx->sh_desc);
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

- init_job_desc_shared(desc, ctx->sh_desc_dma, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_DEFER, sh_len, ctx->sh_desc_dma, REO | SHR);

bd->addr = dma_map_single(jrdev, bd->buf, RN_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, bd->addr)) {
@@ -227,10 +242,13 @@ static inline int rng_create_job_desc(struct caam_rng_ctx *ctx, int buf_id)
return -ENOMEM;
}

- append_seq_out_ptr_intlen(desc, bd->addr, RN_BUF_SIZE, 0);
+ SEQOUTPTR(bd->addr, RN_BUF_SIZE, 0);
+
+ PROGRAM_FINALIZE();
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "rng job desc@: ", DUMP_PREFIX_ADDRESS, 16, 4,
- desc, desc_bytes(desc), 1);
+ desc, DESC_BYTES(desc), 1);
#endif
return 0;
}
diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h
index f227922cea38..8fe0f6993ab0 100644
--- a/drivers/crypto/caam/compat.h
+++ b/drivers/crypto/caam/compat.h
@@ -23,6 +23,7 @@
#include <linux/types.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
+#include <linux/bitops.h>
#include <net/xfrm.h>

#include <crypto/algapi.h>
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index be8c6c147395..155268ce9388 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -13,25 +13,34 @@
#include "regs.h"
#include "intern.h"
#include "jr.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "error.h"
#include "ctrl.h"

+enum rta_sec_era rta_sec_era;
+EXPORT_SYMBOL(rta_sec_era);
+
/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
* load the JDKEK, TDKEK and TDSK registers
*/
-static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
+static void build_instantiation_desc(uint32_t *desc, int handle, int do_sk)
{
- u32 *jump_cmd, op_flags;
-
- init_job_desc(desc, 0);
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(uint32_t) == sizeof(dma_addr_t));
+ LABEL(jump_cmd);
+ REFERENCE(pjump_cmd);

- op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT;
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

/* INIT RNG in non-test mode */
- append_operation(desc, op_flags);
+ ALG_OPERATION(OP_ALG_ALGSEL_RNG,
+ (uint16_t)(OP_ALG_AAI_RNG |
+ (handle << OP_ALG_AAI_RNG4_SH_SHIFT)),
+ OP_ALG_AS_INIT, 0, 0);

if (!handle && do_sk) {
/*
@@ -39,33 +48,50 @@ static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
*/

/* wait for done */
- jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);
- set_jump_tgt_here(desc, jump_cmd);
+ pjump_cmd = JUMP(IMM(jump_cmd), LOCAL_JUMP, ALL_TRUE, CLASS1);
+ SET_LABEL(jump_cmd);

/*
* load 1 to clear written reg:
* resets the done interrrupt and returns the RNG to idle.
*/
- append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);
+ LOAD(IMM(CLRW_CLR_C1MODE), CLRW, 0, CAAM_CMD_SZ, 0);

/* Initialize State Handle */
- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- OP_ALG_AAI_RNG4_SK);
+ ALG_OPERATION(OP_ALG_ALGSEL_RNG, OP_ALG_AAI_RNG4_SK,
+ OP_ALG_AS_UPDATE, 0, 0);
}

- append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
+ JUMP(IMM(0), HALT, ALL_TRUE, CLASS1);
+
+ PATCH_JUMP(pjump_cmd, jump_cmd);
+
+ PROGRAM_FINALIZE();
}

/* Descriptor for deinstantiation of State Handle 0 of the RNG block. */
-static void build_deinstantiation_desc(u32 *desc, int handle)
+static void build_deinstantiation_desc(uint32_t *desc, int handle)
{
- init_job_desc(desc, 0);
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(uint32_t) == sizeof(dma_addr_t));
+
+
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+
+ JOB_HDR(SHR_NEVER, 1, 0, 0);

/* Uninstantiate State Handle 0 */
- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INITFINAL);
+ ALG_OPERATION(OP_ALG_ALGSEL_RNG,
+ (uint16_t)(OP_ALG_AAI_RNG |
+ (handle << OP_ALG_AAI_RNG4_SH_SHIFT)),
+ OP_ALG_AS_INITFINAL, 0, 0);
+
+ JUMP(IMM(0), HALT, ALL_TRUE, CLASS1);

- append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
+ PROGRAM_FINALIZE();
}

/*
@@ -109,7 +135,7 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
return -ENODEV;
}

- for (i = 0; i < desc_len(desc); i++)
+ for (i = 0; i < DESC_LEN(desc); i++)
wr_reg32(&topregs->deco.descbuf[i], *(desc + i));

flags = DECO_JQCR_WHL;
@@ -117,7 +143,7 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
* If the descriptor length is longer than 4 words, then the
* FOUR bit in JRCTRL register must be set.
*/
- if (desc_len(desc) >= 4)
+ if (DESC_LEN(desc) >= 4)
flags |= DECO_JQCR_FOUR;

/* Instruct the DECO to execute it */
@@ -176,7 +202,8 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctrldev);
struct caam_full __iomem *topregs;
struct rng4tst __iomem *r4tst;
- u32 *desc, status, rdsta_val;
+ uint32_t *desc;
+ u32 status, rdsta_val;
int ret = 0, sh_idx;

topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
@@ -241,7 +268,8 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
*/
static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask)
{
- u32 *desc, status;
+ uint32_t *desc;
+ u32 status;
int sh_idx, ret = 0;

desc = kmalloc(CAAM_CMD_SZ * 3, GFP_KERNEL);
@@ -362,8 +390,9 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
/**
* caam_get_era() - Return the ERA of the SEC on SoC, based
* on "sec-era" propery in the DTS. This property is updated by u-boot.
+ * Returns the ERA number or -ENOTSUPP if the ERA is unknown.
**/
-int caam_get_era(void)
+static int caam_get_era(void)
{
struct device_node *caam_node;

@@ -378,7 +407,6 @@ int caam_get_era(void)

return -ENOTSUPP;
}
-EXPORT_SYMBOL(caam_get_era);

/* Probe routine for CAAM top (controller) level */
static int caam_probe(struct platform_device *pdev)
@@ -579,8 +607,16 @@ static int caam_probe(struct platform_device *pdev)
(u64)rd_reg32(&topregs->ctrl.perfmon.caam_id_ls);

/* Report "alive" for developer to see */
- dev_info(dev, "device ID = 0x%016llx (Era %d)\n", caam_id,
- caam_get_era());
+ dev_info(dev, "device ID = 0x%016llx\n", caam_id);
+ ret = caam_get_era();
+ if (ret >= 0) {
+ dev_info(dev, "Era %d\n", ret);
+ rta_set_sec_era(INTL_SEC_ERA(ret));
+ } else {
+ dev_warn(dev, "Era property not found! Defaulting to era %d\n",
+ USER_SEC_ERA(DEFAULT_SEC_ERA));
+ rta_set_sec_era(DEFAULT_SEC_ERA);
+ }
dev_info(dev, "job rings = %d, qi = %d\n",
ctrlpriv->total_jobrs, ctrlpriv->qi_present);

diff --git a/drivers/crypto/caam/ctrl.h b/drivers/crypto/caam/ctrl.h
index cac5402a46eb..93680a9290db 100644
--- a/drivers/crypto/caam/ctrl.h
+++ b/drivers/crypto/caam/ctrl.h
@@ -8,6 +8,6 @@
#define CTRL_H

/* Prototypes for backend-level services exposed to APIs */
-int caam_get_era(void);
+extern enum rta_sec_era rta_sec_era;

#endif /* CTRL_H */
diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
index 7d6ed4722345..5daa9cd4109a 100644
--- a/drivers/crypto/caam/error.c
+++ b/drivers/crypto/caam/error.c
@@ -7,7 +7,7 @@
#include "compat.h"
#include "regs.h"
#include "intern.h"
-#include "desc.h"
+#include "flib/desc.h"
#include "jr.h"
#include "error.h"

diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index ec3652d62e93..01d434e20ca4 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -11,7 +11,7 @@
#include "compat.h"
#include "regs.h"
#include "jr.h"
-#include "desc.h"
+#include "flib/desc.h"
#include "intern.h"

struct jr_driver_data {
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 871703c49d2c..bbd784cb9ce2 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -7,7 +7,7 @@
#include "compat.h"
#include "jr.h"
#include "error.h"
-#include "desc_constr.h"
+#include "flib/desc/jobdesc.h"
#include "key_gen.h"

void split_key_done(struct device *dev, u32 *desc, u32 err,
@@ -43,12 +43,14 @@ Split key generation-----------------------------------------------
*/
int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
int split_key_pad_len, const u8 *key_in, u32 keylen,
- u32 alg_op)
+ uint32_t alg_op)
{
- u32 *desc;
+ uint32_t *desc;
struct split_key_result result;
dma_addr_t dma_addr_in, dma_addr_out;
int ret = 0;
+ unsigned jd_len;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));

desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
if (!desc) {
@@ -56,8 +58,6 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
return -ENOMEM;
}

- init_job_desc(desc, 0);
-
dma_addr_in = dma_map_single(jrdev, (void *)key_in, keylen,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, dma_addr_in)) {
@@ -65,22 +65,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
kfree(desc);
return -ENOMEM;
}
- append_key(desc, dma_addr_in, keylen, CLASS_2 | KEY_DEST_CLASS_REG);
-
- /* Sets MDHA up into an HMAC-INIT */
- append_operation(desc, alg_op | OP_ALG_DECRYPT | OP_ALG_AS_INIT);
-
- /*
- * do a FIFO_LOAD of zero, this will trigger the internal key expansion
- * into both pads inside MDHA
- */
- append_fifo_load_as_imm(desc, NULL, 0, LDST_CLASS_2_CCB |
- FIFOLD_TYPE_MSG | FIFOLD_TYPE_LAST2);
-
- /*
- * FIFO_STORE with the explicit split-key content store
- * (0x26 output type)
- */
+
dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, dma_addr_out)) {
@@ -88,14 +73,17 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
kfree(desc);
return -ENOMEM;
}
- append_fifo_store(desc, dma_addr_out, split_key_len,
- LDST_CLASS_2_CCB | FIFOST_TYPE_SPLIT_KEK);
+
+	/* keylen is expected to be at most the block size (which is <= 64) */
+ cnstr_jobdesc_mdsplitkey(desc, &jd_len, ps, dma_addr_in,
+ (uint8_t)keylen, alg_op & OP_ALG_ALGSEL_MASK,
+ dma_addr_out);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "ctx.key@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key_in, keylen, 1);
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

result.err = 0;
diff --git a/drivers/crypto/caam/key_gen.h b/drivers/crypto/caam/key_gen.h
index c5588f6d8109..2f719d80cdcd 100644
--- a/drivers/crypto/caam/key_gen.h
+++ b/drivers/crypto/caam/key_gen.h
@@ -14,4 +14,4 @@ void split_key_done(struct device *dev, u32 *desc, u32 err, void *context);

int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
int split_key_pad_len, const u8 *key_in, u32 keylen,
- u32 alg_op);
+ uint32_t alg_op);
--
1.8.3.1

2014-07-18 16:39:35

by Horia Geantă

[permalink] [raw]
Subject: [PATCH 9/9] crypto: caam - add Run Time Library (RTA) docbook

Add SGML template for generating RTA docbook.
Source code is in drivers/crypto/caam/flib.

Cc: Randy Dunlap <[email protected]>
Signed-off-by: Horia Geanta <[email protected]>
---
Documentation/DocBook/Makefile | 3 +-
Documentation/DocBook/rta-api.tmpl | 245 +++++++++++++++++++++
Documentation/DocBook/rta/.gitignore | 1 +
Documentation/DocBook/rta/Makefile | 5 +
Documentation/DocBook/rta/rta_arch.svg | 381 +++++++++++++++++++++++++++++++++
5 files changed, 634 insertions(+), 1 deletion(-)
create mode 100644 Documentation/DocBook/rta-api.tmpl
create mode 100644 Documentation/DocBook/rta/.gitignore
create mode 100644 Documentation/DocBook/rta/Makefile
create mode 100644 Documentation/DocBook/rta/rta_arch.svg

diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
index bec06659e0eb..f2917495db49 100644
--- a/Documentation/DocBook/Makefile
+++ b/Documentation/DocBook/Makefile
@@ -15,7 +15,7 @@ DOCBOOKS := z8530book.xml device-drivers.xml \
80211.xml debugobjects.xml sh.xml regulator.xml \
alsa-driver-api.xml writing-an-alsa-driver.xml \
tracepoint.xml drm.xml media_api.xml w1.xml \
- writing_musb_glue_layer.xml
+ writing_musb_glue_layer.xml rta-api.xml

include Documentation/DocBook/media/Makefile

@@ -53,6 +53,7 @@ htmldocs: $(HTML)
$(call build_main_index)
$(call build_images)
$(call install_media_images)
+ $(call install_rta_images)

MAN := $(patsubst %.xml, %.9, $(BOOKS))
mandocs: $(MAN)
diff --git a/Documentation/DocBook/rta-api.tmpl b/Documentation/DocBook/rta-api.tmpl
new file mode 100644
index 000000000000..cefd267a00eb
--- /dev/null
+++ b/Documentation/DocBook/rta-api.tmpl
@@ -0,0 +1,245 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
+ "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
+
+<book id="RTAapi">
+ <bookinfo>
+ <title>Writing descriptors for Freescale CAAM using RTA library</title>
+ <authorgroup>
+ <author>
+ <firstname>Horia</firstname>
+ <surname>Geanta</surname>
+ <affiliation>
+ <address><email>[email protected]</email></address>
+ </affiliation>
+ </author>
+ </authorgroup>
+
+ <copyright>
+ <year>2008-2014</year>
+ <holder>Freescale Semiconductor</holder>
+ </copyright>
+
+ <legalnotice>
+ <para>
+ This documentation is free software; you can redistribute
+ it and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+ </para>
+
+ <para>
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ See the GNU General Public License for more details.
+ </para>
+
+ <para>
+ For more details see the file COPYING in the source
+ distribution of Linux.
+ </para>
+ </legalnotice>
+ </bookinfo>
+
+<toc></toc>
+
+ <chapter id="intro">
+ <title>Introduction</title>
+ <sect1>
+ <title>About</title>
+!Pdrivers/crypto/caam/flib/rta.h About
+!Pdrivers/crypto/caam/flib/rta.h Usage
+ <mediaobject>
+ <imageobject>
+ <imagedata fileref="rta_arch.svg" format="SVG" align="CENTER"/>
+ </imageobject>
+ <caption><para>RTA Integration Overview</para></caption>
+ </mediaobject>
+ </sect1>
+ <sect1>
+ <title>Using RTA</title>
+ <para>
+ RTA can be used in an application simply by including the following header file:
+ #include "flib/rta.h"
+ </para>
+ <para>
+ The files in the drivers/crypto/caam/flib/desc directory contain several
+ real-world descriptors written with RTA. You can use them as-is or adapt
+ them to your needs.
+ </para>
+ <para>
+ RTA routines assume that your code defines a local variable named
+ "program":
+ <itemizedlist mark='opencircle'>
+ <listitem>
+ <para>struct program prg;</para>
+ </listitem>
+ <listitem>
+ <para>struct program *program = &amp;prg;</para>
+ </listitem>
+ </itemizedlist>
+ This variable is passed behind the scenes to all RTA API calls.
+ It holds housekeeping information that is used during
+ descriptor creation.
+ </para>
+ <para>
+ RTA creates the descriptors and saves them in buffers. It is the user's
+ job to allocate memory for these buffers before passing them to the RTA
+ program initialization call.
+ </para>
+ <para>
+ An RTA program must start with a call to PROGRAM_CNTXT_INIT and end with
+ PROGRAM_FINALIZE. PROGRAM_CNTXT_INIT initializes the members of the
+ 'program' structure with user information (a pointer to the user's buffer
+ and the SEC subversion). The PROGRAM_FINALIZE call checks the descriptor's
+ validity.
+ </para>
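+ <para>
+ The following is a minimal sketch of that lifecycle, loosely modeled on
+ the ahash update shared descriptor built in caamhash.c. The buffer size,
+ ctx_len and alg_type values are placeholders chosen only for illustration:
+ </para>
+ <programlisting>
+uint32_t desc[64];          /* caller-allocated descriptor buffer */
+struct program prg;
+struct program *program = &amp;prg;
+int ctx_len = 32;           /* placeholder running-context length */
+/* placeholder algorithm; the driver passes its own alg_type here */
+uint32_t alg_type = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC_PRECOMP;
+bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+
+PROGRAM_CNTXT_INIT(desc, 0);
+if (ps)
+        PROGRAM_SET_36BIT_ADDR();
+
+SHR_HDR(SHR_SERIAL, 1, 0);
+
+/* Import the running context, hash the variable-length input, export it */
+SEQLOAD(CONTEXT2, 0, ctx_len, 0);
+ALG_OPERATION(alg_type &amp; OP_ALG_ALGSEL_MASK,
+              alg_type &amp; OP_ALG_AAI_MASK,
+              OP_ALG_AS_UPDATE, ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
+MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+SEQFIFOLOAD(MSG2, 0, VLF | LAST2);
+SEQSTORE(CONTEXT2, 0, ctx_len, 0);
+
+PROGRAM_FINALIZE();
+
+/* desc now holds DESC_LEN(desc) words, i.e. DESC_BYTES(desc) bytes */
+ </programlisting>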
+ <para>
+ The program length is limited by the size of the descriptor buffer, which
+ is at most 64 words (256 bytes). However, a JUMP command can trigger
+ loading and execution of another Job Descriptor; this allows much
+ larger programs to be created.
+ </para>
+ </sect1>
+ <sect1>
+ <title>RTA components</title>
+ <para>
+ The package content is split into two main components:
+ <itemizedlist mark='opencircle'>
+ <listitem>
+ <para>descriptor builder API (drivers/crypto/caam/flib/rta.h)</para>
+ </listitem>
+ <listitem>
+ <para>
+ ready-to-use RTA descriptors
+ (drivers/crypto/caam/flib/desc/*.h)
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ These are the main building blocks of descriptors:
+ <itemizedlist mark='opencircle'>
+ <listitem>
+ <para>buffer management: init &amp; finalize</para>
+ </listitem>
+ <listitem>
+ <para>SEC commands: MOVE, LOAD, FIFO_LOAD etc.</para>
+ </listitem>
+ <listitem>
+ <para>descriptor labels (e.g. used as JUMP destinations)</para>
+ </listitem>
+ <listitem>
+ <para>
+ utility commands (e.g. PATCH_* commands that update labels and
+ references)
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ In some cases, not all descriptor fields can be set when the commands are
+ inserted. These fields must be updated later, much like a linker patches
+ references in a binary file. RTA uses the PATCH_* commands to
+ collect the relevant information and PROGRAM_FINALIZE to complete the
+ "code relocation".
+ </para>
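+ <para>
+ As a sketch of the single-descriptor case (modeled on the shared-key
+ skipping sequence used in caamhash.c; the split_key buffer and
+ split_key_len below are placeholders), a forward jump is recorded in a
+ REFERENCE, its target is marked with SET_LABEL, and the two are tied
+ together with PATCH_JUMP before the program is finalized:
+ </para>
+ <programlisting>
+LABEL(skip_key);
+REFERENCE(pskip_key);
+
+/* Skip the key load if the descriptor is already shared */
+pskip_key = JUMP(IMM(skip_key), LOCAL_JUMP, ALL_TRUE, SHRD);
+
+KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)split_key), split_key_len, IMMED);
+
+SET_LABEL(skip_key);
+
+/* Resolve the forward reference now that the target offset is known */
+PATCH_JUMP(pskip_key, skip_key);
+
+PROGRAM_FINALIZE();
+ </programlisting>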
+ <para>
+ If descriptors larger than 64 words are needed, their function can be
+ split across several smaller descriptors. In that case the smaller
+ descriptors are correlated and updated using the PATCH_*_NON_LOCAL
+ commands. These calls must appear after all the descriptors are
+ finalized, not before as in the single-descriptor case (only then are
+ references to all descriptors available).
+ </para>
+ </sect1>
+ </chapter>
+
+ <chapter id="apiref">
+ <title>RTA API reference</title>
+ <sect1>
+ <title>Descriptor Buffer Management Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h Descriptor Buffer Management Routines
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_sec_era
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h USER_SEC_ERA
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h INTL_SEC_ERA
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_CNTXT_INIT
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_FINALIZE
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_SET_36BIT_ADDR
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_SET_BSWAP
+!Fdrivers/crypto/caam/flib/rta.h WORD
+!Fdrivers/crypto/caam/flib/rta.h DWORD
+!Fdrivers/crypto/caam/flib/rta.h COPY_DATA
+!Fdrivers/crypto/caam/flib/rta.h DESC_LEN
+!Fdrivers/crypto/caam/flib/rta.h DESC_BYTES
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h program
+ </sect1>
+ <sect1>
+ <title>SEC Commands Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h SEC Commands Routines
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_share_type
+!Fdrivers/crypto/caam/flib/rta.h SHR_HDR
+!Fdrivers/crypto/caam/flib/rta.h JOB_HDR
+!Fdrivers/crypto/caam/flib/rta.h JOB_HDR_EXT
+!Fdrivers/crypto/caam/flib/rta.h MOVE
+!Fdrivers/crypto/caam/flib/rta.h MOVEB
+!Fdrivers/crypto/caam/flib/rta.h MOVEDW
+!Fdrivers/crypto/caam/flib/rta.h FIFOLOAD
+!Fdrivers/crypto/caam/flib/rta.h SEQFIFOLOAD
+!Fdrivers/crypto/caam/flib/rta.h FIFOSTORE
+!Fdrivers/crypto/caam/flib/rta.h SEQFIFOSTORE
+!Fdrivers/crypto/caam/flib/rta.h KEY
+!Fdrivers/crypto/caam/flib/rta.h SEQINPTR
+!Fdrivers/crypto/caam/flib/rta.h SEQOUTPTR
+!Fdrivers/crypto/caam/flib/rta.h ALG_OPERATION
+!Fdrivers/crypto/caam/flib/rta.h PROTOCOL
+!Fdrivers/crypto/caam/flib/rta.h PKHA_OPERATION
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_jump_cond
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_jump_type
+!Fdrivers/crypto/caam/flib/rta.h JUMP
+!Fdrivers/crypto/caam/flib/rta.h JUMP_INC
+!Fdrivers/crypto/caam/flib/rta.h JUMP_DEC
+!Fdrivers/crypto/caam/flib/rta.h LOAD
+!Fdrivers/crypto/caam/flib/rta.h SEQLOAD
+!Fdrivers/crypto/caam/flib/rta.h STORE
+!Fdrivers/crypto/caam/flib/rta.h SEQSTORE
+!Fdrivers/crypto/caam/flib/rta.h MATHB
+!Fdrivers/crypto/caam/flib/rta.h MATHU
+!Fdrivers/crypto/caam/flib/rta.h SIGNATURE
+!Fdrivers/crypto/caam/flib/rta.h NFIFOADD
+ </sect1>
+ <sect1>
+ <title>Self Referential Code Management Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h Self Referential Code Management Routines
+!Fdrivers/crypto/caam/flib/rta.h REFERENCE
+!Fdrivers/crypto/caam/flib/rta.h LABEL
+!Fdrivers/crypto/caam/flib/rta.h SET_LABEL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_JUMP
+!Fdrivers/crypto/caam/flib/rta.h PATCH_JUMP_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_MOVE
+!Fdrivers/crypto/caam/flib/rta.h PATCH_MOVE_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_LOAD
+!Fdrivers/crypto/caam/flib/rta.h PATCH_STORE
+!Fdrivers/crypto/caam/flib/rta.h PATCH_STORE_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_RAW
+!Fdrivers/crypto/caam/flib/rta.h PATCH_RAW_NON_LOCAL
+ </sect1>
+ </chapter>
+
+ <chapter id="descapi">
+ <title>RTA descriptors library</title>
+ <sect1>
+ <title>Job Descriptor Example Routines</title>
+!Pdrivers/crypto/caam/flib/desc/jobdesc.h Job Descriptor Constructors
+!Fdrivers/crypto/caam/flib/desc/jobdesc.h cnstr_jobdesc_mdsplitkey
+ </sect1>
+ <sect1>
+ <title>Auxiliary Data Structures</title>
+!Pdrivers/crypto/caam/flib/desc/common.h Shared Descriptor Constructors - shared structures
+!Fdrivers/crypto/caam/flib/desc/common.h alginfo
+!Fdrivers/crypto/caam/flib/desc/common.h protcmd
+ </sect1>
+ </chapter>
+</book>
diff --git a/Documentation/DocBook/rta/.gitignore b/Documentation/DocBook/rta/.gitignore
new file mode 100644
index 000000000000..e461c585fde8
--- /dev/null
+++ b/Documentation/DocBook/rta/.gitignore
@@ -0,0 +1 @@
+!*.svg
diff --git a/Documentation/DocBook/rta/Makefile b/Documentation/DocBook/rta/Makefile
new file mode 100644
index 000000000000..58981e3ae3ef
--- /dev/null
+++ b/Documentation/DocBook/rta/Makefile
@@ -0,0 +1,5 @@
+RTA_OBJ_DIR=$(objtree)/Documentation/DocBook/
+RTA_SRC_DIR=$(srctree)/Documentation/DocBook/rta
+
+install_rta_images = \
+ $(Q)cp $(RTA_SRC_DIR)/*.svg $(RTA_OBJ_DIR)/rta_api
diff --git a/Documentation/DocBook/rta/rta_arch.svg b/Documentation/DocBook/rta/rta_arch.svg
new file mode 100644
index 000000000000..d816eed04852
--- /dev/null
+++ b/Documentation/DocBook/rta/rta_arch.svg
@@ -0,0 +1,381 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="644.09448819"
+ height="652.3622047"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.2 r9819"
+ sodipodi:docname="rta_arch.svg"
+ inkscape:export-filename="Z:\repos\sdk-devel\flib\sec\rta\doc\images\rta_arch.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <title
+ id="title3950">RTA Integration Overview</title>
+ <defs
+ id="defs4">
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path4157"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Lend"
+ style="overflow:visible;">
+ <path
+ id="path4139"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.8) rotate(180) translate(12.5,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="0.98994949"
+ inkscape:cx="338.47626"
+ inkscape:cy="723.66809"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1440"
+ inkscape:window-height="878"
+ inkscape:window-x="-8"
+ inkscape:window-y="-8"
+ inkscape:window-maximized="1" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title>RTA Integration Overview</dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ style="display:inline">
+ <rect
+ style="fill:#e5ffe5;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.94082779;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.9408278, 1.8816556;stroke-dashoffset:0"
+ id="rect2985"
+ width="533.80353"
+ height="200.01016"
+ x="82.832512"
+ y="49.280708"
+ ry="19.1929" />
+ <rect
+ style="fill:#99ffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:1, 2;stroke-dashoffset:0"
+ id="rect3767"
+ width="101.01525"
+ height="53.538086"
+ x="243.44676"
+ y="73.524353"
+ ry="19.1929" />
+ <rect
+ style="fill:#99ffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1"
+ width="101.01525"
+ height="35.785767"
+ x="243.44678"
+ y="159.89241"
+ ry="12.82886" />
+ <rect
+ style="fill:#ff66ff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1-8"
+ width="101.01525"
+ height="35.785767"
+ x="490.93414"
+ y="81.895447"
+ ry="12.82886" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="529.31989"
+ y="103.82895"
+ id="text3832"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834"
+ x="529.31989"
+ y="103.82895">RTA</tspan></text>
+ <rect
+ style="fill:#ffffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.76365763;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.76365765, 1.5273153;stroke-dashoffset:0"
+ id="rect2985-5"
+ width="533.80353"
+ height="131.77383"
+ x="81.600868"
+ y="287.67673"
+ ry="12.644968" />
+ <rect
+ style="fill:#ff66ff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1-8-1"
+ width="101.01525"
+ height="35.785767"
+ x="463.66003"
+ y="373.82953"
+ ry="12.82886" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="500.61041"
+ y="395.72299"
+ id="text3832-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-2"
+ x="500.61041"
+ y="395.72299">RTA</tspan></text>
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.76365763;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.76365765, 1.5273153;stroke-dashoffset:0"
+ id="rect2985-5-7"
+ width="533.80353"
+ height="131.77383"
+ x="80.590714"
+ y="460.18579"
+ ry="12.644968" />
+ <rect
+ style="fill:#99ccff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.30565068;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.30565068, 0.61130137;stroke-dashoffset:0"
+ id="rect2985-5-6"
+ width="203.08368"
+ height="55.48671"
+ x="248.03383"
+ y="519.5426"
+ ry="5.3244843" />
+ <flowRoot
+ xml:space="preserve"
+ id="flowRoot4061"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"><flowRegion
+ id="flowRegion4063"><rect
+ id="rect4065"
+ width="45.456863"
+ height="17.172594"
+ x="139.40105"
+ y="685.67682" /></flowRegion><flowPara
+ id="flowPara4067" /></flowRoot> <path
+ style="fill:none;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;stroke-dashoffset:0;marker-end:url(#Arrow2Lend)"
+ d="M 344.46201,100.19032 490.93414,99.891405"
+ id="path4131"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect3767"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1-8"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 293.95439,127.06244 1e-5,32.82997"
+ id="path4763"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect3767"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 440.96319,335.95105 49.7186,37.87848"
+ id="path5135"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect4101"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1-8-1"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 292.94424,193.73252 25.25381,338.4011"
+ id="path3067"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 212.13204,394.75287 94.95433,137.38075"
+ id="path3069"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.004;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend);stroke-miterlimit:4;stroke-dasharray:none"
+ d="m 273.75134,378.59043 189.28009,13.13199"
+ id="path3071"
+ inkscape:connector-curvature="0" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="103.62045"
+ y="71.464035"
+ id="text3832-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7"
+ x="103.62045"
+ y="71.464035">User space</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="99.680267"
+ y="313.51968"
+ id="text3832-1-4"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-0"
+ x="99.680267"
+ y="313.51968">Kernel space</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="96.269417"
+ y="482.21518"
+ id="text3832-1-4-8"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-0-8"
+ x="96.269417"
+ y="482.21518">Platform hardware</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;text-align:center;line-height:125%;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="294.0625"
+ y="94.316589"
+ id="text3832-1-2"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-4"
+ x="294.0625"
+ y="94.316589">Crypto</tspan><tspan
+ sodipodi:role="line"
+ x="294.0625"
+ y="111.81659"
+ id="tspan3138">application</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;text-align:center;line-height:125%;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="295.19696"
+ y="182.62668"
+ id="text3832-1-2-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="295.19696"
+ y="182.62668"
+ id="tspan3138-1">QBMAN</tspan></text>
+ </g>
+ <g
+ inkscape:groupmode="layer"
+ id="layer2"
+ inkscape:label="Layer 2"
+ style="display:inline">
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.10832807;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.10832807, 0.21665614;stroke-dashoffset:0;display:inline"
+ id="rect2985-5-7-3"
+ width="46.55518"
+ height="30.403757"
+ x="292.39508"
+ y="532.58911"
+ ry="2.9175332" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="308.09653"
+ y="552.33661"
+ id="text4015"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4017"
+ x="308.09653"
+ y="552.33661">QI</tspan></text>
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.10832807;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.10832807, 0.21665614;stroke-dashoffset:0;display:inline"
+ id="rect2985-5-7-3-2"
+ width="46.55518"
+ height="30.403757"
+ x="384.82404"
+ y="533.09424"
+ ry="2.9175332" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="397.49506"
+ y="551.8316"
+ id="text4015-2"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4017-1"
+ x="397.49506"
+ y="551.8316">JRI</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="254.55844"
+ y="535.16406"
+ id="text4069"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4071"
+ x="254.55844"
+ y="535.16406">SEC</tspan></text>
+ <rect
+ style="fill:#ffcc00;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.80089962;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.8008996, 1.6017992;stroke-dashoffset:0"
+ id="rect4101"
+ width="112.12693"
+ height="31.101717"
+ x="348.50262"
+ y="304.84933"
+ ry="3.415338" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="369.71585"
+ y="325.0524"
+ id="text4103"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4105"
+ x="369.71585"
+ y="325.0524">SEC Driver</tspan></text>
+ <rect
+ style="fill:#ffcc00;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.80738008;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.80738011, 1.61476021;stroke-dashoffset:0;display:inline"
+ id="rect4101-5"
+ width="111.56696"
+ height="31.765713"
+ x="162.08232"
+ y="362.38086"
+ ry="3.4882529" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="177.28172"
+ y="383.64117"
+ id="text4103-7"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4105-6"
+ x="177.28172"
+ y="383.64117">SEC QI Driver</tspan></text>
+ </g>
+</svg>
--
1.8.3.1

2014-07-18 16:39:29

by Horia Geantă

[permalink] [raw]
Subject: [PATCH 8/9] crypto: caam - refactor descriptor creation

Refactor descriptor creation in caamalg and caamhash, i.e.
build each descriptor entirely in a single place / function instead of
spreading it across small helper functions.
This makes the code more comprehensible and easier to maintain.

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamalg.c | 251 +++++++++++++++-----------
drivers/crypto/caam/caamhash.c | 391 ++++++++++++++---------------------------
2 files changed, 278 insertions(+), 364 deletions(-)
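
The refactor drops the former one-off helpers (append_dec_op1(),
aead_append_ld_iv(), ablkcipher_append_src_dst(), init_sh_desc_key_aead())
and emits every command of a descriptor inside the function that owns it,
so each shared descriptor reads top to bottom. A minimal sketch of the
resulting RTA pattern, modelled on the ablkcipher encrypt descriptor in
the diff below (the function name and the ps/ivsize parameters are
illustrative only, not part of the patch):

static void sh_desc_sketch(uint32_t *desc, struct caam_ctx *ctx,
                           unsigned int ivsize, bool ps)
{
        struct program prg;
        struct program *program = &prg;
        LABEL(skip_key_load);
        REFERENCE(pskip_key_load);

        PROGRAM_CNTXT_INIT(desc, 0);
        if (ps)
                PROGRAM_SET_36BIT_ADDR();

        SHR_HDR(SHR_SERIAL, 1, 0);

        /* Skip key loading if the descriptor is already shared */
        pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
        KEY(KEY1, 0, PTR((uintptr_t)ctx->key), ctx->enckeylen, IMMED);
        SET_LABEL(skip_key_load);

        /* Load IV, select the class 1 algorithm, move the payload through */
        SEQLOAD(CONTEXT1, 0, ivsize, 0);
        ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
                      ctx->class1_alg_type & OP_ALG_AAI_MASK,
                      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
        MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
        MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
        SEQFIFOLOAD(MSG1, 0, VLF | LAST1);
        SEQFIFOSTORE(MSG, 0, 0, VLF);

        /* Resolve the forward reference once the label offset is known */
        PATCH_JUMP(pskip_key_load, skip_key_load);

        PROGRAM_FINALIZE();
}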

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index ad5ef8c0c179..927d6467eeba 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -92,59 +92,6 @@
#endif
static struct list_head alg_list;

-/* Set DK bit in class 1 operation if shared */
-static inline void append_dec_op1(struct program *program, uint32_t type)
-{
- LABEL(jump_cmd);
- REFERENCE(pjump_cmd);
- LABEL(uncond_jump_cmd);
- REFERENCE(puncond_jump_cmd);
-
- /* DK bit is valid only for AES */
- if ((type & OP_ALG_ALGSEL_MASK) != OP_ALG_ALGSEL_AES) {
- ALG_OPERATION(type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
- OP_ALG_DECRYPT);
- return;
- }
-
- pjump_cmd = JUMP(IMM(jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);
- ALG_OPERATION(type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
- OP_ALG_DECRYPT);
- puncond_jump_cmd = JUMP(IMM(uncond_jump_cmd), LOCAL_JUMP, ALL_TRUE, 0);
- SET_LABEL(jump_cmd);
- ALG_OPERATION(type & OP_ALG_ALGSEL_MASK,
- (type & OP_ALG_AAI_MASK) | OP_ALG_AAI_DK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
- OP_ALG_DECRYPT);
- SET_LABEL(uncond_jump_cmd);
-
- PATCH_JUMP(pjump_cmd, jump_cmd);
- PATCH_JUMP(puncond_jump_cmd, uncond_jump_cmd);
-}
-
-/*
- * For aead encrypt and decrypt, read iv for both classes
- */
-static inline void aead_append_ld_iv(struct program *program, uint32_t ivsize)
-{
- SEQLOAD(CONTEXT1, 0, ivsize, 0);
- MOVE(CONTEXT1, 0, IFIFOAB2, 0, IMM(ivsize), 0);
-}
-
-/*
- * For ablkcipher encrypt and decrypt, read from req->src and
- * write to req->dst
- */
-static inline void ablkcipher_append_src_dst(struct program *program)
-{
- MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
- MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
- SEQFIFOLOAD(MSG1, 0, VLF | LAST1);
- SEQFIFOSTORE(MSG, 0, 0, VLF);
-}
-
/*
* If all data, including src (with assoc and iv) or dst (with iv only) are
* contiguous
@@ -174,40 +121,6 @@ struct caam_ctx {
unsigned int authsize;
};

-static void append_key_aead(struct program *program, struct caam_ctx *ctx,
- int keys_fit_inline)
-{
- if (keys_fit_inline) {
- KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
- ctx->split_key_len, IMMED);
- KEY(KEY1, 0,
- PTR((uintptr_t)(ctx->key + ctx->split_key_pad_len)),
- ctx->enckeylen, IMMED);
- } else {
- KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
- 0);
- KEY(KEY1, 0, PTR(ctx->key_dma + ctx->split_key_pad_len),
- ctx->enckeylen, 0);
- }
-}
-
-static void init_sh_desc_key_aead(struct program *program, struct caam_ctx *ctx,
- int keys_fit_inline)
-{
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
-
- SHR_HDR(SHR_SERIAL, 1, 0);
-
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);
-
- append_key_aead(program, ctx, keys_fit_inline);
-
- SET_LABEL(key_jump_cmd);
- PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
-}
-
static int aead_null_set_sh_desc(struct crypto_aead *aead)
{
struct aead_tfm *tfm = &aead->base.crt_aead;
@@ -425,6 +338,12 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
struct program *program = &prg;
unsigned desc_bytes;
bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(set_dk);
+ REFERENCE(pset_dk);
+ LABEL(skip_dk);
+ REFERENCE(pskip_dk);

if (!ctx->authsize)
return 0;
@@ -448,7 +367,25 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(program, ctx, keys_fit_inline);
+ SHR_HDR(SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
+ KEY(KEY1, 0,
+ PTR((uintptr_t)(ctx->key + ctx->split_key_pad_len)),
+ ctx->enckeylen, IMMED);
+ } else {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ KEY(KEY1, 0, PTR(ctx->key_dma + ctx->split_key_pad_len),
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(skip_key_load);

/* Class 2 operation */
ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
@@ -467,7 +404,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* read assoc before reading payload */
SEQFIFOLOAD(MSG2, 0 , VLF);
- aead_append_ld_iv(program, tfm->ivsize);
+
+ /* read iv for both classes */
+ SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);
+ MOVE(CONTEXT1, 0, IFIFOAB2, 0, IMM(tfm->ivsize), 0);

/* Class 1 operation */
ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
@@ -486,6 +426,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Write ICV */
SEQSTORE(CONTEXT2, 0, ctx->authsize, 0);

+ PATCH_JUMP(pskip_key_load, skip_key_load);
+
PROGRAM_FINALIZE();

desc_bytes = DESC_BYTES(desc);
@@ -516,7 +458,26 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(program, ctx, keys_fit_inline);
+ /* aead_decrypt shared descriptor */
+ SHR_HDR(SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
+ KEY(KEY1, 0,
+ PTR((uintptr_t)(ctx->key + ctx->split_key_pad_len)),
+ ctx->enckeylen, IMMED);
+ } else {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ KEY(KEY1, 0, PTR(ctx->key_dma + ctx->split_key_pad_len),
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(skip_key_load);

/* Class 2 operation */
ALG_OPERATION(ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
@@ -534,9 +495,30 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* read assoc before reading payload */
SEQFIFOLOAD(MSG2, 0 , VLF);

- aead_append_ld_iv(program, tfm->ivsize);
+ /* read iv for both classes */
+ SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);
+ MOVE(CONTEXT1, 0, IFIFOAB2, 0, IMM(tfm->ivsize), 0);

- append_dec_op1(program, ctx->class1_alg_type);
+ /* Set DK bit in class 1 operation if shared (AES only) */
+ if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
+ pset_dk = JUMP(IMM(set_dk), LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ pskip_dk = JUMP(IMM(skip_dk), LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(set_dk);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, OP_ALG_DECRYPT);
+ SET_LABEL(skip_dk);
+ } else {
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ }

/* Read and write cryptlen bytes */
MATHB(ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
@@ -549,6 +531,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Load ICV */
SEQFIFOLOAD(ICV2, ctx->authsize, LAST2);

+ PATCH_JUMP(pskip_key_load, skip_key_load);
+ PATCH_JUMP(pset_dk, set_dk);
+ PATCH_JUMP(pskip_dk, skip_dk);
+
PROGRAM_FINALIZE();

desc_bytes = DESC_BYTES(desc);
@@ -579,7 +565,25 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR();

- init_sh_desc_key_aead(program, ctx, keys_fit_inline);
+ SHR_HDR(SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);
+ KEY(KEY1, 0,
+ PTR((uintptr_t)(ctx->key + ctx->split_key_pad_len)),
+ ctx->enckeylen, IMMED);
+ } else {
+ KEY(MDHA_SPLIT_KEY, ENC, PTR(ctx->key_dma), ctx->split_key_len,
+ 0);
+ KEY(KEY1, 0, PTR(ctx->key_dma + ctx->split_key_pad_len),
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(skip_key_load);

/* Generate IV */
geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
@@ -636,6 +640,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Write ICV */
SEQSTORE(CONTEXT2, 0, ctx->authsize, 0);

+ PATCH_JUMP(pskip_key_load, skip_key_load);
+
PROGRAM_FINALIZE();

desc_bytes = DESC_BYTES(desc);
@@ -749,8 +755,12 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
struct program *program = &prg;
unsigned desc_bytes;
bool ps = (sizeof(dma_addr_t) == sizeof(u64));
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(set_dk);
+ REFERENCE(pset_dk);
+ LABEL(skip_dk);
+ REFERENCE(pskip_dk);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key in @"__stringify(__LINE__)": ",
@@ -773,13 +783,14 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
PROGRAM_SET_36BIT_ADDR();

SHR_HDR(SHR_SERIAL, 1, 0);
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
KEY(KEY1, 0, PTR((uintptr_t)ctx->key), ctx->enckeylen, IMMED);

- SET_LABEL(key_jump_cmd);
+ SET_LABEL(skip_key_load);

/* Load IV */
SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);
@@ -791,9 +802,12 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
OP_ALG_ENCRYPT);

/* Perform operation */
- ablkcipher_append_src_dst(program);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(MSG, 0, 0, VLF);

- PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
+ PATCH_JUMP(pskip_key_load, skip_key_load);

PROGRAM_FINALIZE();

@@ -818,24 +832,47 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,

SHR_HDR(SHR_SERIAL, 1, 0);

- /* Skip if already shared */
- pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE, SHRD);
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
KEY(KEY1, 0, PTR((uintptr_t)ctx->key), ctx->enckeylen, IMMED);

- SET_LABEL(key_jump_cmd);
+ SET_LABEL(skip_key_load);

/* load IV */
SEQLOAD(CONTEXT1, 0, tfm->ivsize, 0);

- /* Choose operation */
- append_dec_op1(program, ctx->class1_alg_type);
+ /* Set DK bit in class 1 operation if shared (AES only) */
+ if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
+ pset_dk = JUMP(IMM(set_dk), LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ pskip_dk = JUMP(IMM(skip_dk), LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(set_dk);
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, OP_ALG_DECRYPT);
+ SET_LABEL(skip_dk);
+ } else {
+ ALG_OPERATION(ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ OP_ALG_DECRYPT);
+ }

/* Perform operation */
- ablkcipher_append_src_dst(program);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(MSG, 0, 0, VLF);

- PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
+ PATCH_JUMP(pskip_key_load, skip_key_load);
+ PATCH_JUMP(pset_dk, set_dk);
+ PATCH_JUMP(pskip_dk, skip_dk);

PROGRAM_FINALIZE();

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index ec66e715d825..a7e90be9845c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -100,18 +100,18 @@ static struct list_head hash_list;
/* ahash per-session context */
struct caam_hash_ctx {
struct device *jrdev;
- u32 sh_desc_update[DESC_HASH_MAX_USED_LEN];
- u32 sh_desc_update_first[DESC_HASH_MAX_USED_LEN];
- u32 sh_desc_fin[DESC_HASH_MAX_USED_LEN];
- u32 sh_desc_digest[DESC_HASH_MAX_USED_LEN];
- u32 sh_desc_finup[DESC_HASH_MAX_USED_LEN];
+ uint32_t sh_desc_update[DESC_HASH_MAX_USED_LEN];
+ uint32_t sh_desc_update_first[DESC_HASH_MAX_USED_LEN];
+ uint32_t sh_desc_fin[DESC_HASH_MAX_USED_LEN];
+ uint32_t sh_desc_digest[DESC_HASH_MAX_USED_LEN];
+ uint32_t sh_desc_finup[DESC_HASH_MAX_USED_LEN];
dma_addr_t sh_desc_update_dma;
dma_addr_t sh_desc_update_first_dma;
dma_addr_t sh_desc_fin_dma;
dma_addr_t sh_desc_digest_dma;
dma_addr_t sh_desc_finup_dma;
- u32 alg_type;
- u32 alg_op;
+ uint32_t alg_type;
+ uint32_t alg_op;
u8 key[CAAM_MAX_HASH_KEY_SIZE];
dma_addr_t key_dma;
int ctx_len;
@@ -136,37 +136,6 @@ struct caam_hash_state {

/* Common job descriptor seq in/out ptr routines */

-/* Map state->caam_ctx, and append seq_out_ptr command that points to it */
-static inline int map_seq_out_ptr_ctx(struct program *program,
- struct device *jrdev,
- struct caam_hash_state *state,
- int ctx_len)
-{
- state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
- ctx_len, DMA_FROM_DEVICE);
- if (dma_mapping_error(jrdev, state->ctx_dma)) {
- dev_err(jrdev, "unable to map ctx\n");
- return -ENOMEM;
- }
-
- SEQOUTPTR(state->ctx_dma, ctx_len, EXT);
-
- return 0;
-}
-
-/* Map req->result, and append seq_out_ptr command that points to it */
-static inline dma_addr_t map_seq_out_ptr_result(struct program *program,
- struct device *jrdev,
- u8 *result, int digestsize)
-{
- dma_addr_t dst_dma;
-
- dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
- SEQOUTPTR(dst_dma, digestsize, EXT);
-
- return dst_dma;
-}
-
/* Map current buffer in state and put it in link table */
static inline dma_addr_t buf_map_to_sec4_sg(struct device *jrdev,
struct sec4_sg_entry *sec4_sg,
@@ -225,90 +194,64 @@ static inline int ctx_map_to_sec4_sg(u32 *desc, struct device *jrdev,
return 0;
}

-/* Common shared descriptor commands */
-static inline void append_key_ahash(struct program *program,
- struct caam_hash_ctx *ctx)
+/*
+ * For ahash update, final and finup (import_ctx = true)
+ * import context, read and write to seqout
+ * For ahash firsts and digest (import_ctx = false)
+ * read and write to seqout
+ */
+static inline void ahash_gen_sh_desc(uint32_t *desc, uint32_t state,
+ int digestsize, struct caam_hash_ctx *ctx,
+ bool import_ctx)
{
- KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
- ctx->split_key_len, IMMED);
-}
+ uint32_t op = ctx->alg_type;
+ struct program prg;
+ struct program *program = &prg;
+ bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);

-/* Append key if it has been set */
-static inline void init_sh_desc_key_ahash(struct program *program,
- struct caam_hash_ctx *ctx)
-{
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
+ PROGRAM_CNTXT_INIT(desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();

SHR_HDR(SHR_SERIAL, 1, 0);

- if (ctx->split_key_len) {
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(IMM(key_jump_cmd), LOCAL_JUMP, ALL_TRUE,
- SHRD);
-
- append_key_ahash(program, ctx);
-
- SET_LABEL(key_jump_cmd);
+ /* Append key if it has been set; ahash update excluded */
+ if ((state != OP_ALG_AS_UPDATE) && (ctx->split_key_len)) {
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(IMM(skip_key_load), LOCAL_JUMP, ALL_TRUE,
+ SHRD);

- PATCH_JUMP(pkey_jump_cmd, key_jump_cmd);
- }
-}
-
-/*
- * For ahash read data from seqin following state->caam_ctx,
- * and write resulting class2 context to seqout, which may be state->caam_ctx
- * or req->result
- */
-static inline void ahash_append_load_str(struct program *program,
- int digestsize)
-{
- /* Calculate remaining bytes to read */
- MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+ KEY(MDHA_SPLIT_KEY, ENC, PTR((uintptr_t)ctx->key),
+ ctx->split_key_len, IMMED);

- /* Read remaining bytes */
- SEQFIFOLOAD(MSG2, 0, VLF | LAST2);
+ SET_LABEL(skip_key_load);

- /* Store class2 context bytes */
- SEQSTORE(CONTEXT2, 0, digestsize, 0);
-}
+ PATCH_JUMP(pskip_key_load, skip_key_load);

-/*
- * For ahash update, final and finup, import context, read and write to seqout
- */
-static inline void ahash_ctx_data_to_out(struct program *program, u32 op,
- u32 state, int digestsize,
- struct caam_hash_ctx *ctx)
-{
- init_sh_desc_key_ahash(program, ctx);
+ op |= OP_ALG_AAI_HMAC_PRECOMP;
+ }

- /* Import context from software */
- SEQLOAD(CONTEXT2, 0, ctx->ctx_len, 0);
+ /* If needed, import context from software */
+ if (import_ctx)
+ SEQLOAD(CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
- ALG_OPERATION(op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
- ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
+ ALG_OPERATION(op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK,
+ state, ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);

/*
* Load from buf and/or src and write to req->result or state->context
+ * Calculate remaining bytes to read
*/
- ahash_append_load_str(program, digestsize);
-}
-
-/* For ahash firsts and digest, read and write to seqout */
-static inline void ahash_data_to_out(struct program *program, u32 op, u32 state,
- int digestsize, struct caam_hash_ctx *ctx)
-{
- init_sh_desc_key_ahash(program, ctx);
-
- /* Class 2 operation */
- ALG_OPERATION(op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
- ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
+ MATHB(SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+ /* Read remaining bytes */
+ SEQFIFOLOAD(MSG2, 0, VLF | LAST2);
+ /* Store class2 context bytes */
+ SEQSTORE(CONTEXT2, 0, digestsize, 0);

- /*
- * Load from buf and/or src and write to req->result or state->context
- */
- ahash_append_load_str(program, digestsize);
+ PROGRAM_FINALIZE();
}

static int ahash_set_sh_desc(struct crypto_ahash *ahash)
@@ -316,36 +259,11 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
int digestsize = crypto_ahash_digestsize(ahash);
struct device *jrdev = ctx->jrdev;
- u32 have_key = 0;
uint32_t *desc;
- struct program prg;
- struct program *program = &prg;
- bool ps = (sizeof(dma_addr_t) == sizeof(u64));
-
- if (ctx->split_key_len)
- have_key = OP_ALG_AAI_HMAC_PRECOMP;

/* ahash_update shared descriptor */
desc = ctx->sh_desc_update;
- PROGRAM_CNTXT_INIT(desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- SHR_HDR(SHR_SERIAL, 1, 0);
-
- /* Import context from software */
- SEQLOAD(CONTEXT2, 0, ctx->ctx_len, 0);
-
- /* Class 2 operation */
- ALG_OPERATION(ctx->alg_type & OP_ALG_ALGSEL_MASK,
- ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_UPDATE,
- ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
-
- /* Load data and write to result or context */
- ahash_append_load_str(program, ctx->ctx_len);
-
- PROGRAM_FINALIZE();
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_UPDATE, ctx->ctx_len, ctx, true);
ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_dma)) {
@@ -360,15 +278,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_update_first shared descriptor */
desc = ctx->sh_desc_update_first;
- PROGRAM_CNTXT_INIT(desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- ahash_data_to_out(program, have_key | ctx->alg_type, OP_ALG_AS_INIT,
- ctx->ctx_len, ctx);
-
- PROGRAM_FINALIZE();
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_INIT, ctx->ctx_len, ctx, false);
ctx->sh_desc_update_first_dma = dma_map_single(jrdev, desc,
DESC_BYTES(desc),
DMA_TO_DEVICE);
@@ -384,15 +294,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_final shared descriptor */
desc = ctx->sh_desc_fin;
- PROGRAM_CNTXT_INIT(desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- ahash_ctx_data_to_out(program, have_key | ctx->alg_type,
- OP_ALG_AS_FINALIZE, digestsize, ctx);
-
- PROGRAM_FINALIZE();
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_FINALIZE, digestsize, ctx, true);
ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_fin_dma)) {
@@ -406,15 +308,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_finup shared descriptor */
desc = ctx->sh_desc_finup;
- PROGRAM_CNTXT_INIT(desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- ahash_ctx_data_to_out(program, have_key | ctx->alg_type,
- OP_ALG_AS_FINALIZE, digestsize, ctx);
-
- PROGRAM_FINALIZE();
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_FINALIZE, digestsize, ctx, true);
ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_finup_dma)) {
@@ -428,15 +322,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_digest shared descriptor */
desc = ctx->sh_desc_digest;
- PROGRAM_CNTXT_INIT(desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- ahash_data_to_out(program, have_key | ctx->alg_type,
- OP_ALG_AS_INITFINAL, digestsize, ctx);
-
- PROGRAM_FINALIZE();
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_INITFINAL, digestsize, ctx, false);
ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_digest_dma)) {
@@ -897,7 +783,6 @@ static int ahash_update_ctx(struct ahash_request *req)

SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + to_hash, SGF | EXT);
SEQOUTPTR(state->ctx_dma, ctx->ctx_len, EXT);
-
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -963,17 +848,17 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }
+
edesc->src_nents = 0;

ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
@@ -986,6 +871,12 @@ static int ahash_final_ctx(struct ahash_request *req)
last_buflen);
(edesc->sec4_sg + sec4_sg_bytes - 1)->len |= SEC4_SG_LEN_FIN;

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -994,14 +885,7 @@ static int ahash_final_ctx(struct ahash_request *req)
}

SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + buflen, SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -1058,19 +942,18 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }

ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
edesc->sec4_sg, DMA_TO_DEVICE);
@@ -1084,6 +967,12 @@ static int ahash_finup_ctx(struct ahash_request *req)
src_map_to_sec4_sg(jrdev, req->src, src_nents, edesc->sec4_sg +
sec4_sg_src_index, chained);

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -1093,14 +982,7 @@ static int ahash_finup_ctx(struct ahash_request *req)

SEQINPTR(edesc->sec4_sg_dma, ctx->ctx_len + buflen + req->nbytes,
SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -1155,17 +1037,16 @@ static int ahash_digest(struct ahash_request *req)
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
edesc->sec4_sg_bytes = sec4_sg_bytes;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }
+
edesc->src_nents = src_nents;
edesc->chained = chained;
-
- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
@@ -1179,15 +1060,14 @@ static int ahash_digest(struct ahash_request *req)
} else {
src_dma = sg_dma_address(req->src);
}
- SEQINPTR(src_dma, req->nbytes, options);
-
- edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
+ SEQINPTR(src_dma, req->nbytes, options);
+ SEQOUTPTR(edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -1235,33 +1115,30 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
+ edesc->src_nents = 0;
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, state->buf_dma)) {
dev_err(jrdev, "unable to map src\n");
return -ENOMEM;
}

- SEQINPTR(state->buf_dma, buflen, EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
- digestsize);
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
+ SEQINPTR(state->buf_dma, buflen, EXT);
+ SEQOUTPTR(edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE();

- edesc->src_nents = 0;
-
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
@@ -1325,6 +1202,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

+ desc = edesc->hw_desc;
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
@@ -1342,12 +1220,17 @@ static int ahash_update_no_ctx(struct ahash_request *req)
state->current_buf = !state->current_buf;
}

+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ ctx->ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
+ return -ENOMEM;
+ }
+
sh_len = DESC_LEN(sh_desc);
- desc = edesc->hw_desc;
PROGRAM_CNTXT_INIT(desc, sh_len);
if (ps)
PROGRAM_SET_36BIT_ADDR();
-
JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
@@ -1359,11 +1242,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
}

SEQINPTR(edesc->sec4_sg_dma, to_hash, SGF | EXT);
-
- ret = map_seq_out_ptr_ctx(program, jrdev, state, ctx->ctx_len);
- if (ret)
- return ret;
-
+ SEQOUTPTR(state->ctx_dma, ctx->ctx_len, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -1437,19 +1316,18 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR();
-
- JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }

state->buf_dma = try_buf_map_to_sec4_sg(jrdev, edesc->sec4_sg, buf,
state->buf_dma, buflen,
@@ -1458,6 +1336,12 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
src_map_to_sec4_sg(jrdev, req->src, src_nents, edesc->sec4_sg + 1,
chained);

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR();
+ JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -1466,14 +1350,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
}

SEQINPTR(edesc->sec4_sg_dma, buflen + req->nbytes, SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(program, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
@@ -1541,12 +1418,19 @@ static int ahash_update_first(struct ahash_request *req)
return -ENOMEM;
}

+ desc = edesc->hw_desc;
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
edesc->dst_dma = 0;
+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ ctx->ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
+ return -ENOMEM;
+ }

if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents,
@@ -1569,19 +1453,12 @@ static int ahash_update_first(struct ahash_request *req)
sg_copy_part(next_buf, req->src, to_hash, req->nbytes);

sh_len = DESC_LEN(sh_desc);
- desc = edesc->hw_desc;
PROGRAM_CNTXT_INIT(desc, sh_len);
if (ps)
PROGRAM_SET_36BIT_ADDR();
-
JOB_HDR(SHR_DEFER, sh_len, ptr, REO | SHR);
-
SEQINPTR(src_dma, to_hash, options);
-
- ret = map_seq_out_ptr_ctx(program, jrdev, state, ctx->ctx_len);
- if (ret)
- return ret;
-
+ SEQOUTPTR(state->ctx_dma, ctx->ctx_len, EXT);
PROGRAM_FINALIZE();

#ifdef DEBUG
--
1.8.3.1

2014-07-18 16:39:36

by Horia Geantă

[permalink] [raw]
Subject: [PATCH 7/9] crypto: caam - completely remove inline append

desc_constr.h and desc.h no longer have any users, having been
replaced by RTA, so get rid of them.

pdb.h is removed since its structures are not currently used.
Future protocol descriptors will add them as needed
in the flib/desc/ directory.
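
For reference, a rough before/after of the same (deliberately truncated)
shared-descriptor header and class 2 operation, first with the inline-append
helpers being removed and then with RTA; both snippets are illustrative
sketches only, and the inline-append helper names are those of the removed
desc_constr.h API:

static void sketch_inline_append(u32 *desc, struct caam_hash_ctx *ctx)
{
        /* desc_constr.h style: helpers append raw command words to desc */
        init_sh_desc(desc, HDR_SHARE_SERIAL);
        append_operation(desc, ctx->alg_type | OP_ALG_AS_INITFINAL |
                         OP_ALG_ENCRYPT);
}

static void sketch_rta(uint32_t *desc, struct caam_hash_ctx *ctx)
{
        struct program prg;
        struct program *program = &prg;

        /*
         * RTA style: commands are emitted through a program context and
         * the descriptor is closed by PROGRAM_FINALIZE()
         */
        PROGRAM_CNTXT_INIT(desc, 0);
        SHR_HDR(SHR_SERIAL, 1, 0);
        ALG_OPERATION(ctx->alg_type & OP_ALG_ALGSEL_MASK,
                      ctx->alg_type & OP_ALG_AAI_MASK,
                      OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, OP_ALG_ENCRYPT);
        PROGRAM_FINALIZE();
}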

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/desc.h | 1611 -------------------------------------
drivers/crypto/caam/desc_constr.h | 388 ---------
drivers/crypto/caam/pdb.h | 402 ---------
3 files changed, 2401 deletions(-)
delete mode 100644 drivers/crypto/caam/desc.h
delete mode 100644 drivers/crypto/caam/desc_constr.h
delete mode 100644 drivers/crypto/caam/pdb.h

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h
deleted file mode 100644
index 9066fdc402fa..000000000000
--- a/drivers/crypto/caam/desc.h
+++ /dev/null
@@ -1,1611 +0,0 @@
-/*
- * CAAM descriptor composition header
- * Definitions to support CAAM descriptor instruction generation
- *
- * Copyright 2008-2011 Freescale Semiconductor, Inc.
- */
-
-#ifndef DESC_H
-#define DESC_H
-
-/* Max size of any CAAM descriptor in 32-bit words, inclusive of header */
-#define MAX_CAAM_DESCSIZE 64
-
-/* Block size of any entity covered/uncovered with a KEK/TKEK */
-#define KEK_BLOCKSIZE 16
-
-/*
- * Supported descriptor command types as they show up
- * inside a descriptor command word.
- */
-#define CMD_SHIFT 27
-#define CMD_MASK 0xf8000000
-
-#define CMD_KEY (0x00 << CMD_SHIFT)
-#define CMD_SEQ_KEY (0x01 << CMD_SHIFT)
-#define CMD_LOAD (0x02 << CMD_SHIFT)
-#define CMD_SEQ_LOAD (0x03 << CMD_SHIFT)
-#define CMD_FIFO_LOAD (0x04 << CMD_SHIFT)
-#define CMD_SEQ_FIFO_LOAD (0x05 << CMD_SHIFT)
-#define CMD_STORE (0x0a << CMD_SHIFT)
-#define CMD_SEQ_STORE (0x0b << CMD_SHIFT)
-#define CMD_FIFO_STORE (0x0c << CMD_SHIFT)
-#define CMD_SEQ_FIFO_STORE (0x0d << CMD_SHIFT)
-#define CMD_MOVE_LEN (0x0e << CMD_SHIFT)
-#define CMD_MOVE (0x0f << CMD_SHIFT)
-#define CMD_OPERATION (0x10 << CMD_SHIFT)
-#define CMD_SIGNATURE (0x12 << CMD_SHIFT)
-#define CMD_JUMP (0x14 << CMD_SHIFT)
-#define CMD_MATH (0x15 << CMD_SHIFT)
-#define CMD_DESC_HDR (0x16 << CMD_SHIFT)
-#define CMD_SHARED_DESC_HDR (0x17 << CMD_SHIFT)
-#define CMD_SEQ_IN_PTR (0x1e << CMD_SHIFT)
-#define CMD_SEQ_OUT_PTR (0x1f << CMD_SHIFT)
-
-/* General-purpose class selector for all commands */
-#define CLASS_SHIFT 25
-#define CLASS_MASK (0x03 << CLASS_SHIFT)
-
-#define CLASS_NONE (0x00 << CLASS_SHIFT)
-#define CLASS_1 (0x01 << CLASS_SHIFT)
-#define CLASS_2 (0x02 << CLASS_SHIFT)
-#define CLASS_BOTH (0x03 << CLASS_SHIFT)
-
-/*
- * Descriptor header command constructs
- * Covers shared, job, and trusted descriptor headers
- */
-
-/*
- * Do Not Run - marks a descriptor inexecutable if there was
- * a preceding error somewhere
- */
-#define HDR_DNR 0x01000000
-
-/*
- * ONE - should always be set. Combination of ONE (always
- * set) and ZRO (always clear) forms an endianness sanity check
- */
-#define HDR_ONE 0x00800000
-#define HDR_ZRO 0x00008000
-
-/* Start Index or SharedDesc Length */
-#define HDR_START_IDX_SHIFT 16
-#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)
-
-/* If shared descriptor header, 6-bit length */
-#define HDR_DESCLEN_SHR_MASK 0x3f
-
-/* If non-shared header, 7-bit length */
-#define HDR_DESCLEN_MASK 0x7f
-
-/* This is a TrustedDesc (if not SharedDesc) */
-#define HDR_TRUSTED 0x00004000
-
-/* Make into TrustedDesc (if not SharedDesc) */
-#define HDR_MAKE_TRUSTED 0x00002000
-
-/* Save context if self-shared (if SharedDesc) */
-#define HDR_SAVECTX 0x00001000
-
-/* Next item points to SharedDesc */
-#define HDR_SHARED 0x00001000
-
-/*
- * Reverse Execution Order - execute JobDesc first, then
- * execute SharedDesc (normally SharedDesc goes first).
- */
-#define HDR_REVERSE 0x00000800
-
-/* Propogate DNR property to SharedDesc */
-#define HDR_PROP_DNR 0x00000800
-
-/* JobDesc/SharedDesc share property */
-#define HDR_SD_SHARE_MASK 0x03
-#define HDR_SD_SHARE_SHIFT 8
-#define HDR_JD_SHARE_MASK 0x07
-#define HDR_JD_SHARE_SHIFT 8
-
-#define HDR_SHARE_NEVER (0x00 << HDR_SD_SHARE_SHIFT)
-#define HDR_SHARE_WAIT (0x01 << HDR_SD_SHARE_SHIFT)
-#define HDR_SHARE_SERIAL (0x02 << HDR_SD_SHARE_SHIFT)
-#define HDR_SHARE_ALWAYS (0x03 << HDR_SD_SHARE_SHIFT)
-#define HDR_SHARE_DEFER (0x04 << HDR_SD_SHARE_SHIFT)
-
-/* JobDesc/SharedDesc descriptor length */
-#define HDR_JD_LENGTH_MASK 0x7f
-#define HDR_SD_LENGTH_MASK 0x3f
-
-/*
- * KEY/SEQ_KEY Command Constructs
- */
-
-/* Key Destination Class: 01 = Class 1, 02 - Class 2 */
-#define KEY_DEST_CLASS_SHIFT 25 /* use CLASS_1 or CLASS_2 */
-#define KEY_DEST_CLASS_MASK (0x03 << KEY_DEST_CLASS_SHIFT)
-
-/* Scatter-Gather Table/Variable Length Field */
-#define KEY_SGF 0x01000000
-#define KEY_VLF 0x01000000
-
-/* Immediate - Key follows command in the descriptor */
-#define KEY_IMM 0x00800000
-
-/*
- * Encrypted - Key is encrypted either with the KEK, or
- * with the TDKEK if TK is set
- */
-#define KEY_ENC 0x00400000
-
-/*
- * No Write Back - Do not allow key to be FIFO STOREd
- */
-#define KEY_NWB 0x00200000
-
-/*
- * Enhanced Encryption of Key
- */
-#define KEY_EKT 0x00100000
-
-/*
- * Encrypted with Trusted Key
- */
-#define KEY_TK 0x00008000
-
-/*
- * KDEST - Key Destination: 0 - class key register,
- * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split-key
- */
-#define KEY_DEST_SHIFT 16
-#define KEY_DEST_MASK (0x03 << KEY_DEST_SHIFT)
-
-#define KEY_DEST_CLASS_REG (0x00 << KEY_DEST_SHIFT)
-#define KEY_DEST_PKHA_E (0x01 << KEY_DEST_SHIFT)
-#define KEY_DEST_AFHA_SBOX (0x02 << KEY_DEST_SHIFT)
-#define KEY_DEST_MDHA_SPLIT (0x03 << KEY_DEST_SHIFT)
-
-/* Length in bytes */
-#define KEY_LENGTH_MASK 0x000003ff
-
-/*
- * LOAD/SEQ_LOAD/STORE/SEQ_STORE Command Constructs
- */
-
-/*
- * Load/Store Destination: 0 = class independent CCB,
- * 1 = class 1 CCB, 2 = class 2 CCB, 3 = DECO
- */
-#define LDST_CLASS_SHIFT 25
-#define LDST_CLASS_MASK (0x03 << LDST_CLASS_SHIFT)
-#define LDST_CLASS_IND_CCB (0x00 << LDST_CLASS_SHIFT)
-#define LDST_CLASS_1_CCB (0x01 << LDST_CLASS_SHIFT)
-#define LDST_CLASS_2_CCB (0x02 << LDST_CLASS_SHIFT)
-#define LDST_CLASS_DECO (0x03 << LDST_CLASS_SHIFT)
-
-/* Scatter-Gather Table/Variable Length Field */
-#define LDST_SGF 0x01000000
-#define LDST_VLF LDST_SGF
-
-/* Immediate - Key follows this command in descriptor */
-#define LDST_IMM_MASK 1
-#define LDST_IMM_SHIFT 23
-#define LDST_IMM (LDST_IMM_MASK << LDST_IMM_SHIFT)
-
-/* SRC/DST - Destination for LOAD, Source for STORE */
-#define LDST_SRCDST_SHIFT 16
-#define LDST_SRCDST_MASK (0x7f << LDST_SRCDST_SHIFT)
-
-#define LDST_SRCDST_BYTE_CONTEXT (0x20 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_BYTE_KEY (0x40 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_BYTE_INFIFO (0x7c << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_BYTE_OUTFIFO (0x7e << LDST_SRCDST_SHIFT)
-
-#define LDST_SRCDST_WORD_MODE_REG (0x00 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_KEYSZ_REG (0x01 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DATASZ_REG (0x02 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_ICVSZ_REG (0x03 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_CHACTRL (0x06 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECOCTRL (0x06 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_IRQCTRL (0x07 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_PCLOVRD (0x07 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_CLRW (0x08 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_MATH0 (0x08 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_STAT (0x09 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_MATH1 (0x09 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_MATH2 (0x0a << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_AAD_SZ (0x0b << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DECO_MATH3 (0x0b << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_CLASS1_ICV_SZ (0x0c << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_ALTDS_CLASS1 (0x0f << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_PKHA_A_SZ (0x10 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_PKHA_B_SZ (0x11 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_PKHA_N_SZ (0x12 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_PKHA_E_SZ (0x13 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_CLASS_CTX (0x20 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DESCBUF (0x40 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DESCBUF_JOB (0x41 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DESCBUF_SHARED (0x42 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DESCBUF_JOB_WE (0x45 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_INFO_FIFO (0x7a << LDST_SRCDST_SHIFT)
-
-/* Offset in source/destination */
-#define LDST_OFFSET_SHIFT 8
-#define LDST_OFFSET_MASK (0xff << LDST_OFFSET_SHIFT)
-
-/* LDOFF definitions used when DST = LDST_SRCDST_WORD_DECOCTRL */
-/* These could also be shifted by LDST_OFFSET_SHIFT - this reads better */
-#define LDOFF_CHG_SHARE_SHIFT 0
-#define LDOFF_CHG_SHARE_MASK (0x3 << LDOFF_CHG_SHARE_SHIFT)
-#define LDOFF_CHG_SHARE_NEVER (0x1 << LDOFF_CHG_SHARE_SHIFT)
-#define LDOFF_CHG_SHARE_OK_PROP (0x2 << LDOFF_CHG_SHARE_SHIFT)
-#define LDOFF_CHG_SHARE_OK_NO_PROP (0x3 << LDOFF_CHG_SHARE_SHIFT)
-
-#define LDOFF_ENABLE_AUTO_NFIFO (1 << 2)
-#define LDOFF_DISABLE_AUTO_NFIFO (1 << 3)
-
-#define LDOFF_CHG_NONSEQLIODN_SHIFT 4
-#define LDOFF_CHG_NONSEQLIODN_MASK (0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
-#define LDOFF_CHG_NONSEQLIODN_SEQ (0x1 << LDOFF_CHG_NONSEQLIODN_SHIFT)
-#define LDOFF_CHG_NONSEQLIODN_NON_SEQ (0x2 << LDOFF_CHG_NONSEQLIODN_SHIFT)
-#define LDOFF_CHG_NONSEQLIODN_TRUSTED (0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
-
-#define LDOFF_CHG_SEQLIODN_SHIFT 6
-#define LDOFF_CHG_SEQLIODN_MASK (0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
-#define LDOFF_CHG_SEQLIODN_SEQ (0x1 << LDOFF_CHG_SEQLIODN_SHIFT)
-#define LDOFF_CHG_SEQLIODN_NON_SEQ (0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
-#define LDOFF_CHG_SEQLIODN_TRUSTED (0x3 << LDOFF_CHG_SEQLIODN_SHIFT)
-
-/* Data length in bytes */
-#define LDST_LEN_SHIFT 0
-#define LDST_LEN_MASK (0xff << LDST_LEN_SHIFT)
-
-/* Special Length definitions when dst=deco-ctrl */
-#define LDLEN_ENABLE_OSL_COUNT (1 << 7)
-#define LDLEN_RST_CHA_OFIFO_PTR (1 << 6)
-#define LDLEN_RST_OFIFO (1 << 5)
-#define LDLEN_SET_OFIFO_OFF_VALID (1 << 4)
-#define LDLEN_SET_OFIFO_OFF_RSVD (1 << 3)
-#define LDLEN_SET_OFIFO_OFFSET_SHIFT 0
-#define LDLEN_SET_OFIFO_OFFSET_MASK (3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)
-
-/*
- * FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
- * Command Constructs
- */
-
-/*
- * Load Destination: 0 = skip (SEQ_FIFO_LOAD only),
- * 1 = Load for Class1, 2 = Load for Class2, 3 = Load both
- * Store Source: 0 = normal, 1 = Class1key, 2 = Class2key
- */
-#define FIFOLD_CLASS_SHIFT 25
-#define FIFOLD_CLASS_MASK (0x03 << FIFOLD_CLASS_SHIFT)
-#define FIFOLD_CLASS_SKIP (0x00 << FIFOLD_CLASS_SHIFT)
-#define FIFOLD_CLASS_CLASS1 (0x01 << FIFOLD_CLASS_SHIFT)
-#define FIFOLD_CLASS_CLASS2 (0x02 << FIFOLD_CLASS_SHIFT)
-#define FIFOLD_CLASS_BOTH (0x03 << FIFOLD_CLASS_SHIFT)
-
-#define FIFOST_CLASS_SHIFT 25
-#define FIFOST_CLASS_MASK (0x03 << FIFOST_CLASS_SHIFT)
-#define FIFOST_CLASS_NORMAL (0x00 << FIFOST_CLASS_SHIFT)
-#define FIFOST_CLASS_CLASS1KEY (0x01 << FIFOST_CLASS_SHIFT)
-#define FIFOST_CLASS_CLASS2KEY (0x02 << FIFOST_CLASS_SHIFT)
-
-/*
- * Scatter-Gather Table/Variable Length Field
- * If set for FIFO_LOAD, refers to a SG table. Within
- * SEQ_FIFO_LOAD, is variable input sequence
- */
-#define FIFOLDST_SGF_SHIFT 24
-#define FIFOLDST_SGF_MASK (1 << FIFOLDST_SGF_SHIFT)
-#define FIFOLDST_VLF_MASK (1 << FIFOLDST_SGF_SHIFT)
-#define FIFOLDST_SGF (1 << FIFOLDST_SGF_SHIFT)
-#define FIFOLDST_VLF (1 << FIFOLDST_SGF_SHIFT)
-
-/* Immediate - Data follows command in descriptor */
-#define FIFOLD_IMM_SHIFT 23
-#define FIFOLD_IMM_MASK (1 << FIFOLD_IMM_SHIFT)
-#define FIFOLD_IMM (1 << FIFOLD_IMM_SHIFT)
-
-/* Continue - Not the last FIFO store to come */
-#define FIFOST_CONT_SHIFT 23
-#define FIFOST_CONT_MASK (1 << FIFOST_CONT_SHIFT)
-
-/*
- * Extended Length - use 32-bit extended length that
- * follows the pointer field. Illegal with IMM set
- */
-#define FIFOLDST_EXT_SHIFT 22
-#define FIFOLDST_EXT_MASK (1 << FIFOLDST_EXT_SHIFT)
-#define FIFOLDST_EXT (1 << FIFOLDST_EXT_SHIFT)
-
-/* Input data type.*/
-#define FIFOLD_TYPE_SHIFT 16
-#define FIFOLD_CONT_TYPE_SHIFT 19 /* shift past last-flush bits */
-#define FIFOLD_TYPE_MASK (0x3f << FIFOLD_TYPE_SHIFT)
-
-/* PK types */
-#define FIFOLD_TYPE_PK (0x00 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_MASK (0x30 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_TYPEMASK (0x0f << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_A0 (0x00 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_A1 (0x01 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_A2 (0x02 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_A3 (0x03 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_B0 (0x04 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_B1 (0x05 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_B2 (0x06 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_B3 (0x07 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_N (0x08 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_A (0x0c << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_PK_B (0x0d << FIFOLD_TYPE_SHIFT)
-
-/* Other types. Need to OR in last/flush bits as desired */
-#define FIFOLD_TYPE_MSG_MASK (0x38 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_MSG (0x10 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_MSG1OUT2 (0x18 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_IV (0x20 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_BITDATA (0x28 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_AAD (0x30 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_ICV (0x38 << FIFOLD_TYPE_SHIFT)
-
-/* Last/Flush bits for use with "other" types above */
-#define FIFOLD_TYPE_ACT_MASK (0x07 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_NOACTION (0x00 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_FLUSH1 (0x01 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LAST1 (0x02 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LAST2FLUSH (0x03 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LAST2 (0x04 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LASTBOTH (0x06 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_LASTBOTHFL (0x07 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_NOINFOFIFO (0x0F << FIFOLD_TYPE_SHIFT)
-
-#define FIFOLDST_LEN_MASK 0xffff
-#define FIFOLDST_EXT_LEN_MASK 0xffffffff
-
-/* Output data types */
-#define FIFOST_TYPE_SHIFT 16
-#define FIFOST_TYPE_MASK (0x3f << FIFOST_TYPE_SHIFT)
-
-#define FIFOST_TYPE_PKHA_A0 (0x00 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_A1 (0x01 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_A2 (0x02 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_A3 (0x03 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_B0 (0x04 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_B1 (0x05 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_B2 (0x06 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_B3 (0x07 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_N (0x08 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_A (0x0c << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_B (0x0d << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_E_JKEK (0x22 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_PKHA_E_TKEK (0x23 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_KEY_KEK (0x24 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_KEY_TKEK (0x25 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_SPLIT_KEK (0x26 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_SPLIT_TKEK (0x27 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_OUTFIFO_KEK (0x28 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_OUTFIFO_TKEK (0x29 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_RNGSTORE (0x34 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_RNGFIFO (0x35 << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_SKIP (0x3f << FIFOST_TYPE_SHIFT)
-
-/*
- * OPERATION Command Constructs
- */
-
-/* Operation type selectors - OP TYPE */
-#define OP_TYPE_SHIFT 24
-#define OP_TYPE_MASK (0x07 << OP_TYPE_SHIFT)
-
-#define OP_TYPE_UNI_PROTOCOL (0x00 << OP_TYPE_SHIFT)
-#define OP_TYPE_PK (0x01 << OP_TYPE_SHIFT)
-#define OP_TYPE_CLASS1_ALG (0x02 << OP_TYPE_SHIFT)
-#define OP_TYPE_CLASS2_ALG (0x04 << OP_TYPE_SHIFT)
-#define OP_TYPE_DECAP_PROTOCOL (0x06 << OP_TYPE_SHIFT)
-#define OP_TYPE_ENCAP_PROTOCOL (0x07 << OP_TYPE_SHIFT)
-
-/* ProtocolID selectors - PROTID */
-#define OP_PCLID_SHIFT 16
-#define OP_PCLID_MASK (0xff << 16)
-
-/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
-#define OP_PCLID_IKEV1_PRF (0x01 << OP_PCLID_SHIFT)
-#define OP_PCLID_IKEV2_PRF (0x02 << OP_PCLID_SHIFT)
-#define OP_PCLID_SSL30_PRF (0x08 << OP_PCLID_SHIFT)
-#define OP_PCLID_TLS10_PRF (0x09 << OP_PCLID_SHIFT)
-#define OP_PCLID_TLS11_PRF (0x0a << OP_PCLID_SHIFT)
-#define OP_PCLID_DTLS10_PRF (0x0c << OP_PCLID_SHIFT)
-#define OP_PCLID_PRF (0x06 << OP_PCLID_SHIFT)
-#define OP_PCLID_BLOB (0x0d << OP_PCLID_SHIFT)
-#define OP_PCLID_SECRETKEY (0x11 << OP_PCLID_SHIFT)
-#define OP_PCLID_PUBLICKEYPAIR (0x14 << OP_PCLID_SHIFT)
-#define OP_PCLID_DSASIGN (0x15 << OP_PCLID_SHIFT)
-#define OP_PCLID_DSAVERIFY (0x16 << OP_PCLID_SHIFT)
-
-/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
-#define OP_PCLID_IPSEC (0x01 << OP_PCLID_SHIFT)
-#define OP_PCLID_SRTP (0x02 << OP_PCLID_SHIFT)
-#define OP_PCLID_MACSEC (0x03 << OP_PCLID_SHIFT)
-#define OP_PCLID_WIFI (0x04 << OP_PCLID_SHIFT)
-#define OP_PCLID_WIMAX (0x05 << OP_PCLID_SHIFT)
-#define OP_PCLID_SSL30 (0x08 << OP_PCLID_SHIFT)
-#define OP_PCLID_TLS10 (0x09 << OP_PCLID_SHIFT)
-#define OP_PCLID_TLS11 (0x0a << OP_PCLID_SHIFT)
-#define OP_PCLID_TLS12 (0x0b << OP_PCLID_SHIFT)
-#define OP_PCLID_DTLS (0x0c << OP_PCLID_SHIFT)
-
-/*
- * ProtocolInfo selectors
- */
-#define OP_PCLINFO_MASK 0xffff
-
-/* for OP_PCLID_IPSEC */
-#define OP_PCL_IPSEC_CIPHER_MASK 0xff00
-#define OP_PCL_IPSEC_AUTH_MASK 0x00ff
-
-#define OP_PCL_IPSEC_DES_IV64 0x0100
-#define OP_PCL_IPSEC_DES 0x0200
-#define OP_PCL_IPSEC_3DES 0x0300
-#define OP_PCL_IPSEC_AES_CBC 0x0c00
-#define OP_PCL_IPSEC_AES_CTR 0x0d00
-#define OP_PCL_IPSEC_AES_XTS 0x1600
-#define OP_PCL_IPSEC_AES_CCM8 0x0e00
-#define OP_PCL_IPSEC_AES_CCM12 0x0f00
-#define OP_PCL_IPSEC_AES_CCM16 0x1000
-#define OP_PCL_IPSEC_AES_GCM8 0x1200
-#define OP_PCL_IPSEC_AES_GCM12 0x1300
-#define OP_PCL_IPSEC_AES_GCM16 0x1400
-
-#define OP_PCL_IPSEC_HMAC_NULL 0x0000
-#define OP_PCL_IPSEC_HMAC_MD5_96 0x0001
-#define OP_PCL_IPSEC_HMAC_SHA1_96 0x0002
-#define OP_PCL_IPSEC_AES_XCBC_MAC_96 0x0005
-#define OP_PCL_IPSEC_HMAC_MD5_128 0x0006
-#define OP_PCL_IPSEC_HMAC_SHA1_160 0x0007
-#define OP_PCL_IPSEC_HMAC_SHA2_256_128 0x000c
-#define OP_PCL_IPSEC_HMAC_SHA2_384_192 0x000d
-#define OP_PCL_IPSEC_HMAC_SHA2_512_256 0x000e
-
-/* For SRTP - OP_PCLID_SRTP */
-#define OP_PCL_SRTP_CIPHER_MASK 0xff00
-#define OP_PCL_SRTP_AUTH_MASK 0x00ff
-
-#define OP_PCL_SRTP_AES_CTR 0x0d00
-
-#define OP_PCL_SRTP_HMAC_SHA1_160 0x0007
-
-/* For SSL 3.0 - OP_PCLID_SSL30 */
-#define OP_PCL_SSL30_AES_128_CBC_SHA 0x002f
-#define OP_PCL_SSL30_AES_128_CBC_SHA_2 0x0030
-#define OP_PCL_SSL30_AES_128_CBC_SHA_3 0x0031
-#define OP_PCL_SSL30_AES_128_CBC_SHA_4 0x0032
-#define OP_PCL_SSL30_AES_128_CBC_SHA_5 0x0033
-#define OP_PCL_SSL30_AES_128_CBC_SHA_6 0x0034
-#define OP_PCL_SSL30_AES_128_CBC_SHA_7 0x008c
-#define OP_PCL_SSL30_AES_128_CBC_SHA_8 0x0090
-#define OP_PCL_SSL30_AES_128_CBC_SHA_9 0x0094
-#define OP_PCL_SSL30_AES_128_CBC_SHA_10 0xc004
-#define OP_PCL_SSL30_AES_128_CBC_SHA_11 0xc009
-#define OP_PCL_SSL30_AES_128_CBC_SHA_12 0xc00e
-#define OP_PCL_SSL30_AES_128_CBC_SHA_13 0xc013
-#define OP_PCL_SSL30_AES_128_CBC_SHA_14 0xc018
-#define OP_PCL_SSL30_AES_128_CBC_SHA_15 0xc01d
-#define OP_PCL_SSL30_AES_128_CBC_SHA_16 0xc01e
-#define OP_PCL_SSL30_AES_128_CBC_SHA_17 0xc01f
-
-#define OP_PCL_SSL30_AES_256_CBC_SHA 0x0035
-#define OP_PCL_SSL30_AES_256_CBC_SHA_2 0x0036
-#define OP_PCL_SSL30_AES_256_CBC_SHA_3 0x0037
-#define OP_PCL_SSL30_AES_256_CBC_SHA_4 0x0038
-#define OP_PCL_SSL30_AES_256_CBC_SHA_5 0x0039
-#define OP_PCL_SSL30_AES_256_CBC_SHA_6 0x003a
-#define OP_PCL_SSL30_AES_256_CBC_SHA_7 0x008d
-#define OP_PCL_SSL30_AES_256_CBC_SHA_8 0x0091
-#define OP_PCL_SSL30_AES_256_CBC_SHA_9 0x0095
-#define OP_PCL_SSL30_AES_256_CBC_SHA_10 0xc005
-#define OP_PCL_SSL30_AES_256_CBC_SHA_11 0xc00a
-#define OP_PCL_SSL30_AES_256_CBC_SHA_12 0xc00f
-#define OP_PCL_SSL30_AES_256_CBC_SHA_13 0xc014
-#define OP_PCL_SSL30_AES_256_CBC_SHA_14 0xc019
-#define OP_PCL_SSL30_AES_256_CBC_SHA_15 0xc020
-#define OP_PCL_SSL30_AES_256_CBC_SHA_16 0xc021
-#define OP_PCL_SSL30_AES_256_CBC_SHA_17 0xc022
-
-#define OP_PCL_SSL30_3DES_EDE_CBC_MD5 0x0023
-
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA 0x001f
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_2 0x008b
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_3 0x008f
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_4 0x0093
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_5 0x000a
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_6 0x000d
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_7 0x0010
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_8 0x0013
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_9 0x0016
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_10 0x001b
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_11 0xc003
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_12 0xc008
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_13 0xc00d
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_14 0xc012
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_15 0xc017
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_16 0xc01a
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_17 0xc01b
-#define OP_PCL_SSL30_3DES_EDE_CBC_SHA_18 0xc01c
-
-#define OP_PCL_SSL30_DES40_CBC_MD5 0x0029
-
-#define OP_PCL_SSL30_DES_CBC_MD5 0x0022
-
-#define OP_PCL_SSL30_DES40_CBC_SHA 0x0008
-#define OP_PCL_SSL30_DES40_CBC_SHA_2 0x000b
-#define OP_PCL_SSL30_DES40_CBC_SHA_3 0x000e
-#define OP_PCL_SSL30_DES40_CBC_SHA_4 0x0011
-#define OP_PCL_SSL30_DES40_CBC_SHA_5 0x0014
-#define OP_PCL_SSL30_DES40_CBC_SHA_6 0x0019
-#define OP_PCL_SSL30_DES40_CBC_SHA_7 0x0026
-
-#define OP_PCL_SSL30_DES_CBC_SHA 0x001e
-#define OP_PCL_SSL30_DES_CBC_SHA_2 0x0009
-#define OP_PCL_SSL30_DES_CBC_SHA_3 0x000c
-#define OP_PCL_SSL30_DES_CBC_SHA_4 0x000f
-#define OP_PCL_SSL30_DES_CBC_SHA_5 0x0012
-#define OP_PCL_SSL30_DES_CBC_SHA_6 0x0015
-#define OP_PCL_SSL30_DES_CBC_SHA_7 0x001a
-
-#define OP_PCL_SSL30_RC4_128_MD5 0x0024
-#define OP_PCL_SSL30_RC4_128_MD5_2 0x0004
-#define OP_PCL_SSL30_RC4_128_MD5_3 0x0018
-
-#define OP_PCL_SSL30_RC4_40_MD5 0x002b
-#define OP_PCL_SSL30_RC4_40_MD5_2 0x0003
-#define OP_PCL_SSL30_RC4_40_MD5_3 0x0017
-
-#define OP_PCL_SSL30_RC4_128_SHA 0x0020
-#define OP_PCL_SSL30_RC4_128_SHA_2 0x008a
-#define OP_PCL_SSL30_RC4_128_SHA_3 0x008e
-#define OP_PCL_SSL30_RC4_128_SHA_4 0x0092
-#define OP_PCL_SSL30_RC4_128_SHA_5 0x0005
-#define OP_PCL_SSL30_RC4_128_SHA_6 0xc002
-#define OP_PCL_SSL30_RC4_128_SHA_7 0xc007
-#define OP_PCL_SSL30_RC4_128_SHA_8 0xc00c
-#define OP_PCL_SSL30_RC4_128_SHA_9 0xc011
-#define OP_PCL_SSL30_RC4_128_SHA_10 0xc016
-
-#define OP_PCL_SSL30_RC4_40_SHA 0x0028
-
-
-/* For TLS 1.0 - OP_PCLID_TLS10 */
-#define OP_PCL_TLS10_AES_128_CBC_SHA 0x002f
-#define OP_PCL_TLS10_AES_128_CBC_SHA_2 0x0030
-#define OP_PCL_TLS10_AES_128_CBC_SHA_3 0x0031
-#define OP_PCL_TLS10_AES_128_CBC_SHA_4 0x0032
-#define OP_PCL_TLS10_AES_128_CBC_SHA_5 0x0033
-#define OP_PCL_TLS10_AES_128_CBC_SHA_6 0x0034
-#define OP_PCL_TLS10_AES_128_CBC_SHA_7 0x008c
-#define OP_PCL_TLS10_AES_128_CBC_SHA_8 0x0090
-#define OP_PCL_TLS10_AES_128_CBC_SHA_9 0x0094
-#define OP_PCL_TLS10_AES_128_CBC_SHA_10 0xc004
-#define OP_PCL_TLS10_AES_128_CBC_SHA_11 0xc009
-#define OP_PCL_TLS10_AES_128_CBC_SHA_12 0xc00e
-#define OP_PCL_TLS10_AES_128_CBC_SHA_13 0xc013
-#define OP_PCL_TLS10_AES_128_CBC_SHA_14 0xc018
-#define OP_PCL_TLS10_AES_128_CBC_SHA_15 0xc01d
-#define OP_PCL_TLS10_AES_128_CBC_SHA_16 0xc01e
-#define OP_PCL_TLS10_AES_128_CBC_SHA_17 0xc01f
-
-#define OP_PCL_TLS10_AES_256_CBC_SHA 0x0035
-#define OP_PCL_TLS10_AES_256_CBC_SHA_2 0x0036
-#define OP_PCL_TLS10_AES_256_CBC_SHA_3 0x0037
-#define OP_PCL_TLS10_AES_256_CBC_SHA_4 0x0038
-#define OP_PCL_TLS10_AES_256_CBC_SHA_5 0x0039
-#define OP_PCL_TLS10_AES_256_CBC_SHA_6 0x003a
-#define OP_PCL_TLS10_AES_256_CBC_SHA_7 0x008d
-#define OP_PCL_TLS10_AES_256_CBC_SHA_8 0x0091
-#define OP_PCL_TLS10_AES_256_CBC_SHA_9 0x0095
-#define OP_PCL_TLS10_AES_256_CBC_SHA_10 0xc005
-#define OP_PCL_TLS10_AES_256_CBC_SHA_11 0xc00a
-#define OP_PCL_TLS10_AES_256_CBC_SHA_12 0xc00f
-#define OP_PCL_TLS10_AES_256_CBC_SHA_13 0xc014
-#define OP_PCL_TLS10_AES_256_CBC_SHA_14 0xc019
-#define OP_PCL_TLS10_AES_256_CBC_SHA_15 0xc020
-#define OP_PCL_TLS10_AES_256_CBC_SHA_16 0xc021
-#define OP_PCL_TLS10_AES_256_CBC_SHA_17 0xc022
-
-/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5 0x0023 */
-
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA 0x001f
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_2 0x008b
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_3 0x008f
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_4 0x0093
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_5 0x000a
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_6 0x000d
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_7 0x0010
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_8 0x0013
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_9 0x0016
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_10 0x001b
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_11 0xc003
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_12 0xc008
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_13 0xc00d
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_14 0xc012
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_15 0xc017
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_16 0xc01a
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_17 0xc01b
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA_18 0xc01c
-
-#define OP_PCL_TLS10_DES40_CBC_MD5 0x0029
-
-#define OP_PCL_TLS10_DES_CBC_MD5 0x0022
-
-#define OP_PCL_TLS10_DES40_CBC_SHA 0x0008
-#define OP_PCL_TLS10_DES40_CBC_SHA_2 0x000b
-#define OP_PCL_TLS10_DES40_CBC_SHA_3 0x000e
-#define OP_PCL_TLS10_DES40_CBC_SHA_4 0x0011
-#define OP_PCL_TLS10_DES40_CBC_SHA_5 0x0014
-#define OP_PCL_TLS10_DES40_CBC_SHA_6 0x0019
-#define OP_PCL_TLS10_DES40_CBC_SHA_7 0x0026
-
-
-#define OP_PCL_TLS10_DES_CBC_SHA 0x001e
-#define OP_PCL_TLS10_DES_CBC_SHA_2 0x0009
-#define OP_PCL_TLS10_DES_CBC_SHA_3 0x000c
-#define OP_PCL_TLS10_DES_CBC_SHA_4 0x000f
-#define OP_PCL_TLS10_DES_CBC_SHA_5 0x0012
-#define OP_PCL_TLS10_DES_CBC_SHA_6 0x0015
-#define OP_PCL_TLS10_DES_CBC_SHA_7 0x001a
-
-#define OP_PCL_TLS10_RC4_128_MD5 0x0024
-#define OP_PCL_TLS10_RC4_128_MD5_2 0x0004
-#define OP_PCL_TLS10_RC4_128_MD5_3 0x0018
-
-#define OP_PCL_TLS10_RC4_40_MD5 0x002b
-#define OP_PCL_TLS10_RC4_40_MD5_2 0x0003
-#define OP_PCL_TLS10_RC4_40_MD5_3 0x0017
-
-#define OP_PCL_TLS10_RC4_128_SHA 0x0020
-#define OP_PCL_TLS10_RC4_128_SHA_2 0x008a
-#define OP_PCL_TLS10_RC4_128_SHA_3 0x008e
-#define OP_PCL_TLS10_RC4_128_SHA_4 0x0092
-#define OP_PCL_TLS10_RC4_128_SHA_5 0x0005
-#define OP_PCL_TLS10_RC4_128_SHA_6 0xc002
-#define OP_PCL_TLS10_RC4_128_SHA_7 0xc007
-#define OP_PCL_TLS10_RC4_128_SHA_8 0xc00c
-#define OP_PCL_TLS10_RC4_128_SHA_9 0xc011
-#define OP_PCL_TLS10_RC4_128_SHA_10 0xc016
-
-#define OP_PCL_TLS10_RC4_40_SHA 0x0028
-
-#define OP_PCL_TLS10_3DES_EDE_CBC_MD5 0xff23
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA160 0xff30
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA224 0xff34
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA256 0xff36
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA384 0xff33
-#define OP_PCL_TLS10_3DES_EDE_CBC_SHA512 0xff35
-#define OP_PCL_TLS10_AES_128_CBC_SHA160 0xff80
-#define OP_PCL_TLS10_AES_128_CBC_SHA224 0xff84
-#define OP_PCL_TLS10_AES_128_CBC_SHA256 0xff86
-#define OP_PCL_TLS10_AES_128_CBC_SHA384 0xff83
-#define OP_PCL_TLS10_AES_128_CBC_SHA512 0xff85
-#define OP_PCL_TLS10_AES_192_CBC_SHA160 0xff20
-#define OP_PCL_TLS10_AES_192_CBC_SHA224 0xff24
-#define OP_PCL_TLS10_AES_192_CBC_SHA256 0xff26
-#define OP_PCL_TLS10_AES_192_CBC_SHA384 0xff23
-#define OP_PCL_TLS10_AES_192_CBC_SHA512 0xff25
-#define OP_PCL_TLS10_AES_256_CBC_SHA160 0xff60
-#define OP_PCL_TLS10_AES_256_CBC_SHA224 0xff64
-#define OP_PCL_TLS10_AES_256_CBC_SHA256 0xff66
-#define OP_PCL_TLS10_AES_256_CBC_SHA384 0xff63
-#define OP_PCL_TLS10_AES_256_CBC_SHA512 0xff65
-
-
-
-/* For TLS 1.1 - OP_PCLID_TLS11 */
-#define OP_PCL_TLS11_AES_128_CBC_SHA 0x002f
-#define OP_PCL_TLS11_AES_128_CBC_SHA_2 0x0030
-#define OP_PCL_TLS11_AES_128_CBC_SHA_3 0x0031
-#define OP_PCL_TLS11_AES_128_CBC_SHA_4 0x0032
-#define OP_PCL_TLS11_AES_128_CBC_SHA_5 0x0033
-#define OP_PCL_TLS11_AES_128_CBC_SHA_6 0x0034
-#define OP_PCL_TLS11_AES_128_CBC_SHA_7 0x008c
-#define OP_PCL_TLS11_AES_128_CBC_SHA_8 0x0090
-#define OP_PCL_TLS11_AES_128_CBC_SHA_9 0x0094
-#define OP_PCL_TLS11_AES_128_CBC_SHA_10 0xc004
-#define OP_PCL_TLS11_AES_128_CBC_SHA_11 0xc009
-#define OP_PCL_TLS11_AES_128_CBC_SHA_12 0xc00e
-#define OP_PCL_TLS11_AES_128_CBC_SHA_13 0xc013
-#define OP_PCL_TLS11_AES_128_CBC_SHA_14 0xc018
-#define OP_PCL_TLS11_AES_128_CBC_SHA_15 0xc01d
-#define OP_PCL_TLS11_AES_128_CBC_SHA_16 0xc01e
-#define OP_PCL_TLS11_AES_128_CBC_SHA_17 0xc01f
-
-#define OP_PCL_TLS11_AES_256_CBC_SHA 0x0035
-#define OP_PCL_TLS11_AES_256_CBC_SHA_2 0x0036
-#define OP_PCL_TLS11_AES_256_CBC_SHA_3 0x0037
-#define OP_PCL_TLS11_AES_256_CBC_SHA_4 0x0038
-#define OP_PCL_TLS11_AES_256_CBC_SHA_5 0x0039
-#define OP_PCL_TLS11_AES_256_CBC_SHA_6 0x003a
-#define OP_PCL_TLS11_AES_256_CBC_SHA_7 0x008d
-#define OP_PCL_TLS11_AES_256_CBC_SHA_8 0x0091
-#define OP_PCL_TLS11_AES_256_CBC_SHA_9 0x0095
-#define OP_PCL_TLS11_AES_256_CBC_SHA_10 0xc005
-#define OP_PCL_TLS11_AES_256_CBC_SHA_11 0xc00a
-#define OP_PCL_TLS11_AES_256_CBC_SHA_12 0xc00f
-#define OP_PCL_TLS11_AES_256_CBC_SHA_13 0xc014
-#define OP_PCL_TLS11_AES_256_CBC_SHA_14 0xc019
-#define OP_PCL_TLS11_AES_256_CBC_SHA_15 0xc020
-#define OP_PCL_TLS11_AES_256_CBC_SHA_16 0xc021
-#define OP_PCL_TLS11_AES_256_CBC_SHA_17 0xc022
-
-/* #define OP_PCL_TLS11_3DES_EDE_CBC_MD5 0x0023 */
-
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA 0x001f
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_2 0x008b
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_3 0x008f
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_4 0x0093
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_5 0x000a
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_6 0x000d
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_7 0x0010
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_8 0x0013
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_9 0x0016
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_10 0x001b
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_11 0xc003
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_12 0xc008
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_13 0xc00d
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_14 0xc012
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_15 0xc017
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_16 0xc01a
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_17 0xc01b
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA_18 0xc01c
-
-#define OP_PCL_TLS11_DES40_CBC_MD5 0x0029
-
-#define OP_PCL_TLS11_DES_CBC_MD5 0x0022
-
-#define OP_PCL_TLS11_DES40_CBC_SHA 0x0008
-#define OP_PCL_TLS11_DES40_CBC_SHA_2 0x000b
-#define OP_PCL_TLS11_DES40_CBC_SHA_3 0x000e
-#define OP_PCL_TLS11_DES40_CBC_SHA_4 0x0011
-#define OP_PCL_TLS11_DES40_CBC_SHA_5 0x0014
-#define OP_PCL_TLS11_DES40_CBC_SHA_6 0x0019
-#define OP_PCL_TLS11_DES40_CBC_SHA_7 0x0026
-
-#define OP_PCL_TLS11_DES_CBC_SHA 0x001e
-#define OP_PCL_TLS11_DES_CBC_SHA_2 0x0009
-#define OP_PCL_TLS11_DES_CBC_SHA_3 0x000c
-#define OP_PCL_TLS11_DES_CBC_SHA_4 0x000f
-#define OP_PCL_TLS11_DES_CBC_SHA_5 0x0012
-#define OP_PCL_TLS11_DES_CBC_SHA_6 0x0015
-#define OP_PCL_TLS11_DES_CBC_SHA_7 0x001a
-
-#define OP_PCL_TLS11_RC4_128_MD5 0x0024
-#define OP_PCL_TLS11_RC4_128_MD5_2 0x0004
-#define OP_PCL_TLS11_RC4_128_MD5_3 0x0018
-
-#define OP_PCL_TLS11_RC4_40_MD5 0x002b
-#define OP_PCL_TLS11_RC4_40_MD5_2 0x0003
-#define OP_PCL_TLS11_RC4_40_MD5_3 0x0017
-
-#define OP_PCL_TLS11_RC4_128_SHA 0x0020
-#define OP_PCL_TLS11_RC4_128_SHA_2 0x008a
-#define OP_PCL_TLS11_RC4_128_SHA_3 0x008e
-#define OP_PCL_TLS11_RC4_128_SHA_4 0x0092
-#define OP_PCL_TLS11_RC4_128_SHA_5 0x0005
-#define OP_PCL_TLS11_RC4_128_SHA_6 0xc002
-#define OP_PCL_TLS11_RC4_128_SHA_7 0xc007
-#define OP_PCL_TLS11_RC4_128_SHA_8 0xc00c
-#define OP_PCL_TLS11_RC4_128_SHA_9 0xc011
-#define OP_PCL_TLS11_RC4_128_SHA_10 0xc016
-
-#define OP_PCL_TLS11_RC4_40_SHA 0x0028
-
-#define OP_PCL_TLS11_3DES_EDE_CBC_MD5 0xff23
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA160 0xff30
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA224 0xff34
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA256 0xff36
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA384 0xff33
-#define OP_PCL_TLS11_3DES_EDE_CBC_SHA512 0xff35
-#define OP_PCL_TLS11_AES_128_CBC_SHA160 0xff80
-#define OP_PCL_TLS11_AES_128_CBC_SHA224 0xff84
-#define OP_PCL_TLS11_AES_128_CBC_SHA256 0xff86
-#define OP_PCL_TLS11_AES_128_CBC_SHA384 0xff83
-#define OP_PCL_TLS11_AES_128_CBC_SHA512 0xff85
-#define OP_PCL_TLS11_AES_192_CBC_SHA160 0xff20
-#define OP_PCL_TLS11_AES_192_CBC_SHA224 0xff24
-#define OP_PCL_TLS11_AES_192_CBC_SHA256 0xff26
-#define OP_PCL_TLS11_AES_192_CBC_SHA384 0xff23
-#define OP_PCL_TLS11_AES_192_CBC_SHA512 0xff25
-#define OP_PCL_TLS11_AES_256_CBC_SHA160 0xff60
-#define OP_PCL_TLS11_AES_256_CBC_SHA224 0xff64
-#define OP_PCL_TLS11_AES_256_CBC_SHA256 0xff66
-#define OP_PCL_TLS11_AES_256_CBC_SHA384 0xff63
-#define OP_PCL_TLS11_AES_256_CBC_SHA512 0xff65
-
-
-/* For TLS 1.2 - OP_PCLID_TLS12 */
-#define OP_PCL_TLS12_AES_128_CBC_SHA 0x002f
-#define OP_PCL_TLS12_AES_128_CBC_SHA_2 0x0030
-#define OP_PCL_TLS12_AES_128_CBC_SHA_3 0x0031
-#define OP_PCL_TLS12_AES_128_CBC_SHA_4 0x0032
-#define OP_PCL_TLS12_AES_128_CBC_SHA_5 0x0033
-#define OP_PCL_TLS12_AES_128_CBC_SHA_6 0x0034
-#define OP_PCL_TLS12_AES_128_CBC_SHA_7 0x008c
-#define OP_PCL_TLS12_AES_128_CBC_SHA_8 0x0090
-#define OP_PCL_TLS12_AES_128_CBC_SHA_9 0x0094
-#define OP_PCL_TLS12_AES_128_CBC_SHA_10 0xc004
-#define OP_PCL_TLS12_AES_128_CBC_SHA_11 0xc009
-#define OP_PCL_TLS12_AES_128_CBC_SHA_12 0xc00e
-#define OP_PCL_TLS12_AES_128_CBC_SHA_13 0xc013
-#define OP_PCL_TLS12_AES_128_CBC_SHA_14 0xc018
-#define OP_PCL_TLS12_AES_128_CBC_SHA_15 0xc01d
-#define OP_PCL_TLS12_AES_128_CBC_SHA_16 0xc01e
-#define OP_PCL_TLS12_AES_128_CBC_SHA_17 0xc01f
-
-#define OP_PCL_TLS12_AES_256_CBC_SHA 0x0035
-#define OP_PCL_TLS12_AES_256_CBC_SHA_2 0x0036
-#define OP_PCL_TLS12_AES_256_CBC_SHA_3 0x0037
-#define OP_PCL_TLS12_AES_256_CBC_SHA_4 0x0038
-#define OP_PCL_TLS12_AES_256_CBC_SHA_5 0x0039
-#define OP_PCL_TLS12_AES_256_CBC_SHA_6 0x003a
-#define OP_PCL_TLS12_AES_256_CBC_SHA_7 0x008d
-#define OP_PCL_TLS12_AES_256_CBC_SHA_8 0x0091
-#define OP_PCL_TLS12_AES_256_CBC_SHA_9 0x0095
-#define OP_PCL_TLS12_AES_256_CBC_SHA_10 0xc005
-#define OP_PCL_TLS12_AES_256_CBC_SHA_11 0xc00a
-#define OP_PCL_TLS12_AES_256_CBC_SHA_12 0xc00f
-#define OP_PCL_TLS12_AES_256_CBC_SHA_13 0xc014
-#define OP_PCL_TLS12_AES_256_CBC_SHA_14 0xc019
-#define OP_PCL_TLS12_AES_256_CBC_SHA_15 0xc020
-#define OP_PCL_TLS12_AES_256_CBC_SHA_16 0xc021
-#define OP_PCL_TLS12_AES_256_CBC_SHA_17 0xc022
-
-/* #define OP_PCL_TLS12_3DES_EDE_CBC_MD5 0x0023 */
-
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA 0x001f
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_2 0x008b
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_3 0x008f
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_4 0x0093
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_5 0x000a
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_6 0x000d
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_7 0x0010
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_8 0x0013
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_9 0x0016
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_10 0x001b
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_11 0xc003
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_12 0xc008
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_13 0xc00d
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_14 0xc012
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_15 0xc017
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_16 0xc01a
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_17 0xc01b
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA_18 0xc01c
-
-#define OP_PCL_TLS12_DES40_CBC_MD5 0x0029
-
-#define OP_PCL_TLS12_DES_CBC_MD5 0x0022
-
-#define OP_PCL_TLS12_DES40_CBC_SHA 0x0008
-#define OP_PCL_TLS12_DES40_CBC_SHA_2 0x000b
-#define OP_PCL_TLS12_DES40_CBC_SHA_3 0x000e
-#define OP_PCL_TLS12_DES40_CBC_SHA_4 0x0011
-#define OP_PCL_TLS12_DES40_CBC_SHA_5 0x0014
-#define OP_PCL_TLS12_DES40_CBC_SHA_6 0x0019
-#define OP_PCL_TLS12_DES40_CBC_SHA_7 0x0026
-
-#define OP_PCL_TLS12_DES_CBC_SHA 0x001e
-#define OP_PCL_TLS12_DES_CBC_SHA_2 0x0009
-#define OP_PCL_TLS12_DES_CBC_SHA_3 0x000c
-#define OP_PCL_TLS12_DES_CBC_SHA_4 0x000f
-#define OP_PCL_TLS12_DES_CBC_SHA_5 0x0012
-#define OP_PCL_TLS12_DES_CBC_SHA_6 0x0015
-#define OP_PCL_TLS12_DES_CBC_SHA_7 0x001a
-
-#define OP_PCL_TLS12_RC4_128_MD5 0x0024
-#define OP_PCL_TLS12_RC4_128_MD5_2 0x0004
-#define OP_PCL_TLS12_RC4_128_MD5_3 0x0018
-
-#define OP_PCL_TLS12_RC4_40_MD5 0x002b
-#define OP_PCL_TLS12_RC4_40_MD5_2 0x0003
-#define OP_PCL_TLS12_RC4_40_MD5_3 0x0017
-
-#define OP_PCL_TLS12_RC4_128_SHA 0x0020
-#define OP_PCL_TLS12_RC4_128_SHA_2 0x008a
-#define OP_PCL_TLS12_RC4_128_SHA_3 0x008e
-#define OP_PCL_TLS12_RC4_128_SHA_4 0x0092
-#define OP_PCL_TLS12_RC4_128_SHA_5 0x0005
-#define OP_PCL_TLS12_RC4_128_SHA_6 0xc002
-#define OP_PCL_TLS12_RC4_128_SHA_7 0xc007
-#define OP_PCL_TLS12_RC4_128_SHA_8 0xc00c
-#define OP_PCL_TLS12_RC4_128_SHA_9 0xc011
-#define OP_PCL_TLS12_RC4_128_SHA_10 0xc016
-
-#define OP_PCL_TLS12_RC4_40_SHA 0x0028
-
-/* #define OP_PCL_TLS12_AES_128_CBC_SHA256 0x003c */
-#define OP_PCL_TLS12_AES_128_CBC_SHA256_2 0x003e
-#define OP_PCL_TLS12_AES_128_CBC_SHA256_3 0x003f
-#define OP_PCL_TLS12_AES_128_CBC_SHA256_4 0x0040
-#define OP_PCL_TLS12_AES_128_CBC_SHA256_5 0x0067
-#define OP_PCL_TLS12_AES_128_CBC_SHA256_6 0x006c
-
-/* #define OP_PCL_TLS12_AES_256_CBC_SHA256 0x003d */
-#define OP_PCL_TLS12_AES_256_CBC_SHA256_2 0x0068
-#define OP_PCL_TLS12_AES_256_CBC_SHA256_3 0x0069
-#define OP_PCL_TLS12_AES_256_CBC_SHA256_4 0x006a
-#define OP_PCL_TLS12_AES_256_CBC_SHA256_5 0x006b
-#define OP_PCL_TLS12_AES_256_CBC_SHA256_6 0x006d
-
-/* AEAD_AES_xxx_CCM/GCM remain to be defined... */
-
-#define OP_PCL_TLS12_3DES_EDE_CBC_MD5 0xff23
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA160 0xff30
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA224 0xff34
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA256 0xff36
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA384 0xff33
-#define OP_PCL_TLS12_3DES_EDE_CBC_SHA512 0xff35
-#define OP_PCL_TLS12_AES_128_CBC_SHA160 0xff80
-#define OP_PCL_TLS12_AES_128_CBC_SHA224 0xff84
-#define OP_PCL_TLS12_AES_128_CBC_SHA256 0xff86
-#define OP_PCL_TLS12_AES_128_CBC_SHA384 0xff83
-#define OP_PCL_TLS12_AES_128_CBC_SHA512 0xff85
-#define OP_PCL_TLS12_AES_192_CBC_SHA160 0xff20
-#define OP_PCL_TLS12_AES_192_CBC_SHA224 0xff24
-#define OP_PCL_TLS12_AES_192_CBC_SHA256 0xff26
-#define OP_PCL_TLS12_AES_192_CBC_SHA384 0xff23
-#define OP_PCL_TLS12_AES_192_CBC_SHA512 0xff25
-#define OP_PCL_TLS12_AES_256_CBC_SHA160 0xff60
-#define OP_PCL_TLS12_AES_256_CBC_SHA224 0xff64
-#define OP_PCL_TLS12_AES_256_CBC_SHA256 0xff66
-#define OP_PCL_TLS12_AES_256_CBC_SHA384 0xff63
-#define OP_PCL_TLS12_AES_256_CBC_SHA512 0xff65
-
-/* For DTLS - OP_PCLID_DTLS */
-
-#define OP_PCL_DTLS_AES_128_CBC_SHA 0x002f
-#define OP_PCL_DTLS_AES_128_CBC_SHA_2 0x0030
-#define OP_PCL_DTLS_AES_128_CBC_SHA_3 0x0031
-#define OP_PCL_DTLS_AES_128_CBC_SHA_4 0x0032
-#define OP_PCL_DTLS_AES_128_CBC_SHA_5 0x0033
-#define OP_PCL_DTLS_AES_128_CBC_SHA_6 0x0034
-#define OP_PCL_DTLS_AES_128_CBC_SHA_7 0x008c
-#define OP_PCL_DTLS_AES_128_CBC_SHA_8 0x0090
-#define OP_PCL_DTLS_AES_128_CBC_SHA_9 0x0094
-#define OP_PCL_DTLS_AES_128_CBC_SHA_10 0xc004
-#define OP_PCL_DTLS_AES_128_CBC_SHA_11 0xc009
-#define OP_PCL_DTLS_AES_128_CBC_SHA_12 0xc00e
-#define OP_PCL_DTLS_AES_128_CBC_SHA_13 0xc013
-#define OP_PCL_DTLS_AES_128_CBC_SHA_14 0xc018
-#define OP_PCL_DTLS_AES_128_CBC_SHA_15 0xc01d
-#define OP_PCL_DTLS_AES_128_CBC_SHA_16 0xc01e
-#define OP_PCL_DTLS_AES_128_CBC_SHA_17 0xc01f
-
-#define OP_PCL_DTLS_AES_256_CBC_SHA 0x0035
-#define OP_PCL_DTLS_AES_256_CBC_SHA_2 0x0036
-#define OP_PCL_DTLS_AES_256_CBC_SHA_3 0x0037
-#define OP_PCL_DTLS_AES_256_CBC_SHA_4 0x0038
-#define OP_PCL_DTLS_AES_256_CBC_SHA_5 0x0039
-#define OP_PCL_DTLS_AES_256_CBC_SHA_6 0x003a
-#define OP_PCL_DTLS_AES_256_CBC_SHA_7 0x008d
-#define OP_PCL_DTLS_AES_256_CBC_SHA_8 0x0091
-#define OP_PCL_DTLS_AES_256_CBC_SHA_9 0x0095
-#define OP_PCL_DTLS_AES_256_CBC_SHA_10 0xc005
-#define OP_PCL_DTLS_AES_256_CBC_SHA_11 0xc00a
-#define OP_PCL_DTLS_AES_256_CBC_SHA_12 0xc00f
-#define OP_PCL_DTLS_AES_256_CBC_SHA_13 0xc014
-#define OP_PCL_DTLS_AES_256_CBC_SHA_14 0xc019
-#define OP_PCL_DTLS_AES_256_CBC_SHA_15 0xc020
-#define OP_PCL_DTLS_AES_256_CBC_SHA_16 0xc021
-#define OP_PCL_DTLS_AES_256_CBC_SHA_17 0xc022
-
-/* #define OP_PCL_DTLS_3DES_EDE_CBC_MD5 0x0023 */
-
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA 0x001f
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_2 0x008b
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_3 0x008f
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_4 0x0093
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_5 0x000a
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_6 0x000d
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_7 0x0010
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_8 0x0013
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_9 0x0016
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_10 0x001b
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_11 0xc003
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_12 0xc008
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_13 0xc00d
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_14 0xc012
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_15 0xc017
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_16 0xc01a
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_17 0xc01b
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA_18 0xc01c
-
-#define OP_PCL_DTLS_DES40_CBC_MD5 0x0029
-
-#define OP_PCL_DTLS_DES_CBC_MD5 0x0022
-
-#define OP_PCL_DTLS_DES40_CBC_SHA 0x0008
-#define OP_PCL_DTLS_DES40_CBC_SHA_2 0x000b
-#define OP_PCL_DTLS_DES40_CBC_SHA_3 0x000e
-#define OP_PCL_DTLS_DES40_CBC_SHA_4 0x0011
-#define OP_PCL_DTLS_DES40_CBC_SHA_5 0x0014
-#define OP_PCL_DTLS_DES40_CBC_SHA_6 0x0019
-#define OP_PCL_DTLS_DES40_CBC_SHA_7 0x0026
-
-
-#define OP_PCL_DTLS_DES_CBC_SHA 0x001e
-#define OP_PCL_DTLS_DES_CBC_SHA_2 0x0009
-#define OP_PCL_DTLS_DES_CBC_SHA_3 0x000c
-#define OP_PCL_DTLS_DES_CBC_SHA_4 0x000f
-#define OP_PCL_DTLS_DES_CBC_SHA_5 0x0012
-#define OP_PCL_DTLS_DES_CBC_SHA_6 0x0015
-#define OP_PCL_DTLS_DES_CBC_SHA_7 0x001a
-
-
-#define OP_PCL_DTLS_3DES_EDE_CBC_MD5 0xff23
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160 0xff30
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224 0xff34
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA256 0xff36
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA384 0xff33
-#define OP_PCL_DTLS_3DES_EDE_CBC_SHA512 0xff35
-#define OP_PCL_DTLS_AES_128_CBC_SHA160 0xff80
-#define OP_PCL_DTLS_AES_128_CBC_SHA224 0xff84
-#define OP_PCL_DTLS_AES_128_CBC_SHA256 0xff86
-#define OP_PCL_DTLS_AES_128_CBC_SHA384 0xff83
-#define OP_PCL_DTLS_AES_128_CBC_SHA512 0xff85
-#define OP_PCL_DTLS_AES_192_CBC_SHA160 0xff20
-#define OP_PCL_DTLS_AES_192_CBC_SHA224 0xff24
-#define OP_PCL_DTLS_AES_192_CBC_SHA256 0xff26
-#define OP_PCL_DTLS_AES_192_CBC_SHA384 0xff23
-#define OP_PCL_DTLS_AES_192_CBC_SHA512 0xff25
-#define OP_PCL_DTLS_AES_256_CBC_SHA160 0xff60
-#define OP_PCL_DTLS_AES_256_CBC_SHA224 0xff64
-#define OP_PCL_DTLS_AES_256_CBC_SHA256 0xff66
-#define OP_PCL_DTLS_AES_256_CBC_SHA384 0xff63
-#define OP_PCL_DTLS_AES_256_CBC_SHA512 0xff65
-
-/* 802.16 WiMAX protinfos */
-#define OP_PCL_WIMAX_OFDM 0x0201
-#define OP_PCL_WIMAX_OFDMA 0x0231
-
-/* 802.11 WiFi protinfos */
-#define OP_PCL_WIFI 0xac04
-
-/* MacSec protinfos */
-#define OP_PCL_MACSEC 0x0001
-
-/* PKI unidirectional protocol protinfo bits */
-#define OP_PCL_PKPROT_TEST 0x0008
-#define OP_PCL_PKPROT_DECRYPT 0x0004
-#define OP_PCL_PKPROT_ECC 0x0002
-#define OP_PCL_PKPROT_F2M 0x0001
-
-/* For non-protocol/alg-only op commands */
-#define OP_ALG_TYPE_SHIFT 24
-#define OP_ALG_TYPE_MASK (0x7 << OP_ALG_TYPE_SHIFT)
-#define OP_ALG_TYPE_CLASS1 2
-#define OP_ALG_TYPE_CLASS2 4
-
-#define OP_ALG_ALGSEL_SHIFT 16
-#define OP_ALG_ALGSEL_MASK (0xff << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SUBMASK (0x0f << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_AES (0x10 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_DES (0x20 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_3DES (0x21 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_ARC4 (0x30 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_MD5 (0x40 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SHA1 (0x41 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SHA224 (0x42 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SHA256 (0x43 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SHA384 (0x44 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SHA512 (0x45 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_RNG (0x50 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SNOW (0x60 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SNOW_F8 (0x60 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_KASUMI (0x70 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_CRC (0x90 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SNOW_F9 (0xA0 << OP_ALG_ALGSEL_SHIFT)
-
-#define OP_ALG_AAI_SHIFT 4
-#define OP_ALG_AAI_MASK (0x1ff << OP_ALG_AAI_SHIFT)
-
-/* blockcipher AAI set */
-#define OP_ALG_AAI_CTR_MOD128 (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD8 (0x01 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD16 (0x02 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD24 (0x03 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD32 (0x04 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD40 (0x05 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD48 (0x06 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD56 (0x07 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD64 (0x08 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD72 (0x09 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD80 (0x0a << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD88 (0x0b << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD96 (0x0c << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD104 (0x0d << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD112 (0x0e << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_MOD120 (0x0f << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CBC (0x10 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_ECB (0x20 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CFB (0x30 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_OFB (0x40 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_XTS (0x50 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CMAC (0x60 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_XCBC_MAC (0x70 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CCM (0x80 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_GCM (0x90 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CBC_XCBCMAC (0xa0 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CTR_XCBCMAC (0xb0 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CHECKODD (0x80 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_DK (0x100 << OP_ALG_AAI_SHIFT)
-
-/* randomizer AAI set */
-#define OP_ALG_AAI_RNG (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG_NZB (0x10 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG_OBP (0x20 << OP_ALG_AAI_SHIFT)
-
-/* RNG4 AAI set */
-#define OP_ALG_AAI_RNG4_SH_0 (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG4_SH_1 (0x01 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG4_PS (0x40 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG4_AI (0x80 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG4_SK (0x100 << OP_ALG_AAI_SHIFT)
-
-/* hmac/smac AAI set */
-#define OP_ALG_AAI_HASH (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_HMAC (0x01 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_SMAC (0x02 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_HMAC_PRECOMP (0x04 << OP_ALG_AAI_SHIFT)
-
-/* CRC AAI set*/
-#define OP_ALG_AAI_802 (0x01 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_3385 (0x02 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_CUST_POLY (0x04 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_DIS (0x10 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_DOS (0x20 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_DOC (0x40 << OP_ALG_AAI_SHIFT)
-
-/* Kasumi/SNOW AAI set */
-#define OP_ALG_AAI_F8 (0xc0 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_F9 (0xc8 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_GSM (0x10 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_EDGE (0x20 << OP_ALG_AAI_SHIFT)
-
-#define OP_ALG_AS_SHIFT 2
-#define OP_ALG_AS_MASK (0x3 << OP_ALG_AS_SHIFT)
-#define OP_ALG_AS_UPDATE (0 << OP_ALG_AS_SHIFT)
-#define OP_ALG_AS_INIT (1 << OP_ALG_AS_SHIFT)
-#define OP_ALG_AS_FINALIZE (2 << OP_ALG_AS_SHIFT)
-#define OP_ALG_AS_INITFINAL (3 << OP_ALG_AS_SHIFT)
-
-#define OP_ALG_ICV_SHIFT 1
-#define OP_ALG_ICV_MASK (1 << OP_ALG_ICV_SHIFT)
-#define OP_ALG_ICV_OFF (0 << OP_ALG_ICV_SHIFT)
-#define OP_ALG_ICV_ON (1 << OP_ALG_ICV_SHIFT)
-
-#define OP_ALG_DIR_SHIFT 0
-#define OP_ALG_DIR_MASK 1
-#define OP_ALG_DECRYPT 0
-#define OP_ALG_ENCRYPT 1
-
-/* PKHA algorithm type set */
-#define OP_ALG_PK 0x00800000
-#define OP_ALG_PK_FUN_MASK 0x3f /* clrmem, modmath, or cpymem */
-
-/* PKHA mode clear memory functions */
-#define OP_ALG_PKMODE_A_RAM 0x80000
-#define OP_ALG_PKMODE_B_RAM 0x40000
-#define OP_ALG_PKMODE_E_RAM 0x20000
-#define OP_ALG_PKMODE_N_RAM 0x10000
-#define OP_ALG_PKMODE_CLEARMEM 0x00001
-
-/* PKHA mode modular-arithmetic functions */
-#define OP_ALG_PKMODE_MOD_IN_MONTY 0x80000
-#define OP_ALG_PKMODE_MOD_OUT_MONTY 0x40000
-#define OP_ALG_PKMODE_MOD_F2M 0x20000
-#define OP_ALG_PKMODE_MOD_R2_IN 0x10000
-#define OP_ALG_PKMODE_PRJECTV 0x00800
-#define OP_ALG_PKMODE_TIME_EQ 0x400
-#define OP_ALG_PKMODE_OUT_B 0x000
-#define OP_ALG_PKMODE_OUT_A 0x100
-#define OP_ALG_PKMODE_MOD_ADD 0x002
-#define OP_ALG_PKMODE_MOD_SUB_AB 0x003
-#define OP_ALG_PKMODE_MOD_SUB_BA 0x004
-#define OP_ALG_PKMODE_MOD_MULT 0x005
-#define OP_ALG_PKMODE_MOD_EXPO 0x006
-#define OP_ALG_PKMODE_MOD_REDUCT 0x007
-#define OP_ALG_PKMODE_MOD_INV 0x008
-#define OP_ALG_PKMODE_MOD_ECC_ADD 0x009
-#define OP_ALG_PKMODE_MOD_ECC_DBL 0x00a
-#define OP_ALG_PKMODE_MOD_ECC_MULT 0x00b
-#define OP_ALG_PKMODE_MOD_MONT_CNST 0x00c
-#define OP_ALG_PKMODE_MOD_CRT_CNST 0x00d
-#define OP_ALG_PKMODE_MOD_GCD 0x00e
-#define OP_ALG_PKMODE_MOD_PRIMALITY 0x00f
-
-/* PKHA mode copy-memory functions */
-#define OP_ALG_PKMODE_SRC_REG_SHIFT 17
-#define OP_ALG_PKMODE_SRC_REG_MASK (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_SHIFT 10
-#define OP_ALG_PKMODE_DST_REG_MASK (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_SHIFT 8
-#define OP_ALG_PKMODE_SRC_SEG_MASK (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_SHIFT 6
-#define OP_ALG_PKMODE_DST_SEG_MASK (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-
-#define OP_ALG_PKMODE_SRC_REG_A (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_REG_B (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_REG_N (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_A (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_B (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_E (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_N (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_0 (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_1 (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_2 (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_3 (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_0 (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_1 (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_2 (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_3 (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_CPYMEM_N_SZ 0x80
-#define OP_ALG_PKMODE_CPYMEM_SRC_SZ 0x81
-
-/*
- * SEQ_IN_PTR Command Constructs
- */
-
-/* Release Buffers */
-#define SQIN_RBS 0x04000000
-
-/* Sequence pointer is really a descriptor */
-#define SQIN_INL 0x02000000
-
-/* Sequence pointer is a scatter-gather table */
-#define SQIN_SGF 0x01000000
-
-/* Appends to a previous pointer */
-#define SQIN_PRE 0x00800000
-
-/* Use extended length following pointer */
-#define SQIN_EXT 0x00400000
-
-/* Restore sequence with pointer/length */
-#define SQIN_RTO 0x00200000
-
-/* Replace job descriptor */
-#define SQIN_RJD 0x00100000
-
-#define SQIN_LEN_SHIFT 0
-#define SQIN_LEN_MASK (0xffff << SQIN_LEN_SHIFT)
-
-/*
- * SEQ_OUT_PTR Command Constructs
- */
-
-/* Sequence pointer is a scatter-gather table */
-#define SQOUT_SGF 0x01000000
-
-/* Appends to a previous pointer */
-#define SQOUT_PRE SQIN_PRE
-
-/* Restore sequence with pointer/length */
-#define SQOUT_RTO SQIN_RTO
-
-/* Use extended length following pointer */
-#define SQOUT_EXT 0x00400000
-
-#define SQOUT_LEN_SHIFT 0
-#define SQOUT_LEN_MASK (0xffff << SQOUT_LEN_SHIFT)
-
-
-/*
- * SIGNATURE Command Constructs
- */
-
-/* TYPE field is all that's relevant */
-#define SIGN_TYPE_SHIFT 16
-#define SIGN_TYPE_MASK (0x0f << SIGN_TYPE_SHIFT)
-
-#define SIGN_TYPE_FINAL (0x00 << SIGN_TYPE_SHIFT)
-#define SIGN_TYPE_FINAL_RESTORE (0x01 << SIGN_TYPE_SHIFT)
-#define SIGN_TYPE_FINAL_NONZERO (0x02 << SIGN_TYPE_SHIFT)
-#define SIGN_TYPE_IMM_2 (0x0a << SIGN_TYPE_SHIFT)
-#define SIGN_TYPE_IMM_3 (0x0b << SIGN_TYPE_SHIFT)
-#define SIGN_TYPE_IMM_4 (0x0c << SIGN_TYPE_SHIFT)
-
-/*
- * MOVE Command Constructs
- */
-
-#define MOVE_AUX_SHIFT 25
-#define MOVE_AUX_MASK (3 << MOVE_AUX_SHIFT)
-#define MOVE_AUX_MS (2 << MOVE_AUX_SHIFT)
-#define MOVE_AUX_LS (1 << MOVE_AUX_SHIFT)
-
-#define MOVE_WAITCOMP_SHIFT 24
-#define MOVE_WAITCOMP_MASK (1 << MOVE_WAITCOMP_SHIFT)
-#define MOVE_WAITCOMP (1 << MOVE_WAITCOMP_SHIFT)
-
-#define MOVE_SRC_SHIFT 20
-#define MOVE_SRC_MASK (0x0f << MOVE_SRC_SHIFT)
-#define MOVE_SRC_CLASS1CTX (0x00 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_CLASS2CTX (0x01 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_OUTFIFO (0x02 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_DESCBUF (0x03 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_MATH0 (0x04 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_MATH1 (0x05 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_MATH2 (0x06 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_MATH3 (0x07 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_INFIFO (0x08 << MOVE_SRC_SHIFT)
-#define MOVE_SRC_INFIFO_CL (0x09 << MOVE_SRC_SHIFT)
-
-#define MOVE_DEST_SHIFT 16
-#define MOVE_DEST_MASK (0x0f << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS1CTX (0x00 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS2CTX (0x01 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_OUTFIFO (0x02 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_DESCBUF (0x03 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_MATH0 (0x04 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_MATH1 (0x05 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_MATH2 (0x06 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_MATH3 (0x07 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS1INFIFO (0x08 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS2INFIFO (0x09 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_INFIFO_NOINFO (0x0a << MOVE_DEST_SHIFT)
-#define MOVE_DEST_PK_A (0x0c << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS1KEY (0x0d << MOVE_DEST_SHIFT)
-#define MOVE_DEST_CLASS2KEY (0x0e << MOVE_DEST_SHIFT)
-
-#define MOVE_OFFSET_SHIFT 8
-#define MOVE_OFFSET_MASK (0xff << MOVE_OFFSET_SHIFT)
-
-#define MOVE_LEN_SHIFT 0
-#define MOVE_LEN_MASK (0xff << MOVE_LEN_SHIFT)
-
-#define MOVELEN_MRSEL_SHIFT 0
-#define MOVELEN_MRSEL_MASK (0x3 << MOVE_LEN_SHIFT)
-
-/*
- * MATH Command Constructs
- */
-
-#define MATH_IFB_SHIFT 26
-#define MATH_IFB_MASK (1 << MATH_IFB_SHIFT)
-#define MATH_IFB (1 << MATH_IFB_SHIFT)
-
-#define MATH_NFU_SHIFT 25
-#define MATH_NFU_MASK (1 << MATH_NFU_SHIFT)
-#define MATH_NFU (1 << MATH_NFU_SHIFT)
-
-#define MATH_STL_SHIFT 24
-#define MATH_STL_MASK (1 << MATH_STL_SHIFT)
-#define MATH_STL (1 << MATH_STL_SHIFT)
-
-/* Function selectors */
-#define MATH_FUN_SHIFT 20
-#define MATH_FUN_MASK (0x0f << MATH_FUN_SHIFT)
-#define MATH_FUN_ADD (0x00 << MATH_FUN_SHIFT)
-#define MATH_FUN_ADDC (0x01 << MATH_FUN_SHIFT)
-#define MATH_FUN_SUB (0x02 << MATH_FUN_SHIFT)
-#define MATH_FUN_SUBB (0x03 << MATH_FUN_SHIFT)
-#define MATH_FUN_OR (0x04 << MATH_FUN_SHIFT)
-#define MATH_FUN_AND (0x05 << MATH_FUN_SHIFT)
-#define MATH_FUN_XOR (0x06 << MATH_FUN_SHIFT)
-#define MATH_FUN_LSHIFT (0x07 << MATH_FUN_SHIFT)
-#define MATH_FUN_RSHIFT (0x08 << MATH_FUN_SHIFT)
-#define MATH_FUN_SHLD (0x09 << MATH_FUN_SHIFT)
-#define MATH_FUN_ZBYT (0x0a << MATH_FUN_SHIFT)
-
-/* Source 0 selectors */
-#define MATH_SRC0_SHIFT 16
-#define MATH_SRC0_MASK (0x0f << MATH_SRC0_SHIFT)
-#define MATH_SRC0_REG0 (0x00 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_REG1 (0x01 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_REG2 (0x02 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_REG3 (0x03 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_IMM (0x04 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_DPOVRD (0x07 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_SEQINLEN (0x08 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_SEQOUTLEN (0x09 << MATH_SRC0_SHIFT)
-#define MATH_SRC0_VARSEQINLEN (0x0a << MATH_SRC0_SHIFT)
-#define MATH_SRC0_VARSEQOUTLEN (0x0b << MATH_SRC0_SHIFT)
-#define MATH_SRC0_ZERO (0x0c << MATH_SRC0_SHIFT)
-
-/* Source 1 selectors */
-#define MATH_SRC1_SHIFT 12
-#define MATH_SRC1_MASK (0x0f << MATH_SRC1_SHIFT)
-#define MATH_SRC1_REG0 (0x00 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_REG1 (0x01 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_REG2 (0x02 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_REG3 (0x03 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_IMM (0x04 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_DPOVRD (0x07 << MATH_SRC0_SHIFT)
-#define MATH_SRC1_INFIFO (0x0a << MATH_SRC1_SHIFT)
-#define MATH_SRC1_OUTFIFO (0x0b << MATH_SRC1_SHIFT)
-#define MATH_SRC1_ONE (0x0c << MATH_SRC1_SHIFT)
-
-/* Destination selectors */
-#define MATH_DEST_SHIFT 8
-#define MATH_DEST_MASK (0x0f << MATH_DEST_SHIFT)
-#define MATH_DEST_REG0 (0x00 << MATH_DEST_SHIFT)
-#define MATH_DEST_REG1 (0x01 << MATH_DEST_SHIFT)
-#define MATH_DEST_REG2 (0x02 << MATH_DEST_SHIFT)
-#define MATH_DEST_REG3 (0x03 << MATH_DEST_SHIFT)
-#define MATH_DEST_SEQINLEN (0x08 << MATH_DEST_SHIFT)
-#define MATH_DEST_SEQOUTLEN (0x09 << MATH_DEST_SHIFT)
-#define MATH_DEST_VARSEQINLEN (0x0a << MATH_DEST_SHIFT)
-#define MATH_DEST_VARSEQOUTLEN (0x0b << MATH_DEST_SHIFT)
-#define MATH_DEST_NONE (0x0f << MATH_DEST_SHIFT)
-
-/* Length selectors */
-#define MATH_LEN_SHIFT 0
-#define MATH_LEN_MASK (0x0f << MATH_LEN_SHIFT)
-#define MATH_LEN_1BYTE 0x01
-#define MATH_LEN_2BYTE 0x02
-#define MATH_LEN_4BYTE 0x04
-#define MATH_LEN_8BYTE 0x08
-
-/*
- * JUMP Command Constructs
- */
-
-#define JUMP_CLASS_SHIFT 25
-#define JUMP_CLASS_MASK (3 << JUMP_CLASS_SHIFT)
-#define JUMP_CLASS_NONE 0
-#define JUMP_CLASS_CLASS1 (1 << JUMP_CLASS_SHIFT)
-#define JUMP_CLASS_CLASS2 (2 << JUMP_CLASS_SHIFT)
-#define JUMP_CLASS_BOTH (3 << JUMP_CLASS_SHIFT)
-
-#define JUMP_JSL_SHIFT 24
-#define JUMP_JSL_MASK (1 << JUMP_JSL_SHIFT)
-#define JUMP_JSL (1 << JUMP_JSL_SHIFT)
-
-#define JUMP_TYPE_SHIFT 22
-#define JUMP_TYPE_MASK (0x03 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_LOCAL (0x00 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_NONLOCAL (0x01 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_HALT (0x02 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_HALT_USER (0x03 << JUMP_TYPE_SHIFT)
-
-#define JUMP_TEST_SHIFT 16
-#define JUMP_TEST_MASK (0x03 << JUMP_TEST_SHIFT)
-#define JUMP_TEST_ALL (0x00 << JUMP_TEST_SHIFT)
-#define JUMP_TEST_INVALL (0x01 << JUMP_TEST_SHIFT)
-#define JUMP_TEST_ANY (0x02 << JUMP_TEST_SHIFT)
-#define JUMP_TEST_INVANY (0x03 << JUMP_TEST_SHIFT)
-
-/* Condition codes. JSL bit is factored in */
-#define JUMP_COND_SHIFT 8
-#define JUMP_COND_MASK (0x100ff << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_0 (0x80 << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_GCD_1 (0x40 << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_PRIME (0x20 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_N (0x08 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_Z (0x04 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_C (0x02 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_NV (0x01 << JUMP_COND_SHIFT)
-
-#define JUMP_COND_JRP ((0x80 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_SHRD ((0x40 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_SELF ((0x20 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_CALM ((0x10 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NIP ((0x08 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NIFP ((0x04 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NOP ((0x02 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NCP ((0x01 << JUMP_COND_SHIFT) | JUMP_JSL)
-
-#define JUMP_OFFSET_SHIFT 0
-#define JUMP_OFFSET_MASK (0xff << JUMP_OFFSET_SHIFT)
-
-/*
- * NFIFO ENTRY
- * Data Constructs
- *
- */
-#define NFIFOENTRY_DEST_SHIFT 30
-#define NFIFOENTRY_DEST_MASK (3 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_DECO (0 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_CLASS1 (1 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_CLASS2 (2 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_BOTH (3 << NFIFOENTRY_DEST_SHIFT)
-
-#define NFIFOENTRY_LC2_SHIFT 29
-#define NFIFOENTRY_LC2_MASK (1 << NFIFOENTRY_LC2_SHIFT)
-#define NFIFOENTRY_LC2 (1 << NFIFOENTRY_LC2_SHIFT)
-
-#define NFIFOENTRY_LC1_SHIFT 28
-#define NFIFOENTRY_LC1_MASK (1 << NFIFOENTRY_LC1_SHIFT)
-#define NFIFOENTRY_LC1 (1 << NFIFOENTRY_LC1_SHIFT)
-
-#define NFIFOENTRY_FC2_SHIFT 27
-#define NFIFOENTRY_FC2_MASK (1 << NFIFOENTRY_FC2_SHIFT)
-#define NFIFOENTRY_FC2 (1 << NFIFOENTRY_FC2_SHIFT)
-
-#define NFIFOENTRY_FC1_SHIFT 26
-#define NFIFOENTRY_FC1_MASK (1 << NFIFOENTRY_FC1_SHIFT)
-#define NFIFOENTRY_FC1 (1 << NFIFOENTRY_FC1_SHIFT)
-
-#define NFIFOENTRY_STYPE_SHIFT 24
-#define NFIFOENTRY_STYPE_MASK (3 << NFIFOENTRY_STYPE_SHIFT)
-#define NFIFOENTRY_STYPE_DFIFO (0 << NFIFOENTRY_STYPE_SHIFT)
-#define NFIFOENTRY_STYPE_OFIFO (1 << NFIFOENTRY_STYPE_SHIFT)
-#define NFIFOENTRY_STYPE_PAD (2 << NFIFOENTRY_STYPE_SHIFT)
-#define NFIFOENTRY_STYPE_SNOOP (3 << NFIFOENTRY_STYPE_SHIFT)
-
-#define NFIFOENTRY_DTYPE_SHIFT 20
-#define NFIFOENTRY_DTYPE_MASK (0xF << NFIFOENTRY_DTYPE_SHIFT)
-
-#define NFIFOENTRY_DTYPE_SBOX (0x0 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_AAD (0x1 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_IV (0x2 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_SAD (0x3 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_ICV (0xA << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_SKIP (0xE << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_MSG (0xF << NFIFOENTRY_DTYPE_SHIFT)
-
-#define NFIFOENTRY_DTYPE_PK_A0 (0x0 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_A1 (0x1 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_A2 (0x2 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_A3 (0x3 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_B0 (0x4 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_B1 (0x5 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_B2 (0x6 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_B3 (0x7 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_N (0x8 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_E (0x9 << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_A (0xC << NFIFOENTRY_DTYPE_SHIFT)
-#define NFIFOENTRY_DTYPE_PK_B (0xD << NFIFOENTRY_DTYPE_SHIFT)
-
-
-#define NFIFOENTRY_BND_SHIFT 19
-#define NFIFOENTRY_BND_MASK (1 << NFIFOENTRY_BND_SHIFT)
-#define NFIFOENTRY_BND (1 << NFIFOENTRY_BND_SHIFT)
-
-#define NFIFOENTRY_PTYPE_SHIFT 16
-#define NFIFOENTRY_PTYPE_MASK (0x7 << NFIFOENTRY_PTYPE_SHIFT)
-
-#define NFIFOENTRY_PTYPE_ZEROS (0x0 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_RND_NOZEROS (0x1 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_INCREMENT (0x2 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_RND (0x3 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_ZEROS_NZ (0x4 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_RND_NZ_LZ (0x5 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_N (0x6 << NFIFOENTRY_PTYPE_SHIFT)
-#define NFIFOENTRY_PTYPE_RND_NZ_N (0x7 << NFIFOENTRY_PTYPE_SHIFT)
-
-#define NFIFOENTRY_OC_SHIFT 15
-#define NFIFOENTRY_OC_MASK (1 << NFIFOENTRY_OC_SHIFT)
-#define NFIFOENTRY_OC (1 << NFIFOENTRY_OC_SHIFT)
-
-#define NFIFOENTRY_AST_SHIFT 14
-#define NFIFOENTRY_AST_MASK (1 << NFIFOENTRY_OC_SHIFT)
-#define NFIFOENTRY_AST (1 << NFIFOENTRY_OC_SHIFT)
-
-#define NFIFOENTRY_BM_SHIFT 11
-#define NFIFOENTRY_BM_MASK (1 << NFIFOENTRY_BM_SHIFT)
-#define NFIFOENTRY_BM (1 << NFIFOENTRY_BM_SHIFT)
-
-#define NFIFOENTRY_PS_SHIFT 10
-#define NFIFOENTRY_PS_MASK (1 << NFIFOENTRY_PS_SHIFT)
-#define NFIFOENTRY_PS (1 << NFIFOENTRY_PS_SHIFT)
-
-#define NFIFOENTRY_DLEN_SHIFT 0
-#define NFIFOENTRY_DLEN_MASK (0xFFF << NFIFOENTRY_DLEN_SHIFT)
-
-#define NFIFOENTRY_PLEN_SHIFT 0
-#define NFIFOENTRY_PLEN_MASK (0xFF << NFIFOENTRY_PLEN_SHIFT)
-
-/* Append Load Immediate Command */
-#define FD_CMD_APPEND_LOAD_IMMEDIATE 0x80000000
-
-/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
-#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN 0x40000000
-
-/* Frame Descriptor Command for Replacement Job Descriptor */
-#define FD_CMD_REPLACE_JOB_DESC 0x20000000
-
-#endif /* DESC_H */
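
For context, a protocol OPERATION command word is built by OR'ing one of the
OP_PCLID_* ProtocolID values above with the matching OP_PCL_* ProtocolInfo
selectors. A minimal sketch for IPsec ESP encapsulation with AES-CBC and
HMAC-SHA1-96, assuming CMD_OPERATION and OP_TYPE_ENCAP_PROTOCOL from the
earlier part of desc.h that is not visible in this hunk:

/*
 * Illustrative only, not part of this patch: protocol OPERATION word for
 * IPsec ESP encapsulation, AES-CBC cipher + HMAC-SHA1-96 authentication.
 * CMD_OPERATION and OP_TYPE_ENCAP_PROTOCOL are defined in the portion of
 * desc.h not shown in this hunk.
 */
u32 op = CMD_OPERATION | OP_TYPE_ENCAP_PROTOCOL |
	 OP_PCLID_IPSEC |		/* ESP protocol descriptor */
	 OP_PCL_IPSEC_AES_CBC |		/* cipher selector (0x0c00) */
	 OP_PCL_IPSEC_HMAC_SHA1_96;	/* auth selector (0x0002) */
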
diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h
deleted file mode 100644
index 7eec20bb3849..000000000000
--- a/drivers/crypto/caam/desc_constr.h
+++ /dev/null
@@ -1,388 +0,0 @@
-/*
- * caam descriptor construction helper functions
- *
- * Copyright 2008-2012 Freescale Semiconductor, Inc.
- */
-
-#include "desc.h"
-
-#define IMMEDIATE (1 << 23)
-#define CAAM_CMD_SZ sizeof(u32)
-#define CAAM_PTR_SZ sizeof(dma_addr_t)
-#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
-#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
-
-#ifdef DEBUG
-#define PRINT_POS do { printk(KERN_DEBUG "%02d: %s\n", desc_len(desc),\
- &__func__[sizeof("append")]); } while (0)
-#else
-#define PRINT_POS
-#endif
-
-#define SET_OK_NO_PROP_ERRORS (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_CHG_SHARE_OK_NO_PROP << \
- LDST_OFFSET_SHIFT))
-#define DISABLE_AUTO_INFO_FIFO (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_DISABLE_AUTO_NFIFO << LDST_OFFSET_SHIFT))
-#define ENABLE_AUTO_INFO_FIFO (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_ENABLE_AUTO_NFIFO << LDST_OFFSET_SHIFT))
-
-static inline int desc_len(u32 *desc)
-{
- return *desc & HDR_DESCLEN_MASK;
-}
-
-static inline int desc_bytes(void *desc)
-{
- return desc_len(desc) * CAAM_CMD_SZ;
-}
-
-static inline u32 *desc_end(u32 *desc)
-{
- return desc + desc_len(desc);
-}
-
-static inline void *sh_desc_pdb(u32 *desc)
-{
- return desc + 1;
-}
-
-static inline void init_desc(u32 *desc, u32 options)
-{
- *desc = (options | HDR_ONE) + 1;
-}
-
-static inline void init_sh_desc(u32 *desc, u32 options)
-{
- PRINT_POS;
- init_desc(desc, CMD_SHARED_DESC_HDR | options);
-}
-
-static inline void init_sh_desc_pdb(u32 *desc, u32 options, size_t pdb_bytes)
-{
- u32 pdb_len = (pdb_bytes + CAAM_CMD_SZ - 1) / CAAM_CMD_SZ;
-
- init_sh_desc(desc, (((pdb_len + 1) << HDR_START_IDX_SHIFT) + pdb_len) |
- options);
-}
-
-static inline void init_job_desc(u32 *desc, u32 options)
-{
- init_desc(desc, CMD_DESC_HDR | options);
-}
-
-static inline void append_ptr(u32 *desc, dma_addr_t ptr)
-{
- dma_addr_t *offset = (dma_addr_t *)desc_end(desc);
-
- *offset = ptr;
-
- (*desc) += CAAM_PTR_SZ / CAAM_CMD_SZ;
-}
-
-static inline void init_job_desc_shared(u32 *desc, dma_addr_t ptr, int len,
- u32 options)
-{
- PRINT_POS;
- init_job_desc(desc, HDR_SHARED | options |
- (len << HDR_START_IDX_SHIFT));
- append_ptr(desc, ptr);
-}
-
-static inline void append_data(u32 *desc, void *data, int len)
-{
- u32 *offset = desc_end(desc);
-
- if (len) /* avoid sparse warning: memcpy with byte count of 0 */
- memcpy(offset, data, len);
-
- (*desc) += (len + CAAM_CMD_SZ - 1) / CAAM_CMD_SZ;
-}
-
-static inline void append_cmd(u32 *desc, u32 command)
-{
- u32 *cmd = desc_end(desc);
-
- *cmd = command;
-
- (*desc)++;
-}
-
-#define append_u32 append_cmd
-
-static inline void append_u64(u32 *desc, u64 data)
-{
- u32 *offset = desc_end(desc);
-
- *offset = upper_32_bits(data);
- *(++offset) = lower_32_bits(data);
-
- (*desc) += 2;
-}
-
-/* Write command without affecting header, and return pointer to next word */
-static inline u32 *write_cmd(u32 *desc, u32 command)
-{
- *desc = command;
-
- return desc + 1;
-}
-
-static inline void append_cmd_ptr(u32 *desc, dma_addr_t ptr, int len,
- u32 command)
-{
- append_cmd(desc, command | len);
- append_ptr(desc, ptr);
-}
-
-/* Write length after pointer, rather than inside command */
-static inline void append_cmd_ptr_extlen(u32 *desc, dma_addr_t ptr,
- unsigned int len, u32 command)
-{
- append_cmd(desc, command);
- if (!(command & (SQIN_RTO | SQIN_PRE)))
- append_ptr(desc, ptr);
- append_cmd(desc, len);
-}
-
-static inline void append_cmd_data(u32 *desc, void *data, int len,
- u32 command)
-{
- append_cmd(desc, command | IMMEDIATE | len);
- append_data(desc, data, len);
-}
-
-#define APPEND_CMD_RET(cmd, op) \
-static inline u32 *append_##cmd(u32 *desc, u32 options) \
-{ \
- u32 *cmd = desc_end(desc); \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | options); \
- return cmd; \
-}
-APPEND_CMD_RET(jump, JUMP)
-APPEND_CMD_RET(move, MOVE)
-
-static inline void set_jump_tgt_here(u32 *desc, u32 *jump_cmd)
-{
- *jump_cmd = *jump_cmd | (desc_len(desc) - (jump_cmd - desc));
-}
-
-static inline void set_move_tgt_here(u32 *desc, u32 *move_cmd)
-{
- *move_cmd &= ~MOVE_OFFSET_MASK;
- *move_cmd = *move_cmd | ((desc_len(desc) << (MOVE_OFFSET_SHIFT + 2)) &
- MOVE_OFFSET_MASK);
-}
-
-#define APPEND_CMD(cmd, op) \
-static inline void append_##cmd(u32 *desc, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | options); \
-}
-APPEND_CMD(operation, OPERATION)
-
-#define APPEND_CMD_LEN(cmd, op) \
-static inline void append_##cmd(u32 *desc, unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | len | options); \
-}
-APPEND_CMD_LEN(seq_store, SEQ_STORE)
-APPEND_CMD_LEN(seq_fifo_load, SEQ_FIFO_LOAD)
-APPEND_CMD_LEN(seq_fifo_store, SEQ_FIFO_STORE)
-
-#define APPEND_CMD_PTR(cmd, op) \
-static inline void append_##cmd(u32 *desc, dma_addr_t ptr, unsigned int len, \
- u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_ptr(desc, ptr, len, CMD_##op | options); \
-}
-APPEND_CMD_PTR(key, KEY)
-APPEND_CMD_PTR(load, LOAD)
-APPEND_CMD_PTR(fifo_load, FIFO_LOAD)
-APPEND_CMD_PTR(fifo_store, FIFO_STORE)
-
-static inline void append_store(u32 *desc, dma_addr_t ptr, unsigned int len,
- u32 options)
-{
- u32 cmd_src;
-
- cmd_src = options & LDST_SRCDST_MASK;
-
- append_cmd(desc, CMD_STORE | options | len);
-
- /* The following options do not require pointer */
- if (!(cmd_src == LDST_SRCDST_WORD_DESCBUF_SHARED ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_JOB ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_JOB_WE ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_SHARED_WE))
- append_ptr(desc, ptr);
-}
-
-#define APPEND_SEQ_PTR_INTLEN(cmd, op) \
-static inline void append_seq_##cmd##_ptr_intlen(u32 *desc, dma_addr_t ptr, \
- unsigned int len, \
- u32 options) \
-{ \
- PRINT_POS; \
- if (options & (SQIN_RTO | SQIN_PRE)) \
- append_cmd(desc, CMD_SEQ_##op##_PTR | len | options); \
- else \
- append_cmd_ptr(desc, ptr, len, CMD_SEQ_##op##_PTR | options); \
-}
-APPEND_SEQ_PTR_INTLEN(in, IN)
-APPEND_SEQ_PTR_INTLEN(out, OUT)
-
-#define APPEND_CMD_PTR_TO_IMM(cmd, op) \
-static inline void append_##cmd##_as_imm(u32 *desc, void *data, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_data(desc, data, len, CMD_##op | options); \
-}
-APPEND_CMD_PTR_TO_IMM(load, LOAD);
-APPEND_CMD_PTR_TO_IMM(fifo_load, FIFO_LOAD);
-
-#define APPEND_CMD_PTR_EXTLEN(cmd, op) \
-static inline void append_##cmd##_extlen(u32 *desc, dma_addr_t ptr, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_ptr_extlen(desc, ptr, len, CMD_##op | SQIN_EXT | options); \
-}
-APPEND_CMD_PTR_EXTLEN(seq_in_ptr, SEQ_IN_PTR)
-APPEND_CMD_PTR_EXTLEN(seq_out_ptr, SEQ_OUT_PTR)
-
-/*
- * Determine whether to store length internally or externally depending on
- * the size of its type
- */
-#define APPEND_CMD_PTR_LEN(cmd, op, type) \
-static inline void append_##cmd(u32 *desc, dma_addr_t ptr, \
- type len, u32 options) \
-{ \
- PRINT_POS; \
- if (sizeof(type) > sizeof(u16)) \
- append_##cmd##_extlen(desc, ptr, len, options); \
- else \
- append_##cmd##_intlen(desc, ptr, len, options); \
-}
-APPEND_CMD_PTR_LEN(seq_in_ptr, SEQ_IN_PTR, u32)
-APPEND_CMD_PTR_LEN(seq_out_ptr, SEQ_OUT_PTR, u32)
-
-/*
- * 2nd variant for commands whose specified immediate length differs
- * from length of immediate data provided, e.g., split keys
- */
-#define APPEND_CMD_PTR_TO_IMM2(cmd, op) \
-static inline void append_##cmd##_as_imm(u32 *desc, void *data, \
- unsigned int data_len, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | IMMEDIATE | len | options); \
- append_data(desc, data, data_len); \
-}
-APPEND_CMD_PTR_TO_IMM2(key, KEY);
-
-#define APPEND_CMD_RAW_IMM(cmd, op, type) \
-static inline void append_##cmd##_imm_##type(u32 *desc, type immediate, \
- u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | IMMEDIATE | options | sizeof(type)); \
- append_cmd(desc, immediate); \
-}
-APPEND_CMD_RAW_IMM(load, LOAD, u32);
-
-/*
- * Append math command. Only the last part of destination and source need to
- * be specified
- */
-#define APPEND_MATH(op, desc, dest, src_0, src_1, len) \
-append_cmd(desc, CMD_MATH | MATH_FUN_##op | MATH_DEST_##dest | \
- MATH_SRC0_##src_0 | MATH_SRC1_##src_1 | (u32)len);
-
-#define append_math_add(desc, dest, src0, src1, len) \
- APPEND_MATH(ADD, desc, dest, src0, src1, len)
-#define append_math_sub(desc, dest, src0, src1, len) \
- APPEND_MATH(SUB, desc, dest, src0, src1, len)
-#define append_math_add_c(desc, dest, src0, src1, len) \
- APPEND_MATH(ADDC, desc, dest, src0, src1, len)
-#define append_math_sub_b(desc, dest, src0, src1, len) \
- APPEND_MATH(SUBB, desc, dest, src0, src1, len)
-#define append_math_and(desc, dest, src0, src1, len) \
- APPEND_MATH(AND, desc, dest, src0, src1, len)
-#define append_math_or(desc, dest, src0, src1, len) \
- APPEND_MATH(OR, desc, dest, src0, src1, len)
-#define append_math_xor(desc, dest, src0, src1, len) \
- APPEND_MATH(XOR, desc, dest, src0, src1, len)
-#define append_math_lshift(desc, dest, src0, src1, len) \
- APPEND_MATH(LSHIFT, desc, dest, src0, src1, len)
-#define append_math_rshift(desc, dest, src0, src1, len) \
- APPEND_MATH(RSHIFT, desc, dest, src0, src1, len)
-#define append_math_ldshift(desc, dest, src0, src1, len) \
- APPEND_MATH(SHLD, desc, dest, src0, src1, len)
-
-/* Exactly one source is IMM. Data is passed in as u32 value */
-#define APPEND_MATH_IMM_u32(op, desc, dest, src_0, src_1, data) \
-do { \
- APPEND_MATH(op, desc, dest, src_0, src_1, CAAM_CMD_SZ); \
- append_cmd(desc, data); \
-} while (0)
-
-#define append_math_add_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(ADD, desc, dest, src0, src1, data)
-#define append_math_sub_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(SUB, desc, dest, src0, src1, data)
-#define append_math_add_c_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(ADDC, desc, dest, src0, src1, data)
-#define append_math_sub_b_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(SUBB, desc, dest, src0, src1, data)
-#define append_math_and_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(AND, desc, dest, src0, src1, data)
-#define append_math_or_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(OR, desc, dest, src0, src1, data)
-#define append_math_xor_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(XOR, desc, dest, src0, src1, data)
-#define append_math_lshift_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(LSHIFT, desc, dest, src0, src1, data)
-#define append_math_rshift_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(RSHIFT, desc, dest, src0, src1, data)
-
-/* Exactly one source is IMM. Data is passed in as u64 value */
-#define APPEND_MATH_IMM_u64(op, desc, dest, src_0, src_1, data) \
-do { \
- u32 upper = (data >> 16) >> 16; \
- APPEND_MATH(op, desc, dest, src_0, src_1, CAAM_CMD_SZ * 2 | \
- (upper ? 0 : MATH_IFB)); \
- if (upper) \
- append_u64(desc, data); \
- else \
- append_u32(desc, data); \
-} while (0)
-
-#define append_math_add_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(ADD, desc, dest, src0, src1, data)
-#define append_math_sub_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(SUB, desc, dest, src0, src1, data)
-#define append_math_add_c_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(ADDC, desc, dest, src0, src1, data)
-#define append_math_sub_b_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(SUBB, desc, dest, src0, src1, data)
-#define append_math_and_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(AND, desc, dest, src0, src1, data)
-#define append_math_or_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(OR, desc, dest, src0, src1, data)
-#define append_math_xor_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(XOR, desc, dest, src0, src1, data)
-#define append_math_lshift_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(LSHIFT, desc, dest, src0, src1, data)
-#define append_math_rshift_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(RSHIFT, desc, dest, src0, src1, data)
diff --git a/drivers/crypto/caam/pdb.h b/drivers/crypto/caam/pdb.h
deleted file mode 100644
index 3a87c0cf879a..000000000000
--- a/drivers/crypto/caam/pdb.h
+++ /dev/null
@@ -1,402 +0,0 @@
-/*
- * CAAM Protocol Data Block (PDB) definition header file
- *
- * Copyright 2008-2012 Freescale Semiconductor, Inc.
- *
- */
-
-#ifndef CAAM_PDB_H
-#define CAAM_PDB_H
-
-/*
- * PDB- IPSec ESP Header Modification Options
- */
-#define PDBHMO_ESP_DECAP_SHIFT 12
-#define PDBHMO_ESP_ENCAP_SHIFT 4
-/*
- * Encap and Decap - Decrement TTL (Hop Limit) - Based on the value of the
- * Options Byte IP version (IPvsn) field:
- * if IPv4, decrement the inner IP header TTL field (byte 8);
- * if IPv6 decrement the inner IP header Hop Limit field (byte 7).
-*/
-#define PDBHMO_ESP_DECAP_DEC_TTL (0x02 << PDBHMO_ESP_DECAP_SHIFT)
-#define PDBHMO_ESP_ENCAP_DEC_TTL (0x02 << PDBHMO_ESP_ENCAP_SHIFT)
-/*
- * Decap - DiffServ Copy - Copy the IPv4 TOS or IPv6 Traffic Class byte
- * from the outer IP header to the inner IP header.
- */
-#define PDBHMO_ESP_DIFFSERV (0x01 << PDBHMO_ESP_DECAP_SHIFT)
-/*
- * Encap- Copy DF bit -if an IPv4 tunnel mode outer IP header is coming from
- * the PDB, copy the DF bit from the inner IP header to the outer IP header.
- */
-#define PDBHMO_ESP_DFBIT (0x04 << PDBHMO_ESP_ENCAP_SHIFT)
-
-/*
- * PDB - IPSec ESP Encap/Decap Options
- */
-#define PDBOPTS_ESP_ARSNONE 0x00 /* no antireplay window */
-#define PDBOPTS_ESP_ARS32 0x40 /* 32-entry antireplay window */
-#define PDBOPTS_ESP_ARS64 0xc0 /* 64-entry antireplay window */
-#define PDBOPTS_ESP_IVSRC 0x20 /* IV comes from internal random gen */
-#define PDBOPTS_ESP_ESN 0x10 /* extended sequence included */
-#define PDBOPTS_ESP_OUTFMT 0x08 /* output only decapsulation (decap) */
-#define PDBOPTS_ESP_IPHDRSRC 0x08 /* IP header comes from PDB (encap) */
-#define PDBOPTS_ESP_INCIPHDR 0x04 /* Prepend IP header to output frame */
-#define PDBOPTS_ESP_IPVSN 0x02 /* process IPv6 header */
-#define PDBOPTS_ESP_AOFL 0x04 /* adjust out frame len (decap, SEC>=5.3)*/
-#define PDBOPTS_ESP_TUNNEL 0x01 /* tunnel mode next-header byte */
-#define PDBOPTS_ESP_IPV6 0x02 /* ip header version is V6 */
-#define PDBOPTS_ESP_DIFFSERV 0x40 /* copy TOS/TC from inner iphdr */
-#define PDBOPTS_ESP_UPDATE_CSUM 0x80 /* encap-update ip header checksum */
-#define PDBOPTS_ESP_VERIFY_CSUM 0x20 /* decap-validate ip header checksum */
-
-/*
- * General IPSec encap/decap PDB definitions
- */
-struct ipsec_encap_cbc {
- u32 iv[4];
-};
-
-struct ipsec_encap_ctr {
- u32 ctr_nonce;
- u32 ctr_initial;
- u32 iv[2];
-};
-
-struct ipsec_encap_ccm {
- u32 salt; /* lower 24 bits */
- u8 b0_flags;
- u8 ctr_flags;
- u16 ctr_initial;
- u32 iv[2];
-};
-
-struct ipsec_encap_gcm {
- u32 salt; /* lower 24 bits */
- u32 rsvd1;
- u32 iv[2];
-};
-
-struct ipsec_encap_pdb {
- u8 hmo_rsvd;
- u8 ip_nh;
- u8 ip_nh_offset;
- u8 options;
- u32 seq_num_ext_hi;
- u32 seq_num;
- union {
- struct ipsec_encap_cbc cbc;
- struct ipsec_encap_ctr ctr;
- struct ipsec_encap_ccm ccm;
- struct ipsec_encap_gcm gcm;
- };
- u32 spi;
- u16 rsvd1;
- u16 ip_hdr_len;
- u32 ip_hdr[0]; /* optional IP Header content */
-};
-
-struct ipsec_decap_cbc {
- u32 rsvd[2];
-};
-
-struct ipsec_decap_ctr {
- u32 salt;
- u32 ctr_initial;
-};
-
-struct ipsec_decap_ccm {
- u32 salt;
- u8 iv_flags;
- u8 ctr_flags;
- u16 ctr_initial;
-};
-
-struct ipsec_decap_gcm {
- u32 salt;
- u32 resvd;
-};
-
-struct ipsec_decap_pdb {
- u16 hmo_ip_hdr_len;
- u8 ip_nh_offset;
- u8 options;
- union {
- struct ipsec_decap_cbc cbc;
- struct ipsec_decap_ctr ctr;
- struct ipsec_decap_ccm ccm;
- struct ipsec_decap_gcm gcm;
- };
- u32 seq_num_ext_hi;
- u32 seq_num;
- u32 anti_replay[2];
- u32 end_index[0];
-};
-
-/*
- * IPSec ESP Datapath Protocol Override Register (DPOVRD)
- */
-struct ipsec_deco_dpovrd {
-#define IPSEC_ENCAP_DECO_DPOVRD_USE 0x80
- u8 ovrd_ecn;
- u8 ip_hdr_len;
- u8 nh_offset;
- u8 next_header; /* reserved if decap */
-};
-
-/*
- * IEEE 802.11i WiFi Protocol Data Block
- */
-#define WIFI_PDBOPTS_FCS 0x01
-#define WIFI_PDBOPTS_AR 0x40
-
-struct wifi_encap_pdb {
- u16 mac_hdr_len;
- u8 rsvd;
- u8 options;
- u8 iv_flags;
- u8 pri;
- u16 pn1;
- u32 pn2;
- u16 frm_ctrl_mask;
- u16 seq_ctrl_mask;
- u8 rsvd1[2];
- u8 cnst;
- u8 key_id;
- u8 ctr_flags;
- u8 rsvd2;
- u16 ctr_init;
-};
-
-struct wifi_decap_pdb {
- u16 mac_hdr_len;
- u8 rsvd;
- u8 options;
- u8 iv_flags;
- u8 pri;
- u16 pn1;
- u32 pn2;
- u16 frm_ctrl_mask;
- u16 seq_ctrl_mask;
- u8 rsvd1[4];
- u8 ctr_flags;
- u8 rsvd2;
- u16 ctr_init;
-};
-
-/*
- * IEEE 802.16 WiMAX Protocol Data Block
- */
-#define WIMAX_PDBOPTS_FCS 0x01
-#define WIMAX_PDBOPTS_AR 0x40 /* decap only */
-
-struct wimax_encap_pdb {
- u8 rsvd[3];
- u8 options;
- u32 nonce;
- u8 b0_flags;
- u8 ctr_flags;
- u16 ctr_init;
- /* begin DECO writeback region */
- u32 pn;
- /* end DECO writeback region */
-};
-
-struct wimax_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u32 nonce;
- u8 iv_flags;
- u8 ctr_flags;
- u16 ctr_init;
- /* begin DECO writeback region */
- u32 pn;
- u8 rsvd1[2];
- u16 antireplay_len;
- u64 antireplay_scorecard;
- /* end DECO writeback region */
-};
-
-/*
- * IEEE 801.AE MacSEC Protocol Data Block
- */
-#define MACSEC_PDBOPTS_FCS 0x01
-#define MACSEC_PDBOPTS_AR 0x40 /* used in decap only */
-
-struct macsec_encap_pdb {
- u16 aad_len;
- u8 rsvd;
- u8 options;
- u64 sci;
- u16 ethertype;
- u8 tci_an;
- u8 rsvd1;
- /* begin DECO writeback region */
- u32 pn;
- /* end DECO writeback region */
-};
-
-struct macsec_decap_pdb {
- u16 aad_len;
- u8 rsvd;
- u8 options;
- u64 sci;
- u8 rsvd1[3];
- /* begin DECO writeback region */
- u8 antireplay_len;
- u32 pn;
- u64 antireplay_scorecard;
- /* end DECO writeback region */
-};
-
-/*
- * SSL/TLS/DTLS Protocol Data Blocks
- */
-
-#define TLS_PDBOPTS_ARS32 0x40
-#define TLS_PDBOPTS_ARS64 0xc0
-#define TLS_PDBOPTS_OUTFMT 0x08
-#define TLS_PDBOPTS_IV_WRTBK 0x02 /* 1.1/1.2/DTLS only */
-#define TLS_PDBOPTS_EXP_RND_IV 0x01 /* 1.1/1.2/DTLS only */
-
-struct tls_block_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u64 seq_num;
- u32 iv[4];
-};
-
-struct tls_stream_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u64 seq_num;
- u8 i;
- u8 j;
- u8 rsvd1[2];
-};
-
-struct dtls_block_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u16 epoch;
- u16 seq_num[3];
- u32 iv[4];
-};
-
-struct tls_block_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u64 seq_num;
- u32 iv[4];
-};
-
-struct tls_stream_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u64 seq_num;
- u8 i;
- u8 j;
- u8 rsvd1[2];
-};
-
-struct dtls_block_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u16 epoch;
- u16 seq_num[3];
- u32 iv[4];
- u64 antireplay_scorecard;
-};
-
-/*
- * SRTP Protocol Data Blocks
- */
-#define SRTP_PDBOPTS_MKI 0x08
-#define SRTP_PDBOPTS_AR 0x40
-
-struct srtp_encap_pdb {
- u8 x_len;
- u8 mki_len;
- u8 n_tag;
- u8 options;
- u32 cnst0;
- u8 rsvd[2];
- u16 cnst1;
- u16 salt[7];
- u16 cnst2;
- u32 rsvd1;
- u32 roc;
- u32 opt_mki;
-};
-
-struct srtp_decap_pdb {
- u8 x_len;
- u8 mki_len;
- u8 n_tag;
- u8 options;
- u32 cnst0;
- u8 rsvd[2];
- u16 cnst1;
- u16 salt[7];
- u16 cnst2;
- u16 rsvd1;
- u16 seq_num;
- u32 roc;
- u64 antireplay_scorecard;
-};
-
-/*
- * DSA/ECDSA Protocol Data Blocks
- * Two of these exist: DSA-SIGN, and DSA-VERIFY. They are similar
- * except for the treatment of "w" for verify, "s" for sign,
- * and the placement of "a,b".
- */
-#define DSA_PDB_SGF_SHIFT 24
-#define DSA_PDB_SGF_MASK (0xff << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_Q (0x80 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_R (0x40 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_G (0x20 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_W (0x10 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_S (0x10 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_F (0x08 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_C (0x04 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_D (0x02 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_AB_SIGN (0x02 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_AB_VERIFY (0x01 << DSA_PDB_SGF_SHIFT)
-
-#define DSA_PDB_L_SHIFT 7
-#define DSA_PDB_L_MASK (0x3ff << DSA_PDB_L_SHIFT)
-
-#define DSA_PDB_N_MASK 0x7f
-
-struct dsa_sign_pdb {
- u32 sgf_ln; /* Use DSA_PDB_ defintions per above */
- u8 *q;
- u8 *r;
- u8 *g; /* or Gx,y */
- u8 *s;
- u8 *f;
- u8 *c;
- u8 *d;
- u8 *ab; /* ECC only */
- u8 *u;
-};
-
-struct dsa_verify_pdb {
- u32 sgf_ln;
- u8 *q;
- u8 *r;
- u8 *g; /* or Gx,y */
- u8 *w; /* or Wx,y */
- u8 *f;
- u8 *c;
- u8 *d;
- u8 *tmp; /* temporary data block */
- u8 *ab; /* only used if ECC processing */
-};
-
-#endif
--
1.8.3.1

2014-07-18 16:54:35

by Horia Geantă

[permalink] [raw]
Subject: [PATCH 3/9] crypto: caam - code cleanup

1. Fix the following sparse/smatch warnings:
drivers/crypto/caam/ctrl.c:365:5: warning: symbol 'caam_get_era' was not declared. Should it be static?
drivers/crypto/caam/ctrl.c:372 caam_get_era() info: loop could be replaced with if statement.
drivers/crypto/caam/ctrl.c:368 caam_get_era() info: ignoring unreachable code.
drivers/crypto/caam/jr.c:68:5: warning: symbol 'caam_jr_shutdown' was not declared. Should it be static?
drivers/crypto/caam/jr.c:475:23: warning: incorrect type in assignment (different address spaces)
drivers/crypto/caam/jr.c:475:23: expected struct caam_job_ring [noderef] <asn:2>*rregs
drivers/crypto/caam/jr.c:475:23: got struct caam_job_ring *<noident>
drivers/crypto/caam/caamrng.c:343 caam_rng_init() error: no modifiers for allocation.

2. remove unreachable code in report_ccb_status
ERRID is a 4-bit field.
Since err_id values are in [0..15] and err_id_list array size is 16,
the condition "err_id < ARRAY_SIZE(err_id_list)" is always true.

3. remove unused / unneeded variables

4. remove precision loss warning - offset field in HW s/g table

5. replace offsetof with container_of
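
For item 5, a minimal userspace sketch (illustrative names only, not the
driver's actual structs) of why container_of is preferable to the open-coded
offsetof arithmetic used in the completion callbacks; both recover the
enclosing extended-descriptor struct from a pointer to its embedded hw_desc
array:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_edesc {
	int state;			/* stand-in bookkeeping field */
	unsigned int hw_desc[16];	/* stand-in for the HW job descriptor */
};

int main(void)
{
	struct fake_edesc edesc = { .state = 42 };
	unsigned int *desc = edesc.hw_desc;	/* what a done callback receives */

	/* old style: manual pointer arithmetic with offsetof */
	struct fake_edesc *a = (struct fake_edesc *)
			((char *)desc - offsetof(struct fake_edesc, hw_desc));

	/* new style: same result, intent stated directly */
	struct fake_edesc *b = container_of(desc, struct fake_edesc, hw_desc[0]);

	printf("%d %d\n", a->state, b->state);	/* both print 42 */
	return 0;
}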

Signed-off-by: Horia Geanta <[email protected]>
---
drivers/crypto/caam/caamalg.c | 59 ++++++++++++++++++----------------------
drivers/crypto/caam/caamhash.c | 12 +++-----
drivers/crypto/caam/caamrng.c | 2 +-
drivers/crypto/caam/ctrl.c | 8 ++++--
drivers/crypto/caam/error.c | 5 ++--
drivers/crypto/caam/jr.c | 4 +--
drivers/crypto/caam/sg_sw_sec4.h | 2 +-
7 files changed, 42 insertions(+), 50 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index a80ea853701d..c3a845856cd0 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -925,8 +925,7 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct aead_edesc *)((char *)desc -
- offsetof(struct aead_edesc, hw_desc));
+ edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

if (err)
caam_jr_strstatus(jrdev, err);
@@ -964,8 +963,7 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct aead_edesc *)((char *)desc -
- offsetof(struct aead_edesc, hw_desc));
+ edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "dstiv @"__stringify(__LINE__)": ",
@@ -1019,8 +1017,7 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ablkcipher_edesc *)((char *)desc -
- offsetof(struct ablkcipher_edesc, hw_desc));
+ edesc = container_of(desc, struct ablkcipher_edesc, hw_desc[0]);

if (err)
caam_jr_strstatus(jrdev, err);
@@ -1052,8 +1049,7 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ablkcipher_edesc *)((char *)desc -
- offsetof(struct ablkcipher_edesc, hw_desc));
+ edesc = container_of(desc, struct ablkcipher_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -1286,7 +1282,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
int assoc_nents, src_nents, dst_nents = 0;
struct aead_edesc *edesc;
dma_addr_t iv_dma = 0;
- int sgc;
bool all_contig = true;
bool assoc_chained = false, src_chained = false, dst_chained = false;
int ivsize = crypto_aead_ivsize(aead);
@@ -1308,16 +1303,16 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
&src_chained);
}

- sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
- DMA_TO_DEVICE, assoc_chained);
+ dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
+ DMA_TO_DEVICE, assoc_chained);
if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, req->iv, ivsize, DMA_TO_DEVICE);
@@ -1485,7 +1480,6 @@ static struct aead_edesc *aead_giv_edesc_alloc(struct aead_givcrypt_request
int assoc_nents, src_nents, dst_nents = 0;
struct aead_edesc *edesc;
dma_addr_t iv_dma = 0;
- int sgc;
u32 contig = GIV_SRC_CONTIG | GIV_DST_CONTIG;
int ivsize = crypto_aead_ivsize(aead);
bool assoc_chained = false, src_chained = false, dst_chained = false;
@@ -1498,16 +1492,16 @@ static struct aead_edesc *aead_giv_edesc_alloc(struct aead_givcrypt_request
dst_nents = sg_count(req->dst, req->cryptlen + ctx->authsize,
&dst_chained);

- sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
- DMA_TO_DEVICE, assoc_chained);
+ dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
+ DMA_TO_DEVICE, assoc_chained);
if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, greq->giv, ivsize, DMA_TO_DEVICE);
@@ -1655,7 +1649,6 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
struct ablkcipher_edesc *edesc;
dma_addr_t iv_dma = 0;
bool iv_contig = false;
- int sgc;
int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
bool src_chained = false, dst_chained = false;
int sec4_sg_index;
@@ -1666,13 +1659,13 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
dst_nents = sg_count(req->dst, req->nbytes, &dst_chained);

if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, req->info, ivsize, DMA_TO_DEVICE);
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 56ec534337b3..386efb9e192c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -640,8 +640,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -675,8 +674,7 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -710,8 +708,7 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -745,8 +742,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 8b9df8deda67..5b288082e6ac 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -340,7 +340,7 @@ static int __init caam_rng_init(void)
pr_err("Job Ring Device allocation for transform failed\n");
return PTR_ERR(dev);
}
- rng_ctx = kmalloc(sizeof(struct caam_rng_ctx), GFP_DMA);
+ rng_ctx = kmalloc(sizeof(*rng_ctx), GFP_KERNEL | GFP_DMA);
if (!rng_ctx)
return -ENOMEM;
err = caam_init_rng(rng_ctx, dev);
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index cedb56500b61..be8c6c147395 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -15,6 +15,7 @@
#include "jr.h"
#include "desc_constr.h"
#include "error.h"
+#include "ctrl.h"

/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
@@ -209,7 +210,7 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
* CAAM eras), then try again.
*/
rdsta_val =
- rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IFMASK;
+ rd_reg32(&r4tst->rdsta) & RDSTA_IFMASK;
if (status || !(rdsta_val & (1 << sh_idx)))
ret = -EAGAIN;
if (ret)
@@ -365,10 +366,13 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
int caam_get_era(void)
{
struct device_node *caam_node;
- for_each_compatible_node(caam_node, NULL, "fsl,sec-v4.0") {
+
+ caam_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
+ if (caam_node) {
const uint32_t *prop = (uint32_t *)of_get_property(caam_node,
"fsl,sec-era",
NULL);
+ of_node_put(caam_node);
return prop ? *prop : -ENOTSUPP;
}

diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
index 6531054a44c8..7d6ed4722345 100644
--- a/drivers/crypto/caam/error.c
+++ b/drivers/crypto/caam/error.c
@@ -146,10 +146,9 @@ static void report_ccb_status(struct device *jrdev, const u32 status,
strlen(rng_err_id_list[err_id])) {
/* RNG-only error */
err_str = rng_err_id_list[err_id];
- } else if (err_id < ARRAY_SIZE(err_id_list))
+ } else {
err_str = err_id_list[err_id];
- else
- snprintf(err_err_code, sizeof(err_err_code), "%02x", err_id);
+ }

dev_err(jrdev, "%08x: %s: %s %d: %s%s: %s%s\n",
status, error, idx_str, idx,
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 50cd1b9af2ba..ec3652d62e93 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -65,7 +65,7 @@ static int caam_reset_hw_jr(struct device *dev)
/*
* Shutdown JobR independent of platform property code
*/
-int caam_jr_shutdown(struct device *dev)
+static int caam_jr_shutdown(struct device *dev)
{
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
dma_addr_t inpbusaddr, outbusaddr;
@@ -472,7 +472,7 @@ static int caam_jr_probe(struct platform_device *pdev)
return -ENOMEM;
}

- jrpriv->rregs = (struct caam_job_ring __force *)ctrl;
+ jrpriv->rregs = (struct caam_job_ring __iomem __force *)ctrl;

if (sizeof(dma_addr_t) == sizeof(u64))
if (of_device_is_compatible(nprop, "fsl,sec-v5.0-job-ring"))
diff --git a/drivers/crypto/caam/sg_sw_sec4.h b/drivers/crypto/caam/sg_sw_sec4.h
index b12ff85f4241..a6e5b94756d4 100644
--- a/drivers/crypto/caam/sg_sw_sec4.h
+++ b/drivers/crypto/caam/sg_sw_sec4.h
@@ -17,7 +17,7 @@ static inline void dma_to_sec4_sg_one(struct sec4_sg_entry *sec4_sg_ptr,
sec4_sg_ptr->len = len;
sec4_sg_ptr->reserved = 0;
sec4_sg_ptr->buf_pool_id = 0;
- sec4_sg_ptr->offset = offset;
+ sec4_sg_ptr->offset = (u16)offset;
#ifdef DEBUG
print_hex_dump(KERN_ERR, "sec4_sg_ptr@: ",
DUMP_PREFIX_ADDRESS, 16, 4, sec4_sg_ptr,
--
1.8.3.1

2014-07-18 22:18:52

by Kim Phillips

[permalink] [raw]
Subject: Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

On Fri, 18 Jul 2014 19:37:17 +0300
Horia Geanta <[email protected]> wrote:

> This patch set adds Run Time Assembler (RTA) SEC descriptor library.
>
> The main reason of replacing incumbent "inline append" is
> to have a single code base both for user space and kernel space.

that's orthogonal to what this patch series is doing from the kernel
maintainer's perspective: it's polluting the driver with a
CodingStyle-violating (see, e.g., Chapter 12) 6000+ lines of code -
which can only mean it's slower and more susceptible to bugs - and
AFAICT for no superior technical advantage: NACK from me.

Kim

2014-07-18 23:51:51

by Horia Geantă

[permalink] [raw]
Subject: Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

On 7/19/2014 1:13 AM, Kim Phillips wrote:
> On Fri, 18 Jul 2014 19:37:17 +0300
> Horia Geanta <[email protected]> wrote:
>
>> This patch set adds Run Time Assembler (RTA) SEC descriptor library.
>>
>> The main reason of replacing incumbent "inline append" is
>> to have a single code base both for user space and kernel space.
>
> that's orthogonal to what this patchseries is doing from the kernel
> maintainer's perspective: it's polluting the driver with a
> CodingStyle-violating (see, e.g., Chapter 12) 6000+ lines of code -

Regarding coding style - AFAICT that's basically:
ERROR: Macros with complex values should be enclosed in parenthesis
and I am willing to find a different approach.

> which can only mean it's slower and more susceptible to bugs - and
> AFAICT for no superior technical advantage: NACK from me.

The fact that the code size is bigger doesn't necessarily mean a bad thing:
1-code is better documented - cloc reports ~ 1000 more lines of
comments; patch 09 even adds support for generating a docbook
2-pure code (i.e. no comments, white spaces) - cloc reports ~ 5000 more
lines; this reflects two things, AFAICT:
2.1-more features: options (for e.g. new SEC instructions, little endian
env. support), platform support includes Era 7 and Era 8, i.e.
Layerscape LS1 and LS2; this is important to note, since plans are to
run the very same CAAM driver on ARM platforms
2.2-more error-checking - from this perspective, I'd say driver is less
susceptible to bugs, especially subtle ones in CAAM descriptors that are
hard to identify / debug; RTA will complain when generating descriptors
using features (say a new bit in an instruction opcode) that are not
supported on the SEC on device
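
To make 2.2 concrete, here is a hypothetical sketch of that kind of
build-time check; none of these names are RTA's real API, they only
illustrate refusing to emit an option bit that the targeted SEC era does
not implement:

#include <stdint.h>
#include <stdio.h>

#define OP_ALG_AAI_NEWBIT	0x100u	/* imaginary option, say introduced in era 5 */

struct sec_program {
	uint32_t buf[64];	/* descriptor words emitted so far */
	unsigned int len;
	unsigned int era;	/* SEC era of the device being targeted */
	int error;
};

static void emit_operation(struct sec_program *p, uint32_t opcode)
{
	/* complain at descriptor-build time instead of letting the job
	 * fail (or silently misbehave) on older hardware */
	if ((opcode & OP_ALG_AAI_NEWBIT) && p->era < 5) {
		fprintf(stderr, "option not supported on SEC era %u\n", p->era);
		p->error = -1;
		return;
	}
	p->buf[p->len++] = opcode;
}

int main(void)
{
	struct sec_program prog = { .era = 4 };

	emit_operation(&prog, OP_ALG_AAI_NEWBIT);	/* rejected on era 4 */
	return prog.error ? 1 : 0;
}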

RTA currently runs on:
-QorIQ platforms - userspace (USDPAA)
-Layerscape platforms - AIOP accelerator
(obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)

Combined with:
-comprehensive unit testing suite
-RTA kernel port is bit-exact in terms of SEC descriptors hex dumps with
inline append; besides this, it was tested with tcrypt and in IPsec
scenarios
I would say that RTA is tested more than inline append. In the end, this
is a side effect of having a single code base.

Thanks,
Horia

2014-07-19 01:28:38

by Kim Phillips

[permalink] [raw]
Subject: Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

On Sat, 19 Jul 2014 02:51:30 +0300
Horia Geantă <[email protected]> wrote:

> On 7/19/2014 1:13 AM, Kim Phillips wrote:
> > On Fri, 18 Jul 2014 19:37:17 +0300
> > Horia Geanta <[email protected]> wrote:
> >
> >> This patch set adds Run Time Assembler (RTA) SEC descriptor library.
> >>
> >> The main reason of replacing incumbent "inline append" is
> >> to have a single code base both for user space and kernel space.
> >
> > that's orthogonal to what this patchseries is doing from the kernel
> > maintainer's perspective: it's polluting the driver with a
> > CodingStyle-violating (see, e.g., Chapter 12) 6000+ lines of code -
>
> Regarding coding style - AFAICT that's basically:
> ERROR: Macros with complex values should be enclosed in parenthesis
> and I am willing to find a different approach.

There's that, too.

> > which can only mean it's slower and more susceptible to bugs - and
> > AFAICT for no superior technical advantage: NACK from me.
>
> The fact that the code size is bigger doesn't necessarily mean a bad thing:
> 1-code is better documented - cloc reports ~ 1000 more lines of
> comments; patch 09 even adds support for generating a docbook
> 2-pure code (i.e. no comments, white spaces) - cloc reports ~ 5000 more
> lines; this reflects two things, AFAICT:
> 2.1-more features: options (for e.g. new SEC instructions, little endian
> env. support), platform support includes Era 7 and Era 8, i.e.
> Layerscape LS1 and LS2; this is important to note, since plans are to
> run the very same CAAM driver on ARM platforms

um, *those* features should not cost any driver *that many* lines of
code!

> 2.2-more error-checking - from this perspective, I'd say driver is less
> susceptible to bugs, especially subtle ones in CAAM descriptors that are
> hard to identify / debug; RTA will complain when generating descriptors
> using features (say a new bit in an instruction opcode) that are not
> supported on the SEC on device

? The hardware does the error checking. This just tells me RTA is
slow, inflexible, and requires unnecessary maintenance by design:
all the more reason to keep it out of the kernel :)

> RTA currently runs on:
> -QorIQ platforms - userspace (USDPAA)
> -Layerscape platforms - AIOP accelerator
> (obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)

inline append runs elsewhere, too, but I don't see how this is
related to the subject I'm bringing up.

> Combined with:
> -comprehensive unit testing suite
> -RTA kernel port is bit-exact in terms of SEC descriptors hex dumps with
> inline append; besides this, it was tested with tcrypt and in IPsec
> scenarios
> I would say that RTA is tested more than inline append. In the end, this
> is a side effect of having a single code base.

inline append has been proven stable for years now. RTA just adds
redundant code and violates CodingStyle AFAICT.

Kim

2014-07-21 07:48:22

by Horia Geantă

[permalink] [raw]
Subject: Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

On 7/19/2014 4:23 AM, Kim Phillips wrote:
> On Sat, 19 Jul 2014 02:51:30 +0300
> Horia Geantă <[email protected]> wrote:
>
>> On 7/19/2014 1:13 AM, Kim Phillips wrote:
>>> On Fri, 18 Jul 2014 19:37:17 +0300
>>> Horia Geanta <[email protected]> wrote:
>>>
>>>> This patch set adds Run Time Assembler (RTA) SEC descriptor library.
>>>>
>>>> The main reason of replacing incumbent "inline append" is
>>>> to have a single code base both for user space and kernel space.
>>>
>>> that's orthogonal to what this patchseries is doing from the kernel
>>> maintainer's perspective: it's polluting the driver with a
>>> CodingStyle-violating (see, e.g., Chapter 12) 6000+ lines of code -
>>
>> Regarding coding style - AFAICT that's basically:
>> ERROR: Macros with complex values should be enclosed in parenthesis
>> and I am willing to find a different approach.
>
> There's that, too.
>
>>> which can only mean it's slower and more susceptible to bugs - and
>>> AFAICT for no superior technical advantage: NACK from me.
>>
>> The fact that the code size is bigger doesn't necessarily mean a bad thing:
>> 1-code is better documented - cloc reports ~ 1000 more lines of
>> comments; patch 09 even adds support for generating a docbook
>> 2-pure code (i.e. no comments, white spaces) - cloc reports ~ 5000 more
>> lines; this reflects two things, AFAICT:
>> 2.1-more features: options (for e.g. new SEC instructions, little endian
>> env. support), platform support includes Era 7 and Era 8, i.e.
>> Layerscape LS1 and LS2; this is important to note, since plans are to
>> run the very same CAAM driver on ARM platforms
>
> um, *those* features should not cost any driver *that many* lines of
> code!

You are invited to comment on the code at hand. I am pretty sure it's
not perfect.

>
>> 2.2-more error-checking - from this perspective, I'd say driver is less
>> susceptible to bugs, especially subtle ones in CAAM descriptors that are
>> hard to identify / debug; RTA will complain when generating descriptors
>> using features (say a new bit in an instruction opcode) that are not
>> supported on the SEC on device
>
> ? The hardware does the error checking. This just tells me RTA is
> slow, inflexible, and requires unnecessary maintenance by design:
> all the more reason to keep it out of the kernel :)

This is just like saying a toolchain isn't performing any checks and
lets the user generate invalid machine code and deal with HW errors.

Besides this, there are (quite a few) cases where SEC won't emit any
error, but the results are still different from what was expected.
SEC HW is complex enough to deserve descriptor error checking.

>
>> RTA currently runs on:
>> -QorIQ platforms - userspace (USDPAA)
>> -Layerscape platforms - AIOP accelerator
>> (obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)
>
> inline append runs elsewhere, too, but I don't see how this is
> related to the subject I'm bringing up.

This is relevant, since having a piece of SW running in multiple
environments should lead to better testing, more exposure, and finding bugs
faster.
inline append *could run* elsewhere, but it doesn't AFAICT. Last time
I checked, USDPAA and AIOP use RTA.

>
>> Combined with:
>> -comprehensive unit testing suite
>> -RTA kernel port is bit-exact in terms of SEC descriptors hex dumps with
>> inline append; besides this, it was tested with tcrypt and in IPsec
>> scenarios
>> I would say that RTA is tested more than inline append. In the end, this
>> is a side effect of having a single code base.
>
> inline append has been proven stable for years now. RTA just adds
> redundant code and violates CodingStyle AFAICT.

New platform support is not redundant.
Error checking is not redundant, as explained above.
kernel-doc is always helpful.
Coding Style can be fixed.

Thanks,
Horia

2014-07-21 13:04:37

by Horia Geantă

[permalink] [raw]
Subject: [PATCH] crypto: caam - fix DECO RSR polling

RSR (Request Source Register) is not used when
virtualization is disabled, thus don't poll for Valid bit.

Besides this, if used, timeout has to be reinitialized.

Signed-off-by: Horia Geanta <[email protected]>
---
Only compile-tested.
Ruchika / Kim, please review / test.

drivers/crypto/caam/ctrl.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index c6e9d3b2d502..84d4b95c761e 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -89,12 +89,15 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
/* Set the bit to request direct access to DECO0 */
topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;

- if (ctrlpriv->virt_en == 1)
+ if (ctrlpriv->virt_en == 1) {
setbits32(&topregs->ctrl.deco_rsr, DECORSR_JR0);

- while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
- --timeout)
- cpu_relax();
+ while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
+ --timeout)
+ cpu_relax();
+
+ timeout = 100000;
+ }

setbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);

--
1.8.3.1

2014-07-21 15:13:51

by Kim Phillips

[permalink] [raw]
Subject: Re: [PATCH 0/9] crypto: caam - Add RTA descriptor creation library

On Mon, 21 Jul 2014 10:47:49 +0300
Horia Geantă <[email protected]> wrote:

> On 7/19/2014 4:23 AM, Kim Phillips wrote:
> > On Sat, 19 Jul 2014 02:51:30 +0300
> > Horia Geantă <[email protected]> wrote:
> >
> >> On 7/19/2014 1:13 AM, Kim Phillips wrote:
> >>> On Fri, 18 Jul 2014 19:37:17 +0300
> >>> Horia Geanta <[email protected]> wrote:
> >>>
> >>>> This patch set adds Run Time Assembler (RTA) SEC descriptor library.
> >>>>
> >>> which can only mean it's slower and more susceptible to bugs - and
> >>> AFAICT for no superior technical advantage: NACK from me.
> >>
> >> The fact that the code size is bigger doesn't necessarily mean a bad thing:
> >> 1-code is better documented - cloc reports ~ 1000 more lines of
> >> comments; patch 09 even adds support for generating a docbook
> >> 2-pure code (i.e. no comments, white spaces) - cloc reports ~ 5000 more
> >> lines; this reflects two things, AFAICT:
> >> 2.1-more features: options (for e.g. new SEC instructions, little endian
> >> env. support), platform support includes Era 7 and Era 8, i.e.
> >> Layerscape LS1 and LS2; this is important to note, since plans are to
> >> run the very same CAAM driver on ARM platforms
> >
> > um, *those* features should not cost any driver *that many* lines of
> > code!
>
> You are invited to comment on the code at hand. I am pretty sure it's
> not perfect.

I can see RTA is responsible for the code size increase, not the
features. And just because RTA has - or has plans for - those
features doesn't justify the kernel driver adopting RTA over
inline-append.

> >> 2.2-more error-checking - from this perspective, I'd say driver is less
> >> susceptible to bugs, especially subtle ones in CAAM descriptors that are
> >> hard to identify / debug; RTA will complain when generating descriptors
> >> using features (say a new bit in an instruction opcode) that are not
> >> supported on the SEC on device
> >
> > ? The hardware does the error checking. This just tells me RTA is
> > slow, inflexible, and requires unnecessary maintenance by design:
> > all the more reason to keep it out of the kernel :)
>
> This is just like saying a toolchain isn't performing any checks and
> lets the user generate invalid machine code and deal with HW errors.
>
> Beside this, there are (quite a few) cases when SEC won't emit any
> error, but still the results are different than expected.
> SEC HW is complex enough to deserve descriptor error checking.

if part of RTA's objective is to cater to new SEC programmers, great,
but that doesn't mean it belongs in the crypto API driver's limited
input parameter set and fixed-descriptor operating environment:
it's not the place to host an entire SEC toolchain.

> >> RTA currently runs on:
> >> -QorIQ platforms - userspace (USDPAA)
> >> -Layerscape platforms - AIOP accelerator
> >> (obviously, plans are to run also on QorIQ/PowerPC and LS/ARM kernels)
> >
> > inline append runs elsewhere, too, but I don't see how this is
> > related to the subject I'm bringing up.
>
> This is relevant, since having a piece of SW running in multiple
> environments should lead to better testing, more exposure, finding bugs
> faster.

that doesn't defeat the fact that more lines of code to do the same
thing is always going to be a more bug-prone way of doing it.

> inline append *could run* elsewhere, but it doesn't AFAICT. Last time
> I checked, USDPAA and AIOP use RTA.

inline append runs in ASF, and has been available for all upstream
for years.

> >> Combined with:
> >> -comprehensive unit testing suite
> >> -RTA kernel port is bit-exact in terms of SEC descriptors hex dumps with
> >> inline append; besides this, it was tested with tcrypt and in IPsec
> >> scenarios
> >> I would say that RTA is tested more than inline append. In the end, this
> >> is a side effect of having a single code base.
> >
> > inline append has been proven stable for years now. RTA just adds
> > redundant code and violates CodingStyle AFAICT.
>
> New platform support is not redundant.

No, RTA is.

> Error checking is not redundant, as explained above.

It is: the kernel has fixed descriptors.

> kernel-doc is always helpful.

it doesn't matter how much you decorate it.

> Coding Style can be fixed.

inline append isn't broken.

Kim

2014-07-22 21:37:17

by Kim Phillips

[permalink] [raw]
Subject: Re: [PATCH] crypto: caam - fix DECO RSR polling

On Mon, 21 Jul 2014 16:03:21 +0300
Horia Geanta <[email protected]> wrote:

> RSR (Request Source Register) is not used when
> virtualization is disabled, thus don't poll for Valid bit.
>
> Besides this, if used, timeout has to be reinitialized.
>
> Signed-off-by: Horia Geanta <[email protected]>
> ---
> Only compile-tested.
> Ruchika / Kim, please review / test.

Acked-by: Kim Phillips <[email protected]>

fwiw, it would be nice if virt_en were a bool...can you also please
start using get_maintainer.pl?

Thanks,

Kim

2014-07-23 08:53:05

by Ruchika Gupta

[permalink] [raw]
Subject: RE: [PATCH] crypto: caam - fix DECO RSR polling

Acked-by: Ruchika Gupta <[email protected]>

Tested on P4080DS.
Also ported and tested on the LS1 platform (this platform has virtualization enabled).

Thanks,
Ruchika

> -----Original Message-----
> From: Horia Geanta [mailto:[email protected]]
> Sent: Monday, July 21, 2014 6:33 PM
> To: Herbert Xu; [email protected]; Gupta Ruchika-R66431; Phillips
> Kim-R1AAHA
> Cc: David S. Miller
> Subject: [PATCH] crypto: caam - fix DECO RSR polling
>
> RSR (Request Source Register) is not used when virtualization is disabled,
> thus don't poll for Valid bit.
>
> Besides this, if used, timeout has to be reinitialized.
>
> Signed-off-by: Horia Geanta <[email protected]>
> ---
> Only compile-tested.
> Ruchika / Kim, please review / test.
>
> drivers/crypto/caam/ctrl.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c index
> c6e9d3b2d502..84d4b95c761e 100644
> --- a/drivers/crypto/caam/ctrl.c
> +++ b/drivers/crypto/caam/ctrl.c
> @@ -89,12 +89,15 @@ static inline int run_descriptor_deco0(struct device
> *ctrldev, u32 *desc,
> /* Set the bit to request direct access to DECO0 */
> topregs = (struct caam_full __iomem *)ctrlpriv->ctrl;
>
> - if (ctrlpriv->virt_en == 1)
> + if (ctrlpriv->virt_en == 1) {
> setbits32(&topregs->ctrl.deco_rsr, DECORSR_JR0);
>
> - while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
> - --timeout)
> - cpu_relax();
> + while (!(rd_reg32(&topregs->ctrl.deco_rsr) & DECORSR_VALID) &&
> + --timeout)
> + cpu_relax();
> +
> + timeout = 100000;
> + }
>
> setbits32(&topregs->ctrl.deco_rq, DECORR_RQD0ENABLE);
>
> --
> 1.8.3.1

2014-07-23 13:36:30

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH] crypto: caam - fix DECO RSR polling

On Tue, Jul 22, 2014 at 04:31:56PM -0500, Kim Phillips wrote:
> On Mon, 21 Jul 2014 16:03:21 +0300
> Horia Geanta <[email protected]> wrote:
>
> > RSR (Request Source Register) is not used when
> > virtualization is disabled, thus don't poll for Valid bit.
> >
> > Besides this, if used, timeout has to be reinitialized.
> >
> > Signed-off-by: Horia Geanta <[email protected]>
> > ---
> > Only compile-tested.
> > Ruchika / Kim, please review / test.
>
> Acked-by: Kim Phillips <[email protected]>

Patch applied.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt