2017-04-20 13:12:54

by Gilad Ben-Yossef

Subject: [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver

Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
accelerators. It is supported by a long-lived series of out-of-tree
drivers, which I am now in the process of unifying and upstreaming.
This is the first drop, supporting the new CryptoCell 712 REE.

The code still needs some cleanup before maturing into a proper
upstream driver, which I am in the process of doing. However,
as discussion of some of the capabilities of the hardware and
its application to some dm-crypt and dm-verity features recently
took place, I thought it better to do this in the open via the
staging tree.

A Git repository based on Linux 4.11-rc7 is also available at
https://github.com/gby/linux.git, branch ccree_v2, for those so inclined.

Signed-off-by: Gilad Ben-Yossef <[email protected]>
CC: Binoy Jayan <[email protected]>
CC: Ofir Drang <[email protected]>
CC: Stuart Yoder <[email protected]>

Changes from v1:
- Broke up the patch set into smaller units for mailing list review, as per
Greg KH's request.
- Changed the DT binding compatible tag, as per Mark Rutland's suggestion.
- Moved the DT binding document inside the staging directory and added DT
binding review to the TODO list, as per Mark Rutland's request.

Many thanks to all reviewers :-)

Gilad Ben-Yossef (9):
staging: ccree: introduce CryptoCell HW driver
staging: ccree: add ahash support
staging: ccree: add skcipher support
staging: ccree: add IV generation support
staging: ccree: add AEAD support
staging: ccree: add FIPS support
staging: ccree: add TODO list
staging: ccree: add DT bindings for Arm CryptoCell
MAINTAINERS: add Gilad BY as ccree maintainer

MAINTAINERS | 7 +
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 2 +-
.../devicetree/bindings/crypto/arm-cryptocell.txt | 27 +
drivers/staging/ccree/Kconfig | 43 +
drivers/staging/ccree/Makefile | 3 +
drivers/staging/ccree/TODO | 28 +
drivers/staging/ccree/bsp.h | 21 +
drivers/staging/ccree/cc_bitops.h | 62 +
drivers/staging/ccree/cc_crypto_ctx.h | 299 +++
drivers/staging/ccree/cc_hal.h | 35 +
drivers/staging/ccree/cc_hw_queue_defs.h | 603 +++++
drivers/staging/ccree/cc_lli_defs.h | 57 +
drivers/staging/ccree/cc_pal_log.h | 188 ++
drivers/staging/ccree/cc_pal_log_plat.h | 33 +
drivers/staging/ccree/cc_pal_types.h | 97 +
drivers/staging/ccree/cc_pal_types_plat.h | 29 +
drivers/staging/ccree/cc_regs.h | 106 +
drivers/staging/ccree/dx_crys_kernel.h | 180 ++
drivers/staging/ccree/dx_env.h | 224 ++
drivers/staging/ccree/dx_host.h | 155 ++
drivers/staging/ccree/dx_reg_base_host.h | 34 +
drivers/staging/ccree/dx_reg_common.h | 26 +
drivers/staging/ccree/hash_defs.h | 78 +
drivers/staging/ccree/hw_queue_defs_plat.h | 43 +
drivers/staging/ccree/ssi_aead.c | 2832 ++++++++++++++++++++
drivers/staging/ccree/ssi_aead.h | 120 +
drivers/staging/ccree/ssi_buffer_mgr.c | 1876 +++++++++++++
drivers/staging/ccree/ssi_buffer_mgr.h | 105 +
drivers/staging/ccree/ssi_cipher.c | 1503 +++++++++++
drivers/staging/ccree/ssi_cipher.h | 89 +
drivers/staging/ccree/ssi_config.h | 61 +
drivers/staging/ccree/ssi_driver.c | 557 ++++
drivers/staging/ccree/ssi_driver.h | 228 ++
drivers/staging/ccree/ssi_fips.c | 65 +
drivers/staging/ccree/ssi_fips.h | 70 +
drivers/staging/ccree/ssi_fips_data.h | 315 +++
drivers/staging/ccree/ssi_fips_ext.c | 96 +
drivers/staging/ccree/ssi_fips_ll.c | 1681 ++++++++++++
drivers/staging/ccree/ssi_fips_local.c | 369 +++
drivers/staging/ccree/ssi_fips_local.h | 77 +
drivers/staging/ccree/ssi_hash.c | 2751 +++++++++++++++++++
drivers/staging/ccree/ssi_hash.h | 101 +
drivers/staging/ccree/ssi_ivgen.c | 301 +++
drivers/staging/ccree/ssi_ivgen.h | 72 +
drivers/staging/ccree/ssi_pm.c | 150 ++
drivers/staging/ccree/ssi_pm.h | 46 +
drivers/staging/ccree/ssi_pm_ext.c | 60 +
drivers/staging/ccree/ssi_pm_ext.h | 33 +
drivers/staging/ccree/ssi_request_mgr.c | 713 +++++
drivers/staging/ccree/ssi_request_mgr.h | 60 +
drivers/staging/ccree/ssi_sram_mgr.c | 138 +
drivers/staging/ccree/ssi_sram_mgr.h | 80 +
drivers/staging/ccree/ssi_sysfs.c | 440 +++
drivers/staging/ccree/ssi_sysfs.h | 54 +
55 files changed, 17424 insertions(+), 1 deletion(-)
create mode 100644 drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
create mode 100644 drivers/staging/ccree/Kconfig
create mode 100644 drivers/staging/ccree/Makefile
create mode 100644 drivers/staging/ccree/TODO
create mode 100644 drivers/staging/ccree/bsp.h
create mode 100644 drivers/staging/ccree/cc_bitops.h
create mode 100644 drivers/staging/ccree/cc_crypto_ctx.h
create mode 100644 drivers/staging/ccree/cc_hal.h
create mode 100644 drivers/staging/ccree/cc_hw_queue_defs.h
create mode 100644 drivers/staging/ccree/cc_lli_defs.h
create mode 100644 drivers/staging/ccree/cc_pal_log.h
create mode 100644 drivers/staging/ccree/cc_pal_log_plat.h
create mode 100644 drivers/staging/ccree/cc_pal_types.h
create mode 100644 drivers/staging/ccree/cc_pal_types_plat.h
create mode 100644 drivers/staging/ccree/cc_regs.h
create mode 100644 drivers/staging/ccree/dx_crys_kernel.h
create mode 100644 drivers/staging/ccree/dx_env.h
create mode 100644 drivers/staging/ccree/dx_host.h
create mode 100644 drivers/staging/ccree/dx_reg_base_host.h
create mode 100644 drivers/staging/ccree/dx_reg_common.h
create mode 100644 drivers/staging/ccree/hash_defs.h
create mode 100644 drivers/staging/ccree/hw_queue_defs_plat.h
create mode 100644 drivers/staging/ccree/ssi_aead.c
create mode 100644 drivers/staging/ccree/ssi_aead.h
create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.c
create mode 100644 drivers/staging/ccree/ssi_buffer_mgr.h
create mode 100644 drivers/staging/ccree/ssi_cipher.c
create mode 100644 drivers/staging/ccree/ssi_cipher.h
create mode 100644 drivers/staging/ccree/ssi_config.h
create mode 100644 drivers/staging/ccree/ssi_driver.c
create mode 100644 drivers/staging/ccree/ssi_driver.h
create mode 100644 drivers/staging/ccree/ssi_fips.c
create mode 100644 drivers/staging/ccree/ssi_fips.h
create mode 100644 drivers/staging/ccree/ssi_fips_data.h
create mode 100644 drivers/staging/ccree/ssi_fips_ext.c
create mode 100644 drivers/staging/ccree/ssi_fips_ll.c
create mode 100644 drivers/staging/ccree/ssi_fips_local.c
create mode 100644 drivers/staging/ccree/ssi_fips_local.h
create mode 100644 drivers/staging/ccree/ssi_hash.c
create mode 100644 drivers/staging/ccree/ssi_hash.h
create mode 100644 drivers/staging/ccree/ssi_ivgen.c
create mode 100644 drivers/staging/ccree/ssi_ivgen.h
create mode 100644 drivers/staging/ccree/ssi_pm.c
create mode 100644 drivers/staging/ccree/ssi_pm.h
create mode 100644 drivers/staging/ccree/ssi_pm_ext.c
create mode 100644 drivers/staging/ccree/ssi_pm_ext.h
create mode 100644 drivers/staging/ccree/ssi_request_mgr.c
create mode 100644 drivers/staging/ccree/ssi_request_mgr.h
create mode 100644 drivers/staging/ccree/ssi_sram_mgr.c
create mode 100644 drivers/staging/ccree/ssi_sram_mgr.h
create mode 100644 drivers/staging/ccree/ssi_sysfs.c
create mode 100644 drivers/staging/ccree/ssi_sysfs.h

--
2.1.4


2017-04-20 13:12:56

by Gilad Ben-Yossef

Subject: [PATCH v2 2/9] staging: ccree: add ahash support

Add CryptoCell asynchronous hash and HMAC support.
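
For reference, here is a minimal sketch (not part of this patch) of how a
kernel user might drive the resulting transform through the generic ahash
API, using the classic completion-based wait for the asynchronous case.
The example_* names are hypothetical, and the data buffer must be DMA-able
(e.g. kmalloc'ed), not on the stack:

#include <crypto/hash.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct example_result {
	struct completion completion;
	int err;
};

static void example_done(struct crypto_async_request *req, int err)
{
	struct example_result *res = req->data;

	if (err == -EINPROGRESS)
		return;	/* moved off the backlog; keep waiting */
	res->err = err;
	complete(&res->completion);
}

static int example_sha256(const u8 *data, unsigned int len, u8 *digest)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	struct example_result res;
	int rc;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		rc = -ENOMEM;
		goto free_tfm;
	}

	init_completion(&res.completion);
	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   example_done, &res);
	ahash_request_set_crypt(req, &sg, digest, len);

	rc = crypto_ahash_digest(req);
	if (rc == -EINPROGRESS || rc == -EBUSY) {
		wait_for_completion(&res.completion);
		rc = res.err;
	}

	ahash_request_free(req);
free_tfm:
	crypto_free_ahash(tfm);
	return rc;
}

With this patch applied, the same call should resolve to the ccree
hardware implementation whenever the driver's registered priority
outranks the software sha256-generic one.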

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/Kconfig | 6 +
drivers/staging/ccree/Makefile | 2 +-
drivers/staging/ccree/cc_crypto_ctx.h | 22 +
drivers/staging/ccree/hash_defs.h | 78 +
drivers/staging/ccree/ssi_buffer_mgr.c | 311 +++-
drivers/staging/ccree/ssi_buffer_mgr.h | 6 +
drivers/staging/ccree/ssi_driver.c | 11 +-
drivers/staging/ccree/ssi_driver.h | 4 +-
drivers/staging/ccree/ssi_hash.c | 2732 ++++++++++++++++++++++++++++++++
drivers/staging/ccree/ssi_hash.h | 101 ++
drivers/staging/ccree/ssi_pm.c | 4 +
11 files changed, 3263 insertions(+), 14 deletions(-)
create mode 100644 drivers/staging/ccree/hash_defs.h
create mode 100644 drivers/staging/ccree/ssi_hash.c
create mode 100644 drivers/staging/ccree/ssi_hash.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 0f723d7..a528a99 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -2,6 +2,12 @@ config CRYPTO_DEV_CCREE
tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators"
depends on CRYPTO_HW && OF && HAS_DMA
default n
+ select CRYPTO_HASH
+ select CRYPTO_SHA1
+ select CRYPTO_MD5
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
+ select CRYPTO_HMAC
help
Say 'Y' to enable a driver for the Arm TrustZone CryptoCell
C7xx. Currently only the CryptoCell 712 REE is supported.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 972af69..f94e225 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index 8b8aea2..fedf259 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -220,6 +220,28 @@ struct drv_ctx_generic {
} __attribute__((__may_alias__));


+struct drv_ctx_hash {
+ enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HASH */
+ enum drv_hash_mode mode;
+ uint8_t digest[CC_DIGEST_SIZE_MAX];
+ /* reserve to end of allocated context size */
+ uint8_t reserved[CC_CTX_SIZE - 2 * sizeof(uint32_t) -
+ CC_DIGEST_SIZE_MAX];
+};
+
+/* drv_ctx_hmac must have the same layout as drv_ctx_hash, except for
+   the k0 and k0_size fields */
+struct drv_ctx_hmac {
+ enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_HMAC */
+ enum drv_hash_mode mode;
+ uint8_t digest[CC_DIGEST_SIZE_MAX];
+ uint32_t k0[CC_HMAC_BLOCK_SIZE_MAX/sizeof(uint32_t)];
+ uint32_t k0_size;
+ /* reserve to end of allocated context size */
+ uint8_t reserved[CC_CTX_SIZE - 3 * sizeof(uint32_t) -
+ CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX];
+};
+
/*******************************************************************/
/***************** MESSAGE BASED CONTEXTS **************************/
/*******************************************************************/
diff --git a/drivers/staging/ccree/hash_defs.h b/drivers/staging/ccree/hash_defs.h
new file mode 100644
index 0000000..0cd6909
--- /dev/null
+++ b/drivers/staging/ccree/hash_defs.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef _HASH_DEFS_H__
+#define _HASH_DEFS_H__
+
+#include "cc_crypto_ctx.h"
+
+/* This file provides definitions required for hash engine drivers. */
+#ifndef CC_CONFIG_HASH_SHA_512_SUPPORTED
+#define SEP_HASH_LENGTH_WORDS 2
+#else
+#define SEP_HASH_LENGTH_WORDS 4
+#endif
+
+#ifdef BIG__ENDIAN
+#define OPAD_CURRENT_LENGTH 0x40000000, 0x00000000 , 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5 0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA1 0xF0E1D2C3, 0x76543210, 0xFEDCBA98, 0x89ABCDEF, 0x01234567
+#define HASH_LARVAL_SHA224 0XA44FFABE, 0XA78FF964, 0X11155868, 0X310BC0FF, 0X39590EF7, 0X17DD7030, 0X07D57C36, 0XD89E05C1
+#define HASH_LARVAL_SHA256 0X19CDE05B, 0XABD9831F, 0X8C68059B, 0X7F520E51, 0X3AF54FA5, 0X72F36E3C, 0X85AE67BB, 0X67E6096A
+#define HASH_LARVAL_SHA384 0X1D48B547, 0XA44FFABE, 0X0D2E0CDB, 0XA78FF964, 0X874AB48E, 0X11155868, 0X67263367, 0X310BC0FF, 0XD8EC2F15, 0X39590EF7, 0X5A015991, 0X17DD7030, 0X2A299A62, 0X07D57C36, 0X5D9DBBCB, 0XD89E05C1
+#define HASH_LARVAL_SHA512 0X19CDE05B, 0X79217E13, 0XABD9831F, 0X6BBD41FB, 0X8C68059B, 0X1F6C3E2B, 0X7F520E51, 0XD182E6AD, 0X3AF54FA5, 0XF1361D5F, 0X72F36E3C, 0X2BF894FE, 0X85AE67BB, 0X3BA7CA84, 0X67E6096A, 0X08C9BCF3
+#else
+#define OPAD_CURRENT_LENGTH 0x00000040, 0x00000000, 0x00000000, 0x00000000
+#define HASH_LARVAL_MD5 0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA1 0xC3D2E1F0, 0x10325476, 0x98BADCFE, 0xEFCDAB89, 0x67452301
+#define HASH_LARVAL_SHA224 0xbefa4fa4, 0x64f98fa7, 0x68581511, 0xffc00b31, 0xf70e5939, 0x3070dd17, 0x367cd507, 0xc1059ed8
+#define HASH_LARVAL_SHA256 0x5be0cd19, 0x1f83d9ab, 0x9b05688c, 0x510e527f, 0xa54ff53a, 0x3c6ef372, 0xbb67ae85, 0x6a09e667
+#define HASH_LARVAL_SHA384 0X47B5481D, 0XBEFA4FA4, 0XDB0C2E0D, 0X64F98FA7, 0X8EB44A87, 0X68581511, 0X67332667, 0XFFC00B31, 0X152FECD8, 0XF70E5939, 0X9159015A, 0X3070DD17, 0X629A292A, 0X367CD507, 0XCBBB9D5D, 0XC1059ED8
+#define HASH_LARVAL_SHA512 0x5be0cd19, 0x137e2179, 0x1f83d9ab, 0xfb41bd6b, 0x9b05688c, 0x2b3e6c1f, 0x510e527f, 0xade682d1, 0xa54ff53a, 0x5f1d36f1, 0x3c6ef372, 0xfe94f82b, 0xbb67ae85, 0x84caa73b, 0x6a09e667, 0xf3bcc908
+#endif
+
+enum HashConfig1Padding {
+ HASH_PADDING_DISABLED = 0,
+ HASH_PADDING_ENABLED = 1,
+ HASH_DIGEST_RESULT_LITTLE_ENDIAN = 2,
+ HASH_CONFIG1_PADDING_RESERVE32 = INT32_MAX,
+};
+
+enum HashCipherDoPadding {
+ DO_NOT_PAD = 0,
+ DO_PAD = 1,
+ HASH_CIPHER_DO_PADDING_RESERVE32 = INT32_MAX,
+};
+
+typedef struct SepHashPrivateContext {
+	/* The current length is placed at the end of the context buffer
+	 * because the hash context is used for all HMAC operations as well,
+	 * and the HMAC context includes a 64 byte K0 field. The reserved
+	 * field of struct drv_ctx_hash is 88 or 184 bytes, depending on
+	 * whether SHA512 is supported (in which case the context size is
+	 * 256 bytes). Without the reserved field, this structure is
+	 * 20 bytes when SHA512 is not supported (SEP_HASH_LENGTH_WORDS is 2)
+	 * and 28 bytes when it is (SEP_HASH_LENGTH_WORDS is 4). */
+ uint32_t reserved[(sizeof(struct drv_ctx_hash)/sizeof(uint32_t)) - SEP_HASH_LENGTH_WORDS - 3];
+ uint32_t CurrentDigestedLength[SEP_HASH_LENGTH_WORDS];
+ uint32_t KeyType;
+ uint32_t dataCompleted;
+ uint32_t hmacFinalization;
+ /* no space left */
+} SepHashPrivateContext_s;
+
+#endif /*_HASH_DEFS_H__*/
+
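The comment's size arithmetic means SepHashPrivateContext is intended to
exactly overlay struct drv_ctx_hash. A hedged sketch (not part of the
patch) of a compile-time check for that invariant, assuming the
all-uint32_t layout packs without padding as the driver expects:

#include <linux/bug.h>
#include "hash_defs.h"

/* reserved[] is sized as sizeof(struct drv_ctx_hash) in words minus the
 * SEP_HASH_LENGTH_WORDS + 3 trailing words, so the sizes must coincide. */
static inline void sep_hash_ctx_layout_check(void)
{
	BUILD_BUG_ON(sizeof(SepHashPrivateContext_s) !=
		     sizeof(struct drv_ctx_hash));
}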
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index aca837d..5144eaa 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -17,6 +17,7 @@
#include <linux/crypto.h>
#include <linux/version.h>
#include <crypto/algapi.h>
+#include <crypto/hash.h>
#include <crypto/authenc.h>
#include <crypto/scatterwalk.h>
#include <linux/dmapool.h>
@@ -27,6 +28,7 @@

#include "ssi_buffer_mgr.h"
#include "cc_lli_defs.h"
+#include "ssi_hash.h"

#define LLI_MAX_NUM_OF_DATA_ENTRIES 128
#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
@@ -281,11 +283,6 @@ static inline int ssi_buffer_mgr_render_scatterlist_to_mlli(
return 0;
}

-static int ssi_buffer_mgr_generate_mlli (
- struct device *dev,
- struct buffer_array *sg_data,
- struct mlli_params *mlli_params) __maybe_unused;
-
static int ssi_buffer_mgr_generate_mlli(
struct device *dev,
struct buffer_array *sg_data,
@@ -427,11 +424,6 @@ ssi_buffer_mgr_dma_map_sg(struct device *dev, struct scatterlist *sg, uint32_t n
return 0;
}

-static int ssi_buffer_mgr_map_scatterlist (struct device *dev,
- struct scatterlist *sg, unsigned int nbytes, int direction,
- uint32_t *nents, uint32_t max_sg_nents, uint32_t *lbytes,
- uint32_t *mapped_nents) __maybe_unused;
-
static int ssi_buffer_mgr_map_scatterlist(
struct device *dev, struct scatterlist *sg,
unsigned int nbytes, int direction,
@@ -493,6 +485,305 @@ static int ssi_buffer_mgr_map_scatterlist(
return 0;
}

+static inline int ssi_ahash_handle_curr_buf(struct device *dev,
+ struct ahash_req_ctx *areq_ctx,
+ uint8_t* curr_buff,
+ uint32_t curr_buff_cnt,
+ struct buffer_array *sg_data)
+{
+ SSI_LOG_DEBUG(" handle curr buff %x set to DLLI \n", curr_buff_cnt);
+ /* create sg for the current buffer */
+ sg_init_one(areq_ctx->buff_sg,curr_buff, curr_buff_cnt);
+ if (unlikely(dma_map_sg(dev, areq_ctx->buff_sg, 1,
+ DMA_TO_DEVICE) != 1)) {
+ SSI_LOG_ERR("dma_map_sg() "
+ "src buffer failed\n");
+ return -ENOMEM;
+ }
+ SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
+ "page_link=0x%08lX addr=%pK "
+ "offset=%u length=%u\n",
+ (unsigned long long)sg_dma_address(areq_ctx->buff_sg),
+ areq_ctx->buff_sg->page_link,
+ sg_virt(areq_ctx->buff_sg),
+ areq_ctx->buff_sg->offset,
+ areq_ctx->buff_sg->length);
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+ areq_ctx->curr_sg = areq_ctx->buff_sg;
+ areq_ctx->in_nents = 0;
+ /* prepare for case of MLLI */
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1, areq_ctx->buff_sg,
+ curr_buff_cnt, 0, false, NULL);
+ return 0;
+}
+
+int ssi_buffer_mgr_map_hash_request_final(
+ struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
+{
+ struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+ struct device *dev = &drvdata->plat_dev->dev;
+ uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 :
+ areq_ctx->buff0;
+ uint32_t *curr_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff1_cnt :
+ &areq_ctx->buff0_cnt;
+ struct mlli_params *mlli_params = &areq_ctx->mlli_params;
+ struct buffer_array sg_data;
+ struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+ uint32_t dummy = 0;
+ uint32_t mapped_nents = 0;
+
+ SSI_LOG_DEBUG(" final params : curr_buff=%pK "
+ "curr_buff_cnt=0x%X nbytes = 0x%X "
+ "src=%pK curr_index=%u\n",
+ curr_buff, *curr_buff_cnt, nbytes,
+ src, areq_ctx->buff_index);
+ /* Init the type of the dma buffer */
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+ mlli_params->curr_pool = NULL;
+ sg_data.num_of_buffers = 0;
+ areq_ctx->in_nents = 0;
+
+ if (unlikely(nbytes == 0 && *curr_buff_cnt == 0)) {
+ /* nothing to do */
+ return 0;
+ }
+
+	/* TODO: copy the data when the buffer is large enough for the operation */
+ /* map the previous buffer */
+ if (*curr_buff_cnt != 0 ) {
+ if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
+ *curr_buff_cnt, &sg_data) != 0) {
+ return -ENOMEM;
+ }
+ }
+
+ if (src && (nbytes > 0) && do_update) {
+ if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src,
+ nbytes,
+ DMA_TO_DEVICE,
+ &areq_ctx->in_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES,
+ &dummy, &mapped_nents))){
+ goto unmap_curr_buff;
+ }
+ if ( src && (mapped_nents == 1)
+ && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) {
+ memcpy(areq_ctx->buff_sg,src,
+ sizeof(struct scatterlist));
+ areq_ctx->buff_sg->length = nbytes;
+ areq_ctx->curr_sg = areq_ctx->buff_sg;
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+ } else {
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+ }
+
+ }
+
+ /*build mlli */
+ if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) {
+ mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+ /* add the src data to the sg_data */
+ ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+ areq_ctx->in_nents,
+ src,
+ nbytes, 0,
+ true, &areq_ctx->mlli_nents);
+ if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
+ mlli_params) != 0)) {
+ goto fail_unmap_din;
+ }
+ }
+ /* change the buffer index for the unmap function */
+ areq_ctx->buff_index = (areq_ctx->buff_index^1);
+ SSI_LOG_DEBUG("areq_ctx->data_dma_buf_type = %s\n",
+ GET_DMA_BUFFER_TYPE(areq_ctx->data_dma_buf_type));
+ return 0;
+
+fail_unmap_din:
+ dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE);
+
+unmap_curr_buff:
+ if (*curr_buff_cnt != 0 ) {
+ dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+ }
+ return -ENOMEM;
+}
+
+int ssi_buffer_mgr_map_hash_request_update(
+ struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size)
+{
+ struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+ struct device *dev = &drvdata->plat_dev->dev;
+ uint8_t* curr_buff = areq_ctx->buff_index ? areq_ctx->buff1 :
+ areq_ctx->buff0;
+ uint32_t *curr_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff1_cnt :
+ &areq_ctx->buff0_cnt;
+ uint8_t* next_buff = areq_ctx->buff_index ? areq_ctx->buff0 :
+ areq_ctx->buff1;
+ uint32_t *next_buff_cnt = areq_ctx->buff_index ? &areq_ctx->buff0_cnt :
+ &areq_ctx->buff1_cnt;
+ struct mlli_params *mlli_params = &areq_ctx->mlli_params;
+ unsigned int update_data_len;
+ uint32_t total_in_len = nbytes + *curr_buff_cnt;
+ struct buffer_array sg_data;
+ struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+ unsigned int swap_index = 0;
+ uint32_t dummy = 0;
+ uint32_t mapped_nents = 0;
+
+ SSI_LOG_DEBUG(" update params : curr_buff=%pK "
+ "curr_buff_cnt=0x%X nbytes=0x%X "
+ "src=%pK curr_index=%u \n",
+ curr_buff, *curr_buff_cnt, nbytes,
+ src, areq_ctx->buff_index);
+ /* Init the type of the dma buffer */
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+ mlli_params->curr_pool = NULL;
+ areq_ctx->curr_sg = NULL;
+ sg_data.num_of_buffers = 0;
+ areq_ctx->in_nents = 0;
+
+ if (unlikely(total_in_len < block_size)) {
+ SSI_LOG_DEBUG(" less than one block: curr_buff=%pK "
+ "*curr_buff_cnt=0x%X copy_to=%pK\n",
+ curr_buff, *curr_buff_cnt,
+ &curr_buff[*curr_buff_cnt]);
+ areq_ctx->in_nents =
+ ssi_buffer_mgr_get_sgl_nents(src,
+ nbytes,
+ &dummy, NULL);
+ sg_copy_to_buffer(src, areq_ctx->in_nents,
+ &curr_buff[*curr_buff_cnt], nbytes);
+ *curr_buff_cnt += nbytes;
+ return 1;
+ }
+
+ /* Calculate the residue size*/
+ *next_buff_cnt = total_in_len & (block_size - 1);
+ /* update data len */
+ update_data_len = total_in_len - *next_buff_cnt;
+
+ SSI_LOG_DEBUG(" temp length : *next_buff_cnt=0x%X "
+ "update_data_len=0x%X\n",
+ *next_buff_cnt, update_data_len);
+
+ /* Copy the new residue to next buffer */
+ if (*next_buff_cnt != 0) {
+ SSI_LOG_DEBUG(" handle residue: next buff %pK skip data %u"
+ " residue %u \n", next_buff,
+ (update_data_len - *curr_buff_cnt),
+ *next_buff_cnt);
+ ssi_buffer_mgr_copy_scatterlist_portion(next_buff, src,
+ (update_data_len -*curr_buff_cnt),
+ nbytes,SSI_SG_TO_BUF);
+ /* change the buffer index for next operation */
+ swap_index = 1;
+ }
+
+ if (*curr_buff_cnt != 0) {
+ if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
+ *curr_buff_cnt, &sg_data) != 0) {
+ return -ENOMEM;
+ }
+ /* change the buffer index for next operation */
+ swap_index = 1;
+ }
+
+ if ( update_data_len > *curr_buff_cnt ) {
+ if ( unlikely( ssi_buffer_mgr_map_scatterlist( dev,src,
+ (update_data_len -*curr_buff_cnt),
+ DMA_TO_DEVICE,
+ &areq_ctx->in_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES,
+ &dummy, &mapped_nents))){
+ goto unmap_curr_buff;
+ }
+ if ( (mapped_nents == 1)
+ && (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) ) {
+ /* only one entry in the SG and no previous data */
+ memcpy(areq_ctx->buff_sg,src,
+ sizeof(struct scatterlist));
+ areq_ctx->buff_sg->length = update_data_len;
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+ areq_ctx->curr_sg = areq_ctx->buff_sg;
+ } else {
+ areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+ }
+ }
+
+ if (unlikely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI)) {
+ mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+ /* add the src data to the sg_data */
+ ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+ areq_ctx->in_nents,
+ src,
+ (update_data_len - *curr_buff_cnt), 0,
+ true, &areq_ctx->mlli_nents);
+ if (unlikely(ssi_buffer_mgr_generate_mlli(dev, &sg_data,
+ mlli_params) != 0)) {
+ goto fail_unmap_din;
+ }
+
+ }
+ areq_ctx->buff_index = (areq_ctx->buff_index^swap_index);
+
+ return 0;
+
+fail_unmap_din:
+ dma_unmap_sg(dev, src, areq_ctx->in_nents, DMA_TO_DEVICE);
+
+unmap_curr_buff:
+ if (*curr_buff_cnt != 0 ) {
+ dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+ }
+ return -ENOMEM;
+}
+
+void ssi_buffer_mgr_unmap_hash_request(
+ struct device *dev, void *ctx, struct scatterlist *src, bool do_revert)
+{
+ struct ahash_req_ctx *areq_ctx = (struct ahash_req_ctx *)ctx;
+ uint32_t *prev_len = areq_ctx->buff_index ? &areq_ctx->buff0_cnt :
+ &areq_ctx->buff1_cnt;
+
+	/* If a pool was set, a table was
+	 * allocated and should be released */
+ if (areq_ctx->mlli_params.curr_pool != NULL) {
+ SSI_LOG_DEBUG("free MLLI buffer: dma=0x%llX virt=%pK\n",
+ (unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
+ areq_ctx->mlli_params.mlli_virt_addr);
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr);
+ dma_pool_free(areq_ctx->mlli_params.curr_pool,
+ areq_ctx->mlli_params.mlli_virt_addr,
+ areq_ctx->mlli_params.mlli_dma_addr);
+ }
+
+ if ((src) && likely(areq_ctx->in_nents != 0)) {
+ SSI_LOG_DEBUG("Unmapped sg src: virt=%pK dma=0x%llX len=0x%X\n",
+ sg_virt(src),
+ (unsigned long long)sg_dma_address(src),
+ sg_dma_len(src));
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src));
+ dma_unmap_sg(dev, src,
+ areq_ctx->in_nents, DMA_TO_DEVICE);
+ }
+
+ if (*prev_len != 0) {
+ SSI_LOG_DEBUG("Unmapped buffer: areq_ctx->buff_sg=%pK"
+ "dma=0x%llX len 0x%X\n",
+ sg_virt(areq_ctx->buff_sg),
+ (unsigned long long)sg_dma_address(areq_ctx->buff_sg),
+ sg_dma_len(areq_ctx->buff_sg));
+ dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
+ if (!do_revert) {
+ /* clean the previous data length for update operation */
+ *prev_len = 0;
+ } else {
+ areq_ctx->buff_index ^= 1;
+ }
+ }
+}
+
int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata)
{
struct buff_mgr_handle *buff_mgr_handle;
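
A note on the update path above: only whole blocks are handed to the
hardware, and the sub-block residue is carried over in the side buffer
for the next call. A standalone sketch of the same arithmetic
(split_update is a hypothetical name; block_size must be a power of two
for the mask to be valid, as it is for all the supported hashes):

#include <stdio.h>

static unsigned int split_update(unsigned int curr_cnt, unsigned int nbytes,
				 unsigned int block_size,
				 unsigned int *residue)
{
	unsigned int total = curr_cnt + nbytes;

	if (total < block_size) {	/* everything stays buffered */
		*residue = total;
		return 0;		/* nothing for the HW this round */
	}
	*residue = total & (block_size - 1);
	return total - *residue;	/* whole blocks go to the HW */
}

int main(void)
{
	unsigned int residue;
	unsigned int hashed = split_update(24, 100, 64, &residue);

	printf("hash %u bytes now, keep %u for later\n", hashed, residue);
	return 0;
}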
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index 9b74d81..ccac5ce 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -55,6 +55,12 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);

int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);

+int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);
+
+int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
+
+void ssi_buffer_mgr_unmap_hash_request(struct device *dev, void *ctx, struct scatterlist *src, bool do_revert);
+
void ssi_buffer_mgr_copy_scatterlist_portion(u8 *dest, struct scatterlist *sg, uint32_t to_skip, uint32_t end, enum ssi_sg_cpy_direct direct);

void ssi_buffer_mgr_zero_sgl(struct scatterlist *sgl, uint32_t data_len);
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index e70ad07..95e27c2 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -61,6 +61,7 @@
#include "ssi_request_mgr.h"
#include "ssi_buffer_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_hash.h"
#include "ssi_sram_mgr.h"
#include "ssi_pm.h"

@@ -218,8 +219,6 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

- new_drvdata->inflight_counter = 0;
-
dev_set_drvdata(&plat_dev->dev, new_drvdata);
/* Get device resources */
/* First CC registers space */
@@ -344,12 +343,19 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+ rc = ssi_hash_alloc(new_drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("ssi_hash_alloc failed\n");
+ goto init_cc_res_err;
+ }
+
return 0;

init_cc_res_err:
SSI_LOG_ERR("Freeing CC HW resources!\n");

if (new_drvdata != NULL) {
+ ssi_hash_free(new_drvdata);
ssi_power_mgr_fini(new_drvdata);
ssi_buffer_mgr_fini(new_drvdata);
request_mgr_fini(new_drvdata);
@@ -389,6 +395,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
struct ssi_drvdata *drvdata =
(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);

+ ssi_hash_free(drvdata);
ssi_power_mgr_fini(drvdata);
ssi_buffer_mgr_fini(drvdata);
request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index c4ccbfa..9aa5d30 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -32,6 +32,7 @@
#include <crypto/aes.h>
#include <crypto/sha.h>
#include <crypto/authenc.h>
+#include <crypto/hash.h>
#include <linux/version.h>

#ifndef INT32_MAX /* Missing in Linux kernel */
@@ -50,6 +51,7 @@
#define CC_SUPPORT_SHA DX_DEV_SHA_MAX
#include "cc_crypto_ctx.h"
#include "ssi_sysfs.h"
+#include "hash_defs.h"

#define DRV_MODULE_VERSION "3.0"

@@ -138,13 +140,13 @@ struct ssi_drvdata {
ssi_sram_addr_t mlli_sram_addr;
struct completion icache_setup_completion;
void *buff_mgr_handle;
+ void *hash_handle;
void *request_mgr_handle;
void *sram_mgr_handle;

#ifdef ENABLE_CYCLE_COUNT
cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
#endif
- uint32_t inflight_counter;

};

diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
new file mode 100644
index 0000000..cb7fde7
--- /dev/null
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -0,0 +1,2732 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/algapi.h>
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <crypto/md5.h>
+#include <crypto/internal/hash.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_request_mgr.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_sysfs.h"
+#include "ssi_hash.h"
+#include "ssi_sram_mgr.h"
+
+#define SSI_MAX_AHASH_SEQ_LEN 12
+#define SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE MAX(SSI_MAX_HASH_BLCK_SIZE, 3 * AES_BLOCK_SIZE)
+
+struct ssi_hash_handle {
+ ssi_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/
+ ssi_sram_addr_t larval_digest_sram_addr; /* const value in SRAM */
+ struct list_head hash_list;
+ struct completion init_comp;
+};
+
+static const uint32_t digest_len_init[] = {
+ 0x00000040, 0x00000000, 0x00000000, 0x00000000 };
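+/* MD5's initial state words are identical to SHA1_H0..SHA1_H3, which is
+ * why the SHA-1 constants are reused below, in the same reversed word
+ * order as the other larval digest arrays. */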
+static const uint32_t md5_init[] = {
+ SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha1_init[] = {
+ SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha224_init[] = {
+ SHA224_H7, SHA224_H6, SHA224_H5, SHA224_H4,
+ SHA224_H3, SHA224_H2, SHA224_H1, SHA224_H0 };
+static const uint32_t sha256_init[] = {
+ SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
+ SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
+#if (DX_DEV_SHA_MAX > 256)
+static const uint32_t digest_len_sha512_init[] = {
+ 0x00000080, 0x00000000, 0x00000000, 0x00000000 };
+static const uint64_t sha384_init[] = {
+ SHA384_H7, SHA384_H6, SHA384_H5, SHA384_H4,
+ SHA384_H3, SHA384_H2, SHA384_H1, SHA384_H0 };
+static const uint64_t sha512_init[] = {
+ SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4,
+ SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
+#endif
+
+static void ssi_hash_create_xcbc_setup(
+ struct ahash_request *areq,
+ HwDesc_s desc[],
+ unsigned int *seq_size);
+
+static void ssi_hash_create_cmac_setup(struct ahash_request *areq,
+ HwDesc_s desc[],
+ unsigned int *seq_size);
+
+struct ssi_hash_alg {
+ struct list_head entry;
+ bool synchronize;
+ int hash_mode;
+ int hw_mode;
+ int inter_digestsize;
+ struct ssi_drvdata *drvdata;
+ union {
+ struct ahash_alg ahash_alg;
+ struct shash_alg shash_alg;
+ };
+};
+
+
+struct hash_key_req_ctx {
+ uint32_t keylen;
+ dma_addr_t key_dma_addr;
+};
+
+/* hash per-session context */
+struct ssi_hash_ctx {
+ struct ssi_drvdata *drvdata;
+	/* Holds the original digest: the digest after "setkey" for HMAC,
+	 * or the initial digest for plain hash. */
+ uint8_t digest_buff[SSI_MAX_HASH_DIGEST_SIZE] ____cacheline_aligned;
+ uint8_t opad_tmp_keys_buff[SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE] ____cacheline_aligned;
+ dma_addr_t opad_tmp_keys_dma_addr ____cacheline_aligned;
+ dma_addr_t digest_buff_dma_addr;
+	/* used for HMAC with a key larger than the mode's block size */
+ struct hash_key_req_ctx key_params;
+ int hash_mode;
+ int hw_mode;
+ int inter_digestsize;
+ struct completion setkey_comp;
+ bool is_hmac;
+};
+
+static const struct crypto_type crypto_shash_type;
+
+static void ssi_hash_create_data_desc(
+ struct ahash_req_ctx *areq_ctx,
+ struct ssi_hash_ctx *ctx,
+ unsigned int flow_mode,HwDesc_s desc[],
+ bool is_not_last_data,
+ unsigned int *seq_size);
+
+static inline void ssi_set_hash_endianity(uint32_t mode, HwDesc_s *desc)
+{
+ if (unlikely((mode == DRV_HASH_MD5) ||
+ (mode == DRV_HASH_SHA384) ||
+ (mode == DRV_HASH_SHA512))) {
+ HW_DESC_SET_BYTES_SWAP(desc, 1);
+ } else {
+ HW_DESC_SET_CIPHER_CONFIG0(desc, HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ }
+}
+
+static int ssi_hash_map_result(struct device *dev,
+ struct ahash_req_ctx *state,
+ unsigned int digestsize)
+{
+ state->digest_result_dma_addr =
+ dma_map_single(dev, (void *)state->digest_result_buff,
+ digestsize,
+ DMA_BIDIRECTIONAL);
+ if (unlikely(dma_mapping_error(dev, state->digest_result_dma_addr))) {
+ SSI_LOG_ERR("Mapping digest result buffer %u B for DMA failed\n",
+ digestsize);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr,
+ digestsize);
+ SSI_LOG_DEBUG("Mapped digest result buffer %u B "
+ "at va=%pK to dma=0x%llX\n",
+ digestsize, state->digest_result_buff,
+ (unsigned long long)state->digest_result_dma_addr);
+
+ return 0;
+}
+
+static int ssi_hash_map_request(struct device *dev,
+ struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx)
+{
+ bool is_hmac = ctx->is_hmac;
+ ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr(
+ ctx->drvdata, ctx->hash_mode);
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc;
+ int rc = -ENOMEM;
+
+ state->buff0 = kzalloc(SSI_MAX_HASH_BLCK_SIZE ,GFP_KERNEL|GFP_DMA);
+ if (!state->buff0) {
+ SSI_LOG_ERR("Allocating buff0 in context failed\n");
+ goto fail0;
+ }
+ state->buff1 = kzalloc(SSI_MAX_HASH_BLCK_SIZE ,GFP_KERNEL|GFP_DMA);
+ if (!state->buff1) {
+ SSI_LOG_ERR("Allocating buff1 in context failed\n");
+ goto fail_buff0;
+ }
+ state->digest_result_buff = kzalloc(SSI_MAX_HASH_DIGEST_SIZE ,GFP_KERNEL|GFP_DMA);
+ if (!state->digest_result_buff) {
+ SSI_LOG_ERR("Allocating digest_result_buff in context failed\n");
+ goto fail_buff1;
+ }
+ state->digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA);
+ if (!state->digest_buff) {
+ SSI_LOG_ERR("Allocating digest-buffer in context failed\n");
+ goto fail_digest_result_buff;
+ }
+
+ SSI_LOG_DEBUG("Allocated digest-buffer in context ctx->digest_buff=@%p\n", state->digest_buff);
+ if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) {
+ state->digest_bytes_len = kzalloc(HASH_LEN_SIZE, GFP_KERNEL|GFP_DMA);
+ if (!state->digest_bytes_len) {
+ SSI_LOG_ERR("Allocating digest-bytes-len in context failed\n");
+ goto fail1;
+ }
+ SSI_LOG_DEBUG("Allocated digest-bytes-len in context state->>digest_bytes_len=@%p\n", state->digest_bytes_len);
+ } else {
+ state->digest_bytes_len = NULL;
+ }
+
+ state->opad_digest_buff = kzalloc(ctx->inter_digestsize, GFP_KERNEL|GFP_DMA);
+ if (!state->opad_digest_buff) {
+ SSI_LOG_ERR("Allocating opad-digest-buffer in context failed\n");
+ goto fail2;
+ }
+ SSI_LOG_DEBUG("Allocated opad-digest-buffer in context state->digest_bytes_len=@%p\n", state->opad_digest_buff);
+
+ state->digest_buff_dma_addr = dma_map_single(dev, (void *)state->digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, state->digest_buff_dma_addr)) {
+ SSI_LOG_ERR("Mapping digest len %d B at va=%pK for DMA failed\n",
+ ctx->inter_digestsize, state->digest_buff);
+ goto fail3;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr,
+ ctx->inter_digestsize);
+ SSI_LOG_DEBUG("Mapped digest %d B at va=%pK to dma=0x%llX\n",
+ ctx->inter_digestsize, state->digest_buff,
+ (unsigned long long)state->digest_buff_dma_addr);
+
+ if (is_hmac) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr);
+ dma_sync_single_for_cpu(dev, ctx->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr,
+ ctx->inter_digestsize);
+ if ((ctx->hw_mode == DRV_CIPHER_XCBC_MAC) || (ctx->hw_mode == DRV_CIPHER_CMAC)) {
+ memset(state->digest_buff, 0, ctx->inter_digestsize);
+ } else { /*sha*/
+ memcpy(state->digest_buff, ctx->digest_buff, ctx->inter_digestsize);
+#if (DX_DEV_SHA_MAX > 256)
+ if (unlikely((ctx->hash_mode == DRV_HASH_SHA512) || (ctx->hash_mode == DRV_HASH_SHA384))) {
+ memcpy(state->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE);
+ } else {
+ memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+ }
+#else
+ memcpy(state->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+#endif
+ }
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+ dma_sync_single_for_device(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr,
+ ctx->inter_digestsize);
+
+ if (ctx->hash_mode != DRV_HASH_NULL) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+ dma_sync_single_for_cpu(dev, ctx->opad_tmp_keys_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ memcpy(state->opad_digest_buff, ctx->opad_tmp_keys_buff, ctx->inter_digestsize);
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr,
+ ctx->inter_digestsize);
+ }
+ } else { /*hash*/
+		/* Copy the initial digests for the hash flow. The SRAM contains
+		 * the initial digests in the expected order for all SHA* */
+ HW_DESC_INIT(&desc);
+ HW_DESC_SET_DIN_SRAM(&desc, larval_digest_addr, ctx->inter_digestsize);
+ HW_DESC_SET_DOUT_DLLI(&desc, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc, BYPASS);
+
+ rc = send_request(ctx->drvdata, &ssi_req, &desc, 1, 0);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ goto fail4;
+ }
+ }
+
+ if (ctx->hw_mode != DRV_CIPHER_XCBC_MAC) {
+ state->digest_bytes_len_dma_addr = dma_map_single(dev, (void *)state->digest_bytes_len, HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, state->digest_bytes_len_dma_addr)) {
+ SSI_LOG_ERR("Mapping digest len %u B at va=%pK for DMA failed\n",
+ HASH_LEN_SIZE, state->digest_bytes_len);
+ goto fail4;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr,
+ HASH_LEN_SIZE);
+ SSI_LOG_DEBUG("Mapped digest len %u B at va=%pK to dma=0x%llX\n",
+ HASH_LEN_SIZE, state->digest_bytes_len,
+ (unsigned long long)state->digest_bytes_len_dma_addr);
+ } else {
+ state->digest_bytes_len_dma_addr = 0;
+ }
+
+ if (is_hmac && ctx->hash_mode != DRV_HASH_NULL) {
+ state->opad_digest_dma_addr = dma_map_single(dev, (void *)state->opad_digest_buff, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, state->opad_digest_dma_addr)) {
+ SSI_LOG_ERR("Mapping opad digest %d B at va=%pK for DMA failed\n",
+ ctx->inter_digestsize, state->opad_digest_buff);
+ goto fail5;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr,
+ ctx->inter_digestsize);
+ SSI_LOG_DEBUG("Mapped opad digest %d B at va=%pK to dma=0x%llX\n",
+ ctx->inter_digestsize, state->opad_digest_buff,
+ (unsigned long long)state->opad_digest_dma_addr);
+ } else {
+ state->opad_digest_dma_addr = 0;
+ }
+ state->buff0_cnt = 0;
+ state->buff1_cnt = 0;
+ state->buff_index = 0;
+ state->mlli_params.curr_pool = NULL;
+
+ return 0;
+
+fail5:
+ if (state->digest_bytes_len_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr);
+ dma_unmap_single(dev, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+ state->digest_bytes_len_dma_addr = 0;
+ }
+fail4:
+ if (state->digest_buff_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+ dma_unmap_single(dev, state->digest_buff_dma_addr, ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ state->digest_buff_dma_addr = 0;
+ }
+fail3:
+ if (state->opad_digest_buff != NULL)
+ kfree(state->opad_digest_buff);
+fail2:
+ if (state->digest_bytes_len != NULL)
+ kfree(state->digest_bytes_len);
+fail1:
+ if (state->digest_buff != NULL)
+ kfree(state->digest_buff);
+fail_digest_result_buff:
+ if (state->digest_result_buff != NULL) {
+ kfree(state->digest_result_buff);
+ state->digest_result_buff = NULL;
+ }
+fail_buff1:
+ if (state->buff1 != NULL) {
+ kfree(state->buff1);
+ state->buff1 = NULL;
+ }
+fail_buff0:
+ if (state->buff0 != NULL) {
+ kfree(state->buff0);
+ state->buff0 = NULL;
+ }
+fail0:
+ return rc;
+}
+
+static void ssi_hash_unmap_request(struct device *dev,
+ struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx)
+{
+ if (state->digest_buff_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_buff_dma_addr);
+ dma_unmap_single(dev, state->digest_buff_dma_addr,
+ ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped digest-buffer: digest_buff_dma_addr=0x%llX\n",
+ (unsigned long long)state->digest_buff_dma_addr);
+ state->digest_buff_dma_addr = 0;
+ }
+ if (state->digest_bytes_len_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_bytes_len_dma_addr);
+ dma_unmap_single(dev, state->digest_bytes_len_dma_addr,
+ HASH_LEN_SIZE, DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped digest-bytes-len buffer: digest_bytes_len_dma_addr=0x%llX\n",
+ (unsigned long long)state->digest_bytes_len_dma_addr);
+ state->digest_bytes_len_dma_addr = 0;
+ }
+ if (state->opad_digest_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->opad_digest_dma_addr);
+ dma_unmap_single(dev, state->opad_digest_dma_addr,
+ ctx->inter_digestsize, DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped opad-digest: opad_digest_dma_addr=0x%llX\n",
+ (unsigned long long)state->opad_digest_dma_addr);
+ state->opad_digest_dma_addr = 0;
+ }
+
+ if (state->opad_digest_buff != NULL)
+ kfree(state->opad_digest_buff);
+ if (state->digest_bytes_len != NULL)
+ kfree(state->digest_bytes_len);
+ if (state->digest_buff != NULL)
+ kfree(state->digest_buff);
+ if (state->digest_result_buff != NULL)
+ kfree(state->digest_result_buff);
+ if (state->buff1 != NULL)
+ kfree(state->buff1);
+ if (state->buff0 != NULL)
+ kfree(state->buff0);
+}
+
+static void ssi_hash_unmap_result(struct device *dev,
+ struct ahash_req_ctx *state,
+ unsigned int digestsize, u8 *result)
+{
+ if (state->digest_result_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(state->digest_result_dma_addr);
+ dma_unmap_single(dev,
+ state->digest_result_dma_addr,
+ digestsize,
+ DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("unmpa digest result buffer "
+ "va (%pK) pa (%llx) len %u\n",
+ state->digest_result_buff,
+ (unsigned long long)state->digest_result_dma_addr,
+ digestsize);
+ memcpy(result,
+ state->digest_result_buff,
+ digestsize);
+ }
+ state->digest_result_dma_addr = 0;
+}
+
+static void ssi_hash_update_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+ struct ahash_request *req = (struct ahash_request *)ssi_req;
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+
+ SSI_LOG_DEBUG("req=%pK\n", req);
+
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+ req->base.complete(&req->base, 0);
+}
+
+static void ssi_hash_digest_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+ struct ahash_request *req = (struct ahash_request *)ssi_req;
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ SSI_LOG_DEBUG("req=%pK\n", req);
+
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+ ssi_hash_unmap_result(dev, state, digestsize, req->result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ req->base.complete(&req->base, 0);
+}
+
+static void ssi_hash_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+ struct ahash_request *req = (struct ahash_request *)ssi_req;
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ SSI_LOG_DEBUG("req=%pK\n", req);
+
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, false);
+ ssi_hash_unmap_result(dev, state, digestsize, req->result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ req->base.complete(&req->base, 0);
+}
+
+static int ssi_hash_digest(struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx,
+ unsigned int digestsize,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *result,
+ void *async_req)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ bool is_hmac = ctx->is_hmac;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ ssi_sram_addr_t larval_digest_addr = ssi_ahash_get_larval_digest_sram_addr(
+ ctx->drvdata, ctx->hash_mode);
+ int idx = 0;
+ int rc = 0;
+
+
+ SSI_LOG_DEBUG("===== %s-digest (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
+
+ if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
+ SSI_LOG_ERR("map_ahash_source() failed\n");
+ return -ENOMEM;
+ }
+
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+
+ if (async_req) {
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_digest_complete;
+ ssi_req.user_arg = (void *)async_req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+ }
+
+ /* If HMAC then load hash IPAD xor key, if HASH then load initial digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ if (is_hmac) {
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+ } else {
+ HW_DESC_SET_DIN_SRAM(&desc[idx], larval_digest_addr, ctx->inter_digestsize);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+
+ if (is_hmac) {
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+ } else {
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ if (likely(nbytes != 0)) {
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ } else {
+ HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+ }
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
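+	/* For HMAC, the inner hash H((K ^ ipad) || m) computed above is run
+	 * through a second pass keyed with the opad-xored state, implementing
+	 * HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m)). Both xored key
+	 * states were precomputed at setkey time. */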
+ if (is_hmac) {
+ /* HW last hash block padding (aka. "DO_PAD") */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+ HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+ idx++;
+
+ /* store the hash digest result in the context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* Loading hash opad xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req? 1:0); /*TODO*/
+ if (async_req) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ idx++;
+
+ if (async_req) {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ }
+ } else {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (rc != 0) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ } else {
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+ }
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ }
+ return rc;
+}
+
+static int ssi_hash_update(struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx,
+ unsigned int block_size,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ void *async_req)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ uint32_t idx = 0;
+ int rc;
+
+ SSI_LOG_DEBUG("===== %s-update (%d) ====\n", ctx->is_hmac ?
+ "hmac":"hash", nbytes);
+
+ if (nbytes == 0) {
+ /* no real updates required */
+ return 0;
+ }
+
+ if (unlikely(rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state, src, nbytes, block_size))) {
+ if (rc == 1) {
+ SSI_LOG_DEBUG(" data size not require HW update %x\n",
+ nbytes);
+ /* No hardware updates are required */
+ return 0;
+ }
+ SSI_LOG_ERR("map_ahash_request_update() failed\n");
+ return -ENOMEM;
+ }
+
+ if (async_req) {
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_update_complete;
+ ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+ }
+
+ /* Restore hash digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+ /* Restore hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+ /* store the hash digest result in context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* store current hash length in context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, async_req? 1:0);
+ if (async_req) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+ idx++;
+
+ if (async_req) {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ }
+ } else {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (rc != 0) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ } else {
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+ }
+ }
+ return rc;
+}
+
+static int ssi_hash_finup(struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx,
+ unsigned int digestsize,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ u8 *result,
+ void *async_req)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ bool is_hmac = ctx->is_hmac;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ int idx = 0;
+ int rc;
+
+ SSI_LOG_DEBUG("===== %s-finup (%d) ====\n", is_hmac?"hmac":"hash", nbytes);
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src , nbytes, 1) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ if (async_req) {
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_complete;
+ ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+ }
+
+ /* Restore hash digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Restore hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+ if (is_hmac) {
+ /* Store the hash digest result in the context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+ ssi_set_hash_endianity(ctx->hash_mode,&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* Loading hash OPAD xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Perform HASH update on last digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req ? 1 : 0); /*TODO*/
+ if (async_req) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ idx++;
+
+ if (async_req) {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ }
+ } else {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (rc != 0) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ } else {
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ }
+ }
+ return rc;
+}
+
+static int ssi_hash_final(struct ahash_req_ctx *state,
+ struct ssi_hash_ctx *ctx,
+ unsigned int digestsize,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ u8 *result,
+ void *async_req)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ bool is_hmac = ctx->is_hmac;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ int idx = 0;
+ int rc;
+
+	SSI_LOG_DEBUG("===== %s-final (%d) ====\n", is_hmac ? "hmac" : "hash", nbytes);
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ if (async_req) {
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_complete;
+ ssi_req.user_arg = async_req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+ }
+
+ /* Restore hash digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Restore hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+
+ /* "DO-PAD" must be enabled only when writing current length to HW */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ idx++;
+
+ if (is_hmac) {
+ /* Store the hash digest result in the context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, digestsize, NS_BIT, 0);
+		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* Loading hash OPAD xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->opad_digest_dma_addr, ctx->inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Perform HASH update on last digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, async_req ? 1 : 0);
+ if (async_req) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ idx++;
+
+ if (async_req) {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ }
+ } else {
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (rc != 0) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ } else {
+ ssi_buffer_mgr_unmap_hash_request(dev, state, src, false);
+ ssi_hash_unmap_result(dev, state, digestsize, result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ }
+ }
+ return rc;
+}
+
+static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ state->xcbc_count = 0;
+
+ ssi_hash_map_request(dev, state, ctx);
+
+ return 0;
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_hash_export(struct ssi_hash_ctx *ctx, void *out)
+{
+ memcpy(out, ctx, sizeof(struct ssi_hash_ctx));
+ return 0;
+}
+
+static int ssi_hash_import(struct ssi_hash_ctx *ctx, const void *in)
+{
+ memcpy(ctx, in, sizeof(struct ssi_hash_ctx));
+ return 0;
+}
+#endif
+
+static int ssi_hash_setkey(void *hash,
+ const u8 *key,
+ unsigned int keylen,
+ bool synchronize)
+{
+	unsigned int hmac_pad_const[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
+ struct ssi_crypto_req ssi_req = {};
+ struct ssi_hash_ctx *ctx = NULL;
+ int blocksize = 0;
+ int digestsize = 0;
+ int i, idx = 0, rc = 0;
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ ssi_sram_addr_t larval_addr;
+
+	SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d\n", keylen);
+
+ if (synchronize) {
+ ctx = crypto_shash_ctx(((struct crypto_shash *)hash));
+ blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base);
+ digestsize = crypto_shash_digestsize(((struct crypto_shash *)hash));
+ } else {
+ ctx = crypto_ahash_ctx(((struct crypto_ahash *)hash));
+ blocksize = crypto_tfm_alg_blocksize(&((struct crypto_ahash *)hash)->base);
+ digestsize = crypto_ahash_digestsize(((struct crypto_ahash *)hash));
+ }
+
+ larval_addr = ssi_ahash_get_larval_digest_sram_addr(
+ ctx->drvdata, ctx->hash_mode);
+
+	/* A zero keylen selects the plain HASH flow; any non-zero keylen
+	 * selects the HMAC flow.
+	 */
+ ctx->key_params.keylen = keylen;
+ ctx->key_params.key_dma_addr = 0;
+ ctx->is_hmac = true;
+
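+	/*
+	 * Per the HMAC definition, a key longer than the block size is
+	 * first hashed down to digestsize bytes; either way the key is then
+	 * zero-padded to a full block before the ipad/opad XOR below.
+	 */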
+ if (keylen != 0) {
+ ctx->key_params.key_dma_addr = dma_map_single(
+ &ctx->drvdata->plat_dev->dev,
+ (void *)key,
+ keylen, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev,
+ ctx->key_params.key_dma_addr))) {
+			SSI_LOG_ERR("Mapping key va=0x%p len=%u for DMA failed\n",
+				    key, keylen);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen);
+		SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+			      (unsigned long long)ctx->key_params.key_dma_addr,
+			      ctx->key_params.keylen);
+
+ if (keylen > blocksize) {
+ /* Load hash initial state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr,
+ ctx->inter_digestsize);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->key_params.key_dma_addr,
+ keylen, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* Get hashed key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], ctx->opad_tmp_keys_dma_addr,
+ digestsize, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+			ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (ctx->opad_tmp_keys_dma_addr + digestsize),
+ (blocksize - digestsize),
+ NS_BIT, 0);
+ idx++;
+ } else {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->key_params.key_dma_addr,
+ keylen, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (ctx->opad_tmp_keys_dma_addr),
+ keylen, NS_BIT, 0);
+ idx++;
+
+ if ((blocksize - keylen) != 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - keylen));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (ctx->opad_tmp_keys_dma_addr + keylen),
+ (blocksize - keylen),
+ NS_BIT, 0);
+ idx++;
+ }
+ }
+ } else {
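+		/* Zero-length key: use an all-zero block as the HMAC key */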
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, blocksize);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (ctx->opad_tmp_keys_dma_addr),
+ blocksize,
+ NS_BIT, 0);
+ idx++;
+ }
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ goto out;
+ }
+
+	/* Calculate the derived HMAC ipad/opad XOR key states */
+ for (idx = 0, i = 0; i < 2; i++) {
+ /* Load hash initial state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr,
+ ctx->inter_digestsize);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Prepare ipad key */
+ HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_XOR_VAL(&desc[idx], hmac_pad_const[i]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->opad_tmp_keys_dma_addr,
+ blocksize, NS_BIT);
+		HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+		/* Get the IPAD/OPAD xor key (Note: IPAD is the initial
+		 * digest of the first HASH "update" state)
+		 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		if (i > 0) { /* Not first iteration */
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					      ctx->opad_tmp_keys_dma_addr,
+					      ctx->inter_digestsize,
+					      NS_BIT, 0);
+		} else { /* First iteration */
+			HW_DESC_SET_DOUT_DLLI(&desc[idx],
+					      ctx->digest_buff_dma_addr,
+					      ctx->inter_digestsize,
+					      NS_BIT, 0);
+		}
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+ }
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+
+out:
+ if (rc != 0) {
+ if (synchronize) {
+ crypto_shash_set_flags((struct crypto_shash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ } else {
+ crypto_ahash_set_flags((struct crypto_ahash *)hash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ }
+ }
+
+ if (ctx->key_params.key_dma_addr) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr);
+ dma_unmap_single(&ctx->drvdata->plat_dev->dev,
+ ctx->key_params.key_dma_addr,
+ ctx->key_params.keylen, DMA_TO_DEVICE);
+ SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+ (unsigned long long)ctx->key_params.key_dma_addr,
+ ctx->key_params.keylen);
+ }
+ return rc;
+}
+
+static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
+ const u8 *key, unsigned int keylen)
+{
+ struct ssi_crypto_req ssi_req = {};
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ int idx = 0, rc = 0;
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+
+ SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+
+ switch (keylen) {
+ case AES_KEYSIZE_128:
+ case AES_KEYSIZE_192:
+ case AES_KEYSIZE_256:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ctx->key_params.keylen = keylen;
+
+ ctx->key_params.key_dma_addr = dma_map_single(
+ &ctx->drvdata->plat_dev->dev,
+ (void *)key,
+ keylen, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(&ctx->drvdata->plat_dev->dev,
+ ctx->key_params.key_dma_addr))) {
+		SSI_LOG_ERR("Mapping key va=0x%p len=%u for DMA failed\n",
+			    key, keylen);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr, keylen);
+	SSI_LOG_DEBUG("mapping key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+		      (unsigned long long)ctx->key_params.key_dma_addr,
+		      ctx->key_params.keylen);
+
+ ctx->is_hmac = true;
+ /* 1. Load the AES key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->key_params.key_dma_addr, keylen, NS_BIT);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], keylen);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
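+	/*
+	 * 2. Derive K1/K2/K3 by ECB-encrypting the constants 0x01..01,
+	 * 0x02..02 and 0x03..03 with the user key, as specified for
+	 * AES-XCBC-MAC (RFC 3566).
+	 */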
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x01010101, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr +
+ XCBC_MAC_K1_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x02020202, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr +
+ XCBC_MAC_K2_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x03030303, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], (ctx->opad_tmp_keys_dma_addr +
+ XCBC_MAC_K3_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
+ idx++;
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+
+ if (rc != 0)
+ crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->key_params.key_dma_addr);
+ dma_unmap_single(&ctx->drvdata->plat_dev->dev,
+ ctx->key_params.key_dma_addr,
+ ctx->key_params.keylen, DMA_TO_DEVICE);
+ SSI_LOG_DEBUG("Unmapped key-buffer: key_dma_addr=0x%llX keylen=%u\n",
+ (unsigned long long)ctx->key_params.key_dma_addr,
+ ctx->key_params.keylen);
+
+ return rc;
+}
+#if SSI_CC_HAS_CMAC
+static int ssi_cmac_setkey(struct crypto_ahash *ahash,
+ const u8 *key, unsigned int keylen)
+{
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	DECL_CYCLE_COUNT_RESOURCES;
+
+	SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+
+ ctx->is_hmac = true;
+
+ switch (keylen) {
+ case AES_KEYSIZE_128:
+ case AES_KEYSIZE_192:
+ case AES_KEYSIZE_256:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ctx->key_params.keylen = keylen;
+
+ /* STAT_PHASE_1: Copy key to ctx */
+ START_CYCLE_COUNT();
+
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+ dma_sync_single_for_cpu(&ctx->drvdata->plat_dev->dev,
+ ctx->opad_tmp_keys_dma_addr,
+ keylen, DMA_TO_DEVICE);
+
+ memcpy(ctx->opad_tmp_keys_buff, key, keylen);
+	if (keylen == AES_KEYSIZE_192) {
+		/* Zero-pad a 192-bit key up to the HW maximum key size */
+		memset(ctx->opad_tmp_keys_buff + AES_KEYSIZE_192, 0,
+		       CC_AES_KEY_SIZE_MAX - AES_KEYSIZE_192);
+	}
+
+ dma_sync_single_for_device(&ctx->drvdata->plat_dev->dev,
+ ctx->opad_tmp_keys_dma_addr,
+ keylen, DMA_TO_DEVICE);
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr, keylen);
+
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+
+ return 0;
+}
+#endif
+
+static void ssi_hash_free_ctx(struct ssi_hash_ctx *ctx)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+
+ if (ctx->digest_buff_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr);
+ dma_unmap_single(dev, ctx->digest_buff_dma_addr,
+ sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped digest-buffer: "
+ "digest_buff_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->digest_buff_dma_addr);
+ ctx->digest_buff_dma_addr = 0;
+ }
+ if (ctx->opad_tmp_keys_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr);
+ dma_unmap_single(dev, ctx->opad_tmp_keys_dma_addr,
+ sizeof(ctx->opad_tmp_keys_buff),
+ DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped opad-digest: "
+ "opad_tmp_keys_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->opad_tmp_keys_dma_addr);
+ ctx->opad_tmp_keys_dma_addr = 0;
+ }
+
+ ctx->key_params.keylen = 0;
+}
+
+static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx)
+{
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+
+ ctx->key_params.keylen = 0;
+
+ ctx->digest_buff_dma_addr = dma_map_single(dev, (void *)ctx->digest_buff, sizeof(ctx->digest_buff), DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, ctx->digest_buff_dma_addr)) {
+ SSI_LOG_ERR("Mapping digest len %zu B at va=%pK for DMA failed\n",
+ sizeof(ctx->digest_buff), ctx->digest_buff);
+ goto fail;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->digest_buff_dma_addr,
+ sizeof(ctx->digest_buff));
+ SSI_LOG_DEBUG("Mapped digest %zu B at va=%pK to dma=0x%llX\n",
+ sizeof(ctx->digest_buff), ctx->digest_buff,
+ (unsigned long long)ctx->digest_buff_dma_addr);
+
+ ctx->opad_tmp_keys_dma_addr = dma_map_single(dev, (void *)ctx->opad_tmp_keys_buff, sizeof(ctx->opad_tmp_keys_buff), DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, ctx->opad_tmp_keys_dma_addr)) {
+ SSI_LOG_ERR("Mapping opad digest %zu B at va=%pK for DMA failed\n",
+ sizeof(ctx->opad_tmp_keys_buff),
+ ctx->opad_tmp_keys_buff);
+ goto fail;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->opad_tmp_keys_dma_addr,
+ sizeof(ctx->opad_tmp_keys_buff));
+ SSI_LOG_DEBUG("Mapped opad_tmp_keys %zu B at va=%pK to dma=0x%llX\n",
+ sizeof(ctx->opad_tmp_keys_buff), ctx->opad_tmp_keys_buff,
+ (unsigned long long)ctx->opad_tmp_keys_dma_addr);
+
+ ctx->is_hmac = false;
+ return 0;
+
+fail:
+ ssi_hash_free_ctx(ctx);
+ return -ENOMEM;
+}
+
+static int ssi_shash_cra_init(struct crypto_tfm *tfm)
+{
+ struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct shash_alg *shash_alg =
+ container_of(tfm->__crt_alg, struct shash_alg, base);
+ struct ssi_hash_alg *ssi_alg =
+ container_of(shash_alg, struct ssi_hash_alg, shash_alg);
+
+ ctx->hash_mode = ssi_alg->hash_mode;
+ ctx->hw_mode = ssi_alg->hw_mode;
+ ctx->inter_digestsize = ssi_alg->inter_digestsize;
+ ctx->drvdata = ssi_alg->drvdata;
+
+ return ssi_hash_alloc_ctx(ctx);
+}
+
+static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
+{
+ struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct hash_alg_common *hash_alg_common =
+ container_of(tfm->__crt_alg, struct hash_alg_common, base);
+ struct ahash_alg *ahash_alg =
+ container_of(hash_alg_common, struct ahash_alg, halg);
+ struct ssi_hash_alg *ssi_alg =
+ container_of(ahash_alg, struct ssi_hash_alg, ahash_alg);
+
+ crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+ sizeof(struct ahash_req_ctx));
+
+ ctx->hash_mode = ssi_alg->hash_mode;
+ ctx->hw_mode = ssi_alg->hw_mode;
+ ctx->inter_digestsize = ssi_alg->inter_digestsize;
+ ctx->drvdata = ssi_alg->drvdata;
+
+ return ssi_hash_alloc_ctx(ctx);
+}
+
+static void ssi_hash_cra_exit(struct crypto_tfm *tfm)
+{
+ struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	SSI_LOG_DEBUG("ssi_hash_cra_exit\n");
+ ssi_hash_free_ctx(ctx);
+}
+
+static int ssi_mac_update(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ int rc;
+	int idx = 0;
+
+ if (req->nbytes == 0) {
+ /* no real updates required */
+ return 0;
+ }
+
+ state->xcbc_count++;
+
+	rc = ssi_buffer_mgr_map_hash_request_update(ctx->drvdata, state,
+						    req->src, req->nbytes,
+						    block_size);
+	if (unlikely(rc)) {
+		if (rc == 1) {
+			SSI_LOG_DEBUG("Data size %x does not require HW update\n",
+				      req->nbytes);
+			/* No hardware updates are required */
+			return 0;
+		}
+		SSI_LOG_ERR("map_ahash_request_update() failed\n");
+		return -ENOMEM;
+	}
+
+ if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+ ssi_hash_create_xcbc_setup(req, desc, &idx);
+ } else {
+ ssi_hash_create_cmac_setup(req, desc, &idx);
+ }
+
+ ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx);
+
+ /* store the hash digest result in context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, ctx->inter_digestsize, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_update_complete;
+ ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+ }
+ return rc;
+}
+
+static int ssi_mac_final(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ int idx = 0;
+ int rc = 0;
+	uint32_t key_size, key_len;
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ uint32_t rem_cnt = state->buff_index ? state->buff1_cnt :
+ state->buff0_cnt;
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		key_size = CC_AES_128_BIT_KEY_SIZE;
+		key_len = CC_AES_128_BIT_KEY_SIZE;
+	} else {
+		key_size = (ctx->key_params.keylen == AES_KEYSIZE_192) ?
+			AES_MAX_KEY_SIZE : ctx->key_params.keylen;
+		key_len = ctx->key_params.keylen;
+	}
+
+	SSI_LOG_DEBUG("===== final xcbc remainder (%d) ====\n", rem_cnt);
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 0) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_complete;
+ ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
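+	/*
+	 * If data was processed and ended exactly on a block boundary, the
+	 * last block was already absorbed by update(); decrypt the MAC
+	 * state back to previous_state XOR M[n] so the final block can be
+	 * re-processed with the proper last-block handling.
+	 */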
+ if (state->xcbc_count && (rem_cnt == 0)) {
+ /* Load key for ECB decryption */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_DECRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ (ctx->opad_tmp_keys_dma_addr +
+ XCBC_MAC_K1_OFFSET),
+				     key_size, NS_BIT);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Initiate decryption of block state to previous block_state-XOR-M[n] */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* Memory Barrier: wait for axi write to complete */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+ }
+
+ if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+ ssi_hash_create_xcbc_setup(req, desc, &idx);
+ } else {
+ ssi_hash_create_cmac_setup(req, desc, &idx);
+ }
+
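+	/*
+	 * Three cases: no data at all (use the HW zero-size MAC mode), a
+	 * partial final block (MAC the remainder), or block-aligned data
+	 * (feed a zero block against the decrypted state from above).
+	 */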
+ if (state->xcbc_count == 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ } else if (rem_cnt > 0) {
+ ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+ } else {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x00, CC_AES_BLOCK_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ idx++;
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, req->result);
+ }
+ return rc;
+}
+
+static int ssi_mac_finup(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+ int idx = 0;
+ int rc = 0;
+ uint32_t key_len = 0;
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+	SSI_LOG_DEBUG("===== finup xcbc (%d) ====\n", req->nbytes);
+
+ if (state->xcbc_count > 0 && req->nbytes == 0) {
+		SSI_LOG_DEBUG("No data to update. Calling ssi_mac_final\n");
+ return ssi_mac_final(req);
+ }
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_complete;
+ ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+ if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+ key_len = CC_AES_128_BIT_KEY_SIZE;
+ ssi_hash_create_xcbc_setup(req, desc, &idx);
+ } else {
+ key_len = ctx->key_params.keylen;
+ ssi_hash_create_cmac_setup(req, desc, &idx);
+ }
+
+ if (req->nbytes == 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ } else {
+ ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, digestsize, NS_BIT, 1); /*TODO*/
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ idx++;
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, req->result);
+ }
+ return rc;
+}
+
+static int ssi_mac_digest(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+ struct ssi_crypto_req ssi_req = {};
+ HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];
+	uint32_t key_len;
+ int idx = 0;
+ int rc;
+
+	SSI_LOG_DEBUG("===== mac-digest (%d) ====\n", req->nbytes);
+
+ if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
+ SSI_LOG_ERR("map_ahash_source() failed\n");
+ return -ENOMEM;
+ }
+ if (unlikely(ssi_hash_map_result(dev, state, digestsize) != 0)) {
+ SSI_LOG_ERR("map_ahash_digest() failed\n");
+ return -ENOMEM;
+ }
+
+ if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, req->src, req->nbytes, 1) != 0)) {
+ SSI_LOG_ERR("map_ahash_request_final() failed\n");
+ return -ENOMEM;
+ }
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_hash_digest_complete;
+ ssi_req.user_arg = (void *)req;
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_ENCODE; /* Use "Encode" stats */
+#endif
+
+	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
+		key_len = CC_AES_128_BIT_KEY_SIZE;
+		ssi_hash_create_xcbc_setup(req, desc, &idx);
+	} else {
+		key_len = ctx->key_params.keylen;
+		ssi_hash_create_cmac_setup(req, desc, &idx);
+	}
+
+ if (req->nbytes == 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+		HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_CMAC_SIZE0_MODE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ } else {
+ ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+ }
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], state->digest_result_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+	HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
+ idx++;
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_hash_request(dev, state, req->src, true);
+ ssi_hash_unmap_result(dev, state, digestsize, req->result);
+ ssi_hash_unmap_request(dev, state, ctx);
+ }
+ return rc;
+}
+
+/* shash wrapper functions */
+#ifdef SYNC_ALGS
+static int ssi_shash_digest(struct shash_desc *desc,
+ const u8 *data, unsigned int len, u8 *out)
+{
+ struct ahash_req_ctx *state = shash_desc_ctx(desc);
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+ uint32_t digestsize = crypto_shash_digestsize(tfm);
+ struct scatterlist src;
+
+	/* sg_init_one() may crash when len is 0 (depends on the kernel
+	 * configuration), so handle the empty message separately.
+	 */
+	if (len == 0)
+		return ssi_hash_digest(state, ctx, digestsize, NULL, 0, out, NULL);
+
+	sg_init_one(&src, (const void *)data, len);
+
+ return ssi_hash_digest(state, ctx, digestsize, &src, len, out, NULL);
+}
+
+static int ssi_shash_update(struct shash_desc *desc,
+ const u8 *data, unsigned int len)
+{
+ struct ahash_req_ctx *state = shash_desc_ctx(desc);
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+ uint32_t blocksize = crypto_tfm_alg_blocksize(&tfm->base);
+ struct scatterlist src;
+
+ sg_init_one(&src, (const void *)data, len);
+
+ return ssi_hash_update(state, ctx, blocksize, &src, len, NULL);
+}
+
+static int ssi_shash_finup(struct shash_desc *desc,
+ const u8 *data, unsigned int len, u8 *out)
+{
+ struct ahash_req_ctx *state = shash_desc_ctx(desc);
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+ uint32_t digestsize = crypto_shash_digestsize(tfm);
+ struct scatterlist src;
+
+ sg_init_one(&src, (const void *)data, len);
+
+ return ssi_hash_finup(state, ctx, digestsize, &src, len, out, NULL);
+}
+
+static int ssi_shash_final(struct shash_desc *desc, u8 *out)
+{
+ struct ahash_req_ctx *state = shash_desc_ctx(desc);
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+ uint32_t digestsize = crypto_shash_digestsize(tfm);
+
+ return ssi_hash_final(state, ctx, digestsize, NULL, 0, out, NULL);
+}
+
+static int ssi_shash_init(struct shash_desc *desc)
+{
+ struct ahash_req_ctx *state = shash_desc_ctx(desc);
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+
+ return ssi_hash_init(state, ctx);
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_shash_export(struct shash_desc *desc, void *out)
+{
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+
+ return ssi_hash_export(ctx, out);
+}
+
+static int ssi_shash_import(struct shash_desc *desc, const void *in)
+{
+ struct crypto_shash *tfm = desc->tfm;
+ struct ssi_hash_ctx *ctx = crypto_shash_ctx(tfm);
+
+ return ssi_hash_import(ctx, in);
+}
+#endif
+
+static int ssi_shash_setkey(struct crypto_shash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ return ssi_hash_setkey((void *) tfm, key, keylen, true);
+}
+
+#endif /* SYNC_ALGS */
+
+/* ahash wrapper functions */
+static int ssi_ahash_digest(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ return ssi_hash_digest(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_update(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+
+ return ssi_hash_update(state, ctx, block_size, req->src, req->nbytes, (void *)req);
+}
+
+static int ssi_ahash_finup(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ return ssi_hash_finup(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_final(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+ uint32_t digestsize = crypto_ahash_digestsize(tfm);
+
+ return ssi_hash_final(state, ctx, digestsize, req->src, req->nbytes, req->result, (void *)req);
+}
+
+static int ssi_ahash_init(struct ahash_request *req)
+{
+ struct ahash_req_ctx *state = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+ SSI_LOG_DEBUG("===== init (%d) ====\n", req->nbytes);
+
+ return ssi_hash_init(state, ctx);
+}
+
+#ifdef EXPORT_FIXED
+static int ssi_ahash_export(struct ahash_request *req, void *out)
+{
+ struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+
+ return ssi_hash_export(ctx, out);
+}
+
+static int ssi_ahash_import(struct ahash_request *req, const void *in)
+{
+ struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+
+ return ssi_hash_import(ctx, in);
+}
+#endif
+
+static int ssi_ahash_setkey(struct crypto_ahash *ahash,
+ const u8 *key, unsigned int keylen)
+{
+ return ssi_hash_setkey((void *) ahash, key, keylen, false);
+}
+
+struct ssi_hash_template {
+ char name[CRYPTO_MAX_ALG_NAME];
+ char driver_name[CRYPTO_MAX_ALG_NAME];
+ char hmac_name[CRYPTO_MAX_ALG_NAME];
+ char hmac_driver_name[CRYPTO_MAX_ALG_NAME];
+ unsigned int blocksize;
+ bool synchronize;
+ union {
+ struct ahash_alg template_ahash;
+ struct shash_alg template_shash;
+ };
+ int hash_mode;
+ int hw_mode;
+ int inter_digestsize;
+ struct ssi_drvdata *drvdata;
+};
+
+/* hash descriptors */
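+/*
+ * Each template below is registered by ssi_hash_alloc() as a plain hash
+ * and, unless it is an AES MAC (XCBC/CMAC), also as its keyed HMAC variant.
+ */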
+static struct ssi_hash_template driver_hash[] = {
+	/* Asynchronous hash templates */
+ {
+ .name = "sha1",
+ .driver_name = "sha1-dx",
+ .hmac_name = "hmac(sha1)",
+ .hmac_driver_name = "hmac-sha1-dx",
+ .blocksize = SHA1_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = SHA1_DIGEST_SIZE,
+ .statesize = sizeof(struct sha1_state),
+ },
+ },
+ .hash_mode = DRV_HASH_SHA1,
+ .hw_mode = DRV_HASH_HW_SHA1,
+ .inter_digestsize = SHA1_DIGEST_SIZE,
+ },
+ {
+ .name = "sha256",
+ .driver_name = "sha256-dx",
+ .hmac_name = "hmac(sha256)",
+ .hmac_driver_name = "hmac-sha256-dx",
+ .blocksize = SHA256_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = SHA256_DIGEST_SIZE,
+ .statesize = sizeof(struct sha256_state),
+ },
+ },
+ .hash_mode = DRV_HASH_SHA256,
+ .hw_mode = DRV_HASH_HW_SHA256,
+ .inter_digestsize = SHA256_DIGEST_SIZE,
+ },
+ {
+ .name = "sha224",
+ .driver_name = "sha224-dx",
+ .hmac_name = "hmac(sha224)",
+ .hmac_driver_name = "hmac-sha224-dx",
+ .blocksize = SHA224_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = SHA224_DIGEST_SIZE,
+ .statesize = sizeof(struct sha256_state),
+ },
+ },
+ .hash_mode = DRV_HASH_SHA224,
+ .hw_mode = DRV_HASH_HW_SHA256,
+ .inter_digestsize = SHA256_DIGEST_SIZE,
+ },
+#if (DX_DEV_SHA_MAX > 256)
+ {
+ .name = "sha384",
+ .driver_name = "sha384-dx",
+ .hmac_name = "hmac(sha384)",
+ .hmac_driver_name = "hmac-sha384-dx",
+ .blocksize = SHA384_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = SHA384_DIGEST_SIZE,
+ .statesize = sizeof(struct sha512_state),
+ },
+ },
+ .hash_mode = DRV_HASH_SHA384,
+ .hw_mode = DRV_HASH_HW_SHA512,
+ .inter_digestsize = SHA512_DIGEST_SIZE,
+ },
+ {
+ .name = "sha512",
+ .driver_name = "sha512-dx",
+ .hmac_name = "hmac(sha512)",
+ .hmac_driver_name = "hmac-sha512-dx",
+ .blocksize = SHA512_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = SHA512_DIGEST_SIZE,
+ .statesize = sizeof(struct sha512_state),
+ },
+ },
+ .hash_mode = DRV_HASH_SHA512,
+ .hw_mode = DRV_HASH_HW_SHA512,
+ .inter_digestsize = SHA512_DIGEST_SIZE,
+ },
+#endif
+ {
+ .name = "md5",
+ .driver_name = "md5-dx",
+ .hmac_name = "hmac(md5)",
+ .hmac_driver_name = "hmac-md5-dx",
+ .blocksize = MD5_HMAC_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_ahash_update,
+ .final = ssi_ahash_final,
+ .finup = ssi_ahash_finup,
+ .digest = ssi_ahash_digest,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .setkey = ssi_ahash_setkey,
+ .halg = {
+ .digestsize = MD5_DIGEST_SIZE,
+ .statesize = sizeof(struct md5_state),
+ },
+ },
+ .hash_mode = DRV_HASH_MD5,
+ .hw_mode = DRV_HASH_HW_MD5,
+ .inter_digestsize = MD5_DIGEST_SIZE,
+ },
+ {
+ .name = "xcbc(aes)",
+ .driver_name = "xcbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_mac_update,
+ .final = ssi_mac_final,
+ .finup = ssi_mac_finup,
+ .digest = ssi_mac_digest,
+ .setkey = ssi_xcbc_setkey,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .halg = {
+ .digestsize = AES_BLOCK_SIZE,
+ .statesize = sizeof(struct aeshash_state),
+ },
+ },
+ .hash_mode = DRV_HASH_NULL,
+ .hw_mode = DRV_CIPHER_XCBC_MAC,
+ .inter_digestsize = AES_BLOCK_SIZE,
+ },
+#if SSI_CC_HAS_CMAC
+ {
+ .name = "cmac(aes)",
+ .driver_name = "cmac-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .synchronize = false,
+ .template_ahash = {
+ .init = ssi_ahash_init,
+ .update = ssi_mac_update,
+ .final = ssi_mac_final,
+ .finup = ssi_mac_finup,
+ .digest = ssi_mac_digest,
+ .setkey = ssi_cmac_setkey,
+#ifdef EXPORT_FIXED
+ .export = ssi_ahash_export,
+ .import = ssi_ahash_import,
+#endif
+ .halg = {
+ .digestsize = AES_BLOCK_SIZE,
+ .statesize = sizeof(struct aeshash_state),
+ },
+ },
+ .hash_mode = DRV_HASH_NULL,
+ .hw_mode = DRV_CIPHER_CMAC,
+ .inter_digestsize = AES_BLOCK_SIZE,
+ },
+#endif
+};
+
+static struct ssi_hash_alg *
+ssi_hash_create_alg(struct ssi_hash_template *template, bool keyed)
+{
+ struct ssi_hash_alg *t_crypto_alg;
+ struct crypto_alg *alg;
+
+	t_crypto_alg = kzalloc(sizeof(*t_crypto_alg), GFP_KERNEL);
+ if (!t_crypto_alg) {
+ SSI_LOG_ERR("failed to allocate t_alg\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ t_crypto_alg->synchronize = template->synchronize;
+ if (template->synchronize) {
+		struct shash_alg *halg;
+
+		t_crypto_alg->shash_alg = template->template_shash;
+		halg = &t_crypto_alg->shash_alg;
+		alg = &halg->base;
+		if (!keyed)
+			halg->setkey = NULL;
+ } else {
+		struct ahash_alg *halg;
+
+		t_crypto_alg->ahash_alg = template->template_ahash;
+		halg = &t_crypto_alg->ahash_alg;
+		alg = &halg->halg.base;
+		if (!keyed)
+			halg->setkey = NULL;
+ }
+
+ if (keyed) {
+ snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->hmac_name);
+ snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->hmac_driver_name);
+ } else {
+ snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->name);
+ snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->driver_name);
+ }
+ alg->cra_module = THIS_MODULE;
+ alg->cra_ctxsize = sizeof(struct ssi_hash_ctx);
+ alg->cra_priority = SSI_CRA_PRIO;
+ alg->cra_blocksize = template->blocksize;
+ alg->cra_alignmask = 0;
+ alg->cra_exit = ssi_hash_cra_exit;
+
+ if (template->synchronize) {
+ alg->cra_init = ssi_shash_cra_init;
+ alg->cra_flags = CRYPTO_ALG_TYPE_SHASH |
+ CRYPTO_ALG_KERN_DRIVER_ONLY;
+ alg->cra_type = &crypto_shash_type;
+ } else {
+ alg->cra_init = ssi_ahash_cra_init;
+ alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH |
+ CRYPTO_ALG_KERN_DRIVER_ONLY;
+ alg->cra_type = &crypto_ahash_type;
+ }
+
+ t_crypto_alg->hash_mode = template->hash_mode;
+ t_crypto_alg->hw_mode = template->hw_mode;
+ t_crypto_alg->inter_digestsize = template->inter_digestsize;
+
+ return t_crypto_alg;
+}
+
+int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata)
+{
+ struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+ ssi_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr;
+ unsigned int larval_seq_len = 0;
+	HwDesc_s larval_seq[CC_DIGEST_SIZE_MAX / sizeof(uint32_t)];
+ int rc = 0;
+#if (DX_DEV_SHA_MAX > 256)
+ int i;
+#endif
+
+ /* Copy-to-sram digest-len */
+ ssi_sram_mgr_const2sram_desc(digest_len_init, sram_buff_ofs,
+ ARRAY_SIZE(digest_len_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+
+ sram_buff_ofs += sizeof(digest_len_init);
+ larval_seq_len = 0;
+
+#if (DX_DEV_SHA_MAX > 256)
+ /* Copy-to-sram digest-len for sha384/512 */
+ ssi_sram_mgr_const2sram_desc(digest_len_sha512_init, sram_buff_ofs,
+ ARRAY_SIZE(digest_len_sha512_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+
+ sram_buff_ofs += sizeof(digest_len_sha512_init);
+ larval_seq_len = 0;
+#endif
+
+ /* The initial digests offset */
+ hash_handle->larval_digest_sram_addr = sram_buff_ofs;
+
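+	/*
+	 * The larval digests are laid out back to back in SRAM in this
+	 * order: md5, sha1, sha224, sha256 (then sha384 and sha512 when
+	 * DX_DEV_SHA_MAX > 256). The offsets returned by
+	 * ssi_ahash_get_larval_digest_sram_addr() rely on this layout.
+	 */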
+ /* Copy-to-sram initial SHA* digests */
+ ssi_sram_mgr_const2sram_desc(md5_init, sram_buff_ofs,
+ ARRAY_SIZE(md5_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+ sram_buff_ofs += sizeof(md5_init);
+ larval_seq_len = 0;
+
+ ssi_sram_mgr_const2sram_desc(sha1_init, sram_buff_ofs,
+ ARRAY_SIZE(sha1_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+ sram_buff_ofs += sizeof(sha1_init);
+ larval_seq_len = 0;
+
+ ssi_sram_mgr_const2sram_desc(sha224_init, sram_buff_ofs,
+ ARRAY_SIZE(sha224_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+ sram_buff_ofs += sizeof(sha224_init);
+ larval_seq_len = 0;
+
+ ssi_sram_mgr_const2sram_desc(sha256_init, sram_buff_ofs,
+ ARRAY_SIZE(sha256_init), larval_seq, &larval_seq_len);
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0))
+ goto init_digest_const_err;
+ sram_buff_ofs += sizeof(sha256_init);
+ larval_seq_len = 0;
+
+#if (DX_DEV_SHA_MAX > 256)
+	/* The 64-bit larval constants must have their 32-bit halves swapped
+	 * before being copied to SRAM.
+	 */
+ for (i = 0; i < ARRAY_SIZE(sha384_init); i++) {
+ const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[1];
+ const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha384_init[i]))[0];
+
+ ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1,
+ larval_seq, &larval_seq_len);
+ sram_buff_ofs += sizeof(uint32_t);
+ ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1,
+ larval_seq, &larval_seq_len);
+ sram_buff_ofs += sizeof(uint32_t);
+ }
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc);
+ goto init_digest_const_err;
+ }
+ larval_seq_len = 0;
+
+ for (i = 0; i < ARRAY_SIZE(sha512_init); i++) {
+ const uint32_t const0 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[1];
+ const uint32_t const1 = ((uint32_t *)((uint64_t *)&sha512_init[i]))[0];
+
+ ssi_sram_mgr_const2sram_desc(&const0, sram_buff_ofs, 1,
+ larval_seq, &larval_seq_len);
+ sram_buff_ofs += sizeof(uint32_t);
+ ssi_sram_mgr_const2sram_desc(&const1, sram_buff_ofs, 1,
+ larval_seq, &larval_seq_len);
+ sram_buff_ofs += sizeof(uint32_t);
+ }
+ rc = send_request_init(drvdata, larval_seq, larval_seq_len);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc = %d)\n", rc);
+ goto init_digest_const_err;
+ }
+#endif
+
+init_digest_const_err:
+ return rc;
+}
+
+int ssi_hash_alloc(struct ssi_drvdata *drvdata)
+{
+ struct ssi_hash_handle *hash_handle;
+ ssi_sram_addr_t sram_buff;
+ uint32_t sram_size_to_alloc;
+ int rc = 0;
+ int alg;
+
+	hash_handle = kzalloc(sizeof(*hash_handle), GFP_KERNEL);
+ if (hash_handle == NULL) {
+ SSI_LOG_ERR("kzalloc failed to allocate %zu B\n",
+ sizeof(struct ssi_hash_handle));
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ drvdata->hash_handle = hash_handle;
+
+ sram_size_to_alloc = sizeof(digest_len_init) +
+#if (DX_DEV_SHA_MAX > 256)
+ sizeof(digest_len_sha512_init) +
+ sizeof(sha384_init) +
+ sizeof(sha512_init) +
+#endif
+ sizeof(md5_init) +
+ sizeof(sha1_init) +
+ sizeof(sha224_init) +
+ sizeof(sha256_init);
+
+ sram_buff = ssi_sram_mgr_alloc(drvdata, sram_size_to_alloc);
+ if (sram_buff == NULL_SRAM_ADDR) {
+ SSI_LOG_ERR("SRAM pool exhausted\n");
+ rc = -ENOMEM;
+ goto fail;
+ }
+
+ /* The initial digest-len offset */
+ hash_handle->digest_len_sram_addr = sram_buff;
+
+ /*must be set before the alg registration as it is being used there*/
+ rc = ssi_hash_init_sram_digest_consts(drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("Init digest CONST failed (rc=%d)\n", rc);
+ goto fail;
+ }
+
+ INIT_LIST_HEAD(&hash_handle->hash_list);
+
+ /* ahash registration */
+ for (alg = 0; alg < ARRAY_SIZE(driver_hash); alg++) {
+ struct ssi_hash_alg *t_alg;
+
+ /* register hmac version */
+
+		if ((driver_hash[alg].hw_mode != DRV_CIPHER_XCBC_MAC) &&
+		    (driver_hash[alg].hw_mode != DRV_CIPHER_CMAC)) {
+ t_alg = ssi_hash_create_alg(&driver_hash[alg], true);
+ if (IS_ERR(t_alg)) {
+ rc = PTR_ERR(t_alg);
+ SSI_LOG_ERR("%s alg allocation failed\n",
+ driver_hash[alg].driver_name);
+ goto fail;
+ }
+ t_alg->drvdata = drvdata;
+
+ if (t_alg->synchronize) {
+ rc = crypto_register_shash(&t_alg->shash_alg);
+				if (unlikely(rc != 0)) {
+					SSI_LOG_ERR("%s alg registration failed\n",
+						    t_alg->shash_alg.base.cra_driver_name);
+					kfree(t_alg);
+					goto fail;
+				}
+				list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+ } else {
+ rc = crypto_register_ahash(&t_alg->ahash_alg);
+				if (unlikely(rc != 0)) {
+					SSI_LOG_ERR("%s alg registration failed\n",
+						    t_alg->ahash_alg.halg.base.cra_driver_name);
+					kfree(t_alg);
+					goto fail;
+				}
+				list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+ }
+ }
+
+ /* register hash version */
+ t_alg = ssi_hash_create_alg(&driver_hash[alg], false);
+ if (IS_ERR(t_alg)) {
+ rc = PTR_ERR(t_alg);
+ SSI_LOG_ERR("%s alg allocation failed\n",
+ driver_hash[alg].driver_name);
+ goto fail;
+ }
+ t_alg->drvdata = drvdata;
+
+ if (t_alg->synchronize) {
+ rc = crypto_register_shash(&t_alg->shash_alg);
+			if (unlikely(rc != 0)) {
+				SSI_LOG_ERR("%s alg registration failed\n",
+					    t_alg->shash_alg.base.cra_driver_name);
+				kfree(t_alg);
+				goto fail;
+			}
+			list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+ } else {
+ rc = crypto_register_ahash(&t_alg->ahash_alg);
+			if (unlikely(rc != 0)) {
+				SSI_LOG_ERR("%s alg registration failed\n",
+					    t_alg->ahash_alg.halg.base.cra_driver_name);
+				kfree(t_alg);
+				goto fail;
+			}
+			list_add_tail(&t_alg->entry, &hash_handle->hash_list);
+ }
+ }
+
+ return 0;
+
+fail:
+	kfree(drvdata->hash_handle);
+	drvdata->hash_handle = NULL;
+ return rc;
+}
+
+int ssi_hash_free(struct ssi_drvdata *drvdata)
+{
+ struct ssi_hash_alg *t_hash_alg, *hash_n;
+ struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+
+ if (hash_handle != NULL) {
+ list_for_each_entry_safe(t_hash_alg, hash_n, &hash_handle->hash_list, entry) {
+ if (t_hash_alg->synchronize) {
+ crypto_unregister_shash(&t_hash_alg->shash_alg);
+ } else {
+ crypto_unregister_ahash(&t_hash_alg->ahash_alg);
+ }
+ list_del(&t_hash_alg->entry);
+ kfree(t_hash_alg);
+ }
+
+ kfree(hash_handle);
+ drvdata->hash_handle = NULL;
+ }
+ return 0;
+}
+
+static void ssi_hash_create_xcbc_setup(struct ahash_request *areq,
+				       HwDesc_s desc[],
+				       unsigned int *seq_size)
+{
+ unsigned int idx = *seq_size;
+ struct ahash_req_ctx *state = ahash_request_ctx(areq);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+
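+	/*
+	 * K1 is loaded as the AES key (LOAD_KEY0), while K2 and K3 are
+	 * loaded into the engine state registers (LOAD_STATE1/LOAD_STATE2)
+	 * for the last-block processing.
+	 */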
+ /* Setup XCBC MAC K1 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr
+ + XCBC_MAC_K1_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Setup XCBC MAC K2 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr
+ + XCBC_MAC_K2_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Setup XCBC MAC K3 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, (ctx->opad_tmp_keys_dma_addr
+ + XCBC_MAC_K3_OFFSET),
+ CC_AES_128_BIT_KEY_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Loading MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ *seq_size = idx;
+}
+
+static void ssi_hash_create_cmac_setup(struct ahash_request *areq,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ unsigned int idx = *seq_size;
+ struct ahash_req_ctx *state = ahash_request_ctx(areq);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+ struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+
+ /* Setup CMAC Key */
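+	/* A 192-bit key was zero-padded up to the HW maximum key size in
+	 * ssi_cmac_setkey(), hence AES_MAX_KEY_SIZE is loaded for it.
+	 */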
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->opad_tmp_keys_dma_addr,
+ ((ctx->key_params.keylen == 24) ? AES_MAX_KEY_SIZE : ctx->key_params.keylen), NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Load MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, state->digest_buff_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->key_params.keylen);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ *seq_size = idx;
+}
+
+static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx,
+ struct ssi_hash_ctx *ctx,
+ unsigned int flow_mode,
+ HwDesc_s desc[],
+ bool is_not_last_data,
+ unsigned int *seq_size)
+{
+ unsigned int idx = *seq_size;
+
+ if (likely(areq_ctx->data_dma_buf_type == SSI_DMA_BUF_DLLI)) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ sg_dma_address(areq_ctx->curr_sg),
+ areq_ctx->curr_sg->length, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ idx++;
+ } else {
+ if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) {
+ SSI_LOG_DEBUG(" NULL mode\n");
+ /* nothing to build */
+ return;
+ }
+ /* bypass */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ areq_ctx->mlli_params.mlli_dma_addr,
+ areq_ctx->mlli_params.mlli_len,
+ NS_BIT);
+ HW_DESC_SET_DOUT_SRAM(&desc[idx],
+ ctx->drvdata->mlli_sram_addr,
+ areq_ctx->mlli_params.mlli_len);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ idx++;
+ /* process */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+ ctx->drvdata->mlli_sram_addr,
+ areq_ctx->mlli_nents,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ idx++;
+ }
+ if (is_not_last_data) {
+ HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx-1]);
+ }
+ /* return updated desc sequence size */
+ *seq_size = idx;
+}
+
+/*!
+ * Gets the address of the initial digest in SRAM
+ * according to the given hash mode
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256
+ *
+ * \return ssi_sram_addr_t The address of the initial digest in SRAM
+ */
+ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode)
+{
+ struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+ struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+
+ switch (mode) {
+ case DRV_HASH_NULL:
+ break; /*Ignore*/
+ case DRV_HASH_MD5:
+ return (hash_handle->larval_digest_sram_addr);
+ case DRV_HASH_SHA1:
+ return (hash_handle->larval_digest_sram_addr +
+ sizeof(md5_init));
+ case DRV_HASH_SHA224:
+ return (hash_handle->larval_digest_sram_addr +
+ sizeof(md5_init) +
+ sizeof(sha1_init));
+ case DRV_HASH_SHA256:
+ return (hash_handle->larval_digest_sram_addr +
+ sizeof(md5_init) +
+ sizeof(sha1_init) +
+ sizeof(sha224_init));
+#if (DX_DEV_SHA_MAX > 256)
+ case DRV_HASH_SHA384:
+ return (hash_handle->larval_digest_sram_addr +
+ sizeof(md5_init) +
+ sizeof(sha1_init) +
+ sizeof(sha224_init) +
+ sizeof(sha256_init));
+ case DRV_HASH_SHA512:
+ return (hash_handle->larval_digest_sram_addr +
+ sizeof(md5_init) +
+ sizeof(sha1_init) +
+ sizeof(sha224_init) +
+ sizeof(sha256_init) +
+ sizeof(sha384_init));
+#endif
+ default:
+ SSI_LOG_ERR("Invalid hash mode (%d)\n", mode);
+ }
+
+ /* Fall back to a valid (if incorrect) address to avoid a kernel crash */
+ return hash_handle->larval_digest_sram_addr;
+}
+
+ssi_sram_addr_t
+ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode)
+{
+ struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+ struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+ ssi_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr;
+
+ switch (mode) {
+ case DRV_HASH_SHA1:
+ case DRV_HASH_SHA224:
+ case DRV_HASH_SHA256:
+ case DRV_HASH_MD5:
+ return digest_len_addr;
+#if (DX_DEV_SHA_MAX > 256)
+ case DRV_HASH_SHA384:
+ case DRV_HASH_SHA512:
+ return digest_len_addr + sizeof(digest_len_init);
+#endif
+ default:
+ return digest_len_addr; /*to avoid kernel crash*/
+ }
+}
+
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
new file mode 100644
index 0000000..f736e2b
--- /dev/null
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/* \file ssi_hash.h
+ ARM CryptoCell Hash Crypto API
+ */
+
+#ifndef __SSI_HASH_H__
+#define __SSI_HASH_H__
+
+#include "ssi_buffer_mgr.h"
+
+#define HMAC_IPAD_CONST 0x36363636
+#define HMAC_OPAD_CONST 0x5C5C5C5C
+#if (DX_DEV_SHA_MAX > 256)
+#define HASH_LEN_SIZE 16
+#define SSI_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE
+#define SSI_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE
+#else
+#define HASH_LEN_SIZE 8
+#define SSI_MAX_HASH_DIGEST_SIZE SHA256_DIGEST_SIZE
+#define SSI_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE
+#endif
+
+#define XCBC_MAC_K1_OFFSET 0
+#define XCBC_MAC_K2_OFFSET 16
+#define XCBC_MAC_K3_OFFSET 32
+
+/* This struct was taken from drivers/crypto/nx/nx-aes-xcbc.c; it is used for the xcbc/cmac statesize */
+struct aeshash_state {
+ u8 state[AES_BLOCK_SIZE];
+ unsigned int count;
+ u8 buffer[AES_BLOCK_SIZE];
+};
+
+/* ahash state */
+struct ahash_req_ctx {
+ uint8_t* buff0;
+ uint8_t* buff1;
+ uint8_t* digest_result_buff;
+ struct async_gen_req_ctx gen_ctx;
+ enum ssi_req_dma_buf_type data_dma_buf_type;
+ uint8_t *digest_buff;
+ uint8_t *opad_digest_buff;
+ uint8_t *digest_bytes_len;
+ dma_addr_t opad_digest_dma_addr;
+ dma_addr_t digest_buff_dma_addr;
+ dma_addr_t digest_bytes_len_dma_addr;
+ dma_addr_t digest_result_dma_addr;
+ uint32_t buff0_cnt;
+ uint32_t buff1_cnt;
+ uint32_t buff_index;
+ uint32_t xcbc_count; /* count xcbc update operations */
+ struct scatterlist buff_sg[2];
+ struct scatterlist *curr_sg;
+ uint32_t in_nents;
+ uint32_t mlli_nents;
+ struct mlli_params mlli_params;
+};
+
+int ssi_hash_alloc(struct ssi_drvdata *drvdata);
+int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata);
+int ssi_hash_free(struct ssi_drvdata *drvdata);
+
+/*!
+ * Gets the address of the initial digest length in SRAM
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
+ *
+ * \return ssi_sram_addr_t The address of the initial digest length in SRAM
+ */
+ssi_sram_addr_t
+ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, uint32_t mode);
+
+/*!
+ * Gets the address of the initial digest in SRAM
+ * according to the given hash mode
+ *
+ * \param drvdata
+ * \param mode The Hash mode. Supported modes: MD5/SHA1/SHA224/SHA256/SHA384/SHA512
+ *
+ * \return ssi_sram_addr_t The address of the initial digest in SRAM
+ */
+ssi_sram_addr_t ssi_ahash_get_larval_digest_sram_addr(void *drvdata, uint32_t mode);
+
+#endif /*__SSI_HASH_H__*/
+
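
For context, these SRAM getters feed the hash descriptor sequences in
ssi_hash.c: a hash-init flow loads the larval (initial) digest for the
selected mode straight from SRAM into the engine state. A minimal sketch
with the HW_DESC_* macros from this series (ctx->hw_mode and
ctx->inter_digestsize are assumed driver-context fields):

    /* Load the mode's larval digest from SRAM into the hash state */
    HW_DESC_INIT(&desc[idx]);
    HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->hw_mode);
    HW_DESC_SET_DIN_SRAM(&desc[idx],
        ssi_ahash_get_larval_digest_sram_addr(ctx->drvdata, ctx->hash_mode),
        ctx->inter_digestsize);
    HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
    HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
    idx++;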
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index 8ee481b..da5f2d5 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -26,6 +26,7 @@
#include "ssi_request_mgr.h"
#include "ssi_sram_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_hash.h"
#include "ssi_pm.h"
#include "ssi_pm_ext.h"

@@ -79,6 +80,9 @@ int ssi_power_mgr_runtime_resume(struct device *dev)
return rc;
}

+ /* must be after the queue resuming as it uses the HW queue*/
+ ssi_hash_init_sram_digest_consts(drvdata);
+
return 0;
}

--
2.1.4

2017-04-20 13:12:58

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 4/9] staging: ccree: add IV generation support

Add CryptoCell IV hardware generation support.

This patch adds the needed support to drive the HW but does not expose
the ability via the kernel crypto API yet.
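
To illustrate the intended flow: a cipher request opts in by setting its
is_giv flag and handing its IV DMA address to the crypto request, and
send_request() then prepends descriptors that copy a fresh IV out of the
SRAM pool. A minimal sketch using the fields added below (request setup
and error handling elided):

    req_ctx->is_giv = true; /* IV buffer gets mapped DMA_BIDIRECTIONAL */
    ssi_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
    ssi_req.ivgen_dma_addr_len = 1; /* one destination; all get the same IV */
    ssi_req.ivgen_size = ivsize; /* 8 or 16 bytes */
    /* send_request() calls ssi_ivgen_getiv() and enqueues the resulting
     * bypass descriptors ahead of the cipher sequence */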

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/Makefile | 2 +-
drivers/staging/ccree/ssi_buffer_mgr.c | 2 +
drivers/staging/ccree/ssi_cipher.c | 11 ++
drivers/staging/ccree/ssi_cipher.h | 1 +
drivers/staging/ccree/ssi_driver.c | 9 +
drivers/staging/ccree/ssi_driver.h | 7 +
drivers/staging/ccree/ssi_ivgen.c | 301 ++++++++++++++++++++++++++++++++
drivers/staging/ccree/ssi_ivgen.h | 72 ++++++++
drivers/staging/ccree/ssi_pm.c | 2 +
drivers/staging/ccree/ssi_request_mgr.c | 33 +++-
10 files changed, 438 insertions(+), 2 deletions(-)
create mode 100644 drivers/staging/ccree/ssi_ivgen.c
create mode 100644 drivers/staging/ccree/ssi_ivgen.h

diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 21a80d5..89afe9a 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index a0fafa9..6a9c964 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -534,6 +534,7 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr);
dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr,
ivsize,
+ req_ctx->is_giv ? DMA_BIDIRECTIONAL :
DMA_TO_DEVICE);
}
/* Release pool */
@@ -587,6 +588,7 @@ int ssi_buffer_mgr_map_blkcipher_request(
req_ctx->gen_ctx.iv_dma_addr =
dma_map_single(dev, (void *)info,
ivsize,
+ req_ctx->is_giv ? DMA_BIDIRECTIONAL:
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev,
req_ctx->gen_ctx.iv_dma_addr))) {
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 01467e8..2e4ce90 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -819,6 +819,13 @@ static int ssi_blkcipher_process(
areq,
desc, &seq_len);

+ /* do we need to generate IV? */
+ if (req_ctx->is_giv == true) {
+ ssi_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
+ ssi_req.ivgen_dma_addr_len = 1;
+ /* set the IV size (8/16 B long)*/
+ ssi_req.ivgen_size = ivsize;
+ }
END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);

/* STAT_PHASE_3: Lock HW and push sequence */
@@ -901,6 +908,7 @@ static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc,
unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);

req_ctx->backup_info = desc->info;
+ req_ctx->is_giv = false;

return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT);
}
@@ -916,6 +924,7 @@ static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc,
unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);

req_ctx->backup_info = desc->info;
+ req_ctx->is_giv = false;

return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT);
}
@@ -948,6 +957,7 @@ static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);

req_ctx->backup_info = req->info;
+ req_ctx->is_giv = false;

return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT);
}
@@ -960,6 +970,7 @@ static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);

req_ctx->backup_info = req->info;
+ req_ctx->is_giv = false;
return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
}

diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index 511800f1..d1a98f9 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -45,6 +45,7 @@ struct blkcipher_req_ctx {
uint32_t out_nents;
uint32_t out_mlli_nents;
uint8_t *backup_info; /*store iv for generated IV flow*/
+ bool is_giv;
struct mlli_params mlli_params;
};

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 1310ac5..aee5469 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -64,6 +64,7 @@
#include "ssi_sysfs.h"
#include "ssi_cipher.h"
#include "ssi_hash.h"
+#include "ssi_ivgen.h"
#include "ssi_sram_mgr.h"
#include "ssi_pm.h"

@@ -348,6 +349,12 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+ rc = ssi_ivgen_init(new_drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("ssi_ivgen_init failed\n");
+ goto init_cc_res_err;
+ }
+
/* Allocate crypto algs */
rc = ssi_ablkcipher_alloc(new_drvdata);
if (unlikely(rc != 0)) {
@@ -369,6 +376,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
if (new_drvdata != NULL) {
ssi_hash_free(new_drvdata);
ssi_ablkcipher_free(new_drvdata);
+ ssi_ivgen_fini(new_drvdata);
ssi_power_mgr_fini(new_drvdata);
ssi_buffer_mgr_fini(new_drvdata);
request_mgr_fini(new_drvdata);
@@ -410,6 +418,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)

ssi_hash_free(drvdata);
ssi_ablkcipher_free(drvdata);
+ ssi_ivgen_fini(drvdata);
ssi_power_mgr_fini(drvdata);
ssi_buffer_mgr_fini(drvdata);
request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index baac9bf..5f4b14e 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -106,9 +106,15 @@
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))

+#define SSI_MAX_IVGEN_DMA_ADDRESSES 3
struct ssi_crypto_req {
void (*user_cb)(struct device *dev, void *req, void __iomem *cc_base);
void *user_arg;
+ dma_addr_t ivgen_dma_addr[SSI_MAX_IVGEN_DMA_ADDRESSES]; /* For the first 'ivgen_dma_addr_len' addresses of this array,
+ generated IV would be placed in it by send_request().
+ Same generated IV for all addresses! */
+ unsigned int ivgen_dma_addr_len; /* Amount of 'ivgen_dma_addr' elements to be filled. */
+ unsigned int ivgen_size; /* The generated IV size required, 8/16 B allowed. */
struct completion seq_compl; /* request completion */
#ifdef ENABLE_CYCLE_COUNT
enum stat_op op_type;
@@ -144,6 +150,7 @@ struct ssi_drvdata {
void *hash_handle;
void *blkcipher_handle;
void *request_mgr_handle;
+ void *ivgen_handle;
void *sram_mgr_handle;

#ifdef ENABLE_CYCLE_COUNT
diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
new file mode 100644
index 0000000..4d268d1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -0,0 +1,301 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/platform_device.h>
+#include <crypto/ctr.h>
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_ivgen.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sram_mgr.h"
+#include "ssi_buffer_mgr.h"
+
+/* The max. size of pool *MUST* be <= SRAM total size */
+#define SSI_IVPOOL_SIZE 1024
+/* The first 32B of the pool are reserved for the next encryption
+ key and IV used for pool regeneration */
+#define SSI_IVPOOL_META_SIZE (CC_AES_IV_SIZE + AES_KEYSIZE_128)
+#define SSI_IVPOOL_GEN_SEQ_LEN 4
+
+/**
+ * struct ssi_ivgen_ctx - IV pool generation context
+ * @pool: SRAM address where the IV pool starts
+ * @ctr_key: SRAM address of the pool's encryption key material
+ * @ctr_iv: SRAM address of the pool's counter IV
+ * @next_iv_ofs: the offset to the next available IV in pool
+ * @pool_meta: virt. address of the initial enc. key/IV
+ * @pool_meta_dma: DMA address of the initial enc. key/IV
+ */
+struct ssi_ivgen_ctx {
+ ssi_sram_addr_t pool;
+ ssi_sram_addr_t ctr_key;
+ ssi_sram_addr_t ctr_iv;
+ uint32_t next_iv_ofs;
+ uint8_t *pool_meta;
+ dma_addr_t pool_meta_dma;
+};
+
+/*!
+ * Generates SSI_IVPOOL_SIZE bytes of pseudo-random data by
+ * encrypting zeros using AES-128-CTR.
+ *
+ * \param ivgen iv-pool context
+ * \param iv_seq IN/OUT array to the descriptors sequence
+ * \param iv_seq_len IN/OUT pointer to the sequence length
+ */
+static int ssi_ivgen_generate_pool(
+ struct ssi_ivgen_ctx *ivgen_ctx,
+ HwDesc_s iv_seq[],
+ unsigned int *iv_seq_len)
+{
+ unsigned int idx = *iv_seq_len;
+
+ if ((*iv_seq_len + SSI_IVPOOL_GEN_SEQ_LEN) > SSI_IVPOOL_SEQ_LEN) {
+ /* The sequence will be longer than allowed */
+ return -EINVAL;
+ }
+ /* Setup key */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_key, AES_KEYSIZE_128);
+ HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES);
+ HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR);
+ idx++;
+
+ /* Setup cipher state */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_SRAM(&iv_seq[idx], ivgen_ctx->ctr_iv, CC_AES_IV_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG0(&iv_seq[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[idx], S_DIN_to_AES);
+ HW_DESC_SET_SETUP_MODE(&iv_seq[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_KEY_SIZE_AES(&iv_seq[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_CIPHER_MODE(&iv_seq[idx], DRV_CIPHER_CTR);
+ idx++;
+
+ /* Perform dummy encrypt to skip first block */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, CC_AES_IV_SIZE);
+ HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, CC_AES_IV_SIZE);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* Generate IV pool */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_CONST(&iv_seq[idx], 0, SSI_IVPOOL_SIZE);
+ HW_DESC_SET_DOUT_SRAM(&iv_seq[idx], ivgen_ctx->pool, SSI_IVPOOL_SIZE);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[idx], DIN_AES_DOUT);
+ idx++;
+
+ *iv_seq_len = idx; /* Update sequence length */
+
+ /* queue ordering assures pool readiness */
+ ivgen_ctx->next_iv_ofs = SSI_IVPOOL_META_SIZE;
+
+ return 0;
+}
+
+/*!
+ * Generates the initial pool in SRAM.
+ * This function should be invoked when resuming the DX driver.
+ *
+ * \param drvdata
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata)
+{
+ struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+ HwDesc_s iv_seq[SSI_IVPOOL_SEQ_LEN];
+ unsigned int iv_seq_len = 0;
+ int rc;
+
+ /* Generate initial enc. key/iv */
+ get_random_bytes(ivgen_ctx->pool_meta, SSI_IVPOOL_META_SIZE);
+
+ /* The first 32B are reserved for the enc. key/IV */
+ ivgen_ctx->ctr_key = ivgen_ctx->pool;
+ ivgen_ctx->ctr_iv = ivgen_ctx->pool + AES_KEYSIZE_128;
+
+ /* Copy initial enc. key and IV to SRAM at a single descriptor */
+ HW_DESC_INIT(&iv_seq[iv_seq_len]);
+ HW_DESC_SET_DIN_TYPE(&iv_seq[iv_seq_len], DMA_DLLI,
+ ivgen_ctx->pool_meta_dma, SSI_IVPOOL_META_SIZE,
+ NS_BIT);
+ HW_DESC_SET_DOUT_SRAM(&iv_seq[iv_seq_len], ivgen_ctx->pool,
+ SSI_IVPOOL_META_SIZE);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[iv_seq_len], BYPASS);
+ iv_seq_len++;
+
+ /* Generate initial pool */
+ rc = ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, &iv_seq_len);
+ if (unlikely(rc != 0)) {
+ return rc;
+ }
+ /* Fire-and-forget */
+ return send_request_init(drvdata, iv_seq, iv_seq_len);
+}
+
+/*!
+ * Free iv-pool and ivgen context.
+ *
+ * \param drvdata
+ */
+void ssi_ivgen_fini(struct ssi_drvdata *drvdata)
+{
+ struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+ struct device *device = &(drvdata->plat_dev->dev);
+
+ if (ivgen_ctx == NULL)
+ return;
+
+ if (ivgen_ctx->pool_meta != NULL) {
+ memset(ivgen_ctx->pool_meta, 0, SSI_IVPOOL_META_SIZE);
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ivgen_ctx->pool_meta_dma);
+ dma_free_coherent(device, SSI_IVPOOL_META_SIZE,
+ ivgen_ctx->pool_meta, ivgen_ctx->pool_meta_dma);
+ }
+
+ ivgen_ctx->pool = NULL_SRAM_ADDR;
+
+ /* release "this" context */
+ kfree(ivgen_ctx);
+}
+
+/*!
+ * Allocates iv-pool and maps resources.
+ * This function generates the first IV pool.
+ *
+ * \param drvdata Driver's private context
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init(struct ssi_drvdata *drvdata)
+{
+ struct ssi_ivgen_ctx *ivgen_ctx;
+ struct device *device = &drvdata->plat_dev->dev;
+ int rc;
+
+ /* Allocate "this" context */
+ drvdata->ivgen_handle = kzalloc(sizeof(struct ssi_ivgen_ctx), GFP_KERNEL);
+ if (!drvdata->ivgen_handle) {
+ SSI_LOG_ERR("Not enough memory to allocate IVGEN context "
+ "(%zu B)\n", sizeof(struct ssi_ivgen_ctx));
+ rc = -ENOMEM;
+ goto out;
+ }
+ ivgen_ctx = drvdata->ivgen_handle;
+
+ /* Allocate pool's header for initial enc. key/IV */
+ ivgen_ctx->pool_meta = dma_alloc_coherent(device, SSI_IVPOOL_META_SIZE,
+ &ivgen_ctx->pool_meta_dma, GFP_KERNEL);
+ if (!ivgen_ctx->pool_meta) {
+ SSI_LOG_ERR("Not enough memory to allocate DMA of pool_meta "
+ "(%u B)\n", SSI_IVPOOL_META_SIZE);
+ rc = -ENOMEM;
+ goto out;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ivgen_ctx->pool_meta_dma,
+ SSI_IVPOOL_META_SIZE);
+ /* Allocate IV pool in SRAM */
+ ivgen_ctx->pool = ssi_sram_mgr_alloc(drvdata, SSI_IVPOOL_SIZE);
+ if (ivgen_ctx->pool == NULL_SRAM_ADDR) {
+ SSI_LOG_ERR("SRAM pool exhausted\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ return ssi_ivgen_init_sram_pool(drvdata);
+
+out:
+ ssi_ivgen_fini(drvdata);
+ return rc;
+}
+
+/*!
+ * Acquires an IV (8 or 16 bytes) from the iv-pool
+ *
+ * \param drvdata Driver private context
+ * \param iv_out_dma Array of physical IV out addresses
+ * \param iv_out_dma_len Length of iv_out_dma array (additional elements of the iv_out_dma array are ignored)
+ * \param iv_out_size May be 8 or 16 bytes long
+ * \param iv_seq IN/OUT array to the descriptors sequence
+ * \param iv_seq_len IN/OUT pointer to the sequence length
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_getiv(
+ struct ssi_drvdata *drvdata,
+ dma_addr_t iv_out_dma[],
+ unsigned int iv_out_dma_len,
+ unsigned int iv_out_size,
+ HwDesc_s iv_seq[],
+ unsigned int *iv_seq_len)
+{
+ struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+ unsigned int idx = *iv_seq_len;
+ unsigned int t;
+
+ if ((iv_out_size != CC_AES_IV_SIZE) &&
+ (iv_out_size != CTR_RFC3686_IV_SIZE)) {
+ return -EINVAL;
+ }
+ if ((iv_out_dma_len + 1) > SSI_IVPOOL_SEQ_LEN) {
+ /* The sequence will be longer than allowed */
+ return -EINVAL;
+ }
+
+ /* Check that the number of IVs to generate does not exceed the size of the DMA address array */
+ if (iv_out_dma_len > SSI_MAX_IVGEN_DMA_ADDRESSES) {
+ /* The sequence will be longer than allowed */
+ return -EINVAL;
+ }
+
+ for (t = 0; t < iv_out_dma_len; t++) {
+ /* Acquire IV from pool */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_SRAM(&iv_seq[idx],
+ ivgen_ctx->pool + ivgen_ctx->next_iv_ofs,
+ iv_out_size);
+ HW_DESC_SET_DOUT_DLLI(&iv_seq[idx], iv_out_dma[t],
+ iv_out_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&iv_seq[idx], BYPASS);
+ idx++;
+ }
+
+ /* The bypass operation is followed by the crypto sequence, so the
+ * dummy no-DMA descriptor below acts as a memory barrier to ensure
+ * the bypass write transaction has completed */
+ HW_DESC_INIT(&iv_seq[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&iv_seq[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&iv_seq[idx], 0, 0, 1);
+ idx++;
+
+ *iv_seq_len = idx; /* update seq length */
+
+ /* Update iv index */
+ ivgen_ctx->next_iv_ofs += iv_out_size;
+
+ if ((SSI_IVPOOL_SIZE - ivgen_ctx->next_iv_ofs) < CC_AES_IV_SIZE) {
+ SSI_LOG_DEBUG("Pool exhausted, regenerating iv-pool\n");
+ /* pool is drained -regenerate it! */
+ return ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, iv_seq_len);
+ }
+
+ return 0;
+}
+
+
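
A quick sizing note on the constants above: the first SSI_IVPOOL_META_SIZE
bytes of each fill hold the next key/IV, and regeneration triggers once
fewer than CC_AES_IV_SIZE bytes remain, so one fill serves

    (SSI_IVPOOL_SIZE - SSI_IVPOOL_META_SIZE) / CC_AES_IV_SIZE
        = (1024 - 32) / 16 = 62

full 16-byte IVs (about twice that for 8-byte CTR_RFC3686 IVs) before
ssi_ivgen_generate_pool() is appended to the caller's descriptor sequence
to refill it.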
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
new file mode 100644
index 0000000..cf45f4f
--- /dev/null
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __SSI_IVGEN_H__
+#define __SSI_IVGEN_H__
+
+#include "cc_hw_queue_defs.h"
+
+
+#define SSI_IVPOOL_SEQ_LEN 8
+
+/*!
+ * Allocates iv-pool and maps resources.
+ * This function generates the first IV pool.
+ *
+ * \param drvdata Driver's private context
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init(struct ssi_drvdata *drvdata);
+
+/*!
+ * Free iv-pool and ivgen context.
+ *
+ * \param drvdata
+ */
+void ssi_ivgen_fini(struct ssi_drvdata *drvdata);
+
+/*!
+ * Generates the initial pool in SRAM.
+ * This function should be invoked when resuming the DX driver.
+ *
+ * \param drvdata
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata);
+
+/*!
+ * Acquires an IV (8 or 16 bytes) from the iv-pool
+ *
+ * \param drvdata Driver private context
+ * \param iv_out_dma Array of physical IV out addresses
+ * \param iv_out_dma_len Length of iv_out_dma array (additional elements of the iv_out_dma array are ignored)
+ * \param iv_out_size May be 8 or 16 bytes long
+ * \param iv_seq IN/OUT array to the descriptors sequence
+ * \param iv_seq_len IN/OUT pointer to the sequence length
+ *
+ * \return int Zero for success, negative value otherwise.
+ */
+int ssi_ivgen_getiv(
+ struct ssi_drvdata *drvdata,
+ dma_addr_t iv_out_dma[],
+ unsigned int iv_out_dma_len,
+ unsigned int iv_out_size,
+ HwDesc_s iv_seq[],
+ unsigned int *iv_seq_len);
+
+#endif /*__SSI_IVGEN_H__*/
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index da5f2d5..c2e3bb5 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -26,6 +26,7 @@
#include "ssi_request_mgr.h"
#include "ssi_sram_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_ivgen.h"
#include "ssi_hash.h"
#include "ssi_pm.h"
#include "ssi_pm_ext.h"
@@ -83,6 +84,7 @@ int ssi_power_mgr_runtime_resume(struct device *dev)
/* must be after the queue resuming as it uses the HW queue*/
ssi_hash_init_sram_digest_consts(drvdata);

+ ssi_ivgen_init_sram_pool(drvdata);
return 0;
}

diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 976a54c..c19c006 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -28,6 +28,7 @@
#include "ssi_buffer_mgr.h"
#include "ssi_request_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_ivgen.h"
#include "ssi_pm.h"

#define SSI_MAX_POLL_ITER 10
@@ -359,9 +360,14 @@ int send_request(
void __iomem *cc_base = drvdata->cc_base;
struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
unsigned int used_sw_slots;
+ unsigned int iv_seq_len = 0;
unsigned int total_seq_len = len; /*initial sequence length*/
+ HwDesc_s iv_seq[SSI_IVPOOL_SEQ_LEN];
int rc;
- unsigned int max_required_seq_len = total_seq_len + ((is_dout == 0) ? 1 : 0);
+ unsigned int max_required_seq_len = (total_seq_len +
+ ((ssi_req->ivgen_dma_addr_len == 0) ? 0 :
+ SSI_IVPOOL_SEQ_LEN) +
+ ((is_dout == 0) ? 1 : 0));
DECL_CYCLE_COUNT_RESOURCES;

#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
@@ -410,6 +416,30 @@ int send_request(
total_seq_len++;
}

+ if (ssi_req->ivgen_dma_addr_len > 0) {
+ SSI_LOG_DEBUG("Acquire IV from pool into %d DMA addresses 0x%llX, 0x%llX, 0x%llX, IV-size=%u\n",
+ ssi_req->ivgen_dma_addr_len,
+ (unsigned long long)ssi_req->ivgen_dma_addr[0],
+ (unsigned long long)ssi_req->ivgen_dma_addr[1],
+ (unsigned long long)ssi_req->ivgen_dma_addr[2],
+ ssi_req->ivgen_size);
+
+ /* Acquire IV from pool */
+ rc = ssi_ivgen_getiv(drvdata, ssi_req->ivgen_dma_addr, ssi_req->ivgen_dma_addr_len,
+ ssi_req->ivgen_size, iv_seq, &iv_seq_len);
+
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("Failed to generate IV (rc=%d)\n", rc);
+ spin_unlock_bh(&req_mgr_h->hw_lock);
+#if defined (CONFIG_PM_RUNTIME) || defined (CONFIG_PM_SLEEP)
+ ssi_power_mgr_runtime_put_suspend(&drvdata->plat_dev->dev);
+#endif
+ return rc;
+ }
+
+ total_seq_len += iv_seq_len;
+ }
+
used_sw_slots = ((req_mgr_h->req_queue_head - req_mgr_h->req_queue_tail) & (MAX_REQUEST_QUEUE_SIZE-1));
if (unlikely(used_sw_slots > req_mgr_h->max_used_sw_slots)) {
req_mgr_h->max_used_sw_slots = used_sw_slots;
@@ -432,6 +462,7 @@ int send_request(

/* STAT_PHASE_4: Push sequence */
START_CYCLE_COUNT();
+ enqueue_seq(cc_base, iv_seq, iv_seq_len);
enqueue_seq(cc_base, desc, len);
enqueue_seq(cc_base, &req_mgr_h->compl_desc, (is_dout ? 0 : 1));
END_CYCLE_COUNT(ssi_req->op_type, STAT_PHASE_4);
--
2.1.4

2017-04-20 13:12:57

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 3/9] staging: ccree: add skcipher support

Add CryptoCell skcipher support
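
Once registered, these algorithms are reached through the standard kernel
crypto API like any other provider. A minimal caller sketch, assuming the
driver has probed and "cbc(aes)" resolves to this implementation (buffers
and key are placeholders, error handling trimmed):

    #include <crypto/aes.h>
    #include <crypto/skcipher.h>
    #include <linux/scatterlist.h>

    struct crypto_skcipher *tfm;
    struct skcipher_request *req;
    struct scatterlist sg;
    u8 key[AES_KEYSIZE_128], iv[AES_BLOCK_SIZE], buf[AES_BLOCK_SIZE];
    int rc;

    tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
    if (IS_ERR(tfm))
            return PTR_ERR(tfm);
    rc = crypto_skcipher_setkey(tfm, key, sizeof(key));
    req = skcipher_request_alloc(tfm, GFP_KERNEL);
    sg_init_one(&sg, buf, sizeof(buf));
    skcipher_request_set_crypt(req, &sg, &sg, sizeof(buf), iv);
    /* a real caller sets a completion callback first; async providers
     * such as this one may return -EINPROGRESS here */
    rc = crypto_skcipher_encrypt(req);
    skcipher_request_free(req);
    crypto_free_skcipher(tfm);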

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/Kconfig | 8 +
drivers/staging/ccree/Makefile | 2 +-
drivers/staging/ccree/cc_crypto_ctx.h | 21 +
drivers/staging/ccree/ssi_buffer_mgr.c | 147 ++++
drivers/staging/ccree/ssi_buffer_mgr.h | 16 +
drivers/staging/ccree/ssi_cipher.c | 1440 ++++++++++++++++++++++++++++++++
drivers/staging/ccree/ssi_cipher.h | 88 ++
drivers/staging/ccree/ssi_driver.c | 14 +
drivers/staging/ccree/ssi_driver.h | 30 +
9 files changed, 1765 insertions(+), 1 deletion(-)
create mode 100644 drivers/staging/ccree/ssi_cipher.c
create mode 100644 drivers/staging/ccree/ssi_cipher.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index a528a99..3fff040 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -3,11 +3,19 @@ config CRYPTO_DEV_CCREE
depends on CRYPTO_HW && OF && HAS_DMA
default n
select CRYPTO_HASH
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_DES
+ select CRYPTO_AUTHENC
select CRYPTO_SHA1
select CRYPTO_MD5
select CRYPTO_SHA256
select CRYPTO_SHA512
select CRYPTO_HMAC
+ select CRYPTO_AES
+ select CRYPTO_CBC
+ select CRYPTO_ECB
+ select CRYPTO_CTR
+ select CRYPTO_XTS
help
Say 'Y' to enable a driver for the Arm TrustZone CryptoCell
C7xx. Currently only the CryptoCell 712 REE is supported.
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index f94e225..21a80d5 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index fedf259..f198779 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -242,6 +242,27 @@ struct drv_ctx_hmac {
CC_DIGEST_SIZE_MAX - CC_HMAC_BLOCK_SIZE_MAX];
};

+struct drv_ctx_cipher {
+ enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */
+ enum drv_cipher_mode mode;
+ enum drv_crypto_direction direction;
+ enum drv_crypto_key_type crypto_key_type;
+ enum drv_crypto_padding_type padding_type;
+ uint32_t key_size; /* numeric value in bytes */
+ uint32_t data_unit_size; /* required for XTS */
+ /* block_state is the AES engine block state.
+ * It is used by the host to pass IV or counter at initialization.
+ * It is used by SeP for intermediate block chaining state and for
+ * returning MAC algorithm results. */
+ uint8_t block_state[CC_AES_BLOCK_SIZE];
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ uint8_t xex_key[CC_AES_KEY_SIZE_MAX];
+ /* reserve to end of allocated context size */
+ uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 7 -
+ CC_AES_BLOCK_SIZE/sizeof(uint32_t) - 2 *
+ (CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))];
+};
+
/*******************************************************************/
/***************** MESSAGE BASED CONTEXTS **************************/
/*******************************************************************/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 5144eaa..a0fafa9 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -28,6 +28,7 @@

#include "ssi_buffer_mgr.h"
#include "cc_lli_defs.h"
+#include "ssi_cipher.h"
#include "ssi_hash.h"

#define LLI_MAX_NUM_OF_DATA_ENTRIES 128
@@ -517,6 +518,152 @@ static inline int ssi_ahash_handle_curr_buf(struct device *dev,
return 0;
}

+void ssi_buffer_mgr_unmap_blkcipher_request(
+ struct device *dev,
+ void *ctx,
+ unsigned int ivsize,
+ struct scatterlist *src,
+ struct scatterlist *dst)
+{
+ struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
+
+ if (likely(req_ctx->gen_ctx.iv_dma_addr != 0)) {
+ SSI_LOG_DEBUG("Unmapped iv: iv_dma_addr=0x%llX iv_size=%u\n",
+ (unsigned long long)req_ctx->gen_ctx.iv_dma_addr,
+ ivsize);
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr);
+ dma_unmap_single(dev, req_ctx->gen_ctx.iv_dma_addr,
+ ivsize,
+ DMA_TO_DEVICE);
+ }
+ /* Release pool */
+ if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(req_ctx->mlli_params.mlli_dma_addr);
+ dma_pool_free(req_ctx->mlli_params.curr_pool,
+ req_ctx->mlli_params.mlli_virt_addr,
+ req_ctx->mlli_params.mlli_dma_addr);
+ }
+
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(src));
+ dma_unmap_sg(dev, src, req_ctx->in_nents,
+ DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped req->src=%pK\n",
+ sg_virt(src));
+
+ if (src != dst) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(dst));
+ dma_unmap_sg(dev, dst, req_ctx->out_nents,
+ DMA_BIDIRECTIONAL);
+ SSI_LOG_DEBUG("Unmapped req->dst=%pK\n",
+ sg_virt(dst));
+ }
+}
+
+int ssi_buffer_mgr_map_blkcipher_request(
+ struct ssi_drvdata *drvdata,
+ void *ctx,
+ unsigned int ivsize,
+ unsigned int nbytes,
+ void *info,
+ struct scatterlist *src,
+ struct scatterlist *dst)
+{
+ struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
+ struct mlli_params *mlli_params = &req_ctx->mlli_params;
+ struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+ struct device *dev = &drvdata->plat_dev->dev;
+ struct buffer_array sg_data;
+ uint32_t dummy = 0;
+ int rc = 0;
+ uint32_t mapped_nents = 0;
+
+ req_ctx->dma_buf_type = SSI_DMA_BUF_DLLI;
+ mlli_params->curr_pool = NULL;
+ sg_data.num_of_buffers = 0;
+
+ /* Map IV buffer */
+ if (likely(ivsize != 0)) {
+ dump_byte_array("iv", (uint8_t *)info, ivsize);
+ req_ctx->gen_ctx.iv_dma_addr =
+ dma_map_single(dev, (void *)info,
+ ivsize,
+ DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev,
+ req_ctx->gen_ctx.iv_dma_addr))) {
+ SSI_LOG_ERR("Mapping iv %u B at va=%pK "
+ "for DMA failed\n", ivsize, info);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(req_ctx->gen_ctx.iv_dma_addr,
+ ivsize);
+ SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
+ ivsize, info,
+ (unsigned long long)req_ctx->gen_ctx.iv_dma_addr);
+ } else {
+ req_ctx->gen_ctx.iv_dma_addr = 0;
+ }
+
+ /* Map the src SGL */
+ rc = ssi_buffer_mgr_map_scatterlist(dev, src,
+ nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+ if (unlikely(rc != 0)) {
+ rc = -ENOMEM;
+ goto ablkcipher_exit;
+ }
+ if (mapped_nents > 1)
+ req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+
+ if (unlikely(src == dst)) {
+ /* Handle inplace operation */
+ if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
+ req_ctx->out_nents = 0;
+ ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+ req_ctx->in_nents, src,
+ nbytes, 0, true, &req_ctx->in_mlli_nents);
+ }
+ } else {
+ /* Map the dst sg */
+ if (unlikely(ssi_buffer_mgr_map_scatterlist(
+ dev, dst, nbytes,
+ DMA_BIDIRECTIONAL, &req_ctx->out_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy,
+ &mapped_nents))) {
+ rc = -ENOMEM;
+ goto ablkcipher_exit;
+ }
+ if (mapped_nents > 1)
+ req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+
+ if (unlikely((req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI))) {
+ ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+ req_ctx->in_nents, src,
+ nbytes, 0, true,
+ &req_ctx->in_mlli_nents);
+ ssi_buffer_mgr_add_scatterlist_entry(&sg_data,
+ req_ctx->out_nents, dst,
+ nbytes, 0, true,
+ &req_ctx->out_mlli_nents);
+ }
+ }
+
+ if (unlikely(req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI)) {
+ mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+ rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params);
+ if (unlikely(rc != 0))
+ goto ablkcipher_exit;
+
+ }
+
+ SSI_LOG_DEBUG("areq_ctx->dma_buf_type = %s\n",
+ GET_DMA_BUFFER_TYPE(req_ctx->dma_buf_type));
+
+ return 0;
+
+ablkcipher_exit:
+ ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+ return rc;
+}
+
int ssi_buffer_mgr_map_hash_request_final(
struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
{
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index ccac5ce..2c58a63 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -55,6 +55,22 @@ int ssi_buffer_mgr_init(struct ssi_drvdata *drvdata);

int ssi_buffer_mgr_fini(struct ssi_drvdata *drvdata);

+int ssi_buffer_mgr_map_blkcipher_request(
+ struct ssi_drvdata *drvdata,
+ void *ctx,
+ unsigned int ivsize,
+ unsigned int nbytes,
+ void *info,
+ struct scatterlist *src,
+ struct scatterlist *dst);
+
+void ssi_buffer_mgr_unmap_blkcipher_request(
+ struct device *dev,
+ void *ctx,
+ unsigned int ivsize,
+ struct scatterlist *src,
+ struct scatterlist *dst);
+
int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);

int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
new file mode 100644
index 0000000..01467e8
--- /dev/null
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -0,0 +1,1440 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/semaphore.h>
+#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/aes.h>
+#include <crypto/ctr.h>
+#include <crypto/des.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "cc_lli_defs.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_cipher.h"
+#include "ssi_request_mgr.h"
+#include "ssi_sysfs.h"
+
+#define MAX_ABLKCIPHER_SEQ_LEN 6
+
+#define template_ablkcipher template_u.ablkcipher
+#define template_sblkcipher template_u.blkcipher
+
+#define SSI_MIN_AES_XTS_SIZE 0x10
+#define SSI_MAX_AES_XTS_SIZE 0x2000
+struct ssi_blkcipher_handle {
+ struct list_head blkcipher_alg_list;
+};
+
+struct cc_user_key_info {
+ uint8_t *key;
+ dma_addr_t key_dma_addr;
+};
+struct cc_hw_key_info {
+ enum HwCryptoKey key1_slot;
+ enum HwCryptoKey key2_slot;
+};
+
+struct ssi_ablkcipher_ctx {
+ struct ssi_drvdata *drvdata;
+ int keylen;
+ int key_round_number;
+ int cipher_mode;
+ int flow_mode;
+ unsigned int flags;
+ struct blkcipher_req_ctx *sync_ctx;
+ struct cc_user_key_info user;
+ struct cc_hw_key_info hw;
+ struct crypto_shash *shash_tfm;
+};
+
+static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base);
+
+
+static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, uint32_t size)
+{
+ switch (ctx_p->flow_mode) {
+ case S_DIN_to_AES:
+ switch (size) {
+ case CC_AES_128_BIT_KEY_SIZE:
+ case CC_AES_192_BIT_KEY_SIZE:
+ if (likely((ctx_p->cipher_mode != DRV_CIPHER_XTS) &&
+ (ctx_p->cipher_mode != DRV_CIPHER_ESSIV) &&
+ (ctx_p->cipher_mode != DRV_CIPHER_BITLOCKER)))
+ return 0;
+ break;
+ case CC_AES_256_BIT_KEY_SIZE:
+ return 0;
+ case (CC_AES_192_BIT_KEY_SIZE*2):
+ case (CC_AES_256_BIT_KEY_SIZE*2):
+ if (likely((ctx_p->cipher_mode == DRV_CIPHER_XTS) ||
+ (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) ||
+ (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER)))
+ return 0;
+ break;
+ default:
+ break;
+ }
+ break; /* AES key sizes must not fall through to DES validation */
+ case S_DIN_to_DES:
+ if (likely(size == DES3_EDE_KEY_SIZE ||
+ size == DES_KEY_SIZE))
+ return 0;
+ break;
+#if SSI_CC_HAS_MULTI2
+ case S_DIN_to_MULTI2:
+ if (likely(size == CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE))
+ return 0;
+ break;
+#endif
+ default:
+ break;
+
+ }
+ return -EINVAL;
+}
+
+
+static int validate_data_size(struct ssi_ablkcipher_ctx *ctx_p, unsigned int size)
+{
+ switch (ctx_p->flow_mode) {
+ case S_DIN_to_AES:
+ switch (ctx_p->cipher_mode) {
+ case DRV_CIPHER_XTS:
+ if ((size >= SSI_MIN_AES_XTS_SIZE) &&
+ (size <= SSI_MAX_AES_XTS_SIZE) &&
+ IS_ALIGNED(size, AES_BLOCK_SIZE))
+ return 0;
+ break;
+ case DRV_CIPHER_CBC_CTS:
+ if (likely(size >= AES_BLOCK_SIZE))
+ return 0;
+ break;
+ case DRV_CIPHER_OFB:
+ case DRV_CIPHER_CTR:
+ return 0;
+ case DRV_CIPHER_ECB:
+ case DRV_CIPHER_CBC:
+ case DRV_CIPHER_ESSIV:
+ case DRV_CIPHER_BITLOCKER:
+ if (likely(IS_ALIGNED(size, AES_BLOCK_SIZE)))
+ return 0;
+ break;
+ default:
+ break;
+ }
+ break;
+ case S_DIN_to_DES:
+ if (likely(IS_ALIGNED(size, DES_BLOCK_SIZE)))
+ return 0;
+ break;
+#if SSI_CC_HAS_MULTI2
+ case S_DIN_to_MULTI2:
+ switch (ctx_p->cipher_mode) {
+ case DRV_MULTI2_CBC:
+ if (likely(IS_ALIGNED(size, CC_MULTI2_BLOCK_SIZE)))
+ return 0;
+ break;
+ case DRV_MULTI2_OFB:
+ return 0;
+ default:
+ break;
+ }
+ break;
+#endif /*SSI_CC_HAS_MULTI2*/
+ default:
+ break;
+
+ }
+ return -EINVAL;
+}
+
+static unsigned int get_max_keysize(struct crypto_tfm *tfm)
+{
+ struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg);
+
+ if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_ABLKCIPHER) {
+ return ssi_alg->crypto_alg.cra_ablkcipher.max_keysize;
+ }
+
+ if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_BLKCIPHER) {
+ return ssi_alg->crypto_alg.cra_blkcipher.max_keysize;
+ }
+
+ return 0;
+}
+
+static int ssi_blkcipher_init(struct crypto_tfm *tfm)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct crypto_alg *alg = tfm->__crt_alg;
+ struct ssi_crypto_alg *ssi_alg =
+ container_of(alg, struct ssi_crypto_alg, crypto_alg);
+ struct device *dev;
+ int rc = 0;
+ unsigned int max_key_buf_size = get_max_keysize(tfm);
+
+ SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p,
+ crypto_tfm_alg_name(tfm));
+
+ ctx_p->cipher_mode = ssi_alg->cipher_mode;
+ ctx_p->flow_mode = ssi_alg->flow_mode;
+ ctx_p->drvdata = ssi_alg->drvdata;
+ dev = &ctx_p->drvdata->plat_dev->dev;
+
+ /* Allocate key buffer, cache line aligned */
+ ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL|GFP_DMA);
+ if (!ctx_p->user.key) {
+ SSI_LOG_ERR("Allocating key buffer in context failed\n");
+ return -ENOMEM; /* must not fall through to mapping a NULL buffer */
+ }
+ SSI_LOG_DEBUG("Allocated key buffer in context. key=@%p\n",
+ ctx_p->user.key);
+
+ /* Map key buffer */
+ ctx_p->user.key_dma_addr = dma_map_single(dev, (void *)ctx_p->user.key,
+ max_key_buf_size, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
+ SSI_LOG_ERR("Mapping Key %u B at va=%pK for DMA failed\n",
+ max_key_buf_size, ctx_p->user.key);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr, max_key_buf_size);
+ SSI_LOG_DEBUG("Mapped key %u B at va=%pK to dma=0x%llX\n",
+ max_key_buf_size, ctx_p->user.key,
+ (unsigned long long)ctx_p->user.key_dma_addr);
+
+ if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+ /* Alloc hash tfm for essiv */
+ ctx_p->shash_tfm = crypto_alloc_shash("sha256-generic", 0, 0);
+ if (IS_ERR(ctx_p->shash_tfm)) {
+ SSI_LOG_ERR("Error allocating hash tfm for ESSIV.\n");
+ return PTR_ERR(ctx_p->shash_tfm);
+ }
+ }
+
+ return rc;
+}
+
+static void ssi_blkcipher_exit(struct crypto_tfm *tfm)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+ unsigned int max_key_buf_size = get_max_keysize(tfm);
+
+ SSI_LOG_DEBUG("Clearing context @%p for %s\n",
+ crypto_tfm_ctx(tfm), crypto_tfm_alg_name(tfm));
+
+ if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+ /* Free hash tfm for essiv */
+ crypto_free_shash(ctx_p->shash_tfm);
+ ctx_p->shash_tfm = NULL;
+ }
+
+ /* Unmap key buffer */
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr);
+ dma_unmap_single(dev, ctx_p->user.key_dma_addr, max_key_buf_size,
+ DMA_TO_DEVICE);
+ SSI_LOG_DEBUG("Unmapped key buffer key_dma_addr=0x%llX\n",
+ (unsigned long long)ctx_p->user.key_dma_addr);
+
+ /* Free key buffer in context */
+ kfree(ctx_p->user.key);
+ SSI_LOG_DEBUG("Free key buffer in context. key=@%p\n", ctx_p->user.key);
+}
+
+
+typedef struct tdes_keys {
+ u8 key1[DES_KEY_SIZE];
+ u8 key2[DES_KEY_SIZE];
+ u8 key3[DES_KEY_SIZE];
+} tdes_keys_t;
+
+static const u8 zero_buff[] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+ 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
+
+static enum HwCryptoKey hw_key_to_cc_hw_key(int slot_num)
+{
+ switch (slot_num) {
+ case 0:
+ return KFDE0_KEY;
+ case 1:
+ return KFDE1_KEY;
+ case 2:
+ return KFDE2_KEY;
+ case 3:
+ return KFDE3_KEY;
+ }
+ return END_OF_KEYS;
+}
+
+static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
+ const u8 *key,
+ unsigned int keylen)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+ u32 tmp[DES_EXPKEY_WORDS];
+ unsigned int max_key_buf_size = get_max_keysize(tfm);
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ SSI_LOG_DEBUG("Setting key in context @%p for %s. keylen=%u\n",
+ ctx_p, crypto_tfm_alg_name(tfm), keylen);
+ dump_byte_array("key", (uint8_t *)key, keylen);
+
+ /* STAT_PHASE_0: Init and sanity checks */
+ START_CYCLE_COUNT();
+
+#if SSI_CC_HAS_MULTI2
+ /* The last byte of the key buffer is the round number and is not part of the key size */
+ if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
+ keylen -= 1;
+ }
+#endif /*SSI_CC_HAS_MULTI2*/
+
+ if (unlikely(validate_keys_sizes(ctx_p, keylen) != 0)) {
+ SSI_LOG_ERR("Unsupported key size %d.\n", keylen);
+ crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+ }
+
+ if (ssi_is_hw_key(tfm)) {
+ /* setting HW key slots */
+ struct arm_hw_key_info *hki = (struct arm_hw_key_info *)key;
+
+ if (unlikely(ctx_p->flow_mode != S_DIN_to_AES)) {
+ SSI_LOG_ERR("HW key not supported for non-AES flows\n");
+ return -EINVAL;
+ }
+
+ ctx_p->hw.key1_slot = hw_key_to_cc_hw_key(hki->hw_key1);
+ if (unlikely(ctx_p->hw.key1_slot == END_OF_KEYS)) {
+ SSI_LOG_ERR("Unsupported hw key1 number (%d)\n", hki->hw_key1);
+ return -EINVAL;
+ }
+
+ if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) ||
+ (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) ||
+ (ctx_p->cipher_mode == DRV_CIPHER_BITLOCKER)) {
+ if (unlikely(hki->hw_key1 == hki->hw_key2)) {
+ SSI_LOG_ERR("Illegal hw key numbers (%d,%d)\n", hki->hw_key1, hki->hw_key2);
+ return -EINVAL;
+ }
+ ctx_p->hw.key2_slot = hw_key_to_cc_hw_key(hki->hw_key2);
+ if (unlikely(ctx_p->hw.key2_slot == END_OF_KEYS)) {
+ SSI_LOG_ERR("Unsupported hw key2 number (%d)\n", hki->hw_key2);
+ return -EINVAL;
+ }
+ }
+
+ ctx_p->keylen = keylen;
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: ssi_is_hw_key ret 0");
+
+ return 0;
+ }
+
+ // verify weak keys
+ if (ctx_p->flow_mode == S_DIN_to_DES) {
+ if (unlikely(!des_ekey(tmp, key)) &&
+ (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_WEAK_KEY)) {
+ tfm->crt_flags |= CRYPTO_TFM_RES_WEAK_KEY;
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak DES key");
+ return -EINVAL;
+ }
+ }
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+
+ /* STAT_PHASE_1: Copy key to ctx */
+ START_CYCLE_COUNT();
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr);
+ dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr,
+ max_key_buf_size, DMA_TO_DEVICE);
+#if SSI_CC_HAS_MULTI2
+ if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
+ memcpy(ctx_p->user.key, key, CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE);
+ ctx_p->key_round_number = key[CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE];
+ if (ctx_p->key_round_number < CC_MULTI2_MIN_NUM_ROUNDS ||
+ ctx_p->key_round_number > CC_MULTI2_MAX_NUM_ROUNDS) {
+ crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: SSI_CC_HAS_MULTI2 einval");
+ return -EINVAL;
+ }
+ } else
+#endif /*SSI_CC_HAS_MULTI2*/
+ {
+ memcpy(ctx_p->user.key, key, keylen);
+ if (keylen == 24)
+ memset(ctx_p->user.key + 24, 0, CC_AES_KEY_SIZE_MAX - 24);
+
+ if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+ /* sha256 for key2 - use sw implementation */
+ int key_len = keylen >> 1;
+ int err;
+ SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm);
+ desc->tfm = ctx_p->shash_tfm;
+
+ err = crypto_shash_digest(desc, ctx_p->user.key, key_len, ctx_p->user.key + key_len);
+ if (err) {
+ SSI_LOG_ERR("Failed to hash ESSIV key.\n");
+ return err;
+ }
+ }
+ }
+ dma_sync_single_for_device(dev, ctx_p->user.key_dma_addr,
+ max_key_buf_size, DMA_TO_DEVICE);
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx_p->user.key_dma_addr, max_key_buf_size);
+ ctx_p->keylen = keylen;
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: return safely");
+ return 0;
+}
+
+static inline void
+ssi_blkcipher_create_setup_desc(
+ struct crypto_tfm *tfm,
+ struct blkcipher_req_ctx *req_ctx,
+ unsigned int ivsize,
+ unsigned int nbytes,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ int cipher_mode = ctx_p->cipher_mode;
+ int flow_mode = ctx_p->flow_mode;
+ int direction = req_ctx->gen_ctx.op_type;
+ dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr;
+ unsigned int key_len = ctx_p->keylen;
+ dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
+ unsigned int du_size = nbytes;
+
+ struct ssi_crypto_alg *ssi_alg = container_of(tfm->__crt_alg, struct ssi_crypto_alg, crypto_alg);
+
+ if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_512)
+ du_size = 512;
+ if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) == CRYPTO_ALG_BULK_DU_4096)
+ du_size = 4096;
+
+ switch (cipher_mode) {
+ case DRV_CIPHER_CBC:
+ case DRV_CIPHER_CBC_CTS:
+ case DRV_CIPHER_CTR:
+ case DRV_CIPHER_OFB:
+ /* Load cipher state */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ iv_dma_addr, ivsize,
+ NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+ if ((cipher_mode == DRV_CIPHER_CTR) ||
+ (cipher_mode == DRV_CIPHER_OFB)) {
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size],
+ SETUP_LOAD_STATE1);
+ } else {
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size],
+ SETUP_LOAD_STATE0);
+ }
+ (*seq_size)++;
+ /*FALLTHROUGH*/
+ case DRV_CIPHER_ECB:
+ /* Load key */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ if (flow_mode == S_DIN_to_AES) {
+
+ if (ssi_is_hw_key(tfm)) {
+ HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot);
+ } else {
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ key_dma_addr,
+ ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len),
+ NS_BIT);
+ }
+ HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len);
+ } else {
+ /*des*/
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ key_dma_addr, key_len,
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_DES(&desc[*seq_size], key_len);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+ (*seq_size)++;
+ break;
+ case DRV_CIPHER_XTS:
+ case DRV_CIPHER_ESSIV:
+ case DRV_CIPHER_BITLOCKER:
+ /* Load AES key */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ if (ssi_is_hw_key(tfm)) {
+ HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key1_slot);
+ } else {
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ key_dma_addr, key_len/2,
+ NS_BIT);
+ }
+ HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+ (*seq_size)++;
+
+ /* load XEX key */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ if (ssi_is_hw_key(tfm)) {
+ HW_DESC_SET_HW_CRYPTO_KEY(&desc[*seq_size], ctx_p->hw.key2_slot);
+ } else {
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ (key_dma_addr+key_len/2), key_len/2,
+ NS_BIT);
+ }
+ HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[*seq_size], du_size);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], S_DIN_to_AES2);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_XEX_KEY);
+ (*seq_size)++;
+
+ /* Set state */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[*seq_size], key_len/2);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ iv_dma_addr, CC_AES_BLOCK_SIZE,
+ NS_BIT);
+ (*seq_size)++;
+ break;
+ default:
+ SSI_LOG_ERR("Unsupported cipher mode (%d)\n", cipher_mode);
+ BUG();
+ }
+}
+
+#if SSI_CC_HAS_MULTI2
+static inline void ssi_blkcipher_create_multi2_setup_desc(
+ struct crypto_tfm *tfm,
+ struct blkcipher_req_ctx *req_ctx,
+ unsigned int ivsize,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+
+ int direction = req_ctx->gen_ctx.op_type;
+ /* Load system key */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr,
+ CC_MULTI2_SYSTEM_KEY_SIZE,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_KEY0);
+ (*seq_size)++;
+
+ /* load data key */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ (ctx_p->user.key_dma_addr +
+ CC_MULTI2_SYSTEM_KEY_SIZE),
+ CC_MULTI2_DATA_KEY_SIZE, NS_BIT);
+ HW_DESC_SET_MULTI2_NUM_ROUNDS(&desc[*seq_size],
+ ctx_p->key_round_number);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE0);
+ (*seq_size)++;
+
+
+ /* Set state */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ req_ctx->gen_ctx.iv_dma_addr,
+ ivsize, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[*seq_size], direction);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], ctx_p->flow_mode);
+ HW_DESC_SET_CIPHER_MODE(&desc[*seq_size], ctx_p->cipher_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[*seq_size], SETUP_LOAD_STATE1);
+ (*seq_size)++;
+
+}
+#endif /*SSI_CC_HAS_MULTI2*/
+
+static inline void
+ssi_blkcipher_create_data_desc(
+ struct crypto_tfm *tfm,
+ struct blkcipher_req_ctx *req_ctx,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes,
+ void *areq,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ unsigned int flow_mode = ctx_p->flow_mode;
+
+ switch (ctx_p->flow_mode) {
+ case S_DIN_to_AES:
+ flow_mode = DIN_AES_DOUT;
+ break;
+ case S_DIN_to_DES:
+ flow_mode = DIN_DES_DOUT;
+ break;
+#if SSI_CC_HAS_MULTI2
+ case S_DIN_to_MULTI2:
+ flow_mode = DIN_MULTI2_DOUT;
+ break;
+#endif /*SSI_CC_HAS_MULTI2*/
+ default:
+ SSI_LOG_ERR("invalid flow mode, flow_mode = %d \n", flow_mode);
+ return;
+ }
+ /* Process */
+ if (likely(req_ctx->dma_buf_type == SSI_DMA_BUF_DLLI)){
+ SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
+ (unsigned long long)sg_dma_address(src),
+ nbytes);
+ SSI_LOG_DEBUG(" data params addr 0x%llX length 0x%X \n",
+ (unsigned long long)sg_dma_address(dst),
+ nbytes);
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ sg_dma_address(src),
+ nbytes, NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[*seq_size],
+ sg_dma_address(dst),
+ nbytes,
+ NS_BIT, (areq == NULL)? 0:1);
+ if (areq != NULL) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ (*seq_size)++;
+ } else {
+ /* bypass */
+ SSI_LOG_DEBUG(" bypass params addr 0x%llX "
+ "length 0x%X addr 0x%08X\n",
+ (unsigned long long)req_ctx->mlli_params.mlli_dma_addr,
+ req_ctx->mlli_params.mlli_len,
+ (unsigned int)ctx_p->drvdata->mlli_sram_addr);
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ req_ctx->mlli_params.mlli_dma_addr,
+ req_ctx->mlli_params.mlli_len,
+ NS_BIT);
+ HW_DESC_SET_DOUT_SRAM(&desc[*seq_size],
+ ctx_p->drvdata->mlli_sram_addr,
+ req_ctx->mlli_params.mlli_len);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS);
+ (*seq_size)++;
+
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_MLLI,
+ ctx_p->drvdata->mlli_sram_addr,
+ req_ctx->in_mlli_nents, NS_BIT);
+ if (req_ctx->out_nents == 0) {
+ SSI_LOG_DEBUG(" din/dout params addr 0x%08X "
+ "addr 0x%08X\n",
+ (unsigned int)ctx_p->drvdata->mlli_sram_addr,
+ (unsigned int)ctx_p->drvdata->mlli_sram_addr);
+ HW_DESC_SET_DOUT_MLLI(&desc[*seq_size],
+ ctx_p->drvdata->mlli_sram_addr,
+ req_ctx->in_mlli_nents,
+ NS_BIT,(areq == NULL)? 0:1);
+ } else {
+ SSI_LOG_DEBUG(" din/dout params "
+ "addr 0x%08X addr 0x%08X\n",
+ (unsigned int)ctx_p->drvdata->mlli_sram_addr,
+ (unsigned int)ctx_p->drvdata->mlli_sram_addr +
+ (uint32_t)LLI_ENTRY_BYTE_SIZE *
+ req_ctx->in_nents);
+ HW_DESC_SET_DOUT_MLLI(&desc[*seq_size],
+ (ctx_p->drvdata->mlli_sram_addr +
+ LLI_ENTRY_BYTE_SIZE *
+ req_ctx->in_mlli_nents),
+ req_ctx->out_mlli_nents, NS_BIT,(areq == NULL)? 0:1);
+ }
+ if (areq != NULL) {
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[*seq_size]);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], flow_mode);
+ (*seq_size)++;
+ }
+}
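
[Note: the data descriptor shape follows the DMA buffer type picked by the buffer manager: a single contiguous mapping (SSI_DMA_BUF_DLLI) moves the data with one direct descriptor, while a scatter-gather mapping first DMAs its MLLI link table into CryptoCell SRAM with a BYPASS descriptor and then references it as DMA_MLLI. A rough sketch of the underlying decision, assuming, hypothetically, that one mapped scatterlist entry is what qualifies a buffer as DLLI:]

	enum buf_kind {
		BUF_DLLI,	/* one contiguous DMA mapping, addressed directly */
		BUF_MLLI,	/* multiple entries, via a link table staged in SRAM */
	};

	static enum buf_kind classify_mapping(int mapped_nents)
	{
		return (mapped_nents == 1) ? BUF_DLLI : BUF_MLLI;
	}
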
+
+static int ssi_blkcipher_complete(struct device *dev,
+ struct ssi_ablkcipher_ctx *ctx_p,
+ struct blkcipher_req_ctx *req_ctx,
+ struct scatterlist *dst, struct scatterlist *src,
+ void *info, //req info
+ unsigned int ivsize,
+ void *areq,
+ void __iomem *cc_base)
+{
+ int completion_error = 0;
+ uint32_t inflight_counter;
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ START_CYCLE_COUNT();
+ ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+ info = req_ctx->backup_info;
+ END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4);
+
+
+	/* Save the inflight counter value to a local variable */
+ inflight_counter = ctx_p->drvdata->inflight_counter;
+ /*Decrease the inflight counter*/
+ if(ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0)
+ ctx_p->drvdata->inflight_counter--;
+
+ if(areq){
+ ablkcipher_request_complete(areq, completion_error);
+ return 0;
+ }
+ return completion_error;
+}
+
+static int ssi_blkcipher_process(
+ struct crypto_tfm *tfm,
+ struct blkcipher_req_ctx *req_ctx,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes,
+ void *info, //req info
+ unsigned int ivsize,
+ void *areq,
+ enum drv_crypto_direction direction)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct device *dev = &ctx_p->drvdata->plat_dev->dev;
+ HwDesc_s desc[MAX_ABLKCIPHER_SEQ_LEN];
+ struct ssi_crypto_req ssi_req = {};
+ int rc, seq_len = 0,cts_restore_flag = 0;
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ SSI_LOG_DEBUG("%s areq=%p info=%p nbytes=%d\n",
+ ((direction==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"),
+ areq, info, nbytes);
+
+ /* STAT_PHASE_0: Init and sanity checks */
+ START_CYCLE_COUNT();
+
+ /* TODO: check data length according to mode */
+ if (unlikely(validate_data_size(ctx_p, nbytes))) {
+ SSI_LOG_ERR("Unsupported data size %d.\n", nbytes);
+ crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
+ return -EINVAL;
+ }
+ if (nbytes == 0) {
+		/* A zero-length request is valid; nothing to process */
+ return 0;
+ }
+	/* For CTS, when the data size is block aligned, use plain CBC mode */
+ if (((nbytes % AES_BLOCK_SIZE) == 0) && (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)){
+
+ ctx_p->cipher_mode = DRV_CIPHER_CBC;
+ cts_restore_flag = 1;
+ }
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_ablkcipher_complete;
+ ssi_req.user_arg = (void *)areq;
+
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+ STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE;
+
+#endif
+
+ /* Setup request context */
+ req_ctx->gen_ctx.op_type = direction;
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0);
+
+ /* STAT_PHASE_1: Map buffers */
+ START_CYCLE_COUNT();
+
+ rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("map_request() failed\n");
+ goto exit_process;
+ }
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1);
+
+ /* STAT_PHASE_2: Create sequence */
+ START_CYCLE_COUNT();
+
+ /* Setup processing */
+#if SSI_CC_HAS_MULTI2
+ if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
+ ssi_blkcipher_create_multi2_setup_desc(tfm,
+ req_ctx,
+ ivsize,
+ desc,
+ &seq_len);
+ } else
+#endif /*SSI_CC_HAS_MULTI2*/
+ {
+ ssi_blkcipher_create_setup_desc(tfm,
+ req_ctx,
+ ivsize,
+ nbytes,
+ desc,
+ &seq_len);
+ }
+ /* Data processing */
+ ssi_blkcipher_create_data_desc(tfm,
+ req_ctx,
+ dst, src,
+ nbytes,
+ areq,
+ desc, &seq_len);
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);
+
+ /* STAT_PHASE_3: Lock HW and push sequence */
+ START_CYCLE_COUNT();
+
+ rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, (areq == NULL)? 0:1);
+ if(areq != NULL) {
+ if (unlikely(rc != -EINPROGRESS)) {
+ /* Failed to send the request or request completed synchronously */
+ ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+ }
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+ } else {
+ if (rc != 0) {
+ ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+ } else {
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+ rc = ssi_blkcipher_complete(dev, ctx_p, req_ctx, dst, src, info, ivsize, NULL, ctx_p->drvdata->cc_base);
+ }
+ }
+
+exit_process:
+ if (cts_restore_flag != 0)
+ ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS;
+
+ return rc;
+}
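
[Note the temporary CTS-to-CBC switch above: ciphertext stealing only changes the result when the final block is partial, so a block-aligned request is processed as plain CBC and the mode is restored on exit via cts_restore_flag. The same rule, restated as a stand-alone helper with a hypothetical name:]

	static inline int effective_cipher_mode(int cipher_mode, unsigned int nbytes)
	{
		/* CTS degenerates to CBC when there is no ragged tail. */
		if ((cipher_mode == DRV_CIPHER_CBC_CTS) &&
		    ((nbytes % AES_BLOCK_SIZE) == 0))
			return DRV_CIPHER_CBC;
		return cipher_mode;
	}
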
+
+static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+ struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
+ struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq);
+ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(areq);
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
+ unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
+
+ ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src, areq->info, ivsize, areq, cc_base);
+}
+
+
+
+static int ssi_sblkcipher_init(struct crypto_tfm *tfm)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+
+ /* Allocate sync ctx buffer */
+ ctx_p->sync_ctx = kmalloc(sizeof(struct blkcipher_req_ctx), GFP_KERNEL|GFP_DMA);
+ if (!ctx_p->sync_ctx) {
+ SSI_LOG_ERR("Allocating sync ctx buffer in context failed\n");
+ return -ENOMEM;
+ }
+ SSI_LOG_DEBUG("Allocated sync ctx buffer in context ctx_p->sync_ctx=@%p\n",
+ ctx_p->sync_ctx);
+
+ return ssi_blkcipher_init(tfm);
+}
+
+
+static void ssi_sblkcipher_exit(struct crypto_tfm *tfm)
+{
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+
+ kfree(ctx_p->sync_ctx);
+ SSI_LOG_DEBUG("Free sync ctx buffer in context ctx_p->sync_ctx=@%p\n", ctx_p->sync_ctx);
+
+ ssi_blkcipher_exit(tfm);
+}
+
+#ifdef SYNC_ALGS
+static int ssi_sblkcipher_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct crypto_blkcipher *blk_tfm = desc->tfm;
+ struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm);
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx;
+ unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
+
+ req_ctx->backup_info = desc->info;
+
+ return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_ENCRYPT);
+}
+
+static int ssi_sblkcipher_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct crypto_blkcipher *blk_tfm = desc->tfm;
+ struct crypto_tfm *tfm = crypto_blkcipher_tfm(blk_tfm);
+ struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ struct blkcipher_req_ctx *req_ctx = ctx_p->sync_ctx;
+ unsigned int ivsize = crypto_blkcipher_ivsize(blk_tfm);
+
+ req_ctx->backup_info = desc->info;
+
+ return ssi_blkcipher_process(tfm, req_ctx, dst, src, nbytes, desc->info, ivsize, NULL, DRV_CRYPTO_DIRECTION_DECRYPT);
+}
+#endif
+
+/* Async wrap functions */
+
+static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
+{
+ struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher;
+
+ ablktfm->reqsize = sizeof(struct blkcipher_req_ctx);
+
+ return ssi_blkcipher_init(tfm);
+}
+
+
+static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
+ const u8 *key,
+ unsigned int keylen)
+{
+ return ssi_blkcipher_setkey(crypto_ablkcipher_tfm(tfm), key, keylen);
+}
+
+static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
+ struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
+ struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+ unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
+
+ req_ctx->backup_info = req->info;
+
+ return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+}
+
+static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
+ struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
+ struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+ unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
+
+ req_ctx->backup_info = req->info;
+ return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
+}
+
+
+/* DX Block cipher alg */
+static struct ssi_alg_template blkcipher_algs[] = {
+/* Async template */
+#if SSI_CC_HAS_AES_XTS
+ {
+ .name = "xts(aes)",
+ .driver_name = "xts-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ .geniv = "eseqiv",
+ },
+ .cipher_mode = DRV_CIPHER_XTS,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "xts(aes)",
+ .driver_name = "xts-aes-du512-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_XTS,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "xts(aes)",
+ .driver_name = "xts-aes-du4096-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_XTS,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+#endif /*SSI_CC_HAS_AES_XTS*/
+#if SSI_CC_HAS_AES_ESSIV
+ {
+ .name = "essiv(aes)",
+ .driver_name = "essiv-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_ESSIV,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "essiv(aes)",
+ .driver_name = "essiv-aes-du512-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_ESSIV,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "essiv(aes)",
+ .driver_name = "essiv-aes-du4096-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_ESSIV,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+#endif /*SSI_CC_HAS_AES_ESSIV*/
+#if SSI_CC_HAS_AES_BITLOCKER
+ {
+ .name = "bitlocker(aes)",
+ .driver_name = "bitlocker-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_BITLOCKER,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "bitlocker(aes)",
+ .driver_name = "bitlocker-aes-du512-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_BITLOCKER,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "bitlocker(aes)",
+ .driver_name = "bitlocker-aes-du4096-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE * 2,
+ .max_keysize = AES_MAX_KEY_SIZE * 2,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_BITLOCKER,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+#endif /*SSI_CC_HAS_AES_BITLOCKER*/
+ {
+ .name = "ecb(aes)",
+ .driver_name = "ecb-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = 0,
+ },
+ .cipher_mode = DRV_CIPHER_ECB,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "cbc(aes)",
+ .driver_name = "cbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "ofb(aes)",
+ .driver_name = "ofb-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_OFB,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+#if SSI_CC_HAS_AES_CTS
+ {
+ .name = "cts1(cbc(aes))",
+ .driver_name = "cts1-cbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC_CTS,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+#endif
+ {
+ .name = "ctr(aes)",
+ .driver_name = "ctr-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CTR,
+ .flow_mode = S_DIN_to_AES,
+ .synchronous = false,
+ },
+ {
+ .name = "cbc(des3_ede)",
+ .driver_name = "cbc-3des-dx",
+ .blocksize = DES3_EDE_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+ .ivsize = DES3_EDE_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_DES,
+ .synchronous = false,
+ },
+ {
+ .name = "ecb(des3_ede)",
+ .driver_name = "ecb-3des-dx",
+ .blocksize = DES3_EDE_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+ .ivsize = 0,
+ },
+ .cipher_mode = DRV_CIPHER_ECB,
+ .flow_mode = S_DIN_to_DES,
+ .synchronous = false,
+ },
+ {
+ .name = "cbc(des)",
+ .driver_name = "cbc-des-dx",
+ .blocksize = DES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = DES_KEY_SIZE,
+ .max_keysize = DES_KEY_SIZE,
+ .ivsize = DES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_DES,
+ .synchronous = false,
+ },
+ {
+ .name = "ecb(des)",
+ .driver_name = "ecb-des-dx",
+ .blocksize = DES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = DES_KEY_SIZE,
+ .max_keysize = DES_KEY_SIZE,
+ .ivsize = 0,
+ },
+ .cipher_mode = DRV_CIPHER_ECB,
+ .flow_mode = S_DIN_to_DES,
+ .synchronous = false,
+ },
+#if SSI_CC_HAS_MULTI2
+ {
+ .name = "cbc(multi2)",
+ .driver_name = "cbc-multi2-dx",
+ .blocksize = CC_MULTI2_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_decrypt,
+ .min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+ .max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+ .ivsize = CC_MULTI2_IV_SIZE,
+ },
+ .cipher_mode = DRV_MULTI2_CBC,
+ .flow_mode = S_DIN_to_MULTI2,
+ .synchronous = false,
+ },
+ {
+ .name = "ofb(multi2)",
+ .driver_name = "ofb-multi2-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .template_ablkcipher = {
+ .setkey = ssi_ablkcipher_setkey,
+ .encrypt = ssi_ablkcipher_encrypt,
+ .decrypt = ssi_ablkcipher_encrypt,
+ .min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+ .max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
+ .ivsize = CC_MULTI2_IV_SIZE,
+ },
+ .cipher_mode = DRV_MULTI2_OFB,
+ .flow_mode = S_DIN_to_MULTI2,
+ .synchronous = false,
+ },
+#endif /*SSI_CC_HAS_MULTI2*/
+};
+
+static
+struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template *template)
+{
+ struct ssi_crypto_alg *t_alg;
+ struct crypto_alg *alg;
+
+ t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
+ if (!t_alg) {
+ SSI_LOG_ERR("failed to allocate t_alg\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ alg = &t_alg->crypto_alg;
+
+ snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name);
+ snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->driver_name);
+ alg->cra_module = THIS_MODULE;
+ alg->cra_priority = SSI_CRA_PRIO;
+ alg->cra_blocksize = template->blocksize;
+ alg->cra_alignmask = 0;
+ alg->cra_ctxsize = sizeof(struct ssi_ablkcipher_ctx);
+
+ alg->cra_init = template->synchronous? ssi_sblkcipher_init:ssi_ablkcipher_init;
+ alg->cra_exit = template->synchronous? ssi_sblkcipher_exit:ssi_blkcipher_exit;
+ alg->cra_type = template->synchronous? &crypto_blkcipher_type:&crypto_ablkcipher_type;
+ if(template->synchronous) {
+ alg->cra_blkcipher = template->template_sblkcipher;
+ alg->cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
+ template->type;
+ } else {
+ alg->cra_ablkcipher = template->template_ablkcipher;
+ alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
+ template->type;
+ }
+
+ t_alg->cipher_mode = template->cipher_mode;
+ t_alg->flow_mode = template->flow_mode;
+
+ return t_alg;
+}
+
+int ssi_ablkcipher_free(struct ssi_drvdata *drvdata)
+{
+ struct ssi_crypto_alg *t_alg, *n;
+ struct ssi_blkcipher_handle *blkcipher_handle =
+ drvdata->blkcipher_handle;
+ struct device *dev;
+ dev = &drvdata->plat_dev->dev;
+
+ if (blkcipher_handle != NULL) {
+ /* Remove registered algs */
+ list_for_each_entry_safe(t_alg, n,
+ &blkcipher_handle->blkcipher_alg_list,
+ entry) {
+ crypto_unregister_alg(&t_alg->crypto_alg);
+ list_del(&t_alg->entry);
+ kfree(t_alg);
+ }
+ kfree(blkcipher_handle);
+ drvdata->blkcipher_handle = NULL;
+ }
+ return 0;
+}
+
+
+
+int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
+{
+ struct ssi_blkcipher_handle *ablkcipher_handle;
+ struct ssi_crypto_alg *t_alg;
+ int rc = -ENOMEM;
+ int alg;
+
+ ablkcipher_handle = kmalloc(sizeof(struct ssi_blkcipher_handle),
+ GFP_KERNEL);
+ if (ablkcipher_handle == NULL)
+ return -ENOMEM;
+
+ drvdata->blkcipher_handle = ablkcipher_handle;
+
+ INIT_LIST_HEAD(&ablkcipher_handle->blkcipher_alg_list);
+
+ /* Linux crypto */
+ SSI_LOG_DEBUG("Number of algorithms = %zu\n", ARRAY_SIZE(blkcipher_algs));
+ for (alg = 0; alg < ARRAY_SIZE(blkcipher_algs); alg++) {
+ SSI_LOG_DEBUG("creating %s\n", blkcipher_algs[alg].driver_name);
+ t_alg = ssi_ablkcipher_create_alg(&blkcipher_algs[alg]);
+ if (IS_ERR(t_alg)) {
+ rc = PTR_ERR(t_alg);
+ SSI_LOG_ERR("%s alg allocation failed\n",
+ blkcipher_algs[alg].driver_name);
+ goto fail0;
+ }
+ t_alg->drvdata = drvdata;
+
+ SSI_LOG_DEBUG("registering %s\n", blkcipher_algs[alg].driver_name);
+ rc = crypto_register_alg(&t_alg->crypto_alg);
+ SSI_LOG_DEBUG("%s alg registration rc = %x\n",
+ t_alg->crypto_alg.cra_driver_name, rc);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("%s alg registration failed\n",
+ t_alg->crypto_alg.cra_driver_name);
+ kfree(t_alg);
+ goto fail0;
+ } else {
+ list_add_tail(&t_alg->entry,
+ &ablkcipher_handle->blkcipher_alg_list);
+ SSI_LOG_DEBUG("Registered %s\n",
+ t_alg->crypto_alg.cra_driver_name);
+ }
+ }
+ return 0;
+
+fail0:
+ ssi_ablkcipher_free(drvdata);
+ return rc;
+}
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
new file mode 100644
index 0000000..511800f1
--- /dev/null
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/* \file ssi_cipher.h
+ ARM CryptoCell Cipher Crypto API
+ */
+
+#ifndef __SSI_CIPHER_H__
+#define __SSI_CIPHER_H__
+
+#include <linux/kernel.h>
+#include <crypto/algapi.h>
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+
+
+/* Crypto cipher flags */
+#define CC_CRYPTO_CIPHER_KEY_KFDE0 (1 << 0)
+#define CC_CRYPTO_CIPHER_KEY_KFDE1 (1 << 1)
+#define CC_CRYPTO_CIPHER_KEY_KFDE2 (1 << 2)
+#define CC_CRYPTO_CIPHER_KEY_KFDE3 (1 << 3)
+#define CC_CRYPTO_CIPHER_DU_SIZE_512B (1 << 4)
+
+#define CC_CRYPTO_CIPHER_KEY_KFDE_MASK (CC_CRYPTO_CIPHER_KEY_KFDE0 | CC_CRYPTO_CIPHER_KEY_KFDE1 | CC_CRYPTO_CIPHER_KEY_KFDE2 | CC_CRYPTO_CIPHER_KEY_KFDE3)
+
+
+struct blkcipher_req_ctx {
+ struct async_gen_req_ctx gen_ctx;
+ enum ssi_req_dma_buf_type dma_buf_type;
+ uint32_t in_nents;
+ uint32_t in_mlli_nents;
+ uint32_t out_nents;
+ uint32_t out_mlli_nents;
+ uint8_t *backup_info; /*store iv for generated IV flow*/
+ struct mlli_params mlli_params;
+};
+
+
+
+int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata);
+
+int ssi_ablkcipher_free(struct ssi_drvdata *drvdata);
+
+#ifndef CRYPTO_ALG_BULK_MASK
+
+#define CRYPTO_ALG_BULK_DU_512 0x00002000
+#define CRYPTO_ALG_BULK_DU_4096 0x00004000
+#define CRYPTO_ALG_BULK_MASK (CRYPTO_ALG_BULK_DU_512 |\
+ CRYPTO_ALG_BULK_DU_4096)
+#endif /* CRYPTO_ALG_BULK_MASK */
+
+
+#ifdef CRYPTO_TFM_REQ_HW_KEY
+
+static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+{
+ return (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_HW_KEY);
+}
+
+#else
+
+struct arm_hw_key_info {
+ int hw_key1;
+ int hw_key2;
+};
+
+static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+{
+ return 0;
+}
+
+#endif /* CRYPTO_TFM_REQ_HW_KEY */
+
+
+#endif /*__SSI_CIPHER_H__*/
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 95e27c2..1310ac5 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -23,6 +23,7 @@
#include <crypto/sha.h>
#include <crypto/authenc.h>
#include <crypto/scatterwalk.h>
+#include <crypto/internal/skcipher.h>

#include <linux/init.h>
#include <linux/moduleparam.h>
@@ -61,6 +62,7 @@
#include "ssi_request_mgr.h"
#include "ssi_buffer_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_cipher.h"
#include "ssi_hash.h"
#include "ssi_sram_mgr.h"
#include "ssi_pm.h"
@@ -219,6 +221,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+	/* Initialize the inflight counter used in dx_ablkcipher_secure_complete to count BYPASS block operations */
+ new_drvdata->inflight_counter = 0;
+
dev_set_drvdata(&plat_dev->dev, new_drvdata);
/* Get device resources */
/* First CC registers space */
@@ -343,6 +348,13 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+ /* Allocate crypto algs */
+ rc = ssi_ablkcipher_alloc(new_drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("ssi_ablkcipher_alloc failed\n");
+ goto init_cc_res_err;
+ }
+
rc = ssi_hash_alloc(new_drvdata);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("ssi_hash_alloc failed\n");
@@ -356,6 +368,7 @@ static int init_cc_resources(struct platform_device *plat_dev)

if (new_drvdata != NULL) {
ssi_hash_free(new_drvdata);
+ ssi_ablkcipher_free(new_drvdata);
ssi_power_mgr_fini(new_drvdata);
ssi_buffer_mgr_fini(new_drvdata);
request_mgr_fini(new_drvdata);
@@ -396,6 +409,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);

ssi_hash_free(drvdata);
+ ssi_ablkcipher_free(drvdata);
ssi_power_mgr_fini(drvdata);
ssi_buffer_mgr_fini(drvdata);
request_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 9aa5d30..baac9bf 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -29,6 +29,7 @@
#endif
#include <linux/dma-mapping.h>
#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
#include <crypto/aes.h>
#include <crypto/sha.h>
#include <crypto/authenc.h>
@@ -141,15 +142,44 @@ struct ssi_drvdata {
struct completion icache_setup_completion;
void *buff_mgr_handle;
void *hash_handle;
+ void *blkcipher_handle;
void *request_mgr_handle;
void *sram_mgr_handle;

#ifdef ENABLE_CYCLE_COUNT
cycles_t isr_exit_cycles; /* Save for isr-to-tasklet latency */
#endif
+ uint32_t inflight_counter;

};

+struct ssi_crypto_alg {
+ struct list_head entry;
+ int cipher_mode;
+ int flow_mode; /* Note: currently, refers to the cipher mode only. */
+ int auth_mode;
+ struct ssi_drvdata *drvdata;
+ struct crypto_alg crypto_alg;
+};
+
+struct ssi_alg_template {
+ char name[CRYPTO_MAX_ALG_NAME];
+ char driver_name[CRYPTO_MAX_ALG_NAME];
+ unsigned int blocksize;
+ u32 type;
+ union {
+ struct ablkcipher_alg ablkcipher;
+ struct blkcipher_alg blkcipher;
+ struct cipher_alg cipher;
+ struct compress_alg compress;
+ } template_u;
+ int cipher_mode;
+ int flow_mode; /* Note: currently, refers to the cipher mode only. */
+ int auth_mode;
+ bool synchronous;
+ struct ssi_drvdata *drvdata;
+};
+
struct async_gen_req_ctx {
dma_addr_t iv_dma_addr;
enum drv_crypto_direction op_type;
--
2.1.4

2017-04-20 13:13:01

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 7/9] staging: ccree: add TODO list

Add TODO list for moving out of staging tree for ccree crypto driver

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/TODO | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
create mode 100644 drivers/staging/ccree/TODO

diff --git a/drivers/staging/ccree/TODO b/drivers/staging/ccree/TODO
new file mode 100644
index 0000000..3f1d61d
--- /dev/null
+++ b/drivers/staging/ccree/TODO
@@ -0,0 +1,28 @@
+
+
+*************************************************************************
+* *
+* Arm TrustZone CryptoCell REE Linux driver upstreaming TODO items  *
+* *
+*************************************************************************
+
+ccree specific items
+a.k.a. stuff to fix for this driver to move out of staging
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Move to using Crypto Engine to handle backlog queueing.
+2. Remove synchronous algorithm support leftovers.
+3. Split platform-specific code for FIPS and power management into separate platform modules.
+4. Drop legacy kernel support code.
+5. Move most (all?) #ifdef CONFIG into inline functions.
+6. Remove all unused definitions.
+7. Refactor to accommodate newer/older HW revisions besides the 712.
+8. Handle the many checkpatch errors.
+9. Implement ahash import/export correctly.
+10. Go through a proper review of DT bindings and sysfs ABI.
+
+Kernel infrastructure items
+a.k.a. stuff we either need to fix in the kernel or understand what we're doing wrong
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. ahash import/export context has a PAGE_SIZE/8 size limit. We need more.
+2. Crypto Engine seems to be built for HW with a hardware queue depth of 1; we have 600+.
--
2.1.4

2017-04-20 13:13:00

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 6/9] staging: ccree: add FIPS support

Add FIPS mode support to CryptoCell driver

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/Kconfig | 9 +
drivers/staging/ccree/Makefile | 1 +
drivers/staging/ccree/ssi_aead.c | 6 +
drivers/staging/ccree/ssi_cipher.c | 52 +
drivers/staging/ccree/ssi_driver.c | 19 +-
drivers/staging/ccree/ssi_driver.h | 2 +
drivers/staging/ccree/ssi_fips.c | 65 ++
drivers/staging/ccree/ssi_fips.h | 70 ++
drivers/staging/ccree/ssi_fips_data.h | 315 ++++++
drivers/staging/ccree/ssi_fips_ext.c | 96 ++
drivers/staging/ccree/ssi_fips_ll.c | 1681 +++++++++++++++++++++++++++++++
drivers/staging/ccree/ssi_fips_local.c | 369 +++++++
drivers/staging/ccree/ssi_fips_local.h | 77 ++
drivers/staging/ccree/ssi_hash.c | 21 +-
drivers/staging/ccree/ssi_request_mgr.c | 2 +
15 files changed, 2783 insertions(+), 2 deletions(-)
create mode 100644 drivers/staging/ccree/ssi_fips.c
create mode 100644 drivers/staging/ccree/ssi_fips.h
create mode 100644 drivers/staging/ccree/ssi_fips_data.h
create mode 100644 drivers/staging/ccree/ssi_fips_ext.c
create mode 100644 drivers/staging/ccree/ssi_fips_ll.c
create mode 100644 drivers/staging/ccree/ssi_fips_local.c
create mode 100644 drivers/staging/ccree/ssi_fips_local.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 2d11223..ae62704 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -24,6 +24,15 @@ config CRYPTO_DEV_CCREE
cryptographic operations on the system REE.
If unsure say Y.

+config CCREE_FIPS_SUPPORT
+ bool "Turn on CryptoCell 7XX REE FIPS mode support"
+ depends on CRYPTO_DEV_CCREE
+ default n
+ help
+ Say 'Y' to enable support for FIPS compliant mode by the
+ CCREE driver.
+ If unsure say N.
+
config CCREE_DISABLE_COHERENT_DMA_OPS
bool "Disable Coherent DMA operations for the CCREE driver"
depends on CRYPTO_DEV_CCREE
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index b9285c0..44f3e3e 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-$(CCREE_FIPS_SUPPORT) += ssi_fips.o ssi_fips_ll.o ssi_fips_ext.o ssi_fips_local.o
diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 1d2890e..3ab958b 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -36,6 +36,7 @@
#include "ssi_hash.h"
#include "ssi_sysfs.h"
#include "ssi_sram_mgr.h"
+#include "ssi_fips_local.h"

#define template_aead template_u.aead

@@ -153,6 +154,8 @@ static int ssi_aead_init(struct crypto_aead *tfm)
container_of(alg, struct ssi_crypto_alg, aead_alg);
SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base)));

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
/* Initialize modes in instance */
ctx->cipher_mode = ssi_alg->cipher_mode;
ctx->flow_mode = ssi_alg->flow_mode;
@@ -572,6 +575,7 @@ ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n",
ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* STAT_PHASE_0: Init and sanity checks */
START_CYCLE_COUNT();

@@ -699,6 +703,7 @@ static int ssi_aead_setauthsize(
{
struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* Unsupported auth. sizes */
if ((authsize == 0) ||
(authsize >crypto_aead_maxauthsize(authenc))) {
@@ -2006,6 +2011,7 @@ static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction
SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
((direct==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), ctx, req, req->iv,
sg_virt(req->src), req->src->offset, sg_virt(req->dst), req->dst->offset, req->cryptlen);
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();

/* STAT_PHASE_0: Init and sanity checks */
START_CYCLE_COUNT();
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 2e4ce90..e8a4071 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -31,6 +31,7 @@
#include "ssi_cipher.h"
#include "ssi_request_mgr.h"
#include "ssi_sysfs.h"
+#include "ssi_fips_local.h"

#define MAX_ABLKCIPHER_SEQ_LEN 6

@@ -191,6 +192,7 @@ static int ssi_blkcipher_init(struct crypto_tfm *tfm)
SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx_p,
crypto_tfm_alg_name(tfm));

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
ctx_p->cipher_mode = ssi_alg->cipher_mode;
ctx_p->flow_mode = ssi_alg->flow_mode;
ctx_p->drvdata = ssi_alg->drvdata;
@@ -269,6 +271,37 @@ static const u8 zero_buff[] = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};

+/* The function verifies that TDES keys are not weak. */
+static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
+{
+#ifdef CCREE_FIPS_SUPPORT
+ tdes_keys_t *tdes_key = (tdes_keys_t*)key;
+
+ /* verify key1 != key2 and key3 != key2*/
+ if (unlikely( (memcmp((u8*)tdes_key->key1, (u8*)tdes_key->key2, sizeof(tdes_key->key1)) == 0) ||
+ (memcmp((u8*)tdes_key->key3, (u8*)tdes_key->key2, sizeof(tdes_key->key3)) == 0) )) {
+ return -ENOEXEC;
+ }
+#endif /* CCREE_FIPS_SUPPORT */
+
+ return 0;
+}
+
+/* The function verifies that xts keys are not weak.*/
+static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
+{
+#ifdef CCREE_FIPS_SUPPORT
+	/* A weak key is defined as a key whose first half (128/256 lsb) equals its second half (128/256 msb) */
+ int singleKeySize = keylen >> 1;
+
+ if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0)) {
+ return -ENOEXEC;
+ }
+#endif /* CCREE_FIPS_SUPPORT */
+
+ return 0;
+}
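
[By this definition, a 256-bit XTS key whose two 128-bit halves are identical is weak and is rejected with -ENOEXEC. An illustrative example, not a real key:]

	static const u8 weak_xts_key[32] = {
		0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
		0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,	/* first half */
		0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
		0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,	/* second half */
	};
	/* ssi_fips_verify_xts_keys(weak_xts_key, sizeof(weak_xts_key))
	 * returns -ENOEXEC because memcmp() of the two halves is 0. */
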
+
static enum HwCryptoKey hw_key_to_cc_hw_key(int slot_num)
{
switch (slot_num) {
@@ -298,6 +331,10 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
ctx_p, crypto_tfm_alg_name(tfm), keylen);
dump_byte_array("key", (uint8_t *)key, keylen);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: after FIPS check");
+
/* STAT_PHASE_0: Init and sanity checks */
START_CYCLE_COUNT();

@@ -359,6 +396,18 @@ static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
return -EINVAL;
}
}
+ if ((ctx_p->cipher_mode == DRV_CIPHER_XTS) &&
+ ssi_fips_verify_xts_keys(key, keylen) != 0) {
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak XTS key");
+ return -EINVAL;
+ }
+ if ((ctx_p->flow_mode == S_DIN_to_DES) &&
+ (keylen == DES3_EDE_KEY_SIZE) &&
+ ssi_fips_verify_3des_keys(key, keylen) != 0) {
+ SSI_LOG_DEBUG("ssi_blkcipher_setkey: weak 3DES key");
+ return -EINVAL;
+ }
+

END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);

@@ -744,6 +793,7 @@ static int ssi_blkcipher_process(
((direction==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"),
areq, info, nbytes);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
/* STAT_PHASE_0: Init and sanity checks */
START_CYCLE_COUNT();

@@ -864,6 +914,8 @@ static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req, void __io
struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);

+ CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR();
+
ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src, areq->info, ivsize, areq, cc_base);
}

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 42a00fc..1615f76 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -69,6 +69,7 @@
#include "ssi_ivgen.h"
#include "ssi_sram_mgr.h"
#include "ssi_pm.h"
+#include "ssi_fips_local.h"


#ifdef DX_DUMP_BYTES
@@ -142,7 +143,15 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
irr &= ~SSI_COMP_IRQ_MASK;
complete_request(drvdata);
}
-
+#ifdef CC_SUPPORT_FIPS
+ /* TEE FIPS interrupt */
+ if (likely((irr & SSI_GPR0_IRQ_MASK) != 0)) {
+ /* Mask interrupt - will be unmasked in Deferred service handler */
+ CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR), imr | SSI_GPR0_IRQ_MASK);
+ irr &= ~SSI_GPR0_IRQ_MASK;
+ fips_handler(drvdata);
+ }
+#endif
/* AXI error interrupt */
if (unlikely((irr & SSI_AXI_ERR_IRQ_MASK) != 0)) {
uint32_t axi_err;
@@ -351,6 +360,12 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+ rc = ssi_fips_init(new_drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("SSI_FIPS_INIT failed 0x%x\n", rc);
+ goto init_cc_res_err;
+ }
+
rc = ssi_ivgen_init(new_drvdata);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("ssi_ivgen_init failed\n");
@@ -391,6 +406,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
ssi_buffer_mgr_fini(new_drvdata);
request_mgr_fini(new_drvdata);
ssi_sram_mgr_fini(new_drvdata);
+ ssi_fips_fini(new_drvdata);
#ifdef ENABLE_CC_SYSFS
ssi_sysfs_fini();
#endif
@@ -434,6 +450,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
ssi_buffer_mgr_fini(drvdata);
request_mgr_fini(drvdata);
ssi_sram_mgr_fini(drvdata);
+ ssi_fips_fini(drvdata);
#ifdef ENABLE_CC_SYSFS
ssi_sysfs_fini();
#endif
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 1576a18..60aa8a8 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -54,6 +54,7 @@
#include "cc_crypto_ctx.h"
#include "ssi_sysfs.h"
#include "hash_defs.h"
+#include "ssi_fips_local.h"

#define DRV_MODULE_VERSION "3.0"

@@ -152,6 +153,7 @@ struct ssi_drvdata {
void *aead_handle;
void *blkcipher_handle;
void *request_mgr_handle;
+ void *fips_handle;
void *ivgen_handle;
void *sram_mgr_handle;

diff --git a/drivers/staging/ccree/ssi_fips.c b/drivers/staging/ccree/ssi_fips.c
new file mode 100644
index 0000000..d4e65e9
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips.c
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+
+/**************************************************************
+This file defines the driver FIPS APIs *
+***************************************************************/
+
+#include <linux/module.h>
+#include "ssi_fips.h"
+
+
+extern int ssi_fips_ext_get_state(ssi_fips_state_t *p_state);
+extern int ssi_fips_ext_get_error(ssi_fips_error_t *p_err);
+
+/*
+This function returns the REE FIPS state.
+It should be called by a kernel module.
+*/
+int ssi_fips_get_state(ssi_fips_state_t *p_state)
+{
+ int rc = 0;
+
+ if (p_state == NULL) {
+ return -EINVAL;
+ }
+
+ rc = ssi_fips_ext_get_state(p_state);
+
+ return rc;
+}
+
+EXPORT_SYMBOL(ssi_fips_get_state);
+
+/*
+This function returns the REE FIPS error.
+It should be called by a kernel module.
+*/
+int ssi_fips_get_error(ssi_fips_error_t *p_err)
+{
+ int rc = 0;
+
+ if (p_err == NULL) {
+ return -EINVAL;
+ }
+
+ rc = ssi_fips_ext_get_error(p_err);
+
+ return rc;
+}
+
+EXPORT_SYMBOL(ssi_fips_get_error);
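
[For reference, a minimal sketch of how another kernel module might consume the two exported entry points above; the reporting function itself is hypothetical:]

	#include <linux/kernel.h>
	#include "ssi_fips.h"

	static void report_ree_fips_status(void)
	{
		ssi_fips_state_t state;
		ssi_fips_error_t err;

		if (ssi_fips_get_state(&state) != 0)
			return;
		/* Only query the error code once the state says we are in error */
		if (state == CC_FIPS_STATE_ERROR && ssi_fips_get_error(&err) == 0)
			pr_err("ccree: REE FIPS error %d\n", err);
	}
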
diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
new file mode 100644
index 0000000..9c1fbf9
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __SSI_FIPS_H__
+#define __SSI_FIPS_H__
+
+
+#ifndef INT32_MAX /* Missing in Linux kernel */
+#define INT32_MAX 0x7FFFFFFFL
+#endif
+
+
+/*!
+@file
+@brief This file contains FIPS-related definitions and APIs.
+*/
+
+typedef enum ssi_fips_state {
+ CC_FIPS_STATE_NOT_SUPPORTED = 0,
+ CC_FIPS_STATE_SUPPORTED,
+ CC_FIPS_STATE_ERROR,
+ CC_FIPS_STATE_RESERVE32B = INT32_MAX
+} ssi_fips_state_t;
+
+
+typedef enum ssi_fips_error {
+ CC_REE_FIPS_ERROR_OK = 0,
+ CC_REE_FIPS_ERROR_GENERAL,
+ CC_REE_FIPS_ERROR_FROM_TEE,
+ CC_REE_FIPS_ERROR_AES_ECB_PUT,
+ CC_REE_FIPS_ERROR_AES_CBC_PUT,
+ CC_REE_FIPS_ERROR_AES_OFB_PUT,
+ CC_REE_FIPS_ERROR_AES_CTR_PUT,
+ CC_REE_FIPS_ERROR_AES_CBC_CTS_PUT,
+ CC_REE_FIPS_ERROR_AES_XTS_PUT,
+ CC_REE_FIPS_ERROR_AES_CMAC_PUT,
+ CC_REE_FIPS_ERROR_AESCCM_PUT,
+ CC_REE_FIPS_ERROR_AESGCM_PUT,
+ CC_REE_FIPS_ERROR_DES_ECB_PUT,
+ CC_REE_FIPS_ERROR_DES_CBC_PUT,
+ CC_REE_FIPS_ERROR_SHA1_PUT,
+ CC_REE_FIPS_ERROR_SHA256_PUT,
+ CC_REE_FIPS_ERROR_SHA512_PUT,
+ CC_REE_FIPS_ERROR_HMAC_SHA1_PUT,
+ CC_REE_FIPS_ERROR_HMAC_SHA256_PUT,
+ CC_REE_FIPS_ERROR_HMAC_SHA512_PUT,
+ CC_REE_FIPS_ERROR_ROM_CHECKSUM,
+ CC_REE_FIPS_ERROR_RESERVE32B = INT32_MAX
+} ssi_fips_error_t;
+
+
+
+int ssi_fips_get_state(ssi_fips_state_t *p_state);
+int ssi_fips_get_error(ssi_fips_error_t *p_err);
+
+#endif /*__SSI_FIPS_H__*/
+
diff --git a/drivers/staging/ccree/ssi_fips_data.h b/drivers/staging/ccree/ssi_fips_data.h
new file mode 100644
index 0000000..3590073
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_data.h
@@ -0,0 +1,315 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/*
+The test vectors were taken from:
+
+* AES
+NIST Special Publication 800-38A 2001 Edition
+Recommendation for Block Cipher Modes of Operation
+http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
+Appendix F: Example Vectors for Modes of Operation of the AES
+
+* AES CTS
+Advanced Encryption Standard (AES) Encryption for Kerberos 5
+February 2005
+https://tools.ietf.org/html/rfc3962#appendix-B
+B. Sample Test Vectors
+
+* AES XTS
+http://csrc.nist.gov/groups/STM/cavp/#08
+http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
+
+* AES CMAC
+http://csrc.nist.gov/groups/STM/cavp/index.html#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
+
+* AES-CCM
+http://csrc.nist.gov/groups/STM/cavp/#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
+
+* AES-GCM
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
+
+* Triple-DES
+NIST Special Publication 800-67 January 2012
+Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
+http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
+APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS
+and
+http://csrc.nist.gov/groups/STM/cavp/#01
+http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
+
+* HASH
+http://csrc.nist.gov/groups/STM/cavp/#03
+http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip
+
+* HMAC
+http://csrc.nist.gov/groups/STM/cavp/#07
+http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip
+
+*/
+
+/* NIST AES */
+#define AES_128_BIT_KEY_SIZE 16
+#define AES_192_BIT_KEY_SIZE 24
+#define AES_256_BIT_KEY_SIZE 32
+#define AES_512_BIT_KEY_SIZE 64
+
+#define NIST_AES_IV_SIZE 16
+
+#define NIST_AES_128_KEY { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6, 0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c }
+#define NIST_AES_192_KEY { 0x8e, 0x73, 0xb0, 0xf7, 0xda, 0x0e, 0x64, 0x52, 0xc8, 0x10, 0xf3, 0x2b, 0x80, 0x90, 0x79, 0xe5, \
+ 0x62, 0xf8, 0xea, 0xd2, 0x52, 0x2c, 0x6b, 0x7b }
+#define NIST_AES_256_KEY { 0x60, 0x3d, 0xeb, 0x10, 0x15, 0xca, 0x71, 0xbe, 0x2b, 0x73, 0xae, 0xf0, 0x85, 0x7d, 0x77, 0x81, \
+ 0x1f, 0x35, 0x2c, 0x07, 0x3b, 0x61, 0x08, 0xd7, 0x2d, 0x98, 0x10, 0xa3, 0x09, 0x14, 0xdf, 0xf4 }
+#define NIST_AES_VECTOR_SIZE 16
+#define NIST_AES_PLAIN_DATA { 0x6b, 0xc1, 0xbe, 0xe2, 0x2e, 0x40, 0x9f, 0x96, 0xe9, 0x3d, 0x7e, 0x11, 0x73, 0x93, 0x17, 0x2a }
+
+#define NIST_AES_ECB_IV { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+#define NIST_AES_128_ECB_CIPHER { 0x3a, 0xd7, 0x7b, 0xb4, 0x0d, 0x7a, 0x36, 0x60, 0xa8, 0x9e, 0xca, 0xf3, 0x24, 0x66, 0xef, 0x97 }
+#define NIST_AES_192_ECB_CIPHER { 0xbd, 0x33, 0x4f, 0x1d, 0x6e, 0x45, 0xf2, 0x5f, 0xf7, 0x12, 0xa2, 0x14, 0x57, 0x1f, 0xa5, 0xcc }
+#define NIST_AES_256_ECB_CIPHER { 0xf3, 0xee, 0xd1, 0xbd, 0xb5, 0xd2, 0xa0, 0x3c, 0x06, 0x4b, 0x5a, 0x7e, 0x3d, 0xb1, 0x81, 0xf8 }
+
+#define NIST_AES_CBC_IV { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f }
+#define NIST_AES_128_CBC_CIPHER { 0x76, 0x49, 0xab, 0xac, 0x81, 0x19, 0xb2, 0x46, 0xce, 0xe9, 0x8e, 0x9b, 0x12, 0xe9, 0x19, 0x7d }
+#define NIST_AES_192_CBC_CIPHER { 0x4f, 0x02, 0x1d, 0xb2, 0x43, 0xbc, 0x63, 0x3d, 0x71, 0x78, 0x18, 0x3a, 0x9f, 0xa0, 0x71, 0xe8 }
+#define NIST_AES_256_CBC_CIPHER { 0xf5, 0x8c, 0x4c, 0x04, 0xd6, 0xe5, 0xf1, 0xba, 0x77, 0x9e, 0xab, 0xfb, 0x5f, 0x7b, 0xfb, 0xd6 }
+
+#define NIST_AES_OFB_IV { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f }
+#define NIST_AES_128_OFB_CIPHER { 0x3b, 0x3f, 0xd9, 0x2e, 0xb7, 0x2d, 0xad, 0x20, 0x33, 0x34, 0x49, 0xf8, 0xe8, 0x3c, 0xfb, 0x4a }
+#define NIST_AES_192_OFB_CIPHER { 0xcd, 0xc8, 0x0d, 0x6f, 0xdd, 0xf1, 0x8c, 0xab, 0x34, 0xc2, 0x59, 0x09, 0xc9, 0x9a, 0x41, 0x74 }
+#define NIST_AES_256_OFB_CIPHER { 0xdc, 0x7e, 0x84, 0xbf, 0xda, 0x79, 0x16, 0x4b, 0x7e, 0xcd, 0x84, 0x86, 0x98, 0x5d, 0x38, 0x60 }
+
+#define NIST_AES_CTR_IV { 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff }
+#define NIST_AES_128_CTR_CIPHER { 0x87, 0x4d, 0x61, 0x91, 0xb6, 0x20, 0xe3, 0x26, 0x1b, 0xef, 0x68, 0x64, 0x99, 0x0d, 0xb6, 0xce }
+#define NIST_AES_192_CTR_CIPHER { 0x1a, 0xbc, 0x93, 0x24, 0x17, 0x52, 0x1c, 0xa2, 0x4f, 0x2b, 0x04, 0x59, 0xfe, 0x7e, 0x6e, 0x0b }
+#define NIST_AES_256_CTR_CIPHER { 0x60, 0x1e, 0xc3, 0x13, 0x77, 0x57, 0x89, 0xa5, 0xb7, 0xa7, 0xf5, 0x04, 0xbb, 0xf3, 0xd2, 0x28 }
+
+
+#define RFC3962_AES_128_KEY { 0x63, 0x68, 0x69, 0x63, 0x6b, 0x65, 0x6e, 0x20, 0x74, 0x65, 0x72, 0x69, 0x79, 0x61, 0x6b, 0x69 }
+#define RFC3962_AES_VECTOR_SIZE 17
+#define RFC3962_AES_PLAIN_DATA { 0x49, 0x20, 0x77, 0x6f, 0x75, 0x6c, 0x64, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x74, 0x68, 0x65, 0x20 }
+#define RFC3962_AES_CBC_CTS_IV { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+#define RFC3962_AES_128_CBC_CTS_CIPHER { 0xc6, 0x35, 0x35, 0x68, 0xf2, 0xbf, 0x8c, 0xb4, 0xd8, 0xa5, 0x80, 0x36, 0x2d, 0xa7, 0xff, 0x7f, 0x97 }
+
+
+#define NIST_AES_256_XTS_KEY { 0xa1, 0xb9, 0x0c, 0xba, 0x3f, 0x06, 0xac, 0x35, 0x3b, 0x2c, 0x34, 0x38, 0x76, 0x08, 0x17, 0x62, \
+ 0x09, 0x09, 0x23, 0x02, 0x6e, 0x91, 0x77, 0x18, 0x15, 0xf2, 0x9d, 0xab, 0x01, 0x93, 0x2f, 0x2f }
+#define NIST_AES_256_XTS_IV { 0x4f, 0xae, 0xf7, 0x11, 0x7c, 0xda, 0x59, 0xc6, 0x6e, 0x4b, 0x92, 0x01, 0x3e, 0x76, 0x8a, 0xd5 }
+#define NIST_AES_256_XTS_VECTOR_SIZE 16
+#define NIST_AES_256_XTS_PLAIN { 0xeb, 0xab, 0xce, 0x95, 0xb1, 0x4d, 0x3c, 0x8d, 0x6f, 0xb3, 0x50, 0x39, 0x07, 0x90, 0x31, 0x1c }
+#define NIST_AES_256_XTS_CIPHER { 0x77, 0x8a, 0xe8, 0xb4, 0x3c, 0xb9, 0x8d, 0x5a, 0x82, 0x50, 0x81, 0xd5, 0xbe, 0x47, 0x1c, 0x63 }
+
+#define NIST_AES_512_XTS_KEY { 0x1e, 0xa6, 0x61, 0xc5, 0x8d, 0x94, 0x3a, 0x0e, 0x48, 0x01, 0xe4, 0x2f, 0x4b, 0x09, 0x47, 0x14, \
+ 0x9e, 0x7f, 0x9f, 0x8e, 0x3e, 0x68, 0xd0, 0xc7, 0x50, 0x52, 0x10, 0xbd, 0x31, 0x1a, 0x0e, 0x7c, \
+ 0xd6, 0xe1, 0x3f, 0xfd, 0xf2, 0x41, 0x8d, 0x8d, 0x19, 0x11, 0xc0, 0x04, 0xcd, 0xa5, 0x8d, 0xa3, \
+ 0xd6, 0x19, 0xb7, 0xe2, 0xb9, 0x14, 0x1e, 0x58, 0x31, 0x8e, 0xea, 0x39, 0x2c, 0xf4, 0x1b, 0x08 }
+#define NIST_AES_512_XTS_IV { 0xad, 0xf8, 0xd9, 0x26, 0x27, 0x46, 0x4a, 0xd2, 0xf0, 0x42, 0x8e, 0x84, 0xa9, 0xf8, 0x75, 0x64, }
+#define NIST_AES_512_XTS_VECTOR_SIZE 32
+#define NIST_AES_512_XTS_PLAIN { 0x2e, 0xed, 0xea, 0x52, 0xcd, 0x82, 0x15, 0xe1, 0xac, 0xc6, 0x47, 0xe8, 0x10, 0xbb, 0xc3, 0x64, \
+ 0x2e, 0x87, 0x28, 0x7f, 0x8d, 0x2e, 0x57, 0xe3, 0x6c, 0x0a, 0x24, 0xfb, 0xc1, 0x2a, 0x20, 0x2e }
+#define NIST_AES_512_XTS_CIPHER { 0xcb, 0xaa, 0xd0, 0xe2, 0xf6, 0xce, 0xa3, 0xf5, 0x0b, 0x37, 0xf9, 0x34, 0xd4, 0x6a, 0x9b, 0x13, \
+ 0x0b, 0x9d, 0x54, 0xf0, 0x7e, 0x34, 0xf3, 0x6a, 0xf7, 0x93, 0xe8, 0x6f, 0x73, 0xc6, 0xd7, 0xdb }
+
+
+/* NIST AES-CMAC */
+#define NIST_AES_128_CMAC_KEY { 0x67, 0x08, 0xc9, 0x88, 0x7b, 0x84, 0x70, 0x84, 0xf1, 0x23, 0xd3, 0xdd, 0x9c, 0x3a, 0x81, 0x36 }
+#define NIST_AES_128_CMAC_PLAIN_DATA { 0xa8, 0xde, 0x55, 0x17, 0x0c, 0x6d, 0xc0, 0xd8, 0x0d, 0xe3, 0x2f, 0x50, 0x8b, 0xf4, 0x9b, 0x70 }
+#define NIST_AES_128_CMAC_MAC { 0xcf, 0xef, 0x9b, 0x78, 0x39, 0x84, 0x1f, 0xdb, 0xcc, 0xbb, 0x6c, 0x2c, 0xf2, 0x38, 0xf7 }
+#define NIST_AES_128_CMAC_VECTOR_SIZE 16
+#define NIST_AES_128_CMAC_OUTPUT_SIZE 15
+
+#define NIST_AES_192_CMAC_KEY { 0x20, 0x51, 0xaf, 0x34, 0x76, 0x2e, 0xbe, 0x55, 0x6f, 0x72, 0xa5, 0xc6, 0xed, 0xc7, 0x77, 0x1e, \
+ 0xb9, 0x24, 0x5f, 0xad, 0x76, 0xf0, 0x34, 0xbe }
+#define NIST_AES_192_CMAC_PLAIN_DATA { 0xae, 0x8e, 0x93, 0xc9, 0xc9, 0x91, 0xcf, 0x89, 0x6a, 0x49, 0x1a, 0x89, 0x07, 0xdf, 0x4e, 0x4b, \
+ 0xe5, 0x18, 0x6a, 0xe4, 0x96, 0xcd, 0x34, 0x0d, 0xc1, 0x9b, 0x23, 0x78, 0x21, 0xdb, 0x7b, 0x60 }
+#define NIST_AES_192_CMAC_MAC { 0x74, 0xf7, 0x46, 0x08, 0xc0, 0x4f, 0x0f, 0x4e, 0x47, 0xfa, 0x64, 0x04, 0x33, 0xb6, 0xe6, 0xfb }
+#define NIST_AES_192_CMAC_VECTOR_SIZE 32
+#define NIST_AES_192_CMAC_OUTPUT_SIZE 16
+
+#define NIST_AES_256_CMAC_KEY { 0x3a, 0x75, 0xa9, 0xd2, 0xbd, 0xb8, 0xc8, 0x04, 0xba, 0x4a, 0xb4, 0x98, 0x35, 0x73, 0xa6, 0xb2, \
+ 0x53, 0x16, 0x0d, 0xd9, 0x0f, 0x8e, 0xdd, 0xfb, 0x2f, 0xdc, 0x2a, 0xb1, 0x76, 0x04, 0xf5, 0xc5 }
+#define NIST_AES_256_CMAC_PLAIN_DATA { 0x42, 0xf3, 0x5d, 0x5a, 0xa5, 0x33, 0xa7, 0xa0, 0xa5, 0xf7, 0x4e, 0x14, 0x4f, 0x2a, 0x5f, 0x20 }
+#define NIST_AES_256_CMAC_MAC { 0xf1, 0x53, 0x2f, 0x87, 0x32, 0xd9, 0xf5, 0x90, 0x30, 0x07 }
+#define NIST_AES_256_CMAC_VECTOR_SIZE 16
+#define NIST_AES_256_CMAC_OUTPUT_SIZE 10
+
+
+/* NIST TDES */
+#define TDES_NUM_OF_KEYS 3
+#define NIST_TDES_VECTOR_SIZE 8
+#define NIST_TDES_IV_SIZE 8
+
+#define NIST_TDES_ECB_IV { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+
+#define NIST_TDES_ECB3_KEY { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, \
+ 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0x01, \
+ 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0x01, 0x23 }
+#define NIST_TDES_ECB3_PLAIN_DATA { 0x54, 0x68, 0x65, 0x20, 0x71, 0x75, 0x66, 0x63 }
+#define NIST_TDES_ECB3_CIPHER { 0xa8, 0x26, 0xfd, 0x8c, 0xe5, 0x3b, 0x85, 0x5f }
+
+#define NIST_TDES_CBC3_IV { 0xf8, 0xee, 0xe1, 0x35, 0x9c, 0x6e, 0x54, 0x40 }
+#define NIST_TDES_CBC3_KEY { 0xe9, 0xda, 0x37, 0xf8, 0xdc, 0x97, 0x6d, 0x5b, \
+ 0xb6, 0x8c, 0x04, 0xe3, 0xec, 0x98, 0x20, 0x15, \
+ 0xf4, 0x0e, 0x08, 0xb5, 0x97, 0x29, 0xf2, 0x8f }
+#define NIST_TDES_CBC3_PLAIN_DATA { 0x3b, 0xb7, 0xa7, 0xdb, 0xa3, 0xd5, 0x92, 0x91 }
+#define NIST_TDES_CBC3_CIPHER { 0x5b, 0x84, 0x24, 0xd2, 0x39, 0x3e, 0x55, 0xa2 }
+
+
+/* NIST AES-CCM */
+#define NIST_AESCCM_128_BIT_KEY_SIZE 16
+#define NIST_AESCCM_192_BIT_KEY_SIZE 24
+#define NIST_AESCCM_256_BIT_KEY_SIZE 32
+
+#define NIST_AESCCM_B0_VAL 0x79 /* L'[0:2]=1 , M'[3-5]=7 , Adata[6]=1, reserved[7]=0 */
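+/* Worked out per RFC 3610: a 13-byte nonce gives L = 15 - 13 = 2, so
+ * L' = L - 1 = 1; a 16-byte tag gives M' = (16 - 2) / 2 = 7; Adata = 1
+ * since associated data is present. The flags byte is therefore
+ * (Adata << 6) | (M' << 3) | L' = 0x40 | 0x38 | 0x01 = 0x79.
+ */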
+#define NIST_AESCCM_NONCE_SIZE 13
+#define NIST_AESCCM_IV_SIZE 16
+#define NIST_AESCCM_ADATA_SIZE 32
+#define NIST_AESCCM_TEXT_SIZE 16
+#define NIST_AESCCM_TAG_SIZE 16
+
+#define NIST_AESCCM_128_KEY { 0x70, 0x01, 0x0e, 0xd9, 0x0e, 0x61, 0x86, 0xec, 0xad, 0x41, 0xf0, 0xd3, 0xc7, 0xc4, 0x2f, 0xf8 }
+#define NIST_AESCCM_128_NONCE { 0xa5, 0xf4, 0xf4, 0x98, 0x6e, 0x98, 0x47, 0x29, 0x65, 0xf5, 0xab, 0xcc, 0x4b }
+#define NIST_AESCCM_128_ADATA { 0x3f, 0xec, 0x0e, 0x5c, 0xc2, 0x4d, 0x67, 0x13, 0x94, 0x37, 0xcb, 0xc8, 0x11, 0x24, 0x14, 0xfc, \
+ 0x8d, 0xac, 0xcd, 0x1a, 0x94, 0xb4, 0x9a, 0x4c, 0x76, 0xe2, 0xd3, 0x93, 0x03, 0x54, 0x73, 0x17 }
+#define NIST_AESCCM_128_PLAIN_TEXT { 0xbe, 0x32, 0x2f, 0x58, 0xef, 0xa7, 0xf8, 0xc6, 0x8a, 0x63, 0x5e, 0x0b, 0x9c, 0xce, 0x77, 0xf2 }
+#define NIST_AESCCM_128_CIPHER { 0x8e, 0x44, 0x25, 0xae, 0x57, 0x39, 0x74, 0xf0, 0xf0, 0x69, 0x3a, 0x18, 0x8b, 0x52, 0x58, 0x12 }
+#define NIST_AESCCM_128_MAC { 0xee, 0xf0, 0x8e, 0x3f, 0xb1, 0x5f, 0x42, 0x27, 0xe0, 0xd9, 0x89, 0xa4, 0xd5, 0x87, 0xa8, 0xcf }
+
+#define NIST_AESCCM_192_KEY { 0x68, 0x73, 0xf1, 0xc6, 0xc3, 0x09, 0x75, 0xaf, 0xf6, 0xf0, 0x84, 0x70, 0x26, 0x43, 0x21, 0x13, \
+ 0x0a, 0x6e, 0x59, 0x84, 0xad, 0xe3, 0x24, 0xe9 }
+#define NIST_AESCCM_192_NONCE { 0x7c, 0x4d, 0x2f, 0x7c, 0xec, 0x04, 0x36, 0x1f, 0x18, 0x7f, 0x07, 0x26, 0xd5 }
+#define NIST_AESCCM_192_ADATA { 0x77, 0x74, 0x3b, 0x5d, 0x83, 0xa0, 0x0d, 0x2c, 0x8d, 0x5f, 0x7e, 0x10, 0x78, 0x15, 0x31, 0xb4, \
+ 0x96, 0xe0, 0x9f, 0x3b, 0xc9, 0x29, 0x5d, 0x7a, 0xe9, 0x79, 0x9e, 0x64, 0x66, 0x8e, 0xf8, 0xc5 }
+#define NIST_AESCCM_192_PLAIN_TEXT { 0x50, 0x51, 0xa0, 0xb0, 0xb6, 0x76, 0x6c, 0xd6, 0xea, 0x29, 0xa6, 0x72, 0x76, 0x9d, 0x40, 0xfe }
+#define NIST_AESCCM_192_CIPHER { 0x0c, 0xe5, 0xac, 0x8d, 0x6b, 0x25, 0x6f, 0xb7, 0x58, 0x0b, 0xf6, 0xac, 0xc7, 0x64, 0x26, 0xaf }
+#define NIST_AESCCM_192_MAC { 0x40, 0xbc, 0xe5, 0x8f, 0xd4, 0xcd, 0x65, 0x48, 0xdf, 0x90, 0xa0, 0x33, 0x7c, 0x84, 0x20, 0x04 }
+
+#define NIST_AESCCM_256_KEY { 0xee, 0x8c, 0xe1, 0x87, 0x16, 0x97, 0x79, 0xd1, 0x3e, 0x44, 0x3d, 0x64, 0x28, 0xe3, 0x8b, 0x38, \
+ 0xb5, 0x5d, 0xfb, 0x90, 0xf0, 0x22, 0x8a, 0x8a, 0x4e, 0x62, 0xf8, 0xf5, 0x35, 0x80, 0x6e, 0x62 }
+#define NIST_AESCCM_256_NONCE { 0x12, 0x16, 0x42, 0xc4, 0x21, 0x8b, 0x39, 0x1c, 0x98, 0xe6, 0x26, 0x9c, 0x8a }
+#define NIST_AESCCM_256_ADATA { 0x71, 0x8d, 0x13, 0xe4, 0x75, 0x22, 0xac, 0x4c, 0xdf, 0x3f, 0x82, 0x80, 0x63, 0x98, 0x0b, 0x6d, \
+ 0x45, 0x2f, 0xcd, 0xcd, 0x6e, 0x1a, 0x19, 0x04, 0xbf, 0x87, 0xf5, 0x48, 0xa5, 0xfd, 0x5a, 0x05 }
+#define NIST_AESCCM_256_PLAIN_TEXT { 0xd1, 0x5f, 0x98, 0xf2, 0xc6, 0xd6, 0x70, 0xf5, 0x5c, 0x78, 0xa0, 0x66, 0x48, 0x33, 0x2b, 0xc9 }
+#define NIST_AESCCM_256_CIPHER { 0xcc, 0x17, 0xbf, 0x87, 0x94, 0xc8, 0x43, 0x45, 0x7d, 0x89, 0x93, 0x91, 0x89, 0x8e, 0xd2, 0x2a }
+#define NIST_AESCCM_256_MAC { 0x6f, 0x9d, 0x28, 0xfc, 0xb6, 0x42, 0x34, 0xe1, 0xcd, 0x79, 0x3c, 0x41, 0x44, 0xf1, 0xda, 0x50 }
+
+
+/* NIST AES-GCM */
+#define NIST_AESGCM_128_BIT_KEY_SIZE 16
+#define NIST_AESGCM_192_BIT_KEY_SIZE 24
+#define NIST_AESGCM_256_BIT_KEY_SIZE 32
+
+#define NIST_AESGCM_IV_SIZE 12
+#define NIST_AESGCM_ADATA_SIZE 16
+#define NIST_AESGCM_TEXT_SIZE 16
+#define NIST_AESGCM_TAG_SIZE 16
+
+#define NIST_AESGCM_128_KEY { 0x81, 0x6e, 0x39, 0x07, 0x04, 0x10, 0xcf, 0x21, 0x84, 0x90, 0x4d, 0xa0, 0x3e, 0xa5, 0x07, 0x5a }
+#define NIST_AESGCM_128_IV { 0x32, 0xc3, 0x67, 0xa3, 0x36, 0x26, 0x13, 0xb2, 0x7f, 0xc3, 0xe6, 0x7e }
+#define NIST_AESGCM_128_ADATA { 0xf2, 0xa3, 0x07, 0x28, 0xed, 0x87, 0x4e, 0xe0, 0x29, 0x83, 0xc2, 0x94, 0x43, 0x5d, 0x3c, 0x16 }
+#define NIST_AESGCM_128_PLAIN_TEXT { 0xec, 0xaf, 0xe9, 0x6c, 0x67, 0xa1, 0x64, 0x67, 0x44, 0xf1, 0xc8, 0x91, 0xf5, 0xe6, 0x94, 0x27 }
+#define NIST_AESGCM_128_CIPHER { 0x55, 0x2e, 0xbe, 0x01, 0x2e, 0x7b, 0xcf, 0x90, 0xfc, 0xef, 0x71, 0x2f, 0x83, 0x44, 0xe8, 0xf1 }
+#define NIST_AESGCM_128_MAC { 0xec, 0xaa, 0xe9, 0xfc, 0x68, 0x27, 0x6a, 0x45, 0xab, 0x0c, 0xa3, 0xcb, 0x9d, 0xd9, 0x53, 0x9f }
+
+#define NIST_AESGCM_192_KEY { 0x0c, 0x44, 0xd6, 0xc9, 0x28, 0xee, 0x11, 0x2c, 0xe6, 0x65, 0xfe, 0x54, 0x7e, 0xbd, 0x38, 0x72, \
+ 0x98, 0xa9, 0x54, 0xb4, 0x62, 0xf6, 0x95, 0xd8 }
+#define NIST_AESGCM_192_IV { 0x18, 0xb8, 0xf3, 0x20, 0xfe, 0xf4, 0xae, 0x8c, 0xcb, 0xe8, 0xf9, 0x52 }
+#define NIST_AESGCM_192_ADATA { 0x73, 0x41, 0xd4, 0x3f, 0x98, 0xcf, 0x38, 0x82, 0x21, 0x18, 0x09, 0x41, 0x97, 0x03, 0x76, 0xe8 }
+#define NIST_AESGCM_192_PLAIN_TEXT { 0x96, 0xad, 0x07, 0xf9, 0xb6, 0x28, 0xb6, 0x52, 0xcf, 0x86, 0xcb, 0x73, 0x17, 0x88, 0x6f, 0x51 }
+#define NIST_AESGCM_192_CIPHER { 0xa6, 0x64, 0x07, 0x81, 0x33, 0x40, 0x5e, 0xb9, 0x09, 0x4d, 0x36, 0xf7, 0xe0, 0x70, 0x19, 0x1f }
+#define NIST_AESGCM_192_MAC { 0xe8, 0xf9, 0xc3, 0x17, 0x84, 0x7c, 0xe3, 0xf3, 0xc2, 0x39, 0x94, 0xa4, 0x02, 0xf0, 0x65, 0x81 }
+
+#define NIST_AESGCM_256_KEY { 0x54, 0xe3, 0x52, 0xea, 0x1d, 0x84, 0xbf, 0xe6, 0x4a, 0x10, 0x11, 0x09, 0x61, 0x11, 0xfb, 0xe7, \
+ 0x66, 0x8a, 0xd2, 0x20, 0x3d, 0x90, 0x2a, 0x01, 0x45, 0x8c, 0x3b, 0xbd, 0x85, 0xbf, 0xce, 0x14 }
+#define NIST_AESGCM_256_IV { 0xdf, 0x7c, 0x3b, 0xca, 0x00, 0x39, 0x6d, 0x0c, 0x01, 0x84, 0x95, 0xd9 }
+#define NIST_AESGCM_256_ADATA { 0x7e, 0x96, 0x8d, 0x71, 0xb5, 0x0c, 0x1f, 0x11, 0xfd, 0x00, 0x1f, 0x3f, 0xef, 0x49, 0xd0, 0x45 }
+#define NIST_AESGCM_256_PLAIN_TEXT { 0x85, 0xfc, 0x3d, 0xfa, 0xd9, 0xb5, 0xa8, 0xd3, 0x25, 0x8e, 0x4f, 0xc4, 0x45, 0x71, 0xbd, 0x3b }
+#define NIST_AESGCM_256_CIPHER { 0x42, 0x6e, 0x0e, 0xfc, 0x69, 0x3b, 0x7b, 0xe1, 0xf3, 0x01, 0x8d, 0xb7, 0xdd, 0xbb, 0x7e, 0x4d }
+#define NIST_AESGCM_256_MAC { 0xee, 0x82, 0x57, 0x79, 0x5b, 0xe6, 0xa1, 0x16, 0x4d, 0x7e, 0x1d, 0x2d, 0x6c, 0xac, 0x77, 0xa7 }
+
+
+/* NIST HASH */
+#define NIST_SHA_MSG_SIZE 16
+
+#define NIST_SHA_1_MSG { 0x35, 0x52, 0x69, 0x4c, 0xdf, 0x66, 0x3f, 0xd9, 0x4b, 0x22, 0x47, 0x47, 0xac, 0x40, 0x6a, 0xaf }
+#define NIST_SHA_1_MD { 0xa1, 0x50, 0xde, 0x92, 0x74, 0x54, 0x20, 0x2d, 0x94, 0xe6, 0x56, 0xde, 0x4c, 0x7c, 0x0c, 0xa6, \
+ 0x91, 0xde, 0x95, 0x5d }
+
+#define NIST_SHA_256_MSG { 0x0a, 0x27, 0x84, 0x7c, 0xdc, 0x98, 0xbd, 0x6f, 0x62, 0x22, 0x0b, 0x04, 0x6e, 0xdd, 0x76, 0x2b }
+#define NIST_SHA_256_MD { 0x80, 0xc2, 0x5e, 0xc1, 0x60, 0x05, 0x87, 0xe7, 0xf2, 0x8b, 0x18, 0xb1, 0xb1, 0x8e, 0x3c, 0xdc, \
+ 0x89, 0x92, 0x8e, 0x39, 0xca, 0xb3, 0xbc, 0x25, 0xe4, 0xd4, 0xa4, 0xc1, 0x39, 0xbc, 0xed, 0xc4 }
+
+#define NIST_SHA_512_MSG { 0xcd, 0x67, 0xbd, 0x40, 0x54, 0xaa, 0xa3, 0xba, 0xa0, 0xdb, 0x17, 0x8c, 0xe2, 0x32, 0xfd, 0x5a }
+#define NIST_SHA_512_MD { 0x0d, 0x85, 0x21, 0xf8, 0xf2, 0xf3, 0x90, 0x03, 0x32, 0xd1, 0xa1, 0xa5, 0x5c, 0x60, 0xba, 0x81, \
+ 0xd0, 0x4d, 0x28, 0xdf, 0xe8, 0xc5, 0x04, 0xb6, 0x32, 0x8a, 0xe7, 0x87, 0x92, 0x5f, 0xe0, 0x18, \
+ 0x8f, 0x2b, 0xa9, 0x1c, 0x3a, 0x9f, 0x0c, 0x16, 0x53, 0xc4, 0xbf, 0x0a, 0xda, 0x35, 0x64, 0x55, \
+ 0xea, 0x36, 0xfd, 0x31, 0xf8, 0xe7, 0x3e, 0x39, 0x51, 0xca, 0xd4, 0xeb, 0xba, 0x8c, 0x6e, 0x04 }
+
+
+/* NIST HMAC */
+#define NIST_HMAC_MSG_SIZE 128
+
+#define NIST_HMAC_SHA1_KEY_SIZE 10
+#define NIST_HMAC_SHA1_KEY { 0x59, 0x78, 0x59, 0x28, 0xd7, 0x25, 0x16, 0xe3, 0x12, 0x72 }
+#define NIST_HMAC_SHA1_MSG { 0xa3, 0xce, 0x88, 0x99, 0xdf, 0x10, 0x22, 0xe8, 0xd2, 0xd5, 0x39, 0xb4, 0x7b, 0xf0, 0xe3, 0x09, \
+ 0xc6, 0x6f, 0x84, 0x09, 0x5e, 0x21, 0x43, 0x8e, 0xc3, 0x55, 0xbf, 0x11, 0x9c, 0xe5, 0xfd, 0xcb, \
+ 0x4e, 0x73, 0xa6, 0x19, 0xcd, 0xf3, 0x6f, 0x25, 0xb3, 0x69, 0xd8, 0xc3, 0x8f, 0xf4, 0x19, 0x99, \
+ 0x7f, 0x0c, 0x59, 0x83, 0x01, 0x08, 0x22, 0x36, 0x06, 0xe3, 0x12, 0x23, 0x48, 0x3f, 0xd3, 0x9e, \
+ 0xde, 0xaa, 0x4d, 0x3f, 0x0d, 0x21, 0x19, 0x88, 0x62, 0xd2, 0x39, 0xc9, 0xfd, 0x26, 0x07, 0x41, \
+ 0x30, 0xff, 0x6c, 0x86, 0x49, 0x3f, 0x52, 0x27, 0xab, 0x89, 0x5c, 0x8f, 0x24, 0x4b, 0xd4, 0x2c, \
+ 0x7a, 0xfc, 0xe5, 0xd1, 0x47, 0xa2, 0x0a, 0x59, 0x07, 0x98, 0xc6, 0x8e, 0x70, 0x8e, 0x96, 0x49, \
+ 0x02, 0xd1, 0x24, 0xda, 0xde, 0xcd, 0xbd, 0xa9, 0xdb, 0xd0, 0x05, 0x1e, 0xd7, 0x10, 0xe9, 0xbf }
+#define NIST_HMAC_SHA1_MD { 0x3c, 0x81, 0x62, 0x58, 0x9a, 0xaf, 0xae, 0xe0, 0x24, 0xfc, 0x9a, 0x5c, 0xa5, 0x0d, 0xd2, 0x33, \
+ 0x6f, 0xe3, 0xeb, 0x28 }
+
+#define NIST_HMAC_SHA256_KEY_SIZE 40
+#define NIST_HMAC_SHA256_KEY { 0x97, 0x79, 0xd9, 0x12, 0x06, 0x42, 0x79, 0x7f, 0x17, 0x47, 0x02, 0x5d, 0x5b, 0x22, 0xb7, 0xac, \
+ 0x60, 0x7c, 0xab, 0x08, 0xe1, 0x75, 0x8f, 0x2f, 0x3a, 0x46, 0xc8, 0xbe, 0x1e, 0x25, 0xc5, 0x3b, \
+ 0x8c, 0x6a, 0x8f, 0x58, 0xff, 0xef, 0xa1, 0x76 }
+#define NIST_HMAC_SHA256_MSG { 0xb1, 0x68, 0x9c, 0x25, 0x91, 0xea, 0xf3, 0xc9, 0xe6, 0x60, 0x70, 0xf8, 0xa7, 0x79, 0x54, 0xff, \
+ 0xb8, 0x17, 0x49, 0xf1, 0xb0, 0x03, 0x46, 0xf9, 0xdf, 0xe0, 0xb2, 0xee, 0x90, 0x5d, 0xcc, 0x28, \
+ 0x8b, 0xaf, 0x4a, 0x92, 0xde, 0x3f, 0x40, 0x01, 0xdd, 0x9f, 0x44, 0xc4, 0x68, 0xc3, 0xd0, 0x7d, \
+ 0x6c, 0x6e, 0xe8, 0x2f, 0xac, 0xea, 0xfc, 0x97, 0xc2, 0xfc, 0x0f, 0xc0, 0x60, 0x17, 0x19, 0xd2, \
+ 0xdc, 0xd0, 0xaa, 0x2a, 0xec, 0x92, 0xd1, 0xb0, 0xae, 0x93, 0x3c, 0x65, 0xeb, 0x06, 0xa0, 0x3c, \
+ 0x9c, 0x93, 0x5c, 0x2b, 0xad, 0x04, 0x59, 0x81, 0x02, 0x41, 0x34, 0x7a, 0xb8, 0x7e, 0x9f, 0x11, \
+ 0xad, 0xb3, 0x04, 0x15, 0x42, 0x4c, 0x6c, 0x7f, 0x5f, 0x22, 0xa0, 0x03, 0xb8, 0xab, 0x8d, 0xe5, \
+ 0x4f, 0x6d, 0xed, 0x0e, 0x3a, 0xb9, 0x24, 0x5f, 0xa7, 0x95, 0x68, 0x45, 0x1d, 0xfa, 0x25, 0x8e }
+#define NIST_HMAC_SHA256_MD { 0x76, 0x9f, 0x00, 0xd3, 0xe6, 0xa6, 0xcc, 0x1f, 0xb4, 0x26, 0xa1, 0x4a, 0x4f, 0x76, 0xc6, 0x46, \
+ 0x2e, 0x61, 0x49, 0x72, 0x6e, 0x0d, 0xee, 0x0e, 0xc0, 0xcf, 0x97, 0xa1, 0x66, 0x05, 0xac, 0x8b }
+
+#define NIST_HMAC_SHA512_KEY_SIZE 100
+#define NIST_HMAC_SHA512_KEY { 0x57, 0xc2, 0xeb, 0x67, 0x7b, 0x50, 0x93, 0xb9, 0xe8, 0x29, 0xea, 0x4b, 0xab, 0xb5, 0x0b, 0xde, \
+ 0x55, 0xd0, 0xad, 0x59, 0xfe, 0xc3, 0x4a, 0x61, 0x89, 0x73, 0x80, 0x2b, 0x2a, 0xd9, 0xb7, 0x8e, \
+ 0x26, 0xb2, 0x04, 0x5d, 0xda, 0x78, 0x4d, 0xf3, 0xff, 0x90, 0xae, 0x0f, 0x2c, 0xc5, 0x1c, 0xe3, \
+ 0x9c, 0xf5, 0x48, 0x67, 0x32, 0x0a, 0xc6, 0xf3, 0xba, 0x2c, 0x6f, 0x0d, 0x72, 0x36, 0x04, 0x80, \
+ 0xc9, 0x66, 0x14, 0xae, 0x66, 0x58, 0x1f, 0x26, 0x6c, 0x35, 0xfb, 0x79, 0xfd, 0x28, 0x77, 0x4a, \
+ 0xfd, 0x11, 0x3f, 0xa5, 0x18, 0x7e, 0xff, 0x92, 0x06, 0xd7, 0xcb, 0xe9, 0x0d, 0xd8, 0xbf, 0x67, \
+ 0xc8, 0x44, 0xe2, 0x02 }
+#define NIST_HMAC_SHA512_MSG { 0x24, 0x23, 0xdf, 0xf4, 0x8b, 0x31, 0x2b, 0xe8, 0x64, 0xcb, 0x34, 0x90, 0x64, 0x1f, 0x79, 0x3d, \
+ 0x2b, 0x9f, 0xb6, 0x8a, 0x77, 0x63, 0xb8, 0xe2, 0x98, 0xc8, 0x6f, 0x42, 0x24, 0x5e, 0x45, 0x40, \
+ 0xeb, 0x01, 0xae, 0x4d, 0x2d, 0x45, 0x00, 0x37, 0x0b, 0x18, 0x86, 0xf2, 0x3c, 0xa2, 0xcf, 0x97, \
+ 0x01, 0x70, 0x4c, 0xad, 0x5b, 0xd2, 0x1b, 0xa8, 0x7b, 0x81, 0x1d, 0xaf, 0x7a, 0x85, 0x4e, 0xa2, \
+ 0x4a, 0x56, 0x56, 0x5c, 0xed, 0x42, 0x5b, 0x35, 0xe4, 0x0e, 0x1a, 0xcb, 0xeb, 0xe0, 0x36, 0x03, \
+ 0xe3, 0x5d, 0xcf, 0x4a, 0x10, 0x0e, 0x57, 0x21, 0x84, 0x08, 0xa1, 0xd8, 0xdb, 0xcc, 0x3b, 0x99, \
+ 0x29, 0x6c, 0xfe, 0xa9, 0x31, 0xef, 0xe3, 0xeb, 0xd8, 0xf7, 0x19, 0xa6, 0xd9, 0xa1, 0x54, 0x87, \
+ 0xb9, 0xad, 0x67, 0xea, 0xfe, 0xdf, 0x15, 0x55, 0x9c, 0xa4, 0x24, 0x45, 0xb0, 0xf9, 0xb4, 0x2e }
+#define NIST_HMAC_SHA512_MD { 0x33, 0xc5, 0x11, 0xe9, 0xbc, 0x23, 0x07, 0xc6, 0x27, 0x58, 0xdf, 0x61, 0x12, 0x5a, 0x98, 0x0e, \
+ 0xe6, 0x4c, 0xef, 0xeb, 0xd9, 0x09, 0x31, 0xcb, 0x91, 0xc1, 0x37, 0x42, 0xd4, 0x71, 0x4c, 0x06, \
+ 0xde, 0x40, 0x03, 0xfa, 0xf3, 0xc4, 0x1c, 0x06, 0xae, 0xfc, 0x63, 0x8a, 0xd4, 0x7b, 0x21, 0x90, \
+ 0x6e, 0x6b, 0x10, 0x48, 0x16, 0xb7, 0x2d, 0xe6, 0x26, 0x9e, 0x04, 0x5a, 0x1f, 0x44, 0x29, 0xd4 }
+
diff --git a/drivers/staging/ccree/ssi_fips_ext.c b/drivers/staging/ccree/ssi_fips_ext.c
new file mode 100644
index 0000000..369ab86
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_ext.c
@@ -0,0 +1,96 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/**************************************************************
+This file defines the driver FIPS functions that are meant to be
+implemented by the driver user. The current implementation is
+sample code only.
+***************************************************************/
+
+#include <linux/module.h>
+#include "ssi_fips_local.h"
+#include "ssi_driver.h"
+
+
+static bool tee_error;
+module_param(tee_error, bool, 0644);
+MODULE_PARM_DESC(tee_error, "Simulate TEE library failure flag: 0 - no error (default), 1 - TEE error occurred");
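+/* With 0644 permissions the flag is also writable at run time through
+ * /sys/module/<module name>/parameters/tee_error. */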
+
+static ssi_fips_state_t fips_state = CC_FIPS_STATE_NOT_SUPPORTED;
+static ssi_fips_error_t fips_error = CC_REE_FIPS_ERROR_OK;
+
+/*
+This function returns the FIPS REE state.
+It should be implemented by the driver user, depending on where
+the state value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_get_state(ssi_fips_state_t *p_state)
+{
+ int rc = 0;
+
+ if (p_state == NULL) {
+ return -EINVAL;
+ }
+
+ *p_state = fips_state;
+
+ return rc;
+}
+
+/*
+This function returns the FIPS REE error.
+It should be implemented by the driver user, depending on where
+the error value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_get_error(ssi_fips_error_t *p_err)
+{
+ int rc = 0;
+
+ if (p_err == NULL) {
+ return -EINVAL;
+ }
+
+ *p_err = fips_error;
+
+ return rc;
+}
+
+/*
+This function sets the FIPS REE state.
+It should be implemented by the driver user, depending on where
+the state value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_set_state(ssi_fips_state_t state)
+{
+ fips_state = state;
+ return 0;
+}
+
+/*
+This function sets the FIPS REE error.
+It should be implemented by the driver user, depending on where
+the error value is stored.
+The reference code uses a global variable.
+*/
+int ssi_fips_ext_set_error(ssi_fips_error_t err)
+{
+ fips_error = err;
+ return 0;
+}
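+
+/*
+ * A minimal usage sketch (caller side): on a power-up KAT failure the
+ * driver would record the error and leave the approved state, e.g.
+ *
+ *	ssi_fips_ext_set_error(CC_REE_FIPS_ERROR_AES_ECB_PUT);
+ *	ssi_fips_ext_set_state(CC_FIPS_STATE_ERROR);
+ *
+ * (CC_FIPS_STATE_ERROR is illustrative here; the actual state values
+ * come from ssi_fips.h.)
+ */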
+
+
diff --git a/drivers/staging/ccree/ssi_fips_ll.c b/drivers/staging/ccree/ssi_fips_ll.c
new file mode 100644
index 0000000..32daf35
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_ll.c
@@ -0,0 +1,1681 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/**************************************************************
+This file defines the driver FIPS low-level implementation
+functions, which execute the KATs (known-answer tests).
+***************************************************************/
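+
+/*
+ * Each KAT below follows the same pattern: copy a NIST test vector into
+ * a DMA-coherent buffer, build a short HW descriptor sequence, push it
+ * to the engine via send_request() and compare the output against the
+ * expected result; any mismatch or request failure is mapped to a
+ * per-algorithm CC_REE_FIPS_ERROR_* code.
+ */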
+#include <linux/kernel.h>
+
+#include "ssi_driver.h"
+#include "ssi_fips_local.h"
+#include "ssi_fips_data.h"
+#include "cc_crypto_ctx.h"
+#include "ssi_hash.h"
+#include "ssi_request_mgr.h"
+
+
+static const uint32_t digest_len_init[] = {
+ 0x00000040, 0x00000000, 0x00000000, 0x00000000 };
+static const uint32_t sha1_init[] = {
+ SHA1_H4, SHA1_H3, SHA1_H2, SHA1_H1, SHA1_H0 };
+static const uint32_t sha256_init[] = {
+ SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
+ SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
+#if (CC_SUPPORT_SHA > 256)
+static const uint32_t digest_len_sha512_init[] = {
+ 0x00000080, 0x00000000, 0x00000000, 0x00000000 };
+static const uint64_t sha512_init[] = {
+ SHA512_H7, SHA512_H6, SHA512_H5, SHA512_H4,
+ SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
+#endif
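+
+/*
+ * digest_len_init is the "hash current length" loaded before the outer
+ * HMAC hash: 0x40 = 64 bytes, i.e. one SHA-1/SHA-256 block (the K0^opad
+ * block) already hashed; the SHA-512 variant uses 0x80 = 128 bytes.
+ */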
+
+
+#define NIST_CIPHER_AES_MAX_VECTOR_SIZE 32
+
+struct fips_cipher_ctx {
+ uint8_t iv[CC_AES_IV_SIZE];
+ uint8_t key[AES_512_BIT_KEY_SIZE];
+ uint8_t din[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+ uint8_t dout[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+};
+
+typedef struct _FipsCipherData {
+ uint8_t isAes;
+ uint8_t key[AES_512_BIT_KEY_SIZE];
+ size_t keySize;
+ uint8_t iv[CC_AES_IV_SIZE];
+ enum drv_crypto_direction direction;
+ enum drv_cipher_mode oprMode;
+ uint8_t dataIn[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+ uint8_t dataOut[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+ size_t dataInSize;
+} FipsCipherData;
+
+
+struct fips_cmac_ctx {
+ uint8_t key[AES_256_BIT_KEY_SIZE];
+ uint8_t din[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsCmacData {
+ enum drv_crypto_direction direction;
+ uint8_t key[AES_256_BIT_KEY_SIZE];
+ size_t key_size;
+ uint8_t data_in[NIST_CIPHER_AES_MAX_VECTOR_SIZE];
+ size_t data_in_size;
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+ size_t mac_res_size;
+} FipsCmacData;
+
+
+struct fips_hash_ctx {
+ uint8_t initial_digest[CC_DIGEST_SIZE_MAX];
+ uint8_t din[NIST_SHA_MSG_SIZE];
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsHashData {
+ enum drv_hash_mode hash_mode;
+ uint8_t data_in[NIST_SHA_MSG_SIZE];
+ size_t data_in_size;
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+} FipsHashData;
+
+
+/* Note that the HMAC key length must be less than or equal to the block size (64 bytes up to SHA-256, 128 bytes for SHA-384/512). */
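+/*
+ * For such keys K0 is the key zero-padded to the block size, and per
+ * RFC 2104 the MAC is HMAC(K, m) = H((K0 xor opad) || H((K0 xor ipad) || m)).
+ */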
+struct fips_hmac_ctx {
+ uint8_t initial_digest[CC_DIGEST_SIZE_MAX];
+ uint8_t key[CC_HMAC_BLOCK_SIZE_MAX];
+ uint8_t k0[CC_HMAC_BLOCK_SIZE_MAX];
+ uint8_t digest_bytes_len[HASH_LEN_SIZE];
+ uint8_t tmp_digest[CC_DIGEST_SIZE_MAX];
+ uint8_t din[NIST_HMAC_MSG_SIZE];
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+};
+
+typedef struct _FipsHmacData {
+ enum drv_hash_mode hash_mode;
+ uint8_t key[CC_HMAC_BLOCK_SIZE_MAX];
+ size_t key_size;
+ uint8_t data_in[NIST_HMAC_MSG_SIZE];
+ size_t data_in_size;
+ uint8_t mac_res[CC_DIGEST_SIZE_MAX];
+} FipsHmacData;
+
+
+#define FIPS_CCM_B0_A0_ADATA_SIZE (NIST_AESCCM_IV_SIZE + NIST_AESCCM_IV_SIZE + NIST_AESCCM_ADATA_SIZE)
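+/* Holds the CCM B0 block, the A0 counter block and the associated data. */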
+
+struct fips_ccm_ctx {
+ uint8_t b0_a0_adata[FIPS_CCM_B0_A0_ADATA_SIZE];
+ uint8_t iv[NIST_AESCCM_IV_SIZE];
+ uint8_t ctr_cnt_0[NIST_AESCCM_IV_SIZE];
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ uint8_t din[NIST_AESCCM_TEXT_SIZE];
+ uint8_t dout[NIST_AESCCM_TEXT_SIZE];
+ uint8_t mac_res[NIST_AESCCM_TAG_SIZE];
+};
+
+typedef struct _FipsCcmData {
+ enum drv_crypto_direction direction;
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ size_t keySize;
+ uint8_t nonce[NIST_AESCCM_NONCE_SIZE];
+ uint8_t adata[NIST_AESCCM_ADATA_SIZE];
+ size_t adataSize;
+ uint8_t dataIn[NIST_AESCCM_TEXT_SIZE];
+ size_t dataInSize;
+ uint8_t dataOut[NIST_AESCCM_TEXT_SIZE];
+ uint8_t tagSize;
+ uint8_t macResOut[NIST_AESCCM_TAG_SIZE];
+} FipsCcmData;
+
+
+struct fips_gcm_ctx {
+ uint8_t adata[NIST_AESGCM_ADATA_SIZE];
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ uint8_t hkey[CC_AES_KEY_SIZE_MAX];
+ uint8_t din[NIST_AESGCM_TEXT_SIZE];
+ uint8_t dout[NIST_AESGCM_TEXT_SIZE];
+ uint8_t mac_res[NIST_AESGCM_TAG_SIZE];
+ uint8_t len_block[AES_BLOCK_SIZE];
+ uint8_t iv_inc1[AES_BLOCK_SIZE];
+ uint8_t iv_inc2[AES_BLOCK_SIZE];
+};
+
+typedef struct _FipsGcmData {
+ enum drv_crypto_direction direction;
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ size_t keySize;
+ uint8_t iv[NIST_AESGCM_IV_SIZE];
+ uint8_t adata[NIST_AESGCM_ADATA_SIZE];
+ size_t adataSize;
+ uint8_t dataIn[NIST_AESGCM_TEXT_SIZE];
+ size_t dataInSize;
+ uint8_t dataOut[NIST_AESGCM_TEXT_SIZE];
+ uint8_t tagSize;
+ uint8_t macResOut[NIST_AESGCM_TAG_SIZE];
+} FipsGcmData;
+
+
+typedef union _fips_ctx {
+ struct fips_cipher_ctx cipher;
+ struct fips_cmac_ctx cmac;
+ struct fips_hash_ctx hash;
+ struct fips_hmac_ctx hmac;
+ struct fips_ccm_ctx ccm;
+ struct fips_gcm_ctx gcm;
+} fips_ctx;
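+
+/* The union is sized by its largest member, so a single DMA-coherent
+ * allocation can be reused in turn for every test context below. */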
+
+
+/* test data tables */
+static const FipsCipherData FipsCipherDataTable[] = {
+ /* AES */
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_128_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_128_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_192_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_192_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_AES_PLAIN_DATA, NIST_AES_256_ECB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_AES_256_ECB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_128_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_128_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_192_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_192_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_AES_PLAIN_DATA, NIST_AES_256_CBC_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CBC_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_AES_256_CBC_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_128_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_128_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_192_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_OFB, NIST_AES_192_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_OFB, NIST_AES_PLAIN_DATA, NIST_AES_256_OFB_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_OFB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_OFB, NIST_AES_256_OFB_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_128_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_128_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_192_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_192_KEY, CC_AES_192_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_192_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CTR, NIST_AES_PLAIN_DATA, NIST_AES_256_CTR_CIPHER, NIST_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_CTR_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CTR, NIST_AES_256_CTR_CIPHER, NIST_AES_PLAIN_DATA, NIST_AES_VECTOR_SIZE },
+ { 1, RFC3962_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, RFC3962_AES_CBC_CTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC_CTS, RFC3962_AES_PLAIN_DATA, RFC3962_AES_128_CBC_CTS_CIPHER, RFC3962_AES_VECTOR_SIZE },
+ { 1, RFC3962_AES_128_KEY, CC_AES_128_BIT_KEY_SIZE, RFC3962_AES_CBC_CTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC_CTS, RFC3962_AES_128_CBC_CTS_CIPHER, RFC3962_AES_PLAIN_DATA, RFC3962_AES_VECTOR_SIZE },
+ { 1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_256_XTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS, NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_VECTOR_SIZE },
+ { 1, NIST_AES_256_XTS_KEY, CC_AES_256_BIT_KEY_SIZE, NIST_AES_256_XTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS, NIST_AES_256_XTS_CIPHER, NIST_AES_256_XTS_PLAIN, NIST_AES_256_XTS_VECTOR_SIZE },
+#if (CC_SUPPORT_SHA > 256)
+ { 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_XTS, NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_VECTOR_SIZE },
+ { 1, NIST_AES_512_XTS_KEY, 2*CC_AES_256_BIT_KEY_SIZE, NIST_AES_512_XTS_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_XTS, NIST_AES_512_XTS_CIPHER, NIST_AES_512_XTS_PLAIN, NIST_AES_512_XTS_VECTOR_SIZE },
+#endif
+ /* DES */
+ { 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_ECB3_CIPHER, NIST_TDES_VECTOR_SIZE },
+ { 0, NIST_TDES_ECB3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_ECB_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_ECB, NIST_TDES_ECB3_CIPHER, NIST_TDES_ECB3_PLAIN_DATA, NIST_TDES_VECTOR_SIZE },
+ { 0, NIST_TDES_CBC3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_CBC3_IV, DRV_CRYPTO_DIRECTION_ENCRYPT, DRV_CIPHER_CBC, NIST_TDES_CBC3_PLAIN_DATA, NIST_TDES_CBC3_CIPHER, NIST_TDES_VECTOR_SIZE },
+ { 0, NIST_TDES_CBC3_KEY, CC_DRV_DES_TRIPLE_KEY_SIZE, NIST_TDES_CBC3_IV, DRV_CRYPTO_DIRECTION_DECRYPT, DRV_CIPHER_CBC, NIST_TDES_CBC3_CIPHER, NIST_TDES_CBC3_PLAIN_DATA, NIST_TDES_VECTOR_SIZE },
+};
+#define FIPS_CIPHER_NUM_OF_TESTS (sizeof(FipsCipherDataTable) / sizeof(FipsCipherData))
+
+static const FipsCmacData FipsCmacDataTable[] = {
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_128_CMAC_KEY, AES_128_BIT_KEY_SIZE, NIST_AES_128_CMAC_PLAIN_DATA, NIST_AES_128_CMAC_VECTOR_SIZE, NIST_AES_128_CMAC_MAC, NIST_AES_128_CMAC_OUTPUT_SIZE },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_192_CMAC_KEY, AES_192_BIT_KEY_SIZE, NIST_AES_192_CMAC_PLAIN_DATA, NIST_AES_192_CMAC_VECTOR_SIZE, NIST_AES_192_CMAC_MAC, NIST_AES_192_CMAC_OUTPUT_SIZE },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AES_256_CMAC_KEY, AES_256_BIT_KEY_SIZE, NIST_AES_256_CMAC_PLAIN_DATA, NIST_AES_256_CMAC_VECTOR_SIZE, NIST_AES_256_CMAC_MAC, NIST_AES_256_CMAC_OUTPUT_SIZE },
+};
+#define FIPS_CMAC_NUM_OF_TESTS (sizeof(FipsCmacDataTable) / sizeof(FipsCmacData))
+
+static const FipsHashData FipsHashDataTable[] = {
+ { DRV_HASH_SHA1, NIST_SHA_1_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_1_MD },
+ { DRV_HASH_SHA256, NIST_SHA_256_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_256_MD },
+#if (CC_SUPPORT_SHA > 256)
+// { DRV_HASH_SHA512, NIST_SHA_512_MSG, NIST_SHA_MSG_SIZE, NIST_SHA_512_MD },
+#endif
+};
+#define FIPS_HASH_NUM_OF_TESTS (sizeof(FipsHashDataTable) / sizeof(FipsHashData))
+
+static const FipsHmacData FipsHmacDataTable[] = {
+ { DRV_HASH_SHA1, NIST_HMAC_SHA1_KEY, NIST_HMAC_SHA1_KEY_SIZE, NIST_HMAC_SHA1_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA1_MD },
+ { DRV_HASH_SHA256, NIST_HMAC_SHA256_KEY, NIST_HMAC_SHA256_KEY_SIZE, NIST_HMAC_SHA256_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA256_MD },
+#if (CC_SUPPORT_SHA > 256)
+// { DRV_HASH_SHA512, NIST_HMAC_SHA512_KEY, NIST_HMAC_SHA512_KEY_SIZE, NIST_HMAC_SHA512_MSG, NIST_HMAC_MSG_SIZE, NIST_HMAC_SHA512_MD },
+#endif
+};
+#define FIPS_HMAC_NUM_OF_TESTS (sizeof(FipsHmacDataTable) / sizeof(FipsHmacData))
+
+static const FipsCcmData FipsCcmDataTable[] = {
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_128_KEY, NIST_AESCCM_128_BIT_KEY_SIZE, NIST_AESCCM_128_NONCE, NIST_AESCCM_128_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_128_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_128_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_128_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_128_KEY, NIST_AESCCM_128_BIT_KEY_SIZE, NIST_AESCCM_128_NONCE, NIST_AESCCM_128_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_128_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_128_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_128_MAC },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_192_KEY, NIST_AESCCM_192_BIT_KEY_SIZE, NIST_AESCCM_192_NONCE, NIST_AESCCM_192_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_192_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_192_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_192_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_192_KEY, NIST_AESCCM_192_BIT_KEY_SIZE, NIST_AESCCM_192_NONCE, NIST_AESCCM_192_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_192_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_192_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_192_MAC },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESCCM_256_KEY, NIST_AESCCM_256_BIT_KEY_SIZE, NIST_AESCCM_256_NONCE, NIST_AESCCM_256_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_256_PLAIN_TEXT, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_256_CIPHER, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_256_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESCCM_256_KEY, NIST_AESCCM_256_BIT_KEY_SIZE, NIST_AESCCM_256_NONCE, NIST_AESCCM_256_ADATA, NIST_AESCCM_ADATA_SIZE, NIST_AESCCM_256_CIPHER, NIST_AESCCM_TEXT_SIZE, NIST_AESCCM_256_PLAIN_TEXT, NIST_AESCCM_TAG_SIZE, NIST_AESCCM_256_MAC },
+};
+#define FIPS_CCM_NUM_OF_TESTS (sizeof(FipsCcmDataTable) / sizeof(FipsCcmData))
+
+static const FipsGcmData FipsGcmDataTable[] = {
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_128_KEY, NIST_AESGCM_128_BIT_KEY_SIZE, NIST_AESGCM_128_IV, NIST_AESGCM_128_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_128_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_128_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_128_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_128_KEY, NIST_AESGCM_128_BIT_KEY_SIZE, NIST_AESGCM_128_IV, NIST_AESGCM_128_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_128_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_128_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_128_MAC },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_192_KEY, NIST_AESGCM_192_BIT_KEY_SIZE, NIST_AESGCM_192_IV, NIST_AESGCM_192_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_192_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_192_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_192_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_192_KEY, NIST_AESGCM_192_BIT_KEY_SIZE, NIST_AESGCM_192_IV, NIST_AESGCM_192_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_192_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_192_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_192_MAC },
+ { DRV_CRYPTO_DIRECTION_ENCRYPT, NIST_AESGCM_256_KEY, NIST_AESGCM_256_BIT_KEY_SIZE, NIST_AESGCM_256_IV, NIST_AESGCM_256_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_256_PLAIN_TEXT, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_256_CIPHER, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_256_MAC },
+ { DRV_CRYPTO_DIRECTION_DECRYPT, NIST_AESGCM_256_KEY, NIST_AESGCM_256_BIT_KEY_SIZE, NIST_AESGCM_256_IV, NIST_AESGCM_256_ADATA, NIST_AESGCM_ADATA_SIZE, NIST_AESGCM_256_CIPHER, NIST_AESGCM_TEXT_SIZE, NIST_AESGCM_256_PLAIN_TEXT, NIST_AESGCM_TAG_SIZE, NIST_AESGCM_256_MAC },
+};
+#define FIPS_GCM_NUM_OF_TESTS (sizeof(FipsGcmDataTable) / sizeof(FipsGcmData))
+
+
+static inline ssi_fips_error_t
+FIPS_CipherToFipsError(enum drv_cipher_mode mode, bool is_aes)
+{
+ switch (mode)
+ {
+ case DRV_CIPHER_ECB:
+ return is_aes ? CC_REE_FIPS_ERROR_AES_ECB_PUT : CC_REE_FIPS_ERROR_DES_ECB_PUT ;
+ case DRV_CIPHER_CBC:
+ return is_aes ? CC_REE_FIPS_ERROR_AES_CBC_PUT : CC_REE_FIPS_ERROR_DES_CBC_PUT ;
+ case DRV_CIPHER_OFB:
+ return CC_REE_FIPS_ERROR_AES_OFB_PUT;
+ case DRV_CIPHER_CTR:
+ return CC_REE_FIPS_ERROR_AES_CTR_PUT;
+ case DRV_CIPHER_CBC_CTS:
+ return CC_REE_FIPS_ERROR_AES_CBC_CTS_PUT;
+ case DRV_CIPHER_XTS:
+ return CC_REE_FIPS_ERROR_AES_XTS_PUT;
+ default:
+ return CC_REE_FIPS_ERROR_GENERAL;
+ }
+
+ return CC_REE_FIPS_ERROR_GENERAL;
+}
+
+
+static inline int
+ssi_cipher_fips_run_test(struct ssi_drvdata *drvdata,
+ bool is_aes,
+ int cipher_mode,
+ int direction,
+ dma_addr_t key_dma_addr,
+ size_t key_len,
+ dma_addr_t iv_dma_addr,
+ size_t iv_len,
+ dma_addr_t din_dma_addr,
+ dma_addr_t dout_dma_addr,
+ size_t data_size)
+{
+ /* max number of descriptors used for the flow */
+ #define FIPS_CIPHER_MAX_SEQ_LEN 6
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_CIPHER_MAX_SEQ_LEN];
+ int idx = 0;
+ int s_flow_mode = is_aes ? S_DIN_to_AES : S_DIN_to_DES;
+
+ /* create setup descriptors */
+ switch (cipher_mode) {
+ case DRV_CIPHER_CBC:
+ case DRV_CIPHER_CBC_CTS:
+ case DRV_CIPHER_CTR:
+ case DRV_CIPHER_OFB:
+ /* Load cipher state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ iv_dma_addr, iv_len, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+ if ((cipher_mode == DRV_CIPHER_CTR) ||
+ (cipher_mode == DRV_CIPHER_OFB) ) {
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ } else {
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ }
+ idx++;
+ /*FALLTHROUGH*/
+ case DRV_CIPHER_ECB:
+ /* Load key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+ if (is_aes) {
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr,
+ ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len),
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ } else {/*des*/
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr, key_len,
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_DES(&desc[idx], key_len);
+ }
+ HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+ break;
+ case DRV_CIPHER_XTS:
+ /* Load AES key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr, key_len/2, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* load XEX key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ (key_dma_addr+key_len/2), key_len/2, NS_BIT);
+ HW_DESC_SET_XEX_DATA_UNIT_SIZE(&desc[idx], data_size);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_XEX_KEY);
+ idx++;
+
+ /* Set state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], cipher_mode);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direction);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len/2);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], s_flow_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ iv_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+ idx++;
+ break;
+ default:
+ FIPS_LOG("Unsupported cipher mode (%d)\n", cipher_mode);
+ BUG();
+ }
+
+ /* create data descriptor */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_size, NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, data_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], is_aes ? DIN_AES_DOUT : DIN_DES_DOUT);
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_CIPHER_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	// send_request() returns an error only in some corner cases that should not occur in this flow.
+ return rc;
+}
+
+
+ssi_fips_error_t
+ssi_cipher_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_cipher_ctx *virt_ctx = (struct fips_cipher_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers for iv, key, din, dout */
+ dma_addr_t iv_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, iv);
+ dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, key);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, din);
+ dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_cipher_ctx, dout);
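+	/* A single DMA-coherent allocation backs the whole fips_cipher_ctx;
+	carving it up with offsetof() keeps each virtual pointer in step
+	with its DMA address. */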
+
+ for (i = 0; i < FIPS_CIPHER_NUM_OF_TESTS; ++i)
+ {
+		const FipsCipherData *cipherData = &FipsCipherDataTable[i];
+ int rc = 0;
+ size_t iv_size = cipherData->isAes ? NIST_AES_IV_SIZE : NIST_TDES_IV_SIZE ;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_cipher_ctx));
+
+ /* copy into the allocated buffer */
+ memcpy(virt_ctx->iv, cipherData->iv, iv_size);
+ memcpy(virt_ctx->key, cipherData->key, cipherData->keySize);
+ memcpy(virt_ctx->din, cipherData->dataIn, cipherData->dataInSize);
+
+ FIPS_DBG("ssi_cipher_fips_run_test - (i = %d) \n", i);
+ rc = ssi_cipher_fips_run_test(drvdata,
+ cipherData->isAes,
+ cipherData->oprMode,
+ cipherData->direction,
+ key_dma_addr,
+ cipherData->keySize,
+ iv_dma_addr,
+ iv_size,
+ din_dma_addr,
+ dout_dma_addr,
+ cipherData->dataInSize);
+ if (rc != 0)
+ {
+ FIPS_LOG("ssi_cipher_fips_run_test %d returned error - rc = %d \n", i, rc);
+ error = FIPS_CipherToFipsError(cipherData->oprMode, cipherData->isAes);
+ break;
+ }
+
+ /* compare actual dout to expected */
+ if (memcmp(virt_ctx->dout, cipherData->dataOut, cipherData->dataInSize) != 0)
+ {
+ FIPS_LOG("dout comparison error %d - oprMode=%d, isAes=%d\n", i, cipherData->oprMode, cipherData->isAes);
+ FIPS_LOG(" i expected received \n");
+ FIPS_LOG(" i 0x%08x 0x%08x (size=%d) \n", (size_t)cipherData->dataOut, (size_t)virt_ctx->dout, cipherData->dataInSize);
+ for (i = 0; i < cipherData->dataInSize; ++i)
+ {
+ FIPS_LOG(" %d 0x%02x 0x%02x \n", i, cipherData->dataOut[i], virt_ctx->dout[i]);
+ }
+
+ error = FIPS_CipherToFipsError(cipherData->oprMode, cipherData->isAes);
+ break;
+ }
+ }
+
+ return error;
+}
+
+
+static inline int
+ssi_cmac_fips_run_test(struct ssi_drvdata *drvdata,
+ dma_addr_t key_dma_addr,
+ size_t key_len,
+ dma_addr_t din_dma_addr,
+ size_t din_len,
+ dma_addr_t digest_dma_addr,
+ size_t digest_len)
+{
+ /* max number of descriptors used for the flow */
+ #define FIPS_CMAC_MAX_SEQ_LEN 4
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_CMAC_MAX_SEQ_LEN];
+ int idx = 0;
+
+ /* Setup CMAC Key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+ ((key_len == 24) ? AES_MAX_KEY_SIZE : key_len), NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Load MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_len);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+
+ //ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ din_dma_addr,
+ din_len, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], digest_dma_addr, CC_AES_BLOCK_SIZE, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_AES_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CMAC);
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_CMAC_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+	// send_request() returns an error only in some corner cases that should not occur in this flow.
+ return rc;
+}
+
+ssi_fips_error_t
+ssi_cmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_cmac_ctx *virt_ctx = (struct fips_cmac_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers for key, din, dout */
+ dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, key);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, din);
+ dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_cmac_ctx, mac_res);
+
+ for (i = 0; i < FIPS_CMAC_NUM_OF_TESTS; ++i)
+ {
+		const FipsCmacData *cmac_data = &FipsCmacDataTable[i];
+ int rc = 0;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_cmac_ctx));
+
+ /* copy into the allocated buffer */
+ memcpy(virt_ctx->key, cmac_data->key, cmac_data->key_size);
+ memcpy(virt_ctx->din, cmac_data->data_in, cmac_data->data_in_size);
+
+ BUG_ON(cmac_data->direction != DRV_CRYPTO_DIRECTION_ENCRYPT);
+
+ FIPS_DBG("ssi_cmac_fips_run_test - (i = %d) \n", i);
+ rc = ssi_cmac_fips_run_test(drvdata,
+ key_dma_addr,
+ cmac_data->key_size,
+ din_dma_addr,
+ cmac_data->data_in_size,
+ mac_res_dma_addr,
+ cmac_data->mac_res_size);
+ if (rc != 0)
+ {
+ FIPS_LOG("ssi_cmac_fips_run_test %d returned error - rc = %d \n", i, rc);
+ error = CC_REE_FIPS_ERROR_AES_CMAC_PUT;
+ break;
+ }
+
+ /* compare actual mac result to expected */
+ if (memcmp(virt_ctx->mac_res, cmac_data->mac_res, cmac_data->mac_res_size) != 0)
+ {
+ FIPS_LOG("comparison error %d - digest_size=%d \n", i, cmac_data->mac_res_size);
+ FIPS_LOG(" i expected received \n");
+ FIPS_LOG(" i 0x%08x 0x%08x \n", (size_t)cmac_data->mac_res, (size_t)virt_ctx->mac_res);
+ for (i = 0; i < cmac_data->mac_res_size; ++i)
+ {
+ FIPS_LOG(" %d 0x%02x 0x%02x \n", i, cmac_data->mac_res[i], virt_ctx->mac_res[i]);
+ }
+
+ error = CC_REE_FIPS_ERROR_AES_CMAC_PUT;
+ break;
+ }
+ }
+
+ return error;
+}
+
+
+static inline ssi_fips_error_t
+FIPS_HashToFipsError(enum drv_hash_mode hash_mode)
+{
+ switch (hash_mode) {
+ case DRV_HASH_SHA1:
+ return CC_REE_FIPS_ERROR_SHA1_PUT;
+ case DRV_HASH_SHA256:
+ return CC_REE_FIPS_ERROR_SHA256_PUT;
+#if (CC_SUPPORT_SHA > 256)
+ case DRV_HASH_SHA512:
+ return CC_REE_FIPS_ERROR_SHA512_PUT;
+#endif
+ default:
+ return CC_REE_FIPS_ERROR_GENERAL;
+ }
+
+ return CC_REE_FIPS_ERROR_GENERAL;
+}
+
+static inline int
+ssi_hash_fips_run_test(struct ssi_drvdata *drvdata,
+ dma_addr_t initial_digest_dma_addr,
+ dma_addr_t din_dma_addr,
+ size_t data_in_size,
+ dma_addr_t mac_res_dma_addr,
+ enum drv_hash_mode hash_mode,
+ enum drv_hash_hw_mode hw_mode,
+ int digest_size,
+ int inter_digestsize)
+{
+ /* max number of descriptors used for the flow */
+ #define FIPS_HASH_MAX_SEQ_LEN 4
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_HASH_MAX_SEQ_LEN];
+ int idx = 0;
+
+ /* Load initial digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* data descriptor */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, data_in_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ if (unlikely((hash_mode == DRV_HASH_MD5) ||
+ (hash_mode == DRV_HASH_SHA384) ||
+ (hash_mode == DRV_HASH_SHA512))) {
+ HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+ } else {
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ }
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_HASH_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+ return rc;
+}
+
+ssi_fips_error_t
+ssi_hash_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_hash_ctx *virt_ctx = (struct fips_hash_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers for initial_digest, din, mac_res */
+ dma_addr_t initial_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, initial_digest);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, din);
+ dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_hash_ctx, mac_res);
+
+ for (i = 0; i < FIPS_HASH_NUM_OF_TESTS; ++i)
+ {
+		const FipsHashData *hash_data = &FipsHashDataTable[i];
+ int rc = 0;
+ enum drv_hash_hw_mode hw_mode = 0;
+ int digest_size = 0;
+ int inter_digestsize = 0;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_hash_ctx));
+
+ switch (hash_data->hash_mode) {
+ case DRV_HASH_SHA1:
+ hw_mode = DRV_HASH_HW_SHA1;
+ digest_size = CC_SHA1_DIGEST_SIZE;
+ inter_digestsize = CC_SHA1_DIGEST_SIZE;
+ /* copy the initial digest into the allocated cache coherent buffer */
+ memcpy(virt_ctx->initial_digest, (void*)sha1_init, CC_SHA1_DIGEST_SIZE);
+ break;
+ case DRV_HASH_SHA256:
+ hw_mode = DRV_HASH_HW_SHA256;
+ digest_size = CC_SHA256_DIGEST_SIZE;
+ inter_digestsize = CC_SHA256_DIGEST_SIZE;
+ memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE);
+ break;
+#if (CC_SUPPORT_SHA > 256)
+ case DRV_HASH_SHA512:
+ hw_mode = DRV_HASH_HW_SHA512;
+ digest_size = CC_SHA512_DIGEST_SIZE;
+ inter_digestsize = CC_SHA512_DIGEST_SIZE;
+ memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE);
+ break;
+#endif
+ default:
+ error = FIPS_HashToFipsError(hash_data->hash_mode);
+ break;
+ }
+
+ /* copy the din data into the allocated buffer */
+ memcpy(virt_ctx->din, hash_data->data_in, hash_data->data_in_size);
+
+ /* run the test on HW */
+ FIPS_DBG("ssi_hash_fips_run_test - (i = %d) \n", i);
+ rc = ssi_hash_fips_run_test(drvdata,
+ initial_digest_dma_addr,
+ din_dma_addr,
+ hash_data->data_in_size,
+ mac_res_dma_addr,
+ hash_data->hash_mode,
+ hw_mode,
+ digest_size,
+ inter_digestsize);
+ if (rc != 0)
+ {
+ FIPS_LOG("ssi_hash_fips_run_test %d returned error - rc = %d \n", i, rc);
+ error = FIPS_HashToFipsError(hash_data->hash_mode);
+ break;
+ }
+
+ /* compare actual mac result to expected */
+ if (memcmp(virt_ctx->mac_res, hash_data->mac_res, digest_size) != 0)
+ {
+ FIPS_LOG("comparison error %d - hash_mode=%d digest_size=%d \n", i, hash_data->hash_mode, digest_size);
+ FIPS_LOG(" i expected received \n");
+ FIPS_LOG(" i 0x%08x 0x%08x \n", (size_t)hash_data->mac_res, (size_t)virt_ctx->mac_res);
+ for (i = 0; i < digest_size; ++i)
+ {
+ FIPS_LOG(" %d 0x%02x 0x%02x \n", i, hash_data->mac_res[i], virt_ctx->mac_res[i]);
+ }
+
+ error = FIPS_HashToFipsError(hash_data->hash_mode);
+ break;
+ }
+ }
+
+ return error;
+}
+
+
+static inline ssi_fips_error_t
+FIPS_HmacToFipsError(enum drv_hash_mode hash_mode)
+{
+ switch (hash_mode) {
+ case DRV_HASH_SHA1:
+ return CC_REE_FIPS_ERROR_HMAC_SHA1_PUT;
+ case DRV_HASH_SHA256:
+ return CC_REE_FIPS_ERROR_HMAC_SHA256_PUT;
+#if (CC_SUPPORT_SHA > 256)
+ case DRV_HASH_SHA512:
+ return CC_REE_FIPS_ERROR_HMAC_SHA512_PUT;
+#endif
+ default:
+ return CC_REE_FIPS_ERROR_GENERAL;
+ }
+
+ return CC_REE_FIPS_ERROR_GENERAL;
+}
+
+static inline int
+ssi_hmac_fips_run_test(struct ssi_drvdata *drvdata,
+ dma_addr_t initial_digest_dma_addr,
+ dma_addr_t key_dma_addr,
+ size_t key_size,
+ dma_addr_t din_dma_addr,
+ size_t data_in_size,
+ dma_addr_t mac_res_dma_addr,
+ enum drv_hash_mode hash_mode,
+ enum drv_hash_hw_mode hw_mode,
+ size_t digest_size,
+ size_t inter_digestsize,
+ size_t block_size,
+ dma_addr_t k0_dma_addr,
+ dma_addr_t tmp_digest_dma_addr,
+ dma_addr_t digest_bytes_len_dma_addr)
+{
+	/* The flow implemented here differs from the one in ssi_hash.c (setkey +
+	   digest flows): there is no need to store and reload some of the
+	   intermediate results. */
+
+ /* max number of descriptors used for the flow */
+ #define FIPS_HMAC_MAX_SEQ_LEN 12
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_HMAC_MAX_SEQ_LEN];
+ int idx = 0;
+ int i;
+ /* calc the hash opad first and ipad only afterwards (unlike the flow in ssi_hash.c) */
+ unsigned int hmacPadConst[2] = { HMAC_OPAD_CONST, HMAC_IPAD_CONST };
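+	/* (These are the RFC 2104 pad bytes 0x5c/0x36 replicated across a word;
+	doing opad first lets H(K0 ^ opad) be parked in tmp_digest while the
+	ipad hash continues inline.) */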
+
+ // assume (key_size <= block_size)
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr, key_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, key_size, NS_BIT, 0);
+ idx++;
+
+ // if needed, append Key with zeros to create K0
+ if ((block_size - key_size) != 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, (block_size - key_size));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (k0_dma_addr + key_size), (block_size - key_size),
+ NS_BIT, 0);
+ idx++;
+ }
+
+ BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, 0);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ return rc;
+ }
+ idx = 0;
+
+ /* calc derived HMAC key */
+ for (i = 0; i < 2; i++) {
+ /* Load hash initial state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, initial_digest_dma_addr, inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+
+ /* Load the hash current length*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Prepare opad/ipad key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ k0_dma_addr,
+ block_size, NS_BIT);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx],hw_mode);
+ HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ if (i == 0) {
+ /* First iteration - calc H(K0^opad) into tmp_digest_dma_addr */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ tmp_digest_dma_addr,
+ inter_digestsize,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ // is this needed?? or continue with current descriptors??
+ BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, 0);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ return rc;
+ }
+ idx = 0;
+ }
+ }
+
+ /* data descriptor */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ din_dma_addr, data_in_size,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* HW last hash block padding (aka. "DO_PAD") */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, HASH_LEN_SIZE, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+ HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+ idx++;
+
+ /* store the hash digest result in the context */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], k0_dma_addr, digest_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ if (unlikely((hash_mode == DRV_HASH_MD5) ||
+ (hash_mode == DRV_HASH_SHA384) ||
+ (hash_mode == DRV_HASH_SHA512))) {
+ HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+ } else {
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ }
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ idx++;
+
+ /* at this point:
+ tmp_digest = H(o_key_pad)
+ k0 = H(i_key_pad || m)
+ */
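+	/* The remaining descriptors therefore compute the final result
+	H((K0 ^ opad) || H((K0 ^ ipad) || m)): the opad state is reloaded
+	from tmp_digest and updated with k0 before the MAC is read out. */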
+
+ /* Loading hash opad xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, tmp_digest_dma_addr, inter_digestsize, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, digest_bytes_len_dma_addr, HASH_LEN_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Memory Barrier: wait for IPAD/OPAD axi write to complete */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, k0_dma_addr, digest_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+
+ /* Get final MAC result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hw_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, digest_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ if (unlikely((hash_mode == DRV_HASH_MD5) ||
+ (hash_mode == DRV_HASH_SHA384) ||
+ (hash_mode == DRV_HASH_SHA512))) {
+ HW_DESC_SET_BYTES_SWAP(&desc[idx], 1);
+ } else {
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ }
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_HMAC_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+ return rc;
+}
+
+ssi_fips_error_t
+ssi_hmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_hmac_ctx *virt_ctx = (struct fips_hmac_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+ dma_addr_t initial_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, initial_digest);
+ dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, key);
+ dma_addr_t k0_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, k0);
+ dma_addr_t tmp_digest_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, tmp_digest);
+ dma_addr_t digest_bytes_len_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, digest_bytes_len);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, din);
+ dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_hmac_ctx, mac_res);
+
+	for (i = 0; i < FIPS_HMAC_NUM_OF_TESTS; ++i) {
+		FipsHmacData *hmac_data = (FipsHmacData *)&FipsHmacDataTable[i];
+ int rc = 0;
+ enum drv_hash_hw_mode hw_mode = 0;
+ int digest_size = 0;
+ int block_size = 0;
+ int inter_digestsize = 0;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_hmac_ctx));
+
+ switch (hmac_data->hash_mode) {
+ case DRV_HASH_SHA1:
+ hw_mode = DRV_HASH_HW_SHA1;
+ digest_size = CC_SHA1_DIGEST_SIZE;
+ block_size = CC_SHA1_BLOCK_SIZE;
+ inter_digestsize = CC_SHA1_DIGEST_SIZE;
+ memcpy(virt_ctx->initial_digest, (void*)sha1_init, CC_SHA1_DIGEST_SIZE);
+ memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+ break;
+ case DRV_HASH_SHA256:
+ hw_mode = DRV_HASH_HW_SHA256;
+ digest_size = CC_SHA256_DIGEST_SIZE;
+ block_size = CC_SHA256_BLOCK_SIZE;
+ inter_digestsize = CC_SHA256_DIGEST_SIZE;
+ memcpy(virt_ctx->initial_digest, (void*)sha256_init, CC_SHA256_DIGEST_SIZE);
+ memcpy(virt_ctx->digest_bytes_len, digest_len_init, HASH_LEN_SIZE);
+ break;
+#if (CC_SUPPORT_SHA > 256)
+ case DRV_HASH_SHA512:
+ hw_mode = DRV_HASH_HW_SHA512;
+ digest_size = CC_SHA512_DIGEST_SIZE;
+ block_size = CC_SHA512_BLOCK_SIZE;
+ inter_digestsize = CC_SHA512_DIGEST_SIZE;
+ memcpy(virt_ctx->initial_digest, (void*)sha512_init, CC_SHA512_DIGEST_SIZE);
+ memcpy(virt_ctx->digest_bytes_len, digest_len_sha512_init, HASH_LEN_SIZE);
+ break;
+#endif
+ default:
+ error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+ break;
+ }
+
+ /* copy into the allocated buffer */
+ memcpy(virt_ctx->key, hmac_data->key, hmac_data->key_size);
+ memcpy(virt_ctx->din, hmac_data->data_in, hmac_data->data_in_size);
+
+ /* run the test on HW */
+ FIPS_DBG("ssi_hmac_fips_run_test - (i = %d) \n", i);
+ rc = ssi_hmac_fips_run_test(drvdata,
+ initial_digest_dma_addr,
+ key_dma_addr,
+ hmac_data->key_size,
+ din_dma_addr,
+ hmac_data->data_in_size,
+ mac_res_dma_addr,
+ hmac_data->hash_mode,
+ hw_mode,
+ digest_size,
+ inter_digestsize,
+ block_size,
+ k0_dma_addr,
+ tmp_digest_dma_addr,
+ digest_bytes_len_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_hmac_fips_run_test %zu returned error - rc = %d\n", i, rc);
+ error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+ break;
+ }
+
+ /* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, hmac_data->mac_res, digest_size) != 0) {
+			FIPS_LOG("comparison error %zu - hash_mode=%d digest_size=%d\n", i, hmac_data->hash_mode, digest_size);
+			FIPS_LOG(" i  expected  received\n");
+			FIPS_LOG(" i  %p  %p\n", hmac_data->mac_res, virt_ctx->mac_res);
+			for (i = 0; i < digest_size; ++i) {
+				FIPS_LOG(" %zu  0x%02x  0x%02x\n", i, hmac_data->mac_res[i], virt_ctx->mac_res[i]);
+			}
+
+ error = FIPS_HmacToFipsError(hmac_data->hash_mode);
+ break;
+ }
+ }
+
+ return error;
+}
+
+
+static inline int
+ssi_ccm_fips_run_test(struct ssi_drvdata *drvdata,
+ enum drv_crypto_direction direction,
+ dma_addr_t key_dma_addr,
+ size_t key_size,
+ dma_addr_t iv_dma_addr,
+ dma_addr_t ctr_cnt_0_dma_addr,
+ dma_addr_t b0_a0_adata_dma_addr,
+ size_t b0_a0_adata_size,
+ dma_addr_t din_dma_addr,
+ size_t din_size,
+ dma_addr_t dout_dma_addr,
+ dma_addr_t mac_res_dma_addr)
+{
+ /* max number of descriptors used for the flow */
+ #define FIPS_CCM_MAX_SEQ_LEN 10
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_CCM_MAX_SEQ_LEN];
+ unsigned int idx = 0;
+ unsigned int cipher_flow_mode;
+
+ if (direction == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ cipher_flow_mode = AES_to_HASH_and_DOUT;
+ } else { /* Encrypt */
+ cipher_flow_mode = AES_and_HASH;
+ }
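+	/*
+	 * Note: CCM's CBC-MAC is computed over the plaintext, so on encrypt
+	 * the input can feed the AES and HASH engines in parallel
+	 * (AES_and_HASH), while on decrypt the data must pass through AES
+	 * first so that the recovered plaintext is what gets MACed
+	 * (AES_to_HASH_and_DOUT).
+	 */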
+
+ /* load key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+ ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size),
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* load ctr state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ iv_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* load MAC key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, key_dma_addr,
+ ((key_size == NIST_AESCCM_192_BIT_KEY_SIZE) ? CC_AES_KEY_SIZE_MAX : key_size),
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* load MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+	/* process assoc data */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, b0_a0_adata_dma_addr, b0_a0_adata_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* process the cipher */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, din_dma_addr, din_size, NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], dout_dma_addr, din_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode);
+ idx++;
+
+ /* Read temporal MAC */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+	/* load AES-CTR state (for last MAC calculation) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctr_cnt_0_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Memory Barrier */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* encrypt the "T" value and store MAC inplace */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_res_dma_addr, NIST_AESCCM_TAG_SIZE, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_CCM_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+ return rc;
+}
+
+ssi_fips_error_t
+ssi_ccm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_ccm_ctx *virt_ctx = (struct fips_ccm_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+ dma_addr_t b0_a0_adata_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, b0_a0_adata);
+ dma_addr_t iv_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, iv);
+ dma_addr_t ctr_cnt_0_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, ctr_cnt_0);
+ dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, key);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, din);
+ dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, dout);
+ dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_ccm_ctx, mac_res);
+
+	for (i = 0; i < FIPS_CCM_NUM_OF_TESTS; ++i) {
+		FipsCcmData *ccmData = (FipsCcmData *)&FipsCcmDataTable[i];
+ int rc = 0;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_ccm_ctx));
+
+ /* copy the nonce, key, adata, din data into the allocated buffer */
+ memcpy(virt_ctx->key, ccmData->key, ccmData->keySize);
+ memcpy(virt_ctx->din, ccmData->dataIn, ccmData->dataInSize);
+ {
+ /* build B0 -- B0, nonce, l(m) */
+ __be16 data = cpu_to_be16(NIST_AESCCM_TEXT_SIZE);
+ virt_ctx->b0_a0_adata[0] = NIST_AESCCM_B0_VAL;
+ memcpy(virt_ctx->b0_a0_adata + 1, ccmData->nonce, NIST_AESCCM_NONCE_SIZE);
+ memcpy(virt_ctx->b0_a0_adata + 14, (u8 *)&data, sizeof(__be16));
+ /* build A0+ADATA */
+ virt_ctx->b0_a0_adata[NIST_AESCCM_IV_SIZE + 0] = (ccmData->adataSize >> 8) & 0xFF;
+ virt_ctx->b0_a0_adata[NIST_AESCCM_IV_SIZE + 1] = ccmData->adataSize & 0xFF;
+ memcpy(virt_ctx->b0_a0_adata + NIST_AESCCM_IV_SIZE + 2, ccmData->adata, ccmData->adataSize);
+ /* iv */
+ virt_ctx->iv[0] = 1; /* L' */
+ memcpy(virt_ctx->iv + 1, ccmData->nonce, NIST_AESCCM_NONCE_SIZE);
+ virt_ctx->iv[15] = 1;
+ /* ctr_count_0 */
+ memcpy(virt_ctx->ctr_cnt_0, virt_ctx->iv, NIST_AESCCM_IV_SIZE);
+ virt_ctx->ctr_cnt_0[15] = 0;
+ }
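+		/*
+		 * Per RFC 3610 / SP 800-38C: B0 carries the flags byte, the
+		 * nonce, and l(m) as a trailing 16-bit big-endian length field,
+		 * while the counter blocks A_i share the nonce with a flags
+		 * byte of q - 1 (here q = 2, matching the 16-bit l(m) field).
+		 * A_1 (iv, counter = 1) keys the payload encryption and A_0
+		 * (ctr_cnt_0, counter = 0) encrypts the CBC-MAC tag.
+		 */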
+
+ FIPS_DBG("ssi_ccm_fips_run_test - (i = %d) \n", i);
+ rc = ssi_ccm_fips_run_test(drvdata,
+ ccmData->direction,
+ key_dma_addr,
+ ccmData->keySize,
+ iv_dma_addr,
+ ctr_cnt_0_dma_addr,
+ b0_a0_adata_dma_addr,
+ FIPS_CCM_B0_A0_ADATA_SIZE,
+ din_dma_addr,
+ ccmData->dataInSize,
+ dout_dma_addr,
+ mac_res_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_ccm_fips_run_test %zu returned error - rc = %d\n", i, rc);
+ error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+ break;
+ }
+
+ /* compare actual dout to expected */
+		if (memcmp(virt_ctx->dout, ccmData->dataOut, ccmData->dataInSize) != 0) {
+			FIPS_LOG("dout comparison error %zu - size=%d\n", i, ccmData->dataInSize);
+ error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+ break;
+ }
+
+ /* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, ccmData->macResOut, ccmData->tagSize) != 0) {
+			FIPS_LOG("mac_res comparison error %zu - mac_size=%d\n", i, ccmData->tagSize);
+			FIPS_LOG(" i  expected  received\n");
+			FIPS_LOG(" i  %p  %p\n", ccmData->macResOut, virt_ctx->mac_res);
+			for (i = 0; i < ccmData->tagSize; ++i) {
+				FIPS_LOG(" %zu  0x%02x  0x%02x\n", i, ccmData->macResOut[i], virt_ctx->mac_res[i]);
+			}
+
+ error = CC_REE_FIPS_ERROR_AESCCM_PUT;
+ break;
+ }
+ }
+
+ return error;
+}
+
+
+static inline int
+ssi_gcm_fips_run_test(struct ssi_drvdata *drvdata,
+ enum drv_crypto_direction direction,
+ dma_addr_t key_dma_addr,
+ size_t key_size,
+ dma_addr_t hkey_dma_addr,
+ dma_addr_t block_len_dma_addr,
+ dma_addr_t iv_inc1_dma_addr,
+ dma_addr_t iv_inc2_dma_addr,
+ dma_addr_t adata_dma_addr,
+ size_t adata_size,
+ dma_addr_t din_dma_addr,
+ size_t din_size,
+ dma_addr_t dout_dma_addr,
+ dma_addr_t mac_res_dma_addr)
+{
+ /* max number of descriptors used for the flow */
+ #define FIPS_GCM_MAX_SEQ_LEN 15
+
+ int rc;
+ struct ssi_crypto_req ssi_req = {0};
+ HwDesc_s desc[FIPS_GCM_MAX_SEQ_LEN];
+ unsigned int idx = 0;
+ unsigned int cipher_flow_mode;
+
+ if (direction == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ cipher_flow_mode = AES_and_HASH;
+ } else { /* Encrypt */
+ cipher_flow_mode = AES_to_HASH_and_DOUT;
+ }
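+	/*
+	 * Note: GCM's GHASH is computed over the ciphertext, so on encrypt
+	 * the GCTR output must be routed on to the hash engine
+	 * (AES_to_HASH_and_DOUT), while on decrypt the incoming ciphertext
+	 * can feed the AES and HASH engines in parallel (AES_and_HASH).
+	 */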
+
+	/* Stage 1: GHASH setup (mirrors ssi_aead_gcm_setup_ghash_desc()) */
+
+	/* load key to AES */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx],
+ DMA_DLLI, key_dma_addr, key_size,
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* process one zero block to generate hkey */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ hkey_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
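+	/* per NIST SP 800-38D, the GHASH subkey is H = CIPH_K(0^128) */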
+
+ /* Memory Barrier */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Load GHASH subkey */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ hkey_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+	/*
+	 * Configure the hash engine to work with GHASH. Since it was not
+	 * possible to extend the HASH submodes to add GHASH, the following
+	 * command is necessary to select GHASH (according to the HW designers).
+	 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_CIPHER_DO(&desc[idx], 1); /* 1 = AES_SK RKEK */
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+	/* Load the GHASH initial state (which is 0; every hash has an initial state) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+
+
+	/*
+	 * Stage 2: process (GHASH) the assoc data
+	 * (mirrors ssi_aead_create_assoc_desc() when req->assoclen > 0)
+	 */
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ adata_dma_addr, adata_size,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+
+	/* Stage 3: GCTR setup (mirrors ssi_aead_gcm_setup_gctr_desc()) */
+
+	/* load key to AES */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr, key_size,
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* load AES/CTR initial CTR value inc by 2*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ iv_inc2_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+
+	/*
+	 * Stage 4: process (GCTR + GHASH) the cipher data
+	 * (mirrors ssi_aead_process_cipher_data_desc() when req_ctx->cryptlen != 0)
+	 */
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ din_dma_addr, din_size,
+ NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ dout_dma_addr, din_size,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], cipher_flow_mode);
+ idx++;
+
+
+	/* Stage 5: produce the GCM result (mirrors ssi_aead_process_gcm_result_desc()) */
+
+	/* process (GHASH) gcm_block_len */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ block_len_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* Store GHASH state after GHASH(Associated Data + Cipher +LenBlock) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ mac_res_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT, 0);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* load AES/CTR initial CTR value inc by 1*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], key_size);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ iv_inc1_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Memory Barrier */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+	/* process GCTR on the stored GHASH and store the MAC in place */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ mac_res_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ mac_res_dma_addr, AES_BLOCK_SIZE,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* perform the operation - Lock HW and push sequence */
+ BUG_ON(idx > FIPS_GCM_MAX_SEQ_LEN);
+ rc = send_request(drvdata, &ssi_req, desc, idx, false);
+
+ return rc;
+}
+
+ssi_fips_error_t
+ssi_gcm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer)
+{
+ ssi_fips_error_t error = CC_REE_FIPS_ERROR_OK;
+ size_t i;
+ struct fips_gcm_ctx *virt_ctx = (struct fips_gcm_ctx *)cpu_addr_buffer;
+
+	/* set the physical pointers */
+ dma_addr_t adata_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, adata);
+ dma_addr_t key_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, key);
+ dma_addr_t hkey_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, hkey);
+ dma_addr_t din_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, din);
+ dma_addr_t dout_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, dout);
+ dma_addr_t mac_res_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, mac_res);
+ dma_addr_t len_block_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, len_block);
+ dma_addr_t iv_inc1_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, iv_inc1);
+ dma_addr_t iv_inc2_dma_addr = dma_coherent_buffer + offsetof(struct fips_gcm_ctx, iv_inc2);
+
+	for (i = 0; i < FIPS_GCM_NUM_OF_TESTS; ++i) {
+		FipsGcmData *gcmData = (FipsGcmData *)&FipsGcmDataTable[i];
+ int rc = 0;
+
+ memset(cpu_addr_buffer, 0, sizeof(struct fips_gcm_ctx));
+
+ /* copy the key, adata, din data - into the allocated buffer */
+ memcpy(virt_ctx->key, gcmData->key, gcmData->keySize);
+ memcpy(virt_ctx->adata, gcmData->adata, gcmData->adataSize);
+ memcpy(virt_ctx->din, gcmData->dataIn, gcmData->dataInSize);
+
+ /* len_block */
+ {
+ __be64 len_bits;
+ len_bits = cpu_to_be64(gcmData->adataSize * 8);
+ memcpy(virt_ctx->len_block, &len_bits, sizeof(len_bits));
+ len_bits = cpu_to_be64(gcmData->dataInSize * 8);
+ memcpy(virt_ctx->len_block + 8, &len_bits, sizeof(len_bits));
+ }
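+		/*
+		 * This is the final GHASH input block of SP 800-38D:
+		 * [len(A)]_64 || [len(C)]_64, both in bits, big-endian.
+		 */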
+ /* iv_inc1, iv_inc2 */
+ {
+ __be32 counter = cpu_to_be32(1);
+ memcpy(virt_ctx->iv_inc1, gcmData->iv, NIST_AESGCM_IV_SIZE);
+ memcpy(virt_ctx->iv_inc1 + NIST_AESGCM_IV_SIZE, &counter, sizeof(counter));
+ counter = cpu_to_be32(2);
+ memcpy(virt_ctx->iv_inc2, gcmData->iv, NIST_AESGCM_IV_SIZE);
+ memcpy(virt_ctx->iv_inc2 + NIST_AESGCM_IV_SIZE, &counter, sizeof(counter));
+ }
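+		/*
+		 * For a 96-bit IV, SP 800-38D defines J0 = IV || 0^31 || 1:
+		 * iv_inc1 is J0 itself (used to encrypt the final GHASH into
+		 * the tag) and iv_inc2 is inc32(J0), the first counter block
+		 * of the payload GCTR.
+		 */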
+
+ FIPS_DBG("ssi_gcm_fips_run_test - (i = %d) \n", i);
+ rc = ssi_gcm_fips_run_test(drvdata,
+ gcmData->direction,
+ key_dma_addr,
+ gcmData->keySize,
+ hkey_dma_addr,
+ len_block_dma_addr,
+ iv_inc1_dma_addr,
+ iv_inc2_dma_addr,
+ adata_dma_addr,
+ gcmData->adataSize,
+ din_dma_addr,
+ gcmData->dataInSize,
+ dout_dma_addr,
+ mac_res_dma_addr);
+		if (rc != 0) {
+			FIPS_LOG("ssi_gcm_fips_run_test %zu returned error - rc = %d\n", i, rc);
+ error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+ break;
+ }
+
+ if (gcmData->direction == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ /* compare actual dout to expected */
+			if (memcmp(virt_ctx->dout, gcmData->dataOut, gcmData->dataInSize) != 0) {
+				FIPS_LOG("dout comparison error %zu - size=%d\n", i, gcmData->dataInSize);
+				FIPS_LOG(" i  expected  received\n");
+				FIPS_LOG(" i  %p  %p\n", gcmData->dataOut, virt_ctx->dout);
+				for (i = 0; i < gcmData->dataInSize; ++i) {
+					FIPS_LOG(" %zu  0x%02x  0x%02x\n", i, gcmData->dataOut[i], virt_ctx->dout[i]);
+				}
+
+ error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+ break;
+ }
+ }
+
+ /* compare actual mac result to expected */
+		if (memcmp(virt_ctx->mac_res, gcmData->macResOut, gcmData->tagSize) != 0) {
+			FIPS_LOG("mac_res comparison error %zu - mac_size=%d\n", i, gcmData->tagSize);
+			FIPS_LOG(" i  expected  received\n");
+			FIPS_LOG(" i  %p  %p\n", gcmData->macResOut, virt_ctx->mac_res);
+			for (i = 0; i < gcmData->tagSize; ++i) {
+				FIPS_LOG(" %zu  0x%02x  0x%02x\n", i, gcmData->macResOut[i], virt_ctx->mac_res[i]);
+			}
+
+ error = CC_REE_FIPS_ERROR_AESGCM_PUT;
+ break;
+ }
+ }
+ return error;
+}
+
+
+size_t ssi_fips_max_mem_alloc_size(void)
+{
+ FIPS_DBG("sizeof(struct fips_cipher_ctx) %d \n", sizeof(struct fips_cipher_ctx));
+ FIPS_DBG("sizeof(struct fips_cmac_ctx) %d \n", sizeof(struct fips_cmac_ctx));
+ FIPS_DBG("sizeof(struct fips_hash_ctx) %d \n", sizeof(struct fips_hash_ctx));
+ FIPS_DBG("sizeof(struct fips_hmac_ctx) %d \n", sizeof(struct fips_hmac_ctx));
+ FIPS_DBG("sizeof(struct fips_ccm_ctx) %d \n", sizeof(struct fips_ccm_ctx));
+ FIPS_DBG("sizeof(struct fips_gcm_ctx) %d \n", sizeof(struct fips_gcm_ctx));
+
+ return sizeof(fips_ctx);
+}
+
diff --git a/drivers/staging/ccree/ssi_fips_local.c b/drivers/staging/ccree/ssi_fips_local.c
new file mode 100644
index 0000000..508031c
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_local.c
@@ -0,0 +1,369 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/*
+ * This file defines the driver's internal FIPS functions, used by the
+ * driver itself.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/des.h>
+
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "cc_hal.h"
+
+
+#define FIPS_POWER_UP_TEST_CIPHER 1
+#define FIPS_POWER_UP_TEST_CMAC 1
+#define FIPS_POWER_UP_TEST_HASH 1
+#define FIPS_POWER_UP_TEST_HMAC 1
+#define FIPS_POWER_UP_TEST_CCM 1
+#define FIPS_POWER_UP_TEST_GCM 1
+
+static bool ssi_fips_support = 1;
+module_param(ssi_fips_support, bool, 0644);
+MODULE_PARM_DESC(ssi_fips_support, "FIPS supported flag: 0 - off , 1 - on (default)");
+
+static void fips_dsr(unsigned long devarg);
+
+struct ssi_fips_handle {
+#ifdef COMP_IN_WQ
+ struct workqueue_struct *workq;
+ struct delayed_work fipswork;
+#else
+ struct tasklet_struct fipstask;
+#endif
+};
+
+
+extern int ssi_fips_get_state(ssi_fips_state_t *p_state);
+extern int ssi_fips_get_error(ssi_fips_error_t *p_err);
+extern int ssi_fips_ext_set_state(ssi_fips_state_t state);
+extern int ssi_fips_ext_set_error(ssi_fips_error_t err);
+
+/* FIPS power-up tests */
+extern ssi_fips_error_t ssi_cipher_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_cmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_hash_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_hmac_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_ccm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern ssi_fips_error_t ssi_gcm_fips_power_up_tests(struct ssi_drvdata *drvdata, void *cpu_addr_buffer, dma_addr_t dma_coherent_buffer);
+extern size_t ssi_fips_max_mem_alloc_size(void);
+
+
+/*
+ * This function is called once at the driver entry point to check whether a
+ * TEE FIPS error occurred.
+ */
+static enum ssi_fips_error ssi_fips_get_tee_error(struct ssi_drvdata *drvdata)
+{
+ uint32_t regVal;
+ void __iomem *cc_base = drvdata->cc_base;
+
+ regVal = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, GPR_HOST));
+ if (regVal == (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK)) {
+ return CC_REE_FIPS_ERROR_OK;
+ }
+ return CC_REE_FIPS_ERROR_FROM_TEE;
+}
+
+
+/*
+ * This function pushes the FIPS REE library status towards the TEE library
+ * by writing the error state to the HOST_GPR0 register. It is called from
+ * the driver entry point, so there is no need to protect it with a mutex.
+ */
+static void ssi_fips_update_tee_upon_ree_status(struct ssi_drvdata *drvdata, ssi_fips_error_t err)
+{
+ void __iomem *cc_base = drvdata->cc_base;
+ if (err == CC_REE_FIPS_ERROR_OK) {
+ CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_GPR0), (CC_FIPS_SYNC_REE_STATUS|CC_FIPS_SYNC_MODULE_OK));
+ } else {
+ CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_GPR0), (CC_FIPS_SYNC_REE_STATUS|CC_FIPS_SYNC_MODULE_ERROR));
+ }
+}
+
+
+
+void ssi_fips_fini(struct ssi_drvdata *drvdata)
+{
+ struct ssi_fips_handle *fips_h = drvdata->fips_handle;
+
+ if (fips_h == NULL)
+ return; /* Not allocated */
+
+#ifdef COMP_IN_WQ
+ if (fips_h->workq != NULL) {
+ flush_workqueue(fips_h->workq);
+ destroy_workqueue(fips_h->workq);
+ }
+#else
+ /* Kill tasklet */
+ tasklet_kill(&fips_h->fipstask);
+#endif
+ memset(fips_h, 0, sizeof(struct ssi_fips_handle));
+ kfree(fips_h);
+ drvdata->fips_handle = NULL;
+}
+
+void fips_handler(struct ssi_drvdata *drvdata)
+{
+ struct ssi_fips_handle *fips_handle_ptr =
+ drvdata->fips_handle;
+#ifdef COMP_IN_WQ
+ queue_delayed_work(fips_handle_ptr->workq, &fips_handle_ptr->fipswork, 0);
+#else
+ tasklet_schedule(&fips_handle_ptr->fipstask);
+#endif
+}
+
+
+
+#ifdef COMP_IN_WQ
+static void fips_wq_handler(struct work_struct *work)
+{
+ struct ssi_drvdata *drvdata =
+ container_of(work, struct ssi_drvdata, fipswork.work);
+
+ fips_dsr((unsigned long)drvdata);
+}
+#endif
+
+/* Deferred service handler, run as interrupt-fired tasklet */
+static void fips_dsr(unsigned long devarg)
+{
+ struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
+ void __iomem *cc_base = drvdata->cc_base;
+ uint32_t irq;
+ uint32_t teeFipsError = 0;
+
+ irq = (drvdata->irq & (SSI_GPR0_IRQ_MASK));
+
+ if (irq & SSI_GPR0_IRQ_MASK) {
+ teeFipsError = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, GPR_HOST));
+ if (teeFipsError != (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK)) {
+ ssi_fips_set_error(drvdata, CC_REE_FIPS_ERROR_FROM_TEE);
+ }
+ }
+
+	/* after verifying that there is nothing to do, unmask the AXI completion interrupt */
+ CC_HAL_WRITE_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_IMR),
+ CC_HAL_READ_REGISTER(
+ CC_REG_OFFSET(HOST_RGF, HOST_IMR)) & ~irq);
+}
+
+
+ssi_fips_error_t cc_fips_run_power_up_tests(struct ssi_drvdata *drvdata)
+{
+ ssi_fips_error_t fips_error = CC_REE_FIPS_ERROR_OK;
+	void *cpu_addr_buffer = NULL;
+ dma_addr_t dma_handle;
+ size_t alloc_buff_size = ssi_fips_max_mem_alloc_size();
+ struct device *dev = &drvdata->plat_dev->dev;
+
+	/*
+	 * Allocate memory using dma_alloc_coherent - a physically contiguous,
+	 * cache-coherent buffer (no memory mapping is needed).
+	 * The return value is the virtual address - use it to copy data into
+	 * the buffer; dma_handle is the returned physical (DMA) address - use
+	 * it in the HW descriptors.
+	 */
+ FIPS_DBG("dma_alloc_coherent \n");
+ cpu_addr_buffer = dma_alloc_coherent(dev, alloc_buff_size, &dma_handle, GFP_KERNEL);
+ if (cpu_addr_buffer == NULL) {
+ return CC_REE_FIPS_ERROR_GENERAL;
+ }
+ FIPS_DBG("allocated coherent buffer - addr 0x%08X , size = %d \n", (size_t)cpu_addr_buffer, alloc_buff_size);
+
+#if FIPS_POWER_UP_TEST_CIPHER
+ FIPS_DBG("ssi_cipher_fips_power_up_tests ...\n");
+ fips_error = ssi_cipher_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_cipher_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+#endif
+#if FIPS_POWER_UP_TEST_CMAC
+ if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+ FIPS_DBG("ssi_cmac_fips_power_up_tests ...\n");
+ fips_error = ssi_cmac_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_cmac_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+ }
+#endif
+#if FIPS_POWER_UP_TEST_HASH
+ if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+ FIPS_DBG("ssi_hash_fips_power_up_tests ...\n");
+ fips_error = ssi_hash_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_hash_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+ }
+#endif
+#if FIPS_POWER_UP_TEST_HMAC
+ if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+ FIPS_DBG("ssi_hmac_fips_power_up_tests ...\n");
+ fips_error = ssi_hmac_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_hmac_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+ }
+#endif
+#if FIPS_POWER_UP_TEST_CCM
+ if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+ FIPS_DBG("ssi_ccm_fips_power_up_tests ...\n");
+ fips_error = ssi_ccm_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_ccm_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+ }
+#endif
+#if FIPS_POWER_UP_TEST_GCM
+ if (likely(fips_error == CC_REE_FIPS_ERROR_OK)) {
+ FIPS_DBG("ssi_gcm_fips_power_up_tests ...\n");
+ fips_error = ssi_gcm_fips_power_up_tests(drvdata, cpu_addr_buffer, dma_handle);
+ FIPS_DBG("ssi_gcm_fips_power_up_tests - done. (fips_error = %d) \n", fips_error);
+ }
+#endif
+ /* deallocate the buffer when all tests are done... */
+ FIPS_DBG("dma_free_coherent \n");
+ dma_free_coherent(dev, alloc_buff_size, cpu_addr_buffer, dma_handle);
+
+ return fips_error;
+}
+
+
+
+/*
+ * This function checks whether FIPS is supported and whether a FIPS error
+ * exists. It should be called from every driver API entry point.
+ */
+int ssi_fips_check_fips_error(void)
+{
+ ssi_fips_state_t fips_state;
+
+ if (ssi_fips_get_state(&fips_state) != 0) {
+ FIPS_LOG("ssi_fips_get_state FAILED, returning.. \n");
+ return -ENOEXEC;
+ }
+ if (fips_state == CC_FIPS_STATE_ERROR) {
+ FIPS_LOG("ssi_fips_get_state: fips_state is %d, returning.. \n", fips_state);
+ return -ENOEXEC;
+ }
+ return 0;
+}
+
+
+/*
+ * This function sets the REE FIPS state.
+ * It should be used while the driver is being loaded.
+ */
+int ssi_fips_set_state(ssi_fips_state_t state)
+{
+ return ssi_fips_ext_set_state(state);
+}
+
+/*
+ * This function sets the REE FIPS error and pushes the error to the TEE
+ * library. It should be used when any of the KAT tests fails.
+ */
+int ssi_fips_set_error(struct ssi_drvdata *p_drvdata, ssi_fips_error_t err)
+{
+ int rc = 0;
+ ssi_fips_error_t current_err;
+
+ FIPS_LOG("ssi_fips_set_error - fips_error = %d \n", err);
+
+	/* setting "no error" is not allowed */
+ if (err == CC_REE_FIPS_ERROR_OK) {
+ return -ENOEXEC;
+ }
+	/* if an error already exists, do not set a new one */
+ if (ssi_fips_get_error(&current_err) != 0) {
+ return -ENOEXEC;
+ }
+ if (current_err != CC_REE_FIPS_ERROR_OK) {
+ return -ENOEXEC;
+ }
+	/* set REE internal error and state */
+ rc = ssi_fips_ext_set_error(err);
+ if (rc != 0) {
+ return -ENOEXEC;
+ }
+ rc = ssi_fips_ext_set_state(CC_FIPS_STATE_ERROR);
+ if (rc != 0) {
+ return -ENOEXEC;
+ }
+
+	/* push the error towards the TEE library, unless it is a TEE error */
+ if (err != CC_REE_FIPS_ERROR_FROM_TEE) {
+ ssi_fips_update_tee_upon_ree_status(p_drvdata, err);
+ }
+ return rc;
+}
+
+
+/* This function is called once at the driver entry point. */
+int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+{
+	int rc = CC_REE_FIPS_ERROR_OK;
+ struct ssi_fips_handle *fips_h;
+
+ FIPS_DBG("CC FIPS code .. (fips=%d) \n", ssi_fips_support);
+
+	fips_h = kzalloc(sizeof(struct ssi_fips_handle), GFP_KERNEL);
+ if (fips_h == NULL) {
+ ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+ return -ENOMEM;
+ }
+
+ p_drvdata->fips_handle = fips_h;
+
+#ifdef COMP_IN_WQ
+ SSI_LOG_DEBUG("Initializing fips workqueue\n");
+ fips_h->workq = create_singlethread_workqueue("arm_cc7x_fips_wq");
+ if (unlikely(fips_h->workq == NULL)) {
+ SSI_LOG_ERR("Failed creating fips work queue\n");
+ ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+ rc = -ENOMEM;
+ goto ssi_fips_init_err;
+ }
+ INIT_DELAYED_WORK(&fips_h->fipswork, fips_wq_handler);
+#else
+ SSI_LOG_DEBUG("Initializing fips tasklet\n");
+ tasklet_init(&fips_h->fipstask, fips_dsr, (unsigned long)p_drvdata);
+#endif
+
+ /* init fips driver data */
+	rc = ssi_fips_set_state((ssi_fips_support == 0) ? CC_FIPS_STATE_NOT_SUPPORTED : CC_FIPS_STATE_SUPPORTED);
+ if (unlikely(rc != 0)) {
+ ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+ rc = -EAGAIN;
+ goto ssi_fips_init_err;
+ }
+
+ /* Run power up tests (before registration and operating the HW engines) */
+ FIPS_DBG("ssi_fips_get_tee_error \n");
+ rc = ssi_fips_get_tee_error(p_drvdata);
+ if (unlikely(rc != CC_REE_FIPS_ERROR_OK)) {
+ ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_FROM_TEE);
+ rc = -EAGAIN;
+ goto ssi_fips_init_err;
+ }
+
+ FIPS_DBG("cc_fips_run_power_up_tests \n");
+ rc = cc_fips_run_power_up_tests(p_drvdata);
+ if (unlikely(rc != CC_REE_FIPS_ERROR_OK)) {
+ ssi_fips_set_error(p_drvdata, rc);
+ rc = -EAGAIN;
+ goto ssi_fips_init_err;
+ }
+ FIPS_LOG("cc_fips_run_power_up_tests - done ... fips_error = %d \n", rc);
+
+ /* when all tests passed, update TEE with fips OK status after power up tests */
+ ssi_fips_update_tee_upon_ree_status(p_drvdata, CC_REE_FIPS_ERROR_OK);
+
+ if (unlikely(rc != 0)) {
+ rc = -EAGAIN;
+ ssi_fips_set_error(p_drvdata, CC_REE_FIPS_ERROR_GENERAL);
+ goto ssi_fips_init_err;
+ }
+
+ return 0;
+
+ssi_fips_init_err:
+ ssi_fips_fini(p_drvdata);
+ return rc;
+}
+
diff --git a/drivers/staging/ccree/ssi_fips_local.h b/drivers/staging/ccree/ssi_fips_local.h
new file mode 100644
index 0000000..d82e6b5
--- /dev/null
+++ b/drivers/staging/ccree/ssi_fips_local.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __SSI_FIPS_LOCAL_H__
+#define __SSI_FIPS_LOCAL_H__
+
+
+#ifdef CONFIG_CCX7REE_FIPS_SUPPORT
+
+#include "ssi_fips.h"
+struct ssi_drvdata;
+
+/* TODO (IG): figure out how to share one file between TEE and REE */
+typedef enum CC_FipsSyncStatus {
+	CC_FIPS_SYNC_MODULE_OK = 0x0,
+	CC_FIPS_SYNC_MODULE_ERROR = 0x1,
+	CC_FIPS_SYNC_REE_STATUS = 0x4,
+	CC_FIPS_SYNC_TEE_STATUS = 0x8,
+	CC_FIPS_SYNC_STATUS_RESERVE32B = INT32_MAX
+} CCFipsSyncStatus_t;
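+
+/*
+ * Sync protocol, as used in ssi_fips_local.c: the REE driver reports its
+ * status by writing (CC_FIPS_SYNC_REE_STATUS | CC_FIPS_SYNC_MODULE_OK or
+ * CC_FIPS_SYNC_MODULE_ERROR) to the HOST_GPR0 register, and reads the TEE
+ * status, expected to be (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK),
+ * from the GPR_HOST register.
+ */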
+
+
+#define CHECK_AND_RETURN_UPON_FIPS_ERROR() {\
+ if (ssi_fips_check_fips_error() != 0) {\
+ return -ENOEXEC;\
+ }\
+}
+#define CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR() {\
+ if (ssi_fips_check_fips_error() != 0) {\
+ return;\
+ }\
+}
+#define SSI_FIPS_INIT(p_drvData) (ssi_fips_init(p_drvData))
+#define SSI_FIPS_FINI(p_drvData) (ssi_fips_fini(p_drvData))
+
+#define FIPS_LOG(...) SSI_LOG(KERN_INFO, __VA_ARGS__)
+#define FIPS_DBG(...) //SSI_LOG(KERN_INFO, __VA_ARGS__)
+
+/* FIPS functions */
+int ssi_fips_init(struct ssi_drvdata *p_drvdata);
+void ssi_fips_fini(struct ssi_drvdata *drvdata);
+int ssi_fips_check_fips_error(void);
+int ssi_fips_set_error(struct ssi_drvdata *p_drvdata, ssi_fips_error_t err);
+void fips_handler(struct ssi_drvdata *drvdata);
+
+#else /* CONFIG_CCX7REE_FIPS_SUPPORT */
+
+#define CHECK_AND_RETURN_UPON_FIPS_ERROR()
+#define CHECK_AND_RETURN_VOID_UPON_FIPS_ERROR()
+
+static inline int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+{
+ return 0;
+}
+
+static inline void ssi_fips_fini(struct ssi_drvdata *drvdata) {}
+
+void fips_handler(struct ssi_drvdata *drvdata);
+
+#endif /* CONFIG_CCX7REE_FIPS_SUPPORT */
+
+
+#endif /*__SSI_FIPS_LOCAL_H__*/
+
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index cb7fde7..dd06f50 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -30,6 +30,7 @@
#include "ssi_sysfs.h"
#include "ssi_hash.h"
#include "ssi_sram_mgr.h"
+#include "ssi_fips_local.h"

#define SSI_MAX_AHASH_SEQ_LEN 12
#define SSI_MAX_HASH_OPAD_TMP_KEYS_SIZE MAX(SSI_MAX_HASH_BLCK_SIZE, 3 * AES_BLOCK_SIZE)
@@ -467,6 +468,8 @@ static int ssi_hash_digest(struct ahash_req_ctx *state,

SSI_LOG_DEBUG("===== %s-digest (%d) ====\n", is_hmac?"hmac":"hash", nbytes);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
SSI_LOG_ERR("map_ahash_source() failed\n");
return -ENOMEM;
@@ -623,6 +626,7 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
SSI_LOG_DEBUG("===== %s-update (%d) ====\n", ctx->is_hmac ?
"hmac":"hash", nbytes);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
if (nbytes == 0) {
/* no real updates required */
return 0;
@@ -719,6 +723,8 @@ static int ssi_hash_finup(struct ahash_req_ctx *state,

SSI_LOG_DEBUG("===== %s-finup (%d) ====\n", is_hmac?"hmac":"hash", nbytes);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src , nbytes, 1) != 0)) {
SSI_LOG_ERR("map_ahash_request_final() failed\n");
return -ENOMEM;
@@ -848,6 +854,8 @@ static int ssi_hash_final(struct ahash_req_ctx *state,

SSI_LOG_DEBUG("===== %s-final (%d) ====\n", is_hmac?"hmac":"hash", nbytes);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
+
if (unlikely(ssi_buffer_mgr_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0) != 0)) {
SSI_LOG_ERR("map_ahash_request_final() failed\n");
return -ENOMEM;
@@ -975,6 +983,7 @@ static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
struct device *dev = &ctx->drvdata->plat_dev->dev;
state->xcbc_count = 0;

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
ssi_hash_map_request(dev, state, ctx);

return 0;
@@ -983,12 +992,14 @@ static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
#ifdef EXPORT_FIXED
static int ssi_hash_export(struct ssi_hash_ctx *ctx, void *out)
{
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
memcpy(out, ctx, sizeof(struct ssi_hash_ctx));
return 0;
}

static int ssi_hash_import(struct ssi_hash_ctx *ctx, const void *in)
{
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
memcpy(ctx, in, sizeof(struct ssi_hash_ctx));
return 0;
}
@@ -1010,6 +1021,7 @@ static int ssi_hash_setkey(void *hash,

SSI_LOG_DEBUG("ssi_hash_setkey: start keylen: %d", keylen);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
if (synchronize) {
ctx = crypto_shash_ctx(((struct crypto_shash *)hash));
blocksize = crypto_tfm_alg_blocksize(&((struct crypto_shash *)hash)->base);
@@ -1218,6 +1230,7 @@ static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
HwDesc_s desc[SSI_MAX_AHASH_SEQ_LEN];

SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();

switch (keylen) {
case AES_KEYSIZE_128:
@@ -1303,6 +1316,7 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash,
struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
DECL_CYCLE_COUNT_RESOURCES;
SSI_LOG_DEBUG("===== setkey (%d) ====\n", keylen);
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();

ctx->is_hmac = true;

@@ -1418,6 +1432,7 @@ static int ssi_shash_cra_init(struct crypto_tfm *tfm)
struct ssi_hash_alg *ssi_alg =
container_of(shash_alg, struct ssi_hash_alg, shash_alg);

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
ctx->hash_mode = ssi_alg->hash_mode;
ctx->hw_mode = ssi_alg->hw_mode;
ctx->inter_digestsize = ssi_alg->inter_digestsize;
@@ -1437,6 +1452,7 @@ static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
container_of(ahash_alg, struct ssi_hash_alg, ahash_alg);


+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct ahash_req_ctx));

@@ -1468,6 +1484,7 @@ static int ssi_mac_update(struct ahash_request *req)
int rc;
uint32_t idx = 0;

+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
if (req->nbytes == 0) {
/* no real updates required */
return 0;
@@ -1535,6 +1552,7 @@ static int ssi_mac_final(struct ahash_request *req)
state->buff0_cnt;


+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
keySize = CC_AES_128_BIT_KEY_SIZE;
keyLen = CC_AES_128_BIT_KEY_SIZE;
@@ -1645,7 +1663,7 @@ static int ssi_mac_finup(struct ahash_request *req)
uint32_t digestsize = crypto_ahash_digestsize(tfm);

SSI_LOG_DEBUG("===== finup xcbc(%d) ====\n", req->nbytes);
-
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();
if (state->xcbc_count > 0 && req->nbytes == 0) {
SSI_LOG_DEBUG("No data to update. Call to fdx_mac_final \n");
return ssi_mac_final(req);
@@ -1718,6 +1736,7 @@ static int ssi_mac_digest(struct ahash_request *req)
int rc;

SSI_LOG_DEBUG("===== -digest mac (%d) ====\n", req->nbytes);
+ CHECK_AND_RETURN_UPON_FIPS_ERROR();

if (unlikely(ssi_hash_map_request(dev, state, ctx) != 0)) {
SSI_LOG_ERR("map_ahash_source() failed\n");
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index c19c006..925bc0b 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -30,6 +30,8 @@
#include "ssi_sysfs.h"
#include "ssi_ivgen.h"
#include "ssi_pm.h"
+#include "ssi_fips.h"
+#include "ssi_fips_local.h"

#define SSI_MAX_POLL_ITER 10

--
2.1.4

2017-04-20 13:13:02

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 8/9] staging: ccree: add DT bindings for Arm CryptoCell

This adds DT bindings for the Arm TrustZone CryptoCell cryptographic
accelerator IP.

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
.../devicetree/bindings/crypto/arm-cryptocell.txt | 27 ++++++++++++++++++++++
1 file changed, 27 insertions(+)
create mode 100644 drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt

diff --git a/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
new file mode 100644
index 0000000..2ea6517
--- /dev/null
+++ b/drivers/staging/ccree/Documentation/devicetree/bindings/crypto/arm-cryptocell.txt
@@ -0,0 +1,27 @@
+Arm TrustZone CryptoCell cryptographic accelerators
+
+Required properties:
+- compatible: must be "arm,cryptocell-712-ree".
+- reg: shall contain base register location and length.
+ Typically length is 0x10000.
+- interrupts: shall contain the interrupt for the device.
+
+Optional properties:
+- interrupt-parent: can designate the interrupt controller the
+ device interrupt is connected to, if needed.
+- clocks: may contain the clock handling the device, if needed.
+- power-domains: may contain a reference to the PM domain, if applicable.
+
+
+Examples:
+
+Zynq FPGA device
+----------------
+
+ arm_cc7x: arm_cc7x@80000000 {
+ compatible = "arm,cryptocell-712-ree";
+ interrupt-parent = <&intc>;
+ interrupts = < 0 30 4 >;
+ reg = < 0x80000000 0x10000 >;
+ };
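+
+Hypothetical platform with clock and power domain
+-------------------------------------------------
+
+(Illustrative only: the unit address and the clock and power-domain
+phandles below are made-up placeholders, showing how the optional
+properties fit together.)
+
+	arm_cc7x: arm_cc7x@60010000 {
+		compatible = "arm,cryptocell-712-ree";
+		interrupt-parent = <&intc>;
+		interrupts = < 0 30 4 >;
+		reg = < 0x60010000 0x10000 >;
+		clocks = <&cc7x_clk>;
+		power-domains = <&cc7x_pd>;
+	};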
+
--
2.1.4

2017-04-20 13:12:59

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 5/9] staging: ccree: add AEAD support

Add CryptoCell AEAD support

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/staging/ccree/Kconfig | 1 +
drivers/staging/ccree/Makefile | 2 +-
drivers/staging/ccree/cc_crypto_ctx.h | 21 +
drivers/staging/ccree/ssi_aead.c | 2826 ++++++++++++++++++++++++++++++++
drivers/staging/ccree/ssi_aead.h | 120 ++
drivers/staging/ccree/ssi_buffer_mgr.c | 899 ++++++++++
drivers/staging/ccree/ssi_buffer_mgr.h | 4 +
drivers/staging/ccree/ssi_driver.c | 11 +
drivers/staging/ccree/ssi_driver.h | 4 +
9 files changed, 3887 insertions(+), 1 deletion(-)
create mode 100644 drivers/staging/ccree/ssi_aead.c
create mode 100644 drivers/staging/ccree/ssi_aead.h

diff --git a/drivers/staging/ccree/Kconfig b/drivers/staging/ccree/Kconfig
index 3fff040..2d11223 100644
--- a/drivers/staging/ccree/Kconfig
+++ b/drivers/staging/ccree/Kconfig
@@ -5,6 +5,7 @@ config CRYPTO_DEV_CCREE
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_DES
+ select CRYPTO_AEAD
select CRYPTO_AUTHENC
select CRYPTO_SHA1
select CRYPTO_MD5
diff --git a/drivers/staging/ccree/Makefile b/drivers/staging/ccree/Makefile
index 89afe9a..b9285c0 100644
--- a/drivers/staging/ccree/Makefile
+++ b/drivers/staging/ccree/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_CRYPTO_DEV_CCREE) := ccree.o
-ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
+ccree-y := ssi_driver.o ssi_sysfs.o ssi_buffer_mgr.o ssi_request_mgr.o ssi_cipher.o ssi_hash.o ssi_aead.o ssi_ivgen.o ssi_sram_mgr.o ssi_pm.o ssi_pm_ext.o
diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index f198779..743461f 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -263,6 +263,27 @@ struct drv_ctx_cipher {
(CC_AES_KEY_SIZE_MAX/sizeof(uint32_t))];
};

+/* authentication and encryption with associated data class */
+struct drv_ctx_aead {
+ enum drv_crypto_alg alg; /* DRV_CRYPTO_ALG_AES */
+ enum drv_cipher_mode mode;
+ enum drv_crypto_direction direction;
+ uint32_t key_size; /* numeric value in bytes */
+ uint32_t nonce_size; /* nonce size (octets) */
+	uint32_t header_size; /* finite additional data size (octets) */
+	uint32_t text_size; /* finite text data size (octets) */
+ uint32_t tag_size; /* mac size, element of {4, 6, 8, 10, 12, 14, 16} */
+ /* block_state1/2 is the AES engine block state */
+ uint8_t block_state[CC_AES_BLOCK_SIZE];
+ uint8_t mac_state[CC_AES_BLOCK_SIZE]; /* MAC result */
+ uint8_t nonce[CC_AES_BLOCK_SIZE]; /* nonce buffer */
+ uint8_t key[CC_AES_KEY_SIZE_MAX];
+ /* reserve to end of allocated context size */
+ uint32_t reserved[CC_DRV_CTX_SIZE_WORDS - 8 -
+ 3 * (CC_AES_BLOCK_SIZE/sizeof(uint32_t)) -
+ CC_AES_KEY_SIZE_MAX/sizeof(uint32_t)];
+};
+
/*******************************************************************/
/***************** MESSAGE BASED CONTEXTS **************************/
/*******************************************************************/
diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
new file mode 100644
index 0000000..1d2890e
--- /dev/null
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -0,0 +1,2826 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/aead.h>
+#include <crypto/sha.h>
+#include <crypto/ctr.h>
+#include <crypto/authenc.h>
+#include <crypto/aes.h>
+#include <crypto/des.h>
+#include <linux/rtnetlink.h>
+#include <linux/version.h>
+#include "ssi_config.h"
+#include "ssi_driver.h"
+#include "ssi_buffer_mgr.h"
+#include "ssi_aead.h"
+#include "ssi_request_mgr.h"
+#include "ssi_hash.h"
+#include "ssi_sysfs.h"
+#include "ssi_sram_mgr.h"
+
+#define template_aead template_u.aead
+
+#define MAX_AEAD_SETKEY_SEQ 12
+#define MAX_AEAD_PROCESS_SEQ 23
+
+#define MAX_HMAC_DIGEST_SIZE (SHA256_DIGEST_SIZE)
+#define MAX_HMAC_BLOCK_SIZE (SHA256_BLOCK_SIZE)
+
+#define AES_CCM_RFC4309_NONCE_SIZE 3
+#define MAX_NONCE_SIZE CTR_RFC3686_NONCE_SIZE
+
+
+/* Value of each ICV_CMP byte (of 8) in case of success */
+#define ICV_VERIF_OK 0x01
+
+struct ssi_aead_handle {
+ ssi_sram_addr_t sram_workspace_addr;
+ struct list_head aead_list;
+};
+
+struct ssi_aead_ctx {
+ struct ssi_drvdata *drvdata;
+ uint8_t ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */
+ uint8_t *enckey;
+ dma_addr_t enckey_dma_addr;
+ union {
+ struct {
+ uint8_t *padded_authkey;
+ uint8_t *ipad_opad; /* IPAD, OPAD*/
+ dma_addr_t padded_authkey_dma_addr;
+ dma_addr_t ipad_opad_dma_addr;
+ } hmac;
+ struct {
+ uint8_t *xcbc_keys; /* K1,K2,K3 */
+ dma_addr_t xcbc_keys_dma_addr;
+ } xcbc;
+ } auth_state;
+ unsigned int enc_keylen;
+ unsigned int auth_keylen;
+	unsigned int authsize; /* Actual (possibly reduced) size of the MAC/ICV */
+ enum drv_cipher_mode cipher_mode;
+ enum FlowMode flow_mode;
+ enum drv_hash_mode auth_mode;
+};
+
+static inline bool valid_assoclen(struct aead_request *req)
+{
+ return ((req->assoclen == 16) || (req->assoclen == 20));
+}
+
+static void ssi_aead_exit(struct crypto_aead *tfm)
+{
+ struct device *dev = NULL;
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+ SSI_LOG_DEBUG("Clearing context @%p for %s\n",
+ crypto_aead_ctx(tfm), crypto_tfm_alg_name(&(tfm->base)));
+
+ dev = &ctx->drvdata->plat_dev->dev;
+ /* Unmap enckey buffer */
+ if (ctx->enckey != NULL) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr);
+ dma_free_coherent(dev, AES_MAX_KEY_SIZE, ctx->enckey, ctx->enckey_dma_addr);
+ SSI_LOG_DEBUG("Freed enckey DMA buffer enckey_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->enckey_dma_addr);
+ ctx->enckey_dma_addr = 0;
+ ctx->enckey = NULL;
+ }
+
+	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */
+ if (ctx->auth_state.xcbc.xcbc_keys != NULL) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+ dma_free_coherent(dev, CC_AES_128_BIT_KEY_SIZE * 3,
+ ctx->auth_state.xcbc.xcbc_keys,
+ ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+ }
+ SSI_LOG_DEBUG("Freed xcbc_keys DMA buffer xcbc_keys_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->auth_state.xcbc.xcbc_keys_dma_addr);
+ ctx->auth_state.xcbc.xcbc_keys_dma_addr = 0;
+ ctx->auth_state.xcbc.xcbc_keys = NULL;
+ } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC auth. */
+ if (ctx->auth_state.hmac.ipad_opad != NULL) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.hmac.ipad_opad_dma_addr);
+ dma_free_coherent(dev, 2 * MAX_HMAC_DIGEST_SIZE,
+ ctx->auth_state.hmac.ipad_opad,
+ ctx->auth_state.hmac.ipad_opad_dma_addr);
+ SSI_LOG_DEBUG("Freed ipad_opad DMA buffer ipad_opad_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->auth_state.hmac.ipad_opad_dma_addr);
+ ctx->auth_state.hmac.ipad_opad_dma_addr = 0;
+ ctx->auth_state.hmac.ipad_opad = NULL;
+ }
+ if (ctx->auth_state.hmac.padded_authkey != NULL) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.hmac.padded_authkey_dma_addr);
+ dma_free_coherent(dev, MAX_HMAC_BLOCK_SIZE,
+ ctx->auth_state.hmac.padded_authkey,
+ ctx->auth_state.hmac.padded_authkey_dma_addr);
+ SSI_LOG_DEBUG("Freed padded_authkey DMA buffer padded_authkey_dma_addr=0x%llX\n",
+ (unsigned long long)ctx->auth_state.hmac.padded_authkey_dma_addr);
+ ctx->auth_state.hmac.padded_authkey_dma_addr = 0;
+ ctx->auth_state.hmac.padded_authkey = NULL;
+ }
+ }
+}
+
+static int ssi_aead_init(struct crypto_aead *tfm)
+{
+ struct device *dev;
+ struct aead_alg *alg = crypto_aead_alg(tfm);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct ssi_crypto_alg *ssi_alg =
+ container_of(alg, struct ssi_crypto_alg, aead_alg);
+ SSI_LOG_DEBUG("Initializing context @%p for %s\n", ctx, crypto_tfm_alg_name(&(tfm->base)));
+
+ /* Initialize modes in instance */
+ ctx->cipher_mode = ssi_alg->cipher_mode;
+ ctx->flow_mode = ssi_alg->flow_mode;
+ ctx->auth_mode = ssi_alg->auth_mode;
+ ctx->drvdata = ssi_alg->drvdata;
+ dev = &ctx->drvdata->plat_dev->dev;
+	crypto_aead_set_reqsize(tfm, sizeof(struct aead_req_ctx));
+
+ /* Allocate key buffer, cache line aligned */
+ ctx->enckey = dma_alloc_coherent(dev, AES_MAX_KEY_SIZE,
+ &ctx->enckey_dma_addr, GFP_KERNEL);
+ if (ctx->enckey == NULL) {
+ SSI_LOG_ERR("Failed allocating key buffer\n");
+ goto init_failed;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(ctx->enckey_dma_addr, AES_MAX_KEY_SIZE);
+ SSI_LOG_DEBUG("Allocated enckey buffer in context ctx->enckey=@%p\n", ctx->enckey);
+
+ /* Set default authlen value */
+
+	if (ctx->auth_mode == DRV_HASH_XCBC_MAC) { /* XCBC authentication */
+ /* Allocate dma-coherent buffer for XCBC's K1+K2+K3 */
+ /* (and temporary for user key - up to 256b) */
+ ctx->auth_state.xcbc.xcbc_keys = dma_alloc_coherent(dev,
+ CC_AES_128_BIT_KEY_SIZE * 3,
+ &ctx->auth_state.xcbc.xcbc_keys_dma_addr, GFP_KERNEL);
+ if (ctx->auth_state.xcbc.xcbc_keys == NULL) {
+ SSI_LOG_ERR("Failed allocating buffer for XCBC keys\n");
+ goto init_failed;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.xcbc.xcbc_keys_dma_addr,
+ CC_AES_128_BIT_KEY_SIZE * 3);
+ } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC authentication */
+ /* Allocate dma-coherent buffer for IPAD + OPAD */
+ ctx->auth_state.hmac.ipad_opad = dma_alloc_coherent(dev,
+ 2 * MAX_HMAC_DIGEST_SIZE,
+ &ctx->auth_state.hmac.ipad_opad_dma_addr, GFP_KERNEL);
+ if (ctx->auth_state.hmac.ipad_opad == NULL) {
+ SSI_LOG_ERR("Failed allocating IPAD/OPAD buffer\n");
+ goto init_failed;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.hmac.ipad_opad_dma_addr,
+ 2 * MAX_HMAC_DIGEST_SIZE);
+ SSI_LOG_DEBUG("Allocated authkey buffer in context ctx->authkey=@%p\n",
+ ctx->auth_state.hmac.ipad_opad);
+
+ ctx->auth_state.hmac.padded_authkey = dma_alloc_coherent(dev,
+ MAX_HMAC_BLOCK_SIZE,
+ &ctx->auth_state.hmac.padded_authkey_dma_addr, GFP_KERNEL);
+ if (ctx->auth_state.hmac.padded_authkey == NULL) {
+ SSI_LOG_ERR("failed to allocate padded_authkey\n");
+ goto init_failed;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(
+ ctx->auth_state.hmac.padded_authkey_dma_addr,
+ MAX_HMAC_BLOCK_SIZE);
+ } else {
+ ctx->auth_state.hmac.ipad_opad = NULL;
+ ctx->auth_state.hmac.padded_authkey = NULL;
+ }
+
+ return 0;
+
+init_failed:
+ ssi_aead_exit(tfm);
+ return -ENOMEM;
+}
+
+
+static void ssi_aead_complete(struct device *dev, void *ssi_req, void __iomem *cc_base)
+{
+ struct aead_request *areq = (struct aead_request *)ssi_req;
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ int err = 0;
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ START_CYCLE_COUNT();
+
+ ssi_buffer_mgr_unmap_aead_request(dev, areq);
+
+ /* Restore ordinary iv pointer */
+ areq->iv = areq_ctx->backup_iv;
+
+ if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ if (memcmp(areq_ctx->mac_buf, areq_ctx->icv_virt_addr,
+ ctx->authsize) != 0) {
+			SSI_LOG_DEBUG("Payload authentication failure (auth-size=%d, cipher=%d)\n",
+				      ctx->authsize, ctx->cipher_mode);
+			/* On payload authentication failure we must not
+			 * reveal the decrypted message, so zero its memory.
+			 */
+			ssi_buffer_mgr_zero_sgl(areq->dst, areq_ctx->cryptlen);
+ err = -EBADMSG;
+ }
+ } else { /*ENCRYPT*/
+		if (unlikely(areq_ctx->is_icv_fragmented == true)) {
+			ssi_buffer_mgr_copy_scatterlist_portion(
+				areq_ctx->mac_buf, areq_ctx->dstSgl,
+				areq->cryptlen + areq_ctx->dstOffset,
+				areq->cryptlen + areq_ctx->dstOffset + ctx->authsize,
+				SSI_SG_FROM_BUF);
+		}
+
+ /* If an IV was generated, copy it back to the user provided buffer. */
+ if (areq_ctx->backup_giv != NULL) {
+ if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+ memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_IV_SIZE);
+ } else if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+ memcpy(areq_ctx->backup_giv, areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, CCM_BLOCK_IV_SIZE);
+ }
+ }
+ }
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_GENERIC, STAT_PHASE_4);
+ aead_request_complete(areq, err);
+}
+
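+/* Derive the three AES-XCBC-MAC subkeys per RFC 3566:
+ * K1 = AES-K(0x01010101...), K2 = AES-K(0x02020202...) and
+ * K3 = AES-K(0x03030303...), where K is the user-supplied key.
+ * The constant blocks below are fed through the AES engine in ECB
+ * mode with the user key loaded, producing K1..K3 in xcbc_keys.
+ */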
+static int xcbc_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx)
+{
+ /* Load the AES key */
+ HW_DESC_INIT(&desc[0]);
+	/* The source/user key is loaded from the same buffer that will hold
+	 * the derived output keys, since it is no longer needed once loaded.
+	 */
+ HW_DESC_SET_DIN_TYPE(&desc[0], DMA_DLLI, ctx->auth_state.xcbc.xcbc_keys_dma_addr, ctx->auth_keylen, NS_BIT);
+ HW_DESC_SET_CIPHER_MODE(&desc[0], DRV_CIPHER_ECB);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[0], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[0], ctx->auth_keylen);
+ HW_DESC_SET_FLOW_MODE(&desc[0], S_DIN_to_AES);
+ HW_DESC_SET_SETUP_MODE(&desc[0], SETUP_LOAD_KEY0);
+
+ HW_DESC_INIT(&desc[1]);
+ HW_DESC_SET_DIN_CONST(&desc[1], 0x01010101, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[1], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[1], ctx->auth_state.xcbc.xcbc_keys_dma_addr, AES_KEYSIZE_128, NS_BIT, 0);
+
+ HW_DESC_INIT(&desc[2]);
+ HW_DESC_SET_DIN_CONST(&desc[2], 0x02020202, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[2], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[2], (ctx->auth_state.xcbc.xcbc_keys_dma_addr
+ + AES_KEYSIZE_128),
+ AES_KEYSIZE_128, NS_BIT, 0);
+
+ HW_DESC_INIT(&desc[3]);
+ HW_DESC_SET_DIN_CONST(&desc[3], 0x03030303, CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[3], DIN_AES_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[3], (ctx->auth_state.xcbc.xcbc_keys_dma_addr
+ + 2 * AES_KEYSIZE_128),
+ AES_KEYSIZE_128, NS_BIT, 0);
+
+ return 4;
+}
+
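+/* Precompute the HMAC inner and outer intermediate digests per RFC 2104:
+ * the loop below hashes (key XOR ipad) on the first pass and
+ * (key XOR opad) on the second, storing both resulting hash states in
+ * the ipad_opad buffer so that per-request processing only needs to
+ * resume the hash from these precomputed states.
+ */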
+static int hmac_setkey(HwDesc_s *desc, struct ssi_aead_ctx *ctx)
+{
+ unsigned int hmacPadConst[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
+ unsigned int digest_ofs = 0;
+ unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+ unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+
+ int idx = 0;
+ int i;
+
+ /* calc derived HMAC key */
+ for (i = 0; i < 2; i++) {
+ /* Load hash initial state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx],
+ ssi_ahash_get_larval_digest_sram_addr(
+ ctx->drvdata, ctx->auth_mode),
+ digest_size);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+		/* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Prepare ipad key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_XOR_VAL(&desc[idx], hmacPadConst[i]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->auth_state.hmac.padded_authkey_dma_addr,
+ SHA256_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_XOR_ACTIVE(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+		/* Get the digest */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (ctx->auth_state.hmac.ipad_opad_dma_addr +
+ digest_ofs),
+ digest_size, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ idx++;
+
+ digest_ofs += digest_size;
+ }
+
+ return idx;
+}
+
+static int validate_keys_sizes(struct ssi_aead_ctx *ctx)
+{
+ SSI_LOG_DEBUG("enc_keylen=%u authkeylen=%u\n",
+ ctx->enc_keylen, ctx->auth_keylen);
+
+ switch (ctx->auth_mode) {
+ case DRV_HASH_SHA1:
+ case DRV_HASH_SHA256:
+ break;
+ case DRV_HASH_XCBC_MAC:
+ if ((ctx->auth_keylen != AES_KEYSIZE_128) &&
+ (ctx->auth_keylen != AES_KEYSIZE_192) &&
+ (ctx->auth_keylen != AES_KEYSIZE_256))
+ return -ENOTSUPP;
+ break;
+	case DRV_HASH_NULL: /* Not authenc (e.g., CCM) - no auth_key */
+ if (ctx->auth_keylen > 0)
+ return -EINVAL;
+ break;
+ default:
+ SSI_LOG_ERR("Invalid auth_mode=%d\n", ctx->auth_mode);
+ return -EINVAL;
+ }
+ /* Check cipher key size */
+ if (unlikely(ctx->flow_mode == S_DIN_to_DES)) {
+ if (ctx->enc_keylen != DES3_EDE_KEY_SIZE) {
+ SSI_LOG_ERR("Invalid cipher(3DES) key size: %u\n",
+ ctx->enc_keylen);
+ return -EINVAL;
+ }
+ } else { /* Default assumed to be AES ciphers */
+ if ((ctx->enc_keylen != AES_KEYSIZE_128) &&
+ (ctx->enc_keylen != AES_KEYSIZE_192) &&
+ (ctx->enc_keylen != AES_KEYSIZE_256)) {
+ SSI_LOG_ERR("Invalid cipher(AES) key size: %u\n",
+ ctx->enc_keylen);
+ return -EINVAL;
+ }
+ }
+
+	return 0; /* All key-size checks passed */
+}
+/* This function prepares the user key so it can be passed to the HMAC
+ * processing (copied to an internal buffer, or hashed first if the key
+ * is longer than the hash block size).
+ */
+static int
+ssi_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+ dma_addr_t key_dma_addr = 0;
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ uint32_t larval_addr = ssi_ahash_get_larval_digest_sram_addr(
+ ctx->drvdata, ctx->auth_mode);
+ struct ssi_crypto_req ssi_req = {};
+ unsigned int blocksize;
+ unsigned int digestsize;
+ unsigned int hashmode;
+ unsigned int idx = 0;
+ int rc = 0;
+ HwDesc_s desc[MAX_AEAD_SETKEY_SEQ];
+ dma_addr_t padded_authkey_dma_addr =
+ ctx->auth_state.hmac.padded_authkey_dma_addr;
+
+ switch (ctx->auth_mode) { /* auth_key required and >0 */
+ case DRV_HASH_SHA1:
+ blocksize = SHA1_BLOCK_SIZE;
+ digestsize = SHA1_DIGEST_SIZE;
+ hashmode = DRV_HASH_HW_SHA1;
+ break;
+ case DRV_HASH_SHA256:
+ default:
+ blocksize = SHA256_BLOCK_SIZE;
+ digestsize = SHA256_DIGEST_SIZE;
+ hashmode = DRV_HASH_HW_SHA256;
+		break;
+	}
+
+ if (likely(keylen != 0)) {
+ key_dma_addr = dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, key_dma_addr))) {
+			SSI_LOG_ERR("Mapping key va=0x%p len=%u for DMA failed\n",
+				    key, keylen);
+ return -ENOMEM;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(key_dma_addr, keylen);
+ if (keylen > blocksize) {
+ /* Load hash initial state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], larval_addr, digestsize);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+			/* Load the hash current length */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr,
+ keylen, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ /* Get hashed key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hashmode);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ padded_authkey_dma_addr,
+ digestsize,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx],
+ HASH_PADDING_DISABLED);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],
+ HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, (blocksize - digestsize));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (padded_authkey_dma_addr + digestsize),
+ (blocksize - digestsize),
+ NS_BIT, 0);
+ idx++;
+ } else {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ key_dma_addr,
+ keylen, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (padded_authkey_dma_addr),
+ keylen, NS_BIT, 0);
+ idx++;
+
+ if ((blocksize - keylen) != 0) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0,
+ (blocksize - keylen));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ (padded_authkey_dma_addr + keylen),
+ (blocksize - keylen),
+ NS_BIT, 0);
+ idx++;
+ }
+ }
+ } else {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0,
+ (blocksize - keylen));
+ HW_DESC_SET_FLOW_MODE(&desc[idx], BYPASS);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ padded_authkey_dma_addr,
+ blocksize,
+ NS_BIT, 0);
+ idx++;
+ }
+
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_SETKEY;
+#endif
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+ if (unlikely(rc != 0))
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+
+ if (likely(key_dma_addr != 0)) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(key_dma_addr);
+ dma_unmap_single(dev, key_dma_addr, keylen, DMA_TO_DEVICE);
+ }
+
+ return rc;
+}
+
+
+static int
+ssi_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct rtattr *rta = (struct rtattr *)key;
+ struct ssi_crypto_req ssi_req = {};
+ struct crypto_authenc_key_param *param;
+ HwDesc_s desc[MAX_AEAD_SETKEY_SEQ];
+ int seq_len = 0, rc = -EINVAL;
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ SSI_LOG_DEBUG("Setting key in context @%p for %s. key=%p keylen=%u\n",
+ ctx, crypto_tfm_alg_name(crypto_aead_tfm(tfm)), key, keylen);
+
+ /* STAT_PHASE_0: Init and sanity checks */
+ START_CYCLE_COUNT();
+
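+	/* For authenc() algorithms the key blob arrives in the generic
+	 * crypto_authenc format: an rtattr header carrying enckeylen,
+	 * followed by the authentication key and then the encryption key.
+	 */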
+ if (ctx->auth_mode != DRV_HASH_NULL) { /* authenc() alg. */
+ if (!RTA_OK(rta, keylen))
+ goto badkey;
+ if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
+ goto badkey;
+ if (RTA_PAYLOAD(rta) < sizeof(*param))
+ goto badkey;
+ param = RTA_DATA(rta);
+ ctx->enc_keylen = be32_to_cpu(param->enckeylen);
+ key += RTA_ALIGN(rta->rta_len);
+ keylen -= RTA_ALIGN(rta->rta_len);
+ if (keylen < ctx->enc_keylen)
+ goto badkey;
+ ctx->auth_keylen = keylen - ctx->enc_keylen;
+
+ if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+			/* The nonce is stored in the last 4 bytes of the key */
+ if (ctx->enc_keylen <
+ (AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE))
+ goto badkey;
+ /* Copy nonce from last 4 bytes in CTR key to
+ * first 4 bytes in CTR IV */
+ memcpy(ctx->ctr_nonce, key + ctx->auth_keylen + ctx->enc_keylen -
+ CTR_RFC3686_NONCE_SIZE, CTR_RFC3686_NONCE_SIZE);
+ /* Set CTR key size */
+ ctx->enc_keylen -= CTR_RFC3686_NONCE_SIZE;
+ }
+ } else { /* non-authenc - has just one key */
+ ctx->enc_keylen = keylen;
+ ctx->auth_keylen = 0;
+ }
+
+ rc = validate_keys_sizes(ctx);
+ if (unlikely(rc != 0))
+ goto badkey;
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_0);
+ /* STAT_PHASE_1: Copy key to ctx */
+ START_CYCLE_COUNT();
+
+ /* Get key material */
+ memcpy(ctx->enckey, key + ctx->auth_keylen, ctx->enc_keylen);
+ if (ctx->enc_keylen == 24)
+ memset(ctx->enckey + 24, 0, CC_AES_KEY_SIZE_MAX - 24);
+ if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+ memcpy(ctx->auth_state.xcbc.xcbc_keys, key, ctx->auth_keylen);
+ } else if (ctx->auth_mode != DRV_HASH_NULL) { /* HMAC */
+ rc = ssi_get_plain_hmac_key(tfm, key, ctx->auth_keylen);
+ if (rc != 0)
+ goto badkey;
+ }
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_1);
+
+ /* STAT_PHASE_2: Create sequence */
+ START_CYCLE_COUNT();
+
+ switch (ctx->auth_mode) {
+ case DRV_HASH_SHA1:
+ case DRV_HASH_SHA256:
+ seq_len = hmac_setkey(desc, ctx);
+ break;
+ case DRV_HASH_XCBC_MAC:
+ seq_len = xcbc_setkey(desc, ctx);
+ break;
+ case DRV_HASH_NULL: /* non-authenc modes, e.g., CCM */
+ break; /* No auth. key setup */
+ default:
+ SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode);
+ rc = -ENOTSUPP;
+ goto badkey;
+ }
+
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_2);
+
+ /* STAT_PHASE_3: Submit sequence to HW */
+ START_CYCLE_COUNT();
+
+ if (seq_len > 0) { /* For CCM there is no sequence to setup the key */
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = STAT_OP_TYPE_SETKEY;
+#endif
+ rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 0);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ goto setkey_error;
+ }
+ }
+
+ /* Update STAT_PHASE_3 */
+ END_CYCLE_COUNT(STAT_OP_TYPE_SETKEY, STAT_PHASE_3);
+ return rc;
+
+badkey:
+ crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+
+setkey_error:
+ return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ int rc = 0;
+
+ if (keylen < 3)
+ return -EINVAL;
+
+ keylen -= 3;
+ memcpy(ctx->ctr_nonce, key + keylen, 3);
+
+ rc = ssi_aead_setkey(tfm, key, keylen);
+
+ return rc;
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+static int ssi_aead_setauthsize(
+ struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(authenc);
+
+ /* Unsupported auth. sizes */
+	if ((authsize == 0) ||
+	    (authsize > crypto_aead_maxauthsize(authenc))) {
+ return -ENOTSUPP;
+ }
+
+ ctx->authsize = authsize;
+ SSI_LOG_DEBUG("authlen=%d\n", ctx->authsize);
+
+ return 0;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_setauthsize(struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 8:
+ case 12:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_ccm_setauthsize(struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 4:
+ case 6:
+ case 8:
+ case 10:
+ case 12:
+ case 14:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ssi_aead_setauthsize(authenc, authsize);
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+static inline void
+ssi_aead_create_assoc_desc(
+ struct aead_request *areq,
+ unsigned int flow_mode,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+ enum ssi_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type;
+ unsigned int idx = *seq_size;
+
+ switch (assoc_dma_type) {
+ case SSI_DMA_BUF_DLLI:
+ SSI_LOG_DEBUG("ASSOC buffer type DLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ sg_dma_address(areq->src),
+ areq->assoclen, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0))
+ HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]);
+ break;
+ case SSI_DMA_BUF_MLLI:
+ SSI_LOG_DEBUG("ASSOC buffer type MLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+ areq_ctx->assoc.sram_addr,
+ areq_ctx->assoc.mlli_nents,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+		if (ctx->auth_mode == DRV_HASH_XCBC_MAC && (areq_ctx->cryptlen > 0))
+ HW_DESC_SET_DIN_NOT_LAST_INDICATION(&desc[idx]);
+ break;
+ case SSI_DMA_BUF_NULL:
+ default:
+ SSI_LOG_ERR("Invalid ASSOC buffer type\n");
+ }
+
+ *seq_size = (++idx);
+}
+
+static inline void
+ssi_aead_process_authenc_data_desc(
+ struct aead_request *areq,
+ unsigned int flow_mode,
+ HwDesc_s desc[],
+ unsigned int *seq_size,
+ int direct)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+ enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+ unsigned int idx = *seq_size;
+
+ switch (data_dma_type) {
+ case SSI_DMA_BUF_DLLI:
+ {
+ struct scatterlist *cipher =
+ (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ areq_ctx->dstSgl : areq_ctx->srcSgl;
+
+ unsigned int offset =
+ (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ areq_ctx->dstOffset : areq_ctx->srcOffset;
+ SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type DLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			(sg_dma_address(cipher) + offset), areq_ctx->cryptlen,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ break;
+ }
+ case SSI_DMA_BUF_MLLI:
+ {
+		/* Double-pass flow (the default):
+		 * assoc. data + IV + data are compacted into one MLLI table;
+		 * if assoclen is zero, only the IV is processed.
+		 */
+ ssi_sram_addr_t mlli_addr = areq_ctx->assoc.sram_addr;
+ uint32_t mlli_nents = areq_ctx->assoc.mlli_nents;
+
+ if (likely(areq_ctx->is_single_pass == true)) {
+			if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ mlli_addr = areq_ctx->dst.sram_addr;
+ mlli_nents = areq_ctx->dst.mlli_nents;
+ } else {
+ mlli_addr = areq_ctx->src.sram_addr;
+ mlli_nents = areq_ctx->src.mlli_nents;
+ }
+ }
+
+ SSI_LOG_DEBUG("AUTHENC: SRC/DST buffer type MLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+ mlli_addr, mlli_nents, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ break;
+ }
+ case SSI_DMA_BUF_NULL:
+ default:
+ SSI_LOG_ERR("AUTHENC: Invalid SRC/DST buffer type\n");
+ }
+
+ *seq_size = (++idx);
+}
+
+static inline void
+ssi_aead_process_cipher_data_desc(
+ struct aead_request *areq,
+ unsigned int flow_mode,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ unsigned int idx = *seq_size;
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+ enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+
+ if (areq_ctx->cryptlen == 0)
+ return; /*null processing*/
+
+ switch (data_dma_type) {
+ case SSI_DMA_BUF_DLLI:
+ SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type DLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+		HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			(sg_dma_address(areq_ctx->srcSgl) + areq_ctx->srcOffset),
+			areq_ctx->cryptlen, NS_BIT);
+		HW_DESC_SET_DOUT_DLLI(&desc[idx],
+			(sg_dma_address(areq_ctx->dstSgl) + areq_ctx->dstOffset),
+			areq_ctx->cryptlen, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ break;
+ case SSI_DMA_BUF_MLLI:
+ SSI_LOG_DEBUG("CIPHER: SRC/DST buffer type MLLI\n");
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_MLLI,
+ areq_ctx->src.sram_addr,
+ areq_ctx->src.mlli_nents, NS_BIT);
+ HW_DESC_SET_DOUT_MLLI(&desc[idx],
+ areq_ctx->dst.sram_addr,
+ areq_ctx->dst.mlli_nents, NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], flow_mode);
+ break;
+ case SSI_DMA_BUF_NULL:
+ default:
+ SSI_LOG_ERR("CIPHER: Invalid SRC/DST buffer type\n");
+ }
+
+ *seq_size = (++idx);
+}
+
+static inline void ssi_aead_process_digest_result_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int idx = *seq_size;
+ unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+ int direct = req_ctx->gen_ctx.op_type;
+
+ /* Get final ICV result */
+ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->icv_dma_addr,
+ ctx->authsize, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ } else {
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx],
+ HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ }
+ } else { /*Decrypt*/
+ /* Get ICV out from hardware */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+ ctx->authsize, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_DISABLED);
+ if (ctx->auth_mode == DRV_HASH_XCBC_MAC) {
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ } else {
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ }
+ }
+
+ *seq_size = (++idx);
+}
+
+static inline void ssi_aead_setup_cipher_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int hw_iv_size = req_ctx->hw_iv_size;
+ unsigned int idx = *seq_size;
+ int direct = req_ctx->gen_ctx.op_type;
+
+ /* Setup cipher state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->gen_ctx.iv_dma_addr, hw_iv_size, NS_BIT);
+ if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ } else {
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ }
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode);
+ idx++;
+
+ /* Setup enc. key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], direct);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], ctx->flow_mode);
+ if (ctx->flow_mode == S_DIN_to_AES) {
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ((ctx->enc_keylen == 24) ?
+ CC_AES_KEY_SIZE_MAX : ctx->enc_keylen), NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ } else {
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ctx->enc_keylen, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_DES(&desc[idx], ctx->enc_keylen);
+ }
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], ctx->cipher_mode);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_process_cipher(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size,
+ unsigned int data_flow_mode)
+{
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ int direct = req_ctx->gen_ctx.op_type;
+ unsigned int idx = *seq_size;
+
+ if (req_ctx->cryptlen == 0)
+ return; /*null processing*/
+
+ ssi_aead_setup_cipher_desc(req, desc, &idx);
+ ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, &idx);
+ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ /* We must wait for DMA to write all cipher */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+ }
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_hmac_setup_digest_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+ unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+ unsigned int idx = *seq_size;
+
+ /* Loading hash ipad xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->auth_state.hmac.ipad_opad_dma_addr,
+ digest_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load init. digest len (64 bytes) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx],
+ ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode),
+ HASH_LEN_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_xcbc_setup_digest_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ unsigned int idx = *seq_size;
+
+ /* Loading MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0, CC_AES_BLOCK_SIZE);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* Setup XCBC MAC K1 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ ctx->auth_state.xcbc.xcbc_keys_dma_addr,
+ AES_KEYSIZE_128, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* Setup XCBC MAC K2 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ (ctx->auth_state.xcbc.xcbc_keys_dma_addr +
+ AES_KEYSIZE_128),
+ AES_KEYSIZE_128, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* Setup XCBC MAC K3 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ (ctx->auth_state.xcbc.xcbc_keys_dma_addr +
+ 2 * AES_KEYSIZE_128),
+ AES_KEYSIZE_128, NS_BIT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE2);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_XCBC_MAC);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], CC_AES_128_BIT_KEY_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_process_digest_header_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ unsigned int idx = *seq_size;
+ /* Hash associated data */
+ if (req->assoclen > 0)
+ ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx);
+
+ /* Hash IV */
+ *seq_size = idx;
+}
+
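+/* HMAC finalization ("scheme") sequence: pad and store the inner hash
+ * state, then reload the (key XOR opad) state and hash the inner digest
+ * through the engine to produce the outer HMAC result, per RFC 2104.
+ */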
+static inline void ssi_aead_process_digest_scheme_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct ssi_aead_handle *aead_handle = ctx->drvdata->aead_handle;
+ unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
+ unsigned int digest_size = (ctx->auth_mode == DRV_HASH_SHA1) ?
+ CC_SHA1_DIGEST_SIZE : CC_SHA256_DIGEST_SIZE;
+ unsigned int idx = *seq_size;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+ HASH_LEN_SIZE);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE1);
+ HW_DESC_SET_CIPHER_DO(&desc[idx], DO_PAD);
+ idx++;
+
+ /* Get final ICV result */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DOUT_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+ digest_size);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ idx++;
+
+ /* Loading hash opad xor key state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ (ctx->auth_state.hmac.ipad_opad_dma_addr + digest_size),
+ digest_size, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ /* Load init. digest len (64 bytes) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], hash_mode);
+ HW_DESC_SET_DIN_SRAM(&desc[idx],
+ ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata, hash_mode),
+ HASH_LEN_SIZE);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Perform HASH update */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_SRAM(&desc[idx], aead_handle->sram_workspace_addr,
+ digest_size);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_load_mlli_to_sram(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+ if (unlikely(
+ (req_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) ||
+ (req_ctx->data_buff_type == SSI_DMA_BUF_MLLI) ||
+ (req_ctx->is_single_pass == false))) {
+ SSI_LOG_DEBUG("Copy-to-sram: mlli_dma=%08x, mlli_size=%u\n",
+ (unsigned int)ctx->drvdata->mlli_sram_addr,
+ req_ctx->mlli_params.mlli_len);
+ /* Copy MLLI table host-to-sram */
+ HW_DESC_INIT(&desc[*seq_size]);
+ HW_DESC_SET_DIN_TYPE(&desc[*seq_size], DMA_DLLI,
+ req_ctx->mlli_params.mlli_dma_addr,
+ req_ctx->mlli_params.mlli_len, NS_BIT);
+ HW_DESC_SET_DOUT_SRAM(&desc[*seq_size],
+ ctx->drvdata->mlli_sram_addr,
+ req_ctx->mlli_params.mlli_len);
+ HW_DESC_SET_FLOW_MODE(&desc[*seq_size], BYPASS);
+ (*seq_size)++;
+ }
+}
+
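+/* Select the HW flow for the data path: single-pass flows wire the
+ * cipher and hash engines together (e.g. AES_to_HASH_and_DOUT), while
+ * the double-pass fallback runs the cipher alone (DIN_AES_DOUT) and
+ * performs authentication in a separate pass.
+ */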
+static inline enum FlowMode ssi_aead_get_data_flow_mode(
+ enum drv_crypto_direction direct,
+ enum FlowMode setup_flow_mode,
+ bool is_single_pass)
+{
+ enum FlowMode data_flow_mode;
+
+ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ if (setup_flow_mode == S_DIN_to_AES)
+ data_flow_mode = likely(is_single_pass) ?
+ AES_to_HASH_and_DOUT : DIN_AES_DOUT;
+ else
+ data_flow_mode = likely(is_single_pass) ?
+ DES_to_HASH_and_DOUT : DIN_DES_DOUT;
+ } else { /* Decrypt */
+ if (setup_flow_mode == S_DIN_to_AES)
+ data_flow_mode = likely(is_single_pass) ?
+ AES_and_HASH : DIN_AES_DOUT;
+ else
+ data_flow_mode = likely(is_single_pass) ?
+ DES_and_HASH : DIN_DES_DOUT;
+ }
+
+ return data_flow_mode;
+}
+
+static inline void ssi_aead_hmac_authenc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ int direct = req_ctx->gen_ctx.op_type;
+ unsigned int data_flow_mode = ssi_aead_get_data_flow_mode(
+ direct, ctx->flow_mode, req_ctx->is_single_pass);
+
+ if (req_ctx->is_single_pass == true) {
+ /**
+ * Single-pass flow
+ */
+ ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_setup_cipher_desc(req, desc, seq_size);
+ ssi_aead_process_digest_header_desc(req, desc, seq_size);
+ ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size);
+ ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+ return;
+ }
+
+	/**
+	 * Double-pass flow
+	 * Fallback for unsupported single-pass modes,
+	 * i.e., when the assoc. data length is not a multiple of a word.
+	 */
+ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ /* encrypt first.. */
+ ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+ /* authenc after..*/
+ ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+ ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+
+ } else { /*DECRYPT*/
+ /* authenc first..*/
+ ssi_aead_hmac_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+ ssi_aead_process_digest_scheme_desc(req, desc, seq_size);
+ /* decrypt after.. */
+ ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/* Reading the digest result, which sets the completion bit,
+		 * must come after the cipher operation.
+		 */
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+ }
+}
+
+static inline void
+ssi_aead_xcbc_authenc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ int direct = req_ctx->gen_ctx.op_type;
+ unsigned int data_flow_mode = ssi_aead_get_data_flow_mode(
+ direct, ctx->flow_mode, req_ctx->is_single_pass);
+
+ if (req_ctx->is_single_pass == true) {
+ /**
+ * Single-pass flow
+ */
+ ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_setup_cipher_desc(req, desc, seq_size);
+ ssi_aead_process_digest_header_desc(req, desc, seq_size);
+ ssi_aead_process_cipher_data_desc(req, data_flow_mode, desc, seq_size);
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+ return;
+ }
+
+	/**
+	 * Double-pass flow
+	 * Fallback for unsupported single-pass modes,
+	 * i.e., when the assoc. data length is not a multiple of a word.
+	 */
+ if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) {
+ /* encrypt first.. */
+ ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+ /* authenc after.. */
+ ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+ } else { /*DECRYPT*/
+ /* authenc first.. */
+ ssi_aead_xcbc_setup_digest_desc(req, desc, seq_size);
+ ssi_aead_process_authenc_data_desc(req, DIN_HASH, desc, seq_size, direct);
+ /* decrypt after..*/
+ ssi_aead_process_cipher(req, desc, seq_size, data_flow_mode);
+		/* Reading the digest result, which sets the completion bit,
+		 * must come after the cipher operation.
+		 */
+ ssi_aead_process_digest_result_desc(req, desc, seq_size);
+ }
+}
+
+static int validate_data_size(struct ssi_aead_ctx *ctx,
+ enum drv_crypto_direction direct, struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ unsigned int assoclen = req->assoclen;
+ unsigned int cipherlen = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+ (req->cryptlen - ctx->authsize) : req->cryptlen;
+
+ if (unlikely((direct == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+ (req->cryptlen < ctx->authsize)))
+ goto data_size_err;
+
+ areq_ctx->is_single_pass = true; /*defaulted to fast flow*/
+
+ switch (ctx->flow_mode) {
+ case S_DIN_to_AES:
+ if (unlikely((ctx->cipher_mode == DRV_CIPHER_CBC) &&
+ !IS_ALIGNED(cipherlen, AES_BLOCK_SIZE)))
+ goto data_size_err;
+ if (ctx->cipher_mode == DRV_CIPHER_CCM)
+ break;
+		if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+ if (areq_ctx->plaintext_authenticate_only == true)
+ areq_ctx->is_single_pass = false;
+ break;
+ }
+
+ if (!IS_ALIGNED(assoclen, sizeof(uint32_t)))
+ areq_ctx->is_single_pass = false;
+
+ if ((ctx->cipher_mode == DRV_CIPHER_CTR) &&
+ !IS_ALIGNED(cipherlen, sizeof(uint32_t)))
+ areq_ctx->is_single_pass = false;
+
+ break;
+ case S_DIN_to_DES:
+ if (unlikely(!IS_ALIGNED(cipherlen, DES_BLOCK_SIZE)))
+ goto data_size_err;
+ if (unlikely(!IS_ALIGNED(assoclen, DES_BLOCK_SIZE)))
+ areq_ctx->is_single_pass = false;
+ break;
+ default:
+ SSI_LOG_ERR("Unexpected flow mode (%d)\n", ctx->flow_mode);
+ goto data_size_err;
+ }
+
+ return 0;
+
+data_size_err:
+ return -EINVAL;
+}
+
+#if SSI_CC_HAS_AES_CCM
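+/* Encode the associated data length l(a) into the A0 header per
+ * RFC 3610 / NIST SP 800-38C: lengths below 2^16 - 2^8 use a 2-byte
+ * big-endian encoding; larger values (up to 2^32 - 1) use the
+ * 0xFF 0xFE marker followed by a 4-byte big-endian length.
+ */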
+static unsigned int format_ccm_a0(uint8_t *pA0Buff, uint32_t headerSize)
+{
+	unsigned int len = 0;
+
+	if (headerSize == 0)
+		return 0;
+
+	if (headerSize < ((1UL << 16) - (1UL << 8))) {
+ len = 2;
+
+ pA0Buff[0] = (headerSize >> 8) & 0xFF;
+ pA0Buff[1] = headerSize & 0xFF;
+ } else {
+ len = 6;
+
+ pA0Buff[0] = 0xFF;
+ pA0Buff[1] = 0xFE;
+ pA0Buff[2] = (headerSize >> 24) & 0xFF;
+ pA0Buff[3] = (headerSize >> 16) & 0xFF;
+ pA0Buff[4] = (headerSize >> 8) & 0xFF;
+ pA0Buff[5] = headerSize & 0xFF;
+ }
+
+ return len;
+}
+
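+/* Write the message length into the last csize bytes of the B0 block,
+ * big-endian, as done in crypto/ccm.c; returns -EOVERFLOW if the
+ * length does not fit in the given field size.
+ */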
+static int set_msg_len(u8 *block, unsigned int msglen, unsigned int csize)
+{
+ __be32 data;
+
+ memset(block, 0, csize);
+ block += csize;
+
+ if (csize >= 4)
+ csize = 4;
+ else if (msglen > (1 << (8 * csize)))
+ return -EOVERFLOW;
+
+ data = cpu_to_be32(msglen);
+ memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
+
+ return 0;
+}
+
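+/* CCM per RFC 3610: a CBC-MAC is computed over B0, the formatted
+ * associated data and the payload; the resulting tag T is then
+ * encrypted with AES-CTR using the A0 counter block (ccm_iv0) to
+ * produce the transmitted MAC value U.
+ */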
+static inline int ssi_aead_ccm(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int idx = *seq_size;
+ unsigned int cipher_flow_mode;
+ dma_addr_t mac_result;
+
+ if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ cipher_flow_mode = AES_to_HASH_and_DOUT;
+ mac_result = req_ctx->mac_buf_dma_addr;
+ } else { /* Encrypt */
+ cipher_flow_mode = AES_and_HASH;
+ mac_result = req_ctx->icv_dma_addr;
+ }
+
+ /* load key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ((ctx->enc_keylen == 24) ?
+ CC_AES_KEY_SIZE_MAX : ctx->enc_keylen),
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* load ctr state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->gen_ctx.iv_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* load MAC key */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ((ctx->enc_keylen == 24) ?
+ CC_AES_KEY_SIZE_MAX : ctx->enc_keylen),
+ NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* load MAC state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->mac_buf_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DESC_DIRECTION_ENCRYPT_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+ /* process assoc data */
+ if (req->assoclen > 0) {
+ ssi_aead_create_assoc_desc(req, DIN_HASH, desc, &idx);
+ } else {
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ sg_dma_address(&req_ctx->ccm_adata_sg),
+ AES_BLOCK_SIZE + req_ctx->ccm_hdr_size,
+ NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+ }
+
+ /* process the cipher */
+ if (req_ctx->cryptlen != 0) {
+ ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, &idx);
+ }
+
+	/* Read intermediate (pre-encryption) MAC */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CBC_MAC);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+ ctx->authsize, NS_BIT, 0);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], HASH_DIGEST_RESULT_LITTLE_ENDIAN);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ idx++;
+
+	/* load AES-CTR state (for last MAC calculation) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_CTR);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     req_ctx->ccm_iv0_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* encrypt the "T" value and store MAC in mac_state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+			     req_ctx->mac_buf_dma_addr, ctx->authsize, NS_BIT);
+	HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ *seq_size = idx;
+ return 0;
+}
+
+static int config_ccm_adata(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	unsigned int lp = req->iv[0];
+	/* Note: the code assumes that req->iv[0] already contains the value
+	 * of L' of RFC 3610.
+	 */
+	unsigned int l = lp + 1; /* This is L of RFC 3610 (L = L' + 1). */
+	unsigned int m = ctx->authsize; /* This is M, the MAC length, of RFC 3610. */
+ uint8_t *b0 = req_ctx->ccm_config + CCM_B0_OFFSET;
+ uint8_t *a0 = req_ctx->ccm_config + CCM_A0_OFFSET;
+ uint8_t *ctr_count_0 = req_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET;
+ unsigned int cryptlen = (req_ctx->gen_ctx.op_type ==
+ DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ req->cryptlen :
+ (req->cryptlen - ctx->authsize);
+	int rc;
+
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+	memset(req_ctx->ccm_config, 0, AES_BLOCK_SIZE * 3);
+
+ /* taken from crypto/ccm.c */
+ /* 2 <= L <= 8, so 1 <= L' <= 7. */
+ if (2 > l || l > 8) {
+		SSI_LOG_ERR("Illegal iv value %X\n", req->iv[0]);
+ return -EINVAL;
+ }
+ memcpy(b0, req->iv, AES_BLOCK_SIZE);
+
+ /* format control info per RFC 3610 and
+ * NIST Special Publication 800-38C
+ */
+ *b0 |= (8 * ((m - 2) / 2));
+ if (req->assoclen > 0)
+ *b0 |= 64; /* Enable bit 6 if Adata exists. */
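+	/* Example: with an 8-byte ICV, M' = (8 - 2) / 2 = 3 is encoded in
+	 * bits 3..5 of the flags byte; bit 6 flags the presence of Adata.
+	 */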
+
+	rc = set_msg_len(b0 + 16 - l, cryptlen, l); /* Write l(m), the message length. */
+	if (rc != 0)
+		return rc;
+ /* END of "taken from crypto/ccm.c" */
+
+ /* l(a) - size of associated data. */
+	req_ctx->ccm_hdr_size = format_ccm_a0(a0, req->assoclen);
+
+	memset(req->iv + 15 - req->iv[0], 0, req->iv[0] + 1);
+	req->iv[15] = 1;
+
+	memcpy(ctr_count_0, req->iv, AES_BLOCK_SIZE);
+	ctr_count_0[15] = 0;
+
+ return 0;
+}
+
+static void ssi_rfc4309_ccm_process(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+ /* L' */
+ memset(areq_ctx->ctr_iv, 0, AES_BLOCK_SIZE);
+ areq_ctx->ctr_iv[0] = 3; /* For RFC 4309, always use 4 bytes for message length (at most 2^32-1 bytes). */
+
+	/* In RFC 4309 there is an 11-byte nonce + IV part, which we build here. */
+ memcpy(areq_ctx->ctr_iv + CCM_BLOCK_NONCE_OFFSET, ctx->ctr_nonce, CCM_BLOCK_NONCE_SIZE);
+ memcpy(areq_ctx->ctr_iv + CCM_BLOCK_IV_OFFSET, req->iv, CCM_BLOCK_IV_SIZE);
+ req->iv = areq_ctx->ctr_iv;
+ req->assoclen -= CCM_BLOCK_IV_SIZE;
+}
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+#if SSI_CC_HAS_AES_GCM
+
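+/* GHASH setup per NIST SP 800-38D: the hash subkey H is derived as
+ * AES-K(0^128), loaded into the hash engine as the GHASH key, and the
+ * GHASH state is initialized to zero.
+ */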
+static inline void ssi_aead_gcm_setup_ghash_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int idx = *seq_size;
+
+ /* load key to AES*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_ECB);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ctx->enc_keylen, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* process one zero block to generate hkey */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx],
+ req_ctx->hkey_dma_addr,
+ AES_BLOCK_SIZE,
+ NS_BIT, 0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ /* Memory Barrier */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+ /* Load GHASH subkey */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->hkey_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+	/* Configure the hash engine to work in GHASH mode. Since it was not
+	 * possible to extend the HASH submodes to add GHASH, the following
+	 * command is necessary in order to select GHASH (per the HW designers).
+	 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+	HW_DESC_SET_CIPHER_DO(&desc[idx], 1); /* 1 = AES_SK RKEK */
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ idx++;
+
+ /* Load GHASH initial STATE (which is 0). (for any hash there is an initial state) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_CONST(&desc[idx], 0x0, AES_BLOCK_SIZE);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_HASH);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_CIPHER_CONFIG1(&desc[idx], HASH_PADDING_ENABLED);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE0);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_gcm_setup_gctr_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int idx = *seq_size;
+
+ /* load key to AES*/
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI, ctx->enckey_dma_addr,
+ ctx->enc_keylen, NS_BIT);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_KEY0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+	if ((req_ctx->cryptlen != 0) && (req_ctx->plaintext_authenticate_only == false)) {
+		/* load AES/CTR initial CTR value inc by 2 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->gcm_iv_inc2_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+ }
+
+ *seq_size = idx;
+}
+
+static inline void ssi_aead_process_gcm_result_desc(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ dma_addr_t mac_result;
+ unsigned int idx = *seq_size;
+
+ if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ mac_result = req_ctx->mac_buf_dma_addr;
+ } else { /* Encrypt */
+ mac_result = req_ctx->icv_dma_addr;
+ }
+
+ /* process(ghash) gcm_block_len */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->gcm_block_len_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_HASH);
+ idx++;
+
+	/* Store GHASH state after GHASH(Associated Data + Cipher + LenBlock) */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_HASH_HW_GHASH);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], req_ctx->mac_buf_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT, 0);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_WRITE_STATE0);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_HASH_to_DOUT);
+ HW_DESC_SET_AES_NOT_HASH_MODE(&desc[idx]);
+	idx++;
+
+	/* load AES/CTR initial CTR value inc by 1 */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_KEY_SIZE_AES(&desc[idx], ctx->enc_keylen);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->gcm_iv_inc1_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_CIPHER_CONFIG0(&desc[idx], DRV_CRYPTO_DIRECTION_ENCRYPT);
+ HW_DESC_SET_SETUP_MODE(&desc[idx], SETUP_LOAD_STATE1);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], S_DIN_to_AES);
+ idx++;
+
+ /* Memory Barrier */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_DIN_NO_DMA(&desc[idx], 0, 0xfffff0);
+ HW_DESC_SET_DOUT_NO_DMA(&desc[idx], 0, 0, 1);
+ idx++;
+
+	/* process GCTR on stored GHASH and store MAC in mac_state */
+ HW_DESC_INIT(&desc[idx]);
+ HW_DESC_SET_CIPHER_MODE(&desc[idx], DRV_CIPHER_GCTR);
+ HW_DESC_SET_DIN_TYPE(&desc[idx], DMA_DLLI,
+ req_ctx->mac_buf_dma_addr,
+ AES_BLOCK_SIZE, NS_BIT);
+ HW_DESC_SET_DOUT_DLLI(&desc[idx], mac_result, ctx->authsize, NS_BIT, 1);
+ HW_DESC_SET_QUEUE_LAST_IND(&desc[idx]);
+ HW_DESC_SET_FLOW_MODE(&desc[idx], DIN_AES_DOUT);
+ idx++;
+
+ *seq_size = idx;
+}
+
+static inline int ssi_aead_gcm(
+ struct aead_request *req,
+ HwDesc_s desc[],
+ unsigned int *seq_size)
+{
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+ unsigned int idx = *seq_size;
+ unsigned int cipher_flow_mode;
+
+ if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ cipher_flow_mode = AES_and_HASH;
+ } else { /* Encrypt */
+ cipher_flow_mode = AES_to_HASH_and_DOUT;
+ }
+
+	/* In RFC 4543 there is no data to encrypt; just copy data from src to dst. */
+	if (req_ctx->plaintext_authenticate_only == true) {
+ ssi_aead_process_cipher_data_desc(req, BYPASS, desc, seq_size);
+ ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size);
+ /* process(ghash) assoc data */
+ ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size);
+ ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size);
+ ssi_aead_process_gcm_result_desc(req, desc, seq_size);
+ idx = *seq_size;
+ return 0;
+ }
+
+	/* For GCM and RFC 4106 */
+ ssi_aead_gcm_setup_ghash_desc(req, desc, seq_size);
+ /* process(ghash) assoc data */
+ if (req->assoclen > 0)
+ ssi_aead_create_assoc_desc(req, DIN_HASH, desc, seq_size);
+ ssi_aead_gcm_setup_gctr_desc(req, desc, seq_size);
+ /* process(gctr+ghash) */
+ if (req_ctx->cryptlen != 0)
+ ssi_aead_process_cipher_data_desc(req, cipher_flow_mode, desc, seq_size);
+ ssi_aead_process_gcm_result_desc(req, desc, seq_size);
+
+ idx = *seq_size;
+ return 0;
+}
+
+#ifdef CC_DEBUG
+static inline void ssi_aead_dump_gcm(
+ const char* title,
+ struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+
+ if (ctx->cipher_mode != DRV_CIPHER_GCTR)
+ return;
+
+ if (title != NULL) {
+ SSI_LOG_DEBUG("----------------------------------------------------------------------------------");
+ SSI_LOG_DEBUG("%s\n", title);
+ }
+
+	SSI_LOG_DEBUG("cipher_mode %d, authsize %d, enc_keylen %d, assoclen %d, cryptlen %d\n",
+		      ctx->cipher_mode, ctx->authsize, ctx->enc_keylen,
+		      req->assoclen, req_ctx->cryptlen);
+
+	if (ctx->enckey != NULL)
+		dump_byte_array("mac key", ctx->enckey, 16);
+
+	dump_byte_array("req->iv", req->iv, AES_BLOCK_SIZE);
+	dump_byte_array("gcm_iv_inc1", req_ctx->gcm_iv_inc1, AES_BLOCK_SIZE);
+	dump_byte_array("gcm_iv_inc2", req_ctx->gcm_iv_inc2, AES_BLOCK_SIZE);
+	dump_byte_array("hkey", req_ctx->hkey, AES_BLOCK_SIZE);
+	dump_byte_array("mac_buf", req_ctx->mac_buf, AES_BLOCK_SIZE);
+	dump_byte_array("gcm_len_block", req_ctx->gcm_len_block.lenA, AES_BLOCK_SIZE);
+
+	if (req->src != NULL && req->cryptlen)
+		dump_byte_array("req->src", sg_virt(req->src),
+				req->cryptlen + req->assoclen);
+
+	if (req->dst != NULL)
+		dump_byte_array("req->dst", sg_virt(req->dst),
+				req->cryptlen + ctx->authsize + req->assoclen);
+}
+#endif
+
+static int config_gcm_context(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+
+ unsigned int cryptlen = (req_ctx->gen_ctx.op_type ==
+ DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ req->cryptlen :
+ (req->cryptlen - ctx->authsize);
+ __be32 counter = cpu_to_be32(2);
+
+	SSI_LOG_DEBUG("config_gcm_context() cryptlen = %d, req->assoclen = %d ctx->authsize = %d\n",
+		      cryptlen, req->assoclen, ctx->authsize);
+
+	memset(req_ctx->hkey, 0, AES_BLOCK_SIZE);
+	memset(req_ctx->mac_buf, 0, AES_BLOCK_SIZE);
+
+ memcpy(req->iv + 12, &counter, 4);
+ memcpy(req_ctx->gcm_iv_inc2, req->iv, 16);
+
+ counter = cpu_to_be32(1);
+ memcpy(req->iv + 12, &counter, 4);
+ memcpy(req_ctx->gcm_iv_inc1, req->iv, 16);
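+	/* Per NIST SP 800-38D (with the 96-bit nonce used here):
+	 * gcm_iv_inc1 is the pre-counter block J0 = IV || 0^31 || 1, used to
+	 * encrypt the final GHASH value into the tag, while gcm_iv_inc2 =
+	 * inc32(J0) (counter value 2) is the first counter block used for
+	 * the payload.
+	 */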
+
+	if (req_ctx->plaintext_authenticate_only == false) {
+		__be64 temp64;
+
+		temp64 = cpu_to_be64(req->assoclen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = cpu_to_be64(cryptlen * 8);
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	} else {
+		/* RFC 4543: all data (AAD, IV, plaintext) is treated as
+		 * additional authenticated data, i.e. nothing is encrypted.
+		 */
+		__be64 temp64;
+
+		temp64 = cpu_to_be64((req->assoclen + GCM_BLOCK_RFC4_IV_SIZE + cryptlen) * 8);
+		memcpy(&req_ctx->gcm_len_block.lenA, &temp64, sizeof(temp64));
+		temp64 = 0;
+		memcpy(&req_ctx->gcm_len_block.lenC, &temp64, 8);
+	}
+
+ return 0;
+}
+
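+/*
+ * Build the full GCM IV for the RFC 4106/4543 flows: the 4-byte nonce taken
+ * from the key material followed by the 8-byte per-request IV. assoclen is
+ * reduced by the IV size, which the callers account for as part of the
+ * associated data.
+ */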
+static void ssi_rfc4_gcm_process(struct aead_request *req)
+{
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+
+ memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_NONCE_OFFSET, ctx->ctr_nonce, GCM_BLOCK_RFC4_NONCE_SIZE);
+ memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_IV_OFFSET, req->iv, GCM_BLOCK_RFC4_IV_SIZE);
+ req->iv = areq_ctx->ctr_iv;
+ req->assoclen -= GCM_BLOCK_RFC4_IV_SIZE;
+}
+
+#endif /*SSI_CC_HAS_AES_GCM*/
+
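+/*
+ * Main AEAD processing path: validate the request sizes, set up the
+ * per-request context and HW IV, map all buffers for DMA, build the HW
+ * descriptor sequence according to the authentication mode and push it to
+ * the HW queue. Completion is reported asynchronously via
+ * ssi_aead_complete(), so the expected return value is -EINPROGRESS.
+ */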
+static int ssi_aead_process(struct aead_request *req, enum drv_crypto_direction direct)
+{
+ int rc = 0;
+ int seq_len = 0;
+ HwDesc_s desc[MAX_AEAD_PROCESS_SEQ];
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ struct device *dev = &ctx->drvdata->plat_dev->dev;
+ struct ssi_crypto_req ssi_req = {};
+
+ DECL_CYCLE_COUNT_RESOURCES;
+
+ SSI_LOG_DEBUG("%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
+ ((direct==DRV_CRYPTO_DIRECTION_ENCRYPT)?"Encrypt":"Decrypt"), ctx, req, req->iv,
+ sg_virt(req->src), req->src->offset, sg_virt(req->dst), req->dst->offset, req->cryptlen);
+
+ /* STAT_PHASE_0: Init and sanity checks */
+ START_CYCLE_COUNT();
+
+ /* Check data length according to mode */
+ if (unlikely(validate_data_size(ctx, direct, req) != 0)) {
+ SSI_LOG_ERR("Unsupported crypt/assoc len %d/%d.\n",
+ req->cryptlen, req->assoclen);
+ crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_BLOCK_LEN);
+ return -EINVAL;
+ }
+
+ /* Setup DX request structure */
+ ssi_req.user_cb = (void *)ssi_aead_complete;
+ ssi_req.user_arg = (void *)req;
+
+#ifdef ENABLE_CYCLE_COUNT
+ ssi_req.op_type = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+ STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE;
+#endif
+ /* Setup request context */
+ areq_ctx->gen_ctx.op_type = direct;
+ areq_ctx->req_authsize = ctx->authsize;
+ areq_ctx->cipher_mode = ctx->cipher_mode;
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_0);
+
+ /* STAT_PHASE_1: Map buffers */
+ START_CYCLE_COUNT();
+
+ if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+ /* Build CTR IV - Copy nonce from last 4 bytes in
+ * CTR key to first 4 bytes in CTR IV */
+ memcpy(areq_ctx->ctr_iv, ctx->ctr_nonce, CTR_RFC3686_NONCE_SIZE);
+		if (areq_ctx->backup_giv == NULL) /* User-provided (non-generated) IV */
+ memcpy(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE,
+ req->iv, CTR_RFC3686_IV_SIZE);
+ /* Initialize counter portion of counter block */
+ *(__be32 *)(areq_ctx->ctr_iv + CTR_RFC3686_NONCE_SIZE +
+ CTR_RFC3686_IV_SIZE) = cpu_to_be32(1);
+
+ /* Replace with counter iv */
+ req->iv = areq_ctx->ctr_iv;
+ areq_ctx->hw_iv_size = CTR_RFC3686_BLOCK_SIZE;
+	} else if ((ctx->cipher_mode == DRV_CIPHER_CCM) ||
+		   (ctx->cipher_mode == DRV_CIPHER_GCTR)) {
+ areq_ctx->hw_iv_size = AES_BLOCK_SIZE;
+ if (areq_ctx->ctr_iv != req->iv) {
+ memcpy(areq_ctx->ctr_iv, req->iv, crypto_aead_ivsize(tfm));
+ req->iv = areq_ctx->ctr_iv;
+ }
+ } else {
+ areq_ctx->hw_iv_size = crypto_aead_ivsize(tfm);
+ }
+
+#if SSI_CC_HAS_AES_CCM
+ if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+ rc = config_ccm_adata(req);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("config_ccm_adata() returned with a failure %d!", rc);
+ goto exit;
+ }
+ } else {
+ areq_ctx->ccm_hdr_size = ccm_header_size_null;
+ }
+#else
+ areq_ctx->ccm_hdr_size = ccm_header_size_null;
+#endif /*SSI_CC_HAS_AES_CCM*/
+
+#if SSI_CC_HAS_AES_GCM
+ if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+ rc = config_gcm_context(req);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("config_gcm_context() returned with a failure %d!", rc);
+ goto exit;
+ }
+ }
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+ rc = ssi_buffer_mgr_map_aead_request(ctx->drvdata, req);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("map_request() failed\n");
+ goto exit;
+ }
+
+ /* do we need to generate IV? */
+ if (areq_ctx->backup_giv != NULL) {
+
+ /* set the DMA mapped IV address*/
+ if (ctx->cipher_mode == DRV_CIPHER_CTR) {
+ ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CTR_RFC3686_NONCE_SIZE;
+ ssi_req.ivgen_dma_addr_len = 1;
+ } else if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+			/*
+			 * In CCM the IV needs to exist both inside B0 and
+			 * inside the counter. It is also copied to
+			 * iv_dma_addr for other reasons (like returning it
+			 * to the user), so three (identical) IV outputs are
+			 * used.
+			 */
+ ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr + CCM_BLOCK_IV_OFFSET;
+ ssi_req.ivgen_dma_addr[1] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_B0_OFFSET + CCM_BLOCK_IV_OFFSET;
+ ssi_req.ivgen_dma_addr[2] = sg_dma_address(&areq_ctx->ccm_adata_sg) + CCM_CTR_COUNT_0_OFFSET + CCM_BLOCK_IV_OFFSET;
+ ssi_req.ivgen_dma_addr_len = 3;
+ } else {
+ ssi_req.ivgen_dma_addr[0] = areq_ctx->gen_ctx.iv_dma_addr;
+ ssi_req.ivgen_dma_addr_len = 1;
+ }
+
+ /* set the IV size (8/16 B long)*/
+ ssi_req.ivgen_size = crypto_aead_ivsize(tfm);
+ }
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_1);
+
+ /* STAT_PHASE_2: Create sequence */
+ START_CYCLE_COUNT();
+
+ /* Load MLLI tables to SRAM if necessary */
+ ssi_aead_load_mlli_to_sram(req, desc, &seq_len);
+
+ /*TODO: move seq len by reference */
+ switch (ctx->auth_mode) {
+ case DRV_HASH_SHA1:
+ case DRV_HASH_SHA256:
+ ssi_aead_hmac_authenc(req, desc, &seq_len);
+ break;
+ case DRV_HASH_XCBC_MAC:
+ ssi_aead_xcbc_authenc(req, desc, &seq_len);
+ break;
+#if ( SSI_CC_HAS_AES_CCM || SSI_CC_HAS_AES_GCM )
+ case DRV_HASH_NULL:
+#if SSI_CC_HAS_AES_CCM
+ if (ctx->cipher_mode == DRV_CIPHER_CCM) {
+ ssi_aead_ccm(req, desc, &seq_len);
+ }
+#endif /*SSI_CC_HAS_AES_CCM*/
+#if SSI_CC_HAS_AES_GCM
+ if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
+ ssi_aead_gcm(req, desc, &seq_len);
+ }
+#endif /*SSI_CC_HAS_AES_GCM*/
+ break;
+#endif
+ default:
+ SSI_LOG_ERR("Unsupported authenc (%d)\n", ctx->auth_mode);
+ ssi_buffer_mgr_unmap_aead_request(dev, req);
+ rc = -ENOTSUPP;
+ goto exit;
+ }
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_2);
+
+ /* STAT_PHASE_3: Lock HW and push sequence */
+ START_CYCLE_COUNT();
+
+ rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 1);
+
+ if (unlikely(rc != -EINPROGRESS)) {
+ SSI_LOG_ERR("send_request() failed (rc=%d)\n", rc);
+ ssi_buffer_mgr_unmap_aead_request(dev, req);
+ }
+
+ END_CYCLE_COUNT(ssi_req.op_type, STAT_PHASE_3);
+exit:
+ return rc;
+}
+
+static int ssi_aead_encrypt(struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc;
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+ areq_ctx->is_gcm4543 = false;
+
+ areq_ctx->plaintext_authenticate_only = false;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+
+ return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_encrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_encrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc = -EINVAL;
+
+ if (!valid_assoclen(req)) {
+		SSI_LOG_ERR("invalid assoclen: %u\n", req->assoclen);
+ goto out;
+ }
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+ areq_ctx->is_gcm4543 = true;
+
+ ssi_rfc4309_ccm_process(req);
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+out:
+ return rc;
+}
+#endif /* SSI_CC_HAS_AES_CCM */
+
+static int ssi_aead_decrypt(struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc;
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+ areq_ctx->is_gcm4543 = false;
+
+ areq_ctx->plaintext_authenticate_only = false;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+
+ return rc;
+}
+
+#if SSI_CC_HAS_AES_CCM
+static int ssi_rfc4309_ccm_decrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_decrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc = -EINVAL;
+
+ if (!valid_assoclen(req)) {
+ SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+ goto out;
+ }
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+
+ areq_ctx->is_gcm4543 = true;
+ ssi_rfc4309_ccm_process(req);
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+
+out:
+ return rc;
+}
+#endif /* SSI_CC_HAS_AES_CCM */
+
+#if SSI_CC_HAS_AES_GCM
+
+static int ssi_rfc4106_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ int rc = 0;
+
+	SSI_LOG_DEBUG("%s: keylen %d, key %p\n", __func__, keylen, key);
+
+ if (keylen < 4)
+ return -EINVAL;
+
+ keylen -= 4;
+ memcpy(ctx->ctr_nonce, key + keylen, 4);
+
+ rc = ssi_aead_setkey(tfm, key, keylen);
+
+ return rc;
+}
+
+static int ssi_rfc4543_gcm_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
+{
+ struct ssi_aead_ctx *ctx = crypto_aead_ctx(tfm);
+ int rc = 0;
+
+	SSI_LOG_DEBUG("%s: keylen %d, key %p\n", __func__, keylen, key);
+
+ if (keylen < 4)
+ return -EINVAL;
+
+ keylen -= 4;
+ memcpy(ctx->ctr_nonce, key + keylen, 4);
+
+ rc = ssi_aead_setkey(tfm, key, keylen);
+
+ return rc;
+}
+
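+/*
+ * Valid GCM tag sizes are 4, 8 and 12..16 bytes, matching the set accepted
+ * by the generic gcm(aes) implementation; anything else is rejected before
+ * reaching the common ssi_aead_setauthsize().
+ */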
+static int ssi_gcm_setauthsize(struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 4:
+ case 8:
+ case 12:
+ case 13:
+ case 14:
+ case 15:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4106_gcm_setauthsize(struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+	SSI_LOG_DEBUG("%s: authsize %d\n", __func__, authsize);
+
+ switch (authsize) {
+ case 8:
+ case 12:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4543_gcm_setauthsize(struct crypto_aead *authenc,
+ unsigned int authsize)
+{
+	SSI_LOG_DEBUG("%s: authsize %d\n", __func__, authsize);
+
+ if (authsize != 16)
+ return -EINVAL;
+
+ return ssi_aead_setauthsize(authenc, authsize);
+}
+
+static int ssi_rfc4106_gcm_encrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_encrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc = -EINVAL;
+
+ if (!valid_assoclen(req)) {
+ SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+ goto out;
+ }
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+
+ areq_ctx->plaintext_authenticate_only = false;
+
+ ssi_rfc4_gcm_process(req);
+ areq_ctx->is_gcm4543 = true;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+out:
+ return rc;
+}
+
+static int ssi_rfc4543_gcm_encrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_encrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc;
+
+	/* The plaintext is not encrypted with RFC 4543 */
+ areq_ctx->plaintext_authenticate_only = true;
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+
+ ssi_rfc4_gcm_process(req);
+ areq_ctx->is_gcm4543 = true;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+
+ return rc;
+}
+
+static int ssi_rfc4106_gcm_decrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_decrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc = -EINVAL;
+
+ if (!valid_assoclen(req)) {
+ SSI_LOG_ERR("invalid Assoclen:%u\n", req->assoclen);
+ goto out;
+ }
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+
+ areq_ctx->plaintext_authenticate_only = false;
+
+ ssi_rfc4_gcm_process(req);
+ areq_ctx->is_gcm4543 = true;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+out:
+ return rc;
+}
+
+static int ssi_rfc4543_gcm_decrypt(struct aead_request *req)
+{
+ /* Very similar to ssi_aead_decrypt() above. */
+
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc;
+
+	/* The plaintext is not decrypted with RFC 4543 */
+ areq_ctx->plaintext_authenticate_only = true;
+
+ /* No generated IV required */
+ areq_ctx->backup_iv = req->iv;
+ areq_ctx->backup_giv = NULL;
+
+ ssi_rfc4_gcm_process(req);
+ areq_ctx->is_gcm4543 = true;
+
+ rc = ssi_aead_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+ if (rc != -EINPROGRESS)
+ req->iv = areq_ctx->backup_iv;
+
+ return rc;
+}
+#endif /* SSI_CC_HAS_AES_GCM */
+
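+/*
+ * Usage sketch (illustrative only, not part of this driver): once
+ * registered, these transforms are reached through the regular kernel
+ * AEAD API, e.g.:
+ *
+ *	struct crypto_aead *tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+ *	struct aead_request *req = aead_request_alloc(tfm, GFP_KERNEL);
+ *
+ *	crypto_aead_setkey(tfm, key, keylen);
+ *	crypto_aead_setauthsize(tfm, 16);
+ *	aead_request_set_crypt(req, src_sg, dst_sg, cryptlen, iv);
+ *	aead_request_set_ad(req, assoclen);
+ *	crypto_aead_encrypt(req);
+ */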
+/* DX Block aead alg */
+static struct ssi_alg_template aead_algs[] = {
+ {
+ .name = "authenc(hmac(sha1),cbc(aes))",
+ .driver_name = "authenc-hmac-sha1-cbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = SHA1_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_SHA1,
+ },
+ {
+ .name = "authenc(hmac(sha1),cbc(des3_ede))",
+ .driver_name = "authenc-hmac-sha1-cbc-des3-dx",
+ .blocksize = DES3_EDE_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = DES3_EDE_BLOCK_SIZE,
+ .maxauthsize = SHA1_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_DES,
+ .auth_mode = DRV_HASH_SHA1,
+ },
+ {
+ .name = "authenc(hmac(sha256),cbc(aes))",
+ .driver_name = "authenc-hmac-sha256-cbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = SHA256_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_SHA256,
+ },
+ {
+ .name = "authenc(hmac(sha256),cbc(des3_ede))",
+ .driver_name = "authenc-hmac-sha256-cbc-des3-dx",
+ .blocksize = DES3_EDE_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = DES3_EDE_BLOCK_SIZE,
+ .maxauthsize = SHA256_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_DES,
+ .auth_mode = DRV_HASH_SHA256,
+ },
+ {
+ .name = "authenc(xcbc(aes),cbc(aes))",
+ .driver_name = "authenc-xcbc-aes-cbc-aes-dx",
+ .blocksize = AES_BLOCK_SIZE,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CBC,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_XCBC_MAC,
+ },
+ {
+ .name = "authenc(hmac(sha1),rfc3686(ctr(aes)))",
+ .driver_name = "authenc-hmac-sha1-rfc3686-ctr-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = CTR_RFC3686_IV_SIZE,
+ .maxauthsize = SHA1_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_SHA1,
+ },
+ {
+ .name = "authenc(hmac(sha256),rfc3686(ctr(aes)))",
+ .driver_name = "authenc-hmac-sha256-rfc3686-ctr-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = CTR_RFC3686_IV_SIZE,
+ .maxauthsize = SHA256_DIGEST_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_SHA256,
+ },
+ {
+ .name = "authenc(xcbc(aes),rfc3686(ctr(aes)))",
+ .driver_name = "authenc-xcbc-aes-rfc3686-ctr-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_aead_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = CTR_RFC3686_IV_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_XCBC_MAC,
+ },
+#if SSI_CC_HAS_AES_CCM
+ {
+ .name = "ccm(aes)",
+ .driver_name = "ccm-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_ccm_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CCM,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_NULL,
+ },
+ {
+ .name = "rfc4309(ccm(aes))",
+ .driver_name = "rfc4309-ccm-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_rfc4309_ccm_setkey,
+ .setauthsize = ssi_rfc4309_ccm_setauthsize,
+ .encrypt = ssi_rfc4309_ccm_encrypt,
+ .decrypt = ssi_rfc4309_ccm_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = CCM_BLOCK_IV_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_CCM,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_NULL,
+ },
+#endif /*SSI_CC_HAS_AES_CCM*/
+#if SSI_CC_HAS_AES_GCM
+ {
+ .name = "gcm(aes)",
+ .driver_name = "gcm-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_aead_setkey,
+ .setauthsize = ssi_gcm_setauthsize,
+ .encrypt = ssi_aead_encrypt,
+ .decrypt = ssi_aead_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = 12,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_GCTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_NULL,
+ },
+ {
+ .name = "rfc4106(gcm(aes))",
+ .driver_name = "rfc4106-gcm-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_rfc4106_gcm_setkey,
+ .setauthsize = ssi_rfc4106_gcm_setauthsize,
+ .encrypt = ssi_rfc4106_gcm_encrypt,
+ .decrypt = ssi_rfc4106_gcm_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = GCM_BLOCK_RFC4_IV_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_GCTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_NULL,
+ },
+ {
+ .name = "rfc4543(gcm(aes))",
+ .driver_name = "rfc4543-gcm-aes-dx",
+ .blocksize = 1,
+ .type = CRYPTO_ALG_TYPE_AEAD,
+ .template_aead = {
+ .setkey = ssi_rfc4543_gcm_setkey,
+ .setauthsize = ssi_rfc4543_gcm_setauthsize,
+ .encrypt = ssi_rfc4543_gcm_encrypt,
+ .decrypt = ssi_rfc4543_gcm_decrypt,
+ .init = ssi_aead_init,
+ .exit = ssi_aead_exit,
+ .ivsize = GCM_BLOCK_RFC4_IV_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ },
+ .cipher_mode = DRV_CIPHER_GCTR,
+ .flow_mode = S_DIN_to_AES,
+ .auth_mode = DRV_HASH_NULL,
+ },
+#endif /*SSI_CC_HAS_AES_GCM*/
+};
+
+static struct ssi_crypto_alg *ssi_aead_create_alg(struct ssi_alg_template *template)
+{
+ struct ssi_crypto_alg *t_alg;
+ struct aead_alg *alg;
+
+ t_alg = kzalloc(sizeof(struct ssi_crypto_alg), GFP_KERNEL);
+ if (!t_alg) {
+ SSI_LOG_ERR("failed to allocate t_alg\n");
+ return ERR_PTR(-ENOMEM);
+ }
+ alg = &template->template_aead;
+
+ snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name);
+ snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+ template->driver_name);
+ alg->base.cra_module = THIS_MODULE;
+ alg->base.cra_priority = SSI_CRA_PRIO;
+
+ alg->base.cra_ctxsize = sizeof(struct ssi_aead_ctx);
+ alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
+ template->type;
+ alg->init = ssi_aead_init;
+ alg->exit = ssi_aead_exit;
+
+ t_alg->aead_alg = *alg;
+
+ t_alg->cipher_mode = template->cipher_mode;
+ t_alg->flow_mode = template->flow_mode;
+ t_alg->auth_mode = template->auth_mode;
+
+ return t_alg;
+}
+
+int ssi_aead_free(struct ssi_drvdata *drvdata)
+{
+ struct ssi_crypto_alg *t_alg, *n;
+ struct ssi_aead_handle *aead_handle =
+ (struct ssi_aead_handle *)drvdata->aead_handle;
+
+ if (aead_handle != NULL) {
+ /* Remove registered algs */
+ list_for_each_entry_safe(t_alg, n, &aead_handle->aead_list, entry) {
+ crypto_unregister_aead(&t_alg->aead_alg);
+ list_del(&t_alg->entry);
+ kfree(t_alg);
+ }
+ kfree(aead_handle);
+ drvdata->aead_handle = NULL;
+ }
+
+ return 0;
+}
+
+int ssi_aead_alloc(struct ssi_drvdata *drvdata)
+{
+ struct ssi_aead_handle *aead_handle;
+ struct ssi_crypto_alg *t_alg;
+ int rc = -ENOMEM;
+ int alg;
+
+ aead_handle = kmalloc(sizeof(struct ssi_aead_handle), GFP_KERNEL);
+ if (aead_handle == NULL) {
+ rc = -ENOMEM;
+ goto fail0;
+ }
+
+ drvdata->aead_handle = aead_handle;
+
+ aead_handle->sram_workspace_addr = ssi_sram_mgr_alloc(
+ drvdata, MAX_HMAC_DIGEST_SIZE);
+ if (aead_handle->sram_workspace_addr == NULL_SRAM_ADDR) {
+ SSI_LOG_ERR("SRAM pool exhausted\n");
+ rc = -ENOMEM;
+ goto fail1;
+ }
+
+ INIT_LIST_HEAD(&aead_handle->aead_list);
+
+ /* Linux crypto */
+ for (alg = 0; alg < ARRAY_SIZE(aead_algs); alg++) {
+ t_alg = ssi_aead_create_alg(&aead_algs[alg]);
+ if (IS_ERR(t_alg)) {
+ rc = PTR_ERR(t_alg);
+ SSI_LOG_ERR("%s alg allocation failed\n",
+ aead_algs[alg].driver_name);
+ goto fail1;
+ }
+ t_alg->drvdata = drvdata;
+ rc = crypto_register_aead(&t_alg->aead_alg);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("%s alg registration failed\n",
+ t_alg->aead_alg.base.cra_driver_name);
+ goto fail2;
+ } else {
+ list_add_tail(&t_alg->entry, &aead_handle->aead_list);
+ SSI_LOG_DEBUG("Registered %s\n", t_alg->aead_alg.base.cra_driver_name);
+ }
+ }
+
+ return 0;
+
+fail2:
+ kfree(t_alg);
+fail1:
+ ssi_aead_free(drvdata);
+fail0:
+ return rc;
+}
+
diff --git a/drivers/staging/ccree/ssi_aead.h b/drivers/staging/ccree/ssi_aead.h
new file mode 100644
index 0000000..95f30d8
--- /dev/null
+++ b/drivers/staging/ccree/ssi_aead.h
@@ -0,0 +1,120 @@
+/*
+ * Copyright (C) 2012-2016 ARM Limited or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/* \file ssi_aead.h
+ * ARM CryptoCell AEAD Crypto API
+ */
+
+#ifndef __SSI_AEAD_H__
+#define __SSI_AEAD_H__
+
+#include <linux/kernel.h>
+#include <crypto/algapi.h>
+#include <crypto/ctr.h>
+
+/* mac_cmp - HW writes 8 B but all bytes hold the same value */
+#define ICV_CMP_SIZE 8
+#define CCM_CONFIG_BUF_SIZE (AES_BLOCK_SIZE*3)
+#define MAX_MAC_SIZE MAX(SHA256_DIGEST_SIZE, AES_BLOCK_SIZE)
+
+/* defines for AES GCM configuration buffer */
+#define GCM_BLOCK_LEN_SIZE 8
+
+#define GCM_BLOCK_RFC4_IV_OFFSET 4
+#define GCM_BLOCK_RFC4_IV_SIZE 8 /* IV size for rfc's */
+#define GCM_BLOCK_RFC4_NONCE_OFFSET 0
+#define GCM_BLOCK_RFC4_NONCE_SIZE 4
+
+/* Offsets into AES CCM configuration buffer */
+#define CCM_B0_OFFSET 0
+#define CCM_A0_OFFSET 16
+#define CCM_CTR_COUNT_0_OFFSET 32
+/* CCM B0 and CTR_COUNT constants. */
+#define CCM_BLOCK_NONCE_OFFSET 1 /* Nonce offset inside B0 and CTR_COUNT */
+#define CCM_BLOCK_NONCE_SIZE 3 /* Nonce size inside B0 and CTR_COUNT */
+#define CCM_BLOCK_IV_OFFSET 4 /* IV offset inside B0 and CTR_COUNT */
+#define CCM_BLOCK_IV_SIZE 8 /* IV size inside B0 and CTR_COUNT */
+
+enum aead_ccm_header_size {
+ ccm_header_size_null = -1,
+ ccm_header_size_zero = 0,
+ ccm_header_size_2 = 2,
+ ccm_header_size_6 = 6,
+ ccm_header_size_max = INT32_MAX
+};
+
+struct aead_req_ctx {
+	/*
+	 * Allocate a whole cache line although only 4 bytes are needed, to
+	 * ensure the next field starts on a cache line boundary.
+	 * Used both for the digest HW compare and the CCM/GCM MAC value.
+	 */
+ uint8_t mac_buf[MAX_MAC_SIZE] ____cacheline_aligned;
+ uint8_t ctr_iv[AES_BLOCK_SIZE] ____cacheline_aligned;
+
+	/* used in GCM */
+ uint8_t gcm_iv_inc1[AES_BLOCK_SIZE] ____cacheline_aligned;
+ uint8_t gcm_iv_inc2[AES_BLOCK_SIZE] ____cacheline_aligned;
+ uint8_t hkey[AES_BLOCK_SIZE] ____cacheline_aligned;
+ struct {
+ uint8_t lenA[GCM_BLOCK_LEN_SIZE] ____cacheline_aligned;
+		uint8_t lenC[GCM_BLOCK_LEN_SIZE];
+ } gcm_len_block;
+
+ uint8_t ccm_config[CCM_CONFIG_BUF_SIZE] ____cacheline_aligned;
+ unsigned int hw_iv_size ____cacheline_aligned; /*HW actual size input*/
+ uint8_t backup_mac[MAX_MAC_SIZE]; /*used to prevent cache coherence problem*/
+ uint8_t *backup_iv; /*store iv for generated IV flow*/
+ uint8_t *backup_giv; /*store iv for rfc3686(ctr) flow*/
+ dma_addr_t mac_buf_dma_addr; /* internal ICV DMA buffer */
+ dma_addr_t ccm_iv0_dma_addr; /* buffer for internal ccm configurations */
+ dma_addr_t icv_dma_addr; /* Phys. address of ICV */
+
+	/* used in GCM */
+ dma_addr_t gcm_iv_inc1_dma_addr; /* buffer for internal gcm configurations */
+ dma_addr_t gcm_iv_inc2_dma_addr; /* buffer for internal gcm configurations */
+ dma_addr_t hkey_dma_addr; /* Phys. address of hkey */
+ dma_addr_t gcm_block_len_dma_addr; /* Phys. address of gcm block len */
+ bool is_gcm4543;
+
+ uint8_t *icv_virt_addr; /* Virt. address of ICV */
+ struct async_gen_req_ctx gen_ctx;
+ struct ssi_mlli assoc;
+ struct ssi_mlli src;
+ struct ssi_mlli dst;
+	struct scatterlist *srcSgl;
+	struct scatterlist *dstSgl;
+ unsigned int srcOffset;
+ unsigned int dstOffset;
+ enum ssi_req_dma_buf_type assoc_buff_type;
+ enum ssi_req_dma_buf_type data_buff_type;
+ struct mlli_params mlli_params;
+ unsigned int cryptlen;
+ struct scatterlist ccm_adata_sg;
+ enum aead_ccm_header_size ccm_hdr_size;
+ unsigned int req_authsize;
+ enum drv_cipher_mode cipher_mode;
+ bool is_icv_fragmented;
+ bool is_single_pass;
+	bool plaintext_authenticate_only; /* for rfc4543 (GMAC mode) */
+};
+
+int ssi_aead_alloc(struct ssi_drvdata *drvdata);
+int ssi_aead_free(struct ssi_drvdata *drvdata);
+
+#endif /*__SSI_AEAD_H__*/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 6a9c964..06935b1 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -17,6 +17,7 @@
#include <linux/crypto.h>
#include <linux/version.h>
#include <crypto/algapi.h>
+#include <crypto/internal/aead.h>
#include <crypto/hash.h>
#include <crypto/authenc.h>
#include <crypto/scatterwalk.h>
@@ -30,6 +31,7 @@
#include "cc_lli_defs.h"
#include "ssi_cipher.h"
#include "ssi_hash.h"
+#include "ssi_aead.h"

#define LLI_MAX_NUM_OF_DATA_ENTRIES 128
#define LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES 4
@@ -486,6 +488,42 @@ static int ssi_buffer_mgr_map_scatterlist(
return 0;
}

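+/*
+ * Map the CCM configuration buffer (B0/A0/CTR_COUNT_0 blocks plus the
+ * a-data header) as a single-entry scatterlist and, when associated data
+ * is present, queue it so it can be folded into the assoc MLLI table.
+ */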
+static inline int
+ssi_aead_handle_config_buf(struct device *dev,
+ struct aead_req_ctx *areq_ctx,
+			   uint8_t *config_data,
+ struct buffer_array *sg_data,
+ unsigned int assoclen)
+{
+	SSI_LOG_DEBUG("handle additional data config set to DLLI\n");
+ /* create sg for the current buffer */
+ sg_init_one(&areq_ctx->ccm_adata_sg, config_data, AES_BLOCK_SIZE + areq_ctx->ccm_hdr_size);
+ if (unlikely(dma_map_sg(dev, &areq_ctx->ccm_adata_sg, 1,
+ DMA_TO_DEVICE) != 1)) {
+		SSI_LOG_ERR("dma_map_sg() config buffer failed\n");
+ return -ENOMEM;
+ }
+ SSI_LOG_DEBUG("Mapped curr_buff: dma_address=0x%llX "
+ "page_link=0x%08lX addr=%pK "
+ "offset=%u length=%u\n",
+ (unsigned long long)sg_dma_address(&areq_ctx->ccm_adata_sg),
+ areq_ctx->ccm_adata_sg.page_link,
+ sg_virt(&areq_ctx->ccm_adata_sg),
+ areq_ctx->ccm_adata_sg.offset,
+ areq_ctx->ccm_adata_sg.length);
+ /* prepare for case of MLLI */
+ if (assoclen > 0) {
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data, 1,
+ &areq_ctx->ccm_adata_sg,
+ (AES_BLOCK_SIZE +
+ areq_ctx->ccm_hdr_size), 0,
+ false, NULL);
+ }
+ return 0;
+}
+
static inline int ssi_ahash_handle_curr_buf(struct device *dev,
struct ahash_req_ctx *areq_ctx,
uint8_t* curr_buff,
@@ -666,6 +704,867 @@ int ssi_buffer_mgr_map_blkcipher_request(
return rc;
}

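+/*
+ * Undo everything ssi_buffer_mgr_map_aead_request() did: unmap the MAC and
+ * mode-specific (GCM/CCM) configuration buffers, the IV, any MLLI table
+ * taken from the pool and finally the src/dst scatterlists themselves.
+ */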
+void ssi_buffer_mgr_unmap_aead_request(
+ struct device *dev, struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ uint32_t dummy;
+ bool chained;
+ uint32_t size_to_unmap = 0;
+
+ if (areq_ctx->mac_buf_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr);
+ dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
+ MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
+ }
+
+#if SSI_CC_HAS_AES_GCM
+ if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
+ if (areq_ctx->hkey_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr);
+ dma_unmap_single(dev, areq_ctx->hkey_dma_addr,
+ AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
+ }
+
+ if (areq_ctx->gcm_block_len_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr);
+ dma_unmap_single(dev, areq_ctx->gcm_block_len_dma_addr,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+ }
+
+ if (areq_ctx->gcm_iv_inc1_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr);
+ dma_unmap_single(dev, areq_ctx->gcm_iv_inc1_dma_addr,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+ }
+
+ if (areq_ctx->gcm_iv_inc2_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr);
+ dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+ }
+ }
+#endif
+
+ if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+ if (areq_ctx->ccm_iv0_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr);
+ dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+ }
+
+		dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg, 1, DMA_TO_DEVICE);
+ }
+ if (areq_ctx->gen_ctx.iv_dma_addr != 0) {
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr);
+ dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
+ hw_iv_size, DMA_BIDIRECTIONAL);
+ }
+
+	/*
+	 * In case a pool was set, a table was allocated and
+	 * should be released.
+	 */
+ if (areq_ctx->mlli_params.curr_pool != NULL) {
+ SSI_LOG_DEBUG("free MLLI buffer: dma=0x%08llX virt=%pK\n",
+ (unsigned long long)areq_ctx->mlli_params.mlli_dma_addr,
+ areq_ctx->mlli_params.mlli_virt_addr);
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->mlli_params.mlli_dma_addr);
+ dma_pool_free(areq_ctx->mlli_params.curr_pool,
+ areq_ctx->mlli_params.mlli_virt_addr,
+ areq_ctx->mlli_params.mlli_dma_addr);
+ }
+
+	SSI_LOG_DEBUG("Unmapping src sgl: req->src=%pK src.nents=%u assoc.nents=%u assoclen=%u cryptlen=%u\n",
+		      sg_virt(req->src), areq_ctx->src.nents,
+		      areq_ctx->assoc.nents, req->assoclen, req->cryptlen);
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->src));
+	size_to_unmap = req->assoclen + req->cryptlen;
+	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
+		size_to_unmap += areq_ctx->req_authsize;
+ if (areq_ctx->is_gcm4543)
+ size_to_unmap += crypto_aead_ivsize(tfm);
+
+	dma_unmap_sg(dev, req->src,
+		     ssi_buffer_mgr_get_sgl_nents(req->src, size_to_unmap,
+						  &dummy, &chained),
+		     DMA_BIDIRECTIONAL);
+ if (unlikely(req->src != req->dst)) {
+ SSI_LOG_DEBUG("Unmapping dst sgl: req->dst=%pK\n",
+ sg_virt(req->dst));
+ SSI_RESTORE_DMA_ADDR_TO_48BIT(sg_dma_address(req->dst));
+		dma_unmap_sg(dev, req->dst,
+			     ssi_buffer_mgr_get_sgl_nents(req->dst, size_to_unmap,
+							  &dummy, &chained),
+			     DMA_BIDIRECTIONAL);
+ }
+#if DX_HAS_ACP
+	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+	    likely(req->src == req->dst)) {
+ uint32_t size_to_skip = req->assoclen;
+ if (areq_ctx->is_gcm4543) {
+ size_to_skip += crypto_aead_ivsize(tfm);
+ }
+		/*
+		 * Copy the MAC to a temporary location to avoid possible
+		 * data corruption caused by cache coherence problems.
+		 */
+		ssi_buffer_mgr_copy_scatterlist_portion(
+			areq_ctx->backup_mac, req->src,
+			size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+			size_to_skip + req->cryptlen, SSI_SG_FROM_BUF);
+ }
+#endif
+}
+
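+/*
+ * Work out how many scatterlist entries hold the ICV and whether it is
+ * fragmented across entries: 0 means the ICV is attached to the data in
+ * the last entry, 1..2 cover the (possibly fragmented) tail cases, and a
+ * negative value flags an unsupported layout.
+ */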
+static inline int ssi_buffer_mgr_get_aead_icv_nents(
+ struct scatterlist *sgl,
+ unsigned int sgl_nents,
+ unsigned int authsize,
+ uint32_t last_entry_data_size,
+ bool *is_icv_fragmented)
+{
+ unsigned int icv_max_size = 0;
+	unsigned int icv_required_size = authsize > last_entry_data_size ?
+					 (authsize - last_entry_data_size) :
+					 authsize;
+	int nents;
+ unsigned int i;
+
+ if (sgl_nents < MAX_ICV_NENTS_SUPPORTED) {
+ *is_icv_fragmented = false;
+ return 0;
+ }
+
+	for (i = 0; i < (sgl_nents - MAX_ICV_NENTS_SUPPORTED); i++) {
+		if (sgl == NULL)
+			break;
+		sgl = sg_next(sgl);
+	}
+
+	if (sgl != NULL)
+		icv_max_size = sgl->length;
+
+ if (last_entry_data_size > authsize) {
+ nents = 0; /* ICV attached to data in last entry (not fragmented!) */
+ *is_icv_fragmented = false;
+ } else if (last_entry_data_size == authsize) {
+ nents = 1; /* ICV placed in whole last entry (not fragmented!) */
+ *is_icv_fragmented = false;
+ } else if (icv_max_size > icv_required_size) {
+ nents = 1;
+ *is_icv_fragmented = true;
+ } else if (icv_max_size == icv_required_size) {
+ nents = 2;
+ *is_icv_fragmented = true;
+ } else {
+ SSI_LOG_ERR("Unsupported num. of ICV fragments (> %d)\n",
+ MAX_ICV_NENTS_SUPPORTED);
+ nents = -1; /*unsupported*/
+ }
+	SSI_LOG_DEBUG("is_frag=%s icv_nents=%d\n",
+		      (*is_icv_fragmented ? "true" : "false"), nents);
+
+ return nents;
+}
+
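+/*
+ * DMA-map the request IV. For the RFC 4543 (GMAC) flow the IV must also be
+ * authenticated, so it is chained into the assoc MLLI list right after the
+ * associated data.
+ */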
+static inline int ssi_buffer_mgr_aead_chain_iv(
+ struct ssi_drvdata *drvdata,
+ struct aead_request *req,
+ struct buffer_array *sg_data,
+ bool is_last, bool do_chain)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ struct device *dev = &drvdata->plat_dev->dev;
+ int rc = 0;
+
+ if (unlikely(req->iv == NULL)) {
+ areq_ctx->gen_ctx.iv_dma_addr = 0;
+ goto chain_iv_exit;
+ }
+
+ areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv,
+ hw_iv_size, DMA_BIDIRECTIONAL);
+ if (unlikely(dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr))) {
+ SSI_LOG_ERR("Mapping iv %u B at va=%pK for DMA failed\n",
+ hw_iv_size, req->iv);
+ rc = -ENOMEM;
+ goto chain_iv_exit;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr, hw_iv_size);
+
+ SSI_LOG_DEBUG("Mapped iv %u B at va=%pK to dma=0x%llX\n",
+ hw_iv_size, req->iv,
+ (unsigned long long)areq_ctx->gen_ctx.iv_dma_addr);
+	/* TODO: check whether this is also needed for CTR */
+	if (do_chain && areq_ctx->plaintext_authenticate_only) {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ unsigned int iv_size_to_authenc = crypto_aead_ivsize(tfm);
+ unsigned int iv_ofs = GCM_BLOCK_RFC4_IV_OFFSET;
+ /* Chain to given list */
+ ssi_buffer_mgr_add_buffer_entry(
+ sg_data, areq_ctx->gen_ctx.iv_dma_addr + iv_ofs,
+ iv_size_to_authenc, is_last,
+ &areq_ctx->assoc.mlli_nents);
+ areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+ }
+
+chain_iv_exit:
+ return rc;
+}
+
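+/*
+ * Count the already-mapped src entries that carry the associated data and
+ * chain them: a single entry can go out as DLLI, anything else (or an
+ * explicit do_chain request, or the CCM header case) is added to the MLLI
+ * table.
+ */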
+static inline int ssi_buffer_mgr_aead_chain_assoc(
+ struct ssi_drvdata *drvdata,
+ struct aead_request *req,
+ struct buffer_array *sg_data,
+ bool is_last, bool do_chain)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ int rc = 0;
+ uint32_t mapped_nents = 0;
+ struct scatterlist *current_sg = req->src;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ unsigned int sg_index = 0;
+ uint32_t size_of_assoc = req->assoclen;
+
+ if (areq_ctx->is_gcm4543) {
+ size_of_assoc += crypto_aead_ivsize(tfm);
+ }
+
+ if (sg_data == NULL) {
+ rc = -EINVAL;
+ goto chain_assoc_exit;
+ }
+
+ if (unlikely(req->assoclen == 0)) {
+ areq_ctx->assoc_buff_type = SSI_DMA_BUF_NULL;
+ areq_ctx->assoc.nents = 0;
+ areq_ctx->assoc.mlli_nents = 0;
+ SSI_LOG_DEBUG("Chain assoc of length 0: buff_type=%s nents=%u\n",
+ GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
+ areq_ctx->assoc.nents);
+ goto chain_assoc_exit;
+ }
+
+	/*
+	 * Iterate over the sgl to count the entries holding associated data.
+	 * It is assumed that if we reach here the sgl is already mapped.
+	 */
+	sg_index = current_sg->length;
+	if (sg_index > size_of_assoc) {
+		/* The first entry in the scatterlist holds all the assoc data */
+		mapped_nents++;
+	} else {
+		while (sg_index <= size_of_assoc) {
+			current_sg = sg_next(current_sg);
+			/* Reaching the end of the sgl here is unexpected */
+			if (current_sg == NULL) {
+				SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+				BUG();
+			}
+			sg_index += current_sg->length;
+			mapped_nents++;
+		}
+	}
+ if (unlikely(mapped_nents > LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) {
+ SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+ mapped_nents, LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
+ return -ENOMEM;
+ }
+ areq_ctx->assoc.nents = mapped_nents;
+
+ /* in CCM case we have additional entry for
+ * ccm header configurations */
+ if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+ if (unlikely((mapped_nents + 1) >
+ LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES)) {
+			SSI_LOG_ERR("CCM case: too many fragments. current %d max %d\n",
+				    (areq_ctx->assoc.nents + 1),
+				    LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES);
+ rc = -ENOMEM;
+ goto chain_assoc_exit;
+ }
+ }
+
+ if (likely(mapped_nents == 1) &&
+ (areq_ctx->ccm_hdr_size == ccm_header_size_null))
+ areq_ctx->assoc_buff_type = SSI_DMA_BUF_DLLI;
+ else
+ areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+
+	if (unlikely(do_chain ||
+		     (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI))) {
+
+ SSI_LOG_DEBUG("Chain assoc: buff_type=%s nents=%u\n",
+ GET_DMA_BUFFER_TYPE(areq_ctx->assoc_buff_type),
+ areq_ctx->assoc.nents);
+ ssi_buffer_mgr_add_scatterlist_entry(
+ sg_data, areq_ctx->assoc.nents,
+ req->src, req->assoclen, 0, is_last,
+ &areq_ctx->assoc.mlli_nents);
+ areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+ }
+
+chain_assoc_exit:
+ return rc;
+}
+
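+/*
+ * DLLI (single-entry) case: the ICV simply lives in the last authsize
+ * bytes of the relevant scatterlist - src for in-place and decrypt
+ * operations, dst for non-in-place encrypt.
+ */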
+static inline void ssi_buffer_mgr_prepare_aead_data_dlli(
+ struct aead_request *req,
+ uint32_t *src_last_bytes, uint32_t *dst_last_bytes)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+ unsigned int authsize = areq_ctx->req_authsize;
+
+	areq_ctx->is_icv_fragmented = false;
+	if (likely(req->src == req->dst)) {
+		/* INPLACE */
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->srcSgl) +
+					 (*src_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->srcSgl) +
+					  (*src_last_bytes - authsize);
+	} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+		/* NON-INPLACE and DECRYPT */
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->srcSgl) +
+					 (*src_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->srcSgl) +
+					  (*src_last_bytes - authsize);
+	} else {
+		/* NON-INPLACE and ENCRYPT */
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->dstSgl) +
+					 (*dst_last_bytes - authsize);
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->dstSgl) +
+					  (*dst_last_bytes - authsize);
+	}
+}
+
+static inline int ssi_buffer_mgr_prepare_aead_data_mlli(
+ struct ssi_drvdata *drvdata,
+ struct aead_request *req,
+ struct buffer_array *sg_data,
+ uint32_t *src_last_bytes, uint32_t *dst_last_bytes,
+ bool is_last_table)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+ unsigned int authsize = areq_ctx->req_authsize;
+ int rc = 0, icv_nents;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+
+ if (likely(req->src == req->dst)) {
+ /*INPLACE*/
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+ areq_ctx->src.nents, areq_ctx->srcSgl,
+ areq_ctx->cryptlen,areq_ctx->srcOffset, is_last_table,
+ &areq_ctx->src.mlli_nents);
+
+ icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+ areq_ctx->src.nents, authsize, *src_last_bytes,
+ &areq_ctx->is_icv_fragmented);
+ if (unlikely(icv_nents < 0)) {
+ rc = -ENOTSUPP;
+ goto prepare_data_mlli_exit;
+ }
+
+		if (unlikely(areq_ctx->is_icv_fragmented)) {
+			/*
+			 * The backup is done only when the ICV is fragmented;
+			 * ICV verification is then made by CPU compare to
+			 * simplify MAC verification upon request completion.
+			 */
+			if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+#if !DX_HAS_ACP
+				/*
+				 * On ACP platforms the ICV is already copied
+				 * for any in-place decrypt operation, so this
+				 * code must be skipped there.
+				 */
+ uint32_t size_to_skip = req->assoclen;
+ if (areq_ctx->is_gcm4543) {
+ size_to_skip += crypto_aead_ivsize(tfm);
+ }
+				ssi_buffer_mgr_copy_scatterlist_portion(
+					areq_ctx->backup_mac, req->src,
+					size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+					size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+#endif
+ areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+ } else {
+ areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+ areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+ }
+		} else { /* Contiguous ICV */
+			/* TODO: handle the case where the sg is not contiguous */
+ areq_ctx->icv_dma_addr = sg_dma_address(
+ &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+ (*src_last_bytes - authsize);
+ areq_ctx->icv_virt_addr = sg_virt(
+ &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+ (*src_last_bytes - authsize);
+ }
+
+ } else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
+ /*NON-INPLACE and DECRYPT*/
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+ areq_ctx->src.nents, areq_ctx->srcSgl,
+ areq_ctx->cryptlen, areq_ctx->srcOffset,is_last_table,
+ &areq_ctx->src.mlli_nents);
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+ areq_ctx->dst.nents, areq_ctx->dstSgl,
+ areq_ctx->cryptlen,areq_ctx->dstOffset, is_last_table,
+ &areq_ctx->dst.mlli_nents);
+
+ icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->srcSgl,
+ areq_ctx->src.nents, authsize, *src_last_bytes,
+ &areq_ctx->is_icv_fragmented);
+ if (unlikely(icv_nents < 0)) {
+ rc = -ENOTSUPP;
+ goto prepare_data_mlli_exit;
+ }
+
+		if (unlikely(areq_ctx->is_icv_fragmented)) {
+			/*
+			 * The backup is done only when the ICV is fragmented;
+			 * ICV verification is then made by CPU compare to
+			 * simplify MAC verification upon request completion.
+			 */
+ uint32_t size_to_skip = req->assoclen;
+ if (areq_ctx->is_gcm4543) {
+ size_to_skip += crypto_aead_ivsize(tfm);
+ }
+			ssi_buffer_mgr_copy_scatterlist_portion(
+				areq_ctx->backup_mac, req->src,
+				size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+				size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+ areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
+		} else { /* Contiguous ICV */
+			/* TODO: handle the case where the sg is not contiguous */
+ areq_ctx->icv_dma_addr = sg_dma_address(
+ &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+ (*src_last_bytes - authsize);
+ areq_ctx->icv_virt_addr = sg_virt(
+ &areq_ctx->srcSgl[areq_ctx->src.nents - 1]) +
+ (*src_last_bytes - authsize);
+ }
+
+ } else {
+ /*NON-INPLACE and ENCRYPT*/
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+ areq_ctx->dst.nents, areq_ctx->dstSgl,
+ areq_ctx->cryptlen,areq_ctx->dstOffset, is_last_table,
+ &areq_ctx->dst.mlli_nents);
+ ssi_buffer_mgr_add_scatterlist_entry(sg_data,
+ areq_ctx->src.nents, areq_ctx->srcSgl,
+ areq_ctx->cryptlen, areq_ctx->srcOffset,is_last_table,
+ &areq_ctx->src.mlli_nents);
+
+ icv_nents = ssi_buffer_mgr_get_aead_icv_nents(areq_ctx->dstSgl,
+ areq_ctx->dst.nents, authsize, *dst_last_bytes,
+ &areq_ctx->is_icv_fragmented);
+ if (unlikely(icv_nents < 0)) {
+ rc = -ENOTSUPP;
+ goto prepare_data_mlli_exit;
+ }
+
+		if (likely(!areq_ctx->is_icv_fragmented)) {
+ /* Contig. ICV */
+ areq_ctx->icv_dma_addr = sg_dma_address(
+ &areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+ (*dst_last_bytes - authsize);
+ areq_ctx->icv_virt_addr = sg_virt(
+ &areq_ctx->dstSgl[areq_ctx->dst.nents - 1]) +
+ (*dst_last_bytes - authsize);
+ } else {
+ areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
+ areq_ctx->icv_virt_addr = areq_ctx->mac_buf;
+ }
+ }
+
+prepare_data_mlli_exit:
+ return rc;
+}
+
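+/*
+ * Chain the cipher data: skip over the associated data (and the IV in the
+ * GCM4543 case) to find where the payload starts in src/dst, map dst
+ * separately when the operation is not in-place, and pick DLLI for the
+ * single-fragment case or MLLI otherwise.
+ */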
+static inline int ssi_buffer_mgr_aead_chain_data(
+ struct ssi_drvdata *drvdata,
+ struct aead_request *req,
+ struct buffer_array *sg_data,
+ bool is_last_table, bool do_chain)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ struct device *dev = &drvdata->plat_dev->dev;
+ enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
+ unsigned int authsize = areq_ctx->req_authsize;
+ int src_last_bytes = 0, dst_last_bytes = 0;
+ int rc = 0;
+ uint32_t src_mapped_nents = 0, dst_mapped_nents = 0;
+ uint32_t offset = 0;
+	unsigned int size_for_map = req->assoclen + req->cryptlen; /* non-inplace mode */
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ uint32_t sg_index = 0;
+ bool chained = false;
+ bool is_gcm4543 = areq_ctx->is_gcm4543;
+	uint32_t size_to_skip = req->assoclen;
+
+	if (is_gcm4543)
+		size_to_skip += crypto_aead_ivsize(tfm);
+	offset = size_to_skip;
+
+ if (sg_data == NULL) {
+ rc = -EINVAL;
+ goto chain_data_exit;
+ }
+ areq_ctx->srcSgl = req->src;
+ areq_ctx->dstSgl = req->dst;
+
+ if (is_gcm4543) {
+ size_for_map += crypto_aead_ivsize(tfm);
+ }
+
+ size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? authsize:0;
+	src_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->src, size_for_map,
+							&src_last_bytes, &chained);
+	sg_index = areq_ctx->srcSgl->length;
+	/* Find where the data starts */
+	while (sg_index <= size_to_skip) {
+		offset -= areq_ctx->srcSgl->length;
+		areq_ctx->srcSgl = sg_next(areq_ctx->srcSgl);
+		/* Reaching the end of the sgl here is unexpected */
+		if (areq_ctx->srcSgl == NULL) {
+			SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+			BUG();
+		}
+		sg_index += areq_ctx->srcSgl->length;
+		src_mapped_nents--;
+	}
+	if (unlikely(src_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+		SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+			    src_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+		return -ENOMEM;
+	}
+
+ areq_ctx->src.nents = src_mapped_nents;
+
+ areq_ctx->srcOffset = offset;
+
+	if (req->src != req->dst) {
+		size_for_map = req->assoclen + req->cryptlen;
+		size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+				authsize : 0;
+		if (is_gcm4543)
+			size_for_map += crypto_aead_ivsize(tfm);
+
+ rc = ssi_buffer_mgr_map_scatterlist(dev, req->dst, size_for_map,
+ DMA_BIDIRECTIONAL, &(areq_ctx->dst.nents),
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+ &dst_mapped_nents);
+ if (unlikely(rc != 0)) {
+ rc = -ENOMEM;
+ goto chain_data_exit;
+ }
+ }
+
+	dst_mapped_nents = ssi_buffer_mgr_get_sgl_nents(req->dst, size_for_map,
+							&dst_last_bytes, &chained);
+	sg_index = areq_ctx->dstSgl->length;
+	offset = size_to_skip;
+
+	/* Find where the data starts */
+	while (sg_index <= size_to_skip) {
+		offset -= areq_ctx->dstSgl->length;
+		areq_ctx->dstSgl = sg_next(areq_ctx->dstSgl);
+		/* Reaching the end of the sgl here is unexpected */
+		if (areq_ctx->dstSgl == NULL) {
+			SSI_LOG_ERR("reached end of sg list unexpectedly\n");
+			BUG();
+		}
+		sg_index += areq_ctx->dstSgl->length;
+		dst_mapped_nents--;
+	}
+	if (unlikely(dst_mapped_nents > LLI_MAX_NUM_OF_DATA_ENTRIES)) {
+		SSI_LOG_ERR("Too many fragments. current %d max %d\n",
+			    dst_mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES);
+		return -ENOMEM;
+	}
+ areq_ctx->dst.nents = dst_mapped_nents;
+ areq_ctx->dstOffset = offset;
+	if ((src_mapped_nents > 1) ||
+	    (dst_mapped_nents > 1) ||
+	    do_chain) {
+ areq_ctx->data_buff_type = SSI_DMA_BUF_MLLI;
+ rc = ssi_buffer_mgr_prepare_aead_data_mlli(drvdata, req, sg_data,
+ &src_last_bytes, &dst_last_bytes, is_last_table);
+ } else {
+ areq_ctx->data_buff_type = SSI_DMA_BUF_DLLI;
+ ssi_buffer_mgr_prepare_aead_data_dlli(
+ req, &src_last_bytes, &dst_last_bytes);
+ }
+
+chain_data_exit:
+ return rc;
+}
+
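+/*
+ * Assign SRAM addresses to the assoc/src/dst MLLI tables in the order they
+ * were generated, and for double-pass flows fold the data MLLI entry counts
+ * into the assoc table count so the HW sees one contiguous table.
+ */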
+static void ssi_buffer_mgr_update_aead_mlli_nents( struct ssi_drvdata *drvdata,
+ struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ uint32_t curr_mlli_size = 0;
+
+ if (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) {
+ areq_ctx->assoc.sram_addr = drvdata->mlli_sram_addr;
+ curr_mlli_size = areq_ctx->assoc.mlli_nents *
+ LLI_ENTRY_BYTE_SIZE;
+ }
+
+ if (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI) {
+ /*Inplace case dst nents equal to src nents*/
+ if (req->src == req->dst) {
+ areq_ctx->dst.mlli_nents = areq_ctx->src.mlli_nents;
+ areq_ctx->src.sram_addr = drvdata->mlli_sram_addr +
+ curr_mlli_size;
+ areq_ctx->dst.sram_addr = areq_ctx->src.sram_addr;
+ if (areq_ctx->is_single_pass == false)
+ areq_ctx->assoc.mlli_nents +=
+ areq_ctx->src.mlli_nents;
+ } else {
+ if (areq_ctx->gen_ctx.op_type ==
+ DRV_CRYPTO_DIRECTION_DECRYPT) {
+ areq_ctx->src.sram_addr =
+ drvdata->mlli_sram_addr +
+ curr_mlli_size;
+ areq_ctx->dst.sram_addr =
+ areq_ctx->src.sram_addr +
+ areq_ctx->src.mlli_nents *
+ LLI_ENTRY_BYTE_SIZE;
+ if (areq_ctx->is_single_pass == false)
+ areq_ctx->assoc.mlli_nents +=
+ areq_ctx->src.mlli_nents;
+ } else {
+ areq_ctx->dst.sram_addr =
+ drvdata->mlli_sram_addr +
+ curr_mlli_size;
+ areq_ctx->src.sram_addr =
+ areq_ctx->dst.sram_addr +
+ areq_ctx->dst.mlli_nents *
+ LLI_ENTRY_BYTE_SIZE;
+ if (areq_ctx->is_single_pass == false)
+ areq_ctx->assoc.mlli_nents +=
+ areq_ctx->dst.mlli_nents;
+ }
+ }
+ }
+}
+
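+/*
+ * DMA-map everything an AEAD request needs: the MAC buffer, the CCM/GCM
+ * configuration blocks, the IV and the src/dst scatterlists, then build
+ * the MLLI table(s) for the single- or double-pass flow as described
+ * below.
+ */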
+int ssi_buffer_mgr_map_aead_request(
+ struct ssi_drvdata *drvdata, struct aead_request *req)
+{
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ struct mlli_params *mlli_params = &areq_ctx->mlli_params;
+ struct device *dev = &drvdata->plat_dev->dev;
+ struct buffer_array sg_data;
+ unsigned int authsize = areq_ctx->req_authsize;
+ struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+ int rc = 0;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ bool is_gcm4543 = areq_ctx->is_gcm4543;
+
+ uint32_t mapped_nents = 0;
+ uint32_t dummy = 0; /*used for the assoc data fragments */
+ uint32_t size_to_map = 0;
+
+ mlli_params->curr_pool = NULL;
+ sg_data.num_of_buffers = 0;
+
+#if DX_HAS_ACP
+	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT) &&
+	    likely(req->src == req->dst)) {
+ uint32_t size_to_skip = req->assoclen;
+ if (is_gcm4543) {
+ size_to_skip += crypto_aead_ivsize(tfm);
+ }
+		/*
+		 * Copy the MAC to a temporary location to avoid possible
+		 * data corruption caused by cache coherence problems.
+		 */
+		ssi_buffer_mgr_copy_scatterlist_portion(
+			areq_ctx->backup_mac, req->src,
+			size_to_skip + req->cryptlen - areq_ctx->req_authsize,
+			size_to_skip + req->cryptlen, SSI_SG_TO_BUF);
+ }
+#endif
+
+	/* Calculate the cipher data size; on decrypt, exclude the ICV */
+ areq_ctx->cryptlen = (areq_ctx->gen_ctx.op_type ==
+ DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ req->cryptlen :
+ (req->cryptlen - authsize);
+
+ areq_ctx->mac_buf_dma_addr = dma_map_single(dev,
+ areq_ctx->mac_buf, MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
+ if (unlikely(dma_mapping_error(dev, areq_ctx->mac_buf_dma_addr))) {
+ SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK for DMA failed\n",
+ MAX_MAC_SIZE, areq_ctx->mac_buf);
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->mac_buf_dma_addr, MAX_MAC_SIZE);
+
+ if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
+ areq_ctx->ccm_iv0_dma_addr = dma_map_single(dev,
+ (areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET),
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+ if (unlikely(dma_mapping_error(dev, areq_ctx->ccm_iv0_dma_addr))) {
+ SSI_LOG_ERR("Mapping mac_buf %u B at va=%pK "
+ "for DMA failed\n", AES_BLOCK_SIZE,
+ (areq_ctx->ccm_config + CCM_CTR_COUNT_0_OFFSET));
+ areq_ctx->ccm_iv0_dma_addr = 0;
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr,
+ AES_BLOCK_SIZE);
+ if (ssi_aead_handle_config_buf(dev, areq_ctx,
+ areq_ctx->ccm_config, &sg_data, req->assoclen) != 0) {
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ }
+
+#if SSI_CC_HAS_AES_GCM
+ if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
+ areq_ctx->hkey_dma_addr = dma_map_single(dev,
+ areq_ctx->hkey, AES_BLOCK_SIZE, DMA_BIDIRECTIONAL);
+ if (unlikely(dma_mapping_error(dev, areq_ctx->hkey_dma_addr))) {
+ SSI_LOG_ERR("Mapping hkey %u B at va=%pK for DMA failed\n",
+ AES_BLOCK_SIZE, areq_ctx->hkey);
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->hkey_dma_addr, AES_BLOCK_SIZE);
+
+ areq_ctx->gcm_block_len_dma_addr = dma_map_single(dev,
+ &areq_ctx->gcm_len_block, AES_BLOCK_SIZE, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_block_len_dma_addr))) {
+ SSI_LOG_ERR("Mapping gcm_len_block %u B at va=%pK for DMA failed\n",
+ AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_block_len_dma_addr, AES_BLOCK_SIZE);
+
+ areq_ctx->gcm_iv_inc1_dma_addr = dma_map_single(dev,
+ areq_ctx->gcm_iv_inc1,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+ if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc1_dma_addr))) {
+ SSI_LOG_ERR("Mapping gcm_iv_inc1 %u B at va=%pK "
+ "for DMA failed\n", AES_BLOCK_SIZE,
+ (areq_ctx->gcm_iv_inc1));
+ areq_ctx->gcm_iv_inc1_dma_addr = 0;
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc1_dma_addr,
+ AES_BLOCK_SIZE);
+
+ areq_ctx->gcm_iv_inc2_dma_addr = dma_map_single(dev,
+ areq_ctx->gcm_iv_inc2,
+ AES_BLOCK_SIZE, DMA_TO_DEVICE);
+
+ if (unlikely(dma_mapping_error(dev, areq_ctx->gcm_iv_inc2_dma_addr))) {
+ SSI_LOG_ERR("Mapping gcm_iv_inc2 %u B at va=%pK "
+ "for DMA failed\n", AES_BLOCK_SIZE,
+ (areq_ctx->gcm_iv_inc2));
+ areq_ctx->gcm_iv_inc2_dma_addr = 0;
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+ SSI_UPDATE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr,
+ AES_BLOCK_SIZE);
+ }
+#endif /*SSI_CC_HAS_AES_GCM*/
+
+	size_to_map = req->cryptlen + req->assoclen;
+	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
+		size_to_map += authsize;
+ if (is_gcm4543)
+ size_to_map += crypto_aead_ivsize(tfm);
+ rc = ssi_buffer_mgr_map_scatterlist(dev, req->src,
+ size_to_map, DMA_BIDIRECTIONAL, &(areq_ctx->src.nents),
+ LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES+LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+ if (unlikely(rc != 0)) {
+ rc = -ENOMEM;
+ goto aead_map_failure;
+ }
+
+	if (likely(areq_ctx->is_single_pass)) {
+ /*
+ * Create MLLI table for:
+ * (1) Assoc. data
+ * (2) Src/Dst SGLs
+ * Note: IV is contg. buffer (not an SGL)
+ */
+ rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, true, false);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, true, false);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, false);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ } else { /* DOUBLE-PASS flow */
+ /*
+ * Prepare MLLI table(s) in this order:
+ *
+ * If ENCRYPT/DECRYPT (inplace):
+ * (1) MLLI table for assoc
+ * (2) IV entry (chained right after end of assoc)
+ * (3) MLLI for src/dst (inplace operation)
+ *
+ * If ENCRYPT (non-inplace)
+ * (1) MLLI table for assoc
+ * (2) IV entry (chained right after end of assoc)
+ * (3) MLLI for dst
+ * (4) MLLI for src
+ *
+ * If DECRYPT (non-inplace)
+ * (1) MLLI table for assoc
+ * (2) IV entry (chained right after end of assoc)
+ * (3) MLLI for src
+ * (4) MLLI for dst
+ */
+ rc = ssi_buffer_mgr_aead_chain_assoc(drvdata, req, &sg_data, false, true);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ rc = ssi_buffer_mgr_aead_chain_iv(drvdata, req, &sg_data, false, true);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ rc = ssi_buffer_mgr_aead_chain_data(drvdata, req, &sg_data, true, true);
+ if (unlikely(rc != 0))
+ goto aead_map_failure;
+ }
+
+	/* MLLI support - build the MLLI according to the above results */
+ if (unlikely(
+ (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) ||
+ (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI))) {
+
+ mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
+ rc = ssi_buffer_mgr_generate_mlli(dev, &sg_data, mlli_params);
+ if (unlikely(rc != 0)) {
+ goto aead_map_failure;
+ }
+
+ ssi_buffer_mgr_update_aead_mlli_nents(drvdata, req);
+ SSI_LOG_DEBUG("assoc params mn %d\n",areq_ctx->assoc.mlli_nents);
+ SSI_LOG_DEBUG("src params mn %d\n",areq_ctx->src.mlli_nents);
+ SSI_LOG_DEBUG("dst params mn %d\n",areq_ctx->dst.mlli_nents);
+ }
+ return 0;
+
+aead_map_failure:
+ ssi_buffer_mgr_unmap_aead_request(dev, req);
+ return rc;
+}
+
int ssi_buffer_mgr_map_hash_request_final(
struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update)
{
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index 2c58a63..c9b3012 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -71,6 +71,10 @@ void ssi_buffer_mgr_unmap_blkcipher_request(
struct scatterlist *src,
struct scatterlist *dst);

+int ssi_buffer_mgr_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req);
+
+void ssi_buffer_mgr_unmap_aead_request(struct device *dev, struct aead_request *req);
+
int ssi_buffer_mgr_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, bool do_update);

int ssi_buffer_mgr_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx, struct scatterlist *src, unsigned int nbytes, unsigned int block_size);
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index aee5469..42a00fc 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -21,6 +21,7 @@
#include <crypto/algapi.h>
#include <crypto/aes.h>
#include <crypto/sha.h>
+#include <crypto/aead.h>
#include <crypto/authenc.h>
#include <crypto/scatterwalk.h>
#include <crypto/internal/skcipher.h>
@@ -63,6 +64,7 @@
#include "ssi_buffer_mgr.h"
#include "ssi_sysfs.h"
#include "ssi_cipher.h"
+#include "ssi_aead.h"
#include "ssi_hash.h"
#include "ssi_ivgen.h"
#include "ssi_sram_mgr.h"
@@ -362,18 +364,26 @@ static int init_cc_resources(struct platform_device *plat_dev)
goto init_cc_res_err;
}

+ /* hash must be allocated before aead since hash exports APIs */
rc = ssi_hash_alloc(new_drvdata);
if (unlikely(rc != 0)) {
SSI_LOG_ERR("ssi_hash_alloc failed\n");
goto init_cc_res_err;
}

+ rc = ssi_aead_alloc(new_drvdata);
+ if (unlikely(rc != 0)) {
+ SSI_LOG_ERR("ssi_aead_alloc failed\n");
+ goto init_cc_res_err;
+ }
+
return 0;

init_cc_res_err:
SSI_LOG_ERR("Freeing CC HW resources!\n");

if (new_drvdata != NULL) {
+ ssi_aead_free(new_drvdata);
ssi_hash_free(new_drvdata);
ssi_ablkcipher_free(new_drvdata);
ssi_ivgen_fini(new_drvdata);
@@ -416,6 +426,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
struct ssi_drvdata *drvdata =
(struct ssi_drvdata *)dev_get_drvdata(&plat_dev->dev);

+ ssi_aead_free(drvdata);
ssi_hash_free(drvdata);
ssi_ablkcipher_free(drvdata);
ssi_ivgen_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 5f4b14e..1576a18 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -32,6 +32,7 @@
#include <crypto/internal/skcipher.h>
#include <crypto/aes.h>
#include <crypto/sha.h>
+#include <crypto/aead.h>
#include <crypto/authenc.h>
#include <crypto/hash.h>
#include <linux/version.h>
@@ -148,6 +149,7 @@ struct ssi_drvdata {
struct completion icache_setup_completion;
void *buff_mgr_handle;
void *hash_handle;
+ void *aead_handle;
void *blkcipher_handle;
void *request_mgr_handle;
void *ivgen_handle;
@@ -167,6 +169,7 @@ struct ssi_crypto_alg {
int auth_mode;
struct ssi_drvdata *drvdata;
struct crypto_alg crypto_alg;
+ struct aead_alg aead_alg;
};

struct ssi_alg_template {
@@ -176,6 +179,7 @@ struct ssi_alg_template {
u32 type;
union {
struct ablkcipher_alg ablkcipher;
+ struct aead_alg aead;
struct blkcipher_alg blkcipher;
struct cipher_alg cipher;
struct compress_alg compress;
--
2.1.4
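
The mapping code in ssi_buffer_mgr_map_aead_request() above repeats a
standard kernel idiom: each dma_map_single() is paired with a
dma_mapping_error() check, and every failure unwinds through the single
aead_map_failure label, which unmaps only what was actually mapped. A
minimal sketch of that idiom, using hypothetical names rather than the
driver's real context fields:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <crypto/aes.h>		/* AES_BLOCK_SIZE */

/* Hypothetical stand-in for the driver's per-request context. */
struct demo_ctx {
	u8 blk[AES_BLOCK_SIZE];
	dma_addr_t blk_dma;
};

/* Map one block for device reads; zero the handle on failure so a
 * shared error path can tell mapped from unmapped resources. */
static int demo_map_block(struct device *dev, struct demo_ctx *ctx)
{
	ctx->blk_dma = dma_map_single(dev, ctx->blk, AES_BLOCK_SIZE,
				      DMA_TO_DEVICE);
	if (dma_mapping_error(dev, ctx->blk_dma)) {
		ctx->blk_dma = 0;
		return -ENOMEM;
	}
	return 0;
}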

2017-04-20 13:13:03

by Gilad Ben-Yossef

[permalink] [raw]
Subject: [PATCH v2 9/9] MAINTAINERS: add Gilad BY as ccree maintainer

I work for Arm on maintaining the TrustZone CryptoCell driver.

Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 676c139..f21caa1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3066,6 +3066,13 @@ F: drivers/net/ieee802154/cc2520.c
F: include/linux/spi/cc2520.h
F: Documentation/devicetree/bindings/net/ieee802154/cc2520.txt

+CCREE ARM TRUSTZONE CRYPTOCELL 700 REE DRIVER
+M: Gilad Ben-Yossef <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/staging/ccree/
+W: https://developer.arm.com/products/system-ip/trustzone-cryptocell/cryptocell-700-family
+
CEC DRIVER
M: Hans Verkuil <[email protected]>
L: [email protected]
--
2.1.4

2017-04-20 13:30:52

by Greg KH

[permalink] [raw]
Subject: Re: [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver

On Thu, Apr 20, 2017 at 04:12:54PM +0300, Gilad Ben-Yossef wrote:
> Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
> accelerators. It is supported by a long lived series of out of tree
> drivers, which I am now in the process of unifying and upstreaming.
> This is the first drop, supporting the new CryptoCell 712 REE.
>
> The code still needs some cleanup before maturing to a proper
> upstream driver, which I am in the process of doing. However,
> as discussion of some of the capabilities of the hardware and
> its application to some dm-crypt and dm-verity features recently
> took place I though it is better to do this in the open via the
> staging tree.
>
> A Git repository based off of Linux 4.11-rc7 is also available at
> https://github.com/gby/linux.git branch ccree_v2 for those inclined.

If you want this in staging, I'll be glad to take it, but note then you
can't work off of an external repo, as syncing the two is almost
impossible and more work than you want to go through.

So, as long as this builds properly, want me to queue these up in my
tree?

thanks,

greg k-h

2017-04-20 13:36:08

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v2 0/9] staging: ccree: add Arm TrustZone CryptoCell REE driver

On Thu, Apr 20, 2017 at 4:30 PM, Greg Kroah-Hartman
<[email protected]> wrote:
> On Thu, Apr 20, 2017 at 04:12:54PM +0300, Gilad Ben-Yossef wrote:
>> Arm TrustZone CryptoCell 700 is a family of cryptographic hardware
>> accelerators. It is supported by a long lived series of out of tree
>> drivers, which I am now in the process of unifying and upstreaming.
>> This is the first drop, supporting the new CryptoCell 712 REE.
>>
>> The code still needs some cleanup before maturing to a proper
>> upstream driver, which I am in the process of doing. However,
>> as discussion of some of the capabilities of the hardware and
>> its application to some dm-crypt and dm-verity features recently
>> took place I though it is better to do this in the open via the
>> staging tree.
>>
>> A Git repository based off of Linux 4.11-rc7 is also available at
>> https://github.com/gby/linux.git branch ccree_v2 for those inclined.
>
> If you want this in staging, I'll be glad to take it, but note then you
> can't work off of an external repo, as syncing the two is almost
> impossible and more work than you want to go through.

Once it's in the staging tree I don't need a separate repo. It was only useful
so long as I did not have an upstream tree to point people to.
>
> So, as long as this builds properly, want me to queue these up in my
> tree?

Yes, please.

Thanks,
Gilad



--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru

2017-04-20 13:39:48

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Thursday, 20 April 2017 at 15:13:00 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> +/* The function verifies that tdes keys are not weak.*/
> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
> +{
> +#ifdef CCREE_FIPS_SUPPORT
> + tdes_keys_t *tdes_key = (tdes_keys_t*)key;
> +
> + /* verify key1 != key2 and key3 != key2*/

I do not think that this check is necessary. There is no FIPS requirement or
IG that mandates this (unlike the XTS key check).

If there were such a requirement, we would need a common service function
for all TDES implementations.

> + if (unlikely( (memcmp((u8*)tdes_key->key1, (u8*)tdes_key->key2,
> sizeof(tdes_key->key1)) == 0) || + (memcmp((u8*)tdes_key->key3,
> (u8*)tdes_key->key2, sizeof(tdes_key->key3)) == 0) )) { +
> return -ENOEXEC;
> + }
> +#endif /* CCREE_FIPS_SUPPORT */
> +
> + return 0;
> +}
> +
> +/* The function verifies that xts keys are not weak.*/
> +static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
> +{
> +#ifdef CCREE_FIPS_SUPPORT
> + /* Weak key is define as key that its first half (128/256 lsb)
> equals its second half (128/256 msb) */ + int singleKeySize = keylen
> >> 1;
> +
> + if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0)) {
> + return -ENOEXEC;

Use xts_check_key.
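
For reference, xts_check_key() lives in include/crypto/xts.h (as of 4.11)
and already rejects odd-length keys and, when fips_enabled is set, keys
whose two halves are identical. A sketch of a setkey path deferring to
it; the wrapper below is hypothetical, not the driver's actual function:

#include <linux/crypto.h>
#include <crypto/xts.h>

static int demo_xts_setkey(struct crypto_ablkcipher *atfm,
			   const u8 *key, unsigned int keylen)
{
	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(atfm);
	int rc;

	/* Replaces the open-coded memcmp() of the two key halves. */
	rc = xts_check_key(tfm, key, keylen);
	if (rc)
		return rc;

	/* ... program the key into the hardware context ... */
	return 0;
}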

> +The test vectors were taken from:
> +
> +* AES
> +NIST Special Publication 800-38A 2001 Edition
> +Recommendation for Block Cipher Modes of Operation
> +http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
> +Appendix F: Example Vectors for Modes of Operation of the AES
> +
> +* AES CTS
> +Advanced Encryption Standard (AES) Encryption for Kerberos 5
> +February 2005
> +https://tools.ietf.org/html/rfc3962#appendix-B
> +B. Sample Test Vectors
> +
> +* AES XTS
> +http://csrc.nist.gov/groups/STM/cavp/#08
> +http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
> +
> +* AES CMAC
> +http://csrc.nist.gov/groups/STM/cavp/index.html#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
> +
> +* AES-CCM
> +http://csrc.nist.gov/groups/STM/cavp/#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
> +
> +* AES-GCM
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
> +
> +* Triple-DES
> +NIST Special Publication 800-67 January 2012
> +Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
> +http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
> +APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS +and
> +http://csrc.nist.gov/groups/STM/cavp/#01
> +http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
> +
> +* HASH
> +http://csrc.nist.gov/groups/STM/cavp/#03
> +http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip
> +
> +* HMAC
> +http://csrc.nist.gov/groups/STM/cavp/#07
> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip
> +
> +*/

Is this test vector business really needed? Why do you think that testmgr.c is
not sufficient? Other successful FIPS validations of the kernel crypto API
managed without such special code.

Also, your entire API seems to implement the approach that if there is a
self-test error, you disable the cipher functions but leave the rest intact.
The standard kernel crypto API handling logic is to simply panic the kernel.
Is it really necessary to implement a special case for your driver?


Ciao
Stephan

2017-04-20 18:07:19

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v2 2/9] staging: ccree: add ahash support

Hi Gilad,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to staging/staging-testing next-20170420]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-add-Arm-TrustZone-CryptoCell-REE-driver/20170420-222023


coccinelle warnings: (new ones prefixed by >>)

>> drivers/staging/ccree/ssi_hash.c:317:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:320:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:323:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:373:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:375:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:377:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:379:3-8: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:381:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:383:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation

2017-04-20 18:06:29

by kernel test robot

[permalink] [raw]
Subject: [PATCH] staging: ccree: fix ifnullfree.cocci warnings

drivers/staging/ccree/ssi_hash.c:317:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:320:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:323:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:373:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:375:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:377:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:379:3-8: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:381:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.
drivers/staging/ccree/ssi_hash.c:383:2-7: WARNING: NULL check before freeing functions like kfree, debugfs_remove, debugfs_remove_recursive or usb_free_urb is not needed. Maybe consider reorganizing relevant code to avoid passing NULL values.

NULL check before some freeing functions is not needed.

Based on checkpatch warning
"kfree(NULL) is safe this check is probably not required"
and kfreeaddr.cocci by Julia Lawall.

Generated by: scripts/coccinelle/free/ifnullfree.cocci

CC: Gilad Ben-Yossef <[email protected]>
Signed-off-by: Fengguang Wu <[email protected]>
---

ssi_hash.c | 27 +++++++++------------------
1 file changed, 9 insertions(+), 18 deletions(-)

--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -313,14 +313,11 @@ fail4:
state->digest_buff_dma_addr = 0;
}
fail3:
- if (state->opad_digest_buff != NULL)
- kfree(state->opad_digest_buff);
+ kfree(state->opad_digest_buff);
fail2:
- if (state->digest_bytes_len != NULL)
- kfree(state->digest_bytes_len);
+ kfree(state->digest_bytes_len);
fail1:
- if (state->digest_buff != NULL)
- kfree(state->digest_buff);
+ kfree(state->digest_buff);
fail_digest_result_buff:
if (state->digest_result_buff != NULL) {
kfree(state->digest_result_buff);
@@ -369,18 +366,12 @@ static void ssi_hash_unmap_request(struc
state->opad_digest_dma_addr = 0;
}

- if (state->opad_digest_buff != NULL)
- kfree(state->opad_digest_buff);
- if (state->digest_bytes_len != NULL)
- kfree(state->digest_bytes_len);
- if (state->digest_buff != NULL)
- kfree(state->digest_buff);
- if (state->digest_result_buff != NULL)
- kfree(state->digest_result_buff);
- if (state->buff1 != NULL)
- kfree(state->buff1);
- if (state->buff0 != NULL)
- kfree(state->buff0);
+ kfree(state->opad_digest_buff);
+ kfree(state->digest_bytes_len);
+ kfree(state->digest_buff);
+ kfree(state->digest_result_buff);
+ kfree(state->buff1);
+ kfree(state->buff0);
}

static void ssi_hash_unmap_result(struct device *dev,

2017-04-20 18:57:53

by kbuild test robot

[permalink] [raw]
Subject: Re: [PATCH v2 5/9] staging: ccree: add AEAD support

Hi Gilad,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.11-rc7]
[cannot apply to staging/staging-testing next-20170420]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Gilad-Ben-Yossef/staging-ccree-add-Arm-TrustZone-CryptoCell-REE-driver/20170420-222023


coccinelle warnings: (new ones prefixed by >>)

>> drivers/staging/ccree/ssi_buffer_mgr.c:758:2-4: ERROR: test of a variable/field address

vim +758 drivers/staging/ccree/ssi_buffer_mgr.c

742
743 if (areq_ctx->gcm_iv_inc2_dma_addr != 0) {
744 SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gcm_iv_inc2_dma_addr);
745 dma_unmap_single(dev, areq_ctx->gcm_iv_inc2_dma_addr,
746 AES_BLOCK_SIZE, DMA_TO_DEVICE);
747 }
748 }
749 #endif
750
751 if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
752 if (areq_ctx->ccm_iv0_dma_addr != 0) {
753 SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->ccm_iv0_dma_addr);
754 dma_unmap_single(dev, areq_ctx->ccm_iv0_dma_addr,
755 AES_BLOCK_SIZE, DMA_TO_DEVICE);
756 }
757
> 758 if (&areq_ctx->ccm_adata_sg != NULL)
759 dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg,
760 1, DMA_TO_DEVICE);
761 }
762 if (areq_ctx->gen_ctx.iv_dma_addr != 0) {
763 SSI_RESTORE_DMA_ADDR_TO_48BIT(areq_ctx->gen_ctx.iv_dma_addr);
764 dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
765 hw_iv_size, DMA_BIDIRECTIONAL);
766 }

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
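
The flagged test can never be false: &areq_ctx->ccm_adata_sg takes the
address of a scatterlist embedded in the request context, which is always
non-NULL, so the dma_unmap_sg() already runs unconditionally. One possible
fix, sketched under the assumption that the sg is always mapped whenever a
CCM header is present:

	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
		/* ... ccm_iv0 unmapping as before ... */

		/* The old guard tested the address of an embedded member
		 * and was always true; unmap unconditionally here, or
		 * track the mapping with an explicit flag instead. */
		dma_unmap_sg(dev, &areq_ctx->ccm_adata_sg,
			     1, DMA_TO_DEVICE);
	}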

2017-04-23 09:49:00

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

Hi,

Thank you for the review.

On Thu, Apr 20, 2017 at 4:39 PM, Stephan Müller <[email protected]> wrote:

>> +/* The function verifies that tdes keys are not weak.*/
>> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
>> +{
>> +#ifdef CCREE_FIPS_SUPPORT
>> + tdes_keys_t *tdes_key = (tdes_keys_t*)key;
>> +
>> + /* verify key1 != key2 and key3 != key2*/
>
> I do not think that this check is necessary. There is no FIPS requirement or
> IG that mandates this (unlike the XTS key check).
>
> If there were such a requirement, we would need a common service function
> for all TDES implementations.

I am not sure. I have forwarded the question internally and, based on the
answer, will either drop this check or add a common service function and
post a patch adding the check to all 3DES implementations.

This has been added to the staging TODO list for the driver.

>
>> + if (unlikely( (memcmp((u8*)tdes_key->key1, (u8*)tdes_key->key2,
>> sizeof(tdes_key->key1)) == 0) || + (memcmp((u8*)tdes_key->key3,
>> (u8*)tdes_key->key2, sizeof(tdes_key->key3)) == 0) )) { +
>> return -ENOEXEC;
>> + }
>> +#endif /* CCREE_FIPS_SUPPORT */
>> +
>> + return 0;
>> +}
>> +
>> +/* The function verifies that xts keys are not weak.*/
>> +static int ssi_fips_verify_xts_keys(const u8 *key, unsigned int keylen)
>> +{
>> +#ifdef CCREE_FIPS_SUPPORT
>> + /* Weak key is define as key that its first half (128/256 lsb)
>> equals its second half (128/256 msb) */ + int singleKeySize = keylen
>> >> 1;
>> +
>> + if (unlikely(memcmp(key, &key[singleKeySize], singleKeySize) == 0)) {
>> + return -ENOEXEC;
>
> Use xts_check_key.

Will fix. Added to the driver's staging TODO list.

>
>> +The test vectors were taken from:
>> +
>> +* AES
>> +NIST Special Publication 800-38A 2001 Edition
>> +Recommendation for Block Cipher Modes of Operation
>> +http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf
>> +Appendix F: Example Vectors for Modes of Operation of the AES
>> +
>> +* AES CTS
>> +Advanced Encryption Standard (AES) Encryption for Kerberos 5
>> +February 2005
>> +https://tools.ietf.org/html/rfc3962#appendix-B
>> +B. Sample Test Vectors
>> +
>> +* AES XTS
>> +http://csrc.nist.gov/groups/STM/cavp/#08
>> +http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
>> +
>> +* AES CMAC
>> +http://csrc.nist.gov/groups/STM/cavp/index.html#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/cmactestvectors.zip
>> +
>> +* AES-CCM
>> +http://csrc.nist.gov/groups/STM/cavp/#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/ccmtestvectors.zip
>> +
>> +* AES-GCM
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/gcmtestvectors.zip
>> +
>> +* Triple-DES
>> +NIST Special Publication 800-67 January 2012
>> +Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher
>> +http://csrc.nist.gov/publications/nistpubs/800-67-Rev1/SP-800-67-Rev1.pdf
>> +APPENDIX B: EXAMPLE OF TDEA FORWARD AND INVERSE CIPHER OPERATIONS +and
>> +http://csrc.nist.gov/groups/STM/cavp/#01
>> +http://csrc.nist.gov/groups/STM/cavp/documents/des/tdesmct_intermediate.zip
>> +
>> +* HASH
>> +http://csrc.nist.gov/groups/STM/cavp/#03
>> +http://csrc.nist.gov/groups/STM/cavp/documents/shs/shabytetestvectors.zip
>> +
>> +* HMAC
>> +http://csrc.nist.gov/groups/STM/cavp/#07
>> +http://csrc.nist.gov/groups/STM/cavp/documents/mac/hmactestvectors.zip
>> +
>> +*/
>
> Is this test vector business really needed? Why do you think that testmgr.c is
> not sufficient? Other successful FIPS validations of the kernel crypto API
> managed without such special code.

That is a very good question. I am guessing this has something to do
with this driver spending its life out of tree, maintained against old
kernel versions that may have had gaps in FIPS testing which have since
been fixed.

I will review what, if anything, is missing from testmgr, fold any
missing parts in there, and drop this code from the driver.

>
> Also, your entire API seems to implement the approach that if there is a
> self-test error, you disable the cipher functions but leave the rest intact.
> The standard kernel crypto API handling logic is to simply panic the kernel.
> Is it really necessary to implement a special case for your driver?
>
>

No, it isn't. Whatever behavior we need should be added, pending
review of course, to the generic FIPS handling logic.

I do wonder if there is value in an alternate behavior of stopping the
crypto API on a FIPS error rather than panicking, though. I will try to
get an explanation of why we do it this way.

Handling all of these has been added to the driver's staging TODO list
and will be done before it matures into drivers/crypto/.

Many thanks for the review!

Gilad


--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru

2017-04-23 18:57:58

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Sunday, 23 April 2017 at 11:48:58 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> I do wonder if there is value in an alternate behavior of stopping the
> crypto API on a FIPS error rather than panicking, though. I will try to
> get an explanation of why we do it this way.

In FIPS, all crypto functions must cease if a self-test fails. This can be
done by instrumenting the crypto API calls with a check of a global flag or
by simply terminating the entire "FIPS module".

The panic() is the simplest approach to meet that requirement.

Ciao
Stephan
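
For comparison, a minimal sketch of that standard pattern, modeled on
crypto/testmgr.c; the wrapper name is hypothetical, while fips_enabled
and panic() are the real kernel interfaces involved:

#include <linux/fips.h>
#include <linux/kernel.h>

/* If a self-test fails in FIPS mode, take the whole kernel down, which
 * trivially satisfies "all crypto functions must cease". */
static void demo_fips_selftest_check(int rc, const char *alg_name)
{
	if (rc && fips_enabled)
		panic("alg: self-test for %s failed in fips mode!\n",
		      alg_name);
}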

2017-04-24 06:06:09

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Sun, Apr 23, 2017 at 12:48 PM, Gilad Ben-Yossef <[email protected]> wrote:
> Hi,
>
> Thank you for the review.
>
> On Thu, Apr 20, 2017 at 4:39 PM, Stephan Müller <[email protected]> wrote:
>
>>> +/* The function verifies that tdes keys are not weak.*/
>>> +static int ssi_fips_verify_3des_keys(const u8 *key, unsigned int keylen)
>>> +{
>>> +#ifdef CCREE_FIPS_SUPPORT
>>> + tdes_keys_t *tdes_key = (tdes_keys_t*)key;
>>> +
>>> + /* verify key1 != key2 and key3 != key2*/
>>
>> I do not think that this check is necessary. There is no FIPS requirement or
>> IG that mandates this (unlike the XTS key check).
>>
>> If there were such a requirement, we would need a common service function
>> for all TDES implementations.


Well, it turns out there is and we do :-)

This is from crypto/des_generic.c:

/*
* RFC2451:
*
* For DES-EDE3, there is no known need to reject weak or
* complementation keys. Any weakness is obviated by the use of
* multiple keys.
*
* However, if the first two or last two independent 64-bit keys are
* equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
* same as DES. Implementers MUST reject keys that exhibit this
* property.
*
*/
int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
unsigned int keylen)

However, this does not check that k1 == k3. In this case DES3
becomes 2DES (2-keys TDEA), the use of which was dropped post 2015
by NIST Special Publication 800-131A*.

Would it be acceptable if I offer a patch adding this check to
__des3_ede_setkey()
and use that in the ccree driver?

* http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar1.pdf


Many thanks,
Gilad

--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru
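
A sketch of the combined check proposed here: the RFC 2451 tests that
__des3_ede_setkey() already performs, plus a k1 == k3 test to rule out
2-key TDEA per SP 800-131A. The helper name is hypothetical:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <crypto/des.h>		/* DES_KEY_SIZE */

static int demo_des3_check_keys(const u8 *key)
{
	const u8 *k1 = key;
	const u8 *k2 = key + DES_KEY_SIZE;
	const u8 *k3 = key + 2 * DES_KEY_SIZE;

	/* RFC 2451: k1 == k2 or k2 == k3 degenerates to single DES. */
	if (!memcmp(k1, k2, DES_KEY_SIZE) || !memcmp(k2, k3, DES_KEY_SIZE))
		return -EINVAL;

	/* SP 800-131A: k1 == k3 is 2-key TDEA, disallowed post-2015. */
	if (!memcmp(k1, k3, DES_KEY_SIZE))
		return -EINVAL;

	return 0;
}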

2017-04-24 06:16:50

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Monday, 24 April 2017 at 08:06:09 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,
>
> Well, it turns out there is and we do :-)
>
> This is from crypto/des_generic.c:
>
> /*
> * RFC2451:
> *
> * For DES-EDE3, there is no known need to reject weak or
> * complementation keys. Any weakness is obviated by the use of
> * multiple keys.
> *
> * However, if the first two or last two independent 64-bit keys are
> * equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
> * same as DES. Implementers MUST reject keys that exhibit this
> * property.
> *
> */
> int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
> unsigned int keylen)
>
> However, this does not check that k1 == k3. In this case DES3
> becomes 2DES (2-keys TDEA), the use of which was dropped post 2015
> by NIST Special Publication 800-131A*.

It is correct that the RFC wants at least a 2key 3DES. And it is correct that
SP800-131A mandates 3key 3DES post 2015. All I am saying is that FIPS 140-2
does *not* require a technical verification of the 3 keys being not identical.

Note that, formally, FIPS 140-2 requires that the 3 keys (i.e. all 192 bits)
be obtained from *one* call to a DRBG or KDF (separate independent calls to,
say, obtain one key at a time are *not* permitted). Of course, fixing the
parity bits is allowed after obtaining the random number.
>
> Would it be acceptable if I offer a patch adding this check to
> __des3_ede_setkey()
> and use that in the ccree driver?

I am not sure it makes sense as the core requirement is the *one* invocation
of the DRBG.

Ciao
Stephan

2017-04-24 06:21:46

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Monday, 24 April 2017 at 08:16:50 CEST, Stephan Müller wrote:

Hi Gilad,

> >
> > int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
> >
> > unsigned int keylen)
> >
> > However, this does not check that k1 == k3. In this case DES3
> > becomes 2DES (2-keys TDEA), the use of which was dropped post 2015
> > by NIST Special Publication 800-131A*.
>
> It is correct that the RFC wants at least a 2key 3DES. And it is correct
> that SP800-131A mandates 3key 3DES post 2015. All I am saying is that FIPS
> 140-2 does *not* require a technical verification of the 3 keys being not
> identical.

One side note: we had discussed a patch to this function in the past, see [1].

[1] https://patchwork.kernel.org/patch/8293441/

Ciao
Stephan

2017-04-24 07:04:13

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Mon, Apr 24, 2017 at 9:16 AM, Stephan Müller <[email protected]> wrote:
> On Monday, 24 April 2017 at 08:06:09 CEST, Gilad Ben-Yossef wrote:
>
> Hi Gilad,
>>
>> Well, it turns out there is and we do :-)
>>
>> This is from crypto/des_generic.c:
>>
>> /*
>> * RFC2451:
>> *
>> * For DES-EDE3, there is no known need to reject weak or
>> * complementation keys. Any weakness is obviated by the use of
>> * multiple keys.
>> *
>> * However, if the first two or last two independent 64-bit keys are
>> * equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
>> * same as DES. Implementers MUST reject keys that exhibit this
>> * property.
>> *
>> */
>> int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
>> unsigned int keylen)
>>
>> However, this does not check that k1 == k3. In this case DES3
>> becomes 2DES (2-keys TDEA), the use of which was dropped post 2015
>> by NIST Special Publication 800-131A*.
>
> It is correct that the RFC wants at least a 2key 3DES. And it is correct that
> SP800-131A mandates 3key 3DES post 2015. All I am saying is that FIPS 140-2
> does *not* require a technical verification of the 3 keys being not identical.
>
> Note that, formally, FIPS 140-2 requires that the 3 keys (i.e. all 192 bits)
> be obtained from *one* call to a DRBG or KDF (separate independent calls to,
> say, obtain one key at a time are *not* permitted). Of course, fixing the
> parity bits is allowed after obtaining the random number.
>>
>> Would it be acceptable if I offer a patch adding this check to
>> __des3_ede_setkey()
>> and use that in the ccree driver?
>
> I am not sure it makes sense as the core requirement is the *one* invocation
> of the DRBG.


Thank you for the clarification. As is obvious by now, I am not a FIPS
expert by any stretch.

Don't the requirements on DRBG or KDF invocations pertain to key
generation only? What happens if you don't derive the keys on the
system, but wish to use keys derived elsewhere? I assumed the
limitations on weak keys etc. were meant to protect against that
scenario and are therefore independent of the key generation
requirements, but I may have misunderstood that.

Anyway, if I understand you correctly, you are saying that these
checks might make an implementation RFC 2451 and SP800-131A compliant,
respectively, but they are not required for FIPS 140-2 compliance. Did
I understand that correctly?

If so, since two 3DES implementations in the kernel already perform the
RFC 2451 compliant check, I assume you would not object to the ccree
driver using the same function, regardless of whether FIPS mode is set,
just as is done in the other 3DES implementations.

In addition, would it be OK if we did an extra check in the ccree
driver for SP800-131A key compliance and disabled encryption (but
allowed decryption) if the key fails the check? Again, this would be
independent of FIPS mode.

Thanks again,
Gilad


--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru

2017-04-24 07:07:45

by Gilad Ben-Yossef

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Mon, Apr 24, 2017 at 9:21 AM, Stephan Müller <[email protected]> wrote:
> On Monday, 24 April 2017 at 08:16:50 CEST, Stephan Müller wrote:
>
> Hi Gilad,
>
>> >
>> > int __des3_ede_setkey(u32 *expkey, u32 *flags, const u8 *key,
>> >
>> > unsigned int keylen)
>> >
>> > However, this does not check that k1 == k3. In this case DES3
>> > becomes 2DES (2-keys TDEA), the use of which was dropped post 2015
>> > by NIST Special Publication 800-131A*.
>>
>> It is correct that the RFC wants at least a 2key 3DES. And it is correct
>> that SP800-131A mandates 3key 3DES post 2015. All I am saying is that FIPS
>> 140-2 does *not* require a technical verification of the 3 keys being not
>> identical.
>
> One side note: we had discussed a patch to this function in the past, see [1].
>
> [1] https://patchwork.kernel.org/patch/8293441/
>

Thanks, I was not aware of that.

I guess we could change the function to indicate that a key is valid
for decryption but not for encryption, and have implementations limit
usage based on that, if there is interest in SP800-131A compliance.

Gilad

--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru

2017-04-24 07:22:02

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Monday, 24 April 2017 at 09:04:13 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

>
> Thanks you for the clarification. As I think is obvious by now I am
> not a FIPS expert by any stretch.
>
> Don't the requirements on DRBG or KDF invocations pertain to key
> generation only? What happens if you don't derive the keys on the
> system, but wish to use keys derived elsewhere? I assumed the
> limitations on weak keys etc. were meant to protect against that
> scenario and are therefore independent of the key generation
> requirements, but I may have misunderstood that.

That is indeed an important question. NIST has lately moved away from a
pure cipher-only view of cryptography to a more holistic view (i.e. where
ciphers are used).

That said, for 3DES there is no formal requirement that the 3 keys must be
checked. NIST is fine with documentation stating how the keys are generated
by some logic outside the module.
>
> Anyway, if I understand you correctly, you are saying that these
> checks might make an implementation RFC 2451 and SP800-131A compliant,
> respectively, but they are not required for FIPS 140-2 compliance. Did
> I understand that correctly?

Correct. SP800-131A only says that full 3-key 3DES is required. It does
not say whether or how non-identical keys shall be enforced.
>
> If so, since two 3DES implementations in the kernel already perform the
> RFC 2451 compliant check, I assume you would not object to the ccree
> driver using the same function, regardless of whether FIPS mode is set,
> just as is done in the other 3DES implementations.

Absolutely. If possible, all 3DES implementations should use the same
checking functions. The existing checking function that prevents 1-key
3DES should be used by your code too.

All I am saying is that, from a FIPS perspective, there is no need to
extend the common function with a 3-key 3DES enforcer.
>
> In addition, would it be OK if we did an extra check in the ccree
> driver for SP800-131A key compliance and disabled encryption (but
> allowed decryption) if the key fails the check? Again, this would be
> independent of FIPS mode.

My personal taste would be: changes that make sense for all 3DES
implementations should go into a common function. Otherwise, 3DES
implementation A behaves differently from implementation B.

That said, having a check that all three keys are non-identical would
certainly be good (see my Ack to the patch from a year ago). But you cannot
use the argument that FIPS mandates it to push it through. :-)

Ciao
Stephan

PS: There is currently a new requirement being discussed for FIPS: 3DES
operations should not allow more than 4GB of data to be encrypted with one
key. This requirement would need technical enforcement. I have been looking
into this one for some time now.
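
Such enforcement could take the form of a per-key byte counter checked on
every encryption call. A sketch; the limit, the error code, and all names
here are placeholders rather than an existing kernel API:

#include <linux/atomic.h>
#include <linux/errno.h>

#define DEMO_TDES_BYTE_LIMIT	(4ULL << 30)	/* 4 GB per key */

struct demo_tdes_ctx {
	atomic64_t bytes_encrypted;
	/* ... expanded key material ... */
};

/* Account nbytes against the key's budget; refuse further encryption
 * once the limit is crossed. Decryption would remain unaffected. */
static int demo_tdes_account(struct demo_tdes_ctx *ctx, unsigned int nbytes)
{
	if (atomic64_add_return(nbytes, &ctx->bytes_encrypted) >
	    DEMO_TDES_BYTE_LIMIT)
		return -EKEYEXPIRED;
	return 0;
}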

2017-04-24 07:23:32

by Stephan Müller

[permalink] [raw]
Subject: Re: [PATCH v2 6/9] staging: ccree: add FIPS support

On Monday, 24 April 2017 at 09:07:45 CEST, Gilad Ben-Yossef wrote:

Hi Gilad,

> I guess we could change the function to indicate that a key is valid
> for decryption but not for encryption, and have implementations limit
> usage based on that, if there is interest in SP800-131A compliance.

I would be delighted to see and review a patch.

Ciao
Stephan