2023-02-18 05:38:43

by Harsha, Harsha

Subject: [PATCH 0/4] crypto: Add Xilinx ZynqMP RSA driver support

This patch set does the following:
- Gets the SoC family-specific data for crypto operations
- Adds communication layer support for zynqmp_pm_rsa in zynqmp.c
- Adds a Xilinx driver for the RSA algorithm
- Updates the list of MAINTAINERS

Harsha Harsha (4):
firmware: xilinx: Get the SoC family specific data for crypto
operation
firmware: xilinx: Add ZynqMP RSA API for RSA encrypt/decrypt operation
crypto: xilinx: Add ZynqMP RSA driver
MAINTAINERS: Add maintainer for Xilinx ZynqMP RSA driver

MAINTAINERS | 5 +
drivers/crypto/Kconfig | 10 +
drivers/crypto/xilinx/Makefile | 1 +
drivers/crypto/xilinx/xilinx-rsa.c | 489 +++++++++++++++++++++++++++
drivers/firmware/xilinx/zynqmp.c | 100 ++++++
include/linux/firmware/xlnx-zynqmp.h | 42 +++
6 files changed, 647 insertions(+)
create mode 100644 drivers/crypto/xilinx/xilinx-rsa.c

--
2.36.1



2023-02-18 05:38:45

by Harsha, Harsha

Subject: [PATCH 1/4] firmware: xilinx: Get the SoC family specific data for crypto operation

Get the family type and sub-family type of the SoC and, based on that,
return the SoC-specific data that can be used for the required crypto
operations.

Signed-off-by: Harsha Harsha <[email protected]>
Co-developed-by: Dhaval Shah <[email protected]>
Signed-off-by: Dhaval Shah <[email protected]>
---
drivers/firmware/xilinx/zynqmp.c | 79 ++++++++++++++++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 34 ++++++++++++
2 files changed, 113 insertions(+)

diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index 129f68d7a6f5..10ae42a2ae22 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -339,6 +339,8 @@ int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 arg0, u32 arg1,

static u32 pm_api_version;
static u32 pm_tz_version;
+static u32 pm_family_code;
+static u32 pm_sub_family_code;

int zynqmp_pm_register_sgi(u32 sgi_num, u32 reset)
{
@@ -404,6 +406,78 @@ int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_chipid);

+/**
+ * zynqmp_pm_get_family_info() - Get family info of platform
+ * @family: Returned family code value
+ * @subfamily: Returned sub-family code value
+ *
+ * Return: Returns status, either success or error+reason
+ */
+static int zynqmp_pm_get_family_info(u32 *family, u32 *subfamily)
+{
+ u32 ret_payload[PAYLOAD_ARG_CNT];
+ u32 idcode;
+ int ret;
+
+ /* Check if the family and sub-family codes were already retrieved */
+ if (pm_family_code && pm_sub_family_code) {
+ *family = pm_family_code;
+ *subfamily = pm_sub_family_code;
+ return 0;
+ }
+
+ ret = zynqmp_pm_invoke_fn(PM_GET_CHIPID, 0, 0, 0, 0, ret_payload);
+ if (ret < 0)
+ return ret;
+
+ idcode = ret_payload[1];
+ pm_family_code = FIELD_GET(GENMASK(FAMILY_CODE_MSB, FAMILY_CODE_LSB),
+ idcode);
+ pm_sub_family_code = FIELD_GET(GENMASK(SUB_FAMILY_CODE_MSB,
+ SUB_FAMILY_CODE_LSB), idcode);
+ *family = pm_family_code;
+ *subfamily = pm_sub_family_code;
+
+ return 0;
+}
+
+/**
+ * xlnx_get_crypto_dev_data() - Get the crypto dev data of the platform
+ * @feature_map: Feature map listing the supported platforms
+ *
+ * Return: Returns the crypto dev data on success or an ERR_PTR on failure
+ */
+void *xlnx_get_crypto_dev_data(struct xlnx_feature *feature_map)
+{
+ struct xlnx_feature *feature;
+ u32 v, api_id;
+ int ret;
+
+ ret = zynqmp_pm_get_api_version(&v);
+ if (ret)
+ return ERR_PTR(ret);
+
+ feature = feature_map;
+ for (; feature->family; feature++) {
+ if (feature->family == pm_family_code &&
+ (feature->subfamily == ALL_SUB_FAMILY_CODE ||
+ feature->subfamily == pm_sub_family_code)) {
+ api_id = FIELD_GET(API_ID_MASK, feature->feature_id);
+ if (feature->family == ZYNQMP_FAMILY_CODE) {
+ ret = zynqmp_pm_feature(api_id);
+ if (ret < 0)
+ return ERR_PTR(ret);
+ } else {
+ return ERR_PTR(-ENODEV);
+ }
+
+ return feature->data;
+ }
+ }
+ return ERR_PTR(-ENODEV);
+}
+EXPORT_SYMBOL_GPL(xlnx_get_crypto_dev_data);
+
/**
* zynqmp_pm_get_trustzone_version() - Get secure trustzone firmware version
* @version: Returned version value
@@ -1855,6 +1929,11 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
pr_info("%s Platform Management API v%d.%d\n", __func__,
pm_api_version >> 16, pm_api_version & 0xFFFF);

+ /* Get the family code and sub-family code of the platform */
+ ret = zynqmp_pm_get_family_info(&pm_family_code, &pm_sub_family_code);
+ if (ret < 0)
+ return ret;
+
/* Check trustzone version number */
ret = zynqmp_pm_get_trustzone_version(&pm_tz_version);
if (ret)
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index b986e267d149..cd5acfa29cbc 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -34,6 +34,20 @@
/* PM API versions */
#define PM_API_VERSION_2 2

+#define ZYNQMP_FAMILY_CODE 0x23
+
+/* Used when all sub-families of a platform need to be supported */
+#define ALL_SUB_FAMILY_CODE 0
+#define VERSAL_SUB_FAMILY_CODE 1
+#define VERSALNET_SUB_FAMILY_CODE 3
+
+#define FAMILY_CODE_LSB 21
+#define FAMILY_CODE_MSB 27
+#define SUB_FAMILY_CODE_LSB 19
+#define SUB_FAMILY_CODE_MSB 20
+
+#define API_ID_MASK GENMASK(7, 0)
+
/* ATF only commands */
#define TF_A_PM_REGISTER_SGI 0xa04
#define PM_GET_TRUSTZONE_VERSION 0xa03
@@ -475,12 +489,27 @@ struct zynqmp_pm_query_data {
u32 arg3;
};

+/**
+ * struct xlnx_feature - Feature data
+ * @family: Family code of platform
+ * @subfamily: Subfamily code of platform
+ * @feature_id: Feature id of module
+ * @data: Collection of all supported platform data
+ */
+struct xlnx_feature {
+ u32 family;
+ u32 subfamily;
+ u32 feature_id;
+ void *data;
+};
+
int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 arg0, u32 arg1,
u32 arg2, u32 arg3, u32 *ret_payload);

#if IS_REACHABLE(CONFIG_ZYNQMP_FIRMWARE)
int zynqmp_pm_get_api_version(u32 *version);
int zynqmp_pm_get_chipid(u32 *idcode, u32 *version);
+void *xlnx_get_crypto_dev_data(struct xlnx_feature *feature_map);
int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out);
int zynqmp_pm_clock_enable(u32 clock_id);
int zynqmp_pm_clock_disable(u32 clock_id);
@@ -561,6 +590,11 @@ static inline int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
return -ENODEV;
}

+static inline void *xlnx_get_crypto_dev_data(struct xlnx_feature *feature_map)
+{
+ return ERR_PTR(-ENODEV);
+}
+
static inline int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata,
u32 *out)
{
--
2.36.1


2023-02-18 05:38:56

by Harsha, Harsha

Subject: [PATCH 2/4] firmware: xilinx: Add ZynqMP RSA API for RSA encrypt/decrypt operation

Add the zynqmp_pm_rsa API to the ZynqMP firmware interface to encrypt
and decrypt data using the RSA hardware engine on ZynqMP.

Signed-off-by: Harsha Harsha <[email protected]>
Co-developed-by: Dhaval Shah <[email protected]>
Signed-off-by: Dhaval Shah <[email protected]>
---
drivers/firmware/xilinx/zynqmp.c | 21 +++++++++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 8 ++++++++
2 files changed, 29 insertions(+)

diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index 10ae42a2ae22..d6f73823bab4 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -1426,6 +1426,27 @@ int zynqmp_pm_sha_hash(const u64 address, const u32 size, const u32 flags)
}
EXPORT_SYMBOL_GPL(zynqmp_pm_sha_hash);

+/**
+ * zynqmp_pm_rsa() - Access the RSA hardware to encrypt/decrypt data with RSA.
+ * @address: Address of the data
+ * @size: Size of the data.
+ * @flags:
+ * BIT(0) - Encryption/Decryption
+ * 0 - RSA decryption with private key
+ * 1 - RSA encryption with public key.
+ *
+ * Return: Returns status, either success or error code.
+ */
+int zynqmp_pm_rsa(const u64 address, const u32 size, const u32 flags)
+{
+ u32 lower_32_bits = lower_32_bits(address);
+ u32 upper_32_bits = upper_32_bits(address);
+
+ return zynqmp_pm_invoke_fn(PM_SECURE_RSA, upper_32_bits, lower_32_bits,
+ size, flags, NULL);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_rsa);
+
/**
* zynqmp_pm_register_notifier() - PM API for register a subsystem
* to be notified about specific
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index cd5acfa29cbc..8666b0c3cd66 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -117,6 +117,7 @@ enum pm_api_id {
PM_FPGA_GET_STATUS = 23,
PM_GET_CHIPID = 24,
PM_SECURE_SHA = 26,
+ PM_SECURE_RSA = 27,
PM_PINCTRL_REQUEST = 28,
PM_PINCTRL_RELEASE = 29,
PM_PINCTRL_GET_FUNCTION = 30,
@@ -542,6 +543,7 @@ int zynqmp_pm_set_requirement(const u32 node, const u32 capabilities,
const enum zynqmp_pm_request_ack ack);
int zynqmp_pm_aes_engine(const u64 address, u32 *out);
int zynqmp_pm_sha_hash(const u64 address, const u32 size, const u32 flags);
+int zynqmp_pm_rsa(const u64 address, const u32 size, const u32 flags);
int zynqmp_pm_fpga_load(const u64 address, const u32 size, const u32 flags);
int zynqmp_pm_fpga_get_status(u32 *value);
int zynqmp_pm_write_ggs(u32 index, u32 value);
@@ -744,6 +746,12 @@ static inline int zynqmp_pm_sha_hash(const u64 address, const u32 size,
return -ENODEV;
}

+static inline int zynqmp_pm_rsa(const u64 address, const u32 size,
+ const u32 flags)
+{
+ return -ENODEV;
+}
+
static inline int zynqmp_pm_fpga_load(const u64 address, const u32 size,
const u32 flags)
{
--
2.36.1


2023-02-18 05:39:14

by Harsha, Harsha

Subject: [PATCH 3/4] crypto: xilinx: Add ZynqMP RSA driver

Add RSA driver support for the ZynqMP SoC.
The ZynqMP SoC has an RSA hardware engine used for encryption and
decryption. The flow is:
RSA encrypt/decrypt request from userspace ->
ZynqMP RSA driver -> Firmware driver -> RSA hardware engine

The RSA hardware engine supports 2048-, 3072- and 4096-bit key sizes,
so RSA operations using these key sizes are handled by the hardware
engine. A software fallback is used for other key sizes.

Signed-off-by: Harsha Harsha <[email protected]>
Co-developed-by: Dhaval Shah <[email protected]>
Signed-off-by: Dhaval Shah <[email protected]>
---
drivers/crypto/Kconfig | 10 +
drivers/crypto/xilinx/Makefile | 1 +
drivers/crypto/xilinx/xilinx-rsa.c | 489 +++++++++++++++++++++++++++++
3 files changed, 500 insertions(+)
create mode 100644 drivers/crypto/xilinx/xilinx-rsa.c

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index dfb103f81a64..1ff19bfe8a13 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -695,6 +695,16 @@ config CRYPTO_DEV_ROCKCHIP_DEBUG
This will create /sys/kernel/debug/rk3288_crypto/stats for displaying
the number of requests per algorithm and other internal stats.

+config CRYPTO_DEV_XILINX_RSA
+ tristate "Support for Xilinx ZynqMP RSA hardware accelerator"
+ depends on ARCH_ZYNQMP || COMPILE_TEST
+ select CRYPTO_ENGINE
+ select CRYPTO_AKCIPHER
+ help
+ Xilinx processors have an RSA hardware accelerator used for signature
+ generation and verification. This driver interfaces with the RSA
+ hardware accelerator. Select this if you want to use the ZynqMP
+ module for RSA algorithms.

config CRYPTO_DEV_ZYNQMP_AES
tristate "Support for Xilinx ZynqMP AES hw accelerator"
diff --git a/drivers/crypto/xilinx/Makefile b/drivers/crypto/xilinx/Makefile
index 730feff5b5f2..819d82486a5d 100644
--- a/drivers/crypto/xilinx/Makefile
+++ b/drivers/crypto/xilinx/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_AES) += zynqmp-aes-gcm.o
obj-$(CONFIG_CRYPTO_DEV_ZYNQMP_SHA3) += zynqmp-sha.o
+obj-$(CONFIG_CRYPTO_DEV_XILINX_RSA) += xilinx-rsa.o
diff --git a/drivers/crypto/xilinx/xilinx-rsa.c b/drivers/crypto/xilinx/xilinx-rsa.c
new file mode 100644
index 000000000000..148a2a59ab89
--- /dev/null
+++ b/drivers/crypto/xilinx/xilinx-rsa.c
@@ -0,0 +1,489 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 - 2023, Advanced Micro Devices, Inc.
+ */
+
+#include <linux/crypto.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware/xlnx-zynqmp.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <crypto/engine.h>
+#include <crypto/internal/akcipher.h>
+#include <crypto/internal/rsa.h>
+#include <crypto/scatterwalk.h>
+
+#define XILINX_DMA_BIT_MASK 32U
+#define XILINX_RSA_MAX_KEY_SIZE 1024
+#define XILINX_RSA_BLOCKSIZE 64
+
+/* Key size in bytes */
+#define XSECURE_RSA_2048_KEY_SIZE (2048U / 8U)
+#define XSECURE_RSA_3072_KEY_SIZE (3072U / 8U)
+#define XSECURE_RSA_4096_KEY_SIZE (4096U / 8U)
+
+enum xilinx_akcipher_op {
+ XILINX_RSA_DECRYPT = 0,
+ XILINX_RSA_ENCRYPT,
+ XILINX_RSA_SIGN,
+ XILINX_RSA_VERIFY
+};
+
+struct xilinx_rsa_drv_ctx {
+ struct akcipher_alg alg;
+ struct device *dev;
+ struct crypto_engine *engine;
+ int (*xilinx_rsa_xcrypt)(struct akcipher_request *req);
+};
+
+/*
+ * 1st variable must be of struct crypto_engine_ctx type
+ */
+struct xilinx_rsa_tfm_ctx {
+ struct crypto_engine_ctx engine_ctx;
+ struct device *dev;
+ struct crypto_akcipher *fbk_cipher;
+ u8 *e_buf;
+ u8 *n_buf;
+ u8 *d_buf;
+ unsigned int key_len; /* in bits */
+ unsigned int e_len;
+ unsigned int n_len;
+ unsigned int d_len;
+};
+
+struct xilinx_rsa_req_ctx {
+ enum xilinx_akcipher_op op;
+};
+
+static int zynqmp_rsa_xcrypt(struct akcipher_request *req)
+{
+ struct xilinx_rsa_req_ctx *rq_ctx = akcipher_request_ctx(req);
+ unsigned int len, offset, diff = req->dst_len - req->src_len;
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct xilinx_rsa_tfm_ctx *tctx = akcipher_tfm_ctx(tfm);
+ dma_addr_t dma_addr;
+ char *kbuf, *buf;
+ size_t dma_size;
+ u8 padding = 0;
+ int ret;
+
+ if (rq_ctx->op == XILINX_RSA_ENCRYPT) {
+ padding = tctx->e_len % 2;
+ buf = tctx->e_buf;
+ len = tctx->e_len;
+ } else {
+ buf = tctx->d_buf;
+ len = tctx->d_len;
+ }
+
+ dma_size = req->dst_len + tctx->n_len + len + padding;
+ offset = dma_size - len;
+
+ kbuf = dma_alloc_coherent(tctx->dev, dma_size, &dma_addr, GFP_KERNEL);
+ if (!kbuf)
+ return -ENOMEM;
+
+ scatterwalk_map_and_copy(kbuf + diff, req->src, 0, req->src_len, 0);
+ memcpy(kbuf + req->dst_len, tctx->n_buf, tctx->n_len);
+
+ memcpy(kbuf + offset, buf, len);
+
+ ret = zynqmp_pm_rsa(dma_addr, tctx->n_len, rq_ctx->op);
+ if (ret < 0)
+ goto out;
+
+ sg_copy_from_buffer(req->dst, sg_nents(req->dst), kbuf, req->dst_len);
+
+out:
+ dma_free_coherent(tctx->dev, dma_size, kbuf, dma_addr);
+
+ return ret;
+}
+
+static int xilinx_rsa_decrypt(struct akcipher_request *req)
+{
+ struct xilinx_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct akcipher_alg *alg = crypto_akcipher_alg(tfm);
+ struct xilinx_rsa_drv_ctx *drv_ctx;
+
+ rctx->op = XILINX_RSA_DECRYPT;
+ drv_ctx = container_of(alg, struct xilinx_rsa_drv_ctx, alg);
+
+ return crypto_transfer_akcipher_request_to_engine(drv_ctx->engine, req);
+}
+
+static int xilinx_rsa_encrypt(struct akcipher_request *req)
+{
+ struct xilinx_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+ struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+ struct akcipher_alg *alg = crypto_akcipher_alg(tfm);
+ struct xilinx_rsa_drv_ctx *drv_ctx;
+
+ rctx->op = XILINX_RSA_ENCRYPT;
+ drv_ctx = container_of(alg, struct xilinx_rsa_drv_ctx, alg);
+
+ return crypto_transfer_akcipher_request_to_engine(drv_ctx->engine, req);
+}
+
+static unsigned int xilinx_rsa_max_size(struct crypto_akcipher *tfm)
+{
+ struct xilinx_rsa_tfm_ctx *tctx = akcipher_tfm_ctx(tfm);
+
+ return tctx->n_len;
+}
+
+static inline int xilinx_copy_and_save_keypart(u8 **kpbuf, unsigned int *kplen,
+ const u8 *buf, size_t sz)
+{
+ int nskip;
+
+ for (nskip = 0; nskip < sz; nskip++)
+ if (buf[nskip])
+ break;
+
+ *kplen = sz - nskip;
+ *kpbuf = kmemdup(buf + nskip, *kplen, GFP_KERNEL);
+ if (!*kpbuf)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int xilinx_check_key_length(unsigned int len)
+{
+ if (len < 8 || len > 4096)
+ return -EINVAL;
+ return 0;
+}
+
+static void xilinx_rsa_free_key_bufs(struct xilinx_rsa_tfm_ctx *ctx)
+{
+ /* Clean up old key data */
+ kfree_sensitive(ctx->e_buf);
+ ctx->e_buf = NULL;
+ ctx->e_len = 0;
+ kfree_sensitive(ctx->n_buf);
+ ctx->n_buf = NULL;
+ ctx->n_len = 0;
+ kfree_sensitive(ctx->d_buf);
+ ctx->d_buf = NULL;
+ ctx->d_len = 0;
+}
+
+static int xilinx_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
+ unsigned int keylen, bool private)
+{
+ struct xilinx_rsa_tfm_ctx *tctx = akcipher_tfm_ctx(tfm);
+ struct rsa_key raw_key;
+ int ret;
+
+ if (private)
+ ret = rsa_parse_priv_key(&raw_key, key, keylen);
+ else
+ ret = rsa_parse_pub_key(&raw_key, key, keylen);
+ if (ret)
+ goto n_key;
+
+ ret = xilinx_copy_and_save_keypart(&tctx->n_buf, &tctx->n_len,
+ raw_key.n, raw_key.n_sz);
+ if (ret)
+ goto key_err;
+
+ /* convert to bits */
+ tctx->key_len = tctx->n_len << 3;
+ if (xilinx_check_key_length(tctx->key_len)) {
+ ret = -EINVAL;
+ goto key_err;
+ }
+
+ ret = xilinx_copy_and_save_keypart(&tctx->e_buf, &tctx->e_len,
+ raw_key.e, raw_key.e_sz);
+ if (ret)
+ goto key_err;
+
+ if (private) {
+ ret = xilinx_copy_and_save_keypart(&tctx->d_buf, &tctx->d_len,
+ raw_key.d, raw_key.d_sz);
+ if (ret)
+ goto key_err;
+ }
+
+ return 0;
+
+key_err:
+ xilinx_rsa_free_key_bufs(tctx);
+n_key:
+ return ret;
+}
+
+static int xilinx_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+ unsigned int keylen)
+{
+ struct xilinx_rsa_tfm_ctx *tfm_ctx = akcipher_tfm_ctx(tfm);
+ int ret;
+
+ tfm_ctx->fbk_cipher->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+ tfm_ctx->fbk_cipher->base.crt_flags |= (tfm->base.crt_flags &
+ CRYPTO_TFM_REQ_MASK);
+
+ ret = crypto_akcipher_set_priv_key(tfm_ctx->fbk_cipher, key, keylen);
+ if (ret)
+ return ret;
+
+ return xilinx_rsa_setkey(tfm, key, keylen, true);
+}
+
+static int xilinx_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+ unsigned int keylen)
+{
+ struct xilinx_rsa_tfm_ctx *tfm_ctx = akcipher_tfm_ctx(tfm);
+ int ret;
+
+ tfm_ctx->fbk_cipher->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+ tfm_ctx->fbk_cipher->base.crt_flags |= (tfm->base.crt_flags &
+ CRYPTO_TFM_REQ_MASK);
+
+ ret = crypto_akcipher_set_pub_key(tfm_ctx->fbk_cipher, key, keylen);
+ if (ret)
+ return ret;
+
+ return xilinx_rsa_setkey(tfm, key, keylen, false);
+}
+
+static int xilinx_fallback_check(struct xilinx_rsa_tfm_ctx *tfm_ctx,
+ struct akcipher_request *areq)
+{
+ int need_fallback = 0;
+
+ if (tfm_ctx->n_len != XSECURE_RSA_2048_KEY_SIZE &&
+ tfm_ctx->n_len != XSECURE_RSA_3072_KEY_SIZE &&
+ tfm_ctx->n_len != XSECURE_RSA_4096_KEY_SIZE)
+ need_fallback = 1;
+
+ if (areq->src_len > areq->dst_len)
+ need_fallback = 1;
+
+ return need_fallback;
+}
+
+static int handle_rsa_req(struct crypto_engine *engine,
+ void *req)
+{
+ struct akcipher_request *areq = container_of(req,
+ struct akcipher_request,
+ base);
+ struct crypto_akcipher *akcipher = crypto_akcipher_reqtfm(req);
+ struct akcipher_alg *cipher_alg = crypto_akcipher_alg(akcipher);
+ struct xilinx_rsa_tfm_ctx *tfm_ctx = akcipher_tfm_ctx(akcipher);
+ struct xilinx_rsa_req_ctx *rq_ctx = akcipher_request_ctx(areq);
+ struct akcipher_request *subreq = akcipher_request_ctx(req);
+ struct xilinx_rsa_drv_ctx *drv_ctx;
+ int need_fallback, err;
+
+ drv_ctx = container_of(cipher_alg, struct xilinx_rsa_drv_ctx, alg);
+
+ need_fallback = xilinx_fallback_check(tfm_ctx, areq);
+ if (need_fallback) {
+ akcipher_request_set_tfm(subreq, tfm_ctx->fbk_cipher);
+
+ akcipher_request_set_callback(subreq, areq->base.flags,
+ NULL, NULL);
+ akcipher_request_set_crypt(subreq, areq->src, areq->dst,
+ areq->src_len, areq->dst_len);
+
+ if (rq_ctx->op == XILINX_RSA_ENCRYPT)
+ err = crypto_akcipher_encrypt(subreq);
+ else if (rq_ctx->op == XILINX_RSA_DECRYPT)
+ err = crypto_akcipher_decrypt(subreq);
+ } else {
+ err = drv_ctx->xilinx_rsa_xcrypt(areq);
+ }
+
+ crypto_finalize_akcipher_request(engine, areq, err);
+
+ return 0;
+}
+
+static int xilinx_rsa_init(struct crypto_akcipher *tfm)
+{
+ struct xilinx_rsa_tfm_ctx *tfm_ctx =
+ (struct xilinx_rsa_tfm_ctx *)akcipher_tfm_ctx(tfm);
+ struct akcipher_alg *cipher_alg = crypto_akcipher_alg(tfm);
+ struct xilinx_rsa_drv_ctx *drv_ctx;
+
+ drv_ctx = container_of(cipher_alg, struct xilinx_rsa_drv_ctx, alg);
+ tfm_ctx->dev = drv_ctx->dev;
+
+ tfm_ctx->engine_ctx.op.do_one_request = handle_rsa_req;
+ tfm_ctx->engine_ctx.op.prepare_request = NULL;
+ tfm_ctx->engine_ctx.op.unprepare_request = NULL;
+ tfm_ctx->fbk_cipher = crypto_alloc_akcipher(drv_ctx->alg.base.cra_name,
+ 0,
+ CRYPTO_ALG_NEED_FALLBACK);
+ if (IS_ERR(tfm_ctx->fbk_cipher)) {
+ pr_err("%s() Error: failed to allocate fallback for %s\n",
+ __func__, drv_ctx->alg.base.cra_name);
+ return PTR_ERR(tfm_ctx->fbk_cipher);
+ }
+
+ akcipher_set_reqsize(tfm, sizeof(struct xilinx_rsa_req_ctx));
+
+ return 0;
+}
+
+static void xilinx_rsa_exit(struct crypto_akcipher *tfm)
+{
+ struct xilinx_rsa_tfm_ctx *tfm_ctx =
+ (struct xilinx_rsa_tfm_ctx *)akcipher_tfm_ctx(tfm);
+
+ xilinx_rsa_free_key_bufs(tfm_ctx);
+
+ if (tfm_ctx->fbk_cipher) {
+ crypto_free_akcipher(tfm_ctx->fbk_cipher);
+ tfm_ctx->fbk_cipher = NULL;
+ }
+ memzero_explicit(tfm_ctx, sizeof(struct xilinx_rsa_tfm_ctx));
+}
+
+struct xilinx_rsa_drv_ctx zynqmp_rsa_drv_ctx = {
+ .xilinx_rsa_xcrypt = zynqmp_rsa_xcrypt,
+ .alg = {
+ .init = xilinx_rsa_init,
+ .set_pub_key = xilinx_rsa_set_pub_key,
+ .set_priv_key = xilinx_rsa_set_priv_key,
+ .max_size = xilinx_rsa_max_size,
+ .decrypt = xilinx_rsa_decrypt,
+ .encrypt = xilinx_rsa_encrypt,
+ .sign = xilinx_rsa_decrypt,
+ .verify = xilinx_rsa_encrypt,
+ .exit = xilinx_rsa_exit,
+ .base = {
+ .cra_name = "rsa",
+ .cra_driver_name = "zynqmp-rsa",
+ .cra_priority = 200,
+ .cra_flags = CRYPTO_ALG_TYPE_AKCIPHER |
+ CRYPTO_ALG_KERN_DRIVER_ONLY |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = XILINX_RSA_BLOCKSIZE,
+ .cra_ctxsize = sizeof(struct xilinx_rsa_tfm_ctx),
+ .cra_alignmask = 15,
+ .cra_module = THIS_MODULE,
+ },
+ }
+};
+
+static struct xlnx_feature rsa_feature_map[] = {
+ {
+ .family = ZYNQMP_FAMILY_CODE,
+ .subfamily = ALL_SUB_FAMILY_CODE,
+ .feature_id = PM_SECURE_RSA,
+ .data = &zynqmp_rsa_drv_ctx,
+ },
+ { /* sentinel */ }
+};
+
+static int xilinx_rsa_probe(struct platform_device *pdev)
+{
+ struct xilinx_rsa_drv_ctx *rsa_drv_ctx;
+ struct device *dev = &pdev->dev;
+ int ret;
+
+ /* Verify the hardware is present */
+ rsa_drv_ctx = xlnx_get_crypto_dev_data(rsa_feature_map);
+ if (IS_ERR(rsa_drv_ctx)) {
+ dev_err(dev, "RSA is not supported on the platform\n");
+ return PTR_ERR(rsa_drv_ctx);
+ }
+
+ ret = dma_set_mask_and_coherent(dev,
+ DMA_BIT_MASK(XILINX_DMA_BIT_MASK));
+ if (ret < 0) {
+ dev_err(dev, "no usable DMA configuration");
+ return ret;
+ }
+
+ rsa_drv_ctx->engine = crypto_engine_alloc_init(dev, 1);
+ if (!rsa_drv_ctx->engine) {
+ dev_err(dev, "Cannot alloc RSA engine\n");
+ return -ENOMEM;
+ }
+
+ ret = crypto_engine_start(rsa_drv_ctx->engine);
+ if (ret) {
+ dev_err(dev, "Cannot start RSA engine\n");
+ goto out;
+ }
+
+ rsa_drv_ctx->dev = dev;
+ platform_set_drvdata(pdev, rsa_drv_ctx);
+
+ ret = crypto_register_akcipher(&rsa_drv_ctx->alg);
+ if (ret < 0) {
+ dev_err(dev, "Failed to register akcipher alg.\n");
+ goto out;
+ }
+
+ return 0;
+
+out:
+ crypto_engine_exit(rsa_drv_ctx->engine);
+
+ return ret;
+}
+
+static int xilinx_rsa_remove(struct platform_device *pdev)
+{
+ struct xilinx_rsa_drv_ctx *rsa_drv_ctx;
+
+ rsa_drv_ctx = platform_get_drvdata(pdev);
+
+ crypto_engine_exit(rsa_drv_ctx->engine);
+
+ crypto_unregister_akcipher(&rsa_drv_ctx->alg);
+
+ return 0;
+}
+
+static struct platform_driver xilinx_rsa_driver = {
+ .probe = xilinx_rsa_probe,
+ .remove = xilinx_rsa_remove,
+ .driver = {
+ .name = "xilinx_rsa",
+ },
+};
+
+static int __init xilinx_rsa_driver_init(void)
+{
+ struct platform_device *pdev;
+ int ret;
+
+ ret = platform_driver_register(&xilinx_rsa_driver);
+ if (ret)
+ return ret;
+
+ pdev = platform_device_register_simple(xilinx_rsa_driver.driver.name,
+ 0, NULL, 0);
+ if (IS_ERR(pdev)) {
+ ret = PTR_ERR(pdev);
+ platform_driver_unregister(&xilinx_rsa_driver);
+ }
+
+ return ret;
+}
+
+static void __exit xilinx_rsa_driver_exit(void)
+{
+ platform_driver_unregister(&xilinx_rsa_driver);
+}
+
+device_initcall(xilinx_rsa_driver_init);
+module_exit(xilinx_rsa_driver_exit);
+
+MODULE_DESCRIPTION("Xilinx RSA hw acceleration support.");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Harsha <[email protected]>");
--
2.36.1


2023-02-18 05:39:17

by Harsha, Harsha

Subject: [PATCH 4/4] MAINTAINERS: Add maintainer for Xilinx ZynqMP RSA driver

Add a MAINTAINERS entry for the Xilinx ZynqMP RSA driver.

Signed-off-by: Harsha Harsha <[email protected]>
---
MAINTAINERS | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f61eb221415b..c21d646cffb0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22907,6 +22907,11 @@ T: git https://github.com/Xilinx/linux-xlnx.git
F: Documentation/devicetree/bindings/phy/xlnx,zynqmp-psgtr.yaml
F: drivers/phy/xilinx/phy-zynqmp.c

+XILINX RSA DRIVER
+M: Harsha <[email protected]>
+S: Maintained
+F: drivers/crypto/xilinx/xilinx-rsa.c
+
XILINX ZYNQMP SHA3 DRIVER
M: Harsha <[email protected]>
S: Maintained
--
2.36.1


2023-03-10 10:32:10

by Herbert Xu

Subject: Re: [PATCH 3/4] crypto: xilinx: Add ZynqMP RSA driver

On Sat, Feb 18, 2023 at 11:08:08AM +0530, Harsha Harsha wrote:
>
> + .cra_flags = CRYPTO_ALG_TYPE_AKCIPHER |
> + CRYPTO_ALG_KERN_DRIVER_ONLY |
> + CRYPTO_ALG_ALLOCATES_MEMORY |
> + CRYPTO_ALG_NEED_FALLBACK,

The driver appears to be async so you should set the flag
CRYPTO_ALG_ASYNC.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-03-15 05:02:53

by Harsha, Harsha

Subject: RE: [PATCH 3/4] crypto: xilinx: Add ZynqMP RSA driver

Hi,

> -----Original Message-----
> From: Herbert Xu <[email protected]>
> Sent: Friday, March 10, 2023 4:02 PM
> To: Harsha, Harsha <[email protected]>
> Cc: [email protected]; [email protected]; linux-
> [email protected]; [email protected];
> [email protected]; [email protected]; git (AMD-Xilinx) <[email protected]>;
> Shah, Dhaval (CPG-PSAV) <[email protected]>
> Subject: Re: [PATCH 3/4] crypto: xilinx: Add ZynqMP RSA driver
>
> On Sat, Feb 18, 2023 at 11:08:08AM +0530, Harsha Harsha wrote:
> >
> > + .cra_flags = CRYPTO_ALG_TYPE_AKCIPHER |
> > + CRYPTO_ALG_KERN_DRIVER_ONLY |
> > + CRYPTO_ALG_ALLOCATES_MEMORY |
> > + CRYPTO_ALG_NEED_FALLBACK,
>
> The driver appears to be async so you should set the flag
> CRYPTO_ALG_ASYNC.

Thanks for the review.
For the RSA driver, below is the flow of operation:
RSA linux driver -> ATF -> Firmware -> RSA hardware engine.

To perform the operation, the request goes to the RSA HW engine. Once the operation is done, the response is sent back
via firmware and ATF to the linux driver. Meanwhile the API in the linux driver waits until the operation is complete.
This is why the driver is synchronous and therefore the CRYPTO_ALG_ASYNC flag is not set.

Regards,
Harsha

>
> Thanks,
> --
> Email: Herbert Xu <[email protected]> Home Page:
> http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-03-15 05:06:16

by Herbert Xu

Subject: Re: [PATCH 3/4] crypto: xilinx: Add ZynqMP RSA driver

On Wed, Mar 15, 2023 at 05:02:39AM +0000, Harsha, Harsha wrote:
>
> To perform the operation, the request goes to the RSA HW engine. Once the operation is done, the response is sent back
> via firmware and ATF to the linux driver. Meanwhile the API in the linux driver waits until the operation is complete.
> This is why the driver is synchronous and therefore the CRYPTO_ALG_ASYNC flag is not set.

But you use crypto_engine, right? That is always async regardless
of what your driver does.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt