Assorted bug fixes and cleanups
Gilad Ben-Yossef (3):
crypto: ccree: unmap buffer before copying IV
crypto: ccree: shared irq lines are not a bug
crypto: ccree: don't copy zero size ciphertext
Hadar Gat (4):
crypto: ccree: improve error handling
crypto: ccree: add error message
crypto: ccree: fix free of unallocated mlli buffer
crypto: ccree: remove legacy leftover
drivers/crypto/ccree/cc_buffer_mgr.c | 87 ++++++++++++++--------------
drivers/crypto/ccree/cc_cipher.c | 6 +-
drivers/crypto/ccree/cc_driver.c | 6 +-
drivers/crypto/ccree/cc_driver.h | 2 -
4 files changed, 50 insertions(+), 51 deletions(-)
--
2.20.1
From: Hadar Gat <[email protected]>
In cc_unmap_aead_request(), call dma_pool_free() for the MLLI buffer
only if an item was actually allocated from the pool, not whenever a
pool exists.
This fixes a kernel panic when trying to free a non-allocated item.
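For illustration, the shape of the fix as a minimal sketch (field
names as used in cc_buffer_mgr.c; the actual hunk is below):

    /* Free the MLLI table only if an item was actually allocated
     * from the pool; the mere existence of curr_pool is not enough.
     */
    if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
         areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
        areq_ctx->mlli_params.mlli_virt_addr) {
            dma_pool_free(areq_ctx->mlli_params.curr_pool,
                          areq_ctx->mlli_params.mlli_virt_addr,
                          areq_ctx->mlli_params.mlli_dma_addr);
    }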
Cc: [email protected]
Signed-off-by: Hadar Gat <[email protected]>
Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/crypto/ccree/cc_buffer_mgr.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index 237a87a57830..0ee1c52da0a4 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
hw_iv_size, DMA_BIDIRECTIONAL);
}
- /*In case a pool was set, a table was
- *allocated and should be released
- */
- if (areq_ctx->mlli_params.curr_pool) {
+ /* Release pool */
+ if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
+ areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
+ (areq_ctx->mlli_params.mlli_virt_addr)) {
dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
&areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
--
2.20.1
From: Hadar Gat <[email protected]>
Remove legacy code no longer in use.
Signed-off-by: Hadar Gat <[email protected]>
Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/crypto/ccree/cc_driver.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/crypto/ccree/cc_driver.h b/drivers/crypto/ccree/cc_driver.h
index 5be7fd431b05..33dbf3e6d15d 100644
--- a/drivers/crypto/ccree/cc_driver.h
+++ b/drivers/crypto/ccree/cc_driver.h
@@ -111,13 +111,11 @@ struct cc_crypto_req {
* @cc_base: virt address of the CC registers
* @irq: device IRQ number
* @irq_mask: Interrupt mask shadow (1 for masked interrupts)
- * @fw_ver: SeP loaded firmware version
*/
struct cc_drvdata {
void __iomem *cc_base;
int irq;
u32 irq_mask;
- u32 fw_ver;
struct completion hw_queue_avail; /* wait for HW queue availability */
struct platform_device *plat_dev;
cc_sram_addr_t mlli_sram_addr;
--
2.20.1
The ccree driver was logging an error when it received an interrupt
but the HW indicated there was nothing to do, as may happen when
sharing an IRQ line. Remove the error message, since this is a normal
occurrence and we already have a debug print of the IRR register value.
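For reference, the shared-IRQ convention this follows, as a generic
sketch (not the exact driver code): a handler on a shared line returns
IRQ_NONE, without complaint, when its device has nothing pending:

    static irqreturn_t cc_isr(int irq, void *dev_id)
    {
            struct cc_drvdata *drvdata = dev_id;
            u32 irr = cc_ioread(drvdata, CC_REG(HOST_IRR));

            if (irr == 0)   /* not ours; the line is shared */
                    return IRQ_NONE;

            /* ... clear and handle our events ... */
            return IRQ_HANDLED;
    }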
Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/crypto/ccree/cc_driver.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
index 8ada308d72ee..e75fbe7a8f84 100644
--- a/drivers/crypto/ccree/cc_driver.c
+++ b/drivers/crypto/ccree/cc_driver.c
@@ -103,10 +103,10 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
/* read the interrupt status */
irr = cc_ioread(drvdata, CC_REG(HOST_IRR));
dev_dbg(dev, "Got IRR=0x%08X\n", irr);
- if (irr == 0) { /* Probably shared interrupt line */
- dev_err(dev, "Got interrupt with empty IRR\n");
+
+ if (irr == 0) /* Probably shared interrupt line */
return IRQ_NONE;
- }
+
imr = cc_ioread(drvdata, CC_REG(HOST_IMR));
/* clear interrupt - must be before processing events */
--
2.20.1
For CBC, we were copying the last ciphertext block into the IV field
before removing the DMA mapping of the output buffer. As a result, the
buffer was sometimes out of sync cache-wise and we were getting
intermittent cases of a bad output IV.
Fix it by moving the DMA buffer unmapping before the copy.
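The underlying DMA API rule, as a sketch (last_ciphertext_block is an
illustrative placeholder, not a driver symbol): unmapping is what hands
buffer ownership back to the CPU and performs any needed cache
maintenance, so the CPU read must come after it:

    /* Wrong order: the device may still own the buffer, so the
     * CPU can read a stale cache line.
     */
    memcpy(req->iv, last_ciphertext_block, ivsize);
    dma_unmap_sg(dev, dst, nents, DMA_BIDIRECTIONAL);

    /* Right order: unmap first, then read the ciphertext. */
    dma_unmap_sg(dev, dst, nents, DMA_BIDIRECTIONAL);
    memcpy(req->iv, last_ciphertext_block, ivsize);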
Signed-off-by: Gilad Ben-Yossef <[email protected]>
Fixes: 00904aa0cd59 ("crypto: ccree - fix iv handling")
Cc: <[email protected]>
---
drivers/crypto/ccree/cc_cipher.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index cc92b031fad1..98ea53524250 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -652,6 +652,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
unsigned int len;
+ cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+
switch (ctx_p->cipher_mode) {
case DRV_CIPHER_CBC:
/*
@@ -681,7 +683,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
break;
}
- cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
kzfree(req_ctx->iv);
skcipher_request_complete(req, err);
--
2.20.1
From: Hadar Gat <[email protected]>
Pass the error code returned by the lower-level mapping functions up
to the callers instead of overwriting it with -ENOMEM.
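The pattern applied throughout, shown as a before/after sketch using
one of the affected call sites:

    /* Before: any failure is flattened to -ENOMEM. */
    if (cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL,
                  &req_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
                  &dummy, &mapped_nents)) {
            rc = -ENOMEM;
            goto cipher_exit;
    }

    /* After: the callee's error code reaches the caller intact. */
    rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL,
                   &req_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
                   &dummy, &mapped_nents);
    if (rc)
            goto cipher_exit;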
Signed-off-by: Hadar Gat <[email protected]>
Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/crypto/ccree/cc_buffer_mgr.c | 74 +++++++++++++---------------
1 file changed, 35 insertions(+), 39 deletions(-)
diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index dd948e1df9e5..32d2df36ced2 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -511,10 +511,8 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
/* Map the src SGL */
rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
- if (rc) {
- rc = -ENOMEM;
+ if (rc)
goto cipher_exit;
- }
if (mapped_nents > 1)
req_ctx->dma_buf_type = CC_DMA_BUF_MLLI;
@@ -528,12 +526,11 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
}
} else {
/* Map the dst sg */
- if (cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
- &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
- &dummy, &mapped_nents)) {
- rc = -ENOMEM;
+ rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
+ &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
+ &dummy, &mapped_nents);
+ if (rc)
goto cipher_exit;
- }
if (mapped_nents > 1)
req_ctx->dma_buf_type = CC_DMA_BUF_MLLI;
@@ -1078,10 +1075,8 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
&areq_ctx->dst.nents,
LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
&dst_mapped_nents);
- if (rc) {
- rc = -ENOMEM;
+ if (rc)
goto chain_data_exit;
- }
}
dst_mapped_nents = cc_get_sgl_nents(dev, req->dst, size_for_map,
@@ -1235,11 +1230,10 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
}
areq_ctx->ccm_iv0_dma_addr = dma_addr;
- if (cc_set_aead_conf_buf(dev, areq_ctx, areq_ctx->ccm_config,
- &sg_data, req->assoclen)) {
- rc = -ENOMEM;
+ rc = cc_set_aead_conf_buf(dev, areq_ctx, areq_ctx->ccm_config,
+ &sg_data, req->assoclen);
+ if (rc)
goto aead_map_failure;
- }
}
if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
@@ -1299,10 +1293,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
(LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
LLI_MAX_NUM_OF_DATA_ENTRIES),
&dummy, &mapped_nents);
- if (rc) {
- rc = -ENOMEM;
+ if (rc)
goto aead_map_failure;
- }
if (areq_ctx->is_single_pass) {
/*
@@ -1386,6 +1378,7 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
struct mlli_params *mlli_params = &areq_ctx->mlli_params;
struct buffer_array sg_data;
struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
+ int rc = 0;
u32 dummy = 0;
u32 mapped_nents = 0;
@@ -1405,18 +1398,18 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
/*TODO: copy data in case that buffer is enough for operation */
/* map the previous buffer */
if (*curr_buff_cnt) {
- if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
- &sg_data)) {
- return -ENOMEM;
- }
+ rc = cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
+ &sg_data);
+ if (rc)
+ return rc;
}
if (src && nbytes > 0 && do_update) {
- if (cc_map_sg(dev, src, nbytes, DMA_TO_DEVICE,
- &areq_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
- &dummy, &mapped_nents)) {
+ rc = cc_map_sg(dev, src, nbytes, DMA_TO_DEVICE,
+ &areq_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
+ &dummy, &mapped_nents);
+ if (rc)
goto unmap_curr_buff;
- }
if (src && mapped_nents == 1 &&
areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) {
memcpy(areq_ctx->buff_sg, src,
@@ -1435,7 +1428,8 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
/* add the src data to the sg_data */
cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src, nbytes,
0, true, &areq_ctx->mlli_nents);
- if (cc_generate_mlli(dev, &sg_data, mlli_params, flags))
+ rc = cc_generate_mlli(dev, &sg_data, mlli_params, flags);
+ if (rc)
goto fail_unmap_din;
}
/* change the buffer index for the unmap function */
@@ -1451,7 +1445,7 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
if (*curr_buff_cnt)
dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
- return -ENOMEM;
+ return rc;
}
int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
@@ -1470,6 +1464,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
struct buffer_array sg_data;
struct buff_mgr_handle *buff_mgr = drvdata->buff_mgr_handle;
unsigned int swap_index = 0;
+ int rc = 0;
u32 dummy = 0;
u32 mapped_nents = 0;
@@ -1514,21 +1509,21 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
}
if (*curr_buff_cnt) {
- if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
- &sg_data)) {
- return -ENOMEM;
- }
+ rc = cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
+ &sg_data);
+ if (rc)
+ return rc;
/* change the buffer index for next operation */
swap_index = 1;
}
if (update_data_len > *curr_buff_cnt) {
- if (cc_map_sg(dev, src, (update_data_len - *curr_buff_cnt),
- DMA_TO_DEVICE, &areq_ctx->in_nents,
- LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy,
- &mapped_nents)) {
+ rc = cc_map_sg(dev, src, (update_data_len - *curr_buff_cnt),
+ DMA_TO_DEVICE, &areq_ctx->in_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy,
+ &mapped_nents);
+ if (rc)
goto unmap_curr_buff;
- }
if (mapped_nents == 1 &&
areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) {
/* only one entry in the SG and no previous data */
@@ -1548,7 +1543,8 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src,
(update_data_len - *curr_buff_cnt), 0, true,
&areq_ctx->mlli_nents);
- if (cc_generate_mlli(dev, &sg_data, mlli_params, flags))
+ rc = cc_generate_mlli(dev, &sg_data, mlli_params, flags);
+ if (rc)
goto fail_unmap_din;
}
areq_ctx->buff_index = (areq_ctx->buff_index ^ swap_index);
@@ -1562,7 +1558,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
if (*curr_buff_cnt)
dma_unmap_sg(dev, areq_ctx->buff_sg, 1, DMA_TO_DEVICE);
- return -ENOMEM;
+ return rc;
}
void cc_unmap_hash_request(struct device *dev, void *ctx,
--
2.20.1
For decryption in CBC mode we need to save the last ciphertext block
for use as the next IV. However, we were trying to do this even for a
zero-sized ciphertext, resulting in a panic.
Fix this by only doing the copy if the ciphertext length is at least
the size of the IV.
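As a sketch, the guard simply requires at least one IV-sized block of
ciphertext before attempting the save (names as in cc_cipher.c):

    /* Only CBC needs the last ciphertext block as the next IV,
     * and only when there is at least one full block to save.
     */
    if (ctx_p->cipher_mode == DRV_CIPHER_CBC &&
        req->cryptlen >= ivsize) {
            /* allocate and stash the last ivsize bytes of source */
    }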
Signed-off-by: Gilad Ben-Yossef <[email protected]>
Cc: [email protected]
---
drivers/crypto/ccree/cc_cipher.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index 98ea53524250..e202d7c7ea00 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -800,7 +800,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req)
memset(req_ctx, 0, sizeof(*req_ctx));
- if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
+ if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) &&
+ (req->cryptlen >= ivsize)) {
/* Allocate and save the last IV sized bytes of the source,
* which will be lost in case of in-place decryption.
--
2.20.1
From: Hadar Gat <[email protected]>
Add an error message for the case of too many MLLI entries.
Signed-off-by: Hadar Gat <[email protected]>
Signed-off-by: Gilad Ben-Yossef <[email protected]>
---
drivers/crypto/ccree/cc_buffer_mgr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index 32d2df36ced2..237a87a57830 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -156,8 +156,11 @@ static int cc_render_buff_to_mlli(struct device *dev, dma_addr_t buff_dma,
/* Verify there is no memory overflow*/
new_nents = (*curr_nents + buff_size / CC_MAX_MLLI_ENTRY_SIZE + 1);
- if (new_nents > MAX_NUM_OF_TOTAL_MLLI_ENTRIES)
+ if (new_nents > MAX_NUM_OF_TOTAL_MLLI_ENTRIES) {
+ dev_err(dev, "Too many mlli entries. current %d max %d\n",
+ new_nents, MAX_NUM_OF_TOTAL_MLLI_ENTRIES);
return -ENOMEM;
+ }
/*handle buffer longer than 64 kbytes */
while (buff_size > CC_MAX_MLLI_ENTRY_SIZE) {
--
2.20.1
On Tue, Jan 15, 2019 at 03:43:10PM +0200, Gilad Ben-Yossef wrote:
> Assorted bug fixes and cleanups
>
> Gilad Ben-Yossef (3):
> crypto: ccree: unmap buffer before copying IV
> crypto: ccree: shared irq lines are not a bug
> crypto: ccree: don't copy zero size ciphertext
>
> Hadar Gat (4):
> crypto: ccree: improve error handling
> crypto: ccree: add error message
> crypto: ccree: fix free of unallocated mlli buffer
> crypto: ccree: remove legacy leftover
>
> drivers/crypto/ccree/cc_buffer_mgr.c | 87 ++++++++++++++--------------
> drivers/crypto/ccree/cc_cipher.c | 6 +-
> drivers/crypto/ccree/cc_driver.c | 6 +-
> drivers/crypto/ccree/cc_driver.h | 2 -
> 4 files changed, 50 insertions(+), 51 deletions(-)
All applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt