2012-12-21 17:04:00

by Mark Greer

Subject: [PATCH 0/9] crypto: omap-sham - Updates & New Functionality

From: "Mark A. Greer" <[email protected]>

[This supersedes the omap-sham driver patches sent in the
"crypto: omap-sham updates" series sent a few weeks ago.]

This patch series does several things to the omap-sham crypto
driver including:
- converting to use pm_runtime
- adding suspend/resume support
- converting to use dmaengine API
- adding device tree support
- adding OMAP4/AM33XX support
- adding SHA224 and SHA256 support
- some minor cleanups

The patches are based on the current k.o. kernel, plus:
- the ARM hwmod, etc patches from
"[PATCH 00/15] OMAP SHAM & AES Crypto Updates"
(http://marc.info/?l=linux-omap&m=135610732120447&w=2)
- the EDMA dmaengine patches submitted by Matt Porter
"[RFC PATCH v3 00/16] DMA Engine support for AM33XX]"
(https://lkml.org/lkml/2012/10/18/256)
- some misc patches required by the EDMA patches
- a hack to fix the compilation error that the current k.o. kernel has

A working example is here:

[email protected]:mgreeraz/linux-mag.git submitted/crypto/sham

Mark A. Greer (9):
crypto: omap-sham - Remove unnecessary pr_info noise
crypto: omap-sham - Convert to use pm_runtime API
crypto: omap-sham - Add suspend/resume support
crypto: omap-sham - Add code to use dmaengine API
crypto: omap-sham - Remove usage of private DMA API
crypto: omap-sham - Add Device Tree Support
crypto: omap-sham - Convert to dma_request_slave_channel_compat()
crypto: omap-sham - Add OMAP4/AM33XX SHAM Support
crypto: omap-sham - Add SHA224 and SHA256 Support

drivers/crypto/omap-sham.c | 913 +++++++++++++++++++++++++++++++++++----------
1 file changed, 718 insertions(+), 195 deletions(-)

--
1.7.12



2012-12-21 17:04:01

by Mark Greer

Subject: [PATCH 1/9] crypto: omap-sham - Remove unnecessary pr_info noise

From: "Mark A. Greer" <[email protected]>

Remove the unnecessary pr_info() call in omap_sham_mod_init().

CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 1d75e6f..2df7a54 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -1286,8 +1286,6 @@ static struct platform_driver omap_sham_driver = {

static int __init omap_sham_mod_init(void)
{
- pr_info("loading %s driver\n", "omap-sham");
-
return platform_driver_register(&omap_sham_driver);
}

--
1.7.12


2012-12-21 17:04:18

by Mark Greer

Subject: [PATCH 3/9] crypto: omap-sham - Add suspend/resume support

From: "Mark A. Greer" <[email protected]>

Add suspend/resume support to the OMAP SHAM driver.

CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 777eb9f..9c3b096 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -1269,12 +1269,31 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
return 0;
}

+#ifdef CONFIG_PM_SLEEP
+static int omap_sham_suspend(struct device *dev)
+{
+ pm_runtime_put_sync(dev);
+ return 0;
+}
+
+static int omap_sham_resume(struct device *dev)
+{
+ pm_runtime_get_sync(dev);
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops omap_sham_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(omap_sham_suspend, omap_sham_resume)
+};
+
static struct platform_driver omap_sham_driver = {
.probe = omap_sham_probe,
.remove = omap_sham_remove,
.driver = {
.name = "omap-sham",
.owner = THIS_MODULE,
+ .pm = &omap_sham_pm_ops,
},
};

--
1.7.12

2012-12-21 17:04:19

by Mark Greer

Subject: [PATCH 2/9] crypto: omap-sham - Convert to use pm_runtime API

From: "Mark A. Greer" <[email protected]>

Convert the omap-sham crypto driver to use the
pm_runtime API instead of the clk API.
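
For illustration, the shape of the conversion is roughly the sketch
below. The function names (sham_probe_pm(), sham_remove_pm()) are made
up for the example and are not part of the driver; on OMAP the clock
handling ends up behind runtime PM in the hwmod/omap_device layer.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Minimal sketch: the driver stops handling its interface clock and
 * lets runtime PM power the module up/down around register accesses. */
static int sham_probe_pm(struct platform_device *pdev)
{
        struct device *dev = &pdev->dev;

        pm_runtime_enable(dev);         /* replaces clk_get(dev, "ick")   */
        pm_runtime_get_sync(dev);       /* replaces clk_enable(dd->iclk)  */

        /* ... hardware may be touched here (e.g. read the REV register) ... */

        pm_runtime_put_sync(dev);       /* replaces clk_disable(dd->iclk) */
        return 0;
}

static int sham_remove_pm(struct platform_device *pdev)
{
        pm_runtime_disable(&pdev->dev); /* replaces clk_put(dd->iclk)     */
        return 0;
}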

CC: Kevin Hilman <[email protected]>
CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 2df7a54..777eb9f 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -22,12 +22,12 @@
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
-#include <linux/clk.h>
#include <linux/irq.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
+#include <linux/pm_runtime.h>
#include <linux/delay.h>
#include <linux/crypto.h>
#include <linux/cryptohash.h>
@@ -140,7 +140,6 @@ struct omap_sham_dev {
struct device *dev;
void __iomem *io_base;
int irq;
- struct clk *iclk;
spinlock_t lock;
int err;
int dma;
@@ -237,7 +236,7 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)

static int omap_sham_hw_init(struct omap_sham_dev *dd)
{
- clk_enable(dd->iclk);
+ pm_runtime_get_sync(dd->dev);

if (!test_bit(FLAGS_INIT, &dd->flags)) {
omap_sham_write_mask(dd, SHA_REG_MASK,
@@ -652,7 +651,8 @@ static void omap_sham_finish_req(struct ahash_request *req, int err)
/* atomic operation is not needed here */
dd->flags &= ~(BIT(FLAGS_BUSY) | BIT(FLAGS_FINAL) | BIT(FLAGS_CPU) |
BIT(FLAGS_DMA_READY) | BIT(FLAGS_OUTPUT_READY));
- clk_disable(dd->iclk);
+
+ pm_runtime_put_sync(dd->dev);

if (req->base.complete)
req->base.complete(&req->base, err);
@@ -1197,14 +1197,6 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
if (err)
goto dma_err;

- /* Initializing the clock */
- dd->iclk = clk_get(dev, "ick");
- if (IS_ERR(dd->iclk)) {
- dev_err(dev, "clock intialization failed.\n");
- err = PTR_ERR(dd->iclk);
- goto clk_err;
- }
-
dd->io_base = ioremap(dd->phys_base, SZ_4K);
if (!dd->io_base) {
dev_err(dev, "can't ioremap\n");
@@ -1212,11 +1204,14 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
goto io_err;
}

- clk_enable(dd->iclk);
+ pm_runtime_enable(dev);
+ pm_runtime_get_sync(dev);
+
dev_info(dev, "hw accel on OMAP rev %u.%u\n",
(omap_sham_read(dd, SHA_REG_REV) & SHA_REG_REV_MAJOR) >> 4,
omap_sham_read(dd, SHA_REG_REV) & SHA_REG_REV_MINOR);
- clk_disable(dd->iclk);
+
+ pm_runtime_put_sync(&pdev->dev);

spin_lock(&sham.lock);
list_add_tail(&dd->list, &sham.dev_list);
@@ -1234,9 +1229,8 @@ err_algs:
for (j = 0; j < i; j++)
crypto_unregister_ahash(&algs[j]);
iounmap(dd->io_base);
+ pm_runtime_disable(dev);
io_err:
- clk_put(dd->iclk);
-clk_err:
omap_sham_dma_cleanup(dd);
dma_err:
if (dd->irq >= 0)
@@ -1265,7 +1259,7 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
crypto_unregister_ahash(&algs[i]);
tasklet_kill(&dd->done_task);
iounmap(dd->io_base);
- clk_put(dd->iclk);
+ pm_runtime_disable(&pdev->dev);
omap_sham_dma_cleanup(dd);
if (dd->irq >= 0)
free_irq(dd->irq, dd);
--
1.7.12

2012-12-21 17:04:20

by Mark Greer

Subject: [PATCH 4/9] crypto: omap-sham - Add code to use dmaengine API

From: "Mark A. Greer" <[email protected]>

Add code to use the new dmaengine API alongside
the existing DMA code that uses the private
OMAP DMA API. The API to use is chosen by
defining or undefining 'OMAP_SHAM_DMA_PRIVATE'.

This is a transitional change and the code that uses
the private DMA API will be removed in an upcoming
commit.
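
The dmaengine portion follows the standard slave-transfer sequence:
configure the channel, prepare a descriptor, hook a completion
callback, submit, then issue. The following is a self-contained sketch
of that sequence, not the driver code itself; the names
start_mem_to_dev() and dev_fifo are illustrative. In the patch this
logic lives in omap_sham_xmit_dma(), with the FIFO address taken from
dd->phys_base + SHA_REG_DIN(0).

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Sketch of a mem-to-device slave transfer with the dmaengine API. */
static int start_mem_to_dev(struct dma_chan *chan, dma_addr_t buf, size_t len,
                            dma_addr_t dev_fifo, dma_async_tx_callback done,
                            void *arg)
{
        struct dma_slave_config cfg = {
                .dst_addr       = dev_fifo,     /* peripheral data register */
                .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
                .dst_maxburst   = 16,
        };
        struct dma_async_tx_descriptor *tx;
        int ret;

        ret = dmaengine_slave_config(chan, &cfg);       /* 1. configure */
        if (ret)
                return ret;

        tx = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                         DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx)                                        /* 2. prepare */
                return -EINVAL;

        tx->callback = done;                            /* 3. completion hook */
        tx->callback_param = arg;

        dmaengine_submit(tx);                           /* 4. queue descriptor */
        dma_async_issue_pending(chan);                  /* 5. start the engine */

        return 0;
}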

CC: Russell King <[email protected]>
CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 154 ++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 145 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 9c3b096..f54ceb8 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -13,6 +13,8 @@
* Some ideas are from old omap-sha1-md5.c driver.
*/

+#define OMAP_SHAM_DMA_PRIVATE
+
#define pr_fmt(fmt) "%s: " fmt, __func__

#include <linux/err.h>
@@ -27,6 +29,8 @@
#include <linux/platform_device.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/omap-dma.h>
#include <linux/pm_runtime.h>
#include <linux/delay.h>
#include <linux/crypto.h>
@@ -37,15 +41,15 @@
#include <crypto/hash.h>
#include <crypto/internal/hash.h>

-#include <linux/omap-dma.h>
-#include <mach/irqs.h>
-
#define SHA_REG_DIGEST(x) (0x00 + ((x) * 0x04))
#define SHA_REG_DIN(x) (0x1C + ((x) * 0x04))

#define SHA1_MD5_BLOCK_SIZE SHA1_BLOCK_SIZE
#define MD5_DIGEST_SIZE 16

+#define DST_MAXBURST 16
+#define DMA_MIN (DST_MAXBURST * sizeof(u32))
+
#define SHA_REG_DIGCNT 0x14

#define SHA_REG_CTRL 0x18
@@ -109,6 +113,9 @@ struct omap_sham_reqctx {

/* walk state */
struct scatterlist *sg;
+#ifndef OMAP_SHAM_DMA_PRIVATE
+ struct scatterlist sgl;
+#endif
unsigned int offset; /* offset in current sg */
unsigned int total; /* total request */

@@ -142,8 +149,12 @@ struct omap_sham_dev {
int irq;
spinlock_t lock;
int err;
+#ifdef OMAP_SHAM_DMA_PRIVATE
int dma;
int dma_lch;
+#else
+ struct dma_chan *dma_lch;
+#endif
struct tasklet_struct done_task;

unsigned long flags;
@@ -313,15 +324,32 @@ static int omap_sham_xmit_cpu(struct omap_sham_dev *dd, const u8 *buf,
return -EINPROGRESS;
}

+#ifndef OMAP_SHAM_DMA_PRIVATE
+static void omap_sham_dma_callback(void *param)
+{
+ struct omap_sham_dev *dd = param;
+
+ set_bit(FLAGS_DMA_READY, &dd->flags);
+ tasklet_schedule(&dd->done_task);
+}
+#endif
+
static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,
- size_t length, int final)
+ size_t length, int final, int is_sg)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
+#ifdef OMAP_SHAM_DMA_PRIVATE
int len32;
+#else
+ struct dma_async_tx_descriptor *tx;
+ struct dma_slave_config cfg;
+ int len32, ret;
+#endif

dev_dbg(dd->dev, "xmit_dma: digcnt: %d, length: %d, final: %d\n",
ctx->digcnt, length, final);

+#ifdef OMAP_SHAM_DMA_PRIVATE
len32 = DIV_ROUND_UP(length, sizeof(u32));

omap_set_dma_transfer_params(dd->dma_lch, OMAP_DMA_DATA_TYPE_S32, len32,
@@ -331,6 +359,50 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,
omap_set_dma_src_params(dd->dma_lch, 0, OMAP_DMA_AMODE_POST_INC,
dma_addr, 0, 0);

+#else
+ memset(&cfg, 0, sizeof(cfg));
+
+ cfg.dst_addr = dd->phys_base + SHA_REG_DIN(0);
+ cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ cfg.dst_maxburst = DST_MAXBURST;
+
+ ret = dmaengine_slave_config(dd->dma_lch, &cfg);
+ if (ret) {
+ pr_err("omap-sham: can't configure dmaengine slave: %d\n", ret);
+ return ret;
+ }
+
+ len32 = DIV_ROUND_UP(length, DMA_MIN) * DMA_MIN;
+
+ if (is_sg) {
+ /*
+ * The SG entry passed in may not have the 'length' member
+ * set correctly so use a local SG entry (sgl) with the
+ * proper value for 'length' instead. If this is not done,
+ * the dmaengine may try to DMA the incorrect amount of data.
+ */
+ sg_init_table(&ctx->sgl, 1);
+ ctx->sgl.page_link = ctx->sg->page_link;
+ ctx->sgl.offset = ctx->sg->offset;
+ sg_dma_len(&ctx->sgl) = len32;
+ sg_dma_address(&ctx->sgl) = sg_dma_address(ctx->sg);
+
+ tx = dmaengine_prep_slave_sg(dd->dma_lch, &ctx->sgl, 1,
+ DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ } else {
+ tx = dmaengine_prep_slave_single(dd->dma_lch, dma_addr, len32,
+ DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ }
+
+ if (!tx) {
+ dev_err(dd->dev, "prep_slave_sg/single() failed\n");
+ return -EINVAL;
+ }
+
+ tx->callback = omap_sham_dma_callback;
+ tx->callback_param = dd;
+#endif
+
omap_sham_write_ctrl(dd, length, final, 1);

ctx->digcnt += length;
@@ -340,7 +412,12 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,

set_bit(FLAGS_DMA_ACTIVE, &dd->flags);

+#ifdef OMAP_SHAM_DMA_PRIVATE
omap_start_dma(dd->dma_lch);
+#else
+ dmaengine_submit(tx);
+ dma_async_issue_pending(dd->dma_lch);
+#endif

return -EINPROGRESS;
}
@@ -387,6 +464,8 @@ static int omap_sham_xmit_dma_map(struct omap_sham_dev *dd,
struct omap_sham_reqctx *ctx,
size_t length, int final)
{
+ int ret;
+
ctx->dma_addr = dma_map_single(dd->dev, ctx->buffer, ctx->buflen,
DMA_TO_DEVICE);
if (dma_mapping_error(dd->dev, ctx->dma_addr)) {
@@ -396,8 +475,12 @@ static int omap_sham_xmit_dma_map(struct omap_sham_dev *dd,

ctx->flags &= ~BIT(FLAGS_SG);

- /* next call does not fail... so no unmap in the case of error */
- return omap_sham_xmit_dma(dd, ctx->dma_addr, length, final);
+ ret = omap_sham_xmit_dma(dd, ctx->dma_addr, length, final, 0);
+ if (ret)
+ dma_unmap_single(dd->dev, ctx->dma_addr, ctx->buflen,
+ DMA_TO_DEVICE);
+
+ return ret;
}

static int omap_sham_update_dma_slow(struct omap_sham_dev *dd)
@@ -432,6 +515,7 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
unsigned int length, final, tail;
struct scatterlist *sg;
+ int ret;

if (!ctx->total)
return 0;
@@ -439,6 +523,17 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
if (ctx->bufcnt || ctx->offset)
return omap_sham_update_dma_slow(dd);

+#ifndef OMAP_SHAM_DMA_PRIVATE
+ /*
+ * Don't use the sg interface when the transfer size is less
+ * than the number of elements in a DMA frame. Otherwise,
+ * the dmaengine infrastructure will calculate that it needs
+ * to transfer 0 frames which ultimately fails.
+ */
+ if (ctx->total < (DST_MAXBURST * sizeof(u32)))
+ return omap_sham_update_dma_slow(dd);
+#endif
+
dev_dbg(dd->dev, "fast: digcnt: %d, bufcnt: %u, total: %u\n",
ctx->digcnt, ctx->bufcnt, ctx->total);

@@ -476,8 +571,11 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)

final = (ctx->flags & BIT(FLAGS_FINUP)) && !ctx->total;

- /* next call does not fail... so no unmap in the case of error */
- return omap_sham_xmit_dma(dd, sg_dma_address(ctx->sg), length, final);
+ ret = omap_sham_xmit_dma(dd, sg_dma_address(ctx->sg), length, final, 1);
+ if (ret)
+ dma_unmap_sg(dd->dev, ctx->sg, 1, DMA_TO_DEVICE);
+
+ return ret;
}

static int omap_sham_update_cpu(struct omap_sham_dev *dd)
@@ -496,7 +594,12 @@ static int omap_sham_update_dma_stop(struct omap_sham_dev *dd)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);

+#ifdef OMAP_SHAM_DMA_PRIVATE
omap_stop_dma(dd->dma_lch);
+#else
+ dmaengine_terminate_all(dd->dma_lch);
+#endif
+
if (ctx->flags & BIT(FLAGS_SG)) {
dma_unmap_sg(dd->dev, ctx->sg, 1, DMA_TO_DEVICE);
if (ctx->sg->length == ctx->offset) {
@@ -583,7 +686,7 @@ static int omap_sham_final_req(struct omap_sham_dev *dd)
struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
int err = 0, use_dma = 1;

- if (ctx->bufcnt <= 64)
+ if (ctx->bufcnt <= DMA_MIN)
/* faster to handle last block with cpu */
use_dma = 0;

@@ -699,6 +802,7 @@ static int omap_sham_handle_queue(struct omap_sham_dev *dd,
if (err)
goto err1;

+#ifdef OMAP_SHAM_DMA_PRIVATE
omap_set_dma_dest_params(dd->dma_lch, 0,
OMAP_DMA_AMODE_CONSTANT,
dd->phys_base + SHA_REG_DIN(0), 0, 16);
@@ -708,6 +812,7 @@ static int omap_sham_handle_queue(struct omap_sham_dev *dd,

omap_set_dma_src_burst_mode(dd->dma_lch,
OMAP_DMA_DATA_BURST_4);
+#endif

if (ctx->digcnt)
/* request has changed - restore hash */
@@ -1099,6 +1204,7 @@ static irqreturn_t omap_sham_irq(int irq, void *dev_id)
return IRQ_HANDLED;
}

+#ifdef OMAP_SHAM_DMA_PRIVATE
static void omap_sham_dma_callback(int lch, u16 ch_status, void *data)
{
struct omap_sham_dev *dd = data;
@@ -1136,12 +1242,17 @@ static void omap_sham_dma_cleanup(struct omap_sham_dev *dd)
dd->dma_lch = -1;
}
}
+#endif

static int __devinit omap_sham_probe(struct platform_device *pdev)
{
struct omap_sham_dev *dd;
struct device *dev = &pdev->dev;
struct resource *res;
+#ifndef OMAP_SHAM_DMA_PRIVATE
+ dma_cap_mask_t mask;
+ unsigned dma_chan;
+#endif
int err, i, j;

dd = kzalloc(sizeof(struct omap_sham_dev), GFP_KERNEL);
@@ -1176,7 +1287,11 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
err = -ENODEV;
goto res_err;
}
+#ifdef OMAP_SHAM_DMA_PRIVATE
dd->dma = res->start;
+#else
+ dma_chan = res->start;
+#endif

/* Get the IRQ */
dd->irq = platform_get_irq(pdev, 0);
@@ -1193,9 +1308,22 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
goto res_err;
}

+#ifdef OMAP_SHAM_DMA_PRIVATE
err = omap_sham_dma_init(dd);
if (err)
goto dma_err;
+#else
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
+
+ dd->dma_lch = dma_request_channel(mask, omap_dma_filter_fn, &dma_chan);
+ if (!dd->dma_lch) {
+ dev_err(dev, "unable to obtain RX DMA engine channel %u\n",
+ dma_chan);
+ err = -ENXIO;
+ goto dma_err;
+ }
+#endif

dd->io_base = ioremap(dd->phys_base, SZ_4K);
if (!dd->io_base) {
@@ -1231,7 +1359,11 @@ err_algs:
iounmap(dd->io_base);
pm_runtime_disable(dev);
io_err:
+#ifdef OMAP_SHAM_DMA_PRIVATE
omap_sham_dma_cleanup(dd);
+#else
+ dma_release_channel(dd->dma_lch);
+#endif
dma_err:
if (dd->irq >= 0)
free_irq(dd->irq, dd);
@@ -1260,7 +1392,11 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
tasklet_kill(&dd->done_task);
iounmap(dd->io_base);
pm_runtime_disable(&pdev->dev);
+#ifdef OMAP_SHAM_DMA_PRIVATE
omap_sham_dma_cleanup(dd);
+#else
+ dma_release_channel(dd->dma_lch);
+#endif
if (dd->irq >= 0)
free_irq(dd->irq, dd);
kfree(dd);
--
1.7.12

2012-12-21 17:04:21

by Mark Greer

Subject: [PATCH 7/9] crypto: omap-sham - Convert to dma_request_slave_channel_compat()

From: "Mark A. Greer" <[email protected]>

Use the dma_request_slave_channel_compat() call instead of
the dma_request_channel() call to request a DMA channel.
This allows the omap-sham driver to use different DMA engines.
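
A rough sketch of the resulting request pattern is below (illustrative
names, not driver code): the compat helper tries a DT-described slave
channel first and falls back to the legacy filter-function lookup on
non-DT platforms. The "rx" channel name matches what the patch passes.

#include <linux/dmaengine.h>
#include <linux/omap-dma.h>

/* Sketch: get the "rx" slave channel from DT when available, otherwise
 * fall back to the legacy omap_dma_filter_fn() lookup. */
static struct dma_chan *sham_get_rx_chan(struct device *dev,
                                         unsigned int *dma_req)
{
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);

        return dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
                                                 dma_req, dev, "rx");
}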

CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 860cad8..8074bd9 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -1292,7 +1292,8 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);

- dd->dma_lch = dma_request_channel(mask, omap_dma_filter_fn, &dd->dma);
+ dd->dma_lch = dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
+ &dd->dma, dev, "rx");
if (!dd->dma_lch) {
dev_err(dev, "unable to obtain RX DMA engine channel %u\n",
dd->dma);
--
1.7.12

2012-12-21 17:04:22

by Mark Greer

Subject: [PATCH 5/9] crypto: omap-sham - Remove usage of private DMA API

From: "Mark A. Greer" <[email protected]>

Remove usage of the private OMAP DMA API.
The dmaengine API will be used instead.

CC: Russell King <[email protected]>
CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 109 ---------------------------------------------
1 file changed, 109 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index f54ceb8..f6b270e 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -13,8 +13,6 @@
* Some ideas are from old omap-sha1-md5.c driver.
*/

-#define OMAP_SHAM_DMA_PRIVATE
-
#define pr_fmt(fmt) "%s: " fmt, __func__

#include <linux/err.h>
@@ -113,9 +111,7 @@ struct omap_sham_reqctx {

/* walk state */
struct scatterlist *sg;
-#ifndef OMAP_SHAM_DMA_PRIVATE
struct scatterlist sgl;
-#endif
unsigned int offset; /* offset in current sg */
unsigned int total; /* total request */

@@ -149,12 +145,7 @@ struct omap_sham_dev {
int irq;
spinlock_t lock;
int err;
-#ifdef OMAP_SHAM_DMA_PRIVATE
- int dma;
- int dma_lch;
-#else
struct dma_chan *dma_lch;
-#endif
struct tasklet_struct done_task;

unsigned long flags;
@@ -324,7 +315,6 @@ static int omap_sham_xmit_cpu(struct omap_sham_dev *dd, const u8 *buf,
return -EINPROGRESS;
}

-#ifndef OMAP_SHAM_DMA_PRIVATE
static void omap_sham_dma_callback(void *param)
{
struct omap_sham_dev *dd = param;
@@ -332,34 +322,18 @@ static void omap_sham_dma_callback(void *param)
set_bit(FLAGS_DMA_READY, &dd->flags);
tasklet_schedule(&dd->done_task);
}
-#endif

static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,
size_t length, int final, int is_sg)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
-#ifdef OMAP_SHAM_DMA_PRIVATE
- int len32;
-#else
struct dma_async_tx_descriptor *tx;
struct dma_slave_config cfg;
int len32, ret;
-#endif

dev_dbg(dd->dev, "xmit_dma: digcnt: %d, length: %d, final: %d\n",
ctx->digcnt, length, final);

-#ifdef OMAP_SHAM_DMA_PRIVATE
- len32 = DIV_ROUND_UP(length, sizeof(u32));
-
- omap_set_dma_transfer_params(dd->dma_lch, OMAP_DMA_DATA_TYPE_S32, len32,
- 1, OMAP_DMA_SYNC_PACKET, dd->dma,
- OMAP_DMA_DST_SYNC_PREFETCH);
-
- omap_set_dma_src_params(dd->dma_lch, 0, OMAP_DMA_AMODE_POST_INC,
- dma_addr, 0, 0);
-
-#else
memset(&cfg, 0, sizeof(cfg));

cfg.dst_addr = dd->phys_base + SHA_REG_DIN(0);
@@ -401,7 +375,6 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,

tx->callback = omap_sham_dma_callback;
tx->callback_param = dd;
-#endif

omap_sham_write_ctrl(dd, length, final, 1);

@@ -412,12 +385,8 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,

set_bit(FLAGS_DMA_ACTIVE, &dd->flags);

-#ifdef OMAP_SHAM_DMA_PRIVATE
- omap_start_dma(dd->dma_lch);
-#else
dmaengine_submit(tx);
dma_async_issue_pending(dd->dma_lch);
-#endif

return -EINPROGRESS;
}
@@ -523,7 +492,6 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
if (ctx->bufcnt || ctx->offset)
return omap_sham_update_dma_slow(dd);

-#ifndef OMAP_SHAM_DMA_PRIVATE
/*
* Don't use the sg interface when the transfer size is less
* than the number of elements in a DMA frame. Otherwise,
@@ -532,7 +500,6 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
*/
if (ctx->total < (DST_MAXBURST * sizeof(u32)))
return omap_sham_update_dma_slow(dd);
-#endif

dev_dbg(dd->dev, "fast: digcnt: %d, bufcnt: %u, total: %u\n",
ctx->digcnt, ctx->bufcnt, ctx->total);
@@ -594,11 +561,7 @@ static int omap_sham_update_dma_stop(struct omap_sham_dev *dd)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);

-#ifdef OMAP_SHAM_DMA_PRIVATE
- omap_stop_dma(dd->dma_lch);
-#else
dmaengine_terminate_all(dd->dma_lch);
-#endif

if (ctx->flags & BIT(FLAGS_SG)) {
dma_unmap_sg(dd->dev, ctx->sg, 1, DMA_TO_DEVICE);
@@ -802,18 +765,6 @@ static int omap_sham_handle_queue(struct omap_sham_dev *dd,
if (err)
goto err1;

-#ifdef OMAP_SHAM_DMA_PRIVATE
- omap_set_dma_dest_params(dd->dma_lch, 0,
- OMAP_DMA_AMODE_CONSTANT,
- dd->phys_base + SHA_REG_DIN(0), 0, 16);
-
- omap_set_dma_dest_burst_mode(dd->dma_lch,
- OMAP_DMA_DATA_BURST_16);
-
- omap_set_dma_src_burst_mode(dd->dma_lch,
- OMAP_DMA_DATA_BURST_4);
-#endif
-
if (ctx->digcnt)
/* request has changed - restore hash */
omap_sham_copy_hash(req, 0);
@@ -1204,55 +1155,13 @@ static irqreturn_t omap_sham_irq(int irq, void *dev_id)
return IRQ_HANDLED;
}

-#ifdef OMAP_SHAM_DMA_PRIVATE
-static void omap_sham_dma_callback(int lch, u16 ch_status, void *data)
-{
- struct omap_sham_dev *dd = data;
-
- if (ch_status != OMAP_DMA_BLOCK_IRQ) {
- pr_err("omap-sham DMA error status: 0x%hx\n", ch_status);
- dd->err = -EIO;
- clear_bit(FLAGS_INIT, &dd->flags);/* request to re-initialize */
- }
-
- set_bit(FLAGS_DMA_READY, &dd->flags);
- tasklet_schedule(&dd->done_task);
-}
-
-static int omap_sham_dma_init(struct omap_sham_dev *dd)
-{
- int err;
-
- dd->dma_lch = -1;
-
- err = omap_request_dma(dd->dma, dev_name(dd->dev),
- omap_sham_dma_callback, dd, &dd->dma_lch);
- if (err) {
- dev_err(dd->dev, "Unable to request DMA channel\n");
- return err;
- }
-
- return 0;
-}
-
-static void omap_sham_dma_cleanup(struct omap_sham_dev *dd)
-{
- if (dd->dma_lch >= 0) {
- omap_free_dma(dd->dma_lch);
- dd->dma_lch = -1;
- }
-}
-#endif
-
static int __devinit omap_sham_probe(struct platform_device *pdev)
{
struct omap_sham_dev *dd;
struct device *dev = &pdev->dev;
struct resource *res;
-#ifndef OMAP_SHAM_DMA_PRIVATE
dma_cap_mask_t mask;
unsigned dma_chan;
-#endif
int err, i, j;

dd = kzalloc(sizeof(struct omap_sham_dev), GFP_KERNEL);
@@ -1287,11 +1196,7 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
err = -ENODEV;
goto res_err;
}
-#ifdef OMAP_SHAM_DMA_PRIVATE
- dd->dma = res->start;
-#else
dma_chan = res->start;
-#endif

/* Get the IRQ */
dd->irq = platform_get_irq(pdev, 0);
@@ -1308,11 +1213,6 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
goto res_err;
}

-#ifdef OMAP_SHAM_DMA_PRIVATE
- err = omap_sham_dma_init(dd);
- if (err)
- goto dma_err;
-#else
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);

@@ -1323,7 +1223,6 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
err = -ENXIO;
goto dma_err;
}
-#endif

dd->io_base = ioremap(dd->phys_base, SZ_4K);
if (!dd->io_base) {
@@ -1359,11 +1258,7 @@ err_algs:
iounmap(dd->io_base);
pm_runtime_disable(dev);
io_err:
-#ifdef OMAP_SHAM_DMA_PRIVATE
- omap_sham_dma_cleanup(dd);
-#else
dma_release_channel(dd->dma_lch);
-#endif
dma_err:
if (dd->irq >= 0)
free_irq(dd->irq, dd);
@@ -1392,11 +1287,7 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
tasklet_kill(&dd->done_task);
iounmap(dd->io_base);
pm_runtime_disable(&pdev->dev);
-#ifdef OMAP_SHAM_DMA_PRIVATE
- omap_sham_dma_cleanup(dd);
-#else
dma_release_channel(dd->dma_lch);
-#endif
if (dd->irq >= 0)
free_irq(dd->irq, dd);
kfree(dd);
--
1.7.12

2012-12-21 17:04:25

by Mark Greer

Subject: [PATCH 9/9] crypto: omap-sham - Add SHA224 and SHA256 Support

From: "Mark A. Greer" <[email protected]>

The OMAP4/AM33xx version of the SHAM crypto module
supports SHA224 and SHA256 in addition to the MD5 and
SHA1 that the OMAP2 version of the module supports.

To add this support, use the platform_data introduced
in an earlier commit to hold the list of algorithms
supported by the current module. The probe routine
will use that list to register the correct algorithms.
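
Once the new algorithms are registered, any kernel user can reach them
through the normal ahash API. The snippet below is a self-contained
illustration of hashing a buffer with "sha256"; the names
sha256_one_shot(), hash_wait and hash_done() are made up for the
example, and the crypto core then picks the omap-sham implementation
on OMAP4/AM33xx by priority.

#include <crypto/hash.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct hash_wait {
        struct completion done;
        int err;
};

static void hash_done(struct crypto_async_request *req, int err)
{
        struct hash_wait *w = req->data;

        if (err == -EINPROGRESS)        /* moved off the backlog; keep waiting */
                return;
        w->err = err;
        complete(&w->done);
}

/* Hash a buffer with whatever "sha256" the crypto API selects. */
static int sha256_one_shot(const u8 *data, unsigned int len, u8 *digest)
{
        struct crypto_ahash *tfm;
        struct ahash_request *req;
        struct scatterlist sg;
        struct hash_wait w = { .err = 0 };
        int ret;

        init_completion(&w.done);

        tfm = crypto_alloc_ahash("sha256", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        req = ahash_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                ret = -ENOMEM;
                goto out_free_tfm;
        }

        sg_init_one(&sg, data, len);
        ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                   hash_done, &w);
        ahash_request_set_crypt(req, &sg, digest, len);

        ret = crypto_ahash_digest(req);
        if (ret == -EINPROGRESS || ret == -EBUSY) {     /* async completion */
                wait_for_completion(&w.done);
                ret = w.err;
        }

        ahash_request_free(req);
out_free_tfm:
        crypto_free_ahash(tfm);
        return ret;
}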

Note: The code being integrated is from the TI AM33xx SDK
and was written by Greg Turner <[email protected]> and
Herman Schuurman (current email unknown) while at TI.

CC: Greg Turner <[email protected]>
CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 183 ++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 173 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index fab0af4..edff981 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -83,6 +83,8 @@
#define SHA_REG_MODE_ALGO_MASK (3 << 1)
#define SHA_REG_MODE_ALGO_MD5_128 (0 << 1)
#define SHA_REG_MODE_ALGO_SHA1_160 (1 << 1)
+#define SHA_REG_MODE_ALGO_SHA2_224 (2 << 1)
+#define SHA_REG_MODE_ALGO_SHA2_256 (3 << 1)

#define SHA_REG_LENGTH 0x48

@@ -121,6 +123,10 @@
<< (FLAGS_MODE_SHIFT - 1))
#define FLAGS_MODE_SHA1 (SHA_REG_MODE_ALGO_SHA1_160 \
<< (FLAGS_MODE_SHIFT - 1))
+#define FLAGS_MODE_SHA224 (SHA_REG_MODE_ALGO_SHA2_224 \
+ << (FLAGS_MODE_SHIFT - 1))
+#define FLAGS_MODE_SHA256 (SHA_REG_MODE_ALGO_SHA2_256 \
+ << (FLAGS_MODE_SHIFT - 1))
#define FLAGS_HMAC 20
#define FLAGS_ERROR 21

@@ -173,7 +179,15 @@ struct omap_sham_ctx {

#define OMAP_SHAM_QUEUE_LENGTH 1

+struct omap_sham_algs_info {
+ struct ahash_alg *algs_list;
+ unsigned int size;
+ unsigned int registered;
+};
+
struct omap_sham_pdata {
+ struct omap_sham_algs_info *algs_info;
+ unsigned int algs_info_size;
unsigned long flags;
int digest_size;

@@ -322,6 +336,12 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
big_endian = 1;
d = SHA1_DIGEST_SIZE / sizeof(u32);
break;
+ case FLAGS_MODE_SHA224:
+ d = SHA224_DIGEST_SIZE / sizeof(u32);
+ break;
+ case FLAGS_MODE_SHA256:
+ d = SHA256_DIGEST_SIZE / sizeof(u32);
+ break;
default:
d = 0;
}
@@ -780,6 +800,12 @@ static int omap_sham_init(struct ahash_request *req)
case SHA1_DIGEST_SIZE:
ctx->flags |= FLAGS_MODE_SHA1;
break;
+ case SHA224_DIGEST_SIZE:
+ ctx->flags |= FLAGS_MODE_SHA224;
+ break;
+ case SHA256_DIGEST_SIZE:
+ ctx->flags |= FLAGS_MODE_SHA256;
+ break;
}

ctx->bufcnt = 0;
@@ -1173,6 +1199,16 @@ static int omap_sham_cra_sha1_init(struct crypto_tfm *tfm)
return omap_sham_cra_init_alg(tfm, "sha1");
}

+static int omap_sham_cra_sha224_init(struct crypto_tfm *tfm)
+{
+ return omap_sham_cra_init_alg(tfm, "sha224");
+}
+
+static int omap_sham_cra_sha256_init(struct crypto_tfm *tfm)
+{
+ return omap_sham_cra_init_alg(tfm, "sha256");
+}
+
static int omap_sham_cra_md5_init(struct crypto_tfm *tfm)
{
return omap_sham_cra_init_alg(tfm, "md5");
@@ -1191,7 +1227,7 @@ static void omap_sham_cra_exit(struct crypto_tfm *tfm)
}
}

-static struct ahash_alg algs[] = {
+static struct ahash_alg algs_sha1_md5[] = {
{
.init = omap_sham_init,
.update = omap_sham_update,
@@ -1290,6 +1326,102 @@ static struct ahash_alg algs[] = {
}
};

+/* OMAP4 has some algs in addition to what OMAP2 has */
+static struct ahash_alg algs_sha224_sha256[] = {
+{
+ .init = omap_sham_init,
+ .update = omap_sham_update,
+ .final = omap_sham_final,
+ .finup = omap_sham_finup,
+ .digest = omap_sham_digest,
+ .halg.digestsize = SHA224_DIGEST_SIZE,
+ .halg.base = {
+ .cra_name = "sha224",
+ .cra_driver_name = "omap-sha224",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH |
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = SHA224_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct omap_sham_ctx),
+ .cra_alignmask = 0,
+ .cra_module = THIS_MODULE,
+ .cra_init = omap_sham_cra_init,
+ .cra_exit = omap_sham_cra_exit,
+ }
+},
+{
+ .init = omap_sham_init,
+ .update = omap_sham_update,
+ .final = omap_sham_final,
+ .finup = omap_sham_finup,
+ .digest = omap_sham_digest,
+ .halg.digestsize = SHA256_DIGEST_SIZE,
+ .halg.base = {
+ .cra_name = "sha256",
+ .cra_driver_name = "omap-sha256",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH |
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = SHA256_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct omap_sham_ctx),
+ .cra_alignmask = 0,
+ .cra_module = THIS_MODULE,
+ .cra_init = omap_sham_cra_init,
+ .cra_exit = omap_sham_cra_exit,
+ }
+},
+{
+ .init = omap_sham_init,
+ .update = omap_sham_update,
+ .final = omap_sham_final,
+ .finup = omap_sham_finup,
+ .digest = omap_sham_digest,
+ .setkey = omap_sham_setkey,
+ .halg.digestsize = SHA224_DIGEST_SIZE,
+ .halg.base = {
+ .cra_name = "hmac(sha224)",
+ .cra_driver_name = "omap-hmac-sha224",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH |
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = SHA224_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct omap_sham_ctx) +
+ sizeof(struct omap_sham_hmac_ctx),
+ .cra_alignmask = OMAP_ALIGN_MASK,
+ .cra_module = THIS_MODULE,
+ .cra_init = omap_sham_cra_sha224_init,
+ .cra_exit = omap_sham_cra_exit,
+ }
+},
+{
+ .init = omap_sham_init,
+ .update = omap_sham_update,
+ .final = omap_sham_final,
+ .finup = omap_sham_finup,
+ .digest = omap_sham_digest,
+ .setkey = omap_sham_setkey,
+ .halg.digestsize = SHA256_DIGEST_SIZE,
+ .halg.base = {
+ .cra_name = "hmac(sha256)",
+ .cra_driver_name = "omap-hmac-sha256",
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH |
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = SHA256_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct omap_sham_ctx) +
+ sizeof(struct omap_sham_hmac_ctx),
+ .cra_alignmask = OMAP_ALIGN_MASK,
+ .cra_module = THIS_MODULE,
+ .cra_init = omap_sham_cra_sha256_init,
+ .cra_exit = omap_sham_cra_exit,
+ }
+},
+};
+
static void omap_sham_done_task(unsigned long data)
{
struct omap_sham_dev *dd = (struct omap_sham_dev *)data;
@@ -1364,7 +1496,16 @@ static irqreturn_t omap_sham_irq_omap4(int irq, void *dev_id)
return omap_sham_irq_common(dd);
}

+static struct omap_sham_algs_info omap_sham_algs_info_omap2[] = {
+ {
+ .algs_list = algs_sha1_md5,
+ .size = ARRAY_SIZE(algs_sha1_md5),
+ },
+};
+
static const struct omap_sham_pdata omap_sham_pdata_omap2 = {
+ .algs_info = omap_sham_algs_info_omap2,
+ .algs_info_size = ARRAY_SIZE(omap_sham_algs_info_omap2),
.flags = BIT(FLAGS_BE32_SHA1),
.digest_size = SHA1_DIGEST_SIZE,
.copy_hash = omap_sham_copy_hash_omap2,
@@ -1385,7 +1526,20 @@ static const struct omap_sham_pdata omap_sham_pdata_omap2 = {
};

#ifdef CONFIG_OF
+static struct omap_sham_algs_info omap_sham_algs_info_omap4[] = {
+ {
+ .algs_list = algs_sha1_md5,
+ .size = ARRAY_SIZE(algs_sha1_md5),
+ },
+ {
+ .algs_list = algs_sha224_sha256,
+ .size = ARRAY_SIZE(algs_sha224_sha256),
+ },
+};
+
static const struct omap_sham_pdata omap_sham_pdata_omap4 = {
+ .algs_info = omap_sham_algs_info_omap4,
+ .algs_info_size = ARRAY_SIZE(omap_sham_algs_info_omap4),
.flags = BIT(FLAGS_AUTO_XOR),
.digest_size = SHA256_DIGEST_SIZE,
.copy_hash = omap_sham_copy_hash_omap4,
@@ -1570,17 +1724,24 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
list_add_tail(&dd->list, &sham.dev_list);
spin_unlock(&sham.lock);

- for (i = 0; i < ARRAY_SIZE(algs); i++) {
- err = crypto_register_ahash(&algs[i]);
- if (err)
- goto err_algs;
+ for (i = 0; i < dd->pdata->algs_info_size; i++) {
+ for (j = 0; j < dd->pdata->algs_info[i].size; j++) {
+ err = crypto_register_ahash(
+ &dd->pdata->algs_info[i].algs_list[j]);
+ if (err)
+ goto err_algs;
+
+ dd->pdata->algs_info[i].registered++;
+ }
}

return 0;

err_algs:
- for (j = 0; j < i; j++)
- crypto_unregister_ahash(&algs[j]);
+ for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
+ crypto_unregister_ahash(
+ &dd->pdata->algs_info[i].algs_list[j]);
pm_runtime_disable(dev);
dma_release_channel(dd->dma_lch);
dma_err:
@@ -1597,7 +1758,7 @@ data_err:
static int __devexit omap_sham_remove(struct platform_device *pdev)
{
static struct omap_sham_dev *dd;
- int i;
+ int i, j;

dd = platform_get_drvdata(pdev);
if (!dd)
@@ -1605,8 +1766,10 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
spin_lock(&sham.lock);
list_del(&dd->list);
spin_unlock(&sham.lock);
- for (i = 0; i < ARRAY_SIZE(algs); i++)
- crypto_unregister_ahash(&algs[i]);
+ for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
+ crypto_unregister_ahash(
+ &dd->pdata->algs_info[i].algs_list[j]);
tasklet_kill(&dd->done_task);
pm_runtime_disable(&pdev->dev);
dma_release_channel(dd->dma_lch);
--
1.7.12

2012-12-21 17:04:25

by Mark Greer

Subject: [PATCH 8/9] crypto: omap-sham - Add OMAP4/AM33XX SHAM Support

From: "Mark A. Greer" <[email protected]>

Add support for the OMAP4 version of the SHAM module
that is present on OMAP4 and AM33xx SoCs.

The two versions of the module have several differences,
including register offsets, hardware XORing, and how DMA is
triggered. To handle these differences, a platform_data
structure is defined that contains routine pointers, register
offsets, bit shifts within registers, and flags indicating
whether the hardware supports XORing and whether it provides
SHA1 results in big- or little-endian order.
OMAP2/OMAP3-specific routines are
suffixed with '_omap2' and OMAP4/AM33xx routines are suffixed
with '_omap4'.
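
In other words, the driver grows a small per-variant dispatch table
that probe selects once (from the OF match data, or a default for
legacy OMAP2/3 platforms) and everything else dereferences. A
condensed, simplified illustration of the idea follows; the names are
not the actual struct omap_sham_pdata from the diff, and the offsets
quoted in the comments come from the omap2/omap4 tables in the patch.

#include <linux/io.h>
#include <linux/interrupt.h>

struct sham_dd;         /* stand-in for struct omap_sham_dev */

/* Per-variant dispatch table: register offsets plus routine pointers. */
struct sham_variant {
        u32 din_ofs;    /* data-in registers: 0x1c on OMAP2, 0x080 on OMAP4 */
        u32 rev_ofs;    /* revision register: 0x5c on OMAP2, 0x100 on OMAP4 */
        void (*write_ctrl)(struct sham_dd *dd, size_t length,
                           int final, int dma);
        irqreturn_t (*intr_hdlr)(int irq, void *dev_id);
};

struct sham_dd {
        void __iomem *io_base;
        const struct sham_variant *v;   /* chosen once at probe time */
};

/*
 * The common code then goes through the table instead of hard-coding
 * anything variant-specific, e.g.:
 *
 *      writel(val, dd->io_base + dd->v->din_ofs);
 *      dd->v->write_ctrl(dd, length, final, use_dma);
 */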

Note: The code being integrated is from the TI AM33xx SDK
and was written by Greg Turner <[email protected]> and
Herman Schuurman (current email unknown) while at TI.

CC: Greg Turner <[email protected]>
CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 416 ++++++++++++++++++++++++++++++++++++---------
1 file changed, 332 insertions(+), 84 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 8074bd9..fab0af4 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -5,6 +5,7 @@
*
* Copyright (c) 2010 Nokia Corporation
* Author: Dmitry Kasatkin <[email protected]>
+ * Copyright (c) 2011 Texas Instruments Incorporated
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as published
@@ -43,16 +44,17 @@
#include <crypto/hash.h>
#include <crypto/internal/hash.h>

-#define SHA_REG_DIGEST(x) (0x00 + ((x) * 0x04))
-#define SHA_REG_DIN(x) (0x1C + ((x) * 0x04))
-
#define SHA1_MD5_BLOCK_SIZE SHA1_BLOCK_SIZE
#define MD5_DIGEST_SIZE 16

#define DST_MAXBURST 16
#define DMA_MIN (DST_MAXBURST * sizeof(u32))

-#define SHA_REG_DIGCNT 0x14
+#define SHA_REG_IDIGEST(dd, x) ((dd)->pdata->idigest_ofs + ((x)*0x04))
+#define SHA_REG_DIN(dd, x) ((dd)->pdata->din_ofs + ((x) * 0x04))
+#define SHA_REG_DIGCNT(dd) ((dd)->pdata->digcnt_ofs)
+
+#define SHA_REG_ODIGEST(x) (0x00 + ((x) * 0x04))

#define SHA_REG_CTRL 0x18
#define SHA_REG_CTRL_LENGTH (0xFFFFFFFF << 5)
@@ -62,19 +64,40 @@
#define SHA_REG_CTRL_INPUT_READY (1 << 1)
#define SHA_REG_CTRL_OUTPUT_READY (1 << 0)

-#define SHA_REG_REV 0x5C
-#define SHA_REG_REV_MAJOR 0xF0
-#define SHA_REG_REV_MINOR 0x0F
+#define SHA_REG_REV(dd) ((dd)->pdata->rev_ofs)

-#define SHA_REG_MASK 0x60
+#define SHA_REG_MASK(dd) ((dd)->pdata->mask_ofs)
#define SHA_REG_MASK_DMA_EN (1 << 3)
#define SHA_REG_MASK_IT_EN (1 << 2)
#define SHA_REG_MASK_SOFTRESET (1 << 1)
#define SHA_REG_AUTOIDLE (1 << 0)

-#define SHA_REG_SYSSTATUS 0x64
+#define SHA_REG_SYSSTATUS(dd) ((dd)->pdata->sysstatus_ofs)
#define SHA_REG_SYSSTATUS_RESETDONE (1 << 0)

+#define SHA_REG_MODE 0x44
+#define SHA_REG_MODE_HMAC_OUTER_HASH (1 << 7)
+#define SHA_REG_MODE_HMAC_KEY_PROC (1 << 5)
+#define SHA_REG_MODE_CLOSE_HASH (1 << 4)
+#define SHA_REG_MODE_ALGO_CONSTANT (1 << 3)
+#define SHA_REG_MODE_ALGO_MASK (3 << 1)
+#define SHA_REG_MODE_ALGO_MD5_128 (0 << 1)
+#define SHA_REG_MODE_ALGO_SHA1_160 (1 << 1)
+
+#define SHA_REG_LENGTH 0x48
+
+#define SHA_REG_IRQSTATUS 0x118
+#define SHA_REG_IRQSTATUS_CTX_RDY (1 << 3)
+#define SHA_REG_IRQSTATUS_PARTHASH_RDY (1 << 2)
+#define SHA_REG_IRQSTATUS_INPUT_RDY (1 << 1)
+#define SHA_REG_IRQSTATUS_OUTPUT_RDY (1 << 0)
+
+#define SHA_REG_IRQENA 0x11C
+#define SHA_REG_IRQENA_CTX_RDY (1 << 3)
+#define SHA_REG_IRQENA_PARTHASH_RDY (1 << 2)
+#define SHA_REG_IRQENA_INPUT_RDY (1 << 1)
+#define SHA_REG_IRQENA_OUTPUT_RDY (1 << 0)
+
#define DEFAULT_TIMEOUT_INTERVAL HZ

/* mostly device flags */
@@ -85,20 +108,29 @@
#define FLAGS_INIT 4
#define FLAGS_CPU 5
#define FLAGS_DMA_READY 6
+#define FLAGS_AUTO_XOR 7
+#define FLAGS_BE32_SHA1 8
/* context flags */
#define FLAGS_FINUP 16
#define FLAGS_SG 17
-#define FLAGS_SHA1 18
-#define FLAGS_HMAC 19
-#define FLAGS_ERROR 20

-#define OP_UPDATE 1
-#define OP_FINAL 2
+#define FLAGS_MODE_SHIFT 18
+#define FLAGS_MODE_MASK (SHA_REG_MODE_ALGO_MASK \
+ << (FLAGS_MODE_SHIFT - 1))
+#define FLAGS_MODE_MD5 (SHA_REG_MODE_ALGO_MD5_128 \
+ << (FLAGS_MODE_SHIFT - 1))
+#define FLAGS_MODE_SHA1 (SHA_REG_MODE_ALGO_SHA1_160 \
+ << (FLAGS_MODE_SHIFT - 1))
+#define FLAGS_HMAC 20
+#define FLAGS_ERROR 21
+
+#define OP_UPDATE 1
+#define OP_FINAL 2

#define OMAP_ALIGN_MASK (sizeof(u32)-1)
#define OMAP_ALIGNED __attribute__((aligned(sizeof(u32))))

-#define BUFLEN PAGE_SIZE
+#define BUFLEN PAGE_SIZE

struct omap_sham_dev;

@@ -107,7 +139,7 @@ struct omap_sham_reqctx {
unsigned long flags;
unsigned long op;

- u8 digest[SHA1_DIGEST_SIZE] OMAP_ALIGNED;
+ u8 digest[SHA256_DIGEST_SIZE] OMAP_ALIGNED;
size_t digcnt;
size_t bufcnt;
size_t buflen;
@@ -124,8 +156,8 @@ struct omap_sham_reqctx {

struct omap_sham_hmac_ctx {
struct crypto_shash *shash;
- u8 ipad[SHA1_MD5_BLOCK_SIZE];
- u8 opad[SHA1_MD5_BLOCK_SIZE];
+ u8 ipad[SHA1_MD5_BLOCK_SIZE] OMAP_ALIGNED;
+ u8 opad[SHA1_MD5_BLOCK_SIZE] OMAP_ALIGNED;
};

struct omap_sham_ctx {
@@ -141,6 +173,31 @@ struct omap_sham_ctx {

#define OMAP_SHAM_QUEUE_LENGTH 1

+struct omap_sham_pdata {
+ unsigned long flags;
+ int digest_size;
+
+ void (*copy_hash)(struct ahash_request *req, int out);
+ void (*write_ctrl)(struct omap_sham_dev *dd, size_t length,
+ int final, int dma);
+ void (*trigger)(struct omap_sham_dev *dd, size_t length);
+ int (*poll_irq)(struct omap_sham_dev *dd);
+ irqreturn_t (*intr_hdlr)(int irq, void *dev_id);
+
+ u32 odigest_ofs;
+ u32 idigest_ofs;
+ u32 din_ofs;
+ u32 digcnt_ofs;
+ u32 rev_ofs;
+ u32 mask_ofs;
+ u32 sysstatus_ofs;
+
+ u32 major_mask;
+ u32 major_shift;
+ u32 minor_mask;
+ u32 minor_shift;
+};
+
struct omap_sham_dev {
struct list_head list;
unsigned long phys_base;
@@ -156,6 +213,8 @@ struct omap_sham_dev {
unsigned long flags;
struct crypto_queue queue;
struct ahash_request *req;
+
+ const struct omap_sham_pdata *pdata;
};

struct omap_sham_drv {
@@ -203,21 +262,44 @@ static inline int omap_sham_wait(struct omap_sham_dev *dd, u32 offset, u32 bit)
return 0;
}

-static void omap_sham_copy_hash(struct ahash_request *req, int out)
+static void omap_sham_copy_hash_omap2(struct ahash_request *req, int out)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+ struct omap_sham_dev *dd = ctx->dd;
u32 *hash = (u32 *)ctx->digest;
int i;

- /* MD5 is almost unused. So copy sha1 size to reduce code */
- for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++) {
+ for (i = 0; i < dd->pdata->digest_size / sizeof(u32); i++) {
if (out)
- hash[i] = omap_sham_read(ctx->dd,
- SHA_REG_DIGEST(i));
+ hash[i] = omap_sham_read(dd, SHA_REG_IDIGEST(dd, i));
else
- omap_sham_write(ctx->dd,
- SHA_REG_DIGEST(i), hash[i]);
+ omap_sham_write(dd, SHA_REG_IDIGEST(dd, i), hash[i]);
+ }
+}
+
+static void omap_sham_copy_hash_omap4(struct ahash_request *req, int out)
+{
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+ struct omap_sham_dev *dd = ctx->dd;
+ int i;
+
+ if (ctx->flags & BIT(FLAGS_HMAC)) {
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(dd->req);
+ struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct omap_sham_hmac_ctx *bctx = tctx->base;
+ u32 *opad = (u32 *)bctx->opad;
+
+ for (i = 0; i < dd->pdata->digest_size / sizeof(u32); i++) {
+ if (out)
+ opad[i] = omap_sham_read(dd,
+ SHA_REG_ODIGEST(i));
+ else
+ omap_sham_write(dd, SHA_REG_ODIGEST(i),
+ opad[i]);
+ }
}
+
+ omap_sham_copy_hash_omap2(req, out);
}

static void omap_sham_copy_ready_hash(struct ahash_request *req)
@@ -225,20 +307,31 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
u32 *in = (u32 *)ctx->digest;
u32 *hash = (u32 *)req->result;
- int i;
+ int i, d, big_endian = 0;

if (!hash)
return;

- if (likely(ctx->flags & BIT(FLAGS_SHA1))) {
- /* SHA1 results are in big endian */
- for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++)
+ switch (ctx->flags & FLAGS_MODE_MASK) {
+ case FLAGS_MODE_MD5:
+ d = MD5_DIGEST_SIZE / sizeof(u32);
+ break;
+ case FLAGS_MODE_SHA1:
+ /* OMAP2 SHA1 is big endian */
+ if (test_bit(FLAGS_BE32_SHA1, &ctx->dd->flags))
+ big_endian = 1;
+ d = SHA1_DIGEST_SIZE / sizeof(u32);
+ break;
+ default:
+ d = 0;
+ }
+
+ if (big_endian)
+ for (i = 0; i < d; i++)
hash[i] = be32_to_cpu(in[i]);
- } else {
- /* MD5 results are in little endian */
- for (i = 0; i < MD5_DIGEST_SIZE / sizeof(u32); i++)
+ else
+ for (i = 0; i < d; i++)
hash[i] = le32_to_cpu(in[i]);
- }
}

static int omap_sham_hw_init(struct omap_sham_dev *dd)
@@ -246,13 +339,6 @@ static int omap_sham_hw_init(struct omap_sham_dev *dd)
pm_runtime_get_sync(dd->dev);

if (!test_bit(FLAGS_INIT, &dd->flags)) {
- omap_sham_write_mask(dd, SHA_REG_MASK,
- SHA_REG_MASK_SOFTRESET, SHA_REG_MASK_SOFTRESET);
-
- if (omap_sham_wait(dd, SHA_REG_SYSSTATUS,
- SHA_REG_SYSSTATUS_RESETDONE))
- return -ETIMEDOUT;
-
set_bit(FLAGS_INIT, &dd->flags);
dd->err = 0;
}
@@ -260,23 +346,23 @@ static int omap_sham_hw_init(struct omap_sham_dev *dd)
return 0;
}

-static void omap_sham_write_ctrl(struct omap_sham_dev *dd, size_t length,
+static void omap_sham_write_ctrl_omap2(struct omap_sham_dev *dd, size_t length,
int final, int dma)
{
struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
u32 val = length << 5, mask;

if (likely(ctx->digcnt))
- omap_sham_write(dd, SHA_REG_DIGCNT, ctx->digcnt);
+ omap_sham_write(dd, SHA_REG_DIGCNT(dd), ctx->digcnt);

- omap_sham_write_mask(dd, SHA_REG_MASK,
+ omap_sham_write_mask(dd, SHA_REG_MASK(dd),
SHA_REG_MASK_IT_EN | (dma ? SHA_REG_MASK_DMA_EN : 0),
SHA_REG_MASK_IT_EN | SHA_REG_MASK_DMA_EN);
/*
* Setting ALGO_CONST only for the first iteration
* and CLOSE_HASH only for the last one.
*/
- if (ctx->flags & BIT(FLAGS_SHA1))
+ if ((ctx->flags & FLAGS_MODE_MASK) == FLAGS_MODE_SHA1)
val |= SHA_REG_CTRL_ALGO;
if (!ctx->digcnt)
val |= SHA_REG_CTRL_ALGO_CONST;
@@ -289,6 +375,81 @@ static void omap_sham_write_ctrl(struct omap_sham_dev *dd, size_t length,
omap_sham_write_mask(dd, SHA_REG_CTRL, val, mask);
}

+static void omap_sham_trigger_omap2(struct omap_sham_dev *dd, size_t length)
+{
+}
+
+static int omap_sham_poll_irq_omap2(struct omap_sham_dev *dd)
+{
+ return omap_sham_wait(dd, SHA_REG_CTRL, SHA_REG_CTRL_INPUT_READY);
+}
+
+static void omap_sham_write_n(struct omap_sham_dev *dd, u32 offset,
+ u32 *value, int count)
+{
+ for (; count--; value++, offset += 4)
+ omap_sham_write(dd, offset, *value);
+}
+
+static void omap_sham_write_ctrl_omap4(struct omap_sham_dev *dd, size_t length,
+ int final, int dma)
+{
+ struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
+ u32 val, mask;
+
+ /*
+ * Setting ALGO_CONST only for the first iteration and
+ * CLOSE_HASH only for the last one. Note that flags mode bits
+ * correspond to algorithm encoding in mode register.
+ */
+ val = (ctx->flags & FLAGS_MODE_MASK) >> (FLAGS_MODE_SHIFT - 1);
+ if (!ctx->digcnt) {
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(dd->req);
+ struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct omap_sham_hmac_ctx *bctx = tctx->base;
+
+ val |= SHA_REG_MODE_ALGO_CONSTANT;
+
+ if (ctx->flags & BIT(FLAGS_HMAC)) {
+ val |= SHA_REG_MODE_HMAC_KEY_PROC;
+ omap_sham_write_n(dd, SHA_REG_ODIGEST(0),
+ (u32 *)bctx->ipad,
+ SHA1_BLOCK_SIZE / sizeof(u32));
+ ctx->digcnt += SHA1_BLOCK_SIZE;
+ }
+ }
+
+ if (final) {
+ val |= SHA_REG_MODE_CLOSE_HASH;
+
+ if (ctx->flags & BIT(FLAGS_HMAC))
+ val |= SHA_REG_MODE_HMAC_OUTER_HASH;
+ }
+
+ mask = SHA_REG_MODE_ALGO_CONSTANT | SHA_REG_MODE_CLOSE_HASH |
+ SHA_REG_MODE_ALGO_MASK | SHA_REG_MODE_HMAC_OUTER_HASH |
+ SHA_REG_MODE_HMAC_KEY_PROC;
+
+ dev_dbg(dd->dev, "ctrl: %08x, flags: %08lx\n", val, ctx->flags);
+ omap_sham_write_mask(dd, SHA_REG_MODE, val, mask);
+ omap_sham_write(dd, SHA_REG_IRQENA, SHA_REG_IRQENA_OUTPUT_RDY);
+ omap_sham_write_mask(dd, SHA_REG_MASK(dd),
+ SHA_REG_MASK_IT_EN |
+ (dma ? SHA_REG_MASK_DMA_EN : 0),
+ SHA_REG_MASK_IT_EN | SHA_REG_MASK_DMA_EN);
+}
+
+static void omap_sham_trigger_omap4(struct omap_sham_dev *dd, size_t length)
+{
+ omap_sham_write(dd, SHA_REG_LENGTH, length);
+}
+
+static int omap_sham_poll_irq_omap4(struct omap_sham_dev *dd)
+{
+ return omap_sham_wait(dd, SHA_REG_IRQSTATUS,
+ SHA_REG_IRQSTATUS_INPUT_RDY);
+}
+
static int omap_sham_xmit_cpu(struct omap_sham_dev *dd, const u8 *buf,
size_t length, int final)
{
@@ -299,12 +460,13 @@ static int omap_sham_xmit_cpu(struct omap_sham_dev *dd, const u8 *buf,
dev_dbg(dd->dev, "xmit_cpu: digcnt: %d, length: %d, final: %d\n",
ctx->digcnt, length, final);

- omap_sham_write_ctrl(dd, length, final, 0);
+ dd->pdata->write_ctrl(dd, length, final, 0);
+ dd->pdata->trigger(dd, length);

/* should be non-zero before next lines to disable clocks later */
ctx->digcnt += length;

- if (omap_sham_wait(dd, SHA_REG_CTRL, SHA_REG_CTRL_INPUT_READY))
+ if (dd->pdata->poll_irq(dd))
return -ETIMEDOUT;

if (final)
@@ -315,7 +477,7 @@ static int omap_sham_xmit_cpu(struct omap_sham_dev *dd, const u8 *buf,
len32 = DIV_ROUND_UP(length, sizeof(u32));

for (count = 0; count < len32; count++)
- omap_sham_write(dd, SHA_REG_DIN(count), buffer[count]);
+ omap_sham_write(dd, SHA_REG_DIN(dd, count), buffer[count]);

return -EINPROGRESS;
}
@@ -341,7 +503,7 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,

memset(&cfg, 0, sizeof(cfg));

- cfg.dst_addr = dd->phys_base + SHA_REG_DIN(0);
+ cfg.dst_addr = dd->phys_base + SHA_REG_DIN(dd, 0);
cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
cfg.dst_maxburst = DST_MAXBURST;

@@ -381,7 +543,7 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,
tx->callback = omap_sham_dma_callback;
tx->callback_param = dd;

- omap_sham_write_ctrl(dd, length, final, 1);
+ dd->pdata->write_ctrl(dd, length, final, 1);

ctx->digcnt += length;

@@ -393,6 +555,8 @@ static int omap_sham_xmit_dma(struct omap_sham_dev *dd, dma_addr_t dma_addr,
dmaengine_submit(tx);
dma_async_issue_pending(dd->dma_lch);

+ dd->pdata->trigger(dd, length);
+
return -EINPROGRESS;
}

@@ -450,7 +614,7 @@ static int omap_sham_xmit_dma_map(struct omap_sham_dev *dd,
ctx->flags &= ~BIT(FLAGS_SG);

ret = omap_sham_xmit_dma(dd, ctx->dma_addr, length, final, 0);
- if (ret)
+ if (ret != -EINPROGRESS)
dma_unmap_single(dd->dev, ctx->dma_addr, ctx->buflen,
DMA_TO_DEVICE);

@@ -544,7 +708,7 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
final = (ctx->flags & BIT(FLAGS_FINUP)) && !ctx->total;

ret = omap_sham_xmit_dma(dd, sg_dma_address(ctx->sg), length, final, 1);
- if (ret)
+ if (ret != -EINPROGRESS)
dma_unmap_sg(dd->dev, ctx->sg, 1, DMA_TO_DEVICE);

return ret;
@@ -609,18 +773,27 @@ static int omap_sham_init(struct ahash_request *req)
dev_dbg(dd->dev, "init: digest size: %d\n",
crypto_ahash_digestsize(tfm));

- if (crypto_ahash_digestsize(tfm) == SHA1_DIGEST_SIZE)
- ctx->flags |= BIT(FLAGS_SHA1);
+ switch (crypto_ahash_digestsize(tfm)) {
+ case MD5_DIGEST_SIZE:
+ ctx->flags |= FLAGS_MODE_MD5;
+ break;
+ case SHA1_DIGEST_SIZE:
+ ctx->flags |= FLAGS_MODE_SHA1;
+ break;
+ }

ctx->bufcnt = 0;
ctx->digcnt = 0;
ctx->buflen = BUFLEN;

if (tctx->flags & BIT(FLAGS_HMAC)) {
- struct omap_sham_hmac_ctx *bctx = tctx->base;
+ if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
+ struct omap_sham_hmac_ctx *bctx = tctx->base;
+
+ memcpy(ctx->buffer, bctx->ipad, SHA1_MD5_BLOCK_SIZE);
+ ctx->bufcnt = SHA1_MD5_BLOCK_SIZE;
+ }

- memcpy(ctx->buffer, bctx->ipad, SHA1_MD5_BLOCK_SIZE);
- ctx->bufcnt = SHA1_MD5_BLOCK_SIZE;
ctx->flags |= BIT(FLAGS_HMAC);
}

@@ -697,7 +870,8 @@ static int omap_sham_finish(struct ahash_request *req)

if (ctx->digcnt) {
omap_sham_copy_ready_hash(req);
- if (ctx->flags & BIT(FLAGS_HMAC))
+ if ((ctx->flags & BIT(FLAGS_HMAC)) &&
+ !test_bit(FLAGS_AUTO_XOR, &dd->flags))
err = omap_sham_finish_hmac(req);
}

@@ -712,7 +886,7 @@ static void omap_sham_finish_req(struct ahash_request *req, int err)
struct omap_sham_dev *dd = ctx->dd;

if (!err) {
- omap_sham_copy_hash(req, 1);
+ dd->pdata->copy_hash(req, 1);
if (test_bit(FLAGS_FINAL, &dd->flags))
err = omap_sham_finish(req);
} else {
@@ -772,7 +946,7 @@ static int omap_sham_handle_queue(struct omap_sham_dev *dd,

if (ctx->digcnt)
/* request has changed - restore hash */
- omap_sham_copy_hash(req, 0);
+ dd->pdata->copy_hash(req, 0);

if (ctx->op == OP_UPDATE) {
err = omap_sham_update_req(dd);
@@ -911,7 +1085,21 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
struct omap_sham_hmac_ctx *bctx = tctx->base;
int bs = crypto_shash_blocksize(bctx->shash);
int ds = crypto_shash_digestsize(bctx->shash);
+ struct omap_sham_dev *dd = NULL, *tmp;
int err, i;
+
+ spin_lock_bh(&sham.lock);
+ if (!tctx->dd) {
+ list_for_each_entry(tmp, &sham.dev_list, list) {
+ dd = tmp;
+ break;
+ }
+ tctx->dd = dd;
+ } else {
+ dd = tctx->dd;
+ }
+ spin_unlock_bh(&sham.lock);
+
err = crypto_shash_setkey(tctx->fallback, key, keylen);
if (err)
return err;
@@ -928,11 +1116,14 @@ static int omap_sham_setkey(struct crypto_ahash *tfm, const u8 *key,
}

memset(bctx->ipad + keylen, 0, bs - keylen);
- memcpy(bctx->opad, bctx->ipad, bs);

- for (i = 0; i < bs; i++) {
- bctx->ipad[i] ^= 0x36;
- bctx->opad[i] ^= 0x5c;
+ if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
+ memcpy(bctx->opad, bctx->ipad, bs);
+
+ for (i = 0; i < bs; i++) {
+ bctx->ipad[i] ^= 0x36;
+ bctx->opad[i] ^= 0x5c;
+ }
}

return err;
@@ -1137,7 +1328,19 @@ finish:
omap_sham_finish_req(dd->req, err);
}

-static irqreturn_t omap_sham_irq(int irq, void *dev_id)
+static irqreturn_t omap_sham_irq_common(struct omap_sham_dev *dd)
+{
+ if (!test_bit(FLAGS_BUSY, &dd->flags)) {
+ dev_warn(dd->dev, "Interrupt when no active requests.\n");
+ } else {
+ set_bit(FLAGS_OUTPUT_READY, &dd->flags);
+ tasklet_schedule(&dd->done_task);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t omap_sham_irq_omap2(int irq, void *dev_id)
{
struct omap_sham_dev *dd = dev_id;

@@ -1149,21 +1352,67 @@ static irqreturn_t omap_sham_irq(int irq, void *dev_id)
SHA_REG_CTRL_OUTPUT_READY);
omap_sham_read(dd, SHA_REG_CTRL);

- if (!test_bit(FLAGS_BUSY, &dd->flags)) {
- dev_warn(dd->dev, "Interrupt when no active requests.\n");
- return IRQ_HANDLED;
- }
+ return omap_sham_irq_common(dd);
+}

- set_bit(FLAGS_OUTPUT_READY, &dd->flags);
- tasklet_schedule(&dd->done_task);
+static irqreturn_t omap_sham_irq_omap4(int irq, void *dev_id)
+{
+ struct omap_sham_dev *dd = dev_id;

- return IRQ_HANDLED;
+ omap_sham_write_mask(dd, SHA_REG_MASK(dd), 0, SHA_REG_MASK_IT_EN);
+
+ return omap_sham_irq_common(dd);
}

+static const struct omap_sham_pdata omap_sham_pdata_omap2 = {
+ .flags = BIT(FLAGS_BE32_SHA1),
+ .digest_size = SHA1_DIGEST_SIZE,
+ .copy_hash = omap_sham_copy_hash_omap2,
+ .write_ctrl = omap_sham_write_ctrl_omap2,
+ .trigger = omap_sham_trigger_omap2,
+ .poll_irq = omap_sham_poll_irq_omap2,
+ .intr_hdlr = omap_sham_irq_omap2,
+ .idigest_ofs = 0x00,
+ .din_ofs = 0x1c,
+ .digcnt_ofs = 0x14,
+ .rev_ofs = 0x5c,
+ .mask_ofs = 0x60,
+ .sysstatus_ofs = 0x64,
+ .major_mask = 0xf0,
+ .major_shift = 4,
+ .minor_mask = 0x0f,
+ .minor_shift = 0,
+};
+
#ifdef CONFIG_OF
+static const struct omap_sham_pdata omap_sham_pdata_omap4 = {
+ .flags = BIT(FLAGS_AUTO_XOR),
+ .digest_size = SHA256_DIGEST_SIZE,
+ .copy_hash = omap_sham_copy_hash_omap4,
+ .write_ctrl = omap_sham_write_ctrl_omap4,
+ .trigger = omap_sham_trigger_omap4,
+ .poll_irq = omap_sham_poll_irq_omap4,
+ .intr_hdlr = omap_sham_irq_omap4,
+ .idigest_ofs = 0x020,
+ .din_ofs = 0x080,
+ .digcnt_ofs = 0x040,
+ .rev_ofs = 0x100,
+ .mask_ofs = 0x110,
+ .sysstatus_ofs = 0x114,
+ .major_mask = 0x0700,
+ .major_shift = 8,
+ .minor_mask = 0x003f,
+ .minor_shift = 0,
+};
+
static const struct of_device_id omap_sham_of_match[] = {
{
.compatible = "ti,omap2-sham",
+ .data = &omap_sham_pdata_omap2,
+ },
+ {
+ .compatible = "ti,omap4-sham",
+ .data = &omap_sham_pdata_omap4,
},
{},
};
@@ -1198,6 +1447,7 @@ static int omap_sham_get_res_of(struct omap_sham_dev *dd,
}

dd->dma = -1; /* Dummy value that's unused */
+ dd->pdata = match->data;

err:
return err;
@@ -1243,6 +1493,9 @@ static int omap_sham_get_res_pdev(struct omap_sham_dev *dd,
}
dd->dma = r->start;

+ /* Only OMAP2/3 can be non-DT */
+ dd->pdata = &omap_sham_pdata_omap2;
+
err:
return err;
}
@@ -1254,6 +1507,7 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
struct resource res;
dma_cap_mask_t mask;
int err, i, j;
+ u32 rev;

dd = kzalloc(sizeof(struct omap_sham_dev), GFP_KERNEL);
if (dd == NULL) {
@@ -1282,8 +1536,8 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
}
dd->phys_base = res.start;

- err = request_irq(dd->irq, omap_sham_irq,
- IRQF_TRIGGER_LOW, dev_name(dev), dd);
+ err = request_irq(dd->irq, dd->pdata->intr_hdlr, IRQF_TRIGGER_LOW,
+ dev_name(dev), dd);
if (err) {
dev_err(dev, "unable to request irq.\n");
goto res_err;
@@ -1301,21 +1555,16 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
goto dma_err;
}

- dd->io_base = ioremap(dd->phys_base, SZ_4K);
- if (!dd->io_base) {
- dev_err(dev, "can't ioremap\n");
- err = -ENOMEM;
- goto io_err;
- }
+ dd->flags |= dd->pdata->flags;

pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
+ rev = omap_sham_read(dd, SHA_REG_REV(dd));
+ pm_runtime_put_sync(&pdev->dev);

dev_info(dev, "hw accel on OMAP rev %u.%u\n",
- (omap_sham_read(dd, SHA_REG_REV) & SHA_REG_REV_MAJOR) >> 4,
- omap_sham_read(dd, SHA_REG_REV) & SHA_REG_REV_MINOR);
-
- pm_runtime_put_sync(&pdev->dev);
+ (rev & dd->pdata->major_mask) >> dd->pdata->major_shift,
+ (rev & dd->pdata->minor_mask) >> dd->pdata->minor_shift);

spin_lock(&sham.lock);
list_add_tail(&dd->list, &sham.dev_list);
@@ -1333,7 +1582,6 @@ err_algs:
for (j = 0; j < i; j++)
crypto_unregister_ahash(&algs[j]);
pm_runtime_disable(dev);
-io_err:
dma_release_channel(dd->dma_lch);
dma_err:
free_irq(dd->irq, dd);
--
1.7.12

2012-12-21 17:04:06

by Mark Greer

Subject: [PATCH 6/9] crypto: omap-sham - Add Device Tree Support

From: "Mark A. Greer" <[email protected]>

Add Device Tree support to the omap-sham crypto
driver. Currently, only support for OMAP2 and
OMAP3 is being added, but support for OMAP4 will
be added in a subsequent patch.
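
The DT path boils down to matching the node and then pulling the MMIO
range and interrupt out of it instead of out of platform resources. A
condensed sketch of that lookup is below; the function name
sham_get_res_of() is illustrative, while "ti,omap2-sham" is the
compatible string the patch registers.

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>

static const struct of_device_id sham_of_match[] = {
        { .compatible = "ti,omap2-sham" },
        { /* sentinel */ },
};

/* Sketch: fill *res and *irq from the device's DT node. */
static int sham_get_res_of(struct device *dev, struct resource *res, int *irq)
{
        struct device_node *node = dev->of_node;

        if (!of_match_device(sham_of_match, dev))
                return -EINVAL;         /* no compatible match */

        if (of_address_to_resource(node, 0, res))
                return -EINVAL;         /* "reg" could not be translated */

        *irq = of_irq_to_resource(node, 0, NULL);
        if (!*irq)
                return -EINVAL;         /* "interrupts" could not be translated */

        return 0;
}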

CC: Dmitry Kasatkin <[email protected]>
Signed-off-by: Mark A. Greer <[email protected]>
---
drivers/crypto/omap-sham.c | 139 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 106 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index f6b270e..860cad8 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -30,6 +30,10 @@
#include <linux/dmaengine.h>
#include <linux/omap-dma.h>
#include <linux/pm_runtime.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
#include <linux/delay.h>
#include <linux/crypto.h>
#include <linux/cryptohash.h>
@@ -145,6 +149,7 @@ struct omap_sham_dev {
int irq;
spinlock_t lock;
int err;
+ unsigned int dma;
struct dma_chan *dma_lch;
struct tasklet_struct done_task;

@@ -1155,13 +1160,99 @@ static irqreturn_t omap_sham_irq(int irq, void *dev_id)
return IRQ_HANDLED;
}

+#ifdef CONFIG_OF
+static const struct of_device_id omap_sham_of_match[] = {
+ {
+ .compatible = "ti,omap2-sham",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, omap_sham_of_match);
+
+static int omap_sham_get_res_of(struct omap_sham_dev *dd,
+ struct device *dev, struct resource *res)
+{
+ struct device_node *node = dev->of_node;
+ const struct of_device_id *match;
+ int err = 0;
+
+ match = of_match_device(of_match_ptr(omap_sham_of_match), dev);
+ if (!match) {
+ dev_err(dev, "no compatible OF match\n");
+ err = -EINVAL;
+ goto err;
+ }
+
+ err = of_address_to_resource(node, 0, res);
+ if (err < 0) {
+ dev_err(dev, "can't translate OF node address\n");
+ err = -EINVAL;
+ goto err;
+ }
+
+ dd->irq = of_irq_to_resource(node, 0, NULL);
+ if (!dd->irq) {
+ dev_err(dev, "can't translate OF irq value\n");
+ err = -EINVAL;
+ goto err;
+ }
+
+ dd->dma = -1; /* Dummy value that's unused */
+
+err:
+ return err;
+}
+#else
+static int omap_sham_get_res_dev(struct omap_sham_dev *dd,
+ struct device *dev, struct resource *res)
+{
+ return -EINVAL;
+}
+#endif
+
+static int omap_sham_get_res_pdev(struct omap_sham_dev *dd,
+ struct platform_device *pdev, struct resource *res)
+{
+ struct device *dev = &pdev->dev;
+ struct resource *r;
+ int err = 0;
+
+ /* Get the base address */
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!r) {
+ dev_err(dev, "no MEM resource info\n");
+ err = -ENODEV;
+ goto err;
+ }
+ memcpy(res, r, sizeof(*res));
+
+ /* Get the IRQ */
+ dd->irq = platform_get_irq(pdev, 0);
+ if (dd->irq < 0) {
+ dev_err(dev, "no IRQ resource info\n");
+ err = dd->irq;
+ goto err;
+ }
+
+ /* Get the DMA */
+ r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+ if (!r) {
+ dev_err(dev, "no DMA resource info\n");
+ err = -ENODEV;
+ goto err;
+ }
+ dd->dma = r->start;
+
+err:
+ return err;
+}
+
static int __devinit omap_sham_probe(struct platform_device *pdev)
{
struct omap_sham_dev *dd;
struct device *dev = &pdev->dev;
- struct resource *res;
+ struct resource res;
dma_cap_mask_t mask;
- unsigned dma_chan;
int err, i, j;

dd = kzalloc(sizeof(struct omap_sham_dev), GFP_KERNEL);
@@ -1178,33 +1269,18 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
tasklet_init(&dd->done_task, omap_sham_done_task, (unsigned long)dd);
crypto_init_queue(&dd->queue, OMAP_SHAM_QUEUE_LENGTH);

- dd->irq = -1;
-
- /* Get the base address */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res) {
- dev_err(dev, "no MEM resource info\n");
- err = -ENODEV;
- goto res_err;
- }
- dd->phys_base = res->start;
-
- /* Get the DMA */
- res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!res) {
- dev_err(dev, "no DMA resource info\n");
- err = -ENODEV;
+ err = (dev->of_node) ? omap_sham_get_res_of(dd, dev, &res) :
+ omap_sham_get_res_pdev(dd, pdev, &res);
+ if (err)
goto res_err;
- }
- dma_chan = res->start;

- /* Get the IRQ */
- dd->irq = platform_get_irq(pdev, 0);
- if (dd->irq < 0) {
- dev_err(dev, "no IRQ resource info\n");
- err = dd->irq;
+ dd->io_base = devm_request_and_ioremap(dev, &res);
+ if (!dd->io_base) {
+ dev_err(dev, "can't ioremap\n");
+ err = -ENOMEM;
goto res_err;
}
+ dd->phys_base = res.start;

err = request_irq(dd->irq, omap_sham_irq,
IRQF_TRIGGER_LOW, dev_name(dev), dd);
@@ -1216,10 +1292,10 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);

- dd->dma_lch = dma_request_channel(mask, omap_dma_filter_fn, &dma_chan);
+ dd->dma_lch = dma_request_channel(mask, omap_dma_filter_fn, &dd->dma);
if (!dd->dma_lch) {
dev_err(dev, "unable to obtain RX DMA engine channel %u\n",
- dma_chan);
+ dd->dma);
err = -ENXIO;
goto dma_err;
}
@@ -1255,13 +1331,11 @@ static int __devinit omap_sham_probe(struct platform_device *pdev)
err_algs:
for (j = 0; j < i; j++)
crypto_unregister_ahash(&algs[j]);
- iounmap(dd->io_base);
pm_runtime_disable(dev);
io_err:
dma_release_channel(dd->dma_lch);
dma_err:
- if (dd->irq >= 0)
- free_irq(dd->irq, dd);
+ free_irq(dd->irq, dd);
res_err:
kfree(dd);
dd = NULL;
@@ -1285,11 +1359,9 @@ static int __devexit omap_sham_remove(struct platform_device *pdev)
for (i = 0; i < ARRAY_SIZE(algs); i++)
crypto_unregister_ahash(&algs[i]);
tasklet_kill(&dd->done_task);
- iounmap(dd->io_base);
pm_runtime_disable(&pdev->dev);
dma_release_channel(dd->dma_lch);
- if (dd->irq >= 0)
- free_irq(dd->irq, dd);
+ free_irq(dd->irq, dd);
kfree(dd);
dd = NULL;

@@ -1321,6 +1393,7 @@ static struct platform_driver omap_sham_driver = {
.name = "omap-sham",
.owner = THIS_MODULE,
.pm = &omap_sham_pm_ops,
+ .of_match_table = omap_sham_of_match,
},
};

--
1.7.12


2013-01-05 10:44:22

by Herbert Xu

Subject: Re: [PATCH 0/9] crypto: omap-sham - Updates & New Functionality

On Fri, Dec 21, 2012 at 10:04:00AM -0700, Mark A. Greer wrote:
> From: "Mark A. Greer" <[email protected]>
>
> [This supersedes the omap-sham driver patches sent in the
> "crypto: omap-sham updates" series sent a few weeks ago.]
>
> This patch series does several things to the omap-sham crypto
> driver including:
> - converting to use pm_runtime
> - adding suspend/resume support
> - converting to use dmaengine API
> - adding device tree support
> - adding OMAP4/AM33XX support
> - adding SHA224 and SHA256 support
> - some minor cleanups
>
> The patches are based on the current k.o. kernel, plus:
> - the ARM hwmod, etc patches from
> "[PATCH 00/15] OMAP SHAM & AES Crypto Updates"
> (http://marc.info/?l=linux-omap&m=135610732120447&w=2)
> - the EDMA dmaengine patches submitted by Matt Porter
> "[RFC PATCH v3 00/16] DMA Engine support for AM33XX]"
> (https://lkml.org/lkml/2012/10/18/256)
> - some misc patches required by the EDMA patches
> - a hack to fix the compilation error that the current k.o. kernel has

All applied. Thanks a lot!
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt