2007-11-12 18:26:34

by Loc Ho

[permalink] [raw]
Subject: New Crypto Hardware

Hi,

I am about to start developing a new device driver for new crypto hardware.
I am thinking of starting with the Linux CryptoAPI interface, but I have the
following requirements:

1. Asynchronous encrypt/decrypt
2. Asynchronous hashing
3. Asynchronous combined modes (GCM, CCM, and GMAC)
4. Asynchronous/synchronous public key acceleration (large vector operations)
5. Support for additional algorithms - ARC4, Kasumi, AES-XCBC, etc.
6. Other minor offloads such as window checks, etc. In fact, it can do
   packet-level processing as well.

What is the current state of asynchronous hashing? Will AEAD be changed to
make use of asynchronous hashing? Is anyone working on #1 and changing AEAD
to an asynchronous interface? In addition, most of the existing device drivers
only use the synchronous interface. Is anyone changing them to
the asynchronous crypto interface - such as NETKEY (Linux IPsec)?

Thanks,
Loc


2007-11-13 01:41:44

by Herbert Xu

[permalink] [raw]
Subject: Re: New Crypto Hardware

Loc Ho <[email protected]> wrote:
>
> What is the current state of asynchronous hashing? Will AEAD be changed to

It's on my todo list but it's not the highest priority.

> make use of asynchronous hashing? Is anyone working on #1 and changing AEAD

You mean authenc? AEAD is just an interface which doesn't use
hashing directly. If you mean the authenc algorithm which combines
encryption and hashing then yes it will be changed. That would
be a fairly straightforward change too.
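
For reference, a combined algorithm like that is requested through the AEAD
interface roughly as follows (just a sketch; the authenc() template name is
the usual convention, and error handling is omitted):

	struct crypto_aead *tfm;

	/* the hashing is driven by the authenc template, not by AEAD itself */
	tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);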

> to an asynchronous interface? In addition, most of the existing device drivers

I'm not aware of any ongoing work so any contributions in this
area would definitely be welcome.

> only use the synchronous interface. Is anyone changing them to
> the asynchronous crypto interface - such as NETKEY (Linux IPsec)?

That I am working on right now :) See the recent 24-patch set
I posted to netdev. What's missing now is just the change to
get ESP to actually use AEAD.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-12-25 07:32:06

by Herbert Xu

[permalink] [raw]
Subject: Re: New Crypto Hardware

On Fri, Dec 21, 2007 at 09:41:11AM -0800, Loc Ho wrote:
> Hi Herbert,
>
> At the beginning of next year, I will start adding asynchronous hashing
> support. I would like to check with you before I start. Besides modeling it
> after the crypto async block cipher, do you have any suggestions?

Thanks for looking into this.

What I'd like to see is the hash context moved into
the request. This is necessary for us to support simultaneous
hash operations on the same tfm.
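
In other words, something roughly like this (a sketch only, modeled on
ablkcipher_request; the exact fields are up to you):

	struct ahash_request {
		struct crypto_async_request base;

		unsigned int nbytes;
		struct scatterlist *src;
		u8 *result;

		/* per-request hash state lives here, not in the tfm */
		void *__ctx[] CRYPTO_MINALIGN_ATTR;
	};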

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-01-17 11:23:16

by Herbert Xu

[permalink] [raw]
Subject: Re: New Crypto Hardware

On Wed, Jan 16, 2008 at 10:19:04AM -0800, Loc Ho wrote:
> Hi,
>
> For hashing, there is the HMAC key (if HMAC) and the digest size. As the
> digest size is part of the algorithm, there is only the HMAC key. Are you
> referring to moving the HMAC key from the transformation into the request
> structure? As I modeled it after the ablkcipher_request structure, there
> will be a context pointer 'void *__ctx[] CRYPTO_MINALIGN_ATTR;' for the
> request operation.

No I meant the context that has to be stored between calls to
crypto_hash_update and crypto_hash_final.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-01-17 18:44:04

by Loc Ho

[permalink] [raw]
Subject: RE: New Crypto Hardware

Hi,

The current hash_alg struct is:

struct hash_alg {
	int (*init)(struct hash_desc *desc);
	int (*update)(struct hash_desc *desc, struct scatterlist *sg,
		      unsigned int nbytes);
	int (*final)(struct hash_desc *desc, u8 *out);
	int (*digest)(struct hash_desc *desc, struct scatterlist *sg,
		      unsigned int nbytes, u8 *out);
	int (*setkey)(struct crypto_hash *tfm, const u8 *key,
		      unsigned int keylen);

	unsigned int digestsize;
};

It looks like the update and final functions are used for continued hashing,
and the digest function is used for single-shot hashing. I was thinking of
changing to this for async:

struct ahash_alg {
	int (*init)(struct ahash_request *req);
	int (*digest)(struct ahash_request *req);
	int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
		      unsigned int keylen);

	unsigned int digestsize;
};

But it seems like there is still a need to support continued (multi-call)
hashing as well as just a single hash.
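
So perhaps something closer to this, keeping update/final for the multi-call
case (just a sketch):

struct ahash_alg {
	int (*init)(struct ahash_request *req);
	int (*update)(struct ahash_request *req);
	int (*final)(struct ahash_request *req);
	int (*digest)(struct ahash_request *req);
	int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
		      unsigned int keylen);

	unsigned int digestsize;
};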

Questions:
1. Is my assumption about the update/final and digest functions correct?
2. What is the difference between digest and hash type besides one operating
on the transformation structure (tfm) and the other on the descriptor (desc)?
3. Currently, cryptodev-2.6 git doesn't work on a PPC4xx development board
but the latest Denx Linux works. Has anyone tested on a PPC4xx board?

The context between the update and final functions will be moved into the
request, just like ablkcipher.
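
That is, a driver would keep its per-operation state in the request, along
these lines (sketch only; the context layout here is hypothetical):

	struct my_hash_reqctx {			/* hypothetical driver state */
		u8 partial[64];
		unsigned int count;
	};

	static int my_ahash_update(struct ahash_request *req)
	{
		struct my_hash_reqctx *rctx = ahash_request_ctx(req);

		/* continue from rctx, not from anything in the tfm, so
		 * concurrent requests on the same tfm don't clash */
		rctx->count += req->nbytes;
		return 0;
	}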

-Loc

-----Original Message-----
From: Herbert Xu [mailto:[email protected]]
Sent: Thursday, January 17, 2008 3:23 AM
To: Loc Ho
Cc: [email protected]
Subject: Re: New Crypto Hardware

On Wed, Jan 16, 2008 at 10:19:04AM -0800, Loc Ho wrote:
> Hi,
>
> For hashing, there is the HMAC key (if HMAC) and the digest size. As the
> digest size is part of the algorithm, there is only the HMAC key. Are you
> referring to moving the HMAC key from the transformation into the request
> structure? As I modeled it after the ablkcipher_request structure, there
> will be a context pointer 'void *__ctx[] CRYPTO_MINALIGN_ATTR;' for the
> request operation.

No I meant the context that has to be stored between calls to
crypto_hash_update and crypto_hash_final.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]> Home Page:
http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Subject: Re: New Crypto Hardware

* Loc Ho | 2008-01-17 10:37:20 [-0800]:

>struct hash_alg {
> int (*init)(struct hash_desc *desc);
> int (*update)(struct hash_desc *desc, struct scatterlist *sg,
> unsigned int nbytes);
> int (*final)(struct hash_desc *desc, u8 *out);
> int (*digest)(struct hash_desc *desc, struct scatterlist *sg,
> unsigned int nbytes, u8 *out);
> int (*setkey)(struct crypto_hash *tfm, const u8 *key,
> unsigned int keylen);
>
> unsigned int digestsize;
>};
>
>It looks like the update and final functions are used for continued hashing,
>and the digest function is used for single-shot hashing. I was thinking of
>changing to this for async:
>
>struct ahash_alg {
> int (*init)(struct ahash_request *req);
> int (*digest)(struct ahash_request *req);
> int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
> unsigned int keylen);
>
> unsigned int digestsize;
>};
>
>But it seems like there is still a need to support continuation hashing as
>well as just single hash.
>
>Questions:
>1. Is my assumption about the update/final and digest functions correct?
With update you feed the digest/hash new data; ->final writes
the result out. You can have one or more update calls but only one
final call, which is where you get your result. Does your HW create the
digest in one go and not allow updates before producing the final result?
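
For example, with the current synchronous interface the two patterns look
roughly like this (sketch only, error handling omitted):

	struct hash_desc desc = { .tfm = tfm, .flags = 0 };

	/* multi-call: one init, 1+ updates, exactly one final */
	crypto_hash_init(&desc);
	crypto_hash_update(&desc, sg1, len1);
	crypto_hash_update(&desc, sg2, len2);
	crypto_hash_final(&desc, out);

	/* one-shot: everything in a single call */
	crypto_hash_digest(&desc, sg, len, out);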

>2. What is the difference between digest and hash type besides one operating
>on the transformation structure (tfm) and the other on the descriptor (desc)?
They are used for different things.
Digest is something like sha1 or md5. You just feed your algorithm with
data and get a single result. A hash on the other hand has a key which
is used as a salt, a random initialization string.

>3. Currently, cryptodev-2.6 git doesn't work on a PPC4xx development board
>but the latest Denx Linux works. Has anyone tested on a PPC4xx board?

Please define "doesn't work".

Sebastian

Subject: Re: New Crypto Hardware

* Loc Ho | 2008-01-17 13:25:38 [-0800]:

>>Questions:
>>2. What is the difference between digest and hash type besides one
>>operation on transformation structure (tfm) and the other on descriptor
>(desc)?
>They are used for different things.
>Digest is something like sha1 or md5. You just feed your algorithm with
>data and get a single result. A hash on the other hand has a key which
>is used as a salt, a random initialization string.
>
>[LH]
>Are you referring to hash as the HMAC mode of MD5/SHA? For normal MD5, use
>the digest algorithm. For HMAC mode of MD5, use HASH. Is this correct?
Yes. You request them as hmac(sha1) resp. hmac(md5).
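
Roughly (a sketch with the current synchronous API; error handling omitted):

	struct crypto_hash *tfm;

	tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	crypto_hash_setkey(tfm, key, keylen);	/* keyed (HMAC) case only */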

>>3. Currently, cryptodev-2.6 git doesn't work on a PPC4xx development
>>board but the latest Denx Linux works. Has anyone tested on a PPC4xx board?
>
>Please define doesn't work.
>
>[LH]
>When I compile and load Denx Linux for my Sequoia board, the kernel
>comes up and I get kernel messages. When I compile and load
>cryptodev-2.6.git HEAD with the same config from the Denx branch, I get
>nothing. It seems like cryptodev-2.6 doesn't support the Sequoia board.
>What I have to do is patch only the crypto files into Denx Linux, and
>that seems to work for me.
cryptodev-2.6 is almost the v2.6.24-rc8 tree + cryptodev patches. If you can
boot v2.6.24-rc8 then you should have no problems with cryptodev. I
don't know the diff between denx-linux & linus-linux.

>Thanks,
>Loc

Sebastian

2008-01-22 01:29:11

by Loc Ho

[permalink] [raw]
Subject: RE: New Crypto Hardware

Hi,

If that is the case, then in order to fully support async hashing, I would
need an async version of the HASH interface and an async version of digest.
Am I correct? Do you think it would be inconsistent to assume that if the
setkey function is not called, then it is a digest, and if it is called, then
it is a hash?

-Loc

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Sebastian Siewior
Sent: Friday, January 18, 2008 3:07 PM
To: Loc Ho
Cc: Herbert Xu; [email protected]
Subject: Re: New Crypto Hardware

* Loc Ho | 2008-01-17 13:25:38 [-0800]:

>>Questions:
>>2. What is the difference between digest and hash type besides one
>>operation on transformation structure (tfm) and the other on
>>descriptor
>(desc)?
>They are used for different things.
>Digest is something like sha1 or md5. You just feed your algorithm with
>data and get a single result. A hash on the other hand has a key which
>is used as a salt, a random initialization string.
>
>[LH]
>Are you referring to hash as the HMAC mode of MD5/SHA? For normal MD5, use
>the digest algorithm. For HMAC mode of MD5, use HASH. Is this correct?
Yes. You request them as hmac(sha1) resp. hmac(md5).

>>3. Currently, cryptodev-2.6 git doesn't work on a PPC4xx development
>>board but the latest Denx Linux works. Has anyone tested on a PPC4xx board?
>
>Please define doesn't work.
>
>[LH]
>When I compile and load Denx Linux for my Sequoia board, the kernel
>comes up and I get kernel messages. When I compile and load
>cryptodev-2.6.git HEAD with the same config from the Denx branch, I get
>nothing. It seems like cryptodev-2.6 doesn't support the Sequoia board.
>What I have to do is patch only the crypto files into Denx Linux, and
>that seems to work for me.
cryptodev-2.6 is almost the v2.6.24-rc8 tree + cryptodev patches. If you can
boot v2.6.24-rc8 then you should have no problems with cryptodev. I don't
know the diff between denx-linux & linus-linux.

>Thanks,
>Loc

Sebastian

Subject: Re: New Crypto Hardware

* Loc Ho | 2008-01-21 17:29:13 [-0800]:

>If that is the case, then in order to fully support async hashing, I would
>need an async version of HASH interface and an async version of digest. Am I
>correct?
Yes. In case you support hmac+sha1 in HW and you don't do sha1 (as
digest) at all you could skip that part.

>Do you think it would be inconsistent to assume that if the setkey
>function is not called, then it is a digest, and if it is called, then it
>is a hash?
I would prefer to separate them. However, this is one of those things
where Herbert has the last word :)

>-Loc

Sebastian

2008-01-23 02:17:16

by Loc Ho

[permalink] [raw]
Subject: RE: New Crypto Hardware

Hi,

I have a working version of async hashing (for HMAC mode). I have not tested
it with a software wrapper yet (like the async blkcipher implementation), but
with an async hash sample driver wrapped directly over hmac(md5).

Currently, the async block cipher code calls crypto_alloc_ablkcipher for the
various modes. I would like to do the same. If we do need two versions, I
would still like to keep the interface called by tcrypt.c identical, except
that calling 'setkey' would either be a no-op or return an error.
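
From the caller's point of view it would then look roughly like this (a
sketch matching what the tcrypt test code does; error handling omitted):

	struct crypto_ahash *tfm;
	struct ahash_request *req;

	tfm = crypto_alloc_ahash("hmac(md5)", 0, 0);
	req = ahash_request_alloc(tfm, GFP_KERNEL);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   tcrypt_complete, &result);
	crypto_ahash_setkey(tfm, key, keylen);	/* no-op/error for plain digests */
	ahash_request_set_crypt(req, sg, digest, nbytes);
	ret = crypto_ahash_digest(req);		/* may return -EINPROGRESS */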

Until this is confirmed, I will start testing the async software wrapper over
the synchronous interface.

-Loc


-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Sebastian Siewior
Sent: Tuesday, January 22, 2008 3:18 PM
To: Loc Ho
Cc: 'Sebastian Siewior'; Herbert Xu; [email protected]
Subject: Re: New Crypto Hardware

* Loc Ho | 2008-01-21 17:29:13 [-0800]:

>If that is the case, then in order to fully support async hashing, I
>would need an async version of HASH interface and an async version of
>digest. Am I correct?
Yes. In case you support hmac+sha1 in HW and you don't do sha1 (as
digest) at all you could skip that part.

>Do you think it would be inconsistent to assume that if the setkey
>function is not called, then it is a digest, and if it is called,
>then it is a hash?
I would prefer to separate them. However, this is one of those things where
Herbert has the last word :)

>-Loc

Sebastian

2008-01-24 21:07:23

by Loc Ho

[permalink] [raw]
Subject: RFC: Async Hash Support

Hi,

I have a working version of async hash support. It is attached. This follows
the ablkcipher implementation.

It looks like the crypto async dispatch to a kernel thread (cryptd.c) isn't
being used anymore. Am I correct?

Is there a particular format for submitting a patch?

-Loc


Attachments:
cryptodev_ahash.patch (38.35 kB)
Subject: Re: RFC: Async Hash Support

* Loc Ho | 2008-01-24 13:07:24 [-0800]:

>Is there a particular format for submitting a patch?
Take a look on those two:

Documentation/SubmitChecklist
Documentation/SubmittingPatches

>
>-Loc
>

Sebastian

2008-01-25 02:45:24

by Loc Ho

[permalink] [raw]
Subject: [PATCH 1/1] CryptoAPI: Add Async Hash Support

From e5d67c3670f1ec15339a92cc291027c0a059aaed Mon Sep 17 00:00:00 2001
From: Loc Ho <[email protected]>
Date: Thu, 24 Jan 2008 18:13:28 -0800
Subject: [PATCH] Add Async Hash Support

---
crypto/Makefile | 1 +
crypto/ahash.c | 151 +++++++++++++++++
crypto/algapi.c | 2 +-
crypto/api.c | 2 +-
crypto/cryptd.c | 220 +++++++++++++++++++++++++
crypto/digest.c | 4 +-
crypto/hash.c | 102 +++++++++++-
crypto/tcrypt.c | 142 ++++++++++++++++-
drivers/crypto/Kconfig | 8 +-
drivers/crypto/Makefile | 1 +
drivers/crypto/ahash_sample.c | 354 +++++++++++++++++++++++++++++++++++++++++
include/crypto/algapi.h | 36 ++++
include/linux/crypto.h | 183 ++++++++++++++++++++-
13 files changed, 1183 insertions(+), 23 deletions(-)
create mode 100644 crypto/ahash.c
create mode 100644 drivers/crypto/ahash_sample.c

diff --git a/crypto/Makefile b/crypto/Makefile
index 48c7583..a9c3d09 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_CRYPTO_BLKCIPHER) += eseqiv.o
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o

crypto_hash-objs := hash.o
+crypto_hash-objs += ahash.o
obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o

obj-$(CONFIG_CRYPTO_MANAGER) += cryptomgr.o
diff --git a/crypto/ahash.c b/crypto/ahash.c
new file mode 100644
index 0000000..e9bf72f
--- /dev/null
+++ b/crypto/ahash.c
@@ -0,0 +1,151 @@
+/*
+ * Asynchronous Cryptographic Hash operations.
+ *
+ * This is the asynchronous version of hash.c with notification of
+ * completion via a callback.
+ *
+ * Copyright (c) 2008 Loc Ho <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+
+#include "internal.h"
+
+static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct ahash_alg *ahash = crypto_ahash_alg(tfm);
+ unsigned long alignmask = crypto_ahash_alignmask(tfm);
+ int ret;
+ u8 *buffer, *alignbuffer;
+ unsigned long absize;
+
+ absize = keylen + alignmask;
+ buffer = kmalloc(absize, GFP_ATOMIC);
+ if (!buffer)
+ return -ENOMEM;
+
+ alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ memcpy(alignbuffer, key, keylen);
+ ret = ahash->setkey(tfm, alignbuffer, keylen);
+ memset(alignbuffer, 0, keylen);
+ kfree(buffer);
+ return ret;
+}
+
+static int ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct ahash_alg *ahash = crypto_ahash_alg(tfm);
+ unsigned long alignmask = crypto_ahash_alignmask(tfm);
+
+ if ((unsigned long)key & alignmask)
+ return ahash_setkey_unaligned(tfm, key, keylen);
+
+ return ahash->setkey(tfm, key, keylen);
+}
+
+static unsigned int crypto_ahash_ctxsize(struct crypto_alg *alg, u32 type,
+ u32 mask)
+{
+ return alg->cra_ctxsize;
+}
+
+static int crypto_init_ahash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+ struct ahash_alg *alg = &tfm->__crt_alg->cra_ahash;
+ struct ahash_tfm *crt = &tfm->crt_ahash;
+
+ if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
+ return -EINVAL;
+
+ crt->init = alg->init;
+ crt->update = alg->update;
+ crt->final = alg->final;
+ crt->digest = alg->digest;
+ crt->setkey = ahash_setkey;
+ crt->base = __crypto_ahash_cast(tfm);
+ crt->digestsize = alg->digestsize;
+
+ return 0;
+}
+
+static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
+ __attribute__ ((unused));
+static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
+{
+ seq_printf(m, "type : hash\n");
+ seq_printf(m, "async : %s\n", alg->cra_flags &
CRYPTO_ALG_ASYNC ?
+ "yes" : "no");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "digestsize : %u\n", alg->cra_hash.digestsize);
+}
+
+const struct crypto_type crypto_ahash_type = {
+ .ctxsize = crypto_ahash_ctxsize,
+ .init = crypto_init_ahash_ops,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_ahash_show,
+#endif
+};
+EXPORT_SYMBOL_GPL(crypto_ahash_type);
+
+struct crypto_ahash *crypto_alloc_ahash(const char *alg_name,
+ u32 type, u32 mask)
+{
+ struct crypto_tfm *tfm;
+ int err;
+
+ mask &= ~CRYPTO_ALG_TYPE_MASK;
+ mask |= CRYPTO_ALG_TYPE_HASH_MASK;
+
+ for (;;) {
+ struct crypto_alg *alg;
+
+ type &= ~CRYPTO_ALG_TYPE_MASK;
+ type |= CRYPTO_ALG_TYPE_AHASH;
+ alg = crypto_alg_mod_lookup(alg_name, type, mask);
+ if (IS_ERR(alg)) {
+ type &= ~CRYPTO_ALG_TYPE_MASK;
+ type |= CRYPTO_ALG_TYPE_HASH;
+ alg = crypto_alg_mod_lookup(alg_name, type, mask);
+ if (IS_ERR(alg)) {
+ err = PTR_ERR(alg);
+ goto err;
+ }
+ }
+
+ tfm = __crypto_alloc_tfm(alg, type, mask | CRYPTO_ALG_ASYNC);
+ if (!IS_ERR(tfm))
+ return __crypto_ahash_cast(tfm);
+
+ crypto_mod_put(alg);
+ err = PTR_ERR(tfm);
+
+err:
+ if (err != -EAGAIN)
+ break;
+ if (signal_pending(current)) {
+ err = -EINTR;
+ break;
+ }
+ }
+
+ return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_ahash);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
diff --git a/crypto/algapi.c b/crypto/algapi.c
index e65cb50..5fdb974 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -182,7 +182,7 @@ static int __crypto_register_alg(struct crypto_alg *alg,

crypto_remove_spawns(&q->cra_users, list, alg->cra_flags);
}
-
+
list_add(&alg->cra_list, &crypto_alg_list);

crypto_notify(CRYPTO_MSG_ALG_REGISTER, alg);
diff --git a/crypto/api.c b/crypto/api.c
index a2496d1..c3213f4 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -10,7 +10,7 @@
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
+ * Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 074298f..cdf57c8 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -45,6 +45,14 @@ struct cryptd_blkcipher_request_ctx {
crypto_completion_t complete;
};

+struct cryptd_hash_ctx {
+ struct crypto_hash *child;
+};
+
+struct cryptd_hash_request_ctx {
+ crypto_completion_t complete;
+ struct hash_desc desc;
+};

static inline struct cryptd_state *cryptd_get_state(struct crypto_tfm *tfm)
{
@@ -259,6 +267,216 @@ out_put_alg:
return inst;
}

+static int cryptd_hash_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ struct cryptd_instance_ctx *ictx = crypto_instance_ctx(inst);
+ struct crypto_spawn *spawn = &ictx->spawn;
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_hash *cipher;
+
+ cipher = crypto_spawn_hash(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+ tfm->crt_ahash.reqsize =
+ sizeof(struct cryptd_hash_request_ctx);
+ return 0;
+}
+
+static void cryptd_hash_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct cryptd_state *state = cryptd_get_state(tfm);
+ int active;
+
+ mutex_lock(&state->mutex);
+ active = ahash_tfm_in_queue(&state->queue,
+ __crypto_ahash_cast(tfm));
+ mutex_unlock(&state->mutex);
+
+ BUG_ON(active);
+
+ crypto_free_hash(ctx->child);
+}
+
+static int cryptd_hash_setkey(struct crypto_ahash *parent,
+ const u8 *key, unsigned int keylen)
+{
+ struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(parent);
+ struct crypto_hash *child = ctx->child;
+ int err;
+
+ crypto_hash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_hash_set_flags(child, crypto_ahash_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_hash_setkey(child, key, keylen);
+ crypto_ahash_set_flags(parent, crypto_hash_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int cryptd_hash_init(struct ahash_request *req)
+{
+ struct cryptd_hash_ctx *ctx = ahash_request_ctx(req);
+ struct crypto_hash *child = ctx->child;
+ struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+ int err;
+
+ err = crypto_hash_crt(child)->init(&rctx->desc);
+ rctx->desc.flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
+ return err;
+}
+
+static int cryptd_hash_enqueue(struct ahash_request *req,
+ crypto_completion_t complete)
+{
+ struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct cryptd_state *state =
+ cryptd_get_state(crypto_ahash_tfm(tfm));
+ int err;
+
+ rctx->complete = req->base.complete;
+ req->base.complete = complete;
+
+ spin_lock_bh(&state->lock);
+ err = ahash_enqueue_request(&state->queue, req);
+ spin_unlock_bh(&state->lock);
+
+ wake_up_process(state->task);
+ return err;
+}
+
+static void cryptd_hash_update(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS)) {
+ rctx->complete(&req->base, err);
+ return;
+ }
+
+ err = crypto_hash_crt(child)->update(&rctx->desc,
+ req->src,
+ req->nbytes);
+
+ req->base.complete = rctx->complete;
+
+ local_bh_disable();
+ req->base.complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_update_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_update);
+}
+
+static void cryptd_hash_final(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS)) {
+ rctx->complete(&req->base, err);
+ return;
+ }
+
+ err = crypto_hash_crt(child)->final(&rctx->desc, req->result);
+
+ req->base.complete = rctx->complete;
+
+ local_bh_disable();
+ req->base.complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_final_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_final);
+}
+
+static void cryptd_hash_digest(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+ struct hash_desc desc;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS)) {
+ rctx->complete(&req->base, err);
+ return;
+ }
+
+ desc.tfm = child;
+ desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = crypto_hash_crt(child)->digest(&desc,
+ req->src,
+ req->nbytes,
+ req->result);
+
+ req->base.complete = rctx->complete;
+
+ local_bh_disable();
+ req->base.complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_digest_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_digest);
+}
+
+static struct crypto_instance *cryptd_alloc_hash(
+ struct rtattr **tb, struct cryptd_state *state)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+
+ alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_HASH,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ inst = cryptd_alloc_instance(alg, state);
+ if (IS_ERR(inst))
+ goto out_put_alg;
+
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC;
+ inst->alg.cra_type = &crypto_ahash_type;
+
+ inst->alg.cra_ahash.digestsize = alg->cra_hash.digestsize;
+ inst->alg.cra_ctxsize = sizeof(struct cryptd_hash_ctx);
+
+ inst->alg.cra_init = cryptd_hash_init_tfm;
+ inst->alg.cra_exit = cryptd_hash_exit_tfm;
+
+ inst->alg.cra_ahash.init = cryptd_hash_init;
+ inst->alg.cra_ahash.update = cryptd_hash_update_enqueue;
+ inst->alg.cra_ahash.final = cryptd_hash_final_enqueue;
+ inst->alg.cra_ahash.setkey = cryptd_hash_setkey;
+ inst->alg.cra_ahash.digest = cryptd_hash_digest_enqueue;
+
+out_put_alg:
+ crypto_mod_put(alg);
+ return inst;
+}
+
static struct cryptd_state state;

static struct crypto_instance *cryptd_alloc(struct rtattr **tb)
@@ -272,6 +490,8 @@ static struct crypto_instance *cryptd_alloc(struct rtattr **tb)
switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_BLKCIPHER:
return cryptd_alloc_blkcipher(tb, &state);
+ case CRYPTO_ALG_TYPE_HASH:
+ return cryptd_alloc_hash(tb, &state);
}

return ERR_PTR(-EINVAL);
diff --git a/crypto/digest.c b/crypto/digest.c
index 6fd43bd..19b7ade 100644
--- a/crypto/digest.c
+++ b/crypto/digest.c
@@ -141,14 +141,14 @@ int crypto_init_digest_ops(struct crypto_tfm *tfm)

if (dalg->dia_digestsize > crypto_tfm_alg_blocksize(tfm))
return -EINVAL;
-
+
ops->init = init;
ops->update = update;
ops->final = final;
ops->digest = digest;
ops->setkey = dalg->dia_setkey ? setkey : nosetkey;
ops->digestsize = dalg->dia_digestsize;
-
+
return 0;
}

diff --git a/crypto/hash.c b/crypto/hash.c
index 7dcff67..6df8a8c 100644
--- a/crypto/hash.c
+++ b/crypto/hash.c
@@ -59,24 +59,108 @@ static int hash_setkey(struct crypto_hash *crt, const u8 *key,
return alg->setkey(crt, key, keylen);
}

-static int crypto_init_hash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+static int hash_async_setkey(struct crypto_ahash *tfm_async, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_tfm *tfm = crypto_ahash_tfm(tfm_async);
+ struct crypto_hash *tfm_hash = __crypto_hash_cast(tfm);
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ return alg->setkey(tfm_hash, key, keylen);
+}
+
+static int hash_async_init(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->init(&desc);
+}
+
+static int hash_async_update(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->update(&desc, req->src, req->nbytes);
+}
+
+static int hash_async_final(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->final(&desc, req->result);
+}
+
+static int hash_async_digest(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->digest(&desc, req->src, req->nbytes, req->result);
+}
+
+static int crypto_init_hash_ops_async(struct crypto_tfm *tfm)
+{
+ struct ahash_tfm *crt = &tfm->crt_ahash;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ crt->init = hash_async_init;
+ crt->update = hash_async_update;
+ crt->final = hash_async_final;
+ crt->digest = hash_async_digest;
+ crt->setkey = hash_async_setkey;
+ crt->digestsize = alg->digestsize;
+ crt->base = __crypto_ahash_cast(tfm);
+
+ return 0;
+}
+
+static int crypto_init_hash_ops_sync(struct crypto_tfm *tfm)
{
struct hash_tfm *crt = &tfm->crt_hash;
struct hash_alg *alg = &tfm->__crt_alg->cra_hash;

- if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
- return -EINVAL;
-
- crt->init = alg->init;
- crt->update = alg->update;
- crt->final = alg->final;
- crt->digest = alg->digest;
- crt->setkey = hash_setkey;
+ crt->init = alg->init;
+ crt->update = alg->update;
+ crt->final = alg->final;
+ crt->digest = alg->digest;
+ crt->setkey = hash_setkey;
crt->digestsize = alg->digestsize;

return 0;
}

+static int crypto_init_hash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
+ return -EINVAL;
+
+ if (mask & CRYPTO_ALG_ASYNC)
+ return crypto_init_hash_ops_async(tfm);
+ else
+ return crypto_init_hash_ops_sync(tfm);
+}
+
static void crypto_hash_show(struct seq_file *m, struct crypto_alg *alg)
__attribute__ ((unused));
static void crypto_hash_show(struct seq_file *m, struct crypto_alg *alg)
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 1ab8c01..784f0b5 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -35,6 +35,7 @@
#include <linux/jiffies.h>
#include <linux/timex.h>
#include <linux/interrupt.h>
+#include <linux/delay.h>
#include "tcrypt.h"

/*
@@ -220,6 +221,98 @@ out:
crypto_free_hash(tfm);
}

+static void test_ahash(char *algo, struct hash_testvec *template,
+ unsigned int tcount)
+{
+ struct hash_testvec *hash_tv;
+ struct crypto_ahash *tfm = NULL;
+ struct ahash_request *req = NULL;
+ struct tcrypt_result result;
+ struct scatterlist sg[8];
+ char digest_result[tcount][4*16];
+ unsigned int tsize;
+ unsigned int i;
+ int ret;
+
+ printk(KERN_INFO "\ntesting %s\n", algo);
+
+ tsize = sizeof(struct hash_testvec);
+ tsize *= tcount;
+ if (tsize > TVMEMSIZE) {
+ printk(KERN_ERR "template (%u) too big for tvmem (%u)\n",
+ tsize, TVMEMSIZE);
+ return;
+ }
+ memcpy(tvmem, template, tsize);
+ hash_tv = (void *)tvmem;
+
+ init_completion(&result.completion);
+
+ tfm = crypto_alloc_ahash(algo, 0, 0);
+ if (IS_ERR(tfm)) {
+ printk(KERN_ERR "failed to load transform for %s: %ld\n",
algo,
+ PTR_ERR(tfm));
+ return;
+ }
+ req = ahash_request_alloc(tfm, GFP_KERNEL);
+ if (!req) {
+ printk(KERN_ERR "failed to allocate request for %s\n",
algo);
+ goto out;
+ }
+ ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ tcrypt_complete, &result);
+
+ for (i = 0; i < tcount; i++) {
+ printk(KERN_INFO "test %u:\n", i + 1);
+ memset(&digest_result[i], 0, 4*16);
+ crypto_ahash_clear_flags(tfm, ~0);
+ if (hash_tv[i].ksize) {
+ ret = crypto_ahash_setkey(tfm, hash_tv[i].key,
+ hash_tv[i].ksize);
+ if (ret) {
+ printk(KERN_ERR "setkey() failed error
%d\n",
+ ret);
+ goto out;
+ }
+ }
+
+ sg_init_one(&sg[0], hash_tv[i].plaintext, hash_tv[i].psize);
+
+ ahash_request_set_crypt(req, sg, digest_result[i],
+ hash_tv[i].psize);
+
+ ret = crypto_ahash_digest(req);
+ switch (ret) {
+ case 0:
+ break;
+ case -EINPROGRESS:
+ case -EBUSY:
+ ret = wait_for_completion_interruptible(
+ &result.completion);
+ if (!ret && !((ret = result.err))) {
+ INIT_COMPLETION(result.completion);
+ break;
+ }
+ /* fall through */
+ default:
+ printk(KERN_ERR "hash() failed error %d\n", ret);
+ goto out;
+ }
+
+ hexdump(digest_result[i], crypto_ahash_digestsize(tfm));
+ printk(KERN_INFO "%s\n",
+ memcmp(digest_result[i], hash_tv[i].digest,
+ crypto_ahash_digestsize(tfm)) ?
+ "fail" : "pass");
+ }
+
+out:
+ if (req)
+ ahash_request_free(req);
+
+ crypto_free_ahash(tfm);
+}
+
static void test_aead(char *algo, int enc, struct aead_testvec *template,
unsigned int tcount)
{
@@ -471,7 +564,7 @@ static void test_cipher(char *algo, int enc,
else
e = "decryption";

- printk("\ntesting %s %s\n", algo, e);
+ printk(KERN_INFO "\ntesting cipher %s %s\n", algo, e);

tsize = sizeof (struct cipher_testvec);
if (tsize > TVMEMSIZE) {
@@ -1619,6 +1712,51 @@ static void do_test(void)
XCBC_AES_TEST_VECTORS);
break;

+ case 110:
+ test_ahash("hmac(md5)", hmac_md5_tv_template,
+ HMAC_MD5_TEST_VECTORS);
+ break;
+
+ case 111:
+ test_ahash("hmac(sha1)", hmac_sha1_tv_template,
+ HMAC_SHA1_TEST_VECTORS);
+ break;
+
+ case 112:
+ test_ahash("hmac(sha256)", hmac_sha256_tv_template,
+ HMAC_SHA256_TEST_VECTORS);
+ break;
+
+ case 113:
+ test_ahash("hmac(sha384)", hmac_sha384_tv_template,
+ HMAC_SHA384_TEST_VECTORS);
+ break;
+
+ case 114:
+ test_ahash("hmac(sha512)", hmac_sha512_tv_template,
+ HMAC_SHA512_TEST_VECTORS);
+ break;
+
+ case 115:
+ test_ahash("hmac(sha224)", hmac_sha224_tv_template,
+ HMAC_SHA224_TEST_VECTORS);
+ break;
+
+ case 120:
+ test_ahash("hmac(md5)", hmac_md5_tv_template,
+ HMAC_MD5_TEST_VECTORS);
+ test_ahash("hmac(sha1)", hmac_sha1_tv_template,
+ HMAC_SHA1_TEST_VECTORS);
+ test_ahash("hmac(sha224)", hmac_sha224_tv_template,
+ HMAC_SHA224_TEST_VECTORS);
+ test_ahash("hmac(sha256)", hmac_sha256_tv_template,
+ HMAC_SHA256_TEST_VECTORS);
+ test_ahash("hmac(sha384)", hmac_sha384_tv_template,
+ HMAC_SHA384_TEST_VECTORS);
+ test_ahash("hmac(sha512)", hmac_sha512_tv_template,
+ HMAC_SHA512_TEST_VECTORS);
+ break;
+
case 200:
test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
aes_speed_template);
@@ -1795,7 +1933,7 @@ static int __init init(void)

/* We intentionaly return -EAGAIN to prevent keeping
* the module. It does all its work from init()
- * and doesn't offer any runtime functionality
+ * and doesn't offer any runtime functionality
* => we don't need it in the memory, do we?
* -- mludvig
*/
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index d8c7040..21e4234 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -92,6 +92,12 @@ config CRYPTO_DEV_HIFN_795X
help
This option allows you to have support for HIFN 795x crypto
adapters.

-
+config CRYPTO_DEV_AHASH_SAMPLE
+ tristate "Asynchronous HASH sample driver over software synchronous
HASH"
+ select CRYPTO_HASH
+ select CRYPTO_ALGAPI
+ help
+ This is a sample asynchronous HASH device driver over synchronous
+ software HASH.

endif # CRYPTO_HW
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index c0327f0..0b1cc2f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -2,3 +2,4 @@ obj-$(CONFIG_CRYPTO_DEV_PADLOCK_AES) += padlock-aes.o
obj-$(CONFIG_CRYPTO_DEV_PADLOCK_SHA) += padlock-sha.o
obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o
+obj-$(CONFIG_CRYPTO_DEV_AHASH_SAMPLE) += ahash_sample.o
diff --git a/drivers/crypto/ahash_sample.c b/drivers/crypto/ahash_sample.c
new file mode 100644
index 0000000..0c1ad60
--- /dev/null
+++ b/drivers/crypto/ahash_sample.c
@@ -0,0 +1,354 @@
+/*
+ * Sample Asynchronous device driver that wraps around software sync HASH
+ *
+ * 2008 Copyright (c) Loc Ho <[email protected]>
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mod_devicetable.h>
+#include <linux/interrupt.h>
+#include <linux/highmem.h>
+#include <linux/scatterlist.h>
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
+
+struct ahash_sample_device {
+ char name[64];
+ struct tasklet_struct tasklet;
+ struct crypto_queue queue;
+ spinlock_t lock; /**< Queue lock protection */
+ struct list_head alg_list;
+};
+
+#define AHASH_SAMPLE_OP_DIGEST 0
+#define AHASH_SAMPLE_OP_UPDATE 1
+#define AHASH_SAMPLE_OP_FINAL 2
+
+struct ahash_sample_context {
+ struct ahash_sample_device *dev;
+ u8 key[16];
+ unsigned int keysize;
+ struct crypto_hash *sync_tfm;
+ struct hash_desc desc;
+ u8 ops;
+};
+
+struct ahash_sample_alg {
+ struct list_head entry;
+ struct crypto_alg alg;
+ struct ahash_sample_device *dev;
+};
+
+static struct ahash_sample_device *ahash_sample_dev;
+
+#define crypto_alg_to_ahash_sample_alg(a) container_of(a, \
+ struct ahash_sample_alg, \
+ alg)
+
+static int ahash_sample_alg_init(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *alg = tfm->__crt_alg;
+ struct ahash_sample_alg *ahash_alg = crypto_alg_to_ahash_sample_alg(alg);
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(tfm);
+
+ ctx->dev = ahash_alg->dev;
+ ctx->sync_tfm = crypto_alloc_hash(alg->cra_name, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(ctx->sync_tfm)) {
+ printk(KERN_ERR
+ "AHASH_SAMPLE: failed to load transform for %s:
%ld\n",
+ alg->cra_name, PTR_ERR(ctx->sync_tfm));
+ return -ENOMEM;
+ }
+ printk(KERN_INFO "AHASH_SAMPLE: initialize alg %s\n",
alg->cra_name);
+ return 0;
+}
+
+static void ahash_sample_alg_exit(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *alg = tfm->__crt_alg;
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(tfm);
+
+ printk(KERN_INFO "AHASH_SAMPLE: exit alg %s\n", alg->cra_name);
+
+ if (ctx->sync_tfm) {
+ crypto_free_hash(ctx->sync_tfm);
+ ctx->sync_tfm = NULL;
+ ctx->dev = NULL;
+ }
+}
+
+static int ahash_sample_ops_setkey(struct crypto_ahash *cipher, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_tfm *tfm = crypto_ahash_tfm(cipher);
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(tfm);
+ int ret;
+
+ printk(KERN_INFO "AHASH_SAMPLE: setkey\n");
+
+ ret = crypto_hash_setkey(ctx->sync_tfm, key, keylen);
+ if (ret) {
+ printk(KERN_ERR
+ "aynchronous hash generic setkey failed error %d\n",
+ ret);
+ return -1;
+ }
+ return ret;
+}
+
+static inline int ahash_sample_ops_init(struct ahash_request *req)
+{
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(req->base.tfm);
+
+ printk(KERN_INFO "AHASH_SAMPLE: init\n");
+
+ ctx->desc.tfm = ctx->sync_tfm;
+ ctx->desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+ return crypto_hash_init(&ctx->desc);
+}
+
+static inline int ahash_sample_ops_update(struct ahash_request *req)
+{
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(req->base.tfm);
+ unsigned long flags;
+ int ret;
+
+ printk(KERN_INFO "AHASH_SAMPLE: update\n");
+
+ ctx->ops = AHASH_SAMPLE_OP_UPDATE;
+ spin_lock_irqsave(&ctx->dev->lock, flags);
+ ret = ahash_enqueue_request(&ctx->dev->queue, req);
+ spin_unlock_irqrestore(&ctx->dev->lock, flags);
+
+ tasklet_schedule(&ctx->dev->tasklet);
+ return ret;
+}
+
+static inline int ahash_sample_ops_final(struct ahash_request *req)
+{
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(req->base.tfm);
+ unsigned long flags;
+ int ret;
+
+ printk(KERN_INFO "AHASH_SAMPLE: final\n");
+
+ ctx->ops = AHASH_SAMPLE_OP_FINAL;
+ spin_lock_irqsave(&ctx->dev->lock, flags);
+ ret = ahash_enqueue_request(&ctx->dev->queue, req);
+ spin_unlock_irqrestore(&ctx->dev->lock, flags);
+
+ tasklet_schedule(&ctx->dev->tasklet);
+ return ret;
+}
+
+static inline int ahash_sample_ops_digest(struct ahash_request *req)
+{
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(req->base.tfm);
+ unsigned long flags;
+ int ret;
+
+ printk(KERN_INFO "AHASH_SAMPLE: digest\n");
+
+ ctx->ops = AHASH_SAMPLE_OP_DIGEST;
+ spin_lock_irqsave(&ctx->dev->lock, flags);
+ ret = ahash_enqueue_request(&ctx->dev->queue, req);
+ spin_unlock_irqrestore(&ctx->dev->lock, flags);
+
+ tasklet_schedule(&ctx->dev->tasklet);
+ return ret;
+}
+
+static int ahash_sample_handle_req(struct ahash_request *req)
+{
+ struct ahash_sample_context *ctx = crypto_tfm_ctx(req->base.tfm);
+ struct hash_desc desc;
+ int ret;
+
+ desc.tfm = ctx->sync_tfm;
+ desc.flags = 0;
+ switch (ctx->ops) {
+ case AHASH_SAMPLE_OP_UPDATE:
+ ret = crypto_hash_update(&desc, req->src, req->nbytes);
+ break;
+ case AHASH_SAMPLE_OP_FINAL:
+ ret = crypto_hash_final(&desc, req->result);
+ break;
+ case AHASH_SAMPLE_OP_DIGEST:
+ default:
+ ret = crypto_hash_digest(&desc, req->src,
+ req->nbytes, req->result);
+ break;
+ }
+ if (ret) {
+ printk(KERN_ERR "AHASH_SAMPLE: "
+ "asynchronous hash generic digest failed error
%d\n",
+ ret);
+ return ret;
+ }
+ return 0;
+}
+
+static void ahash_sample_bh_tasklet_cb(unsigned long data)
+{
+ struct ahash_sample_device *dev = (struct ahash_sample_device *) data;
+ struct crypto_async_request *async_req;
+ struct ahash_sample_context *ctx;
+ struct ahash_request *req;
+ unsigned long flags;
+ int err;
+
+ while (1) {
+ spin_lock_irqsave(&dev->lock, flags);
+ async_req = crypto_dequeue_request(&dev->queue);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ if (!async_req)
+ break;
+
+ ctx = crypto_tfm_ctx(async_req->tfm);
+ req = container_of(async_req, struct ahash_request, base);
+
+ /* Process the request */
+ err = ahash_sample_handle_req(req);
+
+ /* Notify packet completed */
+ req->base.complete(&req->base, err);
+ }
+}
+
+static struct crypto_alg ahash_sample_alg_tbl[] =
+{
+ { .cra_name = "hmac(md5)",
+ .cra_driver_name = "ahash-md5",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC,
+ .cra_blocksize = 64, /* MD5-HMAC block size is 512-bits */
+ .cra_ctxsize = sizeof(struct ahash_sample_context),
+ .cra_alignmask = 0,
+ .cra_type = &crypto_ahash_type,
+ .cra_module = THIS_MODULE,
+ .cra_u = { .ahash = {
+ .digestsize = 16, /* Digest is 128 bits */
+ .init = ahash_sample_ops_init,
+ .update = ahash_sample_ops_update,
+ .final = ahash_sample_ops_final,
+ .digest = ahash_sample_ops_digest,
+ .setkey = ahash_sample_ops_setkey,
+ } },
+ },
+ { .cra_name = "" }
+};
+
+static void ahash_sample_unregister_alg(struct ahash_sample_device *dev)
+{
+ struct ahash_sample_alg *alg, *tmp;
+
+ list_for_each_entry_safe(alg, tmp, &dev->alg_list, entry) {
+ list_del(&alg->entry);
+ crypto_unregister_alg(&alg->alg);
+ kfree(alg);
+ }
+}
+
+static int ahash_sample_register_alg(struct ahash_sample_device *dev)
+{
+ struct ahash_sample_alg *alg;
+ int i;
+ int rc = 0;
+
+ for (i = 0; ahash_sample_alg_tbl[i].cra_name[0] != '\0'; i++) {
+ alg = kzalloc(sizeof(struct ahash_sample_alg), GFP_KERNEL);
+ if (!alg)
+ return -ENOMEM;
+
+ alg->alg = ahash_sample_alg_tbl[i];
+ INIT_LIST_HEAD(&alg->alg.cra_list);
+ alg->dev = dev;
+ alg->alg.cra_init = ahash_sample_alg_init;
+ alg->alg.cra_exit = ahash_sample_alg_exit;
+ list_add_tail(&alg->entry, &dev->alg_list);
+ rc = crypto_register_alg(&alg->alg);
+ if (rc) {
+ printk(KERN_ERR
+ "AHASH_SAMPLE: failed to register alg
%s.%s",
+ ahash_sample_alg_tbl[i].cra_driver_name,
+ ahash_sample_alg_tbl[i].cra_name);
+
+ list_del(&alg->entry);
+ kfree(alg);
+ return rc;
+ }
+ }
+ return rc;
+}
+
+static int __devinit ahash_sample_init(void)
+{
+ int err;
+
+ ahash_sample_dev = kzalloc(sizeof(struct ahash_sample_device) +
+ sizeof(struct crypto_alg),
+ GFP_KERNEL);
+ if (!ahash_sample_dev) {
+ err = -ENOMEM;
+ goto err_nomem;
+ }
+
+ INIT_LIST_HEAD(&ahash_sample_dev->alg_list);
+ strncpy(ahash_sample_dev->name, "AHASH_generic",
+ sizeof(ahash_sample_dev->name));
+
+ err = ahash_sample_register_alg(ahash_sample_dev);
+ if (err)
+ goto err_register_alg;
+
+ /* Init tasklet for asynchronous processing */
+ tasklet_init(&ahash_sample_dev->tasklet, ahash_sample_bh_tasklet_cb,
+ (unsigned long) ahash_sample_dev);
+ crypto_init_queue(&ahash_sample_dev->queue, 64*1024);
+
+ printk(KERN_INFO "AHASH_SAMPLE: Asynchronous "
+ "hashing sample driver successfully registered.\n");
+ return 0;
+
+err_register_alg:
+ kfree(ahash_sample_dev);
+ ahash_sample_dev = NULL;
+
+err_nomem:
+ return err;
+}
+
+static void __devexit ahash_sample_fini(void)
+{
+ ahash_sample_unregister_alg(ahash_sample_dev);
+ kfree(ahash_sample_dev);
+ ahash_sample_dev = NULL;
+ printk(KERN_INFO
+ "AHASH_SAMPLE: Driver for testing asynchronous hash support
"
+ "framework has been successfully unregistered.\n");
+}
+
+module_init(ahash_sample_init);
+module_exit(ahash_sample_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Loc Ho <[email protected]>");
+MODULE_DESCRIPTION("Sample asynchronous hash driver");
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 60d06e7..fef272a 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -98,6 +98,7 @@ extern const struct crypto_type crypto_ablkcipher_type;
extern const struct crypto_type crypto_aead_type;
extern const struct crypto_type crypto_blkcipher_type;
extern const struct crypto_type crypto_hash_type;
+extern const struct crypto_type crypto_ahash_type;

void crypto_mod_put(struct crypto_alg *alg);

@@ -314,5 +315,40 @@ static inline int crypto_requires_sync(u32 type, u32 mask)
return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
}

+static inline void *crypto_ahash_ctx(struct crypto_ahash *tfm)
+{
+ return crypto_tfm_ctx(&tfm->base);
+}
+
+static inline struct ahash_alg *crypto_ahash_alg(
+ struct crypto_ahash *tfm)
+{
+ return &crypto_ahash_tfm(tfm)->__crt_alg->cra_ahash;
+}
+
+static inline int ahash_enqueue_request(struct crypto_queue *queue,
+ struct ahash_request *request)
+{
+ return crypto_enqueue_request(queue, &request->base);
+}
+
+static inline struct ahash_request *ahash_dequeue_request(
+ struct crypto_queue *queue)
+{
+ return ahash_request_cast(crypto_dequeue_request(queue));
+}
+
+static inline void *ahash_request_ctx(struct ahash_request *req)
+{
+ return req->__ctx;
+}
+
+static inline int ahash_tfm_in_queue(struct crypto_queue *queue,
+ struct crypto_ahash *tfm)
+{
+ return crypto_tfm_in_queue(queue, crypto_ahash_tfm(tfm));
+}
+
+
#endif /* _CRYPTO_ALGAPI_H */

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 5e02d1b..fe9a5c2 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -7,10 +7,10 @@
*
* Portions derived from Cryptoapi, by Alexander Kjeldaas <[email protected]>
* and Nettle, by Niels Möller.
- *
+ *
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
+ * Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
@@ -37,6 +37,7 @@
#define CRYPTO_ALG_TYPE_GIVCIPHER 0x00000006
#define CRYPTO_ALG_TYPE_COMPRESS 0x00000008
#define CRYPTO_ALG_TYPE_AEAD 0x00000009
+#define CRYPTO_ALG_TYPE_AHASH 0x0000000A

#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
#define CRYPTO_ALG_TYPE_BLKCIPHER_MASK 0x0000000c
@@ -102,6 +103,7 @@ struct crypto_async_request;
struct crypto_aead;
struct crypto_blkcipher;
struct crypto_hash;
+struct crypto_ahash;
struct crypto_tfm;
struct crypto_type;
struct aead_givcrypt_request;
@@ -131,6 +133,16 @@ struct ablkcipher_request {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};

+struct ahash_request {
+ struct crypto_async_request base;
+
+ unsigned int nbytes;
+ struct scatterlist *src;
+ u8 *result;
+
+ void *__ctx[] CRYPTO_MINALIGN_ATTR;
+};
+
/**
* struct aead_request - AEAD request
* @base: Common attributes for async crypto requests
@@ -195,6 +207,17 @@ struct ablkcipher_alg {
unsigned int ivsize;
};

+struct ahash_alg {
+ int (*init)(struct ahash_request *req);
+ int (*update)(struct ahash_request *req);
+ int (*final)(struct ahash_request *req);
+ int (*digest)(struct ahash_request *req);
+ int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
+ unsigned int digestsize;
+};
+
struct aead_alg {
int (*setkey)(struct crypto_aead *tfm, const u8 *key,
unsigned int keylen);
@@ -272,6 +295,7 @@ struct compress_alg {
#define cra_cipher cra_u.cipher
#define cra_digest cra_u.digest
#define cra_hash cra_u.hash
+#define cra_ahash cra_u.ahash
#define cra_compress cra_u.compress

struct crypto_alg {
@@ -298,13 +322,14 @@ struct crypto_alg {
struct cipher_alg cipher;
struct digest_alg digest;
struct hash_alg hash;
+ struct ahash_alg ahash;
struct compress_alg compress;
} cra_u;

int (*cra_init)(struct crypto_tfm *tfm);
void (*cra_exit)(struct crypto_tfm *tfm);
void (*cra_destroy)(struct crypto_alg *alg);
-
+
struct module *cra_module;
};

@@ -390,6 +415,19 @@ struct hash_tfm {
unsigned int digestsize;
};

+struct ahash_tfm {
+ int (*init)(struct ahash_request *req);
+ int (*update)(struct ahash_request *req);
+ int (*final)(struct ahash_request *req);
+ int (*digest)(struct ahash_request *req);
+ int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
+ unsigned int digestsize;
+ struct crypto_ahash *base;
+ unsigned int reqsize;
+};
+
struct compress_tfm {
int (*cot_compress)(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
@@ -404,21 +442,23 @@ struct compress_tfm {
#define crt_blkcipher crt_u.blkcipher
#define crt_cipher crt_u.cipher
#define crt_hash crt_u.hash
+#define crt_ahash crt_u.ahash
#define crt_compress crt_u.compress

struct crypto_tfm {

u32 crt_flags;
-
+
union {
struct ablkcipher_tfm ablkcipher;
struct aead_tfm aead;
struct blkcipher_tfm blkcipher;
struct cipher_tfm cipher;
struct hash_tfm hash;
+ struct ahash_tfm ahash;
struct compress_tfm compress;
} crt_u;
-
+
struct crypto_alg *__crt_alg;

void *__crt_ctx[] CRYPTO_MINALIGN_ATTR;
@@ -448,6 +488,10 @@ struct crypto_hash {
struct crypto_tfm base;
};

+struct crypto_ahash {
+ struct crypto_tfm base;
+};
+
enum {
CRYPTOA_UNSPEC,
CRYPTOA_ALG,
@@ -477,7 +521,7 @@ struct crypto_attr_u32 {
/*
* Transform user interface.
*/
-
+
struct crypto_tfm *crypto_alloc_tfm(const char *alg_name, u32 tfm_flags);
struct crypto_tfm *crypto_alloc_base(const char *alg_name, u32 type, u32 mask);
void crypto_free_tfm(struct crypto_tfm *tfm);
@@ -1112,7 +1156,7 @@ static inline struct crypto_hash *crypto_alloc_hash(const char *alg_name,
u32 type, u32 mask)
{
type &= ~CRYPTO_ALG_TYPE_MASK;
- mask &= ~CRYPTO_ALG_TYPE_MASK;
+ mask &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
type |= CRYPTO_ALG_TYPE_HASH;
mask |= CRYPTO_ALG_TYPE_HASH_MASK;

@@ -1271,5 +1315,130 @@ static inline int crypto_comp_decompress(struct crypto_comp *tfm,
src, slen, dst, dlen);
}

+static inline struct crypto_tfm *crypto_ahash_tfm(
+ struct crypto_ahash *tfm)
+{
+ return &tfm->base;
+}
+
+struct crypto_ahash *crypto_alloc_ahash(const char *alg_name,
+ u32 type, u32 mask);
+
+static inline void crypto_free_ahash(struct crypto_ahash *tfm)
+{
+ crypto_free_tfm(crypto_ahash_tfm(tfm));
+}
+
+static inline struct crypto_ahash *__crypto_ahash_cast(struct crypto_tfm *tfm)
+{
+ return (struct crypto_ahash *) tfm;
+}
+
+static inline unsigned int crypto_ahash_alignmask(
+ struct crypto_ahash *tfm)
+{
+ return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm));
+}
+
+static inline struct ahash_tfm *crypto_ahash_crt(struct crypto_ahash *tfm)
+{
+ return &crypto_ahash_tfm(tfm)->crt_ahash;
+}
+
+static inline unsigned int crypto_ahash_digestsize(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_crt(tfm)->digestsize;
+}
+
+static inline u32 crypto_ahash_get_flags(struct crypto_ahash *tfm)
+{
+ return crypto_tfm_get_flags(crypto_ahash_tfm(tfm));
+}
+
+static inline void crypto_ahash_set_flags(struct crypto_ahash *tfm, u32 flags)
+{
+ crypto_tfm_set_flags(crypto_ahash_tfm(tfm), flags);
+}
+
+static inline void crypto_ahash_clear_flags(struct crypto_ahash *tfm, u32 flags)
+{
+ crypto_tfm_clear_flags(crypto_ahash_tfm(tfm), flags);
+}
+
+static inline struct crypto_ahash *crypto_ahash_reqtfm(
+ struct ahash_request *req)
+{
+ return __crypto_ahash_cast(req->base.tfm);
+}
+
+static inline unsigned int crypto_ahash_reqsize(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_crt(tfm)->reqsize;
+}
+
+static inline int crypto_ahash_setkey(struct crypto_ahash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct ahash_tfm *crt = crypto_ahash_crt(tfm);
+
+ return crt->setkey(crt->base, key, keylen);
+}
+
+static inline int crypto_ahash_digest(struct ahash_request *req)
+{
+ struct ahash_tfm *crt = crypto_ahash_crt(crypto_ahash_reqtfm(req));
+ return crt->digest(req);
+}
+
+static inline void ahash_request_set_tfm(
+ struct ahash_request *req, struct crypto_ahash *tfm)
+{
+ req->base.tfm = crypto_ahash_tfm(crypto_ahash_crt(tfm)->base);
+}
+
+static inline struct ahash_request *ahash_request_alloc(
+ struct crypto_ahash *tfm, gfp_t gfp)
+{
+ struct ahash_request *req;
+
+ req = kmalloc(sizeof(struct ahash_request) +
+ crypto_ahash_reqsize(tfm), gfp);
+
+ if (likely(req))
+ ahash_request_set_tfm(req, tfm);
+
+ return req;
+}
+
+static inline void ahash_request_free(struct ahash_request *req)
+{
+ kfree(req);
+}
+
+static inline struct ahash_request *ahash_request_cast(
+ struct crypto_async_request *req)
+{
+ return container_of(req, struct ahash_request, base);
+}
+
+static inline void ahash_request_set_callback(
+ struct ahash_request *req,
+ u32 flags, crypto_completion_t complete, void *data)
+{
+ req->base.complete = complete;
+ req->base.data = data;
+ req->base.flags = flags;
+}
+
+static inline void ahash_request_set_crypt(
+ struct ahash_request *req,
+ struct scatterlist *src, u8 *result,
+ unsigned int nbytes)
+{
+ req->src = src;
+ req->nbytes = nbytes;
+ req->result = result;
+}
+
#endif /* _LINUX_CRYPTO_H */

--
1.5.3

2008-03-13 11:42:35

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Thu, Jan 24, 2008 at 06:45:25PM -0800, Loc Ho wrote:
> >From e5d67c3670f1ec15339a92cc291027c0a059aaed Mon Sep 17 00:00:00 2001
> From: Loc Ho <[email protected]>
> Date: Thu, 24 Jan 2008 18:13:28 -0800
> Subject: [PATCH] Add Async Hash Support

I'm really sorry for not responding earlier.

Anyway the code looks good over all and I'm going to play with
it here a little before sticking it into cryptodev.

Thanks!
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-03-13 17:46:10

by Loc Ho

[permalink] [raw]
Subject: RE: [PATCH 1/1] CryptoAPI: Add Async Hash Support

Hi,

Okay... Don't forget the async digest patch (separate email) as well. Is
there a reason why we still need the "async crypto daemon"?

-Loc

-----Original Message-----
From: Herbert Xu [mailto:[email protected]]
Sent: Thursday, March 13, 2008 4:43 AM
To: Loc Ho
Cc: [email protected]; 'Sebastian Siewior'
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Thu, Jan 24, 2008 at 06:45:25PM -0800, Loc Ho wrote:
> >From e5d67c3670f1ec15339a92cc291027c0a059aaed Mon Sep 17 00:00:00
> >2001
> From: Loc Ho <[email protected]>
> Date: Thu, 24 Jan 2008 18:13:28 -0800
> Subject: [PATCH] Add Async Hash Support

I'm really sorry for not responding earlier.

Anyway the code looks good over all and I'm going to play with it here a
little before sticking it into cryptodev.

Thanks!
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]> Home Page:
http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


2008-04-10 02:03:49

by Loc Ho

[permalink] [raw]
Subject: RE: [PATCH 1/1] CryptoAPI: Add Async Hash Support

Hi Herbert,

I just synced with the cryptodev git tree and don't see async hash/digest
support added. Has it been committed?

-Loc

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Herbert Xu
Sent: Thursday, March 13, 2008 4:43 AM
To: Loc Ho
Cc: [email protected]; 'Sebastian Siewior'
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Thu, Jan 24, 2008 at 06:45:25PM -0800, Loc Ho wrote:
> >From e5d67c3670f1ec15339a92cc291027c0a059aaed Mon Sep 17 00:00:00
> >2001
> From: Loc Ho <[email protected]>
> Date: Thu, 24 Jan 2008 18:13:28 -0800
> Subject: [PATCH] Add Async Hash Support

I'm really sorry for not responding earlier.

Anyway, the code looks good overall and I'm going to play with it here a
little before sticking it into cryptodev.


2008-04-10 02:30:01

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Wed, Apr 09, 2008 at 07:03:45PM -0700, Loc Ho wrote:
> Hi Herbert,
>
> I just synced with the cryptodev git tree and don't see async hash/digest
> support added. Has it been committed?

Sorry, I haven't finished testing it yet. I hope to get to it
soon.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-04-22 11:16:39

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Wed, Apr 09, 2008 at 07:03:45PM -0700, Loc Ho wrote:
> Hi Herbert,
>
> I just synced with the cryptodev git tree and don't see async hash/digest
> support added. Has it been committed?

OK I've looked at your patch and I think it's nearly perfect.
The only trouble is that it doesn't apply because your mailer
has corrupted it :)

So if you could resend it with a mailer that doesn't wrap long
lines it would be great.

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-04-22 17:23:55

by Loc Ho

[permalink] [raw]
Subject: [PATCH 1/1] CryptoAPI: Add Async Hash Support

Hi,

They are attached instead of inlined until I figure out how to stop the
mailer from wrapping long lines.

-Loc

-----Original Message-----
From: Herbert Xu [mailto:[email protected]]
Sent: Tuesday, April 22, 2008 4:17 AM
To: Loc Ho
Cc: [email protected]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Wed, Apr 09, 2008 at 07:03:45PM -0700, Loc Ho wrote:
> Hi Herbert,
>
> I just synced with the cryptodev git tree and don't see async hash/digest
> support added. Has it been committed?

OK I've looked at your patch and I think it's nearly perfect.
The only trouble is that it doesn't apply because your mailer has
corrupted it :)

So if you could resend it with a mailer that doesn't wrap long lines it
would be great.


Attachments:
0001-Add-Async-Hash-Support.patch (41.01 kB)
0001-Async-Digest-Support.patch (8.62 kB)

2008-05-07 13:00:52

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Tue, Apr 22, 2008 at 10:23:23AM -0700, Loc Ho wrote:
>
> They are attached instead of inlined until I figure out how to stop the
> mailer from wrapping long lines.

Thanks. I've finally finished testing and integrating it. I've
merged them into one patch and dropped unrelated white-space
changes. You can resubmit those in a separate patch.

Could you please recheck this and if everything looks OK then
send it to me again with a sign-off? Oh and if you don't mind
please split the tcrypt and cryptd parts into separate patches
since they're conceptually distinct from the API bits.

Finally a proper description before each patch would be nice :)

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
diff --git a/crypto/Makefile b/crypto/Makefile
index ca02441..6ea428a 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_CRYPTO_BLKCIPHER) += crypto_blkcipher.o
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o

crypto_hash-objs := hash.o
+crypto_hash-objs += ahash.o
obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o

obj-$(CONFIG_CRYPTO_MANAGER) += cryptomgr.o
diff --git a/crypto/ahash.c b/crypto/ahash.c
new file mode 100644
index 0000000..a83e035
--- /dev/null
+++ b/crypto/ahash.c
@@ -0,0 +1,106 @@
+/*
+ * Asynchronous Cryptographic Hash operations.
+ *
+ * This is the asynchronous version of hash.c with notification of
+ * completion via a callback.
+ *
+ * Copyright (c) 2008 Loc Ho <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+
+#include "internal.h"
+
+static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct ahash_alg *ahash = crypto_ahash_alg(tfm);
+ unsigned long alignmask = crypto_ahash_alignmask(tfm);
+ int ret;
+ u8 *buffer, *alignbuffer;
+ unsigned long absize;
+
+ absize = keylen + alignmask;
+ buffer = kmalloc(absize, GFP_ATOMIC);
+ if (!buffer)
+ return -ENOMEM;
+
+ alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ memcpy(alignbuffer, key, keylen);
+ ret = ahash->setkey(tfm, alignbuffer, keylen);
+ memset(alignbuffer, 0, keylen);
+ kfree(buffer);
+ return ret;
+}
+
+static int ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct ahash_alg *ahash = crypto_ahash_alg(tfm);
+ unsigned long alignmask = crypto_ahash_alignmask(tfm);
+
+ if ((unsigned long)key & alignmask)
+ return ahash_setkey_unaligned(tfm, key, keylen);
+
+ return ahash->setkey(tfm, key, keylen);
+}
+
+static unsigned int crypto_ahash_ctxsize(struct crypto_alg *alg, u32 type,
+ u32 mask)
+{
+ return alg->cra_ctxsize;
+}
+
+static int crypto_init_ahash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+ struct ahash_alg *alg = &tfm->__crt_alg->cra_ahash;
+ struct ahash_tfm *crt = &tfm->crt_ahash;
+
+ if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
+ return -EINVAL;
+
+ crt->init = alg->init;
+ crt->update = alg->update;
+ crt->final = alg->final;
+ crt->digest = alg->digest;
+ crt->setkey = ahash_setkey;
+ crt->base = __crypto_ahash_cast(tfm);
+ crt->digestsize = alg->digestsize;
+
+ return 0;
+}
+
+static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
+ __attribute__ ((unused));
+static void crypto_ahash_show(struct seq_file *m, struct crypto_alg *alg)
+{
+ seq_printf(m, "type : ahash\n");
+ seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
+ "yes" : "no");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "digestsize : %u\n", alg->cra_hash.digestsize);
+}
+
+const struct crypto_type crypto_ahash_type = {
+ .ctxsize = crypto_ahash_ctxsize,
+ .init = crypto_init_ahash_ops,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_ahash_show,
+#endif
+};
+EXPORT_SYMBOL_GPL(crypto_ahash_type);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
diff --git a/crypto/api.c b/crypto/api.c
index 0a0f41e..d06e332 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -235,8 +235,12 @@ static int crypto_init_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
return crypto_init_cipher_ops(tfm);

case CRYPTO_ALG_TYPE_DIGEST:
- return crypto_init_digest_ops(tfm);
-
+ if ((mask & CRYPTO_ALG_TYPE_HASH_MASK) !=
+ CRYPTO_ALG_TYPE_HASH_MASK)
+ return crypto_init_digest_ops_async(tfm);
+ else
+ return crypto_init_digest_ops(tfm);
+
case CRYPTO_ALG_TYPE_COMPRESS:
return crypto_init_compress_ops(tfm);

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 2504252..83fe67f 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -45,6 +45,13 @@ struct cryptd_blkcipher_request_ctx {
crypto_completion_t complete;
};

+struct cryptd_hash_ctx {
+ struct crypto_hash *child;
+};
+
+struct cryptd_hash_request_ctx {
+ crypto_completion_t complete;
+};

static inline struct cryptd_state *cryptd_get_state(struct crypto_tfm *tfm)
{
@@ -259,6 +266,240 @@ out_put_alg:
return inst;
}

+static int cryptd_hash_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ struct cryptd_instance_ctx *ictx = crypto_instance_ctx(inst);
+ struct crypto_spawn *spawn = &ictx->spawn;
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_hash *cipher;
+
+ cipher = crypto_spawn_hash(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+ tfm->crt_ahash.reqsize =
+ sizeof(struct cryptd_hash_request_ctx);
+ return 0;
+}
+
+static void cryptd_hash_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct cryptd_state *state = cryptd_get_state(tfm);
+ int active;
+
+ mutex_lock(&state->mutex);
+ active = ahash_tfm_in_queue(&state->queue,
+ __crypto_ahash_cast(tfm));
+ mutex_unlock(&state->mutex);
+
+ BUG_ON(active);
+
+ crypto_free_hash(ctx->child);
+}
+
+static int cryptd_hash_setkey(struct crypto_ahash *parent,
+ const u8 *key, unsigned int keylen)
+{
+ struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(parent);
+ struct crypto_hash *child = ctx->child;
+ int err;
+
+ crypto_hash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_hash_set_flags(child, crypto_ahash_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_hash_setkey(child, key, keylen);
+ crypto_ahash_set_flags(parent, crypto_hash_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int cryptd_hash_enqueue(struct ahash_request *req,
+ crypto_completion_t complete)
+{
+ struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct cryptd_state *state =
+ cryptd_get_state(crypto_ahash_tfm(tfm));
+ int err;
+
+ rctx->complete = req->base.complete;
+ req->base.complete = complete;
+
+ spin_lock_bh(&state->lock);
+ err = ahash_enqueue_request(&state->queue, req);
+ spin_unlock_bh(&state->lock);
+
+ wake_up_process(state->task);
+ return err;
+}
+
+static void cryptd_hash_init(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+ struct hash_desc desc;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS))
+ goto out;
+
+ desc.tfm = child;
+ desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = crypto_hash_crt(child)->init(&desc);
+
+ req->base.complete = rctx->complete;
+
+out:
+ local_bh_disable();
+ rctx->complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_init_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_init);
+}
+
+static void cryptd_hash_update(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+ struct hash_desc desc;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS))
+ goto out;
+
+ desc.tfm = child;
+ desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = crypto_hash_crt(child)->update(&desc,
+ req->src,
+ req->nbytes);
+
+ req->base.complete = rctx->complete;
+
+out:
+ local_bh_disable();
+ rctx->complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_update_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_update);
+}
+
+static void cryptd_hash_final(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+ struct hash_desc desc;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS))
+ goto out;
+
+ desc.tfm = child;
+ desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = crypto_hash_crt(child)->final(&desc, req->result);
+
+ req->base.complete = rctx->complete;
+
+out:
+ local_bh_disable();
+ rctx->complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_final_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_final);
+}
+
+static void cryptd_hash_digest(struct crypto_async_request *req_async, int err)
+{
+ struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
+ struct crypto_hash *child = ctx->child;
+ struct ahash_request *req = ahash_request_cast(req_async);
+ struct cryptd_hash_request_ctx *rctx;
+ struct hash_desc desc;
+
+ rctx = ahash_request_ctx(req);
+
+ if (unlikely(err == -EINPROGRESS))
+ goto out;
+
+ desc.tfm = child;
+ desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = crypto_hash_crt(child)->digest(&desc,
+ req->src,
+ req->nbytes,
+ req->result);
+
+ req->base.complete = rctx->complete;
+
+out:
+ local_bh_disable();
+ rctx->complete(&req->base, err);
+ local_bh_enable();
+}
+
+static int cryptd_hash_digest_enqueue(struct ahash_request *req)
+{
+ return cryptd_hash_enqueue(req, cryptd_hash_digest);
+}
+
+static struct crypto_instance *cryptd_alloc_hash(
+ struct rtattr **tb, struct cryptd_state *state)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+
+ alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_HASH,
+ CRYPTO_ALG_TYPE_HASH_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ inst = cryptd_alloc_instance(alg, state);
+ if (IS_ERR(inst))
+ goto out_put_alg;
+
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC;
+ inst->alg.cra_type = &crypto_ahash_type;
+
+ inst->alg.cra_ahash.digestsize = alg->cra_hash.digestsize;
+ inst->alg.cra_ctxsize = sizeof(struct cryptd_hash_ctx);
+
+ inst->alg.cra_init = cryptd_hash_init_tfm;
+ inst->alg.cra_exit = cryptd_hash_exit_tfm;
+
+ inst->alg.cra_ahash.init = cryptd_hash_init_enqueue;
+ inst->alg.cra_ahash.update = cryptd_hash_update_enqueue;
+ inst->alg.cra_ahash.final = cryptd_hash_final_enqueue;
+ inst->alg.cra_ahash.setkey = cryptd_hash_setkey;
+ inst->alg.cra_ahash.digest = cryptd_hash_digest_enqueue;
+
+out_put_alg:
+ crypto_mod_put(alg);
+ return inst;
+}
+
static struct cryptd_state state;

static struct crypto_instance *cryptd_alloc(struct rtattr **tb)
@@ -272,6 +513,8 @@ static struct crypto_instance *cryptd_alloc(struct rtattr **tb)
switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_BLKCIPHER:
return cryptd_alloc_blkcipher(tb, &state);
+ case CRYPTO_ALG_TYPE_DIGEST:
+ return cryptd_alloc_hash(tb, &state);
}

return ERR_PTR(-EINVAL);
diff --git a/crypto/digest.c b/crypto/digest.c
index b526cc3..025c9ae 100644
--- a/crypto/digest.c
+++ b/crypto/digest.c
@@ -157,3 +157,84 @@ int crypto_init_digest_ops(struct crypto_tfm *tfm)
void crypto_exit_digest_ops(struct crypto_tfm *tfm)
{
}
+
+static int digest_async_nosetkey(struct crypto_ahash *tfm_async, const u8 *key,
+ unsigned int keylen)
+{
+ crypto_ahash_clear_flags(tfm_async, CRYPTO_TFM_RES_MASK);
+ return -ENOSYS;
+}
+
+static int digest_async_setkey(struct crypto_ahash *tfm_async, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_tfm *tfm = crypto_ahash_tfm(tfm_async);
+ struct digest_alg *dalg = &tfm->__crt_alg->cra_digest;
+
+ crypto_ahash_clear_flags(tfm_async, CRYPTO_TFM_RES_MASK);
+ return dalg->dia_setkey(tfm, key, keylen);
+}
+
+static int digest_async_init(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct digest_alg *dalg = &tfm->__crt_alg->cra_digest;
+
+ dalg->dia_init(tfm);
+ return 0;
+}
+
+static int digest_async_update(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ update(&desc, req->src, req->nbytes);
+ return 0;
+}
+
+static int digest_async_final(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ final(&desc, req->result);
+ return 0;
+}
+
+static int digest_async_digest(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return digest(&desc, req->src, req->nbytes, req->result);
+}
+
+int crypto_init_digest_ops_async(struct crypto_tfm *tfm)
+{
+ struct ahash_tfm *crt = &tfm->crt_ahash;
+ struct digest_alg *dalg = &tfm->__crt_alg->cra_digest;
+
+ if (dalg->dia_digestsize > crypto_tfm_alg_blocksize(tfm))
+ return -EINVAL;
+
+ crt->init = digest_async_init;
+ crt->update = digest_async_update;
+ crt->final = digest_async_final;
+ crt->digest = digest_async_digest;
+ crt->setkey = dalg->dia_setkey ? digest_async_setkey :
+ digest_async_nosetkey;
+ crt->digestsize = dalg->dia_digestsize;
+ crt->base = __crypto_ahash_cast(tfm);
+
+ return 0;
+}
diff --git a/crypto/hash.c b/crypto/hash.c
index 7dcff67..f9400a0 100644
--- a/crypto/hash.c
+++ b/crypto/hash.c
@@ -59,24 +59,108 @@ static int hash_setkey(struct crypto_hash *crt, const u8 *key,
return alg->setkey(crt, key, keylen);
}

-static int crypto_init_hash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+static int hash_async_setkey(struct crypto_ahash *tfm_async, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_tfm *tfm = crypto_ahash_tfm(tfm_async);
+ struct crypto_hash *tfm_hash = __crypto_hash_cast(tfm);
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ return alg->setkey(tfm_hash, key, keylen);
+}
+
+static int hash_async_init(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->init(&desc);
+}
+
+static int hash_async_update(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->update(&desc, req->src, req->nbytes);
+}
+
+static int hash_async_final(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->final(&desc, req->result);
+}
+
+static int hash_async_digest(struct ahash_request *req)
+{
+ struct crypto_tfm *tfm = req->base.tfm;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+ struct hash_desc desc = {
+ .tfm = __crypto_hash_cast(tfm),
+ .flags = req->base.flags,
+ };
+
+ return alg->digest(&desc, req->src, req->nbytes, req->result);
+}
+
+static int crypto_init_hash_ops_async(struct crypto_tfm *tfm)
+{
+ struct ahash_tfm *crt = &tfm->crt_ahash;
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ crt->init = hash_async_init;
+ crt->update = hash_async_update;
+ crt->final = hash_async_final;
+ crt->digest = hash_async_digest;
+ crt->setkey = hash_async_setkey;
+ crt->digestsize = alg->digestsize;
+ crt->base = __crypto_ahash_cast(tfm);
+
+ return 0;
+}
+
+static int crypto_init_hash_ops_sync(struct crypto_tfm *tfm)
{
struct hash_tfm *crt = &tfm->crt_hash;
struct hash_alg *alg = &tfm->__crt_alg->cra_hash;

- if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
- return -EINVAL;
-
- crt->init = alg->init;
- crt->update = alg->update;
- crt->final = alg->final;
- crt->digest = alg->digest;
- crt->setkey = hash_setkey;
+ crt->init = alg->init;
+ crt->update = alg->update;
+ crt->final = alg->final;
+ crt->digest = alg->digest;
+ crt->setkey = hash_setkey;
crt->digestsize = alg->digestsize;

return 0;
}

+static int crypto_init_hash_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
+{
+ struct hash_alg *alg = &tfm->__crt_alg->cra_hash;
+
+ if (alg->digestsize > crypto_tfm_alg_blocksize(tfm))
+ return -EINVAL;
+
+ if ((mask & CRYPTO_ALG_TYPE_HASH_MASK) != CRYPTO_ALG_TYPE_HASH_MASK)
+ return crypto_init_hash_ops_async(tfm);
+ else
+ return crypto_init_hash_ops_sync(tfm);
+}
+
static void crypto_hash_show(struct seq_file *m, struct crypto_alg *alg)
__attribute__ ((unused));
static void crypto_hash_show(struct seq_file *m, struct crypto_alg *alg)
diff --git a/crypto/internal.h b/crypto/internal.h
index 32f4c21..683fcb2 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -86,6 +86,7 @@ struct crypto_alg *__crypto_alg_lookup(const char *name, u32 type, u32 mask);
struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask);

int crypto_init_digest_ops(struct crypto_tfm *tfm);
+int crypto_init_digest_ops_async(struct crypto_tfm *tfm);
int crypto_init_cipher_ops(struct crypto_tfm *tfm);
int crypto_init_compress_ops(struct crypto_tfm *tfm);

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 6beabc5..82919f9 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -110,22 +110,30 @@ static void test_hash(char *algo, struct hash_testvec *template,
unsigned int i, j, k, temp;
struct scatterlist sg[8];
char result[64];
- struct crypto_hash *tfm;
- struct hash_desc desc;
+ struct crypto_ahash *tfm;
+ struct ahash_request *req;
+ struct tcrypt_result tresult;
int ret;
void *hash_buff;

printk("\ntesting %s\n", algo);

- tfm = crypto_alloc_hash(algo, 0, CRYPTO_ALG_ASYNC);
+ init_completion(&tresult.completion);
+
+ tfm = crypto_alloc_ahash(algo, 0, 0);
if (IS_ERR(tfm)) {
printk("failed to load transform for %s: %ld\n", algo,
PTR_ERR(tfm));
return;
}

- desc.tfm = tfm;
- desc.flags = 0;
+ req = ahash_request_alloc(tfm, GFP_KERNEL);
+ if (!req) {
+ printk(KERN_ERR "failed to allocate request for %s\n", algo);
+ goto out_noreq;
+ }
+ ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ tcrypt_complete, &tresult);

for (i = 0; i < tcount; i++) {
printk("test %u:\n", i + 1);
@@ -139,8 +147,9 @@ static void test_hash(char *algo, struct hash_testvec *template,
sg_init_one(&sg[0], hash_buff, template[i].psize);

if (template[i].ksize) {
- ret = crypto_hash_setkey(tfm, template[i].key,
- template[i].ksize);
+ crypto_ahash_clear_flags(tfm, ~0);
+ ret = crypto_ahash_setkey(tfm, template[i].key,
+ template[i].ksize);
if (ret) {
printk("setkey() failed ret=%d\n", ret);
kfree(hash_buff);
@@ -148,17 +157,30 @@ static void test_hash(char *algo, struct hash_testvec *template,
}
}

- ret = crypto_hash_digest(&desc, sg, template[i].psize, result);
- if (ret) {
+ ahash_request_set_crypt(req, sg, result, template[i].psize);
+ ret = crypto_ahash_digest(req);
+ switch (ret) {
+ case 0:
+ break;
+ case -EINPROGRESS:
+ case -EBUSY:
+ ret = wait_for_completion_interruptible(
+ &tresult.completion);
+ if (!ret && !(ret = tresult.err)) {
+ INIT_COMPLETION(tresult.completion);
+ break;
+ }
+ /* fall through */
+ default:
printk("digest () failed ret=%d\n", ret);
kfree(hash_buff);
goto out;
}

- hexdump(result, crypto_hash_digestsize(tfm));
+ hexdump(result, crypto_ahash_digestsize(tfm));
printk("%s\n",
memcmp(result, template[i].digest,
- crypto_hash_digestsize(tfm)) ?
+ crypto_ahash_digestsize(tfm)) ?
"fail" : "pass");
kfree(hash_buff);
}
@@ -187,8 +209,9 @@ static void test_hash(char *algo, struct hash_testvec *template,
}

if (template[i].ksize) {
- ret = crypto_hash_setkey(tfm, template[i].key,
- template[i].ksize);
+ crypto_ahash_clear_flags(tfm, ~0);
+ ret = crypto_ahash_setkey(tfm, template[i].key,
+ template[i].ksize);

if (ret) {
printk("setkey() failed ret=%d\n", ret);
@@ -196,23 +219,38 @@ static void test_hash(char *algo, struct hash_testvec *template,
}
}

- ret = crypto_hash_digest(&desc, sg, template[i].psize,
- result);
- if (ret) {
+ ahash_request_set_crypt(req, sg, result,
+ template[i].psize);
+ ret = crypto_ahash_digest(req);
+ switch (ret) {
+ case 0:
+ break;
+ case -EINPROGRESS:
+ case -EBUSY:
+ ret = wait_for_completion_interruptible(
+ &tresult.completion);
+ if (!ret && !(ret = tresult.err)) {
+ INIT_COMPLETION(tresult.completion);
+ break;
+ }
+ /* fall through */
+ default:
printk("digest () failed ret=%d\n", ret);
goto out;
}

- hexdump(result, crypto_hash_digestsize(tfm));
+ hexdump(result, crypto_ahash_digestsize(tfm));
printk("%s\n",
memcmp(result, template[i].digest,
- crypto_hash_digestsize(tfm)) ?
+ crypto_ahash_digestsize(tfm)) ?
"fail" : "pass");
}
}

out:
- crypto_free_hash(tfm);
+ ahash_request_free(req);
+out_noreq:
+ crypto_free_ahash(tfm);
}

static void test_aead(char *algo, int enc, struct aead_testvec *template,
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 60d06e7..fef272a 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -98,6 +98,7 @@ extern const struct crypto_type crypto_ablkcipher_type;
extern const struct crypto_type crypto_aead_type;
extern const struct crypto_type crypto_blkcipher_type;
extern const struct crypto_type crypto_hash_type;
+extern const struct crypto_type crypto_ahash_type;

void crypto_mod_put(struct crypto_alg *alg);

@@ -314,5 +315,40 @@ static inline int crypto_requires_sync(u32 type, u32 mask)
return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
}

+static inline void *crypto_ahash_ctx(struct crypto_ahash *tfm)
+{
+ return crypto_tfm_ctx(&tfm->base);
+}
+
+static inline struct ahash_alg *crypto_ahash_alg(
+ struct crypto_ahash *tfm)
+{
+ return &crypto_ahash_tfm(tfm)->__crt_alg->cra_ahash;
+}
+
+static inline int ahash_enqueue_request(struct crypto_queue *queue,
+ struct ahash_request *request)
+{
+ return crypto_enqueue_request(queue, &request->base);
+}
+
+static inline struct ahash_request *ahash_dequeue_request(
+ struct crypto_queue *queue)
+{
+ return ahash_request_cast(crypto_dequeue_request(queue));
+}
+
+static inline void *ahash_request_ctx(struct ahash_request *req)
+{
+ return req->__ctx;
+}
+
+static inline int ahash_tfm_in_queue(struct crypto_queue *queue,
+ struct crypto_ahash *tfm)
+{
+ return crypto_tfm_in_queue(queue, crypto_ahash_tfm(tfm));
+}
+
+
#endif /* _CRYPTO_ALGAPI_H */

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 425824b..2446428 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -30,15 +30,17 @@
*/
#define CRYPTO_ALG_TYPE_MASK 0x0000000f
#define CRYPTO_ALG_TYPE_CIPHER 0x00000001
-#define CRYPTO_ALG_TYPE_DIGEST 0x00000002
-#define CRYPTO_ALG_TYPE_HASH 0x00000003
+#define CRYPTO_ALG_TYPE_COMPRESS 0x00000002
+#define CRYPTO_ALG_TYPE_AEAD 0x00000003
#define CRYPTO_ALG_TYPE_BLKCIPHER 0x00000004
#define CRYPTO_ALG_TYPE_ABLKCIPHER 0x00000005
#define CRYPTO_ALG_TYPE_GIVCIPHER 0x00000006
-#define CRYPTO_ALG_TYPE_COMPRESS 0x00000008
-#define CRYPTO_ALG_TYPE_AEAD 0x00000009
+#define CRYPTO_ALG_TYPE_DIGEST 0x00000008
+#define CRYPTO_ALG_TYPE_HASH 0x00000009
+#define CRYPTO_ALG_TYPE_AHASH 0x0000000a

#define CRYPTO_ALG_TYPE_HASH_MASK 0x0000000e
+#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000c
#define CRYPTO_ALG_TYPE_BLKCIPHER_MASK 0x0000000c

#define CRYPTO_ALG_LARVAL 0x00000010
@@ -102,6 +104,7 @@ struct crypto_async_request;
struct crypto_aead;
struct crypto_blkcipher;
struct crypto_hash;
+struct crypto_ahash;
struct crypto_tfm;
struct crypto_type;
struct aead_givcrypt_request;
@@ -131,6 +134,16 @@ struct ablkcipher_request {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};

+struct ahash_request {
+ struct crypto_async_request base;
+
+ unsigned int nbytes;
+ struct scatterlist *src;
+ u8 *result;
+
+ void *__ctx[] CRYPTO_MINALIGN_ATTR;
+};
+
/**
* struct aead_request - AEAD request
* @base: Common attributes for async crypto requests
@@ -195,6 +208,17 @@ struct ablkcipher_alg {
unsigned int ivsize;
};

+struct ahash_alg {
+ int (*init)(struct ahash_request *req);
+ int (*update)(struct ahash_request *req);
+ int (*final)(struct ahash_request *req);
+ int (*digest)(struct ahash_request *req);
+ int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
+ unsigned int digestsize;
+};
+
struct aead_alg {
int (*setkey)(struct crypto_aead *tfm, const u8 *key,
unsigned int keylen);
@@ -272,6 +296,7 @@ struct compress_alg {
#define cra_cipher cra_u.cipher
#define cra_digest cra_u.digest
#define cra_hash cra_u.hash
+#define cra_ahash cra_u.ahash
#define cra_compress cra_u.compress

struct crypto_alg {
@@ -298,6 +323,7 @@ struct crypto_alg {
struct cipher_alg cipher;
struct digest_alg digest;
struct hash_alg hash;
+ struct ahash_alg ahash;
struct compress_alg compress;
} cra_u;

@@ -383,6 +409,19 @@ struct hash_tfm {
unsigned int digestsize;
};

+struct ahash_tfm {
+ int (*init)(struct ahash_request *req);
+ int (*update)(struct ahash_request *req);
+ int (*final)(struct ahash_request *req);
+ int (*digest)(struct ahash_request *req);
+ int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
+ unsigned int digestsize;
+ struct crypto_ahash *base;
+ unsigned int reqsize;
+};
+
struct compress_tfm {
int (*cot_compress)(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
@@ -397,6 +436,7 @@ struct compress_tfm {
#define crt_blkcipher crt_u.blkcipher
#define crt_cipher crt_u.cipher
#define crt_hash crt_u.hash
+#define crt_ahash crt_u.ahash
#define crt_compress crt_u.compress

struct crypto_tfm {
@@ -409,6 +449,7 @@ struct crypto_tfm {
struct blkcipher_tfm blkcipher;
struct cipher_tfm cipher;
struct hash_tfm hash;
+ struct ahash_tfm ahash;
struct compress_tfm compress;
} crt_u;

@@ -441,6 +482,10 @@ struct crypto_hash {
struct crypto_tfm base;
};

+struct crypto_ahash {
+ struct crypto_tfm base;
+};
+
enum {
CRYPTOA_UNSPEC,
CRYPTOA_ALG,
@@ -1264,5 +1309,137 @@ static inline int crypto_comp_decompress(struct crypto_comp *tfm,
src, slen, dst, dlen);
}

+static inline struct crypto_ahash *__crypto_ahash_cast(struct crypto_tfm *tfm)
+{
+ return (struct crypto_ahash *)tfm;
+}
+
+static inline struct crypto_ahash *crypto_alloc_ahash(const char *alg_name,
+ u32 type, u32 mask)
+{
+ type &= ~CRYPTO_ALG_TYPE_MASK;
+ mask &= ~CRYPTO_ALG_TYPE_MASK;
+ type |= CRYPTO_ALG_TYPE_AHASH;
+ mask |= CRYPTO_ALG_TYPE_AHASH_MASK;
+
+ return __crypto_ahash_cast(crypto_alloc_base(alg_name, type, mask));
+}
+
+static inline struct crypto_tfm *crypto_ahash_tfm(struct crypto_ahash *tfm)
+{
+ return &tfm->base;
+}
+
+static inline void crypto_free_ahash(struct crypto_ahash *tfm)
+{
+ crypto_free_tfm(crypto_ahash_tfm(tfm));
+}
+
+static inline unsigned int crypto_ahash_alignmask(
+ struct crypto_ahash *tfm)
+{
+ return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm));
+}
+
+static inline struct ahash_tfm *crypto_ahash_crt(struct crypto_ahash *tfm)
+{
+ return &crypto_ahash_tfm(tfm)->crt_ahash;
+}
+
+static inline unsigned int crypto_ahash_digestsize(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_crt(tfm)->digestsize;
+}
+
+static inline u32 crypto_ahash_get_flags(struct crypto_ahash *tfm)
+{
+ return crypto_tfm_get_flags(crypto_ahash_tfm(tfm));
+}
+
+static inline void crypto_ahash_set_flags(struct crypto_ahash *tfm, u32 flags)
+{
+ crypto_tfm_set_flags(crypto_ahash_tfm(tfm), flags);
+}
+
+static inline void crypto_ahash_clear_flags(struct crypto_ahash *tfm, u32 flags)
+{
+ crypto_tfm_clear_flags(crypto_ahash_tfm(tfm), flags);
+}
+
+static inline struct crypto_ahash *crypto_ahash_reqtfm(
+ struct ahash_request *req)
+{
+ return __crypto_ahash_cast(req->base.tfm);
+}
+
+static inline unsigned int crypto_ahash_reqsize(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_crt(tfm)->reqsize;
+}
+
+static inline int crypto_ahash_setkey(struct crypto_ahash *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ struct ahash_tfm *crt = crypto_ahash_crt(tfm);
+
+ return crt->setkey(crt->base, key, keylen);
+}
+
+static inline int crypto_ahash_digest(struct ahash_request *req)
+{
+ struct ahash_tfm *crt = crypto_ahash_crt(crypto_ahash_reqtfm(req));
+ return crt->digest(req);
+}
+
+static inline void ahash_request_set_tfm(struct ahash_request *req,
+ struct crypto_ahash *tfm)
+{
+ req->base.tfm = crypto_ahash_tfm(crypto_ahash_crt(tfm)->base);
+}
+
+static inline struct ahash_request *ahash_request_alloc(
+ struct crypto_ahash *tfm, gfp_t gfp)
+{
+ struct ahash_request *req;
+
+ req = kmalloc(sizeof(struct ahash_request) +
+ crypto_ahash_reqsize(tfm), gfp);
+
+ if (likely(req))
+ ahash_request_set_tfm(req, tfm);
+
+ return req;
+}
+
+static inline void ahash_request_free(struct ahash_request *req)
+{
+ kfree(req);
+}
+
+static inline struct ahash_request *ahash_request_cast(
+ struct crypto_async_request *req)
+{
+ return container_of(req, struct ahash_request, base);
+}
+
+static inline void ahash_request_set_callback(struct ahash_request *req,
+ u32 flags,
+ crypto_completion_t complete,
+ void *data)
+{
+ req->base.complete = complete;
+ req->base.data = data;
+ req->base.flags = flags;
+}
+
+static inline void ahash_request_set_crypt(struct ahash_request *req,
+ struct scatterlist *src, u8 *result,
+ unsigned int nbytes)
+{
+ req->src = src;
+ req->nbytes = nbytes;
+ req->result = result;
+}
+
#endif /* _LINUX_CRYPTO_H */


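For the provider side, the following sketches how a hardware driver might
register against the new ahash type, mirroring what cryptd_alloc_hash() sets
up above (CRYPTO_ALG_TYPE_AHASH, crypto_ahash_type, the cra_ahash ops). Every
myhw_* name is hypothetical and the operation bodies are placeholders; a real
driver would normally return -EINPROGRESS from them and call
req->base.complete() once the engine finishes.

/*
 * Hypothetical hardware ahash provider skeleton (not part of the patch).
 * The myhw_* names are made up and the handler bodies are placeholders;
 * only the registration shape follows the merged patch above.
 */
#include <crypto/algapi.h>
#include <linux/crypto.h>
#include <linux/module.h>

struct myhw_hash_ctx {
	void *hw;		/* per-tfm handle to the engine (placeholder) */
};

static int myhw_hash_init(struct ahash_request *req)
{
	return 0;		/* would reset the per-request hash state */
}

static int myhw_hash_update(struct ahash_request *req)
{
	return 0;		/* would feed req->src / req->nbytes to the engine */
}

static int myhw_hash_final(struct ahash_request *req)
{
	return 0;		/* would write the digest into req->result */
}

static int myhw_hash_digest(struct ahash_request *req)
{
	return 0;		/* one-shot init + update + final */
}

static int myhw_hash_setkey(struct crypto_ahash *tfm, const u8 *key,
			    unsigned int keylen)
{
	return 0;		/* plain SHA-1 takes no key */
}

static struct crypto_alg myhw_sha1_alg = {
	.cra_name		= "sha1",
	.cra_driver_name	= "sha1-myhw",
	.cra_priority		= 300,
	.cra_flags		= CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC,
	.cra_blocksize		= 64,	/* SHA-1 block size */
	.cra_ctxsize		= sizeof(struct myhw_hash_ctx),
	.cra_type		= &crypto_ahash_type,
	.cra_module		= THIS_MODULE,
	.cra_u			= {
		.ahash = {
			.digestsize	= 20,	/* SHA-1 digest size */
			.init		= myhw_hash_init,
			.update		= myhw_hash_update,
			.final		= myhw_hash_final,
			.digest		= myhw_hash_digest,
			.setkey		= myhw_hash_setkey,
		},
	},
};

static int __init myhw_mod_init(void)
{
	return crypto_register_alg(&myhw_sha1_alg);
}

static void __exit myhw_mod_exit(void)
{
	crypto_unregister_alg(&myhw_sha1_alg);
}

module_init(myhw_mod_init);
module_exit(myhw_mod_exit);
MODULE_LICENSE("GPL");
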
2008-05-07 13:09:48

by Patrick McHardy

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

Herbert Xu wrote:
> On Tue, Apr 22, 2008 at 10:23:23AM -0700, Loc Ho wrote:
>> They are attached instead of inlined until I figure out how to stop the
>> mailer from wrapping long lines.
>
> Thanks. I've finally finished testing and integrating it. I've
> merged them into one patch and dropped unrelated white-space
> changes. You can resubmit those in a separate patch.


Cool, I started adding hashing support to HIFN until I noticed
the CryptoAPI doesn't support async hashing yet :)

2008-05-07 13:16:03

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Wed, May 07, 2008 at 03:09:14PM +0200, Patrick McHardy wrote:
>
> Cool, I started adding hashing support to HIFN until I noticed
> the CryptoAPI doesn't support async hashing yet :)


The other really cool thing about Loc's new interface is that
it'll let us have hash tfm objects that are reentrant. So we
won't need to hold a spin lock around hash operations in IPsec
anymore. Well, once we convert all the algorithms across, that
is :)

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
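
The reentrancy Herbert describes comes from keeping every piece of mutable
state in the request rather than in the tfm. A driver-side sketch of that
pattern follows; the myhw_* names are hypothetical, but the reqsize and
ahash_request_ctx() mechanism is exactly what cryptd uses in the merged patch.

/*
 * Per-request hash state sketch (not from the patch).  Because the tfm
 * carries no mutable state, any number of requests can be in flight on
 * the same tfm at once, with no per-tfm spin lock.
 */
#include <crypto/algapi.h>
#include <linux/string.h>

struct myhw_hash_req_ctx {
	u32 state[8];		/* running digest state for this request only */
	unsigned int count;	/* bytes consumed so far by this request */
};

/* Would be wired up as .cra_init in the driver's struct crypto_alg. */
static int myhw_hash_cra_init(struct crypto_tfm *tfm)
{
	/* Reserve our context behind every ahash_request allocated for
	 * this tfm; cryptd_hash_init_tfm() does the same thing. */
	tfm->crt_ahash.reqsize = sizeof(struct myhw_hash_req_ctx);
	return 0;
}

static int myhw_hash_init(struct ahash_request *req)
{
	struct myhw_hash_req_ctx *rctx = ahash_request_ctx(req);

	memset(rctx, 0, sizeof(*rctx));
	/* program the engine from rctx; nothing in the tfm is written */
	return 0;
}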

2008-05-13 20:27:51

by Loc Ho

[permalink] [raw]
Subject: [PATCH 1/3] [CRYPTO] hash: Add Async Hash Support

Hi,

Patch 1/3 attached. I still can't get the line wrap problem solved.
Therefore it is attached instead of inlined.

Please note:
1. ahash_request has an extra member, 'void *info', for future use with
Kasumi.
2. Is there any particular reason for removing ahash_nosetkey from
crypto_init_ahash_ops? It would serve as an error check in case a caller
invoked setkey on a digest algorithm.

-Loc

-----Original Message-----
From: Herbert Xu [mailto:[email protected]]
Sent: Wednesday, May 07, 2008 6:01 AM
To: Loc Ho
Cc: [email protected]
Subject: Re: [PATCH 1/1] CryptoAPI: Add Async Hash Support

On Tue, Apr 22, 2008 at 10:23:23AM -0700, Loc Ho wrote:
>
> They are attached instead of inlined until I figure out how to stop the
> mailer from wrapping long lines.

Thanks. I've finally finished testing and integrating it. I've
merged them into one patch and dropped unrelated white-space
changes. You can resubmit those in a separate patch.

Could you please recheck this and if everything looks OK then
send it to me again with a sign-off? Oh and if you don't mind
please split the tcrypt and cryptd parts into separate patches
since they're conceptually distinct from the API bits.

Finally a proper description before each patch would be nice :)

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Attachments:
0001-CRYPTO-hash-Add-asynchronous-hash-support.patch (19.52 kB)

2008-05-14 12:40:42

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/3] [CRYPTO] hash: Add Async Hash Support

On Tue, May 13, 2008 at 01:27:27PM -0700, Loc Ho wrote:
>
> Please note:
> 1. ahash_request has an extra member, 'void *info', for future use with
> Kasumi.

That's fine. We can change this structure at will.

> 2. Is there any particular reason for removing ahash_nosetkey from
> crypto_init_ahash_ops? It would serve as an error check in case a caller
> invoked setkey on a digest algorithm.

All ahash algorithms should provide a setkey function just like hash.
Digest algorithms should never come down this way. When we convert
digest algorithms over to ahash we should add setkey functions.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2008-05-14 12:44:30

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/3] [CRYPTO] hash: Add Async Hash Support

On Wed, May 14, 2008 at 08:40:39PM +0800, Herbert Xu wrote:
>
> All ahash algorithms should provide a setkey function just like hash.
> Digest algorithms should never come down this way. When we convert
> digest algorithms over to ahash we should add setkey functions.

Oh, and for the vast majority of them we can create a common
default setkey function that they can put in their function tables.

The NULL setkey check in digest was there so that the existing
algorithms didn't have to be modified. The ahash type does not
have this problem because there are no existing ahash algorithms.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
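
As a rough idea of what such a common default might look like, here is a
sketch modelled on digest_async_nosetkey() in the merged patch; the name and
where it would live are assumptions, not anything posted in this thread.

/*
 * Hypothetical shared setkey fallback for unkeyed ahash algorithms,
 * modelled on digest_async_nosetkey() above.  Keyed algorithms would
 * still provide their own setkey.
 */
#include <linux/crypto.h>
#include <linux/errno.h>

static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
			  unsigned int keylen)
{
	crypto_ahash_clear_flags(tfm, CRYPTO_TFM_RES_MASK);
	return -ENOSYS;
}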

2008-05-14 13:26:46

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/3] [CRYPTO] hash: Add Async Hash Support

On Tue, May 13, 2008 at 01:27:27PM -0700, Loc Ho wrote:
> Hi,
>
> Patch 1/3 attached. I still can't get the line wrap problem solved.
> Therefore it is attached instead of inlined.

All applied. Thanks a lot!
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt