2009-07-16 11:13:10

by Steffen Klassert

Subject: [RFC] [PATCH 0/7] IPsec: convert to ahash

This patchset converts IPsec over to the new ahash interface.
The patchset applies to cryptodev-2.6. I was able to test the synchronous
codepaths; the asynchronous ones are untested.

I'm still somewhat unhappy with the ahash version of authenc, but I decided
to post anyway as a base for discussion.

Since the calls to the hash algorithms can now complete asynchronously,
I'd like to avoid multiple calls to the hash update functions and rather
do all the hashing with one call to crypto_ahash_digest(). As it is, this
requires chaining all the involved scatterlists. Since we still can't use
sg_chain() to chain up the lists, I added an additional scatterlist entry
to the scatterlist of the assoc data (esp) to be able to chain later in
the crypto layer. To keep compatibility I set the termination bit at the
first entry and remove it later in authenc. In fact, relying on this
additional entry and just removing the termination bit later makes me a
bit nervous, and I'm not sure whether this is acceptable, so better ideas
are very welcome.
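
For illustration, the esp side of the trick looks roughly like this
sketch (the real hunks are in patch 1), where asg is the scatterlist
for the assoc data:

	/*
	 * Reserve two entries, but terminate the list at the first
	 * one so existing users still see a one-entry list. authenc
	 * later clears the termination bit and uses the second entry
	 * to chain the remaining scatterlists.
	 */
	sg_init_table(asg, 2);
	sg_mark_end(asg);
	sg_set_buf(asg, esph, sizeof(*esph));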

Steffen



2009-07-16 11:13:50

by Steffen Klassert

Subject: [RFC] [PATCH 1/7] esp: Add an additional scatterlist entry for the assoc data

To be able to chain all the scatterlists we add an additional
scatterlist entry to the scatterlist of the associated data.
To keep compatibility we set the termination bit at the first
entry. This can be reverted as soon as we can use sg_chain().

Signed-off-by: Steffen Klassert <[email protected]>
---
net/ipv4/esp4.c | 23 +++++++++++++++++------
net/ipv6/esp6.c | 25 +++++++++++++++++++------
2 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 18bb383..dbb1a33 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -139,14 +139,14 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
goto error;
nfrags = err;

- tmp = esp_alloc_tmp(aead, nfrags + 1);
+ tmp = esp_alloc_tmp(aead, nfrags + 2);
if (!tmp)
goto error;

iv = esp_tmp_iv(aead, tmp);
req = esp_tmp_givreq(aead, iv);
asg = esp_givreq_sg(aead, req);
- sg = asg + 1;
+ sg = asg + 2;

/* Fill padding... */
tail = skb_tail_pointer(trailer);
@@ -205,7 +205,16 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
skb_to_sgvec(skb, sg,
esph->enc_data + crypto_aead_ivsize(aead) - skb->data,
clen + alen);
- sg_init_one(asg, esph, sizeof(*esph));
+
+ /*
+ * We add an additional scatterlist entry to be able to chain up
+ * the scatterlists in the crypto layer. To keep compatibility we
+ * set the termination bit at the first entry. This can be removed
+ * as soon as all architectures support scatterlist chaining.
+ */
+ sg_init_table(asg, 2);
+ sg_mark_end(asg);
+ sg_set_buf(asg, esph, sizeof(*esph));

aead_givcrypt_set_callback(req, 0, esp_output_done, skb);
aead_givcrypt_set_crypt(req, sg, sg, clen, iv);
@@ -347,7 +356,7 @@ static int esp_input(struct xfrm_state *x, struct sk_buff *skb)
nfrags = err;

err = -ENOMEM;
- tmp = esp_alloc_tmp(aead, nfrags + 1);
+ tmp = esp_alloc_tmp(aead, nfrags + 2);
if (!tmp)
goto out;

@@ -355,7 +364,7 @@ static int esp_input(struct xfrm_state *x, struct sk_buff *skb)
iv = esp_tmp_iv(aead, tmp);
req = esp_tmp_req(aead, iv);
asg = esp_req_sg(aead, req);
- sg = asg + 1;
+ sg = asg + 2;

skb->ip_summed = CHECKSUM_NONE;

@@ -366,7 +375,9 @@ static int esp_input(struct xfrm_state *x, struct sk_buff *skb)

sg_init_table(sg, nfrags);
skb_to_sgvec(skb, sg, sizeof(*esph) + crypto_aead_ivsize(aead), elen);
- sg_init_one(asg, esph, sizeof(*esph));
+ sg_init_table(asg, 2);
+ sg_mark_end(asg);
+ sg_set_buf(asg, esph, sizeof(*esph));

aead_request_set_callback(req, 0, esp_input_done, skb);
aead_request_set_crypt(req, sg, sg, elen, iv);
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 678bb95..6ba707a 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -163,14 +163,14 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
goto error;
nfrags = err;

- tmp = esp_alloc_tmp(aead, nfrags + 1);
+ tmp = esp_alloc_tmp(aead, nfrags + 2);
if (!tmp)
goto error;

iv = esp_tmp_iv(aead, tmp);
req = esp_tmp_givreq(aead, iv);
asg = esp_givreq_sg(aead, req);
- sg = asg + 1;
+ sg = asg + 2;

/* Fill padding... */
tail = skb_tail_pointer(trailer);
@@ -194,7 +194,17 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
skb_to_sgvec(skb, sg,
esph->enc_data + crypto_aead_ivsize(aead) - skb->data,
clen + alen);
- sg_init_one(asg, esph, sizeof(*esph));
+
+
+ /*
+ * We add an additional scatterlist entry to be able to chain up
+ * the scatterlists in the crypto layer. To keep compatibility we
+ * set the termination bit at the first entry. This can be removed
+ * as soon as all architectures support scatterlist chaining.
+ */
+ sg_init_table(asg, 2);
+ sg_mark_end(asg);
+ sg_set_buf(asg, esph, sizeof(*esph));

aead_givcrypt_set_callback(req, 0, esp_output_done, skb);
aead_givcrypt_set_crypt(req, sg, sg, clen, iv);
@@ -298,7 +308,7 @@ static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)
}

ret = -ENOMEM;
- tmp = esp_alloc_tmp(aead, nfrags + 1);
+ tmp = esp_alloc_tmp(aead, nfrags + 2);
if (!tmp)
goto out;

@@ -306,7 +316,7 @@ static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)
iv = esp_tmp_iv(aead, tmp);
req = esp_tmp_req(aead, iv);
asg = esp_req_sg(aead, req);
- sg = asg + 1;
+ sg = asg + 2;

skb->ip_summed = CHECKSUM_NONE;

@@ -317,7 +327,10 @@ static int esp6_input(struct xfrm_state *x, struct sk_buff *skb)

sg_init_table(sg, nfrags);
skb_to_sgvec(skb, sg, sizeof(*esph) + crypto_aead_ivsize(aead), elen);
- sg_init_one(asg, esph, sizeof(*esph));
+
+ sg_init_table(asg, 2);
+ sg_mark_end(asg);
+ sg_set_buf(asg, esph, sizeof(*esph));

aead_request_set_callback(req, 0, esp_input_done, skb);
aead_request_set_crypt(req, sg, sg, elen, iv);
--
1.5.4.2


2009-07-16 11:15:08

by Steffen Klassert

Subject: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

This patch converts authenc to the new ahash interface.

Signed-off-by: Steffen Klassert <[email protected]>
---
crypto/authenc.c | 215 +++++++++++++++++++++++++++++++++++++----------------
1 files changed, 150 insertions(+), 65 deletions(-)

diff --git a/crypto/authenc.c b/crypto/authenc.c
index 2e16ce0..bd456e3 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -24,23 +24,29 @@
#include <linux/spinlock.h>

struct authenc_instance_ctx {
- struct crypto_spawn auth;
+ struct crypto_ahash_spawn auth;
struct crypto_skcipher_spawn enc;
};

struct crypto_authenc_ctx {
- spinlock_t auth_lock;
- struct crypto_hash *auth;
+ unsigned int reqoff;
+ struct crypto_ahash *auth;
struct crypto_ablkcipher *enc;
};

+struct authenc_ahash_request_ctx {
+ struct scatterlist sg[2];
+ crypto_completion_t complete;
+ char tail[];
+};
+
static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,
unsigned int keylen)
{
unsigned int authkeylen;
unsigned int enckeylen;
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct crypto_hash *auth = ctx->auth;
+ struct crypto_ahash *auth = ctx->auth;
struct crypto_ablkcipher *enc = ctx->enc;
struct rtattr *rta = (void *)key;
struct crypto_authenc_key_param *param;
@@ -64,11 +70,11 @@ static int crypto_authenc_setkey(struct crypto_aead *authenc, const u8 *key,

authkeylen = keylen - enckeylen;

- crypto_hash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
- crypto_hash_set_flags(auth, crypto_aead_get_flags(authenc) &
+ crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
+ crypto_ahash_set_flags(auth, crypto_aead_get_flags(authenc) &
CRYPTO_TFM_REQ_MASK);
- err = crypto_hash_setkey(auth, key, authkeylen);
- crypto_aead_set_flags(authenc, crypto_hash_get_flags(auth) &
+ err = crypto_ahash_setkey(auth, key, authkeylen);
+ crypto_aead_set_flags(authenc, crypto_ahash_get_flags(auth) &
CRYPTO_TFM_RES_MASK);

if (err)
@@ -92,6 +98,13 @@ badkey:
static void authenc_chain(struct scatterlist *head, struct scatterlist *sg,
int chain)
{
+ /*
+ * head must be a scatterlist with two entries.
+ * We remove a potentially set termination bit
+ * on the first entry.
+ */
+ head->page_link &= ~0x02;
+
if (chain) {
head->length += sg->length;
sg = scatterwalk_sg_next(sg);
@@ -103,40 +116,85 @@ static void authenc_chain(struct scatterlist *head, struct scatterlist *sg,
sg_mark_end(head);
}

-static u8 *crypto_authenc_hash(struct aead_request *req, unsigned int flags,
- struct scatterlist *cipher,
- unsigned int cryptlen)
+static void crypto_authenc_ahash_geniv_done(
+ struct crypto_async_request *areq, int err)
{
+ struct aead_request *req = areq->data;
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- struct crypto_hash *auth = ctx->auth;
- struct hash_desc desc = {
- .tfm = auth,
- .flags = aead_request_flags(req) & flags,
- };
- u8 *hash = aead_request_ctx(req);
- int err;
-
- hash = (u8 *)ALIGN((unsigned long)hash + crypto_hash_alignmask(auth),
- crypto_hash_alignmask(auth) + 1);
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(ahreq_ctx->tail + ctx->reqoff);

- spin_lock_bh(&ctx->auth_lock);
- err = crypto_hash_init(&desc);
if (err)
- goto auth_unlock;
+ goto out;
+
+ scatterwalk_map_and_copy(ahreq->result, ahreq->src, ahreq->nbytes,
+ crypto_aead_authsize(authenc), 1);
+
+out:
+ aead_request_complete(req, err);
+}
+
+static void crypto_authenc_ahash_verify_done(
+ struct crypto_async_request *areq, int err)
+{
+ u8 *ihash;
+ unsigned int authsize;
+ struct ablkcipher_request *abreq;
+ struct aead_request *req = areq->data;
+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(ahreq_ctx->tail + ctx->reqoff);

- err = crypto_hash_update(&desc, req->assoc, req->assoclen);
if (err)
- goto auth_unlock;
+ goto out;
+
+ authsize = crypto_aead_authsize(authenc);
+ ihash = ahreq->result + authsize;
+ scatterwalk_map_and_copy(ihash, ahreq->src, ahreq->nbytes, authsize, 0);
+ err = memcmp(ihash, ahreq->result, authsize) ? -EBADMSG : 0;

- err = crypto_hash_update(&desc, cipher, cryptlen);
if (err)
- goto auth_unlock;
+ goto out;

- err = crypto_hash_final(&desc, hash);
-auth_unlock:
- spin_unlock_bh(&ctx->auth_lock);
+ abreq = aead_request_ctx(req);
+ ablkcipher_request_set_tfm(abreq, ctx->enc);
+ ablkcipher_request_set_callback(abreq, aead_request_flags(req),
+ req->base.complete, req->base.data);
+ ablkcipher_request_set_crypt(abreq, req->src, req->dst,
+ req->cryptlen, req->iv);
+
+ err = crypto_ablkcipher_decrypt(abreq);
+
+ if (err == -EINPROGRESS)
+ return;
+
+out:
+ aead_request_complete(req, err);
+}
+
+static u8 *crypto_authenc_ahash(struct aead_request *req, unsigned int flags,
+ struct scatterlist *cipher,
+ unsigned int cryptlen)
+{
+ struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
+ struct crypto_ahash *auth = ctx->auth;
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
+ struct ahash_request *ahreq = (void *)(ahreq_ctx->tail + ctx->reqoff);
+ u8 *hash = ahreq_ctx->tail;
+ int err;
+
+ hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
+ crypto_ahash_alignmask(auth) + 1);
+
+ ahash_request_set_tfm(ahreq, auth);
+ ahash_request_set_crypt(ahreq, cipher, hash, cryptlen);
+ ahash_request_set_callback(ahreq, aead_request_flags(req) & flags,
+ ahreq_ctx->complete, req);

+ err = crypto_ahash_digest(ahreq);
if (err)
return ERR_PTR(err);

@@ -147,8 +205,9 @@ static int crypto_authenc_genicv(struct aead_request *req, u8 *iv,
unsigned int flags)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
struct scatterlist *dst = req->dst;
- struct scatterlist cipher[2];
+ struct scatterlist *cipher = ahreq_ctx->sg;
struct page *dstp;
unsigned int ivsize = crypto_aead_ivsize(authenc);
unsigned int cryptlen;
@@ -165,8 +224,13 @@ static int crypto_authenc_genicv(struct aead_request *req, u8 *iv,
dst = cipher;
}

- cryptlen = req->cryptlen + ivsize;
- hash = crypto_authenc_hash(req, flags, dst, cryptlen);
+ authenc_chain(req->assoc, dst, 0);
+ dst = req->assoc;
+
+ cryptlen = req->cryptlen + ivsize + req->assoclen;
+ ahreq_ctx->complete = crypto_authenc_ahash_geniv_done;
+
+ hash = crypto_authenc_ahash(req, flags, dst, cryptlen);
if (IS_ERR(hash))
return PTR_ERR(hash);

@@ -188,6 +252,9 @@ static void crypto_authenc_encrypt_done(struct crypto_async_request *req,
crypto_ablkcipher_reqsize(ctx->enc);

err = crypto_authenc_genicv(areq, iv, 0);
+
+ if (err == -EINPROGRESS)
+ return;
}

aead_request_complete(areq, err);
@@ -227,6 +294,9 @@ static void crypto_authenc_givencrypt_done(struct crypto_async_request *req,
struct skcipher_givcrypt_request *greq = aead_request_ctx(areq);

err = crypto_authenc_genicv(areq, greq->giv, 0);
+
+ if (err == -EINPROGRESS)
+ return;
}

aead_request_complete(areq, err);
@@ -260,12 +330,15 @@ static int crypto_authenc_verify(struct aead_request *req,
unsigned int cryptlen)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
u8 *ohash;
u8 *ihash;
unsigned int authsize;

- ohash = crypto_authenc_hash(req, CRYPTO_TFM_REQ_MAY_SLEEP, cipher,
- cryptlen);
+ ahreq_ctx->complete = crypto_authenc_ahash_verify_done;
+
+ ohash = crypto_authenc_ahash(req, CRYPTO_TFM_REQ_MAY_SLEEP, cipher,
+ cryptlen);
if (IS_ERR(ohash))
return PTR_ERR(ohash);

@@ -279,8 +352,9 @@ static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
unsigned int cryptlen)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
+ struct authenc_ahash_request_ctx *ahreq_ctx = aead_request_ctx(req);
struct scatterlist *src = req->src;
- struct scatterlist cipher[2];
+ struct scatterlist *cipher = ahreq_ctx->sg;
struct page *srcp;
unsigned int ivsize = crypto_aead_ivsize(authenc);
u8 *vsrc;
@@ -295,7 +369,11 @@ static int crypto_authenc_iverify(struct aead_request *req, u8 *iv,
src = cipher;
}

- return crypto_authenc_verify(req, src, cryptlen + ivsize);
+ authenc_chain(req->assoc, src, 0);
+ src = req->assoc;
+
+ cryptlen += ivsize + req->assoclen;
+ return crypto_authenc_verify(req, src, cryptlen);
}

static int crypto_authenc_decrypt(struct aead_request *req)
@@ -326,38 +404,41 @@ static int crypto_authenc_decrypt(struct aead_request *req)

static int crypto_authenc_init_tfm(struct crypto_tfm *tfm)
{
- struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
struct authenc_instance_ctx *ictx = crypto_instance_ctx(inst);
struct crypto_authenc_ctx *ctx = crypto_tfm_ctx(tfm);
- struct crypto_hash *auth;
+ struct crypto_ahash *auth;
struct crypto_ablkcipher *enc;
int err;

- auth = crypto_spawn_hash(&ictx->auth);
+ auth = crypto_spawn_ahash(&ictx->auth);
if (IS_ERR(auth))
return PTR_ERR(auth);

+ ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth) +
+ crypto_ahash_alignmask(auth),
+ crypto_ahash_alignmask(auth) + 1);
+
enc = crypto_spawn_skcipher(&ictx->enc);
err = PTR_ERR(enc);
if (IS_ERR(enc))
- goto err_free_hash;
+ goto err_free_ahash;

ctx->auth = auth;
ctx->enc = enc;
+
tfm->crt_aead.reqsize = max_t(unsigned int,
- (crypto_hash_alignmask(auth) &
- ~(crypto_tfm_ctx_alignment() - 1)) +
- crypto_hash_digestsize(auth) * 2,
- sizeof(struct skcipher_givcrypt_request) +
- crypto_ablkcipher_reqsize(enc) +
- crypto_ablkcipher_ivsize(enc));
-
- spin_lock_init(&ctx->auth_lock);
+ crypto_ahash_reqsize(auth) + ctx->reqoff +
+ sizeof(struct authenc_ahash_request_ctx) +
+ sizeof(struct ahash_request),
+ sizeof(struct skcipher_givcrypt_request) +
+ crypto_ablkcipher_reqsize(enc) +
+ crypto_ablkcipher_ivsize(enc));

return 0;

-err_free_hash:
- crypto_free_hash(auth);
+err_free_ahash:
+ crypto_free_ahash(auth);
return err;
}

@@ -365,7 +446,7 @@ static void crypto_authenc_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_authenc_ctx *ctx = crypto_tfm_ctx(tfm);

- crypto_free_hash(ctx->auth);
+ crypto_free_ahash(ctx->auth);
crypto_free_ablkcipher(ctx->enc);
}

@@ -373,7 +454,8 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
{
struct crypto_attr_type *algt;
struct crypto_instance *inst;
- struct crypto_alg *auth;
+ struct hash_alg_common *auth;
+ struct crypto_alg *auth_base;
struct crypto_alg *enc;
struct authenc_instance_ctx *ctx;
const char *enc_name;
@@ -387,11 +469,13 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
if ((algt->type ^ CRYPTO_ALG_TYPE_AEAD) & algt->mask)
return ERR_PTR(-EINVAL);

- auth = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
- CRYPTO_ALG_TYPE_HASH_MASK);
+ auth = ahash_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH,
+ CRYPTO_ALG_TYPE_AHASH_MASK);
if (IS_ERR(auth))
return ERR_PTR(PTR_ERR(auth));

+ auth_base = &auth->base;
+
enc_name = crypto_attr_alg_name(tb[2]);
err = PTR_ERR(enc_name);
if (IS_ERR(enc_name))
@@ -404,7 +488,7 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)

ctx = crypto_instance_ctx(inst);

- err = crypto_init_spawn(&ctx->auth, auth, inst, CRYPTO_ALG_TYPE_MASK);
+ err = crypto_init_ahash_spawn(&ctx->auth, auth, inst);
if (err)
goto err_free_inst;

@@ -419,24 +503,25 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)

err = -ENAMETOOLONG;
if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
- "authenc(%s,%s)", auth->cra_name, enc->cra_name) >=
+ "authenc(%s,%s)", auth_base->cra_name, enc->cra_name) >=
CRYPTO_MAX_ALG_NAME)
goto err_drop_enc;

if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
- "authenc(%s,%s)", auth->cra_driver_name,
+ "authenc(%s,%s)", auth_base->cra_driver_name,
enc->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto err_drop_enc;

inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
inst->alg.cra_flags |= enc->cra_flags & CRYPTO_ALG_ASYNC;
- inst->alg.cra_priority = enc->cra_priority * 10 + auth->cra_priority;
+ inst->alg.cra_priority = enc->cra_priority *
+ 10 + auth_base->cra_priority;
inst->alg.cra_blocksize = enc->cra_blocksize;
- inst->alg.cra_alignmask = auth->cra_alignmask | enc->cra_alignmask;
+ inst->alg.cra_alignmask = auth_base->cra_alignmask | enc->cra_alignmask;
inst->alg.cra_type = &crypto_aead_type;

inst->alg.cra_aead.ivsize = enc->cra_ablkcipher.ivsize;
- inst->alg.cra_aead.maxauthsize = __crypto_shash_alg(auth)->digestsize;
+ inst->alg.cra_aead.maxauthsize = auth->digestsize;

inst->alg.cra_ctxsize = sizeof(struct crypto_authenc_ctx);

@@ -449,13 +534,13 @@ static struct crypto_instance *crypto_authenc_alloc(struct rtattr **tb)
inst->alg.cra_aead.givencrypt = crypto_authenc_givencrypt;

out:
- crypto_mod_put(auth);
+ crypto_mod_put(auth_base);
return inst;

err_drop_enc:
crypto_drop_skcipher(&ctx->enc);
err_drop_auth:
- crypto_drop_spawn(&ctx->auth);
+ crypto_drop_ahash(&ctx->auth);
err_free_inst:
kfree(inst);
out_put_auth:
@@ -468,7 +553,7 @@ static void crypto_authenc_free(struct crypto_instance *inst)
struct authenc_instance_ctx *ctx = crypto_instance_ctx(inst);

crypto_drop_skcipher(&ctx->enc);
- crypto_drop_spawn(&ctx->auth);
+ crypto_drop_ahash(&ctx->auth);
kfree(inst);
}

--
1.5.4.2


2009-07-16 11:15:53

by Steffen Klassert

Subject: [RFC] [PATCH 3/7] ah: Add struct crypto_ahash to ah_data

To support ahash algorithms, we add a pointer to a
crypto_ahash to ah_data.

Signed-off-by: Steffen Klassert <[email protected]>
---
include/net/ah.h | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/include/net/ah.h b/include/net/ah.h
index ae1c322..7ac5221 100644
--- a/include/net/ah.h
+++ b/include/net/ah.h
@@ -14,6 +14,7 @@ struct ah_data
int icv_trunc_len;

struct crypto_hash *tfm;
+ struct crypto_ahash *ahash;
};

static inline int ah_mac_digest(struct ah_data *ahp, struct sk_buff *skb,
--
1.5.4.2


2009-07-16 11:16:30

by Steffen Klassert

Subject: [RFC] [PATCH 4/7] ah4: convert to ahash

This patch converts ah4 to the new ahash interface.

Signed-off-by: Steffen Klassert <[email protected]>
---
net/ipv4/ah4.c | 295 ++++++++++++++++++++++++++++++++++++++++++++-----------
1 files changed, 236 insertions(+), 59 deletions(-)

diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index e878e49..68436fd 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -1,3 +1,4 @@
+#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/module.h>
#include <net/ip.h>
@@ -5,10 +6,67 @@
#include <net/ah.h>
#include <linux/crypto.h>
#include <linux/pfkeyv2.h>
-#include <linux/spinlock.h>
+#include <linux/scatterlist.h>
#include <net/icmp.h>
#include <net/protocol.h>

+struct ah_skb_cb {
+ struct xfrm_skb_cb xfrm;
+ void *tmp;
+};
+
+#define AH_SKB_CB(__skb) ((struct ah_skb_cb *)&((__skb)->cb[0]))
+
+static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
+ unsigned int size)
+{
+ unsigned int len;
+
+ len = size + crypto_ahash_digestsize(ahash) +
+ (crypto_ahash_alignmask(ahash) &
+ ~(crypto_tfm_ctx_alignment() - 1));
+
+ len = ALIGN(len, crypto_tfm_ctx_alignment());
+
+ len += sizeof(struct ahash_request) + crypto_ahash_reqsize(ahash);
+ len = ALIGN(len, __alignof__(struct scatterlist));
+
+ len += sizeof(struct scatterlist) * nfrags;
+
+ return kmalloc(len, GFP_ATOMIC);
+}
+
+static inline u8 *ah_tmp_auth(void *tmp, unsigned int offset)
+{
+ return tmp + offset;
+}
+
+static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
+ unsigned int offset)
+{
+ return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+}
+
+static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
+ u8 *icv)
+{
+ struct ahash_request *req;
+
+ req = (void *)PTR_ALIGN(icv + crypto_ahash_digestsize(ahash),
+ crypto_tfm_ctx_alignment());
+
+ ahash_request_set_tfm(req, ahash);
+
+ return req;
+}
+
+static inline struct scatterlist *ah_req_sg(struct crypto_ahash *ahash,
+ struct ahash_request *req)
+{
+ return (void *)ALIGN((unsigned long)(req + 1) +
+ crypto_ahash_reqsize(ahash),
+ __alignof__(struct scatterlist));
+}

/* Clear mutable options and find final destination to substitute
* into IP header for icv calculation. Options are already checked
@@ -54,20 +112,72 @@ static int ip_clear_mutable_options(struct iphdr *iph, __be32 *daddr)
return 0;
}

+static void ah_output_done(struct crypto_async_request *base, int err)
+{
+ u8 *icv;
+ struct iphdr *iph;
+ struct sk_buff *skb = base->data;
+ struct xfrm_state *x = skb_dst(skb)->xfrm;
+ struct ah_data *ahp = x->data;
+ struct iphdr *top_iph = ip_hdr(skb);
+ struct ip_auth_hdr *ah = ip_auth_hdr(skb);
+ int ihl = ip_hdrlen(skb);
+
+ iph = AH_SKB_CB(skb)->tmp;
+ icv = ah_tmp_icv(ahp->ahash, iph, ihl);
+ memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
+
+ top_iph->tos = iph->tos;
+ top_iph->ttl = iph->ttl;
+ top_iph->frag_off = iph->frag_off;
+ if (top_iph->ihl != 5) {
+ top_iph->daddr = iph->daddr;
+ memcpy(top_iph+1, iph+1, top_iph->ihl*4 - sizeof(struct iphdr));
+ }
+
+ err = ah->nexthdr;
+
+ kfree(AH_SKB_CB(skb)->tmp);
+ xfrm_output_resume(skb, err);
+}
+
static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
{
int err;
+ int nfrags;
+ int ihl;
+ u8 *icv;
+ struct sk_buff *trailer;
+ struct crypto_ahash *ahash;
+ struct ahash_request *req;
+ struct scatterlist *sg;
struct iphdr *iph, *top_iph;
struct ip_auth_hdr *ah;
struct ah_data *ahp;
- union {
- struct iphdr iph;
- char buf[60];
- } tmp_iph;
+
+ ahp = x->data;
+ ahash = ahp->ahash;
+
+ if ((err = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+ nfrags = err;

skb_push(skb, -skb_network_offset(skb));
+ ah = ip_auth_hdr(skb);
+ ihl = ip_hdrlen(skb);
+
+ err = -ENOMEM;
+ iph = ah_alloc_tmp(ahash, nfrags, ihl);
+ if (!iph)
+ goto out;
+
+ icv = ah_tmp_icv(ahash, iph, ihl);
+ req = ah_tmp_req(ahash, icv);
+ sg = ah_req_sg(ahash, req);
+
+ memset(ah->auth_data, 0, ahp->icv_trunc_len);
+
top_iph = ip_hdr(skb);
- iph = &tmp_iph.iph;

iph->tos = top_iph->tos;
iph->ttl = top_iph->ttl;
@@ -78,10 +188,9 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
memcpy(iph+1, top_iph+1, top_iph->ihl*4 - sizeof(struct iphdr));
err = ip_clear_mutable_options(top_iph, &top_iph->daddr);
if (err)
- goto error;
+ goto out_free;
}

- ah = ip_auth_hdr(skb);
ah->nexthdr = *skb_mac_header(skb);
*skb_mac_header(skb) = IPPROTO_AH;

@@ -91,20 +200,31 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
top_iph->ttl = 0;
top_iph->check = 0;

- ahp = x->data;
ah->hdrlen = (XFRM_ALIGN8(sizeof(*ah) + ahp->icv_trunc_len) >> 2) - 2;

ah->reserved = 0;
ah->spi = x->id.spi;
ah->seq_no = htonl(XFRM_SKB_CB(skb)->seq.output);

- spin_lock_bh(&x->lock);
- err = ah_mac_digest(ahp, skb, ah->auth_data);
- memcpy(ah->auth_data, ahp->work_icv, ahp->icv_trunc_len);
- spin_unlock_bh(&x->lock);
+ sg_init_table(sg, nfrags);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ ahash_request_set_crypt(req, sg, icv, skb->len);
+ ahash_request_set_callback(req, 0, ah_output_done, skb);

- if (err)
- goto error;
+ AH_SKB_CB(skb)->tmp = iph;
+
+ err = crypto_ahash_digest(req);
+ if (err) {
+ if (err == -EINPROGRESS)
+ goto out;
+
+ if (err == -EBUSY)
+ err = NET_XMIT_DROP;
+ goto out_free;
+ }
+
+ memcpy(ah->auth_data, icv, ahp->icv_trunc_len);

top_iph->tos = iph->tos;
top_iph->ttl = iph->ttl;
@@ -114,28 +234,67 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
memcpy(top_iph+1, iph+1, top_iph->ihl*4 - sizeof(struct iphdr));
}

- err = 0;
-
-error:
+out_free:
+ kfree(iph);
+out:
return err;
}

+static void ah_input_done(struct crypto_async_request *base, int err)
+{
+ u8 *auth_data;
+ u8 *icv;
+ struct iphdr *work_iph;
+ struct sk_buff *skb = base->data;
+ struct xfrm_state *x = xfrm_input_state(skb);
+ struct ah_data *ahp = x->data;
+ struct ip_auth_hdr *ah = ip_auth_hdr(skb);
+ int ihl = ip_hdrlen(skb);
+ int ah_hlen = (ah->hdrlen + 2) << 2;
+
+ work_iph = AH_SKB_CB(skb)->tmp;
+ auth_data = ah_tmp_auth(work_iph, ihl);
+ icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+
+ err = memcmp(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
+ if (err)
+ goto out;
+
+ skb->network_header += ah_hlen;
+ memcpy(skb_network_header(skb), work_iph, ihl);
+ __skb_pull(skb, ah_hlen + ihl);
+ skb_set_transport_header(skb, -ihl);
+
+ err = ah->nexthdr;
+out:
+ kfree(AH_SKB_CB(skb)->tmp);
+ xfrm_input_resume(skb, err);
+}
+
static int ah_input(struct xfrm_state *x, struct sk_buff *skb)
{
int ah_hlen;
int ihl;
int nexthdr;
- int err = -EINVAL;
- struct iphdr *iph;
+ int nfrags;
+ u8 *auth_data;
+ u8 *icv;
+ struct sk_buff *trailer;
+ struct crypto_ahash *ahash;
+ struct ahash_request *req;
+ struct scatterlist *sg;
+ struct iphdr *iph, *work_iph;
struct ip_auth_hdr *ah;
struct ah_data *ahp;
- char work_buf[60];
+ int err = -ENOMEM;

if (!pskb_may_pull(skb, sizeof(*ah)))
goto out;

ah = (struct ip_auth_hdr *)skb->data;
ahp = x->data;
+ ahash = ahp->ahash;
+
nexthdr = ah->nexthdr;
ah_hlen = (ah->hdrlen + 2) << 2;

@@ -156,9 +315,24 @@ static int ah_input(struct xfrm_state *x, struct sk_buff *skb)

ah = (struct ip_auth_hdr *)skb->data;
iph = ip_hdr(skb);
+ ihl = ip_hdrlen(skb);
+
+ if ((err = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+ nfrags = err;
+
+ work_iph = ah_alloc_tmp(ahash, nfrags, ihl + ahp->icv_trunc_len);
+ if (!work_iph)
+ goto out;
+
+ auth_data = ah_tmp_auth(work_iph, ihl);
+ icv = ah_tmp_icv(ahash, auth_data, ahp->icv_trunc_len);
+ req = ah_tmp_req(ahash, icv);
+ sg = ah_req_sg(ahash, req);

- ihl = skb->data - skb_network_header(skb);
- memcpy(work_buf, iph, ihl);
+ memcpy(work_iph, iph, ihl);
+ memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
+ memset(ah->auth_data, 0, ahp->icv_trunc_len);

iph->ttl = 0;
iph->tos = 0;
@@ -166,35 +340,44 @@ static int ah_input(struct xfrm_state *x, struct sk_buff *skb)
iph->check = 0;
if (ihl > sizeof(*iph)) {
__be32 dummy;
- if (ip_clear_mutable_options(iph, &dummy))
- goto out;
+ err = ip_clear_mutable_options(iph, &dummy);
+ if (err)
+ goto out_free;
}

- spin_lock(&x->lock);
- {
- u8 auth_data[MAX_AH_AUTH_LEN];
+ skb_push(skb, ihl);

- memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
- skb_push(skb, ihl);
- err = ah_mac_digest(ahp, skb, ah->auth_data);
- if (err)
- goto unlock;
- if (memcmp(ahp->work_icv, auth_data, ahp->icv_trunc_len))
- err = -EBADMSG;
+ sg_init_table(sg, nfrags);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ ahash_request_set_crypt(req, sg, icv, skb->len);
+ ahash_request_set_callback(req, 0, ah_input_done, skb);
+
+ AH_SKB_CB(skb)->tmp = work_iph;
+
+ err = crypto_ahash_digest(req);
+ if (err) {
+ if (err == -EINPROGRESS)
+ goto out;
+
+ if (err == -EBUSY)
+ err = NET_XMIT_DROP;
+ goto out_free;
}
-unlock:
- spin_unlock(&x->lock);

+ err = memcmp(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
- goto out;
+ goto out_free;

skb->network_header += ah_hlen;
- memcpy(skb_network_header(skb), work_buf, ihl);
- skb->transport_header = skb->network_header;
+ memcpy(skb_network_header(skb), work_iph, ihl);
__skb_pull(skb, ah_hlen + ihl);
+ skb_set_transport_header(skb, -ihl);

- return nexthdr;
+ err = nexthdr;

+out_free:
+ kfree(work_iph);
out:
return err;
}
@@ -222,7 +405,7 @@ static int ah_init_state(struct xfrm_state *x)
{
struct ah_data *ahp = NULL;
struct xfrm_algo_desc *aalg_desc;
- struct crypto_hash *tfm;
+ struct crypto_ahash *ahash;

if (!x->aalg)
goto error;
@@ -231,31 +414,31 @@ static int ah_init_state(struct xfrm_state *x)
goto error;

ahp = kzalloc(sizeof(*ahp), GFP_KERNEL);
- if (ahp == NULL)
+ if (!ahp)
return -ENOMEM;

- tfm = crypto_alloc_hash(x->aalg->alg_name, 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR(tfm))
+ ahash = crypto_alloc_ahash(x->aalg->alg_name, 0, 0);
+ if (IS_ERR(ahash))
goto error;

- ahp->tfm = tfm;
- if (crypto_hash_setkey(tfm, x->aalg->alg_key,
- (x->aalg->alg_key_len + 7) / 8))
+ ahp->ahash = ahash;
+ if (crypto_ahash_setkey(ahash, x->aalg->alg_key,
+ (x->aalg->alg_key_len + 7) / 8))
goto error;

/*
* Lookup the algorithm description maintained by xfrm_algo,
* verify crypto transform properties, and store information
* we need for AH processing. This lookup cannot fail here
- * after a successful crypto_alloc_hash().
+ * after a successful crypto_alloc_ahash().
*/
aalg_desc = xfrm_aalg_get_byname(x->aalg->alg_name, 0);
BUG_ON(!aalg_desc);

if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
- crypto_hash_digestsize(tfm)) {
+ crypto_ahash_digestsize(ahash)) {
printk(KERN_INFO "AH: %s digestsize %u != %hu\n",
- x->aalg->alg_name, crypto_hash_digestsize(tfm),
+ x->aalg->alg_name, crypto_ahash_digestsize(ahash),
aalg_desc->uinfo.auth.icv_fullbits/8);
goto error;
}
@@ -265,10 +448,6 @@ static int ah_init_state(struct xfrm_state *x)

BUG_ON(ahp->icv_trunc_len > MAX_AH_AUTH_LEN);

- ahp->work_icv = kmalloc(ahp->icv_full_len, GFP_KERNEL);
- if (!ahp->work_icv)
- goto error;
-
x->props.header_len = XFRM_ALIGN8(sizeof(struct ip_auth_hdr) +
ahp->icv_trunc_len);
if (x->props.mode == XFRM_MODE_TUNNEL)
@@ -279,8 +458,7 @@ static int ah_init_state(struct xfrm_state *x)

error:
if (ahp) {
- kfree(ahp->work_icv);
- crypto_free_hash(ahp->tfm);
+ crypto_free_ahash(ahp->ahash);
kfree(ahp);
}
return -EINVAL;
@@ -293,8 +471,7 @@ static void ah_destroy(struct xfrm_state *x)
if (!ahp)
return;

- kfree(ahp->work_icv);
- crypto_free_hash(ahp->tfm);
+ crypto_free_ahash(ahp->ahash);
kfree(ahp);
}

--
1.5.4.2


2009-07-16 11:17:06

by Steffen Klassert

Subject: [RFC] [PATCH 5/7] ah6: convert to ahash

This patch converts ah6 to the new ahash interface.

Signed-off-by: Steffen Klassert <[email protected]>
---
net/ipv6/ah6.c | 352 +++++++++++++++++++++++++++++++++++++++++++-------------
1 files changed, 272 insertions(+), 80 deletions(-)

diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 86f42a2..9fb063c 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -24,18 +24,92 @@
* This file is derived from net/ipv4/ah.c.
*/

+#include <crypto/hash.h>
#include <linux/module.h>
#include <net/ip.h>
#include <net/ah.h>
#include <linux/crypto.h>
#include <linux/pfkeyv2.h>
-#include <linux/spinlock.h>
#include <linux/string.h>
+#include <linux/scatterlist.h>
#include <net/icmp.h>
#include <net/ipv6.h>
#include <net/protocol.h>
#include <net/xfrm.h>

+#define IPV6HDR_BASELEN 8
+
+struct tmp_ext {
+#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
+ struct in6_addr saddr;
+#endif
+ struct in6_addr daddr;
+ char hdrs[0];
+};
+
+struct ah_skb_cb {
+ struct xfrm_skb_cb xfrm;
+ void *tmp;
+};
+
+#define AH_SKB_CB(__skb) ((struct ah_skb_cb *)&((__skb)->cb[0]))
+
+static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
+ unsigned int size)
+{
+ unsigned int len;
+
+ len = size + crypto_ahash_digestsize(ahash) +
+ (crypto_ahash_alignmask(ahash) &
+ ~(crypto_tfm_ctx_alignment() - 1));
+
+ len = ALIGN(len, crypto_tfm_ctx_alignment());
+
+ len += sizeof(struct ahash_request) + crypto_ahash_reqsize(ahash);
+ len = ALIGN(len, __alignof__(struct scatterlist));
+
+ len += sizeof(struct scatterlist) * nfrags;
+
+ return kmalloc(len, GFP_ATOMIC);
+}
+
+static inline struct tmp_ext *ah_tmp_ext(void *base)
+{
+ return base + IPV6HDR_BASELEN;
+}
+
+static inline u8 *ah_tmp_auth(u8 *tmp, unsigned int offset)
+{
+ return tmp + offset;
+}
+
+static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
+ unsigned int offset)
+{
+ return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+}
+
+static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
+ u8 *icv)
+{
+ struct ahash_request *req;
+
+ req = (void *)PTR_ALIGN(icv + crypto_ahash_digestsize(ahash),
+ crypto_tfm_ctx_alignment());
+
+ ahash_request_set_tfm(req, ahash);
+
+ return req;
+}
+
+static inline struct scatterlist *ah_req_sg(struct crypto_ahash *ahash,
+ struct ahash_request *req)
+{
+ return (void *)ALIGN((unsigned long)(req + 1) +
+ crypto_ahash_reqsize(ahash),
+ __alignof__(struct scatterlist));
+}
+
static int zero_out_mutable_opts(struct ipv6_opt_hdr *opthdr)
{
u8 *opt = (u8 *)opthdr;
@@ -218,24 +292,85 @@ static int ipv6_clear_mutable_options(struct ipv6hdr *iph, int len, int dir)
return 0;
}

+static void ah6_output_done(struct crypto_async_request *base, int err)
+{
+ int extlen;
+ u8 *iph_base;
+ u8 *icv;
+ struct sk_buff *skb = base->data;
+ struct xfrm_state *x = skb_dst(skb)->xfrm;
+ struct ah_data *ahp = x->data;
+ struct ipv6hdr *top_iph = ipv6_hdr(skb);
+ struct ip_auth_hdr *ah = ip_auth_hdr(skb);
+ struct tmp_ext *iph_ext;
+
+ extlen = skb_network_header_len(skb) - sizeof(struct ipv6hdr);
+ if (extlen)
+ extlen += sizeof(*iph_ext);
+
+ iph_base = AH_SKB_CB(skb)->tmp;
+ iph_ext = ah_tmp_ext(iph_base);
+ icv = ah_tmp_icv(ahp->ahash, iph_ext, extlen);
+
+ memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
+ memcpy(top_iph, iph_base, IPV6HDR_BASELEN);
+
+ if (extlen) {
+#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
+ memcpy(&top_iph->saddr, iph_ext, extlen);
+#else
+ memcpy(&top_iph->daddr, iph_ext, extlen);
+#endif
+ }
+
+ err = ah->nexthdr;
+
+ kfree(AH_SKB_CB(skb)->tmp);
+ xfrm_output_resume(skb, err);
+}
+
static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
{
int err;
+ int nfrags;
int extlen;
+ u8 *iph_base;
+ u8 *icv;
+ u8 nexthdr;
+ struct sk_buff *trailer;
+ struct crypto_ahash *ahash;
+ struct ahash_request *req;
+ struct scatterlist *sg;
struct ipv6hdr *top_iph;
struct ip_auth_hdr *ah;
struct ah_data *ahp;
- u8 nexthdr;
- char tmp_base[8];
- struct {
-#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
- struct in6_addr saddr;
-#endif
- struct in6_addr daddr;
- char hdrs[0];
- } *tmp_ext;
+ struct tmp_ext *iph_ext;
+
+ ahp = x->data;
+ ahash = ahp->ahash;
+
+ if ((err = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+ nfrags = err;

skb_push(skb, -skb_network_offset(skb));
+ extlen = skb_network_header_len(skb) - sizeof(struct ipv6hdr);
+ if (extlen)
+ extlen += sizeof(*iph_ext);
+
+ err = -ENOMEM;
+ iph_base = ah_alloc_tmp(ahash, nfrags, IPV6HDR_BASELEN + extlen);
+ if (!iph_base)
+ goto out;
+
+ iph_ext = ah_tmp_ext(iph_base);
+ icv = ah_tmp_icv(ahash, iph_ext, extlen);
+ req = ah_tmp_req(ahash, icv);
+ sg = ah_req_sg(ahash, req);
+
+ ah = ip_auth_hdr(skb);
+ memset(ah->auth_data, 0, ahp->icv_trunc_len);
+
top_iph = ipv6_hdr(skb);
top_iph->payload_len = htons(skb->len - sizeof(*top_iph));

@@ -245,31 +380,22 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
/* When there are no extension headers, we only need to save the first
* 8 bytes of the base IP header.
*/
- memcpy(tmp_base, top_iph, sizeof(tmp_base));
+ memcpy(iph_base, top_iph, IPV6HDR_BASELEN);

- tmp_ext = NULL;
- extlen = skb_transport_offset(skb) - sizeof(struct ipv6hdr);
if (extlen) {
- extlen += sizeof(*tmp_ext);
- tmp_ext = kmalloc(extlen, GFP_ATOMIC);
- if (!tmp_ext) {
- err = -ENOMEM;
- goto error;
- }
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
- memcpy(tmp_ext, &top_iph->saddr, extlen);
+ memcpy(iph_ext, &top_iph->saddr, extlen);
#else
- memcpy(tmp_ext, &top_iph->daddr, extlen);
+ memcpy(iph_ext, &top_iph->daddr, extlen);
#endif
err = ipv6_clear_mutable_options(top_iph,
- extlen - sizeof(*tmp_ext) +
+ extlen - sizeof(*iph_ext) +
sizeof(*top_iph),
XFRM_POLICY_OUT);
if (err)
- goto error_free_iph;
+ goto out_free;
}

- ah = ip_auth_hdr(skb);
ah->nexthdr = nexthdr;

top_iph->priority = 0;
@@ -278,36 +404,80 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
top_iph->flow_lbl[2] = 0;
top_iph->hop_limit = 0;

- ahp = x->data;
ah->hdrlen = (XFRM_ALIGN8(sizeof(*ah) + ahp->icv_trunc_len) >> 2) - 2;

ah->reserved = 0;
ah->spi = x->id.spi;
ah->seq_no = htonl(XFRM_SKB_CB(skb)->seq.output);

- spin_lock_bh(&x->lock);
- err = ah_mac_digest(ahp, skb, ah->auth_data);
- memcpy(ah->auth_data, ahp->work_icv, ahp->icv_trunc_len);
- spin_unlock_bh(&x->lock);
+ sg_init_table(sg, nfrags);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ ahash_request_set_crypt(req, sg, icv, skb->len);
+ ahash_request_set_callback(req, 0, ah6_output_done, skb);

- if (err)
- goto error_free_iph;
+ AH_SKB_CB(skb)->tmp = iph_base;
+
+ err = crypto_ahash_digest(req);
+ if (err) {
+ if (err == -EINPROGRESS)
+ goto out;
+
+ if (err == -EBUSY)
+ err = NET_XMIT_DROP;
+ goto out_free;
+ }

- memcpy(top_iph, tmp_base, sizeof(tmp_base));
- if (tmp_ext) {
+ memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
+ memcpy(top_iph, iph_base, IPV6HDR_BASELEN);
+
+ if (extlen) {
#if defined(CONFIG_IPV6_MIP6) || defined(CONFIG_IPV6_MIP6_MODULE)
- memcpy(&top_iph->saddr, tmp_ext, extlen);
+ memcpy(&top_iph->saddr, iph_ext, extlen);
#else
- memcpy(&top_iph->daddr, tmp_ext, extlen);
+ memcpy(&top_iph->daddr, iph_ext, extlen);
#endif
-error_free_iph:
- kfree(tmp_ext);
}

-error:
+out_free:
+ kfree(iph_base);
+out:
return err;
}

+static void ah6_input_done(struct crypto_async_request *base, int err)
+{
+ u8 *auth_data;
+ u8 *icv;
+ u8 *work_iph;
+ struct sk_buff *skb = base->data;
+ struct xfrm_state *x = xfrm_input_state(skb);
+ struct ah_data *ahp = x->data;
+ struct ip_auth_hdr *ah = ip_auth_hdr(skb);
+ int hdr_len = skb_network_header_len(skb);
+ int ah_hlen = (ah->hdrlen + 2) << 2;
+
+ work_iph = AH_SKB_CB(skb)->tmp;
+ auth_data = ah_tmp_auth(work_iph, hdr_len);
+ icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+
+ err = memcmp(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
+ if (err)
+ goto out;
+
+ skb->network_header += ah_hlen;
+ memcpy(skb_network_header(skb), work_iph, hdr_len);
+ __skb_pull(skb, ah_hlen + hdr_len);
+ skb_set_transport_header(skb, -hdr_len);
+
+ err = ah->nexthdr;
+out:
+ kfree(AH_SKB_CB(skb)->tmp);
+ xfrm_input_resume(skb, err);
+}
+
+
+
static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)
{
/*
@@ -325,14 +495,21 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)
* There is offset of AH before IPv6 header after the process.
*/

+ u8 *auth_data;
+ u8 *icv;
+ u8 *work_iph;
+ struct sk_buff *trailer;
+ struct crypto_ahash *ahash;
+ struct ahash_request *req;
+ struct scatterlist *sg;
struct ip_auth_hdr *ah;
struct ipv6hdr *ip6h;
struct ah_data *ahp;
- unsigned char *tmp_hdr = NULL;
u16 hdr_len;
u16 ah_hlen;
int nexthdr;
- int err = -EINVAL;
+ int nfrags;
+ int err = -ENOMEM;

if (!pskb_may_pull(skb, sizeof(struct ip_auth_hdr)))
goto out;
@@ -345,9 +522,11 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)

skb->ip_summed = CHECKSUM_NONE;

- hdr_len = skb->data - skb_network_header(skb);
+ hdr_len = skb_network_header_len(skb);
ah = (struct ip_auth_hdr *)skb->data;
ahp = x->data;
+ ahash = ahp->ahash;
+
nexthdr = ah->nexthdr;
ah_hlen = (ah->hdrlen + 2) << 2;

@@ -358,48 +537,67 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)
if (!pskb_may_pull(skb, ah_hlen))
goto out;

- tmp_hdr = kmemdup(skb_network_header(skb), hdr_len, GFP_ATOMIC);
- if (!tmp_hdr)
- goto out;
ip6h = ipv6_hdr(skb);
+
+ skb_push(skb, hdr_len);
+
+ if ((err = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+ nfrags = err;
+
+ work_iph = ah_alloc_tmp(ahash, nfrags, hdr_len + ahp->icv_trunc_len);
+ if (!work_iph)
+ goto out;
+
+ auth_data = ah_tmp_auth(work_iph, hdr_len);
+ icv = ah_tmp_icv(ahash, auth_data, ahp->icv_trunc_len);
+ req = ah_tmp_req(ahash, icv);
+ sg = ah_req_sg(ahash, req);
+
+ memcpy(work_iph, ip6h, hdr_len);
+ memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
+ memset(ah->auth_data, 0, ahp->icv_trunc_len);
+
if (ipv6_clear_mutable_options(ip6h, hdr_len, XFRM_POLICY_IN))
- goto free_out;
+ goto out_free;
+
ip6h->priority = 0;
ip6h->flow_lbl[0] = 0;
ip6h->flow_lbl[1] = 0;
ip6h->flow_lbl[2] = 0;
ip6h->hop_limit = 0;

- spin_lock(&x->lock);
- {
- u8 auth_data[MAX_AH_AUTH_LEN];
+ sg_init_table(sg, nfrags);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ ahash_request_set_crypt(req, sg, icv, skb->len);
+ ahash_request_set_callback(req, 0, ah6_input_done, skb);

- memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
- memset(ah->auth_data, 0, ahp->icv_trunc_len);
- skb_push(skb, hdr_len);
- err = ah_mac_digest(ahp, skb, ah->auth_data);
- if (err)
- goto unlock;
- if (memcmp(ahp->work_icv, auth_data, ahp->icv_trunc_len))
- err = -EBADMSG;
+ AH_SKB_CB(skb)->tmp = work_iph;
+
+ err = crypto_ahash_digest(req);
+ if (err) {
+ if (err == -EINPROGRESS)
+ goto out;
+
+ if (err == -EBUSY)
+ err = NET_XMIT_DROP;
+ goto out_free;
}
-unlock:
- spin_unlock(&x->lock);

+ err = memcmp(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
if (err)
- goto free_out;
+ goto out_free;

skb->network_header += ah_hlen;
- memcpy(skb_network_header(skb), tmp_hdr, hdr_len);
+ memcpy(skb_network_header(skb), work_iph, hdr_len);
skb->transport_header = skb->network_header;
__skb_pull(skb, ah_hlen + hdr_len);

- kfree(tmp_hdr);
+ err = nexthdr;

- return nexthdr;
-
-free_out:
- kfree(tmp_hdr);
+out_free:
+ kfree(work_iph);
out:
return err;
}
@@ -430,7 +628,7 @@ static int ah6_init_state(struct xfrm_state *x)
{
struct ah_data *ahp = NULL;
struct xfrm_algo_desc *aalg_desc;
- struct crypto_hash *tfm;
+ struct crypto_ahash *ahash;

if (!x->aalg)
goto error;
@@ -442,12 +640,12 @@ static int ah6_init_state(struct xfrm_state *x)
if (ahp == NULL)
return -ENOMEM;

- tfm = crypto_alloc_hash(x->aalg->alg_name, 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR(tfm))
+ ahash = crypto_alloc_ahash(x->aalg->alg_name, 0, 0);
+ if (IS_ERR(ahash))
goto error;

- ahp->tfm = tfm;
- if (crypto_hash_setkey(tfm, x->aalg->alg_key,
+ ahp->ahash = ahash;
+ if (crypto_ahash_setkey(ahash, x->aalg->alg_key,
(x->aalg->alg_key_len + 7) / 8))
goto error;

@@ -461,9 +659,9 @@ static int ah6_init_state(struct xfrm_state *x)
BUG_ON(!aalg_desc);

if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
- crypto_hash_digestsize(tfm)) {
+ crypto_ahash_digestsize(ahash)) {
printk(KERN_INFO "AH: %s digestsize %u != %hu\n",
- x->aalg->alg_name, crypto_hash_digestsize(tfm),
+ x->aalg->alg_name, crypto_ahash_digestsize(ahash),
aalg_desc->uinfo.auth.icv_fullbits/8);
goto error;
}
@@ -473,10 +671,6 @@ static int ah6_init_state(struct xfrm_state *x)

BUG_ON(ahp->icv_trunc_len > MAX_AH_AUTH_LEN);

- ahp->work_icv = kmalloc(ahp->icv_full_len, GFP_KERNEL);
- if (!ahp->work_icv)
- goto error;
-
x->props.header_len = XFRM_ALIGN8(sizeof(struct ip_auth_hdr) +
ahp->icv_trunc_len);
switch (x->props.mode) {
@@ -495,8 +689,7 @@ static int ah6_init_state(struct xfrm_state *x)

error:
if (ahp) {
- kfree(ahp->work_icv);
- crypto_free_hash(ahp->tfm);
+ crypto_free_ahash(ahp->ahash);
kfree(ahp);
}
return -EINVAL;
@@ -509,8 +702,7 @@ static void ah6_destroy(struct xfrm_state *x)
if (!ahp)
return;

- kfree(ahp->work_icv);
- crypto_free_hash(ahp->tfm);
+ crypto_free_ahash(ahp->ahash);
kfree(ahp);
}

--
1.5.4.2


2009-07-16 11:18:00

by Steffen Klassert

Subject: [RFC] [PATCH 6/7] ah: Remove obsolete code

ah4 and ah6 are converted to ahash now, so we can remove the
code for the obsolete hash interface.

Signed-off-by: Steffen Klassert <[email protected]>
---
include/net/ah.h | 29 +++--------------------------
1 files changed, 3 insertions(+), 26 deletions(-)

diff --git a/include/net/ah.h b/include/net/ah.h
index 7ac5221..7573a71 100644
--- a/include/net/ah.h
+++ b/include/net/ah.h
@@ -1,44 +1,21 @@
#ifndef _NET_AH_H
#define _NET_AH_H

-#include <linux/crypto.h>
-#include <net/xfrm.h>
+#include <linux/skbuff.h>

/* This is the maximum truncated ICV length that we know of. */
#define MAX_AH_AUTH_LEN 12

+struct crypto_ahash;
+
struct ah_data
{
- u8 *work_icv;
int icv_full_len;
int icv_trunc_len;

- struct crypto_hash *tfm;
struct crypto_ahash *ahash;
};

-static inline int ah_mac_digest(struct ah_data *ahp, struct sk_buff *skb,
- u8 *auth_data)
-{
- struct hash_desc desc;
- int err;
-
- desc.tfm = ahp->tfm;
- desc.flags = 0;
-
- memset(auth_data, 0, ahp->icv_trunc_len);
- err = crypto_hash_init(&desc);
- if (unlikely(err))
- goto out;
- err = skb_icv_walk(skb, &desc, 0, skb->len, crypto_hash_update);
- if (unlikely(err))
- goto out;
- err = crypto_hash_final(&desc, ahp->work_icv);
-
-out:
- return err;
-}

2009-07-16 11:48:59

by Steffen Klassert

Subject: [RFC] [PATCH 7/7] xfrm: remove skb_icv_walk

The last users of skb_icv_walk are converted to ahash now,
so skb_icv_walk is unused and can be removed.

Signed-off-by: Steffen Klassert <[email protected]>
---
include/net/xfrm.h | 3 --
net/xfrm/xfrm_algo.c | 78 --------------------------------------------------
2 files changed, 0 insertions(+), 81 deletions(-)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 9e3a3f4..8181541 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1500,9 +1500,6 @@ struct scatterlist;
typedef int (icv_update_fn_t)(struct hash_desc *, struct scatterlist *,
unsigned int);

-extern int skb_icv_walk(const struct sk_buff *skb, struct hash_desc *tfm,
- int offset, int len, icv_update_fn_t icv_update);
-
static inline int xfrm_addr_cmp(xfrm_address_t *a, xfrm_address_t *b,
int family)
{
diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
index faf54c6..b393410 100644
--- a/net/xfrm/xfrm_algo.c
+++ b/net/xfrm/xfrm_algo.c
@@ -689,84 +689,6 @@ int xfrm_count_enc_supported(void)
}
EXPORT_SYMBOL_GPL(xfrm_count_enc_supported);

-/* Move to common area: it is shared with AH. */
-
-int skb_icv_walk(const struct sk_buff *skb, struct hash_desc *desc,
- int offset, int len, icv_update_fn_t icv_update)
-{
- int start = skb_headlen(skb);
- int i, copy = start - offset;
- struct sk_buff *frag_iter;
- struct scatterlist sg;
- int err;
-
- /* Checksum header. */
- if (copy > 0) {
- if (copy > len)
- copy = len;
-
- sg_init_one(&sg, skb->data + offset, copy);
-
- err = icv_update(desc, &sg, copy);
- if (unlikely(err))
- return err;
-
- if ((len -= copy) == 0)
- return 0;
- offset += copy;
- }
-
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- int end;
-
- WARN_ON(start > offset + len);
-
- end = start + skb_shinfo(skb)->frags[i].size;
- if ((copy = end - offset) > 0) {
- skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-
- if (copy > len)
- copy = len;
-
- sg_init_table(&sg, 1);
- sg_set_page(&sg, frag->page, copy,
- frag->page_offset + offset-start);
-
- err = icv_update(desc, &sg, copy);
- if (unlikely(err))
- return err;
-
- if (!(len -= copy))
- return 0;
- offset += copy;
- }
- start = end;
- }
-
- skb_walk_frags(skb, frag_iter) {
- int end;
-
- WARN_ON(start > offset + len);
-
- end = start + frag_iter->len;
- if ((copy = end - offset) > 0) {
- if (copy > len)
- copy = len;
- err = skb_icv_walk(frag_iter, desc, offset-start,
- copy, icv_update);
- if (unlikely(err))
- return err;
- if ((len -= copy) == 0)
- return 0;
- offset += copy;
- }
- start = end;
- }
- BUG_ON(len);
- return 0;
-}
-EXPORT_SYMBOL_GPL(skb_icv_walk);

2009-07-16 15:46:16

by Herbert Xu

Subject: Re: [RFC] [PATCH 0/7] IPsec: convert to ahash

On Thu, Jul 16, 2009 at 01:15:48PM +0200, Steffen Klassert wrote:
>
> Since the calls to the hash algorithms can now complete asynchronously,
> I'd like to avoid multiple calls to the hash update functions and rather
> do all the hashing with one call to crypto_ahash_digest(). As it is, this
> requires chaining all the involved scatterlists. Since we still can't use
> sg_chain() to chain up the lists, I added an additional scatterlist entry
> to the scatterlist of the assoc data (esp) to be able to chain later in
> the crypto layer. To keep compatibility I set the termination bit at the
> first entry and remove it later in authenc. In fact, relying on this
> additional entry and just removing the termination bit later makes me a
> bit nervous, and I'm not sure whether this is acceptable, so better ideas
> are very welcome.

My suggestion would be to optimise for the common case, where
assoc is a single-entry list. So just put some space aside in
the request context for a two-entry sg list and copy the assoc
sg entry into it and chain it with the rest.
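
Roughly (a sketch, reusing the two-entry sg array that the patch
already puts into the request context):

	struct authenc_ahash_request_ctx {
		/* space for a private copy of the assoc entry */
		struct scatterlist sg[2];
		crypto_completion_t complete;
		char tail[];
	};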

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2009-07-17 06:08:34

by Steffen Klassert

Subject: Re: [RFC] [PATCH 0/7] IPsec: convert to ahash

On Thu, Jul 16, 2009 at 11:46:14PM +0800, Herbert Xu wrote:
>
> My suggestion would be to optimise for the common case, where
> assoc is a single-entry list. So just put some space aside in
> the request context for a two-entry sg list and copy the assoc
> sg entry into it and chain it with the rest.
>

Sounds good. This would also have the advantage that all necessary
changes are local to authenc. I'll give it a try.

Any further comments on the rest of the patchset?

2009-07-17 07:39:58

by Steffen Klassert

Subject: Re: [RFC] [PATCH 0/7] IPsec: convert to ahash

On Thu, Jul 16, 2009 at 11:46:14PM +0800, Herbert Xu wrote:
>
> My suggestion would be to optimise for the common case, where
> assoc is a single-entry list. So just put some space aside in
> the request context for a two-entry sg list and copy the assoc
> sg entry into it and chain it with the rest.
>

Ok, I updated the authenc part in this manner. The authenc part does
not depend on networking changes anymore, so I'll resend it just to
linux-crypto.

2009-07-17 17:03:30

by David Miller

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

From: Steffen Klassert <[email protected]>
Date: Thu, 16 Jul 2009 13:17:47 +0200

> + /*
> + * head must be a scatterlist with two entries.
> + * We remove a potentially set termination bit
> + * on the first entry.
> + */
> + head->page_link &= ~0x02;
> +
> if (chain) {
> head->length += sg->length;

Isn't there some interface to do this, rather than revealing the
implementation details of how the scatter list marking works?

If not, such an interface should be created and used here. Otherwise
any change in that implementation will break this code.

2009-07-20 06:24:02

by Steffen Klassert

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Fri, Jul 17, 2009 at 10:03:34AM -0700, David Miller wrote:
> From: Steffen Klassert <[email protected]>
> Date: Thu, 16 Jul 2009 13:17:47 +0200
>
> > + /*
> > + * head must be a scatterlist with two entries.
> > + * We remove a potentially set termination bit
> > + * on the first entry.
> > + */
> > + head->page_link &= ~0x02;
> > +
> > if (chain) {
> > head->length += sg->length;
>
> Isn't there some interface to do this, rather than revealing the
> implementation details of how the scatter list marking works?

We have sg_mark_end(), which sets the termination bit on an sg list
entry, but we don't have an interface to remove the termination bit
from one.

>
> If not, such an interface should be created and used here. Otherwise
> any change in that implementation will break this code.

A grep over the kernel tree showed that just the block and crypto
layers do this to chain sg lists. So we could create sg_unmark_end()
and use it there. If desired, I'll do it.
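
A minimal sketch of such a helper, as the counterpart of
sg_mark_end():

	static inline void sg_unmark_end(struct scatterlist *sg)
	{
		sg->page_link &= ~0x02;
	}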

2009-07-20 06:55:14

by Herbert Xu

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Mon, Jul 20, 2009 at 08:26:43AM +0200, Steffen Klassert wrote:
>
> We have sg_mark_end() which sets the termination bit on a sg list entry,
> but we don't have an interface to remove the termination bit from a sg
> list entry.

For the case at hand, as we've agreed to only do the chaining
if there is only a single entry in the list, we don't need this
at all. All you need to do is copy the page pointer, offset and
length without copying the raw scatterlist entry.
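
That is, something like the following sketch, assuming req->assoc
has exactly one entry and ctx->sg is the two-entry list set aside in
the request context:

	sg_init_table(ctx->sg, 2);
	sg_set_page(ctx->sg, sg_page(req->assoc),
		    req->assoc->length, req->assoc->offset);
	scatterwalk_sg_chain(ctx->sg, 2, cipher);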

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2009-07-20 07:54:20

by Steffen Klassert

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Mon, Jul 20, 2009 at 02:55:14PM +0800, Herbert Xu wrote:
>
> For the case at hand, as we've agreed to only do the chaining
> if there is only a single entry in the list, we don't need this
> at all. All you need to do is copy the page pointer, offset and
> length without copying the raw scatterlist entry.
>

Agreed for this case. What about the existing code, should we create an
interface for it?

2009-07-20 08:21:23

by Herbert Xu

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Mon, Jul 20, 2009 at 09:57:01AM +0200, Steffen Klassert wrote:
>
> Agreed for this case. What about the existing code, should we create an
> interface for it?

Can you point me to the existing code?

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2009-07-20 08:33:59

by Steffen Klassert

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Mon, Jul 20, 2009 at 04:21:16PM +0800, Herbert Xu wrote:
>
> Can you point me to the existing code?
>

For the crypto layer it's just scatterwalk_sg_chain() that removes the
termination bit.
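
For reference, it open-codes the clearing roughly like this:

	static inline void scatterwalk_sg_chain(struct scatterlist *sg1, int num,
						struct scatterlist *sg2)
	{
		sg_set_page(&sg1[num - 1], (void *)sg2, 0, 0);
		sg1[num - 1].page_link &= ~0x02;
	}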

For the block layer it's blk_rq_map_integrity_sg(), and twice in
blk_rq_map_sg().

2009-07-20 08:50:14

by Herbert Xu

Subject: Re: [RFC] [PATCH 2/7] crypto: authenc - convert to ahash

On Mon, Jul 20, 2009 at 10:36:40AM +0200, Steffen Klassert wrote:
>
> For the crypto layer it's just scatterwalk_sg_chain() that removes the
> termination bit.

The history behind this is that the crypto layer used chaining long
before generic chaining was invented. When the generic code was added
I tried to convert crypto over to it; however, that failed because
generic chaining was not ported to every architecture.

So that's why the crypto layer still retains its own chaining.

> For the block layer it's blk_rq_map_integrity_sg(), and twice in
> blk_rq_map_sg().

I suppose we should ask Jens whether he thinks this is worth adding
an interface for?

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2009-07-21 06:53:19

by Herbert Xu

Subject: Re: [RFC] [PATCH 4/7] ah4: convert to ahash

On Thu, Jul 16, 2009 at 01:19:09PM +0200, Steffen Klassert wrote:
> This patch converts ah4 to the new ahash interface.
>
> Signed-off-by: Steffen Klassert <[email protected]>

Looks good to me.

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt