2007-09-27 20:58:50

by Joy Latten

Subject: [PATCH 1/1]: Revised CTR mode implementation

This patch implements CTR mode for IPSec and includes the
improvements pointed out in review. It is based on RFC 3686.

Please note:
1. The CTR mode counter block is composed of
nonce + IV + counter.

The size of the counter block is equal to the blocksize
of the cipher.

sizeof(nonce) + sizeof(IV) + 4 = blocksize.

Currently, the counter is fixed at 4 bytes.

The ctr template now includes the number of bytes required in the
counter block for the salt/nonce and the IV.

ctr(cipher,size_of_nonce,size_of_iv)

So, for example,

ctr(aes,4,8)

specifies that the counter block will be composed of 4 bytes from a
nonce, 8 bytes from the IV, and 4 bytes for the counter, which is fixed.
(A short layout sketch follows the notes below.)

2. It is assumed that the plaintext is a multiple of the blocksize.
3. Currently the nonce is extracted from the last 4 bytes of the key.
Thus keys entered through setkey() have an additional 32 bits.
This causes problems for 256-bit keys. For example,
crypto_ablkcipher_setkey() checks the maximum keysize and
complains about the keysize.
This issue will be taken care of with the new
infrastructure/template for combined mode that is planned,
and appropriate changes will be made to crypto_ctr_setkey()
and the testcases.
4. RFC 3686 states that the last 4 bytes of the counter block, which are
the actual counter, are to be in big endian.
5. Tested with AES based on RFC 3686.
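
For illustration only (the variable names below are just for the sketch,
not from the patch), the 16-byte AES counter block for ctr(aes,4,8)
would be assembled roughly like this, with the counter starting at 1 as
RFC 3686 requires:

        u8 ctrblk[16];

        memcpy(ctrblk, nonce, 4);       /* bytes  0..3  : nonce from end of key */
        memcpy(ctrblk + 4, iv, 8);      /* bytes  4..11 : per-packet IV */
        memset(ctrblk + 12, 0, 4);      /* bytes 12..15 : big-endian counter */
        ctrblk[15] = 1;                 /* RFC 3686 counter starts at 1 */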

The tcrypt vectors are from RFC 3686. They all pass except for the
ones with 256-bit keys.

Please let me know if all looks ok or not.

Signed-off-by: Joy Latten <[email protected]>


diff -urpN linux-2.6.22.aead/crypto/ctr.c linux-2.6.22.aead.patch/crypto/ctr.c
--- linux-2.6.22.aead/crypto/ctr.c 1969-12-31 18:00:00.000000000 -0600
+++ linux-2.6.22.aead.patch/crypto/ctr.c 2007-09-27 13:45:54.000000000 -0500
@@ -0,0 +1,398 @@
+/*
+ * CTR: Counter mode
+ *
+ * (C) Copyright IBM Corp. 2007 - Joy Latten <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+struct ctr_instance_ctx {
+ struct crypto_spawn alg;
+ unsigned int noncesize;
+ unsigned int ivsize;
+};
+
+struct crypto_ctr_ctx {
+ struct crypto_cipher *child;
+ u8 *nonce;
+ void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
+};
+
+static void ctr_inc(__be32 *counter)
+{
+ u32 c;
+
+ c = be32_to_cpu(counter[3]);
+ c++;
+ counter[3] = cpu_to_be32(c);
+}
+
+static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
+{
+ do {
+ *a++ ^= *b++;
+ } while (--bs);
+}
+
+static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
+{
+ u32 *a = (u32 *)dst;
+ u32 *b = (u32 *)src;
+
+ do {
+ *a++ ^= *b++;
+ } while ((bs -= 4));
+}
+
+static void xor_64(u8 *a, const u8 *b, unsigned int bs)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+}
+
+static void xor_128(u8 *a, const u8 *b, unsigned int bs)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+ ((u32 *)a)[2] ^= ((u32 *)b)[2];
+ ((u32 *)a)[3] ^= ((u32 *)b)[3];
+}
+
+static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(parent));
+
+ unsigned int noncelen = ictx->noncesize;
+ int err;
+
+ /* the nonce is stored in bytes at end of key */
+ if (keylen < noncelen) {
+ err = -EINVAL;
+ return err;
+ }
+
+ ctx->nonce = (u8 *)(key + (keylen - noncelen));
+ keylen -= noncelen;
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(child, key, keylen);
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ void (*xor)(u8 *, const u8 *, unsigned int))
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), dst, ctrblk);
+ xor(dst, src, bsize);
+
+ /* increment counter in counterblock */
+ ctr_inc((__be32 *)ctrblk);
+
+ src += bsize;
+ dst += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+ return nbytes;
+}
+
+static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ void (*xor)(u8 *, const u8 *, unsigned int))
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 ks[bsize + alignmask];
+ u8 *keystream = (u8 *)ALIGN((unsigned long)ks, alignmask + 1);
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
+ xor(src, keystream, bsize);
+
+ /* increment counter in counterblock */
+ ctr_inc((__be32 *)ctrblk);
+
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+ return nbytes;
+}
+
+static int crypto_ctr_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct blkcipher_walk walk;
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+ int bsize = crypto_cipher_blocksize(child);
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
+ u8 counterblk[bsize];
+ void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
+
+ /* set up counter block */
+ memset(counterblk, 0 , bsize);
+ memcpy(counterblk, ctx->nonce, ictx->noncesize);
+ memcpy(counterblk + ictx->noncesize, walk.iv, ictx->ivsize);
+
+ /* initialize counter portion of counter block */
+ /* counter's size is 4 bytes and checked when initializing tfm. */
+ ctr_inc((__be32 *)counterblk);
+
+ while (walk.nbytes) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+ nbytes = crypto_ctr_crypt_inplace(&walk, child,
+ counterblk, xor);
+ else
+ nbytes = crypto_ctr_crypt_segment(&walk, child,
+ counterblk, xor);
+
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+ return err;
+}
+
+static int crypto_ctr_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct blkcipher_walk walk;
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+ int bsize = crypto_cipher_blocksize(child);
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
+ u8 counterblk[bsize];
+ void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
+
+ /* set up counter block */
+ memset(counterblk, 0 , bsize);
+ memcpy(counterblk, ctx->nonce, ictx->noncesize);
+ memcpy(counterblk + ictx->noncesize, walk.iv, ictx->ivsize);
+
+ /* initialize counter portion of counter block */
+ /* counter's size is 4 bytes and checked when initializing tfm. */
+ ctr_inc((__be32 *)counterblk);
+
+ while (walk.nbytes) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+ nbytes = crypto_ctr_crypt_inplace(&walk, child,
+ counterblk, xor);
+ else
+ nbytes = crypto_ctr_crypt_segment(&walk, child,
+ counterblk, xor);
+
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+ return err;
+}
+
+static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+ unsigned int blocksize;
+ int err;
+
+ blocksize = crypto_tfm_alg_blocksize(tfm);
+
+ /* verify size of nonce + iv + counter */
+ err = -EINVAL;
+ if ((ictx->noncesize + ictx->ivsize + 4) != blocksize)
+ return err;
+
+ switch(blocksize) {
+ case 8:
+ ctx->xor = xor_64;
+ break;
+ case 16:
+ ctx->xor = xor_128;
+ break;
+ default:
+ if (blocksize % 4)
+ ctx->xor = xor_byte;
+ else
+ ctx->xor = xor_quad;
+ }
+
+ cipher = crypto_spawn_cipher(&ictx->alg);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+ return 0;
+}
+
+static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+ struct ctr_instance_ctx *ctx;
+ unsigned int noncesize;
+ unsigned int ivsize;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
+ if (err)
+ return ERR_PTR(err);
+
+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ err = crypto_attr_u32(tb[2], &noncesize);
+ if (err)
+ goto out_put_alg;
+
+ err = crypto_attr_u32(tb[3], &ivsize);
+ if (err)
+ goto out_put_alg;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!inst)
+ goto out_put_alg;
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_driver_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ ctx = crypto_instance_ctx(inst);
+ ctx->noncesize = noncesize;
+ ctx->ivsize = ivsize;
+
+ err = crypto_init_spawn(&ctx->alg, alg, inst,
+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+ if (err)
+ goto err_free_inst;
+
+ err = 0;
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = alg->cra_blocksize;
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+ if (!(alg->cra_blocksize % 4))
+ inst->alg.cra_alignmask |= 3;
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+
+ inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
+
+ inst->alg.cra_init = crypto_ctr_init_tfm;
+ inst->alg.cra_exit = crypto_ctr_exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
+ inst->alg.cra_blkcipher.encrypt = crypto_ctr_encrypt;
+ inst->alg.cra_blkcipher.decrypt = crypto_ctr_decrypt;
+
+err_free_inst:
+ if (err)
+ kfree(inst);
+
+out_put_alg:
+ crypto_mod_put(alg);
+
+ if (err)
+ inst = ERR_PTR(err);
+
+ return inst;
+}
+
+static void crypto_ctr_free(struct crypto_instance *inst)
+{
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+
+ crypto_drop_spawn(&ictx->alg);
+ kfree(inst);
+}
+
+static struct crypto_template crypto_ctr_tmpl = {
+ .name = "ctr",
+ .alloc = crypto_ctr_alloc,
+ .free = crypto_ctr_free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_ctr_module_init(void)
+{
+ return crypto_register_template(&crypto_ctr_tmpl);
+}
+
+static void __exit crypto_ctr_module_exit(void)
+{
+ crypto_unregister_template(&crypto_ctr_tmpl);
+}
+
+module_init(crypto_ctr_module_init);
+module_exit(crypto_ctr_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("CTR Counter block mode");
diff -urpN linux-2.6.22.aead/crypto/Kconfig linux-2.6.22.aead.patch/crypto/Kconfig
--- linux-2.6.22.aead/crypto/Kconfig 2007-09-21 10:08:17.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Kconfig 2007-09-27 13:46:19.000000000 -0500
@@ -187,6 +187,15 @@ config CRYPTO_LRW
The first 128, 192 or 256 bits in the key are used for AES and the
rest is used to tie each cipher block to its logical position.

+config CRYPTO_CTR
+ tristate "CTR support"
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_MANAGER
+ default m
+ help
+ CTR: Counter mode
+ This block cipher algorithm is required for IPSec.
+
config CRYPTO_CRYPTD
tristate "Software async crypto daemon"
select CRYPTO_ABLKCIPHER
diff -urpN linux-2.6.22.aead/crypto/Makefile linux-2.6.22.aead.patch/crypto/Makefile
--- linux-2.6.22.aead/crypto/Makefile 2007-09-21 10:08:18.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Makefile 2007-09-27 13:46:26.000000000 -0500
@@ -31,6 +31,7 @@ obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
+obj-$(CONFIG_CRYPTO_CTR) += ctr.o
obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
diff -urpN linux-2.6.22.aead/crypto/tcrypt.c linux-2.6.22.aead.patch/crypto/tcrypt.c
--- linux-2.6.22.aead/crypto/tcrypt.c 2007-09-21 10:08:17.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.c 2007-09-27 13:47:29.000000000 -0500
@@ -955,6 +955,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);

//CAST5
test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
@@ -1132,6 +1136,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);
break;

case 11:
diff -urpN linux-2.6.22.aead/crypto/tcrypt.h linux-2.6.22.aead.patch/crypto/tcrypt.h
--- linux-2.6.22.aead/crypto/tcrypt.h 2007-09-21 10:08:18.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.h 2007-09-27 13:47:33.000000000 -0500
@@ -2144,6 +2144,8 @@ static struct cipher_testvec cast6_dec_t
#define AES_CBC_DEC_TEST_VECTORS 2
#define AES_LRW_ENC_TEST_VECTORS 8
#define AES_LRW_DEC_TEST_VECTORS 8
+#define AES_CTR_ENC_TEST_VECTORS 6
+#define AES_CTR_DEC_TEST_VECTORS 6

static struct cipher_testvec aes_enc_tv_template[] = {
{ /* From FIPS-197 */
@@ -2784,6 +2786,191 @@ static struct cipher_testvec aes_lrw_dec
}
};

+
+static struct cipher_testvec aes_ctr_enc_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .rlen = 16,
+ }, {
+
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .rlen = 32,
+ },
+};
+
+static struct cipher_testvec aes_ctr_dec_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ },
+};
+
/* Cast5 test vectors from RFC 2144 */
#define CAST5_ENC_TEST_VECTORS 3
#define CAST5_DEC_TEST_VECTORS 3


2007-09-29 13:33:42

by Herbert Xu

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Thu, Sep 27, 2007 at 03:54:51PM -0500, Joy Latten wrote:
>
> So, for example,
>
> ctr(aes,4,8)
>
> specifies the counter block will be composed of 4 bytes from a
> nonce and 8 bytes from the IV and 4 bytes for counter, which is set.

Could you please add a check to verify that for

ctr(X,Y,Z)

we have

block_size(X) - Y - Z == 4

Return -EINVAL if this fails.
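
In crypto_ctr_alloc(), once the Y/Z attributes have been parsed into
noncesize/ivsize, the check could be as simple as this (sketch only):

        err = -EINVAL;
        if (alg->cra_blocksize - noncesize - ivsize != 4)
                goto out_put_alg;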

> 2. it is assumed that plaintext is multiple of blocksize.

Yes blkcipher will fail if there's any left-over.

Sorry, I think I misled you earlier when you asked about the
block size and left-overs.

The block size of ctr(aes,X,Y) should not be that of the block
size of AES. It should instead be 1 as CTR is a stream cipher.

The API currently doesn't allow that but I'll patch it so
that it does :)

> 3. currently nonce is extracted from the last 4 bytes of key.
> Thus keys entered through setkey() have an additional 32 bits.
> This causes problems for 256-bit keys. For example,
> crypto_ablkcipher_setkey() checks the maximum keysize and
> complains about keysize.
> This issue will be taken care of with the new
> infrastructure/template for combined mode that is planned,
> and appropriate changes will be made to crypto_ctr_setkey()
> and testcases.

You should instead increase min_keysize/max_keysize accordingly.

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-10-02 05:51:07

by Joy Latten

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

>On Thu, Sep 27, 2007 at 03:54:51PM -0500, Joy Latten wrote:
>>
>> So, for example,
>>
>> ctr(aes,4,8)
>>
>> specifies the counter block will be composed of 4 bytes from a
>> nonce and 8 bytes from the IV and 4 bytes for counter, which is set.
>
>Could you please add a check to verify that for
>
> ctr(X,Y,Z)
>
>we have
>
> block_size(X) - Y - Z == 4
>
>Return -EINVAL if this fails.

Ok, I already had a similar check, but it was
in crypto_ctr_init_tfm(). I don't think that was the
best place for it.

I have moved this check to crypto_ctr_alloc().

>> 2. it is assumed that plaintext is multiple of blocksize.
>
>Yes blkcipher will fail if there's any left-over.
>
>Sorry, I think I misled you earlier when you asked about the
>block size and left-overs.
>
>The block size of ctr(aes,X,Y) should not be that of the block
>size of AES. It should instead be 1 as CTR is a stream cipher.
>
>The API currently doesn't allow that but I'll patch it so
>that it does :)
>

Ok, I think I understand.
the blocksize of aes is still 16, but that of ctr(aes,X,Y) is 1...

Thus my instance of ctr(aes,X,Y) should have a
blocksize of 1, right? I have changed,

inst->alg.cra_blocksize = alg->cra_blocksize;
to
inst->alg.cra_blocksize = 1;

So, the correct way to say it is that my plaintext should be
a multiple of the cipher's blocksize, not CTR's blocksize?


>> 3. currently nonce is extracted from the last 4 bytes of key.
>> Thus keys entered through setkey() have an additional 32 bits.
>> This causes problems for 256-bit keys. For example,
>> crypto_ablkcipher_setkey() checks the maximum keysize and
>> complains about keysize.
>> This issue will be taken care of with the new
>> infrastructure/template for combined mode that is planned,
>> and appropriate changes will be made to crypto_ctr_setkey()
>> and testcases.
>
>You should instead increase min_keysize/max_keysize accordingly.

Ok. I should have thought of that! :-)

I have made the suggested changes in the new patch below.
This time, all the ctr testcases passed. :-)

Thanks!

Joy

Signed-off-by: Joy Latten <[email protected]>


diff -urpN linux-2.6.22.aead/crypto/ctr.c linux-2.6.22.aead.patch/crypto/ctr.c
--- linux-2.6.22.aead/crypto/ctr.c 1969-12-31 18:00:00.000000000 -0600
+++ linux-2.6.22.aead.patch/crypto/ctr.c 2007-10-02 00:28:57.000000000 -0500
@@ -0,0 +1,399 @@
+/*
+ * CTR: Counter mode
+ *
+ * (C) Copyright IBM Corp. 2007 - Joy Latten <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+struct ctr_instance_ctx {
+ struct crypto_spawn alg;
+ unsigned int noncesize;
+ unsigned int ivsize;
+};
+
+struct crypto_ctr_ctx {
+ struct crypto_cipher *child;
+ u8 *nonce;
+ void (*xor)(u8 *dst, const u8 *src, unsigned int bs);
+};
+
+static void ctr_inc(__be32 *counter)
+{
+ u32 c;
+
+ c = be32_to_cpu(counter[3]);
+ c++;
+ counter[3] = cpu_to_be32(c);
+}
+
+static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
+{
+ do {
+ *a++ ^= *b++;
+ } while (--bs);
+}
+
+static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
+{
+ u32 *a = (u32 *)dst;
+ u32 *b = (u32 *)src;
+
+ do {
+ *a++ ^= *b++;
+ } while ((bs -= 4));
+}
+
+static void xor_64(u8 *a, const u8 *b, unsigned int bs)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+}
+
+static void xor_128(u8 *a, const u8 *b, unsigned int bs)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+ ((u32 *)a)[2] ^= ((u32 *)b)[2];
+ ((u32 *)a)[3] ^= ((u32 *)b)[3];
+}
+
+static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(parent));
+
+ unsigned int noncelen = ictx->noncesize;
+ int err;
+
+ /* the nonce is stored in bytes at end of key */
+ if (keylen < noncelen) {
+ err = -EINVAL;
+ return err;
+ }
+
+ ctx->nonce = (u8 *)(key + (keylen - noncelen));
+ keylen -= noncelen;
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(child, key, keylen);
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ void (*xor)(u8 *, const u8 *, unsigned int))
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), dst, ctrblk);
+ xor(dst, src, bsize);
+
+ /* increment counter in counterblock */
+ ctr_inc((__be32 *)ctrblk);
+
+ src += bsize;
+ dst += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+ return nbytes;
+}
+
+static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ void (*xor)(u8 *, const u8 *, unsigned int))
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 ks[bsize + alignmask];
+ u8 *keystream = (u8 *)ALIGN((unsigned long)ks, alignmask + 1);
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
+ xor(src, keystream, bsize);
+
+ /* increment counter in counterblock */
+ ctr_inc((__be32 *)ctrblk);
+
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+ return nbytes;
+}
+
+static int crypto_ctr_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct blkcipher_walk walk;
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+ int bsize = crypto_cipher_blocksize(child);
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
+ u8 counterblk[bsize];
+ void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
+
+ /* set up counter block */
+ memset(counterblk, 0 , bsize);
+ memcpy(counterblk, ctx->nonce, ictx->noncesize);
+ memcpy(counterblk + ictx->noncesize, walk.iv, ictx->ivsize);
+
+ /* initialize counter portion of counter block */
+ /* counter's size is 4 bytes and checked when initializing tfm. */
+ ctr_inc((__be32 *)counterblk);
+
+ while (walk.nbytes) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+ nbytes = crypto_ctr_crypt_inplace(&walk, child,
+ counterblk, xor);
+ else
+ nbytes = crypto_ctr_crypt_segment(&walk, child,
+ counterblk, xor);
+
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+ return err;
+}
+
+static int crypto_ctr_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct blkcipher_walk walk;
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+ int bsize = crypto_cipher_blocksize(child);
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
+ u8 counterblk[bsize];
+ void (*xor)(u8 *, const u8 *, unsigned int bs) = ctx->xor;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt(desc, &walk);
+
+ /* set up counter block */
+ memset(counterblk, 0 , bsize);
+ memcpy(counterblk, ctx->nonce, ictx->noncesize);
+ memcpy(counterblk + ictx->noncesize, walk.iv, ictx->ivsize);
+
+ /* initialize counter portion of counter block */
+ /* counter's size is 4 bytes and checked when initializing tfm. */
+ ctr_inc((__be32 *)counterblk);
+
+ while (walk.nbytes) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+ nbytes = crypto_ctr_crypt_inplace(&walk, child,
+ counterblk, xor);
+ else
+ nbytes = crypto_ctr_crypt_segment(&walk, child,
+ counterblk, xor);
+
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+
+ return err;
+}
+
+static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+ unsigned int blocksize;
+
+ blocksize = crypto_tfm_alg_blocksize(tfm);
+
+ switch(blocksize) {
+ case 8:
+ ctx->xor = xor_64;
+ break;
+ case 16:
+ ctx->xor = xor_128;
+ break;
+ default:
+ if (blocksize % 4)
+ ctx->xor = xor_byte;
+ else
+ ctx->xor = xor_quad;
+ }
+
+ cipher = crypto_spawn_cipher(&ictx->alg);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+ return 0;
+}
+
+static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+ struct ctr_instance_ctx *ctx;
+ unsigned int noncesize;
+ unsigned int ivsize;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
+ if (err)
+ return ERR_PTR(err);
+
+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ err = crypto_attr_u32(tb[2], &noncesize);
+ if (err)
+ goto out_put_alg;
+
+ err = crypto_attr_u32(tb[3], &ivsize);
+ if (err)
+ goto out_put_alg;
+
+ /* verify size of nonce + iv + counter */
+ err = -EINVAL;
+ if ((alg->cra_blocksize - noncesize - ivsize) != 4)
+ goto out_put_alg;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!inst)
+ goto out_put_alg;
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_driver_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ ctx = crypto_instance_ctx(inst);
+ ctx->noncesize = noncesize;
+ ctx->ivsize = ivsize;
+
+ err = crypto_init_spawn(&ctx->alg, alg, inst,
+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+ if (err)
+ goto err_free_inst;
+
+ err = 0;
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = 1;
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+ if (!(alg->cra_blocksize % 4))
+ inst->alg.cra_alignmask |= 3;
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize
+ + noncesize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize
+ + noncesize;
+
+ inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
+
+ inst->alg.cra_init = crypto_ctr_init_tfm;
+ inst->alg.cra_exit = crypto_ctr_exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
+ inst->alg.cra_blkcipher.encrypt = crypto_ctr_encrypt;
+ inst->alg.cra_blkcipher.decrypt = crypto_ctr_decrypt;
+
+err_free_inst:
+ if (err)
+ kfree(inst);
+
+out_put_alg:
+ crypto_mod_put(alg);
+
+ if (err)
+ inst = ERR_PTR(err);
+
+ return inst;
+}
+
+static void crypto_ctr_free(struct crypto_instance *inst)
+{
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+
+ crypto_drop_spawn(&ictx->alg);
+ kfree(inst);
+}
+
+static struct crypto_template crypto_ctr_tmpl = {
+ .name = "ctr",
+ .alloc = crypto_ctr_alloc,
+ .free = crypto_ctr_free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_ctr_module_init(void)
+{
+ return crypto_register_template(&crypto_ctr_tmpl);
+}
+
+static void __exit crypto_ctr_module_exit(void)
+{
+ crypto_unregister_template(&crypto_ctr_tmpl);
+}
+
+module_init(crypto_ctr_module_init);
+module_exit(crypto_ctr_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("CTR Counter block mode");
diff -urpN linux-2.6.22.aead/crypto/Kconfig linux-2.6.22.aead.patch/crypto/Kconfig
--- linux-2.6.22.aead/crypto/Kconfig 2007-09-21 10:08:17.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Kconfig 2007-09-27 13:46:19.000000000 -0500
@@ -187,6 +187,15 @@ config CRYPTO_LRW
The first 128, 192 or 256 bits in the key are used for AES and the
rest is used to tie each cipher block to its logical position.

+config CRYPTO_CTR
+ tristate "CTR support"
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_MANAGER
+ default m
+ help
+ CTR: Counter mode
+ This block cipher algorithm is required for IPSec.
+
config CRYPTO_CRYPTD
tristate "Software async crypto daemon"
select CRYPTO_ABLKCIPHER
diff -urpN linux-2.6.22.aead/crypto/Makefile linux-2.6.22.aead.patch/crypto/Makefile
--- linux-2.6.22.aead/crypto/Makefile 2007-09-21 10:08:18.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Makefile 2007-09-27 13:46:26.000000000 -0500
@@ -31,6 +31,7 @@ obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
+obj-$(CONFIG_CRYPTO_CTR) += ctr.o
obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
diff -urpN linux-2.6.22.aead/crypto/tcrypt.c linux-2.6.22.aead.patch/crypto/tcrypt.c
--- linux-2.6.22.aead/crypto/tcrypt.c 2007-09-21 10:08:17.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.c 2007-09-27 13:47:29.000000000 -0500
@@ -955,6 +955,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);

//CAST5
test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
@@ -1132,6 +1136,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);
break;

case 11:
diff -urpN linux-2.6.22.aead/crypto/tcrypt.h linux-2.6.22.aead.patch/crypto/tcrypt.h
--- linux-2.6.22.aead/crypto/tcrypt.h 2007-09-21 10:08:18.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.h 2007-09-27 13:47:33.000000000 -0500
@@ -2144,6 +2144,8 @@ static struct cipher_testvec cast6_dec_t
#define AES_CBC_DEC_TEST_VECTORS 2
#define AES_LRW_ENC_TEST_VECTORS 8
#define AES_LRW_DEC_TEST_VECTORS 8
+#define AES_CTR_ENC_TEST_VECTORS 6
+#define AES_CTR_DEC_TEST_VECTORS 6

static struct cipher_testvec aes_enc_tv_template[] = {
{ /* From FIPS-197 */
@@ -2784,6 +2786,191 @@ static struct cipher_testvec aes_lrw_dec
}
};

+
+static struct cipher_testvec aes_ctr_enc_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .rlen = 16,
+ }, {
+
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .rlen = 32,
+ },
+};
+
+static struct cipher_testvec aes_ctr_dec_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ },
+};
+
/* Cast5 test vectors from RFC 2144 */
#define CAST5_ENC_TEST_VECTORS 3
#define CAST5_DEC_TEST_VECTORS 3

2007-10-03 10:22:07

by Herbert Xu

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

Hi Joy:

On Tue, Oct 02, 2007 at 12:47:09AM -0500, Joy Latten wrote:
>
> So, the correct way to say it is that my plaintext should be
> multiple of cipher's blocksize, not CTR's blocksize?

It won't be. CTR is a stream cipher which means that it can
deal with any plain text without padding it to form a block.
So the last block may be shorter than the underlying block
size. So if the last block is n bytes, then you use the first
n bytes of your keystream and discard the rest.

> I have made suggested changes in new patch below.
> This time, all the ctr testcases passed. :-)

Excellent! I think we're really close now :)

> +static void ctr_inc(__be32 *counter)
> +{
> + u32 c;
> +
> + c = be32_to_cpu(counter[3]);
> + c++;
> + counter[3] = cpu_to_be32(c);
> +}

We can't assume that the counter block is always 16 bytes
since that depends on the underlying block size. It's probably
easiest if the caller computes the correct counter position and
gives that to us.

BTW, it isn't that hard to support arbitrary counter sizes so
we should fix that too and remove the counter size == 4 check.

Here's how you can do it:

static void ctr_inc_32(u8 *a, int size)
{
__be32 *b = (__be32 *)a;

*b = cpu_to_be32(be32_to_cpu(*b) + 1);
}

static void __ctr_inc_byte(u8 *a, int size)
{
__be8 *b = (__be8 *)(a + size);
u8 c;

do {
c = be8_to_cpu(*--b) + 1;
*b = cpu_to_be8(c);
if (c)
break;
} while (--size);
}

static void ctr_inc_quad(u8 *a, int size)
{
__be32 *b = (__be32 *)(a + size);
u32 c;

for (; size >= 4; size -= 4) {
c = be32_to_cpu(*--b) + 1;
*b = cpu_to_be32(c);
if (c)
return;
}

__ctr_inc_byte(a, size);
}

Just select the ctr function where you select xor.

> +static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
> + unsigned int keylen)
> +{
> + struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
> + struct crypto_cipher *child = ctx->child;
> + struct ctr_instance_ctx *ictx =
> + crypto_instance_ctx(crypto_tfm_alg_instance(parent));
> +
> + unsigned int noncelen = ictx->noncesize;
> + int err;
> +
> + /* the nonce is stored in bytes at end of key */
> + if (keylen < noncelen) {
> + err = -EINVAL;
> + return err;
> + }
> +
> + ctx->nonce = (u8 *)(key + (keylen - noncelen));

This won't work as we can't hold a reference to the caller's
data. You need to allocate space for it in the context and
copy it.
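
A minimal sketch of what is meant, assuming struct crypto_ctr_ctx
grows its own nonce buffer (the MAX_NONCE_SIZE bound is made up here)
instead of the current u8 pointer:

        /* in struct crypto_ctr_ctx: u8 nonce[MAX_NONCE_SIZE]; */
        memcpy(ctx->nonce, key + keylen - noncelen, noncelen);
        keylen -= noncelen;

That way nothing keeps pointing into the caller's key buffer after
setkey() returns.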

> +static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
> + struct crypto_cipher *tfm, u8 *ctrblk,
> + void (*xor)(u8 *, const u8 *, unsigned int))
> +{
> + void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
> + crypto_cipher_alg(tfm)->cia_encrypt;
> + int bsize = crypto_cipher_blocksize(tfm);
> + unsigned int nbytes = walk->nbytes;
> + u8 *src = walk->src.virt.addr;
> + u8 *dst = walk->dst.virt.addr;
> +
> + do {
> + /* create keystream */
> + fn(crypto_cipher_tfm(tfm), dst, ctrblk);
> + xor(dst, src, bsize);
> +
> + /* increment counter in counterblock */
> + ctr_inc((__be32 *)ctrblk);
> +
> + src += bsize;
> + dst += bsize;
> + } while ((nbytes -= bsize) >= bsize);

As I mentioned above we need to deal with partial blocks at
the end. The easiest way is to change the loop termination
to:

if (nbytes < bsize)
break;
nbytes -= bsize;

and change

xor(dst, src, bsize);

to

xor(dst, src, min(nbytes, bsize));

Let's also get rid of the xor selection and just use this xor
function:

static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
{
u32 *a = (u32 *)dst;
u32 *b = (u32 *)src;

for (; bs >= 4; bs -= 4)
*a++ ^= *b++;

xor_byte((u8 *)a, (u8 *)b, bs);
}

> +static int crypto_ctr_encrypt(struct blkcipher_desc *desc,
> + struct scatterlist *dst, struct scatterlist *src,
> + unsigned int nbytes)
> +{
> + struct blkcipher_walk walk;
> + struct crypto_blkcipher *tfm = desc->tfm;
> + struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
> + struct crypto_cipher *child = ctx->child;
> + int bsize = crypto_cipher_blocksize(child);
> + struct ctr_instance_ctx *ictx =
> + crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
> + u8 counterblk[bsize];

This needs to be aligned by the underlying mask and at least 4.

> +static int crypto_ctr_decrypt(struct blkcipher_desc *desc,
> + struct scatterlist *dst, struct scatterlist *src,
> + unsigned int nbytes)

This is identical to the encrypt function so we can delete one
of them.
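
That is, keep one crypt routine and register it for both directions,
something along the lines of (crypto_ctr_crypt being whatever name
the merged function ends up with):

        inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt;
        inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt;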

> + err = crypto_attr_u32(tb[2], &noncesize);
> + if (err)
> + goto out_put_alg;
> +
> + err = crypto_attr_u32(tb[3], &ivsize);
> + if (err)
> + goto out_put_alg;
> +
> + /* verify size of nonce + iv + counter */
> + err = -EINVAL;
> + if ((alg->cra_blocksize - noncesize - ivsize) != 4)
> + goto out_put_alg;

We should also check that noncesize/ivsize are less than the
block size.
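
Sketch of the extra check, next to the one quoted above:

        err = -EINVAL;
        if (noncesize >= alg->cra_blocksize || ivsize >= alg->cra_blocksize)
                goto out_put_alg;

Otherwise the unsigned subtraction above can wrap around and still
compare equal to 4.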

> + inst->alg.cra_alignmask = alg->cra_alignmask;

This should just be 3.

> + if (!(alg->cra_blocksize % 4))
> + inst->alg.cra_alignmask |= 3;

This can go.

> + inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;

This should be ivsize.

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-10-03 10:28:53

by Herbert Xu

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, Oct 03, 2007 at 06:21:49PM +0800, Herbert Xu wrote:

> static void __ctr_inc_byte(u8 *a, int size)
> {
> __be8 *b = (__be8 *)(a + size);
> u8 c;
>
> do {
> c = be8_to_cpu(*--b) + 1;
> *b = cpu_to_be8(c);
> if (c)
> break;
> } while (--size);

This should be a for loop and we can make it inline too.

static inline void __ctr_inc_byte(u8 *a, int size)
{
__be8 *b = (__be8 *)(a + size);
u8 c;

for (; size; size--) {
c = be8_to_cpu(*--b) + 1;
*b = cpu_to_be8(c);
if (c)
break;
}
}

> Let's also get rid of the xor selection and just use this xor
> function:
>
> static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
> {
> u32 *a = (u32 *)dst;
> u32 *b = (u32 *)src;
>
> for (; bs >= 4; bs -= 4)
> *a++ ^= *b++;
>
> xor_byte((u8 *)a, (u8 *)b, bs);
> }

xor_byte should be constructed in a similar fashion.
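
Presumably something along these lines:

static inline void xor_byte(u8 *a, const u8 *b, unsigned int bs)
{
        for (; bs; bs--)
                *a++ ^= *b++;
}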

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-10-03 20:48:00

by Joy Latten

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, 2007-10-03 at 18:28 +0800, Herbert Xu wrote:
> On Wed, Oct 03, 2007 at 06:21:49PM +0800, Herbert Xu wrote:
>
> > static void __ctr_inc_byte(u8 *a, int size)
> > {
> > __be8 *b = (__be8 *)(a + size);
> > u8 c;
> >
> > do {
> > c = be8_to_cpu(*--b) + 1;
> > *b = cpu_to_be8(c);
> > if (c)
> > break;
> > } while (--size);
>
> This should be a for loop and we can make it inline too.
>
Besides being used in ctr_inc_quad(), isn't __ctr_inc_byte() also
allowing for counters that might be smaller than an integer (4 bytes)?

Regards,
Joy

2007-10-03 23:21:24

by Joy Latten

Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, 2007-10-03 at 18:21 +0800, Herbert Xu wrote:
> We can't assume that the counter block is always 16 bytes
> since that depends on the underlying block size. It's probably
> easiest if the caller computes the correct counter position and
> gives that to us.
>
> BTW, it isn't that hard to support arbitrary counter sizes so
> we should fix that too and remove the counter size == 4 check.
>
> Here's how you can do it:
>
> static void ctr_inc_32(u8 *a, int size)
> {
> __be32 *b = (__be32 *)a;
>
> *b = cpu_to_be32(be32_to_cpu(*b) + 1);
> }
>
> static void __ctr_inc_byte(u8 *a, int size)
> {
> __be8 *b = (__be8 *)(a + size);
> u8 c;
>
> do {
> c = be8_to_cpu(*--b) + 1;
> *b = cpu_to_be8(c);
> if (c)
> break;
> } while (--size);
> }
>
> static void ctr_inc_quad(u8 *a, int size)
> {
> __be32 *b = (__be32 *)(a + size);
> u32 c;
>
> for (; size >= 4; size -= 4) {
> c = be32_to_cpu(*--b) + 1;
> *b = cpu_to_be32(c);
> if (c)
> return;
> }
>
> __ctr_inc_byte(a, size);
> }
>
I really like this! Something else I should have thought of. :-)

> > + do {
> > + /* create keystream */
> > + fn(crypto_cipher_tfm(tfm), dst, ctrblk);
> > + xor(dst, src, bsize);
> > +
> > + /* increment counter in counterblock */
> > + ctr_inc((__be32 *)ctrblk);
> > +
> > + src += bsize;
> > + dst += bsize;
> > + } while ((nbytes -= bsize) >= bsize);
>
> As I mentioned above we need to deal with partial blocks at
> the end. The easiest way is to change the loop termination
> to:
>
> if (nbytes < bsize)
> break;
> nbytes -= bsize;
>
> and change
>
> xor(dst, src, bsize);
>
> to
>
> xor(dst, src, min(nbytes, bsize));

Ok. Also, I assumed the data we pass to the cipher
algorithms such as aes, des, etc. must be in blocks of blocksize.

Since the last block of data to CTR may be a partial block, I changed
the following in crypto_ctr_crypt_segment(),

/* create keystream */
fn(crypto_cipher_tfm(tfm), dst, ctrblk);
xor_quad(dst, src, min(nbytes, bsize));
to

/* create keystream */
fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
xor_quad(keystream, src, min(nbytes, bsize));
memcpy(dst, keystream, min(nbytes, bsize));

I also changed it such that we return 0 instead of nbytes.

This brings up another question...
In the encrypt function, there's the loop,

while (walk.nbytes) {
if (walk.src.virt.addr == walk.dst.virt.addr)
nbytes = crypto_ctr_crypt_inplace(&walk, ctx,
counterblk,
countersize);
else
nbytes = crypto_ctr_crypt_segment(&walk, ctx,
counterblk,
countersize);

err = blkcipher_walk_done(desc, &walk, nbytes);
}

I assumed that if there is a partial block, it will occur at the
very end of all the data. However, looking at this loop, I wondered
if it was possible to get a partial block somewhere other than the end
while stepping through this loop?
I tried looking through blkcipher.c code, but figured I'd do better
just asking the question. :-)


> > +static int crypto_ctr_encrypt(struct blkcipher_desc *desc,
> > + struct scatterlist *dst, struct scatterlist *src,
> > + unsigned int nbytes)
> > +{
> > + struct blkcipher_walk walk;
> > + struct crypto_blkcipher *tfm = desc->tfm;
> > + struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
> > + struct crypto_cipher *child = ctx->child;
> > + int bsize = crypto_cipher_blocksize(child);
> > + struct ctr_instance_ctx *ictx =
> > + crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
> > + u8 counterblk[bsize];
>
> This needs to be aligned by the underlying mask and at least 4.
>
Ok, sorry I missed that.
However, I don't understand what the check
for 4 is for... shouldn't my bsize already be at least 4?

Will get a new patch out to you shortly.

Thanks!!

Joy

2007-10-04 06:35:45

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, Oct 03, 2007 at 03:43:58PM -0500, Joy Latten wrote:
> On Wed, 2007-10-03 at 18:28 +0800, Herbert Xu wrote:
> > On Wed, Oct 03, 2007 at 06:21:49PM +0800, Herbert Xu wrote:
> >
> > > static void __ctr_inc_byte(u8 *a, int size)
> > > {
> > > __be8 *b = (__be8 *)(a + size);
> > > u8 c;
> > >
> > > do {
> > > c = be8_to_cpu(*--b) + 1;
> > > *b = cpu_to_be8(c);
> > > if (c)
> > > break;
> > > } while (--size);
> >
> > This should be a for loop and we can make it inline too.
>
> Besides being used in ctr_inc_quad(), doesn't __ctr_inc_byte() also
> allow for counters that are smaller than an integer, i.e. less than 4 bytes?

Yeah I suppose since we're selecting between ctr_inc_32 and
ctr_inc_quad anyway we might as well keep ctr_inc_byte.

Of course the other option is to throw away ctr_inc_32 and
just call ctr_inc_quad always.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-10-04 07:27:49

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, Oct 03, 2007 at 06:17:08PM -0500, Joy Latten wrote:
>
> Since the last block of data to CTR may be a partial block, I changed
> the following in crypto_ctr_crypt_segment(),

Good catch. In that case we can probably merge in_place and
_segment into one function.
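
(One possible shape for the merged helper -- a sketch only, assuming
the keystream-buffer approach from your change above; dst would simply
be allowed to alias src:)

	do {
		/* create keystream in a private buffer */
		fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
		xor_quad(keystream, src, min(nbytes, bsize));
		/* safe even when dst == src (in-place) */
		memcpy(dst, keystream, min(nbytes, bsize));

		/* increment counter in counterblock */
		ctr_inc_quad(ctrblk + (bsize - countersize), countersize);

		if (nbytes < bsize)
			break;

		src += bsize;
		dst += bsize;
		nbytes -= bsize;
	} while (nbytes);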

> while (walk.nbytes) {
> if (walk.src.virt.addr == walk.dst.virt.addr)
> nbytes = crypto_ctr_crypt_inplace(&walk, ctx,
> counterblk,
> countersize);
> else
> nbytes = crypto_ctr_crypt_segment(&walk, ctx,
> counterblk,
> countersize);
>
> err = blkcipher_walk_done(desc, &walk, nbytes);
> }
>
> I assumed that if there is a partial block, it will occur at the
> very end of all the data. However, looking at this loop, I wondered

Good point. We need to change the blkcipher helper so that it
lets you walk by a specified block size instead of the block
size of the algorithm.

The following patch should let you get the desired result if
you call blkcipher_walk_virt_block and specify the underlying
block size.
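
(In the CTR encrypt path that would presumably look something like the
sketch below; the surrounding names are the ones from your patch:)

	blkcipher_walk_init(&walk, dst, src, nbytes);
	/* walk in units of the underlying cipher's block size,
	 * even though ctr's own cra_blocksize is 1 */
	err = blkcipher_walk_virt_block(desc, &walk,
					crypto_cipher_blocksize(child));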

> > > + u8 counterblk[bsize];
> >
> > This needs to be aligned by the underlying mask and at least 4.
> >
> Ok, sorry I missed that.
> However, I don't understand what the check
> for 4 is for... shouldn't my bsize already be at least 4?

Depending on the compiler a u8 might only be byte-aligned
regardless of its size.

The 4 is because xor/ctr_inc will access the data in words.
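
(Concretely, the usual idiom is to over-allocate on the stack and round
the pointer up -- roughly what the next version of the patch does:)

	unsigned long alignmask = crypto_cipher_alignmask(child);
	u8 cblk[bsize + alignmask];
	/* round up so xor/ctr_inc can access the block word by word */
	u8 *counterblk = (u8 *)ALIGN((unsigned long)cblk, alignmask + 1);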

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
[CRYPTO] blkcipher: Added blkcipher_walk_virt_block

This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher. This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.

Signed-off-by: Herbert Xu <[email protected]>
diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
index 1b2a14a..77ee73b 100644
--- a/crypto/blkcipher.c
+++ b/crypto/blkcipher.c
@@ -84,8 +84,6 @@ static inline unsigned int blkcipher_done_slow(struct crypto_blkcipher *tfm,
static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
unsigned int n)
{
- n = walk->nbytes - n;
-
if (walk->flags & BLKCIPHER_WALK_COPY) {
blkcipher_map_dst(walk);
memcpy(walk->dst.virt.addr, walk->page, n);
@@ -109,13 +107,15 @@ int blkcipher_walk_done(struct blkcipher_desc *desc,
unsigned int nbytes = 0;

if (likely(err >= 0)) {
- unsigned int bsize = crypto_blkcipher_blocksize(tfm);
- unsigned int n;
+ unsigned int n = walk->nbytes - err;

if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW)))
- n = blkcipher_done_fast(walk, err);
- else
- n = blkcipher_done_slow(tfm, walk, bsize);
+ n = blkcipher_done_fast(walk, n);
+ else if (WARN_ON(err)) {
+ err = -EINVAL;
+ goto err;
+ } else
+ n = blkcipher_done_slow(tfm, walk, n);

nbytes = walk->total - n;
err = 0;
@@ -132,6 +132,7 @@ int blkcipher_walk_done(struct blkcipher_desc *desc,
return blkcipher_walk_next(desc, walk);
}

+err:
if (walk->iv != desc->info)
memcpy(desc->info, walk->iv, crypto_blkcipher_ivsize(tfm));
if (walk->buffer != walk->page)
@@ -225,12 +226,12 @@ static int blkcipher_walk_next(struct blkcipher_desc *desc,
{
struct crypto_blkcipher *tfm = desc->tfm;
unsigned int alignmask = crypto_blkcipher_alignmask(tfm);
- unsigned int bsize = crypto_blkcipher_blocksize(tfm);
+ unsigned int bsize;
unsigned int n;
int err;

n = walk->total;
- if (unlikely(n < bsize)) {
+ if (unlikely(n < crypto_blkcipher_blocksize(tfm))) {
desc->flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
return blkcipher_walk_done(desc, walk, -EINVAL);
}
@@ -247,6 +248,7 @@ static int blkcipher_walk_next(struct blkcipher_desc *desc,
}
}

+ bsize = min(walk->blocksize, n);
n = scatterwalk_clamp(&walk->in, n);
n = scatterwalk_clamp(&walk->out, n);

@@ -277,7 +279,7 @@ static inline int blkcipher_copy_iv(struct blkcipher_walk *walk,
struct crypto_blkcipher *tfm,
unsigned int alignmask)
{
- unsigned bs = crypto_blkcipher_blocksize(tfm);
+ unsigned bs = walk->blocksize;
unsigned int ivsize = crypto_blkcipher_ivsize(tfm);
unsigned aligned_bs = ALIGN(bs, alignmask + 1);
unsigned int size = aligned_bs * 2 + ivsize + max(aligned_bs, ivsize) -
@@ -302,6 +304,7 @@ int blkcipher_walk_virt(struct blkcipher_desc *desc,
struct blkcipher_walk *walk)
{
walk->flags &= ~BLKCIPHER_WALK_PHYS;
+ walk->blocksize = crypto_blkcipher_blocksize(desc->tfm);
return blkcipher_walk_first(desc, walk);
}
EXPORT_SYMBOL_GPL(blkcipher_walk_virt);
@@ -310,6 +313,7 @@ int blkcipher_walk_phys(struct blkcipher_desc *desc,
struct blkcipher_walk *walk)
{
walk->flags |= BLKCIPHER_WALK_PHYS;
+ walk->blocksize = crypto_blkcipher_blocksize(desc->tfm);
return blkcipher_walk_first(desc, walk);
}
EXPORT_SYMBOL_GPL(blkcipher_walk_phys);
@@ -342,6 +346,16 @@ static int blkcipher_walk_first(struct blkcipher_desc *desc,
return blkcipher_walk_next(desc, walk);
}

+int blkcipher_walk_virt_block(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+ unsigned int blocksize)
+{
+ walk->flags &= ~BLKCIPHER_WALK_PHYS;
+ walk->blocksize = blocksize;
+ return blkcipher_walk_first(desc, walk);
+}
+EXPORT_SYMBOL_GPL(blkcipher_walk_virt_block);
+
static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
unsigned int keylen)
{
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 4af72dc..b9b05d3 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -91,6 +91,7 @@ struct blkcipher_walk {
u8 *iv;

int flags;
+ unsigned int blocksize;
};

extern const struct crypto_type crypto_ablkcipher_type;
@@ -129,6 +130,9 @@ int blkcipher_walk_virt(struct blkcipher_desc *desc,
struct blkcipher_walk *walk);
int blkcipher_walk_phys(struct blkcipher_desc *desc,
struct blkcipher_walk *walk);
+int blkcipher_walk_virt_block(struct blkcipher_desc *desc,
+ struct blkcipher_walk *walk,
+ unsigned int blocksize);

static inline void *crypto_tfm_ctx_aligned(struct crypto_tfm *tfm)
{

2007-10-09 19:48:49

by Joy Latten

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

This should contain the geniv as well as all the
improvements discussed. All the testcases pass.

Regards,
Joy

diff -urpN linux-2.6.22.aead/crypto/ctr.c linux-2.6.22.aead.patch/crypto/ctr.c
--- linux-2.6.22.aead/crypto/ctr.c 1969-12-31 18:00:00.000000000 -0600
+++ linux-2.6.22.aead.patch/crypto/ctr.c 2007-10-09 12:12:54.000000000 -0500
@@ -0,0 +1,375 @@
+/*
+ * CTR: Counter mode
+ *
+ * (C) Copyright IBM Corp. 2007 - Joy Latten <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+struct ctr_instance_ctx {
+ struct crypto_spawn alg;
+ unsigned int noncesize;
+ unsigned int ivsize;
+};
+
+struct crypto_ctr_ctx {
+ struct crypto_cipher *child;
+ u8 *nonce;
+};
+
+static inline void __ctr_inc_byte(u8 *a, unsigned int size)
+{
+ u8 *b = (a + size);
+ u8 c;
+
+ for (; size; size--) {
+ c = *--b + 1;
+ *b = c;
+ if (c)
+ break;
+ }
+}
+
+static void ctr_inc_quad(u8 *a, unsigned int size)
+{
+ __be32 *b = (__be32 *)(a + size);
+ u32 c;
+
+ for (; size >= 4; size -= 4) {
+ c = be32_to_cpu(*--b) + 1;
+ *b = cpu_to_be32(c);
+ if (c)
+ return;
+ }
+
+ __ctr_inc_byte(a, size);
+}
+
+static void xor_byte(u8 *a, const u8 *b, unsigned int bs)
+{
+ for (; bs; bs--)
+ *a++ ^= *b++;
+}
+
+static void xor_quad(u8 *dst, const u8 *src, unsigned int bs)
+{
+ u32 *a = (u32 *)dst;
+ u32 *b = (u32 *)src;
+
+ for (; bs >= 4; bs -= 4)
+ *a++ ^= *b++;
+
+ xor_byte((u8 *)a, (u8 *)b, bs);
+}
+
+static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(parent));
+ unsigned int noncelen = ictx->noncesize;
+ int err = 0;
+
+ /* the nonce is stored in bytes at end of key */
+ if (keylen < noncelen)
+ return -EINVAL;
+
+ memcpy(ctx->nonce, key + (keylen - noncelen), noncelen);
+
+ keylen -= noncelen;
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(child, key, keylen);
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+
+ return err;
+}
+
+static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ unsigned int countersize)
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ unsigned int bsize = crypto_cipher_blocksize(tfm);
+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
+ u8 ks[bsize + alignmask];
+ u8 *keystream = (u8 *)ALIGN((unsigned long)ks, alignmask + 1);
+ u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+ unsigned int nbytes = walk->nbytes;
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
+ xor_quad(keystream, src, min(nbytes, bsize));
+
+ /* copy result into dst */
+ memcpy(dst, keystream, min(nbytes, bsize));
+
+ /* increment counter in counterblock */
+ ctr_inc_quad(ctrblk + (bsize - countersize), countersize);
+
+ if (nbytes < bsize)
+ break;
+
+ src += bsize;
+ dst += bsize;
+ nbytes -= bsize;
+
+ } while (nbytes);
+
+ return 0;
+}
+
+static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
+ struct crypto_cipher *tfm, u8 *ctrblk,
+ unsigned int countersize)
+{
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+ crypto_cipher_alg(tfm)->cia_encrypt;
+ unsigned int bsize = crypto_cipher_blocksize(tfm);
+ unsigned long alignmask = crypto_cipher_alignmask(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 ks[bsize + alignmask];
+ u8 *keystream = (u8 *)ALIGN((unsigned long)ks, alignmask + 1);
+
+ do {
+ /* create keystream */
+ fn(crypto_cipher_tfm(tfm), keystream, ctrblk);
+ xor_quad(src, keystream, min(nbytes, bsize));
+
+ /* increment counter in counterblock */
+ ctr_inc_quad(ctrblk + (bsize - countersize), countersize);
+
+ if (nbytes < bsize)
+ break;
+
+ src += bsize;
+ nbytes -= bsize;
+
+ } while (nbytes);
+
+ return 0;
+}
+
+static int crypto_ctr_crypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct blkcipher_walk walk;
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+ unsigned int bsize = crypto_cipher_blocksize(child);
+ struct ctr_instance_ctx *ictx =
+ crypto_instance_ctx(crypto_tfm_alg_instance(&tfm->base));
+ unsigned long alignmask = crypto_cipher_alignmask(child);
+ u8 cblk[bsize + alignmask];
+ u8 *counterblk = (u8 *)ALIGN((unsigned long)cblk, alignmask + 1);
+ unsigned int countersize;
+ int err;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt_block(desc, &walk, bsize);
+
+ /* set up counter block */
+ memset(counterblk, 0, bsize);
+ memcpy(counterblk, ctx->nonce, ictx->noncesize);
+ memcpy(counterblk + ictx->noncesize, walk.iv, ictx->ivsize);
+
+ /* initialize counter portion of counter block */
+ countersize = bsize - ictx->noncesize - ictx->ivsize;
+ ctr_inc_quad(counterblk + (bsize - countersize), countersize);
+
+ while (walk.nbytes) {
+ if (walk.src.virt.addr == walk.dst.virt.addr)
+ nbytes = crypto_ctr_crypt_inplace(&walk, child,
+ counterblk,
+ countersize);
+ else
+ nbytes = crypto_ctr_crypt_segment(&walk, child,
+ counterblk,
+ countersize);
+
+ err = blkcipher_walk_done(desc, &walk, nbytes);
+ }
+ return err;
+}
+
+static void crypto_ctr_geniv(struct crypto_blkcipher *tfm, u8 *iv, u64 seq)
+{
+ get_random_bytes(iv, crypto_blkcipher_ivsize(tfm));
+}
+
+static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+
+ ctx->nonce = kzalloc(ictx->noncesize, GFP_KERNEL);
+ if (!ctx->nonce)
+ return -ENOMEM;
+
+ cipher = crypto_spawn_cipher(&ictx->alg);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+
+ return 0;
+}
+
+static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ kfree(ctx->nonce);
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+ struct ctr_instance_ctx *ictx;
+ unsigned int noncesize;
+ unsigned int ivsize;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
+ if (err)
+ return ERR_PTR(err);
+
+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ err = crypto_attr_u32(tb[2], &noncesize);
+ if (err)
+ goto out_put_alg;
+
+ err = crypto_attr_u32(tb[3], &ivsize);
+ if (err)
+ goto out_put_alg;
+
+ /* verify size of nonce + iv + counter */
+ err = -EINVAL;
+ if ((noncesize + ivsize) >= alg->cra_blocksize)
+ goto out_put_alg;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!inst)
+ goto out_put_alg;
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "ctr(%s,%u,%u)", alg->cra_driver_name, noncesize,
+ ivsize) >= CRYPTO_MAX_ALG_NAME) {
+ goto err_free_inst;
+ }
+
+ ictx = crypto_instance_ctx(inst);
+ ictx->noncesize = noncesize;
+ ictx->ivsize = ivsize;
+
+ err = crypto_init_spawn(&ictx->alg, alg, inst,
+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+ if (err)
+ goto err_free_inst;
+
+ err = 0;
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = 1;
+ inst->alg.cra_alignmask = 3;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+ inst->alg.cra_blkcipher.ivsize = ivsize;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize
+ + noncesize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize
+ + noncesize;
+
+ inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
+
+ inst->alg.cra_init = crypto_ctr_init_tfm;
+ inst->alg.cra_exit = crypto_ctr_exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
+ inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt;
+ inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt;
+ inst->alg.cra_blkcipher.geniv = crypto_ctr_geniv;
+
+err_free_inst:
+ if (err)
+ kfree(inst);
+
+out_put_alg:
+ crypto_mod_put(alg);
+
+ if (err)
+ inst = ERR_PTR(err);
+
+ return inst;
+}
+
+static void crypto_ctr_free(struct crypto_instance *inst)
+{
+ struct ctr_instance_ctx *ictx = crypto_instance_ctx(inst);
+
+ crypto_drop_spawn(&ictx->alg);
+ kfree(inst);
+}
+
+static struct crypto_template crypto_ctr_tmpl = {
+ .name = "ctr",
+ .alloc = crypto_ctr_alloc,
+ .free = crypto_ctr_free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_ctr_module_init(void)
+{
+ return crypto_register_template(&crypto_ctr_tmpl);
+}
+
+static void __exit crypto_ctr_module_exit(void)
+{
+ crypto_unregister_template(&crypto_ctr_tmpl);
+}
+
+module_init(crypto_ctr_module_init);
+module_exit(crypto_ctr_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("CTR Counter block mode");
diff -urpN linux-2.6.22.aead/crypto/Kconfig linux-2.6.22.aead.patch/crypto/Kconfig
--- linux-2.6.22.aead/crypto/Kconfig 2007-10-09 10:24:57.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Kconfig 2007-10-09 11:06:29.000000000 -0500
@@ -187,6 +187,15 @@ config CRYPTO_LRW
The first 128, 192 or 256 bits in the key are used for AES and the
rest is used to tie each cipher block to its logical position.

+config CRYPTO_CTR
+ tristate "CTR support"
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_MANAGER
+ default m
+ help
+ CTR: Counter mode
+ This block cipher algorithm is required for IPSec.
+
config CRYPTO_CRYPTD
tristate "Software async crypto daemon"
select CRYPTO_ABLKCIPHER
diff -urpN linux-2.6.22.aead/crypto/Makefile linux-2.6.22.aead.patch/crypto/Makefile
--- linux-2.6.22.aead/crypto/Makefile 2007-10-09 10:24:57.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/Makefile 2007-10-09 11:06:20.000000000 -0500
@@ -31,6 +31,7 @@ obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
+obj-$(CONFIG_CRYPTO_CTR) += ctr.o
obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o
diff -urpN linux-2.6.22.aead/crypto/tcrypt.c linux-2.6.22.aead.patch/crypto/tcrypt.c
--- linux-2.6.22.aead/crypto/tcrypt.c 2007-10-09 09:58:58.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.c 2007-10-09 11:40:58.000000000 -0500
@@ -955,6 +955,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);

//CAST5
test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
@@ -1132,6 +1136,10 @@ static void do_test(void)
AES_LRW_ENC_TEST_VECTORS);
test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
AES_LRW_DEC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", ENCRYPT, aes_ctr_enc_tv_template,
+ AES_CTR_ENC_TEST_VECTORS);
+ test_cipher("ctr(aes,4,8)", DECRYPT, aes_ctr_dec_tv_template,
+ AES_CTR_DEC_TEST_VECTORS);
break;

case 11:
diff -urpN linux-2.6.22.aead/crypto/tcrypt.h linux-2.6.22.aead.patch/crypto/tcrypt.h
--- linux-2.6.22.aead/crypto/tcrypt.h 2007-10-09 09:58:58.000000000 -0500
+++ linux-2.6.22.aead.patch/crypto/tcrypt.h 2007-10-09 12:04:41.000000000 -0500
@@ -2144,6 +2144,8 @@ static struct cipher_testvec cast6_dec_t
#define AES_CBC_DEC_TEST_VECTORS 2
#define AES_LRW_ENC_TEST_VECTORS 8
#define AES_LRW_DEC_TEST_VECTORS 8
+#define AES_CTR_ENC_TEST_VECTORS 6
+#define AES_CTR_DEC_TEST_VECTORS 6

static struct cipher_testvec aes_enc_tv_template[] = {
{ /* From FIPS-197 */
@@ -2784,6 +2786,189 @@ static struct cipher_testvec aes_lrw_dec
}
};

+
+static struct cipher_testvec aes_ctr_enc_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .rlen = 16,
+ }, {
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { "Single block msg" },
+ .ilen = 16,
+ .result = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .ilen = 32,
+ .result = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .rlen = 32,
+ },
+};
+
+static struct cipher_testvec aes_ctr_dec_tv_template[] = {
+ { /* From RFC 3686 */
+ .key = { 0xae, 0x68, 0x52, 0xf8, 0x12, 0x10, 0x67, 0xcc,
+ 0x4b, 0xf7, 0xa5, 0x76, 0x55, 0x77, 0xf3, 0x9e,
+ 0x00, 0x00, 0x00, 0x30 },
+ .klen = 20,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0xe4, 0x09, 0x5d, 0x4f, 0xb7, 0xa7, 0xb3, 0x79,
+ 0x2d, 0x61, 0x75, 0xa3, 0x26, 0x13, 0x11, 0xb8 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0x7e, 0x24, 0x06, 0x78, 0x17, 0xfa, 0xe0, 0xd7,
+ 0x43, 0xd6, 0xce, 0x1f, 0x32, 0x53, 0x91, 0x63,
+ 0x00, 0x6c, 0xb6, 0xdb },
+ .klen = 20,
+ .iv = { 0xc0, 0x54, 0x3b, 0x59, 0xda, 0x48, 0xd9, 0x0b },
+ .input = { 0x51, 0x04, 0xa1, 0x06, 0x16, 0x8a, 0x72, 0xd9,
+ 0x79, 0x0d, 0x41, 0xee, 0x8e, 0xda, 0xd3, 0x88,
+ 0xeb, 0x2e, 0x1e, 0xfc, 0x46, 0xda, 0x57, 0xc8,
+ 0xfc, 0xe6, 0x30, 0xdf, 0x91, 0x41, 0xbe, 0x28 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x16, 0xaf, 0x5b, 0x14, 0x5f, 0xc9, 0xf5, 0x79,
+ 0xc1, 0x75, 0xf9, 0x3e, 0x3b, 0xfb, 0x0e, 0xed,
+ 0x86, 0x3d, 0x06, 0xcc, 0xfd, 0xb7, 0x85, 0x15,
+ 0x00, 0x00, 0x00, 0x48 },
+ .klen = 28,
+ .iv = { 0x36, 0x73, 0x3c, 0x14, 0x7d, 0x6d, 0x93, 0xcb },
+ .input = { 0x4b, 0x55, 0x38, 0x4f, 0xe2, 0x59, 0xc9, 0xc8,
+ 0x4e, 0x79, 0x35, 0xa0, 0x03, 0xcb, 0xe9, 0x28 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0x7c, 0x5c, 0xb2, 0x40, 0x1b, 0x3d, 0xc3, 0x3c,
+ 0x19, 0xe7, 0x34, 0x08, 0x19, 0xe0, 0xf6, 0x9c,
+ 0x67, 0x8c, 0x3d, 0xb8, 0xe6, 0xf6, 0xa9, 0x1a,
+ 0x00, 0x96, 0xb0, 0x3b },
+ .klen = 28,
+ .iv = { 0x02, 0x0c, 0x6e, 0xad, 0xc2, 0xcb, 0x50, 0x0d },
+ .input = { 0x45, 0x32, 0x43, 0xfc, 0x60, 0x9b, 0x23, 0x32,
+ 0x7e, 0xdf, 0xaa, 0xfa, 0x71, 0x31, 0xcd, 0x9f,
+ 0x84, 0x90, 0x70, 0x1c, 0x5a, 0xd4, 0xa7, 0x9c,
+ 0xfc, 0x1f, 0xe0, 0xff, 0x42, 0xf4, 0xfb, 0x00 },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ }, {
+ .key = { 0x77, 0x6b, 0xef, 0xf2, 0x85, 0x1d, 0xb0, 0x6f,
+ 0x4c, 0x8a, 0x05, 0x42, 0xc8, 0x69, 0x6f, 0x6c,
+ 0x6a, 0x81, 0xaf, 0x1e, 0xec, 0x96, 0xb4, 0xd3,
+ 0x7f, 0xc1, 0xd6, 0x89, 0xe6, 0xc1, 0xc1, 0x04,
+ 0x00, 0x00, 0x00, 0x60 },
+ .klen = 36,
+ .iv = { 0xdb, 0x56, 0x72, 0xc9, 0x7a, 0xa8, 0xf0, 0xb2 },
+ .input = { 0x14, 0x5a, 0xd0, 0x1d, 0xbf, 0x82, 0x4e, 0xc7,
+ 0x56, 0x08, 0x63, 0xdc, 0x71, 0xe3, 0xe0, 0xc0 },
+ .ilen = 16,
+ .result = { "Single block msg" },
+ .rlen = 16,
+ }, {
+ .key = { 0xf6, 0xd6, 0x6d, 0x6b, 0xd5, 0x2d, 0x59, 0xbb,
+ 0x07, 0x96, 0x36, 0x58, 0x79, 0xef, 0xf8, 0x86,
+ 0xc6, 0x6d, 0xd5, 0x1a, 0x5b, 0x6a, 0x99, 0x74,
+ 0x4b, 0x50, 0x59, 0x0c, 0x87, 0xa2, 0x38, 0x84,
+ 0x00, 0xfa, 0xac, 0x24 },
+ .klen = 36,
+ .iv = { 0xc1, 0x58, 0x5e, 0xf1, 0x5a, 0x43, 0xd8, 0x75 },
+ .input = { 0xf0, 0x5e, 0x23, 0x1b, 0x38, 0x94, 0x61, 0x2c,
+ 0x49, 0xee, 0x00, 0x0b, 0x80, 0x4e, 0xb2, 0xa9,
+ 0xb8, 0x30, 0x6b, 0x50, 0x8f, 0x83, 0x9d, 0x6a,
+ 0x55, 0x30, 0x83, 0x1d, 0x93, 0x44, 0xaf, 0x1c },
+ .ilen = 32,
+ .result = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ .rlen = 32,
+ },
+};
+
/* Cast5 test vectors from RFC 2144 */
#define CAST5_ENC_TEST_VECTORS 3
#define CAST5_DEC_TEST_VECTORS 3

2007-10-10 15:18:10

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Tue, Oct 09, 2007 at 02:44:40PM -0500, Joy Latten wrote:
> This should contain the geniv as well as all the
> improvements discussed. All the testcases pass.

This looks pretty good!

I'm going to apply this once I fix up the geniv problems found
by Sebastian.

BTW, could you please send me a final changeset description
and Signed-off-by?

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2007-10-10 15:48:54

by Joy Latten

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, 2007-10-10 at 23:17 +0800, Herbert Xu wrote:
> On Tue, Oct 09, 2007 at 02:44:40PM -0500, Joy Latten wrote:
> > This should contain the geniv as well as all the
> > improvements discussed. All the testcases pass.
>
> This looks pretty good!
>
> I'm going to apply this once I fix up the geniv problems found
> by Sebastian.
>
> BTW, could you please send me a final changeset description
> and Signed-off-by?
>
Yes, and I am sorry that I forgot the signed-off-by statement this time.
Will send all this info shortly.

Regards,
Joy

2007-10-10 16:13:00

by Joy Latten

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, 2007-10-10 at 23:17 +0800, Herbert Xu wrote:
> On Tue, Oct 09, 2007 at 02:44:40PM -0500, Joy Latten wrote:
> > This should contain the geniv as well as all the
> > improvements discussed. All the testcases pass.
>
> This looks pretty good!
>
> I'm going to apply this once I fix up the geniv problems found
> by Sebastian.
>
> BTW, could you please send me a final changeset description
> and Signed-off-by?
>

Description:

This patch implements CTR mode for IPsec.
It is based off of RFC 3686.

Please note:
1. CTR turns a block cipher into a stream cipher.
Encryption is done in blocks; however, the last block
may be a partial block. (A rough sketch of the per-block
operation follows after this list.)

A "counter block" is encrypted, creating a keystream
that is xor'ed with the plaintext. The counter portion
of the counter block is incremented after each block
of plaintext is encrypted.
Decryption is performed in the same manner.

2. The CTR counterblock is composed of:
nonce + IV + counter

The size of the counterblock is equivalent to the
blocksize of the cipher.
sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize

The CTR template requires the name of the cipher
algorithm, the size of the nonce, and the size of the IV:
ctr(cipher,sizeof_nonce,sizeof_iv)

So for example,
ctr(aes,4,8)
specifies that the counterblock will be composed of 4 bytes
from a nonce, 8 bytes from the IV, and 4 bytes for the counter,
since aes has a blocksize of 16 bytes.

3. The counter portion of the counter block is stored
in big endian for conformance to RFC 3686.
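
For reference, here is a rough userspace sketch of the per-block
operation described in items 1 and 2 above (purely illustrative;
the helper and its names are made up and this is not the kernel code):

#include <stdint.h>

#define BSIZE 16	/* e.g. aes blocksize */

/* one CTR step: E_k(counter block) xor'ed into a (possibly partial) block */
static void ctr_step(void (*encrypt)(uint8_t *out, const uint8_t *in),
		     uint8_t ctrblk[BSIZE],
		     const uint8_t *src, uint8_t *dst, unsigned int n)
{
	uint8_t keystream[BSIZE];
	unsigned int i;

	encrypt(keystream, ctrblk);	/* create keystream */
	for (i = 0; i < n; i++)		/* n may be < BSIZE for the last block */
		dst[i] = src[i] ^ keystream[i];

	/* increment the big-endian counter in the last 4 bytes */
	for (i = BSIZE; i-- > BSIZE - 4; )
		if (++ctrblk[i])
			break;
}

Decryption is the same operation, since xor'ing the keystream twice
cancels out.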

Regards,
Joy

Signed-off-by: Joy Latten <[email protected]>

2007-10-11 09:00:55

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 1/1]: Revised CTR mode implementation

On Wed, Oct 10, 2007 at 11:08:26AM -0500, Joy Latten wrote:
>
> This patch implements CTR mode for IPsec.
> It is based off of RFC 3686.

Thanks! I've just applied it to cryptodev-2.6 and will push it
out soon.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt