2006-08-31 12:40:12

by Rik Snel

Subject: LRW implementation, please comment


Hello Herbert (cc: list),

This is an attempt at implementing LRW for the new blockcipher API.
Please review, test and comment.

I intend to also implement ABL (arbitrary block length), an unencumbered wide
block cipher mode (it is based on GCM (Galois/Counter Mode), which is
explicitly claimed to be patent free and has the same creators).

The mode has been dropped by its creators (I don't know why) and
is not formally published, see:

http://grouper.ieee.org/groups/1619/email/pdf00005.pdf
http://grouper.ieee.org/groups/1619/email/rtf00000.rtf

It fulfills the same function as the patent-encumbered EME, CMC and XCB.

Greetings,

Rik.



2006-08-31 12:40:13

by Rik Snel

Subject: [PATCH 2/6] crypto: benbi IV, big endian narrow block count for LRW-32-AES

LRW-32-AES needs a certain IV. This IV should be provided by dm-crypt.
The block cipher mode could, in principle, generate the correct IV from
the plain IV, but I think it is cleaner to supply the right IV
directly.

The sector -> narrow block calculation uses a shift for performance reasons.
This shift is computed in .ctr and stored in cc->iv_gen_private (as a void*).
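
As a concrete illustration (a minimal userspace sketch, not part of the
patch; benbi_iv is a made-up name): with AES's 16-byte blocks log = 4, the
shift is 9 - 4 = 5, and sector n maps to narrow block count (n << 5) + 1,
stored big endian in the last 8 bytes of the IV.

#include <stdint.h>
#include <string.h>

/* assumes iv_size >= 8, as crypt_iv_benbi_ctr() checks */
static void benbi_iv(uint8_t *iv, unsigned int iv_size, uint64_t sector)
{
	uint64_t count = (sector << 5) + 1; /* 32 narrow blocks per sector */
	int i;

	memset(iv, 0, iv_size);
	for (i = 0; i < 8; i++) /* open-coded 64-bit big-endian store */
		iv[iv_size - 1 - i] = count >> (8 * i);
}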

Signed-off-by: Rik Snel <[email protected]>
---
drivers/md/dm-crypt.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 06961f7..12c871c 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -106,6 +106,9 @@ static kmem_cache_t *_crypt_io_pool;
* encrypted with the bulk cipher using a salt as key. The salt
* should be derived from the bulk cipher's key via hashing.
*
+ * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
+ * (needed for LRW-32-AES and possibly other narrow block modes)
+ *
* plumb: unimplemented, see:
* http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
*/
@@ -200,6 +203,49 @@ static int crypt_iv_essiv_gen(struct cry
return 0;
}

+static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
+ const char *opts)
+{
+ unsigned int bs = crypto_blkcipher_blocksize(cc->tfm);
+ int log = long_log2(bs);
+
+ /* we need to calculate how far we must shift the sector count
+ * to get the cipher block count; we use this shift in _gen */
+
+ if (1 << log != bs) {
+ ti->error = "cypher blocksize is not a power of 2";
+ return -EINVAL;
+ }
+
+ if (log > 9) {
+ ti->error = "cypher blocksize is > 512";
+ return -EINVAL;
+ }
+
+ if (crypto_blkcipher_ivsize(cc->tfm) < sizeof(u64)) {
+ ti->error = "cypher ivsize is < 8";
+ return -EINVAL;
+ }
+
+ cc->iv_gen_private = (void*)(9 - log);
+
+ return 0;
+}
+
+static void crypt_iv_benbi_dtr(struct crypt_config *cc)
+{
+ cc->iv_gen_private = NULL;
+}
+
+static int crypt_iv_benbi_gen(struct crypt_config *cc, u8 *iv, sector_t sector)
+{
+ memset(iv, 0, cc->iv_size - sizeof(u64)); /* rest is set below */
+ *(u64 *)(iv + cc->iv_size - sizeof(u64)) =
+ cpu_to_be64(((u64)sector << (u32)cc->iv_gen_private) + 1);
+
+ return 0;
+}
+
static struct crypt_iv_operations crypt_iv_plain_ops = {
.generator = crypt_iv_plain_gen
};
@@ -210,6 +256,11 @@ static struct crypt_iv_operations crypt_
.generator = crypt_iv_essiv_gen
};

+static struct crypt_iv_operations crypt_iv_benbi_ops = {
+ .ctr = crypt_iv_benbi_ctr,
+ .dtr = crypt_iv_benbi_dtr,
+ .generator = crypt_iv_benbi_gen
+};

static int
crypt_convert_scatterlist(struct crypt_config *cc, struct scatterlist *out,
@@ -579,7 +630,8 @@ static int crypt_ctr(struct dm_target *t
cc->tfm = tfm;

/*
- * Choose ivmode. Valid modes: "plain", "essiv:<esshash>".
+ * Choose ivmode. Valid modes: "plain", "essiv:<esshash>", "benbi".
+ *
* See comments at iv code
*/

@@ -589,6 +641,8 @@ static int crypt_ctr(struct dm_target *t
cc->iv_gen_ops = &crypt_iv_plain_ops;
else if (strcmp(ivmode, "essiv") == 0)
cc->iv_gen_ops = &crypt_iv_essiv_ops;
+ else if (strcmp(ivmode, "benbi") == 0)
+ cc->iv_gen_ops = &crypt_iv_benbi_ops;
else {
ti->error = "Invalid IV mode";
goto bad2;
--
1.4.1.1


2006-08-31 12:40:12

by Rik Snel

Subject: [PATCH 3/6] crypto: some common 128-bit block operations, nicely centralized

128 bits is a common block size in Linux kernel cryptography, so it helps to
centralize some common operations. The data must be aligned at sizeof(int)
for decent performance.
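
As a usage illustration, a self-contained userspace sketch (the typedef
merely stands in for the kernel's u64):

#include <stdio.h>

typedef unsigned long long u64;

static inline void b128ops_xor(void *p, const void *q)
{
	((u64 *)p)[0] ^= ((const u64 *)q)[0];
	((u64 *)p)[1] ^= ((const u64 *)q)[1];
}

int main(void)
{
	u64 block[2] = { 0x0123456789abcdefULL, 0xfedcba9876543210ULL };
	u64 tweak[2] = { 0x1111111111111111ULL, 0x2222222222222222ULL };

	/* whiten a 16-byte block with a tweak in two word operations */
	b128ops_xor(block, tweak);
	printf("%016llx %016llx\n", block[0], block[1]);
	return 0;
}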

The code, while mostly trivial, is based on a header file mode_hdr.h in
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

The original copyright (and GPL statement) of the original author,
Dr Brian Gladman, is preserved.

Signed-off-by: Rik Snel <[email protected]>
---
crypto/b128ops.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 72 insertions(+), 0 deletions(-)

diff --git a/crypto/b128ops.h b/crypto/b128ops.h
new file mode 100644
index 0000000..d32bfc2
--- /dev/null
+++ b/crypto/b128ops.h
@@ -0,0 +1,72 @@
+/* b128ops.h - common 128-bit block operations
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006, Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue Date: 13/06/2006
+*/
+
+#ifndef _LINUX_B128OPS_H
+#define _LINUX_B128OPS_H
+
+#include <linux/byteorder/swab.h>
+
+/* watch out: for good performance p and q must be aligned to 32bit
+ * boundaries on a 32bit machine and to 64bit boundaries on a 64bit
+ * machine. */
+static inline void b128ops_mov(void *p, const void *q)
+{
+ ((u64*)p)[0] = ((u64*)q)[0];
+ ((u64*)p)[1] = ((u64*)q)[1];
+}
+
+static inline void b128ops_xor(void *p, const void *q)
+{
+ ((u64*)p)[0] ^= ((u64*)q)[0];
+ ((u64*)p)[1] ^= ((u64*)q)[1];
+}
+
+static inline void bswap64_block(void *d, const void *s, u32 n)
+{
+ while(n--) ((u64*)d)[n] = __swab64(((u64*)s)[n]);
+}
+
+#endif /* _LINUX_B128OPS_H */
--
1.4.1.1


2006-08-31 12:40:12

by Rik Snel

Subject: [PATCH 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

WARNING: untested on bigendian, please test.

A lot of cipher modes need multiplications in GF(2^128): LRW, ABL, GCM...
I use functions from this library in my LRW implementation and I will
also use them in my ABL (Arbitrary Block Length, an unencumbered (correct
me if I am wrong) wide block cipher mode).

Elements of GF(2^128) are presented as u64* (specifically u64[2]); this
encourages automatic and proper alignment.

The library contains support for two different representations of GF(2^128),
see the comment in gf128mul.h. There are different levels of optimization
(memory/speed tradeoff).

The code is based on work by Dr Brian Gladman. Notable changes:
- deletion of two optimization modes
- change from u32 to u64 for faster handling on 64bit machines
- support for 'bbe' representation in addition to the already implemented
'lle' representation.
- move 'inline void' functions from header to 'static void' in the
source file
- update to use the linux coding style conventions

The original can be found at:
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

The copyright (and GPL statement) of the original author is preserved.
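
To illustrate the 'bbe' convention, here is a standalone byte-wise
multiply-by-x (for exposition only; the patch uses the faster word-wise
versions):

/* bbe: bits and bytes both big endian, so b[0]'s msb holds the x^127
 * coefficient and b[15]'s lsb holds the x^0 coefficient. E.g. the
 * polynomial 1 ({ 0x00, ..., 0x01 }) becomes x ({ 0x00, ..., 0x02 }). */
static void mul_x_bbe(unsigned char b[16])
{
	int i, carry = b[0] >> 7; /* coefficient of x^127 */

	for (i = 0; i < 15; i++)
		b[i] = (b[i] << 1) | (b[i + 1] >> 7);
	b[15] <<= 1;
	if (carry)
		b[15] ^= 0x87; /* reduce: x^128 = x^7 + x^2 + x + 1 */
}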

Signed-off-by: Rik Snel <[email protected]>
---
crypto/Kconfig | 11 +
crypto/Makefile | 1
crypto/gf128mul.c | 401 +++++++++++++++++++++++++++++++++++++++++++++++++++++
crypto/gf128mul.h | 196 ++++++++++++++++++++++++++
4 files changed, 609 insertions(+), 0 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1e2f39c..6b23c20 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -128,6 +128,17 @@ config CRYPTO_TGR192
See also:
<http://www.cs.technion.ac.il/~biham/Reports/Tiger/>.

+config CRYPTO_GF128MUL
+ tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ default n
+ help
+ Efficient table driven implementation of multiplications in the
+ field GF(2^128). This is needed by some cipher modes. This
+ option will be selected automatically if you select such a
+ cipher mode. Only select this option by hand if you expect to load
+ an external module that requires these functions.
+
config CRYPTO_ECB
tristate "ECB support"
select CRYPTO_BLKCIPHER
diff --git a/crypto/Makefile b/crypto/Makefile
index 7236620..bf0406b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -23,6 +23,7 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
obj-$(CONFIG_CRYPTO_WP512) += wp512.o
obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
+obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_DES) += des.o
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
new file mode 100644
index 0000000..6d17704
--- /dev/null
+++ b/crypto/gf128mul.c
@@ -0,0 +1,401 @@
+/* gf128mul.c - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006, Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue 31/01/2006
+
+ This file provides fast multiplication in GF(128) as required by several
+ cryptographic authentication modes
+*/
+
+#include <linux/module.h>
+#include "b128ops.h"
+#include "gf128mul.h"
+
+#define gf128mul_dat(q) { \
+ q(0x00), q(0x01), q(0x02), q(0x03), q(0x04), q(0x05), q(0x06), q(0x07),\
+ q(0x08), q(0x09), q(0x0a), q(0x0b), q(0x0c), q(0x0d), q(0x0e), q(0x0f),\
+ q(0x10), q(0x11), q(0x12), q(0x13), q(0x14), q(0x15), q(0x16), q(0x17),\
+ q(0x18), q(0x19), q(0x1a), q(0x1b), q(0x1c), q(0x1d), q(0x1e), q(0x1f),\
+ q(0x20), q(0x21), q(0x22), q(0x23), q(0x24), q(0x25), q(0x26), q(0x27),\
+ q(0x28), q(0x29), q(0x2a), q(0x2b), q(0x2c), q(0x2d), q(0x2e), q(0x2f),\
+ q(0x30), q(0x31), q(0x32), q(0x33), q(0x34), q(0x35), q(0x36), q(0x37),\
+ q(0x38), q(0x39), q(0x3a), q(0x3b), q(0x3c), q(0x3d), q(0x3e), q(0x3f),\
+ q(0x40), q(0x41), q(0x42), q(0x43), q(0x44), q(0x45), q(0x46), q(0x47),\
+ q(0x48), q(0x49), q(0x4a), q(0x4b), q(0x4c), q(0x4d), q(0x4e), q(0x4f),\
+ q(0x50), q(0x51), q(0x52), q(0x53), q(0x54), q(0x55), q(0x56), q(0x57),\
+ q(0x58), q(0x59), q(0x5a), q(0x5b), q(0x5c), q(0x5d), q(0x5e), q(0x5f),\
+ q(0x60), q(0x61), q(0x62), q(0x63), q(0x64), q(0x65), q(0x66), q(0x67),\
+ q(0x68), q(0x69), q(0x6a), q(0x6b), q(0x6c), q(0x6d), q(0x6e), q(0x6f),\
+ q(0x70), q(0x71), q(0x72), q(0x73), q(0x74), q(0x75), q(0x76), q(0x77),\
+ q(0x78), q(0x79), q(0x7a), q(0x7b), q(0x7c), q(0x7d), q(0x7e), q(0x7f),\
+ q(0x80), q(0x81), q(0x82), q(0x83), q(0x84), q(0x85), q(0x86), q(0x87),\
+ q(0x88), q(0x89), q(0x8a), q(0x8b), q(0x8c), q(0x8d), q(0x8e), q(0x8f),\
+ q(0x90), q(0x91), q(0x92), q(0x93), q(0x94), q(0x95), q(0x96), q(0x97),\
+ q(0x98), q(0x99), q(0x9a), q(0x9b), q(0x9c), q(0x9d), q(0x9e), q(0x9f),\
+ q(0xa0), q(0xa1), q(0xa2), q(0xa3), q(0xa4), q(0xa5), q(0xa6), q(0xa7),\
+ q(0xa8), q(0xa9), q(0xaa), q(0xab), q(0xac), q(0xad), q(0xae), q(0xaf),\
+ q(0xb0), q(0xb1), q(0xb2), q(0xb3), q(0xb4), q(0xb5), q(0xb6), q(0xb7),\
+ q(0xb8), q(0xb9), q(0xba), q(0xbb), q(0xbc), q(0xbd), q(0xbe), q(0xbf),\
+ q(0xc0), q(0xc1), q(0xc2), q(0xc3), q(0xc4), q(0xc5), q(0xc6), q(0xc7),\
+ q(0xc8), q(0xc9), q(0xca), q(0xcb), q(0xcc), q(0xcd), q(0xce), q(0xcf),\
+ q(0xd0), q(0xd1), q(0xd2), q(0xd3), q(0xd4), q(0xd5), q(0xd6), q(0xd7),\
+ q(0xd8), q(0xd9), q(0xda), q(0xdb), q(0xdc), q(0xdd), q(0xde), q(0xdf),\
+ q(0xe0), q(0xe1), q(0xe2), q(0xe3), q(0xe4), q(0xe5), q(0xe6), q(0xe7),\
+ q(0xe8), q(0xe9), q(0xea), q(0xeb), q(0xec), q(0xed), q(0xee), q(0xef),\
+ q(0xf0), q(0xf1), q(0xf2), q(0xf3), q(0xf4), q(0xf5), q(0xf6), q(0xf7),\
+ q(0xf8), q(0xf9), q(0xfa), q(0xfb), q(0xfc), q(0xfd), q(0xfe), q(0xff) \
+}
+
+/* Given the value i in 0..255 as the byte overflow when a field element
+ in GHASH is multiplied by x^8, this function will return the values that
+ are generated in the lo 16-bit word of the field value by applying the
+ modular polynomial. The values lo_byte and hi_byte are returned via the
+ macro xp_fun(lo_byte, hi_byte) so that the values can be assembled into
+ memory as required by a suitable definition of this macro operating on
+ the table above
+*/
+
+#ifdef __BIG_ENDIAN
+#define xx(p,q) 0x##p##q /* assemble in big endian order */
+#else
+#define xx(p,q) 0x##q##p /* assemble in little endian order */
+#endif
+
+#define xda_bbe(i) ( \
+ (i&0x80?xx(43,80):0)^(i&0x40?xx(21,c0):0)^ \
+ (i&0x20?xx(10,e0):0)^(i&0x10?xx(08,70):0)^ \
+ (i&0x08?xx(04,38):0)^(i&0x04?xx(02,1c):0)^ \
+ (i&0x02?xx(01,0e):0)^(i&0x01?xx(00,87):0) \
+)
+
+#define xda_lle(i) ( \
+ (i&0x80?xx(e1,00):0)^(i&0x40?xx(70,80):0)^ \
+ (i&0x20?xx(38,40):0)^(i&0x10?xx(1c,20):0)^ \
+ (i&0x08?xx(0e,10):0)^(i&0x04?xx(07,08):0)^ \
+ (i&0x02?xx(03,84):0)^(i&0x01?xx(01,c2):0) \
+)
+
+static const u16 gf128mul_table_lle[256] = gf128mul_dat(xda_lle);
+
+static const u16 gf128mul_table_bbe[256] = gf128mul_dat(xda_bbe);
+
+/* These functions multiply a field element by x and by x^8 in the
+ * polynomial field representation. They use 64-bit word operations
+ * to gain speed but compensate for machine endianness and hence work
+ * correctly on both styles of machine.
+ */
+#ifdef __BIG_ENDIAN
+
+static void gf128mul_x_lle(u64 *r, const u64 *x)
+{
+ u64 _tt = gf128mul_table_lle[(x[1] << 7) & 0xff];
+ r[1] = (x[1] >> 1) | (x[0] << 63);
+ r[0] = (x[0] >> 1) ^ (_tt << 48);
+}
+
+static void gf128mul_x_bbe(u64 *r, const u64 *x)
+{
+ u64 _tt = gf128mul_table_bbe[(x[0] >> 63)];
+ r[0] = (x[0] << 1) | (x[1] >> 63);
+ r[1] = (x[1] << 1) ^ _tt;
+}
+
+static void gf128mul_x8_lle(u64 *x)
+{
+ u64 _tt = gf128mul_table_lle[x[1] & 0xff];
+ x[1] = (x[1] >> 8) | (x[0] << 56);
+ x[0] = (x[0] >> 8) ^ (_tt << 48);
+}
+
+static void gf128mul_x8_bbe(u64 *x)
+{
+ u64 _tt = gf128mul_table_bbe[x[0] >> 56];
+ x[0] = (x[0] << 8) | (x[1] >> 56);
+ x[1] = (x[1] << 8) ^ _tt;
+}
+
+#else
+
+#define M80X 0x8080808080808080LLU
+#define M01X 0x0101010101010101LLU
+
+static void gf128mul_x_lle(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[(x[1] >> 49) & 0x80];
+ r[1] = ((x[1] >> 1) & ~M80X) | (((x[1] << 15) | (x[0] >> 49)) & M80X);
+ r[0] = (((x[0] >> 1) & ~M80X) | ((x[0] << 15) & M80X)) ^ _tt;
+}
+
+static void gf128mul_x8_lle(u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[x[1] >> 56];
+ x[1] = (x[1] << 8) | (x[0] >> 56);
+ x[0] = (x[0] << 8) ^ _tt;
+}
+
+static void gf128mul_x_bbe(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[(x[0] >> 7) & 0x01];
+ r[0] = ((x[0] << 1) & ~M01X) | (((x[0] >> 15) | (x[1] << 49)) & M01X);
+ r[1] = (((x[1] << 1) & ~M01X) | ((x[1] >> 15) & M01X)) ^ _tt << 48;
+}
+
+static void gf128mul_x8_bbe(u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[x[0]&0xff];
+ x[0] = (x[0] >> 8) | (x[1] << 56);
+ x[1] = (x[1] >> 8) ^ (_tt << 48);
+}
+
+#endif
+
+void gf128mul_lle(u64 *a, const u64* b)
+{
+ u64 r[GF128MUL_BYTES >> 3], p[8][GF128MUL_BYTES >> 3];
+ int i;
+
+ b128ops_mov(p[0], b);
+ for(i = 0; i < 7; ++i) gf128mul_x_lle(p[i+1], p[i]);
+
+ memset(r, 0, GF128MUL_BYTES);
+ for(i = 0; i < 16; ++i) {
+ u8 ch = ((u8*)a)[15-i];
+ if(i) gf128mul_x8_lle(r);
+
+ if(ch&0x80) b128ops_xor(r, p[0]);
+ if(ch&0x40) b128ops_xor(r, p[1]);
+ if(ch&0x20) b128ops_xor(r, p[2]);
+ if(ch&0x10) b128ops_xor(r, p[3]);
+ if(ch&0x08) b128ops_xor(r, p[4]);
+ if(ch&0x04) b128ops_xor(r, p[5]);
+ if(ch&0x02) b128ops_xor(r, p[6]);
+ if(ch&0x01) b128ops_xor(r, p[7]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_lle);
+
+void gf128mul_bbe(u64 *a, const u64* b)
+{
+ u64 r[GF128MUL_BYTES >> 3], p[8][GF128MUL_BYTES >> 3];
+ int i;
+ b128ops_mov(p[0], b);
+ for(i = 0; i < 7; ++i) gf128mul_x_bbe(p[i+1], p[i]);
+
+ memset(r, 0, GF128MUL_BYTES);
+ for(i = 0; i < 16; ++i) {
+ u8 ch = ((u8*)a)[i];
+ if(i) gf128mul_x8_bbe(r);
+
+ if(ch&0x80) b128ops_xor(r, p[7]);
+ if(ch&0x40) b128ops_xor(r, p[6]);
+ if(ch&0x20) b128ops_xor(r, p[5]);
+ if(ch&0x10) b128ops_xor(r, p[4]);
+ if(ch&0x08) b128ops_xor(r, p[3]);
+ if(ch&0x04) b128ops_xor(r, p[2]);
+ if(ch&0x02) b128ops_xor(r, p[1]);
+ if(ch&0x01) b128ops_xor(r, p[0]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_bbe);
+
+/* This version uses 64k bytes of table space on the stack.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in
+ the buffer's lowest byte, we can construct a table of
+ the 256 16 byte values that result from the 256 values
+ of this byte. This requires 4096 bytes. But we also
+ need tables for each of the 16 higher bytes in the
+ buffer as well, which makes 64 kbytes in total.
+*/
+/* additional explanation
+ * t[0][BYTE] contains g*BYTE
+ * t[1][BYTE] contains g*x^8*BYTE
+ * ..
+ * t[15][BYTE] contains g*x^120*BYTE */
+void gf128mul_init_64k_lle(struct gf128mul_64k *t, const u64 *g)
+{
+ int i, j, k;
+
+ memset(t->t, 0, 16*256*GF128MUL_BYTES);
+ for (i = 0; i < GF128MUL_BYTES; ++i) {
+ if (!i) {
+ b128ops_mov(t->t[0][128], g);
+ for (j = 64; j > 0; j >>= 1) {
+ gf128mul_x_lle(t->t[0][j], t->t[0][j + j]);
+ }
+ } else for (j = 128; j > 0; j >>= 1) {
+ b128ops_mov(t->t[i][j], t->t[i - 1][j]);
+ gf128mul_x8_lle(t->t[i][j]);
+ }
+
+ for (j = 2; j < 256; j += j) for(k = 1; k < j; ++k) {
+ t->t[i][j+k][0] = t->t[i][j][0]^t->t[i][k][0];
+ t->t[i][j+k][1] = t->t[i][j][1]^t->t[i][k][1];
+ }
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_64k_lle);
+
+void gf128mul_init_64k_bbe(struct gf128mul_64k *t, const u64 *g)
+{
+ int i, j, k;
+
+ memset(t->t, 0, 16*256*GF128MUL_BYTES);
+ for (i = 0; i < GF128MUL_BYTES; ++i) {
+ if (!i) {
+ b128ops_mov(t->t[0][1], g);
+ for (j = 1; j <= 64; j <<= 1) {
+ gf128mul_x_bbe(t->t[0][j + j], t->t[0][j]);
+ }
+ } else for (j = 128; j > 0; j >>= 1) {
+ b128ops_mov(t->t[i][j], t->t[i - 1][j]);
+ gf128mul_x8_bbe(t->t[i][j]);
+ }
+
+ for (j = 2; j < 256; j += j) for(k = 1; k < j; ++k) {
+ t->t[i][j+k][0] = t->t[i][j][0]^t->t[i][k][0];
+ t->t[i][j+k][1] = t->t[i][j][1]^t->t[i][k][1];
+ }
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_64k_bbe);
+
+void gf128mul_64k_lle(u64 a[], struct gf128mul_64k *t, u64 *r)
+{
+ int i;
+ b128ops_mov(r, t->t[0][((u8*)a)[0]]);
+ for (i = 1; i < GF128MUL_BYTES; ++i) {
+ b128ops_xor(r, t->t[i][((u8*)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_64k_lle);
+
+void gf128mul_64k_bbe(u64 a[], struct gf128mul_64k *t, u64 *r)
+{
+ int i;
+ b128ops_mov(r, t->t[0][((u8*)a)[15]]);
+ for (i = 1; i < GF128MUL_BYTES; ++i) {
+ b128ops_xor(r, t->t[i][((u8*)a)[15 - i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_64k_bbe);
+
+/* This version uses 4k bytes of table space on the stack.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in a
+ single byte, we can construct a table of the 256 16 byte
+ values that result from the 256 values of this byte.
+ This requires 4096 bytes. If we take the highest byte in
+ the buffer and use this table to get the result, we then
+ have to multiply by x^120 to get the final value. For the
+ next highest byte the result has to be multiplied by x^112
+ and so on. But we can do this by accumulating the result
+ in an accumulator starting with the result for the top
+ byte. We repeatedly multiply the accumulator value by
+ x^8 and then add in (i.e. xor) the 16 bytes of the next
+ lower byte in the buffer, stopping when we reach the
+ lowest byte. This requires a 4096 byte table.
+*/
+void gf128mul_init_4k_lle(struct gf128mul_4k *t, const u64 *g)
+{
+ int j, k;
+
+ memset(t, 0, 256*GF128MUL_BYTES);
+ b128ops_mov(t->t[128], g);
+ for (j = 64; j > 0; j >>= 1) gf128mul_x_lle(t->t[j], t->t[j+j]);
+
+ for (j = 2; j < 256; j += j) for (k = 1; k < j; ++k) {
+ t->t[j + k][0] = t->t[j][0] ^ t->t[k][0];
+ t->t[j + k][1] = t->t[j][1] ^ t->t[k][1];
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_4k_lle);
+
+void gf128mul_init_4k_bbe(struct gf128mul_4k *t, const u64 *g)
+{
+ int j, k;
+
+ memset(t, 0, 256*GF128MUL_BYTES);
+ b128ops_mov(t->t[1], g);
+ for (j = 1; j <= 64; j <<= 1) gf128mul_x_bbe(t->t[j + j], t->t[j]);
+
+ for (j = 2; j < 256; j += j) for (k = 1; k < j; ++k) {
+ t->t[j + k][0] = t->t[j][0] ^ t->t[k][0];
+ t->t[j + k][1] = t->t[j][1] ^ t->t[k][1];
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_4k_bbe);
+
+void gf128mul_4k_lle(u64 *a, struct gf128mul_4k *t, u64 *r)
+{
+ int i = 15;
+ b128ops_mov(r, t->t[((u8*)a)[15]]);
+ while(i--) {
+ gf128mul_x8_lle(r);
+ b128ops_xor(r, t->t[((u8*)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_4k_lle);
+
+void gf128mul_4k_bbe(u64 *a, struct gf128mul_4k *t, u64 *r)
+{
+ int i = 0;
+ b128ops_mov(r, t->t[((u8*)a)[0]]);
+ while(++i < 16) {
+ gf128mul_x8_bbe(r);
+ b128ops_xor(r, t->t[((u8*)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_4k_bbe);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("functions for multiplying elements of GF(2^128)");
diff --git a/crypto/gf128mul.h b/crypto/gf128mul.h
new file mode 100644
index 0000000..bbaae1f
--- /dev/null
+++ b/crypto/gf128mul.h
@@ -0,0 +1,196 @@
+/* gf128mul.h - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006 Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue Date: 31/01/2006
+
+ An implementation of field multiplication in Galois Field GF(128)
+*/
+
+#ifndef _LINUX_GF128MUL_H
+#define _LINUX_GF128MUL_H
+/* Comment by Rik:
+ *
+ * For some background on GF(2^128) see for example:
+ * http://csrc.nist.gov/CryptoToolkit/modes/proposedmodes/gcm/gcm-revised-spec.pdf
+ *
+ * The elements of GF(2^128) := GF(2)[X]/(X^128-X^7-X^2-X^1-1) can
+ * be mapped to computer memory in a variety of ways. Let's examine
+ * three common cases.
+ *
+ * Take a look at the 16 binary octets below in memory order. The msb's
+ * are left and the lsb's are right. char b[16] is an array and b[0] is
+ * the first octet.
+ *
+ * 80000000 00000000 00000000 00000000 .... 00000000 00000000 00000000
+ * b[0] b[1] b[2] b[3] b[13] b[14] b[15]
+ *
+ * Every bit is a coefficient of some power of X. We can store the bits
+ * in every byte in little-endian order and the bytes themselves also in
+ * little endian order. I will call this lle (little-little-endian).
+ * The above buffer represents the polynomial 1, and X^7+X^2+X^1+1 looks
+ * like 11100001 00000000 .... 00000000 = { 0xE1, 0x00, }.
+ * This format was originally implemented in gf128mul and is used
+ * in GCM (Galois/Counter mode) and in ABL (Arbitrary Block Length).
+ *
+ * Another convention says: store the bits in bigendian order and the
+ * bytes also. This is bbe (big-big-endian). Now the buffer above
+ * represents X^127. X^7+X^2+X^1+1 looks like 00000000 .... 10000111,
+ * b[15] = 0x87 and the rest is 0. LRW uses this convention and bbe
+ * is partly implemented.
+ *
+ * Both of the above formats are easy to implement on big-endian
+ * machines.
+ *
+ * EME (which is patent encumbered) uses the ble format (bits are stored
+ * in big endian order and the bytes in little endian). The above buffer
+ * represents X^7 in this case and the primitive polynomial is b[0] = 0x87.
+ *
+ * The common machine word-size is smaller than 128 bits, so to make
+ * an efficient implementation we must split into machine word sizes.
+ * This file uses u64[2] for the moment. Machine endianness comes into
+ * play. The lle format in relation to machine endianness is discussed
+ * below by the original author of gf128mul, Dr Brian Gladman.
+ *
+ * Let's look at the bbe and ble format on a little endian machine.
+ *
+ * bbe on a little endian machine u32 x[4]:
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 103..96 111.104 119.112 127.120 71...64 79...72 87...80 95...88
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 39...32 47...40 55...48 63...56 07...00 15...08 23...16 31...24
+ *
+ * ble on a little endian machine
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 31...24 23...16 15...08 07...00 63...56 55...48 47...40 39...32
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 95...88 87...80 79...72 71...64 127.120 119.112 111.104 103..96
+ *
+ * Multiplications in GF(2^128) are mostly bit-shifts, so you see why
+ * ble (and lbe also) are easier to implement on a little-endian
+ * machine than on a big-endian machine. The converse holds for bbe
+ * and lle.
+ *
+ * Note: to have good alignment, it seems to me that it is sufficient
+ * to keep elements of GF(2^128) in type u64[2]. On 32-bit wordsize
+ * machines this will automatically be aligned to wordsize and on a
+ * 64-bit machine also.
+ */
+/* Multiply a GF128 field element by x. Field elements are held in arrays
+ of bytes in which field bits 8n..8n + 7 are held in byte[n], with lower
+ indexed bits placed in the more numerically significant bit positions
+ within bytes.
+
+ On little endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 24...31 16...23 08...15 00...07 56...63 48...55 40...47 32...39
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 88...95 80...87 72...79 64...71 120.127 112.119 104.111 96..103
+
+ On big endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 00...07 08...15 16...23 24...31 32...39 40...47 48...55 56...63
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 64...71 72...79 80...87 88...95 96..103 104.111 112.119 120.127
+*/
+
+#define GF128MUL_BYTES 16
+
+/* A slow generic version of gf_mul, implemented for lle and bbe.
+ * It multiplies a and b and puts the result in a */
+void gf128mul_lle(u64 *a, const u64* b);
+
+void gf128mul_bbe(u64 *a, const u64* b);
+
+
+/* 64k table optimization, implemented for lle and bbe */
+
+struct gf128mul_64k {
+ u64 t[16][256][GF128MUL_BYTES >> 3];
+};
+
+/* first initialize with the constant factor with which you
+ * want to multiply and then call gf128mul_64k_lle with the other
+ * factor in the first argument, the table in the second and a
+ * scratch register in the third. Afterwards *a = *r. */
+void gf128mul_init_64k_lle(struct gf128mul_64k *t, const u64 *g);
+
+void gf128mul_init_64k_bbe(struct gf128mul_64k *t, const u64 *g);
+
+void gf128mul_64k_lle(u64 *a, struct gf128mul_64k *t, u64 *r);
+
+void gf128mul_64k_bbe(u64 *a, struct gf128mul_64k *t, u64 *r);
+
+
+/* 4k table optimization */
+
+struct gf128mul_4k {
+ u64 t[256][GF128MUL_BYTES >> 3];
+};
+
+void gf128mul_init_4k_lle(struct gf128mul_4k *t, const u64 *g);
+
+void gf128mul_init_4k_bbe(struct gf128mul_4k *t, const u64 *g);
+
+void gf128mul_4k_lle(u64 *a, struct gf128mul_4k *t, u64 *r);
+
+void gf128mul_4k_bbe(u64 *a, struct gf128mul_4k *t, u64 *r);
+
+#endif /* _LINUX_GF128MUL_H */
--
1.4.1.1


2006-08-31 12:40:12

by Rik Snel

Subject: [PATCH 5/6] crypto: LRW, Liskov Rivest Wagner, a tweakable narrow block cipher mode

Main module: this implements the Liskov Rivest Wagner block cipher mode
in the new blockcipher API. The implementation is based on ecb.c. The
first iteration of the blockcipher_walk loop is unrolled to give the
first narrow block special treatment.

The LRW-32-AES specification I used can be found at:

http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

It implements the optimization specified as optional in the
specification, and in addition it uses optimized multiplication
routines from gf128mul.c.
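
The optimization works by linearity: if the tweak index I ends in k one
bits, then I xor (I + 1) has exactly bits 0..k set, so the next tweak
K2 (x) (I + 1) is the previous one xor'ed with the precomputed product
K2 (x) (2^(k+1) - 1). Those products live in the mulinc[] table and k is
what get_index128() counts; a sketch of the per-byte counting step:

/* number of trailing one bits; e.g. trailing_ones(0xfb) = 2 */
static int trailing_ones(unsigned char i)
{
	int n = 0;

	while (i & 1) {
		i >>= 1;
		n++;
	}
	return n;
}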

Since gf128mul.[ch] is not tested on bigendian, this cipher mode
may currently fail badly on bigendian machines.
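
For reference, the round structure from the specification as a compilable
sketch (gf128_mul_bbe and cipher_encrypt are placeholders standing in for
gf128mul_64k_bbe() and the child cipher's encrypt function):

typedef unsigned char u8;

/* placeholders, declared only to keep the sketch self-contained */
extern void gf128_mul_bbe(u8 t[16], const u8 k2[16], const u8 i[16]);
extern void cipher_encrypt(u8 dst[16], const u8 src[16]);

/* one narrow block: C = E(K1, P xor T) xor T, with T = K2 (x) I */
static void lrw_block(u8 c[16], const u8 p[16],
		      const u8 k2[16], const u8 i[16])
{
	u8 t[16], pp[16];
	int j;

	gf128_mul_bbe(t, k2, i);      /* T  <- K2 (x) I  */
	for (j = 0; j < 16; j++)
		pp[j] = p[j] ^ t[j];  /* PP <- P xor T   */
	cipher_encrypt(pp, pp);       /* CC <- E(K1, PP) */
	for (j = 0; j < 16; j++)
		c[j] = pp[j] ^ t[j];  /* C  <- CC xor T  */
}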

Signed-off-by: Rik Snel <[email protected]>
---
crypto/Kconfig | 13 ++
crypto/Makefile | 1
crypto/lrw.c | 297 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 311 insertions(+), 0 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 6b23c20..dfdfe08 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -156,6 +156,19 @@ config CRYPTO_CBC
CBC: Cipher Block Chaining mode
This block cipher algorithm is required for IPSec.

+config CRYPTO_LRW
+ tristate "LRW support (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_GF128MUL
+ default n
+ help
+ LRW: Liskov Rivest Wagner, a tweakable, non-malleable, non-movable
+ narrow block cipher mode. Use it with cipher specification string
+ aes-lrw-benbi; the key must be 256, 320 or 384 bits long. The first
+ 128, 192 or 256 bits in the key are used for AES and the rest is
+ used to tie each cipher block to its logical position.
+
config CRYPTO_DES
tristate "DES and Triple DES EDE cipher algorithms"
select CRYPTO_ALGAPI
diff --git a/crypto/Makefile b/crypto/Makefile
index bf0406b..e2e57be 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
+obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o
obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o
diff --git a/crypto/lrw.c b/crypto/lrw.c
new file mode 100644
index 0000000..9c7324c
--- /dev/null
+++ b/crypto/lrw.c
@@ -0,0 +1,297 @@
+/* LRW: as defined by Cyril Guyot in
+ * http://grouper.ieee.org/groups/1619/email/pdf00017.pdf
+ *
+ * Copyright (c) 2006 Rik Snel <[email protected]>
+ *
+ * Based on ecb.c
+ * Copyright (c) 2006 Herbert Xu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+/* This implementation is checked against the test vectors in the above
+ * document and by a test vector provided by Ken Buchanan at
+ * http://www.mail-archive.com/[email protected]/msg00173.html
+ *
+ * The vectors can be found in Documentation/crypto; they are in the form
+ * of a script and can therefore easily be checked: just run lrw-32-aes
+ * with sufficient permissions (after reading it and testvector.fun) */
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+#include "b128ops.h"
+#include "gf128mul.h"
+
+struct priv {
+ struct crypto_cipher *child;
+ /* optimizes multiplying a random (non incrementing, as at the
+ * start of a new sector) value with key2, we could also have
+ * used 4k optimization tables or no optimization at all. In the
+ * latter case we would have to store key2 here */
+ struct gf128mul_64k table;
+ /* stores:
+ * key2*{ 0,0,...0,0,0,0,1 }, key2*{ 0,0,...0,0,0,1,1 },
+ * key2*{ 0,0,...0,0,1,1,1 }, key2*{ 0,0,...0,1,1,1,1 }
+ * key2*{ 0,0,...1,1,1,1,1 }, etc
+ * needed for optimized multiplication of incrementing values
+ * with key2 */
+ u64 mulinc[128][GF128MUL_BYTES >> 3];
+};
+
+static inline void setbit128(void *b, int bit)
+{
+ int index = 15 - bit/8;
+ ((u8*)b)[index] |= 1<<(bit%8);
+}
+
+static int setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct priv *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ int err, i;
+ u64 tmp[2] = { 0, }, scratch[2];
+ int bsize = crypto_cipher_blocksize(child);
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ if ((err = crypto_cipher_setkey(child, key, keylen - bsize)))
+ return err;
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+
+
+ /* initialize multiplication table for Key2 */
+ gf128mul_init_64k_bbe(&ctx->table, (u64*)(key + keylen - bsize));
+
+ /* initialize optimization table */
+ for (i = 0; i < 128; i++) {
+ setbit128(tmp, i);
+ b128ops_mov(ctx->mulinc[i], tmp);
+ gf128mul_64k_bbe(ctx->mulinc[i], &ctx->table, scratch);
+ }
+
+ return 0;
+}
+
+struct sinfo {
+ u64 b1[2], b2[2];
+ struct crypto_tfm *tfm;
+ void (*fn)(struct crypto_tfm*, u8*, const u8*);
+};
+
+static inline void inc(u64 *iv)
+{
+ if (!(iv[1] = cpu_to_be64(be64_to_cpu(iv[1]) + 1)))
+ iv[0] = cpu_to_be64(be64_to_cpu(iv[0]) + 1);
+}
+
+static inline void round(struct sinfo *s, u8 *dst, const u8 *src)
+{
+ b128ops_xor(s->b2, src); /* PP <- T xor P */
+ s->fn(s->tfm, dst, (u8*)s->b2); /* CC <- E(Key1,PP) */
+ b128ops_xor(dst, s->b1); /* C <- T xor CC */
+}
+
+/* this returns the number of consecutive 1 bits
+ * starting from the right in i */
+static inline int get_index8(u8 i)
+{
+ int j = 1;
+
+ if (i&1) {
+ while ((i >>= 1)&1) j++;
+ return j;
+ }
+
+ return 0;
+}
+
+/* this returns the number of consecutive 1 bits starting
+ * from the right, get_index128(00 00 00 00 00 00 ... 00 00 10 FB) = 2 */
+static inline int get_index128(u8 *block)
+{
+ int inc, ret = 0, len = 16;
+ while ((inc = get_index8(block[--len])) == 8) ret += 8;
+ return ret + inc;
+}
+
+static int crypt(struct blkcipher_desc *d,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, struct priv *ctx,
+ void (*fn)(struct crypto_tfm*, u8*, const u8*))
+{
+ struct blkcipher_walk w;
+ int err;
+ unsigned int avail;
+ const int bs = crypto_cipher_blocksize(ctx->child);
+ u8 *wsrc, *wdst;
+ struct sinfo s = {
+ .tfm = crypto_cipher_tfm(ctx->child),
+ .fn = fn
+ };
+
+ blkcipher_walk_init(&w, dst, src, nbytes);
+
+ /* start of the loop unrolled to be able to
+ * handle the first block differently from the others */
+ if ((err = blkcipher_walk_virt(d, &w))) return err;
+
+ wsrc = w.src.virt.addr;
+ wdst = w.dst.virt.addr;
+
+ /* calculate first value of T */
+ b128ops_mov(s.b1, w.iv);
+ gf128mul_64k_bbe(s.b1, &ctx->table, s.b2); /* T <- I*Key2 */
+
+ round(&s, wdst, wsrc);
+
+ while ((avail = w.nbytes)) {
+ while ((avail -= bs) >= bs) {
+ wsrc += bs;
+ wdst += bs;
+
+ /* old T is available in s.b1; new one
+ * must be made available in b1 and b2 */
+
+ /* T <- I*Key2, using the optimization
+ * discussed in the specification */
+ b128ops_xor(s.b1, ctx->mulinc[get_index128(w.iv)]);
+ inc((u64*)w.iv);
+ b128ops_mov(s.b2, s.b1);
+
+ round(&s, wdst, wsrc);
+ }
+
+ if ((err = blkcipher_walk_done(d, &w, avail))) return err;
+
+ wsrc = w.src.virt.addr;
+ wdst = w.dst.virt.addr;
+ }
+ return err;
+}
+
+static int encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+ struct scatterlist *src, unsigned int nbytes)
+{
+ struct priv *ctx = crypto_blkcipher_ctx(desc->tfm);
+ return crypt(desc, dst, src, nbytes, ctx,
+ crypto_cipher_alg(ctx->child)->cia_encrypt);
+}
+
+static int decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+ struct scatterlist *src, unsigned int nbytes)
+{
+ struct priv *ctx = crypto_blkcipher_ctx(desc->tfm);
+ return crypt(desc, dst, src, nbytes, ctx,
+ crypto_cipher_alg(ctx->child)->cia_decrypt);
+}
+
+static int init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+ struct priv *ctx = crypto_tfm_ctx(tfm);
+ u32 *flags = &tfm->crt_flags;
+
+ tfm = crypto_spawn_tfm(spawn);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ if (crypto_tfm_alg_blocksize(tfm) != 16) {
+ *flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
+ return -EINVAL;
+ }
+
+ ctx->child = crypto_cipher_cast(tfm);
+ return 0;
+}
+
+static void exit_tfm(struct crypto_tfm *tfm)
+{
+ struct priv *ctx = crypto_tfm_ctx(tfm);
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *alloc(void *param, unsigned int len)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+
+ alg = crypto_get_attr_alg(param, len, CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ inst = crypto_alloc_instance("lrw", alg);
+ if (IS_ERR(inst))
+ goto out_put_alg;
+
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = alg->cra_blocksize;
+
+ if (alg->cra_alignmask < 7) inst->alg.cra_alignmask = 7;
+ else inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+ if (!(alg->cra_blocksize % 4))
+ inst->alg.cra_alignmask |= 3;
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize =
+ alg->cra_cipher.cia_min_keysize + alg->cra_blocksize;
+ inst->alg.cra_blkcipher.max_keysize =
+ alg->cra_cipher.cia_max_keysize + alg->cra_blocksize;
+
+ inst->alg.cra_ctxsize = sizeof(struct priv);
+
+ inst->alg.cra_init = init_tfm;
+ inst->alg.cra_exit = exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = setkey;
+ inst->alg.cra_blkcipher.encrypt = encrypt;
+ inst->alg.cra_blkcipher.decrypt = decrypt;
+
+out_put_alg:
+ crypto_mod_put(alg);
+ return inst;
+}
+
+static void free(struct crypto_instance *inst)
+{
+ crypto_drop_spawn(crypto_instance_ctx(inst));
+ kfree(inst);
+}
+
+static struct crypto_template crypto_tmpl = {
+ .name = "lrw",
+ .alloc = alloc,
+ .free = free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_module_init(void)
+{
+ return crypto_register_template(&crypto_tmpl);
+}
+
+static void __exit crypto_module_exit(void)
+{
+ crypto_unregister_template(&crypto_tmpl);
+}
+
+module_init(crypto_module_init);
+module_exit(crypto_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("LRW block cipher mode");
--
1.4.1.1


2006-08-31 12:40:12

by Rik Snel

Subject: [PATCH 6/6] crypto: a simple way of storing and checking test vectors, LRW vectors included

Test vectors for LRW and a simple way of checking them. For your convenience
and/or for real inclusion.
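
The per-vector arithmetic done in the script (a hedged C sketch;
TWEAKLOCATION is the 1-based index of a 16-byte narrow block and a
512-byte sector holds 32 of them):

#include <stdint.h>

/* mirrors special(): IV_OFFSET is the sector offset handed to dmsetup,
 * BOFFSET counts 16-byte units into that sector (for the short vectors) */
static void locate(uint64_t tweak_location,
		   uint64_t *iv_offset, uint64_t *boffset)
{
	uint64_t idx = tweak_location - 1; /* make the index 0-based */

	*iv_offset = idx / 32;
	*boffset = idx % 32;
}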

Signed-off-by: Rik Snel <[email protected]>
---
Documentation/crypto/README.testvector | 7 ++
Documentation/crypto/lrw-32-aes | 113 ++++++++++++++++++++++++++++++++
Documentation/crypto/testvector.fun | 57 ++++++++++++++++
3 files changed, 177 insertions(+), 0 deletions(-)

diff --git a/Documentation/crypto/README.testvector b/Documentation/crypto/README.testvector
new file mode 100644
index 0000000..2d4a0f0
--- /dev/null
+++ b/Documentation/crypto/README.testvector
@@ -0,0 +1,7 @@
+How to check the test vectors included in ./lrw-32-aes?
+
+Simply execute them; lrw-32-aes is a bash script that calls some functions
+in testvector.fun. Tools like dmsetup and some hex utilities will be used
+to check them through the usual kernel interface. You can add test vectors
+of other modes in an analogous manner.
+
diff --git a/Documentation/crypto/lrw-32-aes b/Documentation/crypto/lrw-32-aes
new file mode 100644
index 0000000..35de311
--- /dev/null
+++ b/Documentation/crypto/lrw-32-aes
@@ -0,0 +1,113 @@
+#!/bin/bash
+# you need losetup, dmsetup, xxd, bc
+
+# change this if you need
+LOOP=/dev/loop0
+MAPPER=crypt0
+
+# we need to import some important functions
+. testvector.fun
+
+#lrw specific options, some need to be calculated for each vector
+function special {
+ MODE=aes-lrw-benbi
+ KEY=$AESKEY$TWEAKKEY
+ IV_OFFSET=$(echo "ibase=16; ($TWEAKLOCATION-1)/20" | bc)
+ BOFFSET=$(echo "ibase=16; ($TWEAKLOCATION-1)%20" | bc)
+}
+
+NAME="LRW-32-AES 1"
+AESKEY=4562ac25f828176d4c268414b5680185
+TWEAKKEY=258e2a05e73e9d03ee5a830ccc094c87
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000000000001
+CIPHERTEXT=f1b273cd65a3df5fe95d489254634eb8
+check
+
+NAME="LRW-32-AES 2"
+AESKEY=59704714f557478cd779e80f54887944
+TWEAKKEY=0d48f0b7b15a53ea1caa6b29c2cafbaf
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000000000002
+CIPHERTEXT=00c82bae95bbcde5274f0769b260e136
+check
+
+NAME="LRW-32-AES 3"
+AESKEY=d82a9134b26a565030fe69e2377f9847
+TWEAKKEY=cdf90b160c648fb6b00d0d1bae85871f
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000200000000
+CIPHERTEXT=76322183ed8ff182f9596203690e5e01
+check
+
+NAME="LRW-32-AES 4"
+AESKEY=0f6aeff8d3d2bb152583f73c1f012874cac6bc354d4a6554
+TWEAKKEY=90ae61cf7baebdccade494c54a29ae70
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000000000001
+CIPHERTEXT=9c0f152f55a2d8f0d67b8f9e2822bc41
+check
+
+NAME="LRW-32-AES 5"
+AESKEY=8ad4ee102fbd81fff886ceac93c5adc6a01907c09df7bbdd
+TWEAKKEY=5213b2b7f0ff11d8d608d0cd2eb1176f
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000200000000
+CIPHERTEXT=d4276a7f14913d65c860480287e33406
+check
+
+NAME="LRW-32-AES 6"
+AESKEY=f8d476ffd646ee6c2384cb1c77d6195dfef1a9f37bbc8d21a79c21f8cb900289
+TWEAKKEY=a845348ec8c5b5f126f50e76fefd1b1e
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000000000001
+CIPHERTEXT=bd06b8e1db98899ec498e491cf1c702b
+check
+
+NAME="LRW-32-AES 7"
+AESKEY=fb7615b23d80891dd470980bc79584c8b2fb64ce6097878d17fce45a49e830b7
+TWEAKKEY=6e7817e72d5e12d46064047af12f9e0c
+PLAINTEXT=30313233343536373839414243444546
+TWEAKLOCATION=00000000000000000000000200000000
+CIPHERTEXT=5b908ec1abdd675f3d698a9553c89ce5
+check
+
+NAME="Ken Buchanan"
+AESKEY=f8d476ffd646ee6c2384cb1c77d6195dfef1a9f37bbc8d21a79c21f8cb900289
+TWEAKKEY=a845348ec8c5b5f126f50e76fefd1b1e
+TWEAKLOCATION=00000000000000000000000000000001
+PLAINTEXT=\
+0511b718abc62dac705df62294cde56c176bf61cf0f36ef850381f7149b657d6\
+8fcb8d6be3a62990fe2a6282ae6d8bf6ad1e9e205f38be04da108eeda2a487ab\
+da6bb40c75bad37cc9ac4231957cc904ebd56e32698adba615d73f4f2f666903\
+9c1f540fde1ff3654c9612ed7c9203016fbc3593acf127f1b496825a5fb0a050\
+89a48e664485ccfd331470e396b2c3d3bb545a1af974a2c52d6475ddb454e674\
+8cd39d9e86ab5153b7933e6fd04e2c40f6a82e3e9df466a5761273441a56d772\
+88cd218c4c0ffeda95e03aa6a58446cdd53e9d3ae267e6601ae2708558c21b09\
+e1d72ccaada88ff9acb30edbca2ee2b85171d93c6cf156f8ea9cf1fb0ce6b710\
+1cf8a97ce85335c1903e764a74a4212cf62c4e0f943a882e41096a337df6dd3f\
+8d23317484eb886eccb9bc2283190722a52ddfa5f380857884396a6d6a994fa5\
+15fe46b0e46ca5413cce8f426071a7750840658a82bff5437196a94d448a20be\
+fa4dbbc07d319665e775e53efd923bc955bb167ef7c28ca4401de5ef0edfe49a\
+627365fd4663253d2bafe564fea55ccf24f3b4ac64badf4bc6967d812d8d97f7\
+c5687784322bcc857496f0127761b9eb71aa82cb1cdb89c8c6b5e35c7d390724\
+da398745c02bbb01acbc2a5c7ffce8ce6d9c6fedd3c1a1d6c555a9662fe1c832\
+a65da43a9873e845a4c7a8b4f61303f6e92ec4290f84dbc421c4c2756789370a
+CIPHERTEXT=\
+1a1da930adf92f9bb61daeeff02ff85a393cbf2ab245b2231b633ccfaabecf4e\
+fae829c220682b3c2e8bf76e25bde33d6627d6afd6643ee3e8584697395107de\
+cb37bca9c05f75c30e84231d16d41c599c1a0255ab3a971ddfddc70651d770ae\
+23c68cf51ea0e582b8b2bf04a0328e68ebaf6e2d94222fce4cb559e2a22fa098\
+1a97c6d4b50059f2841472b19a6ea37fea20e7cb65773adfc8976715c22a27cc\
+1855a1240b2424af5bec68b8c8f5ba63ffed89ced53d88f325ef057c3aefebd8\
+7a320dd11e5859999025b526b0e32b6c4ca98b844f5e0150413058c56274521d\
+45246a42644f971ca866b56d79d40d48c55ff39032dddde1e4a99ffcc3525a46\
+e481849536597a6baab360adce9f9f28e0017522c44ea9625c620d00cb13e843\
+72d42d5346b5d1162218df3433f5d61cb879789794ff72134c27fccbbf0153a6\
+b4506ededfb543a459df52f97ce0116f2d148e24612ce117ccce510c198a8230\
+94d53d6a53065ebdb7ebfafd2751de851e865311539400ee2b8c082abfddae11\
+cb1ea2079a80cf629b09dc953c968eb109bde4ebdbca707a9efa3118453c2133\
+b0b32beaf3712de103ad1b48d46727f062e43dfb9b0876e7dd2b0139045a587a\
+f71190ecbd515c326bd73539026bf2a6d00d07e106c45b7de46ad7ee151f83b4\
+a3a75ec390b7efd3b74ff8924cb73c29cd7e2b5d43ea42e7743f7d588875de3e
+check
diff --git a/Documentation/crypto/testvector.fun b/Documentation/crypto/testvector.fun
new file mode 100644
index 0000000..34af646
--- /dev/null
+++ b/Documentation/crypto/testvector.fun
@@ -0,0 +1,57 @@
+function sig_handler_EXIT {
+ if [ -b /dev/mapper/$MAPPER ]; then dmsetup remove $MAPPER; fi
+ losetup -d $LOOP
+ rm $IMAGE
+ exit $1
+}
+
+function setup {
+ set -e
+ IMAGE=`mktemp`
+ trap sig_handler_EXIT EXIT TERM INT QUIT
+ dd if=/dev/zero of=$IMAGE count=1 bs=512 2> /dev/null
+ losetup $LOOP $IMAGE
+}
+
+function check_encrypt {
+ echo $PLAINTEXT | xxd -r -p | dd of=/dev/mapper/$MAPPER seek=$BOFFSET \
+ bs=$LEN count=1 2> /dev/null
+
+ C=`dd if=$IMAGE skip=$BOFFSET bs=$LEN count=1 2> \
+ /dev/null | xxd -p | tr -d '\n'`
+
+ if [ "$C" = "$CIPHERTEXT" ]; then ENC="--ENCRYPT OK--"
+ else ENC="--ENCRYPT FAILED--"; fi
+}
+
+function check_decrypt {
+ echo $CIPHERTEXT | xxd -r -p | dd of=$LOOP seek=$BOFFSET \
+ bs=$LEN count=1 2> /dev/null
+
+ P=`dd if=/dev/mapper/$MAPPER skip=$BOFFSET bs=$LEN count=1 2> \
+ /dev/null | xxd -p | tr -d '\n'`
+
+ if [ "$P" = "$PLAINTEXT" ]; then DEC="--DECRYPT OK--"
+ else DEC="--DECRYPT FAILED--"; fi
+}
+
+function check {
+ special
+
+ LEN=$((`echo $PLAINTEXT | wc -c`/2))
+
+ # setup mode
+ TABLE=`echo "0 1 crypt $MODE $KEY $IV_OFFSET $LOOP 0"`
+
+ echo $TABLE | dmsetup create $MAPPER
+
+ check_encrypt
+ check_decrypt
+
+ dmsetup remove $MAPPER
+
+ echo "$NAME: $ENC $DEC"
+}
+
+setup
+
--
1.4.1.1


2006-08-31 12:40:22

by Rik Snel

Subject: [PATCH 1/6] crypto: trivial comment improvements

Just some minor comment nits.

- little-endian is better than low-endian
- and since it is called essiv everywhere it should also be essiv
in the comments (and not ess_iv)

Signed-off-by: Rik Snel <[email protected]>
---
drivers/md/dm-crypt.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 91d4081..06961f7 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -99,12 +99,12 @@ static kmem_cache_t *_crypt_io_pool;
/*
* Different IV generation algorithms:
*
- * plain: the initial vector is the 32-bit low-endian version of the sector
+ * plain: the initial vector is the 32-bit little-endian version of the sector
* number, padded with zeros if neccessary.
*
- * ess_iv: "encrypted sector|salt initial vector", the sector number is
- * encrypted with the bulk cipher using a salt as key. The salt
- * should be derived from the bulk cipher's key via hashing.
+ * essiv: "encrypted sector|salt initial vector", the sector number is
+ * encrypted with the bulk cipher using a salt as key. The salt
+ * should be derived from the bulk cipher's key via hashing.
*
* plumb: unimplemented, see:
* http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
--
1.4.1.1


2006-09-01 03:52:24

by Herbert Xu

Subject: Re: LRW implementation, please comment

Hi Rik:

On Thu, Aug 31, 2006 at 02:39:30PM +0200, Rik Snel wrote:
>
> This is an attempt at implementing LRW for the new blockcipher API.
> Please review, test and comment.

Thanks a lot for doing this. It looks good to me.

There are a few style (see Documentation/CodingStyle) issues. It would
be good if you can fix them up before I apply the patches.

Could you also convert the test vectors to use tcrypt.h/tcrypt.c?

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2006-09-01 08:55:34

by Rik Snel

Subject: Re: LRW implementation, please comment

Hello,

On Fri, Sep 01, 2006 at 01:52:25PM +1000, Herbert Xu wrote:
> On Thu, Aug 31, 2006 at 02:39:30PM +0200, Rik Snel wrote:
> > This is an attempt at implementing LRW for the new blockcipher API.
> > Please review, test and comment.
>
> Thanks a lot for doing this. It looks good to me.

That's good to hear.

> There are a few style (see Documentation/CodingStyle) issues. It would
> be good if you can fix them up before I apply the patches.

Can you give some examples about what I missed from CodingStyle?
(I just reread it and I think my patch adheres pretty well to those
rules: tabs=8, K&R bracing, don't pollute global namespace etc...)

> Could you also convert the test vectors to use tcrypt.h/tcrypt.c?

Yes, I just didn't know what tcrypt.[ch] was for, otherwise I would have put
those vectors there in the first place. The last test vector, however,
won't fit because it is an entire 512 byte sector. (Which is not that
bad: all narrow blocks (16 bytes) are encrypted independently, so no
realistic test vector of a wide block cipher mode would fit anyway. ABL
uses blocks of any size, say 512 bytes, and if 1 bit changes in the
plaintext block, the whole cipher block will be affected.) I will propose
something to fix that when ABL is finished.
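
For reference, such an entry might look roughly like this; the struct
shape is only an assumption modeled on tcrypt.h's cipher test vectors
and may well differ, but the data is LRW-32-AES vector 1 from my script
(key = AES key || tweak key, IV = the benbi narrow block count):

struct lrw_testvec { /* hypothetical shape, see above */
	char key[48];
	unsigned char klen;
	char iv[16];
	char input[16];
	unsigned short ilen;
	char result[16];
	unsigned short rlen;
};

static struct lrw_testvec lrw_enc_tv = {
	.key	= "\x45\x62\xac\x25\xf8\x28\x17\x6d"
		  "\x4c\x26\x84\x14\xb5\x68\x01\x85"
		  "\x25\x8e\x2a\x05\xe7\x3e\x9d\x03"
		  "\xee\x5a\x83\x0c\xcc\x09\x4c\x87",
	.klen	= 32,
	.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
		  "\x00\x00\x00\x00\x00\x00\x00\x01",
	.input	= "\x30\x31\x32\x33\x34\x35\x36\x37"
		  "\x38\x39\x41\x42\x43\x44\x45\x46",
	.ilen	= 16,
	.result	= "\xf1\xb2\x73\xcd\x65\xa3\xdf\x5f"
		  "\xe9\x5d\x48\x92\x54\x63\x4e\xb8",
	.rlen	= 16,
};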

Greetings,

Rik.

--
Nothing is ever a total loss; it can always serve as a bad example.

2006-09-01 10:37:05

by Herbert Xu

Subject: Re: LRW implementation, please comment

On Fri, Sep 01, 2006 at 10:55:14AM +0200, [email protected] wrote:
>
> Can you give some examples about what I missed from CodingStyle?
> (I just reread it and I think my patch adheres pretty well to those
> rules: tabs=8, K&R bracing, don't pollute global namespace etc...)

It's only a few spots:

* Space before * for pointers, e.g., (u64 *) instead of (u64*).
* There is one line near the end of b128ops.h that starts with spaces.
* Some lines terminate with spaces.

The last two issues can be caught if you test your patch with

git apply --check --whitespace=error-all diff

> Yes, I just didn't know what tcrypt.[ch] was for, otherwise I would have put
> those vectors there in the first place. The last testvector, however,
> won't fit because it is an entire 512 byte sector. (which is not that
> bad, because alle narrow blocks (16 bytes) are encrypted independently,
> no realistic testvector of a wide block cipher mode would fit. (ABL uses
> blocks of any size, say 512 bytes, and if 1 bit changes in the plaintext
> block, the whole cypher block will be affected) I will propose something
> to fix that if ABL is finished.

Feel free to extend the tcrypt cipher test buffer to 512 bytes.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2006-09-02 01:00:45

by Rik Snel

Subject: Re: LRW... v2


Hello Herbert/list,

Here is the updated version of my LRW patch set.

Greetings,

Rik.



2006-09-02 01:00:46

by Rik Snel

Subject: [PATCHv2 5/6] LRW, Liskov Rivest Wagner, a tweakable narrow block cipher mode

From: Rik Snel <[email protected]>

Main module: this implements the Liskov Rivest Wagner block cipher mode
in the new blockcipher API. The implementation is based on ecb.c.

The LRW-32-AES specification I used can be found at:
http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

It implements the optimization specified as optional in the
specification, and in addition it uses optimized multiplication
routines from gf128mul.c.

Since gf128mul.[ch] is not tested on bigendian, this cipher mode
may currently fail badly on bigendian machines.

Signed-off-by: Rik Snel <[email protected]>
---

Note: I rerolled the loop because it turned out that I had unrolled it
wrong. (I noticed it after I added the long test vector; a cipher block
was split in two by a page boundary and the reconstructed block was
discarded because of an error in the unrolling.)

crypto/Kconfig | 13 ++
crypto/Makefile | 1
crypto/lrw.c | 291 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 305 insertions(+), 0 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 6b23c20..dfdfe08 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -156,6 +156,19 @@ config CRYPTO_CBC
CBC: Cipher Block Chaining mode
This block cipher algorithm is required for IPSec.

+config CRYPTO_LRW
+ tristate "LRW support (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_GF128MUL
+ default n
+ help
+ LRW: Liskov Rivest Wagner, a tweakable, non-malleable, non-movable
+ narrow block cipher mode. Use it with cipher specification string
+ aes-lrw-benbi; the key must be 256, 320 or 384 bits long. The first
+ 128, 192 or 256 bits in the key are used for AES and the rest is
+ used to tie each cipher block to its logical position.
+
config CRYPTO_DES
tristate "DES and Triple DES EDE cipher algorithms"
select CRYPTO_ALGAPI
diff --git a/crypto/Makefile b/crypto/Makefile
index bf0406b..e2e57be 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -26,6 +26,7 @@ obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
+obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_DES) += des.o
obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o
obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o
diff --git a/crypto/lrw.c b/crypto/lrw.c
new file mode 100644
index 0000000..a037379
--- /dev/null
+++ b/crypto/lrw.c
@@ -0,0 +1,291 @@
+/* LRW: as defined by Cyril Guyot in
+ * http://grouper.ieee.org/groups/1619/email/pdf00017.pdf
+ *
+ * Copyright (c) 2006 Rik Snel <[email protected]>
+ *
+ * Based on ecb.c
+ * Copyright (c) 2006 Herbert Xu <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/* This implementation is checked against the test vectors in the above
+ * document and by a test vector provided by Ken Buchanan at
+ * http://www.mail-archive.com/[email protected]/msg00173.html
+ *
+ * The test vectors are included in the testing module tcrypt.[ch] */
+#include <crypto/algapi.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+#include "b128ops.h"
+#include "gf128mul.h"
+
+struct priv {
+ struct crypto_cipher *child;
+ /* optimizes multiplying a random (non-incrementing, as at the
+ * start of a new sector) value with key2; we could also have
+ * used 4k optimization tables or no optimization at all. In the
+ * latter case we would have to store key2 here */
+ struct gf128mul_64k table;
+ /* stores:
+ * key2*{ 0,0,...0,0,0,0,1 }, key2*{ 0,0,...0,0,0,1,1 },
+ * key2*{ 0,0,...0,0,1,1,1 }, key2*{ 0,0,...0,1,1,1,1 }
+ * key2*{ 0,0,...1,1,1,1,1 }, etc
+ * needed for optimized multiplication of incrementing values
+ * with key2 */
+ u64 mulinc[128][GF128MUL_BYTES >> 3];
+};
+
+static inline void setbit128(void *b, int bit)
+{
+ int index = 15 - bit/8;
+ ((u8 *)b)[index] |= 1<<(bit%8);
+}
+
+static int setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct priv *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ int err, i;
+ u64 tmp[2] = { 0, }, scratch[2];
+ int bsize = crypto_cipher_blocksize(child);
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ if ((err = crypto_cipher_setkey(child, key, keylen - bsize)))
+ return err;
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+
+ /* initialize multiplication table for Key2 */
+ gf128mul_init_64k_bbe(&ctx->table, (u64 *)(key + keylen - bsize));
+
+ /* initialize optimization table */
+ for (i = 0; i < 128; i++) {
+ setbit128(tmp, i);
+ b128ops_mov(ctx->mulinc[i], tmp);
+ gf128mul_64k_bbe(ctx->mulinc[i], &ctx->table, scratch);
+ }
+
+ return 0;
+}
+
+struct sinfo {
+ u64 b1[2], b2[2];
+ struct crypto_tfm *tfm;
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
+};
+
+static inline void inc(u64 *iv)
+{
+ if (!(iv[1] = cpu_to_be64(be64_to_cpu(iv[1]) + 1)))
+ iv[0] = cpu_to_be64(be64_to_cpu(iv[0]) + 1);
+}
+
+static inline void round(struct sinfo *s, u8 *dst, const u8 *src)
+{
+ b128ops_xor(s->b2, src); /* PP <- T xor P */
+ s->fn(s->tfm, dst, (u8 *)s->b2); /* CC <- E(Key1,PP) */
+ b128ops_xor(dst, s->b1); /* C <- T xor CC */
+}
+
+/* this returns the number of consecutive 1 bits
+ * starting from the right in i */
+static inline int get_index8(u8 i)
+{
+ int j = 1;
+
+ if (i&1) {
+ while ((i >>= 1)&1) j++;
+ return j;
+ }
+
+ return 0;
+}
+
+/* this returns the number of consecutive 1 bits starting
+ * from the right, get_index128(00 00 00 00 00 00 ... 00 00 10 FB) = 2 */
+static inline int get_index128(u8 *block)
+{
+ int inc, ret = 0, len = 16;
+ while ((inc = get_index8(block[--len])) == 8) ret += 8;
+ return ret + inc;
+}
+
+static int crypt(struct blkcipher_desc *d,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, struct priv *ctx,
+ void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
+{
+ struct blkcipher_walk w;
+ int err, old_t_known = 0;
+ unsigned int avail;
+ const int bs = crypto_cipher_blocksize(ctx->child);
+ struct sinfo s = {
+ .tfm = crypto_cipher_tfm(ctx->child),
+ .fn = fn
+ };
+
+ blkcipher_walk_init(&w, dst, src, nbytes);
+
+ err = blkcipher_walk_virt(d, &w);
+
+ while ((avail = w.nbytes)) {
+ u8 *wsrc = w.src.virt.addr;
+ u8 *wdst = w.dst.virt.addr;
+ do {
+ if (old_t_known) {
+ /* old T is available in s.b1; new one
+ * must be made available in b1 and b2 */
+
+ /* T <- I*Key2, using the optimization
+ * discussed in the specification */
+ b128ops_xor(s.b1, ctx->mulinc[
+ get_index128(w.iv)]);
+ inc((u64 *)w.iv);
+ b128ops_mov(s.b2, s.b1);
+ } else {
+ /* calculate first value of T */
+ b128ops_mov(s.b1, w.iv);
+ /* T <- I*Key2 */
+ gf128mul_64k_bbe(s.b1, &ctx->table, s.b2);
+ old_t_known = 1;
+ }
+
+ round(&s, wdst, wsrc);
+
+ wsrc += bs;
+ wdst += bs;
+ } while ((avail -= bs) >= bs);
+
+ err = blkcipher_walk_done(d, &w, avail);
+ }
+
+ return err;
+}
+
+static int encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+ struct scatterlist *src, unsigned int nbytes)
+{
+ struct priv *ctx = crypto_blkcipher_ctx(desc->tfm);
+ return crypt(desc, dst, src, nbytes, ctx,
+ crypto_cipher_alg(ctx->child)->cia_encrypt);
+}
+
+static int decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+ struct scatterlist *src, unsigned int nbytes)
+{
+ struct priv *ctx = crypto_blkcipher_ctx(desc->tfm);
+ return crypt(desc, dst, src, nbytes, ctx,
+ crypto_cipher_alg(ctx->child)->cia_decrypt);
+}
+
+static int init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = (void *)tfm->__crt_alg;
+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+ struct priv *ctx = crypto_tfm_ctx(tfm);
+ u32 *flags = &tfm->crt_flags;
+
+ tfm = crypto_spawn_tfm(spawn);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+ if (crypto_tfm_alg_blocksize(tfm) != 16) {
+ *flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
+ crypto_free_tfm(tfm); /* don't leak the spawned tfm */
+ return -EINVAL;
+ }
+
+ ctx->child = crypto_cipher_cast(tfm);
+ return 0;
+}
+
+static void exit_tfm(struct crypto_tfm *tfm)
+{
+ struct priv *ctx = crypto_tfm_ctx(tfm);
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *alloc(void *param, unsigned int len)
+{
+ struct crypto_instance *inst;
+ struct crypto_alg *alg;
+
+ alg = crypto_get_attr_alg(param, len, CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+ if (IS_ERR(alg))
+ return ERR_PTR(PTR_ERR(alg));
+
+ inst = crypto_alloc_instance("lrw", alg);
+ if (IS_ERR(inst))
+ goto out_put_alg;
+
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = alg->cra_blocksize;
+
+ if (alg->cra_alignmask < 7)
+ inst->alg.cra_alignmask = 7;
+ else
+ inst->alg.cra_alignmask = alg->cra_alignmask;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+
+ if (!(alg->cra_blocksize % 4))
+ inst->alg.cra_alignmask |= 3;
+ inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
+ inst->alg.cra_blkcipher.min_keysize =
+ alg->cra_cipher.cia_min_keysize + alg->cra_blocksize;
+ inst->alg.cra_blkcipher.max_keysize =
+ alg->cra_cipher.cia_max_keysize + alg->cra_blocksize;
+
+ inst->alg.cra_ctxsize = sizeof(struct priv);
+
+ inst->alg.cra_init = init_tfm;
+ inst->alg.cra_exit = exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = setkey;
+ inst->alg.cra_blkcipher.encrypt = encrypt;
+ inst->alg.cra_blkcipher.decrypt = decrypt;
+
+out_put_alg:
+ crypto_mod_put(alg);
+ return inst;
+}
+
+static void free(struct crypto_instance *inst)
+{
+ crypto_drop_spawn(crypto_instance_ctx(inst));
+ kfree(inst);
+}
+
+static struct crypto_template crypto_tmpl = {
+ .name = "lrw",
+ .alloc = alloc,
+ .free = free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_module_init(void)
+{
+ return crypto_register_template(&crypto_tmpl);
+}
+
+static void __exit crypto_module_exit(void)
+{
+ crypto_unregister_template(&crypto_tmpl);
+}
+
+module_init(crypto_module_init);
+module_exit(crypto_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("LRW block cipher mode");
--
1.4.1.1



2006-09-02 01:00:45

by Rik Snel

[permalink] [raw]
Subject: [PATCHv2 1/6] crypto: trivial comment improvements

From: Rik Snel <[email protected]>

Just some minor comment nits.

- little-endian is better than low-endian
- since it is called essiv everywhere, it should also be essiv
  in the comments (and not ess_iv)

Signed-off-by: Rik Snel <[email protected]>
---
drivers/md/dm-crypt.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 91d4081..06961f7 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -99,12 +99,12 @@ static kmem_cache_t *_crypt_io_pool;
/*
* Different IV generation algorithms:
*
- * plain: the initial vector is the 32-bit low-endian version of the sector
+ * plain: the initial vector is the 32-bit little-endian version of the sector
* number, padded with zeros if neccessary.
*
- * ess_iv: "encrypted sector|salt initial vector", the sector number is
- * encrypted with the bulk cipher using a salt as key. The salt
- * should be derived from the bulk cipher's key via hashing.
+ * essiv: "encrypted sector|salt initial vector", the sector number is
+ * encrypted with the bulk cipher using a salt as key. The salt
+ * should be derived from the bulk cipher's key via hashing.
*
* plumb: unimplemented, see:
* http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
--
1.4.1.1



2006-09-02 01:00:45

by Rik Snel

[permalink] [raw]
Subject: [PATCHv2 2/6] crypto: benbi IV, big endian narrow block count for LRW-32-AES

From: Rik Snel <[email protected]>

LRW-32-AES needs a certain IV, which should be provided by dm-crypt.
The block cipher mode could, in principle, generate the correct IV from
the plain IV, but I think that it is cleaner to supply the right IV
directly.

The sector -> narrow block calculation uses a shift for performance reasons.
This shift is computed in .ctr and stored in cc->iv_gen_private (as a void*).
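
As a worked example: with AES the cipher blocksize is 16 bytes, so
log = 4 and the stored shift is 9 - 4 = 5. Sector 3 then yields the
64-bit big-endian count (3 << 5) + 1 = 97, which is the benbi IV for
the first 16-byte narrow block of that sector.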

Signed-off-by: Rik Snel <[email protected]>
---
drivers/md/dm-crypt.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 06961f7..df399d6 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -106,6 +106,9 @@ static kmem_cache_t *_crypt_io_pool;
* encrypted with the bulk cipher using a salt as key. The salt
* should be derived from the bulk cipher's key via hashing.
*
+ * benbi: the 64-bit "big-endian 'narrow block'-count", starting at 1
+ * (needed for LRW-32-AES and possibly other narrow block modes)
+ *
* plumb: unimplemented, see:
* http://article.gmane.org/gmane.linux.kernel.device-mapper.dm-crypt/454
*/
@@ -200,6 +203,49 @@ static int crypt_iv_essiv_gen(struct cry
return 0;
}

+static int crypt_iv_benbi_ctr(struct crypt_config *cc, struct dm_target *ti,
+ const char *opts)
+{
+ unsigned int bs = crypto_blkcipher_blocksize(cc->tfm);
+ int log = long_log2(bs);
+
+ /* we need to calculate how far we must shift the sector count
+ * to get the cipher block count; we use this shift in _gen */
+
+ if (1 << log != bs) {
+ ti->error = "cypher blocksize is not a power of 2";
+ return -EINVAL;
+ }
+
+ if (log > 9) {
+ ti->error = "cypher blocksize is > 512";
+ return -EINVAL;
+ }
+
+ if (crypto_blkcipher_ivsize(cc->tfm) < sizeof(u64)) {
+ ti->error = "cypher ivsize is < 8";
+ return -EINVAL;
+ }
+
+ cc->iv_gen_private = (void *)(9 - log);
+
+ return 0;
+}
+
+static void crypt_iv_benbi_dtr(struct crypt_config *cc)
+{
+ cc->iv_gen_private = NULL;
+}
+
+static int crypt_iv_benbi_gen(struct crypt_config *cc, u8 *iv, sector_t sector)
+{
+ memset(iv, 0, cc->iv_size - sizeof(u64)); /* rest is set below */
+ *(u64 *)(iv + cc->iv_size - sizeof(u64)) =
+ cpu_to_be64(((u64)sector << (u32)cc->iv_gen_private) + 1);
+
+ return 0;
+}
+
static struct crypt_iv_operations crypt_iv_plain_ops = {
.generator = crypt_iv_plain_gen
};
@@ -210,6 +256,11 @@ static struct crypt_iv_operations crypt_
.generator = crypt_iv_essiv_gen
};

+static struct crypt_iv_operations crypt_iv_benbi_ops = {
+ .ctr = crypt_iv_benbi_ctr,
+ .dtr = crypt_iv_benbi_dtr,
+ .generator = crypt_iv_benbi_gen
+};

static int
crypt_convert_scatterlist(struct crypt_config *cc, struct scatterlist *out,
@@ -579,7 +630,8 @@ static int crypt_ctr(struct dm_target *t
cc->tfm = tfm;

/*
- * Choose ivmode. Valid modes: "plain", "essiv:<esshash>".
+ * Choose ivmode. Valid modes: "plain", "essiv:<esshash>", "benbi".
+ *
* See comments at iv code
*/

@@ -589,6 +641,8 @@ static int crypt_ctr(struct dm_target *t
cc->iv_gen_ops = &crypt_iv_plain_ops;
else if (strcmp(ivmode, "essiv") == 0)
cc->iv_gen_ops = &crypt_iv_essiv_ops;
+ else if (strcmp(ivmode, "benbi") == 0)
+ cc->iv_gen_ops = &crypt_iv_benbi_ops;
else {
ti->error = "Invalid IV mode";
goto bad2;
--
1.4.1.1



2006-09-02 01:00:45

by Rik Snel

[permalink] [raw]
Subject: [PATCHv2 3/6] crypto: some common 128-bit block operations, nicely centralized

From: Rik Snel <[email protected]>

128 bits is a common blocksize in linux kernel cryptography, so it helps to
centralize some common operations. The data must be aligned at sizeof(int)
for decent performance.
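
For illustration, the intended calling convention on two u64[2] blocks
is (a sketch, not part of the patch):

	u64 a[2] = { 0x0123456789abcdefULL, 0xfedcba9876543210ULL };
	u64 b[2];

	b128ops_mov(b, a);	/* b = a */
	b128ops_xor(a, b);	/* a ^= b, so a is now all zero */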

The code, while mostly trivial, is based on a header file mode_hdr.h in
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

The original copyright (and GPL statement) of the original author,
Dr Brian Gladman, is preserved.

Signed-off-by: Rik Snel <[email protected]>
---
crypto/b128ops.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 72 insertions(+), 0 deletions(-)

diff --git a/crypto/b128ops.h b/crypto/b128ops.h
new file mode 100644
index 0000000..73d8a5f
--- /dev/null
+++ b/crypto/b128ops.h
@@ -0,0 +1,72 @@
+/* b128ops.h - common 128-bit block operations
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006, Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue Date: 13/06/2006
+*/
+
+#ifndef _LINUX_B128OPS_H
+#define _LINUX_B128OPS_H
+
+#include <linux/byteorder/swab.h>
+
+/* watch out: for good performance p and q must be aligned to 32bit
+ * boundaries on a 32bit machine and to 64bit boundaries on a 64bit
+ * machine. */
+static inline void b128ops_mov(void *p, const void *q)
+{
+ ((u64 *)p)[0] = ((u64 *)q)[0];
+ ((u64 *)p)[1] = ((u64 *)q)[1];
+}
+
+static inline void b128ops_xor(void *p, const void *q)
+{
+ ((u64 *)p)[0] ^= ((u64 *)q)[0];
+ ((u64 *)p)[1] ^= ((u64 *)q)[1];
+}
+
+static inline void bswap64_block(void *d, const void *s, u32 n)
+{
+ while (n--) ((u64 *)d)[n] = __swab64(((u64 *)s)[n]);
+}
+
+#endif /* _LINUX_B128OPS_H */
--
1.4.1.1



2006-09-02 01:00:46

by Rik Snel

[permalink] [raw]
Subject: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

From: Rik Snel <[email protected]>

WARNING: untested on bigendian, please test.

A lot of cypher modes need multiplications in GF(2^128): LRW, ABL, GCM...
I use functions from this library in my LRW implementation and I will
also use them in my ABL (Arbitrary Block Length) implementation, an
unencumbered (correct me if I am wrong) wide block cipher mode.

Elements of GF(2^128) must be presented as u64 * (specifically u64[2]);
this encourages automatic and proper alignment.

The library contains support for two different representations of
GF(2^128); see the comment in gf128mul.h. There are different levels of
optimization (memory/speed tradeoff).

The code is based on work by Dr Brian Gladman. Notable changes:
- deletion of two optimization modes
- change from u32 to u64 for faster handling on 64bit machines
- support for 'bbe' representation in addition to the already implemented
  'lle' representation.
- move 'inline void' functions from header to 'static void' in the
source file
- update to use the linux coding style conventions

The original can be found at:
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

The copyright (and GPL statement) of the original author is preserved.
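
For readers of the LRW patch, this is roughly how the 64k-table
interface is meant to be called (a sketch; key2 stands for the 16-byte
constant factor as a u64[2], x for the value to be multiplied):

	struct gf128mul_64k table;
	u64 x[2], scratch[2];

	gf128mul_init_64k_bbe(&table, key2);	/* precompute key2's tables */
	/* ... put the 128-bit value to be multiplied in x ... */
	gf128mul_64k_bbe(x, &table, scratch);	/* x = x * key2 in GF(2^128) */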

Signed-off-by: Rik Snel <[email protected]>
---
crypto/Kconfig | 11 +
crypto/Makefile | 1
crypto/gf128mul.c | 401 +++++++++++++++++++++++++++++++++++++++++++++++++++++
crypto/gf128mul.h | 196 ++++++++++++++++++++++++++
4 files changed, 609 insertions(+), 0 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1e2f39c..6b23c20 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -128,6 +128,17 @@ config CRYPTO_TGR192
See also:
<http://www.cs.technion.ac.il/~biham/Reports/Tiger/>.

+config CRYPTO_GF128MUL
+ tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ default n
+ help
+ Efficient table driven implementation of multiplications in the
+ field GF(2^128). This is needed by some cypher modes. This
+ option will be selected automatically if you select such a
+ cipher mode. Only select this option by hand if you expect to load
+ an external module that requires these functions.
+
config CRYPTO_ECB
tristate "ECB support"
select CRYPTO_BLKCIPHER
diff --git a/crypto/Makefile b/crypto/Makefile
index 7236620..bf0406b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -23,6 +23,7 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
obj-$(CONFIG_CRYPTO_WP512) += wp512.o
obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
+obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_DES) += des.o
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
new file mode 100644
index 0000000..db92dc1
--- /dev/null
+++ b/crypto/gf128mul.c
@@ -0,0 +1,401 @@
+/* gf128mul.c - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006, Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue 31/01/2006
+
+ This file provides fast multiplication in GF(128) as required by several
+ cryptographic authentication modes
+*/
+
+#include <linux/module.h>
+#include "b128ops.h"
+#include "gf128mul.h"
+
+#define gf128mul_dat(q) { \
+ q(0x00), q(0x01), q(0x02), q(0x03), q(0x04), q(0x05), q(0x06), q(0x07),\
+ q(0x08), q(0x09), q(0x0a), q(0x0b), q(0x0c), q(0x0d), q(0x0e), q(0x0f),\
+ q(0x10), q(0x11), q(0x12), q(0x13), q(0x14), q(0x15), q(0x16), q(0x17),\
+ q(0x18), q(0x19), q(0x1a), q(0x1b), q(0x1c), q(0x1d), q(0x1e), q(0x1f),\
+ q(0x20), q(0x21), q(0x22), q(0x23), q(0x24), q(0x25), q(0x26), q(0x27),\
+ q(0x28), q(0x29), q(0x2a), q(0x2b), q(0x2c), q(0x2d), q(0x2e), q(0x2f),\
+ q(0x30), q(0x31), q(0x32), q(0x33), q(0x34), q(0x35), q(0x36), q(0x37),\
+ q(0x38), q(0x39), q(0x3a), q(0x3b), q(0x3c), q(0x3d), q(0x3e), q(0x3f),\
+ q(0x40), q(0x41), q(0x42), q(0x43), q(0x44), q(0x45), q(0x46), q(0x47),\
+ q(0x48), q(0x49), q(0x4a), q(0x4b), q(0x4c), q(0x4d), q(0x4e), q(0x4f),\
+ q(0x50), q(0x51), q(0x52), q(0x53), q(0x54), q(0x55), q(0x56), q(0x57),\
+ q(0x58), q(0x59), q(0x5a), q(0x5b), q(0x5c), q(0x5d), q(0x5e), q(0x5f),\
+ q(0x60), q(0x61), q(0x62), q(0x63), q(0x64), q(0x65), q(0x66), q(0x67),\
+ q(0x68), q(0x69), q(0x6a), q(0x6b), q(0x6c), q(0x6d), q(0x6e), q(0x6f),\
+ q(0x70), q(0x71), q(0x72), q(0x73), q(0x74), q(0x75), q(0x76), q(0x77),\
+ q(0x78), q(0x79), q(0x7a), q(0x7b), q(0x7c), q(0x7d), q(0x7e), q(0x7f),\
+ q(0x80), q(0x81), q(0x82), q(0x83), q(0x84), q(0x85), q(0x86), q(0x87),\
+ q(0x88), q(0x89), q(0x8a), q(0x8b), q(0x8c), q(0x8d), q(0x8e), q(0x8f),\
+ q(0x90), q(0x91), q(0x92), q(0x93), q(0x94), q(0x95), q(0x96), q(0x97),\
+ q(0x98), q(0x99), q(0x9a), q(0x9b), q(0x9c), q(0x9d), q(0x9e), q(0x9f),\
+ q(0xa0), q(0xa1), q(0xa2), q(0xa3), q(0xa4), q(0xa5), q(0xa6), q(0xa7),\
+ q(0xa8), q(0xa9), q(0xaa), q(0xab), q(0xac), q(0xad), q(0xae), q(0xaf),\
+ q(0xb0), q(0xb1), q(0xb2), q(0xb3), q(0xb4), q(0xb5), q(0xb6), q(0xb7),\
+ q(0xb8), q(0xb9), q(0xba), q(0xbb), q(0xbc), q(0xbd), q(0xbe), q(0xbf),\
+ q(0xc0), q(0xc1), q(0xc2), q(0xc3), q(0xc4), q(0xc5), q(0xc6), q(0xc7),\
+ q(0xc8), q(0xc9), q(0xca), q(0xcb), q(0xcc), q(0xcd), q(0xce), q(0xcf),\
+ q(0xd0), q(0xd1), q(0xd2), q(0xd3), q(0xd4), q(0xd5), q(0xd6), q(0xd7),\
+ q(0xd8), q(0xd9), q(0xda), q(0xdb), q(0xdc), q(0xdd), q(0xde), q(0xdf),\
+ q(0xe0), q(0xe1), q(0xe2), q(0xe3), q(0xe4), q(0xe5), q(0xe6), q(0xe7),\
+ q(0xe8), q(0xe9), q(0xea), q(0xeb), q(0xec), q(0xed), q(0xee), q(0xef),\
+ q(0xf0), q(0xf1), q(0xf2), q(0xf3), q(0xf4), q(0xf5), q(0xf6), q(0xf7),\
+ q(0xf8), q(0xf9), q(0xfa), q(0xfb), q(0xfc), q(0xfd), q(0xfe), q(0xff) \
+}
+
+/* Given the value i in 0..255 as the byte overflow when a field element
+ in GHASH is multiplied by x^8, this function will return the values that
+ are generated in the lo 16-bit word of the field value by applying the
+ modular polynomial. The values lo_byte and hi_byte are returned via the
+ macro xp_fun(lo_byte, hi_byte) so that the values can be assembled into
+ memory as required by a suitable definition of this macro operating on
+ the table above
+*/
+
+#ifdef __BIG_ENDIAN
+#define xx(p,q) 0x##p##q /* assemble in big endian order */
+#else
+#define xx(p,q) 0x##q##p /* assemble in little endian order */
+#endif
+
+#define xda_bbe(i) ( \
+ (i&0x80?xx(43,80):0)^(i&0x40?xx(21,c0):0)^ \
+ (i&0x20?xx(10,e0):0)^(i&0x10?xx(08,70):0)^ \
+ (i&0x08?xx(04,38):0)^(i&0x04?xx(02,1c):0)^ \
+ (i&0x02?xx(01,0e):0)^(i&0x01?xx(00,87):0) \
+)
+
+#define xda_lle(i) ( \
+ (i&0x80?xx(e1,00):0)^(i&0x40?xx(70,80):0)^ \
+ (i&0x20?xx(38,40):0)^(i&0x10?xx(1c,20):0)^ \
+ (i&0x08?xx(0e,10):0)^(i&0x04?xx(07,08):0)^ \
+ (i&0x02?xx(03,84):0)^(i&0x01?xx(01,c2):0) \
+)
+
+static const u16 gf128mul_table_lle[256] = gf128mul_dat(xda_lle);
+
+static const u16 gf128mul_table_bbe[256] = gf128mul_dat(xda_bbe);
+
+/* These functions multiply a field element by x and by x^8
+ * in the polynomial field representation. They use 64-bit word
+ * operations to gain speed but compensate for machine endianness
+ * and hence work correctly on both styles of machine.
+ */
+#ifdef __BIG_ENDIAN
+
+static void gf128mul_x_lle(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[(x[1] << 7) & 0xff];
+ r[1] = (x[1] >> 1) | (x[0] << 63);
+ r[0] = (x[0] >> 1) ^ (_tt << 48);
+}
+
+static void gf128mul_x_bbe(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[(x[0] >> 63)];
+ r[0] = (x[0] << 1) | (x[1] >> 63);
+ r[1] = (x[1] << 1) ^ _tt;
+}
+
+static void gf128mul_x8_lle(u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[x[1] & 0xff];
+ x[1] = (x[1] >> 8) | (x[0] << 56);
+ x[0] = (x[0] >> 8) ^ (_tt << 48);
+}
+
+static void gf128mul_x8_bbe(u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[x[0] >> 56];
+ x[0] = (x[0] << 8) | (x[1] >> 56);
+ x[1] = (x[1] << 8) ^ _tt;
+}
+
+#else
+
+#define M80X 0x8080808080808080LLU
+#define M01X 0x0101010101010101LLU
+
+static void gf128mul_x_lle(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[(x[1] >> 49) & 0x80];
+ r[1] = ((x[1] >> 1) & ~M80X) | (((x[1] << 15) | (x[0] >> 49)) & M80X);
+ r[0] = (((x[0] >> 1) & ~M80X) | ((x[0] << 15) & M80X)) ^ _tt;
+}
+
+static void gf128mul_x8_lle(u64 x[2])
+{
+ u64 _tt = gf128mul_table_lle[x[1] >> 56];
+ x[1] = (x[1] << 8) | (x[0] >> 56);
+ x[0] = (x[0] << 8) ^ _tt;
+}
+
+static void gf128mul_x_bbe(u64 r[2], const u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[(x[0] >> 7) & 0x01];
+ r[0] = ((x[0] << 1) & ~M01X) | (((x[0] >> 15) | (x[1] << 49)) & M01X);
+ r[1] = (((x[1] << 1) & ~M01X) | ((x[1] >> 15) & M01X)) ^ _tt << 48;
+}
+
+static void gf128mul_x8_bbe(u64 x[2])
+{
+ u64 _tt = gf128mul_table_bbe[x[0]&0xff];
+ x[0] = (x[0] >> 8) | (x[1] << 56);
+ x[1] = (x[1] >> 8) ^ (_tt << 48);
+}
+
+#endif
+
+void gf128mul_lle(u64 *a, const u64 *b)
+{
+ u64 r[GF128MUL_BYTES >> 3], p[8][GF128MUL_BYTES >> 3];
+ int i;
+
+ b128ops_mov(p[0], b);
+ for(i = 0; i < 7; ++i) gf128mul_x_lle(p[i+1], p[i]);
+
+ memset(r, 0, GF128MUL_BYTES);
+ for(i = 0; i < 16; ++i) {
+ u8 ch = ((u8 *)a)[15-i];
+ if(i) gf128mul_x8_lle(r);
+
+ if(ch&0x80) b128ops_xor(r, p[0]);
+ if(ch&0x40) b128ops_xor(r, p[1]);
+ if(ch&0x20) b128ops_xor(r, p[2]);
+ if(ch&0x10) b128ops_xor(r, p[3]);
+ if(ch&0x08) b128ops_xor(r, p[4]);
+ if(ch&0x04) b128ops_xor(r, p[5]);
+ if(ch&0x02) b128ops_xor(r, p[6]);
+ if(ch&0x01) b128ops_xor(r, p[7]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_lle);
+
+void gf128mul_bbe(u64 *a, const u64 *b)
+{
+ u64 r[GF128MUL_BYTES >> 3], p[8][GF128MUL_BYTES >> 3];
+ int i;
+ b128ops_mov(p[0], b);
+ for(i = 0; i < 7; ++i) gf128mul_x_bbe(p[i+1], p[i]);
+
+ memset(r, 0, GF128MUL_BYTES);
+ for(i = 0; i < 16; ++i) {
+ u8 ch = ((u8 *)a)[i];
+ if(i) gf128mul_x8_bbe(r);
+
+ if(ch&0x80) b128ops_xor(r, p[7]);
+ if(ch&0x40) b128ops_xor(r, p[6]);
+ if(ch&0x20) b128ops_xor(r, p[5]);
+ if(ch&0x10) b128ops_xor(r, p[4]);
+ if(ch&0x08) b128ops_xor(r, p[3]);
+ if(ch&0x04) b128ops_xor(r, p[2]);
+ if(ch&0x02) b128ops_xor(r, p[1]);
+ if(ch&0x01) b128ops_xor(r, p[0]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_bbe);
+
+/* This version uses 64k bytes of table space.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in
+ the buffer's lowest byte, we can construct a table of
+ the 256 16 byte values that result from the 256 values
+ of this byte. This requires 4096 bytes. But we also
+ need tables for each of the 16 higher bytes in the
+ buffer as well, which makes 64 kbytes in total.
+*/
+/* additional explanation
+ * t[0][BYTE] contains g*BYTE
+ * t[1][BYTE] contains g*x^8*BYTE
+ * ..
+ * t[15][BYTE] contains g*x^120*BYTE */
+void gf128mul_init_64k_lle(struct gf128mul_64k *t, const u64 *g)
+{
+ int i, j, k;
+
+ memset(t->t, 0, 16*256*GF128MUL_BYTES);
+ for (i = 0; i < GF128MUL_BYTES; ++i) {
+ if (!i) {
+ b128ops_mov(t->t[0][128], g);
+ for (j = 64; j > 0; j >>= 1) {
+ gf128mul_x_lle(t->t[0][j], t->t[0][j + j]);
+ }
+ } else for (j = 128; j > 0; j >>= 1) {
+ b128ops_mov(t->t[i][j], t->t[i - 1][j]);
+ gf128mul_x8_lle(t->t[i][j]);
+ }
+
+ for (j = 2; j < 256; j += j) for(k = 1; k < j; ++k) {
+ t->t[i][j+k][0] = t->t[i][j][0]^t->t[i][k][0];
+ t->t[i][j+k][1] = t->t[i][j][1]^t->t[i][k][1];
+ }
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_64k_lle);
+
+void gf128mul_init_64k_bbe(struct gf128mul_64k *t, const u64 *g)
+{
+ int i, j, k;
+
+ memset(t->t, 0, 16*256*GF128MUL_BYTES);
+ for (i = 0; i < GF128MUL_BYTES; ++i) {
+ if (!i) {
+ b128ops_mov(t->t[0][1], g);
+ for (j = 1; j <= 64; j <<= 1) {
+ gf128mul_x_bbe(t->t[0][j + j], t->t[0][j]);
+ }
+ } else for (j = 128; j > 0; j >>= 1) {
+ b128ops_mov(t->t[i][j], t->t[i - 1][j]);
+ gf128mul_x8_bbe(t->t[i][j]);
+ }
+
+ for (j = 2; j < 256; j += j) for(k = 1; k < j; ++k) {
+ t->t[i][j+k][0] = t->t[i][j][0]^t->t[i][k][0];
+ t->t[i][j+k][1] = t->t[i][j][1]^t->t[i][k][1];
+ }
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_64k_bbe);
+
+void gf128mul_64k_lle(u64 a[], struct gf128mul_64k *t, u64 *r)
+{
+ int i;
+ b128ops_mov(r, t->t[0][((u8 *)a)[0]]);
+ for (i = 1; i < GF128MUL_BYTES; ++i) {
+ b128ops_xor(r, t->t[i][((u8 *)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_64k_lle);
+
+void gf128mul_64k_bbe(u64 a[], struct gf128mul_64k *t, u64 *r)
+{
+ int i;
+ b128ops_mov(r, t->t[0][((u8 *)a)[15]]);
+ for (i = 1; i < GF128MUL_BYTES; ++i) {
+ b128ops_xor(r, t->t[i][((u8 *)a)[15 - i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_64k_bbe);
+
+/* This version uses 4k bytes of table space.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in a
+ single byte, we can construct a table of the 256 16 byte
+ values that result from the 256 values of this byte.
+ This requires 4096 bytes. If we take the highest byte in
+ the buffer and use this table to get the result, we then
+ have to multiply by x^120 to get the final value. For the
+ next highest byte the result has to be multiplied by x^112
+ and so on. But we can do this by accumulating the result
+ in an accumulator starting with the result for the top
+ byte. We repeatedly multiply the accumulator value by
+ x^8 and then add in (i.e. xor) the 16 bytes of the next
+ lower byte in the buffer, stopping when we reach the
+ lowest byte. This requires a 4096 byte table.
+*/
+void gf128mul_init_4k_lle(struct gf128mul_4k *t, const u64 *g)
+{
+ int j, k;
+
+ memset(t, 0, 256*GF128MUL_BYTES);
+ b128ops_mov(t->t[128], g);
+ for (j = 64; j > 0; j >>= 1) gf128mul_x_lle(t->t[j], t->t[j+j]);
+
+ for (j = 2; j < 256; j += j) for (k = 1; k < j; ++k) {
+ t->t[j + k][0] = t->t[j][0] ^ t->t[k][0];
+ t->t[j + k][1] = t->t[j][1] ^ t->t[k][1];
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_4k_lle);
+
+void gf128mul_init_4k_bbe(struct gf128mul_4k *t, const u64 *g)
+{
+ int j, k;
+
+ memset(t, 0, 256*GF128MUL_BYTES);
+ b128ops_mov(t->t[1], g);
+ for (j = 1; j <= 64; j <<= 1) gf128mul_x_bbe(t->t[j + j], t->t[j]);
+
+ for (j = 2; j < 256; j += j) for (k = 1; k < j; ++k) {
+ t->t[j + k][0] = t->t[j][0] ^ t->t[k][0];
+ t->t[j + k][1] = t->t[j][1] ^ t->t[k][1];
+ }
+}
+EXPORT_SYMBOL(gf128mul_init_4k_bbe);
+
+void gf128mul_4k_lle(u64 *a, struct gf128mul_4k *t, u64 *r)
+{
+ int i = 15;
+ b128ops_mov(r, t->t[((u8 *)a)[15]]);
+ while(i--) {
+ gf128mul_x8_lle(r);
+ b128ops_xor(r, t->t[((u8 *)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_4k_lle);
+
+void gf128mul_4k_bbe(u64 *a, struct gf128mul_4k *t, u64 *r)
+{
+ int i = 0;
+ b128ops_mov(r, t->t[((u8 *)a)[0]]);
+ while(++i < 16) {
+ gf128mul_x8_bbe(r);
+ b128ops_xor(r, t->t[((u8 *)a)[i]]);
+ }
+ b128ops_mov(a, r);
+}
+EXPORT_SYMBOL(gf128mul_4k_bbe);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("functions for multiplying elements of GF(2^128)");
diff --git a/crypto/gf128mul.h b/crypto/gf128mul.h
new file mode 100644
index 0000000..83129df
--- /dev/null
+++ b/crypto/gf128mul.h
@@ -0,0 +1,196 @@
+/* gf128mul.h - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006 Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue Date: 31/01/2006
+
+ An implementation of field multiplication in Galois Field GF(128)
+*/
+
+#ifndef _LINUX_GF128MUL_H
+#define _LINUX_GF128MUL_H
+/* Comment by Rik:
+ *
+ * For some background on GF(2^128) see for example: http://-
+ * csrc.nist.gov/CryptoToolkit/modes/proposedmodes/gcm/gcm-revised-spec.pdf
+ *
+ * The elements of GF(2^128) := GF(2)[X]/(X^128-X^7-X^2-X^1-1) can
+ * be mapped to computer memory in a variety of ways. Let's examine
+ * three common cases.
+ *
+ * Take a look at the 16 binary octets below in memory order. The msb's
+ * are left and the lsb's are right. char b[16] is an array and b[0] is
+ * the first octet.
+ *
+ * 80000000 00000000 00000000 00000000 .... 00000000 00000000 00000000
+ * b[0] b[1] b[2] b[3] b[13] b[14] b[15]
+ *
+ * Every bit is a coefficient of some power of X. We can store the bits
+ * in every byte in little-endian order and the bytes themselves also in
+ * little endian order. I will call this lle (little-little-endian).
+ * The above buffer represents the polynomial 1, and X^7+X^2+X^1+1 looks
+ * like 11100001 00000000 .... 00000000 = { 0xE1, 0x00, }.
+ * This format was originally implemented in gf128mul and is used
+ * in GCM (Galois/Counter mode) and in ABL (Arbitrary Block Length).
+ *
+ * Another convention says: store the bits in bigendian order and the
+ * bytes also. This is bbe (big-big-endian). Now the buffer above
+ * represents X^127. X^7+X^2+X^1+1 looks like 00000000 .... 10000111,
+ * b[15] = 0x87 and the rest is 0. LRW uses this convention and bbe
+ * is partly implemented.
+ *
+ * Both of the above formats are easy to implement on big-endian
+ * machines.
+ *
+ * EME (which is patent encumbered) uses the ble format (bits are stored
+ * in big endian order and the bytes in little endian). The above buffer
+ * represents X^7 in this case and the primitive polynomial is b[0] = 0x87.
+ *
+ * The common machine word-size is smaller than 128 bits, so to make
+ * an efficient implementation we must split into machine word sizes.
+ * The discussion below uses 32-bit words for the moment. Machine
+ * endianness comes into play. The lle format in relation to machine
+ * endianness is discussed below by the original author of gf128mul,
+ * Dr Brian Gladman.
+ *
+ * Let's look at the bbe and ble format on a little endian machine.
+ *
+ * bbe on a little endian machine u32 x[4]:
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 103..96 111.104 119.112 127.120 71...64 79...72 87...80 95...88
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 39...32 47...40 55...48 63...56 07...00 15...08 23...16 31...24
+ *
+ * ble on a little endian machine
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 31...24 23...16 15...08 07...00 63...56 55...48 47...40 39...32
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 95...88 87...80 79...72 71...64 127.120 119.112 111.104 103..96
+ *
+ * Multiplications in GF(2^128) are mostly bit-shifts, so you see why
+ * ble (and lbe also) are easier to implement on a little-endian
+ * machine than on a big-endian machine. The converse holds for bbe
+ * and lle.
+ *
+ * Note: to have good alignment, it seems to me that it is sufficient
+ * to keep elements of GF(2^128) in type u64[2]. On 32-bit wordsize
+ * machines this will automatically be aligned to the word size, and on
+ * a 64-bit machine also.
+ */
+/* Multiply a GF128 field element by x. Field elements are held in arrays
+ of bytes in which field bits 8n..8n + 7 are held in byte[n], with lower
+ indexed bits placed in the more numerically significant bit positions
+ within bytes.
+
+ On little endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 24...31 16...23 08...15 00...07 56...63 48...55 40...47 32...39
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 88...95 80...87 72...79 64...71 120.127 112.119 104.111 96..103
+
+ On big endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 00...07 08...15 16...23 24...31 32...39 40...47 48...55 56...63
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 64...71 72...79 80...87 88...95 96..103 104.111 112.119 120.127
+*/
+
+#define GF128MUL_BYTES 16
+
+/* A slow generic version of gf_mul, implemented for lle and bbe.
+ * It multiplies a and b and puts the result in a */
+void gf128mul_lle(u64 *a, const u64 *b);
+
+void gf128mul_bbe(u64 *a, const u64 *b);
+
+
+/* 64k table optimization, implemented for lle and bbe */
+
+struct gf128mul_64k {
+ u64 t[16][256][GF128MUL_BYTES >> 3];
+};
+
+/* first initialize with the constant factor with which you
+ * want to multiply and then call gf128mul_64k_lle with the other
+ * factor in the first argument, the table in the second and a
+ * scratch register in the third. Afterwards *a = *r. */
+void gf128mul_init_64k_lle(struct gf128mul_64k *t, const u64 *g);
+
+void gf128mul_init_64k_bbe(struct gf128mul_64k *t, const u64 *g);
+
+void gf128mul_64k_lle(u64 *a, struct gf128mul_64k *t, u64 *r);
+
+void gf128mul_64k_bbe(u64 *a, struct gf128mul_64k *t, u64 *r);
+
+
+/* 4k table optimization */
+
+struct gf128mul_4k {
+ u64 t[256][GF128MUL_BYTES >> 3];
+};
+
+void gf128mul_init_4k_lle(struct gf128mul_4k *t, const u64 *g);
+
+void gf128mul_init_4k_bbe(struct gf128mul_4k *t, const u64 *g);
+
+void gf128mul_4k_lle(u64 *a, struct gf128mul_4k *t, u64 *r);
+
+void gf128mul_4k_bbe(u64 *a, struct gf128mul_4k *t, u64 *r);
+
+#endif /* _LINUX_GF128MUL_H */
--
1.4.1.1



2006-09-02 01:00:50

by Rik Snel

[permalink] [raw]
Subject: [PATCHv2 6/6] LRW testvectors in tcrypt.[ch]

From: Rik Snel <[email protected]>

Do modprobe tcrypt mode=10 to check the included test vectors; they are
from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf and from
http://www.mail-archive.com/[email protected]/msg00173.html.

To make the last test vector fit, I had to increase the buffer size of
input and result to 512 bytes.
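
Once the vectors pass, the mode can also be exercised end-to-end through
dm-crypt; an illustrative table line, with <sectors>, <hexkey> (a
384-bit key in hex) and <dev> as placeholders, would be

	0 <sectors> crypt aes-lrw-benbi <hexkey> 0 <dev> 0

fed to dmsetup create.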

Signed-off-by: Rik Snel <[email protected]>
---
crypto/tcrypt.c | 12 +
crypto/tcrypt.h | 513 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 521 insertions(+), 4 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 840ab8b..0fd14c3 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -906,6 +906,10 @@ static void do_test(void)
AES_CBC_ENC_TEST_VECTORS);
test_cipher("cbc(aes)", DECRYPT, aes_cbc_dec_tv_template,
AES_CBC_DEC_TEST_VECTORS);
+ test_cipher("lrw(aes)", ENCRYPT, aes_lrw_enc_tv_template,
+ AES_LRW_ENC_TEST_VECTORS);
+ test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
+ AES_LRW_DEC_TEST_VECTORS);

//CAST5
test_cipher("ecb(cast5)", ENCRYPT, cast5_enc_tv_template,
@@ -1052,6 +1056,10 @@ static void do_test(void)
AES_CBC_ENC_TEST_VECTORS);
test_cipher("cbc(aes)", DECRYPT, aes_cbc_dec_tv_template,
AES_CBC_DEC_TEST_VECTORS);
+ test_cipher("lrw(aes)", ENCRYPT, aes_lrw_enc_tv_template,
+ AES_LRW_ENC_TEST_VECTORS);
+ test_cipher("lrw(aes)", DECRYPT, aes_lrw_dec_tv_template,
+ AES_LRW_DEC_TEST_VECTORS);
break;

case 11:
@@ -1191,6 +1199,10 @@ static void do_test(void)
aes_speed_template);
test_cipher_speed("cbc(aes)", DECRYPT, sec, NULL, 0,
aes_speed_template);
+ test_cipher_speed("lrw(aes)", ENCRYPT, sec, NULL, 0,
+ aes_speed_template);
+ test_cipher_speed("lrw(aes)", DECRYPT, sec, NULL, 0,
+ aes_speed_template);
break;

case 201:
diff --git a/crypto/tcrypt.h b/crypto/tcrypt.h
index a40c441..0b60152 100644
--- a/crypto/tcrypt.h
+++ b/crypto/tcrypt.h
@@ -39,15 +39,15 @@ struct hash_testvec {
struct cipher_testvec {
char key[MAX_KEYLEN] __attribute__ ((__aligned__(4)));
char iv[MAX_IVLEN];
- char input[48];
- char result[48];
+ char input[512];
+ char result[512];
unsigned char tap[MAX_TAP];
int np;
unsigned char fail;
unsigned char wk; /* weak key flag */
unsigned char klen;
- unsigned char ilen;
- unsigned char rlen;
+ unsigned short ilen;
+ unsigned short rlen;
};

struct cipher_speed {
@@ -1831,6 +1831,8 @@ #define AES_ENC_TEST_VECTORS 3
#define AES_DEC_TEST_VECTORS 3
#define AES_CBC_ENC_TEST_VECTORS 2
#define AES_CBC_DEC_TEST_VECTORS 2
+#define AES_LRW_ENC_TEST_VECTORS 8
+#define AES_LRW_DEC_TEST_VECTORS 8

static struct cipher_testvec aes_enc_tv_template[] = {
{ /* From FIPS-197 */
@@ -1968,6 +1970,509 @@ static struct cipher_testvec aes_cbc_dec
},
};

+static struct cipher_testvec aes_lrw_enc_tv_template[] = {
+ /* from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf */
+ { /* LRW-32-AES 1 */
+ .key = { 0x45, 0x62, 0xac, 0x25, 0xf8, 0x28, 0x17, 0x6d,
+ 0x4c, 0x26, 0x84, 0x14, 0xb5, 0x68, 0x01, 0x85,
+ 0x25, 0x8e, 0x2a, 0x05, 0xe7, 0x3e, 0x9d, 0x03,
+ 0xee, 0x5a, 0x83, 0x0c, 0xcc, 0x09, 0x4c, 0x87 },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0xf1, 0xb2, 0x73, 0xcd, 0x65, 0xa3, 0xdf, 0x5f,
+ 0xe9, 0x5d, 0x48, 0x92, 0x54, 0x63, 0x4e, 0xb8 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 2 */
+ .key = { 0x59, 0x70, 0x47, 0x14, 0xf5, 0x57, 0x47, 0x8c,
+ 0xd7, 0x79, 0xe8, 0x0f, 0x54, 0x88, 0x79, 0x44,
+ 0x0d, 0x48, 0xf0, 0xb7, 0xb1, 0x5a, 0x53, 0xea,
+ 0x1c, 0xaa, 0x6b, 0x29, 0xc2, 0xca, 0xfb, 0xaf
+ },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0x00, 0xc8, 0x2b, 0xae, 0x95, 0xbb, 0xcd, 0xe5,
+ 0x27, 0x4f, 0x07, 0x69, 0xb2, 0x60, 0xe1, 0x36 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 3 */
+ .key = { 0xd8, 0x2a, 0x91, 0x34, 0xb2, 0x6a, 0x56, 0x50,
+ 0x30, 0xfe, 0x69, 0xe2, 0x37, 0x7f, 0x98, 0x47,
+ 0xcd, 0xf9, 0x0b, 0x16, 0x0c, 0x64, 0x8f, 0xb6,
+ 0xb0, 0x0d, 0x0d, 0x1b, 0xae, 0x85, 0x87, 0x1f },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0x76, 0x32, 0x21, 0x83, 0xed, 0x8f, 0xf1, 0x82,
+ 0xf9, 0x59, 0x62, 0x03, 0x69, 0x0e, 0x5e, 0x01 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 4 */
+ .key = { 0x0f, 0x6a, 0xef, 0xf8, 0xd3, 0xd2, 0xbb, 0x15,
+ 0x25, 0x83, 0xf7, 0x3c, 0x1f, 0x01, 0x28, 0x74,
+ 0xca, 0xc6, 0xbc, 0x35, 0x4d, 0x4a, 0x65, 0x54,
+ 0x90, 0xae, 0x61, 0xcf, 0x7b, 0xae, 0xbd, 0xcc,
+ 0xad, 0xe4, 0x94, 0xc5, 0x4a, 0x29, 0xae, 0x70 },
+ .klen = 40,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0x9c, 0x0f, 0x15, 0x2f, 0x55, 0xa2, 0xd8, 0xf0,
+ 0xd6, 0x7b, 0x8f, 0x9e, 0x28, 0x22, 0xbc, 0x41 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 5 */
+ .key = { 0x8a, 0xd4, 0xee, 0x10, 0x2f, 0xbd, 0x81, 0xff,
+ 0xf8, 0x86, 0xce, 0xac, 0x93, 0xc5, 0xad, 0xc6,
+ 0xa0, 0x19, 0x07, 0xc0, 0x9d, 0xf7, 0xbb, 0xdd,
+ 0x52, 0x13, 0xb2, 0xb7, 0xf0, 0xff, 0x11, 0xd8,
+ 0xd6, 0x08, 0xd0, 0xcd, 0x2e, 0xb1, 0x17, 0x6f },
+ .klen = 40,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0xd4, 0x27, 0x6a, 0x7f, 0x14, 0x91, 0x3d, 0x65,
+ 0xc8, 0x60, 0x48, 0x02, 0x87, 0xe3, 0x34, 0x06 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 6 */
+ .key = { 0xf8, 0xd4, 0x76, 0xff, 0xd6, 0x46, 0xee, 0x6c,
+ 0x23, 0x84, 0xcb, 0x1c, 0x77, 0xd6, 0x19, 0x5d,
+ 0xfe, 0xf1, 0xa9, 0xf3, 0x7b, 0xbc, 0x8d, 0x21,
+ 0xa7, 0x9c, 0x21, 0xf8, 0xcb, 0x90, 0x02, 0x89,
+ 0xa8, 0x45, 0x34, 0x8e, 0xc8, 0xc5, 0xb5, 0xf1,
+ 0x26, 0xf5, 0x0e, 0x76, 0xfe, 0xfd, 0x1b, 0x1e },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0xbd, 0x06, 0xb8, 0xe1, 0xdb, 0x98, 0x89, 0x9e,
+ 0xc4, 0x98, 0xe4, 0x91, 0xcf, 0x1c, 0x70, 0x2b },
+ .rlen = 16,
+ }, { /* LRW-32-AES 7 */
+ .key = { 0xfb, 0x76, 0x15, 0xb2, 0x3d, 0x80, 0x89, 0x1d,
+ 0xd4, 0x70, 0x98, 0x0b, 0xc7, 0x95, 0x84, 0xc8,
+ 0xb2, 0xfb, 0x64, 0xce, 0x60, 0x97, 0x87, 0x8d,
+ 0x17, 0xfc, 0xe4, 0x5a, 0x49, 0xe8, 0x30, 0xb7,
+ 0x6e, 0x78, 0x17, 0xe7, 0x2d, 0x5e, 0x12, 0xd4,
+ 0x60, 0x64, 0x04, 0x7a, 0xf1, 0x2f, 0x9e, 0x0c },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .ilen = 16,
+ .result = { 0x5b, 0x90, 0x8e, 0xc1, 0xab, 0xdd, 0x67, 0x5f,
+ 0x3d, 0x69, 0x8a, 0x95, 0x53, 0xc8, 0x9c, 0xe5 },
+ .rlen = 16,
+ }, {
+/* http://www.mail-archive.com/[email protected]/msg00173.html */
+ .key = { 0xf8, 0xd4, 0x76, 0xff, 0xd6, 0x46, 0xee, 0x6c,
+ 0x23, 0x84, 0xcb, 0x1c, 0x77, 0xd6, 0x19, 0x5d,
+ 0xfe, 0xf1, 0xa9, 0xf3, 0x7b, 0xbc, 0x8d, 0x21,
+ 0xa7, 0x9c, 0x21, 0xf8, 0xcb, 0x90, 0x02, 0x89,
+ 0xa8, 0x45, 0x34, 0x8e, 0xc8, 0xc5, 0xb5, 0xf1,
+ 0x26, 0xf5, 0x0e, 0x76, 0xfe, 0xfd, 0x1b, 0x1e },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x05, 0x11, 0xb7, 0x18, 0xab, 0xc6, 0x2d, 0xac,
+ 0x70, 0x5d, 0xf6, 0x22, 0x94, 0xcd, 0xe5, 0x6c,
+ 0x17, 0x6b, 0xf6, 0x1c, 0xf0, 0xf3, 0x6e, 0xf8,
+ 0x50, 0x38, 0x1f, 0x71, 0x49, 0xb6, 0x57, 0xd6,
+ 0x8f, 0xcb, 0x8d, 0x6b, 0xe3, 0xa6, 0x29, 0x90,
+ 0xfe, 0x2a, 0x62, 0x82, 0xae, 0x6d, 0x8b, 0xf6,
+ 0xad, 0x1e, 0x9e, 0x20, 0x5f, 0x38, 0xbe, 0x04,
+ 0xda, 0x10, 0x8e, 0xed, 0xa2, 0xa4, 0x87, 0xab,
+ 0xda, 0x6b, 0xb4, 0x0c, 0x75, 0xba, 0xd3, 0x7c,
+ 0xc9, 0xac, 0x42, 0x31, 0x95, 0x7c, 0xc9, 0x04,
+ 0xeb, 0xd5, 0x6e, 0x32, 0x69, 0x8a, 0xdb, 0xa6,
+ 0x15, 0xd7, 0x3f, 0x4f, 0x2f, 0x66, 0x69, 0x03,
+ 0x9c, 0x1f, 0x54, 0x0f, 0xde, 0x1f, 0xf3, 0x65,
+ 0x4c, 0x96, 0x12, 0xed, 0x7c, 0x92, 0x03, 0x01,
+ 0x6f, 0xbc, 0x35, 0x93, 0xac, 0xf1, 0x27, 0xf1,
+ 0xb4, 0x96, 0x82, 0x5a, 0x5f, 0xb0, 0xa0, 0x50,
+ 0x89, 0xa4, 0x8e, 0x66, 0x44, 0x85, 0xcc, 0xfd,
+ 0x33, 0x14, 0x70, 0xe3, 0x96, 0xb2, 0xc3, 0xd3,
+ 0xbb, 0x54, 0x5a, 0x1a, 0xf9, 0x74, 0xa2, 0xc5,
+ 0x2d, 0x64, 0x75, 0xdd, 0xb4, 0x54, 0xe6, 0x74,
+ 0x8c, 0xd3, 0x9d, 0x9e, 0x86, 0xab, 0x51, 0x53,
+ 0xb7, 0x93, 0x3e, 0x6f, 0xd0, 0x4e, 0x2c, 0x40,
+ 0xf6, 0xa8, 0x2e, 0x3e, 0x9d, 0xf4, 0x66, 0xa5,
+ 0x76, 0x12, 0x73, 0x44, 0x1a, 0x56, 0xd7, 0x72,
+ 0x88, 0xcd, 0x21, 0x8c, 0x4c, 0x0f, 0xfe, 0xda,
+ 0x95, 0xe0, 0x3a, 0xa6, 0xa5, 0x84, 0x46, 0xcd,
+ 0xd5, 0x3e, 0x9d, 0x3a, 0xe2, 0x67, 0xe6, 0x60,
+ 0x1a, 0xe2, 0x70, 0x85, 0x58, 0xc2, 0x1b, 0x09,
+ 0xe1, 0xd7, 0x2c, 0xca, 0xad, 0xa8, 0x8f, 0xf9,
+ 0xac, 0xb3, 0x0e, 0xdb, 0xca, 0x2e, 0xe2, 0xb8,
+ 0x51, 0x71, 0xd9, 0x3c, 0x6c, 0xf1, 0x56, 0xf8,
+ 0xea, 0x9c, 0xf1, 0xfb, 0x0c, 0xe6, 0xb7, 0x10,
+ 0x1c, 0xf8, 0xa9, 0x7c, 0xe8, 0x53, 0x35, 0xc1,
+ 0x90, 0x3e, 0x76, 0x4a, 0x74, 0xa4, 0x21, 0x2c,
+ 0xf6, 0x2c, 0x4e, 0x0f, 0x94, 0x3a, 0x88, 0x2e,
+ 0x41, 0x09, 0x6a, 0x33, 0x7d, 0xf6, 0xdd, 0x3f,
+ 0x8d, 0x23, 0x31, 0x74, 0x84, 0xeb, 0x88, 0x6e,
+ 0xcc, 0xb9, 0xbc, 0x22, 0x83, 0x19, 0x07, 0x22,
+ 0xa5, 0x2d, 0xdf, 0xa5, 0xf3, 0x80, 0x85, 0x78,
+ 0x84, 0x39, 0x6a, 0x6d, 0x6a, 0x99, 0x4f, 0xa5,
+ 0x15, 0xfe, 0x46, 0xb0, 0xe4, 0x6c, 0xa5, 0x41,
+ 0x3c, 0xce, 0x8f, 0x42, 0x60, 0x71, 0xa7, 0x75,
+ 0x08, 0x40, 0x65, 0x8a, 0x82, 0xbf, 0xf5, 0x43,
+ 0x71, 0x96, 0xa9, 0x4d, 0x44, 0x8a, 0x20, 0xbe,
+ 0xfa, 0x4d, 0xbb, 0xc0, 0x7d, 0x31, 0x96, 0x65,
+ 0xe7, 0x75, 0xe5, 0x3e, 0xfd, 0x92, 0x3b, 0xc9,
+ 0x55, 0xbb, 0x16, 0x7e, 0xf7, 0xc2, 0x8c, 0xa4,
+ 0x40, 0x1d, 0xe5, 0xef, 0x0e, 0xdf, 0xe4, 0x9a,
+ 0x62, 0x73, 0x65, 0xfd, 0x46, 0x63, 0x25, 0x3d,
+ 0x2b, 0xaf, 0xe5, 0x64, 0xfe, 0xa5, 0x5c, 0xcf,
+ 0x24, 0xf3, 0xb4, 0xac, 0x64, 0xba, 0xdf, 0x4b,
+ 0xc6, 0x96, 0x7d, 0x81, 0x2d, 0x8d, 0x97, 0xf7,
+ 0xc5, 0x68, 0x77, 0x84, 0x32, 0x2b, 0xcc, 0x85,
+ 0x74, 0x96, 0xf0, 0x12, 0x77, 0x61, 0xb9, 0xeb,
+ 0x71, 0xaa, 0x82, 0xcb, 0x1c, 0xdb, 0x89, 0xc8,
+ 0xc6, 0xb5, 0xe3, 0x5c, 0x7d, 0x39, 0x07, 0x24,
+ 0xda, 0x39, 0x87, 0x45, 0xc0, 0x2b, 0xbb, 0x01,
+ 0xac, 0xbc, 0x2a, 0x5c, 0x7f, 0xfc, 0xe8, 0xce,
+ 0x6d, 0x9c, 0x6f, 0xed, 0xd3, 0xc1, 0xa1, 0xd6,
+ 0xc5, 0x55, 0xa9, 0x66, 0x2f, 0xe1, 0xc8, 0x32,
+ 0xa6, 0x5d, 0xa4, 0x3a, 0x98, 0x73, 0xe8, 0x45,
+ 0xa4, 0xc7, 0xa8, 0xb4, 0xf6, 0x13, 0x03, 0xf6,
+ 0xe9, 0x2e, 0xc4, 0x29, 0x0f, 0x84, 0xdb, 0xc4,
+ 0x21, 0xc4, 0xc2, 0x75, 0x67, 0x89, 0x37, 0x0a },
+ .ilen = 512,
+ .result = { 0x1a, 0x1d, 0xa9, 0x30, 0xad, 0xf9, 0x2f, 0x9b,
+ 0xb6, 0x1d, 0xae, 0xef, 0xf0, 0x2f, 0xf8, 0x5a,
+ 0x39, 0x3c, 0xbf, 0x2a, 0xb2, 0x45, 0xb2, 0x23,
+ 0x1b, 0x63, 0x3c, 0xcf, 0xaa, 0xbe, 0xcf, 0x4e,
+ 0xfa, 0xe8, 0x29, 0xc2, 0x20, 0x68, 0x2b, 0x3c,
+ 0x2e, 0x8b, 0xf7, 0x6e, 0x25, 0xbd, 0xe3, 0x3d,
+ 0x66, 0x27, 0xd6, 0xaf, 0xd6, 0x64, 0x3e, 0xe3,
+ 0xe8, 0x58, 0x46, 0x97, 0x39, 0x51, 0x07, 0xde,
+ 0xcb, 0x37, 0xbc, 0xa9, 0xc0, 0x5f, 0x75, 0xc3,
+ 0x0e, 0x84, 0x23, 0x1d, 0x16, 0xd4, 0x1c, 0x59,
+ 0x9c, 0x1a, 0x02, 0x55, 0xab, 0x3a, 0x97, 0x1d,
+ 0xdf, 0xdd, 0xc7, 0x06, 0x51, 0xd7, 0x70, 0xae,
+ 0x23, 0xc6, 0x8c, 0xf5, 0x1e, 0xa0, 0xe5, 0x82,
+ 0xb8, 0xb2, 0xbf, 0x04, 0xa0, 0x32, 0x8e, 0x68,
+ 0xeb, 0xaf, 0x6e, 0x2d, 0x94, 0x22, 0x2f, 0xce,
+ 0x4c, 0xb5, 0x59, 0xe2, 0xa2, 0x2f, 0xa0, 0x98,
+ 0x1a, 0x97, 0xc6, 0xd4, 0xb5, 0x00, 0x59, 0xf2,
+ 0x84, 0x14, 0x72, 0xb1, 0x9a, 0x6e, 0xa3, 0x7f,
+ 0xea, 0x20, 0xe7, 0xcb, 0x65, 0x77, 0x3a, 0xdf,
+ 0xc8, 0x97, 0x67, 0x15, 0xc2, 0x2a, 0x27, 0xcc,
+ 0x18, 0x55, 0xa1, 0x24, 0x0b, 0x24, 0x24, 0xaf,
+ 0x5b, 0xec, 0x68, 0xb8, 0xc8, 0xf5, 0xba, 0x63,
+ 0xff, 0xed, 0x89, 0xce, 0xd5, 0x3d, 0x88, 0xf3,
+ 0x25, 0xef, 0x05, 0x7c, 0x3a, 0xef, 0xeb, 0xd8,
+ 0x7a, 0x32, 0x0d, 0xd1, 0x1e, 0x58, 0x59, 0x99,
+ 0x90, 0x25, 0xb5, 0x26, 0xb0, 0xe3, 0x2b, 0x6c,
+ 0x4c, 0xa9, 0x8b, 0x84, 0x4f, 0x5e, 0x01, 0x50,
+ 0x41, 0x30, 0x58, 0xc5, 0x62, 0x74, 0x52, 0x1d,
+ 0x45, 0x24, 0x6a, 0x42, 0x64, 0x4f, 0x97, 0x1c,
+ 0xa8, 0x66, 0xb5, 0x6d, 0x79, 0xd4, 0x0d, 0x48,
+ 0xc5, 0x5f, 0xf3, 0x90, 0x32, 0xdd, 0xdd, 0xe1,
+ 0xe4, 0xa9, 0x9f, 0xfc, 0xc3, 0x52, 0x5a, 0x46,
+ 0xe4, 0x81, 0x84, 0x95, 0x36, 0x59, 0x7a, 0x6b,
+ 0xaa, 0xb3, 0x60, 0xad, 0xce, 0x9f, 0x9f, 0x28,
+ 0xe0, 0x01, 0x75, 0x22, 0xc4, 0x4e, 0xa9, 0x62,
+ 0x5c, 0x62, 0x0d, 0x00, 0xcb, 0x13, 0xe8, 0x43,
+ 0x72, 0xd4, 0x2d, 0x53, 0x46, 0xb5, 0xd1, 0x16,
+ 0x22, 0x18, 0xdf, 0x34, 0x33, 0xf5, 0xd6, 0x1c,
+ 0xb8, 0x79, 0x78, 0x97, 0x94, 0xff, 0x72, 0x13,
+ 0x4c, 0x27, 0xfc, 0xcb, 0xbf, 0x01, 0x53, 0xa6,
+ 0xb4, 0x50, 0x6e, 0xde, 0xdf, 0xb5, 0x43, 0xa4,
+ 0x59, 0xdf, 0x52, 0xf9, 0x7c, 0xe0, 0x11, 0x6f,
+ 0x2d, 0x14, 0x8e, 0x24, 0x61, 0x2c, 0xe1, 0x17,
+ 0xcc, 0xce, 0x51, 0x0c, 0x19, 0x8a, 0x82, 0x30,
+ 0x94, 0xd5, 0x3d, 0x6a, 0x53, 0x06, 0x5e, 0xbd,
+ 0xb7, 0xeb, 0xfa, 0xfd, 0x27, 0x51, 0xde, 0x85,
+ 0x1e, 0x86, 0x53, 0x11, 0x53, 0x94, 0x00, 0xee,
+ 0x2b, 0x8c, 0x08, 0x2a, 0xbf, 0xdd, 0xae, 0x11,
+ 0xcb, 0x1e, 0xa2, 0x07, 0x9a, 0x80, 0xcf, 0x62,
+ 0x9b, 0x09, 0xdc, 0x95, 0x3c, 0x96, 0x8e, 0xb1,
+ 0x09, 0xbd, 0xe4, 0xeb, 0xdb, 0xca, 0x70, 0x7a,
+ 0x9e, 0xfa, 0x31, 0x18, 0x45, 0x3c, 0x21, 0x33,
+ 0xb0, 0xb3, 0x2b, 0xea, 0xf3, 0x71, 0x2d, 0xe1,
+ 0x03, 0xad, 0x1b, 0x48, 0xd4, 0x67, 0x27, 0xf0,
+ 0x62, 0xe4, 0x3d, 0xfb, 0x9b, 0x08, 0x76, 0xe7,
+ 0xdd, 0x2b, 0x01, 0x39, 0x04, 0x5a, 0x58, 0x7a,
+ 0xf7, 0x11, 0x90, 0xec, 0xbd, 0x51, 0x5c, 0x32,
+ 0x6b, 0xd7, 0x35, 0x39, 0x02, 0x6b, 0xf2, 0xa6,
+ 0xd0, 0x0d, 0x07, 0xe1, 0x06, 0xc4, 0x5b, 0x7d,
+ 0xe4, 0x6a, 0xd7, 0xee, 0x15, 0x1f, 0x83, 0xb4,
+ 0xa3, 0xa7, 0x5e, 0xc3, 0x90, 0xb7, 0xef, 0xd3,
+ 0xb7, 0x4f, 0xf8, 0x92, 0x4c, 0xb7, 0x3c, 0x29,
+ 0xcd, 0x7e, 0x2b, 0x5d, 0x43, 0xea, 0x42, 0xe7,
+ 0x74, 0x3f, 0x7d, 0x58, 0x88, 0x75, 0xde, 0x3e },
+ .rlen = 512,
+ }
+};
+
+static struct cipher_testvec aes_lrw_dec_tv_template[] = {
+ /* from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf */
+ /* same as enc vectors with input and result reversed */
+ { /* LRW-32-AES 1 */
+ .key = { 0x45, 0x62, 0xac, 0x25, 0xf8, 0x28, 0x17, 0x6d,
+ 0x4c, 0x26, 0x84, 0x14, 0xb5, 0x68, 0x01, 0x85,
+ 0x25, 0x8e, 0x2a, 0x05, 0xe7, 0x3e, 0x9d, 0x03,
+ 0xee, 0x5a, 0x83, 0x0c, 0xcc, 0x09, 0x4c, 0x87 },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0xf1, 0xb2, 0x73, 0xcd, 0x65, 0xa3, 0xdf, 0x5f,
+ 0xe9, 0x5d, 0x48, 0x92, 0x54, 0x63, 0x4e, 0xb8 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 2 */
+ .key = { 0x59, 0x70, 0x47, 0x14, 0xf5, 0x57, 0x47, 0x8c,
+ 0xd7, 0x79, 0xe8, 0x0f, 0x54, 0x88, 0x79, 0x44,
+ 0x0d, 0x48, 0xf0, 0xb7, 0xb1, 0x5a, 0x53, 0xea,
+ 0x1c, 0xaa, 0x6b, 0x29, 0xc2, 0xca, 0xfb, 0xaf
+ },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 },
+ .input = { 0x00, 0xc8, 0x2b, 0xae, 0x95, 0xbb, 0xcd, 0xe5,
+ 0x27, 0x4f, 0x07, 0x69, 0xb2, 0x60, 0xe1, 0x36 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 3 */
+ .key = { 0xd8, 0x2a, 0x91, 0x34, 0xb2, 0x6a, 0x56, 0x50,
+ 0x30, 0xfe, 0x69, 0xe2, 0x37, 0x7f, 0x98, 0x47,
+ 0xcd, 0xf9, 0x0b, 0x16, 0x0c, 0x64, 0x8f, 0xb6,
+ 0xb0, 0x0d, 0x0d, 0x1b, 0xae, 0x85, 0x87, 0x1f },
+ .klen = 32,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0x76, 0x32, 0x21, 0x83, 0xed, 0x8f, 0xf1, 0x82,
+ 0xf9, 0x59, 0x62, 0x03, 0x69, 0x0e, 0x5e, 0x01 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 4 */
+ .key = { 0x0f, 0x6a, 0xef, 0xf8, 0xd3, 0xd2, 0xbb, 0x15,
+ 0x25, 0x83, 0xf7, 0x3c, 0x1f, 0x01, 0x28, 0x74,
+ 0xca, 0xc6, 0xbc, 0x35, 0x4d, 0x4a, 0x65, 0x54,
+ 0x90, 0xae, 0x61, 0xcf, 0x7b, 0xae, 0xbd, 0xcc,
+ 0xad, 0xe4, 0x94, 0xc5, 0x4a, 0x29, 0xae, 0x70 },
+ .klen = 40,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x9c, 0x0f, 0x15, 0x2f, 0x55, 0xa2, 0xd8, 0xf0,
+ 0xd6, 0x7b, 0x8f, 0x9e, 0x28, 0x22, 0xbc, 0x41 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 5 */
+ .key = { 0x8a, 0xd4, 0xee, 0x10, 0x2f, 0xbd, 0x81, 0xff,
+ 0xf8, 0x86, 0xce, 0xac, 0x93, 0xc5, 0xad, 0xc6,
+ 0xa0, 0x19, 0x07, 0xc0, 0x9d, 0xf7, 0xbb, 0xdd,
+ 0x52, 0x13, 0xb2, 0xb7, 0xf0, 0xff, 0x11, 0xd8,
+ 0xd6, 0x08, 0xd0, 0xcd, 0x2e, 0xb1, 0x17, 0x6f },
+ .klen = 40,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0xd4, 0x27, 0x6a, 0x7f, 0x14, 0x91, 0x3d, 0x65,
+ 0xc8, 0x60, 0x48, 0x02, 0x87, 0xe3, 0x34, 0x06 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 6 */
+ .key = { 0xf8, 0xd4, 0x76, 0xff, 0xd6, 0x46, 0xee, 0x6c,
+ 0x23, 0x84, 0xcb, 0x1c, 0x77, 0xd6, 0x19, 0x5d,
+ 0xfe, 0xf1, 0xa9, 0xf3, 0x7b, 0xbc, 0x8d, 0x21,
+ 0xa7, 0x9c, 0x21, 0xf8, 0xcb, 0x90, 0x02, 0x89,
+ 0xa8, 0x45, 0x34, 0x8e, 0xc8, 0xc5, 0xb5, 0xf1,
+ 0x26, 0xf5, 0x0e, 0x76, 0xfe, 0xfd, 0x1b, 0x1e },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0xbd, 0x06, 0xb8, 0xe1, 0xdb, 0x98, 0x89, 0x9e,
+ 0xc4, 0x98, 0xe4, 0x91, 0xcf, 0x1c, 0x70, 0x2b },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, { /* LRW-32-AES 7 */
+ .key = { 0xfb, 0x76, 0x15, 0xb2, 0x3d, 0x80, 0x89, 0x1d,
+ 0xd4, 0x70, 0x98, 0x0b, 0xc7, 0x95, 0x84, 0xc8,
+ 0xb2, 0xfb, 0x64, 0xce, 0x60, 0x97, 0x87, 0x8d,
+ 0x17, 0xfc, 0xe4, 0x5a, 0x49, 0xe8, 0x30, 0xb7,
+ 0x6e, 0x78, 0x17, 0xe7, 0x2d, 0x5e, 0x12, 0xd4,
+ 0x60, 0x64, 0x04, 0x7a, 0xf1, 0x2f, 0x9e, 0x0c },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00 },
+ .input = { 0x5b, 0x90, 0x8e, 0xc1, 0xab, 0xdd, 0x67, 0x5f,
+ 0x3d, 0x69, 0x8a, 0x95, 0x53, 0xc8, 0x9c, 0xe5 },
+ .ilen = 16,
+ .result = { 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+ 0x38, 0x39, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46 },
+ .rlen = 16,
+ }, {
+/* http://www.mail-archive.com/[email protected]/msg00173.html */
+ .key = { 0xf8, 0xd4, 0x76, 0xff, 0xd6, 0x46, 0xee, 0x6c,
+ 0x23, 0x84, 0xcb, 0x1c, 0x77, 0xd6, 0x19, 0x5d,
+ 0xfe, 0xf1, 0xa9, 0xf3, 0x7b, 0xbc, 0x8d, 0x21,
+ 0xa7, 0x9c, 0x21, 0xf8, 0xcb, 0x90, 0x02, 0x89,
+ 0xa8, 0x45, 0x34, 0x8e, 0xc8, 0xc5, 0xb5, 0xf1,
+ 0x26, 0xf5, 0x0e, 0x76, 0xfe, 0xfd, 0x1b, 0x1e },
+ .klen = 48,
+ .iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 },
+ .input = { 0x1a, 0x1d, 0xa9, 0x30, 0xad, 0xf9, 0x2f, 0x9b,
+ 0xb6, 0x1d, 0xae, 0xef, 0xf0, 0x2f, 0xf8, 0x5a,
+ 0x39, 0x3c, 0xbf, 0x2a, 0xb2, 0x45, 0xb2, 0x23,
+ 0x1b, 0x63, 0x3c, 0xcf, 0xaa, 0xbe, 0xcf, 0x4e,
+ 0xfa, 0xe8, 0x29, 0xc2, 0x20, 0x68, 0x2b, 0x3c,
+ 0x2e, 0x8b, 0xf7, 0x6e, 0x25, 0xbd, 0xe3, 0x3d,
+ 0x66, 0x27, 0xd6, 0xaf, 0xd6, 0x64, 0x3e, 0xe3,
+ 0xe8, 0x58, 0x46, 0x97, 0x39, 0x51, 0x07, 0xde,
+ 0xcb, 0x37, 0xbc, 0xa9, 0xc0, 0x5f, 0x75, 0xc3,
+ 0x0e, 0x84, 0x23, 0x1d, 0x16, 0xd4, 0x1c, 0x59,
+ 0x9c, 0x1a, 0x02, 0x55, 0xab, 0x3a, 0x97, 0x1d,
+ 0xdf, 0xdd, 0xc7, 0x06, 0x51, 0xd7, 0x70, 0xae,
+ 0x23, 0xc6, 0x8c, 0xf5, 0x1e, 0xa0, 0xe5, 0x82,
+ 0xb8, 0xb2, 0xbf, 0x04, 0xa0, 0x32, 0x8e, 0x68,
+ 0xeb, 0xaf, 0x6e, 0x2d, 0x94, 0x22, 0x2f, 0xce,
+ 0x4c, 0xb5, 0x59, 0xe2, 0xa2, 0x2f, 0xa0, 0x98,
+ 0x1a, 0x97, 0xc6, 0xd4, 0xb5, 0x00, 0x59, 0xf2,
+ 0x84, 0x14, 0x72, 0xb1, 0x9a, 0x6e, 0xa3, 0x7f,
+ 0xea, 0x20, 0xe7, 0xcb, 0x65, 0x77, 0x3a, 0xdf,
+ 0xc8, 0x97, 0x67, 0x15, 0xc2, 0x2a, 0x27, 0xcc,
+ 0x18, 0x55, 0xa1, 0x24, 0x0b, 0x24, 0x24, 0xaf,
+ 0x5b, 0xec, 0x68, 0xb8, 0xc8, 0xf5, 0xba, 0x63,
+ 0xff, 0xed, 0x89, 0xce, 0xd5, 0x3d, 0x88, 0xf3,
+ 0x25, 0xef, 0x05, 0x7c, 0x3a, 0xef, 0xeb, 0xd8,
+ 0x7a, 0x32, 0x0d, 0xd1, 0x1e, 0x58, 0x59, 0x99,
+ 0x90, 0x25, 0xb5, 0x26, 0xb0, 0xe3, 0x2b, 0x6c,
+ 0x4c, 0xa9, 0x8b, 0x84, 0x4f, 0x5e, 0x01, 0x50,
+ 0x41, 0x30, 0x58, 0xc5, 0x62, 0x74, 0x52, 0x1d,
+ 0x45, 0x24, 0x6a, 0x42, 0x64, 0x4f, 0x97, 0x1c,
+ 0xa8, 0x66, 0xb5, 0x6d, 0x79, 0xd4, 0x0d, 0x48,
+ 0xc5, 0x5f, 0xf3, 0x90, 0x32, 0xdd, 0xdd, 0xe1,
+ 0xe4, 0xa9, 0x9f, 0xfc, 0xc3, 0x52, 0x5a, 0x46,
+ 0xe4, 0x81, 0x84, 0x95, 0x36, 0x59, 0x7a, 0x6b,
+ 0xaa, 0xb3, 0x60, 0xad, 0xce, 0x9f, 0x9f, 0x28,
+ 0xe0, 0x01, 0x75, 0x22, 0xc4, 0x4e, 0xa9, 0x62,
+ 0x5c, 0x62, 0x0d, 0x00, 0xcb, 0x13, 0xe8, 0x43,
+ 0x72, 0xd4, 0x2d, 0x53, 0x46, 0xb5, 0xd1, 0x16,
+ 0x22, 0x18, 0xdf, 0x34, 0x33, 0xf5, 0xd6, 0x1c,
+ 0xb8, 0x79, 0x78, 0x97, 0x94, 0xff, 0x72, 0x13,
+ 0x4c, 0x27, 0xfc, 0xcb, 0xbf, 0x01, 0x53, 0xa6,
+ 0xb4, 0x50, 0x6e, 0xde, 0xdf, 0xb5, 0x43, 0xa4,
+ 0x59, 0xdf, 0x52, 0xf9, 0x7c, 0xe0, 0x11, 0x6f,
+ 0x2d, 0x14, 0x8e, 0x24, 0x61, 0x2c, 0xe1, 0x17,
+ 0xcc, 0xce, 0x51, 0x0c, 0x19, 0x8a, 0x82, 0x30,
+ 0x94, 0xd5, 0x3d, 0x6a, 0x53, 0x06, 0x5e, 0xbd,
+ 0xb7, 0xeb, 0xfa, 0xfd, 0x27, 0x51, 0xde, 0x85,
+ 0x1e, 0x86, 0x53, 0x11, 0x53, 0x94, 0x00, 0xee,
+ 0x2b, 0x8c, 0x08, 0x2a, 0xbf, 0xdd, 0xae, 0x11,
+ 0xcb, 0x1e, 0xa2, 0x07, 0x9a, 0x80, 0xcf, 0x62,
+ 0x9b, 0x09, 0xdc, 0x95, 0x3c, 0x96, 0x8e, 0xb1,
+ 0x09, 0xbd, 0xe4, 0xeb, 0xdb, 0xca, 0x70, 0x7a,
+ 0x9e, 0xfa, 0x31, 0x18, 0x45, 0x3c, 0x21, 0x33,
+ 0xb0, 0xb3, 0x2b, 0xea, 0xf3, 0x71, 0x2d, 0xe1,
+ 0x03, 0xad, 0x1b, 0x48, 0xd4, 0x67, 0x27, 0xf0,
+ 0x62, 0xe4, 0x3d, 0xfb, 0x9b, 0x08, 0x76, 0xe7,
+ 0xdd, 0x2b, 0x01, 0x39, 0x04, 0x5a, 0x58, 0x7a,
+ 0xf7, 0x11, 0x90, 0xec, 0xbd, 0x51, 0x5c, 0x32,
+ 0x6b, 0xd7, 0x35, 0x39, 0x02, 0x6b, 0xf2, 0xa6,
+ 0xd0, 0x0d, 0x07, 0xe1, 0x06, 0xc4, 0x5b, 0x7d,
+ 0xe4, 0x6a, 0xd7, 0xee, 0x15, 0x1f, 0x83, 0xb4,
+ 0xa3, 0xa7, 0x5e, 0xc3, 0x90, 0xb7, 0xef, 0xd3,
+ 0xb7, 0x4f, 0xf8, 0x92, 0x4c, 0xb7, 0x3c, 0x29,
+ 0xcd, 0x7e, 0x2b, 0x5d, 0x43, 0xea, 0x42, 0xe7,
+ 0x74, 0x3f, 0x7d, 0x58, 0x88, 0x75, 0xde, 0x3e },
+ .ilen = 512,
+ .result = { 0x05, 0x11, 0xb7, 0x18, 0xab, 0xc6, 0x2d, 0xac,
+ 0x70, 0x5d, 0xf6, 0x22, 0x94, 0xcd, 0xe5, 0x6c,
+ 0x17, 0x6b, 0xf6, 0x1c, 0xf0, 0xf3, 0x6e, 0xf8,
+ 0x50, 0x38, 0x1f, 0x71, 0x49, 0xb6, 0x57, 0xd6,
+ 0x8f, 0xcb, 0x8d, 0x6b, 0xe3, 0xa6, 0x29, 0x90,
+ 0xfe, 0x2a, 0x62, 0x82, 0xae, 0x6d, 0x8b, 0xf6,
+ 0xad, 0x1e, 0x9e, 0x20, 0x5f, 0x38, 0xbe, 0x04,
+ 0xda, 0x10, 0x8e, 0xed, 0xa2, 0xa4, 0x87, 0xab,
+ 0xda, 0x6b, 0xb4, 0x0c, 0x75, 0xba, 0xd3, 0x7c,
+ 0xc9, 0xac, 0x42, 0x31, 0x95, 0x7c, 0xc9, 0x04,
+ 0xeb, 0xd5, 0x6e, 0x32, 0x69, 0x8a, 0xdb, 0xa6,
+ 0x15, 0xd7, 0x3f, 0x4f, 0x2f, 0x66, 0x69, 0x03,
+ 0x9c, 0x1f, 0x54, 0x0f, 0xde, 0x1f, 0xf3, 0x65,
+ 0x4c, 0x96, 0x12, 0xed, 0x7c, 0x92, 0x03, 0x01,
+ 0x6f, 0xbc, 0x35, 0x93, 0xac, 0xf1, 0x27, 0xf1,
+ 0xb4, 0x96, 0x82, 0x5a, 0x5f, 0xb0, 0xa0, 0x50,
+ 0x89, 0xa4, 0x8e, 0x66, 0x44, 0x85, 0xcc, 0xfd,
+ 0x33, 0x14, 0x70, 0xe3, 0x96, 0xb2, 0xc3, 0xd3,
+ 0xbb, 0x54, 0x5a, 0x1a, 0xf9, 0x74, 0xa2, 0xc5,
+ 0x2d, 0x64, 0x75, 0xdd, 0xb4, 0x54, 0xe6, 0x74,
+ 0x8c, 0xd3, 0x9d, 0x9e, 0x86, 0xab, 0x51, 0x53,
+ 0xb7, 0x93, 0x3e, 0x6f, 0xd0, 0x4e, 0x2c, 0x40,
+ 0xf6, 0xa8, 0x2e, 0x3e, 0x9d, 0xf4, 0x66, 0xa5,
+ 0x76, 0x12, 0x73, 0x44, 0x1a, 0x56, 0xd7, 0x72,
+ 0x88, 0xcd, 0x21, 0x8c, 0x4c, 0x0f, 0xfe, 0xda,
+ 0x95, 0xe0, 0x3a, 0xa6, 0xa5, 0x84, 0x46, 0xcd,
+ 0xd5, 0x3e, 0x9d, 0x3a, 0xe2, 0x67, 0xe6, 0x60,
+ 0x1a, 0xe2, 0x70, 0x85, 0x58, 0xc2, 0x1b, 0x09,
+ 0xe1, 0xd7, 0x2c, 0xca, 0xad, 0xa8, 0x8f, 0xf9,
+ 0xac, 0xb3, 0x0e, 0xdb, 0xca, 0x2e, 0xe2, 0xb8,
+ 0x51, 0x71, 0xd9, 0x3c, 0x6c, 0xf1, 0x56, 0xf8,
+ 0xea, 0x9c, 0xf1, 0xfb, 0x0c, 0xe6, 0xb7, 0x10,
+ 0x1c, 0xf8, 0xa9, 0x7c, 0xe8, 0x53, 0x35, 0xc1,
+ 0x90, 0x3e, 0x76, 0x4a, 0x74, 0xa4, 0x21, 0x2c,
+ 0xf6, 0x2c, 0x4e, 0x0f, 0x94, 0x3a, 0x88, 0x2e,
+ 0x41, 0x09, 0x6a, 0x33, 0x7d, 0xf6, 0xdd, 0x3f,
+ 0x8d, 0x23, 0x31, 0x74, 0x84, 0xeb, 0x88, 0x6e,
+ 0xcc, 0xb9, 0xbc, 0x22, 0x83, 0x19, 0x07, 0x22,
+ 0xa5, 0x2d, 0xdf, 0xa5, 0xf3, 0x80, 0x85, 0x78,
+ 0x84, 0x39, 0x6a, 0x6d, 0x6a, 0x99, 0x4f, 0xa5,
+ 0x15, 0xfe, 0x46, 0xb0, 0xe4, 0x6c, 0xa5, 0x41,
+ 0x3c, 0xce, 0x8f, 0x42, 0x60, 0x71, 0xa7, 0x75,
+ 0x08, 0x40, 0x65, 0x8a, 0x82, 0xbf, 0xf5, 0x43,
+ 0x71, 0x96, 0xa9, 0x4d, 0x44, 0x8a, 0x20, 0xbe,
+ 0xfa, 0x4d, 0xbb, 0xc0, 0x7d, 0x31, 0x96, 0x65,
+ 0xe7, 0x75, 0xe5, 0x3e, 0xfd, 0x92, 0x3b, 0xc9,
+ 0x55, 0xbb, 0x16, 0x7e, 0xf7, 0xc2, 0x8c, 0xa4,
+ 0x40, 0x1d, 0xe5, 0xef, 0x0e, 0xdf, 0xe4, 0x9a,
+ 0x62, 0x73, 0x65, 0xfd, 0x46, 0x63, 0x25, 0x3d,
+ 0x2b, 0xaf, 0xe5, 0x64, 0xfe, 0xa5, 0x5c, 0xcf,
+ 0x24, 0xf3, 0xb4, 0xac, 0x64, 0xba, 0xdf, 0x4b,
+ 0xc6, 0x96, 0x7d, 0x81, 0x2d, 0x8d, 0x97, 0xf7,
+ 0xc5, 0x68, 0x77, 0x84, 0x32, 0x2b, 0xcc, 0x85,
+ 0x74, 0x96, 0xf0, 0x12, 0x77, 0x61, 0xb9, 0xeb,
+ 0x71, 0xaa, 0x82, 0xcb, 0x1c, 0xdb, 0x89, 0xc8,
+ 0xc6, 0xb5, 0xe3, 0x5c, 0x7d, 0x39, 0x07, 0x24,
+ 0xda, 0x39, 0x87, 0x45, 0xc0, 0x2b, 0xbb, 0x01,
+ 0xac, 0xbc, 0x2a, 0x5c, 0x7f, 0xfc, 0xe8, 0xce,
+ 0x6d, 0x9c, 0x6f, 0xed, 0xd3, 0xc1, 0xa1, 0xd6,
+ 0xc5, 0x55, 0xa9, 0x66, 0x2f, 0xe1, 0xc8, 0x32,
+ 0xa6, 0x5d, 0xa4, 0x3a, 0x98, 0x73, 0xe8, 0x45,
+ 0xa4, 0xc7, 0xa8, 0xb4, 0xf6, 0x13, 0x03, 0xf6,
+ 0xe9, 0x2e, 0xc4, 0x29, 0x0f, 0x84, 0xdb, 0xc4,
+ 0x21, 0xc4, 0xc2, 0x75, 0x67, 0x89, 0x37, 0x0a },
+ .rlen = 512,
+ }
+};
+
/* Cast5 test vectors from RFC 2144 */
#define CAST5_ENC_TEST_VECTORS 3
#define CAST5_DEC_TEST_VECTORS 3
--
1.4.1.1



2006-11-26 23:56:16

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

On Sat, Sep 02, 2006 at 03:00:25AM +0200, [email protected] wrote:
>
> +#define M80X 0x8080808080808080LLU
> +#define M01X 0x0101010101010101LLU
> +
> +static void gf128mul_x_lle(u64 r[2], const u64 x[2])
> +{
> + u64 _tt = gf128mul_table_lle[(x[1] >> 49) & 0x80];
> + r[1] = ((x[1] >> 1) & ~M80X) | (((x[1] << 15) | (x[0] >> 49)) & M80X);
> + r[0] = (((x[0] >> 1) & ~M80X) | ((x[0] << 15) & M80X)) ^ _tt;
> +}

Hi Rik:

Sorry it took so long. But I've been trying to modify the code so
that the same source is used for both BE and LE machines. I've
finally accumulated enough time to finish it.

Unfortunately it seems that the end result doesn't quite agree with
your test vectors :) In particular, the LE version of your mul_x and
mul_x8 functions don't agree with mine.

Could you please compare the two versions and double-check them?
I'm unsure why 15 was used above as a shift count. It would seem
that 7 makes more sense, as endianness is byte-based, not
word-based.

I've attached my version of gf128mul which is based on your BE
code.

The other main change I've made is to remove the need to
allocate a contiguous 64K table in gf128mul. Requiring every
tfm to allocate a contiguous 64K chunk of memory is not realistic
on Linux.
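
Concretely, instead of one contiguous 64K block the version below
uses sixteen separately allocated 4K tables. As a sketch of the
layout (these are the structs declared in gf128mul.h further down):

struct gf128mul_4k {
	be128 t[256];			/* 256 * 16 bytes = 4K */
};

struct gf128mul_64k {
	struct gf128mul_4k *t[16];	/* 16 pointers, not one 64K chunk */
};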

I also added a be128/le128/u128 type to make 128-bit operations
easier.
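
Those types live in include/crypto/b128ops.h and are not part of this
diff; roughly they look like the following (the exact field layout,
le128 in particular, is a guess here):

typedef struct { u64 a, b; } u128;
typedef struct { __be64 a, b; } be128;
typedef struct { __le64 b, a; } le128;	/* low 64 bits first */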

Thanks,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--

0c3bbb342495d510631b11c45a70d55fc83afe3f
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 34f10d5..0a4ff51 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -139,6 +139,16 @@ config CRYPTO_TGR192
See also:
<http://www.cs.technion.ac.il/~biham/Reports/Tiger/>.

+config CRYPTO_GF128MUL
+ tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+ help
+ Efficient table driven implementation of multiplications in the
+ field GF(2^128). This is needed by some cipher modes. This
+ option will be selected automatically if you select such a
+ cipher mode. Only select this option by hand if you expect to load
+ an external module that requires these functions.
+
config CRYPTO_ECB
tristate "ECB support"
select CRYPTO_BLKCIPHER
diff --git a/crypto/Makefile b/crypto/Makefile
index e94bb1b..76970de 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -24,6 +24,7 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
obj-$(CONFIG_CRYPTO_WP512) += wp512.o
obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
+obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_DES) += des.o
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
new file mode 100644
index 0000000..868366a
--- /dev/null
+++ b/crypto/gf128mul.c
@@ -0,0 +1,466 @@
+/* gf128mul.c - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006, Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue 31/01/2006
+
+ This file provides fast multiplication in GF(128) as required by several
+ cryptographic authentication modes
+*/
+
+#include <crypto/gf128mul.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#define gf128mul_dat(q) { \
+ q(0x00), q(0x01), q(0x02), q(0x03), q(0x04), q(0x05), q(0x06), q(0x07),\
+ q(0x08), q(0x09), q(0x0a), q(0x0b), q(0x0c), q(0x0d), q(0x0e), q(0x0f),\
+ q(0x10), q(0x11), q(0x12), q(0x13), q(0x14), q(0x15), q(0x16), q(0x17),\
+ q(0x18), q(0x19), q(0x1a), q(0x1b), q(0x1c), q(0x1d), q(0x1e), q(0x1f),\
+ q(0x20), q(0x21), q(0x22), q(0x23), q(0x24), q(0x25), q(0x26), q(0x27),\
+ q(0x28), q(0x29), q(0x2a), q(0x2b), q(0x2c), q(0x2d), q(0x2e), q(0x2f),\
+ q(0x30), q(0x31), q(0x32), q(0x33), q(0x34), q(0x35), q(0x36), q(0x37),\
+ q(0x38), q(0x39), q(0x3a), q(0x3b), q(0x3c), q(0x3d), q(0x3e), q(0x3f),\
+ q(0x40), q(0x41), q(0x42), q(0x43), q(0x44), q(0x45), q(0x46), q(0x47),\
+ q(0x48), q(0x49), q(0x4a), q(0x4b), q(0x4c), q(0x4d), q(0x4e), q(0x4f),\
+ q(0x50), q(0x51), q(0x52), q(0x53), q(0x54), q(0x55), q(0x56), q(0x57),\
+ q(0x58), q(0x59), q(0x5a), q(0x5b), q(0x5c), q(0x5d), q(0x5e), q(0x5f),\
+ q(0x60), q(0x61), q(0x62), q(0x63), q(0x64), q(0x65), q(0x66), q(0x67),\
+ q(0x68), q(0x69), q(0x6a), q(0x6b), q(0x6c), q(0x6d), q(0x6e), q(0x6f),\
+ q(0x70), q(0x71), q(0x72), q(0x73), q(0x74), q(0x75), q(0x76), q(0x77),\
+ q(0x78), q(0x79), q(0x7a), q(0x7b), q(0x7c), q(0x7d), q(0x7e), q(0x7f),\
+ q(0x80), q(0x81), q(0x82), q(0x83), q(0x84), q(0x85), q(0x86), q(0x87),\
+ q(0x88), q(0x89), q(0x8a), q(0x8b), q(0x8c), q(0x8d), q(0x8e), q(0x8f),\
+ q(0x90), q(0x91), q(0x92), q(0x93), q(0x94), q(0x95), q(0x96), q(0x97),\
+ q(0x98), q(0x99), q(0x9a), q(0x9b), q(0x9c), q(0x9d), q(0x9e), q(0x9f),\
+ q(0xa0), q(0xa1), q(0xa2), q(0xa3), q(0xa4), q(0xa5), q(0xa6), q(0xa7),\
+ q(0xa8), q(0xa9), q(0xaa), q(0xab), q(0xac), q(0xad), q(0xae), q(0xaf),\
+ q(0xb0), q(0xb1), q(0xb2), q(0xb3), q(0xb4), q(0xb5), q(0xb6), q(0xb7),\
+ q(0xb8), q(0xb9), q(0xba), q(0xbb), q(0xbc), q(0xbd), q(0xbe), q(0xbf),\
+ q(0xc0), q(0xc1), q(0xc2), q(0xc3), q(0xc4), q(0xc5), q(0xc6), q(0xc7),\
+ q(0xc8), q(0xc9), q(0xca), q(0xcb), q(0xcc), q(0xcd), q(0xce), q(0xcf),\
+ q(0xd0), q(0xd1), q(0xd2), q(0xd3), q(0xd4), q(0xd5), q(0xd6), q(0xd7),\
+ q(0xd8), q(0xd9), q(0xda), q(0xdb), q(0xdc), q(0xdd), q(0xde), q(0xdf),\
+ q(0xe0), q(0xe1), q(0xe2), q(0xe3), q(0xe4), q(0xe5), q(0xe6), q(0xe7),\
+ q(0xe8), q(0xe9), q(0xea), q(0xeb), q(0xec), q(0xed), q(0xee), q(0xef),\
+ q(0xf0), q(0xf1), q(0xf2), q(0xf3), q(0xf4), q(0xf5), q(0xf6), q(0xf7),\
+ q(0xf8), q(0xf9), q(0xfa), q(0xfb), q(0xfc), q(0xfd), q(0xfe), q(0xff) \
+}
+
+/* Given the value i in 0..255 as the byte overflow when a field element
+ in GHASH is multiplied by x^8, this function will return the values that
+ are generated in the lo 16-bit word of the field value by applying the
+ modular polynomial. The values lo_byte and hi_byte are returned via the
+ macro xp_fun(lo_byte, hi_byte) so that the values can be assembled into
+ memory as required by a suitable definition of this macro operating on
+ the table above
+*/
+
+#define xx(p, q) __constant_be16_to_cpu(0x##p##q)
+
+#define xda_bbe(i) ( \
+ (i & 0x80 ? xx(43, 80) : 0) ^ (i & 0x40 ? xx(21, c0) : 0) ^ \
+ (i & 0x20 ? xx(10, e0) : 0) ^ (i & 0x10 ? xx(08, 70) : 0) ^ \
+ (i & 0x08 ? xx(04, 38) : 0) ^ (i & 0x04 ? xx(02, 1c) : 0) ^ \
+ (i & 0x02 ? xx(01, 0e) : 0) ^ (i & 0x01 ? xx(00, 87) : 0) \
+)
+
+#define xda_lle(i) ( \
+ (i & 0x80 ? xx(e1, 00) : 0) ^ (i & 0x40 ? xx(70, 80) : 0) ^ \
+ (i & 0x20 ? xx(38, 40) : 0) ^ (i & 0x10 ? xx(1c, 20) : 0) ^ \
+ (i & 0x08 ? xx(0e, 10) : 0) ^ (i & 0x04 ? xx(07, 08) : 0) ^ \
+ (i & 0x02 ? xx(03, 84) : 0) ^ (i & 0x01 ? xx(01, c2) : 0) \
+)
+
+static const u16 gf128mul_table_lle[256] = gf128mul_dat(xda_lle);
+static const u16 gf128mul_table_bbe[256] = gf128mul_dat(xda_bbe);
+
+/* These functions multiply a field element by x and by x^8 in the
+ * polynomial field representation. They use 64-bit word operations
+ * to gain speed but compensate for machine endianness and hence work
+ * correctly on both styles of machine.
+ */
+
+static void gf128mul_x_lle(be128 *r, const be128 *x)
+{
+ u64 a = be64_to_cpu(x->a);
+ u64 b = be64_to_cpu(x->b);
+ u64 _tt = gf128mul_table_lle[(b << 7) & 0xff];
+
+ r->b = cpu_to_be64((b >> 1) | (a << 63));
+ r->a = cpu_to_be64((a >> 1) ^ (_tt << 48));
+}
+
+static void gf128mul_x_bbe(be128 *r, const be128 *x)
+{
+ u64 a = be64_to_cpu(x->a);
+ u64 b = be64_to_cpu(x->b);
+ u64 _tt = gf128mul_table_bbe[a >> 63];
+
+ r->a = cpu_to_be64((a << 1) | (b >> 63));
+ r->b = cpu_to_be64((b << 1) ^ _tt);
+}
+
+static void gf128mul_x8_lle(be128 *x)
+{
+ u64 a = be64_to_cpu(x->a);
+ u64 b = be64_to_cpu(x->b);
+ u64 _tt = gf128mul_table_lle[b & 0xff];
+
+ x->b = cpu_to_be64((b >> 8) | (a << 56));
+ x->a = cpu_to_be64((a >> 8) ^ (_tt << 48));
+}
+
+static void gf128mul_x8_bbe(be128 *x)
+{
+ u64 a = be64_to_cpu(x->a);
+ u64 b = be64_to_cpu(x->b);
+ u64 _tt = gf128mul_table_bbe[a >> 56];
+
+ x->a = cpu_to_be64((a << 8) | (b >> 56));
+ x->b = cpu_to_be64((b << 8) ^ _tt);
+}
+
+void gf128mul_lle(be128 *r, const be128 *b)
+{
+ be128 p[8];
+ int i;
+
+ p[0] = *r;
+ for (i = 0; i < 7; ++i)
+ gf128mul_x_lle(&p[i + 1], &p[i]);
+
+ memset(r, 0, sizeof(*r));
+ for (i = 0;;) {
+ u8 ch = ((u8 *)b)[15 - i];
+
+ if (ch & 0x80)
+ be128_xor(r, r, &p[0]);
+ if (ch & 0x40)
+ be128_xor(r, r, &p[1]);
+ if (ch & 0x20)
+ be128_xor(r, r, &p[2]);
+ if (ch & 0x10)
+ be128_xor(r, r, &p[3]);
+ if (ch & 0x08)
+ be128_xor(r, r, &p[4]);
+ if (ch & 0x04)
+ be128_xor(r, r, &p[5]);
+ if (ch & 0x02)
+ be128_xor(r, r, &p[6]);
+ if (ch & 0x01)
+ be128_xor(r, r, &p[7]);
+
+ if (++i >= 16)
+ break;
+
+ gf128mul_x8_lle(r);
+ }
+}
+EXPORT_SYMBOL(gf128mul_lle);
+
+void gf128mul_bbe(be128 *r, const be128 *b)
+{
+ be128 p[8];
+ int i;
+
+ p[0] = *r;
+ for (i = 0; i < 7; ++i)
+ gf128mul_x_bbe(&p[i + 1], &p[i]);
+
+ memset(r, 0, sizeof(*r));
+ for (i = 0;;) {
+ u8 ch = ((u8 *)b)[i];
+
+ if (ch & 0x80)
+ be128_xor(r, r, &p[7]);
+ if (ch & 0x40)
+ be128_xor(r, r, &p[6]);
+ if (ch & 0x20)
+ be128_xor(r, r, &p[5]);
+ if (ch & 0x10)
+ be128_xor(r, r, &p[4]);
+ if (ch & 0x08)
+ be128_xor(r, r, &p[3]);
+ if (ch & 0x04)
+ be128_xor(r, r, &p[2]);
+ if (ch & 0x02)
+ be128_xor(r, r, &p[1]);
+ if (ch & 0x01)
+ be128_xor(r, r, &p[0]);
+
+ if (++i >= 16)
+ break;
+
+ gf128mul_x8_bbe(r);
+ }
+}
+EXPORT_SYMBOL(gf128mul_bbe);
+
+/* This version uses 64k bytes of table space.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in
+ the buffer's lowest byte, we can construct a table of
+ the 256 16 byte values that result from the 256 values
+ of this byte. This requires 4096 bytes. But we also
+ need tables for each of the 15 higher bytes in the
+ buffer as well, which makes 64 kbytes in total.
+*/
+/* additional explanation
+ * t[0][BYTE] contains g*BYTE
+ * t[1][BYTE] contains g*x^8*BYTE
+ * ..
+ * t[15][BYTE] contains g*x^120*BYTE */
+struct gf128mul_64k *gf128mul_init_64k_lle(const be128 *g)
+{
+ struct gf128mul_64k *t;
+ int i, j, k;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (!t)
+ goto out;
+
+ for (i = 0; i < 16; i++) {
+ t->t[i] = kzalloc(sizeof(*t->t[i]), GFP_KERNEL);
+ if (!t->t[i]) {
+ gf128mul_free_64k(t);
+ t = NULL;
+ goto out;
+ }
+ }
+
+ t->t[0]->t[128] = *g;
+ for (j = 64; j > 0; j >>= 1)
+ gf128mul_x_lle(&t->t[0]->t[j], &t->t[0]->t[j + j]);
+
+ for (i = 0;;) {
+ for (j = 2; j < 256; j += j)
+ for (k = 1; k < j; ++k)
+ be128_xor(&t->t[i]->t[j + k],
+ &t->t[i]->t[j], &t->t[i]->t[k]);
+
+ if (++i >= 16)
+ break;
+
+ for (j = 128; j > 0; j >>= 1) {
+ t->t[i]->t[j] = t->t[i - 1]->t[j];
+ gf128mul_x8_lle(&t->t[i]->t[j]);
+ }
+ }
+
+out:
+ return t;
+}
+EXPORT_SYMBOL(gf128mul_init_64k_lle);
+
+struct gf128mul_64k *gf128mul_init_64k_bbe(const be128 *g)
+{
+ struct gf128mul_64k *t;
+ int i, j, k;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (!t)
+ goto out;
+
+ for (i = 0; i < 16; i++) {
+ t->t[i] = kzalloc(sizeof(*t->t[i]), GFP_KERNEL);
+ if (!t->t[i]) {
+ gf128mul_free_64k(t);
+ t = NULL;
+ goto out;
+ }
+ }
+
+ t->t[0]->t[1] = *g;
+ for (j = 1; j <= 64; j <<= 1)
+ gf128mul_x_bbe(&t->t[0]->t[j + j], &t->t[0]->t[j]);
+
+ for (i = 0;;) {
+ for (j = 2; j < 256; j += j)
+ for (k = 1; k < j; ++k)
+ be128_xor(&t->t[i]->t[j + k],
+ &t->t[i]->t[j], &t->t[i]->t[k]);
+
+ if (++i >= 16)
+ break;
+
+ for (j = 128; j > 0; j >>= 1) {
+ t->t[i]->t[j] = t->t[i - 1]->t[j];
+ gf128mul_x8_bbe(&t->t[i]->t[j]);
+ }
+ }
+
+out:
+ return t;
+}
+EXPORT_SYMBOL(gf128mul_init_64k_bbe);
+
+void gf128mul_free_64k(struct gf128mul_64k *t)
+{
+ int i;
+
+ for (i = 0; i < 16; i++)
+ kfree(t->t[i]);
+ kfree(t);
+}
+EXPORT_SYMBOL(gf128mul_free_64k);
+
+void gf128mul_64k_lle(be128 *a, struct gf128mul_64k *t)
+{
+ u8 *ap = (u8 *)a;
+ be128 r[1];
+ int i;
+
+ *r = t->t[0]->t[ap[0]];
+ for (i = 1; i < 16; ++i)
+ be128_xor(r, r, &t->t[i]->t[ap[i]]);
+ *a = *r;
+}
+EXPORT_SYMBOL(gf128mul_64k_lle);
+
+void gf128mul_64k_bbe(be128 *a, struct gf128mul_64k *t)
+{
+ u8 *ap = (u8 *)a;
+ be128 r[1];
+ int i;
+
+ *r = t->t[0]->t[ap[15]];
+ for (i = 1; i < 16; ++i)
+ be128_xor(r, r, &t->t[i]->t[ap[15 - i]]);
+ *a = *r;
+}
+EXPORT_SYMBOL(gf128mul_64k_bbe);
+
+/* This version uses 4k bytes of table space.
+ A 16 byte buffer has to be multiplied by a 16 byte key
+ value in GF(128). If we consider a GF(128) value in a
+ single byte, we can construct a table of the 256 16 byte
+ values that result from the 256 values of this byte.
+ This requires 4096 bytes. If we take the highest byte in
+ the buffer and use this table to get the result, we then
+ have to multiply by x^120 to get the final value. For the
+ next highest byte the result has to be multiplied by x^112
+ and so on. But we can do this by accumulating the result
+ in an accumulator starting with the result for the top
+ byte. We repeatedly multiply the accumulator value by
+ x^8 and then add in (i.e. xor) the 16-byte table value for the
+ next lower byte in the buffer, stopping when we reach the
+ lowest byte.
+*/
+struct gf128mul_4k *gf128mul_init_4k_lle(const be128 *g)
+{
+ struct gf128mul_4k *t;
+ int j, k;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (!t)
+ goto out;
+
+ t->t[128] = *g;
+ for (j = 64; j > 0; j >>= 1)
+ gf128mul_x_lle(&t->t[j], &t->t[j+j]);
+
+ for (j = 2; j < 256; j += j)
+ for (k = 1; k < j; ++k)
+ be128_xor(&t->t[j + k], &t->t[j], &t->t[k]);
+
+out:
+ return t;
+}
+EXPORT_SYMBOL(gf128mul_init_4k_lle);
+
+struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g)
+{
+ struct gf128mul_4k *t;
+ int j, k;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (!t)
+ goto out;
+
+ t->t[1] = *g;
+ for (j = 1; j <= 64; j <<= 1)
+ gf128mul_x_bbe(&t->t[j + j], &t->t[j]);
+
+ for (j = 2; j < 256; j += j)
+ for (k = 1; k < j; ++k)
+ be128_xor(&t->t[j + k], &t->t[j], &t->t[k]);
+
+out:
+ return t;
+}
+EXPORT_SYMBOL(gf128mul_init_4k_bbe);
+
+void gf128mul_4k_lle(be128 *a, struct gf128mul_4k *t)
+{
+ u8 *ap = (u8 *)a;
+ be128 r[1];
+ int i = 15;
+
+ *r = t->t[ap[15]];
+ while (i--) {
+ gf128mul_x8_lle(r);
+ be128_xor(r, r, &t->t[ap[i]]);
+ }
+ *a = *r;
+}
+EXPORT_SYMBOL(gf128mul_4k_lle);
+
+void gf128mul_4k_bbe(be128 *a, struct gf128mul_4k *t)
+{
+ u8 *ap = (u8 *)a;
+ be128 r[1];
+ int i = 0;
+
+ *r = t->t[ap[0]];
+ while (++i < 16) {
+ gf128mul_x8_bbe(r);
+ be128_xor(r, r, &t->t[ap[i]]);
+ }
+ *a = *r;
+}
+EXPORT_SYMBOL(gf128mul_4k_bbe);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Functions for multiplying elements of GF(2^128)");
diff --git a/include/crypto/gf128mul.h b/include/crypto/gf128mul.h
new file mode 100644
index 0000000..4fd3152
--- /dev/null
+++ b/include/crypto/gf128mul.h
@@ -0,0 +1,198 @@
+/* gf128mul.h - GF(2^128) multiplication functions
+ *
+ * Copyright (c) 2003, Dr Brian Gladman, Worcester, UK.
+ * Copyright (c) 2006 Rik Snel <[email protected]>
+ *
+ * Based on Dr Brian Gladman's (GPL'd) work published at
+ * http://fp.gladman.plus.com/cryptography_technology/index.htm
+ * See the original copyright notice below.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+/*
+ ---------------------------------------------------------------------------
+ Copyright (c) 2003, Dr Brian Gladman, Worcester, UK. All rights reserved.
+
+ LICENSE TERMS
+
+ The free distribution and use of this software in both source and binary
+ form is allowed (with or without changes) provided that:
+
+ 1. distributions of this source code include the above copyright
+ notice, this list of conditions and the following disclaimer;
+
+ 2. distributions in binary form include the above copyright
+ notice, this list of conditions and the following disclaimer
+ in the documentation and/or other associated materials;
+
+ 3. the copyright holder's name is not used to endorse products
+ built using this software without specific written permission.
+
+ ALTERNATIVELY, provided that this notice is retained in full, this product
+ may be distributed under the terms of the GNU General Public License (GPL),
+ in which case the provisions of the GPL apply INSTEAD OF those given above.
+
+ DISCLAIMER
+
+ This software is provided 'as is' with no explicit or implied warranties
+ in respect of its properties, including, but not limited to, correctness
+ and/or fitness for purpose.
+ ---------------------------------------------------------------------------
+ Issue Date: 31/01/2006
+
+ An implementation of field multiplication in Galois Field GF(128)
+*/
+
+#ifndef _CRYPTO_GF128MUL_H
+#define _CRYPTO_GF128MUL_H
+
+#include <crypto/b128ops.h>
+#include <linux/slab.h>
+
+/* Comment by Rik:
+ *
+ * For some background on GF(2^128) see for example:
+ * http://csrc.nist.gov/CryptoToolkit/modes/proposedmodes/gcm/gcm-revised-spec.pdf
+ *
+ * The elements of GF(2^128) := GF(2)[X]/(X^128-X^7-X^2-X^1-1) can
+ * be mapped to computer memory in a variety of ways. Let's examine
+ * three common cases.
+ *
+ * Take a look at the 16 binary octets below in memory order. The msb's
+ * are left and the lsb's are right. char b[16] is an array and b[0] is
+ * the first octet.
+ *
+ * 80000000 00000000 00000000 00000000 .... 00000000 00000000 00000000
+ * b[0] b[1] b[2] b[3] b[13] b[14] b[15]
+ *
+ * Every bit is a coefficient of some power of X. We can store the bits
+ * in every byte in little-endian order and the bytes themselves also in
+ * little endian order. I will call this lle (little-little-endian).
+ * The above buffer represents the polynomial 1, and X^7+X^2+X^1+1 looks
+ * like 11100001 00000000 .... 00000000 = { 0xE1, 0x00, }.
+ * This format was originally implemented in gf128mul and is used
+ * in GCM (Galois/Counter mode) and in ABL (Arbitrary Block Length).
+ *
+ * Another convention says: store the bits in bigendian order and the
+ * bytes also. This is bbe (big-big-endian). Now the buffer above
+ * represents X^127. X^7+X^2+X^1+1 looks like 00000000 .... 10000111,
+ * b[15] = 0x87 and the rest is 0. LRW uses this convention and bbe
+ * is partly implemented.
+ *
+ * Both of the above formats are easy to implement on big-endian
+ * machines.
+ *
+ * EME (which is patent encumbered) uses the ble format (bits are stored
+ * in big endian order and the bytes in little endian). The above buffer
+ * represents X^7 in this case and the primitive polynomial is b[0] = 0x87.
+ *
+ * The common machine word-size is smaller than 128 bits, so to make
+ * an efficient implementation we must split into machine word sizes.
+ * This file currently uses 64-bit words. Machine endianness comes into
+ * play. The lle format in relation to machine endianness is discussed
+ * below by the original author of gf128mul Dr Brian Gladman.
+ *
+ * Let's look at the bbe and ble format on a little endian machine.
+ *
+ * bbe on a little endian machine u32 x[4]:
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 103..96 111.104 119.112 127.120 71...64 79...72 87...80 95...88
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 39...32 47...40 55...48 63...56 07...00 15...08 23...16 31...24
+ *
+ * ble on a little endian machine
+ *
+ * MS x[0] LS MS x[1] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 31...24 23...16 15...08 07...00 63...56 55...48 47...40 39...32
+ *
+ * MS x[2] LS MS x[3] LS
+ * ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ * 95...88 87...80 79...72 71...64 127.120 119.112 111.104 103..96
+ *
+ * Multiplications in GF(2^128) are mostly bit-shifts, so you see why
+ * ble (and lbe also) are easier to implement on a little-endian
+ * machine than on a big-endian machine. The converse holds for bbe
+ * and lle.
+ *
+ * Note: to have good alignment, it seems to me that it is sufficient
+ * to keep elements of GF(2^128) in type u64[2]. On 32-bit wordsize
+ * machines this will automatically be aligned to the word size, and on
+ * a 64-bit machine likewise.
+ */
+/* Multiply a GF128 field element by x. Field elements are held in arrays
+ of bytes in which field bits 8n..8n + 7 are held in byte[n], with lower
+ indexed bits placed in the more numerically significant bit positions
+ within bytes.
+
+ On little endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 24...31 16...23 08...15 00...07 56...63 48...55 40...47 32...39
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 88...95 80...87 72...79 64...71 120.127 112.119 104.111 96..103
+
+ On big endian machines the bit indexes translate into the bit
+ positions within four 32-bit words in the following way
+
+ MS x[0] LS MS x[1] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 00...07 08...15 16...23 24...31 32...39 40...47 48...55 56...63
+
+ MS x[2] LS MS x[3] LS
+ ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
+ 64...71 72...79 80...87 88...95 96..103 104.111 112.119 120.127
+*/
+
+/* A slow generic version of gf_mul, implemented for lle and bbe
+ * It multiplies a and b and puts the result in a */
+void gf128mul_lle(be128 *a, const be128 *b);
+
+void gf128mul_bbe(be128 *a, const be128 *b);
+
+
+/* 4k table optimization */
+
+struct gf128mul_4k {
+ be128 t[256];
+};
+
+struct gf128mul_4k *gf128mul_init_4k_lle(const be128 *g);
+struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g);
+void gf128mul_4k_lle(be128 *a, struct gf128mul_4k *t);
+void gf128mul_4k_bbe(be128 *a, struct gf128mul_4k *t);
+
+static inline void gf128mul_free_4k(struct gf128mul_4k *t)
+{
+ kfree(t);
+}
+
+
+/* 64k table optimization, implemented for lle and bbe */
+
+struct gf128mul_64k {
+ struct gf128mul_4k *t[16];
+};
+
+/* first initialize the table with the constant factor with which you
+ * want to multiply, then call gf128mul_64k_lle with the other factor
+ * in the first argument and the table in the second; the product is
+ * written back into *a. */
+struct gf128mul_64k *gf128mul_init_64k_lle(const be128 *g);
+struct gf128mul_64k *gf128mul_init_64k_bbe(const be128 *g);
+void gf128mul_free_64k(struct gf128mul_64k *t);
+void gf128mul_64k_lle(be128 *a, struct gf128mul_64k *t);
+void gf128mul_64k_bbe(be128 *a, struct gf128mul_64k *t);
+
+#endif /* _CRYPTO_GF128MUL_H */
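
As a usage illustration for the 4k interface declared above (the
caller below is hypothetical and the error handling is reduced to a
NULL check):

/* multiply *val in place by a fixed g in GF(2^128), bbe convention */
static int gf128mul_example(be128 *val, const be128 *g)
{
	struct gf128mul_4k *t = gf128mul_init_4k_bbe(g);

	if (!t)
		return -ENOMEM;

	gf128mul_4k_bbe(val, t);	/* *val = *val * g */
	gf128mul_free_4k(t);
	return 0;
}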

2006-11-28 20:02:59

by Rik Snel

[permalink] [raw]
Subject: Re: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

Hello Herbert,

On Mon, Nov 27, 2006 at 10:56:07AM +1100, Herbert Xu wrote:
> On Sat, Sep 02, 2006 at 03:00:25AM +0200, [email protected] wrote:
>
> Sorry it took so long. But I've been trying to modify the code so
> that the same source is used for both BE and LE machines. I've
> finally accumulated enough time to finish it.

It's OK. The source will be more maintainable, but constantly converting
between big and little-endian (on little endian machines) may have a
significant performance impact (I will just test when your gf128 hits
cryptodev-2.6, and complain if it is the case).

Two more remarks (errors in v2 of my patch): b128ops.h and gf128mul.h
should be in include/ (so they can be used in modules) and the inline
functions in b128ops.h should also be static.
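
For example, the be128_xor that gf128mul.c uses throughout is one of
those inline helpers; a minimal sketch of what it amounts to (the
exact definition in b128ops.h may differ in form):

static inline void be128_xor(be128 *r, const be128 *p, const be128 *q)
{
	r->a = p->a ^ q->a;
	r->b = p->b ^ q->b;
}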

> Unfortunately it seems that the end result doesn't quite agree with
> your test vectors :) In particular, the LE version of your mul_x and
> mul_x8 functions don't agree with mine.
>
> Could you please compare the two versions and double-check them?
> I'm unsure why 15 was used above as a shift count. It would seem
> that 7 makes more sense, as endianness is byte-based, not
> word-based.
> >
> > +#define M80X 0x8080808080808080LLU
> > +#define M01X 0x0101010101010101LLU
> > +
> > +static void gf128mul_x_lle(u64 r[2], const u64 x[2])
> > +{
> > + u64 _tt = gf128mul_table_lle[(x[1] >> 49) & 0x80];
> > + r[1] = ((x[1] >> 1) & ~M80X) | (((x[1] << 15) | (x[0] >> 49)) & M80X);
> > + r[0] = (((x[0] >> 1) & ~M80X) | ((x[0] << 15) & M80X)) ^ _tt;
> > +}

I'll try to explain why I think the above code is correct:
As in my comment in gf128mul.h, lle means the polynomials in GF(2^128) are
stored in little-little-endian format. So
10000000 00000000 .. 00000000 = 0x80 0x00 .. 0x00 = 1
01000000 00000000 .. 00000000 = 0x40 0x00 .. 0x00 = X^1
01010001 00000000 .. 10000000 = 0x51 0x00 .. 0x80 = X^1 + X^3 + X^7 + X^120
00000000 00000000 .. 00000001 = 0x00 0x00 .. 0x01 = X^127

The u64 type emulates a little endian 64bit processor (on my 32bit
intel), so when we load these examples into two 64-bit little-endian
integers they are:
0x0000000000000080 0x0000000000000000 = 1
0x0000000000000040 0x0000000000000000 = X^1
0x0000000000000051 0x8000000000000000 = X^1 + X^3 + X^7 + X^120
0x0000000000000000 0x0100000000000000 = X^127

Let's multiply the third example by X, we should get
00101000 10000000 .. 01000000 = 0x28 0x80 .. 0x40 = X^2 + X^4 + X^8 + X^121

Represented as two little endian 64 bit values:
0x0000000000008028 0x4000000000000000

The above code implements this shift (efficiently?) by noting that,
within each byte, bit 1 moves to bit 0, bit 2 to bit 1, ..., bit 7 to
bit 6, and all bit-7 positions are zeroed afterwards (this is done by
((x[1] >> 1) & ~M80X)). The bit-7 positions are then set by moving the
least significant bit of each byte into the right position (15 bits to
the left) and ORing it in.

Let's look at the example: the first 8 in 0x0...008028 comes from the
least significant bit of 0x00.00051. So the shift by 15 to get the 7th
bit of every byte right is correct. (I have included a simple program
to compare the different implementations)

> I've attached my version of gf128mul which is based on your BE
> code.

Ok, will comment.

> The other main change I've made is to remove the need to
> allocate a contiguous 64K table in gf128mul. Requiring every
> tfm to allocate a contiguous 64K chunk of memory is not realistic
> on Linux.

Ok.

> I also added a be128/le128/u128 type to make 128-bit operations
> easier.

I assume it is: typedef struct { u64 a, b; } be128;
As long as compilers don't do u128 natively it is just a matter of taste.

> [...]
> +#define xx(p, q) __constant_be16_to_cpu(0x##p##q)

This seems to be the problem: the table is constructed wrongly.
All the calculations take place as if we were on a big-endian machine
(the values are converted with be64_to_cpu on entry and cpu_to_be64 on
exit), so the table entries should never be swapped. For i = 0x80, for
example, the entry is used in gf128mul_x_lle as (_tt << 48) on a
cpu-order value, so it must be the plain constant 0xe100; with
__constant_be16_to_cpu a little-endian machine gets 0x00e1 instead,
and the reduction polynomial ends up in the wrong byte. So the above
line should read

+#define xx(p, q) 0x##p##q

While investigating the problem, I wrote a small program to compare
a bytewise implementation with Brian's and your implementation of
mul_x; it only covers the lle case.

Once I found this problem, I stopped looking, so let me know if the
test vectors still don't match up; in that case I will take a closer look.

Greetings,

Rik.

/* test program for multiplying by X in GF(2^128) in lle representation
* on a little endian machine. */
/* GNU GPL >= v2 applies, (C) Brian Gladman, Herbert Xu, Rik Snel. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <assert.h>
#include <byteswap.h>

typedef uint64_t u64;
typedef uint16_t u16;
typedef struct {
u64 a;
u64 b;
} be128;

#define gf128mul_dat(q) { \
q(0x00), q(0x01), q(0x02), q(0x03), q(0x04), q(0x05), q(0x06), q(0x07),\
q(0x08), q(0x09), q(0x0a), q(0x0b), q(0x0c), q(0x0d), q(0x0e), q(0x0f),\
q(0x10), q(0x11), q(0x12), q(0x13), q(0x14), q(0x15), q(0x16), q(0x17),\
q(0x18), q(0x19), q(0x1a), q(0x1b), q(0x1c), q(0x1d), q(0x1e), q(0x1f),\
q(0x20), q(0x21), q(0x22), q(0x23), q(0x24), q(0x25), q(0x26), q(0x27),\
q(0x28), q(0x29), q(0x2a), q(0x2b), q(0x2c), q(0x2d), q(0x2e), q(0x2f),\
q(0x30), q(0x31), q(0x32), q(0x33), q(0x34), q(0x35), q(0x36), q(0x37),\
q(0x38), q(0x39), q(0x3a), q(0x3b), q(0x3c), q(0x3d), q(0x3e), q(0x3f),\
q(0x40), q(0x41), q(0x42), q(0x43), q(0x44), q(0x45), q(0x46), q(0x47),\
q(0x48), q(0x49), q(0x4a), q(0x4b), q(0x4c), q(0x4d), q(0x4e), q(0x4f),\
q(0x50), q(0x51), q(0x52), q(0x53), q(0x54), q(0x55), q(0x56), q(0x57),\
q(0x58), q(0x59), q(0x5a), q(0x5b), q(0x5c), q(0x5d), q(0x5e), q(0x5f),\
q(0x60), q(0x61), q(0x62), q(0x63), q(0x64), q(0x65), q(0x66), q(0x67),\
q(0x68), q(0x69), q(0x6a), q(0x6b), q(0x6c), q(0x6d), q(0x6e), q(0x6f),\
q(0x70), q(0x71), q(0x72), q(0x73), q(0x74), q(0x75), q(0x76), q(0x77),\
q(0x78), q(0x79), q(0x7a), q(0x7b), q(0x7c), q(0x7d), q(0x7e), q(0x7f),\
q(0x80), q(0x81), q(0x82), q(0x83), q(0x84), q(0x85), q(0x86), q(0x87),\
q(0x88), q(0x89), q(0x8a), q(0x8b), q(0x8c), q(0x8d), q(0x8e), q(0x8f),\
q(0x90), q(0x91), q(0x92), q(0x93), q(0x94), q(0x95), q(0x96), q(0x97),\
q(0x98), q(0x99), q(0x9a), q(0x9b), q(0x9c), q(0x9d), q(0x9e), q(0x9f),\
q(0xa0), q(0xa1), q(0xa2), q(0xa3), q(0xa4), q(0xa5), q(0xa6), q(0xa7),\
q(0xa8), q(0xa9), q(0xaa), q(0xab), q(0xac), q(0xad), q(0xae), q(0xaf),\
q(0xb0), q(0xb1), q(0xb2), q(0xb3), q(0xb4), q(0xb5), q(0xb6), q(0xb7),\
q(0xb8), q(0xb9), q(0xba), q(0xbb), q(0xbc), q(0xbd), q(0xbe), q(0xbf),\
q(0xc0), q(0xc1), q(0xc2), q(0xc3), q(0xc4), q(0xc5), q(0xc6), q(0xc7),\
q(0xc8), q(0xc9), q(0xca), q(0xcb), q(0xcc), q(0xcd), q(0xce), q(0xcf),\
q(0xd0), q(0xd1), q(0xd2), q(0xd3), q(0xd4), q(0xd5), q(0xd6), q(0xd7),\
q(0xd8), q(0xd9), q(0xda), q(0xdb), q(0xdc), q(0xdd), q(0xde), q(0xdf),\
q(0xe0), q(0xe1), q(0xe2), q(0xe3), q(0xe4), q(0xe5), q(0xe6), q(0xe7),\
q(0xe8), q(0xe9), q(0xea), q(0xeb), q(0xec), q(0xed), q(0xee), q(0xef),\
q(0xf0), q(0xf1), q(0xf2), q(0xf3), q(0xf4), q(0xf5), q(0xf6), q(0xf7),\
q(0xf8), q(0xf9), q(0xfa), q(0xfb), q(0xfc), q(0xfd), q(0xfe), q(0xff) \
}

#define xxle(p,q) 0x##q##p /* assemble in little endian order */
#define xxbe(p,q) 0x##p##q /* assemble in big endian order */

#define xda_lle_le(i) ( \
(i&0x80?xxle(e1,00):0)^(i&0x40?xxle(70,80):0)^ \
(i&0x20?xxle(38,40):0)^(i&0x10?xxle(1c,20):0)^ \
(i&0x08?xxle(0e,10):0)^(i&0x04?xxle(07,08):0)^ \
(i&0x02?xxle(03,84):0)^(i&0x01?xxle(01,c2):0) \
)

#define xda_lle_be(i) ( \
(i&0x80?xxbe(e1,00):0)^(i&0x40?xxbe(70,80):0)^ \
(i&0x20?xxbe(38,40):0)^(i&0x10?xxbe(1c,20):0)^ \
(i&0x08?xxbe(0e,10):0)^(i&0x04?xxbe(07,08):0)^ \
(i&0x02?xxbe(03,84):0)^(i&0x01?xxbe(01,c2):0) \
)

static const u16 gf128mul_table_lle_le[256] = gf128mul_dat(xda_lle_le);
static const u16 gf128mul_table_lle_be[256] = gf128mul_dat(xda_lle_be);

/* simple straightforward implementation, does not depend
* on machine endianness */
void mul_x_lle_simple(be128 *result, const be128 *ex) {
unsigned char *r = (unsigned char*)result;
const unsigned char *x = (unsigned char*)ex;
u64 overflow = 0xE1*(x[15]&0x01); /* do we get X^128 ? */
r[15] = x[15]>>1|x[14]<<7;
r[14] = x[14]>>1|x[13]<<7;
r[13] = x[13]>>1|x[12]<<7;
r[12] = x[12]>>1|x[11]<<7;
r[11] = x[11]>>1|x[10]<<7;
r[10] = x[10]>>1|x[9]<<7;
r[9] = x[9]>>1|x[8]<<7;
r[8] = x[8]>>1|x[7]<<7;
r[7] = x[7]>>1|x[6]<<7;
r[6] = x[6]>>1|x[5]<<7;
r[5] = x[5]>>1|x[4]<<7;
r[4] = x[4]>>1|x[3]<<7;
r[3] = x[3]>>1|x[2]<<7;
r[2] = x[2]>>1|x[1]<<7;
r[1] = x[1]>>1|x[0]<<7;
r[0] = x[0]>>1^overflow; /* GF(2) addition is xor */
}

/* for use on little endian machines; remove bswap_64's for usage
* on bigendian machines */
void mul_x_lle_herbert(be128 *r, const be128 *x) {
u64 a = bswap_64(x->a);
u64 b = bswap_64(x->b);
u64 _tt = gf128mul_table_lle_be[(b << 7) & 0xff];

r->b = bswap_64((b >> 1) | (a << 63));
r->a = bswap_64((a >> 1) ^ (_tt << 48));
}

/* works on little endian machines, use above version without bswap_64's
* on big endian machines, should be faster than the above version
* on little endian machines */
#define M80X 0x8080808080808080LLU
void mul_x_lle_brian(be128 *r, const be128 *x) {
u64 _tt = gf128mul_table_lle_le[(x->b >> 49) & 0x80];
r->b = ((x->b >> 1) & ~M80X) | (((x->b << 15) | (x->a >> 49)) & M80X);
r->a = (((x->a >> 1) & ~M80X) | ((x->a << 15) & M80X)) ^ _tt;
}

void debug(char *name, be128 *ex) {
int i;
unsigned char *x = (unsigned char*)ex;
for (i = 0; i < 16; i++)
printf("%02x ", x[i]);
printf("%s\n", name);
}

int main(int argc, char **argv) {
assert(sizeof(be128) == 128/8);
char V[16] = {
//0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
//0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0,
0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0,
};
char R[16];
debug("V", (be128*)V);
mul_x_lle_simple((be128*)R, (be128*)V);
debug("XV_simple", (be128*)R);
mul_x_lle_herbert((be128*)R, (be128*)V);
debug("XV_herbert", (be128*)R);
mul_x_lle_brian((be128*)R, (be128*)V);
debug("XV_brian", (be128*)R);
exit(0);
}
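
(Saved as e.g. mul_x_test.c this builds with a plain
gcc -o mul_x_test mul_x_test.c; on a little-endian machine all three
XV_* lines should come out identical if the reasoning above is right.)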

--
Nothing is ever a total loss; it can always serve as a bad example.

2006-11-28 21:13:43

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

Hi Rik:

[email protected] wrote:
>
> It's OK. The source will be more maintainable, but constantly converting
> between big and little-endian (on little endian machines) may have a
> significant performance impact (I will just test when your gf128 hits
> cryptodev-2.6, and complain if it is the case).

BTW, the tcrypt speed test for lrw doesn't work for me.

> Two more remarks (errors in v2 of my patch): b128ops.h and gf128mul.h
> should be in include/ (so they can be used in modules) and the inline
> functions in b128ops.h should also be static.

Yep they're in include/crypto with all the functions being static inline.

> This seems to be the problem: the table is constructed wrongly.
> All the calculations take place as if we were on a big-endian machine,
> so the table entries should never be swapped. [...] So the above line
> should read
>
> +#define xx(p, q) 0x##p##q

Yes you're quite right. That was the problem. I'll push this into mm
as soon as I get the speed tests fixed.

Thanks!
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2006-11-28 21:17:52

by Rik Snel

[permalink] [raw]
Subject: Re: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

Hello Herbert,

On Wed, Nov 29, 2006 at 08:13:40AM +1100, Herbert Xu wrote:
> > It's OK. The source will be more maintainable, but constantly converting
> > between big and little-endian (on little endian machines) may have a
> > significant performance impact (I will just test when your gf128 hits
> > cryptodev-2.6, and complain if it is the case).
>
> BTW, the tcrypt speed test for lrw doesn't work for me.

Did you try my patch: [PATCH] adding speed_test_template for lrw(aes)
which I sent on Sep 23?

> > This seems to be the problem: the table is constructed wrongly.
> > All the calculations take place as if we were on a big-endian machine,
> > so the table entries should never be swapped. [...] So the above line
> > should read
> >
> > +#define xx(p, q) 0x##p##q
>
> Yes you're quite right. That was the problem. I'll push this into mm
> as soon as I get the speed tests fixed.

Ok, that's good news.

Greetings,

Rik.

--
Nothing is ever a total loss; it can always serve as a bad example.

2006-11-28 22:25:02

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCHv2 4/6] crypto: table driven multiplications in GF(2^128), needed by LRW (and in the future ABL)

On Tue, Nov 28, 2006 at 10:17:39PM +0100, [email protected] wrote:
>
> Did you try my patch: [PATCH] adding speed_test_template for lrw(aes)
> which I sent on Sep 23?

Aha, that got buried in my mailbox. I'll push it out soon.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2006-11-29 08:04:14

by Herbert Xu

[permalink] [raw]
Subject: Re: LRW... v2

On Sat, Sep 02, 2006 at 03:00:21AM +0200, [email protected] wrote:
>
> Here is the updated version of my LRW patch set.

All patches merged and pushed into cryptodev-2.6.

Thanks again!
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt