2023-09-14 08:57:17

by Herbert Xu

Subject: [PATCH 0/8] crypto: Add lskcipher API type

This series introduces the lskcipher API type. Its relationship
to skcipher is the same as that between shash and ahash.

This series only converts ecb and cbc to the new algorithm type.
Once all templates have been moved over, we can then convert the
cipher implementations such as aes-generic.
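
For illustration, here is a minimal sketch of what direct use of the new
type could look like, based only on the interfaces added in patch 4 of
this series; the helper below and its simplified error handling are
illustrative, not part of the posted patches:

#include <crypto/skcipher.h>
#include <linux/err.h>

/*
 * Encrypt a linear kernel buffer in place: no request object, no SG
 * list, fully synchronous.  len must be a multiple of the block size.
 */
static int lskcipher_sketch_encrypt(const u8 *key, unsigned int keylen,
                                    u8 *buf, unsigned int len, u8 *iv)
{
        struct crypto_lskcipher *tfm;
        int err;

        tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_lskcipher_setkey(tfm, key, keylen);
        if (!err)
                err = crypto_lskcipher_encrypt(tfm, buf, buf, len, iv);

        crypto_free_lskcipher(tfm);
        return err;
}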

Ard, if you have some spare cycles you can help with either the
templates or the cipher algorithm conversions. The latter will
be applied once the templates have been completely moved over.

Just let me know which ones you'd like to do so I won't touch
them.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


2023-09-14 09:13:05

by Ard Biesheuvel

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, 14 Sept 2023 at 10:28, Herbert Xu <[email protected]> wrote:
>
> This series introduces the lskcipher API type. Its relationship
> to skcipher is the same as that between shash and ahash.
>
> This series only converts ecb and cbc to the new algorithm type.
> Once all templates have been moved over, we can then convert the
> cipher implementations such as aes-generic.
>
> Ard, if you have some spare cycles you can help with either the
> templates or the cipher algorithm conversions. The latter will
> be applied once the templates have been completely moved over.
>
> Just let me know which ones you'd like to do so I won't touch
> them.
>

Hello Herbert,

Thanks for sending this.

So the intent is for lskcipher to ultimately supplant the current
cipher entirely, right? And lskcipher can be used directly by clients
of the crypto API, in which case kernel VAs may be used directly, but
no async support is available, while skcipher API clients will gain
access to lskciphers via a generic wrapper (if needed?)

That makes sense but it would help to spell this out.

I'd be happy to help out here, but I'll be off on vacation for ~3 weeks
after this week, so I won't get around to it before mid-October. What I
will do (if it helps) is rebase my recent RISC-V scalar AES cipher
patches onto this and implement ecb(aes) instead (which is the idea,
IIUC?)

2023-09-14 09:31:32

by Ard Biesheuvel

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, 14 Sept 2023 at 10:56, Herbert Xu <[email protected]> wrote:
>
> On Thu, Sep 14, 2023 at 10:51:21AM +0200, Ard Biesheuvel wrote:
> >
> > So the intent is for lskcipher to ultimately supplant the current
> > cipher entirely, right? And lskcipher can be used directly by clients
> > of the crypto API, in which case kernel VAs may be used directly, but
> > no async support is available, while skcipher API clients will gain
> > access to lskciphers via a generic wrapper (if needed?)
> >
> > That makes sense but it would help to spell this out.
>
> Yes that's the idea. It is pretty much exactly the same as how
> shash and ahash are handled and used.
>
> Because of the way I structured the ecb transition code (it will
> take an old cipher and repackage it as an lskcipher), we need to
> convert the templates first and then do the cipher => lskcipher
> conversion.
>
> I'd be happy to help out here, but I'll be off on vacation for ~3 weeks
> after this week, so I won't get around to it before mid-October. What I
> will do (if it helps) is rebase my recent RISC-V scalar AES cipher
> patches onto this and implement ecb(aes) instead (which is the idea,
> IIUC?)
>
> That sounds good. In fact let me attach the aes-generic proof-
> of-concept conversion (it can only be applied after all templates
> have been converted, so if you test it now everything but ecb/cbc
> will be broken).
>

That helps, thanks.

...
> +static struct lskcipher_alg aes_alg = {
> + .co = {
> + .base.cra_name = "aes",

So this means that the base name will be aes, not ecb(aes), right?
What about cbc and ctr? It makes sense for a single lskcipher to
implement all three of those at least, so that algorithms like XTS and
GCM can be implemented cheaply using generic templates, without the
need to call into the lskcipher for each block of input.

2023-09-14 09:35:26

by Ard Biesheuvel

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, 14 Sept 2023 at 11:30, Herbert Xu <[email protected]> wrote:
>
> On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
> >
> > So this means that the base name will be aes, not ecb(aes), right?
> > What about cbc and ctr? It makes sense for a single lskcipher to
> > implement all three of those at least, so that algorithms like XTS and
> > GCM can be implemented cheaply using generic templates, without the
> > need to call into the lskcipher for each block of input.
>
> You can certainly implement all three with arch-specific code
> but I didn't think there was a need to do this for the generic
> version.
>

Fair enough. So what should such an arch version implement?

aes
cbc(aes)
ctr(aes)

or

ecb(aes)
cbc(aes)
ctr(aes)

?

2023-09-14 10:22:18

by Herbert Xu

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
>
> So this means that the base name will be aes, not ecb(aes), right?
> What about cbc and ctr? It makes sense for a single lskcipher to
> implement all three of those at least, so that algorithms like XTS and
> GCM can be implemented cheaply using generic templates, without the
> need to call into the lskcipher for each block of input.

You can certainly implement all three with arch-specific code
but I didn't think there was a need to do this for the generic
version.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-14 12:02:17

by Herbert Xu

Subject: [PATCH 2/8] ipsec: Stop using crypto_has_alg

Stop using the obsolete crypto_has_alg helper that is type-agnostic.
Instead use the type-specific helpers such as the newly added
crypto_has_aead.

This means that changes in the underlying type/mask values won't
affect IPsec.

Signed-off-by: Herbert Xu <[email protected]>
---
net/xfrm/xfrm_algo.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
index 094734fbec96..41533c631431 100644
--- a/net/xfrm/xfrm_algo.c
+++ b/net/xfrm/xfrm_algo.c
@@ -5,6 +5,7 @@
* Copyright (c) 2002 James Morris <[email protected]>
*/

+#include <crypto/aead.h>
#include <crypto/hash.h>
#include <crypto/skcipher.h>
#include <linux/module.h>
@@ -644,38 +645,33 @@ static inline int calg_entries(void)
}

struct xfrm_algo_list {
+ int (*find)(const char *name, u32 type, u32 mask);
struct xfrm_algo_desc *algs;
int entries;
- u32 type;
- u32 mask;
};

static const struct xfrm_algo_list xfrm_aead_list = {
+ .find = crypto_has_aead,
.algs = aead_list,
.entries = ARRAY_SIZE(aead_list),
- .type = CRYPTO_ALG_TYPE_AEAD,
- .mask = CRYPTO_ALG_TYPE_MASK,
};

static const struct xfrm_algo_list xfrm_aalg_list = {
+ .find = crypto_has_ahash,
.algs = aalg_list,
.entries = ARRAY_SIZE(aalg_list),
- .type = CRYPTO_ALG_TYPE_HASH,
- .mask = CRYPTO_ALG_TYPE_HASH_MASK,
};

static const struct xfrm_algo_list xfrm_ealg_list = {
+ .find = crypto_has_skcipher,
.algs = ealg_list,
.entries = ARRAY_SIZE(ealg_list),
- .type = CRYPTO_ALG_TYPE_SKCIPHER,
- .mask = CRYPTO_ALG_TYPE_MASK,
};

static const struct xfrm_algo_list xfrm_calg_list = {
+ .find = crypto_has_comp,
.algs = calg_list,
.entries = ARRAY_SIZE(calg_list),
- .type = CRYPTO_ALG_TYPE_COMPRESS,
- .mask = CRYPTO_ALG_TYPE_MASK,
};

static struct xfrm_algo_desc *xfrm_find_algo(
@@ -696,8 +692,7 @@ static struct xfrm_algo_desc *xfrm_find_algo(
if (!probe)
break;

- status = crypto_has_alg(list[i].name, algo_list->type,
- algo_list->mask);
+ status = algo_list->find(list[i].name, 0, 0);
if (!status)
break;

--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-14 12:31:43

by Herbert Xu

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
>
> > +static struct lskcipher_alg aes_alg = {
> > + .co = {
> > + .base.cra_name = "aes",
>
> So this means that the base name will be aes, not ecb(aes), right?

Yes, this will be called "aes". If someone asks for "ecb(aes)",
that will instantiate the ecb template, which will construct
a new algorithm with the same function pointers as the original.
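
From a caller's point of view the instantiation is transparent; a
hedged sketch (the helper name is made up, and it assumes the ecb
template conversion from this series is in place):

#include <linux/types.h>
#include <crypto/skcipher.h>

/*
 * "aes" resolves directly to the lskcipher registered by aes-generic,
 * while "ecb(aes)" triggers a one-off instantiation of the ecb
 * template, which reuses the same function pointers.
 */
static struct crypto_lskcipher *sketch_get_aes(bool wrapped)
{
        return crypto_alloc_lskcipher(wrapped ? "ecb(aes)" : "aes", 0, 0);
}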

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-14 12:47:51

by Herbert Xu

Subject: [PATCH 1/8] crypto: aead - Add crypto_has_aead

Add the helper crypto_has_aead. This is meant to replace the
existing use of crypto_has_alg to locate AEAD algorithms.

Signed-off-by: Herbert Xu <[email protected]>
---
crypto/aead.c | 6 ++++++
include/crypto/aead.h | 12 ++++++++++++
2 files changed, 18 insertions(+)

diff --git a/crypto/aead.c b/crypto/aead.c
index d5ba204ebdbf..54906633566a 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -269,6 +269,12 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_alloc_aead);

+int crypto_has_aead(const char *alg_name, u32 type, u32 mask)
+{
+ return crypto_type_has_alg(alg_name, &crypto_aead_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_aead);
+
static int aead_prepare_alg(struct aead_alg *alg)
{
struct crypto_istat_aead *istat = aead_get_stat(alg);
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index 35e45b854a6f..51382befbe37 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -217,6 +217,18 @@ static inline void crypto_free_aead(struct crypto_aead *tfm)
crypto_destroy_tfm(tfm, crypto_aead_tfm(tfm));
}

+/**
+ * crypto_has_aead() - Search for the availability of an aead.
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ * aead
+ * @type: specifies the type of the aead
+ * @mask: specifies the mask for the aead
+ *
+ * Return: true when the aead is known to the kernel crypto API; false
+ * otherwise
+ */
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask);
+
static inline const char *crypto_aead_driver_name(struct crypto_aead *tfm)
{
return crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm));
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
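
For context, a short illustration of how a caller might probe with the
new helper, in the same spirit as the ->find callback wired up in the
xfrm patch of this series; the function below is hypothetical:

#include <crypto/aead.h>

/* Nonzero if some gcm(aes) AEAD implementation is available. */
static int sketch_have_gcm_aes(void)
{
        return crypto_has_aead("gcm(aes)", 0, 0);
}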

2023-09-14 18:57:50

by Herbert Xu

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, Sep 14, 2023 at 11:31:14AM +0200, Ard Biesheuvel wrote:
>
> ecb(aes)

This is unnecessary as the generic template will construct an
algorithm that's almost exactly the same as the underlying
algorithm. But you could register it if you want to. The
template instantiation is a one-off event.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-14 19:05:58

by Herbert Xu

Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, Sep 14, 2023 at 10:51:21AM +0200, Ard Biesheuvel wrote:
>
> So the intent is for lskcipher to ultimately supplant the current
> cipher entirely, right? And lskcipher can be used directly by clients
> of the crypto API, in which case kernel VAs may be used directly, but
> no async support is available, while skcipher API clients will gain
> access to lskciphers via a generic wrapper (if needed?)
>
> That makes sense but it would help to spell this out.

Yes that's the idea. It is pretty much exactly the same as how
shash and ahash are handled and used.

Because of the way I structured the ecb transition code (it will
take an old cipher and repackage it as an lskcipher), we need to
convert the templates first and then do the cipher => lskcipher
conversion.

> I'd be happy to help out here, but I'll be off on vacation for ~3 weeks
> after this week, so I won't get around to it before mid-October. What I
> will do (if it helps) is rebase my recent RISC-V scalar AES cipher
> patches onto this and implement ecb(aes) instead (which is the idea,
> IIUC?)

That sounds good. In fact let me attach the aes-generic proof-
of-concept conversion (it can only be applied after all templates
have been converted, so if you test it now everything but ecb/cbc
will be broken).

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
index 666474b81c6a..afb74ee04193 100644
--- a/crypto/aes_generic.c
+++ b/crypto/aes_generic.c
@@ -47,14 +47,13 @@
* ---------------------------------------------------------------------------
*/

-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <asm/byteorder.h>
#include <asm/unaligned.h>
+#include <crypto/aes.h>
+#include <crypto/internal/skcipher.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>

static inline u8 byte(const u32 x, const unsigned n)
{
@@ -1123,7 +1122,7 @@ EXPORT_SYMBOL_GPL(crypto_it_tab);

/**
* crypto_aes_set_key - Set the AES key.
- * @tfm: The %crypto_tfm that is used in the context.
+ * @tfm: The %crypto_lskcipher that is used in the context.
* @in_key: The input key.
* @key_len: The size of the key.
*
@@ -1133,10 +1132,10 @@ EXPORT_SYMBOL_GPL(crypto_it_tab);
*
* Return: 0 on success; -EINVAL on failure (only happens for bad key lengths)
*/
-int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
- unsigned int key_len)
+int crypto_aes_set_key(struct crypto_lskcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
{
- struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);

return aes_expandkey(ctx, in_key, key_len);
}
@@ -1173,9 +1172,9 @@ EXPORT_SYMBOL_GPL(crypto_aes_set_key);
f_rl(bo, bi, 3, k); \
} while (0)

-static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_encrypt_one(struct crypto_lskcipher *tfm, const u8 *in, u8 *out)
{
- const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ const struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);
u32 b0[4], b1[4];
const u32 *kp = ctx->key_enc + 4;
const int key_len = ctx->key_length;
@@ -1212,6 +1211,17 @@ static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le32(b0[3], out + 12);
}

+static int crypto_aes_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned nbytes, u8 *iv, bool final)
+{
+ const unsigned int bsize = AES_BLOCK_SIZE;
+
+ for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize)
+ aes_encrypt_one(tfm, src, dst);
+
+ return nbytes && final ? -EINVAL : nbytes;
+}
+
/* decrypt a block of text */

#define i_rn(bo, bi, n, k) do { \
@@ -1243,9 +1253,9 @@ static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
i_rl(bo, bi, 3, k); \
} while (0)

-static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_decrypt_one(struct crypto_lskcipher *tfm, const u8 *in, u8 *out)
{
- const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+ const struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);
u32 b0[4], b1[4];
const int key_len = ctx->key_length;
const u32 *kp = ctx->key_dec + 4;
@@ -1282,33 +1292,41 @@ static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
put_unaligned_le32(b0[3], out + 12);
}

-static struct crypto_alg aes_alg = {
- .cra_name = "aes",
- .cra_driver_name = "aes-generic",
- .cra_priority = 100,
- .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
- .cra_blocksize = AES_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct crypto_aes_ctx),
- .cra_module = THIS_MODULE,
- .cra_u = {
- .cipher = {
- .cia_min_keysize = AES_MIN_KEY_SIZE,
- .cia_max_keysize = AES_MAX_KEY_SIZE,
- .cia_setkey = crypto_aes_set_key,
- .cia_encrypt = crypto_aes_encrypt,
- .cia_decrypt = crypto_aes_decrypt
- }
- }
+static int crypto_aes_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned nbytes, u8 *iv, bool final)
+{
+ const unsigned int bsize = AES_BLOCK_SIZE;
+
+ for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize)
+ aes_decrypt_one(tfm, src, dst);
+
+ return nbytes && final ? -EINVAL : nbytes;
+}
+
+static struct lskcipher_alg aes_alg = {
+ .co = {
+ .base.cra_name = "aes",
+ .base.cra_driver_name = "aes-generic",
+ .base.cra_priority = 100,
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct crypto_aes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ },
+ .setkey = crypto_aes_set_key,
+ .encrypt = crypto_aes_encrypt,
+ .decrypt = crypto_aes_decrypt,
};

static int __init aes_init(void)
{
- return crypto_register_alg(&aes_alg);
+ return crypto_register_lskcipher(&aes_alg);
}

static void __exit aes_fini(void)
{
- crypto_unregister_alg(&aes_alg);
+ crypto_unregister_lskcipher(&aes_alg);
}

subsys_initcall(aes_init);
diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index 2090729701ab..947109e24360 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -6,8 +6,9 @@
#ifndef _CRYPTO_AES_H
#define _CRYPTO_AES_H

+#include <linux/cache.h>
+#include <linux/errno.h>
#include <linux/types.h>
-#include <linux/crypto.h>

#define AES_MIN_KEY_SIZE 16
#define AES_MAX_KEY_SIZE 32
@@ -18,6 +19,8 @@
#define AES_MAX_KEYLENGTH (15 * 16)
#define AES_MAX_KEYLENGTH_U32 (AES_MAX_KEYLENGTH / sizeof(u32))

+struct crypto_lskcipher;
+
/*
* Please ensure that the first two fields are 16-byte aligned
* relative to the start of the structure, i.e., don't move them!
@@ -48,8 +51,8 @@ static inline int aes_check_keylen(unsigned int keylen)
return 0;
}

-int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
- unsigned int key_len);
+int crypto_aes_set_key(struct crypto_lskcipher *tfm, const u8 *in_key,
+ unsigned int key_len);

/**
* aes_expandkey - Expands the AES key as described in FIPS-197

2023-09-15 00:08:18

by Herbert Xu

Subject: [PATCH 4/8] crypto: skcipher - Add lskcipher

Add a new API type lskcipher designed for taking straight kernel
pointers instead of SG lists. Its relationship to skcipher will
be analogous to that between shash and ahash.

Signed-off-by: Herbert Xu <[email protected]>
---
crypto/Makefile | 6 +-
crypto/cryptd.c | 2 +-
crypto/lskcipher.c | 594 +++++++++++++++++++++++++++++
crypto/skcipher.c | 75 +++-
crypto/skcipher.h | 30 ++
include/crypto/internal/skcipher.h | 114 +++++-
include/crypto/skcipher.h | 309 ++++++++++++++-
include/linux/crypto.h | 1 +
8 files changed, 1086 insertions(+), 45 deletions(-)
create mode 100644 crypto/lskcipher.c
create mode 100644 crypto/skcipher.h

diff --git a/crypto/Makefile b/crypto/Makefile
index 953a7e105e58..5ac6876f935a 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -16,7 +16,11 @@ obj-$(CONFIG_CRYPTO_ALGAPI2) += crypto_algapi.o
obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
obj-$(CONFIG_CRYPTO_GENIV) += geniv.o

-obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
+crypto_skcipher-y += lskcipher.o
+crypto_skcipher-y += skcipher.o
+
+obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
+
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index bbcc368b6a55..194a92d677b9 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -929,7 +929,7 @@ static int cryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
return PTR_ERR(algt);

switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
- case CRYPTO_ALG_TYPE_SKCIPHER:
+ case CRYPTO_ALG_TYPE_LSKCIPHER:
return cryptd_create_skcipher(tmpl, tb, algt, &queue);
case CRYPTO_ALG_TYPE_HASH:
return cryptd_create_hash(tmpl, tb, algt, &queue);
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
new file mode 100644
index 000000000000..3343c6d955da
--- /dev/null
+++ b/crypto/lskcipher.c
@@ -0,0 +1,594 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linear symmetric key cipher operations.
+ *
+ * Generic encrypt/decrypt wrapper for ciphers.
+ *
+ * Copyright (c) 2023 Herbert Xu <[email protected]>
+ */
+
+#include <linux/cryptouser.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <net/netlink.h>
+#include "skcipher.h"
+
+static inline struct crypto_lskcipher *__crypto_lskcipher_cast(
+ struct crypto_tfm *tfm)
+{
+ return container_of(tfm, struct crypto_lskcipher, base);
+}
+
+static inline struct lskcipher_alg *__crypto_lskcipher_alg(
+ struct crypto_alg *alg)
+{
+ return container_of(alg, struct lskcipher_alg, co.base);
+}
+
+static inline struct crypto_istat_cipher *lskcipher_get_stat(
+ struct lskcipher_alg *alg)
+{
+ return skcipher_get_stat_common(&alg->co);
+}
+
+static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
+{
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+ return err;
+
+ if (err)
+ atomic64_inc(&istat->err_cnt);
+
+ return err;
+}
+
+static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+ u8 *buffer, *alignbuffer;
+ unsigned long absize;
+ int ret;
+
+ absize = keylen + alignmask;
+ buffer = kmalloc(absize, GFP_ATOMIC);
+ if (!buffer)
+ return -ENOMEM;
+
+ alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+ memcpy(alignbuffer, key, keylen);
+ ret = cipher->setkey(tfm, alignbuffer, keylen);
+ kfree_sensitive(buffer);
+ return ret;
+}
+
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+
+ if (keylen < cipher->co.min_keysize || keylen > cipher->co.max_keysize)
+ return -EINVAL;
+
+ if ((unsigned long)key & alignmask)
+ return lskcipher_setkey_unaligned(tfm, key, keylen);
+ else
+ return cipher->setkey(tfm, key, keylen);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
+
+static int crypto_lskcipher_crypt_unaligned(
+ struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
+ u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final))
+{
+ unsigned ivsize = crypto_lskcipher_ivsize(tfm);
+ unsigned bs = crypto_lskcipher_blocksize(tfm);
+ unsigned cs = crypto_lskcipher_chunksize(tfm);
+ int err;
+ u8 *tiv;
+ u8 *p;
+
+ BUILD_BUG_ON(MAX_CIPHER_BLOCKSIZE > PAGE_SIZE ||
+ MAX_CIPHER_ALIGNMASK >= PAGE_SIZE);
+
+ tiv = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+ if (!tiv)
+ return -ENOMEM;
+
+ memcpy(tiv, iv, ivsize);
+
+ p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+ err = -ENOMEM;
+ if (!p)
+ goto out;
+
+ while (len >= bs) {
+ unsigned chunk = min((unsigned)PAGE_SIZE, len);
+ int err;
+
+ if (chunk > cs)
+ chunk &= ~(cs - 1);
+
+ memcpy(p, src, chunk);
+ err = crypt(tfm, p, p, chunk, tiv, true);
+ if (err)
+ goto out;
+
+ memcpy(dst, p, chunk);
+ src += chunk;
+ dst += chunk;
+ len -= chunk;
+ }
+
+ err = len ? -EINVAL : 0;
+
+out:
+ memcpy(iv, tiv, ivsize);
+ kfree_sensitive(p);
+ kfree_sensitive(tiv);
+ return err;
+}
+
+static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv,
+ int (*crypt)(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst,
+ unsigned len, u8 *iv,
+ bool final))
+{
+ unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+ int ret;
+
+ if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
+ alignmask) {
+ ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
+ crypt);
+ goto out;
+ }
+
+ ret = crypt(tfm, src, dst, len, iv, true);
+
+out:
+ return crypto_lskcipher_errstat(alg, ret);
+}
+
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv)
+{
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ atomic64_inc(&istat->encrypt_cnt);
+ atomic64_add(len, &istat->encrypt_tlen);
+ }
+
+ return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
+
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv)
+{
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+ struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+ atomic64_inc(&istat->decrypt_cnt);
+ atomic64_add(len, &istat->decrypt_tlen);
+ }
+
+ return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
+
+int crypto_lskcipher_setkey_sg(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(tfm);
+
+ return crypto_lskcipher_setkey(*ctx, key, keylen);
+}
+
+static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
+ int (*crypt)(struct crypto_lskcipher *tfm,
+ const u8 *src, u8 *dst,
+ unsigned len, u8 *iv,
+ bool final))
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct crypto_lskcipher *tfm = *ctx;
+ struct skcipher_walk walk;
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, false);
+
+ while (walk.nbytes) {
+ err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
+ walk.nbytes, walk.iv, walk.nbytes == walk.total);
+ err = skcipher_walk_done(&walk, err);
+ }
+
+ return err;
+}
+
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req)
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+ return crypto_lskcipher_crypt_sg(req, alg->encrypt);
+}
+
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req)
+{
+ struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+ struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+ return crypto_lskcipher_crypt_sg(req, alg->decrypt);
+}
+
+static void crypto_lskcipher_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+ alg->exit(skcipher);
+}
+
+static int crypto_lskcipher_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+ struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+ if (alg->exit)
+ skcipher->base.exit = crypto_lskcipher_exit_tfm;
+
+ if (alg->init)
+ return alg->init(skcipher);
+
+ return 0;
+}
+
+static void crypto_lskcipher_free_instance(struct crypto_instance *inst)
+{
+ struct lskcipher_instance *skcipher =
+ container_of(inst, struct lskcipher_instance, s.base);
+
+ skcipher->free(skcipher);
+}
+
+static void __maybe_unused crypto_lskcipher_show(
+ struct seq_file *m, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+
+ seq_printf(m, "type : lskcipher\n");
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "min keysize : %u\n", skcipher->co.min_keysize);
+ seq_printf(m, "max keysize : %u\n", skcipher->co.max_keysize);
+ seq_printf(m, "ivsize : %u\n", skcipher->co.ivsize);
+ seq_printf(m, "chunksize : %u\n", skcipher->co.chunksize);
+}
+
+static int __maybe_unused crypto_lskcipher_report(
+ struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+ struct crypto_report_blkcipher rblkcipher;
+
+ memset(&rblkcipher, 0, sizeof(rblkcipher));
+
+ strscpy(rblkcipher.type, "lskcipher", sizeof(rblkcipher.type));
+ strscpy(rblkcipher.geniv, "<none>", sizeof(rblkcipher.geniv));
+
+ rblkcipher.blocksize = alg->cra_blocksize;
+ rblkcipher.min_keysize = skcipher->co.min_keysize;
+ rblkcipher.max_keysize = skcipher->co.max_keysize;
+ rblkcipher.ivsize = skcipher->co.ivsize;
+
+ return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
+ sizeof(rblkcipher), &rblkcipher);
+}
+
+static int __maybe_unused crypto_lskcipher_report_stat(
+ struct sk_buff *skb, struct crypto_alg *alg)
+{
+ struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+ struct crypto_istat_cipher *istat;
+ struct crypto_stat_cipher rcipher;
+
+ istat = lskcipher_get_stat(skcipher);
+
+ memset(&rcipher, 0, sizeof(rcipher));
+
+ strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
+
+ rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
+ rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
+ rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
+ rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
+ rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
+
+ return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
+}
+
+static const struct crypto_type crypto_lskcipher_type = {
+ .extsize = crypto_alg_extsize,
+ .init_tfm = crypto_lskcipher_init_tfm,
+ .free = crypto_lskcipher_free_instance,
+#ifdef CONFIG_PROC_FS
+ .show = crypto_lskcipher_show,
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_USER)
+ .report = crypto_lskcipher_report,
+#endif
+#ifdef CONFIG_CRYPTO_STATS
+ .report_stat = crypto_lskcipher_report_stat,
+#endif
+ .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_MASK,
+ .type = CRYPTO_ALG_TYPE_LSKCIPHER,
+ .tfmsize = offsetof(struct crypto_lskcipher, base),
+};
+
+static void crypto_lskcipher_exit_tfm_sg(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_lskcipher(*ctx);
+}
+
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+ struct crypto_alg *calg = tfm->__crt_alg;
+ struct crypto_lskcipher *skcipher;
+
+ if (!crypto_mod_get(calg))
+ return -EAGAIN;
+
+ skcipher = crypto_create_tfm(calg, &crypto_lskcipher_type);
+ if (IS_ERR(skcipher)) {
+ crypto_mod_put(calg);
+ return PTR_ERR(skcipher);
+ }
+
+ *ctx = skcipher;
+ tfm->exit = crypto_lskcipher_exit_tfm_sg;
+
+ return 0;
+}
+
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+ struct crypto_instance *inst,
+ const char *name, u32 type, u32 mask)
+{
+ spawn->base.frontend = &crypto_lskcipher_type;
+ return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_lskcipher);
+
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+ u32 type, u32 mask)
+{
+ return crypto_alloc_tfm(alg_name, &crypto_lskcipher_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_lskcipher);
+
+static int lskcipher_prepare_alg(struct lskcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->co.base;
+ int err;
+
+ err = skcipher_prepare_alg_common(&alg->co);
+ if (err)
+ return err;
+
+ if (alg->co.chunksize & (alg->co.chunksize - 1))
+ return -EINVAL;
+
+ base->cra_type = &crypto_lskcipher_type;
+ base->cra_flags |= CRYPTO_ALG_TYPE_LSKCIPHER;
+
+ return 0;
+}
+
+int crypto_register_lskcipher(struct lskcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->co.base;
+ int err;
+
+ err = lskcipher_prepare_alg(alg);
+ if (err)
+ return err;
+
+ return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskcipher);
+
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg)
+{
+ crypto_unregister_alg(&alg->co.base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskcipher);
+
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count)
+{
+ int i, ret;
+
+ for (i = 0; i < count; i++) {
+ ret = crypto_register_lskcipher(&algs[i]);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+
+err:
+ for (--i; i >= 0; --i)
+ crypto_unregister_lskcipher(&algs[i]);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskciphers);
+
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count)
+{
+ int i;
+
+ for (i = count - 1; i >= 0; --i)
+ crypto_unregister_lskcipher(&algs[i]);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskciphers);
+
+int lskcipher_register_instance(struct crypto_template *tmpl,
+ struct lskcipher_instance *inst)
+{
+ int err;
+
+ if (WARN_ON(!inst->free))
+ return -EINVAL;
+
+ err = lskcipher_prepare_alg(&inst->alg);
+ if (err)
+ return err;
+
+ return crypto_register_instance(tmpl, lskcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(lskcipher_register_instance);
+
+static int lskcipher_setkey_simple(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_lskcipher *cipher = lskcipher_cipher_simple(tfm);
+
+ crypto_lskcipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+ crypto_lskcipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+ CRYPTO_TFM_REQ_MASK);
+ return crypto_lskcipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple(struct crypto_lskcipher *tfm)
+{
+ struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+ struct crypto_lskcipher_spawn *spawn;
+ struct crypto_lskcipher *cipher;
+
+ spawn = lskcipher_instance_ctx(inst);
+ cipher = crypto_spawn_lskcipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ *ctx = cipher;
+ return 0;
+}
+
+static void lskcipher_exit_tfm_simple(struct crypto_lskcipher *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+ crypto_free_lskcipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple(struct lskcipher_instance *inst)
+{
+ crypto_drop_lskcipher(lskcipher_instance_ctx(inst));
+ kfree(inst);
+}
+
+/**
+ * lskcipher_alloc_instance_simple - allocate instance of simple block cipher
+ *
+ * Allocate an lskcipher_instance for a simple block cipher mode of operation,
+ * e.g. cbc or ecb. The instance context will have just a single crypto_spawn,
+ * that for the underlying cipher. The {min,max}_keysize, ivsize, blocksize,
+ * alignmask, and priority are set from the underlying cipher but can be
+ * overridden if needed. The tfm context defaults to
+ * struct crypto_lskcipher *, and default ->setkey(), ->init(), and
+ * ->exit() methods are installed.
+ *
+ * @tmpl: the template being instantiated
+ * @tb: the template parameters
+ *
+ * Return: a pointer to the new instance, or an ERR_PTR(). The caller still
+ * needs to register the instance.
+ */
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+ struct crypto_template *tmpl, struct rtattr **tb)
+{
+ u32 mask;
+ struct lskcipher_instance *inst;
+ struct crypto_lskcipher_spawn *spawn;
+ struct lskcipher_alg *cipher_alg;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+ if (err)
+ return ERR_PTR(err);
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+ if (!inst)
+ return ERR_PTR(-ENOMEM);
+
+ spawn = lskcipher_instance_ctx(inst);
+ err = crypto_grab_lskcipher(spawn,
+ lskcipher_crypto_instance(inst),
+ crypto_attr_alg_name(tb[1]), 0, mask);
+ if (err)
+ goto err_free_inst;
+ cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+ err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+ &cipher_alg->co.base);
+ if (err)
+ goto err_free_inst;
+
+ /* Don't allow nesting. */
+ err = -ELOOP;
+ if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
+ goto err_free_inst;
+
+ err = -EINVAL;
+ if (cipher_alg->co.ivsize)
+ goto err_free_inst;
+
+ inst->free = lskcipher_free_instance_simple;
+
+ /* Default algorithm properties, can be overridden */
+ inst->alg.co.base.cra_blocksize = cipher_alg->co.base.cra_blocksize;
+ inst->alg.co.base.cra_alignmask = cipher_alg->co.base.cra_alignmask;
+ inst->alg.co.base.cra_priority = cipher_alg->co.base.cra_priority;
+ inst->alg.co.min_keysize = cipher_alg->co.min_keysize;
+ inst->alg.co.max_keysize = cipher_alg->co.max_keysize;
+ inst->alg.co.ivsize = cipher_alg->co.base.cra_blocksize;
+
+ /* Use struct crypto_lskcipher * by default, can be overridden */
+ inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_lskcipher *);
+ inst->alg.setkey = lskcipher_setkey_simple;
+ inst->alg.init = lskcipher_init_tfm_simple;
+ inst->alg.exit = lskcipher_exit_tfm_simple;
+
+ return inst;
+
+err_free_inst:
+ lskcipher_free_instance_simple(inst);
+ return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(lskcipher_alloc_instance_simple);
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 7b275716cf4e..b9496dc8a609 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -24,8 +24,9 @@
#include <linux/slab.h>
#include <linux/string.h>
#include <net/netlink.h>
+#include "skcipher.h"

-#include "internal.h"
+#define CRYPTO_ALG_TYPE_SKCIPHER_MASK 0x0000000e

enum {
SKCIPHER_WALK_PHYS = 1 << 0,
@@ -43,6 +44,8 @@ struct skcipher_walk_buffer {
u8 buffer[];
};

+static const struct crypto_type crypto_skcipher_type;
+
static int skcipher_walk_next(struct skcipher_walk *walk);

static inline void skcipher_map_src(struct skcipher_walk *walk)
@@ -89,11 +92,7 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
static inline struct crypto_istat_cipher *skcipher_get_stat(
struct skcipher_alg *alg)
{
-#ifdef CONFIG_CRYPTO_STATS
- return &alg->stat;
-#else
- return NULL;
-#endif
+ return skcipher_get_stat_common(&alg->co);
}

static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
@@ -468,6 +467,7 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct skcipher_alg *alg = crypto_skcipher_alg(tfm);

walk->total = req->cryptlen;
walk->nbytes = 0;
@@ -485,10 +485,14 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
SKCIPHER_WALK_SLEEP : 0;

walk->blocksize = crypto_skcipher_blocksize(tfm);
- walk->stride = crypto_skcipher_walksize(tfm);
walk->ivsize = crypto_skcipher_ivsize(tfm);
walk->alignmask = crypto_skcipher_alignmask(tfm);

+ if (alg->co.base.cra_type != &crypto_skcipher_type)
+ walk->stride = alg->co.chunksize;
+ else
+ walk->stride = alg->walksize;
+
return skcipher_walk_first(walk);
}

@@ -616,6 +620,11 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned long alignmask = crypto_skcipher_alignmask(tfm);
int err;

+ if (cipher->co.base.cra_type != &crypto_skcipher_type) {
+ err = crypto_lskcipher_setkey_sg(tfm, key, keylen);
+ goto out;
+ }
+
if (keylen < cipher->min_keysize || keylen > cipher->max_keysize)
return -EINVAL;

@@ -624,6 +633,7 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
else
err = cipher->setkey(tfm, key, keylen);

+out:
if (unlikely(err)) {
skcipher_set_needkey(tfm);
return err;
@@ -649,6 +659,8 @@ int crypto_skcipher_encrypt(struct skcipher_request *req)

if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY;
+ else if (alg->co.base.cra_type != &crypto_skcipher_type)
+ ret = crypto_lskcipher_encrypt_sg(req);
else
ret = alg->encrypt(req);

@@ -671,6 +683,8 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)

if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
ret = -ENOKEY;
+ else if (alg->co.base.cra_type != &crypto_skcipher_type)
+ ret = crypto_lskcipher_decrypt_sg(req);
else
ret = alg->decrypt(req);

@@ -693,6 +707,9 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)

skcipher_set_needkey(skcipher);

+ if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+ return crypto_init_lskcipher_ops_sg(tfm);
+
if (alg->exit)
skcipher->base.exit = crypto_skcipher_exit_tfm;

@@ -702,6 +719,14 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
return 0;
}

+static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+{
+ if (alg->cra_type != &crypto_skcipher_type)
+ return sizeof(struct crypto_lskcipher *);
+
+ return crypto_alg_extsize(alg);
+}
+
static void crypto_skcipher_free_instance(struct crypto_instance *inst)
{
struct skcipher_instance *skcipher =
@@ -770,7 +795,7 @@ static int __maybe_unused crypto_skcipher_report_stat(
}

static const struct crypto_type crypto_skcipher_type = {
- .extsize = crypto_alg_extsize,
+ .extsize = crypto_skcipher_extsize,
.init_tfm = crypto_skcipher_init_tfm,
.free = crypto_skcipher_free_instance,
#ifdef CONFIG_PROC_FS
@@ -783,7 +808,7 @@ static const struct crypto_type crypto_skcipher_type = {
.report_stat = crypto_skcipher_report_stat,
#endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
- .maskset = CRYPTO_ALG_TYPE_MASK,
+ .maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
.type = CRYPTO_ALG_TYPE_SKCIPHER,
.tfmsize = offsetof(struct crypto_skcipher, base),
};
@@ -834,27 +859,43 @@ int crypto_has_skcipher(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_has_skcipher);

-static int skcipher_prepare_alg(struct skcipher_alg *alg)
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
{
- struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
+ struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
struct crypto_alg *base = &alg->base;

- if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
- alg->walksize > PAGE_SIZE / 8)
+ if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
return -EINVAL;

if (!alg->chunksize)
alg->chunksize = base->cra_blocksize;
+
+ base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+
+ if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+ memset(istat, 0, sizeof(*istat));
+
+ return 0;
+}
+
+static int skcipher_prepare_alg(struct skcipher_alg *alg)
+{
+ struct crypto_alg *base = &alg->base;
+ int err;
+
+ err = skcipher_prepare_alg_common(&alg->co);
+ if (err)
+ return err;
+
+ if (alg->walksize > PAGE_SIZE / 8)
+ return -EINVAL;
+
if (!alg->walksize)
alg->walksize = alg->chunksize;

base->cra_type = &crypto_skcipher_type;
- base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;

- if (IS_ENABLED(CONFIG_CRYPTO_STATS))
- memset(istat, 0, sizeof(*istat));
-
return 0;
}

diff --git a/crypto/skcipher.h b/crypto/skcipher.h
new file mode 100644
index 000000000000..6f1295f0fef2
--- /dev/null
+++ b/crypto/skcipher.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Cryptographic API.
+ *
+ * Copyright (c) 2023 Herbert Xu <[email protected]>
+ */
+#ifndef _LOCAL_CRYPTO_SKCIPHER_H
+#define _LOCAL_CRYPTO_SKCIPHER_H
+
+#include <crypto/internal/skcipher.h>
+#include "internal.h"
+
+static inline struct crypto_istat_cipher *skcipher_get_stat_common(
+ struct skcipher_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+ return &alg->stat;
+#else
+ return NULL;
+#endif
+}
+
+int crypto_lskcipher_setkey_sg(struct crypto_skcipher *tfm, const u8 *key,
+ unsigned int keylen);
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg);
+
+#endif /* _LOCAL_CRYPTO_SKCIPHER_H */
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index fb3d9e899f52..4382fd707b8a 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -36,10 +36,25 @@ struct skcipher_instance {
};
};

+struct lskcipher_instance {
+ void (*free)(struct lskcipher_instance *inst);
+ union {
+ struct {
+ char head[offsetof(struct lskcipher_alg, co.base)];
+ struct crypto_instance base;
+ } s;
+ struct lskcipher_alg alg;
+ };
+};
+
struct crypto_skcipher_spawn {
struct crypto_spawn base;
};

+struct crypto_lskcipher_spawn {
+ struct crypto_spawn base;
+};
+
struct skcipher_walk {
union {
struct {
@@ -80,6 +95,12 @@ static inline struct crypto_instance *skcipher_crypto_instance(
return &inst->s.base;
}

+static inline struct crypto_instance *lskcipher_crypto_instance(
+ struct lskcipher_instance *inst)
+{
+ return &inst->s.base;
+}
+
static inline struct skcipher_instance *skcipher_alg_instance(
struct crypto_skcipher *skcipher)
{
@@ -87,11 +108,23 @@ static inline struct skcipher_instance *skcipher_alg_instance(
struct skcipher_instance, alg);
}

+static inline struct lskcipher_instance *lskcipher_alg_instance(
+ struct crypto_lskcipher *lskcipher)
+{
+ return container_of(crypto_lskcipher_alg(lskcipher),
+ struct lskcipher_instance, alg);
+}
+
static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
{
return crypto_instance_ctx(skcipher_crypto_instance(inst));
}

+static inline void *lskcipher_instance_ctx(struct lskcipher_instance *inst)
+{
+ return crypto_instance_ctx(lskcipher_crypto_instance(inst));
+}
+
static inline void skcipher_request_complete(struct skcipher_request *req, int err)
{
crypto_request_complete(&req->base, err);
@@ -101,29 +134,56 @@ int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
struct crypto_instance *inst,
const char *name, u32 type, u32 mask);

+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+ struct crypto_instance *inst,
+ const char *name, u32 type, u32 mask);
+
static inline void crypto_drop_skcipher(struct crypto_skcipher_spawn *spawn)
{
crypto_drop_spawn(&spawn->base);
}

+static inline void crypto_drop_lskcipher(struct crypto_lskcipher_spawn *spawn)
+{
+ crypto_drop_spawn(&spawn->base);
+}
+
static inline struct skcipher_alg *crypto_skcipher_spawn_alg(
struct crypto_skcipher_spawn *spawn)
{
return container_of(spawn->base.alg, struct skcipher_alg, base);
}

+static inline struct lskcipher_alg *crypto_lskcipher_spawn_alg(
+ struct crypto_lskcipher_spawn *spawn)
+{
+ return container_of(spawn->base.alg, struct lskcipher_alg, co.base);
+}
+
static inline struct skcipher_alg *crypto_spawn_skcipher_alg(
struct crypto_skcipher_spawn *spawn)
{
return crypto_skcipher_spawn_alg(spawn);
}

+static inline struct lskcipher_alg *crypto_spawn_lskcipher_alg(
+ struct crypto_lskcipher_spawn *spawn)
+{
+ return crypto_lskcipher_spawn_alg(spawn);
+}
+
static inline struct crypto_skcipher *crypto_spawn_skcipher(
struct crypto_skcipher_spawn *spawn)
{
return crypto_spawn_tfm2(&spawn->base);
}

+static inline struct crypto_lskcipher *crypto_spawn_lskcipher(
+ struct crypto_lskcipher_spawn *spawn)
+{
+ return crypto_spawn_tfm2(&spawn->base);
+}
+
static inline void crypto_skcipher_set_reqsize(
struct crypto_skcipher *skcipher, unsigned int reqsize)
{
@@ -144,6 +204,13 @@ void crypto_unregister_skciphers(struct skcipher_alg *algs, int count);
int skcipher_register_instance(struct crypto_template *tmpl,
struct skcipher_instance *inst);

+int crypto_register_lskcipher(struct lskcipher_alg *alg);
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
+int lskcipher_register_instance(struct crypto_template *tmpl,
+ struct lskcipher_instance *inst);
+
int skcipher_walk_done(struct skcipher_walk *walk, int err);
int skcipher_walk_virt(struct skcipher_walk *walk,
struct skcipher_request *req,
@@ -166,6 +233,11 @@ static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm)
return crypto_tfm_ctx(&tfm->base);
}

+static inline void *crypto_lskcipher_ctx(struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_ctx(&tfm->base);
+}
+
static inline void *crypto_skcipher_ctx_dma(struct crypto_skcipher *tfm)
{
return crypto_tfm_ctx_dma(&tfm->base);
@@ -209,21 +281,16 @@ static inline unsigned int crypto_skcipher_alg_walksize(
return alg->walksize;
}

-/**
- * crypto_skcipher_walksize() - obtain walk size
- * @tfm: cipher handle
- *
- * In some cases, algorithms can only perform optimally when operating on
- * multiple blocks in parallel. This is reflected by the walksize, which
- * must be a multiple of the chunksize (or equal if the concern does not
- * apply)
- *
- * Return: walk size in bytes
- */
-static inline unsigned int crypto_skcipher_walksize(
- struct crypto_skcipher *tfm)
+static inline unsigned int crypto_lskcipher_alg_min_keysize(
+ struct lskcipher_alg *alg)
{
- return crypto_skcipher_alg_walksize(crypto_skcipher_alg(tfm));
+ return alg->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_alg_max_keysize(
+ struct lskcipher_alg *alg)
+{
+ return alg->co.max_keysize;
}

/* Helpers for simple block cipher modes of operation */
@@ -249,5 +316,24 @@ static inline struct crypto_alg *skcipher_ialg_simple(
return crypto_spawn_cipher_alg(spawn);
}

+static inline struct crypto_lskcipher *lskcipher_cipher_simple(
+ struct crypto_lskcipher *tfm)
+{
+ struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+ return *ctx;
+}
+
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+ struct crypto_template *tmpl, struct rtattr **tb);
+
+static inline struct lskcipher_alg *lskcipher_ialg_simple(
+ struct lskcipher_instance *inst)
+{
+ struct crypto_lskcipher_spawn *spawn = lskcipher_instance_ctx(inst);
+
+ return crypto_lskcipher_spawn_alg(spawn);
+}
+
#endif /* _CRYPTO_INTERNAL_SKCIPHER_H */

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 080d1ba3611d..a648ef5ce897 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -49,6 +49,10 @@ struct crypto_sync_skcipher {
struct crypto_skcipher base;
};

+struct crypto_lskcipher {
+ struct crypto_tfm base;
+};
+
/*
* struct crypto_istat_cipher - statistics for cipher algorithm
* @encrypt_cnt: number of encrypt requests
@@ -65,6 +69,43 @@ struct crypto_istat_cipher {
atomic64_t err_cnt;
};

+#ifdef CONFIG_CRYPTO_STATS
+#define SKCIPHER_ALG_COMMON_STAT struct crypto_istat_cipher stat;
+#else
+#define SKCIPHER_ALG_COMMON_STAT
+#endif
+
+/*
+ * struct skcipher_alg_common - common properties of skcipher_alg
+ * @min_keysize: Minimum key size supported by the transformation. This is the
+ * smallest key length supported by this transformation algorithm.
+ * This must be set to one of the pre-defined values as this is
+ * not hardware specific. Possible values for this field can be
+ * found via git grep "_MIN_KEY_SIZE" include/crypto/
+ * @max_keysize: Maximum key size supported by the transformation. This is the
+ * largest key length supported by this transformation algorithm.
+ * This must be set to one of the pre-defined values as this is
+ * not hardware specific. Possible values for this field can be
+ * found via git grep "_MAX_KEY_SIZE" include/crypto/
+ * @ivsize: IV size applicable for transformation. The consumer must provide an
+ * IV of exactly that size to perform the encrypt or decrypt operation.
+ * @chunksize: Equal to the block size except for stream ciphers such as
+ * CTR where it is set to the underlying block size.
+ * @stat: Statistics for cipher algorithm
+ * @base: Definition of a generic crypto algorithm.
+ */
+#define SKCIPHER_ALG_COMMON { \
+ unsigned int min_keysize; \
+ unsigned int max_keysize; \
+ unsigned int ivsize; \
+ unsigned int chunksize; \
+ \
+ SKCIPHER_ALG_COMMON_STAT \
+ \
+ struct crypto_alg base; \
+}
+struct skcipher_alg_common SKCIPHER_ALG_COMMON;
+
/**
* struct skcipher_alg - symmetric key cipher definition
* @min_keysize: Minimum key size supported by the transformation. This is the
@@ -120,6 +161,7 @@ struct crypto_istat_cipher {
* in parallel. Should be a multiple of chunksize.
* @stat: Statistics for cipher algorithm
* @base: Definition of a generic crypto algorithm.
+ * @co: see struct skcipher_alg_common
*
* All fields except @ivsize are mandatory and must be filled.
*/
@@ -131,17 +173,55 @@ struct skcipher_alg {
int (*init)(struct crypto_skcipher *tfm);
void (*exit)(struct crypto_skcipher *tfm);

- unsigned int min_keysize;
- unsigned int max_keysize;
- unsigned int ivsize;
- unsigned int chunksize;
unsigned int walksize;

-#ifdef CONFIG_CRYPTO_STATS
- struct crypto_istat_cipher stat;
-#endif
+ union {
+ struct SKCIPHER_ALG_COMMON;
+ struct skcipher_alg_common co;
+ };
+};

- struct crypto_alg base;
+/**
+ * struct lskcipher_alg - linear symmetric key cipher definition
+ * @setkey: Set key for the transformation. This function is used to either
+ * program a supplied key into the hardware or store the key in the
+ * transformation context for programming it later. Note that this
+ * function does modify the transformation context. This function can
+ * be called multiple times during the existence of the transformation
+ * object, so one must make sure the key is properly reprogrammed into
+ * the hardware. This function is also responsible for checking the key
+ * length for validity. In case a software fallback was put in place in
+ * the @cra_init call, this function might need to use the fallback if
+ * the algorithm doesn't support all of the key sizes.
+ * @encrypt: Encrypt a number of bytes. This function is used to encrypt
+ * the supplied data. This function shall not modify
+ * the transformation context, as this function may be called
+ * in parallel with the same transformation object. Data
+ * may be left over if length is not a multiple of blocks
+ * and there is more to come (final == false). The number of
+ * left-over bytes should be returned in case of success.
+ * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
+ * @encrypt and the conditions are exactly the same.
+ * @init: Initialize the cryptographic transformation object. This function
+ * is used to initialize the cryptographic transformation object.
+ * This function is called only once at the instantiation time, right
+ * after the transformation context was allocated.
+ * @exit: Deinitialize the cryptographic transformation object. This is a
+ * counterpart to @init, used to remove various changes set in
+ * @init.
+ * @co: see struct skcipher_alg_common
+ */
+struct lskcipher_alg {
+ int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
+ unsigned int keylen);
+ int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final);
+ int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv, bool final);
+ int (*init)(struct crypto_lskcipher *tfm);
+ void (*exit)(struct crypto_lskcipher *tfm);
+
+ struct skcipher_alg_common co;
};

#define MAX_SYNC_SKCIPHER_REQSIZE 384
@@ -213,12 +293,36 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
u32 type, u32 mask);

+
+/**
+ * crypto_alloc_lskcipher() - allocate linear symmetric key cipher handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ * lskcipher
+ * @type: specifies the type of the cipher
+ * @mask: specifies the mask for the cipher
+ *
+ * Allocate a cipher handle for an lskcipher. The returned struct
+ * crypto_lskcipher is the cipher handle that is required for any subsequent
+ * API invocation for that lskcipher.
+ *
+ * Return: allocated cipher handle in case of success; IS_ERR() is true in case
+ * of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+ u32 type, u32 mask);
+
static inline struct crypto_tfm *crypto_skcipher_tfm(
struct crypto_skcipher *tfm)
{
return &tfm->base;
}

+static inline struct crypto_tfm *crypto_lskcipher_tfm(
+ struct crypto_lskcipher *tfm)
+{
+ return &tfm->base;
+}
+
/**
* crypto_free_skcipher() - zeroize and free cipher handle
* @tfm: cipher handle to be freed
@@ -235,6 +339,17 @@ static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm)
crypto_free_skcipher(&tfm->base);
}

+/**
+ * crypto_free_lskcipher() - zeroize and free cipher handle
+ * @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
+ */
+static inline void crypto_free_lskcipher(struct crypto_lskcipher *tfm)
+{
+ crypto_destroy_tfm(tfm, crypto_lskcipher_tfm(tfm));
+}
+
/**
* crypto_has_skcipher() - Search for the availability of an skcipher.
* @alg_name: is the cra_name / name or cra_driver_name / driver name of the
@@ -253,6 +368,19 @@ static inline const char *crypto_skcipher_driver_name(
return crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm));
}

+static inline const char *crypto_lskcipher_driver_name(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_alg_driver_name(crypto_lskcipher_tfm(tfm));
+}
+
+static inline struct skcipher_alg_common *crypto_skcipher_alg_common(
+ struct crypto_skcipher *tfm)
+{
+ return container_of(crypto_skcipher_tfm(tfm)->__crt_alg,
+ struct skcipher_alg_common, base);
+}
+
static inline struct skcipher_alg *crypto_skcipher_alg(
struct crypto_skcipher *tfm)
{
@@ -260,11 +388,24 @@ static inline struct skcipher_alg *crypto_skcipher_alg(
struct skcipher_alg, base);
}

+static inline struct lskcipher_alg *crypto_lskcipher_alg(
+ struct crypto_lskcipher *tfm)
+{
+ return container_of(crypto_lskcipher_tfm(tfm)->__crt_alg,
+ struct lskcipher_alg, co.base);
+}
+
static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
{
return alg->ivsize;
}

+static inline unsigned int crypto_lskcipher_alg_ivsize(
+ struct lskcipher_alg *alg)
+{
+ return alg->co.ivsize;
+}
+
/**
* crypto_skcipher_ivsize() - obtain IV size
* @tfm: cipher handle
@@ -276,7 +417,7 @@ static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
*/
static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->ivsize;
+ return crypto_skcipher_alg_common(tfm)->ivsize;
}

static inline unsigned int crypto_sync_skcipher_ivsize(
@@ -285,6 +426,21 @@ static inline unsigned int crypto_sync_skcipher_ivsize(
return crypto_skcipher_ivsize(&tfm->base);
}

+/**
+ * crypto_lskcipher_ivsize() - obtain IV size
+ * @tfm: cipher handle
+ *
+ * The size of the IV for the lskcipher referenced by the cipher handle is
+ * returned. This IV size may be zero if the cipher does not need an IV.
+ *
+ * Return: IV size in bytes
+ */
+static inline unsigned int crypto_lskcipher_ivsize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.ivsize;
+}
+
/**
* crypto_skcipher_blocksize() - obtain block size of cipher
* @tfm: cipher handle
@@ -301,12 +457,34 @@ static inline unsigned int crypto_skcipher_blocksize(
return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
}

+/**
+ * crypto_lskcipher_blocksize() - obtain block size of cipher
+ * @tfm: cipher handle
+ *
+ * The block size for the lskcipher referenced with the cipher handle is
+ * returned. The caller may use that information to allocate appropriate
+ * memory for the data returned by the encryption or decryption operation
+ *
+ * Return: block size of cipher
+ */
+static inline unsigned int crypto_lskcipher_blocksize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_alg_blocksize(crypto_lskcipher_tfm(tfm));
+}
+
static inline unsigned int crypto_skcipher_alg_chunksize(
struct skcipher_alg *alg)
{
return alg->chunksize;
}

+static inline unsigned int crypto_lskcipher_alg_chunksize(
+ struct lskcipher_alg *alg)
+{
+ return alg->co.chunksize;
+}
+
/**
* crypto_skcipher_chunksize() - obtain chunk size
* @tfm: cipher handle
@@ -321,7 +499,24 @@ static inline unsigned int crypto_skcipher_alg_chunksize(
static inline unsigned int crypto_skcipher_chunksize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm));
+ return crypto_skcipher_alg_common(tfm)->chunksize;
+}
+
+/**
+ * crypto_lskcipher_chunksize() - obtain chunk size
+ * @tfm: cipher handle
+ *
+ * The block size is set to one for ciphers such as CTR. However,
+ * you still need to provide incremental updates in multiples of
+ * the underlying block size as the IV does not have sub-block
+ * granularity. This is known in this API as the chunk size.
+ *
+ * Return: chunk size in bytes
+ */
+static inline unsigned int crypto_lskcipher_chunksize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg_chunksize(crypto_lskcipher_alg(tfm));
}

static inline unsigned int crypto_sync_skcipher_blocksize(
@@ -336,6 +531,12 @@ static inline unsigned int crypto_skcipher_alignmask(
return crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm));
}

+static inline unsigned int crypto_lskcipher_alignmask(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_alg_alignmask(crypto_lskcipher_tfm(tfm));
+}
+
static inline u32 crypto_skcipher_get_flags(struct crypto_skcipher *tfm)
{
return crypto_tfm_get_flags(crypto_skcipher_tfm(tfm));
@@ -371,6 +572,23 @@ static inline void crypto_sync_skcipher_clear_flags(
crypto_skcipher_clear_flags(&tfm->base, flags);
}

+static inline u32 crypto_lskcipher_get_flags(struct crypto_lskcipher *tfm)
+{
+ return crypto_tfm_get_flags(crypto_lskcipher_tfm(tfm));
+}
+
+static inline void crypto_lskcipher_set_flags(struct crypto_lskcipher *tfm,
+ u32 flags)
+{
+ crypto_tfm_set_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
+static inline void crypto_lskcipher_clear_flags(struct crypto_lskcipher *tfm,
+ u32 flags)
+{
+ crypto_tfm_clear_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
/**
* crypto_skcipher_setkey() - set key for cipher
* @tfm: cipher handle
@@ -396,16 +614,47 @@ static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
return crypto_skcipher_setkey(&tfm->base, key, keylen);
}

+/**
+ * crypto_lskcipher_setkey() - set key for cipher
+ * @tfm: cipher handle
+ * @key: buffer holding the key
+ * @keylen: length of the key in bytes
+ *
+ * The caller provided key is set for the lskcipher referenced by the cipher
+ * handle.
+ *
+ * Note, the key length determines the cipher variant. Many block ciphers come
+ * in several variants depending on the key size, such as AES-128 vs. AES-192
+ * vs. AES-256. Providing a 16-byte key to an AES cipher handle therefore
+ * selects AES-128.
+ *
+ * Return: 0 if the setting of the key was successful; < 0 if an error occurred
+ */
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm,
+ const u8 *key, unsigned int keylen);
+
static inline unsigned int crypto_skcipher_min_keysize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->min_keysize;
+ return crypto_skcipher_alg_common(tfm)->min_keysize;
}

static inline unsigned int crypto_skcipher_max_keysize(
struct crypto_skcipher *tfm)
{
- return crypto_skcipher_alg(tfm)->max_keysize;
+ return crypto_skcipher_alg_common(tfm)->max_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_min_keysize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_max_keysize(
+ struct crypto_lskcipher *tfm)
+{
+ return crypto_lskcipher_alg(tfm)->co.max_keysize;
}

/**
@@ -457,6 +706,42 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
*/
int crypto_skcipher_decrypt(struct skcipher_request *req);

+/**
+ * crypto_lskcipher_encrypt() - encrypt plaintext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ * by crypto_lskcipher_ivsize
+ *
+ * Encrypt plaintext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful; a positive value
+ * indicates the number of bytes left unprocessed;
+ * < 0 if an error occurred
+ */
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv);
+
+/**
+ * crypto_lskcipher_decrypt() - decrypt ciphertext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ * by crypto_lskcipher_ivsize
+ *
+ * Decrypt ciphertext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful; a positive value
+ * indicates the number of bytes left unprocessed;
+ * < 0 if an error occurred
+ */
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+ u8 *dst, unsigned len, u8 *iv);
+
/**
* DOC: Symmetric Key Cipher Request Handle
*
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index a0780deb017a..f3c3a3b27fac 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -24,6 +24,7 @@
#define CRYPTO_ALG_TYPE_CIPHER 0x00000001
#define CRYPTO_ALG_TYPE_COMPRESS 0x00000002
#define CRYPTO_ALG_TYPE_AEAD 0x00000003
+#define CRYPTO_ALG_TYPE_LSKCIPHER 0x00000004
#define CRYPTO_ALG_TYPE_SKCIPHER 0x00000005
#define CRYPTO_ALG_TYPE_AKCIPHER 0x00000006
#define CRYPTO_ALG_TYPE_SIG 0x00000007
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
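
For context, a minimal kernel-side sketch of how a caller might use the API
added above (hypothetical caller; error handling abbreviated; "cbc(aes)" and
the buffer sizes are purely illustrative):

#include <crypto/skcipher.h>

static int lskcipher_example(const u8 *key, unsigned int keylen)
{
	struct crypto_lskcipher *tfm;
	u8 iv[16] = {};			/* cbc(aes) IV; see crypto_lskcipher_ivsize() */
	u8 src[64] = {}, dst[64];	/* a whole multiple of the block size */
	int err;

	tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, keylen);
	if (err)
		goto out;

	/* A positive return value would mean bytes were left unprocessed. */
	err = crypto_lskcipher_encrypt(tfm, src, dst, sizeof(src), iv);

out:
	crypto_free_lskcipher(tfm);
	return err;
}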

2023-09-17 16:27:57

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Thu, 14 Sept 2023 at 11:34, Herbert Xu <[email protected]> wrote:
>
> On Thu, Sep 14, 2023 at 11:31:14AM +0200, Ard Biesheuvel wrote:
> >
> > ecb(aes)
>
> This is unnecessary as the generic template will construct an
> algorithm that's almost exactly the same as the underlying
> algorithm. But you could register it if you want to. The
> template instantiation is a one-off event.
>

Ported my RISC-V AES implementation here:
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes

I will get back to this after my holidays, early October.

Thanks,

2023-09-19 04:04:46

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 0/8] crypto: Add lskcipher API type

On Sun, Sep 17, 2023 at 06:24:32PM +0200, Ard Biesheuvel wrote:
>
> Ported my RISC-V AES implementation here:
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes

Looks good to me.

> I will get back to this after my holidays, early October.

Have a great time!

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-20 06:29:37

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Thu, Sep 14, 2023 at 04:28:24PM +0800, Herbert Xu wrote:
> Add a new API type lskcipher designed for taking straight kernel
> pointers instead of SG lists. Its relationship to skcipher will
> be analogous to that between shash and ahash.

Is lskcipher only for algorithms that can be computed incrementally? That would
exclude the wide-block modes, and maybe others too. And if so, what is the
model for incremental computation? Based on crypto_lskcipher_crypt_sg(), all
the state is assumed to be carried forward in the "IV". Does that work for all
algorithms? Note that shash has an arbitrary state struct (shash_desc) instead.

- Eric
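
As a rough sketch of the processing model being asked about here (this is not
the actual crypto_lskcipher_crypt_sg(), and the chunk descriptor type is made
up for illustration): each linear run of data is handed to the lskcipher
->encrypt hook, and anything the algorithm needs in order to continue has to
round-trip through the iv buffer:

struct linear_chunk {			/* hypothetical helper type */
	const u8 *src;
	u8 *dst;
	unsigned int len;
};

static int encrypt_chunks_model(struct crypto_lskcipher *tfm,
				const struct linear_chunk *c, int n, u8 *iv)
{
	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
	int i, err = 0;

	for (i = 0; i < n && err >= 0; i++)
		/* final == true only on the very last chunk; handling of
		 * positive (left-over byte) returns is elided here. */
		err = alg->encrypt(tfm, c[i].src, c[i].dst, c[i].len, iv,
				   i == n - 1);

	return err < 0 ? err : 0;
}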

2023-09-21 18:28:03

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Tue, Sep 19, 2023 at 11:25:51PM -0700, Eric Biggers wrote:
>
> Is lskcipher only for algorithms that can be computed incrementally? That would
> exclude the wide-block modes, and maybe others too. And if so, what is the

You mean things like adiantum? We could add a flag for that so
the skcipher wrapper linearises the input before calling lskcipher.

> model for incremental computation? Based on crypto_lskcipher_crypt_sg(), all
> the state is assumed to be carried forward in the "IV". Does that work for all
> algorithms? Note that shash has an arbitrary state struct (shash_desc) instead.

Is there any practical difference? You could always represent
one as the other, no?

The only case where it would matter is if an algorithm had both
an IV as well as additional state that should not be passed along
as part of the IV, do you have anything in mind?

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-09-22 03:27:33

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Thu, Sep 21, 2023 at 12:32:17PM +0800, Herbert Xu wrote:
> On Tue, Sep 19, 2023 at 11:25:51PM -0700, Eric Biggers wrote:
> >
> > Is lskcipher only for algorithms that can be computed incrementally? That would
> > exclude the wide-block modes, and maybe others too. And if so, what is the
>
> You mean things like adiantum?

Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
the data. As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
supported by the kernel, and it would be an "aead", not an "skcipher").

> We could add a flag for that so
> the skcipher wrapper linearises the input before calling lskcipher.

That makes sense, but I suppose this would mean adding code that allocates huge
scratch buffers, like what the infamous crypto/scompress.c does? I hope that we
can ensure that these buffers are only allocated when they are actually needed.

>
> > model for incremental computation? Based on crypto_lskcipher_crypt_sg(), all
> > the state is assumed to be carried forward in the "IV". Does that work for all
> > algorithms? Note that shash has an arbitrary state struct (shash_desc) instead.
>
> Is there any practical difference? You could always represent
> one as the other, no?
>
> The only case where it would matter is if an algorithm had both
> an IV as well as additional state that should not be passed along
> as part of the IV, do you have anything in mind?

Well, IV is *initialization vector*: a value that the algorithm uses as input.
It shouldn't be overloaded to represent some internal intermediate state. We
already made this mistake with the iv vs. iv_out thing, which only ever got
implemented by CBC and CTR, and people repeatedly get confused by. So we know
it technically works for those two algorithms, but not anything else.

With ChaCha, for example, it makes more sense to use 16-word state matrix as the
intermediate state instead of the 4-word "IV". (See chacha_crypt().)
Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.

- Eric
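
For reference, a sketch of the chacha_crypt() point (state setup and, for
XChaCha, the HChaCha step are omitted here): the library call advances the
full 16-word state, so a split operation continues seamlessly only if the
caller carries that state forward, not just the 4-word IV.

#include <crypto/chacha.h>

static void chacha_split_example(u32 state[16], u8 *dst, const u8 *src)
{
	/*
	 * chacha_crypt() updates the block counter inside @state, so two
	 * calls over consecutive block-aligned halves pick up exactly where
	 * the previous call stopped.
	 */
	chacha_crypt(state, dst, src, 4096, 20);
	chacha_crypt(state, dst + 4096, src + 4096, 4096, 20);
}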

2023-11-17 06:31:06

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
>
> Well, IV is *initialization vector*: a value that the algorithm uses as input.
> It shouldn't be overloaded to represent some internal intermediate state. We
> already made this mistake with the iv vs. iv_out thing, which only ever got
> implemented by CBC and CTR, and people repeatedly get confused by. So we know
> it technically works for those two algorithms, but not anything else.
>
> With ChaCha, for example, it makes more sense to use 16-word state matrix as the
> intermediate state instead of the 4-word "IV". (See chacha_crypt().)
> Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.

Fair enough, but what's the point of keeping the internal state
across two lskcipher calls? The whole point of lskcipher is that the
input is linear and can be processed in one go.

With shash we must keep the internal state because the API operates
on the update/final model so we need multiple suboperations to finish
each hashing operation.

With ciphers we haven't traditionally done it that way. Are you
thinking of extending lskcipher so that it is more like hashing, with
an explicit finalisation step?

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-17 06:31:12

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Fri, Nov 17, 2023 at 01:19:46PM +0800, Herbert Xu wrote:
> On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
> >
> > Well, IV is *initialization vector*: a value that the algorithm uses as input.
> > It shouldn't be overloaded to represent some internal intermediate state. We
> > already made this mistake with the iv vs. iv_out thing, which only ever got
> > implemented by CBC and CTR, and people repeatedly get confused by. So we know
> > it technically works for those two algorithms, but not anything else.
> >
> > With ChaCha, for example, it makes more sense to use 16-word state matrix as the
> > intermediate state instead of the 4-word "IV". (See chacha_crypt().)
> > Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.
>
> Fair enough, but what's the point of keeping the internal state
> across two lskcipher calls? The whole point of lskcipher is that the
> input is linear and can be processed in one go.
>
> With shash we must keep the internal state because the API operates
> on the update/final model so we need multiple suboperations to finish
> each hashing operation.
>
> With ciphers we haven't traditionally done it that way. Are you
> thinking of extending lskcipher so that it is more like hashing, with
> an explicit finalisation step?

crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
broken up into multiple ones. I think you're arguing that since there's no
"init" or "final", these sub-en/decryptions aren't analogous to "update" but
rather are full en/decryptions that happen to combine to create the larger one.
So sure, looking at it that way, the input/output IV does make sense, though it
does mean that we end up with the confusing "output IV" terminology as well as
having to repeat any setup code, e.g. HChaCha, that some algorithms have.

- Eric
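
Concretely, the input/output IV convention being discussed amounts to the
following (a sketch; it only holds for chaining-capable modes such as
cbc(aes), with block-aligned lengths):

static int split_encrypt_example(struct crypto_lskcipher *tfm,
				 const u8 *src, u8 *dst, u8 *iv)
{
	int err;

	/* First half: the chaining value is written back into @iv. */
	err = crypto_lskcipher_encrypt(tfm, src, dst, 4096, iv);
	if (err < 0)
		return err;

	/*
	 * Second half continues from the value left in @iv; given the same
	 * starting IV, the combined output matches a single 8192-byte call.
	 */
	return crypto_lskcipher_encrypt(tfm, src + 4096, dst + 4096, 4096, iv);
}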

2023-11-17 10:42:43

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
> crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> broken up into multiple ones. I think you're arguing that since there's no

Good point. It means that we'd have to linearise the buffer for
such algorithms, or just write an SG implementation as we do now
in addition to the lskcipher.

Let me think about this a bit more.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-24 10:43:05

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Fri, Nov 17, 2023 at 05:07:22PM +0800, Herbert Xu wrote:
> On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
> > crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> > broken up into multiple ones. I think you're arguing that since there's no

OK I see where some of the confusion is coming from. The current
skcipher interface assumes that the underlying algorithm can be
chained.

So the implementation of chacha is actually wrong as it stands
and it will produce incorrect results when used through if_alg.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-27 22:35:06

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Fri, Nov 24, 2023 at 06:27:25PM +0800, Herbert Xu wrote:
> On Fri, Nov 17, 2023 at 05:07:22PM +0800, Herbert Xu wrote:
> > On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
> > > crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> > > broken up into multiple ones. I think you're arguing that since there's no
>
> OK I see where some of the confusion is coming from. The current
> skcipher interface assumes that the underlying algorithm can be
> chained.
>
> So the implementation of chacha is actually wrong as it stands
> and it will produce incorrect results when used through if_alg.
>

As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
So this really seems like an issue in AF_ALG, not the skcipher API per se.
AF_ALG should not support splitting up encryption/decryption operations on
algorithms that don't support it.

- Eric

2023-11-29 06:36:21

by Herbert Xu

[permalink] [raw]
Subject: [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)

On Mon, Nov 27, 2023 at 02:28:03PM -0800, Eric Biggers wrote:
>
> As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
> So this really seems like an issue in AF_ALG, not the skcipher API per se.
> AF_ALG should not support splitting up encryption/decryption operations on
> algorithms that don't support it.

Yes I can see your view. But it really is only a very small number
of algorithms (basically arc4 and chacha) that are currently broken
in this way. CTS is similarly broken but for a different reason.

Yes we could change the way af_alg operates by removing the ability
to process unlimited amounts of data and instead switching to the
AEAD model where all data is presented together.

However, I think this would be an unnecessary limitation since there
is a way to solve the chaining issue for stream ciphers and others
such as CTS.

So here is my attempt at this, hopefully without causing too much
churn or breakage:

Herbert Xu (4):
crypto: skcipher - Add internal state support
crypto: skcipher - Make use of internal state
crypto: arc4 - Add internal state
crypto: algif_skcipher - Fix stream cipher chaining

crypto/algif_skcipher.c | 71 +++++++++++++++++++++++++--
crypto/arc4.c | 8 ++-
crypto/cbc.c | 6 ++-
crypto/ecb.c | 10 ++--
crypto/lskcipher.c | 42 ++++++++++++----
crypto/skcipher.c | 64 +++++++++++++++++++++++-
include/crypto/if_alg.h | 2 +
include/crypto/skcipher.h | 100 +++++++++++++++++++++++++++++++++++++-
8 files changed, 280 insertions(+), 23 deletions(-)
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-29 22:44:46

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)

On Wed, Nov 29, 2023 at 02:24:18PM +0800, Herbert Xu wrote:
> On Mon, Nov 27, 2023 at 02:28:03PM -0800, Eric Biggers wrote:
> >
> > As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
> > So this really seems like an issue in AF_ALG, not the skcipher API per se.
> > AF_ALG should not support splitting up encryption/decryption operations on
> > algorithms that don't support it.
>
> Yes I can see your view. But it really is only a very small number
> of algorithms (basically arc4 and chacha) that are currently broken
> in this way. CTS is similarly broken but for a different reason.

I don't think that's accurate. CBC and CTR are the only skciphers for which
this behavior is actually tested. Everything else, not just stream ciphers but
all other skciphers, can be assumed to be broken. Even when I added the tests
for "output IV" for CBC and CTR back in 2019 (because I perhaps
over-simplistically just considered those to be missing tests), many
implementations failed and had to be fixed. So I think it's fair to say that
this is not really something that has ever actually been important or even
supported, despite what the intent of the algif_skcipher code may have been. We
could choose to onboard new algorithms to that convention one by one, but we'd
need to add the tests and fix everything failing them, which will be a lot.

- Eric

2023-11-30 02:41:02

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)

On Wed, Nov 29, 2023 at 01:04:21PM -0800, Eric Biggers wrote:
>
> I don't think that's accurate. CBC and CTR are the only skciphers for which
> this behavior is actually tested. Everything else, not just stream ciphers but
> all other skciphers, can be assumed to be broken. Even when I added the tests
> for "output IV" for CBC and CTR back in 2019 (because I perhaps
> over-simplistically just considered those to be missing tests), many
> implementations failed and had to be fixed. So I think it's fair to say that
> this is not really something that has ever actually been important or even
> supported, despite what the intent of the algif_skcipher code may have been. We
> could choose to onboard new algorithms to that convention one by one, but we'd
> need to add the tests and fix everything failing them, which will be a lot.

OK I was perhaps a bit over the top, but it is certainly the case
that for IPsec encryption algorithms, all the underlying algorithms
are able to support chaining. I concede that the majority of disk
encryption algorithms do not.

I'm not worried about the amount of work here since most of it could
be done at the same time as the lskcipher conversion, which is worthy in and
of itself.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-11-30 10:42:44

by Herbert Xu

[permalink] [raw]
Subject: [v2 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)

v2 fixes a crash when no export/import functions are provided.

This series of patches adds the ability to process a skcipher
request in a piecemeal fashion, which is currently only possible
for selected algorithms such as CBC and CTR.

Herbert Xu (4):
crypto: skcipher - Add internal state support
crypto: skcipher - Make use of internal state
crypto: arc4 - Add internal state
crypto: algif_skcipher - Fix stream cipher chaining

crypto/algif_skcipher.c | 71 +++++++++++++++++++++++++--
crypto/arc4.c | 8 ++-
crypto/cbc.c | 6 ++-
crypto/ecb.c | 10 ++--
crypto/lskcipher.c | 42 ++++++++++++----
crypto/skcipher.c | 80 +++++++++++++++++++++++++++++-
include/crypto/if_alg.h | 2 +
include/crypto/skcipher.h | 100 +++++++++++++++++++++++++++++++++++++-
8 files changed, 296 insertions(+), 23 deletions(-)

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-12-02 04:32:44

by Herbert Xu

[permalink] [raw]
Subject: [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)

v3 updates the documentation for crypto_lskcipher_encrypt/decrypt.
v2 fixes a crash when no export/import functions are provided.

This series of patches adds the ability to process a skcipher
request in a piecemeal fashion, which is currently only possible
for selected algorithms such as CBC and CTR.

Herbert Xu (4):
crypto: skcipher - Add internal state support
crypto: skcipher - Make use of internal state
crypto: arc4 - Add internal state
crypto: algif_skcipher - Fix stream cipher chaining

crypto/algif_skcipher.c | 71 ++++++++++++++++++++++-
crypto/arc4.c | 8 ++-
crypto/cbc.c | 6 +-
crypto/ecb.c | 10 ++--
crypto/lskcipher.c | 42 +++++++++++---
crypto/skcipher.c | 80 +++++++++++++++++++++++++-
include/crypto/if_alg.h | 2 +
include/crypto/skcipher.h | 117 +++++++++++++++++++++++++++++++++++---
8 files changed, 306 insertions(+), 30 deletions(-)

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-12-05 10:37:59

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
>
> Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
> the data. As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
> supported by the kernel, and it would be an "aead", not an "skcipher").

Right, AEAD algorithms have never supported incremental processing,
as one of the first algorithms, CCM, required two-pass processing.

We could support incremental processing if we really wanted to. It
would require a model where the user passes the data to the API twice
(or more if future algorithms require it). However, I see no
pressing need for this so I'm happy with just marking such algorithms
as unsupported with algif_skcipher for now. There is also an
alternative of adding an AEAD-like mode for algif_skcipher for these
algorithms but again I don't see the need to do this.

As such I'm going to add a field to indicate that adiantum and hctr2
cannot be used by algif_skcipher.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-12-05 20:34:45

by Eric Biggers

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Tue, Dec 05, 2023 at 04:41:12PM +0800, Herbert Xu wrote:
> On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
> >
> > Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
> > the data. As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
> > supported by the kernel, and it would be an "aead", not an "skcipher").
>
> Right, AEAD algorithms have never supported incremental processing,
> as one of the first algorithms CCM required two-pass processing.
>
> We could support incremental processing if we really wanted to. It
> would require a model where the user passes the data to the API twice
> (or more if future algorithms require it). However, I see no
> pressing need for this so I'm happy with just marking such algorithms
> as unsupported with algif_skcipher for now. There is also an
> alternative of adding an AEAD-like mode for algif_skcipher for these
> algorithms but again I don't see the need to do this.
>
> As such I'm going to add a field to indicate that adiantum and hctr2
> cannot be used by algif_skcipher.
>

Note that 'cryptsetup benchmark' uses AF_ALG, and there are recommendations
floating around the internet to use it to benchmark the various algorithms that
can be used with dm-crypt, including Adiantum. Perhaps it's a bit late to take
away support for algorithms that are already supported? AFAICS, algif_skcipher
only splits up operations if userspace does something like write(8192) followed
by read(4096), i.e. reading less than it wrote. Why not just make
algif_skcipher return an error in that case if the algorithm doesn't support it?

- Eric
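
For reference, the splitting scenario described above looks roughly like this
from userspace (a sketch; error handling and the ALG_SET_OP / ALG_SET_IV
sendmsg() step are omitted, and the algorithm name and sizes are only
illustrative):

#include <linux/if_alg.h>
#include <sys/socket.h>
#include <unistd.h>

int alg_split_example(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "adiantum(xchacha12,aes)",
	};
	unsigned char key[32] = {}, buf[8192] = {};
	int sk, op;

	sk = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(sk, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(sk, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	op = accept(sk, NULL, 0);

	/* ... sendmsg() with ALG_SET_OP / ALG_SET_IV goes here ... */

	write(op, buf, sizeof(buf));	/* submit 8192 bytes */
	read(op, buf, 4096);		/* reading less than written ... */
	read(op, buf + 4096, 4096);	/* ... splits the operation */
	return 0;
}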

2023-12-06 02:36:52

by Herbert Xu

[permalink] [raw]
Subject: Re: [PATCH 4/8] crypto: skcipher - Add lskcipher

On Tue, Dec 05, 2023 at 12:17:57PM -0800, Eric Biggers wrote:
>
> Note that 'cryptsetup benchmark' uses AF_ALG, and there are recommendations
> floating around the internet to use it to benchmark the various algorithms that
> can be used with dm-crypt, including Adiantum. Perhaps it's a bit late to take
> away support for algorithms that are already supported? AFAICS, algif_skcipher
> only splits up operations if userspace does something like write(8192) followed
> by read(4096), i.e. reading less than it wrote. Why not just make
> algif_skcipher return an error in that case if the algorithm doesn't support it?

Yes that should be possible to implement.

Also I've changed my mind on the two-pass strategy. I think
I am going to try to implement it at least internally in the
layer between skcipher and lskcipher. Let me see whether this
is worth pursuing or not for adiantum.

The reason is because after everything else switches over to
lskcipher, it'd be silly to have adiantum remain as skcipher
only. But if adiantum moves over to lskcipher, then we'd need
to disable the skcipher version of it or linearise the input.

Both seem unpalatable and perhaps a two-pass approach won't
be that bad.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt