2018-06-25 21:15:00

by Kees Cook

Subject: [PATCH v2 00/11] crypto: Remove VLA usage

Hi,

v2:
- use 512 instead of PAGE_SIZE / 8 to avoid bloat on large-page archs.
- switch xcbc to "16" max universally.
- fix typo in bounds check for dm buffer size.
- examine actual reqsizes for skcipher and ahash instead of guessing.
- improve names and comments for alg maxes.


This is nearly the last of the VLA removals[1], but it's one of the
largest because crypto gets used in lots of places. After looking
through code, usage, reading the threads Gustavo started, and comparing
the use-cases to the other VLA removals that have landed in the kernel,
I think this series is likely the best way forward to shut the door on
VLAs forever.

As background, the crypto stack usage exists so that callers can do an
immediate bit of work without allocating new memory. This means that other VLA
removal techniques (like just using kmalloc) aren't workable, and the
next common technique is needed: examination of maximum stack usage and
the addition of sanity checks. This series does that, and in several
cases, these maximums were already implicit in the code.
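
To make the pattern concrete, here is a hedged sketch of the recurring
shape (not the literal diff of any one patch below; SHASH_MAX_DIGESTSIZE
is introduced in patch 3):

	/* Before: stack VLA sized by a runtime value */
	u8 digest[crypto_shash_digestsize(tfm)];

	/* After: fixed worst-case size plus a sanity check */
	u8 digest[SHASH_MAX_DIGESTSIZE];

	if (WARN_ON(crypto_shash_digestsize(tfm) > sizeof(digest)))
		return -EINVAL;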

This series is intended to land via the crypto tree, though it touches
dm as well, since there are dependent patches (new crypto #defines
being used).

Thanks!

-Kees

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com



Kees Cook (11):
crypto: xcbc: Remove VLA usage
crypto: cbc: Remove VLA usage
crypto: shash: Remove VLA usage
dm integrity: Remove VLA usage
crypto: ahash: Remove VLA usage
dm verity fec: Remove VLA usage
crypto alg: Introduce generic max blocksize and alignmask
crypto: qat: Remove VLA usage
crypto: shash: Remove VLA usage in unaligned hashing
crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

crypto/ahash.c | 4 ++--
crypto/algapi.c | 7 ++++++-
crypto/algif_hash.c | 2 +-
crypto/shash.c | 25 +++++++++++-------------
crypto/xcbc.c | 9 +++++++--
drivers/crypto/qat/qat_common/qat_algs.c | 8 ++++++--
drivers/md/dm-integrity.c | 23 ++++++++++++++++------
drivers/md/dm-verity-fec.c | 5 ++++-
include/crypto/algapi.h | 4 +++-
include/crypto/cbc.h | 4 +++-
include/crypto/hash.h | 12 ++++++++++--
include/crypto/internal/hash.h | 1 +
include/crypto/internal/skcipher.h | 1 +
include/crypto/skcipher.h | 4 +++-
include/linux/compiler-gcc.h | 1 -
15 files changed, 75 insertions(+), 35 deletions(-)

--
2.17.1



2018-06-25 21:11:51

by Kees Cook

Subject: [PATCH v2 05/11] crypto: ahash: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this
introduces max size macros for ahash, as already done for shash, and
adjusts the crypto user to the max state size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
crypto/ahash.c | 4 ++--
crypto/algif_hash.c | 2 +-
include/crypto/hash.h | 3 +++
3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index a64c143165b1..6435bdbe42fd 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;

- if (alg->halg.digestsize > PAGE_SIZE / 8 ||
- alg->halg.statesize > PAGE_SIZE / 8 ||
+ if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
+ alg->halg.statesize > AHASH_MAX_STATESIZE ||
alg->halg.statesize == 0)
return -EINVAL;

diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index bfcf595fd8f9..8974ee8ebead 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
struct alg_sock *ask = alg_sk(sk);
struct hash_ctx *ctx = ask->private;
struct ahash_request *req = &ctx->req;
- char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
+ char state[AHASH_MAX_STATESIZE];
struct sock *sk2;
struct alg_sock *ask2;
struct hash_ctx *ctx2;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 804e87a9510e..5d79e2f0936e 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -64,6 +64,9 @@ struct ahash_request {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};

+#define AHASH_MAX_DIGESTSIZE 512
+#define AHASH_MAX_STATESIZE 512
+
#define AHASH_REQUEST_ON_STACK(name, ahash) \
char __##name##_desc[sizeof(struct ahash_request) + \
crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
--
2.17.1


2018-06-25 21:12:14

by Kees Cook

Subject: [PATCH v2 08/11] crypto: qat: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new upper bound for the stack buffer. Also adds a sanity check.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Giovanni Cabiddu <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
drivers/crypto/qat/qat_common/qat_algs.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 1138e41d6805..a28edf7b792f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -153,8 +153,8 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
struct sha512_state sha512;
int block_size = crypto_shash_blocksize(ctx->hash_tfm);
int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
- char ipad[block_size];
- char opad[block_size];
+ char ipad[MAX_ALGAPI_BLOCKSIZE];
+ char opad[MAX_ALGAPI_BLOCKSIZE];
__be32 *hash_state_out;
__be64 *hash512_state_out;
int i, offset;
@@ -164,6 +164,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
shash->tfm = ctx->hash_tfm;
shash->flags = 0x0;

+ if (WARN_ON(block_size > sizeof(ipad) ||
+ sizeof(ipad) != sizeof(opad)))
+ return -EINVAL;
+
if (auth_keylen > block_size) {
int ret = crypto_shash_digest(shash, auth_key,
auth_keylen, ipad);
--
2.17.1


2018-06-25 21:13:03

by Kees Cook

Subject: [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

In the quest to remove all stack VLA usage from the kernel[1], this caps
the ahash request size similar to the other limits and adds a sanity
check at registration. Unfortunately, these reqsizes can be huge. Looking
at all callers of crypto_ahash_set_reqsize(), the largest appears to be
664 bytes, based on a combination of manual inspection and pahole output:

4 dcp_sha_req_ctx
40 crypto4xx_ctx
52 hash_req_ctx
80 ahash_request
88 rk_ahash_rctx
104 sun4i_req_ctx
200 mcryptd_hash_request_ctx
216 safexcel_ahash_req
228 sha1_hash_ctx
228 sha256_hash_ctx
248 img_hash_request_ctx
252 mtk_sha_reqctx
276 sahara_sha_reqctx
304 mv_cesa_ahash_req
316 iproc_reqctx_s
320 caam_hash_state
320 qce_sha_reqctx
356 sha512_hash_ctx
384 ahash_req_ctx
400 chcr_ahash_req_ctx
416 stm32_hash_request_ctx
432 talitos_ahash_req_ctx
462 atmel_sha_reqctx
496 ccp_aes_cmac_req_ctx
616 ccp_sha_req_ctx
664 artpec6_hash_request_context

So, this chooses 720 as a larger "round" number for the max.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: Rabin Vincent <[email protected]>
Cc: Lars Persson <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
include/crypto/hash.h | 3 ++-
include/crypto/internal/hash.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 5d79e2f0936e..b550077c4767 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -66,10 +66,11 @@ struct ahash_request {

#define AHASH_MAX_DIGESTSIZE 512
#define AHASH_MAX_STATESIZE 512
+#define AHASH_MAX_REQSIZE 720

#define AHASH_REQUEST_ON_STACK(name, ahash) \
char __##name##_desc[sizeof(struct ahash_request) + \
- crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
+ AHASH_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
struct ahash_request *name = (void *)__##name##_desc

/**
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index a0b0ad9d585e..d96ae5f52125 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -142,6 +142,7 @@ static inline struct ahash_alg *__crypto_ahash_alg(struct crypto_alg *alg)
static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
unsigned int reqsize)
{
+ BUG_ON(reqsize > AHASH_MAX_REQSIZE);
tfm->reqsize = reqsize;
}

--
2.17.1


2018-06-25 21:13:08

by Kees Cook

Subject: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

In the quest to remove all stack VLA usage from the kernel[1], this
caps the skcipher request size similar to other limits and adds a
sanity check at registration. In a manual review of the callers of
crypto_skcipher_set_reqsize(), the largest was 384:

4 sun4i_cipher_req_ctx
6 safexcel_cipher_req
8 cryptd_skcipher_request_ctx
80 cipher_req_ctx
80 skcipher_request
96 crypto_rfc3686_req_ctx
104 nitrox_kcrypt_request
144 mv_cesa_skcipher_std_req
384 rctx

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
include/crypto/internal/skcipher.h | 1 +
include/crypto/skcipher.h | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index e42f7063f245..5035482cbe68 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -130,6 +130,7 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
static inline void crypto_skcipher_set_reqsize(
struct crypto_skcipher *skcipher, unsigned int reqsize)
{
+ BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE);
skcipher->reqsize = reqsize;
}

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..26eba8304d1d 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -139,9 +139,11 @@ struct skcipher_alg {
struct crypto_alg base;
};

+#define SKCIPHER_MAX_REQSIZE 384
+
#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
char __##name##_desc[sizeof(struct skcipher_request) + \
- crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+ SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
struct skcipher_request *name = (void *)__##name##_desc

/**
--
2.17.1


2018-06-25 21:13:40

by Kees Cook

Subject: [PATCH v2 07/11] crypto alg: Introduce generic max blocksize and alignmask

In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than the existing cipher limits) for VLA removal, and
introduces new checks.

At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
crypto/algapi.c | 7 ++++++-
include/crypto/algapi.h | 4 +++-
2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index c0755cf4f53f..496fc51bf215 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -57,9 +57,14 @@ static int crypto_check_alg(struct crypto_alg *alg)
if (alg->cra_alignmask & (alg->cra_alignmask + 1))
return -EINVAL;

- if (alg->cra_blocksize > PAGE_SIZE / 8)
+ /* General maximums for all algs. */
+ if (alg->cra_alignmask > MAX_ALGAPI_ALIGNMASK)
return -EINVAL;

+ if (alg->cra_blocksize > MAX_ALGAPI_BLOCKSIZE)
+ return -EINVAL;
+
+ /* Lower maximums for specific alg types. */
if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
CRYPTO_ALG_TYPE_CIPHER) {
if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index bd5e8ccf1687..21371ac8f355 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -20,8 +20,10 @@
/*
* Maximum values for blocksize and alignmask, used to allocate
* static buffers that are big enough for any combination of
- * ciphers and architectures.
+ * algs and architectures. Ciphers have a lower maximum size.
*/
+#define MAX_ALGAPI_BLOCKSIZE 160
+#define MAX_ALGAPI_ALIGNMASK 63
#define MAX_CIPHER_BLOCKSIZE 16
#define MAX_CIPHER_ALIGNMASK 15

--
2.17.1


2018-06-25 21:14:31

by Kees Cook

Subject: [PATCH v2 02/11] crypto: cbc: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this
uses the upper bounds on blocksize. Since this is always a cipher
blocksize, use the existing cipher max blocksize.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
include/crypto/cbc.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/crypto/cbc.h b/include/crypto/cbc.h
index f5b8bfc22e6d..47db0aac2ab9 100644
--- a/include/crypto/cbc.h
+++ b/include/crypto/cbc.h
@@ -113,7 +113,9 @@ static inline int crypto_cbc_decrypt_inplace(
unsigned int bsize = crypto_skcipher_blocksize(tfm);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
- u8 last_iv[bsize];
+ u8 last_iv[MAX_CIPHER_BLOCKSIZE];
+
+ BUG_ON(bsize > sizeof(last_iv));

/* Start of the last block. */
src += nbytes - (nbytes & (bsize - 1)) - bsize;
--
2.17.1


2018-06-25 21:14:32

by Kees Cook

Subject: [PATCH v2 09/11] crypto: shash: Remove VLA usage in unaligned hashing

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed, since the dropped helper was its only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
crypto/shash.c | 19 ++++++++-----------
include/linux/compiler-gcc.h | 1 -
2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..8081c5e03770 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
}
EXPORT_SYMBOL_GPL(crypto_shash_setkey);

-static inline unsigned int shash_align_buffer_size(unsigned len,
- unsigned long mask)
-{
- typedef u8 __aligned_largest u8_aligned;
- return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
@@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
unsigned long alignmask = crypto_shash_alignmask(tfm);
unsigned int unaligned_len = alignmask + 1 -
((unsigned long)data & alignmask);
- u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
- __aligned_largest;
+ u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;

+ if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+ return -EINVAL;
+
if (unaligned_len > len)
unaligned_len = len;

@@ -124,11 +119,13 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
unsigned long alignmask = crypto_shash_alignmask(tfm);
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned int ds = crypto_shash_digestsize(tfm);
- u8 ubuf[shash_align_buffer_size(ds, alignmask)]
- __aligned_largest;
+ u8 ubuf[SHASH_MAX_DIGESTSIZE];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;

+ if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+ return -EINVAL;
+
err = shash->final(desc, buf);
if (err)
goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
*/
#define __pure __attribute__((pure))
#define __aligned(x) __attribute__((aligned(x)))
-#define __aligned_largest __attribute__((aligned))
#define __printf(a, b) __attribute__((format(printf, a, b)))
#define __scanf(a, b) __attribute__((format(scanf, a, b)))
#define __attribute_const__ __attribute__((__const__))
--
2.17.1


2018-06-25 21:14:40

by Kees Cook

Subject: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
crypto/xcbc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 25c75af50d3f..7aa03beed795 100644
--- a/crypto/xcbc.c
+++ b/crypto/xcbc.c
@@ -57,6 +57,8 @@ struct xcbc_desc_ctx {
u8 ctx[];
};

+#define XCBC_BLOCKSIZE 16
+
static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
@@ -65,7 +67,10 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
int bs = crypto_shash_blocksize(parent);
u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
int err = 0;
- u8 key1[bs];
+ u8 key1[XCBC_BLOCKSIZE];
+
+ if (WARN_ON(bs > sizeof(key1)))
+ return -EINVAL;

if ((err = crypto_cipher_setkey(ctx->child, inkey, keylen)))
return err;
@@ -212,7 +217,7 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
return PTR_ERR(alg);

switch(alg->cra_blocksize) {
- case 16:
+ case XCBC_BLOCKSIZE:
break;
default:
goto out_put_alg;
--
2.17.1


2018-06-25 21:14:49

by Kees Cook

Subject: [PATCH v2 03/11] crypto: shash: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro). Similar limits are turned into macros as well.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
crypto/shash.c | 6 +++---
include/crypto/hash.h | 6 +++++-
2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 5d732c6bb4b2..ab6902c6dae7 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -458,9 +458,9 @@ static int shash_prepare_alg(struct shash_alg *alg)
{
struct crypto_alg *base = &alg->base;

- if (alg->digestsize > PAGE_SIZE / 8 ||
- alg->descsize > PAGE_SIZE / 8 ||
- alg->statesize > PAGE_SIZE / 8)
+ if (alg->digestsize > SHASH_MAX_DIGESTSIZE ||
+ alg->descsize > SHASH_MAX_DESCSIZE ||
+ alg->statesize > SHASH_MAX_STATESIZE)
return -EINVAL;

base->cra_type = &crypto_shash_type;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 76e432cab75d..804e87a9510e 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -151,9 +151,13 @@ struct shash_desc {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};

+#define SHASH_MAX_DIGESTSIZE 512
+#define SHASH_MAX_DESCSIZE 512
+#define SHASH_MAX_STATESIZE 512
+
#define SHASH_DESC_ON_STACK(shash, ctx) \
char __##shash##_desc[sizeof(struct shash_desc) + \
- crypto_shash_descsize(ctx)] CRYPTO_MINALIGN_ATTR; \
+ SHASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
struct shash_desc *shash = (struct shash_desc *)__##shash##_desc

/**
--
2.17.1


2018-06-25 21:14:53

by Kees Cook

Subject: [PATCH v2 06/11] dm verity fec: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this
uses the newly defined max digest size macro. It also adds a sanity
check at use time.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Alasdair Kergon <[email protected]>
Cc: Mike Snitzer <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
drivers/md/dm-verity-fec.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 684af08d0747..fe5cfd1a5fa5 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -212,12 +212,15 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
struct dm_verity_fec_io *fio = fec_io(io);
u64 block, ileaved;
u8 *bbuf, *rs_block;
- u8 want_digest[v->digest_size];
+ u8 want_digest[AHASH_MAX_DIGESTSIZE];
unsigned n, k;

if (neras)
*neras = 0;

+ if (WARN_ON(v->digest_size > sizeof(want_digest)))
+ return -EINVAL;
+
/*
* read each of the rsn data blocks that are part of the RS block, and
* interleave contents to available bufs
--
2.17.1


2018-06-25 21:15:06

by Kees Cook

Subject: [PATCH v2 04/11] dm integrity: Remove VLA usage

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new SHASH_MAX_DIGESTSIZE from the crypto layer to size the on-stack
buffers at their upper bound.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Alasdair Kergon <[email protected]>
Cc: Mike Snitzer <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
---
drivers/md/dm-integrity.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 86438b2f10dd..85e8ce1625a2 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
}
memset(result + size, 0, JOURNAL_MAC_SIZE - size);
} else {
- __u8 digest[size];
+ __u8 digest[SHASH_MAX_DIGESTSIZE];
+
+ if (WARN_ON(size > sizeof(digest))) {
+ dm_integrity_io_error(ic, "digest_size", -EINVAL);
+ goto err;
+ }
r = crypto_shash_final(desc, digest);
if (unlikely(r)) {
dm_integrity_io_error(ic, "crypto_shash_final", r);
@@ -1244,7 +1249,7 @@ static void integrity_metadata(struct work_struct *w)
struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
char *checksums;
unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
- char checksums_onstack[ic->tag_size + extra_space];
+ char checksums_onstack[SHASH_MAX_DIGESTSIZE];
unsigned sectors_to_process = dio->range.n_sectors;
sector_t sector = dio->range.logical_sector;

@@ -1253,8 +1258,14 @@ static void integrity_metadata(struct work_struct *w)

checksums = kmalloc((PAGE_SIZE >> SECTOR_SHIFT >> ic->sb->log2_sectors_per_block) * ic->tag_size + extra_space,
GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
- if (!checksums)
+ if (!checksums) {
checksums = checksums_onstack;
+ if (WARN_ON(extra_space &&
+ digest_size > sizeof(checksums_onstack))) {
+ r = -EINVAL;
+ goto error;
+ }
+ }

__bio_for_each_segment(bv, bio, iter, dio->orig_bi_iter) {
unsigned pos;
@@ -1466,7 +1477,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
} while (++s < ic->sectors_per_block);
#ifdef INTERNAL_VERIFY
if (ic->internal_hash) {
- char checksums_onstack[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+ char checksums_onstack[max(SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];

integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
@@ -1516,7 +1527,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
if (ic->internal_hash) {
unsigned digest_size = crypto_shash_digestsize(ic->internal_hash);
if (unlikely(digest_size > ic->tag_size)) {
- char checksums_onstack[digest_size];
+ char checksums_onstack[SHASH_MAX_DIGESTSIZE];
integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack);
memcpy(journal_entry_tag(ic, je), checksums_onstack, ic->tag_size);
} else
@@ -1937,7 +1948,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
unlikely(from_replay) &&
#endif
ic->internal_hash) {
- char test_tag[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+ char test_tag[max_t(size_t, SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];

integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
(char *)access_journal_data(ic, i, l), test_tag);
--
2.17.1


2018-06-25 21:24:23

by Joe Perches

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

On Mon, 2018-06-25 at 14:10 -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the maximum blocksize and adds a sanity check. For xcbc, the blocksize
> must always be 16, so use that, since it's already being enforced during
> instantiation.

Is it time yet to change this warning from 'make W=3' to W=1?
---
scripts/Makefile.extrawarn | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
index 8d5357053f86..27ba478d40cd 100644
--- a/scripts/Makefile.extrawarn
+++ b/scripts/Makefile.extrawarn
@@ -29,6 +29,7 @@ warning-1 += $(call cc-option, -Wmissing-include-dirs)
warning-1 += $(call cc-option, -Wunused-but-set-variable)
warning-1 += $(call cc-option, -Wunused-const-variable)
warning-1 += $(call cc-option, -Wpacked-not-aligned)
+warning-1 += $(call cc-option, -Wvla)
warning-1 += $(call cc-disable-warning, missing-field-initializers)
warning-1 += $(call cc-disable-warning, sign-compare)

@@ -52,7 +53,6 @@ warning-3 += -Wpointer-arith
warning-3 += -Wredundant-decls
warning-3 += -Wswitch-default
warning-3 += $(call cc-option, -Wpacked-bitfield-compat)
-warning-3 += $(call cc-option, -Wvla)

warning := $(warning-$(findstring 1, $(KBUILD_ENABLE_EXTRA_GCC_CHECKS)))
warning += $(warning-$(findstring 2, $(KBUILD_ENABLE_EXTRA_GCC_CHECKS)))


2018-06-25 21:34:00

by Kees Cook

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
> On Mon, 2018-06-25 at 14:10 -0700, Kees Cook wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this uses
>> the maximum blocksize and adds a sanity check. For xcbc, the blocksize
>> must always be 16, so use that, since it's already being enforced during
>> instantiation.
>
> Is it time yet to change this warning from 'make W=3' to W=1?
> ---
> scripts/Makefile.extrawarn | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
> index 8d5357053f86..27ba478d40cd 100644
> --- a/scripts/Makefile.extrawarn
> +++ b/scripts/Makefile.extrawarn
> @@ -29,6 +29,7 @@ warning-1 += $(call cc-option, -Wmissing-include-dirs)
> warning-1 += $(call cc-option, -Wunused-but-set-variable)
> warning-1 += $(call cc-option, -Wunused-const-variable)
> warning-1 += $(call cc-option, -Wpacked-not-aligned)
> +warning-1 += $(call cc-option, -Wvla)
> warning-1 += $(call cc-disable-warning, missing-field-initializers)
> warning-1 += $(call cc-disable-warning, sign-compare)
>
> @@ -52,7 +53,6 @@ warning-3 += -Wpointer-arith
> warning-3 += -Wredundant-decls
> warning-3 += -Wswitch-default
> warning-3 += $(call cc-option, -Wpacked-bitfield-compat)
> -warning-3 += $(call cc-option, -Wvla)
>
> warning := $(warning-$(findstring 1, $(KBUILD_ENABLE_EXTRA_GCC_CHECKS)))
> warning += $(warning-$(findstring 2, $(KBUILD_ENABLE_EXTRA_GCC_CHECKS)))

I was going to skip the churn since I intend to make the default build
use -Wvla for the next merge window (assuming we've killed all the
VLAs by then). After crypto, only fs/ntfs remains, and I have that
half done already. There are a couple more still under some
development back-and-forth.

I'm not _opposed_ to this change, but I'd rather just make it the
default. And then the next cycle, I'd want it to be -Werror=vla, but I
may get shouted down. ;)

-Kees

--
Kees Cook
Pixel Security

2018-06-25 21:40:29

by Joe Perches

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

On Mon, 2018-06-25 at 14:32 -0700, Kees Cook wrote:
> On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
> > On Mon, 2018-06-25 at 14:10 -0700, Kees Cook wrote:
> > > In the quest to remove all stack VLA usage from the kernel[1], this uses
> > > the maximum blocksize and adds a sanity check. For xcbc, the blocksize
> > > must always be 16, so use that, since it's already being enforced during
> > > instantiation.
> >
> > Is it time yet to change this warning from 'make W=3' to W=1?
[]
> I was going to skip the churn since I intend to make the default build
> use -Wvla for the next merge window (assuming we've killed all the
> VLAs by then).

Good.

Even if not all VLAs are removed, making the
warning default on is fine by me.

Getting others to do some of the work you've
been doing would be good too.

> After crypto, only fs/ntfs remains, and I have that
> half done already. There are a couple more still under some
> development back-and-forth.
>
> I'm not _opposed_ to this change, but I'd rather just make it the
> default. And then the next cycle, I'd want it to be -Werror=vla, but I
> may get shouted down. ;)

Yup, you should get shouted down there.
I think -Werror=<anything> is poor form.


2018-06-25 22:57:18

by Eric Biggers

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Mon, Jun 25, 2018 at 02:10:25PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this caps
> the ahash request size similar to the other limits and adds a sanity
> check at registration. Unfortunately, these reqsizes can be huge. Looking
> at all callers of crypto_ahash_set_reqsize(), the largest appears to be
> 664 bytes, based on a combination of manual inspection and pahole output:
>
> 4 dcp_sha_req_ctx
> 40 crypto4xx_ctx
> 52 hash_req_ctx
> 80 ahash_request
> 88 rk_ahash_rctx
> 104 sun4i_req_ctx
> 200 mcryptd_hash_request_ctx
> 216 safexcel_ahash_req
> 228 sha1_hash_ctx
> 228 sha256_hash_ctx
> 248 img_hash_request_ctx
> 252 mtk_sha_reqctx
> 276 sahara_sha_reqctx
> 304 mv_cesa_ahash_req
> 316 iproc_reqctx_s
> 320 caam_hash_state
> 320 qce_sha_reqctx
> 356 sha512_hash_ctx
> 384 ahash_req_ctx
> 400 chcr_ahash_req_ctx
> 416 stm32_hash_request_ctx
> 432 talitos_ahash_req_ctx
> 462 atmel_sha_reqctx
> 496 ccp_aes_cmac_req_ctx
> 616 ccp_sha_req_ctx
> 664 artpec6_hash_request_context
>
> So, this chooses 720 as a larger "round" number for the max.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Cc: Herbert Xu <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: Eric Biggers <[email protected]>
> Cc: Tim Chen <[email protected]>
> Cc: Rabin Vincent <[email protected]>
> Cc: Lars Persson <[email protected]>
> Cc: [email protected]
> Signed-off-by: Kees Cook <[email protected]>
> ---
> include/crypto/hash.h | 3 ++-
> include/crypto/internal/hash.h | 1 +
> 2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/include/crypto/hash.h b/include/crypto/hash.h
> index 5d79e2f0936e..b550077c4767 100644
> --- a/include/crypto/hash.h
> +++ b/include/crypto/hash.h
> @@ -66,10 +66,11 @@ struct ahash_request {
>
> #define AHASH_MAX_DIGESTSIZE 512
> #define AHASH_MAX_STATESIZE 512
> +#define AHASH_MAX_REQSIZE 720
>
> #define AHASH_REQUEST_ON_STACK(name, ahash) \
> char __##name##_desc[sizeof(struct ahash_request) + \
> - crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
> + AHASH_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
> struct ahash_request *name = (void *)__##name##_desc
>
> /**
> diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
> index a0b0ad9d585e..d96ae5f52125 100644
> --- a/include/crypto/internal/hash.h
> +++ b/include/crypto/internal/hash.h
> @@ -142,6 +142,7 @@ static inline struct ahash_alg *__crypto_ahash_alg(struct crypto_alg *alg)
> static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
> unsigned int reqsize)
> {
> + BUG_ON(reqsize > AHASH_MAX_REQSIZE);
> tfm->reqsize = reqsize;
> }

This isn't accounting for the cases where a hash algorithm is "wrapped" with
another one, which increases the request size. For example, "sha512_mb" ends up
with a request size of

sizeof(struct ahash_request) +
sizeof(struct mcryptd_hash_request_ctx) +
sizeof(struct ahash_request) +
sizeof(struct sha512_hash_ctx)

== 808 bytes, on x86_64 with CONFIG_DEBUG_SG enabled.

(Note also that structure sizes can vary depending on the architecture
and the kernel config.)
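
The compounding happens because each wrapping template's ->init_tfm()
adds its own request context on top of the inner transform's reqsize,
roughly as mcryptd does (a paraphrase of crypto/mcryptd.c, where
"inner_hash" stands for the wrapped child tfm; exact details may
differ):

	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct mcryptd_hash_request_ctx) +
				 crypto_ahash_reqsize(inner_hash));

so a fixed cap has to cover the worst-case composition, not just the
largest single request context.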

So, with the self-tests enabled your new BUG_ON() is hit on boot:

------------[ cut here ]------------
kernel BUG at ./include/crypto/internal/hash.h:145!
invalid opcode: 0000 [#1] SMP PTI
CPU: 4 PID: 337 Comm: cryptomgr_test Not tainted 4.18.0-rc2-00048-gda396e1e8ca5 #11
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
RIP: 0010:mcryptd_hash_init_tfm+0x36/0x40 crypto/mcryptd.c:289
Code: 01 00 00 e8 0c 05 fd ff 48 3d 00 f0 ff ff 77 18 48 89 43 40 8b 40 40 05 c8 00 00 00 3d d0 02 00 00 77 07 89 43 f8 31 c0 5b c3 <0f> 0b 0f 1f 84 00 00 00 00 00 80 7f 0c 00 74 01 c3 65 8b 05 d2 e2
RSP: 0000:ffffb180c0dafc50 EFLAGS: 00010202
RAX: 00000000000002d8 RBX: ffffa1867c9267c8 RCX: ffffffffb66f5ef0
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffa1867c927c48
RBP: ffffa1867c926780 R08: 0000000000000001 R09: ffffa1867c927c00
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffffa1867c9c6848 R14: ffffa1867c9c6848 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffffa1867fd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000035c10001 CR4: 00000000001606e0
Call Trace:
crypto_create_tfm+0x86/0xd0 crypto/api.c:475
crypto_alloc_tfm+0x4b/0xb0 crypto/api.c:543
mcryptd_alloc_ahash+0x6f/0xb0 crypto/mcryptd.c:603
sha512_mb_async_init_tfm+0x1a/0x50 arch/x86/crypto/sha512-mb/sha512_mb.c:724
crypto_create_tfm+0x86/0xd0 crypto/api.c:475
crypto_alloc_tfm+0x4b/0xb0 crypto/api.c:543
__alg_test_hash+0x1c/0x90 crypto/testmgr.c:1799
alg_test_hash+0x6b/0x100 crypto/testmgr.c:1841
alg_test.part.5+0x119/0x2a0 crypto/testmgr.c:3687
cryptomgr_test+0x3b/0x40 crypto/algboss.c:223
kthread+0x114/0x130 kernel/kthread.c:240
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:412
Modules linked in:
---[ end trace d726be03a53bddb5 ]---

2018-06-25 23:08:12

by Kees Cook

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
> warning-3 += -Wswitch-default

This reminds me, though. Should _this_ one get moved to warning-1?

$ git log next-20180625 --no-merges --grep 'mark expected switch fall-through' --oneline | wc -l
129

Gustavo, have you checked recently how many of these are left? Maybe
we can move this to default too?

-Kees

--
Kees Cook
Pixel Security

2018-06-25 23:14:32

by Kees Cook

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Mon, Jun 25, 2018 at 3:56 PM, Eric Biggers <[email protected]> wrote:
> On Mon, Jun 25, 2018 at 02:10:25PM -0700, Kees Cook wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this caps
>> the ahash request size similar to the other limits and adds a sanity
>> check at registration. Unfortunately, these reqsizes can be huge. Looking
>> at all callers of crypto_ahash_set_reqsize(), the largest appears to be
>> 664 bytes, based on a combination of manual inspection and pahole output:
>>
>> 4 dcp_sha_req_ctx
>> 40 crypto4xx_ctx
>> 52 hash_req_ctx
>> 80 ahash_request
>> 88 rk_ahash_rctx
>> 104 sun4i_req_ctx
>> 200 mcryptd_hash_request_ctx
>> 216 safexcel_ahash_req
>> 228 sha1_hash_ctx
>> 228 sha256_hash_ctx
>> 248 img_hash_request_ctx
>> 252 mtk_sha_reqctx
>> 276 sahara_sha_reqctx
>> 304 mv_cesa_ahash_req
>> 316 iproc_reqctx_s
>> 320 caam_hash_state
>> 320 qce_sha_reqctx
>> 356 sha512_hash_ctx
>> 384 ahash_req_ctx
>> 400 chcr_ahash_req_ctx
>> 416 stm32_hash_request_ctx
>> 432 talitos_ahash_req_ctx
>> 462 atmel_sha_reqctx
>> 496 ccp_aes_cmac_req_ctx
>> 616 ccp_sha_req_ctx
>> 664 artpec6_hash_request_context
>>
>> So, this chooses 720 as a larger "round" number for the max.
>>
>
> This isn't accounting for the cases where a hash algorithm is "wrapped" with
> another one, which increases the request size. For example, "sha512_mb" ends up
> with a request size of
>
> sizeof(struct ahash_request) +
> sizeof(struct mcryptd_hash_request_ctx) +
> sizeof(struct ahash_request) +
> sizeof(struct sha512_hash_ctx)
>
> == 808 bytes, on x86_64 with CONFIG_DEBUG_SG enabled.
>
> (Note also that structure sizes can vary depending on the architecture
> and the kernel config.)
>
> So, with the self-tests enabled your new BUG_ON() is hit on boot:

Ugh, right. Wow, that _really_ gets big. Which are likely to wrap
which others? Looks like software case plus hardware case? i.e.
mcryptd_hash_request_ctx with artpec6_hash_request_context is the
largest we could get? So: 80 + 80 + 200 + 664 ? Oh, hilarious. That
comes exactly to 1024. :P

So ... 1024?

-Kees

--
Kees Cook
Pixel Security

2018-06-26 00:57:35

by Gustavo A. R. Silva

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage



On 06/25/2018 06:06 PM, Kees Cook wrote:
> On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
>> warning-3 += -Wswitch-default
>
> This reminds me, though. Should _this_ one get moved to warning-1?
>
> $ git log next-20180625 --no-merges --grep 'mark expected switch fall-through' --oneline | wc -l
> 129
>
> Gustavo, have you checked recently how many of these are left? Maybe
> we can move this to default too?
>

I just built today's kernel and we have a total of 704 fall-through warnings with -Wimplicit-fallthrough=2

I think we might be able to do that in this development cycle.

Thanks
--
Gustavo

2018-06-26 09:21:23

by Herbert Xu

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Mon, Jun 25, 2018 at 03:56:09PM -0700, Eric Biggers wrote:
>
> > diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
> > index a0b0ad9d585e..d96ae5f52125 100644
> > --- a/include/crypto/internal/hash.h
> > +++ b/include/crypto/internal/hash.h
> > @@ -142,6 +142,7 @@ static inline struct ahash_alg *__crypto_ahash_alg(struct crypto_alg *alg)
> > static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
> > unsigned int reqsize)
> > {
> > + BUG_ON(reqsize > AHASH_MAX_REQSIZE);
> > tfm->reqsize = reqsize;
> > }
>
> This isn't accounting for the cases where a hash algorithm is "wrapped" with
> another one, which increases the request size. For example, "sha512_mb" ends up
> with a request size of

I think this patch is on the wrong track. The stack requests are
only ever meant to be used for synchronous algorithms (IOW shash
algorithms) and were a quick-and-dirty fix for legacy users.

So either check SHASH_MAX_REQSIZE or just convert the users to
kmalloc or even better make them real async users.
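
For the kmalloc route, a minimal sketch of a converted caller (assuming
a sleepable context) would be:

	struct ahash_request *req = ahash_request_alloc(tfm, GFP_KERNEL);

	if (!req)
		return -ENOMEM;
	/* ... set up the request, e.g. crypto_ahash_digest(req) ... */
	ahash_request_free(req);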

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2018-06-26 09:22:04

by Herbert Xu

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Mon, Jun 25, 2018 at 02:10:26PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> caps the skcipher request size similar to other limits and adds a
> sanity check at registration. In a manual review of the callers of
> crypto_skcipher_set_reqsize(), the largest was 384:
>
> 4 sun4i_cipher_req_ctx
> 6 safexcel_cipher_req
> 8 cryptd_skcipher_request_ctx
> 80 cipher_req_ctx
> 80 skcipher_request
> 96 crypto_rfc3686_req_ctx
> 104 nitrox_kcrypt_request
> 144 mv_cesa_skcipher_std_req
> 384 rctx
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Cc: Herbert Xu <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: [email protected]
> Signed-off-by: Kees Cook <[email protected]>

This has the same issue as the ahash reqsize patch.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2018-06-26 16:46:14

by Kees Cook

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Tue, Jun 26, 2018 at 2:20 AM, Herbert Xu <[email protected]> wrote:
> On Mon, Jun 25, 2018 at 02:10:26PM -0700, Kees Cook wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> caps the skcipher request size similar to other limits and adds a
>> sanity check at registration. In a manual review of the callers of
>> crypto_skcipher_set_reqsize(), the largest was 384:
>>
>> 4 sun4i_cipher_req_ctx
>> 6 safexcel_cipher_req
>> 8 cryptd_skcipher_request_ctx
>> 80 cipher_req_ctx
>> 80 skcipher_request
>> 96 crypto_rfc3686_req_ctx
>> 104 nitrox_kcrypt_request
>> 144 mv_cesa_skcipher_std_req
>> 384 rctx
>>
>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>
>> Cc: Herbert Xu <[email protected]>
>> Cc: "David S. Miller" <[email protected]>
>> Cc: [email protected]
>> Signed-off-by: Kees Cook <[email protected]>
>
> This has the same issue as the ahash reqsize patch.

Which are likely to be wrapped together? Should I take this to 512 or
something else?

-Kees

--
Kees Cook
Pixel Security

2018-06-26 16:52:25

by Kees Cook

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage

On Mon, Jun 25, 2018 at 5:54 PM, Gustavo A. R. Silva
<[email protected]> wrote:
>
>
> On 06/25/2018 06:06 PM, Kees Cook wrote:
>> On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
>>> warning-3 += -Wswitch-default
>>
>> This reminds me, though. Should _this_ one get moved to warning-1?
>>
>> $ git log next-20180625 --no-merges --grep 'mark expected switch fall-through' --oneline | wc -l
>> 129
>>
>> Gustavo, have you checked recently how many of these are left? Maybe
>> we can move this to default too?
>>
>
> I just built today's kernel and we have a total of 704 fall-through warnings with -Wimplicit-fallthrough=2

Wow; that's still a lot!

> I think we might be able to do that in this development cycle.

Do you think we could finish the rest of the unannotated ones this
cycle? If not, we could upgrade from W=3 to W=1 since it's a condition
we're actively trying to remove from the kernel...

-Kees

--
Kees Cook
Pixel Security

2018-06-26 17:04:23

by Kees Cook

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Tue, Jun 26, 2018 at 2:19 AM, Herbert Xu <[email protected]> wrote:
> On Mon, Jun 25, 2018 at 03:56:09PM -0700, Eric Biggers wrote:
>>
>> > diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
>> > index a0b0ad9d585e..d96ae5f52125 100644
>> > --- a/include/crypto/internal/hash.h
>> > +++ b/include/crypto/internal/hash.h
>> > @@ -142,6 +142,7 @@ static inline struct ahash_alg *__crypto_ahash_alg(struct crypto_alg *alg)
>> > static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
>> > unsigned int reqsize)
>> > {
>> > + BUG_ON(reqsize > AHASH_MAX_REQSIZE);
>> > tfm->reqsize = reqsize;
>> > }
>>
>> This isn't accounting for the cases where a hash algorithm is "wrapped" with
>> another one, which increases the request size. For example, "sha512_mb" ends up
>> with a request size of
>
> I think this patch is on the wrong track. The stack requests are
> only ever meant to be used for synchronous algorithms (IOW shash
> algorithms) and were a quick-and-dirty fix for legacy users.
>
> So either check SHASH_MAX_REQSIZE or just convert the users to
> kmalloc or even better make them real async users.

There is no SHASH_MAX_REQSIZE?

As for users of AHASH_REQUEST_ON_STACK, I see:

$ git grep AHASH_REQUEST_ON_STACK
arch/x86/power/hibernate_64.c: AHASH_REQUEST_ON_STACK(req, tfm);
crypto/ccm.c: AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
drivers/block/drbd/drbd_worker.c: AHASH_REQUEST_ON_STACK(req, tfm);
drivers/block/drbd/drbd_worker.c: AHASH_REQUEST_ON_STACK(req, tfm);
drivers/md/dm-crypt.c: AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
drivers/net/ppp/ppp_mppe.c: AHASH_REQUEST_ON_STACK(req, state->sha1);
drivers/staging/rtl8192e/rtllib_crypt_tkip.c: AHASH_REQUEST_ON_STACK(req, tfm_michael);
drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c: AHASH_REQUEST_ON_STACK(req, tfm_michael);
net/wireless/lib80211_crypt_tkip.c: AHASH_REQUEST_ON_STACK(req, tfm_michael);

Regardless, I'll take a closer look at these.

The other patches leading up to the REQSIZE ones, though, I think are
ready to go? They're distinct from the last two, so the first 9
patches could be applied and I'll continue to improve the two REQSIZE
ones? If that sounds okay, do you want me to resend just first 9, or
do you want to take them from this series?

Thanks!

-Kees

--
Kees Cook
Pixel Security

2018-06-26 17:06:45

by Gustavo A. R. Silva

Subject: Re: [PATCH v2 01/11] crypto: xcbc: Remove VLA usage



On 06/26/2018 11:50 AM, Kees Cook wrote:
> On Mon, Jun 25, 2018 at 5:54 PM, Gustavo A. R. Silva
> <[email protected]> wrote:
>>
>>
>> On 06/25/2018 06:06 PM, Kees Cook wrote:
>>> On Mon, Jun 25, 2018 at 2:23 PM, Joe Perches <[email protected]> wrote:
>>>> warning-3 += -Wswitch-default
>>>
>>> This reminds me, though. Should _this_ one get moved to warning-1?
>>>
>>> $ git log next-20180625 --no-merges --grep 'mark expected switch fall-through' --oneline | wc -l
>>> 129
>>>
>>> Gustavo, have you checked recently how many of these are left? Maybe
>>> we can move this to default too?
>>>
>>
>> I just built today's kernel and we have a total of 704 fall-through warnings with -Wimplicit-fallthrough=2
>
> Wow; that's still a lot!
>
>> I think we might be able to do that in this development cycle.
>
> Do you think we could finish the rest of the unannotated ones this
> cycle? If not, we could upgrade from W=3 to W=1 since it's a condition
> we're actively trying to remove from the kernel...
>

Most definitely. I'm already on that!

Thanks
--
Gustavo

2018-06-27 14:36:48

by Herbert Xu

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Tue, Jun 26, 2018 at 10:02:31AM -0700, Kees Cook wrote:
>
> There is no SHASH_MAX_REQSIZE?
>
> As for users of AHASH_REQUEST_ON_STACK, I see:

These users are only using the top-level ahash interface. The
underlying algorithms must all be shas.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2018-06-27 14:39:38

by Herbert Xu

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Tue, Jun 26, 2018 at 09:45:09AM -0700, Kees Cook wrote:
>
> Which are likely to be wrapped together? Should I take this to 512 or
> something else?

The situation is similar to ahash. While they're using the same
skcipher interface, the underlying algorithms must all be
synchronous. In fact, if they're not then they're buggy.

Therefore it makes no sense to use the general skcipher request
size as a threshold. You should look at synchronous skcipher
algorithms only.
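
As an illustration (hedged, not from the patch itself): callers that
need a synchronous skcipher can already ask for one by masking out
CRYPTO_ALG_ASYNC at allocation time, e.g.

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);

so only the reqsizes reachable through such tfms need to fit under the
on-stack cap.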

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2018-06-27 18:14:07

by Kees Cook

Subject: Re: [dm-devel] [PATCH v2 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK

On Wed, Jun 27, 2018 at 7:34 AM, Herbert Xu <[email protected]> wrote:
> On Tue, Jun 26, 2018 at 10:02:31AM -0700, Kees Cook wrote:
>>
>> There is no SHASH_MAX_REQSIZE?
>>
>> As for users of AHASH_REQUEST_ON_STACK, I see:
>
> These users are only using the top-level ahash interface. The
> underlying algorithms must all be shas.

typo? "shash" you mean?

I don't really understand the crypto APIs -- are you or Eric able to
help me a bit more here? I don't understand that things can wrap other
things, so I'm not sure the best way to reason about the maximum size
to choose here. (And the same for skcipher.)

-Kees

--
Kees Cook
Pixel Security

2018-06-27 18:32:17

by Kees Cook

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Wed, Jun 27, 2018 at 7:36 AM, Herbert Xu <[email protected]> wrote:
> On Tue, Jun 26, 2018 at 09:45:09AM -0700, Kees Cook wrote:
>>
>> Which are likely to be wrapped together? Should I take this to 512 or
>> something else?
>
> The situation is similar to ahash. While they're using the same
> skcipher interface, the underlying algorithms must all be
> synchronous. In fact, if they're not then they're buggy.
>
> Therefore it makes no sense to use the general skcipher request
> size as a threshold. You should look at synchronous skcipher
> algorithms only.

I might be catching on... so from this list, I should only "count" the
synchronous ones as being wrappable? The skcipher list is actually
pretty short:

crypto/cryptd.c: crypto_skcipher_set_reqsize(
crypto/cryptd.c-         tfm, sizeof(struct cryptd_skcipher_request_ctx));

The above is, AIUI, unwrapped, so I only need to count sizeof(struct
cryptd_skcipher_request_ctx)?

These are "simple" wrappers:

crypto/lrw.c: crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(cipher) +
crypto/lrw.c-                                  sizeof(struct rctx));

crypto/simd.c- reqsize = sizeof(struct skcipher_request);
crypto/simd.c- reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base);
crypto/simd.c: crypto_skcipher_set_reqsize(tfm, reqsize);

crypto/xts.c: crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(child) +
crypto/xts.c-                                  sizeof(struct rctx));

But what are the "legitimate" existing crypto_skcipher_reqsize() values here?

These are "complex" wrappers, with cts even adding blocksize to the mix...

crypto/ctr.c- align = crypto_skcipher_alignmask(tfm);
crypto/ctr.c- align &= ~(crypto_tfm_ctx_alignment() - 1);
crypto/ctr.c- reqsize = align + sizeof(struct crypto_rfc3686_req_ctx) +
crypto/ctr.c- crypto_skcipher_reqsize(cipher);
crypto/ctr.c: crypto_skcipher_set_reqsize(tfm, reqsize);

crypto/cts.c- align = crypto_skcipher_alignmask(tfm);
crypto/cts.c- bsize = crypto_skcipher_blocksize(cipher);
crypto/cts.c- reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) +
crypto/cts.c- crypto_skcipher_reqsize(cipher),
crypto/cts.c- crypto_tfm_ctx_alignment()) +
crypto/cts.c- (align & ~(crypto_tfm_ctx_alignment() - 1)) + bsize;
crypto/cts.c-
crypto/cts.c: crypto_skcipher_set_reqsize(tfm, reqsize);

What values might be expected here? It seems the entire blocksize
needs to be included as well...

-Kees

--
Kees Cook
Pixel Security

2018-06-27 22:29:59

by Herbert Xu

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Wed, Jun 27, 2018 at 11:31:09AM -0700, Kees Cook wrote:
>
> I might be catching on... so from this list, I should only "count" the
> synchronous ones as being wrappable? The skcipher list is actually
> pretty short:
>
> crypto/cryptd.c: crypto_skcipher_set_reqsize(
> crypto/cryptd.c-         tfm, sizeof(struct cryptd_skcipher_request_ctx));

cryptd is async so you don't have to include it.

> These are "simple" wrappers:
>
> crypto/lrw.c: crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(cipher) +
> crypto/lrw.c-                                  sizeof(struct rctx));
>
> crypto/simd.c- reqsize = sizeof(struct skcipher_request);
> crypto/simd.c- reqsize += crypto_skcipher_reqsize(&cryptd_tfm->base);
> crypto/simd.c: crypto_skcipher_set_reqsize(tfm, reqsize);

simd is async.

> crypto/xts.c: crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(child) +
> crypto/xts.c-                                  sizeof(struct rctx));
>
> But what are the "legitimate" existing crypto_skcipher_reqsize() values here?
>
> These are "complex" wrappers, with cts even adding blocksize to the mix...
>
> crypto/ctr.c- align = crypto_skcipher_alignmask(tfm);
> crypto/ctr.c- align &= ~(crypto_tfm_ctx_alignment() - 1);
> crypto/ctr.c- reqsize = align + sizeof(struct crypto_rfc3686_req_ctx) +
> crypto/ctr.c- crypto_skcipher_reqsize(cipher);
> crypto/ctr.c: crypto_skcipher_set_reqsize(tfm, reqsize);
>
> crypto/cts.c- align = crypto_skcipher_alignmask(tfm);
> crypto/cts.c- bsize = crypto_skcipher_blocksize(cipher);
> crypto/cts.c- reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) +
> crypto/cts.c- crypto_skcipher_reqsize(cipher),
> crypto/cts.c- crypto_tfm_ctx_alignment()) +
> crypto/cts.c- (align & ~(crypto_tfm_ctx_alignment() - 1)) + bsize;
> crypto/cts.c-
> crypto/cts.c: crypto_skcipher_set_reqsize(tfm, reqsize);
>
> What values might be expected here? It seems the entire blocksize
> needs to be included as well...

But otherwise yes these are the ones that count.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2018-06-28 01:13:17

by Kees Cook

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Wed, Jun 27, 2018 at 3:27 PM, Herbert Xu <[email protected]> wrote:
> On Wed, Jun 27, 2018 at 11:31:09AM -0700, Kees Cook wrote:
>> crypto/lrw.c: crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(cipher) +
>> crypto/lrw.c-                                  sizeof(struct rctx));
>> ...
>> crypto/cts.c- align = crypto_skcipher_alignmask(tfm);
>> crypto/cts.c- bsize = crypto_skcipher_blocksize(cipher);
>> crypto/cts.c- reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) +
>> crypto/cts.c- crypto_skcipher_reqsize(cipher),
>> crypto/cts.c- crypto_tfm_ctx_alignment()) +
>> crypto/cts.c- (align & ~(crypto_tfm_ctx_alignment() - 1)) + bsize;
>> crypto/cts.c-
>> crypto/cts.c: crypto_skcipher_set_reqsize(tfm, reqsize);
>>
>> What values might be expected here? It seems the entire blocksize
>> needs to be included as well...
>
> But otherwise yes these are the ones that count.

In both cases here, what is "cipher"? i.e. what ciphers could lrw be
wrapping, and what ciphers could cts be wrapping, so that I can
examine the blocksizes, etc?

FWIW, looking at the non-ASYNC wrappers, I see only:

crypto/ctr.c
crypto/cts.c
crypto/lrw.c
crypto/xts.c
drivers/crypto/sunxi-ss/sun4i-ss-cipher.c

Building in all crypto things and running tcrypt with an instrumented
crypto_skcipher_set_reqsize, I see:

# dmesg | grep skcipher_set_req | cut -c16- | sort -u | sort +1 -n
crypto_skcipher_set_reqsize: 8
crypto_skcipher_set_reqsize: 88
crypto_skcipher_set_reqsize: 184
crypto_skcipher_set_reqsize: 256
crypto_skcipher_set_reqsize: 472
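
(The instrumentation is nothing fancy: a hypothetical one-liner at the
top of crypto_skcipher_set_reqsize(), e.g.

	pr_info("%s: %u\n", __func__, reqsize);

and likewise in crypto_ahash_set_reqsize() for the ahash numbers below.)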

The 472 maps to lrw with its 384 struct rctx:

[ 553.843884] tcrypt: testing lrw(twofish)
[ 553.844479] crypto_skcipher_set_reqsize: 8
[ 553.845076] crypto_skcipher_set_reqsize: 88
[ 553.845658] crypto_skcipher_set_reqsize: 472

[ 553.860578] tcrypt: testing lrw(serpent)
[ 553.861349] crypto_skcipher_set_reqsize: 8
[ 553.861960] crypto_skcipher_set_reqsize: 88
[ 553.862534] crypto_skcipher_set_reqsize: 472

[ 553.871676] tcrypt: testing lrw(aes)
[ 553.872398] crypto_skcipher_set_reqsize: 8
[ 553.873002] crypto_skcipher_set_reqsize: 88
[ 553.873574] crypto_skcipher_set_reqsize: 472

[ 553.957282] tcrypt: testing lrw(cast6)
[ 553.958098] crypto_skcipher_set_reqsize: 8
[ 553.958691] crypto_skcipher_set_reqsize: 88
[ 553.959311] crypto_skcipher_set_reqsize: 472

[ 553.982514] tcrypt: testing lrw(camellia)
[ 553.983308] crypto_skcipher_set_reqsize: 8
[ 553.983907] crypto_skcipher_set_reqsize: 88
[ 553.984470] crypto_skcipher_set_reqsize: 472

And while I'm using tcrypt, ahash shows:

44
124
336
368
528
536
568
616
648
728
808

The largest seems to be sha512:

[ 553.883440] tcrypt: testing sha512
[ 553.884179] sha512_mb: crypto_ahash_set_reqsize: 528
[ 553.884904] crypto_ahash_set_reqsize: 728
[ 553.885449] sha512_mb: crypto_ahash_set_reqsize: 808

So ... should I use 472 for skcipher and 808 for ahash?

-Kees

--
Kees Cook
Pixel Security

2018-07-01 06:26:26

by Herbert Xu

Subject: Re: [PATCH v2 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

On Wed, Jun 27, 2018 at 05:10:05PM -0700, Kees Cook wrote:
>
> In both cases here, what is "cipher"? i.e. what ciphers could lrw be
> wrapping, and what ciphers could cts be wrapping, so that I can
> examine the blocksizes, etc?

A cipher is a simple cipher like aes that operates on a single
block.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt