From: Kees Cook
Subject: [PATCH v3 9/9] crypto: shash: Remove VLA usage in unaligned hashing
Date: Thu, 28 Jun 2018 17:28:43 -0700
Message-ID: <20180629002843.31095-10-keescook@chromium.org>
References: <20180629002843.31095-1-keescook@chromium.org>
In-Reply-To: <20180629002843.31095-1-keescook@chromium.org>
To: Herbert Xu
Cc: Kees Cook, "Gustavo A. R. Silva", Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, "David S. Miller",
	linux-crypto@vger.kernel.org, qat-linux@intel.com,
	dm-devel@redhat.com, linux-kernel@vger.kernel.org
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-crypto.vger.kernel.org

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook
---
 crypto/shash.c               | 19 ++++++++-----------
 include/linux/compiler-gcc.h |  1 -
 2 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..8081c5e03770 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static inline unsigned int shash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	typedef u8 __aligned_largest u8_aligned;
-	return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
 static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 				  unsigned int len)
 {
@@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	unsigned int unaligned_len = alignmask + 1 -
 				     ((unsigned long)data & alignmask);
-	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
-		__aligned_largest;
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	if (unaligned_len > len)
 		unaligned_len = len;
 
@@ -124,11 +119,13 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned int ds = crypto_shash_digestsize(tfm);
-	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
-		__aligned_largest;
+	u8 ubuf[SHASH_MAX_DIGESTSIZE];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	err = shash->final(desc, buf);
 	if (err)
 		goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
  */
 #define __pure			__attribute__((pure))
 #define __aligned(x)		__attribute__((aligned(x)))
-#define __aligned_largest	__attribute__((aligned))
 #define __printf(a, b)		__attribute__((format(printf, a, b)))
 #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
 #define __attribute_const__	__attribute__((__const__))
-- 
2.17.1
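
The pattern in the patch replaces a variable-length array with a
worst-case fixed buffer, aligns a pointer inside it, and uses WARN_ON()
to guard the case where the aligned region would run past the end of
the buffer. Below is a minimal, self-contained user-space sketch of that
pattern; MAX_ALIGNMASK and align_up() are illustrative stand-ins for
the kernel's MAX_ALGAPI_ALIGNMASK and PTR_ALIGN(), not real kernel
identifiers.

/*
 * User-space sketch (not kernel code): fixed worst-case buffer instead
 * of a VLA, pointer aligned inside it, bounds checked before use.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_ALIGNMASK	63	/* assumed worst-case alignment mask */

static void *align_up(void *p, uintptr_t align)
{
	/* Round p up to the next multiple of align (align is a power of two). */
	return (void *)(((uintptr_t)p + align - 1) & ~(align - 1));
}

int main(void)
{
	unsigned long alignmask = 15;		/* algorithm wants 16-byte alignment */
	size_t want = 32;			/* bytes to stage in the aligned region */
	uint8_t ubuf[MAX_ALIGNMASK + 1 + 32];	/* fixed worst-case size, no VLA */
	uint8_t *buf = align_up(ubuf, alignmask + 1);

	/* Sanity check mirroring the WARN_ON() added by the patch. */
	if (buf + want > ubuf + sizeof(ubuf)) {
		fprintf(stderr, "aligned region does not fit\n");
		return 1;
	}

	memset(buf, 0, want);	/* stage data in the aligned scratch area */
	printf("ubuf=%p buf=%p (aligned to %lu bytes)\n",
	       (void *)ubuf, (void *)buf, alignmask + 1);
	return 0;
}

The trade-off is the same one the patch makes: the stack buffer is
always sized for the worst case rather than the actual alignment mask,
in exchange for a compile-time constant size and no VLA.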