2022-10-17 23:08:19

by Eric Biggers

Subject: Re: [PATCH] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Mon, Oct 17, 2022 at 03:26:20PM -0700, Nathan Huckleberry wrote:
> The key_powers array is not guaranteed to be 16-byte aligned, so using
> movaps to operate on key_powers is not allowed.
>
> Switch movaps to movups.
>
> Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> Reported-by: Bruno Goncalves <[email protected]>
> Signed-off-by: Nathan Huckleberry <[email protected]>
> ---
> arch/x86/crypto/polyval-clmulni_asm.S | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
> index a6ebe4e7dd2b..32b98cb53ddf 100644
> --- a/arch/x86/crypto/polyval-clmulni_asm.S
> +++ b/arch/x86/crypto/polyval-clmulni_asm.S
> @@ -234,7 +234,7 @@
>
> movups (MSG), %xmm0
> pxor SUM, %xmm0
> - movaps (KEY_POWERS), %xmm1
> + movups (KEY_POWERS), %xmm1
> schoolbook1_noload
> dec BLOCKS_LEFT
> addq $16, MSG

I thought that crypto_tfm::__crt_ctx is guaranteed to be 16-byte aligned,
and that the x86 AES code relies on that property.

But now I see that actually the x86 AES code manually aligns the context.
See aes_ctx() in arch/x86/crypto/aesni-intel_glue.c.
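
For reference, the pattern being referred to (paraphrased from aesni-intel_glue.c
rather than quoted verbatim, so take the details as approximate) is roughly:

	#define AESNI_ALIGN	16

	static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
	{
		unsigned long addr = (unsigned long)raw_ctx;
		unsigned long align = AESNI_ALIGN;

		/* The API already guarantees crypto_tfm_ctx_alignment();
		 * only round up when more than that is needed. */
		if (align <= crypto_tfm_ctx_alignment())
			align = 1;
		return (struct crypto_aes_ctx *)ALIGN(addr, align);
	}

i.e. the context is over-allocated and the pointer rounded up before use,
rather than requiring the allocation itself to be 16-byte aligned.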

Did you consider doing the same for polyval?

If you do prefer this way, it would be helpful to leave a comment for
schoolbook1_iteration that mentions that the unaligned access support of
vpclmulqdq is being relied on, i.e. pclmulqdq wouldn't work.
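
(Aside, as an illustration only and not part of the patch: the legacy-SSE
encodings, movaps and pclmulqdq with a memory operand, raise #GP when their
16-byte memory operand is not 16-byte aligned, while movups and the
VEX-encoded vpclmulqdq accept any address. The hypothetical load_key_power()
below shows the same distinction at the intrinsics level:)

	#include <immintrin.h>

	__m128i load_key_power(const unsigned char *p)
	{
		/* _mm_load_si128 compiles to an aligned load (movaps/movdqa)
		 * and is undefined for a pointer that isn't 16-byte aligned. */
		/* return _mm_load_si128((const __m128i *)p); */

		/* _mm_loadu_si128 compiles to movups/movdqu and is safe for
		 * any address; this mirrors the movaps -> movups change. */
		return _mm_loadu_si128((const __m128i *)p);
	}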

- Eric


2022-10-17 23:57:06

by Nathan Huckleberry

Subject: Re: [PATCH] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Mon, Oct 17, 2022 at 4:02 PM Eric Biggers <[email protected]> wrote:
>
> On Mon, Oct 17, 2022 at 03:26:20PM -0700, Nathan Huckleberry wrote:
> > The key_powers array is not guaranteed to be 16-byte aligned, so using
> > movaps to operate on key_powers is not allowed.
> >
> > Switch movaps to movups.
> >
> > Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> > Reported-by: Bruno Goncalves <[email protected]>
> > Signed-off-by: Nathan Huckleberry <[email protected]>
> > ---
> > arch/x86/crypto/polyval-clmulni_asm.S | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
> > index a6ebe4e7dd2b..32b98cb53ddf 100644
> > --- a/arch/x86/crypto/polyval-clmulni_asm.S
> > +++ b/arch/x86/crypto/polyval-clmulni_asm.S
> > @@ -234,7 +234,7 @@
> >
> > movups (MSG), %xmm0
> > pxor SUM, %xmm0
> > - movaps (KEY_POWERS), %xmm1
> > + movups (KEY_POWERS), %xmm1
> > schoolbook1_noload
> > dec BLOCKS_LEFT
> > addq $16, MSG
>
> I thought that crypto_tfm::__crt_ctx is guaranteed to be 16-byte aligned,
> and that the x86 AES code relies on that property.
>
> But now I see that actually the x86 AES code manually aligns the context.
> See aes_ctx() in arch/x86/crypto/aesni-intel_glue.c.
>
> Did you consider doing the same for polyval?

I'll submit a v2 aligning the tfm_ctx. I think that makes more sense
than working on unaligned keys.

Is there a need to make the same change on arm64? The keys are also
unaligned there.

>
> If you do prefer this way, it would be helpful to leave a comment for
> schoolbook1_iteration that mentions that the unaligned access support of
> vpclmulqdq is being relied on, i.e. pclmulqdq wouldn't work.
>
> - Eric

Thanks,
Huck

2022-10-18 00:17:50

by Eric Biggers

Subject: Re: [PATCH] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Mon, Oct 17, 2022 at 04:38:25PM -0700, Nathan Huckleberry wrote:
> On Mon, Oct 17, 2022 at 4:02 PM Eric Biggers <[email protected]> wrote:
> >
> > On Mon, Oct 17, 2022 at 03:26:20PM -0700, Nathan Huckleberry wrote:
> > > The key_powers array is not guaranteed to be 16-byte aligned, so using
> > > movaps to operate on key_powers is not allowed.
> > >
> > > Switch movaps to movups.
> > >
> > > Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> > > Reported-by: Bruno Goncalves <[email protected]>
> > > Signed-off-by: Nathan Huckleberry <[email protected]>
> > > ---
> > > arch/x86/crypto/polyval-clmulni_asm.S | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
> > > index a6ebe4e7dd2b..32b98cb53ddf 100644
> > > --- a/arch/x86/crypto/polyval-clmulni_asm.S
> > > +++ b/arch/x86/crypto/polyval-clmulni_asm.S
> > > @@ -234,7 +234,7 @@
> > >
> > > movups (MSG), %xmm0
> > > pxor SUM, %xmm0
> > > - movaps (KEY_POWERS), %xmm1
> > > + movups (KEY_POWERS), %xmm1
> > > schoolbook1_noload
> > > dec BLOCKS_LEFT
> > > addq $16, MSG
> >
> > I thought that crypto_tfm::__crt_ctx is guaranteed to be 16-byte aligned,
> > and that the x86 AES code relies on that property.
> >
> > But now I see that actually the x86 AES code manually aligns the context.
> > See aes_ctx() in arch/x86/crypto/aesni-intel_glue.c.
> >
> > Did you consider doing the same for polyval?
>
> I'll submit a v2 aligning the tfm_ctx. I think that makes more sense
> than working on unaligned keys.
>
> Is there a need to make the same change on arm64? The keys are also
> unaligned there.
>

arm64 defines ARCH_DMA_MINALIGN to 128, so I don't think the same issue applies
there. Also the instructions used don't assume aligned addresses.

- Eric

2022-10-18 04:07:06

by Herbert Xu

Subject: Re: [PATCH] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Mon, Oct 17, 2022 at 05:12:16PM -0700, Eric Biggers wrote:
>
> arm64 defines ARCH_DMA_MINALIGN to 128, so I don't think the same issue applies
> there. Also the instructions used don't assume aligned addresses.

Note that arm64 may lower ARCH_KMALLOC_MINALIGN to 8 soon.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2022-10-18 21:57:58

by Nathan Huckleberry

Subject: [PATCH v2] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

crypto_tfm::__crt_ctx is not guaranteed to be 16-byte aligned on x86-64.
This causes crashes due to movaps instructions in clmul_polyval_update.

Add logic to align polyval_tfm_ctx to 16 bytes if required.

Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
Reported-by: Bruno Goncalves <[email protected]>
Signed-off-by: Nathan Huckleberry <[email protected]>
---
arch/x86/crypto/polyval-clmulni_glue.c | 29 ++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
index b7664d018851..79ad497c32b5 100644
--- a/arch/x86/crypto/polyval-clmulni_glue.c
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -27,13 +27,17 @@
#include <asm/cpu_device_id.h>
#include <asm/simd.h>

+#define POLYVAL_ALIGN 16
+#define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN)
+#define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA)
#define NUM_KEY_POWERS 8

struct polyval_tfm_ctx {
/*
* These powers must be in the order h^8, ..., h^1.
*/
- u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
+ u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR;
};

struct polyval_desc_ctx {
@@ -45,9 +49,20 @@ asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator);
asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);

-static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
+static inline struct polyval_tfm_ctx *polyval_tfm_ctx(const void *raw_ctx)
+{
+ unsigned long addr = (unsigned long)raw_ctx;
+ unsigned long align = POLYVAL_ALIGN;
+
+ if (align <= crypto_tfm_ctx_alignment())
+ align = 1;
+ return (struct polyval_tfm_ctx *)ALIGN(addr, align);
+}
+
+static void internal_polyval_update(const void *raw_keys,
const u8 *in, size_t nblocks, u8 *accumulator)
{
+ const struct polyval_tfm_ctx *keys = polyval_tfm_ctx(raw_keys);
if (likely(crypto_simd_usable())) {
kernel_fpu_begin();
clmul_polyval_update(keys, in, nblocks, accumulator);
@@ -72,7 +87,7 @@ static void internal_polyval_mul(u8 *op1, const u8 *op2)
static int polyval_x86_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
- struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(crypto_shash_ctx(tfm));
int i;

if (keylen != POLYVAL_BLOCK_SIZE)
@@ -102,7 +117,8 @@ static int polyval_x86_update(struct shash_desc *desc,
const u8 *src, unsigned int srclen)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx =
+ polyval_tfm_ctx(crypto_shash_ctx(desc->tfm));
u8 *pos;
unsigned int nblocks;
unsigned int n;
@@ -143,7 +159,8 @@ static int polyval_x86_update(struct shash_desc *desc,
static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx =
+ polyval_tfm_ctx(crypto_shash_ctx(desc->tfm));

if (dctx->bytes) {
internal_polyval_mul(dctx->buffer,
@@ -167,7 +184,7 @@ static struct shash_alg polyval_alg = {
.cra_driver_name = "polyval-clmulni",
.cra_priority = 200,
.cra_blocksize = POLYVAL_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_ctxsize = POLYVAL_CTX_SIZE,
.cra_module = THIS_MODULE,
},
};
--
2.38.0.413.g74048e4d9e-goog
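
(Aside on the POLYVAL_ALIGN_EXTRA arithmetic above: a standalone userspace
sketch, with CRYPTO_MINALIGN assumed to be 8 purely for illustration; the
real value is set per architecture.)

	#include <stdint.h>
	#include <stdio.h>

	#define CRYPTO_MINALIGN		8	/* assumed for this example */
	#define POLYVAL_ALIGN		16
	#define POLYVAL_ALIGN_EXTRA	((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))

	int main(void)
	{
		/* Worst case: the crypto API hands out a context that is only
		 * CRYPTO_MINALIGN-aligned, e.g. 8 bytes past a 16-byte boundary. */
		uintptr_t raw = 0x1008;
		uintptr_t aligned = (raw + POLYVAL_ALIGN - 1) & ~(uintptr_t)(POLYVAL_ALIGN - 1);

		/* Rounding up skips at most POLYVAL_ALIGN_EXTRA bytes, which is
		 * exactly what POLYVAL_CTX_SIZE reserves beyond the struct size,
		 * so the aligned context always stays inside the allocation. */
		printf("skipped %zu bytes, reserved %d extra\n",
		       (size_t)(aligned - raw), POLYVAL_ALIGN_EXTRA);
		return 0;
	}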

2022-10-18 22:43:51

by Eric Biggers

Subject: Re: [PATCH v2] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Tue, Oct 18, 2022 at 02:56:23PM -0700, Nathan Huckleberry wrote:
> -static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
> +static inline struct polyval_tfm_ctx *polyval_tfm_ctx(const void *raw_ctx)
> +{
> + unsigned long addr = (unsigned long)raw_ctx;
> + unsigned long align = POLYVAL_ALIGN;
> +
> + if (align <= crypto_tfm_ctx_alignment())
> + align = 1;
> + return (struct polyval_tfm_ctx *)ALIGN(addr, align);
> +}

This could just use PTR_ALIGN. Also, checking for POLYVAL_ALIGN <=
crypto_tfm_ctx_alignment() isn't necessary.
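
(For reference, PTR_ALIGN keeps the pointer's type while rounding it up;
paraphrased from the kernel's align helpers, it is roughly:

	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
	#define PTR_ALIGN(p, a)	((typeof(p))ALIGN((unsigned long)(p), (a)))

so crypto_shash_ctx(tfm) can be aligned in one expression without the explicit
cast-and-ALIGN dance.)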

> +
> +static void internal_polyval_update(const void *raw_keys,
> const u8 *in, size_t nblocks, u8 *accumulator)
> {
> + const struct polyval_tfm_ctx *keys = polyval_tfm_ctx(raw_keys);

This function is already being passed an aligned struct polyval_tfm_ctx pointer
by its callers. There's no need to take it as a void pointer and align it again
redundantly.

> if (likely(crypto_simd_usable())) {
> kernel_fpu_begin();
> clmul_polyval_update(keys, in, nblocks, accumulator);
> @@ -102,7 +117,8 @@ static int polyval_x86_update(struct shash_desc *desc,
> const u8 *src, unsigned int srclen)
> {
> struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
> - const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
> + const struct polyval_tfm_ctx *tctx =
> + polyval_tfm_ctx(crypto_shash_ctx(desc->tfm));
> u8 *pos;
> unsigned int nblocks;
> unsigned int n;

It would make more sense to have the polyval_tfm_ctx() function take in the
struct crypto_shash.

How about using the following:

static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm)
{
return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN);
}

- Eric

2022-10-18 23:08:32

by Nathan Huckleberry

Subject: [PATCH v3] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

crypto_tfm::__crt_ctx is not guaranteed to be 16-byte aligned on x86-64.
This causes crashes due to movaps instructions in clmul_polyval_update.

Add logic to align polyval_tfm_ctx to 16 bytes.

Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
Reported-by: Bruno Goncalves <[email protected]>
Signed-off-by: Nathan Huckleberry <[email protected]>
---
arch/x86/crypto/polyval-clmulni_glue.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
index b7664d018851..8fa58b0f3cb3 100644
--- a/arch/x86/crypto/polyval-clmulni_glue.c
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -27,13 +27,17 @@
#include <asm/cpu_device_id.h>
#include <asm/simd.h>

+#define POLYVAL_ALIGN 16
+#define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN)
+#define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA)
#define NUM_KEY_POWERS 8

struct polyval_tfm_ctx {
/*
* These powers must be in the order h^8, ..., h^1.
*/
- u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
+ u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR;
};

struct polyval_desc_ctx {
@@ -45,6 +49,11 @@ asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator);
asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);

+static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm)
+{
+ return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN);
+}
+
static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
const u8 *in, size_t nblocks, u8 *accumulator)
{
@@ -72,7 +81,7 @@ static void internal_polyval_mul(u8 *op1, const u8 *op2)
static int polyval_x86_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
- struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+ struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(tfm);
int i;

if (keylen != POLYVAL_BLOCK_SIZE)
@@ -102,7 +111,7 @@ static int polyval_x86_update(struct shash_desc *desc,
const u8 *src, unsigned int srclen)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
u8 *pos;
unsigned int nblocks;
unsigned int n;
@@ -143,7 +152,7 @@ static int polyval_x86_update(struct shash_desc *desc,
static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
{
struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
- const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+ const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);

if (dctx->bytes) {
internal_polyval_mul(dctx->buffer,
@@ -167,7 +176,7 @@ static struct shash_alg polyval_alg = {
.cra_driver_name = "polyval-clmulni",
.cra_priority = 200,
.cra_blocksize = POLYVAL_BLOCK_SIZE,
- .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
+ .cra_ctxsize = POLYVAL_CTX_SIZE,
.cra_module = THIS_MODULE,
},
};
--
2.38.0.413.g74048e4d9e-goog

2022-10-18 23:13:25

by Eric Biggers

Subject: Re: [PATCH v3] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Tue, Oct 18, 2022 at 04:04:12PM -0700, Nathan Huckleberry wrote:
> crypto_tfm::__crt_ctx is not guaranteed to be 16-byte aligned on x86-64.
> This causes crashes due to movaps instructions in clmul_polyval_update.
>
> Add logic to align polyval_tfm_ctx to 16 bytes.
>
> Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> Reported-by: Bruno Goncalves <[email protected]>
> Signed-off-by: Nathan Huckleberry <[email protected]>

Reviewed-by: Eric Biggers <[email protected]>

Please add 'Cc: [email protected]' as well. (Herbert can do it when
applying this patch, if you don't happen to send another version.)

- Eric

2022-10-19 12:42:39

by Bruno Goncalves

Subject: Re: [PATCH v3] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Wed, 19 Oct 2022 at 01:04, Nathan Huckleberry <[email protected]> wrote:
>
> crypto_tfm::__crt_ctx is not guaranteed to be 16-byte aligned on x86-64.
> This causes crashes due to movaps instructions in clmul_polyval_update.
>
> Add logic to align polyval_tfm_ctx to 16 bytes.
>
> Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> Reported-by: Bruno Goncalves <[email protected]>
> Signed-off-by: Nathan Huckleberry <[email protected]>
> ---
> arch/x86/crypto/polyval-clmulni_glue.c | 19 ++++++++++++++-----
> 1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
> index b7664d018851..8fa58b0f3cb3 100644
> --- a/arch/x86/crypto/polyval-clmulni_glue.c
> +++ b/arch/x86/crypto/polyval-clmulni_glue.c
> @@ -27,13 +27,17 @@
> #include <asm/cpu_device_id.h>
> #include <asm/simd.h>
>
> +#define POLYVAL_ALIGN 16
> +#define POLYVAL_ALIGN_ATTR __aligned(POLYVAL_ALIGN)
> +#define POLYVAL_ALIGN_EXTRA ((POLYVAL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
> +#define POLYVAL_CTX_SIZE (sizeof(struct polyval_tfm_ctx) + POLYVAL_ALIGN_EXTRA)
> #define NUM_KEY_POWERS 8
>
> struct polyval_tfm_ctx {
> /*
> * These powers must be in the order h^8, ..., h^1.
> */
> - u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE];
> + u8 key_powers[NUM_KEY_POWERS][POLYVAL_BLOCK_SIZE] POLYVAL_ALIGN_ATTR;
> };
>
> struct polyval_desc_ctx {
> @@ -45,6 +49,11 @@ asmlinkage void clmul_polyval_update(const struct polyval_tfm_ctx *keys,
> const u8 *in, size_t nblocks, u8 *accumulator);
> asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);
>
> +static inline struct polyval_tfm_ctx *polyval_tfm_ctx(struct crypto_shash *tfm)
> +{
> + return PTR_ALIGN(crypto_shash_ctx(tfm), POLYVAL_ALIGN);
> +}
> +
> static void internal_polyval_update(const struct polyval_tfm_ctx *keys,
> const u8 *in, size_t nblocks, u8 *accumulator)
> {
> @@ -72,7 +81,7 @@ static void internal_polyval_mul(u8 *op1, const u8 *op2)
> static int polyval_x86_setkey(struct crypto_shash *tfm,
> const u8 *key, unsigned int keylen)
> {
> - struct polyval_tfm_ctx *tctx = crypto_shash_ctx(tfm);
> + struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(tfm);
> int i;
>
> if (keylen != POLYVAL_BLOCK_SIZE)
> @@ -102,7 +111,7 @@ static int polyval_x86_update(struct shash_desc *desc,
> const u8 *src, unsigned int srclen)
> {
> struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
> - const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
> + const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
> u8 *pos;
> unsigned int nblocks;
> unsigned int n;
> @@ -143,7 +152,7 @@ static int polyval_x86_update(struct shash_desc *desc,
> static int polyval_x86_final(struct shash_desc *desc, u8 *dst)
> {
> struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
> - const struct polyval_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
> + const struct polyval_tfm_ctx *tctx = polyval_tfm_ctx(desc->tfm);
>
> if (dctx->bytes) {
> internal_polyval_mul(dctx->buffer,
> @@ -167,7 +176,7 @@ static struct shash_alg polyval_alg = {
> .cra_driver_name = "polyval-clmulni",
> .cra_priority = 200,
> .cra_blocksize = POLYVAL_BLOCK_SIZE,
> - .cra_ctxsize = sizeof(struct polyval_tfm_ctx),
> + .cra_ctxsize = POLYVAL_CTX_SIZE,
> .cra_module = THIS_MODULE,
> },
> };
> --
> 2.38.0.413.g74048e4d9e-goog
>

Thanks, this patch worked well for me.

Bruno

2022-10-21 11:54:42

by Herbert Xu

Subject: Re: [PATCH v3] crypto: x86/polyval - Fix crashes when keys are not 16-byte aligned

On Tue, Oct 18, 2022 at 04:04:12PM -0700, Nathan Huckleberry wrote:
> crypto_tfm::__crt_ctx is not guaranteed to be 16-byte aligned on x86-64.
> This causes crashes due to movaps instructions in clmul_polyval_update.
>
> Add logic to align polyval_tfm_ctx to 16 bytes.
>
> Fixes: 34f7f6c30112 ("crypto: x86/polyval - Add PCLMULQDQ accelerated implementation of POLYVAL")
> Reported-by: Bruno Goncalves <[email protected]>
> Signed-off-by: Nathan Huckleberry <[email protected]>
> ---
> arch/x86/crypto/polyval-clmulni_glue.c | 19 ++++++++++++++-----
> 1 file changed, 14 insertions(+), 5 deletions(-)

Patch applied. Thanks.
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt