2023-01-18 14:38:45

by Tianjia Zhang

Subject: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

When the total cryption length is zero, the call to skcipher_walk_done()
in GCM cryption causes an unexpected crash, so skip calling this
function when the GCM cryption length is equal to zero.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..9b63bcf9aa85 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,

 		kernel_neon_end();
 
-		err = skcipher_walk_done(walk, tail);
-		if (err)
-			return err;
-		if (walk->nbytes)
-			kernel_neon_begin();
+		if (walk->nbytes) {
+			err = skcipher_walk_done(walk, tail);
+			if (err)
+				return err;
+			if (walk->nbytes)
+				kernel_neon_begin();
+		}
 	} while (walk->nbytes > 0);
 
 	return 0;
--
2.24.3 (Apple Git-128)


2023-01-18 15:08:40

by Herbert Xu

Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
> When the total cryption length is zero, the call to skcipher_walk_done()
> in GCM cryption causes an unexpected crash, so skip calling this
> function when the GCM cryption length is equal to zero.
>
> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
> Signed-off-by: Tianjia Zhang <[email protected]>
> ---
> arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> index c450a2025ca9..9b63bcf9aa85 100644
> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>
>  		kernel_neon_end();
>
> -		err = skcipher_walk_done(walk, tail);
> -		if (err)
> -			return err;
> -		if (walk->nbytes)
> -			kernel_neon_begin();
> +		if (walk->nbytes) {

Please do

	if (!walk->nbytes)
		break;

As an additional improvement, the tail calculation can be removed
entirely: because you already set the chunksize, the walker should
only be feeding you multiples of chunksize except at the end.

Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-01-30 07:34:55

by Tianjia Zhang

Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

Hi Herbert,

On 1/18/23 10:54 PM, Herbert Xu wrote:
> On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
>> When the total cryption length is zero, the call to skcipher_walk_done()
>> in GCM cryption causes an unexpected crash, so skip calling this
>> function when the GCM cryption length is equal to zero.
>>
>> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
>> Signed-off-by: Tianjia Zhang <[email protected]>
>> ---
>> arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
>> 1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> index c450a2025ca9..9b63bcf9aa85 100644
>> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>>
>>  		kernel_neon_end();
>>
>> -		err = skcipher_walk_done(walk, tail);
>> -		if (err)
>> -			return err;
>> -		if (walk->nbytes)
>> -			kernel_neon_begin();
>> +		if (walk->nbytes) {
>
> Please do
>
> 	if (!walk->nbytes)
> 		break;

Thanks for the suggestion, a new patch has been sent.

>
> As an additional improvement, the tail calculation can be removed
> entirely because you already set the chunksize so the walker should
> only be feeding you multiples of chunksize except at the end.
>
> Cheers
I printed walk->nbytes for each iteration of the walker. With the
algorithm test manager turned on, it is not always a multiple of
chunksize except at the end.

For example, during a GCM encryption process, I get data like this:

total = 4014, nbytes = 2078, tail = 14
total = 1950, nbytes = 16, tail = 0
total = 1934, nbytes = 311, tail = 7
total = 1630, nbytes = 16, tail = 0
total = 1614, nbytes = 16, tail = 0
total = 1598, nbytes = 1598, tail = 14

Is my understanding wrong?

Best regards,
Tianjia

2023-01-30 07:35:34

by Tianjia Zhang

Subject: [PATCH v2] crypto: arm64/sm4 - Fix possible crash in GCM cryption

When the total cryption length is zero, the call to skcipher_walk_done()
in GCM cryption causes an unexpected crash, so skip calling this
function when the GCM cryption length is equal to zero.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <[email protected]>
---
arch/arm64/crypto/sm4-ce-gcm-glue.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..29aa7470281d 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -178,6 +178,9 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,

 		kernel_neon_end();
 
+		if (unlikely(!walk->nbytes))
+			break;
+
 		err = skcipher_walk_done(walk, tail);
 		if (err)
 			return err;
--
2.24.3 (Apple Git-128)


2023-01-30 08:16:30

by Herbert Xu

Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

On Mon, Jan 30, 2023 at 03:34:42PM +0800, Tianjia Zhang wrote:
>
> I printed walk->nbytes for each iteration of the walker. With the
> algorithm test manager turned on, it is not always a multiple of
> chunksize except at the end.

Sorry I was mistaken. We only guarantee that a minimum of chunksize
bytes is given to you until the very end, not that it is exactly a
multiple of chunksize.

While you still need to compute tail, you could get rid of the else if
check as walk->nbytes - tail cannot be zero (we must provide you with
at least one chunk before the end):

	if (walk->nbytes == walk->total) {
		tail = 0;

		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes, ghash,
				       ctx->ghash_table,
				       (const u8 *)&lengths);
	} else {
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes - tail, ghash,
				       ctx->ghash_table, NULL);
	}

In fact we could rewrite it like this:

	unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
	unsigned int nbytes = walk->nbytes - tail;
	const u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *lp = NULL;

	if (walk->nbytes == walk->total) {
		nbytes = walk->nbytes;
		tail = 0;
		lp = (u8 *)&lengths;
	}

	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
			       nbytes, ghash, ctx->ghash_table, lp);

The second part of that loop could also be rewritten as:

		kernel_neon_end();

		err = skcipher_walk_done(walk, tail);
		if (!walk->nbytes)
			return err;

		kernel_neon_begin();
	} while (1);

Actually I think there is a serious bug here. If you're doing an
empty message, you must not call skcipher_walk_done as that may
then free random uninitialised stack memory.

Did you copy this code from somewhere else? If so, wherever you got
it from needs to be fixed too. The loop should look like this:

	if (!walk->nbytes) {
		/* iv may be unaligned as the walker didn't run at all. */
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv,
				       0, ghash, ctx->ghash_table,
				       (u8 *)&lengths);
		kernel_neon_end();
		return 0;
	}

	do {
		...
	}

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-01-30 09:01:31

by Herbert Xu

Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>
> Actually I think there is a serious bug here. If you're doing an
> empty message, you must not call skcipher_walk_done as that may
> then free random uninitialised stack memory.

Hah, I had forgotten that this thread started with your patch
to fix this exact bug :)

Could you confirm that you did copy this from ccm?

It would be nice if you could rewrite your loop in a form similar
to my patch to ccm.

Thanks,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2023-01-31 09:39:58

by Tianjia Zhang

Subject: Re: [PATCH] crypto: arm64/sm4 - Fix possible crash in GCM cryption

Hi Herbert,

On 1/30/23 5:01 PM, Herbert Xu wrote:
> On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>>
>> Actually I think there is a serious bug here. If you're doing an
>> empty message, you must not call skcipher_walk_done as that may
>> then free random uninitialised stack memory.
>
> Hah, I had forgotten that this thread started with your patch
> to fix this exact bug :)
>
> Could you confirm that you did copy this from ccm?
>
> It would be nice if you could rewrite your loop in a form similar
> to my patch to ccm.
>
> Thanks,

This code was copied from both gcm and ccm. I am not sure which
contributed more, but I will rewrite the sm4 gcm and ccm cryption
loops as soon as possible.

Cheers,
Tianjia