When processing the last block, the s390 ctr code will always read
a whole block, even if there isn't a whole block of data left. Fix
this by using the actual length left and copying it into a buffer
first for processing.
Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
Cc: <[email protected]>
Reported-by: Guangwu Zhang <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index c773820e4af9..c6fe5405de4a 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
+		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
 			    AES_BLOCK_SIZE, walk.iv);
 		memcpy(walk.dst.virt.addr, buf, nbytes);
 		crypto_inc(walk.iv, AES_BLOCK_SIZE);
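The change boils down to the standard partial-block pattern: zero-pad the
tail into a stack buffer, let the engine process a full block, and copy back
only nbytes. A minimal standalone sketch, with a stubbed XOR "cipher"
standing in for cpacf_kmctr (the stub and its names are illustrative, not
the CPACF API):

```c
#include <string.h>

#define BLK 16

/* Stand-in for the CPACF instruction: always processes exactly one
 * full block in place (here just XOR with a fixed byte). */
static void fake_kmctr(unsigned char *buf)
{
	int i;

	for (i = 0; i < BLK; i++)
		buf[i] ^= 0x5a;
}

/* Safe tail handling: never read more than nbytes from src and never
 * write more than nbytes to dst, even though the engine sees BLK bytes. */
static void ctr_final(unsigned char *dst, const unsigned char *src,
		      size_t nbytes)
{
	unsigned char buf[BLK];

	memset(buf, 0, BLK);		/* pad the partial block */
	memcpy(buf, src, nbytes);	/* read only what is there */
	fake_kmctr(buf);		/* engine sees a full block */
	memcpy(dst, buf, nbytes);	/* write back only nbytes */
}
```

The point of the fix is exactly the first memcpy: the source read is bounded
by nbytes instead of the block size, so the walk buffer is never overread.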
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
On 2023-11-28 07:22, Herbert Xu wrote:
> When processing the last block, the s390 ctr code will always read
> a whole block, even if there isn't a whole block of data left. Fix
> this by using the actual length left and copy it into a buffer first
> for processing.
>
> Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
> Cc: <[email protected]>
> Reported-by: Guangwu Zhang <[email protected]>
> Signed-off-by: Herbert Xu <[email protected]>
>
> diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
> index c773820e4af9..c6fe5405de4a 100644
> --- a/arch/s390/crypto/aes_s390.c
> +++ b/arch/s390/crypto/aes_s390.c
> @@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
> * final block may be < AES_BLOCK_SIZE, copy only nbytes
> */
> if (nbytes) {
> - cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
> + memset(buf, 0, AES_BLOCK_SIZE);
> + memcpy(buf, walk.src.virt.addr, nbytes);
> + cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
> AES_BLOCK_SIZE, walk.iv);
> memcpy(walk.dst.virt.addr, buf, nbytes);
> crypto_inc(walk.iv, AES_BLOCK_SIZE);
Reviewed-by: Harald Freudenberger <[email protected]>
There is similar code in paes_s390.c. I'll send a patch for that.
On 2023-11-28 07:22, Herbert Xu wrote:
> [...]
Here is a similar fix for the s390 paes ctr cipher. Compiles and is
tested. You may merge this with your patch for the s390 aes cipher.
--------------------------------------------------------------------------------
diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 8b541e44151d..55ee5567a5ea 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -693,9 +693,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
 		while (1) {
 			if (cpacf_kmctr(ctx->fc, &param, buf,
-					walk.src.virt.addr, AES_BLOCK_SIZE,
+					buf, AES_BLOCK_SIZE,
 					walk.iv) == AES_BLOCK_SIZE)
 				break;
 			if (__paes_convert_key(ctx))
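For paes the same padding applies, but the kmctr call sits in a retry loop
because the protected key may have become invalid and needs reconversion
before the block can be processed. A sketch of that control flow, with stubs
standing in for cpacf_kmctr and __paes_convert_key (stub names and behavior
are illustrative only):

```c
#include <string.h>

#define BLK 16

static int calls;

/* Stub: report 0 bytes processed on the first call to mimic an
 * invalid protected key, then succeed with a full block. */
static int fake_kmctr(unsigned char *buf)
{
	int i;

	if (calls++ == 0)
		return 0;		/* key invalid, nothing processed */
	for (i = 0; i < BLK; i++)
		buf[i] ^= 0x5a;
	return BLK;			/* full block processed */
}

static int fake_convert_key(void)
{
	return 0;			/* 0 = reconversion succeeded */
}

/* Pad once up front, then retry until the engine reports a full
 * block, reconverting the key after each failed attempt. */
static int ctr_final_retry(unsigned char *dst, const unsigned char *src,
			   size_t nbytes)
{
	unsigned char buf[BLK];

	memset(buf, 0, BLK);
	memcpy(buf, src, nbytes);
	while (1) {
		if (fake_kmctr(buf) == BLK)
			break;
		if (fake_convert_key())
			return -1;	/* key conversion failed */
	}
	memcpy(dst, buf, nbytes);
	return 0;
}
```

Note the padding happens once before the loop, so a retry after key
reconversion reuses the already-copied buffer rather than rereading src.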
On Tue, Nov 28, 2023 at 02:18:02PM +0100, Harald Freudenberger wrote:
>
> Here is a similar fix for the s390 paes ctr cipher. Compiles and is
> tested. You may merge this with your patch for the s390 aes cipher.
Thank you. I had to apply this by hand so please check the result
which I've just pushed out to cryptodev.
Cheers,
--
Email: Herbert Xu <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt