From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, ebiggers@kernel.org,
	herbert@gondor.apana.org.au, will@kernel.org, kernel-team@android.com,
	Ard Biesheuvel <ardb@kernel.org>, Eric Biggers <ebiggers@google.com>
Subject: [PATCH v6 1/6] crypto: arm64/gcm-aes-ce - remove non-SIMD fallback path
Date: Wed, 26 May 2021 12:07:24 +0200
Message-Id: <20210526100729.12939-2-ardb@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210526100729.12939-1-ardb@kernel.org>
References: <20210526100729.12939-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Now that kernel mode SIMD is guaranteed to be available when executing
in task or softirq context, we no longer need scalar fallbacks for use
when the NEON is unavailable. So get rid of them.
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/ghash-ce-glue.c | 209 +++++---------------
 1 file changed, 51 insertions(+), 158 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 720cd3a58da3..15794fe21a0b 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -362,84 +362,36 @@ static int gcm_encrypt(struct aead_request *req)
 
 	err = skcipher_walk_aead_encrypt(&walk, req, false);
 
-	if (likely(crypto_simd_usable())) {
-		do {
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int nbytes = walk.nbytes;
-
-			tag = (u8 *)&lengths;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
-				src = dst = memcpy(buf + sizeof(buf) - nbytes,
-						   src, nbytes);
-			} else if (nbytes < walk.total) {
-				nbytes &= ~(AES_BLOCK_SIZE - 1);
-				tag = NULL;
-			}
-
-			kernel_neon_begin();
-			pmull_gcm_encrypt(nbytes, dst, src, ctx->ghash_key.h,
-					  dg, iv, ctx->aes_key.key_enc, nrounds,
-					  tag);
-			kernel_neon_end();
-
-			if (unlikely(!nbytes))
-				break;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
-				memcpy(walk.dst.virt.addr,
-				       buf + sizeof(buf) - nbytes, nbytes);
-
-			err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
-		} while (walk.nbytes);
-	} else {
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int remaining = blocks;
-
-			do {
-				aes_encrypt(&ctx->aes_key, buf, iv);
-				crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
-				crypto_inc(iv, AES_BLOCK_SIZE);
-
-				dst += AES_BLOCK_SIZE;
-				src += AES_BLOCK_SIZE;
-			} while (--remaining > 0);
-
-			ghash_do_update(blocks, dg, walk.dst.virt.addr,
-					&ctx->ghash_key, NULL);
-
-			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
-		}
-
-		/* handle the tail */
-		if (walk.nbytes) {
-			aes_encrypt(&ctx->aes_key, buf, iv);
+	do {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		int nbytes = walk.nbytes;
 
-			crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr,
-				       buf, walk.nbytes);
+		tag = (u8 *)&lengths;
 
-			memcpy(buf, walk.dst.virt.addr, walk.nbytes);
-			memset(buf + walk.nbytes, 0, sizeof(buf) - walk.nbytes);
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
+			src = dst = memcpy(buf + sizeof(buf) - nbytes,
+					   src, nbytes);
+		} else if (nbytes < walk.total) {
+			nbytes &= ~(AES_BLOCK_SIZE - 1);
+			tag = NULL;
 		}
 
-		tag = (u8 *)&lengths;
-		ghash_do_update(1, dg, tag, &ctx->ghash_key,
-				walk.nbytes ? buf : NULL);
+		kernel_neon_begin();
+		pmull_gcm_encrypt(nbytes, dst, src, ctx->ghash_key.h,
+				  dg, iv, ctx->aes_key.key_enc, nrounds,
+				  tag);
+		kernel_neon_end();
 
-		if (walk.nbytes)
-			err = skcipher_walk_done(&walk, 0);
+		if (unlikely(!nbytes))
+			break;
 
-		put_unaligned_be64(dg[1], tag);
-		put_unaligned_be64(dg[0], tag + 8);
-		put_unaligned_be32(1, iv + GCM_IV_SIZE);
-		aes_encrypt(&ctx->aes_key, iv, iv);
-		crypto_xor(tag, iv, AES_BLOCK_SIZE);
-	}
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
+			memcpy(walk.dst.virt.addr,
+			       buf + sizeof(buf) - nbytes, nbytes);
+
+		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+	} while (walk.nbytes);
 
 	if (err)
 		return err;
@@ -464,6 +416,7 @@ static int gcm_decrypt(struct aead_request *req)
 	u64 dg[2] = {};
 	be128 lengths;
 	u8 *tag;
+	int ret;
 	int err;
 
 	lengths.a = cpu_to_be64(req->assoclen * 8);
@@ -481,101 +434,41 @@ static int gcm_decrypt(struct aead_request *req)
 
 	err = skcipher_walk_aead_decrypt(&walk, req, false);
 
-	if (likely(crypto_simd_usable())) {
-		int ret;
-
-		do {
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int nbytes = walk.nbytes;
-
-			tag = (u8 *)&lengths;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
-				src = dst = memcpy(buf + sizeof(buf) - nbytes,
-						   src, nbytes);
-			} else if (nbytes < walk.total) {
-				nbytes &= ~(AES_BLOCK_SIZE - 1);
-				tag = NULL;
-			}
-
-			kernel_neon_begin();
-			ret = pmull_gcm_decrypt(nbytes, dst, src,
-						ctx->ghash_key.h,
-						dg, iv, ctx->aes_key.key_enc,
-						nrounds, tag, otag, authsize);
-			kernel_neon_end();
-
-			if (unlikely(!nbytes))
-				break;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
-				memcpy(walk.dst.virt.addr,
-				       buf + sizeof(buf) - nbytes, nbytes);
-
-			err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
-		} while (walk.nbytes);
-
-		if (err)
-			return err;
-		if (ret)
-			return -EBADMSG;
-	} else {
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-
-			ghash_do_update(blocks, dg, walk.src.virt.addr,
-					&ctx->ghash_key, NULL);
-
-			do {
-				aes_encrypt(&ctx->aes_key, buf, iv);
-				crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
-				crypto_inc(iv, AES_BLOCK_SIZE);
-
-				dst += AES_BLOCK_SIZE;
-				src += AES_BLOCK_SIZE;
-			} while (--blocks > 0);
+	do {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		int nbytes = walk.nbytes;
 
-			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
-		}
+		tag = (u8 *)&lengths;
 
-		/* handle the tail */
-		if (walk.nbytes) {
-			memcpy(buf, walk.src.virt.addr, walk.nbytes);
-			memset(buf + walk.nbytes, 0, sizeof(buf) - walk.nbytes);
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
+			src = dst = memcpy(buf + sizeof(buf) - nbytes,
+					   src, nbytes);
+		} else if (nbytes < walk.total) {
+			nbytes &= ~(AES_BLOCK_SIZE - 1);
+			tag = NULL;
 		}
 
-		tag = (u8 *)&lengths;
-		ghash_do_update(1, dg, tag, &ctx->ghash_key,
-				walk.nbytes ? buf : NULL);
-
-		if (walk.nbytes) {
-			aes_encrypt(&ctx->aes_key, buf, iv);
+		kernel_neon_begin();
+		ret = pmull_gcm_decrypt(nbytes, dst, src, ctx->ghash_key.h,
+					dg, iv, ctx->aes_key.key_enc,
+					nrounds, tag, otag, authsize);
+		kernel_neon_end();
 
-			crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr,
-				       buf, walk.nbytes);
+		if (unlikely(!nbytes))
+			break;
 
-			err = skcipher_walk_done(&walk, 0);
-		}
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
+			memcpy(walk.dst.virt.addr,
+			       buf + sizeof(buf) - nbytes, nbytes);
 
-		if (err)
-			return err;
+		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+	} while (walk.nbytes);
 
-		put_unaligned_be64(dg[1], tag);
-		put_unaligned_be64(dg[0], tag + 8);
-		put_unaligned_be32(1, iv + GCM_IV_SIZE);
-		aes_encrypt(&ctx->aes_key, iv, iv);
-		crypto_xor(tag, iv, AES_BLOCK_SIZE);
+	if (err)
+		return err;
 
-		if (crypto_memneq(tag, otag, authsize)) {
-			memzero_explicit(tag, AES_BLOCK_SIZE);
-			return -EBADMSG;
-		}
-	}
-	return 0;
+	return ret ? -EBADMSG : 0;
 }
 
 static struct aead_alg gcm_aes_alg = {
-- 
2.20.1
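
[Editor's note: a minimal before/after sketch of the call-site pattern this
patch removes. It is illustrative only: do_transform_old()/do_transform_new()
are hypothetical helpers, while kernel_neon_begin()/kernel_neon_end() and
crypto_simd_usable() are the real kernel APIs used in the diff above.]

#include <linux/types.h>
#include <asm/neon.h>			/* kernel_neon_begin(), kernel_neon_end() */
#include <crypto/internal/simd.h>	/* crypto_simd_usable() */

/*
 * Before: every caller had to branch on SIMD availability and carry a
 * duplicated scalar fallback for contexts where the NEON was off limits.
 */
static void do_transform_old(u8 *dst, const u8 *src, int nbytes)
{
	if (crypto_simd_usable()) {
		kernel_neon_begin();
		/* NEON/PMULL-accelerated implementation */
		kernel_neon_end();
	} else {
		/* scalar C fallback, duplicating the logic above */
	}
}

/*
 * After: kernel mode SIMD is guaranteed to be usable in task and softirq
 * context, so the NEON path can be entered unconditionally and the scalar
 * fallback deleted.
 */
static void do_transform_new(u8 *dst, const u8 *src, int nbytes)
{
	kernel_neon_begin();
	/* NEON/PMULL-accelerated implementation */
	kernel_neon_end();
}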