From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Dave Martin,
	Mark Brown, Herbert Xu, Eric Biggers, Will Deacon, Catalin Marinas,
	Thomas Gleixner, Peter Zijlstra, Sebastian Andrzej Siewior,
	Ingo Molnar, Andy Lutomirski
Subject: [PATCH v2 6/9] crypto: arm64/gcm-aes-ce - remove non-SIMD fallback path
Date: Tue, 2 Mar 2021 10:01:15 +0100
Message-Id: <20210302090118.30666-7-ardb@kernel.org>
In-Reply-To: <20210302090118.30666-1-ardb@kernel.org>
References: <20210302090118.30666-1-ardb@kernel.org>

Now that kernel mode SIMD is guaranteed to be available when executing
in task or softirq context, we no longer need scalar fallbacks to use
when the NEON is unavailable. So get rid of them.
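As an illustration of the pattern this patch removes (not part of the
patch itself): the change collapses a two-path implementation into a
single NEON path. In the stand-alone C sketch below, kernel_neon_begin(),
kernel_neon_end() and the two worker routines are stubbed with
placeholder names and empty bodies — assumptions made so the control-flow
difference compiles as an ordinary user-space program, not the driver's
real signatures.

#include <stdbool.h>
#include <stdio.h>

/* Stubs standing in for the kernel primitives; placeholder names only. */
static void kernel_neon_begin(void) { }	/* would claim the FP/SIMD unit */
static void kernel_neon_end(void) { }	/* would release it again */
static void pmull_path(void) { puts("NEON/PMULL bulk path"); }
static void scalar_path(void) { puts("scalar fallback path"); }

/* Before: every call site had to test whether SIMD could be used. */
static void gcm_op_before(bool simd_usable)
{
	if (simd_usable) {
		kernel_neon_begin();
		pmull_path();
		kernel_neon_end();
	} else {
		scalar_path();
	}
}

/* After: task/softirq context now guarantees SIMD availability,
 * so the NEON region is entered unconditionally. */
static void gcm_op_after(void)
{
	kernel_neon_begin();
	pmull_path();
	kernel_neon_end();
}

int main(void)
{
	gcm_op_before(false);	/* the case the old code had to handle */
	gcm_op_after();		/* the only path that remains */
	return 0;
}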
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/ghash-ce-glue.c | 209 +++++---------------
 1 file changed, 51 insertions(+), 158 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 720cd3a58da3..15794fe21a0b 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -362,84 +362,36 @@ static int gcm_encrypt(struct aead_request *req)
 
 	err = skcipher_walk_aead_encrypt(&walk, req, false);
 
-	if (likely(crypto_simd_usable())) {
-		do {
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int nbytes = walk.nbytes;
-
-			tag = (u8 *)&lengths;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
-				src = dst = memcpy(buf + sizeof(buf) - nbytes,
-						   src, nbytes);
-			} else if (nbytes < walk.total) {
-				nbytes &= ~(AES_BLOCK_SIZE - 1);
-				tag = NULL;
-			}
-
-			kernel_neon_begin();
-			pmull_gcm_encrypt(nbytes, dst, src, ctx->ghash_key.h,
-					  dg, iv, ctx->aes_key.key_enc, nrounds,
-					  tag);
-			kernel_neon_end();
-
-			if (unlikely(!nbytes))
-				break;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
-				memcpy(walk.dst.virt.addr,
-				       buf + sizeof(buf) - nbytes, nbytes);
-
-			err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
-		} while (walk.nbytes);
-	} else {
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int remaining = blocks;
-
-			do {
-				aes_encrypt(&ctx->aes_key, buf, iv);
-				crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
-				crypto_inc(iv, AES_BLOCK_SIZE);
-
-				dst += AES_BLOCK_SIZE;
-				src += AES_BLOCK_SIZE;
-			} while (--remaining > 0);
-
-			ghash_do_update(blocks, dg, walk.dst.virt.addr,
-					&ctx->ghash_key, NULL);
-
-			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
-		}
-
-		/* handle the tail */
-		if (walk.nbytes) {
-			aes_encrypt(&ctx->aes_key, buf, iv);
+	do {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		int nbytes = walk.nbytes;
 
-			crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr,
-				       buf, walk.nbytes);
+		tag = (u8 *)&lengths;
 
-			memcpy(buf, walk.dst.virt.addr, walk.nbytes);
-			memset(buf + walk.nbytes, 0, sizeof(buf) - walk.nbytes);
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
+			src = dst = memcpy(buf + sizeof(buf) - nbytes,
+					   src, nbytes);
+		} else if (nbytes < walk.total) {
+			nbytes &= ~(AES_BLOCK_SIZE - 1);
+			tag = NULL;
 		}
 
-		tag = (u8 *)&lengths;
-		ghash_do_update(1, dg, tag, &ctx->ghash_key,
-				walk.nbytes ? buf : NULL);
+		kernel_neon_begin();
+		pmull_gcm_encrypt(nbytes, dst, src, ctx->ghash_key.h,
+				  dg, iv, ctx->aes_key.key_enc, nrounds,
+				  tag);
+		kernel_neon_end();
 
-		if (walk.nbytes)
-			err = skcipher_walk_done(&walk, 0);
+		if (unlikely(!nbytes))
+			break;
 
-		put_unaligned_be64(dg[1], tag);
-		put_unaligned_be64(dg[0], tag + 8);
-		put_unaligned_be32(1, iv + GCM_IV_SIZE);
-		aes_encrypt(&ctx->aes_key, iv, iv);
-		crypto_xor(tag, iv, AES_BLOCK_SIZE);
-	}
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
+			memcpy(walk.dst.virt.addr,
+			       buf + sizeof(buf) - nbytes, nbytes);
+
+		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+	} while (walk.nbytes);
 
 	if (err)
 		return err;
@@ -464,6 +416,7 @@ static int gcm_decrypt(struct aead_request *req)
 	u64 dg[2] = {};
 	be128 lengths;
 	u8 *tag;
+	int ret;
 	int err;
 
 	lengths.a = cpu_to_be64(req->assoclen * 8);
@@ -481,101 +434,41 @@ static int gcm_decrypt(struct aead_request *req)
 
 	err = skcipher_walk_aead_decrypt(&walk, req, false);
 
-	if (likely(crypto_simd_usable())) {
-		int ret;
-
-		do {
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-			int nbytes = walk.nbytes;
-
-			tag = (u8 *)&lengths;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
-				src = dst = memcpy(buf + sizeof(buf) - nbytes,
-						   src, nbytes);
-			} else if (nbytes < walk.total) {
-				nbytes &= ~(AES_BLOCK_SIZE - 1);
-				tag = NULL;
-			}
-
-			kernel_neon_begin();
-			ret = pmull_gcm_decrypt(nbytes, dst, src,
-						ctx->ghash_key.h,
-						dg, iv, ctx->aes_key.key_enc,
-						nrounds, tag, otag, authsize);
-			kernel_neon_end();
-
-			if (unlikely(!nbytes))
-				break;
-
-			if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
-				memcpy(walk.dst.virt.addr,
-				       buf + sizeof(buf) - nbytes, nbytes);
-
-			err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
-		} while (walk.nbytes);
-
-		if (err)
-			return err;
-		if (ret)
-			return -EBADMSG;
-	} else {
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
-			const u8 *src = walk.src.virt.addr;
-			u8 *dst = walk.dst.virt.addr;
-
-			ghash_do_update(blocks, dg, walk.src.virt.addr,
-					&ctx->ghash_key, NULL);
-
-			do {
-				aes_encrypt(&ctx->aes_key, buf, iv);
-				crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
-				crypto_inc(iv, AES_BLOCK_SIZE);
-
-				dst += AES_BLOCK_SIZE;
-				src += AES_BLOCK_SIZE;
-			} while (--blocks > 0);
+	do {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		int nbytes = walk.nbytes;
 
-			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
-		}
+		tag = (u8 *)&lengths;
 
-		/* handle the tail */
-		if (walk.nbytes) {
-			memcpy(buf, walk.src.virt.addr, walk.nbytes);
-			memset(buf + walk.nbytes, 0, sizeof(buf) - walk.nbytes);
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE)) {
+			src = dst = memcpy(buf + sizeof(buf) - nbytes,
+					   src, nbytes);
+		} else if (nbytes < walk.total) {
+			nbytes &= ~(AES_BLOCK_SIZE - 1);
+			tag = NULL;
 		}
 
-		tag = (u8 *)&lengths;
-		ghash_do_update(1, dg, tag, &ctx->ghash_key,
-				walk.nbytes ? buf : NULL);
-
-		if (walk.nbytes) {
-			aes_encrypt(&ctx->aes_key, buf, iv);
+		kernel_neon_begin();
+		ret = pmull_gcm_decrypt(nbytes, dst, src, ctx->ghash_key.h,
+					dg, iv, ctx->aes_key.key_enc,
+					nrounds, tag, otag, authsize);
+		kernel_neon_end();
 
-			crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr,
-				       buf, walk.nbytes);
+		if (unlikely(!nbytes))
+			break;
 
-			err = skcipher_walk_done(&walk, 0);
-		}
+		if (unlikely(nbytes > 0 && nbytes < AES_BLOCK_SIZE))
+			memcpy(walk.dst.virt.addr,
+			       buf + sizeof(buf) - nbytes, nbytes);
 
-		if (err)
-			return err;
+		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+	} while (walk.nbytes);
 
-		put_unaligned_be64(dg[1], tag);
-		put_unaligned_be64(dg[0], tag + 8);
-		put_unaligned_be32(1, iv + GCM_IV_SIZE);
-		aes_encrypt(&ctx->aes_key, iv, iv);
-		crypto_xor(tag, iv, AES_BLOCK_SIZE);
+	if (err)
+		return err;
 
-		if (crypto_memneq(tag, otag, authsize)) {
-			memzero_explicit(tag, AES_BLOCK_SIZE);
-			return -EBADMSG;
-		}
-	}
-	return 0;
+	return ret ? -EBADMSG : 0;
 }
 
 static struct aead_alg gcm_aes_alg = {
-- 
2.30.1
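One detail worth noting in the fast path that survives above: a final
partial block is staged at the end of a stack buffer, via
src = dst = memcpy(buf + sizeof(buf) - nbytes, src, nbytes);, so the
PMULL routine can load and store whole 16-byte blocks without touching
memory outside the walk. The following minimal user-space sketch of that
same staging idiom uses made-up data and a hypothetical buffer layout;
it is an illustration, not the driver's code.

#include <stdio.h>
#include <string.h>

#define AES_BLOCK_SIZE 16

int main(void)
{
	/* Stage a 5-byte tail at the END of a block-multiple buffer, as
	 * gcm_encrypt()/gcm_decrypt() do for a final partial block, so a
	 * routine that reads and writes whole 16-byte blocks stays inside
	 * the buffer. */
	unsigned char buf[3 * AES_BLOCK_SIZE] = { 0 };
	const unsigned char src[5] = { 1, 2, 3, 4, 5 };
	size_t nbytes = sizeof(src);

	unsigned char *p = memcpy(buf + sizeof(buf) - nbytes, src, nbytes);

	/* After processing, the driver copies the result back out the same
	 * way: memcpy(dst, buf + sizeof(buf) - nbytes, nbytes). */
	printf("tail staged at offset %zu of %zu\n",
	       (size_t)(p - buf), sizeof(buf));
	return 0;
}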