From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, herbert@gondor.apana.org.au,
	ebiggers@kernel.org, Ard Biesheuvel <ardb@kernel.org>,
	Eric Biggers <ebiggers@kernel.org>
Subject: [PATCH v7 5/7] crypto: arm64/aes-ccm - remove non-SIMD fallback path
Date: Fri, 27 Aug 2021 09:03:40 +0200
Message-Id: <20210827070342.218276-6-ardb@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210827070342.218276-1-ardb@kernel.org>
References: <20210827070342.218276-1-ardb@kernel.org>

AES/CCM on arm64 is implemented as a synchronous AEAD, so the API
guarantees that it is only ever invoked in task or softirq context.
Since softirqs are now only handled when the SIMD unit is not being
used in the task context that was interrupted to service the softirq,
we no longer need a fallback path. Let's remove it.
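As background, here is a minimal caller-side sketch (illustrative
only, not part of this patch; the helper name is made up) of why a
synchronous AEAD is confined to task or softirq context: callers that
need inline completion allocate the transform with CRYPTO_ALG_ASYNC
masked out, as mac80211 does for CCMP, so crypto_aead_encrypt() and
crypto_aead_decrypt() run this driver's code directly in the caller's
context:

	#include <crypto/aead.h>
	#include <linux/err.h>

	/*
	 * Illustrative only: allocate a synchronous "ccm(aes)"
	 * transform. Masking out CRYPTO_ALG_ASYNC excludes async
	 * implementations, so encrypt/decrypt calls complete inline
	 * in the caller's task or softirq context -- the guarantee
	 * the commit message relies on.
	 */
	static struct crypto_aead *alloc_sync_ccm_aes(const u8 *key,
						      unsigned int keylen)
	{
		struct crypto_aead *tfm;
		int err;

		tfm = crypto_alloc_aead("ccm(aes)", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return tfm;

		err = crypto_aead_setkey(tfm, key, keylen);
		if (!err)
			err = crypto_aead_setauthsize(tfm, 16); /* example tag size */
		if (err) {
			crypto_free_aead(tfm);
			return ERR_PTR(err);
		}
		return tfm;
	}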
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
 arch/arm64/crypto/aes-ce-ccm-glue.c | 153 ++++----------------
 1 file changed, 32 insertions(+), 121 deletions(-)

diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index fe9c837ac4b9..c1f221a181a5 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -6,12 +6,10 @@
  */
 
 #include <asm/neon.h>
-#include <asm/simd.h>
 #include <asm/unaligned.h>
 #include <crypto/aes.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/internal/aead.h>
-#include <crypto/internal/simd.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/module.h>
 
@@ -99,36 +97,10 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
 static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 			   u32 abytes, u32 *macp)
 {
-	if (crypto_simd_usable()) {
-		kernel_neon_begin();
-		ce_aes_ccm_auth_data(mac, in, abytes, macp, key->key_enc,
-				     num_rounds(key));
-		kernel_neon_end();
-	} else {
-		if (*macp > 0 && *macp < AES_BLOCK_SIZE) {
-			int added = min(abytes, AES_BLOCK_SIZE - *macp);
-
-			crypto_xor(&mac[*macp], in, added);
-
-			*macp += added;
-			in += added;
-			abytes -= added;
-		}
-
-		while (abytes >= AES_BLOCK_SIZE) {
-			aes_encrypt(key, mac, mac);
-			crypto_xor(mac, in, AES_BLOCK_SIZE);
-
-			in += AES_BLOCK_SIZE;
-			abytes -= AES_BLOCK_SIZE;
-		}
-
-		if (abytes > 0) {
-			aes_encrypt(key, mac, mac);
-			crypto_xor(mac, in, abytes);
-			*macp = abytes;
-		}
-	}
+	kernel_neon_begin();
+	ce_aes_ccm_auth_data(mac, in, abytes, macp, key->key_enc,
+			     num_rounds(key));
+	kernel_neon_end();
 }
 
 static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
@@ -172,54 +144,6 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 	} while (len);
 }
 
-static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
-			      struct crypto_aes_ctx *ctx, bool enc)
-{
-	u8 buf[AES_BLOCK_SIZE];
-	int err = 0;
-
-	while (walk->nbytes) {
-		int blocks = walk->nbytes / AES_BLOCK_SIZE;
-		u32 tail = walk->nbytes % AES_BLOCK_SIZE;
-		u8 *dst = walk->dst.virt.addr;
-		u8 *src = walk->src.virt.addr;
-		u32 nbytes = walk->nbytes;
-
-		if (nbytes == walk->total && tail > 0) {
-			blocks++;
-			tail = 0;
-		}
-
-		do {
-			u32 bsize = AES_BLOCK_SIZE;
-
-			if (nbytes < AES_BLOCK_SIZE)
-				bsize = nbytes;
-
-			crypto_inc(walk->iv, AES_BLOCK_SIZE);
-			aes_encrypt(ctx, buf, walk->iv);
-			aes_encrypt(ctx, mac, mac);
-			if (enc)
-				crypto_xor(mac, src, bsize);
-			crypto_xor_cpy(dst, src, buf, bsize);
-			if (!enc)
-				crypto_xor(mac, dst, bsize);
-			dst += bsize;
-			src += bsize;
-			nbytes -= bsize;
-		} while (--blocks);
-
-		err = skcipher_walk_done(walk, tail);
-	}
-
-	if (!err) {
-		aes_encrypt(ctx, buf, iv0);
-		aes_encrypt(ctx, mac, mac);
-		crypto_xor(mac, buf, AES_BLOCK_SIZE);
-	}
-	return err;
-}
-
 static int ccm_encrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -242,30 +166,24 @@ static int ccm_encrypt(struct aead_request *req)
 	err = skcipher_walk_aead_encrypt(&walk, req, false);
 
-	if (crypto_simd_usable()) {
-		while (walk.nbytes) {
-			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+	while (walk.nbytes) {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
-			if (walk.nbytes == walk.total)
-				tail = 0;
+		if (walk.nbytes == walk.total)
+			tail = 0;
 
-			kernel_neon_begin();
-			ce_aes_ccm_encrypt(walk.dst.virt.addr,
-					   walk.src.virt.addr,
-					   walk.nbytes - tail, ctx->key_enc,
-					   num_rounds(ctx), mac, walk.iv);
-			kernel_neon_end();
+		kernel_neon_begin();
+		ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				   walk.nbytes - tail, ctx->key_enc,
+				   num_rounds(ctx), mac, walk.iv);
+		kernel_neon_end();
 
-			err = skcipher_walk_done(&walk, tail);
-		}
-		if (!err) {
-			kernel_neon_begin();
-			ce_aes_ccm_final(mac, buf, ctx->key_enc,
-					 num_rounds(ctx));
-			kernel_neon_end();
-		}
-	} else {
-		err = ccm_crypt_fallback(&walk, mac, buf, ctx, true);
+		err = skcipher_walk_done(&walk, tail);
+	}
+	if (!err) {
+		kernel_neon_begin();
+		ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+		kernel_neon_end();
 	}
 
 	if (err)
 		return err;
@@ -300,32 +218,25 @@ static int ccm_decrypt(struct aead_request *req)
 
 	err = skcipher_walk_aead_decrypt(&walk, req, false);
 
-	if (crypto_simd_usable()) {
-		while (walk.nbytes) {
-			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+	while (walk.nbytes) {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
-			if (walk.nbytes == walk.total)
-				tail = 0;
+		if (walk.nbytes == walk.total)
+			tail = 0;
 
-			kernel_neon_begin();
-			ce_aes_ccm_decrypt(walk.dst.virt.addr,
-					   walk.src.virt.addr,
+		kernel_neon_begin();
+		ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 					   walk.nbytes - tail, ctx->key_enc,
 					   num_rounds(ctx), mac, walk.iv);
-			kernel_neon_end();
+		kernel_neon_end();
 
-			err = skcipher_walk_done(&walk, tail);
-		}
-		if (!err) {
-			kernel_neon_begin();
-			ce_aes_ccm_final(mac, buf, ctx->key_enc,
-					 num_rounds(ctx));
-			kernel_neon_end();
-		}
-	} else {
-		err = ccm_crypt_fallback(&walk, mac, buf, ctx, false);
+		err = skcipher_walk_done(&walk, tail);
+	}
+	if (!err) {
+		kernel_neon_begin();
+		ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+		kernel_neon_end();
 	}
-
 	if (err)
 		return err;
 
-- 
2.30.2
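
As an editorial aside (a sketch, not part of the patch; the function
name is hypothetical): the invariant that makes the now-unconditional
kernel_neon_begin()/kernel_neon_end() pairs above safe is that, since
the arm64 fpsimd rework that runs kernel-mode NEON with softirqs
disabled, a softirq can never observe a task halfway through a NEON
region:

	#include <asm/neon.h>

	/*
	 * Illustration only: between kernel_neon_begin() and
	 * kernel_neon_end(), preemption is disabled and softirq
	 * processing is held off, so the FPSIMD/NEON register state
	 * cannot be clobbered by a softirq. A synchronous AEAD
	 * running in task or softirq context may therefore claim
	 * the NEON unit unconditionally.
	 */
	static void neon_region_sketch(void)
	{
		kernel_neon_begin();	/* softirqs deferred from here...    */
		/* ...NEON/Crypto Extensions instructions usable here...    */
		kernel_neon_end();	/* ...until here; softirqs run again */
	}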