From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, ebiggers@kernel.org,
    herbert@gondor.apana.org.au, will@kernel.org, kernel-team@android.com,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH v3 7/7] crypto: arm64/aes-ccm - remove non-SIMD fallback path
Date: Wed, 12 May 2021 20:44:39 +0200
Message-Id: <20210512184439.8778-8-ardb@kernel.org>
In-Reply-To: <20210512184439.8778-1-ardb@kernel.org>
References: <20210512184439.8778-1-ardb@kernel.org>

AES/CCM on arm64 is implemented as a synchronous AEAD, and so it is
guaranteed by the API that it is only invoked in task or softirq
context. Since softirqs are now only handled when the SIMD is not
being used in the task context that was interrupted to service the
softirq, we no longer need a fallback path. Let's remove it.
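For illustration, here is a rough sketch of the resulting control flow, distilled
from the hunks below (the helper name ccm_neon_crypt_sketch is made up for this
sketch; ce_aes_ccm_encrypt(), ce_aes_ccm_final() and num_rounds() are the driver's
existing asm routines and helper):

/*
 * Illustrative sketch only -- not part of the patch.  It restates the new
 * encrypt-side walk loop as a standalone helper to show the NEON bracketing
 * that replaces the crypto_simd_usable() fallback.
 */
#include <asm/neon.h>
#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

static int ccm_neon_crypt_sketch(struct skcipher_walk *walk,
				 struct crypto_aes_ctx *ctx,
				 u8 mac[], u8 orig_iv[])
{
	int err;

	kernel_neon_begin();		/* NEON is always usable here */

	do {
		u32 tail = walk->nbytes % AES_BLOCK_SIZE;

		if (walk->nbytes == walk->total)
			tail = 0;	/* last chunk: process all of it */

		ce_aes_ccm_encrypt(walk->dst.virt.addr, walk->src.virt.addr,
				   walk->nbytes - tail, ctx->key_enc,
				   num_rounds(ctx), mac, walk->iv);

		if (walk->nbytes == walk->total)
			ce_aes_ccm_final(mac, orig_iv, ctx->key_enc,
					 num_rounds(ctx));

		kernel_neon_end();	/* drop NEON around a call that may sleep */

		err = skcipher_walk_done(walk, tail);
		if (err)
			return err;

		if (walk->nbytes)	/* more data coming: reclaim NEON */
			kernel_neon_begin();
	} while (walk->nbytes);

	return 0;
}

The kernel_neon_end()/kernel_neon_begin() pair around skcipher_walk_done() avoids
holding the NEON unit across a call that may sleep, which is what allows the walk
to run on the SIMD path unconditionally.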
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/aes-ce-ccm-core.S |   1 +
 arch/arm64/crypto/aes-ce-ccm-glue.c | 183 ++++++--------------
 2 files changed, 53 insertions(+), 131 deletions(-)

diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S
index 99a028e298ed..8adff299fcd3 100644
--- a/arch/arm64/crypto/aes-ce-ccm-core.S
+++ b/arch/arm64/crypto/aes-ce-ccm-core.S
@@ -124,6 +124,7 @@ SYM_FUNC_START(ce_aes_ccm_final)
 SYM_FUNC_END(ce_aes_ccm_final)
 
 	.macro	aes_ccm_do_crypt,enc
+	cbz	x2, 5f
 	ldr	x8, [x6, #8]			/* load lower ctr */
 	ld1	{v0.16b}, [x5]			/* load mac */
 CPU_LE(	rev	x8, x8			)	/* keep swabbed ctr in reg */
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index f6d19b0dc893..a36df98f6fae 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -99,36 +99,8 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
 static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 			   u32 abytes, u32 *macp)
 {
-	if (crypto_simd_usable()) {
-		kernel_neon_begin();
-		ce_aes_ccm_auth_data(mac, in, abytes, macp, key->key_enc,
-				     num_rounds(key));
-		kernel_neon_end();
-	} else {
-		if (*macp > 0 && *macp < AES_BLOCK_SIZE) {
-			int added = min(abytes, AES_BLOCK_SIZE - *macp);
-
-			crypto_xor(&mac[*macp], in, added);
-
-			*macp += added;
-			in += added;
-			abytes -= added;
-		}
-
-		while (abytes >= AES_BLOCK_SIZE) {
-			aes_encrypt(key, mac, mac);
-			crypto_xor(mac, in, AES_BLOCK_SIZE);
-
-			in += AES_BLOCK_SIZE;
-			abytes -= AES_BLOCK_SIZE;
-		}
-
-		if (abytes > 0) {
-			aes_encrypt(key, mac, mac);
-			crypto_xor(mac, in, abytes);
-			*macp = abytes;
-		}
-	}
+	ce_aes_ccm_auth_data(mac, in, abytes, macp, key->key_enc,
+			     num_rounds(key));
 }
 
 static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
@@ -171,54 +143,6 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 	} while (len);
 }
 
-static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
-			      struct crypto_aes_ctx *ctx, bool enc)
-{
-	u8 buf[AES_BLOCK_SIZE];
-	int err = 0;
-
-	while (walk->nbytes) {
-		int blocks = walk->nbytes / AES_BLOCK_SIZE;
-		u32 tail = walk->nbytes % AES_BLOCK_SIZE;
-		u8 *dst = walk->dst.virt.addr;
-		u8 *src = walk->src.virt.addr;
-		u32 nbytes = walk->nbytes;
-
-		if (nbytes == walk->total && tail > 0) {
-			blocks++;
-			tail = 0;
-		}
-
-		do {
-			u32 bsize = AES_BLOCK_SIZE;
-
-			if (nbytes < AES_BLOCK_SIZE)
-				bsize = nbytes;
-
-			crypto_inc(walk->iv, AES_BLOCK_SIZE);
-			aes_encrypt(ctx, buf, walk->iv);
-			aes_encrypt(ctx, mac, mac);
-			if (enc)
-				crypto_xor(mac, src, bsize);
-			crypto_xor_cpy(dst, src, buf, bsize);
-			if (!enc)
-				crypto_xor(mac, dst, bsize);
-			dst += bsize;
-			src += bsize;
-			nbytes -= bsize;
-		} while (--blocks);
-
-		err = skcipher_walk_done(walk, tail);
-	}
-
-	if (!err) {
-		aes_encrypt(ctx, buf, iv0);
-		aes_encrypt(ctx, mac, mac);
-		crypto_xor(mac, buf, AES_BLOCK_SIZE);
-	}
-	return err;
-}
-
 static int ccm_encrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@@ -233,41 +157,40 @@ static int ccm_encrypt(struct aead_request *req)
 	if (err)
 		return err;
 
-	if (req->assoclen)
-		ccm_calculate_auth_mac(req, mac);
-
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
 	err = skcipher_walk_aead_encrypt(&walk, req, false);
+	if (unlikely(err))
+		return err;
 
-	if (crypto_simd_usable()) {
-		while (walk.nbytes) {
-			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+	kernel_neon_begin();
 
-			if (walk.nbytes == walk.total)
-				tail = 0;
+	if (req->assoclen)
+		ccm_calculate_auth_mac(req, mac);
 
-			kernel_neon_begin();
-			ce_aes_ccm_encrypt(walk.dst.virt.addr,
-					   walk.src.virt.addr,
-					   walk.nbytes - tail, ctx->key_enc,
-					   num_rounds(ctx), mac, walk.iv);
-			kernel_neon_end();
+	do {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
-			err = skcipher_walk_done(&walk, tail);
-		}
-		if (!err) {
+		if (walk.nbytes == walk.total)
+			tail = 0;
+
+		ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				   walk.nbytes - tail, ctx->key_enc,
+				   num_rounds(ctx), mac, walk.iv);
+
+		if (walk.nbytes == walk.total)
+			ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+
+		kernel_neon_end();
+
+		err = skcipher_walk_done(&walk, tail);
+		if (unlikely(err))
+			return err;
+
+		if (unlikely(walk.nbytes))
 			kernel_neon_begin();
-			ce_aes_ccm_final(mac, buf, ctx->key_enc,
-					 num_rounds(ctx));
-			kernel_neon_end();
-		}
-	} else {
-		err = ccm_crypt_fallback(&walk, mac, buf, ctx, true);
-	}
-	if (err)
-		return err;
+	} while (walk.nbytes);
 
 	/* copy authtag to end of dst */
 	scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
@@ -291,42 +214,40 @@ static int ccm_decrypt(struct aead_request *req)
 	if (err)
 		return err;
 
-	if (req->assoclen)
-		ccm_calculate_auth_mac(req, mac);
-
 	/* preserve the original iv for the final round */
 	memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
 	err = skcipher_walk_aead_decrypt(&walk, req, false);
+	if (unlikely(err))
+		return err;
 
-	if (crypto_simd_usable()) {
-		while (walk.nbytes) {
-			u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+	kernel_neon_begin();
 
-			if (walk.nbytes == walk.total)
-				tail = 0;
+	if (req->assoclen)
+		ccm_calculate_auth_mac(req, mac);
 
-			kernel_neon_begin();
-			ce_aes_ccm_decrypt(walk.dst.virt.addr,
-					   walk.src.virt.addr,
-					   walk.nbytes - tail, ctx->key_enc,
-					   num_rounds(ctx), mac, walk.iv);
-			kernel_neon_end();
+	do {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
 
-			err = skcipher_walk_done(&walk, tail);
-		}
-		if (!err) {
-			kernel_neon_begin();
-			ce_aes_ccm_final(mac, buf, ctx->key_enc,
-					 num_rounds(ctx));
-			kernel_neon_end();
-		}
-	} else {
-		err = ccm_crypt_fallback(&walk, mac, buf, ctx, false);
-	}
+		if (walk.nbytes == walk.total)
+			tail = 0;
 
-	if (err)
-		return err;
+		ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				   walk.nbytes - tail, ctx->key_enc,
+				   num_rounds(ctx), mac, walk.iv);
+
+		if (walk.nbytes == walk.total)
+			ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+
+		kernel_neon_end();
+
+		err = skcipher_walk_done(&walk, tail);
+		if (unlikely(err))
+			return err;
+
+		if (unlikely(walk.nbytes))
+			kernel_neon_begin();
+	} while (walk.nbytes);
 
 	/* compare calculated auth tag with the stored one */
 	scatterwalk_map_and_copy(buf, req->src,
-- 
2.20.1