From: Eric Biggers
To: linux-crypto@vger.kernel.org, x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Andy Lutomirski, "Chang S. Bae"
Subject: [PATCH 3/6] crypto: x86/aes-xts - wire up AESNI + AVX implementation
Date: Tue, 26 Mar 2024 01:03:01 -0700
Message-ID: <20240326080305.402382-4-ebiggers@kernel.org>
In-Reply-To: <20240326080305.402382-1-ebiggers@kernel.org>
References: <20240326080305.402382-1-ebiggers@kernel.org>

From: Eric Biggers

Add an AES-XTS implementation "xts-aes-aesni-avx" for x86_64 CPUs that
have the AES-NI and AVX extensions but not VAES.  It's similar to the
existing xts-aes-aesni in that it uses xmm registers to operate on one
AES block at a time.  It differs from xts-aes-aesni in the following
ways:

- It uses the VEX-coded (non-destructive) instructions from AVX.
  This improves performance slightly.
- It supports only 64-bit (x86_64).
- It incorporates some small extra optimizations such as handling the
  tweak encryption more efficiently and caching some of the round keys.
- It's generated by an assembly macro that will also be used to
  generate VAES-based implementations.

The performance improvement over xts-aes-aesni varies from negligible
to substantial, depending on the CPU and other factors such as the size
of the messages en/decrypted.  For example, the following increases in
AES-256-XTS decryption throughput are seen on the following CPUs:

                 | 4096-byte messages | 512-byte messages |
   --------------+--------------------+-------------------+
   Intel Skylake |         1%         |        11%        |
   AMD Zen 1     |        25%         |        20%        |
   AMD Zen 2     |        26%         |        20%        |

(The above CPUs don't support VAES, so they can't use VAES instead.)

While this isn't as large an improvement as what VAES provides, this
still seems worthwhile.  This implementation is fairly easy to provide
based on the assembly macro that's needed for VAES anyway, and it will
be the best implementation on a large number of CPUs (very roughly, the
CPUs launched by Intel and AMD from 2011 to 2018).

This makes the existing xts-aes-aesni *mostly* obsolete.  For now,
leave it in place to support 32-bit kernels and also CPUs like Intel
Westmere that support AES-NI but not AVX.  (We could potentially remove
it anyway and just rely on the indirect acceleration via ecb-aes-aesni
in those cases, but that change will need to be considered separately.)

Signed-off-by: Eric Biggers
---
 arch/x86/crypto/aes-xts-avx-x86_64.S |   9 ++
 arch/x86/crypto/aesni-intel_glue.c   | 198 ++++++++++++++++++++++++++-
 2 files changed, 206 insertions(+), 1 deletion(-)

diff --git a/arch/x86/crypto/aes-xts-avx-x86_64.S b/arch/x86/crypto/aes-xts-avx-x86_64.S
index 92f1580e1eb0..a8003fea97b7 100644
--- a/arch/x86/crypto/aes-xts-avx-x86_64.S
+++ b/arch/x86/crypto/aes-xts-avx-x86_64.S
@@ -754,5 +754,14 @@
        // En/decrypt again and store the last full block.
        _aes_crypt      \enc, _XMM, CTS_TWEAK1, %xmm0
        vmovdqu         %xmm0, (DST)
        jmp             .Ldone\@
 .endm
+
+.set   VL, 16
+.set   USE_AVX10, 0
+SYM_TYPED_FUNC_START(aes_xts_encrypt_aesni_avx)
+       aes_xts_crypt   1
+SYM_FUNC_END(aes_xts_encrypt_aesni_avx)
+SYM_TYPED_FUNC_START(aes_xts_decrypt_aesni_avx)
+       aes_xts_crypt   0
+SYM_FUNC_END(aes_xts_decrypt_aesni_avx)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index b1d90c25975a..d5e33c396b3e 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -1135,10 +1135,197 @@ static struct skcipher_alg aesni_xctr = {
        .encrypt        = xctr_crypt,
        .decrypt        = xctr_crypt,
 };

 static struct simd_skcipher_alg *aesni_simd_xctr;
+
+// Flags for the 'int flags' parameter.  Keep in sync with asm file.
+#define XTS_FIRST      0x1
+#define XTS_UPDATE_IV  0x2
+
+typedef void (*xts_asm_func)(const struct aesni_xts_ctx *key,
+                            const u8 *src, u8 *dst, size_t len,
+                            u8 iv[AES_BLOCK_SIZE], int flags);
+
+/*
+ * This handles cases where the full message isn't available in one step of the
+ * scatterlist walk.
+ */
+static noinline int
+xts_crypt_slowpath(struct skcipher_request *req,
+                  struct skcipher_walk *walk, xts_asm_func asm_func)
+{
+       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+       const struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
+       int tail = req->cryptlen % AES_BLOCK_SIZE;
+       struct scatterlist sg_src[2], sg_dst[2];
+       struct skcipher_request subreq;
+       struct scatterlist *src, *dst;
+       int flags = XTS_FIRST | XTS_UPDATE_IV;
+       int err;
+
+       /*
+        * If the message length isn't divisible by the AES block size, then
+        * separate off the last full block and the partial block.  This ensures
+        * that they are processed in the same call to the assembly function,
+        * which is required for ciphertext stealing.
+        */
+       if (tail) {
+               skcipher_walk_abort(walk);
+
+               skcipher_request_set_tfm(&subreq, tfm);
+               skcipher_request_set_callback(&subreq,
+                                             skcipher_request_flags(req),
+                                             NULL, NULL);
+               skcipher_request_set_crypt(&subreq, req->src, req->dst,
+                                          req->cryptlen - tail - AES_BLOCK_SIZE,
+                                          req->iv);
+               req = &subreq;
+               err = skcipher_walk_virt(walk, req, false);
+       }
+
+       while (walk->nbytes) {
+               unsigned int nbytes = walk->nbytes;
+
+               if (nbytes < walk->total)
+                       nbytes = round_down(nbytes, AES_BLOCK_SIZE);
+
+               kernel_fpu_begin();
+               (*asm_func)(ctx, walk->src.virt.addr, walk->dst.virt.addr,
+                           nbytes, req->iv, flags);
+               kernel_fpu_end();
+               flags &= ~XTS_FIRST;
+               err = skcipher_walk_done(walk, walk->nbytes - nbytes);
+       }
+
+       if (err || !tail)
+               return err;
+
+       /* Do ciphertext stealing with the last full block and partial block. */
+
+       dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen);
+       if (req->dst != req->src)
+               dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen);
+
+       skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail,
+                                  req->iv);
+
+       err = skcipher_walk_virt(walk, req, false);
+       if (err)
+               return err;
+
+       kernel_fpu_begin();
+       (*asm_func)(ctx, walk->src.virt.addr, walk->dst.virt.addr, walk->nbytes,
+                   req->iv, flags);
+       kernel_fpu_end();
+
+       return skcipher_walk_done(walk, 0);
+}
+
+/* __always_inline to avoid indirect call in fastpath */
+static __always_inline int
+xts_crypt2(struct skcipher_request *req, xts_asm_func asm_func)
+{
+       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+       const struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
+       struct skcipher_walk walk;
+       int err;
+
+       /* The assembly code assumes these field offsets in the key struct. */
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, tweak_ctx) != 0);
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, tweak_ctx.key_enc) != 0);
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, tweak_ctx.key_length) != 480);
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, crypt_ctx) != 496);
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, crypt_ctx.key_enc) != 496);
+       BUILD_BUG_ON(offsetof(struct aesni_xts_ctx, crypt_ctx.key_dec) != 736);
+
+       if (req->cryptlen < AES_BLOCK_SIZE)
+               return -EINVAL;
+
+       err = skcipher_walk_virt(&walk, req, false);
+       if (err)
+               return err;
+       if (likely(walk.nbytes == walk.total)) {
+               kernel_fpu_begin();
+               (*asm_func)(ctx, walk.src.virt.addr, walk.dst.virt.addr,
+                           walk.nbytes, req->iv, XTS_FIRST);
+               kernel_fpu_end();
+               return skcipher_walk_done(&walk, 0);
+       }
+       return xts_crypt_slowpath(req, &walk, asm_func);
+}
+
+#define DEFINE_XTS_ALG(suffix, driver_name, priority)                        \
+                                                                             \
+asmlinkage void aes_xts_encrypt_##suffix(const struct aesni_xts_ctx *key,    \
+                                        const u8 *src, u8 *dst, size_t len,  \
+                                        u8 iv[AES_BLOCK_SIZE], int flags);   \
+asmlinkage void aes_xts_decrypt_##suffix(const struct aesni_xts_ctx *key,    \
+                                        const u8 *src, u8 *dst, size_t len,  \
+                                        u8 iv[AES_BLOCK_SIZE], int flags);   \
+                                                                             \
+static int xts_encrypt_##suffix(struct skcipher_request *req)                \
+{                                                                            \
+       return xts_crypt2(req, aes_xts_encrypt_##suffix);                     \
+}                                                                            \
+                                                                             \
+static int xts_decrypt_##suffix(struct skcipher_request *req)                \
+{                                                                            \
+       return xts_crypt2(req, aes_xts_decrypt_##suffix);                     \
+}                                                                            \
+                                                                             \
+static struct skcipher_alg aes_xts_alg_##suffix = {                          \
+       .base = {                                                             \
+               .cra_name               = "__xts(aes)",                       \
+               .cra_driver_name        = "__" driver_name,                   \
+               .cra_priority           = priority,                           \
+               .cra_flags              = CRYPTO_ALG_INTERNAL,                \
+               .cra_blocksize          = AES_BLOCK_SIZE,                     \
+               .cra_ctxsize            = XTS_AES_CTX_SIZE,                   \
+               .cra_module             = THIS_MODULE,                        \
+       },                                                                    \
+       .min_keysize    = 2 * AES_MIN_KEY_SIZE,                               \
+       .max_keysize    = 2 * AES_MAX_KEY_SIZE,                               \
+       .ivsize         = AES_BLOCK_SIZE,                                     \
+       .walksize       = 2 * AES_BLOCK_SIZE,                                 \
+       .setkey         = xts_aesni_setkey,                                   \
+       .encrypt        = xts_encrypt_##suffix,                               \
+       .decrypt        = xts_decrypt_##suffix,                               \
+};                                                                           \
+                                                                             \
+static struct simd_skcipher_alg *aes_xts_simdalg_##suffix
+
+DEFINE_XTS_ALG(aesni_avx, "xts-aes-aesni-avx", 500);
+
+static int __init register_xts_algs(void)
+{
+       int err;
+
+       if (!boot_cpu_has(X86_FEATURE_AVX))
+               return 0;
+       err = simd_register_skciphers_compat(&aes_xts_alg_aesni_avx, 1,
+                                            &aes_xts_simdalg_aesni_avx);
+       if (err)
+               return err;
+       return 0;
+}
+
+static void unregister_xts_algs(void)
+{
+       if (aes_xts_simdalg_aesni_avx)
+               simd_unregister_skciphers(&aes_xts_alg_aesni_avx, 1,
+                                         &aes_xts_simdalg_aesni_avx);
+}
+#else
+static int __init register_xts_algs(void)
+{
+       return 0;
+}
+
+static void unregister_xts_algs(void)
+{
+}
 #endif /* CONFIG_X86_64 */

 #ifdef CONFIG_X86_64
 static int generic_gcmaes_set_key(struct crypto_aead *aead, const u8 *key,
                                  unsigned int key_len)
@@ -1274,17 +1461,25 @@ static int __init aesni_init(void)
                                                     &aesni_simd_xctr);
        if (err)
                goto unregister_aeads;
 #endif /* CONFIG_X86_64 */

+       err = register_xts_algs();
+       if (err)
+               goto unregister_xts;
+
        return 0;

+unregister_xts:
+       unregister_xts_algs();
 #ifdef CONFIG_X86_64
+       if (aesni_simd_xctr)
+               simd_unregister_skciphers(&aesni_xctr, 1, &aesni_simd_xctr);
 unregister_aeads:
+#endif /* CONFIG_X86_64 */
        simd_unregister_aeads(aesni_aeads, ARRAY_SIZE(aesni_aeads),
                              aesni_simd_aeads);
-#endif /* CONFIG_X86_64 */
 unregister_skciphers:
        simd_unregister_skciphers(aesni_skciphers, ARRAY_SIZE(aesni_skciphers),
                                  aesni_simd_skciphers);
 unregister_cipher:
@@ -1301,10 +1496,11 @@ static void __exit aesni_exit(void)
        crypto_unregister_alg(&aesni_cipher_alg);
 #ifdef CONFIG_X86_64
        if (boot_cpu_has(X86_FEATURE_AVX))
                simd_unregister_skciphers(&aesni_xctr, 1, &aesni_simd_xctr);
 #endif /* CONFIG_X86_64 */
+       unregister_xts_algs();
 }

 late_initcall(aesni_init);
 module_exit(aesni_exit);
-- 
2.44.0
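
P.S. For readers new to XTS ciphertext stealing, the following stand-alone
sketch illustrates the length split that xts_crypt_slowpath() performs when
the message length isn't a multiple of the AES block size.  It only shows
the arithmetic and is not part of the patch; split_message() and the example
lengths below are made up for this note.

#include <stdio.h>

#define AES_BLOCK_SIZE 16

/* Hypothetical helper: print how a message of 'cryptlen' bytes is split. */
static void split_message(unsigned int cryptlen)
{
        unsigned int tail = cryptlen % AES_BLOCK_SIZE;
        unsigned int bulk, cts;

        if (tail == 0) {
                /* Block-aligned: everything goes to the bulk pass. */
                bulk = cryptlen;
                cts = 0;
        } else {
                /*
                 * Mirrors the subreq setup in xts_crypt_slowpath(): all data
                 * up to (but excluding) the last full block goes to the bulk
                 * pass; the last full block plus the partial block are kept
                 * together for the final ciphertext-stealing call.
                 */
                bulk = cryptlen - tail - AES_BLOCK_SIZE;
                cts = AES_BLOCK_SIZE + tail;
        }
        printf("len %4u: bulk %4u bytes, cts pass %2u bytes\n",
               cryptlen, bulk, cts);
}

int main(void)
{
        split_message(4096);    /* aligned: no ciphertext stealing */
        split_message(517);     /* 5-byte tail: last 21 bytes handled together */
        split_message(17);      /* minimum CTS case: 16 + 1 bytes */
        return 0;
}

For a 517-byte request, for instance, this reports a 496-byte bulk pass and a
21-byte ciphertext-stealing pass, matching the req->cryptlen - tail -
AES_BLOCK_SIZE length given to the bulk subrequest in the patch.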