From: Taehee Yoo <ap420073@gmail.com>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, x86@kernel.org
Cc: elliott@hpe.com, jussi.kivilinna@iki.fi, ebiggers@kernel.org, ap420073@gmail.com
Subject: [PATCH v7 2/4] crypto: aria: do not use magic number offsets of aria_ctx
Date: Wed, 7 Dec 2022 13:58:53 +0000
Message-Id: <20221207135855.459181-3-ap420073@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221207135855.459181-1-ap420073@gmail.com>
References: <20221207135855.459181-1-ap420073@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The aria-avx assembly code accesses members of struct aria_ctx through
magic-number offsets. If the layout of struct aria_ctx is changed
carelessly, aria-avx will stop working. So we need to access the members
of aria_ctx with correctly generated offset values, not with magic
numbers.

This patch adds ARIA_CTX_enc_key, ARIA_CTX_dec_key, and ARIA_CTX_rounds
to asm-offsets.c, so the correct offset definitions are generated at
build time. The aria-avx assembly code can then access the members of
aria_ctx safely through these definitions.

Signed-off-by: Taehee Yoo <ap420073@gmail.com>
---
v7:
 - Use IS_ENABLED() instead of defined().
v6:
 - Rebase on the "CFI fixes" patchset.
v5:
 - No changes.
v4:
 - Add BUILD_BUG_ON() to check the size of the aria_ctx fields.
v3:
 - Patch introduced.
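As background, the ARIA_CTX_* symbols come from the kernel's asm-offsets
machinery: each OFFSET() entry in asm-offsets.c expands through the kbuild
helpers into a marker that Kbuild turns into a plain #define in the generated
<asm/asm-offsets.h>. Below is a minimal sketch of that mechanism; the macro
definitions are paraphrased from include/linux/kbuild.h, and the 0/272/544
values shown are simply the magic numbers this patch removes (the real values
are computed from struct aria_ctx at build time):

  /* include/linux/kbuild.h (paraphrased): how OFFSET()/BLANK() emit markers */
  #define DEFINE(sym, val) \
          asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))
  #define BLANK() asm volatile("\n.ascii \"->\"" : : )
  #define OFFSET(sym, str, mem) \
          DEFINE(sym, offsetof(struct str, mem))

  /* Kbuild post-processes the emitted markers into asm-offsets.h, roughly: */
  #define ARIA_CTX_enc_key 0    /* offsetof(struct aria_ctx, enc_key) */
  #define ARIA_CTX_dec_key 272  /* offsetof(struct aria_ctx, dec_key) */
  #define ARIA_CTX_rounds 544   /* offsetof(struct aria_ctx, rounds)  */

The .S file picks these up via the added #include of <asm/asm-offsets.h>, and
the BUILD_BUG_ON() checks added to aria_set_key() catch any future change to
the field sizes that the assembly implicitly relies on.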
 arch/x86/crypto/aria-aesni-avx-asm_64.S | 26 +++++++++++---------------
 arch/x86/kernel/asm-offsets.c           |  9 +++++++++
 crypto/aria_generic.c                   |  4 ++++
 3 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/arch/x86/crypto/aria-aesni-avx-asm_64.S b/arch/x86/crypto/aria-aesni-avx-asm_64.S
index 03ae4cd1d976..be6adc6e7458 100644
--- a/arch/x86/crypto/aria-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/aria-aesni-avx-asm_64.S
@@ -8,13 +8,9 @@
 
 #include <linux/linkage.h>
 #include <linux/cfi_types.h>
+#include <asm/asm-offsets.h>
 #include <asm/frame.h>
 
-/* struct aria_ctx: */
-#define enc_key 0
-#define dec_key 272
-#define rounds 544
-
 /* register macros */
 #define CTX %rdi
 
@@ -874,7 +870,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_crypt_16way)
 	aria_fo(%xmm9, %xmm8, %xmm11, %xmm10, %xmm12, %xmm13, %xmm14,
 		%xmm15, %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6,
 		%xmm7, %rax, %r9, 10);
-	cmpl $12, rounds(CTX);
+	cmpl $12, ARIA_CTX_rounds(CTX);
 	jne .Laria_192;
 	aria_ff(%xmm1, %xmm0, %xmm3, %xmm2, %xmm4, %xmm5, %xmm6, %xmm7,
 		%xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -887,7 +883,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_crypt_16way)
 	aria_fo(%xmm9, %xmm8, %xmm11, %xmm10, %xmm12, %xmm13, %xmm14,
 		%xmm15, %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6,
 		%xmm7, %rax, %r9, 12);
-	cmpl $14, rounds(CTX);
+	cmpl $14, ARIA_CTX_rounds(CTX);
 	jne .Laria_256;
 	aria_ff(%xmm1, %xmm0, %xmm3, %xmm2, %xmm4, %xmm5, %xmm6, %xmm7,
 		%xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -923,7 +919,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_encrypt_16way)
 
 	FRAME_BEGIN
 
-	leaq enc_key(CTX), %r9;
+	leaq ARIA_CTX_enc_key(CTX), %r9;
 
 	inpack16_pre(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -948,7 +944,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_decrypt_16way)
 
 	FRAME_BEGIN
 
-	leaq dec_key(CTX), %r9;
+	leaq ARIA_CTX_dec_key(CTX), %r9;
 
 	inpack16_pre(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -1056,7 +1052,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_ctr_crypt_16way)
 	leaq (%rdx), %r11;
 	leaq (%rcx), %rsi;
 	leaq (%rcx), %rdx;
-	leaq enc_key(CTX), %r9;
+	leaq ARIA_CTX_enc_key(CTX), %r9;
 
 	call __aria_aesni_avx_crypt_16way;
 
@@ -1157,7 +1153,7 @@ SYM_FUNC_START_LOCAL(__aria_aesni_avx_gfni_crypt_16way)
 		     %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %rax, %r9, 10);
 
-	cmpl $12, rounds(CTX);
+	cmpl $12, ARIA_CTX_rounds(CTX);
 	jne .Laria_gfni_192;
 	aria_ff_gfni(%xmm1, %xmm0, %xmm3, %xmm2, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -1174,7 +1170,7 @@
 		     %xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %rax, %r9, 12);
 
-	cmpl $14, rounds(CTX);
+	cmpl $14, ARIA_CTX_rounds(CTX);
 	jne .Laria_gfni_256;
 	aria_ff_gfni(%xmm1, %xmm0, %xmm3, %xmm2, %xmm4, %xmm5, %xmm6, %xmm7,
@@ -1218,7 +1214,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_encrypt_16way)
 
 	FRAME_BEGIN
 
-	leaq enc_key(CTX), %r9;
+	leaq ARIA_CTX_enc_key(CTX), %r9;
 
 	inpack16_pre(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -1243,7 +1239,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_decrypt_16way)
 
 	FRAME_BEGIN
 
-	leaq dec_key(CTX), %r9;
+	leaq ARIA_CTX_dec_key(CTX), %r9;
 
 	inpack16_pre(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
 		     %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14,
@@ -1275,7 +1271,7 @@ SYM_TYPED_FUNC_START(aria_aesni_avx_gfni_ctr_crypt_16way)
 	leaq (%rdx), %r11;
 	leaq (%rcx), %rsi;
 	leaq (%rcx), %rdx;
-	leaq enc_key(CTX), %r9;
+	leaq ARIA_CTX_enc_key(CTX), %r9;
 
 	call __aria_aesni_avx_gfni_crypt_16way;
 
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index cb50589a7102..9a31f7942f5c 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -7,6 +7,7 @@
 #define COMPILE_OFFSETS
 
 #include <linux/crypto.h>
+#include <crypto/aria.h>
 #include <linux/sched.h>
 #include <linux/stddef.h>
 #include <linux/hardirq.h>
@@ -109,6 +110,14 @@ static void __used common(void)
 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
 	OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
 
+#if IS_ENABLED(CONFIG_CRYPTO_ARIA_AESNI_AVX_X86_64)
+	/* Offset for fields in aria_ctx */
+	BLANK();
+	OFFSET(ARIA_CTX_enc_key, aria_ctx, enc_key);
+	OFFSET(ARIA_CTX_dec_key, aria_ctx, dec_key);
+	OFFSET(ARIA_CTX_rounds, aria_ctx, rounds);
+#endif
+
 	if (IS_ENABLED(CONFIG_KVM_INTEL)) {
 		BLANK();
 		OFFSET(VMX_spec_ctrl, vcpu_vmx, spec_ctrl);
diff --git a/crypto/aria_generic.c b/crypto/aria_generic.c
index 4cc29b82b99d..d96dfc4fdde6 100644
--- a/crypto/aria_generic.c
+++ b/crypto/aria_generic.c
@@ -178,6 +178,10 @@ int aria_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len)
 	if (key_len != 16 && key_len != 24 && key_len != 32)
 		return -EINVAL;
 
+	BUILD_BUG_ON(sizeof(ctx->enc_key) != 272);
+	BUILD_BUG_ON(sizeof(ctx->dec_key) != 272);
+	BUILD_BUG_ON(sizeof(int) != sizeof(ctx->rounds));
+
 	ctx->key_length = key_len;
 	ctx->rounds = (key_len + 32) / 4;
-- 
2.34.1