From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, Kees Cook
Subject: [PATCH 08/10] crypto: x86/des3 - Use RIP-relative addressing
Date: Sat, 8 Apr 2023 17:27:20 +0200
Message-Id: <20230408152722.3975985-9-ardb@kernel.org>
In-Reply-To: <20230408152722.3975985-1-ardb@kernel.org>
References: <20230408152722.3975985-1-ardb@kernel.org>

Prefer RIP-relative addressing where possible, which removes the need
for boot time relocation fixups.
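Concretely, every absolute-addressed S-box lookup is turned into a
RIP-relative leaq of the table base followed by a base+index access,
reusing a scratch register that is dead at that point (RW1 in the
single-block path, RT2 in the 3-way path). The pattern, sketched from
one lookup:

```asm
	/* before: absolute table base, needs a boot-time relocation */
	xorq s6(, RT1, 8), to;

	/* after: materialize the base RIP-relatively, then index */
	leaq s6(%rip), RW1;
	xorq (RW1, RT1, 8), to;
```

This costs one extra leaq per lookup but makes the code position
independent.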
Signed-off-by: Ard Biesheuvel
---
 arch/x86/crypto/des3_ede-asm_64.S | 96 +++++++++++++-------
 1 file changed, 64 insertions(+), 32 deletions(-)

diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S
index f4c760f4cade6d7b..cf21b998e77cc4ea 100644
--- a/arch/x86/crypto/des3_ede-asm_64.S
+++ b/arch/x86/crypto/des3_ede-asm_64.S
@@ -129,21 +129,29 @@
 	movzbl RW0bl, RT2d; \
 	movzbl RW0bh, RT3d; \
 	shrq $16, RW0; \
-	movq s8(, RT0, 8), RT0; \
-	xorq s6(, RT1, 8), to; \
+	leaq s8(%rip), RW1; \
+	movq (RW1, RT0, 8), RT0; \
+	leaq s6(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
 	movzbl RW0bl, RL1d; \
 	movzbl RW0bh, RT1d; \
 	shrl $16, RW0d; \
-	xorq s4(, RT2, 8), RT0; \
-	xorq s2(, RT3, 8), to; \
+	leaq s4(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
+	leaq s2(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \
 	movzbl RW0bl, RT2d; \
 	movzbl RW0bh, RT3d; \
-	xorq s7(, RL1, 8), RT0; \
-	xorq s5(, RT1, 8), to; \
-	xorq s3(, RT2, 8), RT0; \
+	leaq s7(%rip), RW1; \
+	xorq (RW1, RL1, 8), RT0; \
+	leaq s5(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
+	leaq s3(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
 	load_next_key(n, RW0); \
 	xorq RT0, to; \
-	xorq s1(, RT3, 8), to; \
+	leaq s1(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \

 #define load_next_key(n, RWx) \
 	movq (((n) + 1) * 8)(CTX), RWx;
@@ -355,65 +363,89 @@ SYM_FUNC_END(des3_ede_x86_64_crypt_blk)
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrq $16, RW0; \
-	xorq s8(, RT3, 8), to##0; \
-	xorq s6(, RT1, 8), to##0; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrq $16, RW0; \
-	xorq s4(, RT3, 8), to##0; \
-	xorq s2(, RT1, 8), to##0; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrl $16, RW0d; \
-	xorq s7(, RT3, 8), to##0; \
-	xorq s5(, RT1, 8), to##0; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	load_next_key(n, RW0); \
-	xorq s3(, RT3, 8), to##0; \
-	xorq s1(, RT1, 8), to##0; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	xorq from##1, RW1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrq $16, RW1; \
-	xorq s8(, RT3, 8), to##1; \
-	xorq s6(, RT1, 8), to##1; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrq $16, RW1; \
-	xorq s4(, RT3, 8), to##1; \
-	xorq s2(, RT1, 8), to##1; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrl $16, RW1d; \
-	xorq s7(, RT3, 8), to##1; \
-	xorq s5(, RT1, 8), to##1; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	do_movq(RW0, RW1); \
-	xorq s3(, RT3, 8), to##1; \
-	xorq s1(, RT1, 8), to##1; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	xorq from##2, RW2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrq $16, RW2; \
-	xorq s8(, RT3, 8), to##2; \
-	xorq s6(, RT1, 8), to##2; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrq $16, RW2; \
-	xorq s4(, RT3, 8), to##2; \
-	xorq s2(, RT1, 8), to##2; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrl $16, RW2d; \
-	xorq s7(, RT3, 8), to##2; \
-	xorq s5(, RT1, 8), to##2; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	do_movq(RW0, RW2); \
-	xorq s3(, RT3, 8), to##2; \
-	xorq s1(, RT1, 8), to##2;
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2;

 #define __movq(src, dst) \
 	movq src, dst;
--
2.39.2