From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: Ard Biesheuvel, Herbert Xu, Eric Biggers, Kees Cook
Subject: [PATCH v2 08/13] crypto: x86/des3 - Use RIP-relative addressing
Date: Wed, 12 Apr 2023 13:00:30 +0200
Message-Id: <20230412110035.361447-9-ardb@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412110035.361447-1-ardb@kernel.org>
References: <20230412110035.361447-1-ardb@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Prefer RIP-relative addressing where possible, which removes the need
for boot-time relocation fixups.
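For illustration only (the table label and registers below, sbox_label,
%rcx, %rdx and %rax, are made-up stand-ins rather than names from this
file), the change replaces an absolutely addressed S-box lookup of the
form

	/* before: the displacement is an absolute address, which needs a
	 * relocation fixup if the kernel is not loaded at its link address */
	xorq	sbox_label(, %rcx, 8), %rax

with a RIP-relative address computation followed by a register-indexed
access:

	/* after: the table address is computed relative to %rip, so no
	 * absolute relocation is needed; one scratch register is required */
	leaq	sbox_label(%rip), %rdx
	xorq	(%rdx, %rcx, 8), %rax

The extra leaq per lookup is the cost of position independence; the
patch uses RW1 (one-block macro) and RT2 (three-block macro) as that
scratch register.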
Co-developed-by: Thomas Garnier
Signed-off-by: Thomas Garnier
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/crypto/des3_ede-asm_64.S | 96 +++++++++++++-------
 1 file changed, 64 insertions(+), 32 deletions(-)

diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S
index f4c760f4cade6d7b..cf21b998e77cc4ea 100644
--- a/arch/x86/crypto/des3_ede-asm_64.S
+++ b/arch/x86/crypto/des3_ede-asm_64.S
@@ -129,21 +129,29 @@
 	movzbl RW0bl, RT2d; \
 	movzbl RW0bh, RT3d; \
 	shrq $16, RW0; \
-	movq s8(, RT0, 8), RT0; \
-	xorq s6(, RT1, 8), to; \
+	leaq s8(%rip), RW1; \
+	movq (RW1, RT0, 8), RT0; \
+	leaq s6(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
 	movzbl RW0bl, RL1d; \
 	movzbl RW0bh, RT1d; \
 	shrl $16, RW0d; \
-	xorq s4(, RT2, 8), RT0; \
-	xorq s2(, RT3, 8), to; \
+	leaq s4(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
+	leaq s2(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \
 	movzbl RW0bl, RT2d; \
 	movzbl RW0bh, RT3d; \
-	xorq s7(, RL1, 8), RT0; \
-	xorq s5(, RT1, 8), to; \
-	xorq s3(, RT2, 8), RT0; \
+	leaq s7(%rip), RW1; \
+	xorq (RW1, RL1, 8), RT0; \
+	leaq s5(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
+	leaq s3(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
 	load_next_key(n, RW0); \
 	xorq RT0, to; \
-	xorq s1(, RT3, 8), to; \
+	leaq s1(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \
 
 #define load_next_key(n, RWx) \
 	movq (((n) + 1) * 8)(CTX), RWx;
@@ -355,65 +363,89 @@ SYM_FUNC_END(des3_ede_x86_64_crypt_blk)
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrq $16, RW0; \
-	xorq s8(, RT3, 8), to##0; \
-	xorq s6(, RT1, 8), to##0; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrq $16, RW0; \
-	xorq s4(, RT3, 8), to##0; \
-	xorq s2(, RT1, 8), to##0; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	shrl $16, RW0d; \
-	xorq s7(, RT3, 8), to##0; \
-	xorq s5(, RT1, 8), to##0; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	movzbl RW0bl, RT3d; \
 	movzbl RW0bh, RT1d; \
 	load_next_key(n, RW0); \
-	xorq s3(, RT3, 8), to##0; \
-	xorq s1(, RT1, 8), to##0; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
 	xorq from##1, RW1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrq $16, RW1; \
-	xorq s8(, RT3, 8), to##1; \
-	xorq s6(, RT1, 8), to##1; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrq $16, RW1; \
-	xorq s4(, RT3, 8), to##1; \
-	xorq s2(, RT1, 8), to##1; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	shrl $16, RW1d; \
-	xorq s7(, RT3, 8), to##1; \
-	xorq s5(, RT1, 8), to##1; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	movzbl RW1bl, RT3d; \
 	movzbl RW1bh, RT1d; \
 	do_movq(RW0, RW1); \
-	xorq s3(, RT3, 8), to##1; \
-	xorq s1(, RT1, 8), to##1; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
 	xorq from##2, RW2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrq $16, RW2; \
-	xorq s8(, RT3, 8), to##2; \
-	xorq s6(, RT1, 8), to##2; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrq $16, RW2; \
-	xorq s4(, RT3, 8), to##2; \
-	xorq s2(, RT1, 8), to##2; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	shrl $16, RW2d; \
-	xorq s7(, RT3, 8), to##2; \
-	xorq s5(, RT1, 8), to##2; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
 	movzbl RW2bl, RT3d; \
 	movzbl RW2bh, RT1d; \
 	do_movq(RW0, RW2); \
-	xorq s3(, RT3, 8), to##2; \
-	xorq s1(, RT1, 8), to##2;
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2;
 
 #define __movq(src, dst) \
 	movq src, dst;
-- 
2.39.2