From: Ard Biesheuvel
Date: Wed, 16 Sep 2020 00:31:53 +0300
Subject: Re: [PATCH] crypto: arm/sha256-neon - avoid ADRL pseudo instruction
To: Nick Desaulniers
Cc: "open list:HARDWARE RANDOM NUMBER GENERATOR CORE", Herbert Xu, Stefan Agner, Peter Smith
References: <20200915094619.32548-1-ardb@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

On Tue, 15 Sep 2020 at 21:50, Nick Desaulniers wrote:
>
> On Tue, Sep 15, 2020 at 2:46 AM Ard Biesheuvel wrote:
> >
> > The ADRL pseudo instruction is not an architectural construct, but a
> > convenience macro that was supported by the ARM proprietary assembler
> > and adopted by binutils GAS as well, but only when assembling in 32-bit
> > ARM mode. Therefore, it can only be used in assembler code that is known
> > to assemble in ARM mode only, but as it turns out, the Clang assembler
> > does not implement ADRL at all, and so it is better to get rid of it
> > entirely.
> >
> > So replace the ADRL instruction with an ADR instruction that refers to
> > a nearer symbol, and apply the delta explicitly using an additional
> > instruction.
> >
> > Cc: Nick Desaulniers
> > Cc: Stefan Agner
> > Cc: Peter Smith
> > Signed-off-by: Ard Biesheuvel
> > ---
> > I will leave it to the Clang folks to decide whether this needs to be
> > backported and how far, but a Cc stable seems reasonable here.
> >
> >  arch/arm/crypto/sha256-armv4.pl       | 4 ++--
> >  arch/arm/crypto/sha256-core.S_shipped | 4 ++--
> >  2 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm/crypto/sha256-armv4.pl b/arch/arm/crypto/sha256-armv4.pl
> > index 9f96ff48e4a8..8aeb2e82f915 100644
> > --- a/arch/arm/crypto/sha256-armv4.pl
> > +++ b/arch/arm/crypto/sha256-armv4.pl
> > @@ -175,7 +175,6 @@ $code=<<___;
> >  #else
> >  .syntax unified
> >  # ifdef __thumb2__
> > -# define adrl adr
> >  .thumb
> >  # else
> >  .code 32
> > @@ -471,7 +470,8 @@ sha256_block_data_order_neon:
> >      stmdb   sp!,{r4-r12,lr}
> >
> >      sub     $H,sp,#16*4+16
> > -    adrl    $Ktbl,K256
> > +    adr     $Ktbl,.Lsha256_block_data_order
> > +    add     $Ktbl,$Ktbl,#K256-.Lsha256_block_data_order
> >      bic     $H,$H,#15          @ align for 128-bit stores
> >      mov     $t2,sp
> >      mov     sp,$H              @ alloca
> > diff --git a/arch/arm/crypto/sha256-core.S_shipped b/arch/arm/crypto/sha256-core.S_shipped
> > index ea04b2ab0c33..1861c4e8a5ba 100644
> > --- a/arch/arm/crypto/sha256-core.S_shipped
> > +++ b/arch/arm/crypto/sha256-core.S_shipped
> > @@ -56,7 +56,6 @@
> >  #else
> >  .syntax unified
> >  # ifdef __thumb2__
> > -# define adrl adr
> >  .thumb
> >  # else
> >  .code 32
> > @@ -1885,7 +1884,8 @@ sha256_block_data_order_neon:
> >      stmdb   sp!,{r4-r12,lr}
> >
> >      sub     r11,sp,#16*4+16
> > -    adrl    r14,K256
> > +    adr     r14,.Lsha256_block_data_order
> > +    add     r14,r14,#K256-.Lsha256_block_data_order
>
> Hi Ard,
> Thanks for the patch. With this patch applied:
>
> $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- make LLVM=1 LLVM_IAS=1 -j71 defconfig
> $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- make LLVM=1 LLVM_IAS=1 -j71
> ...
> arch/arm/crypto/sha256-core.S:2038:2: error: out of range immediate fixup value
>  add r14,r14,#K256-.Lsha256_block_data_order
>  ^
>
> :(
>

Strange.
Could you change it to

  sub r14,r14,#.Lsha256_block_data_order-K256

and try again? If that does work, it means the Clang assembler does not
update the instruction type for negative addends (add to sub in this case),
which would be unfortunate, since it would be another functionality gap.

> Would the adr_l macro you wrote in
> https://lore.kernel.org/linux-arm-kernel/nycvar.YSQ.7.78.906.2009141003360.4095746@knanqh.ubzr/T/#t
> be helpful here?
>
> >      bic     r11,r11,#15        @ align for 128-bit stores
> >      mov     r12,sp
> >      mov     sp,r11             @ alloca
> > --
> > 2.17.1
> >
> --
> Thanks,
> ~Nick Desaulniers
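
The pattern the patch adopts is easier to follow in a stand-alone sketch.
Everything below (file layout, labels, register choice) is made up for
illustration rather than taken from sha256-core.S; it shows an ADRL being
replaced by an ADR to a nearby anchor label plus an explicit ADD of the
assembly-time delta to the real target, since ADR on its own only reaches
labels a short distance away:

      .syntax unified
      .arch   armv7-a
      .text

      .globl  get_table
      .type   get_table, %function
  get_table:
  .Lanchor:                      @ a label ADR can reach from here
      @ What the removed pseudo instruction expressed:
      @     adrl    r0,table
      @ The two-instruction replacement: ADR to the nearby anchor, then
      @ add the assembly-time distance from the anchor to the target.
      adr     r0,.Lanchor
      add     r0,r0,#table-.Lanchor
      bx      lr
      .size   get_table, . - get_table

      .align  5
  table:                         @ stands in for the K256 constant table
      .word   0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5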
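
The add-versus-sub question can likewise be reduced to a few lines. In the
layout below (again with hypothetical labels) the target sits before the
anchor, as K256 sits before .Lsha256_block_data_order, so the delta is
negative: the ADD spelling only assembles if the assembler rewrites it into
a SUB, which is the capability the reply above asks about; the explicit SUB
with the reversed, positive difference avoids depending on that rewrite:

      .syntax unified
      .arch   armv7-a
      .text

      .align  5
  table:                         @ sits *before* the code, like K256 does
      .word   0

  probe:
      adr     r0,probe
      @ Negative delta: relies on the assembler turning the ADD into a SUB.
      @ GNU as accepts this; an assembler without that rewrite has to
      @ reject it, e.g. as an out-of-range immediate fixup.
      add     r0,r0,#table-probe
      @ Equivalent with the subtraction written out, as suggested above:
      sub     r0,r0,#probe-table
      bx      lr

Assembling such a snippet with both binutils GAS (arm-linux-gnueabihf-as)
and Clang's integrated assembler (clang --target=arm-linux-gnueabihf -c) is
one quick way to compare the two behaviours.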