Date: Fri, 25 Oct 2019 10:41:37 +0100
From: Mark Rutland
To: samitolvanen@google.com
Cc: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu,
	Ard Biesheuvel, Dave Martin, Kees Cook, Laura Abbott,
	Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada,
	clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 02/17] arm64/lib: copy_page: avoid x18 register in assembler code
Message-ID: <20191025094137.GB40270@lakrids.cambridge.arm.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20191024225132.13410-1-samitolvanen@google.com>
 <20191024225132.13410-3-samitolvanen@google.com>
In-Reply-To: <20191024225132.13410-3-samitolvanen@google.com>

On Thu, Oct 24, 2019 at 03:51:17PM -0700, samitolvanen@google.com wrote:
> From: Ard Biesheuvel
>
> Register x18 will no longer be used as a caller save register in the
> future, so stop using it in the copy_page() code.
>
> Link: https://patchwork.kernel.org/patch/9836869/
> Signed-off-by: Ard Biesheuvel
> Signed-off-by: Sami Tolvanen
> ---
>  arch/arm64/lib/copy_page.S | 38 +++++++++++++++++++-------------------
>  1 file changed, 19 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
> index bbb8562396af..8b562264c165 100644
> --- a/arch/arm64/lib/copy_page.S
> +++ b/arch/arm64/lib/copy_page.S
> @@ -34,45 +34,45 @@ alternative_else_nop_endif
>  	ldp	x14, x15, [x1, #96]
>  	ldp	x16, x17, [x1, #112]
>
> -	mov	x18, #(PAGE_SIZE - 128)
> +	add	x0, x0, #256
>  	add	x1, x1, #128
> 1:
> -	subs	x18, x18, #128
> +	tst	x0, #(PAGE_SIZE - 1)
>
> alternative_if ARM64_HAS_NO_HW_PREFETCH
>  	prfm	pldl1strm, [x1, #384]
> alternative_else_nop_endif
>
> -	stnp	x2, x3, [x0]
> +	stnp	x2, x3, [x0, #-256]
>  	ldp	x2, x3, [x1]
> -	stnp	x4, x5, [x0, #16]
> +	stnp	x4, x5, [x0, #-240]
>  	ldp	x4, x5, [x1, #16]

For legibility, could we make the offset and bias explicit in the STNPs
so that these line up? e.g.

	stnp	x4, x5, [x0, #16 - 256]
	ldp	x4, x5, [x1, #16]

... that'd make it much easier to see by eye that this is sound, much
as I trust my mental arithmetic. ;)

> -	stnp	x6, x7, [x0, #32]
> +	stnp	x6, x7, [x0, #-224]
>  	ldp	x6, x7, [x1, #32]
> -	stnp	x8, x9, [x0, #48]
> +	stnp	x8, x9, [x0, #-208]
>  	ldp	x8, x9, [x1, #48]
> -	stnp	x10, x11, [x0, #64]
> +	stnp	x10, x11, [x0, #-192]
>  	ldp	x10, x11, [x1, #64]
> -	stnp	x12, x13, [x0, #80]
> +	stnp	x12, x13, [x0, #-176]
>  	ldp	x12, x13, [x1, #80]
> -	stnp	x14, x15, [x0, #96]
> +	stnp	x14, x15, [x0, #-160]
>  	ldp	x14, x15, [x1, #96]
> -	stnp	x16, x17, [x0, #112]
> +	stnp	x16, x17, [x0, #-144]
>  	ldp	x16, x17, [x1, #112]
>
>  	add	x0, x0, #128
>  	add	x1, x1, #128
>
> -	b.gt	1b
> +	b.ne	1b
>
> -	stnp	x2, x3, [x0]
> -	stnp	x4, x5, [x0, #16]
> -	stnp	x6, x7, [x0, #32]
> -	stnp	x8, x9, [x0, #48]
> -	stnp	x10, x11, [x0, #64]
> -	stnp	x12, x13, [x0, #80]
> -	stnp	x14, x15, [x0, #96]
> -	stnp	x16, x17, [x0, #112]
> +	stnp	x2, x3, [x0, #-256]
> +	stnp	x4, x5, [x0, #-240]
> +	stnp	x6, x7, [x0, #-224]
> +	stnp	x8, x9, [x0, #-208]
> +	stnp	x10, x11, [x0, #-192]
> +	stnp	x12, x13, [x0, #-176]
> +	stnp	x14, x15, [x0, #-160]
> +	stnp	x16, x17, [x0, #-144]

... likewise here:

	stnp	xt1, xt2, [x0, #off - 256]

I don't see a nicer way to write this sequence without using an
additional register, so with those changes:

Reviewed-by: Mark Rutland

Thanks,
Mark.

>
>  	ret
> ENDPROC(copy_page)
> --
> 2.24.0.rc0.303.g954a862665-goog
>
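
For anyone checking the arithmetic above, here is a small host-side C model
(not kernel code; the function names and the page-aligned dst value are made
up for illustration, and PAGE_SIZE is assumed to be 4K) showing that the new
tst x0, #(PAGE_SIZE - 1) / b.ne termination runs the copy loop the same number
of times as the old subs x18 / b.gt countdown:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096u	/* assumes 4K pages */

/* Old loop: x18 counts down from PAGE_SIZE - 128; b.gt loops while x18 > 0. */
static unsigned int iterations_old(void)
{
	unsigned int n = 0;
	int64_t x18 = PAGE_SIZE - 128;	/* mov x18, #(PAGE_SIZE - 128) */

	do {
		x18 -= 128;		/* subs x18, x18, #128 */
		n++;			/* body: store previous chunk, load next */
	} while (x18 > 0);		/* b.gt 1b */

	return n;
}

/*
 * New loop: x0 is biased by #256 before the loop; the tst at the top of the
 * loop sets the flags that the b.ne at the bottom tests (the stores, loads
 * and flag-preserving adds in between do not touch the flags).
 */
static unsigned int iterations_new(uintptr_t dst)
{
	unsigned int n = 0;
	uintptr_t x0 = dst + 256;	/* add x0, x0, #256 */
	int zero;

	do {
		zero = (x0 & (PAGE_SIZE - 1)) == 0;	/* tst x0, #(PAGE_SIZE - 1) */
		n++;			/* body: stores at [x0, #off - 256] */
		x0 += 128;		/* add x0, x0, #128 */
	} while (!zero);		/* b.ne 1b */

	return n;
}

int main(void)
{
	/* copy_page() is only called with page-aligned pointers. */
	uintptr_t dst = 16 * PAGE_SIZE;

	assert(iterations_old() == iterations_new(dst));
	printf("both loops iterate %u times\n", iterations_old());
	return 0;
}

With a page-aligned dst both loops run PAGE_SIZE/128 - 1 = 31 times, and in
both versions the final 128 bytes are written by the block of stnp
instructions after the loop.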