From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dave Martin, Robin Murphy, Mark Salter, Will Deacon
Subject: [PATCH 4.14 017/496] arm64: lse: Add early clobbers to some input/output asm operands
Date: Mon, 28 May 2018 11:56:42 +0200
Message-Id: <20180528100320.318837751@linuxfoundation.org>
In-Reply-To: <20180528100319.498712256@linuxfoundation.org>
References: <20180528100319.498712256@linuxfoundation.org>
User-Agent: quilt/0.65

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon

commit 32c3fa7cdf0c4a3eb8405fc3e13398de019e828b upstream.

For LSE atomics that read and write a register operand, we need to
ensure that these operands are annotated as "early clobber" if the
register is written before all of the input operands have been
consumed. Failure to do so can result in the compiler allocating the
same register to both operands, leading to splats such as:

  Unable to handle kernel paging request at virtual address 11111122222221
  [...]
  x1 : 1111111122222222 x0 : 1111111122222221
  Process swapper/0 (pid: 1, stack limit = 0x000000008209f908)
  Call trace:
   test_atomic64+0x1360/0x155c

where x0 has been allocated as both the value to be stored and also the
atomic_t pointer. This patch adds the missing clobbers.

Cc: <stable@vger.kernel.org>
Cc: Dave Martin
Cc: Robin Murphy
Reported-by: Mark Salter
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/atomic_lse.h | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -117,7 +117,7 @@ static inline void atomic_and(int i, ato
 	/* LSE atomics */
 	"	mvn	%w[i], %w[i]\n"
 	"	stclr	%w[i], %[v]")
-	: [i] "+r" (w0), [v] "+Q" (v->counter)
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -135,7 +135,7 @@ static inline int atomic_fetch_and##name
 	/* LSE atomics */					\
 	"	mvn	%w[i], %w[i]\n"				\
 	"	ldclr" #mb "	%w[i], %w[i], %[v]")		\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -161,7 +161,7 @@ static inline void atomic_sub(int i, ato
 	/* LSE atomics */
 	"	neg	%w[i], %w[i]\n"
 	"	stadd	%w[i], %[v]")
-	: [i] "+r" (w0), [v] "+Q" (v->counter)
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -180,7 +180,7 @@ static inline int atomic_sub_return##nam
 	"	neg	%w[i], %w[i]\n"				\
 	"	ldadd" #mb "	%w[i], w30, %[v]\n"		\
 	"	add	%w[i], %w[i], w30")			\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS , ##cl);				\
 								\
@@ -207,7 +207,7 @@ static inline int atomic_fetch_sub##name
 	/* LSE atomics */					\
 	"	neg	%w[i], %w[i]\n"				\
 	"	ldadd" #mb "	%w[i], %w[i], %[v]")		\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -314,7 +314,7 @@ static inline void atomic64_and(long i,
 	/* LSE atomics */
 	"	mvn	%[i], %[i]\n"
 	"	stclr	%[i], %[v]")
-	: [i] "+r" (x0), [v] "+Q" (v->counter)
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -332,7 +332,7 @@ static inline long atomic64_fetch_and##n
 	/* LSE atomics */					\
 	"	mvn	%[i], %[i]\n"				\
 	"	ldclr" #mb "	%[i], %[i], %[v]")		\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -358,7 +358,7 @@ static inline void atomic64_sub(long i,
 	/* LSE atomics */
 	"	neg	%[i], %[i]\n"
 	"	stadd	%[i], %[v]")
-	: [i] "+r" (x0), [v] "+Q" (v->counter)
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -377,7 +377,7 @@ static inline long atomic64_sub_return##
 	"	neg	%[i], %[i]\n"				\
 	"	ldadd" #mb "	%[i], x30, %[v]\n"		\
 	"	add	%[i], %[i], x30")			\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -404,7 +404,7 @@ static inline long atomic64_fetch_sub##n
 	/* LSE atomics */					\
 	"	neg	%[i], %[i]\n"				\
 	"	ldadd" #mb "	%[i], %[i], %[v]")		\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -435,7 +435,7 @@ static inline long atomic64_dec_if_posit
 	"	sub	x30, x30, %[ret]\n"
 	"	cbnz	x30, 1b\n"
 	"2:")
-	: [ret] "+r" (x0), [v] "+Q" (v->counter)
+	: [ret] "+&r" (x0), [v] "+Q" (v->counter)
 	:
 	: __LL_SC_CLOBBERS, "cc", "memory");
@@ -516,7 +516,7 @@ static inline long __cmpxchg_double##nam
 	"	eor	%[old1], %[old1], %[oldval1]\n"		\
 	"	eor	%[old2], %[old2], %[oldval2]\n"		\
 	"	orr	%[old1], %[old1], %[old2]")		\
-	: [old1] "+r" (x0), [old2] "+r" (x1),			\
+	: [old1] "+&r" (x0), [old2] "+&r" (x1),			\
 	  [v] "+Q" (*(unsigned long *)ptr)			\
 	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),	\
 	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)	\
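For anyone unfamiliar with the "&" modifier being added above, here is a
minimal, self-contained sketch of the same hazard outside the kernel. It
is not part of the patch; the names are illustrative only, and it assumes
an AArch64 GCC or Clang toolchain (e.g. aarch64-linux-gnu-gcc -O2). The
first instruction writes the output before the second has consumed the
last input, so the output must be marked early clobber to stop the
compiler from placing the output and an input in the same register:

	#include <stdio.h>

	/*
	 * Computes a + b in two instructions.  The mov writes "out"
	 * before the add has read "b", so "out" needs the early-clobber
	 * modifier ("=&r").  With plain "=r", the compiler may allocate
	 * "out" and "b" to the same register, and the mov would then
	 * destroy b before the add uses it.
	 */
	static long add_two(long a, long b)
	{
		long out;

		asm("mov	%0, %1\n\t"
		    "add	%0, %0, %2"
		    : "=&r" (out)
		    : "r" (a), "r" (b));
		return out;
	}

	int main(void)
	{
		printf("%ld\n", add_two(40, 2));	/* prints 42 */
		return 0;
	}

The operands in the patch are read-write rather than write-only, hence
"+&r" instead of "=&r", but the register-aliasing hazard is the same:
the neg/mvn writes %[i] before the "r" (x1) input has been consumed by
the LL/SC fallback path.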