From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dave Martin, Robin Murphy, Mark Salter, Will Deacon
Subject: [PATCH 4.16 024/272] arm64: lse: Add early clobbers to some input/output asm operands
Date: Mon, 28 May 2018 12:00:57 +0200
Message-Id: <20180528100242.541763658@linuxfoundation.org>
In-Reply-To: <20180528100240.256525891@linuxfoundation.org>
References: <20180528100240.256525891@linuxfoundation.org>
4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon

commit 32c3fa7cdf0c4a3eb8405fc3e13398de019e828b upstream.

For LSE atomics that read and write a register operand, we need to
ensure that these operands are annotated as "early clobber" if the
register is written before all of the input operands have been
consumed. Failure to do so can result in the compiler allocating the
same register to both operands, leading to splats such as:

  Unable to handle kernel paging request at virtual address 11111122222221
  [...]
  x1 : 1111111122222222 x0 : 1111111122222221
  Process swapper/0 (pid: 1, stack limit = 0x000000008209f908)
  Call trace:
   test_atomic64+0x1360/0x155c

where x0 has been allocated as both the value to be stored and also the
atomic_t pointer.

This patch adds the missing clobbers.

Cc: <stable@vger.kernel.org>
Cc: Dave Martin
Cc: Robin Murphy
Reported-by: Mark Salter
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman

---
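The constraint change below is GCC's standard early-clobber modifier:
putting '&' on an output operand tells the register allocator that the
asm template writes that operand before it has finished reading the
inputs, so the output must not share a register with any input. A
minimal, hypothetical sketch of the same hazard in plain AArch64 inline
asm follows (not taken from the kernel tree; the file name and build
line are illustrative):

/* early-clobber-demo.c -- hypothetical standalone example.
 * The template writes "out" (mov) before its last read of an input
 * ("b", read by add). Without the '&' modifier, GCC may allocate
 * "out" and "b" to the same register, and the mov would destroy "b"
 * before the add reads it -- the same class of bug fixed below.
 *
 * Build (illustrative): aarch64-linux-gnu-gcc -O2 early-clobber-demo.c
 */
#include <stdio.h>

static long add_via_asm(long a, long b)
{
	long out;

	asm("mov	%[out], %[a]\n\t"	/* writes out early...   */
	    "add	%[out], %[out], %[b]"	/* ...then still reads b */
	    : [out] "=&r" (out)			/* '&' == early clobber  */
	    : [a] "r" (a), [b] "r" (b));
	return out;
}

int main(void)
{
	printf("%ld\n", add_via_asm(2, 3));	/* prints 5 */
	return 0;
}

In the hunks below the affected operands are read-write ("+r"), so the
fix is the combined form "+&r".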
 arch/arm64/include/asm/atomic_lse.h | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -117,7 +117,7 @@ static inline void atomic_and(int i, ato
 	/* LSE atomics */
 	"	mvn	%w[i], %w[i]\n"
 	"	stclr	%w[i], %[v]")
-	: [i] "+r" (w0), [v] "+Q" (v->counter)
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -135,7 +135,7 @@ static inline int atomic_fetch_and##name
 	/* LSE atomics */					\
 	"	mvn	%w[i], %w[i]\n"				\
 	"	ldclr" #mb "	%w[i], %w[i], %[v]")		\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
 	: "r" (x1)						\
 	: __LL_SC_CLOBBERS, ##cl);				\
 								\
@@ -161,7 +161,7 @@ static inline void atomic_sub(int i, ato
 	/* LSE atomics */
 	"	neg	%w[i], %w[i]\n"
 	"	stadd	%w[i], %[v]")
-	: [i] "+r" (w0), [v] "+Q" (v->counter)
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -180,7 +180,7 @@ static inline int atomic_sub_return##nam
 	"	neg	%w[i], %w[i]\n"					\
 	"	ldadd" #mb "	%w[i], w30, %[v]\n"			\
 	"	add	%w[i], %w[i], w30")				\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)				\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)				\
 	: "r" (x1)							\
 	: __LL_SC_CLOBBERS , ##cl);					\
 									\
@@ -207,7 +207,7 @@ static inline int atomic_fetch_sub##name
 	/* LSE atomics */						\
 	"	neg	%w[i], %w[i]\n"					\
 	"	ldadd" #mb "	%w[i], %w[i], %[v]")			\
-	: [i] "+r" (w0), [v] "+Q" (v->counter)				\
+	: [i] "+&r" (w0), [v] "+Q" (v->counter)				\
 	: "r" (x1)							\
 	: __LL_SC_CLOBBERS, ##cl);					\
 									\
@@ -314,7 +314,7 @@ static inline void atomic64_and(long i,
 	/* LSE atomics */
 	"	mvn	%[i], %[i]\n"
 	"	stclr	%[i], %[v]")
-	: [i] "+r" (x0), [v] "+Q" (v->counter)
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -332,7 +332,7 @@ static inline long atomic64_fetch_and##n
 	/* LSE atomics */						\
 	"	mvn	%[i], %[i]\n"					\
 	"	ldclr" #mb "	%[i], %[i], %[v]")			\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)				\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)				\
 	: "r" (x1)							\
 	: __LL_SC_CLOBBERS, ##cl);					\
 									\
@@ -358,7 +358,7 @@ static inline void atomic64_sub(long i,
 	/* LSE atomics */
 	"	neg	%[i], %[i]\n"
 	"	stadd	%[i], %[v]")
-	: [i] "+r" (x0), [v] "+Q" (v->counter)
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)
 	: "r" (x1)
 	: __LL_SC_CLOBBERS);
 }
@@ -377,7 +377,7 @@ static inline long atomic64_sub_return##
 	"	neg	%[i], %[i]\n"					\
 	"	ldadd" #mb "	%[i], x30, %[v]\n"			\
 	"	add	%[i], %[i], x30")				\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)				\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)				\
 	: "r" (x1)							\
 	: __LL_SC_CLOBBERS, ##cl);					\
 									\
@@ -404,7 +404,7 @@ static inline long atomic64_fetch_sub##n
 	/* LSE atomics */						\
 	"	neg	%[i], %[i]\n"					\
 	"	ldadd" #mb "	%[i], %[i], %[v]")			\
-	: [i] "+r" (x0), [v] "+Q" (v->counter)				\
+	: [i] "+&r" (x0), [v] "+Q" (v->counter)				\
 	: "r" (x1)							\
 	: __LL_SC_CLOBBERS, ##cl);					\
 									\
@@ -435,7 +435,7 @@ static inline long atomic64_dec_if_posit
 	"	sub	x30, x30, %[ret]\n"
 	"	cbnz	x30, 1b\n"
 	"2:")
-	: [ret] "+r" (x0), [v] "+Q" (v->counter)
+	: [ret] "+&r" (x0), [v] "+Q" (v->counter)
 	:
 	: __LL_SC_CLOBBERS, "cc", "memory");
 
@@ -516,7 +516,7 @@ static inline long __cmpxchg_double##nam
 	"	eor	%[old1], %[old1], %[oldval1]\n"			\
 	"	eor	%[old2], %[old2], %[oldval2]\n"			\
 	"	orr	%[old1], %[old1], %[old2]")			\
-	: [old1] "+r" (x0), [old2] "+r" (x1),				\
+	: [old1] "+&r" (x0), [old2] "+&r" (x1),				\
 	  [v] "+Q" (*(unsigned long *)ptr)				\
 	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),		\
 	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)		\
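For anyone who wants to poke at the failure mode outside the kernel,
here is a hypothetical user-space reduction of the atomic64_sub() hunk
above (the type and function names are invented; this is not part of
the patch). It shows why the annotation matters even with pinned
register variables: x0 and x1 are fixed, but the base register for the
"Q" memory operand is still chosen by the compiler, and only the '&'
stops it from picking the already-clobbered x0.

/* lse-early-clobber-repro.c -- hypothetical reduction, ARMv8.1 only.
 * Build (illustrative): aarch64-linux-gnu-gcc -O2 -march=armv8.1-a -S
 */
typedef struct { long counter; } atomic64_demo_t;

static inline void atomic64_sub_demo(long i, atomic64_demo_t *v)
{
	register long x0 asm ("x0") = i;
	register atomic64_demo_t *x1 asm ("x1") = v;

	asm volatile(
	/* neg overwrites x0 before stadd consumes the address operand
	 * %[v]; "+&r" forbids the compiler from also using x0 as the
	 * base register of %[v], which is exactly the mis-allocation
	 * behind the splat quoted in the commit message.
	 */
	"	neg	%[i], %[i]\n"
	"	stadd	%[i], %[v]"
	: [i] "+&r" (x0), [v] "+Q" (v->counter)
	: "r" (x1));
}

void sub_one(atomic64_demo_t *v)
{
	atomic64_sub_demo(1, v);	/* v->counter -= 1 */
}

Dropping the '&' and diffing the generated assembly at -O2 is a quick
way to check whether a given compiler version actually merges the two
registers.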