Date: Fri, 23 Feb 2018 00:28:21 -0800
From: tip-bot for Andrea Parri <tipbot@zytor.com>
Cc: ink@jurassic.park.msu.ru, linux-kernel@vger.kernel.org, mingo@kernel.org,
    akpm@linux-foundation.org, paulmck@linux.vnet.ibm.com, hpa@zytor.com,
    tglx@linutronix.de, mattst88@gmail.com, parri.andrea@gmail.com,
    will.deacon@arm.com, rth@twiddle.net, stern@rowland.harvard.edu,
    torvalds@linux-foundation.org, peterz@infradead.org
Reply-To: parri.andrea@gmail.com, will.deacon@arm.com, rth@twiddle.net,
    stern@rowland.harvard.edu,
    torvalds@linux-foundation.org, peterz@infradead.org, ink@jurassic.park.msu.ru,
    mingo@kernel.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    tglx@linutronix.de, paulmck@linux.vnet.ibm.com, hpa@zytor.com, mattst88@gmail.com
In-Reply-To: <1519291488-5752-1-git-send-email-parri.andrea@gmail.com>
References: <1519291488-5752-1-git-send-email-parri.andrea@gmail.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:locking/urgent] locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs
Git-Commit-ID: 472e8c55cf6622d1c112dc2bc777f68bbd4189db

Commit-ID:  472e8c55cf6622d1c112dc2bc777f68bbd4189db
Gitweb:     https://git.kernel.org/tip/472e8c55cf6622d1c112dc2bc777f68bbd4189db
Author:     Andrea Parri <parri.andrea@gmail.com>
AuthorDate: Thu, 22 Feb 2018 10:24:48 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 23 Feb 2018 08:38:16 +0100

locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs

Successful RMW operations are supposed to be fully ordered, but
Alpha's xchg() and cmpxchg() do not meet this requirement.

Will Deacon noticed the bug:

> So MP using xchg:
>
> WRITE_ONCE(x, 1)
> xchg(y, 1)
>
> smp_load_acquire(y) == 1
> READ_ONCE(x) == 0
>
> would be allowed.

... which thus violates the above requirement.  Fix it by adding a
leading smp_mb() to the xchg() and cmpxchg() implementations.

Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519291488-5752-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/alpha/include/asm/xchg.h | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index e1facf6fc244..e2b59fac5257 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -12,6 +12,10 @@
  * Atomic exchange.
  * Since it can be used to implement critical sections
  * it must clobber "memory" (also for interrupts in UP).
+ *
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
  */
 
 static inline unsigned long
@@ -19,6 +23,7 @@ ____xchg(_u8, volatile char *m, unsigned long val)
 {
 	unsigned long ret, tmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%4,7,%3\n"
 	"	insbl	%1,%4,%1\n"
@@ -43,6 +48,7 @@ ____xchg(_u16, volatile short *m, unsigned long val)
 {
 	unsigned long ret, tmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%4,7,%3\n"
 	"	inswl	%1,%4,%1\n"
@@ -67,6 +73,7 @@ ____xchg(_u32, volatile int *m, unsigned long val)
 {
 	unsigned long dummy;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldl_l %0,%4\n"
 	"	bis $31,%3,%1\n"
@@ -87,6 +94,7 @@ ____xchg(_u64, volatile long *m, unsigned long val)
 {
 	unsigned long dummy;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %0,%4\n"
 	"	bis $31,%3,%1\n"
@@ -128,9 +136,12 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier is placed in SMP unconditionally, in order to
- * guarantee that dependency ordering is preserved when a dependency
- * is headed by an unsuccessful operation.
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ *
+ * The trailing memory barrier is placed in SMP unconditionally, in
+ * order to guarantee that dependency ordering is preserved when a
+ * dependency is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -138,6 +149,7 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 {
 	unsigned long prev, tmp, cmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%5,7,%4\n"
 	"	insbl	%1,%5,%1\n"
@@ -165,6 +177,7 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 {
 	unsigned long prev, tmp, cmp, addr64;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"	andnot	%5,7,%4\n"
 	"	inswl	%1,%5,%1\n"
@@ -192,6 +205,7 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 {
 	unsigned long prev, cmp;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldl_l %0,%5\n"
 	"	cmpeq %0,%3,%1\n"
@@ -215,6 +229,7 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 {
 	unsigned long prev, cmp;
 
+	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l %0,%5\n"
 	"	cmpeq %0,%3,%1\n"
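
For readers following the ordering argument in the changelog, the quoted
message-passing test can be sketched in ordinary user-space C11, with the
kernel primitives replaced by their closest C11 counterparts
(WRITE_ONCE()/READ_ONCE() -> relaxed accesses, xchg() -> a seq_cst
atomic_exchange, smp_load_acquire() -> an acquire load).  This is only an
illustrative analogue, not the kernel code and not a litmus-test harness;
the thread and variable names are made up.  The point it shows: a fully
ordered exchange on the flag forbids the outcome in which the reader sees
flag == 1 but data == 0, which is the guarantee the leading smp_mb()
restores for Alpha's xchg()/cmpxchg().

/* Illustrative user-space analogue of the MP-with-xchg pattern (C11). */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int data;	/* plays the role of x in the litmus test */
static atomic_int flag;	/* plays the role of y in the litmus test */

static void *writer(void *unused)
{
	/* WRITE_ONCE(x, 1) */
	atomic_store_explicit(&data, 1, memory_order_relaxed);
	/* xchg(y, 1): a successful RMW is required to be fully ordered */
	(void)atomic_exchange_explicit(&flag, 1, memory_order_seq_cst);
	return NULL;
}

static void *reader(void *unused)
{
	/* smp_load_acquire(y) */
	if (atomic_load_explicit(&flag, memory_order_acquire) == 1)
		/* READ_ONCE(x) == 0 is the outcome that must be forbidden */
		assert(atomic_load_explicit(&data, memory_order_relaxed) == 1);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

A single run of this program will rarely exercise the race, of course;
checking such outcomes exhaustively is what litmus-test tools (e.g. herd7)
are for.  The sketch only makes explicit which ordering the commit message
relies on.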