Date: Thu, 22 Feb 2018 13:47:08 -0800
From: "Paul E. McKenney"
To: Andrea Parri
Cc: Ingo Molnar, Peter Zijlstra, Alan Stern, Ivan Kokshaysky,
	Matt Turner, Richard Henderson, linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] locking/xchg/alpha: Add leading smp_mb() to xchg(), cmpxchg()
Reply-To: paulmck@linux.vnet.ibm.com
References: <1519291488-5752-1-git-send-email-parri.andrea@gmail.com>
In-Reply-To: <1519291488-5752-1-git-send-email-parri.andrea@gmail.com>
Message-Id: <20180222214708.GN2855@linux.vnet.ibm.com>

On Thu, Feb 22, 2018 at 10:24:48AM +0100, Andrea Parri wrote:
> Successful RMW operations are supposed to be fully ordered, but
> Alpha's xchg() and cmpxchg() do not align to this requirement.
>
> Will reported that:
>
> > So MP using xchg:
> >
> > WRITE_ONCE(x, 1)
> > xchg(y, 1)
> >
> > smp_load_acquire(y) == 1
> > READ_ONCE(x) == 0
> >
> > would be allowed.
>
> (thus violating the above requirement). Amend this by adding a
> leading smp_mb() to the implementations of xchg(), cmpxchg().
>
> Reported-by: Will Deacon
> Signed-off-by: Andrea Parri

Acked-by: Paul E. McKenney

> Cc: Peter Zijlstra
> Cc: Paul E. McKenney
> Cc: Alan Stern
> Cc: Ivan Kokshaysky
> Cc: Matt Turner
> Cc: Richard Henderson
> Cc: linux-alpha@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
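For completeness, Will's scenario can be written out as a litmus test
in the herd7/LKMM style used by tools/memory-model (a sketch only: the
test name is made up here, and the LKMM conventions are assumed rather
than being part of this patch). With the leading smp_mb() added by the
patch, the "exists" clause below must never be satisfied; on the
unpatched Alpha implementation it could be:

	C MP+xchg+loadacquire

	{}

	P0(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		r0 = xchg(y, 1);	/* must order the store to *x before the store to *y */
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = smp_load_acquire(y);
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)
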
> ---
>  arch/alpha/include/asm/xchg.h | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index e1facf6fc2446..e2b59fac5257d 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -12,6 +12,10 @@
>   * Atomic exchange.
>   * Since it can be used to implement critical sections
>   * it must clobber "memory" (also for interrupts in UP).
> + *
> + * The leading and the trailing memory barriers guarantee that these
> + * operations are fully ordered.
> + *
>   */
>
>  static inline unsigned long
> @@ -19,6 +23,7 @@ ____xchg(_u8, volatile char *m, unsigned long val)
>  {
>  	unsigned long ret, tmp, addr64;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"	andnot	%4,7,%3\n"
>  	"	insbl	%1,%4,%1\n"
> @@ -43,6 +48,7 @@ ____xchg(_u16, volatile short *m, unsigned long val)
>  {
>  	unsigned long ret, tmp, addr64;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"	andnot	%4,7,%3\n"
>  	"	inswl	%1,%4,%1\n"
> @@ -67,6 +73,7 @@ ____xchg(_u32, volatile int *m, unsigned long val)
>  {
>  	unsigned long dummy;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"1:	ldl_l %0,%4\n"
>  	"	bis $31,%3,%1\n"
> @@ -87,6 +94,7 @@ ____xchg(_u64, volatile long *m, unsigned long val)
>  {
>  	unsigned long dummy;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"1:	ldq_l %0,%4\n"
>  	"	bis $31,%3,%1\n"
> @@ -128,9 +136,12 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
>   * store NEW in MEM.  Return the initial value in MEM.  Success is
>   * indicated by comparing RETURN with OLD.
>   *
> - * The memory barrier is placed in SMP unconditionally, in order to
> - * guarantee that dependency ordering is preserved when a dependency
> - * is headed by an unsuccessful operation.
> + * The leading and the trailing memory barriers guarantee that these
> + * operations are fully ordered.
> + *
> + * The trailing memory barrier is placed in SMP unconditionally, in
> + * order to guarantee that dependency ordering is preserved when a
> + * dependency is headed by an unsuccessful operation.
>   */
>
>  static inline unsigned long
> @@ -138,6 +149,7 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
>  {
>  	unsigned long prev, tmp, cmp, addr64;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"	andnot	%5,7,%4\n"
>  	"	insbl	%1,%5,%1\n"
> @@ -165,6 +177,7 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
>  {
>  	unsigned long prev, tmp, cmp, addr64;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"	andnot	%5,7,%4\n"
>  	"	inswl	%1,%5,%1\n"
> @@ -192,6 +205,7 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
>  {
>  	unsigned long prev, cmp;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"1:	ldl_l %0,%5\n"
>  	"	cmpeq %0,%3,%1\n"
> @@ -215,6 +229,7 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
>  {
>  	unsigned long prev, cmp;
>
> +	smp_mb();
>  	__asm__ __volatile__(
>  	"1:	ldq_l %0,%5\n"
>  	"	cmpeq %0,%3,%1\n"
> --
> 2.7.4
>
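One note on the comment about the unconditional trailing barrier, since
it is easy to miss why the failure path matters: Alpha does not honor
address dependencies in hardware, so a later access whose address is
computed from cmpxchg()'s return value still needs a barrier after the
ldl_l/ldq_l, even when the operation fails. A minimal kernel-style
sketch of the usage pattern being protected (the names "entry", "table",
"slot", and "reader" are hypothetical, made up for illustration and not
taken from the patch):

	struct entry {
		int val;
	};

	static struct entry *table[16];		/* entries published by a writer */
	static int slot = -1;			/* index of the current entry */

	static int reader(void)
	{
		struct entry *e;
		int old;

		old = cmpxchg(&slot, -1, 0);	/* may well fail... */
		e = table[old];			/* ...but this address depends
						   on the value it returned */
		return READ_ONCE(e->val);	/* dependent load: on Alpha it is
						   ordered after the load inside
						   cmpxchg() only because the
						   trailing smp_mb() also runs
						   on the failure path */
	}

If the trailing barrier were skipped when the compare fails, the reader
could observe a stale e->val despite the address dependency, which is
exactly the breakage the quoted comment rules out.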