Date: Tue, 20 Feb 2018 10:33:47 +0100
From: Andrea Parri
To: Alan Stern
McKenney" , Akira Yokosawa , Kernel development list , mingo@kernel.org, Will Deacon , peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com, dhowells@redhat.com, Jade Alglave , Luc Maranget , Patrick Bellasi Subject: Re: [PATCH] tools/memory-model: remove rb-dep, smp_read_barrier_depends, and lockless_dereference Message-ID: <20180220093346.GA5505@andrea> References: <20180217151413.GA3785@andrea> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.24 (2015-08-30) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Feb 19, 2018 at 12:14:45PM -0500, Alan Stern wrote: > On Sat, 17 Feb 2018, Andrea Parri wrote: > > > > Akira's observation about READ_ONCE extends to all (annotated) loads. In > > > fact, it also applies to loads corresponding to unsuccessful RMW operations; > > > consider, for example, the following variation of MP+onceassign+derefonce: > > > > > > C T > > > > > > { > > > y=z; > > > z=0; > > > } > > > > > > P0(int *x, int **y) > > > { > > > WRITE_ONCE(*x, 1); > > > smp_store_release(y, x); > > > } > > > > > > P1(int **y, int *z) > > > { > > > int *r0; > > > int r1; > > > > > > r0 = cmpxchg_relaxed(y, z, z); > > > r1 = READ_ONCE(*r0); > > > } > > > > > > exists (1:r0=x /\ 1:r1=0) > > > > > > The final state is allowed w/o the patch, and forbidden w/ the patch. > > > > > > This also reminds me of > > > > > > 5a8897cc7631fa544d079c443800f4420d1b173f > > > ("locking/atomics/alpha: Add smp_read_barrier_depends() to _release()/_relaxed() atomics") > > > > > > (that we probably want to mention in the commit message). > > > > Please also notice that 5a8897cc7631f only touched alpha's atomic.h: > > I see no corresponding commit/change on {,cmp}xchg.h (where the "mb" > > is currently conditionally executed). > > This leaves us with a question: Do we want to change the kernel by > adding memory barriers after unsuccessful RMW operations on Alpha, or > do we want to change the model by excluding such operations from > address dependencies? I'd like to continue to treat R[once] and R*[once] equally if possible. Given the (unconditional) smp_read_barrier_depends in READ_ONCE and in atomics, it seems reasonable to have it unconditionally in cmpxchg. As with the following patch? Andrea --- diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h index 68dfb3cb71454..e2660866ce972 100644 --- a/arch/alpha/include/asm/xchg.h +++ b/arch/alpha/include/asm/xchg.h @@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size) * store NEW in MEM. Return the initial value in MEM. Success is * indicated by comparing RETURN with OLD. * - * The memory barrier should be placed in SMP only when we actually - * make the change. If we don't change anything (so if the returned - * prev is equal to old) then we aren't acquiring anything new and - * we don't need any memory barrier as far I can tell. + * The memory barrier is placed in SMP unconditionally, in order to + * guarantee that dependency ordering is preserved when a dependency + * is headed by an unsuccessful operation. 
Something like the following patch?

  Andrea

---
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index 68dfb3cb71454..e2660866ce972 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
  * store NEW in MEM.  Return the initial value in MEM.  Success is
  * indicated by comparing RETURN with OLD.
  *
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
  */
 
 static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
 	"	or	%1,%2,%2\n"
 	"	stq_c	%2,0(%4)\n"
 	"	beq	%2,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
 	"	mov	%4,%1\n"
 	"	stl_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
 	"	mov	%4,%1\n"
 	"	stq_c	%1,%2\n"
 	"	beq	%1,3f\n"
-	__ASM__MB
 	"2:\n"
+	__ASM__MB
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"

> 
> Note that operations like atomic_add_unless() already include memory
> barriers.
> 
> Alan
> 
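P.S. To check T above mechanically, the test can be fed to herd7 with the
LKMM configuration shipped in tools/memory-model, along the lines of

	$ herd7 -conf linux-kernel.cfg T.litmus

(the file name T.litmus is mine, and the paths depend on the local
herdtools/LKMM setup); the reported outcome for the exists-clause should
match the allowed/forbidden observation quoted above, without and with
the proposed model change respectively.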