Date: Thu, 13 Jun 2019 15:02:04 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: stern@rowland.harvard.edu, akiyks@gmail.com,
	andrea.parri@amarulasolutions.com, boqun.feng@gmail.com,
	dlustig@nvidia.com, dhowells@redhat.com, j.alglave@ucl.ac.uk,
	luc.maranget@inria.fr, npiggin@gmail.com, paulmck@linux.ibm.com,
	paul.burton@mips.com, linux-kernel@vger.kernel.org,
	torvalds@linux-foundation.org
Subject: Re: [PATCH v2 4/4] x86/atomic: Fix smp_mb__{before,after}_atomic()
Message-ID: <20190613140204.GD18966@fuggles.cambridge.arm.com>
References: <20190613134317.734881240@infradead.org>
 <20190613134933.141230706@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
 <20190613134933.141230706@infradead.org>
User-Agent: Mutt/1.11.1+86 (6f28e57d73f2) ()
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 13, 2019 at 03:43:21PM +0200, Peter Zijlstra wrote:
> Recent probing at the Linux Kernel Memory Model uncovered a
> 'surprise'. Strongly ordered architectures where the atomic RmW
> primitive implies full memory ordering and
> smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
> fail for:
>
> 	*x = 1;
> 	atomic_inc(u);
> 	smp_mb__after_atomic();
> 	r0 = *y;
>
> Because, while the atomic_inc() implies memory order, it
> (surprisingly) does not provide a compiler barrier. This then allows
> the compiler to re-order like so:
>
> 	atomic_inc(u);
> 	*x = 1;
> 	smp_mb__after_atomic();
> 	r0 = *y;
>
> Which the CPU is then allowed to re-order (under TSO rules) like:
>
> 	atomic_inc(u);
> 	r0 = *y;
> 	*x = 1;
>
> And this very much was not intended. Therefore strengthen the atomic
> RmW ops to include a compiler barrier.
>
> NOTE: atomic_{or,and,xor} and the bitops already had the compiler
> barrier.
>
> Reported-by: Andrea Parri
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  Documentation/atomic_t.txt         |    3 +++
>  arch/x86/include/asm/atomic.h      |    8 ++++----
>  arch/x86/include/asm/atomic64_64.h |    8 ++++----
>  arch/x86/include/asm/barrier.h     |    4 ++--
>  4 files changed, 13 insertions(+), 10 deletions(-)
>
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -194,6 +194,9 @@ These helper barriers exist because arch
>  ordering on their SMP atomic primitives. For example our TSO architectures
>  provide full ordered atomics and these barriers are no-ops.
>
> +NOTE: when the atomic RmW ops are fully ordered, they should also imply a
> +compiler barrier.

Acked-by: Will Deacon

Cheers,

Will