Date: Thu, 13 Jun 2019 20:00:29 +0200
From: Peter Zijlstra
To: Alan Stern
Cc: David Howells, akiyks@gmail.com, andrea.parri@amarulasolutions.com,
	boqun.feng@gmail.com, dlustig@nvidia.com, j.alglave@ucl.ac.uk,
	luc.maranget@inria.fr, npiggin@gmail.com, paulmck@linux.ibm.com,
	will.deacon@arm.com, paul.burton@mips.com,
	linux-kernel@vger.kernel.org, torvalds@linux-foundation.org
Subject: Re: [PATCH v2 0/4] atomic: Fixes to smp_mb__{before,after}_atomic() and mips.
Message-ID: <20190613180029.GO3436@hirez.programming.kicks-ass.net>
References: <1674.1560435952@warthog.procyon.org.uk>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Jun 13, 2019 at 12:58:11PM -0400, Alan Stern wrote:
> On Thu, 13 Jun 2019, David Howells wrote:
> 
> > Peter Zijlstra wrote:
> > 
> > > Basically we fail for:
> > > 
> > > 	*x = 1;
> > > 	atomic_inc(u);
> > > 	smp_mb__after_atomic();
> > > 	r0 = *y;
> > > 
> > > Because, while the atomic_inc() implies memory order, it
> > > (surprisingly) does not provide a compiler barrier. This then allows
> > > the compiler to re-order like so:
> > 
> > To quote memory-barriers.txt:
> > 
> >  (*) smp_mb__before_atomic();
> >  (*) smp_mb__after_atomic();
> > 
> >      These are for use with atomic (such as add, subtract, increment and
> >      decrement) functions that don't return a value, especially when used for
> >      reference counting.  These functions do not imply memory barriers.
> > 
> > so it's entirely to be expected?
> 
> The text is perhaps ambiguous.  It means that the atomic functions
> which don't return values -- like atomic_inc() -- do not imply memory
> barriers.  It doesn't mean that smp_mb__before_atomic() and
> smp_mb__after_atomic() do not imply memory barriers.
> 
> The behavior Peter described is not to be expected.  The expectation is
> that the smp_mb__after_atomic() in the example should force the "*x =
> 1" store to execute before the "r0 = *y" load.  But on current x86 it
> doesn't force this, for the reason explained in the description.

Indeed, thanks Alan.

The other approach would be to upgrade smp_mb__{before,after}_atomic()
to actual full memory barriers on x86, but that seems quite ridiculous,
since atomic_inc() already does all the expensive bits and is only
missing the compiler barrier.

That would result in code like:

	mov	$1, x
	lock inc u
	lock addl $0, -4(%rsp)	# aka smp_mb()
	mov	y, %r

which is really quite silly.

And as noted in the Changelog, about half of the non-value-returning
atomics already implied the compiler barrier anyway.
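[To make the compiler-barrier point concrete, here is a minimal
user-space sketch; the names (my_atomic_inc and friends) are
hypothetical and this is not the kernel's actual implementation.  The
LOCK prefix orders the CPU, but without a "memory" clobber the compiler
only knows the asm touches *u, so it remains free to move the
surrounding plain accesses across it -- exactly the reordering
described in the quoted example.  Giving the atomic a "memory" clobber,
the direction the series takes per the last paragraph, closes the hole:]

	/* Sketch with hypothetical names; not the kernel's actual code. */

	static inline void my_atomic_inc(int *u)
	{
		/* CPU-ordered by the LOCK prefix, but no "memory"
		 * clobber: the compiler may still move unrelated
		 * loads/stores across this asm. */
		asm volatile("lock incl %0" : "+m" (*u));
	}

	static inline void my_atomic_inc_fixed(int *u)
	{
		/* With the clobber, the LOCK'd instruction is also a
		 * compiler barrier, so no access can cross it in
		 * either direction. */
		asm volatile("lock incl %0" : "+m" (*u) :: "memory");
	}

	int X, Y, U;

	int broken(void)
	{
		X = 1;
		my_atomic_inc(&U);	/* "X = 1" may sink below this,   */
		return Y;		/* then the CPU may reorder the
					 * store with this load (the one
					 * reordering x86-TSO permits).  */
	}

	int fixed(void)
	{
		X = 1;
		my_atomic_inc_fixed(&U);	/* full barrier: LOCK orders
						 * the CPU, the clobber
						 * orders the compiler.  */
		return Y;
	}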
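[For contrast, the rejected alternative in the quoted asm would look
roughly like this sketch (again with a hypothetical name): making
smp_mb__after_atomic() a real full barrier, paying for a second
serializing instruction even though the LOCK'd increment already
ordered the CPU:]

	static inline void my_smp_mb(void)
	{
		/* A LOCK'd RMW of a dummy stack slot is a full barrier
		 * on x86; this is the "lock addl $0, -4(%rsp)" from the
		 * asm above, i.e. what x86 smp_mb() boils down to. */
		asm volatile("lock addl $0, -4(%%rsp)" ::: "memory", "cc");
	}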