Date: Thu, 13 Jun 2019 12:58:11 -0400 (EDT)
From: Alan Stern
To: David Howells
Cc: Peter Zijlstra
Subject: Re: [PATCH v2 0/4] atomic: Fixes to smp_mb__{before,after}_atomic() and mips.
In-Reply-To: <1674.1560435952@warthog.procyon.org.uk>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 13 Jun 2019, David Howells wrote:

> Peter Zijlstra wrote:
>
> > Basically we fail for:
> >
> > 	*x = 1;
> > 	atomic_inc(u);
> > 	smp_mb__after_atomic();
> > 	r0 = *y;
> >
> > Because, while the atomic_inc() implies memory order, it
> > (surprisingly) does not provide a compiler barrier.  This then allows
> > the compiler to re-order like so:
>
> To quote memory-barriers.txt:
>
>  (*) smp_mb__before_atomic();
>  (*) smp_mb__after_atomic();
>
>      These are for use with atomic (such as add, subtract, increment and
>      decrement) functions that don't return a value, especially when used
>      for reference counting.  These functions do not imply memory barriers.
>
> so it's entirely to be expected?

The text is perhaps ambiguous.  It means that the atomic functions which
don't return values -- like atomic_inc() -- do not imply memory barriers.
It doesn't mean that smp_mb__before_atomic() and smp_mb__after_atomic()
do not imply memory barriers.

The behavior Peter described is not to be expected.  The expectation is
that the smp_mb__after_atomic() in the example should force the "*x = 1"
store to execute before the "r0 = *y" load.  But on current x86 it
doesn't force this, for the reason explained in the description.

Alan
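
The expectation Alan describes can be checked against the kernel's formal
memory model by writing the pattern as a litmus test for the herd7 tool and
tools/memory-model.  The sketch below is only an illustration of the scenario
under discussion, not something taken from the thread: P1 and its smp_mb()
are an assumed store-buffering partner, and WRITE_ONCE()/READ_ONCE() stand in
for the plain "*x = 1" and "r0 = *y" accesses so the model does not flag them
as data races.

	C SB+atomic_inc-after_atomic+mb

	(*
	 * P0 is the sequence from Peter's example; P1 is an assumed
	 * store-buffering partner added for illustration.  If
	 * smp_mb__after_atomic() orders the "*x = 1" store before the
	 * "r0 = *y" load, the outcome in the exists clause can never
	 * happen.
	 *)

	{}

	P0(int *x, int *y, atomic_t *u)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		atomic_inc(u);
		smp_mb__after_atomic();
		r0 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r1;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r1 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 1:r1=0)

The memory model should report the r0 == 0 && r1 == 0 outcome as forbidden,
which matches the expectation stated above; the problem the patch series
addresses is that the x86 implementation of smp_mb__after_atomic() did not
stop the compiler from moving the plain accesses across the atomic operation.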