Subject: Re: Adding plain accesses and detecting data races in the LKMM
To: "Paul E. McKenney"
Cc: Andrea Parri, Alan Stern, Boqun Feng, Daniel Lustig, David Howells,
    Jade Alglave, Luc Maranget, Nicholas Piggin, Peter Zijlstra,
    Will Deacon, Daniel Kroening, Kernel development list, Akira Yokosawa
References: <20190418125412.GA10817@andrea> <20190419005302.GA5311@andrea>
    <20190419124720.GU14111@linux.ibm.com>
From: Akira Yokosawa
Message-ID: <2827195a-f203-b9cd-444d-cf6425cef06f@gmail.com>
Date: Sat, 20 Apr 2019 00:06:58 +0900
In-Reply-To: <20190419124720.GU14111@linux.ibm.com>

Hi Paul,

Please find inline comments below.

On Fri, 19 Apr 2019 05:47:20 -0700, Paul E.
McKenney wrote:
> On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote:
>>> Are you saying that on x86, atomic_inc() acts as a full memory barrier
>>> but not as a compiler barrier, and vice versa for
>>> smp_mb__after_atomic()?  Or that neither atomic_inc() nor
>>> smp_mb__after_atomic() implements a full memory barrier?
>>
>> I'd say the former; AFAICT, these boil down to:
>>
>> https://elixir.bootlin.com/linux/v5.1-rc5/source/arch/x86/include/asm/atomic.h#L95
>> https://elixir.bootlin.com/linux/v5.1-rc5/source/arch/x86/include/asm/barrier.h#L84
>
> OK, how about the following?
>
> 							Thanx, Paul
>
> ------------------------------------------------------------------------
>
> commit 19d166dadc4e1bba4b248fb46d32ca4f2d10896b
> Author: Paul E. McKenney
> Date:   Fri Apr 19 05:20:30 2019 -0700
>
>     tools/memory-model: Make smp_mb__{before,after}_atomic() match x86
>
>     Read-modify-write atomic operations that do not return values need not
>     provide any ordering guarantees, and this means that both the compiler
>     and the CPU are free to reorder accesses across things like atomic_inc()
>     and atomic_dec().  The stronger systems such as x86 allow the compiler
>     to do the reordering, but prevent the CPU from so doing, and these
>     systems implement smp_mb__{before,after}_atomic() as compiler barriers.
>     The weaker systems such as Power allow both the compiler and the CPU
>     to reorder accesses across things like atomic_inc() and atomic_dec(),
>     and implement smp_mb__{before,after}_atomic() as full memory barriers.
>
>     This means that smp_mb__before_atomic() only orders the atomic operation
>     itself with accesses preceding the smp_mb__before_atomic(), and does
>     not necessarily provide any ordering whatsoever against accesses
>     folowing the atomic operation.
>     Similarly, smp_mb__after_atomic()

s/folowing/following/

>     only orders the atomic operation itself with accesses following the
>     smp_mb__after_atomic(), and does not necessarily provide any ordering
>     whatsoever against accesses preceding the atomic operation.  Full ordering
>     therefore requires both an smp_mb__before_atomic() before the atomic
>     operation and an smp_mb__after_atomic() after the atomic operation.
>
>     Therefore, linux-kernel.cat's current model of Before-atomic
>     and After-atomic is too strong, as it guarantees ordering of
>     accesses on the other side of the atomic operation from the
>     smp_mb__{before,after}_atomic().  This commit therefore weakens
>     the guarantee to match the semantics called out above.
>
>     Reported-by: Andrea Parri
>     Suggested-by: Alan Stern
>     Signed-off-by: Paul E. McKenney
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 169d938c0b53..e5b97c3e8e39 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1888,7 +1888,37 @@ There are some more advanced barrier functions:
>  	atomic_dec(&obj->ref_count);
>
>  	This makes sure that the death mark on the object is perceived to be set
> -	*before* the reference counter is decremented.
> +	*before* the reference counter is decremented.  However, please note
> +	that smp_mb__before_atomic()'s ordering guarantee does not necessarily
> +	extend beyond the atomic operation.  For example:
> +
> +		obj->dead = 1;
> +		smp_mb__before_atomic();
> +		atomic_dec(&obj->ref_count);
> +		r1 = a;
> +
> +	Here the store to obj->dead is not guaranteed to be ordered with
> +	with the load from a.  This reordering can happen on x86 as follows:

s/with//

And I beg you to avoid using the single letter variable "a".
It's confusing.
> +	(1) The compiler can reorder the load from a to precede the
> +	atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a
> +	compiler barrier, the CPU can reorder the preceding store to
> +	obj->dead with the later load from a.
> +
> +	This could be avoided by using READ_ONCE(), which would prevent the
> +	compiler from reordering due to both atomic_dec() and READ_ONCE()
> +	being volatile accesses, and is usually preferable for loads from
> +	shared variables.  However, weakly ordered CPUs would still be
> +	free to reorder the atomic_dec() with the load from a, so a more
> +	readable option is to also use smp_mb__after_atomic() as follows:

The point here is not just "readability", but also the portability of
the code, isn't it?

        Thanks, Akira

> +
> +		WRITE_ONCE(obj->dead, 1);
> +		smp_mb__before_atomic();
> +		atomic_dec(&obj->ref_count);
> +		smp_mb__after_atomic();
> +		r1 = READ_ONCE(a);
> +
> +	This orders all three accesses against each other, and also makes
> +	the intent quite clear.
>
>  See Documentation/atomic_{t,bitops}.txt for more information.
>
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index 8dcb37835b61..b6866f93abb8 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -28,8 +28,8 @@ include "lock.cat"
>  let rmb = [R \ Noreturn] ; fencerel(Rmb) ; [R \ Noreturn]
>  let wmb = [W] ; fencerel(Wmb) ; [W]
>  let mb = ([M] ; fencerel(Mb) ; [M]) |
> -	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
> -	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
> +	([M] ; fencerel(Before-atomic) ; [RMW]) |
> +	([RMW] ; fencerel(After-atomic) ; [M]) |
>  	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
>  	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
>  		fencerel(After-unlock-lock) ; [M]) |
>
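As an aside, the weakened rule can be made concrete with a litmus test.
The following is my own sketch (not taken from your patch; the variable
and test names are mine), which I would expect herd7 to report as
forbidden under the current linux-kernel.cat but allowed under the
patched one, since the new Before-atomic rule no longer carries the
ordering past the RMW to P0's second store:

	C mb-before-atomic-is-one-sided

	{}

	P0(int *x, int *y, atomic_t *v)
	{
		WRITE_ONCE(*x, 1);
		smp_mb__before_atomic();
		atomic_dec(v);
		WRITE_ONCE(*y, 1);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)

With the old model, the mb relation ordered the store to x before the
store to y (via "[RMW] ; po? ; [M]"), so the exists clause could not be
satisfied; with the new model, only the store to x and the atomic_dec()
are ordered, leaving the store to y free to propagate first.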