Date: Mon, 17 Dec 2018 11:02:40 -0500 (EST)
From: Alan Stern
To: "Paul E. McKenney"
Cc: David Goldblatt, Florian Weimer, ...
Subject: Re: [PATCH] Linux: Implement membarrier function
In-Reply-To: <20181216185130.GB4170@linux.ibm.com>

On Sun, 16 Dec 2018, Paul E. McKenney wrote:

> OK, so "simultaneous" IPIs could be emulated in a real implementation by
> having sys_membarrier() send each IPI (but not wait for a response), then
> execute a full memory barrier and set a shared variable.  Each IPI handler
> would spin waiting for the shared variable to be set, then execute a full
> memory barrier and atomically increment yet another shared variable and
> return from interrupt.  When that other shared variable's value reached
> the number of IPIs sent, the sys_membarrier() would execute its final
> (already existing) full memory barrier and return.  Horribly expensive
> and definitely not recommended, but eminently doable.

I don't think that's right.  What would make the IPIs "simultaneous"
would be if none of the handlers returned until all of them had started
executing.  For example, you could have each handler increment a shared
variable and then spin, waiting for the variable to reach the number of
CPUs, before returning.  What you wrote was to have each handler wait
until all the IPIs had been sent, which is not the same thing at all.
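In code, the sort of handler I have in mind would look something like
this (just a sketch, not a tested patch: the names are invented, and
I've ignored how the counter gets reset between rounds of IPIs):

        /* Sketch: each handler checks in, then spins until every
         * handler has checked in, so that none of them returns before
         * all of them have started executing.
         */
        static atomic_t memb_arrived = ATOMIC_INIT(0);

        static void memb_ipi_handler(void *info)
        {
                int nr_ipis = *(int *)info;     /* number of IPIs sent */

                smp_mb();       /* order prior accesses before the rendezvous */
                atomic_inc(&memb_arrived);
                while (atomic_read(&memb_arrived) < nr_ipis)
                        cpu_relax();    /* wait for the other handlers */
                smp_mb();       /* order the rendezvous before later accesses */
        }

The point is that the rendezvous is among the handlers themselves, not
between the handlers and the caller of sys_membarrier().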
> The difference between current sys_membarrier() and the "simultaneous"
> variant described above is similar to the difference between
> non-multicopy-atomic and multicopy-atomic memory ordering.  So, after
> thinking it through, my guess is that pretty much any litmus test that
> can discern between multicopy-atomic and non-multicopy-atomic should
> be transformable into something that can distinguish between the current
> and the "simultaneous" sys_membarrier() implementation.
>
> Seem reasonable?

Yes.

> Or alternatively, may I please apply your Signed-off-by to your earlier
> sys_membarrier() patch so that I can queue it?  I will probably also
> change smp_memb() to membarrier() or some such.  Again, within the
> Linux kernel, membarrier() can be emulated with smp_call_function()
> invoking a handler that does smp_mb().

Do you really want to put sys_membarrier into the LKMM?  I'm not so
sure it's appropriate.

Alan
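P.S.  For concreteness, I take it the smp_call_function() emulation
you mention would amount to little more than this (again just a
sketch, with an invented function name):

        /* Sketch: execute a full barrier on every other CPU and wait
         * for all of them, with full barriers on this CPU on either
         * side of the cross-call.
         */
        static void do_smp_mb(void *unused)
        {
                smp_mb();
        }

        static void emulate_membarrier(void)
        {
                smp_mb();                               /* order prior accesses */
                smp_call_function(do_smp_mb, NULL, 1);  /* wait for handlers */
                smp_mb();                               /* order later accesses */
        }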