Date: Mon, 15 Oct 2018 17:09:03 +0200
From: Sebastian Andrzej Siewior
To: Boqun Feng
Cc: "Paul E. McKenney", Tejun Heo, linux-kernel@vger.kernel.org,
 Peter Zijlstra, "Aneesh Kumar K.V", tglx@linutronix.de,
 Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan
Subject: Re: [PATCH] rcu: Use cpus_read_lock() while looking at cpu_online_mask
Message-ID: <20181015150902.asifwhikqkz53ai4@linutronix.de>
References: <20180910135615.tr3cvipwbhq6xug4@linutronix.de>
 <20180911160532.GJ4225@linux.vnet.ibm.com>
 <20180911162142.cc3vgook2gctus4c@linutronix.de>
 <20180911170222.GO4225@linux.vnet.ibm.com>
 <20180919205521.GE902964@devbig004.ftw2.facebook.com>
 <20180919221140.GH4222@linux.ibm.com>
 <20181012184114.w332lnkc34evd4sm@linutronix.de>
 <20181013134813.GD2674@linux.ibm.com>
 <20181015144217.nu5cp5mxlboyjbre@linutronix.de>
 <20181015150715.GA2422@tardis>
In-Reply-To: <20181015150715.GA2422@tardis>

On 2018-10-15 23:07:15 [+0800], Boqun Feng wrote:
> Hi, Sebastian

Hi Boqun,

> On Mon, Oct 15, 2018 at 04:42:17PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2018-10-13 06:48:13 [-0700], Paul E. McKenney wrote:
> > > My concern would be that it would queue it by default for the current
> > > CPU, which would serialize the processing, losing the concurrency of
> > > grace-period initialization. But that was a long time ago, and perhaps
> > > workqueues have changed.
> >
> > but the code here is always using the first CPU of a NUMA node, or did I
> > miss something?
>
> The thing is, the original way is to pick one CPU per *RCU* node to
> run the grace-period work, but with your proposal, if an RCU node is
> smaller than a NUMA node (has fewer CPUs), we could end up with two
> grace-period works running on one CPU. I think that's Paul's concern.

Ah, okay. From what I observed, the RCU nodes and NUMA nodes were 1:1
here. Noted.
Given that I can enqueue a work item on an offlined CPU, I don't see why
commit fcc6354365015 ("rcu: Make expedited GPs handle CPU 0 being
offline") should make a difference. Any objections to just reverting it?

> Regards,
> Boqun

Sebastian