Date: Fri, 11 Dec 2020 13:16:38 +0000
From: Vincent Donnefort
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, Qian Cai, Peter Zijlstra, tglx@linutronix.de, mingo@kernel.org, bigeasy@linutronix.de, qais.yousef@arm.com, swood@redhat.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, tj@kernel.org, ouwen210@hotmail.com
Subject: Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during late hotplug
Message-ID: <20201211131638.GA142813@e120877-lin.cambridge.arm.com>
References: <20201210163830.21514-1-valentin.schneider@arm.com> <20201210163830.21514-3-valentin.schneider@arm.com> <20201211113920.GA75974@e120877-lin.cambridge.arm.com>

On Fri, Dec 11, 2020 at 01:13:35PM +0000, Valentin Schneider wrote:
> On 11/12/20 12:51, Valentin Schneider wrote:
> >> In that case maybe we should check for the cpu_active_mask here too ?
> >
> > Looking at it again, I think we might need to.
> >
> > IIUC you can end up with pools bound to a single NUMA node (?). In that
> > case, say the last CPU of a node is going down, then:
> >
> >   workqueue_offline_cpu()
> >     wq_update_unbound_numa()
> >       alloc_unbound_pwq()
> >         get_unbound_pool()
> >
> > would still pick that node, because it doesn't look at the online / active
> > mask. And at this point, we would affine the kworkers to that node, and
> > we're back to having kworkers enqueued on a (!active, online) CPU that is
> > going down...
>
> Assuming a node covers at least 2 CPUs, that can't actually happen per
> is_cpu_allowed().

Yes indeed, my bad, no problem here.
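
For the archives, my understanding of why the scenario can't happen -- this
is a paraphrase of the v5.10-era scheduler check from memory, a sketch rather
than a verbatim copy of the source -- is that is_cpu_allowed() only lets
per-CPU kthreads onto an online-but-!active CPU, while unbound kworkers fall
through to the cpu_active() test:

	/* Sketch of the check being referenced, not verbatim kernel code. */
	static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
	{
		/* CPU not in the task's affinity mask: never allowed. */
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			return false;

		/* Per-CPU kthreads may run on !active but online CPUs. */
		if (is_per_cpu_kthread(p))
			return cpu_online(cpu);

		/* Everyone else, incl. unbound kworkers, needs an active CPU. */
		return cpu_active(cpu);
	}

So as long as the node going down still has at least one other active CPU,
the kworkers get placed there rather than on the dying one.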