From: Valentin Schneider
To: Vincent Donnefort
Cc: linux-kernel@vger.kernel.org, Qian Cai, Peter Zijlstra, tglx@linutronix.de,
    mingo@kernel.org, bigeasy@linutronix.de, qais.yousef@arm.com,
    swood@redhat.com, juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, tj@kernel.org, ouwen210@hotmail.com
Subject: Re: [PATCH 2/2] workqueue: Fix affinity of kworkers attached during late hotplug
Date: Fri, 11 Dec 2020 13:13:35 +0000
References: <20201210163830.21514-1-valentin.schneider@arm.com>
 <20201210163830.21514-3-valentin.schneider@arm.com>
 <20201211113920.GA75974@e120877-lin.cambridge.arm.com>
User-agent: mu4e 0.9.17; emacs 26.3

On 11/12/20 12:51, Valentin Schneider wrote:
>> In that case maybe we should check for the cpu_active_mask here too ?
>
> Looking at it again, I think we might need to.
>
> IIUC you can end up with pools bound to a single NUMA node (?). In that
> case, say the last CPU of a node is going down, then:
>
>   workqueue_offline_cpu()
>     wq_update_unbound_numa()
>       alloc_unbound_pwq()
>         get_unbound_pool()
>
> would still pick that node, because it doesn't look at the online / active
> mask. And at this point, we would affine the kworkers to that node, and
> we're back to having kworkers enqueued on a (!active, online) CPU that is
> going down...

Assuming a node covers at least 2 CPUs, that can't actually happen, per
is_cpu_allowed().
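
To illustrate the reasoning, here is a rough, self-contained userspace toy
model, not the actual kernel code: the mask layout and the pool_node() /
is_cpu_allowed() helpers below are simplified stand-ins for
get_unbound_pool()'s node selection and for the scheduler's is_cpu_allowed()
gate. The node is picked purely from the attrs cpumask against the per-node
possible mask (no online/active check), while an unbound kworker is only
allowed on a CPU that is both in its cpumask and active, so with >= 2 CPUs
per node the wakeup path just routes around the dying CPU.

/*
 * Toy model (userspace, compile with any C99 compiler), NOT kernel code.
 * Masks are plain bitmasks over 8 CPUs: node 0 = CPUs 0-3, node 1 = CPUs 4-7.
 * Helper names mirror the kernel functions but are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int cpumask_t;		/* bit i set => CPU i is in the mask */

static const cpumask_t node_possible[2] = { 0x0f, 0xf0 };

/*
 * Stand-in for get_unbound_pool()'s node choice: it only compares the
 * attrs cpumask against the per-node *possible* mask, never against
 * what is currently online or active.
 */
static int pool_node(cpumask_t attrs_cpumask)
{
	for (int node = 0; node < 2; node++)
		if ((attrs_cpumask & ~node_possible[node]) == 0)
			return node;
	return -1;			/* no single node covers the mask */
}

/*
 * Stand-in for is_cpu_allowed() as seen by an *unbound* kworker:
 * the CPU must be in the task's cpumask AND active.
 */
static bool cpu_allowed(cpumask_t task_mask, cpumask_t active, int cpu)
{
	return (task_mask & (1u << cpu)) && (active & (1u << cpu));
}

int main(void)
{
	cpumask_t node0_kworker = 0x0f;	/* kworker affined to node 0 */
	cpumask_t active = 0xfe;	/* CPU0 is going down (!active) */

	printf("pool node: %d\n", pool_node(node0_kworker));	/* -> 0 */
	for (int cpu = 0; cpu < 4; cpu++)
		printf("CPU%d allowed: %d\n", cpu,
		       cpu_allowed(node0_kworker, active, cpu));
	/* CPU0 is rejected, CPUs 1-3 remain usable: with >= 2 CPUs per
	 * node the kworker never lands on the dying CPU. */
	return 0;
}

IIRC the real is_cpu_allowed() also special-cases per-CPU kthreads (for those
cpu_online() is enough); that case is left out of the toy above.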