From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Thomas Gleixner, Robin Murphy, Nitesh Narayan Lal, Marcelo Tosatti,
	abelits@marvell.com, davem@davemloft.net, Sasha Levin
Subject: [PATCH AUTOSEL 5.10 35/36] Revert "lib: Restrict cpumask_local_spread to houskeeping CPUs"
Date: Mon, 8 Feb 2021 12:58:05 -0500
Message-Id: <20210208175806.2091668-35-sashal@kernel.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210208175806.2091668-1-sashal@kernel.org>
References: <20210208175806.2091668-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Thomas Gleixner

[ Upstream commit 2452483d9546de1c540f330469dc4042ff089731 ]

This reverts commit 1abdfe706a579a702799fce465bceb9fb01d407c.

This change is broken and does not solve any problem it claims to solve.

Robin reported that cpumask_local_spread() now returns any CPU out of
cpu_possible_mask when NOHZ_FULL is disabled (at runtime or compile
time). It can also return any offline or not-present CPU in the
housekeeping mask. Before this change it returned a CPU out of
cpu_online_mask.

While the function is racy against CPU hotplug if the caller does not
protect against it, the actual use cases do not care much about that,
as they use the result mostly as a hint for:

 - the user space affinity hint, which is unused by the kernel
 - memory node selection, which is merely suboptimal
 - network queue affinity, which might fail but is handled gracefully

But the occasional failure vs. hotplug is very different from returning
anything out of cpu_possible_mask, which can obviously contain a large
number of offline CPUs.
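(A minimal illustrative sketch, not part of this patch: the kind of caller
the hint list above refers to is a driver spreading per-queue IRQ affinity
hints. The driver-side names, example_set_queue_hints(), nr_queues and
queue_irqs[], are hypothetical; cpumask_local_spread(), cpumask_of() and
irq_set_affinity_hint() are existing kernel APIs.)

#include <linux/cpumask.h>
#include <linux/interrupt.h>

static void example_set_queue_hints(unsigned int nr_queues,
				    const int *queue_irqs, int node)
{
	unsigned int q;

	for (q = 0; q < nr_queues; q++) {
		/* Pick the q-th CPU close to @node; this is only a hint. */
		unsigned int cpu = cpumask_local_spread(q, node);

		/*
		 * With the reverted change, @cpu could be an offline or
		 * not-present CPU once NOHZ_FULL is disabled, which makes
		 * the hint useless; after the revert it is always an
		 * online CPU.
		 */
		irq_set_affinity_hint(queue_irqs[q], cpumask_of(cpu));
	}
}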
The changelog of the commit claims:

 "The current implementation of cpumask_local_spread() does not respect
  the isolated CPUs, i.e., even if a CPU has been isolated for Real-Time
  task, it will return it to the caller for pinning of its IRQ threads.
  Having these unwanted IRQ threads on an isolated CPU adds up to a
  latency overhead."

The only correct part of this changelog is:

 "The current implementation of cpumask_local_spread() does not respect
  the isolated CPUs."

Everything else is just disjunct from reality.

Reported-by: Robin Murphy
Signed-off-by: Thomas Gleixner
Cc: Nitesh Narayan Lal
Cc: Marcelo Tosatti
Cc: abelits@marvell.com
Cc: davem@davemloft.net
Link: https://lore.kernel.org/r/87y2g26tnt.fsf@nanos.tec.linutronix.de
Signed-off-by: Sasha Levin
---
 lib/cpumask.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index 85da6ab4fbb5a..fb22fb266f937 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -6,7 +6,6 @@
 #include <linux/export.h>
 #include <linux/memblock.h>
 #include <linux/numa.h>
-#include <linux/sched/isolation.h>
 
 /**
  * cpumask_next - get the next cpu in a cpumask
@@ -206,27 +205,22 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu, hk_flags;
-	const struct cpumask *mask;
+	int cpu;
 
-	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
-	mask = housekeeping_cpumask(hk_flags);
 	/* Wrap: we always want a cpu. */
-	i %= cpumask_weight(mask);
+	i %= num_online_cpus();
 
 	if (node == NUMA_NO_NODE) {
-		for_each_cpu(cpu, mask) {
+		for_each_cpu(cpu, cpu_online_mask)
 			if (i-- == 0)
 				return cpu;
-		}
 	} else {
 		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
+		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
 			if (i-- == 0)
 				return cpu;
-		}
 
-		for_each_cpu(cpu, mask) {
+		for_each_cpu(cpu, cpu_online_mask) {
 			/* Skip NUMA nodes, done above. */
 			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
 				continue;
-- 
2.27.0
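(Another illustrative sketch, not part of this patch: as the message above
notes, cpumask_local_spread() is racy against CPU hotplug unless the caller
protects against it. A caller that needs the returned CPU to stay online
while it acts on it can hold the hotplug read lock around the lookup.
example_queue_on_spread_cpu() is a hypothetical helper; cpus_read_lock(),
cpus_read_unlock() and queue_work_on() are existing kernel APIs.)

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/workqueue.h>

static void example_queue_on_spread_cpu(struct work_struct *work,
					unsigned int index, int node)
{
	unsigned int cpu;

	/* Hold the hotplug lock so the chosen CPU cannot go offline here. */
	cpus_read_lock();
	cpu = cpumask_local_spread(index, node);
	queue_work_on(cpu, system_wq, work);
	cpus_read_unlock();
}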