Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
To: Marcelo Tosatti
Cc: Nitesh Narayan Lal, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
    frederic@kernel.org, juri.lelli@redhat.com, abelits@marvell.com,
    bhelgaas@google.com, linux-pci@vger.kernel.org, rostedt@goodmis.org,
    mingo@kernel.org, peterz@infradead.org, tglx@linutronix.de,
    davem@davemloft.net, akpm@linux-foundation.org, sfr@canb.auug.org.au,
    stephen@networkplumber.org, rppt@linux.vnet.ibm.com, jinyuqi@huawei.com,
    zhangshaokun@hisilicon.com
References: <20200625223443.2684-1-nitesh@redhat.com>
    <20200625223443.2684-2-nitesh@redhat.com>
    <3e9ce666-c9cd-391b-52b6-3471fe2be2e6@arm.com>
    <20210127121939.GA54725@fuller.cnet>
From: Robin Murphy
Date: Wed, 27 Jan 2021 12:36:30 +0000
In-Reply-To: <20210127121939.GA54725@fuller.cnet>

On 2021-01-27 12:19, Marcelo Tosatti wrote:
> On Wed, Jan 27, 2021 at 11:57:16AM +0000, Robin Murphy wrote:
>> Hi,
>>
>> On 2020-06-25 23:34, Nitesh Narayan Lal wrote:
>>> From: Alex Belits <abelits@marvell.com>
>>>
>>> The current implementation of cpumask_local_spread() does not respect
>>> isolated CPUs: even if a CPU has been isolated for a real-time task,
>>> it is still returned to the caller for the pinning of its IRQ
>>> threads. Having these unwanted IRQ threads on an isolated CPU adds
>>> latency overhead.
>>>
>>> Restrict the CPUs returned for spreading IRQs to the available
>>> housekeeping CPUs.
>>>
>>> Signed-off-by: Alex Belits <abelits@marvell.com>
>>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
>>> ---
>>>   lib/cpumask.c | 16 +++++++++++-----
>>>   1 file changed, 11 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/lib/cpumask.c b/lib/cpumask.c
>>> index fb22fb266f93..85da6ab4fbb5 100644
>>> --- a/lib/cpumask.c
>>> +++ b/lib/cpumask.c
>>> @@ -6,6 +6,7 @@
>>>   #include <linux/export.h>
>>>   #include <linux/memblock.h>
>>>   #include <linux/numa.h>
>>> +#include <linux/sched/isolation.h>
>>>
>>>   /**
>>>    * cpumask_next - get the next cpu in a cpumask
>>> @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>>>    */
>>>   unsigned int cpumask_local_spread(unsigned int i, int node)
>>>   {
>>> -     int cpu;
>>> +     int cpu, hk_flags;
>>> +     const struct cpumask *mask;
>>>
>>> +     hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>>> +     mask = housekeeping_cpumask(hk_flags);
>>
>> AFAICS, this generally resolves to something based on cpu_possible_mask
>> rather than cpu_online_mask as before, so could now potentially return
>> an offline CPU. Was that an intentional change?
>
> Robin,
>
> AFAICS online CPUs should be filtered.

Apologies if I'm being thick, but can you explain how? In the case of
isolation being disabled or compiled out, housekeeping_cpumask() is
literally just "return cpu_possible_mask;". If we then iterate over that
with for_each_cpu() and just return the i'th possible CPU (e.g. in the
NUMA_NO_NODE case), what guarantees that CPU is actually online?
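
To illustrate: with isolation compiled out, the stub in
include/linux/sched/isolation.h reduces to something like this
(paraphrasing from memory, so not a verbatim quote of the header):

    static inline const struct cpumask *housekeeping_cpumask(enum hk_flags flags)
    {
            /* No isolation support built in: every possible CPU does housekeeping */
            return cpu_possible_mask;
    }

Nothing on that path ever consults cpu_online_mask, so if some possible
CPUs happen to be offline, the i'th set bit can name one of them.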

Robin.

>> I was just looking at the current code since I had the rare presence
>> of mind to check if something suitable already existed before I start
>> open-coding "any online CPU, but local node preferred" logic for
>> handling IRQ affinity in a driver - cpumask_local_spread() appears to
>> be almost what I want (if a bit more heavyweight), if only it would
>> actually guarantee an online CPU as the kerneldoc claims :(
>>
>> Robin.
>>
>>>       /* Wrap: we always want a cpu. */
>>> -     i %= num_online_cpus();
>>> +     i %= cpumask_weight(mask);
>>>
>>>       if (node == NUMA_NO_NODE) {
>>> -             for_each_cpu(cpu, cpu_online_mask)
>>> +             for_each_cpu(cpu, mask) {
>>>                       if (i-- == 0)
>>>                               return cpu;
>>> +             }
>>>       } else {
>>>               /* NUMA first. */
>>> -             for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
>>> +             for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>>>                       if (i-- == 0)
>>>                               return cpu;
>>> +             }
>>>
>>> -             for_each_cpu(cpu, cpu_online_mask) {
>>> +             for_each_cpu(cpu, mask) {
>>>                       /* Skip NUMA nodes, done above. */
>>>                       if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>>>                               continue;
>>>
>
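
P.S. For concreteness, the sort of open-coded "any online CPU, but local
node preferred" fallback I mean above would be roughly the following -
an illustrative sketch with a made-up function name, not actual driver
code:

    #include <linux/cpumask.h>
    #include <linux/numa.h>      /* NUMA_NO_NODE */
    #include <linux/topology.h>  /* cpumask_of_node() */

    /* Prefer an online CPU on the local node, else fall back to any online CPU. */
    static unsigned int pick_irq_cpu(int node)
    {
            unsigned int cpu = nr_cpu_ids;

            if (node != NUMA_NO_NODE)
                    cpu = cpumask_first_and(cpumask_of_node(node), cpu_online_mask);
            if (cpu >= nr_cpu_ids)
                    cpu = cpumask_first(cpu_online_mask);
            return cpu;
    }

Still racy against hotplug, of course, but at least it starts from
cpu_online_mask rather than cpu_possible_mask.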