From: Namhyung Kim
Date: Tue, 11 Aug 2020 16:26:50 +0900
Subject: Re: [PATCH v2 1/3] perf bench numa: use numa_node_to_cpus() to bind tasks to nodes
To: Alexander Gordeev
Cc: linux-kernel, Satheesh Rajendran, Srikar Dronamraju, "Naveen N. Rao", Balamuruhan S, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa
X-Mailing-List: linux-kernel@vger.kernel.org

Hello,

On Mon, Aug 10, 2020 at 3:22 PM Alexander Gordeev wrote:
>
> It is currently assumed that each node contains at most
> nr_cpus/nr_nodes CPUs and that node CPU ranges do not overlap.
> That assumption is generally incorrect, as there are archs where
> a CPU number does not depend on its node number.
>
> This update removes the described assumption by simply calling
> the numa_node_to_cpus() interface and using the returned mask
> for binding CPUs to nodes. It also tightens a cpumask allocation
> failure check a bit.
>
> Cc: Satheesh Rajendran
> Cc: Srikar Dronamraju
> Cc: Naveen N. Rao
> Cc: Balamuruhan S
> Cc: Peter Zijlstra
> Cc: Ingo Molnar
> Cc: Arnaldo Carvalho de Melo
> Cc: Mark Rutland
> Cc: Alexander Shishkin
> Cc: Jiri Olsa
> Cc: Namhyung Kim
> Signed-off-by: Alexander Gordeev
> ---
>  tools/perf/bench/numa.c | 27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
> index 5797253..23e224e 100644
> --- a/tools/perf/bench/numa.c
> +++ b/tools/perf/bench/numa.c
> @@ -247,12 +247,13 @@ static int is_node_present(int node)
>   */
>  static bool node_has_cpus(int node)
>  {
> -	struct bitmask *cpu = numa_allocate_cpumask();
> +	struct bitmask *cpumask = numa_allocate_cpumask();
>  	unsigned int i;
>
> -	if (cpu && !numa_node_to_cpus(node, cpu)) {
> -		for (i = 0; i < cpu->size; i++) {
> -			if (numa_bitmask_isbitset(cpu, i))
> +	BUG_ON(!cpumask);
> +	if (!numa_node_to_cpus(node, cpumask)) {
> +		for (i = 0; i < cpumask->size; i++) {
> +			if (numa_bitmask_isbitset(cpumask, i))
>  				return true;
>  		}
>  	}
> @@ -288,14 +289,10 @@ static cpu_set_t bind_to_cpu(int target_cpu)
>
>  static cpu_set_t bind_to_node(int target_node)
>  {
> -	int cpus_per_node = g->p.nr_cpus / nr_numa_nodes();
>  	cpu_set_t orig_mask, mask;
>  	int cpu;
>  	int ret;
>
> -	BUG_ON(cpus_per_node * nr_numa_nodes() != g->p.nr_cpus);
> -	BUG_ON(!cpus_per_node);
> -
>  	ret = sched_getaffinity(0, sizeof(orig_mask), &orig_mask);
>  	BUG_ON(ret);
>
> @@ -305,13 +302,15 @@ static cpu_set_t bind_to_node(int target_node)
>  		for (cpu = 0; cpu < g->p.nr_cpus; cpu++)
>  			CPU_SET(cpu, &mask);
>  	} else {
> -		int cpu_start = (target_node + 0) * cpus_per_node;
> -		int cpu_stop  = (target_node + 1) * cpus_per_node;
> -
> -		BUG_ON(cpu_stop > g->p.nr_cpus);
> +		struct bitmask *cpumask = numa_allocate_cpumask();
>
> -		for (cpu = cpu_start; cpu < cpu_stop; cpu++)
> -			CPU_SET(cpu, &mask);
> +		BUG_ON(!cpumask);
> +		if (!numa_node_to_cpus(target_node, cpumask)) {
> +			for (cpu = 0; cpu < (int)cpumask->size; cpu++) {
> +				if (numa_bitmask_isbitset(cpumask, cpu))
> +					CPU_SET(cpu, &mask);
> +			}
> +		}

It seems you need to call numa_free_cpumask() for both functions.

Thanks
Namhyung

>  	}
>
>  	ret = sched_setaffinity(0, sizeof(mask), &mask);
> --
> 1.8.3.1
>
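[Editor's note: for illustration, the missing numa_free_cpumask() that Namhyung points out could be addressed along these lines. This is an untested sketch on top of the patch, not the actual follow-up that was merged; bind_to_node() would need the same release before its return paths.]

```c
static bool node_has_cpus(int node)
{
	struct bitmask *cpumask = numa_allocate_cpumask();
	bool ret = false;	/* fall back to "no CPUs" if the lookup fails */
	unsigned int i;

	BUG_ON(!cpumask);
	if (!numa_node_to_cpus(node, cpumask)) {
		for (i = 0; i < cpumask->size; i++) {
			if (numa_bitmask_isbitset(cpumask, i)) {
				ret = true;
				break;
			}
		}
	}
	/* free the mask on every path instead of returning early */
	numa_free_cpumask(cpumask);

	return ret;
}
```

The key change is replacing the early `return true;` inside the loop with a flag and a single exit, so the bitmask allocated by numa_allocate_cpumask() is always released.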