From: Alexander Gordeev
To: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Alexander Gordeev
Subject: [PATCH 1/3] perf bench numa: fix incorrect NUMA topology assumption
Date: Mon, 6 Jul 2020 11:42:52 +0200
Message-Id: <73667aca03e6f2e1247729f203490166ed16a83d.1594028310.git.agordeev@linux.ibm.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: 
References: 

The current code assumes that CPUs are spread evenly across NUMA nodes.
That assumption does not hold in general and leads to failures on
systems where CPUs are not evenly distributed among the nodes.

Signed-off-by: Alexander Gordeev
---
 tools/perf/bench/numa.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
index 5797253..23e224e 100644
--- a/tools/perf/bench/numa.c
+++ b/tools/perf/bench/numa.c
@@ -247,12 +247,13 @@ static int is_node_present(int node)
  */
 static bool node_has_cpus(int node)
 {
-	struct bitmask *cpu = numa_allocate_cpumask();
+	struct bitmask *cpumask = numa_allocate_cpumask();
 	unsigned int i;
 
-	if (cpu && !numa_node_to_cpus(node, cpu)) {
-		for (i = 0; i < cpu->size; i++) {
-			if (numa_bitmask_isbitset(cpu, i))
+	BUG_ON(!cpumask);
+	if (!numa_node_to_cpus(node, cpumask)) {
+		for (i = 0; i < cpumask->size; i++) {
+			if (numa_bitmask_isbitset(cpumask, i))
 				return true;
 		}
 	}
@@ -288,14 +289,10 @@ static cpu_set_t bind_to_cpu(int target_cpu)
 
 static cpu_set_t bind_to_node(int target_node)
 {
-	int cpus_per_node = g->p.nr_cpus / nr_numa_nodes();
 	cpu_set_t orig_mask, mask;
 	int cpu;
 	int ret;
 
-	BUG_ON(cpus_per_node * nr_numa_nodes() != g->p.nr_cpus);
-	BUG_ON(!cpus_per_node);
-
 	ret = sched_getaffinity(0, sizeof(orig_mask), &orig_mask);
 	BUG_ON(ret);
 
@@ -305,13 +302,15 @@ static cpu_set_t bind_to_node(int target_node)
 		for (cpu = 0; cpu < g->p.nr_cpus; cpu++)
 			CPU_SET(cpu, &mask);
 	} else {
-		int cpu_start = (target_node + 0) * cpus_per_node;
-		int cpu_stop  = (target_node + 1) * cpus_per_node;
-
-		BUG_ON(cpu_stop > g->p.nr_cpus);
+		struct bitmask *cpumask = numa_allocate_cpumask();
 
-		for (cpu = cpu_start; cpu < cpu_stop; cpu++)
-			CPU_SET(cpu, &mask);
+		BUG_ON(!cpumask);
+		if (!numa_node_to_cpus(target_node, cpumask)) {
+			for (cpu = 0; cpu < (int)cpumask->size; cpu++) {
+				if (numa_bitmask_isbitset(cpumask, cpu))
+					CPU_SET(cpu, &mask);
+			}
+		}
 	}
 
 	ret = sched_setaffinity(0, sizeof(mask), &mask);
-- 
1.8.3.1
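
For reference, here is a small standalone program (not part of the patch; the program and its file name are illustrative only) that enumerates the CPUs of each node with the same libnuma calls the patch switches to, namely numa_allocate_cpumask(), numa_node_to_cpus() and numa_bitmask_isbitset(), instead of deriving a CPU range from nr_cpus / nr_numa_nodes(). On a machine where, say, node 0 has 8 CPUs and node 1 has 24, the removed arithmetic either trips its BUG_ON or binds the task to CPUs that do not belong to the target node, whereas the mask-based lookup reflects the actual topology.

/*
 * Standalone sketch, not part of the patch above: list the CPUs that
 * libnuma reports for every node up to numa_max_node(). The program
 * and the file name show_node_cpus.c are illustrative only; the
 * libnuma calls are the same ones used by the patched numa.c.
 *
 * Build: gcc -o show_node_cpus show_node_cpus.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	int node, cpu;

	if (numa_available() < 0) {
		fprintf(stderr, "NUMA is not available on this system\n");
		return EXIT_FAILURE;
	}

	for (node = 0; node <= numa_max_node(); node++) {
		struct bitmask *cpumask = numa_allocate_cpumask();

		if (!cpumask) {
			fprintf(stderr, "numa_allocate_cpumask() failed\n");
			return EXIT_FAILURE;
		}

		/* Ask libnuma which CPUs belong to this node. */
		if (!numa_node_to_cpus(node, cpumask)) {
			printf("node %d:", node);
			for (cpu = 0; cpu < (int)cpumask->size; cpu++) {
				if (numa_bitmask_isbitset(cpumask, cpu))
					printf(" %d", cpu);
			}
			printf("\n");
		}

		numa_free_cpumask(cpumask);
	}

	return EXIT_SUCCESS;
}

Comparing this program's output with the old cpus_per_node arithmetic on an asymmetric machine makes the failure mode of the removed code easy to see.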