Subject: Re: [PATCH] lib: optimize cpumask_local_spread()
From: Shaokun Zhang
To: Michal Hocko
CC: yuqi jin, Andrew Morton, Mike Rapoport, Paul Burton,
 Michael Ellerman, Anshuman Khandual
Date: Mon, 4 Nov 2019 11:37:20 +0800
Message-ID: <97943c12-5704-e8a2-3736-4d0c23e2ff80@hisilicon.com>
In-Reply-To: <20191031073905.GD13102@dhcp22.suse.cz>
References: <1572501813-2125-1-git-send-email-zhangshaokun@hisilicon.com>
 <20191031073905.GD13102@dhcp22.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Michal,

On 2019/10/31 15:39, Michal Hocko wrote:
> On Thu 31-10-19 14:03:33, Shaokun Zhang wrote:
>> From: yuqi jin
>>
>> On a multi-processor NUMA system, a device may be associated with
>> CPUs spread across multiple NUMA nodes. When the CPUs of the local
>> NUMA node are exhausted, it is better to fall back to the node
>> closest to the local node rather than to an arbitrary online CPU.
>>
>> For example, the Huawei Kunpeng 920 system has 4 NUMA nodes (0-3)
>> across 2 sockets (0-1). If the I/O device sits in socket 1 and its
>> local NUMA node is node 2, then when node 2 has fewer CPU cores than
>> the I/O requirements we should choose the non-local node 3 in the
>> same socket. Picking a core directly from all online CPUs may land
>> us in the other socket, which is not friendly for performance.
>
> My previous review feedback included a request for a much better
> description of the actual problem and how much of a performance gain we
> are talking about, along with a workload description.
>

OK, I will update both in the next version.

> Besides that I do not think that the implementation is great either.

OK, agreed. That is why I sent it as an RFC first and wanted to discuss
this issue; I will rework the implementation to be more reasonable.

> Relying on GFP_ATOMIC is very dubious. Is there any specific reason why
> the data structure cannot be preallocated? The comment for

OK, will do it.

> cpumask_local_spread says that this is not the most efficient function
> so users had better be prepared not to call it from hot paths, AFAIU.
> That would imply that internal locking should be acceptable as well,
> so a preallocated data structure could be used.

OK, a rough sketch of that direction is appended after the quoted patch
below.

Thanks,
Shaokun

>
>> Cc: Andrew Morton
>> Cc: Mike Rapoport
>> Cc: Paul Burton
>> Cc: Michal Hocko
>> Cc: Michael Ellerman
>> Cc: Anshuman Khandual
>> Signed-off-by: yuqi jin
>> Signed-off-by: Shaokun Zhang
>> ---
>> Changes from RFC:
>>   Address Michal Hocko's comment: Use GFP_ATOMIC instead of GFP_KERNEL
>>
>>  lib/cpumask.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++---------
>>  1 file changed, 65 insertions(+), 11 deletions(-)
>>
>> diff --git a/lib/cpumask.c b/lib/cpumask.c
>> index 0cb672eb107c..c92177b0e095 100644
>> --- a/lib/cpumask.c
>> +++ b/lib/cpumask.c
>> @@ -192,6 +192,33 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>>  }
>>  #endif
>>
>> +static void calc_node_distance(int *node_dist, int node)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < nr_node_ids; i++)
>> +		node_dist[i] = node_distance(node, i);
>> +}
>> +
>> +static int find_nearest_node(int *node_dist, bool *used_flag)
>> +{
>> +	int i, min_dist = node_dist[0], node_id = -1;
>> +
>> +	for (i = 0; i < nr_node_ids; i++)
>> +		if (used_flag[i] == 0) {
>> +			min_dist = node_dist[i];
>> +			node_id = i;
>> +			break;
>> +		}
>> +	for (i = 0; i < nr_node_ids; i++)
>> +		if (node_dist[i] < min_dist && used_flag[i] == 0) {
>> +			min_dist = node_dist[i];
>> +			node_id = i;
>> +		}
>> +
>> +	return node_id;
>> +}
>> +
>>  /**
>>   * cpumask_local_spread - select the i'th cpu with local numa cpu's first
>>   * @i: index number
>> @@ -205,7 +232,8 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>>   */
>>  unsigned int cpumask_local_spread(unsigned int i, int node)
>>  {
>> -	int cpu;
>> +	int cpu, j, id, *node_dist;
>> +	bool *used_flag;
>>
>>  	/* Wrap: we always want a cpu. */
>>  	i %= num_online_cpus();
>> @@ -215,19 +243,45 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
>>  			if (i-- == 0)
>>  				return cpu;
>>  	} else {
>> -		/* NUMA first. */
>> -		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
>> -			if (i-- == 0)
>> -				return cpu;
>> +		node_dist = kmalloc_array(nr_node_ids, sizeof(int), GFP_ATOMIC);
>> +		if (!node_dist)
>> +			for_each_cpu(cpu, cpu_online_mask)
>> +				if (i-- == 0)
>> +					return cpu;
>>
>> -		for_each_cpu(cpu, cpu_online_mask) {
>> -			/* Skip NUMA nodes, done above. */
>> -			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>> -				continue;
>> +		used_flag = kmalloc_array(nr_node_ids, sizeof(bool), GFP_ATOMIC);
>> +		if (!used_flag)
>> +			for_each_cpu(cpu, cpu_online_mask)
>> +				if (i-- == 0) {
>> +					kfree(node_dist);
>> +					return cpu;
>> +				}
>> +		memset(used_flag, 0, nr_node_ids * sizeof(bool));
>>
>> -			if (i-- == 0)
>> -				return cpu;
>> +		calc_node_distance(node_dist, node);
>> +		for (j = 0; j < nr_node_ids; j++) {
>> +			id = find_nearest_node(node_dist, used_flag);
>> +			if (id < 0)
>> +				break;
>> +			for_each_cpu_and(cpu,
>> +					 cpumask_of_node(id), cpu_online_mask)
>> +				if (i-- == 0) {
>> +					kfree(node_dist);
>> +					kfree(used_flag);
>> +					return cpu;
>> +				}
>> +			used_flag[id] = 1;
>>  		}
>> +
>> +		for_each_cpu(cpu, cpu_online_mask)
>> +			if (i-- == 0) {
>> +				kfree(node_dist);
>> +				kfree(used_flag);
>> +				return cpu;
>> +			}
>> +
>> +		kfree(node_dist);
>> +		kfree(used_flag);
>>  	}
>>  	BUG();
>>  }
>> --
>> 2.7.4
>
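
P.S. To make the preallocation direction concrete, here is a minimal,
untested sketch of what I have in mind: the scratch buffers are sized
once at compile time (nr_node_ids is stable after boot) and all users
are serialized with a spinlock, so this path never allocates at all.
The names spread_lock, spread_node_dist and spread_used are placeholders
for discussion only, and calc_node_distance()/find_nearest_node() are
the helpers from the patch above:

/* Scratch state shared by all callers; guarded by spread_lock. */
static int spread_node_dist[MAX_NUMNODES];
static bool spread_used[MAX_NUMNODES];
static DEFINE_SPINLOCK(spread_lock);

unsigned int cpumask_local_spread(unsigned int i, int node)
{
	int cpu, j, id;

	/* Wrap: we always want a cpu. */
	i %= num_online_cpus();

	if (node == -1) {
		for_each_cpu(cpu, cpu_online_mask)
			if (i-- == 0)
				return cpu;
		BUG();
	}

	/*
	 * Serialize users of the preallocated buffers. Per the
	 * function's own comment this is not a hot path, so taking
	 * a lock here should be acceptable.
	 */
	spin_lock(&spread_lock);
	memset(spread_used, 0, nr_node_ids * sizeof(bool));
	calc_node_distance(spread_node_dist, node);
	/* Walk nodes from nearest to farthest until i is consumed. */
	for (j = 0; j < nr_node_ids; j++) {
		id = find_nearest_node(spread_node_dist, spread_used);
		if (id < 0)
			break;
		for_each_cpu_and(cpu, cpumask_of_node(id), cpu_online_mask)
			if (i-- == 0) {
				spin_unlock(&spread_lock);
				return cpu;
			}
		spread_used[id] = true;
	}
	spin_unlock(&spread_lock);

	/* Fall back to any online cpu. */
	for_each_cpu(cpu, cpu_online_mask)
		if (i-- == 0)
			return cpu;
	BUG();
}

Whether plain spin_lock() is enough would depend on whether any caller
can reach this from interrupt context; if so, spin_lock_irqsave() would
be needed instead.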