From: Shaokun Zhang
Cc: Yuqi Jin, Rusty Russell, Andrew Morton, Juergen Gross, Paul Burton,
	Michal Hocko, Michael Ellerman, Mike Rapoport, Anshuman Khandual,
	Shaokun Zhang
Subject: [PATCH v6] lib: optimize cpumask_local_spread()
Date: Tue, 3 Nov 2020 21:39:27 +0800
Message-ID: <1604410767-55947-1-git-send-email-zhangshaokun@hisilicon.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-kernel@vger.kernel.org

From: Yuqi Jin

In a multi-processor NUMA system, an I/O driver has to find the CPU cores
to which its IRQs should be bound.
When the CPU cores of the local NUMA node have all been used, it is better
for performance to fall back to the node closest to the local node rather
than to pick an arbitrary online CPU. Intel DDIO, for example, affects only
the local socket, so its performance improvement comes from the difference
between local-socket and remote-socket I/O. To ensure that DDIO's benefits
reach the applications that need them most, IRQs can be pinned to particular
sockets; this arrangement is called socket affinity. This patch therefore
helps Intel DDIO, and the similar I/O-stashing functions of most other
processors, to work as intended.

On a Huawei Kunpeng 920 server there are 4 NUMA nodes (0-3) in the 2-socket
system (0-1). The topology of this server is as follows:

available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
node 0 size: 63379 MB
node 0 free: 61899 MB
node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 64509 MB
node 1 free: 63942 MB
node 2 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 2 size: 64509 MB
node 2 free: 63056 MB
node 3 cpus: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 3 size: 63997 MB
node 3 free: 63420 MB
node distances:
node   0   1   2   3
  0:  10  16  32  33
  1:  16  10  25  32
  2:  32  25  10  16
  3:  33  32  16  10

We ran a PS (parameter server) workload test. The behaviour of the service
is that the client initiates a request through the network card and the
server responds after doing some calculation. When two PS processes run on
node 2 and node 3 separately and the network card is located on node 2,
which belongs to cpu 1, the performance of node 2 (26W QPS, i.e. 260,000
queries per second) and node 3 (22W QPS) is different.

It would be better if the NIC queues were bound to the cpu 1 cores in turn,
so that XPS is also initialized properly, but cpumask_local_spread() only
considers the local node: when the number of NIC queues exceeds the number
of cores in the local node, it falls back to an arbitrary online CPU. So
when the PS process on node 3 sends its calculated responses, performance
is not as good as on node 2.

With this patch, IRQs 369-392 are bound to NUMA node 3 instead of NUMA
node 0. Before the patch:

Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
0
Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
1
...
Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
22
Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
23

After the patch:

Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
72
Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
73
...
Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
94
Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
95

With the patch, and the network card still on node 2, node 3 reaches the
same 26W QPS as node 2.

Since the NIC and other I/O devices initialize their interrupt bindings
this way, it is reasonable, once the cores of the local node are used up,
to return a CPU from the node closest to it. Optimize cpumask_local_spread()
to walk the non-local NUMA nodes in order of increasing NUMA distance.

Cc: Rusty Russell
Cc: Andrew Morton
Cc: Juergen Gross
Cc: Paul Burton
Cc: Michal Hocko
Cc: Michael Ellerman
Cc: Mike Rapoport
Cc: Anshuman Khandual
Signed-off-by: Yuqi Jin
Signed-off-by: Shaokun Zhang
---
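For illustration, here is a minimal stand-alone user-space sketch (not part
of the patch; the matrix is copied from the node-distance table above, the
rest is made up for this example) of the "nearest unused node" order that
the new helpers produce on this machine:

/* nearest_order.c - illustrative only, not part of the patch */
#include <stdio.h>
#include <stdbool.h>

#define NR_NODES 4

/* Node-distance table copied from the commit message above */
static const int dist[NR_NODES][NR_NODES] = {
	{ 10, 16, 32, 33 },
	{ 16, 10, 25, 32 },
	{ 32, 25, 10, 16 },
	{ 33, 32, 16, 10 },
};

/* Same rule as find_nearest_node(): closest node that is not used yet */
static int nearest_unused(int from, const bool *used)
{
	int i, best = -1;

	for (i = 0; i < NR_NODES; i++) {
		if (used[i])
			continue;
		if (best < 0 || dist[from][i] < dist[from][best])
			best = i;
	}
	return best;
}

int main(void)
{
	int from, j;

	for (from = 0; from < NR_NODES; from++) {
		bool used[NR_NODES] = { false };

		printf("device on node %d -> visiting order:", from);
		for (j = 0; j < NR_NODES; j++) {
			int id = nearest_unused(from, used);

			printf(" %d", id);
			used[id] = true;
		}
		printf("\n");
	}
	return 0;
}

For a device on node 2 this prints the order 2 3 1 0, and for node 3 it
prints 3 2 1 0, which matches the affinity change shown above: the
overflow IRQs of the node 2 NIC spill over to node 3 rather than to node 0.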
Hi Andrew,

I rebased this patch later following this thread [1].

ChangeLog from v5:
1. Rebase to 5.10-rc2

ChangeLog from v4:
1. Rebase to 5.6-rc3

ChangeLog from v3:
1. Make spread_lock local to cpumask_local_spread();
2. Add more description of the affinity changes in the log;

ChangeLog from v2:
1. Change the variables to static and use a spinlock to protect them;
2. Give more explanation on the test and performance;

[1] https://lkml.org/lkml/2020/6/30/1300

 lib/cpumask.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 55 insertions(+), 11 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index 85da6ab4fbb5..baecaf271770 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -193,6 +193,38 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
 }
 #endif
 
+static void calc_node_distance(int *node_dist, int node)
+{
+	int i;
+
+	for (i = 0; i < nr_node_ids; i++)
+		node_dist[i] = node_distance(node, i);
+}
+
+static int find_nearest_node(int *node_dist, bool *used)
+{
+	int i, min_dist = node_dist[0], node_id = -1;
+
+	/* Choose the first unused node to compare */
+	for (i = 0; i < nr_node_ids; i++) {
+		if (used[i] == 0) {
+			min_dist = node_dist[i];
+			node_id = i;
+			break;
+		}
+	}
+
+	/* Compare and return the nearest node */
+	for (i = 0; i < nr_node_ids; i++) {
+		if (node_dist[i] < min_dist && used[i] == 0) {
+			min_dist = node_dist[i];
+			node_id = i;
+		}
+	}
+
+	return node_id;
+}
+
 /**
  * cpumask_local_spread - select the i'th cpu with local numa cpu's first
  * @i: index number
@@ -206,7 +238,11 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu, hk_flags;
+	static DEFINE_SPINLOCK(spread_lock);
+	static int node_dist[MAX_NUMNODES];
+	static bool used[MAX_NUMNODES];
+	unsigned long flags;
+	int cpu, hk_flags, j, id;
 	const struct cpumask *mask;
 
 	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
@@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
 				return cpu;
 		}
 	} else {
-		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
-			if (i-- == 0)
-				return cpu;
-		}
+		spin_lock_irqsave(&spread_lock, flags);
+		memset(used, 0, nr_node_ids * sizeof(bool));
+		calc_node_distance(node_dist, node);
+		/* Local node first then the nearest node is used */
+		for (j = 0; j < nr_node_ids; j++) {
+			id = find_nearest_node(node_dist, used);
+			if (id < 0)
+				break;
 
-		for_each_cpu(cpu, mask) {
-			/* Skip NUMA nodes, done above. */
-			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
-				continue;
+			for_each_cpu_and(cpu, cpumask_of_node(id), mask)
+				if (i-- == 0) {
+					spin_unlock_irqrestore(&spread_lock,
+							       flags);
+					return cpu;
+				}
+			used[id] = 1;
+		}
+		spin_unlock_irqrestore(&spread_lock, flags);
 
+		for_each_cpu(cpu, mask)
			if (i-- == 0)
				return cpu;
-		}
 	}
 	BUG();
 }
-- 
2.7.4
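As a usage note: below is a rough, hypothetical sketch of how a multi-queue
NIC driver typically consumes this helper when setting up per-queue IRQ
affinity. The my_nic names are made up for illustration; only
cpumask_local_spread(), dev_to_node(), cpumask_of() and
irq_set_affinity_hint() are existing kernel APIs. With this patch, the
CPUs returned beyond the local node come from the next-nearest node
instead of an arbitrary online one.

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/interrupt.h>

struct my_nic {				/* hypothetical driver state */
	struct device *dev;
	unsigned int num_queues;
	unsigned int *irq;		/* Linux IRQ number of each queue */
};

static void my_nic_spread_irqs(struct my_nic *nic)
{
	unsigned int q, cpu;

	for (q = 0; q < nic->num_queues; q++) {
		/* q'th CPU counted from the device's home NUMA node */
		cpu = cpumask_local_spread(q, dev_to_node(nic->dev));
		irq_set_affinity_hint(nic->irq[q], cpumask_of(cpu));
	}
}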