2020-11-03 13:41:04

by Zhangshaokun

Subject: [PATCH v6] lib: optimize cpumask_local_spread()

From: Yuqi Jin <[email protected]>

On multi-processor and NUMA systems, I/O drivers look for the CPU cores
to which their IRQs shall be bound. When the CPU cores of the local NUMA
node have all been used, it is better for performance to pick a core on
the node closest to the local node, instead of choosing any online CPU
immediately.

Currently, Intel DDIO affects only the local socket, so its performance
improvement comes from the relative difference between local-socket I/O
and remote-socket I/O. To ensure that Intel DDIO's benefits are
available to the applications where they are most useful, the IRQs can
be pinned to particular sockets; this arrangement is called socket
affinity. This patch therefore helps Intel DDIO, and the similar I/O
stashing features of most other processors, do their work.

On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0-3) in the
2-socket system (0-1). The topology of this server is as follows:
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
node 0 size: 63379 MB
node 0 free: 61899 MB
node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 64509 MB
node 1 free: 63942 MB
node 2 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 2 size: 64509 MB
node 2 free: 63056 MB
node 3 cpus: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 3 size: 63997 MB
node 3 free: 63420 MB
node distances:
node   0   1   2   3
  0:  10  16  32  33
  1:  16  10  25  32
  2:  32  25  10  16
  3:  33  32  16  10

We tested this with a PS (parameter server) workload: the client
initiates requests through the network card, and the server responds to
each request after calculation. When two PS processes run on node2 and
node3 respectively, and the network card is located on node2 (which
belongs to socket 1), the performance of node2 (260,000 QPS) and node3
(220,000 QPS) differs.

It is better to bind the NIC queues to the socket-1 cores in turn, so
that XPS is also properly initialized; but cpumask_local_spread() only
considers the local node, and when the number of NIC queues exceeds the
number of cores in the local node, it simply falls back to any online
core. So when the PS process on node3 sends a calculated reply, its
performance is not as good as node2's.

With this patch, IRQs 369-392 are bound to NUMA node3 instead of NUMA
node0. Before the patch:

Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
0
Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
1
...
Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
22
Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
23
After the patch:
Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
72
Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
73
...
Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
94
Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
95

With the patch, node3 achieves the same 260,000 QPS as node2, while the
network card is still on node2.

When a NIC or other I/O device initializes its interrupt bindings and
the cores of the local node are used up, it is reasonable to fall back
to the closest node rather than an arbitrary one. Optimize this by
finding the nearest node through the NUMA distance for the non-local
NUMA nodes.

Cc: Rusty Russell <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Signed-off-by: Yuqi Jin <[email protected]>
Signed-off-by: Shaokun Zhang <[email protected]>
---
Hi Andrew,

I rebased this patch following the discussion in this thread [1]

ChangeLog from v5:
1. Rebase to 5.10-rc2

ChangeLog from v4:
1. Rebase to 5.6-rc3

ChangeLog from v3:
1. Make spread_lock local to cpumask_local_spread();
2. Add more descriptions on the affinities change in log;

ChangeLog from v2:
1. Make the variables static and protect them with a spinlock;
2. Give more explanation on the test and performance;

[1] https://lkml.org/lkml/2020/6/30/1300

lib/cpumask.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 55 insertions(+), 11 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index 85da6ab4fbb5..baecaf271770 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -193,6 +193,38 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
 }
 #endif
 
+static void calc_node_distance(int *node_dist, int node)
+{
+	int i;
+
+	for (i = 0; i < nr_node_ids; i++)
+		node_dist[i] = node_distance(node, i);
+}
+
+static int find_nearest_node(int *node_dist, bool *used)
+{
+	int i, min_dist = node_dist[0], node_id = -1;
+
+	/* Choose the first unused node to compare */
+	for (i = 0; i < nr_node_ids; i++) {
+		if (used[i] == 0) {
+			min_dist = node_dist[i];
+			node_id = i;
+			break;
+		}
+	}
+
+	/* Compare and return the nearest node */
+	for (i = 0; i < nr_node_ids; i++) {
+		if (node_dist[i] < min_dist && used[i] == 0) {
+			min_dist = node_dist[i];
+			node_id = i;
+		}
+	}
+
+	return node_id;
+}
+
 /**
  * cpumask_local_spread - select the i'th cpu with local numa cpu's first
  * @i: index number
@@ -206,7 +238,11 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu, hk_flags;
+	static DEFINE_SPINLOCK(spread_lock);
+	static int node_dist[MAX_NUMNODES];
+	static bool used[MAX_NUMNODES];
+	unsigned long flags;
+	int cpu, hk_flags, j, id;
 	const struct cpumask *mask;
 
 	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
@@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
 			return cpu;
 		}
 	} else {
-		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
-			if (i-- == 0)
-				return cpu;
-		}
+		spin_lock_irqsave(&spread_lock, flags);
+		memset(used, 0, nr_node_ids * sizeof(bool));
+		calc_node_distance(node_dist, node);
+		/* Local node first then the nearest node is used */
+		for (j = 0; j < nr_node_ids; j++) {
+			id = find_nearest_node(node_dist, used);
+			if (id < 0)
+				break;
 
-		for_each_cpu(cpu, mask) {
-			/* Skip NUMA nodes, done above. */
-			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
-				continue;
+			for_each_cpu_and(cpu, cpumask_of_node(id), mask)
+				if (i-- == 0) {
+					spin_unlock_irqrestore(&spread_lock,
+							       flags);
+					return cpu;
+				}
+			used[id] = 1;
+		}
+		spin_unlock_irqrestore(&spread_lock, flags);
 
+		for_each_cpu(cpu, mask)
 			if (i-- == 0)
 				return cpu;
-		}
 	}
 	BUG();
 }
--
2.7.4


2020-11-04 16:15:32

by Dave Hansen

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

On 11/3/20 5:39 AM, Shaokun Zhang wrote:
> Currently, Intel DDIO affects only the local socket, so its performance
> improvement comes from the relative difference between local-socket I/O
> and remote-socket I/O. To ensure that Intel DDIO's benefits are
> available to the applications where they are most useful, the IRQs can
> be pinned to particular sockets; this arrangement is called socket
> affinity. This patch therefore helps Intel DDIO, and the similar I/O
> stashing features of most other processors, do their work.

A great changelog would probably include a bit of context about DDIO.
Even being from Intel, I'd heard about this, but I didn't immediately
know what the acronym was.

The thing that matters here is that DDIO allows devices to use processor
caches instead of having them always do uncached accesses to main
memory. That's a pretty important detail left out of the changelog.

> On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0-3) in the
> 2-socket system (0-1). The topology of this server is as follows:

This is with a feature enabled that Intel calls sub-NUMA-clustering
(SNC), right? Explaining *that* feature would also be great context for
why this gets triggered on your system and not normally on others and
why nobody noticed this until now.

> With this patch, IRQs 369-392 are bound to NUMA node3 instead of NUMA
> node0. Before the patch:
>
> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
> 0
> Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
> 1
> ...
> Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
> 22
> Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
> 23
> After the patch:

I _think_ what you are trying to convey here is that IRQs 369 and 370
are from devices plugged in to one socket, but their IRQs are bound to
CPUs 0 and 1 which are in the other socket. Once device traffic leaves
the socket, it can no longer use DDIO and performance suffers.

The same situation is true for IRQs 391/392 and CPUs 22/23.

You don't come out and say it, but I assume that the root of this issue
is that once we fill up a NUMA node worth of CPUs with an affinitized
IRQ per CPU, we go looking for CPUs in other NUMA nodes. In this case,
we have the processor in this weird mode that chops sockets into two
NUMA nodes, which makes the device's NUMA node fill up faster.

The current behavior just "wraps around" to find a new node. But, this
wrap around behavior is nasty in this case because it might cross a socket.
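
To make this concrete with the topology from the changelog (24 CPUs per
node, device local to node 2): once i reaches 24, the old code has
exhausted node 2 and falls back to the first online CPUs, i.e. CPUs 0-23
on node 0 in the other socket, even though node 3 (distance 16, same
socket) still has 24 idle CPUs to offer.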

> +static void calc_node_distance(int *node_dist, int node)
> +{
> + int i;
> +
> + for (i = 0; i < nr_node_ids; i++)
> + node_dist[i] = node_distance(node, i);
> +}

This appears to be the only place node_dist[] is written. That means it
always contains a one-dimensional slice of the two-dimensional data
represented by node_distance().

Why is a copy of this data needed?

> +static int find_nearest_node(int *node_dist, bool *used)
> +{
> + int i, min_dist = node_dist[0], node_id = -1;
> +
> + /* Choose the first unused node to compare */
> + for (i = 0; i < nr_node_ids; i++) {
> + if (used[i] == 0) {
> + min_dist = node_dist[i];
> + node_id = i;
> + break;
> + }
> + }
> +
> + /* Compare and return the nearest node */
> + for (i = 0; i < nr_node_ids; i++) {
> + if (node_dist[i] < min_dist && used[i] == 0) {
> + min_dist = node_dist[i];
> + node_id = i;
> + }
> + }
> +
> + return node_id;
> +}
> +
> /**
> * cpumask_local_spread - select the i'th cpu with local numa cpu's first
> * @i: index number
> @@ -206,7 +238,11 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
> */

The diff missed some important context:

> * This function selects an online CPU according to a numa aware policy;
> * local cpus are returned first, followed by non-local ones, then it
> * wraps around.

This patch changes that behavior but doesn't update the comment.


> unsigned int cpumask_local_spread(unsigned int i, int node)
> {
> - int cpu, hk_flags;
> + static DEFINE_SPINLOCK(spread_lock);
> + static int node_dist[MAX_NUMNODES];
> + static bool used[MAX_NUMNODES];

Not to be *too* picky, but there is a reason we declare nodemask_t as a
bitmap and not an array of bools. Isn't this just wasteful?
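
Untested sketch of what I mean; the existing nodemask helpers would give
you one bit per node instead of one bool:

	static nodemask_t nodes_used;

	nodes_clear(nodes_used);	/* instead of memset(used, 0, ...) */
	...
	if (!node_isset(i, nodes_used))	/* instead of used[i] == 0 */
	...
	node_set(id, nodes_used);	/* instead of used[id] = 1 */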

> + unsigned long flags;
> + int cpu, hk_flags, j, id;
> const struct cpumask *mask;
>
> hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
> @@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
> return cpu;
> }
> } else {
> - /* NUMA first. */
> - for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
> - if (i-- == 0)
> - return cpu;
> - }
> + spin_lock_irqsave(&spread_lock, flags);
> + memset(used, 0, nr_node_ids * sizeof(bool));
> + calc_node_distance(node_dist, node);
> + /* Local node first then the nearest node is used */

Is this comment really correct? This makes it sound like there is only
fallback to a single node. Doesn't the _code_ fall back basically
without limit?

> + for (j = 0; j < nr_node_ids; j++) {
> + id = find_nearest_node(node_dist, used);
> + if (id < 0)
> + break;
>
> - for_each_cpu(cpu, mask) {
> - /* Skip NUMA nodes, done above. */
> - if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
> - continue;
> + for_each_cpu_and(cpu, cpumask_of_node(id), mask)
> + if (i-- == 0) {
> + spin_unlock_irqrestore(&spread_lock,
> + flags);
> + return cpu;
> + }
> + used[id] = 1;
> + }
> + spin_unlock_irqrestore(&spread_lock, flags);

The existing code was pretty sparsely commented. This looks to me to
make it more complicated and *less* commented. Not the best combo.

> + for_each_cpu(cpu, mask)
> if (i-- == 0)
> return cpu;
> - }
> }
> BUG();
> }
>

2020-11-13 02:08:47

by Zhangshaokun

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

Hi Dave,

On 2020/11/5 0:10, Dave Hansen wrote:
> On 11/3/20 5:39 AM, Shaokun Zhang wrote:
>> Currently, Intel DDIO affects only the local socket, so its performance
>> improvement comes from the relative difference between local-socket I/O
>> and remote-socket I/O. To ensure that Intel DDIO's benefits are
>> available to the applications where they are most useful, the IRQs can
>> be pinned to particular sockets; this arrangement is called socket
>> affinity. This patch therefore helps Intel DDIO, and the similar I/O
>> stashing features of most other processors, do their work.
>
> A great changelog would probably include a bit of context about DDIO.
> Even being from Intel, I'd heard about this, but I didn't immediately
> know what the acronym was.
>
> The thing that matters here is that DDIO allows devices to use processor
> caches instead of having them always do uncached accesses to main
> memory. That's a pretty important detail left out of the changelog.
>
>> On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0-3) in the
>> 2-socket system (0-1). The topology of this server is as follows:
>
> This is with a feature enabled that Intel calls sub-NUMA-clustering
> (SNC), right? Explaining *that* feature would also be great context for

Correct,

> why this gets triggered on your system and not normally on others and
> why nobody noticed this until now.

This is on an Intel 6248 platform:
[root@localhost ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 5 6 10 11 12 15 16 40 41 42 45 46 50 51 52 55 56
node 0 size: 46893 MB
node 0 free: 45982 MB
node 1 cpus: 3 4 7 8 9 13 14 17 18 19 43 44 47 48 49 53 54 57 58 59
node 1 size: 48379 MB
node 1 free: 48235 MB
node 2 cpus: 20 21 22 25 26 30 31 32 35 36 60 61 62 65 66 70 71 72 75 76
node 2 size: 48353 MB
node 2 free: 48022 MB
node 3 cpus: 23 24 27 28 29 33 34 37 38 39 63 64 67 68 69 73 74 77 78 79
node 3 size: 48378 MB
node 3 free: 48196 MB
node distances:
node   0   1   2   3
  0:  10  11  21  21
  1:  11  10  21  21
  2:  21  21  10  11
  3:  21  21  11  10
[root@localhost ~]#
When SNC is enabled on this Intel platform, the mlx5 network card is used and
is located in node2, while the number of NIC queues is greater than the number
of cores in node2. When all cores in node2 have been bound, cores from node0
are chosen, resulting in cross-socket traffic and the loss of DDIO. With this
patch applied, node3 is chosen instead, avoiding the cross-socket operation.
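
(With the distance table above, once node2 (distance 10) is full, the
patch picks node3 (distance 11) before node0/node1 (distance 21), so the
IRQs stay on the same socket.)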

>
>> With this patch, IRQs 369-392 are bound to NUMA node3 instead of NUMA
>> node0. Before the patch:
>>
>> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
>> 0
>> Euler:/sys/bus/pci # cat /proc/irq/370/smp_affinity_list
>> 1
>> ...
>> Euler:/sys/bus/pci # cat /proc/irq/391/smp_affinity_list
>> 22
>> Euler:/sys/bus/pci # cat /proc/irq/392/smp_affinity_list
>> 23
>> After the patch:
>
> I _think_ what you are trying to convey here is that IRQs 369 and 370
> are from devices plugged in to one socket, but their IRQs are bound to
> CPUs 0 and 1 which are in the other socket. Once device traffic leaves
> the socket, it can no longer use DDIO and performance suffers.
>
> The same situation is true for IRQs 391/392 and CPUs 22/23.
>
> You don't come out and say it, but I assume that the root of this issue
> is that once we fill up a NUMA node worth of CPUs with an affinitized
> IRQ per CPU, we go looking for CPUs in other NUMA nodes. In this case,
> we have the processor in this weird mode that chops sockets into two
> NUMA nodes, which makes the device's NUMA node fill up faster.
>
> The current behavior just "wraps around" to find a new node. But, this
> wrap around behavior is nasty in this case because it might cross a socket.
>

We want to improve the situation where the card is inserted in the second
socket but is bound to CPU cores of the first socket. We calculate the
distance between the different NUMA nodes and choose the nearest node; it
is not a simple wrap-around.

>> +static void calc_node_distance(int *node_dist, int node)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < nr_node_ids; i++)
>> + node_dist[i] = node_distance(node, i);
>> +}
>
> This appears to be the only place node_dist[] is written. That means it
> always contains a one-dimensional slice of the two-dimensional data
> represented by node_distance().
>
> Why is a copy of this data needed?

It is used to store the distances from @node for later use; apologies
that I can't follow your question correctly.

>
>> +static int find_nearest_node(int *node_dist, bool *used)
>> +{
>> + int i, min_dist = node_dist[0], node_id = -1;
>> +
>> + /* Choose the first unused node to compare */
>> + for (i = 0; i < nr_node_ids; i++) {
>> + if (used[i] == 0) {
>> + min_dist = node_dist[i];
>> + node_id = i;
>> + break;
>> + }
>> + }
>> +
>> + /* Compare and return the nearest node */
>> + for (i = 0; i < nr_node_ids; i++) {
>> + if (node_dist[i] < min_dist && used[i] == 0) {
>> + min_dist = node_dist[i];
>> + node_id = i;
>> + }
>> + }
>> +
>> + return node_id;
>> +}
>> +
>> /**
>> * cpumask_local_spread - select the i'th cpu with local numa cpu's first
>> * @i: index number
>> @@ -206,7 +238,11 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>> */
>
> The diff missed some important context:
>
>> * This function selects an online CPU according to a numa aware policy;
>> * local cpus are returned first, followed by non-local ones, then it
>> * wraps around.
>
> This patch changes that behavior but doesn't update the comment.
>

Good catch, I will update this.

>
>> unsigned int cpumask_local_spread(unsigned int i, int node)
>> {
>> - int cpu, hk_flags;
>> + static DEFINE_SPINLOCK(spread_lock);
>> + static int node_dist[MAX_NUMNODES];
>> + static bool used[MAX_NUMNODES];
>
> Not to be *too* picky, but there is a reason we declare nodemask_t as a
> bitmap and not an array of bools. Isn't this just wasteful?
>
>> + unsigned long flags;
>> + int cpu, hk_flags, j, id;
>> const struct cpumask *mask;
>>
>> hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>> @@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
>> return cpu;
>> }
>> } else {
>> - /* NUMA first. */
>> - for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>> - if (i-- == 0)
>> - return cpu;
>> - }
>> + spin_lock_irqsave(&spread_lock, flags);
>> + memset(used, 0, nr_node_ids * sizeof(bool));
>> + calc_node_distance(node_dist, node);
>> + /* Local node first then the nearest node is used */
>
> Is this comment really correct? This makes it sound like there is only

I think it is correct; that's how we want to choose the nearest node.

> fallback to a single node. Doesn't the _code_ fall back basically
> without limit?

If I follow your question correctly: without this patch, if the local
node is used up, one random node will be chosen, right? Now we first
choose the nearest node by distance; once all nodes have been tried, it
falls back to the original scheme.
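
For the Kunpeng topology in the changelog, the order for node2 would be:
node2 (10) -> node3 (16) -> node1 (25) -> node0 (32), and only after all
of those are exhausted does the plain online-CPU fallback loop run.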

>
>> + for (j = 0; j < nr_node_ids; j++) {
>> + id = find_nearest_node(node_dist, used);
>> + if (id < 0)
>> + break;
>>
>> - for_each_cpu(cpu, mask) {
>> - /* Skip NUMA nodes, done above. */
>> - if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>> - continue;
>> + for_each_cpu_and(cpu, cpumask_of_node(id), mask)
>> + if (i-- == 0) {
>> + spin_unlock_irqrestore(&spread_lock,
>> + flags);
>> + return cpu;
>> + }
>> + used[id] = 1;
>> + }
>> + spin_unlock_irqrestore(&spread_lock, flags);
>
> The existing code was pretty sparsely commented. This looks to me to
> make it more complicated and *less* commented. Not the best combo.

Apologies for the bad comments; hopefully the explanation above
describes it clearly.

Thanks,
Shaokun

>
>> + for_each_cpu(cpu, mask)
>> if (i-- == 0)
>> return cpu;
>> - }
>> }
>> BUG();
>> }
>>
>

2020-11-13 16:06:51

by Dave Hansen

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

On 11/12/20 6:06 PM, Shaokun Zhang wrote:
>>> On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0-3) in the
>>> 2-socket system (0-1). The topology of this server is as follows:
>>
>> This is with a feature enabled that Intel calls sub-NUMA-clustering
>> (SNC), right? Explaining *that* feature would also be great context for
>
> Correct,
>
>> why this gets triggered on your system and not normally on others and
>> why nobody noticed this until now.
>
> This is on an Intel 6248 platform:

I have no idea what a "6248 platform" is.

>>> +static void calc_node_distance(int *node_dist, int node)
>>> +{
>>> + int i;
>>> +
>>> + for (i = 0; i < nr_node_ids; i++)
>>> + node_dist[i] = node_distance(node, i);
>>> +}
>>
>> This appears to be the only place node_dist[] is written. That means it
>> always contains a one-dimensional slice of the two-dimensional data
>> represented by node_distance().
>>
>> Why is a copy of this data needed?
>
> It is used to store the distances from @node for later use; apologies
> that I can't follow your question correctly.

Right, the data that you store is useful. *But*, it's also a verbatim
copy of the data from node_distance(). Why not just use node_distance()
directly in your code rather than creating a partial copy of it in the
local node_dist[] array?
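
Untested, but just to illustrate the shape of what I mean:

	static int find_nearest_node(int node, const bool *used)
	{
		int i, min_dist = INT_MAX, best = NUMA_NO_NODE;

		/* Pick the unused node with the smallest distance to @node */
		for (i = 0; i < nr_node_ids; i++) {
			if (used[i])
				continue;
			if (node_distance(node, i) < min_dist) {
				min_dist = node_distance(node, i);
				best = i;
			}
		}

		return best;
	}

That would also collapse the two loops you have now into one.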


>>> unsigned int cpumask_local_spread(unsigned int i, int node)
>>> {
>>> - int cpu, hk_flags;
>>> + static DEFINE_SPINLOCK(spread_lock);
>>> + static int node_dist[MAX_NUMNODES];
>>> + static bool used[MAX_NUMNODES];
>>
>> Not to be *too* picky, but there is a reason we declare nodemask_t as a
>> bitmap and not an array of bools. Isn't this just wasteful?
>>
>>> + unsigned long flags;
>>> + int cpu, hk_flags, j, id;
>>> const struct cpumask *mask;
>>>
>>> hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>>> @@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
>>> return cpu;
>>> }
>>> } else {
>>> - /* NUMA first. */
>>> - for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>>> - if (i-- == 0)
>>> - return cpu;
>>> - }
>>> + spin_lock_irqsave(&spread_lock, flags);
>>> + memset(used, 0, nr_node_ids * sizeof(bool));
>>> + calc_node_distance(node_dist, node);
>>> + /* Local node first then the nearest node is used */
>>
>> Is this comment really correct? This makes it sound like there is only
>
> I think it is correct; that's how we want to choose the nearest node.
>
>> fallback to a single node. Doesn't the _code_ fall back basically
>> without limit?
>
> If I follow your question correctly: without this patch, if the local
> node is used up, one random node will be chosen, right? Now we first
> choose the nearest node by distance; once all nodes have been tried, it
> falls back to the original scheme.

The comment makes it sound like the code does:
1. Do the local node
2. Do the next nearest node
3. Stop

In reality, I *think* it's more of a loop where it searches
ever-increasing distances away from the local node.

I just think the comment needs to be made more precise.
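
Maybe something along these lines, just as a suggestion:

	/*
	 * Try the local node first, then fall back to the remaining
	 * nodes in order of increasing NUMA distance from @node.
	 */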

>>> + for (j = 0; j < nr_node_ids; j++) {
>>> + id = find_nearest_node(node_dist, used);
>>> + if (id < 0)
>>> + break;
>>>
>>> - for_each_cpu(cpu, mask) {
>>> - /* Skip NUMA nodes, done above. */
>>> - if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>>> - continue;
>>> + for_each_cpu_and(cpu, cpumask_of_node(id), mask)
>>> + if (i-- == 0) {
>>> + spin_unlock_irqrestore(&spread_lock,
>>> + flags);
>>> + return cpu;
>>> + }
>>> + used[id] = 1;
>>> + }
>>> + spin_unlock_irqrestore(&spread_lock, flags);
>>
>> The existing code was pretty sparsely commented. This looks to me to
>> make it more complicated and *less* commented. Not the best combo.
>
> Apologies for the bad comments; hopefully the explanation above
> describes it clearly.

Do you want to take another pass at submitting this patch?

2020-11-16 08:04:12

by Zhangshaokun

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

Hi Dave,

On 2020/11/14 0:02, Dave Hansen wrote:
> On 11/12/20 6:06 PM, Shaokun Zhang wrote:
>>>> On the Huawei Kunpeng 920 server, there are 4 NUMA nodes (0-3) in the
>>>> 2-socket system (0-1). The topology of this server is as follows:
>>>
>>> This is with a feature enabled that Intel calls sub-NUMA-clustering
>>> (SNC), right? Explaining *that* feature would also be great context for
>>
>> Correct,
>>
>>> why this gets triggered on your system and not normally on others and
>>> why nobody noticed this until now.
>>
>> This is on an Intel 6248 platform:
>
> I have no idea what a "6248 platform" is.
>

My apologies; it's Cascade Lake [1].

>>>> +static void calc_node_distance(int *node_dist, int node)
>>>> +{
>>>> + int i;
>>>> +
>>>> + for (i = 0; i < nr_node_ids; i++)
>>>> + node_dist[i] = node_distance(node, i);
>>>> +}
>>>
>>> This appears to be the only place node_dist[] is written. That means it
>>> always contains a one-dimensional slice of the two-dimensional data
>>> represented by node_distance().
>>>
>>> Why is a copy of this data needed?
>>
>> It is used to store the distances from @node for later use; apologies
>> that I can't follow your question correctly.
>
> Right, the data that you store is useful. *But*, it's also a verbatim
> copy of the data from node_distance(). Why not just use node_distance()
> directly in your code rather than creating a partial copy of it in the

OK, I will remove this redundant function in the next version.

> local node_dist[] array?
>
>
>>>> unsigned int cpumask_local_spread(unsigned int i, int node)
>>>> {
>>>> - int cpu, hk_flags;
>>>> + static DEFINE_SPINLOCK(spread_lock);
>>>> + static int node_dist[MAX_NUMNODES];
>>>> + static bool used[MAX_NUMNODES];
>>>
>>> Not to be *too* picky, but there is a reason we declare nodemask_t as a
>>> bitmap and not an array of bools. Isn't this just wasteful?
>>>
>>>> + unsigned long flags;
>>>> + int cpu, hk_flags, j, id;
>>>> const struct cpumask *mask;
>>>>
>>>> hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>>>> @@ -220,20 +256,28 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
>>>> return cpu;
>>>> }
>>>> } else {
>>>> - /* NUMA first. */
>>>> - for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>>>> - if (i-- == 0)
>>>> - return cpu;
>>>> - }
>>>> + spin_lock_irqsave(&spread_lock, flags);
>>>> + memset(used, 0, nr_node_ids * sizeof(bool));
>>>> + calc_node_distance(node_dist, node);
>>>> + /* Local node first then the nearest node is used */
>>>
>>> Is this comment really correct? This makes it sound like there is only
>>
>> I think it is correct; that's how we want to choose the nearest node.
>>
>>> fallback to a single node. Doesn't the _code_ fall back basically
>>> without limit?
>>
>> If I follow your question correctly: without this patch, if the local
>> node is used up, one random node will be chosen, right? Now we first
>> choose the nearest node by distance; once all nodes have been tried, it
>> falls back to the original scheme.
>
> The comment makes it sound like the code does:
> 1. Do the local node
> 2. Do the next nearest node
> 3. Stop
>

That's clearer; I will update the comment in the new patch.

> In reality, I *think* it's more of a loop where it searches
> ever-increasing distances away from the local node.
>
> I just think the comment needs to be made more precise.

Got it.

>
>>>> + for (j = 0; j < nr_node_ids; j++) {
>>>> + id = find_nearest_node(node_dist, used);
>>>> + if (id < 0)
>>>> + break;
>>>>
>>>> - for_each_cpu(cpu, mask) {
>>>> - /* Skip NUMA nodes, done above. */
>>>> - if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>>>> - continue;
>>>> + for_each_cpu_and(cpu, cpumask_of_node(id), mask)
>>>> + if (i-- == 0) {
>>>> + spin_unlock_irqrestore(&spread_lock,
>>>> + flags);
>>>> + return cpu;
>>>> + }
>>>> + used[id] = 1;
>>>> + }
>>>> + spin_unlock_irqrestore(&spread_lock, flags);
>>>
>>> The existing code was pretty sparsely commented. This looks to me to
>>> make it more complicated and *less* commented. Not the best combo.
>>
>> Apologies for the bad comments; hopefully the explanation above
>> describes it clearly.
>
> Do you want to take another pass at submitting this patch?

'Another pass'? Sorry for my poor understanding; I don't quite follow.

Thanks,
Shaokun

[1] https://en.wikichip.org/wiki/intel/xeon_gold/6248


2020-11-16 14:51:32

by Dave Hansen

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

On 11/15/20 11:59 PM, Shaokun Zhang wrote:
>> Do you want to take another pass at submitting this patch?
> 'Another pass'? Sorry for my poor understanding; I don't quite follow.

Could you please incorporate the feedback that I've given about this
version of the patch and write a new version?

2020-11-17 01:14:16

by Zhangshaokun

Subject: Re: [PATCH v6] lib: optimize cpumask_local_spread()

Hi Dave,

On 2020/11/16 22:48, Dave Hansen wrote:
> On 11/15/20 11:59 PM, Shaokun Zhang wrote:
>>> Do you want to take another pass at submitting this patch?
>> 'Another pass'? Sorry for my poor understanding; I don't quite follow.
>
> Could you please incorporate the feedback that I've given about this
> version of the patch and write a new version?

Yeah, I will do it soon, addressing your comments.

Cheers,
Shaokun.
