2022-05-23 06:01:51

by Mel Gorman

Subject: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

Changes since V1
o Consolidate [allow|adjust]_numa_imbalance (peterz)
o #ifdefs around NUMA-specific pieces to build arc-allyesconfig (lkp)

A problem was reported privately related to inconsistent performance of
NAS when parallelised with MPICH. The root of the problem is that the
initial placement is unpredictable and there can be a larger imbalance
than expected between NUMA nodes. As there is spare capacity and the faults
are local, the imbalance persists for a long time and performance suffers.

This is not 100% an "allowed imbalance" problem as setting the allowed
imbalance to 0 does not fix the issue but the allowed imbalance contributes
to the performance problem. The unpredictable behaviour was most recently
introduced by commit c6f886546cb8 ("sched/fair: Trigger the update of
blocked load on newly idle cpu").

mpirun forks hydra_pmi_proxy helpers with MPICH that go to sleep before
execing the target workload. As the new tasks are sleeping, the potential
imbalance is not observed as idle_cpus does not reflect the tasks that
will be running in the near future. How bad the problem is depends on the
timing of when fork happens and whether the new tasks are still running.
Consequently, a large initial imbalance may not be detected until the
workload is fully running. Once running, NUMA Balancing picks the preferred
node based on locality and runtime load balancing often ignores the tasks
as can_migrate_task() fails for either locality or task_hot reasons and
instead picks unrelated tasks.
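
As a rough illustration of why the fork-time decision can be fooled (a
minimal sketch with made-up helper names, not the actual fair.c code): a
placement check that keys off currently idle CPUs cannot account for the
sleeping proxy helpers that are about to exec the real workload.

#include <stdbool.h>

/*
 * Minimal sketch, not the kernel implementation.  The struct and helper
 * names are illustrative only.
 */
struct node_stats {
	int idle_cpus;		/* CPUs idle right now */
	int nr_running;		/* tasks currently on runqueues */
	int imb_numa_nr;	/* allowed NUMA imbalance for the node */
};

static bool fork_can_place_here(const struct node_stats *ns)
{
	/*
	 * Sleeping hydra_pmi_proxy children sit on no runqueue, so they
	 * neither reduce idle_cpus nor raise nr_running.  The node still
	 * looks idle and a large imbalance can build up before it is
	 * ever observed.
	 */
	return ns->idle_cpus > 0 && ns->nr_running <= ns->imb_numa_nr;
}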

This is the min, max, range and mean of run time for mg.D parallelised by
MPICH with ~25% of the CPUs on a 2-socket machine (80 CPUs, 16 active for
mg.D due to limitations of mg.D).

v5.3 Min 95.84 Max 96.55 Range 0.71 Mean 96.16
v5.7 Min 95.44 Max 96.51 Range 1.07 Mean 96.14
v5.8 Min 96.02 Max 197.08 Range 101.06 Mean 154.70
v5.12 Min 104.45 Max 111.03 Range 6.58 Mean 105.94
v5.13 Min 104.38 Max 170.37 Range 65.99 Mean 117.35
v5.13-revert-c6f886546cb8 Min 104.40 Max 110.70 Range 6.30 Mean 105.68
v5.18rc4-baseline Min 110.78 Max 169.84 Range 59.06 Mean 131.22
v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range 3.31 Mean 114.71
v5.18rc4-this_series Min 95.56 Max 163.97 Range 68.41 Mean 105.39
v5.18rc4-this_series-revert-c6f886546cb8 Min 95.56 Max 104.86 Range 9.30 Mean 97.00

This shows that we've had unpredictable performance for a long time for
this load. Instability was introduced somewhere between v5.7 and v5.8,
fixed in v5.12 and broken again since v5.13. The reverts against v5.13
and v5.18-rc4 show that c6f886546cb8 is the primary source of instability,
although the best case is still worse than v5.7.

This series addresses the allowed imbalance problems to get the peak
performance back to v5.7, although only some of the time due to the
instability problem. The series plus the revert is stable, with
slightly better peak performance and similar average performance. I'm
not convinced commit c6f886546cb8 is wrong but haven't isolated exactly
why it's unstable. I'm just noting it has an issue for now.

Patch 1 initialises numa_migrate_retry. While this resolves itself
eventually, it is unpredictable early in the lifetime of
a task.

Patch 2 avoids swapping NUMA tasks that are in the same NUMA group, or
that have no NUMA group, if there is spare capacity. Swapping just
punishes one task to help another.

Patch 3 fixes an issue where a larger imbalance can be created at
fork time than would be allowed at run time. This behaviour
can help some workloads that are short lived and prefer
to remain local but it punishes long-lived tasks that are
memory intensive.

Patch 4 adjusts the threshold where a NUMA imbalance is allowed to
better approximate the number of memory channels, at least
for x86-64.
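
For context, the consolidated [allow|adjust]_numa_imbalance helper mentioned
in the changelog is the function that decides when an imbalance between
nodes is tolerated. A rough approximation of its shape (a sketch based on
the descriptions above, not the exact code in this series) looks like:

/*
 * Rough sketch of a consolidated NUMA-imbalance helper.  The names and
 * threshold details approximate the real kernel/sched/fair.c code; they
 * are not a copy of it.
 */
#define NUMA_IMBALANCE_MIN	2

static long adjust_numa_imbalance(int imbalance, int dst_running,
				  int imb_numa_nr)
{
	/*
	 * Only tolerate an imbalance while the destination node runs
	 * fewer tasks than the allowed imbalance (imb_numa_nr, derived
	 * from the topology as in patch 4).
	 */
	if (dst_running > imb_numa_nr)
		return imbalance;

	/*
	 * Allow a small imbalance so that a pair of communicating tasks
	 * can remain on the same node.
	 */
	if (imbalance <= NUMA_IMBALANCE_MIN)
		return 0;

	return imbalance;
}

The intent described above is that the same check applies consistently at
fork time and at run time, so the imbalance allowed at fork never exceeds
what run-time balancing would tolerate.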

kernel/sched/fair.c | 91 +++++++++++++++++++++++++----------------
kernel/sched/topology.c | 23 +++++++----
2 files changed, 70 insertions(+), 44 deletions(-)

--
2.34.1


2022-05-23 06:09:07

by Mel Gorman

Subject: [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels

For a single LLC per node, a NUMA imbalance is allowed up until 25%
of the CPUs sharing a node could be active. One intent of the cut-off is
to avoid an imbalance of memory channels, but there is no topological
information about active memory channels. Furthermore, there can
be differences between nodes depending on the number of populated
DIMMs.

A cut-off of 25% was arbitrary but generally worked. It does have a severe
corner case though: a parallel workload using 25% of all available CPUs
can over-saturate memory channels. This can happen due to the initial
forking of tasks that get pulled more to one node after early wakeups
(e.g. a barrier synchronisation), an imbalance that is not quickly corrected
by the load balancer. The load balancer may fail to act quickly as the
parallel tasks are considered poor migration candidates due to locality or
cache hotness.

On a range of modern Intel CPUs, 12.5% appears to be a better cut-off,
assuming all memory channels are populated, and is used as the new cut-off
point. A minimum of 1 is specified to allow a communicating pair to
remain local even for CPUs with low numbers of cores. Modern AMD CPUs
have multiple LLCs per node and are not affected.
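
As a worked example (taking the 2-socket, 80-CPU machine from the cover
letter and assuming one LLC per node, so nr_llcs == 1 and the relevant
domain spans the 40 CPUs of a node): the old cut-off allowed an imbalance
of 40 >> 2 = 10 tasks, whereas the new cut-off allows 40 >> 3 = 5; the
max(1U, imb) clamp only matters for nodes with very few CPUs.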

Signed-off-by: Mel Gorman <[email protected]>
---
kernel/sched/topology.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 810750e62118..2740e245cb37 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2295,23 +2295,30 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att

/*
* For a single LLC per node, allow an
- * imbalance up to 25% of the node. This is an
- * arbitrary cutoff based on SMT-2 to balance
- * between memory bandwidth and avoiding
- * premature sharing of HT resources and SMT-4
- * or SMT-8 *may* benefit from a different
- * cutoff.
+ * imbalance up to 12.5% of the node. This is
+ * arbitrary cutoff based two factors -- SMT and
+ * memory channels. For SMT-2, the intent is to
+ * avoid premature sharing of HT resources but
+ * SMT-4 or SMT-8 *may* benefit from a different
+ * cutoff. For memory channels, this is a very
+ * rough estimate of how many channels may be
+ * active and is based on recent CPUs with
+ * many cores.
*
* For multiple LLCs, allow an imbalance
* until multiple tasks would share an LLC
* on one node while LLCs on another node
- * remain idle.
+ * remain idle. This assumes that there are
+ * enough logical CPUs per LLC to avoid SMT
+ * factors and that there is a correlation
+ * between LLCs and memory channels.
*/
nr_llcs = sd->span_weight / child->span_weight;
if (nr_llcs == 1)
- imb = sd->span_weight >> 2;
+ imb = sd->span_weight >> 3;
else
imb = nr_llcs;
+ imb = max(1U, imb);
sd->imb_numa_nr = imb;

/* Set span based on the first NUMA domain. */
--
2.34.1


2022-05-25 08:07:14

by Vincent Guittot

Subject: Re: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

On Fri, 20 May 2022 at 12:35, Mel Gorman <[email protected]> wrote:
>
> Changes since V1
> o Consolidate [allow|adjust]_numa_imbalance (peterz)
> o #ifdefs around NUMA-specific pieces to build arc-allyesconfig (lkp)
>
> A problem was reported privately related to inconsistent performance of
> NAS when parallelised with MPICH. The root of the problem is that the
> initial placement is unpredictable and there can be a larger imbalance
> than expected between NUMA nodes. As there is spare capacity and the faults
> are local, the imbalance persists for a long time and performance suffers.
>
> This is not 100% an "allowed imbalance" problem as setting the allowed
> imbalance to 0 does not fix the issue but the allowed imbalance contributes
> to the performance problem. The unpredictable behaviour was most recently
> introduced by commit c6f886546cb8 ("sched/fair: Trigger the update of
> blocked load on newly idle cpu").
>
> mpirun forks hydra_pmi_proxy helpers with MPICH that go to sleep before
> execing the target workload. As the new tasks are sleeping, the potential
> imbalance is not observed as idle_cpus does not reflect the tasks that
> will be running in the near future. How bad the problem is depends on the
> timing of when fork happens and whether the new tasks are still running.
> Consequently, a large initial imbalance may not be detected until the
> workload is fully running. Once running, NUMA Balancing picks the preferred
> node based on locality and runtime load balancing often ignores the tasks
> as can_migrate_task() fails for either locality or task_hot reasons and
> instead picks unrelated tasks.
>
> This is the min, max, range and mean of run time for mg.D parallelised by
> MPICH with ~25% of the CPUs on a 2-socket machine (80 CPUs, 16 active for
> mg.D due to limitations of mg.D).
>
> v5.3 Min 95.84 Max 96.55 Range 0.71 Mean 96.16
> v5.7 Min 95.44 Max 96.51 Range 1.07 Mean 96.14
> v5.8 Min 96.02 Max 197.08 Range 101.06 Mean 154.70
> v5.12 Min 104.45 Max 111.03 Range 6.58 Mean 105.94
> v5.13 Min 104.38 Max 170.37 Range 65.99 Mean 117.35
> v5.13-revert-c6f886546cb8 Min 104.40 Max 110.70 Range 6.30 Mean 105.68
> v5.18rc4-baseline Min 110.78 Max 169.84 Range 59.06 Mean 131.22
> v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range 3.31 Mean 114.71
> v5.18rc4-this_series Min 95.56 Max 163.97 Range 68.41 Mean 105.39
> v5.18rc4-this_series-revert-c6f886546cb8 Min 95.56 Max 104.86 Range 9.30 Mean 97.00

I'm interested to understand why such instability can be introduced by
c6f886546cb8 as it aims to do the opposite by not waking up a random
idle cpu but using the current cpu which is becoming idle, instead. I
haven't been able to reproduce your problem with my current setup but
I assume this is specific to some use cases so I will try to reproduce
the mg.D test above. If you have more details on the setup to ease the
reproduction of the problem I'm interested.

>
> This shows that we've had unpredictable performance for a long time for
> this load. Instability was introduced somewhere between v5.7 and v5.8,
> fixed in v5.12 and broken again since v5.13. The reverts against v5.13
> and v5.18-rc4 show that c6f886546cb8 is the primary source of instability,
> although the best case is still worse than v5.7.
>
> This series addresses the allowed imbalance problems to get the peak
> performance back to v5.7, although only some of the time due to the
> instability problem. The series plus the revert is stable, with
> slightly better peak performance and similar average performance. I'm
> not convinced commit c6f886546cb8 is wrong but haven't isolated exactly
> why it's unstable. I'm just noting it has an issue for now.
>
> Patch 1 initialises numa_migrate_retry. While this resolves itself
> eventually, it is unpredictable early in the lifetime of
> a task.
>
> Patch 2 avoids swapping NUMA tasks that are in the same NUMA group, or
> that have no NUMA group, if there is spare capacity. Swapping just
> punishes one task to help another.
>
> Patch 3 fixes an issue where a larger imbalance can be created at
> fork time than would be allowed at run time. This behaviour
> can help some workloads that are short lived and prefer
> to remain local but it punishes long-lived tasks that are
> memory intensive.
>
> Patch 4 adjusts the threshold where a NUMA imbalance is allowed to
> better approximate the number of memory channels, at least
> for x86-64.
>
> kernel/sched/fair.c | 91 +++++++++++++++++++++++++----------------
> kernel/sched/topology.c | 23 +++++++----
> 2 files changed, 70 insertions(+), 44 deletions(-)
>
> --
> 2.34.1

2022-05-25 19:44:28

by Mel Gorman

Subject: Re: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

On Tue, May 24, 2022 at 06:01:07PM +0200, Vincent Guittot wrote:
> > This is the min, max, range and mean of run time for mg.D parallelised by
> > MPICH with ~25% of the CPUs on a 2-socket machine (80 CPUs, 16 active for
> > mg.D due to limitations of mg.D).
> >
> > v5.3 Min 95.84 Max 96.55 Range 0.71 Mean 96.16
> > v5.7 Min 95.44 Max 96.51 Range 1.07 Mean 96.14
> > v5.8 Min 96.02 Max 197.08 Range 101.06 Mean 154.70
> > v5.12 Min 104.45 Max 111.03 Range 6.58 Mean 105.94
> > v5.13 Min 104.38 Max 170.37 Range 65.99 Mean 117.35
> > v5.13-revert-c6f886546cb8 Min 104.40 Max 110.70 Range 6.30 Mean 105.68
> > v5.18rc4-baseline Min 110.78 Max 169.84 Range 59.06 Mean 131.22
> > v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range 3.31 Mean 114.71
> > v5.18rc4-this_series Min 95.56 Max 163.97 Range 68.41 Mean 105.39
> > v5.18rc4-this_series-revert-c6f886546cb8 Min 95.56 Max 104.86 Range 9.30 Mean 97.00
>
> I'm interested to understand why such instability can be introduced by
> c6f886546cb8 as it aims to do the opposite by not waking up a random
> idle cpu but using the current cpu which is becoming idle, instead. I
> haven't been able to reproduce your problem with my current setup but
> I assume this is specific to some use cases so I will try to reproduce
> the mg.D test above. If you have more details on the setup to ease the
> reproduction of the problem I'm interested.
>

Thanks Vincent,

The most straight-forward way to reproduce is via mmtests.

# git clone https://github.com/gormanm/mmtests/
# cd mmtests
# ./bin/generate-generic-configs
# ./run-mmtests.sh --run-monitor --config configs/config-hpc-nas-mpich-quarter-mgD-many test-mgD-many
# cd work/log
# ../../compare-kernels.sh

nas-mpich-mg NAS Time
test
mgD-many
Min mg.D 95.80 ( 0.00%)
Amean mg.D 110.77 ( 0.00%)
Stddev mg.D 21.55 ( 0.00%)
CoeffVar mg.D 19.46 ( 0.00%)
Max mg.D 155.35 ( 0.00%)
BAmean-50 mg.D 96.05 ( 0.00%)
BAmean-95 mg.D 107.83 ( 0.00%)
BAmean-99 mg.D 109.23 ( 0.00%)

Note the min of 95.80 seconds, max of 155.35 and high stddev indicating
the results are not stable.

The generated config is for openSUSE so it may not work for you. After
installing the mpich package, you'll need to adjust these lines

export NAS_MPICH_PATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/bin
export NAS_MPICH_LIBPATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/$MMTESTS_LIBDIR

NAS_MPICH_PATH and NAS_MPICH_LIBPATH need to point to the bin and lib
path for the mpich package your distribution ships.
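
For example, on a distribution that installs mpich under /usr/lib64/mpich
(paths here are illustrative only -- check where your mpich package puts
its bin and lib directories), the lines might become:

export NAS_MPICH_PATH=/usr/lib64/mpich/bin
export NAS_MPICH_LIBPATH=/usr/lib64/mpich/lib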

--
Mel Gorman
SUSE Labs

2022-06-01 16:42:03

by Vincent Guittot

Subject: Re: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

On Wed, 25 May 2022 at 14:49, Mel Gorman <[email protected]> wrote:
>
> On Tue, May 24, 2022 at 06:01:07PM +0200, Vincent Guittot wrote:
> > > This is the min, max, range and mean of run time for mg.D parallelised by
> > > MPICH with ~25% of the CPUs on a 2-socket machine (80 CPUs, 16 active for
> > > mg.D due to limitations of mg.D).
> > >
> > > v5.3 Min 95.84 Max 96.55 Range 0.71 Mean 96.16
> > > v5.7 Min 95.44 Max 96.51 Range 1.07 Mean 96.14
> > > v5.8 Min 96.02 Max 197.08 Range 101.06 Mean 154.70
> > > v5.12 Min 104.45 Max 111.03 Range 6.58 Mean 105.94
> > > v5.13 Min 104.38 Max 170.37 Range 65.99 Mean 117.35
> > > v5.13-revert-c6f886546cb8 Min 104.40 Max 110.70 Range 6.30 Mean 105.68
> > > v5.18rc4-baseline Min 110.78 Max 169.84 Range 59.06 Mean 131.22
> > > v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range 3.31 Mean 114.71
> > > v5.18rc4-this_series Min 95.56 Max 163.97 Range 68.41 Mean 105.39
> > > v5.18rc4-this_series-revert-c6f886546cb8 Min 95.56 Max 104.86 Range 9.30 Mean 97.00
> >
> > I'm interested to understand why such instability can be introduced by
> > c6f886546cb8 as it aims to do the opposite by not waking up a random
> > idle cpu but using the current cpu which is becoming idle, instead. I
> > haven't been able to reproduce your problem with my current setup but
> > I assume this is specific to some use cases so I will try to reproduce
> > the mg.D test above. If you have more details on the setup to ease the
> > reproduction of the problem I'm interested.
> >
>
> Thanks Vincent,
>
> The most straight-forward way to reproduce is via mmtests.
>
> # git clone https://github.com/gormanm/mmtests/
> # cd mmtests
> # ./bin/generate-generic-configs
> # ./run-mmtests.sh --run-monitor --config configs/config-hpc-nas-mpich-quarter-mgD-many test-mgD-many
> # cd work/log
> # ../../compare-kernels.sh
>
> nas-mpich-mg NAS Time
> test
> mgD-many
> Min mg.D 95.80 ( 0.00%)
> Amean mg.D 110.77 ( 0.00%)
> Stddev mg.D 21.55 ( 0.00%)
> CoeffVar mg.D 19.46 ( 0.00%)
> Max mg.D 155.35 ( 0.00%)
> BAmean-50 mg.D 96.05 ( 0.00%)
> BAmean-95 mg.D 107.83 ( 0.00%)
> BAmean-99 mg.D 109.23 ( 0.00%)
>
> Note the min of 95.80 seconds, max of 155.35 and high stddev indicating
> the results are not stable.
>
> The generated config is for openSUSE so it may not work for you. After
> installing the mpich package, you'll need to adjust these lines
>
> export NAS_MPICH_PATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/bin
> export NAS_MPICH_LIBPATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/$MMTESTS_LIBDIR
>
> NAS_MPICH_PATH and NAS_MPICH_LIBPATH need to point to the bin and lib
> path for the mpich package your distribution ships.

I have been able to run your tests on my setup: aarch64 2 nodes * 28
cores * 4 threads. But I can't reproduce the problem, results stay
stable before and after reverting c6f886546cb8.

I will continue to try to reproduce it.

nas-mpich-mg NAS Time
test test
mgD-many-v5.18-0 mgD-many-v5.18-revert-0
Min mg.D 78.76 ( 0.00%) 78.78 ( -0.03%)
Amean mg.D 81.13 ( 0.00%) 81.45 * -0.40%*
Stddev mg.D 0.96 ( 0.00%) 1.12 ( -16.84%)
CoeffVar mg.D 1.18 ( 0.00%) 1.37 ( -16.38%)
Max mg.D 82.71 ( 0.00%) 82.91 ( -0.24%)
BAmean-50 mg.D 80.41 ( 0.00%) 80.65 ( -0.30%)
BAmean-95 mg.D 81.02 ( 0.00%) 81.34 ( -0.39%)
BAmean-99 mg.D 81.07 ( 0.00%) 81.40 ( -0.40%)

>
> --
> Mel Gorman
> SUSE Labs

2022-06-01 20:30:32

by Mel Gorman

Subject: Re: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

On Tue, May 31, 2022 at 12:26:51PM +0200, Vincent Guittot wrote:
> > # cd work/log
> > # ../../compare-kernels.sh
> >
> > nas-mpich-mg NAS Time
> > test
> > mgD-many
> > Min mg.D 95.80 ( 0.00%)
> > Amean mg.D 110.77 ( 0.00%)
> > Stddev mg.D 21.55 ( 0.00%)
> > CoeffVar mg.D 19.46 ( 0.00%)
> > Max mg.D 155.35 ( 0.00%)
> > BAmean-50 mg.D 96.05 ( 0.00%)
> > BAmean-95 mg.D 107.83 ( 0.00%)
> > BAmean-99 mg.D 109.23 ( 0.00%)
> >
> > Note the min of 95.80 seconds, max of 155.35 and high stddev indicating
> > the results are not stable.
> >
> > The generated config is for openSUSE so it may not work for you. After
> > installing the mpich package, you'll need to adjust these lines
> >
> > export NAS_MPICH_PATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/bin
> > export NAS_MPICH_LIBPATH=/usr/$MMTESTS_LIBDIR/mpi/gcc/$NAS_MPICH_VERSION/$MMTESTS_LIBDIR
> >
> > NAS_MPICH_PATH and NAS_MPICH_LIBPATH need to point to the bin and lib
> > path for the mpich package your distribution ships.
>
> I have been able to run your tests on my setup: aarch64 2 nodes * 28
> cores * 4 threads. But I can't reproduce the problem, results stay
> stable before and after reverting c6f886546cb8.
>

It's unfortunate but not surprising that it's not perfectly reproducible
given that I expect this to be a race of some description and not all
machines will observe the problem.

Thanks for trying.

--
Mel Gorman
SUSE Labs

2022-06-08 10:43:45

by K Prateek Nayak

Subject: Re: [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour

Hello Mel,

Sorry this took a while but discussed below are the results from
our test system.

tl;dr

o The blip we saw with tbench in NPS2 mode still exists.
o We see some regression and run to run variation with schbench
  but it is independent of this patch and depends on new idle balance.
o Short running Stream tasks still see benefits in NPS2 mode.
o Unixbench shows quite a lot of regression for all NPS modes with single
  and multiple parallel copies. This is expected given the nature of the
  benchmark.
o Other than what is mentioned above, the results with this patch are
  comparable to results on tip and in many cases are more stable with
  the patch.

Detailed numbers are reported below:

On 5/20/2022 4:05 PM, Mel Gorman wrote:
> Changes since V1
> o Consolidate [allow|adjust]_numa_imbalance (peterz)
> o #ifdefs around NUMA-specific pieces to build arc-allyesconfig (lkp)
>
> A problem was reported privately related to inconsistent performance of
> NAS when parallelised with MPICH. The root of the problem is that the
> initial placement is unpredictable and there can be a larger imbalance
> than expected between NUMA nodes. As there is spare capacity and the faults
> are local, the imbalance persists for a long time and performance suffers.
>
> This is not 100% an "allowed imbalance" problem as setting the allowed
> imbalance to 0 does not fix the issue but the allowed imbalance contributes
> to the performance problem. The unpredictable behaviour was most recently
> introduced by commit c6f886546cb8 ("sched/fair: Trigger the update of
> blocked load on newly idle cpu").
>
> mpirun forks hydra_pmi_proxy helpers with MPICH that go to sleep before
> execing the target workload. As the new tasks are sleeping, the potential
> imbalance is not observed as idle_cpus does not reflect the tasks that
> will be running in the near future. How bad the problem is depends on the
> timing of when fork happens and whether the new tasks are still running.
> Consequently, a large initial imbalance may not be detected until the
> workload is fully running. Once running, NUMA Balancing picks the preferred
> node based on locality and runtime load balancing often ignores the tasks
> as can_migrate_task() fails for either locality or task_hot reasons and
> instead picks unrelated tasks.
>
> This is the min, max, range and mean of run time for mg.D parallelised by
> MPICH with ~25% of the CPUs on a 2-socket machine (80 CPUs, 16 active for
> mg.D due to limitations of mg.D).
>
> v5.3 Min 95.84 Max 96.55 Range 0.71 Mean 96.16
> v5.7 Min 95.44 Max 96.51 Range 1.07 Mean 96.14
> v5.8 Min 96.02 Max 197.08 Range 101.06 Mean 154.70
> v5.12 Min 104.45 Max 111.03 Range 6.58 Mean 105.94
> v5.13 Min 104.38 Max 170.37 Range 65.99 Mean 117.35
> v5.13-revert-c6f886546cb8 Min 104.40 Max 110.70 Range 6.30 Mean 105.68
> v5.18rc4-baseline Min 110.78 Max 169.84 Range 59.06 Mean 131.22
> v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range 3.31 Mean 114.71
> v5.18rc4-this_series Min 95.56 Max 163.97 Range 68.41 Mean 105.39
> v5.18rc4-this_series-revert-c6f886546cb8 Min 95.56 Max 104.86 Range 9.30 Mean 97.00

Following are the results from testing on a dual socket Zen3 system
(2 x 64C/128T) in different NPS modes.

Following is the NUMA configuration for each NPS mode on the system:

NPS1: Each socket is a NUMA node.
    Total 2 NUMA nodes in the dual socket machine.

    Node 0: 0-63,   128-191
    Node 1: 64-127, 192-255

NPS2: Each socket is further logically divided into 2 NUMA regions.
    Total 4 NUMA nodes exist over 2 sockets.
   
    Node 0: 0-31,   128-159
    Node 1: 32-63,  160-191
    Node 2: 64-95,  192-223
    Node 3: 96-127, 224-255

NPS4: Each socket is logically divided into 4 NUMA regions.
    Total 8 NUMA nodes exist over 2 sockets.
   
    Node 0: 0-15,    128-143
    Node 1: 16-31,   144-159
    Node 2: 32-47,   160-175
    Node 3: 48-63,   176-191
    Node 4: 64-79,   192-207
    Node 5: 80-95,   208-223
    Node 6: 96-111,  224-239
    Node 7: 112-127, 240-255

Kernel versions:
- tip:          5.18-rc1 tip sched/core
- Numa Bal:     5.18-rc1 tip sched/core + this patch

tip was at commit: a658353167bf "sched/fair: Revise comment about lb decision matrix"

Following are the results reported by the benchmarks:

~~~~~~~~~
hackbench
~~~~~~~~~

NPS1

Test:                   tip                   NUMA Bal
 1-groups:         5.05 (0.00 pct)         5.01 (0.79 pct)
 2-groups:         5.81 (0.00 pct)         5.78 (0.51 pct)
 4-groups:         6.39 (0.00 pct)         6.31 (1.25 pct)
 8-groups:         8.18 (0.00 pct)         8.09 (1.10 pct)
16-groups:        11.43 (0.00 pct)        11.58 (-1.31 pct) [System is overloaded]

NPS2

Test:                   tip                   NUMA Bal
 1-groups:         5.00 (0.00 pct)         4.97 (0.60 pct)
 2-groups:         5.57 (0.00 pct)         5.63 (-1.07 pct)
 4-groups:         6.21 (0.00 pct)         6.17 (0.64 pct)
 8-groups:         7.80 (0.00 pct)         7.68 (1.53 pct)
16-groups:        10.59 (0.00 pct)        10.51 (0.75 pct)

NPS4

Test:                   tip                   NUMA Bal
 1-groups:         4.93 (0.00 pct)         4.95 (-0.40 pct)
 2-groups:         5.41 (0.00 pct)         5.34 (1.29 pct)
 4-groups:         6.33 (0.00 pct)         6.09 (3.79 pct)
 8-groups:         7.87 (0.00 pct)         7.80 (0.88 pct)
16-groups:        10.28 (0.00 pct)        10.40 (-1.16 pct) [System is overloaded]

~~~~~~~~
schbench
~~~~~~~~

NPS1

#workers:     tip                     NUMA Bal
  1:      13.00 (0.00 pct)        12.00 (7.69 pct)
  2:      36.50 (0.00 pct)        20.50 (43.83 pct)
  4:      45.50 (0.00 pct)        31.00 (31.86 pct)
  8:      59.00 (0.00 pct)        43.00 (27.11 pct)
 16:      71.00 (0.00 pct)        68.50 (3.52 pct)
 32:     101.50 (0.00 pct)       107.50 (-5.91 pct)   *
 32:     100.50 (0.00 pct)       103.50 (-2.98 pct)   [Verification Run]
 64:     182.50 (0.00 pct)       188.50 (-3.28 pct)
128:     402.50 (0.00 pct)       420.00 (-4.34 pct)
256:     928.00 (0.00 pct)       915.00 (1.40 pct)
512:     60224.00 (0.00 pct)     60096.00 (0.21 pct)

NPS2

#workers:      tip                     NUMA Bal
  1:      10.00 (0.00 pct)        10.50 (-5.00 pct)   *
  1:      9.00 (0.00 pct)         9.00 (0.00 pct)     [Verification Run]
  2:      26.00 (0.00 pct)        31.00 (-19.23 pct)  *
  2:      18.00 (0.00 pct)        19.50 (-8.33 pct)   [Verification Run]
  4:      42.00 (0.00 pct)        39.00 (7.14 pct)
  8:      52.50 (0.00 pct)        52.50 (0.00 pct)
 16:      66.50 (0.00 pct)        73.00 (-9.77 pct)   *
 16:      81.00 (0.00 pct)        75.00 (7.40 pct)    [Verification Run]
 32:     104.00 (0.00 pct)       105.00 (-0.96 pct)
 64:     186.00 (0.00 pct)       186.00 (0.00 pct)
128:     397.00 (0.00 pct)       397.00 (0.00 pct)
256:     957.00 (0.00 pct)       946.00 (1.14 pct)
512:     60416.00 (0.00 pct)     60224.00 (0.31 pct)

NPS4

#workers:      tip                     NUMA Bal
  1:      11.00 (0.00 pct)        10.50 (4.54 pct)
  2:      32.00 (0.00 pct)        33.00 (-3.12 pct)   *
  2:      35.00 (0.00 pct)        33.50 (4.28 pct)    [Verification Run]
  4:      31.50 (0.00 pct)        35.50 (-12.69 pct)  *
  4:      36.00 (0.00 pct)        35.00 (2.77 pct)    [Verification Run]
  8:      47.50 (0.00 pct)        49.00 (-3.15 pct)
 16:      87.00 (0.00 pct)        91.00 (-4.59 pct)
 32:     102.50 (0.00 pct)       107.00 (-4.39 pct)
 64:     192.50 (0.00 pct)       186.00 (3.37 pct)
128:     404.00 (0.00 pct)       400.50 (0.86 pct)
256:     970.00 (0.00 pct)       968.00 (0.20 pct)
512:     60480.00 (0.00 pct)     60352.00 (0.21 pct)

~~~~~~
tbench
~~~~~~

NPS1

Clients:      tip                     NUMA Bal
    1    438.22 (0.00 pct)       462.66 (5.57 pct)
    2    854.84 (0.00 pct)       898.10 (5.06 pct)
    4    1667.69 (0.00 pct)      1668.37 (0.04 pct)
    8    3018.52 (0.00 pct)      3178.64 (5.30 pct)
   16    5409.81 (0.00 pct)      5547.44 (2.54 pct)
   32    8437.87 (0.00 pct)      8410.80 (-0.32 pct)
   64    15687.72 (0.00 pct)     15960.17 (1.73 pct)
  128    27370.64 (0.00 pct)     27936.86 (2.06 pct)
  256    26645.86 (0.00 pct)     23011.01 (-13.64 pct)  [Known to be unstable]
  512    51768.54 (0.00 pct)     52320.17 (1.06 pct)
 1024    51736.04 (0.00 pct)     53242.06 (2.91 pct)

NPS2

Clients:       tip                    NUMA Bal
    1    446.30 (0.00 pct)       455.73 (2.11 pct)
    2    863.29 (0.00 pct)       868.29 (0.57 pct)
    4    1667.76 (0.00 pct)      1604.60 (-3.78 pct)
    8    2989.28 (0.00 pct)      2859.84 (-4.33 pct)
   16    5563.14 (0.00 pct)      5048.52 (-9.25 pct)    *
   16    5204.00 (0.00 pct)      4931.12 (-5.24 pct)    [Verification Run]
   32    10036.35 (0.00 pct)     9230.29 (-8.03 pct)    *
   32    9561.56 (0.00 pct)      9432.73 (-1.34 pct)    [Verification Run]
   64    16220.99 (0.00 pct)     15277.82 (-5.81 pct)   *
   64    16417.34 (0.00 pct)     15323.03 (-6.66 pct)   [Verification Run]
  128    24169.97 (0.00 pct)     26450.11 (9.43 pct)
  256    25147.23 (0.00 pct)     22811.07 (-9.28 pct)   [Known to be unstable]
  512    49985.76 (0.00 pct)     49978.16 (-0.01 pct)
 1024    51226.39 (0.00 pct)     51445.20 (0.42 pct)

NPS4

Clients:      tip                     NUMA Bal
    1    446.19 (0.00 pct)       451.40 (1.16 pct)
    2    870.95 (0.00 pct)       882.02 (1.27 pct)
    4    1635.15 (0.00 pct)      1662.83 (1.69 pct)
    8    3057.77 (0.00 pct)      3071.47 (0.44 pct)
   16    5446.06 (0.00 pct)      5660.99 (3.94 pct)
   32    10159.76 (0.00 pct)     10703.73 (5.35 pct)
   64    16778.72 (0.00 pct)     17979.45 (7.15 pct)
  128    27336.35 (0.00 pct)     28242.78 (3.31 pct)
  256    23160.91 (0.00 pct)     21820.05 (-5.78 pct)     [Known to be unstable]
  512    48981.68 (0.00 pct)     51492.91 (5.12 pct)
 1024    50575.32 (0.00 pct)     51642.89 (2.11 pct)

Note: tbench results for 256 workers are known to have
run to run variation on the test machine. Any regression
seen for that data point can be safely ignored.

~~~~~~
Stream
~~~~~~

- 10 runs

NPS1

Test:           tip                    NUMA Bal
 Copy:   178979.35 (0.00 pct)    174059.37 (-2.74 pct)
Scale:   195878.87 (0.00 pct)    201516.78 (2.87 pct)
  Add:   218987.24 (0.00 pct)    232609.27 (6.22 pct)
Triad:   215253.14 (0.00 pct)    227262.98 (5.57 pct)

NPS2

Test:            tip                   NUMA Bal
 Copy:   146772.26 (0.00 pct)    162532.71 (10.73 pct)
Scale:   183512.68 (0.00 pct)    194247.05 (5.84 pct)
  Add:   197574.24 (0.00 pct)    213254.88 (7.93 pct)
Triad:   195992.83 (0.00 pct)    211433.42 (7.87 pct)

NPS4

Test:            tip                   NUMA Bal
 Copy:   174993.71 (0.00 pct)    241688.13 (38.11 pct)
Scale:   221704.93 (0.00 pct)    218607.33 (-1.39 pct)
  Add:   252474.35 (0.00 pct)    264950.80 (4.94 pct)
Triad:   248847.55 (0.00 pct)    259883.14 (4.43 pct)

- 100 runs

NPS1

Test:            tip                   NUMA Bal
 Copy:   217128.10 (0.00 pct)    220565.22 (1.58 pct)
Scale:   215839.44 (0.00 pct)    215465.32 (-0.17 pct)
  Add:   263765.70 (0.00 pct)    263365.12 (-0.15 pct)
Triad:   251130.97 (0.00 pct)    251276.93 (0.05 pct)

NPS2

Test:            tip                   NUMA Bal
 Copy:   227274.62 (0.00 pct)    240077.10 (5.63 pct)
Scale:   219327.39 (0.00 pct)    220378.48 (0.47 pct)
  Add:   275971.20 (0.00 pct)    278044.21 (0.75 pct)
Triad:   262696.11 (0.00 pct)    265308.69 (0.99 pct)

NPS4

Test:            tip                   NUMA Bal
 Copy:   254879.07 (0.00 pct)    257151.33 (0.89 pct)
Scale:   228398.61 (0.00 pct)    229324.22 (0.40 pct)
  Add:   289858.40 (0.00 pct)    290531.58 (0.23 pct)
Triad:   272872.48 (0.00 pct)    274209.85 (0.49 pct)

~~~~~~~~~~~~
ycsb-mongodb
~~~~~~~~~~~~

NPS1

tip:          303718.33 (var: 1.31)
NUMA Bal:     300220.00 (var: 2.01)   (-1.15pct)

NPS2

tip:          304536.33 (var: 2.46)
NUMA Bal:     301681.67 (var: 0.56)   (-0.93 pct)

NPS4

tip:          301192.33 (var: 1.81)
NUMA Bal:     301025.00 (var: 1.35)   (-0.05 pct)

~~~~~~~~~~~~~~~~~
Unixbench - Spawn
~~~~~~~~~~~~~~~~~

NPS1

Parallel Copies              tip                  NUMA Bal
1 copy:                7020.0 (0.00 pct)      6143.7 (-12.48 pct)
4 copy:               17210.8 (0.00 pct)     16143.6 (-6.20 pct)

NPS2

Parallel Copies              tip                  NUMA Bal
1 copy:                8923.2 (0.00 pct)      7781.0 (-12.80 pct)
4 copy:               18679.5 (0.00 pct)     17396.9 (-6.86 pct)

NPS4

Parallel Copies              tip                  NUMA Bal
1 copy:                7873.1 (0.00 pct)      6786.7 (-13.79 pct)
4 copy:               18090.1 (0.00 pct)     17137.9 (-5.26 pct)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run to Run Variation Details on Tip and Patched Kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o schbench numbers depend on the new idle balance and
  the results reported are affected by external factors
  on both the tip and the patched kernel, leading
  to a large amount of run to run variation.
  One of the data points, for example, is given below:

  --------------------------
  - tip vs NUMA Bal (NPS2) -
  --------------------------

  Metric           tip           NUMA Bal

  - 2 workers

  Min           : 20.00          25.00
  Max           : 34.00          41.00
  Median        : 26.00          31.00
  AMean         : 26.40          32.20
  GMean         : 26.16          31.78
  HMean         : 25.92          31.38
  AMean Stddev  : 3.81           5.51
  AMean CoefVar : 14.42 pct      17.12 pct

  - 2 workers (Rerun)

  Min           : 17.00          18.00
  Max           : 20.00          23.00
  Median        : 18.00          19.50
  AMean         : 18.10          19.60
  GMean         : 18.08          19.55
  HMean         : 18.06          19.49
  AMean Stddev  : 0.88           1.58
  AMean CoefVar : 4.84 pct       8.05 pct

o tbench still shows blips in NPS2 mode. Some of the
  datapoints that show regression are more stable on
  the patched kernel while others show larger run to
  run variation.
  Below is the detailed data for each data point.

  --------------------------
  - tip vs NUMA Bal (NPS2) -
  --------------------------

  Metric           tip           NUMA Bal

  - 16 clients

  Min           : 5528.71         4911.89
  Max           : 5584.80         5266.24
  Median        : 5576.24         4981.15
  AMean         : 5563.25         5053.09
  GMean         : 5563.20         5050.79
  HMean         : 5563.14         5048.52
  AMean Stddev  : 30.22           187.81
  AMean CoefVar : 0.54 pct        3.72 pct

  - 32 clients

  Min           : 9296.28         9128.25
  Max           : 10710.00        9342.78
  Median        : 10206.90        9222.35
  AMean         : 10071.06        9231.13
  GMean         : 10053.81        9230.71
  HMean         : 10036.35        9230.29
  AMean Stddev  : 716.58          107.53
  AMean CoefVar : 7.12 pct        1.16 pct

  - 64 clients

  Min           : 15222.50        15043.90
  Max           : 17063.60        15612.60
  Median        : 16488.30        15188.30
  AMean         : 16258.13        15281.60
  GMean         : 16239.68        15279.70
  HMean         : 16220.99        15277.82
  AMean Stddev  : 941.88          295.61
  AMean CoefVar : 5.79 pct        1.93 pct

  --------------------------------
  - tip vs NUMA Bal Rerun (NPS2) -
  --------------------------------

  Metric            tip           NUMA Bal

  - 16 clients

  Min           : 5174.01         4802.58
  Max           : 5239.66         5118.68
  Median        : 5198.76         4882.89
  AMean         : 5204.14         4934.72
  GMean         : 5204.07         4932.91
  HMean         : 5204.00         4931.12
  AMean Stddev  : 33.15           164.30
  AMean CoefVar : 0.64 pct        3.33 pct

  - 32 clients

  Min           : 9029.56         9105.11
  Max           : 10630.40        9750.46
  Median        : 9179.43         9464.88
  AMean         : 9613.13         9440.15
  GMean         : 9586.88         9436.45
  HMean         : 9561.56         9432.73
  AMean Stddev  : 884.16          323.38
  AMean CoefVar : 9.20 pct        3.43 pct

  - 64 clients

  Min           : 16190.30        14822.20
  Max           : 16596.00        15683.80
  Median        : 16471.00        15490.10
  AMean         : 16419.10        15332.03
  GMean         : 16418.22        15327.55
  HMean         : 16417.34        15323.03
  AMean Stddev  : 207.77          452.03
  AMean CoefVar : 1.27 pct        2.95 pct

> [..snip..]
Other than a couple of blips in tbench and schbench, the results
overall look stable. The Unixbench regression is explained by the nature
of the benchmark, which prefers consolidation.

Overall, the results look good. The numbers reported with the patch seem
to be comparable to those with tip, and there are good gains reported
for tbench in the NPS1 and NPS4 configs, and for Stream in the NPS2 config.
Some data points that show run to run variation on tip are now relatively
more stable with the patch.

Tested-by: K Prateek Nayak <[email protected]>

--
Thanks and Regards,
Prateek

2022-06-13 09:22:11

by tip-bot2 for Mel Gorman

Subject: [tip: sched/core] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 026b98a93bbdbefb37ab8008df84e38e2fedaf92
Gitweb: https://git.kernel.org/tip/026b98a93bbdbefb37ab8008df84e38e2fedaf92
Author: Mel Gorman <[email protected]>
AuthorDate: Fri, 20 May 2022 11:35:19 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 13 Jun 2022 10:30:00 +02:00

sched/numa: Adjust imb_numa_nr to a better approximation of memory channels

For a single LLC per node, a NUMA imbalance is allowed up until 25%
of the CPUs sharing a node could be active. One intent of the cut-off is
to avoid an imbalance of memory channels, but there is no topological
information about active memory channels. Furthermore, there can
be differences between nodes depending on the number of populated
DIMMs.

A cut-off of 25% was arbitrary but generally worked. It does have a severe
corner case though: a parallel workload using 25% of all available CPUs
can over-saturate memory channels. This can happen due to the initial
forking of tasks that get pulled more to one node after early wakeups
(e.g. a barrier synchronisation), an imbalance that is not quickly corrected
by the load balancer. The load balancer may fail to act quickly as the
parallel tasks are considered poor migration candidates due to locality or
cache hotness.

On a range of modern Intel CPUs, 12.5% appears to be a better cut-off,
assuming all memory channels are populated, and is used as the new cut-off
point. A minimum of 1 is specified to allow a communicating pair to
remain local even for CPUs with low numbers of cores. Modern AMD CPUs
have multiple LLCs per node and are not affected.

Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/topology.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 05b6c2a..8739c2a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2316,23 +2316,30 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att

/*
* For a single LLC per node, allow an
- * imbalance up to 25% of the node. This is an
- * arbitrary cutoff based on SMT-2 to balance
- * between memory bandwidth and avoiding
- * premature sharing of HT resources and SMT-4
- * or SMT-8 *may* benefit from a different
- * cutoff.
+ * imbalance up to 12.5% of the node. This is
+ * arbitrary cutoff based two factors -- SMT and
+ * memory channels. For SMT-2, the intent is to
+ * avoid premature sharing of HT resources but
+ * SMT-4 or SMT-8 *may* benefit from a different
+ * cutoff. For memory channels, this is a very
+ * rough estimate of how many channels may be
+ * active and is based on recent CPUs with
+ * many cores.
*
* For multiple LLCs, allow an imbalance
* until multiple tasks would share an LLC
* on one node while LLCs on another node
- * remain idle.
+ * remain idle. This assumes that there are
+ * enough logical CPUs per LLC to avoid SMT
+ * factors and that there is a correlation
+ * between LLCs and memory channels.
*/
nr_llcs = sd->span_weight / child->span_weight;
if (nr_llcs == 1)
- imb = sd->span_weight >> 2;
+ imb = sd->span_weight >> 3;
else
imb = nr_llcs;
+ imb = max(1U, imb);
sd->imb_numa_nr = imb;

/* Set span based on the first NUMA domain. */