A soft lockup is detected in build_sched_domains() on 32-socket
Sapphire Rapids systems with 3840 processors.
topology_span_sane(), called by build_sched_domains(), checks that each
processor's non-NUMA scheduling domains are completely equal or
completely disjoint. If a non-NUMA scheduling domain partially overlaps
another, scheduling groups can break.
This series adds for_each_cpu_from() as a generic cpumask macro to
optimize topology_span_sane() by removing duplicate comparisons. The
total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 on each non-NUMA scheduling domain level, decreasing
the boot time by approximately 20 seconds and preventing the soft lockup
on the mentioned systems.
Changes in v2:
* 1/2: Change for_each_cpu()'s description.
* 2/2: Add more information to the commit message.
* https://lore.kernel.org/linux-kernel/[email protected]/T/
Kyle Meyer (2):
cpumask: Add for_each_cpu_from()
sched/topology: Optimize topology_span_sane()
include/linux/cpumask.h | 10 ++++++++++
kernel/sched/topology.c | 6 ++----
2 files changed, 12 insertions(+), 4 deletions(-)
--
2.44.0
Optimize topology_span_sane() by removing duplicate comparisons.
Since topology_span_sane() is called inside for_each_cpu(), each
previous CPU has already been compared against every other CPU. The
current CPU only needs to be compared against higher-numbered CPUs.
The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 on each non-NUMA scheduling domain level.
Signed-off-by: Kyle Meyer <[email protected]>
Reviewed-by: Yury Norov <[email protected]>
Acked-by: Vincent Guittot <[email protected]>
---
kernel/sched/topology.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 99ea5986038c..b6bcafc09969 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
static bool topology_span_sane(struct sched_domain_topology_level *tl,
const struct cpumask *cpu_map, int cpu)
{
- int i;
+ int i = cpu + 1;
/* NUMA levels are allowed to overlap */
if (tl->flags & SDTL_OVERLAP)
@@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
* breaking the sched_group lists - i.e. a later get_group() pass
* breaks the linking done for an earlier span.
*/
- for_each_cpu(i, cpu_map) {
- if (i == cpu)
- continue;
+ for_each_cpu_from(i, cpu_map) {
/*
* We should 'and' all those masks with 'cpu_map' to exactly
* match the topology we're about to build, but that can only
--
2.44.0
On Wed, Apr 10, 2024 at 04:33:11PM -0500, Kyle Meyer wrote:
> Optimize topology_span_sane() by removing duplicate comparisons.
>
> Since topology_span_sane() is called inside of for_each_cpu(), each
> pervious CPU has already been compared against every other CPU. The
previous
> current CPU only needs to be compared against higher-numbered CPUs.
>
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 on each non-NUMA scheduling domain level.
Thank you, now it makes sense.
--
With Best Regards,
Andy Shevchenko
On Thu, Apr 11, 2024 at 01:38:46PM +0300, Andy Shevchenko wrote:
> On Wed, Apr 10, 2024 at 04:33:11PM -0500, Kyle Meyer wrote:
> > Optimize topology_span_sane() by removing duplicate comparisons.
> >
> > Since topology_span_sane() is called inside of for_each_cpu(), each
> > pervious CPU has already been compared against every other CPU. The
>
> previous
Thank you for pointing that out. Should I send an updated version or can
a maintainer correct my mistake?
> > current CPU only needs to be compared against higher-numbered CPUs.
> >
> > The total number of comparisons is reduced from N * (N - 1) to
> > N * (N - 1) / 2 on each non-NUMA scheduling domain level.
>
> Thank you, now it makes sense.
Thanks,
Kyle Meyer
On Thu, Apr 11, 2024 at 04:55:11PM -0500, Kyle Meyer wrote:
> On Thu, Apr 11, 2024 at 01:38:46PM +0300, Andy Shevchenko wrote:
> > On Wed, Apr 10, 2024 at 04:33:11PM -0500, Kyle Meyer wrote:
...
> > > Since topology_span_sane() is called inside of for_each_cpu(), each
> > > pervious CPU has already been compared against every other CPU. The
> >
> > previous
>
> Thank you for pointing that out. Should I send an updated version or can
> a maintainer correct my mistake?
Depends on the maintainer. I'm not the one here, don't expect answer from me.
> > > current CPU only needs to be compared against higher-numbered CPUs.
--
With Best Regards,
Andy Shevchenko
On Fri, Apr 12, 2024 at 05:25:26PM +0300, Andy Shevchenko wrote:
> On Thu, Apr 11, 2024 at 04:55:11PM -0500, Kyle Meyer wrote:
> > On Thu, Apr 11, 2024 at 01:38:46PM +0300, Andy Shevchenko wrote:
> > > On Wed, Apr 10, 2024 at 04:33:11PM -0500, Kyle Meyer wrote:
>
> ...
>
> > > > Since topology_span_sane() is called inside of for_each_cpu(), each
> > > > pervious CPU has already been compared against every other CPU. The
> > >
> > > previous
> >
> > Thank you for pointing that out. Should I send an updated version or can
> > a maintainer correct my mistake?
>
> Depends on the maintainer. I'm not the one here, don't expect answer from me.
I like this rework, and I'll take it in bitmap-for-next for testing.
I see Vincent already acked it, so I can also move it upstream, if no
objections from Peter, Juri and Ingo.
Thanks,
Yury
On 10/04/24 16:33, Kyle Meyer wrote:
> Optimize topology_span_sane() by removing duplicate comparisons.
>
> Since topology_span_sane() is called inside of for_each_cpu(), each
> pervious CPU has already been compared against every other CPU. The
> current CPU only needs to be compared against higher-numbered CPUs.
>
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 on each non-NUMA scheduling domain level.
>
> Signed-off-by: Kyle Meyer <[email protected]>
> Reviewed-by: Yury Norov <[email protected]>
> Acked-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>