2004-04-08 23:27:45

by Darren Hart

Subject: 2.6.5-rc3-mm4 x86_64 sched domains patch

The current default implementations of arch_init_sched_domains
construct either a flat or a two-level topology. The two-level
topology is built if CONFIG_NUMA is set. It seems that CONFIG_NUMA is
not the appropriate flag to use for constructing a two-level topology,
since some architectures which define CONFIG_NUMA would be better served
by a flat topology. x86_64, for example, will construct a two-level
topology with one CPU per node, causing performance problems because
balancing within nodes is pointless and balancing across nodes doesn't
occur as often.
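
To make that concrete, here is a standalone userspace sketch, purely
illustrative and not kernel code: with the 1:1 cpu_to_node mapping of a
one-CPU-per-node x86_64 box, every node-local domain spans exactly one
CPU, so there is nothing for the lower level to balance.

#include <stdio.h>

#define NR_CPUS		4
#define NR_NODES	4	/* x86_64 K8: one CPU per node */

static int cpu_to_node(int cpu)
{
	return cpu;		/* 1:1 on this topology */
}

int main(void)
{
	int node, cpu;

	for (node = 0; node < NR_NODES; node++) {
		int span = 0;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (cpu_to_node(cpu) == node)
				span++;
		/* a domain spanning one CPU has nothing to balance */
		printf("node %d: node-local domain spans %d CPU%s\n",
		       node, span, span == 1 ? " (degenerate)" : "s");
	}
	return 0;
}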

This patch introduces a new CONFIG_SCHED_NUMA flag and uses it to decide
between a flat and a two-level topology of sched_domains. The patch is
minimally invasive, as it primarily modifies Kconfig files and sets the
appropriate default (off for x86_64, on for everything that used to
export CONFIG_NUMA), and should only change the sched_domains topology
constructed on x86_64 systems. I have verified this on a 4-node x86
NUMAQ, but need someone to test x86_64.

This patch is intended as a quick fix for the x86_64 problem, and
doesn't solve the problem of how to build generic sched domain
topologies. We can certainly conceive of various topologies for x86
systems, so even arch-specific topologies may not be sufficient. Would
sub-arch (i.e. NUMAQ) be the right way to handle different topologies, or
will we be able to autodiscover the appropriate topology? I will be
looking into this more, but thought some might benefit from an immediate
x86_64 fix. I am very interested in hearing your ideas on this.

Regards,

Darren Hart


diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/alpha/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/alpha/Kconfig
--- linux-2.6.5-rc3-mm4/arch/alpha/Kconfig 2004-04-02 06:42:46.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/alpha/Kconfig 2004-04-02 16:16:58.000000000 -0800
@@ -519,6 +519,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server machines. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
# LARGE_VMALLOC is racy, if you *really* need it then fix it first
config ALPHA_LARGE_VMALLOC
bool
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/i386/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/i386/Kconfig
--- linux-2.6.5-rc3-mm4/arch/i386/Kconfig 2004-04-02 06:42:52.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/i386/Kconfig 2004-04-07 11:57:41.000000000 -0700
@@ -772,6 +772,14 @@ config NUMA
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
# Need comments to help the hapless user trying to turn on NUMA support
comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
depends on X86_NUMAQ && (!HIGHMEM64G || !SMP)
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/ia64/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/ia64/Kconfig
--- linux-2.6.5-rc3-mm4/arch/ia64/Kconfig 2004-04-02 06:42:52.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/ia64/Kconfig 2004-04-02 16:16:57.000000000 -0800
@@ -172,6 +172,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server systems. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config VIRTUAL_MEM_MAP
bool "Virtual mem map"
default y if !IA64_HP_SIM
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/mips/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/mips/Kconfig
--- linux-2.6.5-rc3-mm4/arch/mips/Kconfig 2004-04-02 06:42:46.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/mips/Kconfig 2004-04-02 16:16:58.000000000 -0800
@@ -337,6 +337,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server machines. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config MAPPED_KERNEL
bool "Mapped kernel support"
depends on SGI_IP27
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/ppc64/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/ppc64/Kconfig
--- linux-2.6.5-rc3-mm4/arch/ppc64/Kconfig 2004-04-02 06:42:52.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/ppc64/Kconfig 2004-04-02 16:16:59.000000000 -0800
@@ -173,6 +173,14 @@ config NUMA
bool "NUMA support"
depends on DISCONTIGMEM

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config SCHED_SMT
bool "SMT (Hyperthreading) scheduler support"
depends on SMP
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/arch/x86_64/Kconfig linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/x86_64/Kconfig
--- linux-2.6.5-rc3-mm4/arch/x86_64/Kconfig 2004-04-02 06:42:52.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/arch/x86_64/Kconfig 2004-04-02 16:17:00.000000000 -0800
@@ -261,6 +261,14 @@ config NUMA
depends on K8_NUMA
default y

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default n
+ help
+ Enable two level sched domains hierarchy.
+ Say N if unsure.
+
config HAVE_DEC_LOCK
bool
depends on SMP
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/include/linux/sched.h linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/include/linux/sched.h
--- linux-2.6.5-rc3-mm4/include/linux/sched.h 2004-04-02 06:42:53.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/include/linux/sched.h 2004-04-02 16:17:01.000000000 -0800
@@ -623,7 +623,7 @@ struct sched_domain {
.nr_balance_failed = 0, \
}

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
/* Common values for NUMA nodes */
#define SD_NODE_INIT (struct sched_domain) { \
.span = CPU_MASK_NONE, \
@@ -656,7 +656,7 @@ static inline int set_cpus_allowed(task_

extern unsigned long long sched_clock(void);

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
extern void sched_balance_exec(void);
#else
#define sched_balance_exec() {}
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-rc3-mm4/kernel/sched.c linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/kernel/sched.c
--- linux-2.6.5-rc3-mm4/kernel/sched.c 2004-04-02 06:42:53.000000000 -0800
+++ linux-2.6.5-rc3-mm4-x86_64_arch_sched_domain/kernel/sched.c 2004-04-07 11:50:11.000000000 -0700
@@ -42,7 +42,7 @@
#include <linux/percpu.h>
#include <linux/kthread.h>

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
#define cpu_to_node_mask(cpu) node_to_cpumask(cpu_to_node(cpu))
#else
#define cpu_to_node_mask(cpu) (cpu_online_map)
@@ -1142,7 +1142,7 @@ enum idle_type
};

#ifdef CONFIG_SMP
-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
/*
* If dest_cpu is allowed for this process, migrate the task to it.
* This is accomplished by forcing the cpu_allowed mask to only
@@ -1241,7 +1241,7 @@ void sched_balance_exec(void)
out:
put_cpu();
}
-#endif /* CONFIG_NUMA */
+#endif /* CONFIG_SCHED_NUMA */

/*
* double_lock_balance - lock the busiest runqueue, this_rq is locked already.
@@ -3461,7 +3461,7 @@ extern void __init arch_init_sched_domai
#else
static struct sched_group sched_group_cpus[NR_CPUS];
static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
static struct sched_group sched_group_nodes[MAX_NUMNODES];
static DEFINE_PER_CPU(struct sched_domain, node_domains);
static void __init arch_init_sched_domains(void)
@@ -3532,7 +3532,7 @@ static void __init arch_init_sched_domai
}
}

-#else /* !CONFIG_NUMA */
+#else /* !CONFIG_SCHED_NUMA */
static void __init arch_init_sched_domains(void)
{
int i;
@@ -3570,7 +3570,7 @@ static void __init arch_init_sched_domai
}
}

-#endif /* CONFIG_NUMA */
+#endif /* CONFIG_SCHED_NUMA */
#endif /* ARCH_HAS_SCHED_DOMAIN */

#define SCHED_DOMAIN_DEBUG


2004-04-08 23:42:13

by Nick Piggin

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch



Darren Hart wrote:

>The current default implementations of arch_init_sched_domains
>construct either a flat or a two-level topology. The two-level
>topology is built if CONFIG_NUMA is set. It seems that CONFIG_NUMA is
>not the appropriate flag to use for constructing a two-level topology,
>since some architectures which define CONFIG_NUMA would be better served
>by a flat topology. x86_64, for example, will construct a two-level
>topology with one CPU per node, causing performance problems because
>balancing within nodes is pointless and balancing across nodes doesn't
>occur as often.
>
>

This is correct, although I don't know why there would be
performance problems. The rebalance in the degenerate node-local
domain should be basically unmeasurable. It would be nice to
get rid of it at some time. I have code to prune off degenerate
domains, which I will submit soonish.
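
The check is roughly this shape (a simplified sketch with stand-in
types, not the actual patch): a level that spans a single CPU, or that
has only one group, can be unlinked from the chain without losing any
balancing.

struct sd_sketch {
	unsigned long span;		/* bitmask of CPUs in this level */
	int nr_groups;			/* groups to balance between */
	struct sd_sketch *parent;
};

static int mask_weight(unsigned long mask)
{
	int w = 0;

	while (mask) {
		w += mask & 1;
		mask >>= 1;
	}
	return w;
}

static int sd_degenerate(struct sd_sketch *sd)
{
	/* one CPU or one group: no balancing decision to make */
	return mask_weight(sd->span) == 1 || sd->nr_groups <= 1;
}

static void prune_degenerate(struct sd_sketch **chain)
{
	while (*chain && sd_degenerate(*chain))
		*chain = (*chain)->parent;
}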

The NUMA rebalance should occur more often than the old numasched
did, but perhaps with some recent Altix-centric changes to the
generic setup, this is no longer the case.

The STREAM performance problem is due mainly to the more
conservative nature of balancing, which is otherwise a good thing.
I think we can fix this in the short term by having x86_64 balance
between nodes more often. In the long term, we can merge Ingo's
balance on clone stuff, and the interested people can play with
that.
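
As a sketch of what "more often" could mean (invented field names and
numbers, not the real sched_domain tunables): shrink the node-level
balance intervals on x86_64 so cross-node balancing fires almost as
often as CPU-level balancing does elsewhere.

struct sd_tunables {
	unsigned int min_interval;	/* ms between balance attempts */
	unsigned int max_interval;	/* back-off ceiling, in ms */
	unsigned int busy_factor;	/* interval multiplier when busy */
};

/* generic node level: balance rarely, assuming expensive links */
static const struct sd_tunables node_generic = { 8, 32, 32 };

/* conceivable x86_64 node level: HyperTransport links are cheap,
 * so balance across nodes much more aggressively */
static const struct sd_tunables node_x86_64 = { 1, 8, 4 };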

>This patch introduces a new CONFIG_SCHED_NUMA flag and uses it to decide
>between a flat and a two-level topology of sched_domains. The patch is
>minimally invasive, as it primarily modifies Kconfig files and sets the
>appropriate default (off for x86_64, on for everything that used to
>export CONFIG_NUMA), and should only change the sched_domains topology
>constructed on x86_64 systems. I have verified this on a 4-node x86
>NUMAQ, but need someone to test x86_64.
>
>

I guess I can't see a big problem with this, other than more
complexity. In the long run, we should obviously have the arch
code set up optimal domains depending on the machine and config.

>This patch is intended as a quick fix for the x86_64 problem, and
>doesn't solve the problem of how to build generic sched domain
>topologies. We can certainly conceive of various topologies for x86
>systems, so even arch-specific topologies may not be sufficient. Would
>sub-arch (i.e. NUMAQ) be the right way to handle different topologies, or
>will we be able to autodiscover the appropriate topology? I will be
>looking into this more, but thought some might benefit from an immediate
>x86_64 fix. I am very interested in hearing your ideas on this.
>
>

SGI want to do sub-arch domains so they can do specific things
with their systems. I don't really care what the arch code does
with them, but it would be wise to only specialise it when there
is a genuine need. I'm glad you'll be looking into it, thanks.

Nick

2004-04-11 08:57:29

by Shai Fultheim

Subject: RE: 2.6.5-rc3-mm4 x86_64 sched domains patch

Can SLIT/SRAT be used here to define topology for the generic case?

SRAT is being used by i386 to build zonelists, but not for the scheduler -
any good reason why?
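
To sketch the idea with an invented 4-node SLIT (plain userspace C, not
a proposal for actual code; 10 is the SLIT convention for "local"):
nodes whose mutual distance falls under some cutoff could share one
domain level, with a second level spanning the rest.

#include <stdio.h>

#define NR_NODES	4
#define NEAR		20	/* assumed cutoff for one domain level */

/* invented SLIT matrix: 10 = local, larger = farther away */
static const int slit[NR_NODES][NR_NODES] = {
	{ 10, 16, 16, 22 },
	{ 16, 10, 22, 16 },
	{ 16, 22, 10, 16 },
	{ 22, 16, 16, 10 },
};

int main(void)
{
	int i, j;

	for (i = 0; i < NR_NODES; i++) {
		printf("node %d groups with:", i);
		for (j = 0; j < NR_NODES; j++)
			if (i != j && slit[i][j] <= NEAR)
				printf(" %d", j);
		printf("\n");
	}
	return 0;
}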



--Shai



2004-04-11 09:57:48

by Rick Lindsley

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

> Can SLIT/SRAT be used here to define topology for the generic case?
>
> SRAT is being used by i386 to build zonelists, but not for the scheduler -
> any good reason why?

I can think of some possible reasons, but I'm not familiar with SLIT/SRAT
... can you describe it for me?

Rick

2004-04-11 15:08:21

by Martin J. Bligh

Subject: RE: 2.6.5-rc3-mm4 x86_64 sched domains patch

> Can SLIT/SRAT be used here to define topology for the generic case?
>
> SRAT is being used by i386 to build zonelists, but not for the scheduler -
> any good reason why?

Because it's not generic to all machines.

M.

2004-04-14 13:45:02

by Andi Kleen

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

On Thu, 08 Apr 2004 16:22:09 -0700
Darren Hart <[email protected]> wrote:


>
> This patch is intended as a quick fix for the x86_64 problem, and

Ingo's latest tweaks seemed to already cure STREAM, but agreed, some
more tuning is probably a good idea.

> doesn't solve the problem of how to build generic sched domain
> topologies. We can certainly conceive of various topologies for x86
> systems, so even arch-specific topologies may not be sufficient. Would
> sub-arch (i.e. NUMAQ) be the right way to handle different topologies, or
> will we be able to autodiscover the appropriate topology? I will be
> looking into this more, but thought some might benefit from an immediate
> x86_64 fix. I am very interested in hearing your ideas on this.


The patch doesn't apply against 2.6.5-mm5 anymore. Can you generate a new patch?
I will test it then.

Also it will need merging with the patch that adds SMT support for IA32e machines
on x86-64.

-Andi

2004-04-14 14:27:37

by Nick Piggin

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

Andi Kleen wrote:
> On Thu, 08 Apr 2004 16:22:09 -0700
> Darren Hart <[email protected]> wrote:
>
>
>
>>This patch is intended as a quick fix for the x86_64 problem, and
>
>
> Ingo's latest tweaks seemed to already cure STREAM, but agreed, some
> more tuning is probably a good idea.
>

Where is STREAM versus other kernels? You said you got
best performance on a custom 2.4 kernel. Do we match
that?

How is your performance for other things? I recall you
may have told me about some other (smaller) issues you
were seeing?

2004-04-14 14:44:01

by Andi Kleen

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

On Thu, 15 Apr 2004 00:14:19 +1000
Nick Piggin <[email protected]> wrote:

> Andi Kleen wrote:
> > On Thu, 08 Apr 2004 16:22:09 -0700
> > Darren Hart <[email protected]> wrote:
> >
> >
> >
> >>This patch is intended as a quick fix for the x86_64 problem, and
> >
> >
> > Ingo's latest tweaks seemed to already cure STREAM, but agreed, some
> > more tuning is probably a good idea.
> >
>
> Where is STREAM versus other kernels? You said you got
> best performance on a custom 2.4 kernel. Do we match
> that?

Differences were below the measurement error, so I consider it fixed.

>
> How is your performance for other things? I recall you
> may have told me about some other (smaller) issues you
> were seeing?

I haven't tested much yet. I can compare kernel compilations later.

Also I'm still somewhat hoping that the IBM benchmark team will take a stab at
it - they are much better than me at running many tests.

-Andi

2004-04-14 17:24:59

by Darren Hart

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

On Wed, 2004-04-14 at 06:44, Andi Kleen wrote:
> On Thu, 08 Apr 2004 16:22:09 -0700
> Darren Hart <[email protected]> wrote:
> > This patch is intended as a quick fix for the x86_64 problem, and
>
> Ingo's latest tweaks seemed to already cure STREAM, but agreed, some
> more tuning is probably a good idea.
> ...
> The patch doesn't apply against 2.6.5-mm5 anymore. Can you generate a new patch?
> I will test it then.

Find below the patch updated for akpm's 2.6.5-mm5-1.bz2 patch. As with
the previous patch I verified it works properly on a 4 node, 16 CPU
NUMA-Q. Please test both CONFIG_SCHED_NUMA=n (the improved case,
default) and CONFIG_SCHED_NUMA=y (pre-patch equivalent) on x86_64, and
thanks!
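
For reference, the two configurations look like this in .config
(assuming the NUMA prerequisites are already enabled so the option
shows up):

	# improved case (new x86_64 default)
	CONFIG_NUMA=y
	# CONFIG_SCHED_NUMA is not set

	# pre-patch equivalent
	CONFIG_NUMA=y
	CONFIG_SCHED_NUMA=y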

>
> Also it will need merging with the patch that adds SMT support for IA32e machines
> on x86-64.

Where is this patch?

-- Darren





diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/alpha/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/alpha/Kconfig
--- linux-2.6.5-mm5/arch/alpha/Kconfig 2004-04-03 19:37:40.000000000 -0800
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/alpha/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -519,6 +519,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server machines. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
# LARGE_VMALLOC is racy, if you *really* need it then fix it first
config ALPHA_LARGE_VMALLOC
bool
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/i386/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/i386/Kconfig
--- linux-2.6.5-mm5/arch/i386/Kconfig 2004-04-14 09:37:40.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/i386/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -724,6 +724,14 @@ config NUMA
default n if X86_PC
default y if (X86_NUMAQ || X86_SUMMIT)

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
# Need comments to help the hapless user trying to turn on NUMA support
comment "NUMA (NUMA-Q) requires SMP, 64GB highmem support"
depends on X86_NUMAQ && (!HIGHMEM64G || !SMP)
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/ia64/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/ia64/Kconfig
--- linux-2.6.5-mm5/arch/ia64/Kconfig 2004-04-14 09:37:41.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/ia64/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -172,6 +172,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server systems. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config VIRTUAL_MEM_MAP
bool "Virtual mem map"
default y if !IA64_HP_SIM
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/mips/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/mips/Kconfig
--- linux-2.6.5-mm5/arch/mips/Kconfig 2004-04-03 19:37:06.000000000 -0800
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/mips/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -337,6 +337,14 @@ config NUMA
Access). This option is for configuring high-end multiprocessor
server machines. If in doubt, say N.

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config MAPPED_KERNEL
bool "Mapped kernel support"
depends on SGI_IP27
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/ppc64/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/ppc64/Kconfig
--- linux-2.6.5-mm5/arch/ppc64/Kconfig 2004-04-14 09:37:43.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/ppc64/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -173,6 +173,14 @@ config NUMA
bool "NUMA support"
depends on DISCONTIGMEM

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default y
+ help
+ Enable two level sched domains hierarchy.
+ Say Y if unsure.
+
config SCHED_SMT
bool "SMT (Hyperthreading) scheduler support"
depends on SMP
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/arch/x86_64/Kconfig linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/x86_64/Kconfig
--- linux-2.6.5-mm5/arch/x86_64/Kconfig 2004-04-14 09:37:46.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/arch/x86_64/Kconfig 2004-04-14 09:39:40.000000000 -0700
@@ -261,6 +261,14 @@ config NUMA
depends on K8_NUMA
default y

+config SCHED_NUMA
+ bool "Two level sched domains"
+ depends on NUMA
+ default n
+ help
+ Enable two level sched domains hierarchy.
+ Say N if unsure.
+
config HAVE_DEC_LOCK
bool
depends on SMP
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/include/linux/sched.h linux-2.6.5-mm5-x86_64_arch_sched_domain/include/linux/sched.h
--- linux-2.6.5-mm5/include/linux/sched.h 2004-04-14 09:38:08.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/include/linux/sched.h 2004-04-14 09:41:35.000000000 -0700
@@ -670,7 +670,7 @@ struct sched_domain {
.nr_balance_failed = 0, \
}

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
/* Common values for NUMA nodes */
#define SD_NODE_INIT (struct sched_domain) { \
.span = CPU_MASK_NONE, \
diff -aurpN -X /home/dvhart/.diff.exclude linux-2.6.5-mm5/kernel/sched.c linux-2.6.5-mm5-x86_64_arch_sched_domain/kernel/sched.c
--- linux-2.6.5-mm5/kernel/sched.c 2004-04-14 09:38:09.000000000 -0700
+++ linux-2.6.5-mm5-x86_64_arch_sched_domain/kernel/sched.c 2004-04-14 09:45:34.000000000 -0700
@@ -45,7 +45,7 @@
#include <linux/seq_file.h>
#include <linux/times.h>

-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
#define cpu_to_node_mask(cpu) node_to_cpumask(cpu_to_node(cpu))
#else
#define cpu_to_node_mask(cpu) (cpu_online_map)
@@ -3735,7 +3735,7 @@ extern void __init arch_init_sched_domai
#else
static struct sched_group sched_group_cpus[NR_CPUS];
static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
-#ifdef CONFIG_NUMA
+#ifdef CONFIG_SCHED_NUMA
static struct sched_group sched_group_nodes[MAX_NUMNODES];
static DEFINE_PER_CPU(struct sched_domain, node_domains);
static void __init arch_init_sched_domains(void)
@@ -3806,7 +3806,7 @@ static void __init arch_init_sched_domai
}
}

-#else /* !CONFIG_NUMA */
+#else /* !CONFIG_SCHED_NUMA */
static void __init arch_init_sched_domains(void)
{
int i;
@@ -3845,7 +3845,7 @@ static void __init arch_init_sched_domai
}
}

-#endif /* CONFIG_NUMA */
+#endif /* CONFIG_SCHED_NUMA */
#endif /* ARCH_HAS_SCHED_DOMAIN */

#define SCHED_DOMAIN_DEBUG


2004-04-14 23:26:41

by Suresh Siddha

Subject: RE: 2.6.5-rc3-mm4 x86_64 sched domains patch

Darren Hart wrote:
> On Wed, 2004-04-14 at 06:44, Andi Kleen wrote:
> > Also it will need merging with the patch that adds SMT
> support for IA32e machines
> > on x86-64.
>
> Where is this patch?
>
> -- Darren

Attached is the patch, which goes on top of a slightly older mm tree.

thanks,
suresh


Attachments:
smt.diff (17.06 kB)

2004-04-15 05:51:47

by Nick Piggin

Subject: Re: 2.6.5-rc3-mm4 x86_64 sched domains patch

Andi Kleen wrote:
> On Thu, 15 Apr 2004 00:14:19 +1000
> Nick Piggin <[email protected]> wrote:

>>Where is STREAM versus other kernels? You said you got
>>best performance on a custom 2.4 kernel. Do we match
>>that?
>
>
> Differences were below the measurement error, so I consider it fixed.
>

great.

>
>>How is your performance for other things? I recall you
>>may have told me about some other (smaller) issues you
>>were seeing?
>
>
> I haven't tested much yet. I can compare kernel compilations later.
>

That would be good. I don't expect you to do all the work,
but with Opteron being a non-traditional NUMA machine, and me doing
most of my testing on an old NUMAQ, results there are quite important.

Even if you just got some results for a couple of random
benchmarks, that would be great.

> Also I'm still somewhat hoping that the IBM benchmark team will take a stab at
> it - they are much better than me at running many tests.
>

Well we've survived OSDL's STP tests as far as I know. A
couple of regressions were found and fixed there, so that
was good.