2009-09-01 08:52:35

by Peter Zijlstra

Subject: [RFC][PATCH 4/8] sched: add smt_gain

The idea is that multi-threading a core yields more work capacity than
a single thread, so provide a way to express a static gain for threads.

Signed-off-by: Peter Zijlstra <[email protected]>
---
 include/linux/sched.h    |    1 +
 include/linux/topology.h |    1 +
 kernel/sched.c           |    8 +++++++-
 3 files changed, 9 insertions(+), 1 deletion(-)

Index: linux-2.6/include/linux/sched.h
===================================================================
--- linux-2.6.orig/include/linux/sched.h
+++ linux-2.6/include/linux/sched.h
@@ -930,6 +930,7 @@ struct sched_domain {
unsigned int newidle_idx;
unsigned int wake_idx;
unsigned int forkexec_idx;
+ unsigned int smt_gain;
int flags; /* See SD_* */
enum sched_domain_level level;

Index: linux-2.6/include/linux/topology.h
===================================================================
--- linux-2.6.orig/include/linux/topology.h
+++ linux-2.6/include/linux/topology.h
@@ -99,6 +99,7 @@ int arch_update_cpu_topology(void);
| SD_SHARE_CPUPOWER, \
.last_balance = jiffies, \
.balance_interval = 1, \
+ .smt_gain = 1178, /* 15% */ \
}
#endif
#endif /* CONFIG_SCHED_SMT */
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -8490,9 +8490,15 @@ static void init_sched_groups_power(int
weight = cpumask_weight(sched_domain_span(sd));
/*
* SMT siblings share the power of a single core.
+ * Usually multiple threads get a better yield out of
+ * that one core than a single thread would have,
+ * reflect that in sd->smt_gain.
*/
- if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1)
+ if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1) {
+ power *= sd->smt_gain;
power /= weight;
+ power >>= SCHED_LOAD_SHIFT;
+ }
sg_inc_cpu_power(sd->groups, power);
return;
}

--


by Gautham R Shenoy

Subject: Re: [RFC][PATCH 4/8] sched: add smt_gain

On Tue, Sep 01, 2009 at 10:34:35AM +0200, Peter Zijlstra wrote:
> + .smt_gain = 1178, /* 15% */ \

/* 15% of SCHED_LOAD_SCALE */ , I suppose.


--
Thanks and Regards
gautham

2009-09-02 11:26:19

by Peter Zijlstra

Subject: Re: [RFC][PATCH 4/8] sched: add smt_gain

On Wed, 2009-09-02 at 16:52 +0530, Gautham R Shenoy wrote:
> > + .smt_gain = 1178, /* 15% */ \
>
> /* 15% of SCHED_LOAD_SCALE */ , I suppose.

Yeah, but that didn't fit on the line.

Also, it's +15%, or 115% when pedantic ;-)

2009-09-04 08:56:14

by Peter Zijlstra

Subject: [tip:sched/balancing] sched: Add smt_gain

Commit-ID: a52bfd73589eaf88d9c95ad2c1de0b38a6b27972
Gitweb: http://git.kernel.org/tip/a52bfd73589eaf88d9c95ad2c1de0b38a6b27972
Author: Peter Zijlstra <[email protected]>
AuthorDate: Tue, 1 Sep 2009 10:34:35 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 4 Sep 2009 10:09:54 +0200

sched: Add smt_gain

The idea is that multi-threading a core yields more work
capacity than a single thread, so provide a way to express a
static gain for threads.

Signed-off-by: Peter Zijlstra <[email protected]>
Tested-by: Andreas Herrmann <[email protected]>
Acked-by: Andreas Herrmann <[email protected]>
Acked-by: Gautham R Shenoy <[email protected]>
Cc: Balbir Singh <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>


---
 include/linux/sched.h    |    1 +
 include/linux/topology.h |    1 +
 kernel/sched.c           |    8 +++++++-
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 651dded..9c81c92 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -921,6 +921,7 @@ struct sched_domain {
unsigned int newidle_idx;
unsigned int wake_idx;
unsigned int forkexec_idx;
+ unsigned int smt_gain;
int flags; /* See SD_* */
enum sched_domain_level level;

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7402c1a..6203ae5 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -99,6 +99,7 @@ int arch_update_cpu_topology(void);
| SD_SHARE_CPUPOWER, \
.last_balance = jiffies, \
.balance_interval = 1, \
+ .smt_gain = 1178, /* 15% */ \
}
#endif
#endif /* CONFIG_SCHED_SMT */
diff --git a/kernel/sched.c b/kernel/sched.c
index ecb4a47..5511226 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8523,9 +8523,15 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
weight = cpumask_weight(sched_domain_span(sd));
/*
* SMT siblings share the power of a single core.
+ * Usually multiple threads get a better yield out of
+ * that one core than a single thread would have,
+ * reflect that in sd->smt_gain.
*/
- if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1)
+ if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1) {
+ power *= sd->smt_gain;
power /= weight;
+ power >>= SCHED_LOAD_SHIFT;
+ }
sg_inc_cpu_power(sd->groups, power);
return;
}