2013-06-04 06:22:45

by Michael wang

Subject: [PATCH 0/3] sched: code refine/clean for cfs-bandwidth


A small patch set of code refinements and cleanups for CFS bandwidth.

Michael Wang (3):
[PATCH 1/3] sched: don't repeat the initialization in sched_init()
[PATCH 2/3] sched: code refine in unthrottle_cfs_rq()
[PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c

---
b/kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
b/kernel/sched/fair.c | 2 +-
kernel/sched/fair.c | 4 ----
3 files changed, 26 insertions(+), 26 deletions(-)


2013-06-04 06:23:24

by Michael wang

Subject: [PATCH 1/3] sched: don't repeat the initialization in sched_init()

In sched_init(), there is no need to initialize 'root_task_group.shares' and
'root_task_group.cfs_bandwidth' repeatedly.

CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
1 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..c0c3716 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6955,6 +6955,31 @@ void __init sched_init(void)

#endif /* CONFIG_CGROUP_SCHED */

+#ifdef CONFIG_FAIR_GROUP_SCHED
+ root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+
+ /*
+ * How much cpu bandwidth does root_task_group get?
+ *
+ * In case of task-groups formed thr' the cgroup filesystem, it
+ * gets 100% of the cpu resources in the system. This overall
+ * system cpu resource is divided among the tasks of
+ * root_task_group and its child task-groups in a fair manner,
+ * based on each entity's (task or task-group's) weight
+ * (se->load.weight).
+ *
+ * In other words, if root_task_group has 10 tasks of weight
+ * 1024) and two child groups A0 and A1 (of weight 1024 each),
+ * then A0's share of the cpu resource is:
+ *
+ * A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
+ *
+ * We achieve this by letting root_task_group's tasks sit
+ * directly in rq->cfs (i.e root_task_group->se[] = NULL).
+ */
+ init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
+#endif
+
for_each_possible_cpu(i) {
struct rq *rq;

@@ -6966,28 +6991,7 @@ void __init sched_init(void)
init_cfs_rq(&rq->cfs);
init_rt_rq(&rq->rt, rq);
#ifdef CONFIG_FAIR_GROUP_SCHED
- root_task_group.shares = ROOT_TASK_GROUP_LOAD;
INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
- /*
- * How much cpu bandwidth does root_task_group get?
- *
- * In case of task-groups formed thr' the cgroup filesystem, it
- * gets 100% of the cpu resources in the system. This overall
- * system cpu resource is divided among the tasks of
- * root_task_group and its child task-groups in a fair manner,
- * based on each entity's (task or task-group's) weight
- * (se->load.weight).
- *
- * In other words, if root_task_group has 10 tasks of weight
- * 1024) and two child groups A0 and A1 (of weight 1024 each),
- * then A0's share of the cpu resource is:
- *
- * A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
- *
- * We achieve this by letting root_task_group's tasks sit
- * directly in rq->cfs (i.e root_task_group->se[] = NULL).
- */
- init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
#endif /* CONFIG_FAIR_GROUP_SCHED */

--
1.7.4.1

2013-06-04 06:23:49

by Michael wang

Subject: [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()

Directly use rq to save some code.

CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..1e10911 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
int enqueue = 1;
long task_delta;

- se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+ se = cfs_rq->tg->se[cpu_of(rq)];

cfs_rq->throttled = 0;
raw_spin_lock(&cfs_b->lock);
--
1.7.4.1

2013-06-04 06:24:18

by Michael wang

Subject: [PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c

default_cfs_period(), do_sched_cfs_period_timer() and do_sched_cfs_slack_timer()
are already defined earlier in the file; there is no need to declare them again.

CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/fair.c | 4 ----
1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..73cad33 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2599,10 +2599,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
throttle_cfs_rq(cfs_rq);
}

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
{
struct cfs_bandwidth *cfs_b =
--
1.7.4.1

2013-06-04 06:52:55

by Paul Turner

Subject: Re: [PATCH 1/3] sched: don't repeat the initialization in sched_init()

On Mon, Jun 3, 2013 at 11:23 PM, Michael Wang
<[email protected]> wrote:
> In sched_init(), there is no need to initialize 'root_task_group.shares' and
> 'root_task_group.cfs_bandwidth' repeatedly.
>
> CC: Ingo Molnar <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Signed-off-by: Michael Wang <[email protected]>
> ---
> kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
> 1 files changed, 25 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..c0c3716 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6955,6 +6955,31 @@ void __init sched_init(void)
>
> #endif /* CONFIG_CGROUP_SCHED */
>
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> + root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> +
> + /*
> + * How much cpu bandwidth does root_task_group get?
> + *
> + * In case of task-groups formed thr' the cgroup filesystem, it
> + * gets 100% of the cpu resources in the system. This overall
> + * system cpu resource is divided among the tasks of
> + * root_task_group and its child task-groups in a fair manner,
> + * based on each entity's (task or task-group's) weight
> + * (se->load.weight).
> + *
> + * In other words, if root_task_group has 10 tasks of weight
> + * 1024) and two child groups A0 and A1 (of weight 1024 each),
> + * then A0's share of the cpu resource is:
> + *
> + * A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> + *
> + * We achieve this by letting root_task_group's tasks sit
> + * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> + */

This comment has become unglued from what it's supposed to be attached
to (it's tied to root_task_group.shares & init_tg_cfs_entry, not
init_cfs_bandwidth).

> + init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> +#endif
> +
> for_each_possible_cpu(i) {
> struct rq *rq;
>
> @@ -6966,28 +6991,7 @@ void __init sched_init(void)
> init_cfs_rq(&rq->cfs);
> init_rt_rq(&rq->rt, rq);
> #ifdef CONFIG_FAIR_GROUP_SCHED
> - root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
> - /*
> - * How much cpu bandwidth does root_task_group get?
> - *
> - * In case of task-groups formed thr' the cgroup filesystem, it
> - * gets 100% of the cpu resources in the system. This overall
> - * system cpu resource is divided among the tasks of
> - * root_task_group and its child task-groups in a fair manner,
> - * based on each entity's (task or task-group's) weight
> - * (se->load.weight).
> - *
> - * In other words, if root_task_group has 10 tasks of weight
> - * 1024) and two child groups A0 and A1 (of weight 1024 each),
> - * then A0's share of the cpu resource is:
> - *
> - * A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> - *
> - * We achieve this by letting root_task_group's tasks sit
> - * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> - */
> - init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
> #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> --
> 1.7.4.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

2013-06-04 07:23:38

by Michael wang

Subject: Re: [PATCH 1/3] sched: don't repeat the initialization in sched_init()

Hi, Paul

On 06/04/2013 02:52 PM, Paul Turner wrote:
> On Mon, Jun 3, 2013 at 11:23 PM, Michael Wang

[snip]

>
> This comment has become unglued from what it's supposed to be attached
> to (it's tied to root_task_group.shares & init_tg_cfs_entry, not
> init_cfs_bandwidth).

Thanks for your review and the reminder :)

What about putting the comment next to init_tg_cfs_entry()?

'root_task_group.shares' may not need to be covered by the comment;
after all, it has no peers to compare its share against...

Regards,
Michael Wang

>
>> + init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> +#endif
>> +
>> for_each_possible_cpu(i) {
>> struct rq *rq;
>>
>> @@ -6966,28 +6991,7 @@ void __init sched_init(void)
>> init_cfs_rq(&rq->cfs);
>> init_rt_rq(&rq->rt, rq);
>> #ifdef CONFIG_FAIR_GROUP_SCHED
>> - root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>> INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>> - /*
>> - * How much cpu bandwidth does root_task_group get?
>> - *
>> - * In case of task-groups formed thr' the cgroup filesystem, it
>> - * gets 100% of the cpu resources in the system. This overall
>> - * system cpu resource is divided among the tasks of
>> - * root_task_group and its child task-groups in a fair manner,
>> - * based on each entity's (task or task-group's) weight
>> - * (se->load.weight).
>> - *
>> - * In other words, if root_task_group has 10 tasks of weight
>> - * 1024) and two child groups A0 and A1 (of weight 1024 each),
>> - * then A0's share of the cpu resource is:
>> - *
>> - * A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
>> - *
>> - * We achieve this by letting root_task_group's tasks sit
>> - * directly in rq->cfs (i.e root_task_group->se[] = NULL).
>> - */
>> - init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>> #endif /* CONFIG_FAIR_GROUP_SCHED */
>>
>> --
>> 1.7.4.1
>>
>

2013-06-05 02:24:31

by Michael wang

Subject: [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()

v2:
Move comments back before init_tg_cfs_entry(). (Thanks to pjt for pointing this out)

In sched_init(), there is no need to initialize 'root_task_group.shares' and
'root_task_group.cfs_bandwidth' repeatedly.

CC: Paul Turner <[email protected]>
CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/core.c | 7 +++++--
1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..96f69da 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6955,6 +6955,11 @@ void __init sched_init(void)

#endif /* CONFIG_CGROUP_SCHED */

+#ifdef CONFIG_FAIR_GROUP_SCHED
+ root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+ init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
+#endif
+
for_each_possible_cpu(i) {
struct rq *rq;

@@ -6966,7 +6971,6 @@ void __init sched_init(void)
init_cfs_rq(&rq->cfs);
init_rt_rq(&rq->rt, rq);
#ifdef CONFIG_FAIR_GROUP_SCHED
- root_task_group.shares = ROOT_TASK_GROUP_LOAD;
INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
/*
* How much cpu bandwidth does root_task_group get?
@@ -6987,7 +6991,6 @@ void __init sched_init(void)
* We achieve this by letting root_task_group's tasks sit
* directly in rq->cfs (i.e root_task_group->se[] = NULL).
*/
- init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
#endif /* CONFIG_FAIR_GROUP_SCHED */

--
1.7.4.1

2013-06-05 11:07:10

by Peter Zijlstra

Subject: Re: [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()

On Wed, Jun 05, 2013 at 10:24:18AM +0800, Michael Wang wrote:
> v2:
> Move comments back before init_tg_cfs_entry(). (Thanks to pjt for pointing this out)
>
> In sched_init(), there is no need to initialize 'root_task_group.shares' and
> 'root_task_group.cfs_bandwidth' repeatedly.
>
> CC: Paul Turner <[email protected]>
> CC: Ingo Molnar <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Signed-off-by: Michael Wang <[email protected]>
> ---
> kernel/sched/core.c | 7 +++++--
> 1 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..96f69da 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6955,6 +6955,11 @@ void __init sched_init(void)
>
> #endif /* CONFIG_CGROUP_SCHED */
>
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> + root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> + init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> +#endif
> +
> for_each_possible_cpu(i) {
> struct rq *rq;
>
> @@ -6966,7 +6971,6 @@ void __init sched_init(void)
> init_cfs_rq(&rq->cfs);
> init_rt_rq(&rq->rt, rq);
> #ifdef CONFIG_FAIR_GROUP_SCHED
> - root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
> /*
> * How much cpu bandwidth does root_task_group get?
> @@ -6987,7 +6991,6 @@ void __init sched_init(void)
> * We achieve this by letting root_task_group's tasks sit
> * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> */
> - init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
> #endif /* CONFIG_FAIR_GROUP_SCHED */

I would actually like a patch reducing the #ifdef forest there, not
adding to it.

There's no actual harm in doing the initialization multiple times,
right?

2013-06-05 11:15:53

by Peter Zijlstra

Subject: Re: [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()

On Tue, Jun 04, 2013 at 02:23:39PM +0800, Michael Wang wrote:
> Directly use rq to save some code.
>
> CC: Ingo Molnar <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Signed-off-by: Michael Wang <[email protected]>

Please send patches against tip/master; the below didn't apply cleanly.
It was a trivial conflict so I applied force and made it fit.

Thanks!

> ---
> kernel/sched/fair.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c61a614..1e10911 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
> int enqueue = 1;
> long task_delta;
>
> - se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
> + se = cfs_rq->tg->se[cpu_of(rq)];
>
> cfs_rq->throttled = 0;
> raw_spin_lock(&cfs_b->lock);
> --
> 1.7.4.1
>

2013-06-05 11:16:13

by Peter Zijlstra

Subject: Re: [PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c

On Tue, Jun 04, 2013 at 02:24:08PM +0800, Michael Wang wrote:
> default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
> already defined previously, no need to declare again.
>
> CC: Ingo Molnar <[email protected]>
> CC: Peter Zijlstra <[email protected]>
> Signed-off-by: Michael Wang <[email protected]>

Thanks!

2013-06-06 02:19:24

by Michael wang

Subject: Re: [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()

On 06/05/2013 07:06 PM, Peter Zijlstra wrote:
> On Wed, Jun 05, 2013 at 10:24:18AM +0800, Michael Wang wrote:
>> v2:
>> Move comments back before init_tg_cfs_entry(). (Thanks to pjt for pointing this out)
>>
>> In sched_init(), there is no need to initialize 'root_task_group.shares' and
>> 'root_task_group.cfs_bandwidth' repeatedly.
>>
>> CC: Paul Turner <[email protected]>
>> CC: Ingo Molnar <[email protected]>
>> CC: Peter Zijlstra <[email protected]>
>> Signed-off-by: Michael Wang <[email protected]>
>> ---
>> kernel/sched/core.c | 7 +++++--
>> 1 files changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 58453b8..96f69da 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6955,6 +6955,11 @@ void __init sched_init(void)
>>
>> #endif /* CONFIG_CGROUP_SCHED */
>>
>> +#ifdef CONFIG_FAIR_GROUP_SCHED
>> + root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>> + init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> +#endif
>> +
>> for_each_possible_cpu(i) {
>> struct rq *rq;
>>
>> @@ -6966,7 +6971,6 @@ void __init sched_init(void)
>> init_cfs_rq(&rq->cfs);
>> init_rt_rq(&rq->rt, rq);
>> #ifdef CONFIG_FAIR_GROUP_SCHED
>> - root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>> INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>> /*
>> * How much cpu bandwidth does root_task_group get?
>> @@ -6987,7 +6991,6 @@ void __init sched_init(void)
>> * We achieve this by letting root_task_group's tasks sit
>> * directly in rq->cfs (i.e root_task_group->se[] = NULL).
>> */
>> - init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>> #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> I would actually like a patch reducing the #ifdef forest there, not
> adding to it.

I see :)

>
> There's no actual harm in doing the initialization multiple times,
> right?

Yeah, it's safe to redo the init; it costs some cycles but isn't expensive.

Regards,
Michael Wang


2013-06-06 02:22:58

by Michael wang

Subject: Re: [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()

On 06/05/2013 07:15 PM, Peter Zijlstra wrote:
> On Tue, Jun 04, 2013 at 02:23:39PM +0800, Michael Wang wrote:
>> Directly use rq to save some code.
>>
>> CC: Ingo Molnar <[email protected]>
>> CC: Peter Zijlstra <[email protected]>
>> Signed-off-by: Michael Wang <[email protected]>
>
> Please send patches against tip/master; the below didn't apply cleanly.
> It was a trivial conflict so I applied force and made it fit.

My sincere apologies on that; please allow me to resend the accepted
patches based on the latest tip/master. Forgive me for creating extra
work like that...

Regards,
Michael Wang

>
> Thanks!
>
>> ---
>> kernel/sched/fair.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index c61a614..1e10911 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
>> int enqueue = 1;
>> long task_delta;
>>
>> - se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
>> + se = cfs_rq->tg->se[cpu_of(rq)];
>>
>> cfs_rq->throttled = 0;
>> raw_spin_lock(&cfs_b->lock);
>> --
>> 1.7.4.1
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2013-06-06 02:39:50

by Michael wang

Subject: [PATCH v2 2/3] sched: code refine in unthrottle_cfs_rq()

v2:
re-based on latest tip/master

Directly use rq to save some code.

CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 143dcdb..0cea941 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2275,7 +2275,7 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
struct sched_entity *se;
long task_delta, dequeue = 1;

- se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+ se = cfs_rq->tg->se[cpu_of(rq)];

/* freeze hierarchy runnable averages while throttled */
rcu_read_lock();
--
1.7.4.1

2013-06-06 02:40:04

by Michael wang

Subject: [PATCH v2 3/3] sched: remove the useless declaration in kernel/sched/fair.c

v2:
re-based on latest tip/master

default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
already defined previously, no need to declare again.

CC: Ingo Molnar <[email protected]>
CC: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
kernel/sched/fair.c | 4 ----
1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0cea941..9efc50f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2618,10 +2618,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
throttle_cfs_rq(cfs_rq);
}

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
{
struct cfs_bandwidth *cfs_b =
--
1.7.4.1

Subject: [tip:sched/core] sched: Remove the useless declaration in kernel/sched/fair.c

Commit-ID: 8404c90d050733b3404dc36c500f63ccb0c972ce
Gitweb: http://git.kernel.org/tip/8404c90d050733b3404dc36c500f63ccb0c972ce
Author: Michael Wang <[email protected]>
AuthorDate: Tue, 4 Jun 2013 14:24:08 +0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 19 Jun 2013 12:58:41 +0200

sched: Remove the useless declaration in kernel/sched/fair.c

default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
already defined previously, no need to declare again.

Signed-off-by: Michael Wang <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 47a30be..c0ac2c3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2618,10 +2618,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
throttle_cfs_rq(cfs_rq);
}

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
{
struct cfs_bandwidth *cfs_b =

Subject: [tip:sched/core] sched: Refine the code in unthrottle_cfs_rq()

Commit-ID: 22b958d8cc5127d22d2ad2141277d312d93fad6c
Gitweb: http://git.kernel.org/tip/22b958d8cc5127d22d2ad2141277d312d93fad6c
Author: Michael Wang <[email protected]>
AuthorDate: Tue, 4 Jun 2013 14:23:39 +0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 19 Jun 2013 12:58:41 +0200

sched: Refine the code in unthrottle_cfs_rq()

Directly use rq to save some code.

Signed-off-by: Michael Wang <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 143dcdb..47a30be 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2315,7 +2315,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
int enqueue = 1;
long task_delta;

- se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+ se = cfs_rq->tg->se[cpu_of(rq)];

cfs_rq->throttled = 0;