2013-03-26 06:00:58

by Joonsoo Kim

Subject: [PATCH v2 0/6] correct load_balance()

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But some pieces are still missing for this feature to work properly.
This patchset corrects them and makes load_balance() more robust.

The other patches are related to LBF_ALL_PINNED, the fallback path
taken when no task can be moved because of cpu affinity. But currently,
if a task migration is skipped because the imbalance is too small for
the task's load, we leave the LBF_ALL_PINNED flag set and a 'redo' is
triggered. This is not our intention, so correct it.
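
To illustrate, here is a minimal user-space model of the pre-patch
move_tasks() flow (not the kernel code; the flag name matches, the
rest is invented):

#include <stdio.h>

#define LBF_ALL_PINNED 0x01

struct task { int load; int pinned; };

/* LBF_ALL_PINNED starts set and is only cleared once a task passes the
 * affinity check.  Pre-patch, the load check ran first, so when the
 * imbalance is small every task is skipped before the affinity check
 * and the flag is never cleared. */
static int scan(struct task *t, int n, int imbalance, int *flags)
{
        int moved = 0;
        int i;

        *flags = LBF_ALL_PINNED;
        for (i = 0; i < n; i++) {
                if (t[i].load / 2 > imbalance)
                        continue;               /* skipped before the affinity check */
                if (t[i].pinned)
                        continue;
                *flags &= ~LBF_ALL_PINNED;      /* affinity check passed */
                moved++;
        }
        return moved;
}

int main(void)
{
        struct task t[2] = { { 100, 0 }, { 80, 0 } };   /* nothing pinned */
        int flags;

        scan(t, 2, 10, &flags);         /* imbalance far below any load */
        if (flags & LBF_ALL_PINNED)
                printf("LBF_ALL_PINNED left set -> needless 'redo'\n");
        return 0;
}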

These are based on v3.9-rc4.

Changelog
v1->v2: Changes based on Peter's suggestions
[4/6]: don't move the load-evaluation code into can_migrate_task()
[5/6]: rename load_balance_tmpmask to load_balance_mask
[6/6]: don't use an extra cpumask; reuse env's cpus to prevent re-selecting dst-cpu

Joonsoo Kim (6):
sched: change position of resched_cpu() in load_balance()
sched: explicitly cpu_idle_type checking in rebalance_domains()
sched: don't consider other cpus in our group in case of NEWLY_IDLE
sched: move up affinity check to mitigate useless redoing overhead
sched: rename load_balance_tmpmask to load_balance_mask
sched: prevent to re-select dst-cpu in load_balance()

kernel/sched/core.c | 4 +--
kernel/sched/fair.c | 67 +++++++++++++++++++++++++++------------------------
2 files changed, 38 insertions(+), 33 deletions(-)

--
1.7.9.5


2013-03-26 06:00:59

by Joonsoo Kim

Subject: [PATCH v2 1/6] sched: change position of resched_cpu() in load_balance()

If env.flags has LBF_NEED_BREAK set, we jump back to more_balance and
cur_ld_moved is overwritten by the next move_tasks() pass. So if the
final pass moves no tasks, we miss calling resched_cpu() even though
an earlier pass did move some. Correct it by calling resched_cpu()
before checking LBF_NEED_BREAK.
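
The control flow can be modelled in user space (only the shape of the
loop mirrors load_balance(); everything else is invented):

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
        /* pass 0 moves 2 tasks and asks for a break; pass 1 moves none */
        int pass_moved[] = { 2, 0 };
        bool need_break[] = { true, false };
        int cur_moved;
        int pass = 0;

more_balance:
        cur_moved = pass_moved[pass];
        if (need_break[pass]) {
                pass++;
                goto more_balance;      /* pre-patch: jumps before the resched,
                                         * and cur_moved = 2 is overwritten */
        }
        if (cur_moved)
                printf("resched_cpu() called\n");
        else
                printf("resched_cpu() missed although tasks were moved\n");
        return 0;
}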

Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a33e59..f084069 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5061,17 +5061,17 @@ more_balance:
                double_rq_unlock(env.dst_rq, busiest);
                local_irq_restore(flags);

-               if (env.flags & LBF_NEED_BREAK) {
-                       env.flags &= ~LBF_NEED_BREAK;
-                       goto more_balance;
-               }
-
                /*
                 * some other cpu did the load balance for us.
                 */
                if (cur_ld_moved && env.dst_cpu != smp_processor_id())
                        resched_cpu(env.dst_cpu);

+               if (env.flags & LBF_NEED_BREAK) {
+                       env.flags &= ~LBF_NEED_BREAK;
+                       goto more_balance;
+               }
+
                /*
                 * Revisit (affine) tasks on src_cpu that couldn't be moved to
                 * us and move them to an alternate dst_cpu in our sched_group
--
1.7.9.5

2013-03-26 06:01:36

by Joonsoo Kim

Subject: [PATCH v2 2/6] sched: explicitly cpu_idle_type checking in rebalance_domains()

After commit 88b8dac0, dst-cpu can be changed in load_balance(), so we
cannot know the cpu_idle_type of this cpu when load_balance() returns
a positive value. So, add an explicit cpu_idle_type check.

Cc: Srivatsa Vaddagiri <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f084069..9d693d0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5506,10 +5506,10 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
                if (time_after_eq(jiffies, sd->last_balance + interval)) {
                        if (load_balance(cpu, rq, sd, idle, &balance)) {
                                /*
-                                * We've pulled tasks over so either we're no
-                                * longer idle.
+                                * We've pulled tasks over so either we may
+                                * be no longer idle.
                                 */
-                               idle = CPU_NOT_IDLE;
+                               idle = idle_cpu(cpu) ? CPU_IDLE : CPU_NOT_IDLE;
                        }
                        sd->last_balance = jiffies;
                }
--
1.7.9.5

2013-03-26 06:01:34

by Joonsoo Kim

Subject: [PATCH v2 3/6] sched: don't consider other cpus in our group in case of NEWLY_IDLE

Commit 88b8dac0 makes load_balance() consider other cpus in its group,
regardless of idle type. But when we do NEWLY_IDLE balancing we should
not do so, because the motivation of NEWLY_IDLE balancing is to turn
this cpu into a non-idle state if needed. That is not the case for the
other cpus. So, change the code not to consider other cpus for
NEWLY_IDLE balancing.

With this patch, the 'if (pulled_task) this_rq->idle_stamp = 0'
assignment in idle_balance() becomes correct, because NEWLY_IDLE
balancing no longer considers other cpus, so a pulled task must have
landed on this cpu. Assigning to 'this_rq->idle_stamp' is now valid.
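
A toy model of why the reset was previously unsafe (user-space sketch;
cpu numbers and names are invented):

#include <stdio.h>

/* Toy NEWLY_IDLE balance: returns the number of tasks moved.  Pre-patch
 * the destination could be retargeted to any cpu in the group, not
 * necessarily the newly idle one. */
static int toy_balance(int this_cpu, int *dst_cpu, int group_dst_allowed)
{
        *dst_cpu = group_dst_allowed ? 1 : this_cpu;
        return 1;                       /* one task moved somewhere */
}

int main(void)
{
        int dst, pulled;

        pulled = toy_balance(0, &dst, 1);       /* pre-patch behaviour */
        if (pulled && dst != 0)
                printf("task went to cpu%d; clearing cpu0's idle_stamp is bogus\n", dst);

        pulled = toy_balance(0, &dst, 0);       /* post-patch behaviour */
        if (pulled && dst == 0)
                printf("task landed on cpu0; idle_stamp reset is valid\n");
        return 0;
}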

Cc: Srivatsa Vaddagiri <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9d693d0..3f8c4f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5007,8 +5007,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
                .cpus           = cpus,
        };

+       /* For NEWLY_IDLE load_balancing, we don't need to consider
+        * other cpus in our group */
+       if (idle == CPU_NEWLY_IDLE) {
+               env.dst_grpmask = NULL;
+               /* we don't care max_lb_iterations in this case,
+                * in following patch, this will be removed */
+               max_lb_iterations = 0;
+       } else {
+               max_lb_iterations = cpumask_weight(env.dst_grpmask);
+       }
        cpumask_copy(cpus, cpu_active_mask);
-       max_lb_iterations = cpumask_weight(env.dst_grpmask);

        schedstat_inc(sd, lb_count[idle]);

--
1.7.9.5

2013-03-26 06:01:32

by Joonsoo Kim

Subject: [PATCH v2 4/6] sched: move up affinity check to mitigate useless redoing overhead

Currently, LBF_ALL_PINNED is cleared only after the affinity check has
passed. So if a task migration is skipped because of a small load value
or a small imbalance value in move_tasks(), we never clear
LBF_ALL_PINNED and end up triggering a needless 'redo' in
load_balance().

The imbalance value is often so small that no task can be moved to the
other cpus, and of course this situation may persist after we change
the target cpu. So this patch moves the affinity check up and clears
LBF_ALL_PINNED before evaluating the load value, in order to mitigate
the useless redo overhead.

In addition, reorder some comments to match the new check order.
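
The corrected ordering, again as a user-space model (the flag name
matches the kernel's; the rest is invented):

#include <stdio.h>

#define LBF_ALL_PINNED 0x01

struct task { int load; int pinned; };

/* Post-patch: the affinity check, which clears ALL_PINNED, runs before
 * the load check, so a small imbalance can no longer leave the flag
 * set by accident. */
static int scan(struct task *t, int n, int imbalance, int *flags)
{
        int moved = 0;
        int i;

        *flags = LBF_ALL_PINNED;
        for (i = 0; i < n; i++) {
                if (t[i].pinned)
                        continue;               /* can_migrate_task() fails */
                *flags &= ~LBF_ALL_PINNED;      /* not "all pinned" */
                if (t[i].load / 2 > imbalance)
                        continue;               /* load check now comes after */
                moved++;
        }
        return moved;
}

int main(void)
{
        struct task t[2] = { { 100, 0 }, { 80, 0 } };
        int flags;

        scan(t, 2, 10, &flags);
        printf("ALL_PINNED set: %d -> no needless redo\n",
               !!(flags & LBF_ALL_PINNED));
        return 0;
}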

Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f8c4f2..d3c6011 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3874,10 +3874,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
        int tsk_cache_hot = 0;
        /*
         * We do not migrate tasks that are:
-        * 1) running (obviously), or
+        * 1) throttled_lb_pair, or
         * 2) cannot be migrated to this CPU due to cpus_allowed, or
-        * 3) are cache-hot on their current CPU.
+        * 3) running (obviously), or
+        * 4) are cache-hot on their current CPU.
         */
+       if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+               return 0;
+
        if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
                int new_dst_cpu;

@@ -3948,9 +3952,6 @@ static int move_one_task(struct lb_env *env)
        struct task_struct *p, *n;

        list_for_each_entry_safe(p, n, &env->src_rq->cfs_tasks, se.group_node) {
-               if (throttled_lb_pair(task_group(p), env->src_rq->cpu, env->dst_cpu))
-                       continue;
-
                if (!can_migrate_task(p, env))
                        continue;

@@ -4002,7 +4003,7 @@ static int move_tasks(struct lb_env *env)
                        break;
                }

-               if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+               if (!can_migrate_task(p, env))
                        goto next;

                load = task_h_load(p);
@@ -4013,9 +4014,6 @@ static int move_tasks(struct lb_env *env)
                if ((load / 2) > env->imbalance)
                        goto next;

-               if (!can_migrate_task(p, env))
-                       goto next;
-
                move_task(p, env);
                pulled++;
                env->imbalance -= load;
--
1.7.9.5

2013-03-26 06:01:28

by Joonsoo Kim

Subject: [PATCH v2 5/6] sched: rename load_balance_tmpmask to load_balance_mask

The old name doesn't convey any specific meaning.
So rename it to reflect its purpose.

Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624..07b4178 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6865,7 +6865,7 @@ struct task_group root_task_group;
LIST_HEAD(task_groups);
#endif

-DECLARE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
+DECLARE_PER_CPU(cpumask_var_t, load_balance_mask);

void __init sched_init(void)
{
@@ -6902,7 +6902,7 @@ void __init sched_init(void)
#endif /* CONFIG_RT_GROUP_SCHED */
#ifdef CONFIG_CPUMASK_OFFSTACK
        for_each_possible_cpu(i) {
-               per_cpu(load_balance_tmpmask, i) = (void *)ptr;
+               per_cpu(load_balance_mask, i) = (void *)ptr;
                ptr += cpumask_size();
        }
#endif /* CONFIG_CPUMASK_OFFSTACK */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d3c6011..e3f09f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4958,7 +4958,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
#define MAX_PINNED_INTERVAL 512

/* Working cpumask for load_balance and load_balance_newidle. */
-DEFINE_PER_CPU(cpumask_var_t, load_balance_tmpmask);
+DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);

static int need_active_balance(struct lb_env *env)
{
@@ -4993,7 +4993,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
        struct sched_group *group;
        struct rq *busiest;
        unsigned long flags;
-       struct cpumask *cpus = __get_cpu_var(load_balance_tmpmask);
+       struct cpumask *cpus = __get_cpu_var(load_balance_mask);

        struct lb_env env = {
                .sd             = sd,
--
1.7.9.5

2013-03-26 06:01:25

by Joonsoo Kim

Subject: [PATCH v2 6/6] sched: prevent to re-select dst-cpu in load_balance()

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But there is no code to prevent re-selecting the same dst-cpu, so the
same dst-cpu can be selected over and over.

This patch adds functionality to load_balance() to exclude a cpu once
it has been selected as dst-cpu. Re-selection is prevented via env's
cpus, so env's cpus now holds the candidates not only for src_cpu but
also for dst_cpu.
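
A toy illustration of the mask's double duty (user-space sketch with a
plain bitmask; everything here is invented except the idea):

#include <stdio.h>

int main(void)
{
        unsigned int cpus = 0x0f;       /* balance candidates: cpus 0-3 */
        unsigned int grpmask = 0x03;    /* our group: cpus 0 and 1 */
        int dst_cpu = 0;
        int cpu;

        /* dst_cpu 0 could not take the pinned tasks; drop it from the
         * candidate mask so it can never be picked again, then
         * retarget within the group */
        cpus &= ~(1u << dst_cpu);
        for (cpu = 0; cpu < 4; cpu++) {
                if (grpmask & cpus & (1u << cpu)) {
                        dst_cpu = cpu;
                        break;
                }
        }
        printf("new dst_cpu: %d; cpu0 is excluded from now on\n", dst_cpu);
        return 0;
}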

Cc: Srivatsa Vaddagiri <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3f09f4..6f238d2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3883,7 +3883,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
                return 0;

        if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
-               int new_dst_cpu;
+               int cpu;

                schedstat_inc(p, se.statistics.nr_failed_migrations_affine);

@@ -3898,12 +3898,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
                if (!env->dst_grpmask || (env->flags & LBF_SOME_PINNED))
                        return 0;

-               new_dst_cpu = cpumask_first_and(env->dst_grpmask,
-                                               tsk_cpus_allowed(p));
-               if (new_dst_cpu < nr_cpu_ids) {
-                       env->flags |= LBF_SOME_PINNED;
-                       env->new_dst_cpu = new_dst_cpu;
-               }
+               /* Prevent to re-select dst_cpu via env's cpus */
+               for_each_cpu_and(cpu, env->dst_grpmask, env->cpus)
+                       if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p))) {
+                               env->flags |= LBF_SOME_PINNED;
+                               env->new_dst_cpu = cpu;
+                               break;
+                       }
+
                return 0;
        }

@@ -4989,7 +4991,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
                                int *balance)
{
        int ld_moved, cur_ld_moved, active_balance = 0;
-       int lb_iterations, max_lb_iterations;
        struct sched_group *group;
        struct rq *busiest;
        unsigned long flags;
@@ -5009,11 +5010,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
         * other cpus in our group */
        if (idle == CPU_NEWLY_IDLE) {
                env.dst_grpmask = NULL;
-               /* we don't care max_lb_iterations in this case,
-                * in following patch, this will be removed */
-               max_lb_iterations = 0;
-       } else {
-               max_lb_iterations = cpumask_weight(env.dst_grpmask);
        }
        cpumask_copy(cpus, cpu_active_mask);

@@ -5041,7 +5037,6 @@ redo:
        schedstat_add(sd, lb_imbalance[idle], env.imbalance);

        ld_moved = 0;
-       lb_iterations = 1;
        if (busiest->nr_running > 1) {
                /*
                 * Attempt to move tasks. If find_busiest_group has found
@@ -5098,14 +5093,17 @@ more_balance:
                 * moreover subsequent load balance cycles should correct the
                 * excess load moved.
                 */
-               if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0 &&
-                               lb_iterations++ < max_lb_iterations) {
+               if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0) {

                        env.dst_rq       = cpu_rq(env.new_dst_cpu);
                        env.dst_cpu      = env.new_dst_cpu;
                        env.flags       &= ~LBF_SOME_PINNED;
                        env.loop         = 0;
                        env.loop_break   = sched_nr_migrate_break;
+
+                       /* Prevent to re-select dst_cpu via env's cpus */
+                       cpumask_clear_cpu(env.dst_cpu, env.cpus);
+
                        /*
                         * Go back to "more_balance" rather than "redo" since we
                         * need to continue with same src_cpu.
--
1.7.9.5

2013-04-22 08:00:13

by Joonsoo Kim

Subject: Re: [PATCH v2 0/6] correct load_balance()

On Tue, Mar 26, 2013 at 03:01:34PM +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But some pieces are still missing for this feature to work properly.
> This patchset corrects them and makes load_balance() more robust.
>
> The other patches are related to LBF_ALL_PINNED, the fallback path
> taken when no task can be moved because of cpu affinity. But currently,
> if a task migration is skipped because the imbalance is too small for
> the task's load, we leave the LBF_ALL_PINNED flag set and a 'redo' is
> triggered. This is not our intention, so correct it.
>
> These are based on v3.9-rc4.
>
> Changelog
> v1->v2: Changes based on Peter's suggestions
> [4/6]: don't move the load-evaluation code into can_migrate_task()
> [5/6]: rename load_balance_tmpmask to load_balance_mask
> [6/6]: don't use an extra cpumask; reuse env's cpus to prevent re-selecting dst-cpu
>
> Joonsoo Kim (6):
> sched: change position of resched_cpu() in load_balance()
> sched: explicitly cpu_idle_type checking in rebalance_domains()
> sched: don't consider other cpus in our group in case of NEWLY_IDLE
> sched: move up affinity check to mitigate useless redoing overhead
> sched: rename load_balance_tmpmask to load_balance_mask
> sched: prevent to re-select dst-cpu in load_balance()
>
> kernel/sched/core.c | 4 +--
> kernel/sched/fair.c | 67 +++++++++++++++++++++++++++------------------------
> 2 files changed, 38 insertions(+), 33 deletions(-)

Hello, Ingo and Peter.

Just a ping for this patchset.
Please let me know what I need to do to get it merged.

Thanks.

>
> --
> 1.7.9.5
>

2013-04-22 11:59:47

by Peter Zijlstra

Subject: Re: [PATCH v2 2/6] sched: explicitly cpu_idle_type checking in rebalance_domains()

On Tue, 2013-03-26 at 15:01 +0900, Joonsoo Kim wrote:
> @@ -5506,10 +5506,10 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
>                 if (time_after_eq(jiffies, sd->last_balance + interval)) {
>                         if (load_balance(cpu, rq, sd, idle, &balance)) {
>                                 /*
> -                                * We've pulled tasks over so either we're no
> -                                * longer idle.
> +                                * We've pulled tasks over so either we may
> +                                * be no longer idle.
>                                  */

That comment didn't make sense and it does even less now.

How about we make that:

/*
 * The LBF_SOME_PINNED logic could have changed
 * env->dst_cpu, so we can't know our idle state
 * even if we migrated tasks; update it.
 */

> -                               idle = CPU_NOT_IDLE;
> +                               idle = idle_cpu(cpu) ? CPU_IDLE : CPU_NOT_IDLE;
>                         }
>                         sd->last_balance = jiffies;
>                 }

2013-04-22 11:59:52

by Peter Zijlstra

Subject: Re: [PATCH v2 3/6] sched: don't consider other cpus in our group in case of NEWLY_IDLE

On Tue, 2013-03-26 at 15:01 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group,
> regardless of idle type. But when we do NEWLY_IDLE balancing we should
> not do so, because the motivation of NEWLY_IDLE balancing is to turn
> this cpu into a non-idle state if needed. That is not the case for the
> other cpus. So, change the code not to consider other cpus for
> NEWLY_IDLE balancing.
>
> With this patch, the 'if (pulled_task) this_rq->idle_stamp = 0'
> assignment in idle_balance() becomes correct, because NEWLY_IDLE
> balancing no longer considers other cpus, so a pulled task must have
> landed on this cpu. Assigning to 'this_rq->idle_stamp' is now valid.
>
> Cc: Srivatsa Vaddagiri <[email protected]>
> Acked-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Joonsoo Kim <[email protected]>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9d693d0..3f8c4f2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5007,8 +5007,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>                 .cpus           = cpus,
>         };
>
> +       /* For NEWLY_IDLE load_balancing, we don't need to consider
> +        * other cpus in our group */
> +       if (idle == CPU_NEWLY_IDLE) {
> +               env.dst_grpmask = NULL;
> +               /* we don't care max_lb_iterations in this case,
> +                * in following patch, this will be removed */

This comment violates coding style; comments look like:

/* this is a single-line comment */

or

/*
 * this is a multi-
 * line comment.
 */

Luckily you're deleting these offensive lines again in patch 6 :-)

> +               max_lb_iterations = 0;
> +       } else {
> +               max_lb_iterations = cpumask_weight(env.dst_grpmask);
> +       }
>         cpumask_copy(cpus, cpu_active_mask);
> -       max_lb_iterations = cpumask_weight(env.dst_grpmask);
>
>         schedstat_inc(sd, lb_count[idle]);
>

2013-04-22 11:59:58

by Peter Zijlstra

Subject: Re: [PATCH v2 6/6] sched: prevent to re-select dst-cpu in load_balance()

On Tue, 2013-03-26 at 15:01 +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But there is no code to prevent re-selecting the same dst-cpu, so the
> same dst-cpu can be selected over and over.
>
> This patch adds functionality to load_balance() to exclude a cpu once
> it has been selected as dst-cpu. Re-selection is prevented via env's
> cpus, so env's cpus now holds the candidates not only for src_cpu but
> also for dst_cpu.

Changelog forgets to mention that this removes the need for
lb_iterations.

> Cc: Srivatsa Vaddagiri <[email protected]>
> Signed-off-by: Joonsoo Kim <[email protected]>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e3f09f4..6f238d2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c

> @@ -3898,12 +3898,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>                 if (!env->dst_grpmask || (env->flags & LBF_SOME_PINNED))
>                         return 0;
>
> -               new_dst_cpu = cpumask_first_and(env->dst_grpmask,
> -                                               tsk_cpus_allowed(p));
> -               if (new_dst_cpu < nr_cpu_ids) {
> -                       env->flags |= LBF_SOME_PINNED;
> -                       env->new_dst_cpu = new_dst_cpu;
> -               }
> +               /* Prevent to re-select dst_cpu via env's cpus */
> +               for_each_cpu_and(cpu, env->dst_grpmask, env->cpus)
> +                       if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p))) {
> +                               env->flags |= LBF_SOME_PINNED;
> +                               env->new_dst_cpu = cpu;
> +                               break;
> +                       }

/me hands you a fresh bucket of curlies.. always use them on multi-line
single statements.

>                 return 0;
>         }
>
> @@ -4989,7 +4991,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>                                 int *balance)
> {
>         int ld_moved, cur_ld_moved, active_balance = 0;
> -       int lb_iterations, max_lb_iterations;
>         struct sched_group *group;
>         struct rq *busiest;
>         unsigned long flags;
> @@ -5009,11 +5010,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>          * other cpus in our group */
>         if (idle == CPU_NEWLY_IDLE) {
>                 env.dst_grpmask = NULL;
> -               /* we don't care max_lb_iterations in this case,
> -                * in following patch, this will be removed */
> -               max_lb_iterations = 0;
> -       } else {
> -               max_lb_iterations = cpumask_weight(env.dst_grpmask);
>         }

And here you leave curlies around a single line single statement (the
only case we do leave them off) -- maybe collect them to replenish your
supply?

>         cpumask_copy(cpus, cpu_active_mask);
>

2013-04-22 12:01:31

by Peter Zijlstra

Subject: Re: [PATCH v2 0/6] correct load_balance()

On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:

> Hello, Ingo and Peter.
>
> Just ping for this patchset.

The patches look fine -- except a few cosmetic changes. I'm fine with
Ingo taking them now and merging a 7/6 patch that clears up the
cosmetic issues or you can repost with them taken care of.

I'll leave that up to you and Ingo.

Otherwise:

Acked-by: Peter Zijlstra <[email protected]>

Thanks!

2013-04-22 20:07:18

by Davidlohr Bueso

Subject: Re: [PATCH v2 0/6] correct load_balance()

On Mon, 2013-04-22 at 14:01 +0200, Peter Zijlstra wrote:
> On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:
>
> > Hello, Ingo and Peter.
> >
> > Just ping for this patchset.
>
> The patches look fine -- except a few cosmetic changes. I'm fine with
> Ingo taking them now and merging a 7/6 patch that clears up the
> cosmetic issues or you can repost with them taken care of.
>
> I'll leave that up to you and Ingo.
>
> Otherwise:
>
> Acked-by: Peter Zijlstra <[email protected]>

On behalf of Jason who has been testing this patchset:

Tested-by: Jason Low <[email protected]>

Thanks,
Jason & Davidlohr

2013-04-23 08:31:30

by Joonsoo Kim

Subject: Re: [PATCH v2 0/6] correct load_balance()

On Mon, Apr 22, 2013 at 01:07:07PM -0700, Davidlohr Bueso wrote:
> On Mon, 2013-04-22 at 14:01 +0200, Peter Zijlstra wrote:
> > On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:
> >
> > > Hello, Ingo and Peter.
> > >
> > > Just ping for this patchset.
> >
> > The patches look fine -- except a few cosmetic changes. I'm fine with
> > Ingo taking them now and merging a 7/6 patch that clears up the
> > cosmetic issues or you can repost with them taken care of.
> >
> > I'll leave that up to you and Ingo.
> >
> > Otherwise:
> >
> > Acked-by: Peter Zijlstra <[email protected]>
>
> On behalf of Jason who has been testing this patchset:
>
> Tested-by: Jason Low <[email protected]>

Hello, Jason and Davidlohr.
Thanks for testing!

>
> Thanks,
> Jason & Davidlohr
>

2013-04-23 08:50:28

by Joonsoo Kim

Subject: Re: [PATCH v2 0/6] correct load_balance()

On Mon, Apr 22, 2013 at 02:01:18PM +0200, Peter Zijlstra wrote:
> On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:
>
> > Hello, Ingo and Peter.
> >
> > Just ping for this patchset.
>
> The patches look fine -- except a few cosmetic changes. I'm fine with
> Ingo taking them now and merging a 7/6 patch that clears up the
> cosmetic issues or you can repost with them taken care of.
>
> I'll leave that up to you and Ingo.

I sent v3, which reflects all of Peter's comments.
Here is the link for v3:
https://lkml.org/lkml/2013/4/23/88

>
> Otherwise:
>
> Acked-by: Peter Zijlstra <[email protected]>

Thanks!

>
> Thanks!
>