2014-10-27 10:18:32

by Kirill Tkhai

Subject: [PATCH] sched: Fix race between task_group and sched_task_group


The race may happen when somebody changes the task_group of a forking task.
The child's cgroup is the same as the parent's after dup_task_struct() (it
is just a memory copy), and its cfs_rq and rt_rq are the same as the
parent's.

But if the parent changes its task_group before cgroup_post_fork() is
called, this change is not reflected in the child: the child's cfs_rq and
rt_rq remain the same, while the child's task_group changes in
cgroup_post_fork().

To fix this we introduce a fork() method, which calls sched_move_task()
directly. This function sets sched_task_group appropriately (its logic also
has no problem with freshly created tasks, so no special handling is
needed; we can just use it).

Possibly, this resolves Burke Libbey's problem: https://lkml.org/lkml/2014/10/24/456

Signed-off-by: Kirill Tkhai <[email protected]>
---
kernel/sched/core.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4499950..dde8adb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7833,6 +7833,11 @@ static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
 	sched_offline_group(tg);
 }
 
+static void cpu_cgroup_fork(struct task_struct *task)
+{
+	sched_move_task(task);
+}
+
 static int cpu_cgroup_can_attach(struct cgroup_subsys_state *css,
 				 struct cgroup_taskset *tset)
 {
@@ -8205,6 +8210,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_free	= cpu_cgroup_css_free,
 	.css_online	= cpu_cgroup_css_online,
 	.css_offline	= cpu_cgroup_css_offline,
+	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.exit		= cpu_cgroup_exit,
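
As a side note, here is a minimal user-space analogue of the stale copy
(purely hypothetical illustration; struct task and its group field are
made-up stand-ins, not kernel code):

#include <stdio.h>
#include <string.h>

struct task { int group; };	/* stand-in for task_struct */

int main(void)
{
	struct task parent = { .group = 1 };
	struct task child;

	/* dup_task_struct() analogue: the child starts as a memory copy. */
	memcpy(&child, &parent, sizeof(child));

	/* The parent is moved to another group before cgroup_post_fork(). */
	parent.group = 2;

	/* Without a post-fork fixup, the copy still holds the old group. */
	printf("parent=%d child=%d\n", parent.group, child.group);
	return 0;
}

This prints "parent=2 child=1", which is exactly the stale state the new
fork() method fixes up.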



2014-10-27 12:21:56

by Peter Zijlstra

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On Mon, Oct 27, 2014 at 02:18:25PM +0400, Kirill Tkhai wrote:
>
> The race may happen when somebody changes the task_group of a forking task.
> The child's cgroup is the same as the parent's after dup_task_struct() (it
> is just a memory copy), and its cfs_rq and rt_rq are the same as the
> parent's.
>
> But if the parent changes its task_group before cgroup_post_fork() is
> called, this change is not reflected in the child: the child's cfs_rq and
> rt_rq remain the same, while the child's task_group changes in
> cgroup_post_fork().
>
> To fix this we introduce a fork() method, which calls sched_move_task()
> directly. This function sets sched_task_group appropriately (its logic also
> has no problem with freshly created tasks, so no special handling is
> needed; we can just use it).

Right, I read some of that cgroup.c stuff and this is indeed possible,
yucky. Applied, thanks!

2014-10-27 22:08:14

by Oleg Nesterov

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On 10/27, Kirill Tkhai wrote:
>
> +static void cpu_cgroup_fork(struct task_struct *task)
> +{
> +	sched_move_task(task);
> +}
> +
>  static int cpu_cgroup_can_attach(struct cgroup_subsys_state *css,
>  				 struct cgroup_taskset *tset)
>  {
> @@ -8205,6 +8210,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
>  	.css_free	= cpu_cgroup_css_free,
>  	.css_online	= cpu_cgroup_css_online,
>  	.css_offline	= cpu_cgroup_css_offline,
> +	.fork		= cpu_cgroup_fork,

Agreed, but it seems that sched_move_task() -> task_css_check() can
complain if CONFIG_PROVE_RCU...

cpu_cgroup_exit() too calls sched_move_task() without any lock, but
there is the PF_EXITING check and init_css_set can't go away.

perhaps sched_move_task() should just take rcu_read_lock() and use
task_css()? This lockdep_is_held(siglock) looks ugly, and iiuc we
need it to shut up the warning if autogroup_move_group() is the caller.
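
Concretely, that would be something like the following untested sketch
(never applied as-is; it only illustrates the suggestion, reusing the
names already in sched_move_task()):

	rcu_read_lock();
	tg = container_of(task_css(tsk, cpu_cgrp_id),
			  struct task_group, css);
	tg = autogroup_task_group(tsk, tg);
	tsk->sched_task_group = tg;
	rcu_read_unlock();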

Oleg.

2014-10-28 05:24:42

by Kirill Tkhai

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On Tue, 28/10/2014 at 00:04 +0100, Oleg Nesterov wrote:
> On 10/27, Kirill Tkhai wrote:
> >
> > +static void cpu_cgroup_fork(struct task_struct *task)
> > +{
> > +	sched_move_task(task);
> > +}
> > +
> >  static int cpu_cgroup_can_attach(struct cgroup_subsys_state *css,
> >  				 struct cgroup_taskset *tset)
> >  {
> > @@ -8205,6 +8210,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
> >  	.css_free	= cpu_cgroup_css_free,
> >  	.css_online	= cpu_cgroup_css_online,
> >  	.css_offline	= cpu_cgroup_css_offline,
> > +	.fork		= cpu_cgroup_fork,
>
> Agreed, but it seems that sched_move_task() -> task_css_check() can
> complain if CONFIG_PROVE_RCU...

Thanks, Oleg.

>
> cpu_cgroup_exit() too calls sched_move_task() without any lock, but
> there is the PF_EXITING check and init_css_set can't go away.
>
> perhaps sched_move_task() should just take rcu_read_lock() and use
> task_css()? This lockdep_is_held(siglock) looks ugly, and iiuc we
> need it to shut up the warning if autogroup_move_group() is the caller.

Shouldn't we do that in a separate patch? How about this?

[PATCH] sched: Remove lockdep check in sched_move_task()

sched_move_task() is the only interface to change sched_task_group:
cpu_cgrp_subsys methods and autogroup_move_group() use it.

Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
is ordered with other users of sched_move_task(). This means we do
not need RCU here: if we've dereferenced a tg here, the .attach method
hasn't been called for it yet.

Thus, we should pass "true" to task_css_check() to silence lockdep
warnings.

Signed-off-by: Kirill Tkhai <[email protected]>
Reported-by: Oleg Nesterov <[email protected]>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dde8adb..d77e6ee 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7403,8 +7403,12 @@ void sched_move_task(struct task_struct *tsk)
 	if (unlikely(running))
 		put_prev_task(rq, tsk);
 
-	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
-				lockdep_is_held(&tsk->sighand->siglock)),
+	/*
+	 * All callers are synchronized by task_rq_lock(); we do not use RCU
+	 * which is pointless here. Thus, we pass "true" to task_css_check()
+	 * to prevent lockdep warnings.
+	 */
+	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
 	tsk->sched_task_group = tg;
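
For reference, the third argument of task_css_check() is an extra condition
that is OR-ed into the underlying rcu_dereference_check(), so passing "true"
simply disables the PROVE_RCU splat for this lookup. Roughly (a simplified
sketch from memory; the exact lockdep conditions vary by kernel version):

#define task_css_set_check(task, cond)					\
	rcu_dereference_check((task)->cgroups,				\
			      lockdep_is_held(&cgroup_mutex) || (cond))

#define task_css_check(task, subsys_id, cond)				\
	task_css_set_check((task), (cond))->subsys[(subsys_id)]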

Subject: [tip:sched/core] sched: Fix race between task_group and sched_task_group

Commit-ID: eeb61e53ea19be0c4015b00b2e8b3b2185436f2b
Gitweb: http://git.kernel.org/tip/eeb61e53ea19be0c4015b00b2e8b3b2185436f2b
Author: Kirill Tkhai <[email protected]>
AuthorDate: Mon, 27 Oct 2014 14:18:25 +0400
Committer: Ingo Molnar <[email protected]>
CommitDate: Tue, 28 Oct 2014 10:45:59 +0100

sched: Fix race between task_group and sched_task_group

The race may happen when somebody changes the task_group of a forking task.
The child's cgroup is the same as the parent's after dup_task_struct() (it
is just a memory copy), and its cfs_rq and rt_rq are the same as the
parent's.

But if the parent changes its task_group before cgroup_post_fork() is
called, this change is not reflected in the child: the child's cfs_rq and
rt_rq remain the same, while the child's task_group changes in
cgroup_post_fork().

To fix this we introduce a fork() method, which calls sched_move_task()
directly. This function sets sched_task_group appropriately (its logic also
has no problem with freshly created tasks, so no special handling is
needed; we can just use it).

Possibly, this resolves Burke Libbey's problem: https://lkml.org/lkml/2014/10/24/456

Signed-off-by: Kirill Tkhai <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/1414405105.19914.169.camel@tkhai
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/core.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4499950..dde8adb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7833,6 +7833,11 @@ static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
 	sched_offline_group(tg);
 }
 
+static void cpu_cgroup_fork(struct task_struct *task)
+{
+	sched_move_task(task);
+}
+
 static int cpu_cgroup_can_attach(struct cgroup_subsys_state *css,
 				 struct cgroup_taskset *tset)
 {
@@ -8205,6 +8210,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
 	.css_free	= cpu_cgroup_css_free,
 	.css_online	= cpu_cgroup_css_online,
 	.css_offline	= cpu_cgroup_css_offline,
+	.fork		= cpu_cgroup_fork,
 	.can_attach	= cpu_cgroup_can_attach,
 	.attach		= cpu_cgroup_attach,
 	.exit		= cpu_cgroup_exit,

2014-10-28 21:56:37

by Oleg Nesterov

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On 10/28, Kirill Tkhai wrote:
>
> Shouldn't we do that in a separate patch? How about this?

Up to Peter, but I think a separate patch is fine.

> [PATCH] sched: Remove lockdep check in sched_move_task()
>
> sched_move_task() is the only interface to change sched_task_group:
> cpu_cgrp_subsys methods and autogroup_move_group() use it.

Yes, but...

> Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
> is ordered with other users of sched_move_task(). This means we do
> not need RCU here: if we've dereferenced a tg here, the .attach method
> hasn't been called for it yet.
>
> Thus, we should pass "true" to task_css_check() to silence lockdep
> warnings.

In theory, I am not sure.

However, I never really understood this code and today I forgot everything,
please correct me.

> @@ -7403,8 +7403,12 @@ void sched_move_task(struct task_struct *tsk)
>  	if (unlikely(running))
>  		put_prev_task(rq, tsk);
>  
> -	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
> -				lockdep_is_held(&tsk->sighand->siglock)),
> +	/*
> +	 * All callers are synchronized by task_rq_lock(); we do not use RCU
> +	 * which is pointless here. Thus, we pass "true" to task_css_check()
> +	 * to prevent lockdep warnings.
> +	 */
> +	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
>  			  struct task_group, css);

Why can't this race with cgroup_task_migrate() if it is called by
cgroup_post_fork()?

And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
we should not rely on implementation details.

task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
race with migrate then ->attach() was not called. But it seems that in
theory it is not safe to dereference tsk->cgroups.

Oleg.

2014-10-29 03:20:56

by Kirill Tkhai

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On 29.10.2014 01:52, Oleg Nesterov wrote:
> On 10/28, Kirill Tkhai wrote:
>>
>> Shouldn't we do that in a separate patch? How about this?
>
> Up to Peter, but I think a separate patch is fine.
>
>> [PATCH] sched: Remove lockdep check in sched_move_task()
>>
>> sched_move_task() is the only interface to change sched_task_group:
>> cpu_cgrp_subsys methods and autogroup_move_group() use it.
>
> Yes, but...
>
>> Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
>> is ordered with other users of sched_move_task(). This means we do
>> not need RCU here: if we've dereferenced a tg here, the .attach method
>> hasn't been called for it yet.
>>
>> Thus, we should pass "true" to task_css_check() to silence lockdep
>> warnings.
>
> In theory, I am not sure.
>
> However, I never really understood this code and today I forgot everything,
> please correct me.
>
>> @@ -7403,8 +7403,12 @@ void sched_move_task(struct task_struct *tsk)
>>  	if (unlikely(running))
>>  		put_prev_task(rq, tsk);
>>  
>> -	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
>> -				lockdep_is_held(&tsk->sighand->siglock)),
>> +	/*
>> +	 * All callers are synchronized by task_rq_lock(); we do not use RCU
>> +	 * which is pointless here. Thus, we pass "true" to task_css_check()
>> +	 * to prevent lockdep warnings.
>> +	 */
>> +	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
>>  			  struct task_group, css);
>
> Why can't this race with cgroup_task_migrate() if it is called by
> cgroup_post_fork()?

It can race, but what problem is there? The only consequence is that
cgroup_post_fork()'s or ss->attach()'s call of sched_move_task() will be
a NOOP:

(migrate path)                                  (fork path)

cgroup_migrate_add_src()

cgroup_task_migrate()                           cgroup_post_fork();
  rcu_assign_pointer(tsk->cgroups, new_cset);     sched_move_task();

css->ss->attach(css, &tset);
  sched_move_task();

cgroup_migrate_finish()

> And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> we should not rely on implementation details.

Do you mean cgroup_task_migrate()->put_css_set_locked()? It's not
possible there, because old_cset->refcount is larger than 1. We increment
it in cgroup_migrate_add_src(), and the real freeing happens in
cgroup_migrate_finish(). These two functions bracket cgroup_task_migrate();
they form a matched pair around it.
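
As a toy illustration of that bracketing (a hypothetical user-space
analogue; struct cset, get_cset() and put_cset() are made-up stand-ins
for the css_set refcounting, not kernel code):

#include <assert.h>
#include <stdlib.h>

struct cset { int refcount; };

static void get_cset(struct cset *c) { c->refcount++; }

static void put_cset(struct cset *c)
{
	if (--c->refcount == 0)
		free(c);
}

int main(void)
{
	struct cset *old_cset = malloc(sizeof(*old_cset));

	old_cset->refcount = 1;			/* the task's own reference */
	get_cset(old_cset);			/* cgroup_migrate_add_src() */
	put_cset(old_cset);			/* put in cgroup_task_migrate(): not freed */
	assert(old_cset->refcount == 1);	/* still alive, safe to dereference */
	put_cset(old_cset);			/* cgroup_migrate_finish(): freed here */
	return 0;
}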

> task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
> race with migrate then ->attach() was not called. But it seems that in
> theory it is not safe to dereference tsk->cgroups.

old_cset can't be freed in cgroup_task_migrate(), so we can safely
dereference it. If we've got old_cset in
cgroup_post_fork()->sched_move_task(), the right sched_task_group will
be installed by attach->sched_move_task().

Kirill

2014-10-29 09:16:48

by Peter Zijlstra

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On Wed, Oct 29, 2014 at 06:20:48AM +0300, Kirill Tkhai wrote:
> > And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> > in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> > we should not rely on implementation details.
>
> Do you mean cgroup_task_migrate()->put_css_set_locked()? It's not
> possible there, because old_cset->refcount is larger than 1. We increment
> it in cgroup_migrate_add_src(), and the real freeing happens in
> cgroup_migrate_finish(). These two functions bracket cgroup_task_migrate();
> they form a matched pair around it.
>
> > task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
> > race with migrate then ->attach() was not called. But it seems that in
> > theory it is not safe to dereference tsk->cgroups.
>
> old_cset can't be freed in cgroup_task_migrate(), so we can safely
> dereference it. If we've got old_cset in
> cgroup_post_fork()->sched_move_task(), the right sched_task_group will
> be installed by attach->sched_move_task().


Would it be fair to summarise your argument thusly:

"Because sched_move_task() is only called from cgroup_subsys methods
the cgroup infrastructure itself holds reference on the relevant
css sets, and therefore their existence is guaranteed."

?

The question then would be how do we guarantee/assert the assumption
that sched_move_task() is indeed only ever called from such a method.

2014-10-29 11:13:09

by Kirill Tkhai

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On Wed, 29/10/2014 at 10:16 +0100, Peter Zijlstra wrote:
> On Wed, Oct 29, 2014 at 06:20:48AM +0300, Kirill Tkhai wrote:
> > > And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> > > in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> > > we should not rely on implementation details.
> >
> > Do you mean cgroup_task_migrate()->put_css_set_locked()? It's not
> > possible there, because old_cset->refcount is larger than 1. We increment
> > it in cgroup_migrate_add_src(), and the real freeing happens in
> > cgroup_migrate_finish(). These two functions bracket cgroup_task_migrate();
> > they form a matched pair around it.
> >
> > > task_group = tsk->cgroups[cpu_cgrp_id] can't go away because yes, if we
> > > race with migrate then ->attach() was not called. But it seems that in
> > > theory it is not safe to dereference tsk->cgroups.
> >
> > old_cset can't be freed in cgroup_task_migrate(), so we can safely
> > dereference it. If we've got old_cset in
> > cgroup_post_fork()->sched_move_task(), the right sched_task_group will
> > be installed by attach->sched_move_task().
>
>
> Would it be fair to summarise your argument thusly:
>
> "Because sched_move_task() is only called from cgroup_subsys methods
> the cgroup infrastructure itself holds reference on the relevant
> css sets, and therefore their existence is guaranteed."
>
> ?
>
> The question then would be how do we guarantee/assert the assumption
> that sched_move_task() is indeed only ever called from such a method.

I mean the relationship between cgroup_task_migrate() and sched_move_task(),
wherever the latter is called from.

cgroup_task_migrate() is the only function which changes task_struct::cgroups.
This function is called only from cgroup_migrate().


        (A)                            (B)                       (C)
         |                              |                         |
         v                              v                         v

cgroup_migrate_add_src()
  get_css_set(src_cset)

cgroup_migrate()
  cgroup_task_migrate()
    old_cset = task_css_set(tsk)
    get_css_set(new_cset)
    rcu_assign_pointer(tsk->cgroups, new_cset)
    /* old_cset.refcount > 1 here */
    put_css_set_locked(old_cset)
    /* not freed here */

  css->ss->attach                 sched_move_task()
  cpu_cgroup_attach()               task_rq_lock()
    sched_move_task()               /* Possible use of old_cset */
      task_rq_lock()                task_rq_unlock()
      ...
      task_rq_unlock()
                                                          sched_move_task()
                                                            task_rq_lock()
                                                            /* new_cset is used here */
                                                            task_rq_unlock()

cgroup_migrate_finish()
  /* Possible freeing here */
  put_css_set_locked(src_cset)


Even if (B) uses old_cset and old sched_task_group,
(A) will overwrite it before it's freed.

In case of (A) and (C), (C) reads new_cset, because
task_rq_lock() provides all necessary memory barriers.


Of course, cgroup_migrate_add_src() is used in a more complex way than
I've drawn, but the idea is the same.

2014-10-29 18:25:26

by Oleg Nesterov

Subject: Re: [PATCH] sched: Fix race between task_group and sched_task_group

On 10/29, Kirill Tkhai wrote:
>
> On 29.10.2014 01:52, Oleg Nesterov wrote:
>
> > And cgroup_task_migrate() can free ->cgroups via call_rcu(). Of course,
> > in practice raw_spin_lock_irq() should also act as rcu_read_lock(), but
> > we should not rely on implementation details.
>
> Do you mean cgroup_task_migrate()->put_css_set_locked()? It's not
> possible there, because old_cset->refcount is larger than 1. We increment
> it in cgroup_migrate_add_src(), and the real freeing happens in
> cgroup_migrate_finish(). These two functions bracket cgroup_task_migrate();
> they form a matched pair around it.

Ah, I see.

> If we've got old_cset in
> cgroup_post_fork()->sched_move_task(), the right sched_task_group will
> be installed by attach->sched_move_task().

Yes, yes, this part is clear.

OK, I think the patch is fine then.

Thanks!

Oleg.

Subject: [tip:sched/urgent] sched: Remove lockdep check in sched_move_task()

Commit-ID: f7b8a47da17c9ee4998f2ca2018fcc424e953c0e
Gitweb: http://git.kernel.org/tip/f7b8a47da17c9ee4998f2ca2018fcc424e953c0e
Author: Kirill Tkhai <[email protected]>
AuthorDate: Tue, 28 Oct 2014 08:24:34 +0300
Committer: Ingo Molnar <[email protected]>
CommitDate: Tue, 4 Nov 2014 07:07:30 +0100

sched: Remove lockdep check in sched_move_task()

sched_move_task() is the only interface to change sched_task_group:
cpu_cgrp_subsys methods and autogroup_move_group() use it.

Everything is synchronized by task_rq_lock(), so cpu_cgroup_attach()
is ordered with other users of sched_move_task(). This means we do not
need RCU here: if we've dereferenced a tg here, the .attach method
hasn't been called for it yet.

Thus, we should pass "true" to task_css_check() to silence lockdep
warnings.

Fixes: eeb61e53ea19 ("sched: Fix race between task_group and sched_task_group")
Reported-by: Oleg Nesterov <[email protected]>
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Kirill Tkhai <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/1414473874.8574.2.camel@tkhai
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/core.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 240157c..6841fb4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7444,8 +7444,12 @@ void sched_move_task(struct task_struct *tsk)
 	if (unlikely(running))
 		put_prev_task(rq, tsk);
 
-	tg = container_of(task_css_check(tsk, cpu_cgrp_id,
-				lockdep_is_held(&tsk->sighand->siglock)),
+	/*
+	 * All callers are synchronized by task_rq_lock(); we do not use RCU
+	 * which is pointless here. Thus, we pass "true" to task_css_check()
+	 * to prevent lockdep warnings.
+	 */
+	tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
 			  struct task_group, css);
 	tg = autogroup_task_group(tsk, tg);
 	tsk->sched_task_group = tg;