2012-06-22 11:36:15

by Peter Zijlstra

Subject: [RFC][PATCH] sched: Fix race in task_group()

Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
call task_group() too many times in set_task_rq()"), he found the reason
to be that the multiple task_group() invocations in set_task_rq()
returned different values.

Looking at all that I found a lack of serialization and plain wrong
comments.

The below tries to fix it using an extra pointer which is updated under
the appropriate scheduler locks. It's not pretty, but I can't really see
another way given how all the cgroup stuff works.

Anybody else got a better idea?


Reported-by: Stefan Bader <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
include/linux/init_task.h | 12 +++++++++++-
include/linux/sched.h | 5 ++++-
kernel/sched/core.c | 9 ++++++++-
kernel/sched/sched.h | 23 ++++++++++-------------
4 files changed, 33 insertions(+), 16 deletions(-)

diff --git a/include/linux/init_task.h b/include/linux/init_task.h
index 4e4bc1a..53be033 100644
--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -123,8 +123,17 @@ extern struct group_info init_groups;

extern struct cred init_cred;

+extern struct task_group root_task_group;
+
+#ifdef CONFIG_CGROUP_SCHED
+# define INIT_CGROUP_SCHED(tsk) \
+ .sched_task_group = &root_task_group,
+#else
+# define INIT_CGROUP_SCHED(tsk)
+#endif
+
#ifdef CONFIG_PERF_EVENTS
-# define INIT_PERF_EVENTS(tsk) \
+# define INIT_PERF_EVENTS(tsk) \
.perf_event_mutex = \
__MUTEX_INITIALIZER(tsk.perf_event_mutex), \
.perf_event_list = LIST_HEAD_INIT(tsk.perf_event_list),
@@ -168,6 +177,7 @@ extern struct cred init_cred;
}, \
.tasks = LIST_HEAD_INIT(tsk.tasks), \
INIT_PUSHABLE_TASKS(tsk) \
+ INIT_CGROUP_SCHED(tsk) \
.ptraced = LIST_HEAD_INIT(tsk.ptraced), \
.ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
.real_parent = &tsk, \
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 32157b9..77437d4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1246,6 +1246,9 @@ struct task_struct {
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;
+#ifdef CONFIG_CGROUP_SCHED
+ struct task_struct *sched_task_group;
+#endif

#ifdef CONFIG_NUMA
unsigned long numa_contrib;
@@ -2741,7 +2744,7 @@ extern int sched_group_set_rt_period(struct task_group *tg,
extern long sched_group_rt_period(struct task_group *tg);
extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
#endif
-#endif
+#endif /* CONFIG_CGROUP_SCHED */

extern int task_can_switch_user(struct user_struct *up,
struct task_struct *tsk);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9bb7d28..9adb9a0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1096,7 +1096,7 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
* a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
*
* sched_move_task() holds both and thus holding either pins the cgroup,
- * see set_task_rq().
+ * see task_group().
*
* Furthermore, all task_rq users should acquire both locks, see
* task_rq_lock().
@@ -7581,6 +7581,8 @@ void sched_destroy_group(struct task_group *tg)
*/
void sched_move_task(struct task_struct *tsk)
{
+ struct cgroup_subsys_state *css;
+ struct task_group *tg;
int on_rq, running;
unsigned long flags;
struct rq *rq;
@@ -7595,6 +7597,11 @@ void sched_move_task(struct task_struct *tsk)
if (unlikely(running))
tsk->sched_class->put_prev_task(rq, tsk);

+ tg = container_of(task_subsys_state(p, cpu_cgroup_subsys_id),
+ struct task_group, css);
+ tg = autogroup_task_group(p, tg);
+ tsk->sched_task_group = tg;
+
#ifdef CONFIG_FAIR_GROUP_SCHED
if (tsk->sched_class->task_move_group)
tsk->sched_class->task_move_group(tsk, on_rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4134d37..c26378c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -554,22 +554,19 @@ extern int group_balance_cpu(struct sched_group *sg);
/*
* Return the group to which this tasks belongs.
*
- * We use task_subsys_state_check() and extend the RCU verification with
- * pi->lock and rq->lock because cpu_cgroup_attach() holds those locks for each
- * task it moves into the cgroup. Therefore by holding either of those locks,
- * we pin the task to the current cgroup.
+ * We cannot use task_subsys_state() and friends because the cgroup
+ * subsystem changes that value before the cgroup_subsys::attach() method
+ * is called, therefore we cannot pin it and might observe the wrong value.
+ *
+ * The same is true for autogroup's p->signal->autogroup->tg, the autogroup
+ * core changes this before calling sched_move_task().
+ *
+ * Instead we use a 'copy' which is updated from sched_move_task() while
+ * holding both task_struct::pi_lock and rq::lock.
*/
static inline struct task_group *task_group(struct task_struct *p)
{
- struct task_group *tg;
- struct cgroup_subsys_state *css;
-
- css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
- lockdep_is_held(&p->pi_lock) ||
- lockdep_is_held(&task_rq(p)->lock));
- tg = container_of(css, struct task_group, css);
-
- return autogroup_task_group(p, tg);
+ return p->sched_task_group;
}

/* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */


2012-06-22 15:07:04

by Stefan Bader

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On 22.06.2012 13:36, Peter Zijlstra wrote:
> Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
> call task_group() too many times in set_task_rq()"), he found the reason
> to be that the multiple task_group() invocations in set_task_rq()
> returned different values.
>
> Looking at all that I found a lack of serialization and plain wrong
> comments.
>
> The below tries to fix it using an extra pointer which is updated under
> the appropriate scheduler locks. It's not pretty, but I can't really see
> another way given how all the cgroup stuff works.
>
> Anybody else got a better idea?
>
>
> Reported-by: Stefan Bader <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> ---
> include/linux/init_task.h | 12 +++++++++++-
> include/linux/sched.h | 5 ++++-
> kernel/sched/core.c | 9 ++++++++-
> kernel/sched/sched.h | 23 ++++++++++-------------
> 4 files changed, 33 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/init_task.h b/include/linux/init_task.h
> index 4e4bc1a..53be033 100644
> --- a/include/linux/init_task.h
> +++ b/include/linux/init_task.h
> @@ -123,8 +123,17 @@ extern struct group_info init_groups;
>
> extern struct cred init_cred;
>
> +extern struct task_group root_task_group;
> +
> +#ifdef CONFIG_CGROUP_SCHED
> +# define INIT_CGROUP_SCHED(tsk) \
> + .sched_task_group = &root_task_group,
> +#else
> +# define INIT_CGROUP_SCHED(tsk)
> +#endif
> +
> #ifdef CONFIG_PERF_EVENTS
> -# define INIT_PERF_EVENTS(tsk) \
> +# define INIT_PERF_EVENTS(tsk) \
> .perf_event_mutex = \
> __MUTEX_INITIALIZER(tsk.perf_event_mutex), \
> .perf_event_list = LIST_HEAD_INIT(tsk.perf_event_list),
> @@ -168,6 +177,7 @@ extern struct cred init_cred;
> }, \
> .tasks = LIST_HEAD_INIT(tsk.tasks), \
> INIT_PUSHABLE_TASKS(tsk) \
> + INIT_CGROUP_SCHED(tsk) \
> .ptraced = LIST_HEAD_INIT(tsk.ptraced), \
> .ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
> .real_parent = &tsk, \
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 32157b9..77437d4 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1246,6 +1246,9 @@ struct task_struct {
> const struct sched_class *sched_class;
> struct sched_entity se;
> struct sched_rt_entity rt;
> +#ifdef CONFIG_CGROUP_SCHED
> + struct task_struct *sched_task_group;
> +#endif
>
> #ifdef CONFIG_NUMA
> unsigned long numa_contrib;
> @@ -2741,7 +2744,7 @@ extern int sched_group_set_rt_period(struct task_group *tg,
> extern long sched_group_rt_period(struct task_group *tg);
> extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
> #endif
> -#endif
> +#endif /* CONFIG_CGROUP_SCHED */
>
> extern int task_can_switch_user(struct user_struct *up,
> struct task_struct *tsk);
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 9bb7d28..9adb9a0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1096,7 +1096,7 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
> * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
> *
> * sched_move_task() holds both and thus holding either pins the cgroup,
> - * see set_task_rq().
> + * see task_group().
> *
> * Furthermore, all task_rq users should acquire both locks, see
> * task_rq_lock().
> @@ -7581,6 +7581,8 @@ void sched_destroy_group(struct task_group *tg)
> */
> void sched_move_task(struct task_struct *tsk)
> {
> + struct cgroup_subsys_state *css;
> + struct task_group *tg;
> int on_rq, running;
> unsigned long flags;
> struct rq *rq;
> @@ -7595,6 +7597,11 @@ void sched_move_task(struct task_struct *tsk)
> if (unlikely(running))
> tsk->sched_class->put_prev_task(rq, tsk);
>
> + tg = container_of(task_subsys_state(p, cpu_cgroup_subsys_id),
s/p/tsk/
> + struct task_group, css);
> + tg = autogroup_task_group(p, tg);
s/p/tsk/
> + tsk->sched_task_group = tg;
> +
> #ifdef CONFIG_FAIR_GROUP_SCHED
> if (tsk->sched_class->task_move_group)
> tsk->sched_class->task_move_group(tsk, on_rq);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 4134d37..c26378c 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -554,22 +554,19 @@ extern int group_balance_cpu(struct sched_group *sg);
> /*
> * Return the group to which this tasks belongs.
> *
> - * We use task_subsys_state_check() and extend the RCU verification with
> - * pi->lock and rq->lock because cpu_cgroup_attach() holds those locks for each
> - * task it moves into the cgroup. Therefore by holding either of those locks,
> - * we pin the task to the current cgroup.
> + * We cannot use task_subsys_state() and friends because the cgroup
> + * subsystem changes that value before the cgroup_subsys::attach() method
> + * is called, therefore we cannot pin it and might observe the wrong value.
> + *
> + * The same is true for autogroup's p->signal->autogroup->tg, the autogroup
> + * core changes this before calling sched_move_task().
> + *
> + * Instead we use a 'copy' which is updated from sched_move_task() while
> + * holding both task_struct::pi_lock and rq::lock.
> */
> static inline struct task_group *task_group(struct task_struct *p)
> {
> - struct task_group *tg;
> - struct cgroup_subsys_state *css;
> -
> - css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
> - lockdep_is_held(&p->pi_lock) ||
> - lockdep_is_held(&task_rq(p)->lock));
> - tg = container_of(css, struct task_group, css);
> -
> - return autogroup_task_group(p, tg);
> + return p->sched_task_group;
> }
>
> /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
>

Tried out a backported (to 3.2) version of the above patch, which mainly differs in
having to move the sched/sched.h changes back into sched.c, and got this warning on boot:

[ 2.648099] ===============================
[ 2.648205] [ INFO: suspicious RCU usage. ]
[ 2.648338] -------------------------------
[ 2.648465] /home/smb/precise-amd64/ubuntu-2.6/include/linux/cgroup.h:548
suspicious rcu_dereference_check() usage!
[ 2.648775]
[ 2.648777] other info that might help us debug this:
[ 2.648780]
[ 2.649010]
[ 2.649012] rcu_scheduler_active = 1, debug_locks = 0
[ 2.649205] 3 locks held by udevd/91:
[ 2.649296] #0: (&(&sighand->siglock)->rlock){......}, at:
[<ffffffff8107ff24>] __lock_task_sighand+0x94/0x1b0
[ 2.649824] #1: (&p->pi_lock){-.-.-.}, at: [<ffffffff8104ee90>]
task_rq_lock+0x40/0xb0
[ 2.650071] #2: (&rq->lock){-.-.-.}, at: [<ffffffff8104eeab>]
task_rq_lock+0x5b/0xb0
[ 2.650297]
[ 2.650299] stack backtrace:
[ 2.650439] Pid: 91, comm: udevd Not tainted 3.2.0-26-generic #41+lp999755v7
[ 2.650562] Call Trace:
[ 2.650562] [<ffffffff810a5507>] lockdep_rcu_suspicious+0xd7/0xe0
[ 2.650562] [<ffffffff81065ea5>] sched_move_task+0x165/0x230
[ 2.650562] [<ffffffff8107feb3>] ? __lock_task_sighand+0x23/0x1b0
[ 2.650562] [<ffffffff8106607f>] autogroup_move_group+0xbf/0x160
[ 2.650562] [<ffffffff8106620e>] sched_autogroup_create_attach+0xce/0x150
[ 2.650562] [<ffffffff81084ca4>] sys_setsid+0xd4/0xf0
[ 2.650562] [<ffffffff816affc2>] system_call_fastpath+0x16/0x1b

Will see how well it survives the test but thought to let you know.

-Stefan


Attachments:
signature.asc (900.00 B)
OpenPGP digital signature

2012-06-22 15:15:17

by Peter Zijlstra

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On Fri, 2012-06-22 at 17:06 +0200, Stefan Bader wrote:
> > @@ -7595,6 +7597,11 @@ void sched_move_task(struct task_struct *tsk)
> > if (unlikely(running))
> > tsk->sched_class->put_prev_task(rq, tsk);
> >
> > + tg = container_of(task_subsys_state(p, cpu_cgroup_subsys_id),
> s/p/tsk/
> > + struct task_group, css);
> > + tg = autogroup_task_group(p, tg);
> s/p/tsk/
> > + tsk->sched_task_group = tg;
> > +

Hmm, I'm very sure I at least compiled a kernel after this.. must've
been the wrong machine.. /me dons a brown paper bag.

2012-06-26 13:48:48

by Peter Zijlstra

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

Here's one that's actually compile tested (with the right CONFIG_foo
enabled) and I fixed the autogroup lockdep splat.

---
Subject: sched: Fix race in task_group()
From: Peter Zijlstra <[email protected]>
Date: Fri, 22 Jun 2012 13:36:05 +0200

Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
call task_group() too many times in set_task_rq()"), he found the reason
to be that the multiple task_group() invocations in set_task_rq()
returned different values.

Looking at all that I found a lack of serialization and plain wrong
comments.

The below tries to fix it using an extra pointer which is updated under
the appropriate scheduler locks. It's not pretty, but I can't really see
another way given how all the cgroup stuff works.

Reported-by: Stefan Bader <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
include/linux/init_task.h | 12 +++++++++++-
include/linux/sched.h | 5 ++++-
kernel/sched/core.c | 9 ++++++++-
kernel/sched/sched.h | 23 ++++++++++-------------
4 files changed, 33 insertions(+), 16 deletions(-)

--- a/include/linux/init_task.h
+++ b/include/linux/init_task.h
@@ -123,8 +123,17 @@ extern struct group_info init_groups;

extern struct cred init_cred;

+extern struct task_group root_task_group;
+
+#ifdef CONFIG_CGROUP_SCHED
+# define INIT_CGROUP_SCHED(tsk) \
+ .sched_task_group = &root_task_group,
+#else
+# define INIT_CGROUP_SCHED(tsk)
+#endif
+
#ifdef CONFIG_PERF_EVENTS
-# define INIT_PERF_EVENTS(tsk) \
+# define INIT_PERF_EVENTS(tsk) \
.perf_event_mutex = \
__MUTEX_INITIALIZER(tsk.perf_event_mutex), \
.perf_event_list = LIST_HEAD_INIT(tsk.perf_event_list),
@@ -168,6 +177,7 @@ extern struct cred init_cred;
}, \
.tasks = LIST_HEAD_INIT(tsk.tasks), \
INIT_PUSHABLE_TASKS(tsk) \
+ INIT_CGROUP_SCHED(tsk) \
.ptraced = LIST_HEAD_INIT(tsk.ptraced), \
.ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
.real_parent = &tsk, \
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1246,6 +1246,9 @@ struct task_struct {
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;
+#ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+#endif

#ifdef CONFIG_NUMA
unsigned long numa_contrib;
@@ -2749,7 +2752,7 @@ extern int sched_group_set_rt_period(str
extern long sched_group_rt_period(struct task_group *tg);
extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
#endif
-#endif
+#endif /* CONFIG_CGROUP_SCHED */

extern int task_can_switch_user(struct user_struct *up,
struct task_struct *tsk);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1096,7 +1096,7 @@ void set_task_cpu(struct task_struct *p,
* a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
*
* sched_move_task() holds both and thus holding either pins the cgroup,
- * see set_task_rq().
+ * see task_group().
*
* Furthermore, all task_rq users should acquire both locks, see
* task_rq_lock().
@@ -7712,6 +7712,7 @@ void sched_destroy_group(struct task_gro
*/
void sched_move_task(struct task_struct *tsk)
{
+ struct task_group *tg;
int on_rq, running;
unsigned long flags;
struct rq *rq;
@@ -7726,6 +7727,12 @@ void sched_move_task(struct task_struct
if (unlikely(running))
tsk->sched_class->put_prev_task(rq, tsk);

+ tg = container_of(task_subsys_state_check(tsk, cpu_cgroup_subsys_id,
+ lockdep_is_held(&tsk->sighand->siglock)),
+ struct task_group, css);
+ tg = autogroup_task_group(tsk, tg);
+ tsk->sched_task_group = tg;
+
#ifdef CONFIG_FAIR_GROUP_SCHED
if (tsk->sched_class->task_move_group)
tsk->sched_class->task_move_group(tsk, on_rq);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -554,22 +554,19 @@ extern int group_balance_cpu(struct sche
/*
* Return the group to which this tasks belongs.
*
- * We use task_subsys_state_check() and extend the RCU verification with
- * pi->lock and rq->lock because cpu_cgroup_attach() holds those locks for each
- * task it moves into the cgroup. Therefore by holding either of those locks,
- * we pin the task to the current cgroup.
+ * We cannot use task_subsys_state() and friends because the cgroup
+ * subsystem changes that value before the cgroup_subsys::attach() method
+ * is called, therefore we cannot pin it and might observe the wrong value.
+ *
+ * The same is true for autogroup's p->signal->autogroup->tg, the autogroup
+ * core changes this before calling sched_move_task().
+ *
+ * Instead we use a 'copy' which is updated from sched_move_task() while
+ * holding both task_struct::pi_lock and rq::lock.
*/
static inline struct task_group *task_group(struct task_struct *p)
{
- struct task_group *tg;
- struct cgroup_subsys_state *css;
-
- css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
- lockdep_is_held(&p->pi_lock) ||
- lockdep_is_held(&task_rq(p)->lock));
- tg = container_of(css, struct task_group, css);
-
- return autogroup_task_group(p, tg);
+ return p->sched_task_group;
}

/* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */

2012-06-26 17:50:00

by Stefan Bader

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On 26.06.2012 15:48, Peter Zijlstra wrote:
> Here's one that's actually compile tested (with the right CONFIG_foo
> enabled) and I fixed the autogroup lockdep splat.
>
> ---
> Subject: sched: Fix race in task_group()
> From: Peter Zijlstra <[email protected]>
> Date: Fri, 22 Jun 2012 13:36:05 +0200
>
> Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
> call task_group() too many times in set_task_rq()"), he found the reason
> to be that the multiple task_group() invocations in set_task_rq()
> returned different values.
>
> Looking at all that I found a lack of serialization and plain wrong
> comments.
>
> The below tries to fix it using an extra pointer which is updated under
> the appropriate scheduler locks. It's not pretty, but I can't really see
> another way given how all the cgroup stuff works.
>
> Reported-by: Stefan Bader <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> ---
> include/linux/init_task.h | 12 +++++++++++-
> include/linux/sched.h | 5 ++++-
> kernel/sched/core.c | 9 ++++++++-
> kernel/sched/sched.h | 23 ++++++++++-------------
> 4 files changed, 33 insertions(+), 16 deletions(-)
>
> --- a/include/linux/init_task.h
> +++ b/include/linux/init_task.h
> @@ -123,8 +123,17 @@ extern struct group_info init_groups;
>
> extern struct cred init_cred;
>
> +extern struct task_group root_task_group;
> +
> +#ifdef CONFIG_CGROUP_SCHED
> +# define INIT_CGROUP_SCHED(tsk) \
> + .sched_task_group = &root_task_group,
> +#else
> +# define INIT_CGROUP_SCHED(tsk)
> +#endif
> +
> #ifdef CONFIG_PERF_EVENTS
> -# define INIT_PERF_EVENTS(tsk) \
> +# define INIT_PERF_EVENTS(tsk) \
> .perf_event_mutex = \
> __MUTEX_INITIALIZER(tsk.perf_event_mutex), \
> .perf_event_list = LIST_HEAD_INIT(tsk.perf_event_list),
> @@ -168,6 +177,7 @@ extern struct cred init_cred;
> }, \
> .tasks = LIST_HEAD_INIT(tsk.tasks), \
> INIT_PUSHABLE_TASKS(tsk) \
> + INIT_CGROUP_SCHED(tsk) \
> .ptraced = LIST_HEAD_INIT(tsk.ptraced), \
> .ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \
> .real_parent = &tsk, \
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1246,6 +1246,9 @@ struct task_struct {
> const struct sched_class *sched_class;
> struct sched_entity se;
> struct sched_rt_entity rt;
> +#ifdef CONFIG_CGROUP_SCHED
> + struct task_group *sched_task_group;
> +#endif
>
> #ifdef CONFIG_NUMA
> unsigned long numa_contrib;
> @@ -2749,7 +2752,7 @@ extern int sched_group_set_rt_period(str
> extern long sched_group_rt_period(struct task_group *tg);
> extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
> #endif
> -#endif
> +#endif /* CONFIG_CGROUP_SCHED */
>
> extern int task_can_switch_user(struct user_struct *up,
> struct task_struct *tsk);
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1096,7 +1096,7 @@ void set_task_cpu(struct task_struct *p,
> * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
> *
> * sched_move_task() holds both and thus holding either pins the cgroup,
> - * see set_task_rq().
> + * see task_group().
> *
> * Furthermore, all task_rq users should acquire both locks, see
> * task_rq_lock().
> @@ -7712,6 +7712,7 @@ void sched_destroy_group(struct task_gro
> */
> void sched_move_task(struct task_struct *tsk)
> {
> + struct task_group *tg;
> int on_rq, running;
> unsigned long flags;
> struct rq *rq;
> @@ -7726,6 +7727,12 @@ void sched_move_task(struct task_struct
> if (unlikely(running))
> tsk->sched_class->put_prev_task(rq, tsk);
>
> + tg = container_of(task_subsys_state_check(tsk, cpu_cgroup_subsys_id,
> + lockdep_is_held(&tsk->sighand->siglock)),
> + struct task_group, css);
> + tg = autogroup_task_group(tsk, tg);
> + tsk->sched_task_group = tg;
> +
> #ifdef CONFIG_FAIR_GROUP_SCHED
> if (tsk->sched_class->task_move_group)
> tsk->sched_class->task_move_group(tsk, on_rq);
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -554,22 +554,19 @@ extern int group_balance_cpu(struct sche
> /*
> * Return the group to which this tasks belongs.
> *
> - * We use task_subsys_state_check() and extend the RCU verification with
> - * pi->lock and rq->lock because cpu_cgroup_attach() holds those locks for each
> - * task it moves into the cgroup. Therefore by holding either of those locks,
> - * we pin the task to the current cgroup.
> + * We cannot use task_subsys_state() and friends because the cgroup
> + * subsystem changes that value before the cgroup_subsys::attach() method
> + * is called, therefore we cannot pin it and might observe the wrong value.
> + *
> + * The same is true for autogroup's p->signal->autogroup->tg, the autogroup
> + * core changes this before calling sched_move_task().
> + *
> + * Instead we use a 'copy' which is updated from sched_move_task() while
> + * holding both task_struct::pi_lock and rq::lock.
> */
> static inline struct task_group *task_group(struct task_struct *p)
> {
> - struct task_group *tg;
> - struct cgroup_subsys_state *css;
> -
> - css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
> - lockdep_is_held(&p->pi_lock) ||
> - lockdep_is_held(&task_rq(p)->lock));
> - tg = container_of(css, struct task_group, css);
> -
> - return autogroup_task_group(p, tg);
> + return p->sched_task_group;
> }
>
> /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
>

I ran this version through the testcase and no more warning indeed. Also the
crash is not happening anymore (with the backported version).
This should probably get a "Cc: [email protected] # 2.6.38+" into the s-o-b
area. I don't think 2.6.38..2.6.39 have longterm support, but at least that was
the time when autogroup came in. For 3.0..3.2 the patch needs a bit of tweaking;
thanks to some conference boredom ;) I am attaching the backport I was using for
the test, for convenience. Although ... it has to be refreshed after the original
patch has landed upstream...

Cheers,
-Stefan


Attachments:
0001-sched-Fix-race-in-task_group.patch (5.38 kB)
signature.asc (900.00 B)
OpenPGP digital signature

2012-06-26 20:13:45

by Tejun Heo

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On Tue, Jun 26, 2012 at 03:48:35PM +0200, Peter Zijlstra wrote:
> Here's one that's actually compile tested (with the right CONFIG_foo
> enabled) and I fixed the autogroup lockdep splat.
>
> ---
> Subject: sched: Fix race in task_group()
> From: Peter Zijlstra <[email protected]>
> Date: Fri, 22 Jun 2012 13:36:05 +0200
>
> Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
> call task_group() too many times in set_task_rq()"), he found the reason
> to be that the multiple task_group() invocations in set_task_rq()
> returned different values.

Hmm... short of intertwining the locking further I don't think we can
solve this in a prettier way. So, yeah, looks good to me from the cgroup
POV.

> Looking at all that I found a lack of serialization and plain wrong
> comments.
>
> The below tries to fix it using an extra pointer which is updated under
> the appropriate scheduler locks. Its not pretty, but I can't really see
> another way given how all the cgroup stuff works.

BTW your patch is whitespace broken. Seems like QP encoded.

Thanks.

--
tejun

2012-06-26 21:17:18

by Peter Zijlstra

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On Tue, 2012-06-26 at 13:13 -0700, Tejun Heo wrote:
>
> BTW your patch is whitespace broken. Seems like QP encoded.
>
Yeah, that's the 'best' (d)evolution can do these days :/ I've been ><
close to looking at the source of that thing again, but the last time
left me traumatized.

I guess I should file a bug, but that means dealing with bugzilla..
which is just about as bad as looking at its source.

2012-06-27 12:40:05

by Hillf Danton

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

The patch went through three versions. The first,

On Fri, Jun 22, 2012 at 7:36 PM, Peter Zijlstra <[email protected]> wrote:
> Reported-by: Stefan Bader <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> ---
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 32157b9..77437d4 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1246,6 +1246,9 @@ struct task_struct {
> const struct sched_class *sched_class;
> struct sched_entity se;
> struct sched_rt_entity rt;
> +#ifdef CONFIG_CGROUP_SCHED
> + struct task_struct *sched_task_group;
> +#endif
>

The second,

>> On 26.06.2012 15:48, Peter Zijlstra wrote:
>> Here's one that's actually compile tested (with the right CONFIG_foo
>> enabled) and I fixed the autogroup lockdep splat.
>>
>> ---
>> Subject: sched: Fix race in task_group()
>> From: Peter Zijlstra <[email protected]>
>> Date: Fri, 22 Jun 2012 13:36:05 +0200
>>
>> Reported-by: Stefan Bader <[email protected]>
>> Signed-off-by: Peter Zijlstra <[email protected]>
>> ---
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -1246,6 +1246,9 @@ struct task_struct {
>>       const struct sched_class *sched_class;
>>       struct sched_entity se;
>>       struct sched_rt_entity rt;
>> +#ifdef CONFIG_CGROUP_SCHED
>> +     struct task_group *sched_task_group;
>> +#endif
>>

And the third, https://lkml.org/lkml/2012/6/26/331

From d751ab1f1e532f32412d99b71a1bfea3e5282d07 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <[email protected]>
Date: Fri, 22 Jun 2012 13:36:00 +0200
Subject: [PATCH] sched: Fix race in task_group()

Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
call task_group() too many times in set_task_rq()"), he found the reason
to be that the multiple task_group() invocations in set_task_rq()
returned different values.

Looking at all that I found a lack of serialization and plain wrong
comments.

The below tries to fix it using an extra pointer which is updated under
the appropriate scheduler locks. It's not pretty, but I can't really see
another way given how all the cgroup stuff works.

Reported-and-tested-by: Stefan Bader <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
[backported to apply to 3.0 and 3.2]
Signed-off-by: Stefan Bader <[email protected]>
---
include/linux/init_task.h | 12 +++++++++++-
include/linux/sched.h | 5 ++++-
kernel/sched.c | 32 ++++++++++++++++++--------------
3 files changed, 33 insertions(+), 16 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 56de5c1..1fd9884 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1242,6 +1242,9 @@ struct task_struct {
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;
+#ifdef CONFIG_CGROUP_SCHED
+ struct task_struct *sched_task_group;
+#endif

where sched_task_group was declared as struct task_struct * twice (in the first
and the third versions) and as struct task_group * once.

Before backporting, feel free to respin with the final declaration settled.

2012-06-27 12:52:19

by Stefan Bader

Subject: Re: [RFC][PATCH] sched: Fix race in task_group()

On 27.06.2012 14:40, Hillf Danton wrote:
> The patch went three versions, the first,
>
> On Fri, Jun 22, 2012 at 7:36 PM, Peter Zijlstra <[email protected]> wrote:
>> Reported-by: Stefan Bader <[email protected]>
>> Signed-off-by: Peter Zijlstra <[email protected]>
>> ---
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 32157b9..77437d4 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -1246,6 +1246,9 @@ struct task_struct {
>> const struct sched_class *sched_class;
>> struct sched_entity se;
>> struct sched_rt_entity rt;
>> +#ifdef CONFIG_CGROUP_SCHED
>> + struct task_struct *sched_task_group;
>> +#endif
>>
>
> The second,
>
>>> On 26.06.2012 15:48, Peter Zijlstra wrote:
>>> Here's one that's actually compile tested (with the right CONFIG_foo
>>> enabled) and I fixed the autogroup lockdep splat.
>>>
>>> ---
>>> Subject: sched: Fix race in task_group()
>>> From: Peter Zijlstra <[email protected]>
>>> Date: Fri, 22 Jun 2012 13:36:05 +0200
>>>
>>> Reported-by: Stefan Bader <[email protected]>
>>> Signed-off-by: Peter Zijlstra <[email protected]>
>>> ---
>>> --- a/include/linux/sched.h
>>> +++ b/include/linux/sched.h
>>> @@ -1246,6 +1246,9 @@ struct task_struct {
>>> const struct sched_class *sched_class;
>>> struct sched_entity se;
>>> struct sched_rt_entity rt;
>>> +#ifdef CONFIG_CGROUP_SCHED
>>> + struct task_group *sched_task_group;
>>> +#endif
>>>
>
> And the third, https://lkml.org/lkml/2012/6/26/331
>
> From d751ab1f1e532f32412d99b71a1bfea3e5282d07 Mon Sep 17 00:00:00 2001
> From: Peter Zijlstra <[email protected]>
> Date: Fri, 22 Jun 2012 13:36:00 +0200
> Subject: [PATCH] sched: Fix race in task_group()
>
> Stefan reported a crash on a kernel before a3e5d1091c1 ("sched: Don't
> call task_group() too many times in set_task_rq()"), he found the reason
> to be that the multiple task_group() invocations in set_task_rq()
> returned different values.
>
> Looking at all that I found a lack of serialization and plain wrong
> comments.
>
> The below tries to fix it using an extra pointer which is updated under
> the appropriate scheduler locks. It's not pretty, but I can't really see
> another way given how all the cgroup stuff works.
>
> Reported-and-tested-by: Stefan Bader <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> [backported to apply to 3.0 and 3.2]
> Signed-off-by: Stefan Bader <[email protected]>
> ---
> include/linux/init_task.h | 12 +++++++++++-
> include/linux/sched.h | 5 ++++-
> kernel/sched.c | 32 ++++++++++++++++++--------------
> 3 files changed, 33 insertions(+), 16 deletions(-)
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 56de5c1..1fd9884 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1242,6 +1242,9 @@ struct task_struct {
> const struct sched_class *sched_class;
> struct sched_entity se;
> struct sched_rt_entity rt;
> +#ifdef CONFIG_CGROUP_SCHED
> + struct task_struct *sched_task_group;
> +#endif
>
> where sched_task_group was defined to be task_struct twice(in the first
> and the third versions) and to be task_group once.
>
> Before backport, feel free to respin with the final define determined.
>
The second version is correct. I just messed up updating my backport, failing to
notice that change (and trying to be clever by not going through re-applying and
failing again).

-Stefan



Attachments:
signature.asc (900.00 B)
OpenPGP digital signature