2010-06-01 13:06:16

by Daniel J Blueman

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

Hi Paul,

With 2.6.35-rc1 and your patch in the context below, we still see
"include/linux/cgroup.h:534 invoked rcu_dereference_check() without
protection!", so need this additional patch:

Acquire read-side RCU lock around task_group() calls, addressing
"include/linux/cgroup.h:534 invoked rcu_dereference_check() without
protection!" warning.

Signed-off-by: Daniel J Blueman <[email protected]>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 217e4a9..50ec9ea 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
+	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	}
 
 	tg = task_group(p);
+	rcu_read_unlock();
 	weight = p->se.load.weight;
 
 	imbalance = 100 + (sd->imbalance_pct - 100) / 2;

---
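
For reference, the check that fires here is task_subsys_state() in
include/linux/cgroup.h. At the time it read roughly as follows (quoted
from memory of the 2.6.34-era tree, so treat the exact lockdep
conditions as approximate):

static inline struct cgroup_subsys_state *task_subsys_state(
	struct task_struct *task, int subsys_id)
{
	return rcu_dereference_check(task->cgroups->subsys[subsys_id],
				     rcu_read_lock_held() ||
				     lockdep_is_held(&task->alloc_lock) ||
				     cgroup_lock_is_held());
}

wake_affine() reaches this via task_group() with none of the three
conditions true, hence the warning.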

On Wed, Apr 21, 2010 at 02:35:43PM -0700, Paul E. McKenney wrote:
	On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
		Excellent. Here are the results on my machine. .config appended.
	First, thank you very much for testing this, Miles!

And as Tetsuo Handa pointed out privately, my patch was way broken.

Here is an updated version.

Thanx, Paul

commit b15e561ed91b7a366c3cc635026f3b9ce6483070
Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Wed Apr 21 14:04:56 2010 -0700

sched: protect __sched_setscheduler() access to cgroups

A given task's cgroups structures must remain while that task is running
due to reference counting, so this is presumably a false positive.
Updated to reflect feedback from Tetsuo Handa.

Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>

diff --git a/kernel/sched.c b/kernel/sched.c
index 14c44ec..f425a2b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4575,9 +4575,13 @@ recheck:
 	 * Do not allow realtime tasks into groups that have no runtime
 	 * assigned.
 	 */
+	rcu_read_lock();
 	if (rt_bandwidth_enabled() && rt_policy(policy) &&
-			task_group(p)->rt_bandwidth.rt_runtime == 0)
+			task_group(p)->rt_bandwidth.rt_runtime == 0) {
+		rcu_read_unlock();
 		return -EPERM;
+	}
+	rcu_read_unlock();
 #endif
 
 	retval = security_task_setscheduler(p, policy, param);
--
Daniel J Blueman


2010-06-02 14:56:57

by Paul E. McKenney

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

On Tue, Jun 01, 2010 at 02:06:13PM +0100, Daniel J Blueman wrote:
> Hi Paul,
>
> With 2.6.35-rc1 and your patch in the context below, we still see
> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> protection!", so need this additional patch:
>
> Acquire read-side RCU lock around task_group() calls, addressing
> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> protection!" warning.
>
> Signed-off-by: Daniel J Blueman <[email protected]>

Thank you, Daniel! I have queued this for 2.6.35.

I had to apply the patch by hand due to line wrapping. Could you please
check your email-agent settings? This simple patch was no problem to
hand apply, but for a larger patch this process would be both tedious
and error prone.

Thanx, Paul

> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 217e4a9..50ec9ea 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	 * effect of the currently running task from the load
>  	 * of the current CPU:
>  	 */
> +	rcu_read_lock();
>  	if (sync) {
>  		tg = task_group(current);
>  		weight = current->se.load.weight;
> @@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	}
> 
>  	tg = task_group(p);
> +	rcu_read_unlock();
>  	weight = p->se.load.weight;
> 
>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
>
> ---
>
> On Wed, Apr 21, 2010 at 02:35:43PM -0700, Paul E. McKenney wrote:
> 	On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
> 		Excellent. Here are the results on my machine. .config appended.
> 	First, thank you very much for testing this, Miles!
>
> And as Tetsuo Handa pointed out privately, my patch was way broken.
>
> Here is an updated version.
>
> Thanx, Paul
>
> commit b15e561ed91b7a366c3cc635026f3b9ce6483070
> Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> Date: Wed Apr 21 14:04:56 2010 -0700
>
> sched: protect __sched_setscheduler() access to cgroups
>
> A given task's cgroups structures must remain while that task is running
> due to reference counting, so this is presumably a false positive.
> Updated to reflect feedback from Tetsuo Handa.
>
> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 14c44ec..f425a2b 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4575,9 +4575,13 @@ recheck:
>  	 * Do not allow realtime tasks into groups that have no runtime
>  	 * assigned.
>  	 */
> +	rcu_read_lock();
>  	if (rt_bandwidth_enabled() && rt_policy(policy) &&
> -			task_group(p)->rt_bandwidth.rt_runtime == 0)
> +			task_group(p)->rt_bandwidth.rt_runtime == 0) {
> +		rcu_read_unlock();
>  		return -EPERM;
> +	}
> +	rcu_read_unlock();
>  #endif
> 
>  	retval = security_task_setscheduler(p, policy, param);
> --
> Daniel J Blueman

2010-06-02 15:24:14

by Daniel J Blueman

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

On Wed, Jun 2, 2010 at 3:56 PM, Paul E. McKenney
<[email protected]> wrote:
> On Tue, Jun 01, 2010 at 02:06:13PM +0100, Daniel J Blueman wrote:
>> Hi Paul,
>>
>> With 2.6.35-rc1 and your patch in the context below, we still see
>> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
>> protection!", so need this additional patch:
>>
>> Acquire read-side RCU lock around task_group() calls, addressing
>> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
>> protection!" warning.
>>
>> Signed-off-by: Daniel J Blueman <[email protected]>
>
> Thank you, Daniel!  I have queued this for 2.6.35.
>
> I had to apply the patch by hand due to line wrapping.  Could you please
> check your email-agent settings?  This simple patch was no problem to
> hand apply, but for a larger patch this process would be both tedious
> and error prone.

True - will do.

Thanks,
Daniel

>> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
>> index 217e4a9..50ec9ea 100644
>> --- a/kernel/sched_fair.c
>> +++ b/kernel/sched_fair.c
>> @@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>>  	 * effect of the currently running task from the load
>>  	 * of the current CPU:
>>  	 */
>> +	rcu_read_lock();
>>  	if (sync) {
>>  		tg = task_group(current);
>>  		weight = current->se.load.weight;
>> @@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>>  	}
>> 
>>  	tg = task_group(p);
>> +	rcu_read_unlock();
>>  	weight = p->se.load.weight;
>> 
>>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
>>
>> ---
>>
>> On Wed, Apr 21, 2010 at 02:35:43PM -0700, Paul E. McKenney wrote:
>> 	On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
>> 		Excellent. Here are the results on my machine. .config appended.
>> 	First, thank you very much for testing this, Miles!
>>
>> And as Tetsuo Handa pointed out privately, my patch was way broken.
>>
>> Here is an updated version.
>>
>> Thanx, Paul
>>
>> commit b15e561ed91b7a366c3cc635026f3b9ce6483070
>> Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
>> Date: Wed Apr 21 14:04:56 2010 -0700
>>
>> sched: protect __sched_setscheduler() access to cgroups
>>
>> A given task's cgroups structures must remain while that task is running
>> due to reference counting, so this is presumably a false positive.
>> Updated to reflect feedback from Tetsuo Handa.
>>
>> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 14c44ec..f425a2b 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4575,9 +4575,13 @@ recheck:
>>  	 * Do not allow realtime tasks into groups that have no runtime
>>  	 * assigned.
>>  	 */
>> +	rcu_read_lock();
>>  	if (rt_bandwidth_enabled() && rt_policy(policy) &&
>> -			task_group(p)->rt_bandwidth.rt_runtime == 0)
>> +			task_group(p)->rt_bandwidth.rt_runtime == 0) {
>> +		rcu_read_unlock();
>>  		return -EPERM;
>> +	}
>> +	rcu_read_unlock();
>>  #endif
>> 
>>  	retval = security_task_setscheduler(p, policy, param);
--
Daniel J Blueman

2010-06-03 09:19:38

by Li Zefan

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

Paul E. McKenney wrote:
> On Tue, Jun 01, 2010 at 02:06:13PM +0100, Daniel J Blueman wrote:
>> Hi Paul,
>>
>> With 2.6.35-rc1 and your patch in the context below, we still see
>> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
>> protection!", so need this additional patch:
>>
>> Acquire read-side RCU lock around task_group() calls, addressing
>> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
>> protection!" warning.
>>
>> Signed-off-by: Daniel J Blueman <[email protected]>
>
> Thank you, Daniel! I have queued this for 2.6.35.
>
> I had to apply the patch by hand due to line wrapping. Could you please
> check your email-agent settings? This simple patch was no problem to
> hand apply, but for a larger patch this process would be both tedious
> and error prone.
>
> Thanx, Paul
>
>> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
>> index 217e4a9..50ec9ea 100644
>> --- a/kernel/sched_fair.c
>> +++ b/kernel/sched_fair.c
>> @@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>>  	 * effect of the currently running task from the load
>>  	 * of the current CPU:
>>  	 */
>> +	rcu_read_lock();
>>  	if (sync) {
>>  		tg = task_group(current);
>>  		weight = current->se.load.weight;
>> @@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>>  	}
>> 
>>  	tg = task_group(p);
>> +	rcu_read_unlock();

Hmmm... I don't think it's safe to access tg after rcu_read_unlock().

>>  	weight = p->se.load.weight;
>> 
>>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
>> 
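
To make the hazard concrete, a minimal sketch (illustrative only, not
code from the tree):

	rcu_read_lock();
	tg = task_group(p);	/* tg points into RCU-protected cgroup data */
	rcu_read_unlock();	/* a grace period may complete from here on */

	/* if the group is freed meanwhile, these dereference freed memory: */
	effective_load(tg, this_cpu, weight, weight);
	effective_load(tg, prev_cpu, 0, weight);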

2010-06-03 18:31:06

by Paul E. McKenney

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

On Thu, Jun 03, 2010 at 05:22:04PM +0800, Li Zefan wrote:
> Paul E. McKenney wrote:
> > On Tue, Jun 01, 2010 at 02:06:13PM +0100, Daniel J Blueman wrote:
> >> Hi Paul,
> >>
> >> With 2.6.35-rc1 and your patch in the context below, we still see
> >> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> >> protection!", so need this additional patch:
> >>
> >> Acquire read-side RCU lock around task_group() calls, addressing
> >> "include/linux/cgroup.h:534 invoked rcu_dereference_check() without
> >> protection!" warning.
> >>
> >> Signed-off-by: Daniel J Blueman <[email protected]>
> >
> > Thank you, Daniel! I have queued this for 2.6.35.
> >
> > I had to apply the patch by hand due to line wrapping. Could you please
> > check your email-agent settings? This simple patch was no problem to
> > hand apply, but for a larger patch this process would be both tedious
> > and error prone.
> >
> > Thanx, Paul
> >
> >> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> >> index 217e4a9..50ec9ea 100644
> >> --- a/kernel/sched_fair.c
> >> +++ b/kernel/sched_fair.c
> >> @@ -1241,6 +1241,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >>  	 * effect of the currently running task from the load
> >>  	 * of the current CPU:
> >>  	 */
> >> +	rcu_read_lock();
> >>  	if (sync) {
> >>  		tg = task_group(current);
> >>  		weight = current->se.load.weight;
> >> @@ -1250,6 +1251,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >>  	}
> >>
> >>  	tg = task_group(p);
> >> +	rcu_read_unlock();
>
> Hmmm... I don't think it's safe to access tg after rcu_read_unlock().

It does indeed look unsafe. How about the following on top of this patch?

> >>  	weight = p->se.load.weight;
> >>
> >>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;

Seems worth reviewing the other uses of task_group():

1.	set_task_rq() -- only a runqueue and a sched_rt_entity leave
	the RCU read-side critical section.  Runqueues do persist.
	I don't claim to understand the sched_rt_entity life cycle.

2.	__sched_setscheduler() -- not clear to me that this one is
	protected to begin with.  If it is somehow correctly protected,
	it discards the RCU-protected pointer immediately, so is OK
	otherwise.

3.	cpu_cgroup_destroy() -- ditto.

4.	cpu_shares_read_u64() -- ditto.

5.	print_task() -- protected by rcu_read_lock() and discards the
	RCU-protected pointer immediately, so this one is OK.

Any task_group() experts able to weigh in on #2, #3, and #4?
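
For reference, task_group() at that point was essentially a cgroup
lookup, roughly (a reconstruction; see the CONFIG_CGROUP_SCHED block in
kernel/sched.c for the exact definition):

static inline struct task_group *task_group(struct task_struct *p)
{
	return container_of(task_subsys_state(p, cpu_cgroup_subsys_id),
			    struct task_group, css);
}

which is why every caller funnels into the rcu_dereference_check() at
include/linux/cgroup.h:534.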

Thanx, Paul

Signed-off-by: Paul E. McKenney <[email protected]>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 50ec9ea..224ef98 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1251,7 +1251,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	}
 
 	tg = task_group(p);
-	rcu_read_unlock();
 	weight = p->se.load.weight;
 
 	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
@@ -1268,6 +1267,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	balanced = !this_load ||
 		100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
 		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
+	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within
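
(The unlock moves below the balanced computation because both
effective_load() calls dereference tg, so tg must remain RCU-protected
until they have run.)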

2010-06-04 02:42:19

by Li Zefan

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

> Seems worth reviewing the other uses of task_group():
>
> 1.	set_task_rq() -- only a runqueue and a sched_rt_entity leave
> 	the RCU read-side critical section.  Runqueues do persist.
> 	I don't claim to understand the sched_rt_entity life cycle.
>
> 2.	__sched_setscheduler() -- not clear to me that this one is
> 	protected to begin with.  If it is somehow correctly protected,
> 	it discards the RCU-protected pointer immediately, so is OK
> 	otherwise.
>
> 3.	cpu_cgroup_destroy() -- ditto.
>
> 4.	cpu_shares_read_u64() -- ditto.
>
> 5.	print_task() -- protected by rcu_read_lock() and discards the
> 	RCU-protected pointer immediately, so this one is OK.
>
> Any task_group() experts able to weigh in on #2, #3, and #4?
>

#3 and #4 are safe, because it's not calling task_group(), but
cgroup_tg():

struct task_group *tg = cgroup_tg(cgrp);

As long as it's safe to access cgrp, it's safe to access tg.
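
For context, cgroup_tg() is just a container_of() on the cgroup's own
subsystem state, roughly (a reconstruction from kernel/sched.c of that
era):

static inline struct task_group *cgroup_tg(struct cgroup *cgrp)
{
	return container_of(cgroup_subsys_state(cgrp, cpu_cgroup_subsys_id),
			    struct task_group, css);
}

so the task_group's lifetime follows the cgroup passed in, not any
task's current group.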

> Thanx, Paul
>
> Signed-off-by: Paul E. McKenney <[email protected]>
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 50ec9ea..224ef98 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1251,7 +1251,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	}
> 
>  	tg = task_group(p);
> -	rcu_read_unlock();
>  	weight = p->se.load.weight;
> 
>  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
> @@ -1268,6 +1267,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>  	balanced = !this_load ||
>  		100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
>  		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
> +	rcu_read_unlock();
>

This is fine.

Another way is:

rcu_read_lock();
tg = task_group(p);
css_get(&tg->css);
rcu_read_unlock();

/* do something */
...

css_put(&tg->css);
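
The refcounting variant costs an atomic get/put, but the code between
css_get() and css_put() is then free to block, which code inside an RCU
read-side critical section (at least under non-preemptible RCU) is not.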

2010-06-04 04:10:52

by Paul E. McKenney

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

On Fri, Jun 04, 2010 at 10:44:48AM +0800, Li Zefan wrote:
> > Seems worth reviewing the other uses of task_group():
> >
> > 1.	set_task_rq() -- only a runqueue and a sched_rt_entity leave
> > 	the RCU read-side critical section.  Runqueues do persist.
> > 	I don't claim to understand the sched_rt_entity life cycle.
> >
> > 2.	__sched_setscheduler() -- not clear to me that this one is
> > 	protected to begin with.  If it is somehow correctly protected,
> > 	it discards the RCU-protected pointer immediately, so is OK
> > 	otherwise.
> >
> > 3.	cpu_cgroup_destroy() -- ditto.
> >
> > 4.	cpu_shares_read_u64() -- ditto.
> >
> > 5.	print_task() -- protected by rcu_read_lock() and discards the
> > 	RCU-protected pointer immediately, so this one is OK.
> >
> > Any task_group() experts able to weigh in on #2, #3, and #4?
> >
>
> #3 and #4 are safe, because it's not calling task_group(), but
> cgroup_tg():
>
> struct task_group *tg = cgroup_tg(cgrp);
>
> As long as it's safe to access cgrp, it's safe to access tg.

Good point, thank you!

Any takers on #2?

Thanx, Paul

> > Signed-off-by: Paul E. McKenney <[email protected]>
> >
> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > index 50ec9ea..224ef98 100644
> > --- a/kernel/sched_fair.c
> > +++ b/kernel/sched_fair.c
> > @@ -1251,7 +1251,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >  	}
> >
> >  	tg = task_group(p);
> > -	rcu_read_unlock();
> >  	weight = p->se.load.weight;
> >
> >  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
> > @@ -1268,6 +1267,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> >  	balanced = !this_load ||
> >  		100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
> >  		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
> > +	rcu_read_unlock();
> >
>
> This is fine.
>
> Another way is:
>
> rcu_read_lock();
> tg = task_group(p);
> css_get(&tg->css);
> rcu_read_unlock();
>
> /* do something */
> ...
>
> css_put(&tg->css);

2010-06-04 09:00:06

by Daniel J Blueman

Subject: Re: [PATCH] RCU: don't turn off lockdep when find suspicious rcu_dereference_check() usage

On Fri, Jun 4, 2010 at 5:10 AM, Paul E. McKenney
<[email protected]> wrote:
> On Fri, Jun 04, 2010 at 10:44:48AM +0800, Li Zefan wrote:
>> > Seems worth reviewing the other uses of task_group():
>> >
>> > 1.	set_task_rq() -- only a runqueue and a sched_rt_entity leave
>> > 	the RCU read-side critical section.  Runqueues do persist.
>> > 	I don't claim to understand the sched_rt_entity life cycle.
>> >
>> > 2.	__sched_setscheduler() -- not clear to me that this one is
>> > 	protected to begin with.  If it is somehow correctly protected,
>> > 	it discards the RCU-protected pointer immediately, so is OK
>> > 	otherwise.
>> >
>> > 3.	cpu_cgroup_destroy() -- ditto.
>> >
>> > 4.	cpu_shares_read_u64() -- ditto.
>> >
>> > 5.	print_task() -- protected by rcu_read_lock() and discards the
>> > 	RCU-protected pointer immediately, so this one is OK.
>> >
>> > Any task_group() experts able to weigh in on #2, #3, and #4?
>> >
>>
>> #3 and #4 are safe, because it's not calling task_group(), but
>> cgroup_tg():
>>
>> 	struct task_group *tg = cgroup_tg(cgrp);
>>
>> As long as it's safe to access cgrp, it's safe to access tg.
>
> Good point, thank you!
>
> Any takers on #2?

Indeed, __sched_setscheduler() is not protected. How does this look?

The struct task_group returned by task_group() is obtained without
holding the RCU read lock, so it may already be gone by the time it is
dereferenced shortly afterwards.

Signed-off-by: Daniel J Blueman <[email protected]>

diff --git a/kernel/sched.c b/kernel/sched.c
index d484081..b086a36 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4483,9 +4483,13 @@ recheck:
 	 * Do not allow realtime tasks into groups that have no runtime
 	 * assigned.
 	 */
+	rcu_read_lock();
 	if (rt_bandwidth_enabled() && rt_policy(policy) &&
-			task_group(p)->rt_bandwidth.rt_runtime == 0)
+			task_group(p)->rt_bandwidth.rt_runtime == 0) {
+		rcu_read_unlock();
 		return -EPERM;
+	}
+	rcu_read_unlock();
 #endif
 
 	retval = security_task_setscheduler(p, policy, param);
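
An equivalent shape that keeps the lock/unlock pairing in one place
(a hypothetical refactoring for illustration, not what was merged;
check_rt_runtime() is a made-up helper name):

static int check_rt_runtime(struct task_struct *p, int policy)
{
	int retval = 0;

	/* task_group() needs the RCU read lock; see cgroup.h:534 */
	rcu_read_lock();
	if (rt_bandwidth_enabled() && rt_policy(policy) &&
	    task_group(p)->rt_bandwidth.rt_runtime == 0)
		retval = -EPERM;
	rcu_read_unlock();

	return retval;
}

The caller in __sched_setscheduler() would then simply return the
helper's value when it is nonzero.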

>
>                                                        Thanx, Paul
>
>> > Signed-off-by: Paul E. McKenney <[email protected]>
>> >
>> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
>> > index 50ec9ea..224ef98 100644
>> > --- a/kernel/sched_fair.c
>> > +++ b/kernel/sched_fair.c
>> > @@ -1251,7 +1251,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>> >  	}
>> >
>> >  	tg = task_group(p);
>> > -	rcu_read_unlock();
>> >  	weight = p->se.load.weight;
>> >
>> >  	imbalance = 100 + (sd->imbalance_pct - 100) / 2;
>> > @@ -1268,6 +1267,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>> >  	balanced = !this_load ||
>> >  		100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
>> >  		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
>> > +	rcu_read_unlock();
>> >
>>
>> This is fine.
>>
>> Another way is:
>>
>> rcu_read_lock();
>> tg = task_group(p);
>> css_get(&tg->css);
>> rcu_read_unlock();
>>
>> /* do something */
>> ...
>>
>> css_put(&tg->css);
--
Daniel J Blueman