2011-06-07 10:13:29

by Nikunj A Dadhania

Subject: [PATCH] sched: remove rcu_read_lock from wake_affine

wake_affine is called from one path: select_task_rq_fair, which already has
the rcu read lock held.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
---
kernel/sched_fair.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 354e26b..0bfec93 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,

#endif

+/* Assumes rcu_read_lock is held */
static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
{
s64 this_load, load;
@@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
* effect of the currently running task from the load
* of the current CPU:
*/
- rcu_read_lock();
if (sync) {
tg = task_group(current);
weight = current->se.load.weight;
@@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
balanced = this_eff_load <= prev_eff_load;
} else
balanced = true;
- rcu_read_unlock();

/*
* If the currently running task will sleep within


2011-06-07 10:26:57

by Peter Zijlstra

Subject: Re: [PATCH] sched: remove rcu_read_lock from wake_affine

On Tue, 2011-06-07 at 15:43 +0530, Nikunj A. Dadhania wrote:
> wake_affine is called from one path: select_task_rq_fair, which already has
> the rcu read lock held.
>
> Signed-off-by: Nikunj A. Dadhania <[email protected]>
> ---
> kernel/sched_fair.c | 3 +--
> 1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 354e26b..0bfec93 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
>
> #endif
>
> +/* Assumes rcu_read_lock is held */

Not a big fan of such comments; especially with CONFIG_PROVE_RCU it's better
to use those facilities, which is to say: if we're missing an
rcu_read_lock() the thing will yell bloody murder.
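
To illustrate that point with a made-up example (not the patch itself): with
CONFIG_PROVE_RCU=y every rcu_dereference() is checked against
rcu_read_lock_held() by lockdep, so a caller that forgot the lock produces a
splat at runtime. The struct and variable names below are invented for the
sketch.

#include <linux/rcupdate.h>

/* Invented example type, not kernel code. */
struct foo {
	int val;
};

static struct foo __rcu *global_foo;

static int read_foo_val(void)
{
	struct foo *f;
	int val;

	/*
	 * Drop this lock/unlock pair and, under CONFIG_PROVE_RCU=y, the
	 * rcu_dereference() below triggers a lockdep splat.
	 */
	rcu_read_lock();
	f = rcu_dereference(global_foo);
	val = f ? f->val : -1;
	rcu_read_unlock();

	return val;
}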

> static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> {
> s64 this_load, load;
> @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> * effect of the currently running task from the load
> * of the current CPU:
> */
> - rcu_read_lock();
> if (sync) {
> tg = task_group(current);
> weight = current->se.load.weight;
> @@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> balanced = this_eff_load <= prev_eff_load;
> } else
> balanced = true;
> - rcu_read_unlock();
>
> /*
> * If the currently running task will sleep within
>

OK, took the patch and removed the comment, thanks!

2011-06-07 17:26:20

by Paul E. McKenney

Subject: Re: [PATCH] sched: remove rcu_read_lock from wake_affine

On Tue, Jun 07, 2011 at 12:26:51PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-06-07 at 15:43 +0530, Nikunj A. Dadhania wrote:
> > wake_affine is called from one path: select_task_rq_fair, which already has
> > the rcu read lock held.
> >
> > Signed-off-by: Nikunj A. Dadhania <[email protected]>
> > ---
> > kernel/sched_fair.c | 3 +--
> > 1 files changed, 1 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > index 354e26b..0bfec93 100644
> > --- a/kernel/sched_fair.c
> > +++ b/kernel/sched_fair.c
> > @@ -1461,6 +1461,7 @@ static inline unsigned long effective_load(struct task_group *tg, int cpu,
> >
> > #endif
> >
> > +/* Assumes rcu_read_lock is held */
>
> Not a big fan of such comments; especially with CONFIG_PROVE_RCU it's better
> to use those facilities, which is to say: if we're missing an
> rcu_read_lock() the thing will yell bloody murder.

Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".

This will complain if this function is called without an rcu_read_lock()
in effect, but only if CONFIG_PROVE_RCU=y.
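
Concretely, the check would go right at the top of the function, along these
lines (sketch only, body elided; not the change that was merged):

static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
{
	s64 this_load, load;

	/*
	 * Complains (at most once) if we get here without rcu_read_lock().
	 * Only meaningful with CONFIG_PROVE_RCU=y; otherwise
	 * rcu_read_lock_held() unconditionally returns 1 and the warning
	 * can never fire.
	 */
	WARN_ON_ONCE(!rcu_read_lock_held());

	/* ... rest of wake_affine() unchanged ... */
}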

Thanx, Paul

> > static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > {
> > s64 this_load, load;
> > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > * effect of the currently running task from the load
> > * of the current CPU:
> > */
> > - rcu_read_lock();
> > if (sync) {
> > tg = task_group(current);
> > weight = current->se.load.weight;
> > @@ -1517,7 +1517,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > balanced = this_eff_load <= prev_eff_load;
> > } else
> > balanced = true;
> > - rcu_read_unlock();
> >
> > /*
> > * If the currently running task will sleep within
> >
>
> OK, took the patch and removed the comment, thanks!

2011-06-07 17:29:32

by Peter Zijlstra

Subject: Re: [PATCH] sched: remove rcu_read_lock from wake_affine

On Tue, 2011-06-07 at 10:26 -0700, Paul E. McKenney wrote:
>
> Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".
>
> This will complain if this function is called without an rcu_read_lock()
> in effect, but only if CONFIG_PROVE_RCU=y.

rcu_lockdep_assert(rcu_read_lock_held()) would be nicer, however, since
the below:

> > > static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > {
> > > s64 this_load, load;
> > > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > * effect of the currently running task from the load
> > > * of the current CPU:
> > > */
> > > - rcu_read_lock();
> > > if (sync) {
> > > tg = task_group(current);
> > > weight = current->se.load.weight;

task_group() has an rcu_dereference_check() in it; it's really not needed,
the thing will yell if we get this wrong.
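
For reference, the pattern being relied on looks roughly like this (a
paraphrase with invented names, not the actual task_group() helper):
rcu_dereference_check() takes a second argument naming the conditions under
which the access is legal even without rcu_read_lock(); if neither that nor
an RCU read-side critical section holds, lockdep complains under
CONFIG_PROVE_RCU.

#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/* Invented types for the sketch; only the rcu_dereference_check() usage
 * mirrors what task_group() does internally. */
struct item {
	int weight;
};

struct container {
	spinlock_t lock;
	struct item __rcu *cur;
};

static int read_weight(struct container *c)
{
	struct item *it;

	/*
	 * Legal under rcu_read_lock() *or* with c->lock held; anything
	 * else splats when CONFIG_PROVE_RCU=y.
	 */
	it = rcu_dereference_check(c->cur, lockdep_is_held(&c->lock));

	return it ? it->weight : 0;
}

So wake_affine() already gets the check for free when it dereferences the
task's group via task_group(); an extra WARN_ON_ONCE() or rcu_lockdep_assert()
at the top of the function would be redundant.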

2011-06-07 18:11:37

by Paul E. McKenney

Subject: Re: [PATCH] sched: remove rcu_read_lock from wake_affine

On Tue, Jun 07, 2011 at 07:29:23PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-06-07 at 10:26 -0700, Paul E. McKenney wrote:
> >
> > Nikunj, one such approach is "WARN_ON_ONCE(!rcu_read_lock_held())".
> >
> > This will complain if this function is called without an rcu_read_lock()
> > in effect, but only if CONFIG_PROVE_RCU=y.
>
> rcu_lockdep_assert(rcu_read_lock_held()) would be nicer,

Good point!

> however, since
> the below:
>
> > > > static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > > {
> > > > s64 this_load, load;
> > > > @@ -1481,7 +1482,6 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
> > > > * effect of the currently running task from the load
> > > > * of the current CPU:
> > > > */
> > > > - rcu_read_lock();
> > > > if (sync) {
> > > > tg = task_group(current);
> > > > weight = current->se.load.weight;
>
> task_group() has an rcu_dereference_check() in it; it's really not needed,
> the thing will yell if we get this wrong.

Fair enough! The main reason for adding it at this level as well is
to prevent people from "fixing" splats by adding rcu_read_lock() and
rcu_read_unlock() at this level. But you would see any such patch, so
such a "fix" would not get far. ;-)

Thanx, Paul