On Fri, 2011-01-28 at 16:21 -0800, Frank Rowand wrote:
> On 01/04/11 06:59, Peter Zijlstra wrote:
> > In order to be able to call set_task_cpu() while either holding
> > p->pi_lock or task_rq(p)->lock we need to hold both locks in order to
> > stabilize task_rq().
> >
> > This makes task_rq_lock() acquire both locks, and have
> > __task_rq_lock() validate that p->pi_lock is held. This increases the
> > locking overhead for most scheduler syscalls but allows reduction of
> > rq->lock contention for some scheduler hot paths (ttwu).
> >
> > Signed-off-by: Peter Zijlstra <[email protected]>
> > ---
> > kernel/sched.c | 81 ++++++++++++++++++++++++++-------------------------------
> > 1 file changed, 37 insertions(+), 44 deletions(-)
> >
> > Index: linux-2.6/kernel/sched.c
> > ===================================================================
> >
>
> > @@ -980,10 +972,13 @@ static void __task_rq_unlock(struct rq *
> > raw_spin_unlock(&rq->lock);
> > }
> >
> > -static inline void task_rq_unlock(struct rq *rq, unsigned long *flags)
> > +static inline void
> > +task_rq_unlock(struct rq *rq, struct task_struct *p, unsigned long *flags)
> > __releases(rq->lock)
> > + __releases(p->pi_lock)
> > {
> > - raw_spin_unlock_irqrestore(&rq->lock, *flags);
> > + raw_spin_unlock(&rq->lock);
> > + raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
> > }
> >
> > /*
>
> Most of the callers of task_rq_unlock() were also fixed up to reflect
> the newly added "*p" parameter, but a couple were missed. That is
> fine by the end of the patch series, because the missed callers are
> removed in patches 12 and 13. But if you want the series to be
> bisectable (which I think it otherwise is), you might want to fix
> those remaining callers of task_rq_unlock() in this patch.
Fixed those up indeed, thanks!
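For anyone reading along, the lock side that pairs with the quoted
task_rq_unlock() ends up roughly like the sketch below. This is a
reconstruction from the changelog rather than the hunk itself; the
essential point is that p->pi_lock is taken first with interrupts
disabled, then rq->lock, and the task_rq(p) lookup is retried until it
is stable under both locks:

static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
	__acquires(p->pi_lock)
	__acquires(rq->lock)
{
	struct rq *rq;

	for (;;) {
		/* pi_lock first; set_task_cpu() runs under either lock */
		raw_spin_lock_irqsave(&p->pi_lock, *flags);
		rq = task_rq(p);
		raw_spin_lock(&rq->lock);
		/* with both locks held, task_rq(p) can no longer change */
		if (likely(rq == task_rq(p)))
			return rq;
		/* the task migrated before rq->lock was taken; retry */
		raw_spin_unlock(&rq->lock);
		raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
	}
}

That nesting is also why the unlock side above drops rq->lock before
releasing p->pi_lock with irqrestore.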
>
> > @@ -2646,9 +2647,9 @@ void sched_fork(struct task_struct *p, i
> > *
> > * Silence PROVE_RCU.
> > */
> > - rcu_read_lock();
> > + raw_spin_lock_irqsave(&p->pi_lock, flags);
> > set_task_cpu(p, cpu);
> > - rcu_read_unlock();
> > + raw_spin_unlock_irqrestore(&p->pi_lock, flags);
>
> Does "* Silence PROVE_RCU." no longer apply after remove rcu_read_lock() and
> rcu_read_unlock()?
I think the locking is still strictly superfluous; I can't seem to
recollect why I changed it from RCU to pi_lock, but since the task is
fresh and unhashed it really cannot be subject to concurrency.
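To make the question about that comment concrete: after the hunk the
fragment reads roughly as below, with the "Silence PROVE_RCU." remark
now sitting directly above plain spinlock code (the earlier lines of
the comment block are elided here, as they are in the quoted context):

	/*
	 * ...
	 * Silence PROVE_RCU.
	 */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	set_task_cpu(p, cpu);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);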
On 02/03/11 09:16, Peter Zijlstra wrote:
> On Fri, 2011-01-28 at 16:21 -0800, Frank Rowand wrote:
>> On 01/04/11 06:59, Peter Zijlstra wrote:
< snip >
>>> @@ -2646,9 +2647,9 @@ void sched_fork(struct task_struct *p, i
>>> *
>>> * Silence PROVE_RCU.
>>> */
>>> - rcu_read_lock();
>>> + raw_spin_lock_irqsave(&p->pi_lock, flags);
>>> set_task_cpu(p, cpu);
>>> - rcu_read_unlock();
>>> + raw_spin_unlock_irqrestore(&p->pi_lock, flags);
>>
>> Does "* Silence PROVE_RCU." no longer apply after remove rcu_read_lock() and
>> rcu_read_unlock()?
>
> I think the locking is still strictly superfluous; I can't seem to
> recollect why I changed it from RCU to pi_lock, but since the task is
> fresh and unhashed it really cannot be subject to concurrency.
Sorry, my comment was not very clear. I meant to ask: should the
comment "* Silence PROVE_RCU." also be removed?
-Frank