preempt_disable/enable() is equivalent to an RCU read-side critical
section, and the mutex lock slowpath disables preemption throughout the
entire slowpath. Let's remove the rcu_read_lock/unlock to save some
cycles in this hot path.
Signed-off-by: Yanfei Xu <[email protected]>
---
kernel/locking/mutex.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d456579d0952..54658c5db7d0 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -348,13 +348,13 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
{
bool ret = true;
- rcu_read_lock();
while (__mutex_owner(lock) == owner) {
/*
* Ensure we emit the owner->on_cpu, dereference _after_
- * checking lock->owner still matches owner. If that fails,
- * owner might point to freed memory. If it still matches,
- * the rcu_read_lock() ensures the memory stays valid.
+ * checking lock->owner still matches owner. And we already
+ * disabled preemption, which is equivalent to an RCU read-side
+ * critical section throughout the entire mutex lock slowpath,
+ * thus the memory stays valid.
*/
barrier();
@@ -374,7 +374,6 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
cpu_relax();
}
- rcu_read_unlock();
return ret;
}
--
2.27.0
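For context, the rcu_read_lock()/rcu_read_unlock() pair removed above sits
under the optimistic-spin path, which is entered with preemption already
disabled. A heavily abbreviated sketch of that calling chain (simplified
from kernel/locking/mutex.c; argument lists trimmed and the ww-mutex,
waiter and error-handling details omitted, so not the exact mainline code):

static __always_inline int __sched
__mutex_lock_common(struct mutex *lock, unsigned int state, ...)
{
	/* ... fastpath attempt, lockdep annotations ... */
	preempt_disable();

	if (__mutex_trylock(lock) ||
	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
		/*
		 * Optimistic spinning, including the calls to
		 * mutex_spin_on_owner(), ran entirely with preemption
		 * disabled.
		 */
		preempt_enable();
		return 0;
	}

	/* ... otherwise queue as a waiter and sleep ... */
}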
Use rcu_read_lock_sched() to simplify the code; it also saves some
cycles of RCU nesting-counter handling.
Signed-off-by: Yanfei Xu <[email protected]>
---
kernel/locking/rwsem.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 000e8d5a2884..7afadfe02286 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
return false;
}
- preempt_disable();
- rcu_read_lock();
+ rcu_read_lock_sched();
owner = rwsem_owner_flags(sem, &flags);
/*
* Don't check the read-owner as the entry may be stale.
@@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
if ((flags & RWSEM_NONSPINNABLE) ||
(owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
ret = false;
- rcu_read_unlock();
- preempt_enable();
+ rcu_read_unlock_sched();
lockevent_cond_inc(rwsem_opt_fail, !ret);
return ret;
--
2.27.0
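For reference, rcu_read_lock_sched()/rcu_read_unlock_sched() are thin
wrappers around preempt_disable()/preempt_enable(). Roughly (heavily
simplified from include/linux/rcupdate.h, with the lockdep and sparse
annotations elided):

static inline void rcu_read_lock_sched(void)
{
	preempt_disable();
	/* plus lockdep acquire and an rcu_is_watching() sanity check */
}

static inline void rcu_read_unlock_sched(void)
{
	/* lockdep release and sanity check first */
	preempt_enable();
}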
On 9/26/21 6:16 AM, Yanfei Xu wrote:
> preempt_disable/enable() is equivalent to an RCU read-side critical
> section, and the mutex lock slowpath disables preemption throughout the
> entire slowpath. Let's remove the rcu_read_lock/unlock to save some
> cycles in this hot path.
The description is wrong. Preemption is disabled only in the optimistic
spinning code, which is not the complete slowpath. Even though it may
sound reasonable that disabling preemption is likely to prevent reaching
a quiescent state, I am not totally sure that will always be the case as
there are different RCU flavors.
Cheers,
Longman
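To illustrate the flavor concern: rcu_read_lock() is not the same
primitive in every configuration. A rough sketch of the two variants
(heavily simplified, not the exact mainline implementation):

#ifdef CONFIG_PREEMPT_RCU
static inline void rcu_read_lock(void)
{
	/*
	 * Preemptible RCU: the reader only bumps a per-task nesting
	 * counter and may be preempted inside the critical section.
	 */
	__rcu_read_lock();	/* current->rcu_read_lock_nesting++ */
}
#else
static inline void rcu_read_lock(void)
{
	/*
	 * Non-preemptible RCU: a reader is effectively just a
	 * preemption-disabled region.
	 */
	preempt_disable();
}
#endif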
On 9/26/21 6:16 AM, Yanfei Xu wrote:
> Use rcu_read_lock_sched() to simplify the code; it also saves some
> cycles of RCU nesting-counter handling.
>
> Signed-off-by: Yanfei Xu <[email protected]>
> ---
> kernel/locking/rwsem.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 000e8d5a2884..7afadfe02286 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> return false;
> }
>
> - preempt_disable();
> - rcu_read_lock();
> + rcu_read_lock_sched();
> owner = rwsem_owner_flags(sem, &flags);
> /*
> * Don't check the read-owner as the entry may be stale.
> @@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> if ((flags & RWSEM_NONSPINNABLE) ||
> (owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
> ret = false;
> - rcu_read_unlock();
> - preempt_enable();
> + rcu_read_unlock_sched();
>
> lockevent_cond_inc(rwsem_opt_fail, !ret);
> return ret;
I don't think there is any performance gain with this change. I would
prefer the original code, which is more readable, as some people may
not know that rcu_read_lock_sched() disables preemption unless they
look into it.
Cheers,
Longman
On 9/26/21 3:16 PM, Waiman Long wrote:
> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>> preempt_disable/enable() is equivalent to an RCU read-side critical
>> section, and the mutex lock slowpath disables preemption throughout the
>> entire slowpath. Let's remove the rcu_read_lock/unlock to save some
>> cycles in this hot path.
>
> The description is wrong. Preemption is disabled only in the
> optimistic spinning code, which is not the complete slowpath. Even
> though it may sound reasonable that disabling preemption is likely to
> prevent reaching a quiescent state, I am not totally sure that will
> always be the case as there are different RCU flavors.
It does look like disabling preemption can serve as a substitute for
rcu_read_lock(). However, I would suggest that you also insert a comment
saying so, and that the task structure won't go away while spinning.
Cheers,
Longman
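One hypothetical way such a comment could read, placed above the
owner->on_cpu dereference in mutex_spin_on_owner() (wording is only a
suggestion, not the final patch):

	/*
	 * The caller (the optimistic-spin path) already disabled
	 * preemption, which acts as an RCU read-side critical section
	 * here: even if lock->owner changes under us, the owner
	 * task_struct cannot be freed while we keep spinning on it.
	 */
	barrier();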
On 9/27/21 8:46 AM, Waiman Long wrote:
> On 9/26/21 3:16 PM, Waiman Long wrote:
>> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>>> preempt_disable/enable() is equivalent to an RCU read-side critical
>>> section, and the mutex lock slowpath disables preemption throughout the
>>> entire slowpath. Let's remove the rcu_read_lock/unlock to save some
>>> cycles in this hot path.
>>
>> The description is wrong. Preemption is disabled only in the
>> optimistic spinning code, which is not the complete slowpath. Even
>> though it may sound reasonable that disabling preemption is likely to
>> prevent reaching a quiescent state, I am not totally sure that will
>> always be the case as there are different RCU flavors.
>
> It does look like disabling preemption can serve as a substitute for
> rcu_read_lock(). However, I would suggest that you also insert a comment
> saying so, and that the task structure won't go away while spinning.
>
Will send v2, thanks.
Yanfei
> Cheers,
> Longman
>
On 9/27/21 3:22 AM, Waiman Long wrote:
> On 9/26/21 6:16 AM, Yanfei Xu wrote:
>> Use rcu_read_lock_sched() to simplify the code; it also saves some
>> cycles of RCU nesting-counter handling.
>>
>> Signed-off-by: Yanfei Xu <[email protected]>
>> ---
>> kernel/locking/rwsem.c | 6 ++----
>> 1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
>> index 000e8d5a2884..7afadfe02286 100644
>> --- a/kernel/locking/rwsem.c
>> +++ b/kernel/locking/rwsem.c
>> @@ -616,8 +616,7 @@ static inline bool rwsem_can_spin_on_owner(struct
>> rw_semaphore *sem)
>> return false;
>> }
>>
>> - preempt_disable();
>> - rcu_read_lock();
>> + rcu_read_lock_sched();
>> owner = rwsem_owner_flags(sem, &flags);
>> /*
>> * Don't check the read-owner as the entry may be stale.
>> @@ -625,8 +624,7 @@ static inline bool rwsem_can_spin_on_owner(struct
>> rw_semaphore *sem)
>> if ((flags & RWSEM_NONSPINNABLE) ||
>> (owner && !(flags & RWSEM_READER_OWNED) &&
>> !owner_on_cpu(owner)))
>> ret = false;
>> - rcu_read_unlock();
>> - preempt_enable();
>> + rcu_read_unlock_sched();
>>
>> lockevent_cond_inc(rwsem_opt_fail, !ret);
>> return ret;
>
> I don't think there is any performance gain with this change. I would
> prefer the original code, which is more readable, as some people may
> not know that rcu_read_lock_sched() disables preemption unless they
> look into it.
>
OK, thanks for the comments.
Yanfei
> Cheers,
> Longman
>