rwsem_spin_on_owner() starts by establishing whether there is an owner it
can spin waiting on, then immediately repeats that check on entering the
loop, adding another lock-word access and possibly an avoidable cacheline
bounce. Subsequent iterations don't have this problem.

The sound thing to do is to cpu_relax() first.

Signed-off-by: Mateusz Guzik <[email protected]>
---
This is a borderline cosmetic patch which I did not bother benchmarking.
If you don't like it, that's fine with me; I'm not going to fight for it.
Cheers.
kernel/locking/rwsem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c6d17aee4209..a6c5bb68920e 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -758,6 +758,8 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 		return state;
 
 	for (;;) {
+		cpu_relax();
+
 		/*
 		 * When a waiting writer set the handoff flag, it may spin
 		 * on the owner as well. Once that writer acquires the lock,
@@ -784,8 +786,6 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 			state = OWNER_NONSPINNABLE;
 			break;
 		}
-
-		cpu_relax();
 	}
 
 	return state;
--
2.39.2