[ resend - forgot to send this to the list, also forgot intro text ]
> Robert Macaulay <[email protected]> wrote:
>>
>> On Wed, 15 Jan 2003, Andrew Morton wrote:
>>> if you could please test that with CONFIG_PREEMPT=y
>>
>> Reverting that brings the speed back up
>
> OK. How irritating.
>
> Presumably there's a fairness problem - once a CPU goes in there to start
> spinning on the lock, the length of the loop is such that it's easy for
> non-holders to zoom in and claim it first. Or something.
>
> Unless another way of solving the problem which that patch solves presents
> itself we may need to revert it.
>
> Or not. Should a CONFIG_PREEMPT SMP kernel compromise its latency because of
> overused locking??
Andrew, Everyone,
The new, preemptible spin_lock() spins on an atomic bus-locking
read/write, rather than on the ordinary read that the original
spin_lock() implementation spun on. Perhaps that is the source of
the inefficiency being seen.
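Put another way, it is the difference between a test-and-set spin and
a test-and-test-and-set spin. A rough user-space sketch of the two
loop styles (C11 atomics, made-up names, purely illustrative -- this
is not the kernel code):

#include <stdatomic.h>

typedef struct {
        atomic_int locked;              /* 0 = free, 1 = held */
} toy_spinlock_t;

/* Pure test-and-set: every failed attempt is a bus-locking
 * read-modify-write, so waiters keep stealing the cache line from
 * each other and from the lock holder. */
static void toy_lock_test_and_set(toy_spinlock_t *l)
{
        while (atomic_exchange_explicit(&l->locked, 1,
                                        memory_order_acquire))
                ;       /* spin directly on the atomic operation */
}

/* Test-and-test-and-set: spin on an ordinary read until the lock
 * looks free, and only then attempt the atomic exchange. The read
 * loop hits the local cache and generates no locked bus cycles. */
static void toy_lock_test_and_test_and_set(toy_spinlock_t *l)
{
        do {
                while (atomic_load_explicit(&l->locked,
                                            memory_order_relaxed))
                        ;       /* plain read, no bus locking */
        } while (atomic_exchange_explicit(&l->locked, 1,
                                          memory_order_acquire));
}

static void toy_unlock(toy_spinlock_t *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}

The patch below applies the second style to the kernel's preemptible
locking slow path.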
The attached sample code compiles but is untested and incomplete; it
is present only to illustrate the idea.
Joe
--- 2.5-bk/kernel/sched.c.orig  2003-01-20 14:14:55.000000000 -0500
+++ 2.5-bk/kernel/sched.c       2003-01-20 17:31:49.000000000 -0500
@@ -2465,15 +2465,13 @@
                 _raw_spin_lock(lock);
                 return;
         }
-
-        while (!_raw_spin_trylock(lock)) {
-                if (need_resched()) {
-                        preempt_enable_no_resched();
-                        __cond_resched();
-                        preempt_disable();
+        do {
+                preempt_enable();
+                while (spin_is_locked(lock)) {
+                        cpu_relax();
                 }
-                cpu_relax();
-        }
+                preempt_disable();
+        } while (!_raw_spin_trylock(lock));
 }
 
 void __preempt_write_lock(rwlock_t *lock)
On Mon, 2003-01-20 at 22:58, Joe Korty wrote:
> The new, preemptable spin_lock() spins on an atomic bus-locking
> read/write instead of an ordinary read, as the original spin_lock
> implementation did. Perhaps that is the source of the inefficiency
> being seen.
It's a fairly critical "never do this" on older Intel processors and
kills the box very efficiently, so your diagnosis is extremely
plausible.