> > As you've observed, with the approach of waiting for all pre-empted
> > tasks to synchronize, the possibility of a task staying pre-empted
> > for a long time could affect the latency of an update/synchronize
> > (though it's hard for me to judge how likely that is).
>
> It's very unlikely on a system that doesn't already have problems with
> CPU starvation because of runaway real-time tasks or interrupt handlers.
Agreed!
> First, preemption is a comparatively rare event with a mostly
> timesharing load, typically from 1% to 10% of all context switches.
Again, agreed!
> Second, the scheduler should not penalize the preempted task for being
> preempted, so that it should usually get to continue running as soon as
> the preempting task is descheduled, which is at most one timeslice for
> timesharing tasks.
The algorithms we have been looking at need to have absolute guarantees
that earlier activity has completed. The most straightforward way to
guarantee this is to have the critical-section activity run with preemption
disabled. Most of these code segments either take out locks or run
with interrupts disabled anyway, so there is little or no degradation of
latency in this case. In fact, in many cases, latency would actually be
improved due to removal of explicit locking primitives.
I believe that one of the issues that pushes in this direction is the
discovery that "synchronize_kernel()" could not be a nop in a UP kernel
unless the read-side critical sections disable preemption (either in
the natural course of events, or artificially if need be). Andi or
Rusty can correct me if I missed something in the previous exchange...
The read-side code segments are almost always quite short, and, again,
they would almost always otherwise need to be protected by a lock of
some sort, which would disable preemption in any event.
Thoughts?
Thanx, Paul
On Tue, 10 Apr 2001, Paul McKenney wrote:
> The algorithms we have been looking at need to have absolute guarantees
> that earlier activity has completed. The most straightforward way to
> guarantee this is to have the critical-section activity run with preemption
> disabled. Most of these code segments either take out locks or run
> with interrupts disabled anyway, so there is little or no degradation of
> latency in this case. In fact, in many cases, latency would actually be
> improved due to removal of explicit locking primitives.
>
> I believe that one of the issues that pushes in this direction is the
> discovery that "synchronize_kernel()" could not be a nop in a UP kernel
> unless the read-side critical sections disable preemption (either in
> the natural course of events, or artificially if need be). Andi or
> Rusty can correct me if I missed something in the previous exchange...
>
> The read-side code segments are almost always quite short, and, again,
> they would almost always otherwise need to be protected by a lock of
> some sort, which would disable preemption in any event.
>
> Thoughts?
Disabling preemption is a possible solution if the critical section is
short - less than 100us - otherwise preemption latencies become a problem.
The implementation of synchronize_kernel() that Rusty and I discussed
earlier in this thread would work in other cases, such as module
unloading, where there was a concern that it was not practical to have
any sort of lock in the read-side code path and the write side was not
time critical.
Nigel Gamble [email protected]
Mountain View, CA, USA. http://www.nrg.org/
MontaVista Software [email protected]