> On Tue, 10 Apr 2001, Paul McKenney wrote:
> > The algorithms we have been looking at need to have absolute guarantees
> > that earlier activity has completed. The most straightforward way to
> > guarantee this is to have the critical-section activity run with
> > preemption disabled. Most of these code segments either take out locks
> > or run with interrupts disabled anyway, so there is little or no
> > degradation of latency in this case. In fact, in many cases, latency
> > would actually be improved due to removal of explicit locking
> > primitives.
> >
> > I believe that one of the issues that pushes in this direction is the
> > discovery that "synchronize_kernel()" could not be a nop in a UP kernel
> > unless the read-side critical sections disable preemption (either in
> > the natural course of events, or artificially if need be). Andi or
> > Rusty can correct me if I missed something in the previous exchange...
> >
> > The read-side code segments are almost always quite short, and, again,
> > they would almost always otherwise need to be protected by a lock of
> > some sort, which would disable preemption in any event.
> >
> > Thoughts?
>
> Disabling preemption is a possible solution if the critical section is
> short - less than 100us - otherwise preemption latencies become a problem.
Seems like a reasonable restriction. Of course, this same limit applies
to locks and interrupt disabling, right?
> The implementation of synchronize_kernel() that Rusty and I discussed
> earlier in this thread would work in other cases, such as module
> unloading, where there was a concern that it was not practical to have
> any sort of lock in the read-side code path and the write side was not
> time critical.
True, but only if the synchronize_kernel() implementation is applied to UP
kernels as well.
Thanx, Paul
> Nigel Gamble [email protected]
> Mountain View, CA, USA. http://www.nrg.org/
>
> MontaVista Software [email protected]
On Tue, Apr 10, 2001 at 09:08:16PM -0700, Paul McKenney wrote:
> > Disabling preemption is a possible solution if the critical section is
> > short - less than 100us - otherwise preemption latencies become a problem.
>
> Seems like a reasonable restriction. Of course, this same limit applies
> to locks and interrupt disabling, right?
So supposing 1/2us per update,

	lock process list
	for every process, update pgd
	unlock process list

is ok if #processes < 200, but can cause some unspecified system failure
due to a dependency on the 100us limit otherwise?
And on a slower machine, or with some heavy I/O possibilities ...
We have a tiny little kernel to worry about in RTLinux, and it's quite
hard for us to keep track of all possible delays in such cases. How's this
going to work for Linux?
--
---------------------------------------------------------
Victor Yodaiken
Finite State Machine Labs: The RTLinux Company.
http://www.fsmlabs.com http://www.rtlinux.com
On Tue, 10 Apr 2001, Paul McKenney wrote:
> > Disabling preemption is a possible solution if the critical section
> > is short - less than 100us - otherwise preemption latencies become
> > a problem.
>
> Seems like a reasonable restriction. Of course, this same limit
> applies to locks and interrupt disabling, right?
That's the goal I'd like to see us achieve in 2.5. Interrupts are
already in this range (with a few notable exceptions), but there is
still the big kernel lock and a few other long held spin locks to deal
with. So I want to make sure that any new locking scheme, like the ones
under discussion, plays nicely with the efforts to achieve low-latency
Linux, such as the preemptible kernel.
> > The implementation of synchronize_kernel() that Rusty and I
> > discussed earlier in this thread would work in other cases, such as
> > module unloading, where there was a concern that it was not
> > practical to have any sort of lock in the read-side code path and
> > the write side was not time critical.
>
> True, but only if the synchronize_kernel() implementation is applied
> to UP kernels, also.
Yes, that is the idea.
Nigel Gamble [email protected]
Mountain View, CA, USA. http://www.nrg.org/
MontaVista Software [email protected]
On Tue, 10 Apr 2001 [email protected] wrote:
> On Tue, Apr 10, 2001 at 09:08:16PM -0700, Paul McKenney wrote:
> > > Disabling preemption is a possible solution if the critical section is
> > > short - less than 100us - otherwise preemption latencies become a problem.
> >
> > Seems like a reasonable restriction. Of course, this same limit applies
> > to locks and interrupt disabling, right?
>
> So supposing 1/2 us per update
> lock process list
> for every process update pgd
> unlock process list
>
> is ok if #processes < 200, but can cause some unspecified system failure
> due to a dependency on the 100us limit otherwise?
Only to a hard real-time system.
> And on a slower machine or with some heavy I/O possibilities ....
I'm mostly interested in Linux in embedded systems, where we have a lot
of control over the overall system, such as how many processes are
running. This makes it easier to control latencies than on a
general purpose computer.
> We have a tiny little kernel to worry about in RTLinux and it's quite
> hard for us to keep track of all possible delays in such cases. How's this
> going to work for Linux?
The same way everything works for Linux: with enough people around the
world interested in and working on these problems, they will be fixed.
Nigel Gamble [email protected]
Mountain View, CA, USA. http://www.nrg.org/
MontaVista Software [email protected]