2009-07-16 19:48:16

by James H. Anderson

Subject: Re: [Fwd: Re: RFC for a new Scheduling policy/class in the Linux-kernel]


> It looks to me like Jim and Bjoern name the kernel-mutex locking scheme
> (of non-preemption and FIFO queueing) as FMLP and advocate it for
> user-level mutexes. Jim: Please correct me if my interpretation is
> incorrect.

I should have addressed this, sorry.

Actually, I don't advocate for anything. :-) As I said in my very
first email in this thread, in the LITMUS^RT project, changing Linux
is not one of our goals. I leave that to other people who are way
smarter than me.

But to the point you raise, please note that the long version of the
FMLP is a bit more than non-preemption combined with FIFO waiting,
since waiting is done by suspension. And as I said in an earlier email,
we designed it for a real-time (only) environment. However, I think
a user-level variant that could be used in a more general environment
would certainly be possible.
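
For concreteness, a minimal user-level sketch of a suspension-based
FIFO lock in that spirit might look like the following. This is
illustrative only: the names and the fixed boost priority are my
assumptions, it is not the LITMUS^RT implementation, and the real FMLP
boosts the holder to the priority of the highest-priority blocked job
rather than to a fixed value.

/* Sketch of a suspension-based FIFO lock with priority boosting.
 * Hypothetical names; assumes the calling threads run under SCHED_FIFO
 * with sufficient privileges to change their own priority. */
#include <pthread.h>
#include <sched.h>

struct fifo_boost_lock {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    unsigned long next_ticket;   /* next ticket to hand out */
    unsigned long now_serving;   /* ticket allowed to enter */
    int boost_prio;              /* priority used while holding the lock */
};

void fifo_boost_lock_init(struct fifo_boost_lock *l, int boost_prio)
{
    pthread_mutex_init(&l->m, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->next_ticket = 0;
    l->now_serving = 0;
    l->boost_prio = boost_prio;
}

/* Returns the caller's previous priority so release() can restore it. */
int fifo_boost_lock_acquire(struct fifo_boost_lock *l)
{
    struct sched_param old;
    int policy;
    pthread_getschedparam(pthread_self(), &policy, &old);

    pthread_mutex_lock(&l->m);
    unsigned long my_ticket = l->next_ticket++;
    while (my_ticket != l->now_serving)
        pthread_cond_wait(&l->cv, &l->m);   /* suspend; FIFO by ticket */
    pthread_mutex_unlock(&l->m);

    /* Boost while inside the critical section. */
    struct sched_param boosted = { .sched_priority = l->boost_prio };
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &boosted);
    return old.sched_priority;
}

void fifo_boost_lock_release(struct fifo_boost_lock *l, int old_prio)
{
    /* Drop back to the normal priority, then pass the lock on. */
    struct sched_param p = { .sched_priority = old_prio };
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);

    pthread_mutex_lock(&l->m);
    l->now_serving++;
    pthread_cond_broadcast(&l->cv);   /* only the next ticket proceeds */
    pthread_mutex_unlock(&l->m);
}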

-Jim

P.S. We didn't talk about the low processor utilization (Dhall effect)
mentioned in your last email. However, that applies to hard real-time
workloads, not soft real-time workloads. This discussion has been
touching on both.


2009-07-16 20:48:53

by Raj Rajkumar

Subject: Re: [Fwd: Re: RFC for a new Scheduling policy/class in the Linux-kernel]

Jim:

Good discussion. Thanks for taking the time to educate me on past
exchanges.

> We didn't talk about the low processor utilization (Dhall effect)
> mentioned in your last email. However, that applies to hard real-time
> workloads, not soft real-time workloads. This discussion has been
> touching on both.

For hard real-time workloads, partitioning (static binding to specific
processors) works well, giving the developer control over where tasks
run and which tasks they contend with. For soft real-time workloads,
global scheduling
(dynamic binding to available processors) should do well. The
situation is analogous to what we see in banks and airports. There is a
common global queue serviced by multiple counters for "soft" real-time
customers, and for those (business or first-class/special) customers
needing higher QoS, separate queues are provided. In the OS context,
we need to ensure that the two queues/servers do not interfere.
Ceilings would help even when the hard and soft real-time tasks use the
same processors.
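
As a rough illustration of the two pools on Linux (the CPU numbers and
priorities below are made up, and stock SCHED_FIFO with affinity masks
only approximates true global scheduling within the soft pool), the
setup could look like this:

#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

/* Partitioned "queue": bind a hard real-time process to one dedicated CPU. */
void make_hard_rt(pid_t pid, int cpu, int prio)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    sched_setaffinity(pid, sizeof(set), &set);

    struct sched_param p = { .sched_priority = prio };
    sched_setscheduler(pid, SCHED_FIFO, &p);
}

/* "Global" queue: let a soft real-time process run on any CPU in the
 * shared pool (CPUs 2-5 here), in a lower priority band. */
void make_soft_rt(pid_t pid, int prio)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 2; cpu <= 5; cpu++)
        CPU_SET(cpu, &set);
    sched_setaffinity(pid, sizeof(set), &set);

    struct sched_param p = { .sched_priority = prio };
    sched_setscheduler(pid, SCHED_FIFO, &p);
}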

However, the question of dealing with mutexes shared by processes
allocated to different processors remains. As Ted has pointed out,
avoiding them would be best! In practice, moving them to a
synchronization processor (as was pointed out by Peter? and also
discussed in one of my earlier papers on synchronization on
multi-processors) ought to be considered. I think the first-order
improvements come from:

(1) ensuring that task waiting times are bounded as a function of
critical-section lengths only (i.e., avoiding the "unbounded" priority
inversion problem) - this is accomplished by having critical sections
execute at ceiling priorities (or higher) in the multiprocessor case
(see the small sketch below), and
(2) avoiding waits behind very long critical sections used only by
lower-priority tasks - using priority ceilings instead of raw higher
priority values for long critical sections mitigates this problem.
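
To illustrate (1) at the user level, POSIX already exposes a
ceiling-based locking protocol. A minimal sketch follows; the ceiling
value of 80 is an assumption and would be set to at least the highest
priority of any task using the lock, and on a multiprocessor the
ceiling of course only governs the processor the holder runs on.

#include <pthread.h>

pthread_mutex_t shared_lock;

/* Initialize a mutex whose holder runs at the ceiling priority (the
 * POSIX priority-protect protocol). Ceiling value is illustrative. */
int init_ceiling_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 80);
    return pthread_mutex_init(&shared_lock, &attr);
}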

---
Raj