James H. Anderson wrote:
>
> Hi Raj,
>
> On Thu, 16 Jul 2009, Raj Rajkumar wrote:
>
>> non-preemptive critical section. In addition, we could allow
>> mutexes to either pick basic priority inheritance (desirable for local
>> mutexes?) or the priority ceiling version (desirable for global mutexes
>> shared across processors/cores).
>
> This discussion when I entered it was about using global scheduling
> in Linux (not partitioning), so that's what I thought the focus of the
> discussion was. What's the definition of a local mutex in that case?
> And how do you use ceilings under global scheduling?
>
> Thanks.
>
> -Jim
Hi Jim: I was not aware of the global scheduling constraint from the
earlier discussions - thanks for the clarification. Two thoughts on
global scheduling:
1. I presume you and others have pointed out the anomalies and low
processor utilization that can result from global scheduling (the Dhall
& Liu utilization-bound analysis being the most famous example). In
addition, there are run-time performance implications because the caches
keep getting cold as processes migrate across cores. The Linux notion of
processor affinity needs to be put to good use!
2. The definition of a priority ceiling (the priority of the highest
priority task that can access a shared resource/mutex) holds independent
of partitioning (static binding) or global scheduling (dynamic
binding). The following issue still remains. If there are m
processors, consider m low-priority tasks sharing m mutexes to execute
VERY long critical sections. These mutexes are shared only with m (or
fewer) other low-priority tasks, so their ceilings are low. If these
tasks each grab a mutex on each processor and execute these long
critical sections non-preemptively, higher priority tasks waiting to
execute will be starved/delayed. With the ceiling notion in place,
these critical sections will instead execute at a low ceiling priority
and can therefore be preempted.
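To make the ceiling semantics concrete, here is a minimal user-space
sketch using the POSIX priority-protect protocol. This is only an
illustration of the semantics, not the kernel-side mechanism under
discussion; the helper name and the ceiling value supplied by the caller
are assumptions for the example.

#include <pthread.h>

/* Initialize a mutex that uses the priority-ceiling ("priority
 * protect") protocol.  'ceiling' should be the priority of the
 * highest-priority task that ever locks this mutex. */
static int init_ceiling_mutex(pthread_mutex_t *m, int ceiling)
{
	pthread_mutexattr_t attr;
	int err;

	err = pthread_mutexattr_init(&attr);
	if (err)
		return err;
	err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
	if (!err)
		err = pthread_mutexattr_setprioceiling(&attr, ceiling);
	if (!err)
		err = pthread_mutex_init(m, &attr);
	pthread_mutexattr_destroy(&attr);
	return err;
}

While a task holds such a mutex it runs at the ceiling priority; if only
low-priority tasks ever lock it, the ceiling is low and higher priority
tasks remain free to preempt the critical section.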
Combining the two comments above, I would suspect that in practice,
tasks with tight timing constraints would be bound to specific
processors/cores (they can be spread out so that they do not compete
with one another, and hence each/many/most can get very good response
times on their processors), and my prior comments would apply with
processor affinities in place. Tasks with less tight timing constraints,
perhaps implementing other functions with their own shared mutexes,
would use ceiling execution for their critical sections without
affecting the response times of the tighter real-time tasks.
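As an illustration of that binding, a minimal user-space sketch of
pinning a tight-deadline thread to one core and giving it a fixed
real-time priority. The CPU index and SCHED_FIFO priority are
placeholder values chosen by the system designer, and
pthread_setaffinity_np is the glibc interface rather than anything
kernel-internal.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to one CPU and give it a fixed SCHED_FIFO
 * priority.  'cpu' and 'prio' are illustrative values. */
static int bind_rt_thread(int cpu, int prio)
{
	cpu_set_t cpus;
	struct sched_param sp = { .sched_priority = prio };
	int err;

	CPU_ZERO(&cpus);
	CPU_SET(cpu, &cpus);
	err = pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);
	if (err)
		return err;
	return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}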
Best,
---
Raj