2005-04-12 19:52:22

by Daniel Walker

[permalink] [raw]
Subject: FUSYN and RT


I just wanted to discuss the problem a little more. From all the
conversations that I've had it seems that everyone is worried about
having PI in Fusyn, and PI in the RT mutex.

It seems like these two locks are going to interact on a very limited
basis. Fusyn will be the user space mutex, and the RT mutex is only in
the kernel. You can't lock an RT mutex and hold it, then lock a Fusyn
mutex (anyone disagree?). That is assuming Fusyn stays in user space.

The RT mutex will never lower a task's priority below the priority
given to it by locking a Fusyn lock.

At least, both mutexes will need to use the same API to raise and lower
priorities.

The next question is deadlocks. Because one mutex is only in the kernel,
and the other is only in user space, it seems that deadlocks will only
occur when a process holds locks that are all the same type.


Daniel



2005-04-12 20:45:38

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Esben Nielsen [mailto:[email protected]]
>On 12 Apr 2005, Daniel Walker wrote:
>
>>
>>
>> At least, both mutexes will need to use the same API to raise and lower
>> priorities.
>
>You basicly need 3 priorities:
>1) Actual: task->prio
>2) Base prio with no RT locks taken: task->static_prio
>3) Base prio with no Fusyn locks taken: task->??
>
>So no, you will not need the same API, at all :-) Fusyn manipulates
>task->static_prio and only task->prio when no RT lock is taken. When the
>first RT-lock is taken/released it manipulates task->prio only. A release
>of a Fusyn will manipulate task->static_prio as well as task->prio.

Yes you do. You took care of the simple case. Things get funnier
when you own more than one PI lock, or you need to promote a
task that is blocked on other PI locks whose owners are blocked
on PI locks (transitivity), or when you mix PI and PP (priority
protection/ priority ceiling).

In that case not having a sim{pl,g}e API for doing it is nuts.

>> The next question is deadlocks. Because one mutex is only in the kernel,
>> and the other is only in user space, it seems that deadlocks will only
>> occur when a process holds locks that are all the same type.
>
>Yes.
>All these things assumes a clear lock nesting: Fusyns are on the outer
>level, RT locks on the inner level. What happens if there is a bug in RT
>locking code will be unclear. On the other hand errors in Fusyn locking
>(user space) should not give problems in the kernel.

Wrong. Fusyns are kernel locks that are exposed to user space (much as
a file descriptor is a kernel object exposed to user space through
a system call). Of course if the user does something mean with them
they will cause an error, but should not have undesired consequences
in the kernel. But BUGS in the code will be as unclear as in RT mutexes.

>it is is bad maintainance to have to maintain two seperate systems. The
>best way ought to be to try to only have one PI system. The kernel is big
>and confusing enough as it is!

Ayeh for the big...it is not that confusing :)

-- Inaky

2005-04-12 21:34:15

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Daniel Walker [mailto:[email protected]]
>
>> Well yeah, but you could lock a fusyn, then invoke a system call which
>> locks a kernel semaphore.
>
>Right .. For deadlock detection, I want to assume that the fusyn lock is
>on the outer level. That way both deadlock detection system will work
>properly (in theory).

So that would mean creating a restriction (which is implicit
now, anyway): "a system call cannot return holding an rt-mutex",
right?

-- Inaky

2005-04-12 21:30:02

by Daniel Walker

[permalink] [raw]
Subject: Re: FUSYN and RT

On Tue, 2005-04-12 at 13:33, Joe Korty wrote:
> On Tue, Apr 12, 2005 at 11:15:02AM -0700, Daniel Walker wrote:
>
> > It seems like these two locks are going to interact on a very limited
> > basis. Fusyn will be the user space mutex, and the RT mutex is only in
> > the kernel. You can't lock an RT mutex and hold it, then lock a Fusyn
> > mutex (anyone disagree?). That is assuming Fusyn stays in user space.
>
> Well yeah, but you could lock a fusyn, then invoke a system call which
> locks a kernel semaphore.


Right .. For deadlock detection, I want to assume that the fusyn lock is
on the outer level. That way both deadlock detection systems will work
properly (in theory).

Daniel

2005-04-12 21:02:13

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Daniel Walker [mailto:[email protected]]
>
>I just wanted to discuss the problem a little more. From all the
>conversations that I've had it seem that everyone is worried about
>having PI in Fusyn, and PI in the RT mutex.

It sure is overlapping functionality.

> ...
>The RT mutex will never lower a tasks priority lower than the priority
>given to it by locking a Fusyn lock.
>
>At least, both mutexes will need to use the same API to raise and lower
>priorities.

This would be step one. This would also mean they'd need to share
some data structures to protect transitivity (like when you own
more than one mutex that is PI or PP).

>The next question is deadlocks. Because one mutex is only in the kernel,
>and the other is only in user space, it seems that deadlocks will only
>occur when a process holds locks that are all the same type.

Actually, I think it would occur in both cases. It all boils down
at the end to locks in the kernel. Some of them are not visible to
user space, some are (and with those we implement mutexes):

Task B owns FUSYN-MUTEX 2
Task A owns RT-MUTEX 1
Task B is waiting for RT-MUTEX 1
TASK A wants to wait for FUSYN-MUTEX 2
[deadlock]

[replace at will RT for FUSYN, FUSYN for RT, both, it doesn't
matter, the problem stays the same].
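
In rough code, the same scenario (lock and function names here are
just illustrative, not the actual fusyn or rt-mutex API):

        /* Task A */
        rt_mutex_lock(&rtm1);   /* A now owns RT-MUTEX 1 */
        fusyn_lock(&fum2);      /* blocks: B owns FUSYN-MUTEX 2 */

        /* Task B */
        fusyn_lock(&fum2);      /* B now owns FUSYN-MUTEX 2 */
        rt_mutex_lock(&rtm1);   /* blocks: A owns RT-MUTEX 1 -> deadlock */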

In any case, let's say we have both implementations coexisting;
then verifying deadlock becomes a daunting task, because there
are two lists of ownerships that we have to verify: the chain
becomes a graph.

Whatever happens in the end: there needs to be only one kernel
implementation. Whichever (I particularly don't care, although I'd
like to see fusyns take it, I think they are able). The one that
exposes user space mutexes has to be built on top of that. And
exposing to user space is the toughest task (unless you don't
care about the fast-path).

-- Inaky

2005-04-12 22:22:30

by Daniel Walker

[permalink] [raw]
Subject: Re: FUSYN and RT

On Tue, 2005-04-12 at 13:29, Esben Nielsen wrote:

> You basicly need 3 priorities:
> 1) Actual: task->prio
> 2) Base prio with no RT locks taken: task->static_prio
> 3) Base prio with no Fusyn locks taken: task->??
>
> So no, you will not need the same API, at all :-) Fusyn manipulates
> task->static_prio and only task->prio when no RT lock is taken. When the
> first RT-lock is taken/released it manipulates task->prio only. A release
> of a Fusyn will manipulate task->static_prio as well as task->prio.

mutex_setprio(), I don't know if you could call that an API but that's
what I was talking about.. They should both use that. I think it would
be better if the RT mutex (and fusyn) didn't depend on a field in the
task_struct to retain the old priority. That would make it easier ..

This goes back to the assumption that the locking isn't intermingled
once you get into the kernel. The RT mutex can safely save the owner
priority without a Fusyn jumping in and changing it and the other way
around..
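
A minimal sketch of that idea--both lock types going through
mutex_setprio() and remembering the pre-boost priority in the lock
itself rather than in the task_struct (every name here except
mutex_setprio() is made up for the example):

        struct some_lock {
                struct task_struct *owner;
                int owner_old_prio;     /* prio to restore on unlock */
                /* ... wait list, etc. ... */
        };

        /* a higher-priority waiter queues up on the lock */
        static void lock_boost_owner(struct some_lock *lock, int waiter_prio)
        {
                if (waiter_prio < lock->owner->prio) {  /* lower = higher prio */
                        lock->owner_old_prio = lock->owner->prio;
                        mutex_setprio(lock->owner, waiter_prio);
                }
        }

        /* unlock: restore whatever the owner had before the boost */
        static void lock_restore_owner(struct some_lock *lock)
        {
                mutex_setprio(lock->owner, lock->owner_old_prio);
        }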

Daniel

2005-04-12 22:38:13

by Daniel Walker

[permalink] [raw]
Subject: RE: FUSYN and RT

On Tue, 2005-04-12 at 15:26, Perez-Gonzalez, Inaky wrote:

> You should not need any of this if your user space mutexes are a
> wrapper over the kernel space ones. The kernel handles everything
> the same and there is no need to take care of any special cases or
> variations [other than the ones imposed by the wrapping].


The problem situation that I'm thinking of is when a task gets priority
boosted by Fusyn, then gets priority boosted by an RT Mutex. In that
situation, when the RT mutex demotes back to task->static_prio it will
be lower than the priority that Fusyn has given the task (potentially).
I don't think that's handled in the kernel anyplace, is it?

Daniel

2005-04-12 22:34:03

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Daniel Walker [mailto:[email protected]]
>
>On Tue, 2005-04-12 at 13:29, Esben Nielsen wrote:
>
>> So no, you will not need the same API, at all :-) Fusyn manipulates
>> task->static_prio and only task->prio when no RT lock is taken. When the
>> first RT-lock is taken/released it manipulates task->prio only. A release
>> of a Fusyn will manipulate task->static_prio as well as task->prio.
>
>mutex_setprio() , I don't know if you could call that an API but that's
>what I was talking about.. They should both use that. I think it would
>be better if the RT mutex (and fusyn) didn't depend on a field in the
>task_struct to retain the old priority. That would make it easier ..
>
>This goes back to the assumption that the locking isn't intermingled
>once you get into the kernel . The RT mutex can safely save the owner
>priority with out a Fusyn jumping in and changing it and the other way
>around..

You should not need any of this if your user space mutexes are a
wrapper over the kernel space ones. The kernel handles everything
the same and there is no need to take care of any special cases or
variations [other than the ones imposed by the wrapping].

-- Inaky

2005-04-12 23:17:40

by Esben Nielsen

[permalink] [raw]
Subject: RE: FUSYN and RT

I think we (at least) got a bit confused here. What (I think) the thread
started out with was a clear layering of the mutexes. I.e. the code obeys
the grammar

VALID_LOCK_CODE = LOCK_FUSYN VALID_LOCK_CODE UNLOCK_FUSYN
| VALID_LOCK_CODE VALID_LOCK_CODE
| VALID_RTLOCK_CODE
VALID_RTLOCK = LOCK_RTLOCK VALID_RTLOCK_CODE UNLOCK_RTLOCK
| VALID_RTLOCK_CODE VALID_RTLOCK_CODE
| VALID_SPINLOCK_CODE
| (code with no locks at all)
VALID_SPINLOCK_CODE = ... :-)

In that context the case is simple: Fusyns and RT-locks can easily
co-exist. One only needs an extra level akin to static_prio to fall back to
when the last fusyn is unlocked. The APIs should be _different_, but
fusyn_setprio() should both update static_prio and call mutex_setprio().
There will never be deadlocks involving both types of locks, as Daniel
said, because the lock nesting is sorted out. Furthermore, unbalanced
(incorrect) code like
LOCK_FUSYN VALID_RTLOCK_CODE (no unlock)
will never hit the RT-level. So assuming the RT-lock based code is
debugged, the error must be in Fusyn based code.
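
The layering the grammar describes, as rough C (names illustrative):

        fusyn_lock(&user_mutex);           /* outer level, user-visible     */
        sys_something();                   /* syscall may take rt-locks ... */
            rt_mutex_lock(&kernel_lock);   /* ... inner level, kernel only  */
            rt_mutex_unlock(&kernel_lock); /* dropped before returning      */
        fusyn_unlock(&user_mutex);         /* outer lock released last      */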

Esben

On Tue, 12 Apr 2005, Perez-Gonzalez, Inaky wrote:

> >From: Esben Nielsen [mailto:[email protected]]
> >On 12 Apr 2005, Daniel Walker wrote:
> >
> >>
> >>
> >> At least, both mutexes will need to use the same API to raise and lower
> >> priorities.
> >
> >You basicly need 3 priorities:
> >1) Actual: task->prio
> >2) Base prio with no RT locks taken: task->static_prio
> >3) Base prio with no Fusyn locks taken: task->??
> >
> >So no, you will not need the same API, at all :-) Fusyn manipulates
> >task->static_prio and only task->prio when no RT lock is taken. When the
> >first RT-lock is taken/released it manipulates task->prio only. A release
> >of a Fusyn will manipulate task->static_prio as well as task->prio.
>
> Yes you do. You took care of the simple case. Things get funnier
> when you own more than one PI lock, or you need to promote a
> task that is blocked on other PI locks whose owners are blocked
> on PI locks (transitivity), or when you mix PI and PP (priority
> protection/ priority ceiling).
>
> In that case not having a sim{pl,g}e API for doing it is nuts.
>
> >> The next question is deadlocks. Because one mutex is only in the kernel,
> >> and the other is only in user space, it seems that deadlocks will only
> >> occur when a process holds locks that are all the same type.
> >
> >Yes.
> >All these things assumes a clear lock nesting: Fusyns are on the outer
> >level, RT locks on the inner level. What happens if there is a bug in RT
> >locking code will be unclear. On the other hand errors in Fusyn locking
> >(user space) should not give problems in the kernel.
>
> Wrong. Fusyns are kernel locks that are exposed to user space (much as
> a file descriptor is a kernel object exposed to user space through
> a system call). Of course if the user does something mean with them
> they will cause an error, but should not have undesired consequences
> in the kernel. But BUGS in the code will be as unclear as in RT mutexes.
>
> >it is is bad maintainance to have to maintain two seperate systems. The
> >best way ought to be to try to only have one PI system. The kernel is big
> >and confusing enough as it is!
>
> Ayeh for the big...it is not that confusing :)
>
> -- Inaky
>


2005-04-12 23:21:18

by Joe Korty

[permalink] [raw]
Subject: Re: FUSYN and RT

On Tue, Apr 12, 2005 at 11:15:02AM -0700, Daniel Walker wrote:

> It seems like these two locks are going to interact on a very limited
> basis. Fusyn will be the user space mutex, and the RT mutex is only in
> the kernel. You can't lock an RT mutex and hold it, then lock a Fusyn
> mutex (anyone disagree?). That is assuming Fusyn stays in user space.

Well yeah, but you could lock a fusyn, then invoke a system call which
locks a kernel semaphore.

Regards,
Joe
--
"Money can buy bandwidth, but latency is forever" -- John Mashey

2005-04-12 23:53:14

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Daniel Walker [mailto:[email protected]]
>On Tue, 2005-04-12 at 15:26, Perez-Gonzalez, Inaky wrote:
>
>> You should not need any of this if your user space mutexes are a
>> wrapper over the kernel space ones. The kernel handles everything
>> the same and there is no need to take care of any special cases or
>> variations [other than the ones imposed by the wrapping].
>
>The problem situation that I'm thinking of is when a task gets priority
>boosted by Fusyn , then gets priority boosted by an RT Mutex. In that
>situation, when the RT mutex demotes back to task->static_prio it will
>be lower than the priority that Fusyn has given the task (potentially).
>I don't think that's handled in the kernel anyplace, is it?

There would be conflicts, for sure [I haven't been able
to grok all the details of rt-mutex, so if I say something
really gross, please speak up]:

- rt saves the new owner's original prio in the mutex
- fusyn uses the mutex owned list and recalculates
every time the dyn prio changes based on that (so
if we change the prio of the owner while owning
there are no side effects).

- rt saves the pi-waiters in a per-task list, the rest
in another list
- fusyn associates all the waiters in a per-mutex list,
and the PI or not is not a per-waiter attribute, it
is a per-mutex attribute--this simplifies the bubbling
up of the boosting priority

In general the methods are similar, but the implementations
are too different. Yielding to the fact that I haven't looked
too deep at rt-mutex [to be fair, I get lost :)], I think we'd
have a hard time getting them to interoperate with regards
to PI and PP.

Are you *really* considering having them coexist? I lack
the braveness :) It'd be easier to zap one in favour of the
other and cross-pollinate.

<biased opinion>
if we take out all the user space crap from fusyn (vlocator.c,
ufu* and merge in or remove the hooks it requires), it seems
to me to be simpler than rt-mutex in the PI handling section.
However, that is possibly because I wrote it.
</biased opinion>

<long explanation copied from somewhere else>

fusyn uses (for transitivity and stacking purposes) the
priority of the 'fulock_olist' ownership list in the task
struct as the priority the task needs to be boosted to
and calls __prio_boost() [which sets task->boost_prio] with
that prio.

Thus, whenever you acquire a mutex, it is added to that list
in the right position:

- non PI or PP mutexes have priority BOTTOM_PRIO, so they
are always lower than any possible priority you can get
and won't affect your dynamic prio.

- PI mutexes inherit their prio from the highest prio
(task->prio) of the tasks waiting in its waiters list.

- PP mutexes get a prio assigned by a sort of ioctl

</long explanation copied from somewhere else>
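
A sketch of that calculation, with made-up helper and field names (the
ownership list is priority-sorted, so the "min" is just its head):

        /* boost priority a task needs because of the mutexes it owns */
        static int olist_boost_prio(struct task_struct *task)
        {
                struct fulock *f;

                if (olist_empty(&task->fulock_olist))
                        return BOTTOM_PRIO;     /* owns nothing that boosts */
                /* head of the sorted list: PI mutexes carry their top
                 * waiter's prio, PP mutexes their ceiling, plain mutexes
                 * BOTTOM_PRIO, so a plain mutex never wins */
                f = olist_first(&task->fulock_olist);
                return f->prio;
        }

__prio_boost() would then be called with that value whenever the list
changes.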

-- Inaky

2005-04-12 23:44:55

by Perez-Gonzalez, Inaky

[permalink] [raw]
Subject: RE: FUSYN and RT

>From: Esben Nielsen [mailto:[email protected]]
>
>I think we (at least) got a bit confused here. What (I think) the thread
>started out with was a clear layering of the mutexes. I.e. the code obeys
>the grammar
>
> VALID_LOCK_CODE = LOCK_FUSYN VALID_LOCK_CODE UNLOCK_FUSYN
> | VALID_LOCK_CODE VALID_LOCK_CODE
> | VALID_RTLOCK_CODE
> VALID_RTLOCK = LOCK_RTLOCK VALID_RTLOCK_CODE UNLOCK_RTLOCK
> | VALID_RTLOCK_CODE VALID_RTLOCK_CODE
> | VALID_SPINLOCK_CODE
> | (code with no locks at all)
> VALID_SPINLOCK_CODE = ... :-)

Errrr...sorry, /me no comprende but I get the idea :)

Going back to the API problem I guess it'd be possible to code
an equivalent of __prio_boost [your fusyn_setprio()],
but it still does not answer another underlying problem.

mutex_setprio() won't take into account the case where a task
(1) takes a mutex, (2) it is boosted for that, (3) it (or somebody
else) changes its priority and then (4) it is deboosted.

[4] will deboost it to the prio (static or RT) it had before
boosting (at [1]), not the new one set at [3] (am I right, Ingo? I
didn't see any hooks in sched_setscheduler() and friends).

Now, this would be easy to solve: put a hook in any function that
is going to touch the priority, and then, for every owned mutex,
go update the mutex->owner_prio.

And now the underlying problem: this is where it gets plenty of
complex when you have to implement PI transitivity--suddenly whenever
I boost an owner I have to walk all the mutexes that it potentially
owns. This becomes O(fugly) the minute you have an ownership chain
deeper than two or three levels with each owner owning more than a
couple of mutexes.

So that's why I chose to put it in the task struct, it becomes O(1)
at the expense (granted) of having to calc a min each time we compute
the dynamic priority.
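
The shape of the walk that makes the per-mutex bookkeeping expensive,
in pseudo-C (purely illustrative, not code from either patch, and only
the shape--real code has to be more careful about which priority value
gets stored where):

        static void refresh_prio(struct task_struct *task, int prio)
        {
                struct pi_mutex *m;

                set_prio(task, prio);
                for_each_owned_mutex(task, m)   /* O(mutexes owned) ...  */
                        m->owner_prio = prio;
                if (task->blocked_on)           /* ... times chain depth */
                        refresh_prio(task->blocked_on->owner,
                                     top_waiter_prio(task->blocked_on));
        }

With the value kept on a sorted list in the task struct instead, the
per-mutex loop goes away and only the min has to be recomputed when the
dynamic priority is needed.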

But these are implementation details.

> ...
>will never hit the RT-level. So assuming the RT-lock based code is
>debugged the error must be in Fusyn based code.

Naaah :)

-- Inaky

2005-04-13 00:08:35

by Esben Nielsen

[permalink] [raw]
Subject: Re: FUSYN and RT

On 12 Apr 2005, Daniel Walker wrote:

>
> I just wanted to discuss the problem a little more. From all the
> conversations that I've had it seem that everyone is worried about
> having PI in Fusyn, and PI in the RT mutex.
>
> It seems like these two locks are going to interact on a very limited
> basis. Fusyn will be the user space mutex, and the RT mutex is only in
> the kernel. You can't lock an RT mutex and hold it, then lock a Fusyn
> mutex (anyone disagree?). That is assuming Fusyn stays in user space.
>
> The RT mutex will never lower a tasks priority lower than the priority
> given to it by locking a Fusyn lock.

I have not seen the Fusyn code. Where is the before-any-lock priority
stored? Ingo's code sets the prio back to what is given by static_prio.
So, if Fusyn sets static_prio it will work as you say. It will then be
up to Fusyn to restore static_prio to what it was before the first Fusyn
lock.

>
> At least, both mutexes will need to use the same API to raise and lower
> priorities.

You basically need 3 priorities:
1) Actual: task->prio
2) Base prio with no RT locks taken: task->static_prio
3) Base prio with no Fusyn locks taken: task->??

So no, you will not need the same API, at all :-) Fusyn manipulates
task->static_prio and only task->prio when no RT lock is taken. When the
first RT-lock is taken/released it manipulates task->prio only. A release
of a Fusyn will manipulate task->static_prio as well as task->prio.

>
> The next question is deadlocks. Because one mutex is only in the kernel,
> and the other is only in user space, it seems that deadlocks will only
> occur when a process holds locks that are all the same type.

Yes.
All these things assume a clear lock nesting: Fusyns are on the outer
level, RT locks on the inner level. What happens if there is a bug in RT
locking code will be unclear. On the other hand errors in Fusyn locking
(user space) should not give problems in the kernel.

>
>
> Daniel
>

I think that it might be a fast track to get things done to have a double
PI-locking system, one for the kernel and one for userspace. But I think
it is bad maintenance to have to maintain two separate systems. The
best way ought to be to try to only have one PI system. The kernel is big
and confusing enough as it is!

Esben

2005-04-13 00:33:29

by Daniel Walker

[permalink] [raw]
Subject: RE: FUSYN and RT


There is a great big snag in my assumptions. It's possible for a process
to hold a fusyn lock, then block on an RT lock. In that situation you
could have a high priority user space process be scheduled then block on
the same fusyn lock but the PI wouldn't be fully transitive, plus there
will be problems when the RT mutex tries to restore the priority.

We could add simple hooks to force the RT mutex to fix up its PI, but
it's not a long term solution.

Daniel


On Tue, 2005-04-12 at 16:11, Esben Nielsen wrote:
> I think we (at least) got a bit confused here. What (I think) the thread
> started out with was a clear layering of the mutexes. I.e. the code obeys
> the grammar
>
> VALID_LOCK_CODE = LOCK_FUSYN VALID_LOCK_CODE UNLOCK_FUSYN
> | VALID_LOCK_CODE VALID_LOCK_CODE
> | VALID_RTLOCK_CODE
> VALID_RTLOCK = LOCK_RTLOCK VALID_RTLOCK_CODE UNLOCK_RTLOCK
> | VALID_RTLOCK_CODE VALID_RTLOCK_CODE
> | VALID_SPINLOCK_CODE
> | (code with no locks at all)
> VALID_SPINLOCK_CODE = ... :-)
>
> In that context the case is simple: Fusyn's and RT-locks can easily
> co-exist. One only need an extra level akin to static_prio to fall back to
> when the last fusyn is unlocked. The API's should be _different_, but
> fusyn_setprio() should both update static_prio and call mutex_setprio().
> There will never be deadlocks involving both types of locks, as Daniel
> said because the lock nesting is sorted out. Furtheremore, unbalanced
> (incorrect) code like
> LOCK_FUSYN VALID_RTLOCK_CODE (no unlock)
> will never hit the RT-level. So assuming the RT-lock based code is
> debugged the error must be in Fusyn based code.
>
> Esben
>
> On Tue, 12 Apr 2005, Perez-Gonzalez, Inaky wrote:
>
> > >From: Esben Nielsen [mailto:[email protected]]
> > >On 12 Apr 2005, Daniel Walker wrote:
> > >
> > >>
> > >>
> > >> At least, both mutexes will need to use the same API to raise and lower
> > >> priorities.
> > >
> > >You basicly need 3 priorities:
> > >1) Actual: task->prio
> > >2) Base prio with no RT locks taken: task->static_prio
> > >3) Base prio with no Fusyn locks taken: task->??
> > >
> > >So no, you will not need the same API, at all :-) Fusyn manipulates
> > >task->static_prio and only task->prio when no RT lock is taken. When the
> > >first RT-lock is taken/released it manipulates task->prio only. A release
> > >of a Fusyn will manipulate task->static_prio as well as task->prio.
> >
> > Yes you do. You took care of the simple case. Things get funnier
> > when you own more than one PI lock, or you need to promote a
> > task that is blocked on other PI locks whose owners are blocked
> > on PI locks (transitivity), or when you mix PI and PP (priority
> > protection/ priority ceiling).
> >
> > In that case not having a sim{pl,g}e API for doing it is nuts.
> >
> > >> The next question is deadlocks. Because one mutex is only in the kernel,
> > >> and the other is only in user space, it seems that deadlocks will only
> > >> occur when a process holds locks that are all the same type.
> > >
> > >Yes.
> > >All these things assumes a clear lock nesting: Fusyns are on the outer
> > >level, RT locks on the inner level. What happens if there is a bug in RT
> > >locking code will be unclear. On the other hand errors in Fusyn locking
> > >(user space) should not give problems in the kernel.
> >
> > Wrong. Fusyns are kernel locks that are exposed to user space (much as
> > a file descriptor is a kernel object exposed to user space through
> > a system call). Of course if the user does something mean with them
> > they will cause an error, but should not have undesired consequences
> > in the kernel. But BUGS in the code will be as unclear as in RT mutexes.
> >
> > >it is is bad maintainance to have to maintain two seperate systems. The
> > >best way ought to be to try to only have one PI system. The kernel is big
> > >and confusing enough as it is!
> >
> > Ayeh for the big...it is not that confusing :)
> >
> > -- Inaky
> >
>
>

2005-04-13 15:47:27

by Steven Rostedt

[permalink] [raw]
Subject: RE: FUSYN and RT

On Tue, 2005-04-12 at 17:27 -0700, Daniel Walker wrote:
> There is a great big snag in my assumptions. It's possible for a process
> to hold a fusyn lock, then block on an RT lock. In that situation you
> could have a high priority user space process be scheduled then block on
> the same fusyn lock but the PI wouldn't be fully transitive , plus there
> will be problems when the RT mutex tries to restore the priority.
>
> We could add simple hooks to force the RT mutex to fix up it's PI, but
> it's not a long term solution.

How hard would it be to use the RT mutex PI for the priority inheritance
for fusyn? I only work with the RT mutex now and haven't looked at the
fusyn. Maybe Ingo can make a separate PI system with its own API that
both the fusyn and RT mutex can use. This way the fusyn locks can still
be separate from the RT mutex locks but still work together.

Basically can the fusyn work with the rt_mutex_waiter? That's what I
would pull into its own subsystem. Have another structure that would
reside in both the fusyn and RT mutex that would take over for the
current rt_mutex that is used in pi_setprio and task_blocks_on_lock in
rt.c. So if both locks used the same PI system, then this should all be
cleared up.

If this doesn't make sense, or is just confusing, I'll explain more :-)

-- Steve


2005-04-13 17:34:41

by Daniel Walker

[permalink] [raw]
Subject: RE: FUSYN and RT

On Wed, 2005-04-13 at 08:46, Steven Rostedt wrote:
> How hard would it be to use the RT mutex PI for the priority inheritance
> for fusyn? I only work with the RT mutex now and haven't looked at the
> fusyn. Maybe Ingo can make a separate PI system with its own API that
> both the fusyn and RT mutex can use. This way the fusyn locks can still
> be separate from the RT mutex locks but still work together.
>
> Basically can the fusyn work with the rt_mutex_waiter? That's what I
> would pull into its own subsystem. Have another structure that would
> reside in both the fusyn and RT mutex that would take over for the
> current rt_mutex that is used in pi_setprio and task_blocks_on_lock in
> rt.c. So if both locks used the same PI system, then this should all be
> cleared up.
>
> If this doesn't makes sense, or just confusing, I'll explain more :-)

I've thought about this as an option, but when I first started this
thread it seemed like the two could work independently and safely, which
doesn't appear to be the case any more.

The problems with pulling out the PI in the RT mutex are that
pi_setprio() does a walk over lock->owner and we've got two different
lock structures now. I was thinking we could add something like
lock_ops (get_owner(), wait_list_add(), wait_list_del(), ?? ) to
rt_mutex_waiter, or abstract rt_lock. Then pi_setprio would just use the
lock_ops instead of accessing a structure ..
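
A rough sketch of what such an ops table could look like (the names are
made up; nothing like this exists in either patch yet):

        struct lock_ops {
                struct task_struct *(*get_owner)(void *lock);
                void (*wait_list_add)(void *lock, struct rt_mutex_waiter *w);
                void (*wait_list_del)(void *lock, struct rt_mutex_waiter *w);
        };

pi_setprio() and task_blocks_on_lock() would then go through the ops
instead of dereferencing struct rt_mutex fields directly, so a fusyn
lock could plug in its own implementations.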

I've only gone over the Fusyn code briefly, so I'm assuming all this
could be added. I think it's a safe assumption though.

Daniel

2005-04-13 18:48:04

by Steven Rostedt

[permalink] [raw]
Subject: RE: FUSYN and RT

On Wed, 2005-04-13 at 10:33 -0700, Daniel Walker wrote:
> On Wed, 2005-04-13 at 08:46, Steven Rostedt wrote:
> > How hard would it be to use the RT mutex PI for the priority inheritance
> > for fusyn? I only work with the RT mutex now and haven't looked at the
> > fusyn. Maybe Ingo can make a separate PI system with its own API that
> > both the fusyn and RT mutex can use. This way the fusyn locks can still
> > be separate from the RT mutex locks but still work together.
> >
> > Basically can the fusyn work with the rt_mutex_waiter? That's what I
> > would pull into its own subsystem. Have another structure that would
> > reside in both the fusyn and RT mutex that would take over for the
> > current rt_mutex that is used in pi_setprio and task_blocks_on_lock in
> > rt.c. So if both locks used the same PI system, then this should all be
> > cleared up.
> >
> > If this doesn't makes sense, or just confusing, I'll explain more :-)
>
> I've thought about this as an option, but when I first started this
> thread It seemed like the two could work independently, and safely which
> doesn't appear to be the case any more.
>
> The problems with pulling out the PI in the RT mutex are that
> pi_setprio() does a walk over lock->owner and we're got two different
> lock structures now . I was thinking we could add something like
> lock_ops (get_owner(), wait_list_add(), wait_list_del(), ?? ) to
> rt_mutex_waiter, or abstract rt_lock. Then pi_setprio would just use the
> lock_ops instead of accessing a structure ..

Yeah, I was thinking of another structure within rt_mutex and what fusyn
uses. Like a pi_struct of some sort. And this would be the common
structure that holds the owner and prio or whatever pi_setprio
and friends need. This would probably be the easier approach, but for
the long run, I think I like your idea of the ops better.

-- Steve


2005-04-15 22:53:34

by Bill Huey

[permalink] [raw]
Subject: Re: FUSYN and RT

On Wed, Apr 13, 2005 at 11:46:40AM -0400, Steven Rostedt wrote:
> On Tue, 2005-04-12 at 17:27 -0700, Daniel Walker wrote:
> > There is a great big snag in my assumptions. It's possible for a process
> > to hold a fusyn lock, then block on an RT lock. In that situation you
> > could have a high priority user space process be scheduled then block on
> > the same fusyn lock but the PI wouldn't be fully transitive , plus there
> > will be problems when the RT mutex tries to restore the priority.
> >
> > We could add simple hooks to force the RT mutex to fix up it's PI, but
> > it's not a long term solution.

Ok, I've been thinking about these issues and I believe there are a number
of misunderstandings here. The user and kernel space mutexes need to be
completely different implementations. I'll have more on this later.

First of all, priority transitivity should be discontinuous at the user/kernel
space boundary, but be propagated by the scheduler, via an API or hook,
upon a general priority boost to the thread in question.

You have thread A blocked in the kernel holding onto userspace mutex 1a
and kernel mutex 2a. Thread A is priority boosted by a higher priority
thread B trying to acquire mutex 1a. The transitivity operation propagates
through the rest of the lock graph in userspace, via depth first search,
as usual. When it hits the last userspace mutex in question, this portion
of the propagation activity stops. Next, the scheduler itself finds out
that thread A has had its priority altered because of a common priority
change API and starts another priority propagation operation in kernel
space to mutex 1b. There you have it. It's complete from user to kernel
space using a scheduler event/hook/api to propagate priority changes
into the kernel.
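
Schematically, with invented names (this is only the shape of the idea,
not an existing interface):

        /* user-side propagation stops at the boundary; the generic
         * priority-change path then restarts it on the kernel side */
        void setprio_hook(struct task_struct *p, int new_prio)
        {
                set_task_prio(p, new_prio);
                if (p->blocked_on_kernel_mutex)  /* the kernel mutex above */
                        kernel_pi_propagate(p->blocked_on_kernel_mutex,
                                            new_prio);
        }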

With all of that in place, you do a couple of things for the mutex
implementation. First, you convert as much of the current RT mutex
code to be type polymorphic as you can:

1) You use Daniel Walker's PI list handling for wait queue insertion for
both mutex implementations. This is done since it's already a library
and is already generic.

2) Then you generalize the deadlock detection code so that things like
"what to do in a deadlock case" is determined at the instantiation of
the code. You might have to use C preprocessor macros to do a generic
implementation and then fill in the parametric values for creating a
usable instance.

3) Make the grab owner code generic.

4) ...more part of the RT mutex...
etc...

> How hard would it be to use the RT mutex PI for the priority inheritance
> for fusyn? I only work with the RT mutex now and haven't looked at the
> fusyn. Maybe Ingo can make a separate PI system with its own API that
> both the fusyn and RT mutex can use. This way the fusyn locks can still
> be separate from the RT mutex locks but still work together.

I'd apply these implementation ideas across both mutexes, but keep the
individual mutexes' functionality distinct. I look at this problem from
more of a reusability perspective than anything else.

> Basically can the fusyn work with the rt_mutex_waiter? That's what I
> would pull into its own subsystem. Have another structure that would
> reside in both the fusyn and RT mutex that would take over for the
> current rt_mutex that is used in pi_setprio and task_blocks_on_lock in
> rt.c. So if both locks used the same PI system, then this should all be
> cleared up.

Same thing...

There will be problems trying to implement a Posix read/write lock using
this method, and the core RT mutex might have to be fundamentally altered
to handle recursion of some sort, decomposed into smaller bits and
recomposed into something else.

bill

2005-04-15 23:41:57

by Inaky Perez-Gonzalez

[permalink] [raw]
Subject: Re: FUSYN and RT

>>>>> Bill Huey (hui) <[email protected]> writes:

> Ok, I've been thinking about these issues and I believe there are a
> number of misunderstandings here. The user and kernel space mutexes
> need to be completely different implementations. I'll have more on
> this later.

> First of all, priority transitivity should be discontinuous at the
> user/kernel space boundary, but be propagated by the scheduler, via
> an API or hook, upon a general priority boost to the thread in
> question.

This is not necessarily true. My temperature-regulating thread should
be able to promote a task so it works around priority inversion, no
matter if they are sharing a kernel or user space lock.

By following your method, the pi engine becomes unnecessarily complex;
you have actually two engines following two different propagation
chains (one kernel, one user). If your mutexes/locks/whatever are the
same with a different cover, then you can simplify the whole
implementation by leaps.

> With all of that in place, you do a couple of things for the mutex
> implementation. First, you convert as much code of the current RT
> mutex code to be type polymorphic as you can:

> ...

> I'd apply these implementation ideas across both mutexes, but keep
> the individual mutexes functionality distinct. I look at this
> problem from more of a reusability perspective than anything else.

You are talking of what is implemented in fusyn already; the only
differences are that (a) I don't use macros, but function pointers
(mutex ops) and (b) transitivity across user/kernel space is allowed.

> There will be problems trying to implement a Posix read/write lock

As long as the concept of rwlock allows for it to have multiple owners
(read locks need to have them), the procedure is mostly the
same. However, this not being POSIX, nobody (yet) has asked for it.

--

Inaky

2005-04-16 01:15:29

by Steven Rostedt

[permalink] [raw]
Subject: Re: FUSYN and RT

On Fri, 2005-04-15 at 16:37 -0700, Inaky Perez-Gonzalez wrote:
> >>>>> Bill Huey (hui) <[email protected]> writes:
>
> > Ok, I've been thinking about these issues and I believe there are a
> > number of misunderstandings here. The user and kernel space mutexes
> > need to be completely different implementations. I'll have more on
> > this later.
>
> > First of all, priority transitivity should be discontinuous at the
> > user/kernel space boundary, but be propagated by the scheduler, via
> > an API or hook, upon a general priority boost to the thread in
> > question.
>
> This is not necessarily true. My temperature-regulating thread should
> be able to promote a task so it works around priority invertion, no
> matter if they are sharing a kernel or user space lock.
>
> By following your method, the pi engine becomes unnecesarily complex;
> you have actually two engines following two different propagation
> chains (one kernel, one user). If your mutexes/locks/whatever are the
> same with a different cover, then you can simplify the whole
> implementation by leaps.
>

I have to agree with Inaky too. Fundamentally, PI is the same for the
system regardless of whether the locks are user or kernel. I still don't see
the difference here. But for other reasons, I feel that the user lock
should be a different structure from the kernel lock. That's why I
mentioned that it would be a good idea if Ingo modularized the PI portion.
So that part would be the same for both. If he doesn't have the time to
do it, I'll do it :-) (Ingo, all you need to do is ask.)

[...]

> > There will be problems trying to implement a Posix read/write lock
>
> As long as the concept of rwlock allows for it to have multiple owners
> (read locks need to have them), the procedure is mostly the
> same. However, this not being POSIX, nobody (yet) has asked for it.
>

I don't think rwlocks work well with PI. You can implement it, but it's
like implementing multiple inheritance for Object Oriented languages.
Java didn't implement it because they say keeping it out makes the code
cleaner (probably true), but the real reason, I bet, was that
implementing it makes everything much more complex. The same goes with
rwlocks with multiple readers and PI. Without it you get a cleaner
solution (for users as well as developers), and with it, it just makes
everything more complex.

-- Steve


2005-04-16 01:21:15

by Inaky Perez-Gonzalez

[permalink] [raw]
Subject: Re: FUSYN and RT

>>>>> Steven Rostedt <[email protected]> writes:
>> On Fri, 2005-04-15 at 16:37 -0700, Inaky Perez-Gonzalez wrote:

> I have to agree with Inaky too. Fundamentally, PI is the same for
> the system regardless of if the locks are user or kernel. I still
> don't see the difference here. But for other reasons, I feel that
> the user lock should be a different structure from the kernel
> lock. That's why I mentioned that it would be a good idea if Ingo
> modulized the PI portion. So that part would be the same for
> both. If he doesn't have the time to do it, I'll do it :-) (Ingo,
> all you need to do is ask.)

Can you qualify "different" here? I don't mean that they need to be
interchangeable, but that they are essentially the same. Obviously the
user cannot access the kernel locks, but kernel locks are *used* to
implement user space locks.

Back to my example before: in fusyn, a user space lock is a kernel
space lock with a wrapper, that provides all that is necessary for
doing the fast path and handling user-space specific issues.

>> As long as the concept of rwlock allows for it to have multiple
>> owners (read locks need to have them), the procedure is mostly the
>> same. However, this not being POSIX, nobody (yet) has asked for it.
>
> I don't think rwlocks work well with PI. You can implement it, but
> it's like implementing multiple inheritance for Object Oriented
> languages...

I have to agree--that's why I don't go further than saying that in theory
it is possible. I would only touch it with a ten foot pole or if someone
offered a lot in exchange :]

--

Inaky

2005-04-16 01:39:47

by Steven Rostedt

[permalink] [raw]
Subject: Re: FUSYN and RT

On Fri, 2005-04-15 at 18:20 -0700, Inaky Perez-Gonzalez wrote:
> >>>>> Steven Rostedt <[email protected]> writes:
> >> On Fri, 2005-04-15 at 16:37 -0700, Inaky Perez-Gonzalez wrote:
>
> > I have to agree with Inaky too. Fundamentally, PI is the same for
> > the system regardless of if the locks are user or kernel. I still
> > don't see the difference here. But for other reasons, I feel that
> > the user lock should be a different structure from the kernel
> > lock. That's why I mentioned that it would be a good idea if Ingo
> > modulized the PI portion. So that part would be the same for
> > both. If he doesn't have the time to do it, I'll do it :-) (Ingo,
> > all you need to do is ask.)
>
> Can you qualify "different" here? I don't mean that they need to be
> interchangeable, but that they are esentially the same. Obviously the
> user cannot acces the kernel locks, but kernel locks are *used* to
> implement user space locks.
>
> Back to my example before: in fusyn, a user space lock is a kernel
> space lock with a wrapper, that provides all that is necessary for
> doing the fast path and handling user-space specific issues.

I was actually thinking of just giving more flexibility to the user
locks, in case the application using them needed more information, for
debugging or whatever. Honestly, I haven't looked at the fusyn code
yet, so I don't really know if there is a difference. As I said, I
"feel" the user lock should be different. I really don't know if they
should.

So, to answer your question. Looking forward, I kind of see two
different structures for locking. The rt_mutex and something that is
used by fusyn, then there being some common structure (or ops) that they
both use to implement the PI. But the implementation of how the locks
work may as well be different. But this may not be the case; there could
still be two structures, but the fusyn would just contain an rt_mutex lock
to do the actual locking and the rest of the structure would be used for
showing information or whatnot back up to user space. This stuff wouldn't be
necessary for the rt_mutex. We need to keep rt_mutex small since it is
used all over the place.

That's all I meant.


-- Steve


2005-04-16 01:56:11

by Inaky Perez-Gonzalez

[permalink] [raw]
Subject: Re: FUSYN and RT

>>>>> Steven Rostedt <[email protected]> writes:

> On Fri, 2005-04-15 at 18:20 -0700, Inaky Perez-Gonzalez wrote:

>> Back to my example before: in fusyn, a user space lock is a kernel
>> space lock with a wrapper, that provides all that is necessary for
>> doing the fast path and handling user-space specific issues.

> ...

> So, to answer your question. Looking forward, I kind of see two
> different structures for locking. The rt_mutex and something that
> is used by fusyn, then there being some common structure (or ops)
> that they both use to implement the PI. But the implementation of
> how the locks work may as well be different. But this may not be the
> case, and there still be two structures but the fusyn just contain a
> rt_mutex lock to do the actual locking and the rest of the structure
> be used for showing information or what not back up to user
> space. This stuff wouldn't be necessary for the rt_mutex. We need to
> keep rt_mutex small since it is used all over the place.

I see--would the following fit your view?

This would be a kernel lock [from the fusyn patch, linux/fulock.h]:

/** A fulock, mutex usable from the kernel. */
struct fulock {
        struct fuqueue fuqueue;
        struct task_struct *owner;
        unsigned flags;
        struct plist olist_node;
};

This has an in kernel API so you can use it from modules or kernel
code.

And this would be kernel representation of a user space lock [from
linux/fulock_kernel.h]:

struct ufulock {
        struct fulock fulock;
        struct vlocator vlocator;
        struct page *page;
};

This is exposed via system calls with fast-path as an option.

This is basically the kernel lock that provides the functionality and
a structure to keep a tab on where the thing is in user space (hash
queues a la futex). The ops are hidden in fulock.fuqueue.ops [fuqueue
is the waitqueue--just for reference, from linux/fuqueue.h].

/** A fuqueue, a prioritized wait queue usable from kernel space. */
struct fuqueue {
        spinlock_t lock;
        struct plist wlist;
        struct fuqueue_ops *ops;
};

--

Inaky

2005-04-16 02:34:44

by Steven Rostedt

[permalink] [raw]
Subject: Re: FUSYN and RT

On Fri, 2005-04-15 at 18:53 -0700, Inaky Perez-Gonzalez wrote:
> >>>>> Steven Rostedt <[email protected]> writes:
>
> > On Fri, 2005-04-15 at 18:20 -0700, Inaky Perez-Gonzalez wrote:
>
> >> Back to my example before: in fusyn, a user space lock is a kernel
> >> space lock with a wrapper, that provides all that is necessary for
> >> doing the fast path and handling user-space specific issues.
>
> > ...
>
> > So, to answer your question. Looking forward, I kind of see two
> > different structures for locking. The rt_mutex and something that
> > is used by fusyn, then there being some common structure (or ops)
> > that they both use to implement the PI. But the implementation of
> > how the locks work may as well be different. But this may not be the
> > case, and there still be two structures but the fusyn just contain a
> > rt_mutex lock to do the actual locking and the rest of the structure
> > be used for showing information or what not back up to user
> > space. This stuff wouldn't be necessary for the rt_mutex. We need to
> > keep rt_mutex small since it is used all over the place.
>
> I see--would the following fit your view?
>

I think it's time that I take a look at the fusyn code. I don't have all
the emails on this thread, could you point out to me where I can see the
latest fusyn patch. Thanks.


> This would be a kernel lock [from the fusyn patch, linux/fulock.h]:
>
> /** A fulock, mutex usable from the kernel. */
> struct fulock {
>         struct fuqueue fuqueue;
>         struct task_struct *owner;
>         unsigned flags;
>         struct plist olist_node;
> };
>

It's getting late where I am, and I'm getting tired, so I'm a little
slow right now. Is what you are showing me here what is currently in
fusyn or a proposal with the merging of Ingo's rt_mutex? The reason
I'm asking is: what's the difference between the owner here and in
fuqueue's spin_lock?

> This has an in kernel API so you can use it from modules or kernel
> code.
>
> And this would be kernel representation of a user space lock [from
> linux/fulock_kernel.h]:
>
> struct ufulock {
>         struct fulock fulock;
>         struct vlocator vlocator;
>         struct page *page;
> };
>
> This is exposed via system calls with fast-path as an option.
>

This above structure would represent the user space wrapper structure
that I meant with the fusyn structure containing the rt_mutex lock.
Right?

> This is basically the kernel lock that provides the functionality and
> an structure to keep a tab to where the thing is in user space (hash
> queues a la futex). The ops are hidden in fulock.fuqueue.ops [fuqueue
> is the waitqueue--just for reference, from linux/fuqueue.h].
>
> /** A fuqueue, a prioritized wait queue usable from kernel space. */
> struct fuqueue {
>         spinlock_t lock;
>         struct plist wlist;
>         struct fuqueue_ops *ops;
> };
>

Would the above spinlock_t be a raw_spinlock_t? This goes back to my
first question. I'm not sure how familiar you are with Ingo's work, but
he has turned all spinlocks into mutexes, and when you really need an
original spinlock, you declare it with raw_spinlock_t.

Also in the above, is the fuqueue_ops the ops we mentioned earlier as
the links into PI?

Sorry about being totally ignorant here, but it's the end of the day for
me, and I'm starting to feel burnt out.

Thanks,

-- Steve


2005-04-16 03:01:03

by Sven-Thorsten Dietrich

[permalink] [raw]
Subject: RE: FUSYN and RT

> > >
> > /** A fuqueue, a prioritized wait queue usable from kernel space. */
> > struct fuqueue {
> >         spinlock_t lock;
> >         struct plist wlist;
> >         struct fuqueue_ops *ops;
> > };
> >
>
> Would the above spinlock_t be a raw_spinlock_t? This goes
> back to my first question. I'm not sure how familiar you are
> with Ingo's work, but he has turned all spinlocks into
> mutexes, and when you really need an original spinlock, you
> declare it with raw_spinlock_t.
>

This one probably should be a raw_spinlock.
This lock is only held to protect access to the queues.
Since the queues are already priority ordered, there is
little benefit to ordering -the order of insertion-
in case of contention on a queue, compared with the complexity.

Just what you want to say to a guy who says he is tired ;)

Sven



2005-04-16 03:33:05

by Steven Rostedt

[permalink] [raw]
Subject: RE: FUSYN and RT



On Fri, 15 Apr 2005, Sven Dietrich wrote:

> > > >
> > > /** A fuqueue, a prioritized wait queue usable from kernel space. */
> > > struct fuqueue {
> > >         spinlock_t lock;
> > >         struct plist wlist;
> > >         struct fuqueue_ops *ops;
> > > };
> > >
> >
> > Would the above spinlock_t be a raw_spinlock_t? This goes
> > back to my first question. I'm not sure how familiar you are
> > with Ingo's work, but he has turned all spinlocks into
> > mutexes, and when you really need an original spinlock, you
> > declare it with raw_spinlock_t.
> >
>
> This one probably should be a raw_spinlock.
> This lock is only held to protect access to the queues.
> Since the queues are already priority ordered, there is
> little benefit to ordering -the order of insertion-
> in case of contention on a queue, compared with the complexity.
>

Surprisingly, this makes perfect sense to me! I'm not going to comment on
this code until I actually get to see the whole package. I don't know how
much Inaky has used Ingo's work, but I'd figured it should be a
raw_spinlock.

> Just what you want to say to a guy who says he is tired ;)
>

This is nothing, I'm currently trying to test stuff from another thread
dealing with qdiscs in the net code. %-}

-- Steve

2005-04-16 04:06:48

by Inaky Perez-Gonzalez

[permalink] [raw]
Subject: Re: FUSYN and RT

>>>>> Steven Rostedt <[email protected]> writes:

> On Fri, 2005-04-15 at 18:53 -0700, Inaky Perez-Gonzalez wrote:
>> >>>>> Steven Rostedt <[email protected]> writes:
>>
>> I see--would the following fit your view?
>>

> I think it's time that I take a look at the fusyn code. I don't have
> all the emails on this thread, could you point out to me where I can
> see the latest fusyn patch. Thanks.

http://developer.osdl.org/dev/robustmutexes/fusyn/20040510

There you have a full patch against 2.6.10, against the previous
stable release (2.3.1) and the patch broken up in parts (the first one
001.msg has the index).

You want 016.msg and 020.msg, the ones that implement fulocks (the
mutexes) and user space fulocks.

013.msg has all the priority change engine (along with the queues);
__fuqueue_waiter_chprio() is the function.

On the same site, up a couple of levels, there is info on how to
grab it from CVS. There is also the OLS 2004 stuff which explains
things in detail.

> It's getting late where I am, and I'm getting tired, so I'm a little
> slow right now. Is what you are showing me here what is currently
> in fusyn or a proposal with the merging of Ingo's rt_mutex? Reason
> why I'm asking, is what's the difference between the owner here and
> in fuqueue's spin_lock?

This is all without taking into account Ingo's work (you could say it
is kind of parallel/redundant/etc). In any case, to sum up w/ what
Sven said, the spinlock would be a raw_spinlock_t when that makes it
into the kernel. It just protects access to the fuqueue/fulock/ufulock
structures.

>> This has an in kernel API so you can use it from modules or kernel
>> code.
>> ...

> This above structure would represent the user space wrapper
> structure that I meant with the fusyn structure containing the
> rt_mutex lock. Right?

Yep

> Also in the above, is the fuqueue_ops the ops we mentioned earlier
> as the links into PI?

Yes and no.

Yes because the ops are there to be able to do the kind of
things that user space mutexes need done differently (look for the
fu*_op_* functions--or fulock_ops and ufulock_ops).

However, in the case of the PI handling--that is just priority change
handling--the op is the same for both fulocks and user space fulocks.

But the priority change handling is different for queues, so you need
an op to distinguish.

> Sorry about being totally ignorant here, but it's the end of the day
> for me, and I'm starting to feel burnt out.

That's when you do 'shutdown -h now' and go for a beer :)

--

Inaky

2005-04-16 13:11:25

by john cooper

[permalink] [raw]
Subject: Re: FUSYN and RT

Sven Dietrich wrote:
>>> /** A fuqueue, a prioritized wait queue usable from kernel space. */
>>> struct fuqueue {
>>>         spinlock_t lock;
>>>         struct plist wlist;
>>>         struct fuqueue_ops *ops;
>>> };
>>>
>>
>>Would the above spinlock_t be a raw_spinlock_t? This goes
>>back to my first question. I'm not sure how familiar you are
>>with Ingo's work, but he has turned all spinlocks into
>>mutexes, and when you really need an original spinlock, you
>>declare it with raw_spinlock_t.
>>
>
>
> This one probably should be a raw_spinlock.
> This lock is only held to protect access to the queues.
> Since the queues are already priority ordered, there is
> little benefit to ordering -the order of insertion-
> in case of contention on a queue, compared with the complexity.

The choice of lock type should derive from both the calling
context and the length of time the lock is expected to be held.

-john


--
[email protected]

2005-04-16 14:23:52

by Steven Rostedt

[permalink] [raw]
Subject: Re: FUSYN and RT

On Sat, 2005-04-16 at 09:05 -0400, john cooper wrote:
> Sven Dietrich wrote:
[...]
> > This one probably should be a raw_spinlock.
> > This lock is only held to protect access to the queues.
> > Since the queues are already priority ordered, there is
> > little benefit to ordering -the order of insertion-
> > in case of contention on a queue, compared with the complexity.
>
> The choice of lock type should derive from both the calling
> context and the length of time the lock is expected to be held.
>

In this case, I don't think time matters for choice of lock. Time
matters to keep it short since it does need the raw_spin_lock. This
lock is part of the whole locking scheme, and would be similar to not
using raw_spin_locks in the implementation of rt_mutex. Well, not
exactly the same, but if we want the fusyn code to use the same code as
rt_mutex for PI, then it will need to be a raw_spin_lock.

-- Steve


2005-04-16 14:57:33

by john cooper

[permalink] [raw]
Subject: Re: FUSYN and RT

Steven Rostedt wrote:
> On Sat, 2005-04-16 at 09:05 -0400, john cooper wrote:
>
>>Sven Dietrich wrote:
>
> [...]
>
>>>This one probably should be a raw_spinlock.
>>>This lock is only held to protect access to the queues.
>>>Since the queues are already priority ordered, there is
>>>little benefit to ordering -the order of insertion-
>>>in case of contention on a queue, compared with the complexity.
>>
>>The choice of lock type should derive from both the calling
>>context and the length of time the lock is expected to be held.
>>
>
>
> In this case, I don't think time matters for choice of lock. Time
> matters to keep it short since it does need the raw_spin_lock. This
> lock is part of the whole locking scheme, and would be similar to not
> using raw_spin_locks in the implementation of rt_mutex. Well, not
> exactly the same, but if we want the fusyn code to use the same code as
> rt_mutex for PI, then it will need to be a raw_spin_lock.

Ok, I was missing the context -- it does need to be a raw lock.
Is the scope of this lock limited to manipulating the list or
is it held to serialize other operations?

-john


--
[email protected]

2005-04-18 05:30:11

by Bill Huey

[permalink] [raw]
Subject: Re: FUSYN and RT

On Fri, Apr 15, 2005 at 04:37:05PM -0700, Inaky Perez-Gonzalez wrote:
> By following your method, the pi engine becomes unnecesarily complex;
> you have actually two engines following two different propagation
> chains (one kernel, one user). If your mutexes/locks/whatever are the
> same with a different cover, then you can simplify the whole
> implementation by leaps.

The main comment that I'm making here (so it doesn't get lost) is that,
IMO, you're going to find that there is a mismatch with the requirements
of Posix threading versus kernel uses. To drop the kernel mutex in 1:1 to
back a futex-ish entity is going to be problematic mainly because of how
kernel specific the RT mutex is (or any future kernel mutex) for debugging,
etc... and I think this is going to be clear as it gets progressively
implemented.

I think folks really need to think about this clearly before moving in
any direction prematurely. That's what I'm saying. PI is one of those
issues, but ultimately it's the fundamental differences between userspace
and kernel work.

LynxOS (similar threading system) keeps priority calculations of this kind
separate between user and kernel space. I'll have to ask one of our
engineers here again why that's the case, but I suspect it's for the
reasons I've discussed previously.

bill

2005-04-18 07:38:00

by Sven-Thorsten Dietrich

[permalink] [raw]
Subject: Re: FUSYN and RT

On Sun, 2005-04-17 at 22:30 -0700, Bill Huey wrote:
> On Fri, Apr 15, 2005 at 04:37:05PM -0700, Inaky Perez-Gonzalez wrote:
> > By following your method, the pi engine becomes unnecesarily complex;
> > you have actually two engines following two different propagation
> > chains (one kernel, one user). If your mutexes/locks/whatever are the
> > same with a different cover, then you can simplify the whole
> > implementation by leaps.
>
> The main comment that I'm making here (so it doesn't get lost) is that,
> IMO, you're going to find that there is a mismatch with the requirements
> of Posix threading verse kernel uses. To drop the kernel mutex in 1:1 to
> back a futex-ish entity is going to be problematic mainly because of how
> kernel specific the RT mutex is (or any future kernel mutex) for debugging,
> etc... and I think this is going to be clear as it gets progressively
> implemented.
>

PI behavior is pretty well spec'd out at this point, but it's possible
to assume that no userspace locks are taken after the first kernel
lock is locked, and that the task exits the kernel without holding any
kernel locks. That makes it easier to think about, and from that
perspective, I see no complications with the transitive PI across
the user / kernel boundary.

If a userspace task has RT priority, it should be able to bump along
lower priority kernel tasks, whether they (or it) are holding any user
mutexes, or not.

> I think folks really need to think about this clearly before moving into
> any direction prematurely. That's what I'm saying. PI is one of those
> issues, but ultimately it's the fundamental differences between userspace
> and kernel work.
>

Bill, we are really trying to do this right, open, on the table.

This is an open invitation to anyone interested to get on the line
with us on Wednesday. Get the info for the FREE call here:

http://developer.osdl.org/dev/mutexes/


> LynxOS (similar threading system) keep priority calculations of this kind
> seperate between user and kernel space. I'll have the ask one of our
> engineers here why again that's the case, but I suspect it's for the
> reasons I've discussed previously.

Let us know if it's possible to disclose that info.

Sven


2005-04-18 11:34:21

by Steven Rostedt

[permalink] [raw]
Subject: Re: FUSYN and RT

On Mon, 2005-04-18 at 00:37 -0700, Sven-Thorsten Dietrich wrote:

>
> Bill, we are really trying to do this right, open, on the table.
>
> This is an open invitation to anyone interested to get on the line
> with us on Wednesday. Get the info for the FREE call here:
>
> http://developer.osdl.org/dev/mutexes/
>

This looks good. Lots of info on the links, but I just want to point
out that on http://developer.osdl.org/dev/mutexes/Terminology.shtml,
your link to Ingo's page:
http://people.redhat.com/~mingo/realtimepreempt/ returns a 404.

-- Steve