Dear all,
I have developed a framework similar to a hypervisor that switches between
Linux and a RT domain.
As RT priority is of the highest importance, I want to give more time to
the RT domain, *but* I also want to give Linux adequate time to operate.
Could anyone advise me on how to determine the minimum possible time slice
that I can give to Linux so that it doesn't crash?
That is, if I periodically execute my RT domain every ms and then I switch
to Linux, what's the minimum time that should be given to Linux, until the
next RT domain execution?
NOTE: The RT domain and Linux are not executed concurrently, i.e. the RT
domain is executed first and then Linux executes until the next RT domain
execution.
Thank you all in advance.
John K.
On Thu, Mar 31, 2011 at 01:13:55PM +0100, limp wrote:
> Dear all,
>
> I have developed a framework similar to a hypervisor that switches between
> Linux and a RT domain.
> As RT priority is of highest importance, I want to give more time to the RT
> domain *but* I also want to give Linux adequate time for being able to
> operate.
>
> Could anyone advise me on how to determine the minimum possible time slice
> that I can give to Linux so that it doesn't crash?
>
> That is, if I periodically execute my RT domain every ms and then I switch
> to Linux, what's the minimum time that should be given to Linux, until the
> next RT domain execution?
>
I'm not sure there is any requirement. The only problem I can imagine
is if you give Linux so little time that the timer interrupt can't
finish, or that every time you schedule Linux back in, the timer
interrupt goes off and nothing else gets done.
Probably want to take a look at how virtualization works, as it has the
same issue. Linux guests get scheduled in periodically, and there's no
guarantee that they will run for a long time either, as real-time tasks
on the host may starve them.
Outside of the livelock I described above, I doubt anything will
actually cause Linux to crash due to this.
-- Steve
On Thu, 2011-03-31 at 11:12 -0400, Steven Rostedt wrote:
> On Thu, Mar 31, 2011 at 01:13:55PM +0100, limp wrote:
> > Dear all,
> >
> > I have developed a framework similar to a hypervisor that switches between
> > Linux and a RT domain.
> > As RT priority is of highest importance, I want to give more time to the RT
> > domain *but* I also want to give Linux adequate time for being able to
> > operate.
> >
> I'm not sure there is any requirement. The only problem I guess that can
> happen is if you give such little time that the timer interrupt can't
> finish, or that every time you schedule Linux back in, the timer
> interrupt goes off and nothing else gets done.
>
Right. Moreover, what you're doing seems very similar to what these
guys do: https://www.rtai.org/ , http://www.xenomai.org/ .
I've never checked whether (and if so, how) they do the kind of thing
you're thinking of, but maybe they do... Have you already looked at them?
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
----------------------------------------------------------------------
Dario Faggioli, ReTiS Lab, Scuola Superiore Sant'Anna, Pisa (Italy)
http://retis.sssup.it/people/faggioli -- [email protected]
Thank you guys for your responses,
To be honest I haven't looked in detail at how RTAI and Xenomai do it but
AFAIK, they don't give a fixed time slice to Linux either (i.e. they switch
to Linux only when they have finished with their RT tasks).
A difference between their implementation and mine is that I don't
acknowledge any Linux interrupt while the RT domain is executing, so if
Linux is not smart enough to re-issue a lost interrupt, and if the RT
domain takes most of the CPU time and starves Linux, this could cause
Linux to crash at some point.
The idea behind not acknowledging Linux interrupts in the RT domain is
that I don't want to add *random* overhead to the execution of RT tasks.
What do you guys think?
Kind regards,
John K.
-----Original Message-----
From: Dario Faggioli [mailto:[email protected]]
Sent: Friday, April 01, 2011 9:05 AM
To: Steven Rostedt
Cc: limp; [email protected]
Subject: Re: Minimum time slice for reliable Linux execution
On Thu, 2011-03-31 at 11:12 -0400, Steven Rostedt wrote:
> On Thu, Mar 31, 2011 at 01:13:55PM +0100, limp wrote:
> > Dear all,
> >
> > I have developed a framework similar to a hypervisor that switches
> > between Linux and a RT domain.
> > As RT priority is of highest importance, I want to give more time to
> > the RT domain *but* I also want to give Linux adequate time for
> > being able to operate.
> >
> I'm not sure there is any requirement. The only problem I guess that
> can happen is if you give such little time that the timer interrupt
> can't finish, or that every time you schedule Linux back in, the timer
> interrupt goes off and nothing else gets done.
>
Right. Moreover, what you're doing seems very similar to what these guys do: https://www.rtai.org/ , http://www.xenomai.org/ .
I've never checked whether (and if so, how) they do the kind of thing you're thinking of, but maybe they do... Have you already looked at them?
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
----------------------------------------------------------------------
Dario Faggioli, ReTiS Lab, Scuola Superiore Sant'Anna, Pisa (Italy)
http://retis.sssup.it/people/faggioli -- [email protected]
On Fri, 2011-04-01 at 13:39 +0100, limp wrote:
> Thank you guys for your responses,
>
> To be honest I haven’t looked in detail how RTAI and Xenomai does it but
> AFAIK, they don't give a fixed time slice to Linux either (i.e. they switch
> To Linux only when they have finished with their RT tasks).
Perhaps you should look at them in more detail; maybe they do more than
you expect. Honestly, I haven't looked into the details of what they do
either, so I cannot comment on how they work.
>
> A difference between their implementation and mine is that I don't acknowledge
> any Linux interrupt while the RT domain is executed so maybe, if Linux code
> is not smart enough to re-issue a lost interrupt, and if the RT domain takes most
> of CPU time starving Linux, this can cause Linux to crash at some point.
What exactly do you mean by not acknowledging Linux interrupts? If an
interrupt takes place while an RT domain is running, you simply drop it?
Yes, that will break things. How will Linux know to reissue an interrupt
for a network packet coming in if it never knew it happened?
If your microkernel stores off the interrupt and reissues it to Linux
when Linux gets a chance to run again, then everything would work.
That's pretty much what the virtualization code does.
>
> The idea of not acknowledging Linux interrupts on RT domain is that I don't
> want to add *random* overhead into RT tasks execution.
Or do you simply mask the interrupts that the RT domain does not care
about while the RT domain runs? This should work: when you unmask them
they will trigger, and then you can pass them to the Linux IRQ handlers.
-- Steve
Hi,
This is pretty interesting to me. How would your framework handle an
RT application that does quite a bit of I/O? Does it block and go to
Linux, get serviced, and come back?
Thank you,
Rafael
On Thu, Mar 31, 2011 at 7:13 AM, limp <[email protected]> wrote:
> Dear all,
>
> I have developed a framework similar to a hypervisor that switches between
> Linux and a RT domain.
> As RT priority is of highest importance, I want to give more time to the RT
> domain *but* I also want to give Linux adequate time for being able to
> operate.
>
> Could anyone advise me on how to determine the minimum possible time slice
> that I can give to Linux so that it doesn't crash?
>
> That is, if I periodically execute my RT domain every ms and then I switch
> to Linux, what's the minimum time that should be given to Linux, until the
> next RT domain execution?
>
> NOTE: The RT domain and Linux are not executed concurrently, i.e. the RT
> domain is executed first and then Linux executes until the next RT domain
> execution.
>
> Thank you all in advance.
>
> John K.
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
Also, how is this different from placing a starvation limit around the
RT set of tasks?
On Fri, Apr 1, 2011 at 9:19 AM, Rafael Fernandez
<[email protected]> wrote:
> Hi,
>
> This is pretty interesting to me. How would your framework handle an
> RT application that does quite a bit of I/O? Does it block and go to
> Linux, get serviced, and come back?
>
> Thank you,
> Rafael
>
> On Thu, Mar 31, 2011 at 7:13 AM, limp <[email protected]> wrote:
>> Dear all,
>>
>> I have developed a framework similar to a hypervisor that switches between
>> Linux and a RT domain.
>> As RT priority is of highest importance, I want to give more time to the RT
>> domain *but* I also want to give Linux adequate time for being able to
>> operate.
>>
>> Could anyone advise me on how to determine the minimum possible time slice
>> that I can give to Linux so that it doesn't crash?
>>
>> That is, if I periodically execute my RT domain every ms and then I switch
>> to Linux, what's the minimum time that should be given to Linux, until the
>> next RT domain execution?
>>
>> NOTE: The RT domain and Linux are not executed concurrently, i.e. the RT
>> domain is executed first and then Linux executes until the next RT domain
>> execution.
>>
>> Thank you all in advance.
>>
>> John K.
>>
>>
>>
>
--
Rafael E. Fernández
B.S. Computer Science
University of Illinois at Urbana-Champaign