Mark Mielke <[email protected]> wrote:
>
> On Wed, Jul 10, 2002 at 04:09:21PM -0600, Cort Dougan wrote:
> > Yes, please do make it a config option. 10x interrupt overhead makes me
> > worry. It lets users tailor the kernel to their expected load.
>
> All this talk is getting to me.
>
> I thought we recently (1 month ago? 2 months ago?) concluded that
> increases in interrupt frequency only affect performance by a very
> small amount, but generate an increase in responsiveness. The only
> real argument against that I have seen is the 'power conservation'
> argument. The idea was that the scheduler itself did not execute
> on most interrupts. The clock is updated, and that is about all.
On UML and mainframe Linux, *any* periodic clock tick
is heavy overhead when you have a large number of
(mostly idle) instances of Linux running, isn't it?
I think I once heard that those architectures went to great lengths
to avoid periodic clock ticks. (My memory is rusty, though.)
How about this: let's apply the high-resolution timer patch,
which adds explicit timer events in between the normal 100 Hz
events when needed to satisfy precise sleep requests. Then
let's increase the interval between the normal periodic clock
events from 10ms to infinity. Everything will keep working,
as the high-resolution timer patch code will schedule timer
events as needed -- but suddenly we'll have power consumption
as low as possible, snappier performance, and the thousands-of-instances
case will no longer have this huge drain on performance from
periodic timer events that do nothing but update jiffies.
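To make that concrete, here's a toy userspace sketch of the mechanism
(hypothetical names and made-up numbers, nothing from the actual
high-res timer patch): instead of arming a fixed 10ms tick, you
program a one-shot interrupt for the soonest pending timer.

#include <stdio.h>
#include <limits.h>

#define NTIMERS 4

/* Pending soft-timer expiries, in microseconds from now;
 * -1 marks an unused slot. */
static long pending[NTIMERS] = { 250000, 4000, 999999, -1 };

/* Delay to program into the timer hardware: the soonest pending
 * expiry, or LONG_MAX meaning "no tick needed, sleep forever". */
static long next_timer_delay(void)
{
	long soonest = LONG_MAX;
	int i;

	for (i = 0; i < NTIMERS; i++)
		if (pending[i] >= 0 && pending[i] < soonest)
			soonest = pending[i];
	return soonest;
}

int main(void)
{
	printf("program one-shot timer for %ld us\n", next_timer_delay());
	return 0;
}

On a mostly idle instance that degenerates to "no interrupts at all",
which is exactly what the thousands-of-instances case wants.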
OK, so I'm just an ignorant member of the peanut gallery, but
I'd like to hear a real kernel hacker explain why this isn't
the way to go.
- Dan
Hi,
On Thu, 11 Jul 2002 [email protected] wrote:
> Then let's increase the interval between the normal periodic clock
> events from 10ms to infinity. Everything will keep working, as the
> high-resolution timer patch code will schedule timer events as needed --
> but suddenly we'll have power consumption as low as possible, snappier
> performance, and the thousands-of-instances case will no longer have
> this huge drain on performance from periodic timer events that do
> nothing but update jiffies.
Well, that's the aim.
> OK, so I'm just an ignorant member of the peanut gallery, but
> I'd like to hear a real kernel hacker explain why this isn't
> the way to go.
The only objection mentioned so far was the amount of stuff that depends
on periodic ticks. If we just tick aperiodically, we'd fail for sure, but
if we make those users depend on another timer, we've won.
I think a good scheduler can handle this and should also be able to
determine a halfway-optimal tick rate for the current load.
That's a _real_ challenge, guys!
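To make the idea slightly more concrete, a toy sketch (the numbers are
entirely made up, this is not from any patch):

#include <stdio.h>

/* Busy box: full tick rate; lightly loaded: stretch the tick. */
static int tick_interval_ms(int runnable_tasks)
{
	if (runnable_tasks > 1)
		return 10;	/* 100 Hz while tasks compete for the CPU */
	if (runnable_tasks == 1)
		return 100;	/* one runner, few scheduling decisions */
	return 1000;		/* idle: wake up rarely, or not at all */
}

int main(void)
{
	int n;

	for (n = 0; n <= 3; n++)
		printf("%d runnable -> tick every %d ms\n",
		       n, tick_interval_ms(n));
	return 0;
}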
Regards,
Thunder
Thunder from the hill wrote:
> On Thu, 11 Jul 2002 [email protected] wrote:
>> OK, so I'm just an ignorant member of the peanut gallery, but
>> I'd like to hear a real kernel hacker explain why this isn't
>> the way to go.
>
> The only objection mentioned so far was the amount of stuff that depends
> on periodic ticks. If we just tick aperiodically, we'd fail for sure, but
> if we make those users depend on another timer, we've won.
>
> I think a good scheduler can handle this and should also be able to
> determine a halfway-optimal tick rate for the current load.
The current approach on s390 is to stop the timer tick only for idle
CPUs, because that's where it hurts. A busy system can just keep on
using 100 Hz (or 1000 Hz) timers.
The jiffies value then gets updated from the time stamp counter when an
interrupt happens on an idle CPU.
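Roughly, as a self-contained toy model (made-up names and numbers, not
the s390 code itself; see the links below for the real thing):

#include <stdio.h>

static unsigned long long tsc;			/* free-running counter */
static unsigned long long cycles_per_jiffy = 10000;
static unsigned long jiffies;
static unsigned long long jiffies_stamp;	/* tsc at last update */

/* Called from the first interrupt that ends an idle period. */
static void account_idle_ticks(void)
{
	unsigned long long elapsed = tsc - jiffies_stamp;
	unsigned long missed = elapsed / cycles_per_jiffy;

	jiffies += missed;	/* catch up in one step, no lost ticks */
	jiffies_stamp += (unsigned long long)missed * cycles_per_jiffy;
}

int main(void)
{
	tsc = 570000;		/* pretend we slept for 57 tick periods */
	account_idle_ticks();
	printf("jiffies caught up to %lu\n", jiffies);
	return 0;
}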
See Martin Schwidefsky's recent post for code:
http://marc.theaimsgroup.com/?l=linux-kernel&m=102578746520177&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=102578746420174&w=2
Arnd <><
On Thursday 11 July 2002 03:45 pm, Arnd Bergmann wrote:
> The current approach on s390 is to stop the timer tick only for idle
> CPUs, because that's where it hurts. A busy system can just keep on
> using 100 Hz (or 1000 Hz) timers.
> The jiffies value then gets updated from the time stamp counter when an
> interrupt happens on an idle CPU.
>
> See Martin Schwidefsky's recent post for code:
>
> http://marc.theaimsgroup.com/?l=linux-kernel&m=102578746520177&w=2
> http://marc.theaimsgroup.com/?l=linux-kernel&m=102578746420174&w=2
>
> Arnd <><
There has also been some work done by Michael Olsen ([email protected])
regarding aperiodic jiffies updates. This was in the context of embedded
handheld devices, in particular IBM's LinuxWatch. Similar problems
arise in such devices, where the timer tick can be a significant source
of battery drain.
There are/were some subtle differences between Michael's and Martin's
approaches, which I won't be able to adequately describe.
I believe Michael has a publication on this; I'll let him respond.
--
-- Hubertus Franke ([email protected])
In article <[email protected]>, <[email protected]> wrote:
>
>How about this: let's apply the high-resolution timer patch,
>which adds explicit timer events in between the normal 100 Hz
>events when needed to satisfy precise sleep requests.
The thing is, I think that's the wrong way around.
What we should have is a notion of a reasonably high frequency timer,
and then _slow_it_down_ to something else if not needed.
Speeding the timer up is bad, because:
- you do need to limit the speedup to _something_ anyway (and it might
as well be HZ)
- you get "partial jiffies", which means that only stuff that knows
about the finer granularity gets it.
In contrast, if you slow things down in integer increments of "n", the
only thing you need to do is to add in "n" instead of "1" in the timer
tick handler. Nobody else needs to really care - there is no such thing
as a "fractional jiffy".
Linus
In article <[email protected]>,
Linus Torvalds <[email protected]> wrote:
>
>In contrast, if you slow things down in integer increments of "n", the
>only thing you need to do is to add in "n" instead of "1" in the timer
>tick handler. Nobody else needs to really care - there is no such thing
>as a "fractional jiffy".
An added issue: if you slow the tick down, you can easily do it in _one_
place: the idle loop. When you exit the idle loop you speed it up again,
and nobody is ever any wiser (sure, you need to know enough about timers
in idle to know when you can do it, but that's still fairly localized).
In contrast, if you speed the timer up, you have to make this quite
fundamental in timer handling, and have architecture-specific issues on
how to speed the timer up and down etc. Big ugh, compared to just having
it in one place inside code that is already architecture-specific for
other reasons.
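Something like this toy model, with hypothetical helpers standing in
for the arch-specific timer reprogramming:

#include <stdio.h>

static unsigned int tick_mult = 1;

/* Arch-specific in reality; here they just record the factor. */
static void slow_timer(unsigned int n) { tick_mult = n; }
static void restore_timer(void) { tick_mult = 1; }

/* Fake: pretend work arrives immediately so the loop exits. */
static int need_resched(void) { return 1; }

static void cpu_idle(void)
{
	slow_timer(10);			/* one place: entering idle */
	while (!need_resched())
		;			/* halt / wait for interrupt */
	restore_timer();		/* one place: leaving idle */
}

int main(void)
{
	cpu_idle();
	printf("tick multiplier restored to %u\n", tick_mult);
	return 0;
}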
Linus