2003-08-06 18:35:31

by Timothy Miller

Subject: Interactivity improvements

Here's a kooky idea...

I'm not sure about this detail, but I would guess that the interactivity
schedulers are trying to determine relatively stable priority values for
processes. A process does not instantly come to its correct priority
level, because it gets there based on accumulation of behavioral patterns.

Well, it occurs to me that we could benefit from situations where
priority changes are underdamped. The result would sometimes be an
oscillation in priority levels. In the short term, a given process may
be given different amounts of CPU time when it is run, although in the
long term, it should average out.

At the same time, certain tasks can only be judged correctly over the
long term, like X, for example. Its long-term behavior is interactive,
but now and then, it will become a CPU hog, and we want to LET it.

The idea I'm proposing, however poorly formed, is that if we allow some
"excessive" oscillation early on in the life of a process, we may be
able to more quickly get a process NEAR its correct priority, OR get
its CPU time over the course of three times being run for the
underdamped case to be about the same as it would be if we knew in
advance what the priority should be. But in the underdamped case, the
priority would continue to oscillate up and down around the correct
level, because we are intentionally overshooting the mark each time we
adjust priority.
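
To make that concrete, the sort of update rule I'm imagining might look
like this (just a rough sketch; the gain is made up, and a real scheduler
would clamp the result to its priority range):

/* Sketch of an underdamped priority update: instead of creeping toward
 * the estimated target, deliberately overshoot it, so early adjustments
 * are large and later ones oscillate around the correct level.
 */
#define OVERSHOOT_NUM 3		/* made-up gain: move 3/2 of the error */
#define OVERSHOOT_DEN 2

static int adjust_priority(int current_prio, int estimated_target)
{
	int error = estimated_target - current_prio;

	return current_prio + (error * OVERSHOOT_NUM) / OVERSHOOT_DEN;
}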

This may not be related, but something that pops into my mind is a
numerical method called Newton's Method. It's a way to solve for roots
of an equation, and it involves derivatives, and I don't quite remember
how it works. But in any event, the results are less accurate than,
say, bisection, but you get to the answer MUCH more quickly.


2003-08-06 19:01:48

by Mike Fedyk

Subject: Re: Interactivity improvements

On Wed, Aug 06, 2003 at 02:48:11PM -0400, Timothy Miller wrote:
> The idea I'm proposing, however poorly formed, is that if we allow some
> "excessive" oscillation early on in the life of a process, we may be
> able to more quickly get a process NEAR its correct priority, OR get
> its CPU time over the course of three times being run for the
> underdamped case to be about the same as it would be if we knew in
> advance what the priority should be. But in the underdamped case, the
> priority would continue to oscillate up and down around the correct
> level, because we are intentionally overshooting the mark each time we
> adjust priority.
>
> This may not be related, but something that pops into my mind is a
> numerical method called Newton's Method. It's a way to solve for roots
> of an equation, and it involves derivatives, and I don't quite remember
> how it works. But in any event, the results are less accurate than,
> say, bisection, but you get to the answer MUCH more quickly.

Sounds interesting.

Much like a decaying average, or an average over the entire lifetime of the
process, which can also be weighted into the short-term interactivity
calculations.

I think Con is working on something like this already, except that it's
taking the short term into account more than the long term.
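
Roughly this sort of thing, I mean (only a sketch; the weights are
arbitrary and real code would use shifts rather than divides):

/* Decaying (exponentially weighted) lifetime average of sleep behaviour,
 * blended with a short-term sample.  A larger DECAY keeps more history;
 * the 50/50 blend at the end is arbitrary.
 */
#define DECAY 8		/* lifetime average moves 1/8 of the way per sample */

struct sleep_stats {
	int lifetime_avg;	/* slow, decaying average of sleep percentage */
};

static int interactivity_estimate(struct sleep_stats *s, int recent_sleep_pct)
{
	/* Fold the newest sample into the long-term decaying average. */
	s->lifetime_avg += (recent_sleep_pct - s->lifetime_avg) / DECAY;

	/* Weigh the long-term history into the short-term calculation. */
	return (s->lifetime_avg + recent_sleep_pct) / 2;
}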

2003-08-06 20:02:24

by Helge Hafting

Subject: Re: Interactivity improvements

On Wed, Aug 06, 2003 at 02:48:11PM -0400, Timothy Miller wrote:
> Here's a kooky idea...
>
> I'm not sure about this detail, but I would guess that the interactivity
> schedulers are trying to determine relatively stable priority values for
> processes. A process does not instantly come to its correct priority
> level, because it gets there based on accumulation of behavioral patterns.
>
> Well, it occurs to me that we could benefit from situations where
> priority changes are underdamped. The result would sometimes be an
> oscillation in priority levels. In the short term, a given process may
> be given different amounts of CPU time when it is run, although in the
> long term, it should average out.
>
Possibly a good idea...

> At the same time, certain tasks can only be judged correctly over the
> long term, like X, for example. Its long-term behavior is interactive,
> but now and then, it will become a CPU hog, and we want to LET it.
>
Possibly, but getting this detection right isn't easy. There
are many other cases where a process such as a compiler does
so much disk access that it looks interactive for some time, but
we definitely don't want it to become a hog at interactive
priority - ever.

There are also those who think that if X ever has so much to do
(usually due to a lack of graphics acceleration) then it had
better wait, because the X user isn't the only one waiting anyway.
(I.e. the machine is also a web/db/login server
or some such - so other people are waiting as well.)

> The idea I'm proposing, however poorly formed, is that if we allow some
> "excessive" oscillation early on in the life of a process, we may be
> able to more quickly get a process NEAR its correct priority, OR get
> its CPU time over the course of three times being run for the
> underdamped case to be about the same as it would be if we knew in
> advance what the priority should be. But in the underdamped case, the
> priority would continue to oscillate up and down around the correct
> level, because we are intentionally overshooting the mark each time we
> adjust priority.

More oscillation at the start than later will behave as you describe;
whether we want that is another issue. Processes _can_
change between being hogs and being interactive, and we want the
scheduler to pick up that fact quickly.

Example: a word processor is mostly interactive. Sometimes it
does a heavy job like typesetting an entire book, though; this is CPU
intensive and should schedule as such even if the word processor
has been "interactive" for a week. (I never close it, Linux
is so stable...) It should do its heavy work at low priority
so it doesn't interfere with the interactive work (or gameplaying)
I do while waiting for the .pdf to appear.
I definitely don't want a 5-minute
typesetting run to monopolize the CPU just because the task
has been sooo nice over the last week.

Helge Hafting

2003-08-06 21:10:31

by Con Kolivas

Subject: Re: Interactivity improvements

On Thu, 7 Aug 2003 04:48, Timothy Miller wrote:
> Here's a kooky idea...
>
> I'm not sure about this detail, but I would guess that the interactivity
> schedulers are trying to determine relatively stable priority values for
> processes. A process does not instantly come to its correct priority
> level, because it gets there based on accumulation of behavioral patterns.
>
> Well, it occurs to me that we could benefit from situations where
> priority changes are underdamped. The result would sometimes be an
> oscillation in priority levels. In the short term, a given process may
> be given different amounts of CPU time when it is run, although in the
> long term, it should average out.
>
> At the same time, certain tasks can only be judged correctly over the
> long term, like X, for example. Its long-term behavior is interactive,
> but now and then, it will become a CPU hog, and we want to LET it.
>
> The idea I'm proposing, however poorly formed, is that if we allow some
> "excessive" oscillation early on in the life of a process, we may be
> able to more quickly get a process NEAR its correct priority, OR get
> its CPU time over the course of three times being run for the
> underdamped case to be about the same as it would be if we knew in
> advance what the priority should be. But in the underdamped case, the
> priority would continue to oscillate up and down around the correct
> level, because we are intentionally overshooting the mark each time we
> adjust priority.
>
> This may not be related, but something that pops into my mind is a
> numerical method called Newton's Method. It's a way to solve for roots
> of an equation, and it involves derivatives, and I don't quite remember
> how it works. But in any event, the results are less accurate than,
> say, bisection, but you get to the answer MUCH more quickly.

Good thinking, but this is more or less already done in my code. I do have
very rapid elevation of priority, and once something is known to be
interactive it decays more slowly. It still _must_ be able to vary after the
fact, as interactive tasks can turn into CPU hogs and so on.
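
In rough terms the asymmetry looks something like this (an illustrative
sketch only, not the actual code; the step sizes are made up):

/* Interactive credit rises quickly while a task sleeps, but decays
 * slowly once the task has established itself as interactive -- while
 * still being allowed to fall all the way back if it turns into a hog.
 */
#define CREDIT_MAX	100
#define RISE_STEP	20	/* fast elevation while sleeping */
#define DECAY_STEP	2	/* slow decay while running */

static int update_credit(int credit, int slept_this_period)
{
	if (slept_this_period)
		credit += RISE_STEP;
	else
		credit -= DECAY_STEP;

	if (credit > CREDIT_MAX)
		credit = CREDIT_MAX;
	if (credit < 0)
		credit = 0;

	return credit;
}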

Con

2003-08-07 14:29:08

by Patrick McLean

Subject: Re: Interactivity improvements

I have a few ideas about the interactivity problem that I thought I
would share.

First, the whole problem with interactive tasks turning into CPU hogs,
and vice versa. The current implementation is fairly good at estimating
the interactivity of a new process, isn't it? I assume it keeps some
sort of statistics on what a process has been doing in the past, so if
a process starts to deviate from its previous behavior quite a bit (say
an interactive task starts finishing its time slices, or a CPU hog
starts sleeping a lot), couldn't we reset the priority, etc. and let
the interactivity estimator re-estimate the interactivity as if it were
a new task? That way no task would get a real chance to starve others,
while keeping interactive tasks interactive (interactive tasks that
become CPU hogs for short periods of time would have a pretty good
chance to smarten up before they really get penalized).
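
Something along these lines is what I mean (purely a sketch; the
structure, fields and threshold are invented for illustration):

/* If a task's recent behaviour deviates far from its long-term
 * statistics, throw the accumulated history away and re-estimate it
 * as if it were a brand new task.
 */
#define DEVIATION_LIMIT 40	/* invented threshold, percentage points */

struct task_history {
	int long_term_sleep_pct;	/* accumulated statistics */
	int recent_sleep_pct;		/* behaviour over the last few slices */
	int priority_bonus;
};

static void maybe_reset_estimator(struct task_history *t)
{
	int deviation = t->recent_sleep_pct - t->long_term_sleep_pct;

	if (deviation < 0)
		deviation = -deviation;

	if (deviation > DEVIATION_LIMIT) {
		/* Treat it as a new task: no bonus, no history. */
		t->long_term_sleep_pct = t->recent_sleep_pct;
		t->priority_bonus = 0;
	}
}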

Another point is compilers: they tend to do a lot of disk I/O and then
become major CPU hogs. Could we have some sort of heuristic that reduces
the bonuses for sleeping on block I/O rather than other kinds of I/O
(say pipes and network I/O in the case of X)? This would prevent tasks
like compilers from getting real bonuses for reading a lot of files at
startup.
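
The weighting could be as crude as this (again just a sketch, with
made-up numbers):

/* Scale the sleep bonus by what the task was sleeping on, so waiting on
 * block I/O (a compiler reading source files) earns less interactive
 * credit than waiting on a pipe or a socket (X waiting for input).
 */
enum sleep_type { SLEEP_BLOCK_IO, SLEEP_PIPE, SLEEP_SOCKET, SLEEP_OTHER };

static int sleep_bonus(enum sleep_type type, int sleep_ticks)
{
	switch (type) {
	case SLEEP_BLOCK_IO:
		return sleep_ticks / 4;	/* reduced credit for disk waits */
	case SLEEP_PIPE:
	case SLEEP_SOCKET:
		return sleep_ticks;	/* full credit for IPC/network waits */
	default:
		return sleep_ticks / 2;
	}
}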

Finally, the interactivity estimator seems to be quite a bit of code,
which certain people have no real use for (on servers, for example), and I
would imagine that it does reduce throughput. That is not a big deal on
desktops, but in a server environment it's not good, so maybe a
CONFIG_INTERACTIVE_ESTIMATOR or something similar would be an idea to
keep the server people happy: just have an option to completely get rid
of the extra overhead of having a really nice interactivity estimator. I
could be an idiot though, and I imagine that I will be needing some
asbestos for saying this, but I thought I would voice my opinion.
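
In the crudest form the option would just compile the bonus calculation
away (a sketch of the idea, not a patch):

/* With a hypothetical CONFIG_INTERACTIVE_ESTIMATOR option, server
 * builds could skip the estimator entirely and give every task the
 * same flat (zero) bonus.
 */
#ifdef CONFIG_INTERACTIVE_ESTIMATOR
static int effective_bonus(int estimated_bonus)
{
	return estimated_bonus;		/* full estimator in use */
}
#else
static int effective_bonus(int estimated_bonus)
{
	return 0;			/* no interactivity boosting at all */
}
#endif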

2003-08-07 15:26:43

by Richard Curnow

Subject: Re: Interactivity improvements

* Patrick McLean <[email protected]> [2003-08-07]:
> Another point is compilers: they tend to do a lot of disk I/O and then
> become major CPU hogs. Could we have some sort of heuristic that reduces
> the bonuses for sleeping on block I/O rather than other kinds of I/O
> (say pipes and network I/O in the case of X)?

What about compilers chewing on source files coming in over NFS rather
than resident on local block devices? The network waits need to be
broken out into NFS versus other, or UDP versus TCP or something. e.g.
waits due to the user not having typed anything yet, or moved the mouse,
are going to be on TCP connections.

--
Richard \\\ SuperH Core+Debug Architect /// .. At home ..
P. /// [email protected] /// [email protected]
Curnow \\\ http://www.superh.com/ /// http://www.rc0.org.uk

2003-08-07 15:46:04

by Patrick McLean

Subject: Re: Interactivity improvements



Richard Curnow wrote:
> * Patrick McLean <[email protected]> [2003-08-07]:
>
>>Another point is compilers: they tend to do a lot of disk I/O and then
>>become major CPU hogs. Could we have some sort of heuristic that reduces
>>the bonuses for sleeping on block I/O rather than other kinds of I/O
>>(say pipes and network I/O in the case of X)?
>
>
> What about compilers chewing on source files coming in over NFS rather
> than resident on local block devices? The network waits need to be
> broken out into NFS versus other, or UDP versus TCP or something. e.g.
> waits due to the user not having typed anything yet, or moved the mouse,
> are going to be on TCP connections.
>
Maybe we could have it reduce sleeping bonuses when waiting on filesystem
access. This would cover NFS, as the kernel does consider it a
filesystem, and it would cover SMB, AFS, etc. as well.

2003-08-07 15:33:27

by Felipe Alfaro Solana

Subject: Re: Interactivity improvements

On Thu, 2003-08-07 at 16:26, Patrick McLean wrote:

> Finally, the interactivity estimator seems to be quite a bit of code,
> which certain people have no real use for (on servers, for example), and I
> would imagine that it does reduce throughput. That is not a big deal on
> desktops, but in a server environment it's not good, so maybe a
> CONFIG_INTERACTIVE_ESTIMATOR or something similar would be an idea to
> keep the server people happy: just have an option to completely get rid
> of the extra overhead of having a really nice interactivity estimator. I
> could be an idiot though, and I imagine that I will be needing some
> asbestos for saying this, but I thought I would voice my opinion.

In the past, I proposed to have at least 2 schedulers available (much
like we have for I/O schedulers): one for servers, which doesn't mess
with bonuses and interactivity too much and gives best throughput for
batch processing (OLTP and in general, non-interactive loads), and
another one for desktops or Terminal Servers.

Don't know if this is feasible, however.

2003-08-07 17:42:00

by Robert Love

Subject: Re: Interactivity improvements

On Thu, 2003-08-07 at 07:26, Patrick McLean wrote:

> Finally, the interactivity estimator seems to be quite a bit of code,
> which certain people have no real use for (on servers, for example), and I
> would imagine that it does reduce throughput

Actually, it should improve I/O throughput. What it might hurt is
computational performance, but only at the expense of benefiting other
processes.

The reason it benefits throughput is that file I/O is definitely marked
interactive, and that results in file I/O being able to quickly wake up,
dispatch the I/O, and go back to sleep. It's the usual treatment given to
I/O, and it works.

Robert Love


2003-08-07 18:33:32

by Mike Fedyk

Subject: Re: Interactivity improvements

On Thu, Aug 07, 2003 at 11:42:58AM -0400, Patrick McLean wrote:
>
>
> Richard Curnow wrote:
> >* Patrick McLean <[email protected]> [2003-08-07]:
> >
> >>Another point is compilers: they tend to do a lot of disk I/O and then
> >>become major CPU hogs. Could we have some sort of heuristic that reduces
> >>the bonuses for sleeping on block I/O rather than other kinds of I/O
> >>(say pipes and network I/O in the case of X)?
> >
> >
> >What about compilers chewing on source files coming in over NFS rather
> >than resident on local block devices? The network waits need to be
> >broken out into NFS versus other, or UDP versus TCP or something. e.g.
> >waits due to the user not having typed anything yet, or moved the mouse,
> >are going to be on TCP connections.
> >
> Maybe we could have it reduce sleeping bonuses when waiting on filesystem

We are already doing this.

> access. This would cover NFS, as the kernel does consider it a
> filesystem, and it would cover SMB, AFS, etc. as well.

Network interactivity deals with sockets and such. Accessing NFS (and
other network filesystems) deals with a virtual block device, and gives
similar patterns to a local block device.

2003-08-07 20:47:06

by William Lee Irwin III

Subject: Re: Interactivity improvements

Patrick McLean <[email protected]> [2003-08-07]:
>> Another point is compilers: they tend to do a lot of disk I/O and then
>> become major CPU hogs. Could we have some sort of heuristic that reduces
>> the bonuses for sleeping on block I/O rather than other kinds of I/O
>> (say pipes and network I/O in the case of X)?

On Thu, Aug 07, 2003 at 04:24:18PM +0100, Richard Curnow wrote:
> What about compilers chewing on source files coming in over NFS rather
> than resident on local block devices? The network waits need to be
> broken out into NFS versus other, or UDP versus TCP or something. e.g.
> waits due to the user not having typed anything yet, or moved the mouse,
> are going to be on TCP connections.

I'd be interested in whatever you come up with for this, as I use NFS
a lot.


-- wli

2003-08-07 21:28:04

by Bernd Eckenfels

Subject: Re: Interactivity improvements

In article <[email protected]> you wrote:
> On Thu, Aug 07, 2003 at 04:24:18PM +0100, Richard Curnow wrote:
>> What about compilers chewing on source files coming in over NFS rather
>> than resident on local block devices? The network waits need to be
>> broken out into NFS versus other, or UDP versus TCP or something. e.g.
>> waits due to the user not having typed anything yet, or moved the mouse,
>> are going to be on TCP connections.
>
> I'd be interested in whatever you come up with for this, as I use NFS
> a lot.

Well, it might be easy to separate user-mode blocking on filesystem
operations from user-mode sleeping on file descriptors, on a
per-filesystem-type basis.

But maybe we do not need to differentiate anyway; perhaps we can somehow
detect blocking reads which stress the hardware vs. blocking reads which
actually lower the load. Perhaps we need an fd flag for that, where FDs
pointing to fifos, sockets and pipes would be free, and FDs pointing to
some devices (e.g. not ttys) and files would be tainted and get a penalty
on blocking.
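
As a userspace illustration of the split I mean (the classification is
the point, not this exact test):

/* Sleeping on pipes, fifos and sockets stays "free"; sleeping on
 * regular files and most devices gets tainted and earns a penalty
 * when it blocks.
 */
#include <sys/stat.h>
#include <unistd.h>

static int fd_is_tainted(int fd)
{
	struct stat st;

	if (fstat(fd, &st) < 0)
		return 0;

	if (S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode))
		return 0;			/* IPC: no penalty */
	if (S_ISCHR(st.st_mode) && isatty(fd))
		return 0;			/* ttys: no penalty */

	/* Regular files, block devices and other devices: penalise blocking. */
	return 1;
}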

Another optimisation would be to only penalise a blocking process if there
is more than one, to avoid renicing processes on otherwise unloaded but
slow systems.

Greetings
Bernd
--
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/

2003-08-07 22:52:26

by Timothy Miller

Subject: Re: Interactivity improvements

One comment on two processes bouncing a semaphore...

If certain kinds of sleep, such as sleeping on a semaphore, do not add
any interactivity points, but the act of sleeping on a semaphore DOES
cause SHARING of interactivity points, then those two processes would
not be able to achieve interactive status based on their sleep patterns.

The next thing to do is to somehow notice that the process never sleeps
on I/O or does any other kind of sleeping that WOULD indicate interactivity.

The situation is that both processes really ARE CPU hogs, but we don't
detect that based on using up the timeslice. The solution is to track
what kinds of sleep a process does (counters of the various sorts),
which I'm sure already happens, but the difference is that you do not
penalize for using up the time slice. You penalize for NOT doing
interactive-indicating I/O.

This is really quite simple. All you're doing is no longer considering
time-slice expiration in your heuristics, at least not directly. You have a
whole list of different sleep reasons (Schlafgruende?); it's just that
timeslice expiration isn't one of them, so whenever a task's sleep reason
isn't in the list, you infer that it was a timeslice expiration.

That means you have to catch all other reasons and make sure you've
accounted for them, but you get the idea.
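
As a sketch of the bookkeeping (the reason list here is obviously
incomplete and the names are made up):

/* Per-task counters for each sleep reason.  Timeslice expiration is
 * never recorded as a sleep reason; it is inferred whenever a task
 * stops running without any reason being logged.  Only the reasons
 * that indicate interactivity earn credit, so a task that only ever
 * blocks on semaphores or pipes earns nothing.
 */
enum sleep_reason {
	SLEEP_ON_BLOCK_IO,
	SLEEP_ON_TTY,
	SLEEP_ON_SOCKET,
	SLEEP_ON_PIPE,
	SLEEP_ON_SEMAPHORE,
	NR_SLEEP_REASONS
};

struct sleep_counters {
	unsigned long count[NR_SLEEP_REASONS];
};

static unsigned long interactive_credit(const struct sleep_counters *c)
{
	/* Semaphore and pipe sleeps are treated like timeslice expiry
	 * and earn nothing.
	 */
	return c->count[SLEEP_ON_TTY] + c->count[SLEEP_ON_SOCKET] +
	       c->count[SLEEP_ON_BLOCK_IO];
}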

Simpler still would be to treat sleeping on a semaphore (or a pipe or
some other IPC) as a timeslice expiration. That would be equivalent.
Now, if the other process is interactive, then you have a fight on your
hands.

Does the interactive process accumulate and share interactivity points
fast enough to keep the other one moving well? If one process is
interactive, then, REALLY, you would want the sleeping of the second
process to be a don't-care. But how can you tell?

If one process is a CPU hog (or perceived as such), and it is blocking
against another process, thus sharing interactivity points, does that
mean the CPU hog shares negative points with the other process?


Please pardon my stream-of-consciousness style in this post.