2004-03-16 15:02:24

by Timothy Miller

Subject: Scheduler: Process priority fed back to parent?

Something occurred to me, so it has probably occurred to others as well. :)

Anyhow, the idea that I had was to feed information about a process's
behavior (interactive/not) to the process's parent (and its parent,
etc). The next time the parent forks, that information is used to
initially estimate the priority of the forked process.

This isn't perfect, but it would help distinguish between a user shell
where compiles are being done and a user shell (say, Gnome) from which
interactive processes are being launched. Each process maintains a
number which indicates the trends seen in the interactivity of its
descendants.
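A minimal userspace sketch of the feedback idea (the score range, the
blending weights, and all names here are assumptions for illustration,
not anything the kernel actually implements):

```c
/* Hypothetical per-process interactivity score in [0, 100].
 * On exit, a child folds its observed behavior into its parent's
 * estimate; on fork, the child starts from the parent's estimate
 * instead of a neutral default. */

struct task {
    int interactivity;  /* 0 = CPU-bound, 100 = fully interactive */
};

/* Child exit: blend the child's observed behavior into the parent.
 * The 3/4-old, 1/4-new weighting is an assumed tuning knob. */
static void feed_back_to_parent(struct task *parent, const struct task *child)
{
    parent->interactivity =
        (3 * parent->interactivity + child->interactivity) / 4;
}

/* Fork: seed the child from the parent's accumulated history. */
static void init_child(struct task *child, const struct task *parent)
{
    child->interactivity = parent->interactivity;
}
```

So a shell that keeps spawning CPU-bound compilers drifts toward a low
estimate, and its next fork starts closer to the right priority.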


Another idea I had, but which I think I've seen discussed before, was to
cache information about individual executables. Every time a process
terminates (and/or periodically), the behavior of that process is fed to
a daemon which stores it on disk (on a periodic basis) in such a way
that the kernel can efficiently get at it. When the kernel launches a
process, it looks at the cache for interactivity history to estimate
initial priority.

This way, after gcc has run a few times, it'll be flagged as a CPU-bound
process and every time it's run after that point, it is always run at an
appropriate priority. Similarly, the first time xmms is run, its
interactivity estimate won't be right, but after it's determined to be
interactive, then the next time the program is launched, it STARTS with
an appropriate priority: no ramp-up time.
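As a sketch of the exec-time lookup (the entry layout, the neutral
default, and the linear scan are all invented for illustration; a real
implementation would hash on inode number and back the table with the
on-disk cache):

```c
#include <stddef.h>

/* One cached record per executable, keyed by inode number. */
struct prio_cache_entry {
    unsigned long ino;   /* inode of the executable */
    int interactivity;   /* long-term estimate, 0..100 */
};

#define PRIO_NEUTRAL 50  /* assumed default for unknown programs */

static int lookup_initial_priority(const struct prio_cache_entry *cache,
                                   size_t n, unsigned long ino)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (cache[i].ino == ino)
            return cache[i].interactivity;
    return PRIO_NEUTRAL; /* first run: fall back to current behavior */
}
```

A gcc-like entry would sit near 0, an xmms-like entry near 100, and a
program never seen before would simply get today's neutral start.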


2004-03-16 15:55:17

by Muli Ben-Yehuda

Subject: Re: Scheduler: Process priority fed back to parent?

On Tue, Mar 16, 2004 at 10:16:50AM -0500, Timothy Miller wrote:

> This way, after gcc has run a few times, it'll be flagged as a CPU-bound
> process and every time it's run after that point, it is always run at an
> appropriate priority. Similarly, the first time xmms is run, its
> interactivity estimate won't be right, but after it's determined to be
> interactive, then the next time the program is launched, it STARTS with
> an appropriate priority: no ramp-up time.

This is something that I've thought of doing in the past. The reason I
didn't pursue it further is that it's impossible to get it right for
all cases, and it attacks the problem in the wrong place. The kernel
shouldn't need to guess(timate) what the process is going to do. The
userspace programmer, who knows what his process is going to do,
should tell the kernel.

Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/



2004-03-16 16:04:56

by Timothy Miller

Subject: Re: Scheduler: Process priority fed back to parent?



Muli Ben-Yehuda wrote:
> On Tue, Mar 16, 2004 at 10:16:50AM -0500, Timothy Miller wrote:
>
>
>>This way, after gcc has run a few times, it'll be flagged as a CPU-bound
>>process and every time it's run after that point, it is always run at an
>>appropriate priority. Similarly, the first time xmms is run, its
>>interactivity estimate won't be right, but after it's determined to be
>>interactive, then the next time the program is launched, it STARTS with
>>an appropriate priority: no ramp-up time.
>
>
> This is something that I've thought of doing in the past. The reason I
> didn't pursue it further is that it's impossible to get it right for
> all cases, and it attacks the problem in the wrong place. The kernel
> shouldn't need to guess(timate) what the process is going to do. The
> userspace programmer, who knows what his process is going to do,
> should tell the kernel.

I agree... somewhat. It would be nice if we could trust every program
to always do the right thing, always accurately indicate its priority,
and always yield the CPU at the best time. But if that were reality,
we'd still be using cooperative multitasking.

Unfortunately, the OS has to play babysitter to processes, because
they're guaranteed to misbehave. Preemption exists to ensure fairness
amongst processes. Thus, while you're right that it would be nice to
have processes report their CPU requirements, if we were to actually DO
that, it would be a disaster.


2004-03-16 16:15:46

by Muli Ben-Yehuda

Subject: Re: Scheduler: Process priority fed back to parent?

On Tue, Mar 16, 2004 at 11:19:46AM -0500, Timothy Miller wrote:

> Unfortunately, the OS has to play babysitter to processes, because
> they're guaranteed to misbehave. Preemption exists to ensure fairness
> amongst processes. Thus, while you're right that it would be nice to
> have processes report their CPU requirements, if we were to actually DO
> that, it would be a disaster.

I agree we should do the best thing possible without any prior
knowledge of what a process should do. I don't agree we should add
pointless complexity to the scheduler for dubious gains (getting rid
of the very short ramp up time). Of course, if you think it's useful,
feel free to implement it and let's resume the discussion when we have
some numbers.

Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/



2004-03-16 16:37:42

by Eric Bambach

Subject: Re: Scheduler: Process priority fed back to parent?

On Tuesday 16 March 2004 9:16 am, Timothy Miller wrote:
> Something occurred to me, so it has probably occurred to others as well.
> :)
>
> Anyhow, the idea that I had was to feed information about a process's
> behavior (interactive/not) to the process's parent (and it's parent,
> etc). The next time the parent forks, that information is used to
> initially estimate the priority of the forked process.

> This isn't perfect, but it would help distinguish between a user shell
> where compiles are being done and a user shell (say, Gnome) from which
> interactive processes are being launched. Each process maintains a
> number which indicates the trends seen in the interactivity of its
> descendents.

> Another idea I had, but which I think I've seen discussed before, was to
> cache information about individual executables. Every time a process
> terminates (and/or periodically), the behavior of that process is fed to
> a daemon which stores it on disk (on a periodic basis) in such a way
> that the kernel can efficiently get at it. When the kernel launches a
> process, it looks at the cache for interactivity history to estimate
> initial priority.

Sort of like what windows does with its prefetch stuff? Have a directory that
contains info about the best way to run a particular program and its memory
layouts/ disk accesses and patterns?

> This way, after gcc has run a few times, it'll be flagged as a CPU-bound
> process and every time it's run after that point, it is always run at an
> appropriate priority. Similarly, the first time xmms is run, its
> interactivity estimate won't be right, but after it's determined to be
> interactive, then the next time the program is launched, it STARTS with
> an appropriate priority: no ramp-up time.

This sounds like a good idea; however, I'm not too versed in all the technical
details. One thing I do see being a problem: do I really want the kernel
doing a disk read/write every time something forks or starts a process? There
would have to be some sort of cache.
Also, is it a good idea for the kernel to be reading from and writing to disk,
basing some of its decisions on disk files? Does this add filesystem-specific
complexity? As far as I know, the kernel never keeps any configuration on an
actual hard disk. Everything is in /proc, etc. Something nags at me that
it's a bad idea to have the kernel read/write things it needs to run on a hard
disk.
So if it's a bad idea to write to disk, we would have to keep the prefetch info
in /proc, which will hog memory as more and more information is gathered, or
we will lose our valuable information between reboots.
If someone can explain why these things would/would not be a problem, I think
finding a better way to handle processes would be interesting.

Overall, it sounds like an OK idea if the specifics get hammered out.

--
-------------------------
Eric Bambach
Eric at cisu dot net
-------------------------

2004-03-16 16:42:02

by Timothy Miller

Subject: Re: Scheduler: Process priority fed back to parent?



Eric wrote:

>
>
> Sort of like what windows does with its prefetch stuff? Have a directory that
> contains info about the best way to run a particular program and its memory
> layouts/ disk accesses and patterns?

I'm not familiar with that aspect of Windows, but... sure. :)

>
>>This way, after gcc has run a few times, it'll be flagged as a CPU-bound
>>process and every time it's run after that point, it is always run at an
>>appropriate priority. Similarly, the first time xmms is run, its
>>interactivity estimate won't be right, but after it's determined to be
>>interactive, then the next time the program is launched, it STARTS with
>>an appropriate priority: no ramp-up time.
>
>
> This sounds like a good idea, however im not too versed in all the technical
> details. One thing I do see being a problem is do I really want the kernel
> doing a disk-read/write everytime something forks or starts a process? There
> would have to be some sort of cache.

The kernel already does disk access to load a process... to load the
executable. The cache of priorities would be structured well on disk,
and the data in that cache would be cached in RAM like any other file
data is cached by the VM.

The kernel needs a process context to access files, so either there
would be an artificial one which always exists for this, or the priority
cache would be accessed in the context of the process being launched.
It would be preferable to open the cache file once at boot time, so the
former is probably best.

> Also, is it a good idea for the kernel to be writing/reading from disk,
> basing some of its decisions on disk files. Does this add filesystem specific
> complexity? As far as I know the kernel, never keeps any configuration on an
> actual hard disk. Everything is in /proc...etc... Something nags at me that
> its a bad idea to have the kernel read/write things it needs to run on a hard
> disk.

I appreciate the problems, and the cost may be greater than the benefit.
But if the cache is just a file in the file system, then there are no
file-system dependencies.

> So if its a bad idea to write to disk we would have to keep the prefetch info
> in /proc, which will hog memory as more and more information is gathered, or
> we will lose our valuable information in between reboots.

Proc isn't a place. It's a pseudo filesystem which requests information
from kernel services. The output you see from "cat /proc/..." is
generated at the time you do the cat, which means it may not take ANY
memory, because it's information that the kernel service already has to
maintain anyway. At least, I THINK it works this way. :)

But in this case, it IS extra memory. Would there be a process-launch
penalty? Definitely. But it might be worth it for the user experience.
Furthermore, the priority setting could be asynchronous, where the
initial priority is a guess, and isn't set until the priority info has
been fetched.

The thing is, if a program hasn't been loaded yet, then it has to be
fetched from disk, and one more fetch won't hurt. Launching processes
involves LOTS of files being opened (shared libs, etc.). Furthermore,
if the program has ALREADY been run once, then both the program AND its
priority descriptor are probably already in the VM disk cache.

Lastly, the WRITING of priorities to disk would be done in a lazy manner.
They could be fed via a device node or /proc file to a daemon process
that consolidates them. Periodically, it would dump to disk. So,
launching and killing xterm 1000 times in one second would only result
in a single update if the daemon flushed once per second. Also, the
'flush' would mostly involve writing to memory cache, letting the file
system decide when it actually hits the platter.
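A sketch of the consolidating daemon's bookkeeping (all names are
invented): many updates for the same executable between flushes collapse
into one dirty entry, so a thousand xterm exits cost one write per flush
interval.

```c
/* Accumulated, not-yet-flushed samples for one executable. */
struct pending_update {
    unsigned long ino;
    long score_sum;   /* sum of interactivity samples since last flush */
    int nsamples;
    int dirty;
};

static int disk_writes; /* stand-in for the real I/O path */

/* Process exit: pure memory update, no I/O. */
static void record_exit(struct pending_update *u, int score)
{
    u->score_sum += score;
    u->nsamples++;
    u->dirty = 1;
}

/* Periodic flush: one write covers every sample accumulated so far. */
static void flush(struct pending_update *u)
{
    if (!u->dirty)
        return;
    disk_writes++;
    u->score_sum = 0;
    u->nsamples = 0;
    u->dirty = 0;
}
```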

2004-03-16 16:51:18

by Timothy Miller

Subject: Re: Scheduler: Process priority fed back to parent?



Muli Ben-Yehuda wrote:
> On Tue, Mar 16, 2004 at 11:19:46AM -0500, Timothy Miller wrote:
>
>
>>Unfortunately, the OS has to play babysitter to processes, because
>>they're guaranteed to misbehave. Preemption exists to ensure fairness
>>amongst processes. Thus, while you're right that it would be nice to
>>have processes report their CPU requirements, if we were to actually DO
>>that, it would be a disaster.
>
>
> I agree we should do the best thing possible without any prior
> knowledge of what a process should do. I don't agree we should add
> pointless complexity to the scheduler for dubious gains (getting rid
> of the very short ramp up time). Of course, if you think it's useful,
> feel free to implement it and let's resume the discussion when we have
> some numbers.


If shortening ramp-up times were the only benefit, then it wouldn't be worth
the effort. The objective is continuity and the overall feel of the system.

With Nick's and Con's schedulers, if you have a set of processes that
are running continuously, then after a brief period, everything has
settled into its ideal state. The interactive processes are
interactive, the CPU-bound ones are penalized, etc.

But real systems aren't like this. Processes are launched and killed
constantly. Consider what a shell script does! To have program
behavior from one time affect how the program is run at a later time
would allow system behavior to be smooth over many launch/kill cycles.


And having parent processes (eg. shell) maintain information on child
behavior would also help, because then a shell script would behave more
like a single program than many independent programs. Compiles would
affect the system less because 'make' would maintain information on
compiler behavior, making the impact of compiler launches much less
negative.

In FACT, I think I may be on to something here... Hmmm. So far,
process schedulers have been trying to BOOST the priority of interactive
processes. That's great. But maybe an even BIGGER problem is that
every time gcc (and cc1) is launched during a kernel compile, it's given
too high a priority, and by the time the priority is corrected, the
compile has finished for that .c file.

This sounds SO familiar... I know this has been discussed before by
people much smarter about this than I.

2004-03-16 18:52:46

by Horst H. von Brand

Subject: Re: Scheduler: Process priority fed back to parent?

Muli Ben-Yehuda <[email protected]> said:

[...]

> This is something that I've thought of doing in the past. The reason I
> didn't pursue it further is that it's impossible to get it right for
> all cases, and it attacks the problem in the wrong place. The kernel
> shouldn't need to guess(timate) what the process is going to do. The
> userspace programmer, who knows what his process is going to do,
> should tell the kernel.

People have been known to lie on occasion, particularly when it is to their
advantage...
--
Dr. Horst H. von Brand User #22616 counter.li.org
Departamento de Informatica Fono: +56 32 654431
Universidad Tecnica Federico Santa Maria +56 32 654239
Casilla 110-V, Valparaiso, Chile Fax: +56 32 797513

2004-03-16 19:26:35

by Eric Bambach

Subject: Re: Scheduler: Process priority fed back to parent?

On Tuesday 16 March 2004 10:46 am, Timothy Miller wrote:
> Eric wrote:
> > Sort of like what windows does with its prefetch stuff? Have a directory
> > that contains info about the best way to run a particular program and its
> > memory layouts/ disk accesses and patterns?
>
> I'm not familiar with that aspect of Windows, but... sure. :)

All I know is it does some sort of prefetch/caching of information to make
user processes load faster.
>
> >>This way, after gcc has run a few times, it'll be flagged as a CPU-bound
> >>process and every time it's run after that point, it is always run at an
> >>appropriate priority. Similarly, the first time xmms is run, its
> >>interactivity estimate won't be right, but after it's determined to be
> >>interactive, then the next time the program is launched, it STARTS with
> >>an appropriate priority: no ramp-up time.
> >
> > This sounds like a good idea, however im not too versed in all the
> > technical details. One thing I do see being a problem is do I really want
> > the kernel doing a disk-read/write everytime something forks or starts a
> > process? There would have to be some sort of cache.
>
> The kernel already does disk access to load a process... to load the
> executable. The cache of priorities would be structured well on disk,
> and the data in that cache would be cached in RAM like any other file
> data is cached by the VM.
>
> The kernel needs a process context to access files, so either there
> would be an artificial one which always exists for this, or the priority
> cache would be accessed in the context of the process being launched.
> It would be preferable to open the cache file once at boot time, so the
> former is probably best.
>
> > Also, is it a good idea for the kernel to be writing/reading from disk,
> > basing some of its decisions on disk files. Does this add filesystem
> > specific complexity? As far as I know the kernel, never keeps any
> > configuration on an actual hard disk. Everything is in /proc...etc...
> > Something nags at me that its a bad idea to have the kernel read/write
> > things it needs to run on a hard disk.
>
> I appreciate the problems, and the cost may be greater than the benefit.
> But if the cache is just a file in the file system, then there are no
> file-system dependencies.
True. Read on for my thoughts about cost vs. benefit. The biggest costs would
only be incurred the first couple of times a process is launched, at least the
cost of calculating what the heck the process is doing. After that it would
only be using already-gathered info.
> > So if its a bad idea to write to disk we would have to keep the prefetch
> > info in /proc, which will hog memory as more and more information is
> > gathered, or we will lose our valuable information in between reboots.
>
> Proc isn't a place. It's a pseudo filesystem which requests information
> from kernel services. The output you see from "cat /proc/..." is
> generated at the time you do the cat, which means it may not take ANY
> memory, because it's information that the kernel service already has to
> maintain anyway. At least, I THINK it works this way. :)

Interesting..... I learned something today.

> But in this case, it IS extra memory. Would there be a process-launch
> penalty? Definately. But it might be worth it for the user experience.
> Furthermore, the priority setting could be asynchronous, where the
> initial priority is a guess, and isn't set until the priority info has
> been fetched.

Well, on systems with a lot of memory this kind of activity might be a good
trade-off. I would gladly give up 1-10MB if it would guarantee faster/more
efficient process management. This prefetch activity could be turned on/off
from a /proc flag too, just in case. It would have to be compiled in anyway,
something like CONFIG_PREFETCH. Servers could turn off prefetch while
desktops could leave it on. This kind of thing would only improve
interactivity anyway, not affect long-running processes.

> The thing is, if a program hasn't been loaded yet, then it has to be
> fetched from disk, and one more fetch won't hurt. Launching processes
> involves LOTS of files being opened (shares libs, etc.). Furthermore,
> if the program has ALREADY been run once, the both the program AND its
> priority descriptor are probably already in the VM disk cache.

Your idea seems to deal mostly with process priority; what came to my mind
was something like a small struct or DB file describing the first 20-30 files
that a program uses when it first starts up. That way the
file is already in disk cache, or on its way, when the program requests it. We
all know the slowest piece of a computer is the hard drive, so if we can
predict, or even KNOW about, a program's file usage, we can get the
information ready before the process requests it.
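A sketch of such a per-program "first files" record (sizes, limits, and
names are all invented here): remember the first few distinct paths a
program opens, and stop monitoring once the list is full so the overhead
ends after the first runs.

```c
#include <string.h>

#define MAX_TRACKED 20   /* assumed cap on tracked startup files */
#define PATH_LEN   128

struct startup_files {
    char paths[MAX_TRACKED][PATH_LEN];
    int count;
    int done;            /* stop monitoring once the list is full */
};

/* Called on each open() during startup; duplicates are ignored. */
static void note_open(struct startup_files *sf, const char *path)
{
    int i;

    if (sf->done)
        return;
    for (i = 0; i < sf->count; i++)
        if (strcmp(sf->paths[i], path) == 0)
            return;      /* already recorded */
    strncpy(sf->paths[sf->count], path, PATH_LEN - 1);
    sf->paths[sf->count][PATH_LEN - 1] = '\0';
    if (++sf->count == MAX_TRACKED)
        sf->done = 1;    /* monitoring overhead ends here */
}
```

Next launch, a prefetch daemon could read this list and warm the disk
cache before the program asks for those files.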
Would monitoring such activity be too much overhead? Maybe there could be
some sort of way to stop monitoring after the first few seconds, or, if the
program's needs are too random, the db file/info/daemon would simply contain
something like "Don't monitor this process, it's too random!" or "We
already have all the info we need about it, just use what we have." That way
the monitoring overhead is only incurred up front.
Maybe the information could consist of the first 15-20 libraries (since which
libraries a program uses shouldn't change too much), and eventually, when we
have run the program enough, the prefetch file would know about some
configuration files and other things that the program always needs. That way
we can weed out random/temporary files and concentrate on what the program
needs consistently.

Please let me know if my ideas are off track or not feasible. I don't have
an enormous knowledge about the kernel, but I am willing to learn/be
corrected.
I think I am getting off topic, but oh well. I think both ideas would
generally have some benefit.
Can we even monitor opens/reads this way? Is it too much overhead, or does it
require breaking other syscalls?

> Lastly, the WRITING to the disk of priorities is done in a lazy manner.
> They could be fed via device node or /proc file to a daemon process
> that consolidates it. Periodically, it would dump to disk. So,
> launching and killing xterm 1000 times in one second would only result
> in a single update if the daemon flushed once per second. Also, the
> 'flush', would mostly involve writing to memory cache, letting the file
> system decide when it actually hits the platter.
Hmm, I like the daemon idea.

The process launching a thousand times was exactly the worst case scenario I
was thinking of.

Quoting a later message:
>In FACT, I think I may be on to something here... Hmmm. So far,
>process schedulers have been trying to BOOST priority of interactive
>processes. That's great. But maybe an even BIGGER problem is that
>every time gcc (and cc1) is launched during a kernel compile, it's given
>too high of a priority, and by the time the priority is corrected, the
>compile has finished for that .c file.

True. I believe both your idea about priority management and my idea about
disk prefetching would greatly reduce the performance hit in this scenario.

I believe your idea about priority management would be much simpler to
implement and would have greater general applications, while my idea is
solely for desktop interactivity where program usage can be a bit chaotic at
times.

I've been monitoring LKML for a time and cannot remember any specific
discussions about this issue, and I tried a quick Google and archive search,
but most of the information I recovered wasn't related.

See what you can dig up and keep us posted :)

-------------------------
Eric Bambach
Eric at cisu dot net
-------------------------

2004-03-16 21:22:08

by Timothy Miller

Subject: Re: Scheduler: Process priority fed back to parent?



Eric wrote:
> On Tuesday 16 March 2004 10:46 am, Timothy Miller wrote:
>
>>Eric wrote:
>>
>>> Sort of like what windows does with its prefetch stuff? Have a directory
>>>that contains info about the best way to run a particular program and its
>>>memory layouts/ disk accesses and patterns?
>>
>>I'm not familiar with that aspect of Windows, but... sure. :)
>
>
> All i know is it does some sort of prefetch/caching of information to make
> user processes load faster.

I'm not sure that this is quite the same. Mac OS X has a special cache
on disk of things that get loaded on boot. It's arranged near the
outer-most tracks of the disk, and it's stored linearly, so it can all
be read as efficiently as possible. This saves a lot of time.

This isn't what I'm talking about, however. This caching idea is
something that applies only to, for instance, loading system services,
and that would all be in user space.


>>I appreciate the problems, and the cost may be greater than the benefit.
>> But if the cache is just a file in the file system, then there are no
>>file-system dependencies.
>
> True. Read on for my thoughts about cost vs. benifit. The biggest costs would
> only be incurred the first couple times a process is launched, at least the
> cost of calculating what the heck the process is doing. After that it would
> only be using already gathered info.

Yes and no. We'd probably want to keep a running average or a life-time
average. We want to smooth out the bumps in program behavior over the
long-term, and we don't want unfortunate first impressions to cause
grudges. So, let's say that some program you install uses an inordinate
amount of CPU for the first few days of its life because it's doing all
sorts of initialization, but after that point, it behaves more sanely.
We want the initial impression of the program's behavior to decay over
time to better match its present behavior.

So, while the existing process scheduler estimates interactivity over
the lifetime of a process, which could be mere seconds, the
interactivity cache could estimate interactivity over a period of hours
or more.
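A sketch of that decaying long-term estimate (the decay factor here,
7/8 old to 1/8 new per observation, is an assumed tuning knob, and the
function name is invented): each new sample is blended in, so an early
misleading impression fades as fresh behavior accumulates.

```c
/* Fold one new observation into the long-term estimate.
 * Both values are interactivity scores in [0, 100]. */
static int decay_update(int estimate, int observed)
{
    return (7 * estimate + observed) / 8;
}
```

A program that looked purely CPU-bound during its first few days would
converge toward its true interactive behavior after enough samples.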


>>But in this case, it IS extra memory. Would there be a process-launch
>>penalty? Definately. But it might be worth it for the user experience.
>> Furthermore, the priority setting could be asynchronous, where the
>>initial priority is a guess, and isn't set until the priority info has
>>been fetched.
>
>
> Well on systems with alot of memory this kind of activity might be a good
> trade off. I would gladly give up 1-10MB if it would guarantee faster/more
> efficient process management. This prefetch activity could be turned on/off
> from a /proc flag too just in case. It would have to be compiled in anyways
> too, something like CONFIG_PREFETCH. Servers could turn off prefetch while
> desktops could leave it on. This kind of thing would only improve
> interactivity anyways, not affect long-running processes.

Let's not call it "prefetch", because that would be confusing. Let's
call it "CONFIG_INTERACTIVITY_HISTORY".

The cache would need to know as little as a number which indicates
interactivity history (or a short list), and either a filename or an
inode number.

Indeed, would it be possible to store this number as metadata in the
file's inode? That would be filesystem-dependent, but it would also
have zero cost. This would only be a problem for read-only file
systems, and those are common enough that it would warrant a separate
cache. Furthermore, in some cases, the cache could exist only for the
uptime, so if you reboot, you lose all the history, but your compiles
would only misbehave briefly before the system learned the behavior of cc1.

>
>
>>The thing is, if a program hasn't been loaded yet, then it has to be
>>fetched from disk, and one more fetch won't hurt. Launching processes
>>involves LOTS of files being opened (shares libs, etc.). Furthermore,
>>if the program has ALREADY been run once, the both the program AND its
>>priority descriptor are probably already in the VM disk cache.
>
>
> Your idea seems to deal mostly with process priority, what came to my mind
> was something like a small struct or DB file describing the first 20-30 files
> or something that that a program uses when it first starts up. That way the
> file is already in disk cache or on its way when the program requests it. We
> all know the slowest piece of a computer is the hard drive, so anything we
> can do to predict, or even KNOW about its file usage we can get the
> information ready before the process requests it.

This is a wonderful idea, but it's completely separate. This is
something that could be done almost completely in user space.

A few more things about this... we'd need a new filesystem which
maintained this cache in a section of the disk where it could keep
things linear, and secondly, the anticipatory I/O scheduler does a
wonderful job of reordering reads in a VERY efficient order.

> Would monitoring such activity be too much overhead? Maybe there could be
> some sort of way to stop monitoring after the first few seconds or if the
> program's needs are too random, the db file/info/daemon would simply contain
> the something like "Dont monitor this proccess its too random!" or "We
> already have all the info we need about it, just use what we have" That way
> after the overhead is incurred

Boot time would likely benefit most from this. And even if it helped a
lot over AS, it would simply come down to someone replacing the
user-space mechanism that starts system services.

> Maybe the information could consist of the first 15-20 libraries (since which
> libraries a program uses shouldn't change too much) and eventually when we
> have run the program enough the prefetch file would know about some
> configuration files and other things that the program always needs. That way
> we can weed out random/temporary files and concentrate on what the program
> needs consistently.

glibc is the biggest issue, but it's loaded into memory so early that I
don't think we could help anything by caching it.

>
> Please, let me know if my ideas are off track or not feasible, I don't have
> an enormous knowledge about the kernel, but I am willing to learn/be
> corrected.
> I think I am getting off topic, but oh well. I think both ideas would
> generally have some benifit.
> Can we even monitor open/read's this way? Is it too much overhead or require
> breaking other syscalls?

For what I've been talking about, we'd want to hide it as much as
possible, except from the scheduler and the daemon that maintains the
history.

>
>
>>Lastly, the WRITING to the disk of priorities is done in a lazy manner.
>> They could be fed via device node or /proc file to a daemon process
>>that consolidates it. Periodically, it would dump to disk. So,
>>launching and killing xterm 1000 times in one second would only result
>>in a single update if the daemon flushed once per second. Also, the
>>'flush', would mostly involve writing to memory cache, letting the file
>>system decide when it actually hits the platter.
>
> Hmm, i Like the daemon idea.
>
> The process launching a thousand times was exactly the worst case scenario I
> was thinking of.
>
> Quoting a later message:
>
>>In FACT, that I think I may be on to something here... Hmmm. So far,
>>process schedulers have been trying to BOOST priority of interactive
>>processes. That's great. But maybe an even BIGGER problem is that
>>every time gcc (and cc1) is launched during a kernel compile, it's given
>>too high of a priority, and by the time the priority is corrected, the
>>compile has finished for that .c file.
>
>
> True. I believe both your ideas about priority management and my idea about
> disk prefetching would greatly improve the performance hit in this scenario.

As I say, most of the time, this sort of caching is done only as a way
to pacify users who want the OS to boot faster. On boot, so many
things are being loaded at the same time that it can really be helpful.
But when loading a single app, caching won't help much.

Plus, there are other things that would speed up Linux booting MORE.
For instance, if multiple independent system services were launched
simultaneously, that combined with AS would result in a significant
speedup in boot time.

>
> I believe your idea about priority management would be much simpler to
> implement and would have greater general applications, while my idea is
> solely for desktop interactivity where program usage can be a bit chaotic at
> times.

The first time gcc is started during a compile, it has to be loaded from
disk, which is always going to be time-consuming. But while that disk
access is going on, other processes run, so interactivity isn't
affected. Caching of the executable would only save a few milliseconds.
Subsequent launches of gcc would all come from memory cache. Only
THEN does gcc start to affect interactivity, and this is all about
process scheduling.


2004-03-16 23:07:09

by Eric Bambach

[permalink] [raw]
Subject: Re: Scheduler: Process priority fed back to parent?

On Tuesday 16 March 2004 3:35 pm, you wrote:
> Eric wrote:
> > On Tuesday 16 March 2004 10:46 am, Timothy Miller wrote:

> This isn't what I'm talking about, however. This caching idea is
> something that applies only to, for instance, loading system services,
> and that would all be in user space.

Yeah, I agree; it sounds more like userspace.

> >>I appreciate the problems, and the cost may be greater than the benefit.
> >> But if the cache is just a file in the file system, then there are no
> >>file-system dependencies.
> >
> > True. Read on for my thoughts about cost vs. benefit. The biggest costs
> > would only be incurred the first couple times a process is launched, at
> > least the cost of calculating what the heck the process is doing. After
> > that it would only be using already gathered info.
>
> Yes and no. We'd probably want to keep a running average or a life-time
> average. We want to smooth out the bumps in program behavior over the
> long-term, and we don't want unfortunate first-impressions to cause
> grudges. So, let's say that some program you install uses an inordinate
> amount of CPU for the first few days of its life because it's doing all
> sorts of initialization, but after that point, it behaves more sanely.
> We want the initial impression of the program's behavior to decay over
> time to better match its present behavior.

> So, while the existing process scheduler estimates interactivity over
> the lifetime of a process, which could be mere seconds, the
> interactivity cache could estimate interactivity over a period of hours
> or more.
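The decay described above is essentially an exponentially decaying average.
A minimal integer sketch (the 1/8 weight is an arbitrary choice of mine, and
plain integer math stalls a few counts short of the target unless the average
is kept in fixed point):

```c
/* Decaying interactivity average: each new sample pulls the long-term
 * average 1/8 of the way toward itself, so old first impressions fade.
 * Note: with plain integers the average stops within ALPHA_DEN-1 of the
 * target; a real implementation would keep it scaled (fixed point). */
#define ALPHA_NUM 1
#define ALPHA_DEN 8

/* avg and sample are interactivity scores, e.g. 0..100 */
int decay_avg(int avg, int sample)
{
    return avg + (sample - avg) * ALPHA_NUM / ALPHA_DEN;
}
```

Feed it one sample per run (or per hour of runtime) and the installer-week
behavior decays away on its own.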

> >>But in this case, it IS extra memory. Would there be a process-launch
> >>penalty? Definitely. But it might be worth it for the user experience.
> >> Furthermore, the priority setting could be asynchronous, where the
> >>initial priority is a guess, and isn't set until the priority info has
> >>been fetched.
> >
> > Well, on systems with a lot of memory this kind of activity might be a good
> > trade off. I would gladly give up 1-10MB if it would guarantee
> > faster/more efficient process management. This prefetch activity could be
> > turned on/off from a /proc flag too just in case. It would have to be
> > compiled in anyways too, something like CONFIG_PREFETCH. Servers could
> > turn off prefetch while desktops could leave it on. This kind of thing
> > would only improve interactivity anyways, not affect long-running
> > processes.
>
> Let's not call it "prefetch", because that would be confusing. Let's
> call it "CONFIG_INTERACTIVITY_HISTORY".
>
> The cache would need to store as little as a single number indicating
> interactivity history (or a short list), and either a filename or an
> inode number.

If you want to track history you will definitely need a short list. A
filename should be enough to associate a program with its interactivity.

> Indeed, would it be possible to store this number as metadata in the
> file's inode? That would be filesystem-dependent, but it would also
> have zero cost. This would only be a problem for read-only file
> systems, and those are common enough that it would warrant a separate
> cache. Furthermore, in some cases, the cache could exist only for the
> uptime, so if you reboot, you lose all the history, but your compiles
> would only misbehave briefly before the system learned the behavior of cc1.

Why not just sacrifice a MB or two (maybe compile-time configurable) for a DB
to store the info? Then you don't even have to write it to the filesystem.
For those of us with enough memory or a large variety of programs, you can
set it as high as 2-3MB, while low-memory systems can disable it or set it as
small as a few K. If you only need a small string and a few ints to track
this sort of information, a few K should be more than enough for hundreds of
program entries. This entirely eliminates the filesystem dependency and fixes
it for read-only systems.
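Back-of-the-envelope support for the "few K" claim, assuming one small string
plus a few ints per program (the entry layout here is invented purely for
sizing, not a proposed format):

```c
/* One DB record: a name plus a few ints of interactivity state. */
struct db_entry {
    char name[64];     /* executable name */
    int  interactivity;
    int  samples;
    int  last_used;    /* tick for the replacement policy */
    int  flags;
};

/* 256 entries at roughly 80 bytes each is about 20 KB, so a few tens of K
 * really does cover hundreds of programs. */
enum { DB_ENTRIES = 256 };
static struct db_entry interactivity_db[DB_ENTRIES];
```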

> >>The thing is, if a program hasn't been loaded yet, then it has to be
> >>fetched from disk, and one more fetch won't hurt. Launching processes
> >>involves LOTS of files being opened (shared libs, etc.). Furthermore,
> >>if the program has ALREADY been run once, then both the program AND its
> >>priority descriptor are probably already in the VM disk cache.
> >
> > Your idea seems to deal mostly with process priority, what came to my
> > mind was something like a small struct or DB file describing the first
> > 20-30 files or something that a program uses when it first starts
> > up. That way the file is already in disk cache or on its way when the
> > program requests it. We all know the slowest piece of a computer is the
> > hard drive, so anything we can do to predict, or even KNOW about its file
> > usage we can get the information ready before the process requests it.
>
> This is a wonderful idea, but it's completely separate. This is
> something that could be done almost completely in user space.

Yes. Good point. This sounds like a job for some sort of launching daemon.

> A few more things about this... we'd need a new filesystem which
> maintained this cache in a section of the disk where it could keep
> things linear, and secondly, the anticipatory I/O scheduler does a
> wonderful job of reordering reads in a VERY efficient order.

Why not just put it in memory? What kind of space usage do you see? I see it
only requiring a few hundred K to be noticeably efficient and a few MB to
utilize completely. With this approach you would have a set limit on the
number of programs you could store information about, and you would need to
write some sort of replacement policy for it, but it would eliminate the
filesystem dependency. The file/inode approach, on the other hand, lets you
record information about each and every program, but it requires support for
each filesystem and might break in the future, whereas a
filesystem-independent approach would be a lot less work to write and
maintain.
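The replacement policy could be as simple as evicting the least-recently-used
slot of a fixed-size table. A rough user-space sketch (slot count and fields
are arbitrary assumptions):

```c
#include <stdio.h>
#include <string.h>

#define N 128

struct entry {
    char name[64];
    int  score;          /* interactivity estimate */
    unsigned last_used;  /* logical clock for LRU */
};

static struct entry cache[N];
static unsigned clock_tick;

/* Find an entry, or recycle the least-recently-used slot for a new one. */
struct entry *cache_get(const char *name)
{
    int i, victim = 0;

    clock_tick++;
    for (i = 0; i < N; i++) {
        if (strcmp(cache[i].name, name) == 0) {
            cache[i].last_used = clock_tick;
            return &cache[i];
        }
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;
    }
    /* miss: evict the oldest slot and start this program fresh */
    snprintf(cache[victim].name, sizeof(cache[victim].name), "%s", name);
    cache[victim].score = 0;
    cache[victim].last_used = clock_tick;
    return &cache[victim];
}
```

A rarely-run program falls out of the table and just starts over with no
history, which is the same situation as today.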

If the AS is as good as you say it is, then my idea is pretty much moot,
while yours is a good addition to it.

> For what I've been talking about, we'd want to hide it as much as
> possible, except from the scheduler and the daemon that maintains the
> history.

That's why I suggest you keep it filesystem independent by keeping the DB in
memory and touching the filesystem as little as possible.

> > The process launching a thousand times was exactly the worst case
> > scenario I was thinking of.
> >
> > Quoting a later message:
> >>In FACT, I think I may be on to something here... Hmmm. So far,
> >>process schedulers have been trying to BOOST priority of interactive
> >>processes. That's great. But maybe an even BIGGER problem is that
> >>every time gcc (and cc1) is launched during a kernel compile, it's given
> >>too high of a priority, and by the time the priority is corrected, the
> >>compile has finished for that .c file.
> >
> > True. I believe both your ideas about priority management and my idea
> > about disk prefetching would greatly reduce the performance hit in this
> > scenario.

Erm... I just remembered that the libraries should already be in the disk
cache if you have enough memory... My idea is pretty much made moot by the
excellent AS and disk cache handler. Doh...

> As I say, most of the time, this sort of caching is done only as a way
to pacify users who want the OS to boot faster. On boot, so many
> things are being loaded at the same time that it can really be helpful.
> But when loading a single app, caching won't help much.

Yeah, I never intended it to be for boot though. There are plenty of
implementations that work around the inherent problems of the SysV init
system. So if someone is *really* interested in speeding up boot times, they
should be using a different scheme anyway.

> The first time gcc is started during a compile, it has to be loaded from
> disk, which is always going to be time-consuming. But while that disk
> access is going on, other processes run, so interactivity isn't
> affected. Caching of the executable would only save a few milliseconds.
> Subsequent launches of gcc would all come from memory cache. Only
> THEN does gcc start to affect interactivity, and this is all about
> process scheduling.

Ok. I hear where you are coming from. Thanks for the information. This
discussion has been enlightening.

-------------------------
Eric Bambach
Eric at cisu dot net
-------------------------

2004-03-18 02:56:02

by David Schwartz

[permalink] [raw]
Subject: RE: Scheduler: Process priority fed back to parent?


> > I'm not familiar with that aspect of Windows, but... sure. :)
>
> All I know is that it does some sort of prefetch/caching of
> information to make
> user processes load faster.

My recollection is that Windows, rather than letting a program and the
libraries it relies upon simply fault in, keeps track of which pages from
which files it needed within its first few seconds of execution on a previous
run and pre-loads/maps them.

If the startup access pattern of the program is quite random over multiple
files, this can save quite a bit of time, because the pages can be loaded in
a logical/sequential fashion. Otherwise, the operating system must load them
in the order they fault, because it has no idea which page will be needed
next: each fault blocks the process before it is known where the next fault
will be.

Theoretically, one might expect Linux to benefit as well from such a
prefetch/preload mechanism. It would be interesting to code one as a quick
hack and see what kind of difference it makes. My guess is that Linux is
called upon to start behemoth programs much less often than Windows is.

I also expect that the access patterns of Linux programs tend to be a bit
more linear than on Windows. If ordinary sequential readahead already avoids
most of the blocking faults, there would be much less to gain.

On Windows 98, though, it made an enormous difference. The first add-on
programs to provide this type of optimization were amazing, especially when
you look at launch times for programs like Netscape that are large and span
multiple files.

DS


2004-03-25 14:25:37

by Pavel Machek

[permalink] [raw]
Subject: Re: Scheduler: Process priority fed back to parent?

On Tue 16-03-04 14:49:41, Horst von Brand wrote:
> Muli Ben-Yehuda <[email protected]> said:
>
> [...]
>
> > This is something that I've thought of doing in the past. The reason I
> > didn't pursue it further is that it's impossible to get it right for
> > all cases, and it attacks the problem in the wrong place. The kernel
> > shouldn't need to guess(timate) what the process is going to do. The
> > userspace programmer, who knows what his process is going to do,
> > should tell the kernel.
>
> People have been known to lie on occasion, particularly when it is to their
> advantage...

You could heavily penalize a process that lied... or perhaps its user.

--
64 bytes from 195.113.31.123: icmp_seq=28 ttl=51 time=448769.1 ms