2000-11-05 12:55:04

by bobyetman

Subject: Loadavg calculation


I'm working on a project at work that uses Linux to run some very
math-intensive calculations. One of the things we do is use the 1-minute
loadavg to determine how busy the machine is and whether we can fire off
another program to do more calculations. However, there's a problem with
that.

Because it's a 1-minute load average, there's quite a bit of lag time from
when one program finishes until the loadavg drops below the threshold that
lets our control mechanism fire off another program.

Let me give an example (all on a 1-CPU PC):

HH:MM:SS
00:00:00 fire off 4 programs
00:01:00 loadavg goes up to 4
00:01:30 3 of the 4 programs finish; loadavg still at 4
00:02:20 loadavg goes down to 1, below our threshold
00:02:21 we fire off 3 more programs.

We'd like to reduce that almost 50-second lag time. Is it possible, in
user-space, to duplicate the loadavg calculation with a shorter period,
say a 15-second load average, using the information in /proc?

The other option we looked at, besides using loadavg, was using idle pct%,
but if I read the source for top right, that involves reading the entire
process table to calculate the clock ticks used and then figuring out how
many weren't.

Ideas and opinions welcome. Yes, I read the list, so either respond
directly to me, or to the list.

[email protected] (Robert A. Yetman)


2000-11-05 13:08:14

by bert hubert

Subject: Re: Loadavg calculation

On Sun, Nov 05, 2000 at 07:55:40AM -0500, [email protected] wrote:

> The other option we looked at, besides using loadavg, was using idle pct%,
> but if I read the source for top right, that involves reading the entire
> process table to calculate the clock ticks used and then figuring out how
> many weren't.

Snoop the source of vmstat; it measures the number of programs in the run
queue, which is what the load average is calculated from. You do need to
average this data for it to be useful.
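
A sketch of that averaging, done entirely in user space. It assumes
(rather than confirms) that the fourth field of /proc/loadavg
("running/total", present on 2.4-era kernels) is a usable run-queue
sample, and uses a 15-second time constant to match the figure asked
about above. Compile with -lm:

/* Hedged sketch: ~15-second exponentially decayed load average,
 * sampled once per second from the "running/total" field that
 * 2.4-era kernels expose as the 4th field of /proc/loadavg. */
#include <stdio.h>
#include <math.h>
#include <unistd.h>

int main(void)
{
    const double decay = exp(-1.0 / 15.0);  /* 1 s samples, 15 s constant */
    double load = 0.0;

    for (;;) {
        FILE *f = fopen("/proc/loadavg", "r");
        int running, total;

        if (!f)
            return 1;
        /* skip the three kernel averages, read "running/total" */
        if (fscanf(f, "%*f %*f %*f %d/%d", &running, &total) != 2) {
            fclose(f);
            return 1;
        }
        fclose(f);

        running -= 1;   /* this sampler counts itself while reading */
        load = load * decay + (double)running * (1.0 - decay);
        printf("15s loadavg: %.2f\n", load);
        sleep(1);
    }
}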

You might also follow /proc/uptime, which gives the time the system has
been up and the time spent in the idle task (both in seconds, at 10 ms
resolution). If you take the derivative of the idle time, you get the
percentage of idle time.
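
For the /proc/uptime route, a minimal sketch, assuming the file reports
"<uptime> <idle>" as two second-valued numbers (which is what 2.2/2.4
kernels print):

/* Idle percentage as the derivative of the idle figure in /proc/uptime,
 * measured over a 5-second window.  On SMP the idle figure may aggregate
 * several idle tasks, so scale by the CPU count if needed. */
#include <stdio.h>
#include <unistd.h>

static int read_uptime(double *up, double *idle)
{
    FILE *f = fopen("/proc/uptime", "r");

    if (!f)
        return -1;
    if (fscanf(f, "%lf %lf", up, idle) != 2) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    double up0, idle0, up1, idle1;

    for (;;) {
        if (read_uptime(&up0, &idle0))
            return 1;
        sleep(5);                    /* 5 s measurement window */
        if (read_uptime(&up1, &idle1))
            return 1;
        printf("idle: %5.1f%%\n", 100.0 * (idle1 - idle0) / (up1 - up0));
    }
}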

You might also try to measure directly how much time is spent in the idle
task (PID 0).

Regards,

bert hubert

--
PowerDNS Versatile DNS Services
Trilab The Technology People
'SYN! .. SYN|ACK! .. ACK!' - the mating call of the internet

2000-11-05 13:58:54

by Andi Kleen

Subject: Re: Loadavg calculation

On Sun, Nov 05, 2000 at 07:55:40AM -0500, [email protected] wrote:
>
> We'd like to reduce that almost 50-second lag time. Is it possible, in
> user-space, to duplicate the loadavg calculation with a shorter period,
> say a 15-second load average, using the information in /proc?

You could simply recompile the kernel with a smaller LOAD_FREQ constant.
It defines how often the average computation runs. The unit is jiffies,
which is 10ms on i386.
It may break some other programs though.
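
For reference, the relevant definitions in 2.4-era include/linux/sched.h
look approximately like this; note that the EXP_* constants are
precomputed for 5-second sampling, so shrinking LOAD_FREQ alone also
shortens the nominal 1/5/15-minute windows:

/* Fixed-point (11-bit fraction) exponentially decayed averages,
 * recomputed every LOAD_FREQ jiffies. */
#define FSHIFT   11               /* nr of bits of precision */
#define FIXED_1  (1 << FSHIFT)    /* 1.0 as fixed-point */
#define LOAD_FREQ (5*HZ)          /* 5 sec intervals */
#define EXP_1  1884               /* 1/exp(5sec/1min) as fixed-point */
#define EXP_5  2014               /* 1/exp(5sec/5min) */
#define EXP_15 2037               /* 1/exp(5sec/15min) */

#define CALC_LOAD(load, exp, n)       \
        load *= exp;                  \
        load += n*(FIXED_1 - exp);    \
        load >>= FSHIFT;

An alternative to shrinking LOAD_FREQ would be keeping the 5-second
sampling and adding a 15-second window: the constant works out to
FIXED_1/exp(5/15) = 2048/1.396, roughly 1467.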

-Andi

2000-11-05 20:25:26

by Albert D. Cahalan

Subject: Re: Loadavg calculation

> The other option we looked at, besides using loadavg, was using idle pct%,
> but if I read the source for top right, that involves reading the entire
> process table to calculate the clock ticks used and then figuring out how
> many weren't.

The old "top" code did that; it was a bug. Get some newer code:
http://www.cs.uml.edu/~acahalan/procps/

2000-11-06 08:22:33

by Sean Hunter

Subject: Re: Loadavg calculation

Sorry, I know this is a little left-field, but how about redesigning your
process so that instead of using a load_avg, you start all your calculations
from a single server on each node? It could queue up incoming calculations,
and fork a child to do each one.

Of course, it would catch a signal when a child died, so you'd immediately
know when to start up another calculation. If you liked, it could check the
one-minute loadavg from time to time to see what a friendly overall level
of calculations would be, and adjust the number of concurrent child
processes accordingly.

The timing, however, would still come from a signal, and would thus be
instantaneous.
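
A sketch of that design, with a blocking wait() standing in for the
SIGCHLD handler (same timing: it returns the moment a child exits). The
"./calc" worker binary and the job count are hypothetical placeholders:

/* Keep MAX_JOBS calculations running; start the next one as soon as
 * any child exits.  No load average involved at all. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define MAX_JOBS 4

static pid_t spawn(const char *cmd)
{
    pid_t pid = fork();

    if (pid == 0) {
        execlp(cmd, cmd, (char *)NULL);
        _exit(127);              /* exec failed */
    }
    return pid;                  /* -1 if fork failed */
}

int main(void)
{
    int running = 0;

    for (;;) {
        while (running < MAX_JOBS && spawn("./calc") > 0)
            running++;           /* top up the pool */
        if (wait(NULL) > 0)      /* blocks until a child exits */
            running--;
    }
}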

Or am I being totally dumb?

Sean

On Sun, Nov 05, 2000 at 07:55:40AM -0500, [email protected] wrote:
>
> I'm working on a project at work that uses Linux to run some very
> math-intensive calculations. One of the things we do is use the 1-minute
> loadavg to determine how busy the machine is and whether we can fire off
> another program to do more calculations. However, there's a problem with
> that.
>
> Because it's a 1-minute load average, there's quite a bit of lag time from
> when one program finishes until the loadavg drops below the threshold that
> lets our control mechanism fire off another program.
>
> Let me give an example (all on a 1-CPU PC):
>
> HH:MM:SS
> 00:00:00 fire off 4 programs
> 00:01:00 loadavg goes up to 4
> 00:01:30 3 of the 4 programs finish; loadavg still at 4
> 00:02:20 loadavg goes down to 1, below our threshold
> 00:02:21 we fire off 3 more programs.
>
> We'd like to reduce that almost 50-second lag time. Is it possible, in
> user-space, to duplicate the loadavg calculation with a shorter period,
> say a 15-second load average, using the information in /proc?
>
> The other option we looked at, besides using loadavg, was using idle pct%,
> but if I read the source for top right, that involves reading the entire
> process table to calculate the clock ticks used and then figuring out how
> many weren't.
>
> Ideas and opinions welcome. Yes, I read the list, so either respond
> directly to me, or to the list.
>
> [email protected] (Robert A. Yetman)
>

2000-11-06 23:24:00

by Nathan Scott

Subject: Re: Loadavg calculation

hi,

As you've suggested, you'd be better off not using the load
average but rather some other measure (or combination of
measures) to figure out when you have enough spare cycles or
bandwidth.

The "pmie" tool might be useful to you - here's a contrived
example I just knocked up (instead of a "print" you'd want
to run your program via the "shell" keyword) with an
occassional artificial load in the background.

kernel.all.cpu.idle is aggregate idle time across all cpus.
pmie converts it to a rate (idle milliseconds per elapsed
millisecond over the 8-second sample interval), so on a
single CPU it will always have a value between 0 (no idle
time) and 1 (all idle time).


$ pmie -t 8sec -v
( kernel.all.cpu.idle > 0.5 ) -> print "start a new job";
^D
expr_1: ?

Tue Nov 7 09:33:36 2000: start a new job
expr_1: true

Tue Nov 7 09:33:44 2000: start a new job
expr_1: true

Tue Nov 7 09:33:52 2000: start a new job
expr_1: true

expr_1: false

expr_1: false

expr_1: false

Tue Nov 7 09:34:24 2000: start a new job
expr_1: true

Tue Nov 7 09:34:32 2000: start a new job
expr_1: true

expr_1: false

expr_1: false

Tue Nov 7 09:34:56 2000: start a new job
expr_1: true


pmie is one of the GPL'd PCP tools, which you can get from
the SGI OSS site... hope it's useful to you. Mail the PCP
list if you need any more info.

cheers.


[email protected] wrote:
>
> I'm working on a project at work that uses Linux to run some very
> math-intensive calculations. One of the things we do is use the 1-minute
> loadavg to determine how busy the machine is and whether we can fire off
> another program to do more calculations. However, there's a problem with
> that.
>
> Because it's a 1-minute load average, there's quite a bit of lag time from
> when one program finishes until the loadavg drops below the threshold that
> lets our control mechanism fire off another program.
>
> Let me give an example (all on a 1-CPU PC):
>
> HH:MM:SS
> 00:00:00 fire off 4 programs
> 00:01:00 loadavg goes up to 4
> 00:01:30 3 of the 4 programs finish; loadavg still at 4
> 00:02:20 loadavg goes down to 1, below our threshold
> 00:02:21 we fire off 3 more programs.
>
> We'd like to reduce that almost 50-second lag time. Is it possible, in
> user-space, to duplicate the loadavg calculation with a shorter period,
> say a 15-second load average, using the information in /proc?
>
> The other option we looked at, besides using loadavg, was using idle pct%,
> but if I read the source for top right, that involves reading the entire
> process table to calculate the clock ticks used and then figuring out how
> many weren't.
>
> Ideas and opinions welcome. Yes, I read the list, so either respond
> directly to me, or to the list.
>
> [email protected] (Robert A. Yetman)
>

--
Nathan


2000-11-10 21:29:16

by Pavel Machek

Subject: Re: Loadavg calculation

Hi!

> > I'm working on a project at work that uses Linux to run some very
> > math-intensive calculations. One of the things we do is use the 1-minute
> > loadavg to determine how busy the machine is and whether we can fire off
> > another program to do more calculations. However, there's a problem with
> > that.
> >
> > Because it's a 1-minute load average, there's quite a bit of lag time from
> > when one program finishes until the loadavg drops below the threshold that
> > lets our control mechanism fire off another program.
> >
> > Let me give an example (all on a 1-CPU PC):
> >
> > HH:MM:SS
> > 00:00:00 fire off 4 programs
> > 00:01:00 loadavg goes up to 4
> > 00:01:30 3 of the 4 programs finish; loadavg still at 4
> > 00:02:20 loadavg goes down to 1, below our threshold
> > 00:02:21 we fire off 3 more programs.
> >
> > We'd like to reduce that almost 50-second lag time. Is it possible, in
> > user-space, to duplicate the loadavg calculation with a shorter period,
> > say a 15-second load average, using the information in /proc?

pavel@bug:~$ cat /proc/loadavg
0.00 0.00 0.00 2/46 395
pavel@bug:~$

The three 0.00 values are the load average computed over different time
windows (1, 5, and 15 minutes). Select the right one and you are done.
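
For completeness, a minimal sketch of reading those fields; the remaining
two fields are running/total processes and the last PID handed out:

/* Parse the three decayed averages out of /proc/loadavg. */
#include <stdio.h>

int main(void)
{
    double avg1, avg5, avg15;
    FILE *f = fopen("/proc/loadavg", "r");

    if (!f)
        return 1;
    if (fscanf(f, "%lf %lf %lf", &avg1, &avg5, &avg15) != 3) {
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("1min=%.2f 5min=%.2f 15min=%.2f\n", avg1, avg5, avg15);
    return 0;
}
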
Pavel
--
I'm [email protected]. "In my country we have almost anarchy and I don't care."
Panos Katsaloulis describing me w.r.t. patents at [email protected]