O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
desktops. It appears possible to do this without moving to nanosecond
resolution.
This one makes a massive difference... Please test this to death.
Changes:
The big change is in the way sleep_avg is incremented. Any amount of sleep
will now raise you by at least one priority with each wakeup. This causes
massive differences to startup time, extremely rapid conversion to interactive
state, and rapid recovery from non-interactive state as well (prevents X
stalling after thrashing around under high loads for many seconds).
The sleep buffer was dropped to just 10ms. This has the effect of causing mild
round robinning of very interactive tasks if they run for more than 10ms. The
requeuing test was changed from unlikely() to an ordinary if branch, as it
will be hit much more often now.
The MAX_BONUS #define was made easier to understand.
Idle tasks were made slightly less interactive to prevent cpu hogs from
becoming interactive on their very first wakeup.
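For concreteness, here is roughly what the changed constants work out to.
This is illustrative arithmetic only: it assumes HZ=1000 and the stock 2.6
values PRIO_BONUS_RATIO=25 and INTERACTIVE_DELTA=2, which a patched tree may
have changed.

/* Assumed values: HZ=1000, PRIO_BONUS_RATIO=25, INTERACTIVE_DELTA=2 */
#define SLEEP_BUFFER (HZ/100)                   /* = 10 ticks = 10ms, down from 50ms */
#define MAX_BONUS (40 * PRIO_BONUS_RATIO / 100) /* = 10 priority levels of bonus */
/*
 * Idle-task change: a task that slept longer than MIN_SLEEP_AVG now wakes
 * with sleep_avg = MIN_SLEEP_AVG * (10 - 2 - 2) / 10 = 6/10 of MIN_SLEEP_AVG
 * instead of 7/10, i.e. one bonus level less interactive on first wakeup.
 */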
Con
This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be found
here:
http://kernel.kolivas.org/2.5
and here:
--- linux-2.6.0-test1-mm1/kernel/sched.c 2003-07-16 20:27:32.000000000 +1000
+++ linux-2.6.0-testck1/kernel/sched.c 2003-07-17 00:13:24.000000000 +1000
@@ -76,9 +76,9 @@
#define MIN_SLEEP_AVG (HZ)
#define MAX_SLEEP_AVG (10*HZ)
#define STARVATION_LIMIT (10*HZ)
-#define SLEEP_BUFFER (HZ/20)
+#define SLEEP_BUFFER (HZ/100)
#define NODE_THRESHOLD 125
-#define MAX_BONUS ((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
+#define MAX_BONUS (40 * PRIO_BONUS_RATIO / 100)
/*
* If a task is 'interactive' then we reinsert it in the active
@@ -399,7 +399,7 @@ static inline void activate_task(task_t
*/
if (sleep_time > MIN_SLEEP_AVG){
p->avg_start = jiffies - MIN_SLEEP_AVG;
- p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
+ p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
MAX_BONUS;
} else {
/*
@@ -413,14 +413,10 @@ static inline void activate_task(task_t
p->sleep_avg += sleep_time;
/*
- * Give a bonus to tasks that wake early on to prevent
- * the problem of the denominator in the bonus equation
- * from continually getting larger.
+ * Processes that sleep get pushed to a higher priority
+ * each time they sleep
*/
- if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
- p->sleep_avg += (runtime - p->sleep_avg) *
- (MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
- (MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
+ p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
/*
* Keep a small buffer of SLEEP_BUFFER sleep_avg to
@@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int
enqueue_task(p, rq->expired);
} else
enqueue_task(p, rq->active);
- } else if (unlikely(p->prio < effective_prio(p))){
+ } else if (p->prio < effective_prio(p)){
/*
* Tasks that have lowered their priority are put to the end
* of the active array with their remaining timeslice
On Wed, 2003-07-16 at 16:30, Con Kolivas wrote:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
Oh, my god... This is nearly perfect! :-)
On 2.6.0-test1-mm1 with o6int.patch, I can't reproduce XMMS initial
starvation anymore and X feels smoother under heavy load.
Nice... ;-)
On Thu, 17 Jul 2003 00:30:25 +1000, Con Kolivas said:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
This one looks *awesome* here - the base -mm1 version (which was -O5int if I
remember right) was still subject to very tiny stutters (sound like "clicks")
in xmms (everybody's favorite tester ;) under some conditions (changing folders
in the Exmh mail client was usually good for a click). -O6int has stuttered
exactly once in the past hour, and that was with Exmh going, a large grep
running, a sudden influx of fetchmail/sendmail/procmail (probably 30-50 fork/
exec pairs/sec there), launching Mozilla (oink ;), and something else big all at the
same time (in other words, under as extreme a load as this laptop ever sees
in actual production usage).
/Valdis
On Wednesday 16 July 2003 17:22, Felipe Alfaro Solana wrote:
Hi Con,
> > This one makes a massive difference... Please test this to death.
> Oh, my god... This is nearly perfect! :-)
> On 2.6.0-test1-mm1 with o6int.patch, I can't reproduce XMMS initial
> starvation anymore and X feels smoother under heavy load.
> Nice... ;-)
hmm, I really wonder why I don't see any difference on my box.
1. "make -j2 bzImage modules" slows down my box a lot.
2. kmail is slow as a dog while make -j2 runs.
3. xterm needs ~5 seconds to open up while make -j2 runs.
4. xmms does not skip
5. I've tried Felipe's suggestions, they are:
#define PRIO_BONUS_RATIO 45
#define INTERACTIVE_DELTA 4
#define MAX_SLEEP_AVG (HZ)
#define STARVATION_LIMIT (HZ)
At least with these changes kmail is much, much faster, but still not
as fast as without the compilation. Xterm needs ~5 seconds to open up.
6. Xterm: "ls -lsa" in a directory with ~1200 files
2.6.0-test1-mm1 + O6int:
------------------------
real 0m12.468s
user 0m0.170s
sys 0m0.057s
2.4.20-wolk4.4: O(1) from latest -aa tree
-----------------------------------------
real 0m0.689s
user 0m0.031s
sys 0m0.011s
7. playing an mpeg with mplayer while "make -j2 bzImage modules" makes the
movie skip some frames every ~10 seconds.
8. I've also tried min_timeslice == max_timeslice (10) w/o much difference :-(
I remember that this helped a lot in earlier 2.5 kernels.
I have to say that the XMMS issue is really less important for me. I want a
kernel where I can "make -j<huge number> bzImage modules" and not notice
the compilation w/o renicing all the gcc instances.
Machine:
--------
Celeron 1.3GHz
512MB RAM
2x IDE (UDMA100) 60/40 GB
1GB SWAP, 512MB on each disk (same priority)
ext3fs (data=ordered)
anticipatory I/O scheduler
XFree 4.3
WindowMaker 0.82-CVS
Is my box the only one on earth which doesn't like the scheduler fixups? ;)
ciao, Marc
Hi Con,
I use the infinitely superior (to XMMS), interactive testing tool
mplayer! :-)
My test consists of playing a video in mplayer with software scaling
(i.e. no xv) and refreshing various fat web pages. This, as you are fully
aware, would cause mozilla to gobble up CPU in short bursts of
approximately 0.5-3 seconds, during which time the video would be very
choppy.
O6int is much improved in this scenario; the pauses during refreshing
web pages have almost disappeared. There are still very small hiccups in
the video playback during the refreshes.
Also, I use a local web page that queries a local mysql db to display a
large table in mozilla. So, mozilla, apache and mysql are all local.
O6int is a huge improvement in this area as well. It decreases the
choppiness of the video greatly. Previously this would cause an almost
complete halt of the video for several seconds, and now I would say there
is only a small amount of choppiness. It is still pretty choppy on the
initial load of that page after a reboot; maybe nothing is cached yet?
Thanks for your hard work!
Regards,
Shane
Hello,
I have been gone from the computer for a couple of hours, and now
everything is very choppy. For example, I'm transferring some huge data
over nfs to another linux box (on sun, tho) where there is not enough
space available. It happens that in the middle of the copying the
process 'hangs'. I cannot interrupt it with ctrl-[cz]; it takes a couple
of seconds (~20). I cannot tell for sure that the problem is with the O6
patch, since I haven't tried it on another kernel yet.
On Thu, Jul 17, 2003 at 12:30:25AM +1000, Con Kolivas wrote:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
--
Regards,
Wiktor Wodecki
On Thu, 17 Jul 2003, Con Kolivas wrote:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
Con, I'll make a few notes on the code and a final comment.
> -#define MAX_BONUS ((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
> +#define MAX_BONUS (40 * PRIO_BONUS_RATIO / 100)
Why did you bolt in the 40 value? It really comes from (MAX_USER_PRIO - MAX_RT_PRIO),
and you will have another place to change if the number of slots
changes. If you want to clarify it, stick a comment on it.
> + p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
I don't have the full code so I cannot see what "runtime" is, but if
"runtime" is the time the task ran, this is:
p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
(in any case a non-decreasing function of "runtime")
Are you sure you want to reward tasks that actually ran more?
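To see the shape of that concretely, here is a quick stand-alone check;
MAX_BONUS=10 and the jiffies values are assumptions, not numbers from this
thread:

#include <stdio.h>

/* Compare the patch's expression against the approximation above.
 * MAX_BONUS = 10 is assumed (40 * 25 / 100 in stock 2.6). */
#define MAX_BONUS 10

int main(void)
{
	unsigned long runtime = 3000;	/* hypothetical jiffies since task start */
	unsigned long sleep_avg = 5000;	/* hypothetical accumulated sleep_avg */

	unsigned long patched = (sleep_avg * MAX_BONUS / runtime + 1)
				* runtime / MAX_BONUS;		/* = 5100 */
	unsigned long approx = sleep_avg + runtime / MAX_BONUS;	/* = 5300 */

	/* Both expressions grow as runtime grows, which is the point
	 * being made: sleep_avg becomes a non-decreasing function of
	 * runtime (integer truncation only shaves a little off). */
	printf("patched=%lu approx=%lu\n", patched, approx);
	return 0;
}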
Con, you cannot follow the XMMS thingy, otherwise you'll end up bolting in
the XMMS sleep->burn pattern and you'll probably break make-j+RealPlay,
for example. MultiMedia players are really tricky since they require strict
timings and force you to create a special super-interactive treatment
inside the code. Interactivity on my box running moderately high loads is
very good for my desktop use. Maybe audio will skip here (didn't try), but
I believe that following the fix-XMMS thingy is really bad. I believe we
should try to make the desktop feel interactive within human tolerances
and not with strict timings like MM apps. If the audio skips when dragging
an X window around like crazy using filled mode on a slow CPU, we shouldn't
be much worried about it, for example. If audio skips when hitting the
refresh button of Mozilla, then yes, it should be fixed. And the more you
add super-interactive patterns, the more the scheduler will be exploitable.
I recommend that after making changes you get this:
http://www.xmailserver.org/linux-patches/irman2.c
and run it with different -n (number of tasks) and -b (CPU burn ms time).
At the same time, try to build a kernel for example. Then you will realize
that interactivity is not the biggest problem the scheduler has right
now.
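For readers without irman2.c handy, the sleep->burn pattern being described
looks roughly like this hand-written sketch; it is not the actual irman2
code, and the 20ms/40ms figures are invented:

/* Minimal sketch of a "sleep->burn" exploit pattern, NOT irman2.c itself:
 * sleep just long enough to be classified interactive, then burn the CPU
 * at the boosted priority. Run several copies to starve everyone else. */
#include <unistd.h>
#include <time.h>

static void burn_ms(long ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000 +
		 (now.tv_nsec - start.tv_nsec) / 1000000 < ms);
}

int main(void)
{
	for (;;) {
		usleep(20 * 1000);	/* sleep 20ms: looks interactive */
		burn_ms(40);		/* then burn 40ms at boosted priority */
	}
}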
- Davide
On Thu, 17 Jul 2003 07:59, Wiktor Wodecki wrote:
> Hello,
>
> I have been gone from the computer for a couple of hours, and now
> everything is very chopping. For example I'm transfering some huge data
Small bug I noticed after sleeping on the patch. I'll post a fix later today.
Con
> over nfs to another linux box (on sun, tho) where there is not enough
> space available. It happens that in the middle of the copying the
> process 'hangs'. I cannot interrupt it with ctrl-[cz], it takes a couple
> of seconds (~20). I cannot tell for sure that the problem is with the O6
> patch, since I haven't done it on an other kernel, yet.
>
> On Thu, Jul 17, 2003 at 12:30:25AM +1000, Con Kolivas wrote:
> > O*int patches trying to improve the interactivity of the 2.5/6 scheduler
> > for desktops. It appears possible to do this without moving to nanosecond
> > resolution.
> >
> > This one makes a massive difference... Please test this to death.
> >
> > Changes:
> > The big change is in the way sleep_avg is incremented. Any amount of
> > sleep will now raise you by at least one priority with each wakeup. This
> > causes massive differences to startup time, extremely rapid conversion to
> > interactive state, and recovery from non-interactive state rapidly as
> > well (prevents X stalling after thrashing around under high loads for
> > many seconds).
> >
> > The sleep buffer was dropped to just 10ms. This has the effect of causing
> > mild round robinning of very interactive tasks if they run for more than
> > 10ms. The requeuing was changed from (unlikely()) to an ordinary if..
> > branch as this will be hit much more now.
> >
> > MAX_BONUS as a #define was made easier to understand
> >
> > Idle tasks were made slightly less interactive to prevent cpu hogs from
> > becoming interactive on their very first wakeup.
> >
> > Con
> >
> > This patch-O6int-0307170012 applies on top of 2.6.0-test1-mm1 and can be
> > found here:
> > http://kernel.kolivas.org/2.5
> >
> > and here:
> >
> > --- linux-2.6.0-test1-mm1/kernel/sched.c 2003-07-16 20:27:32.000000000
> > +1000 +++ linux-2.6.0-testck1/kernel/sched.c 2003-07-17
> > 00:13:24.000000000 +1000 @@ -76,9 +76,9 @@
> > #define MIN_SLEEP_AVG (HZ)
> > #define MAX_SLEEP_AVG (10*HZ)
> > #define STARVATION_LIMIT (10*HZ)
> > -#define SLEEP_BUFFER (HZ/20)
> > +#define SLEEP_BUFFER (HZ/100)
> > #define NODE_THRESHOLD 125
> > -#define MAX_BONUS ((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO /
> > 100) +#define MAX_BONUS (40 * PRIO_BONUS_RATIO / 100)
> >
> > /*
> > * If a task is 'interactive' then we reinsert it in the active
> > @@ -399,7 +399,7 @@ static inline void activate_task(task_t
> > */
> > if (sleep_time > MIN_SLEEP_AVG){
> > p->avg_start = jiffies - MIN_SLEEP_AVG;
> > - p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 1) /
> > + p->sleep_avg = MIN_SLEEP_AVG * (MAX_BONUS - INTERACTIVE_DELTA - 2) /
> > MAX_BONUS;
> > } else {
> > /*
> > @@ -413,14 +413,10 @@ static inline void activate_task(task_t
> > p->sleep_avg += sleep_time;
> >
> > /*
> > - * Give a bonus to tasks that wake early on to prevent
> > - * the problem of the denominator in the bonus equation
> > - * from continually getting larger.
> > + * Processes that sleep get pushed to a higher priority
> > + * each time they sleep
> > */
> > - if ((runtime - MIN_SLEEP_AVG) < MAX_SLEEP_AVG)
> > - p->sleep_avg += (runtime - p->sleep_avg) *
> > - (MAX_SLEEP_AVG + MIN_SLEEP_AVG - runtime) *
> > - (MAX_BONUS - INTERACTIVE_DELTA) / MAX_BONUS / MAX_SLEEP_AVG;
> > + p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime /
> > MAX_BONUS;
> >
> > /*
> > * Keep a small buffer of SLEEP_BUFFER sleep_avg to
> > @@ -1311,7 +1307,7 @@ void scheduler_tick(int user_ticks, int
> > enqueue_task(p, rq->expired);
> > } else
> > enqueue_task(p, rq->active);
> > - } else if (unlikely(p->prio < effective_prio(p))){
> > + } else if (p->prio < effective_prio(p)){
> > /*
> > * Tasks that have lowered their priority are put to the end
> > * of the active array with their remaining timeslice
> >
> > -
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel"
> > in the body of a message to [email protected]
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at http://www.tux.org/lkml/
Quoting Davide Libenzi <[email protected]>:
> Con, I'll make a few notes on the code and a final comment.
>
>
>
> > -#define MAX_BONUS ((MAX_USER_PRIO - MAX_RT_PRIO) * PRIO_BONUS_RATIO / 100)
> > +#define MAX_BONUS (40 * PRIO_BONUS_RATIO / 100)
>
> Why did you bolt in the 40 value? It really comes from (MAX_USER_PRIO - MAX_RT_PRIO),
> and you will have another place to change if the number of slots
> changes. If you want to clarify it, stick a comment on it.
Granted. Will revert. If you don't understand it you shouldn't be fiddling
with it, I agree.
>
>
>
> > + p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
>
> I don't have the full code so I cannot see what "runtime" is, but if
> "runtime" is the time the task ran, this is:
>
> p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
>
> (in any case a non-decreasing function of "runtime")
> Are you sure you want to reward tasks that actually ran more?
That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will
be posted very soon.
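The actual O6.1 patch is not quoted in this thread; going by the statement
above, the fix presumably amounts to something like this sketch, in the
context of activate_task():

/* Sketch only -- cap runtime before it enters the bonus expression,
 * per "runtime was supposed to be limited to MAX_SLEEP_AVG" above. */
if (runtime > MAX_SLEEP_AVG)
	runtime = MAX_SLEEP_AVG;
p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;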
>
>
> Con, you cannot follow the XMMS thingy otherwise you'll end up bolting in
> the XMMS sleep->burn pattern and you'll probably break the make-j+RealPlay
> for example. MultiMedia players are really tricky since they require strict
> timings and forces you to create a special super-interactive treatment
> inside the code. Interactivity in my box running moderate high loads is
> very good for my desktop use. Maybe audio will skip here (didn't try) but
> I believe that following the fix-XMMS thingy is really bad. I believe we
> should try to make the desktop to feel interactive with human tollerances
> and not with strict timings like MM apps. If the audio skips when dragging
> like crazy a X window using the filled mode on a slow CPU, we shouldn't be
> much worried about it for example. If audio skip when hitting the refresh
> button of Mozilla, then yes it should be fixed. And the more you add super
> interactive patterns, the more the scheduler will be exploitable. I
> recommend you after doing changes to get this :
>
> http://www.xmailserver.org/linux-patches/irman2.c
>
> and run it with different -n (number of tasks) and -b (CPU burn ms time).
> At the same time try to build a kernel for example. Then you will realize
> that interactivity is not the bigger problem that the scheduler has right
> now.
Please don't assume I'm writing an xmms scheduler. I've done a lot more testing
than xmms.
Con
On Thu, 17 Jul 2003, Con Kolivas wrote:
> > > + p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
> >
> > I don't have the full code so I cannot see what "runtime" is, but if
> > "runtime" is the time the task ran, this is:
> >
> > p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> >
> > (in any case a non-decreasing function of "runtime")
> > Are you sure you want to reward tasks that actually ran more?
>
>
> That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will
> be posted very soon.
Con, it is not the limit. You're making sleep_avg a non-decreasing
function of "runtime". Basically you are rewarding tasks that burned
more CPU (if runtime is what the name suggests). Are you sure this is what
you want?
> > Con, you cannot follow the XMMS thingy otherwise you'll end up bolting in
> > the XMMS sleep->burn pattern and you'll probably break the make-j+RealPlay
> > for example. MultiMedia players are really tricky since they require strict
> > timings and forces you to create a special super-interactive treatment
> > inside the code. Interactivity in my box running moderate high loads is
> > very good for my desktop use. Maybe audio will skip here (didn't try) but
> > I believe that following the fix-XMMS thingy is really bad. I believe we
> > should try to make the desktop to feel interactive with human tollerances
> > and not with strict timings like MM apps. If the audio skips when dragging
> > like crazy a X window using the filled mode on a slow CPU, we shouldn't be
> > much worried about it for example. If audio skip when hitting the refresh
> > button of Mozilla, then yes it should be fixed. And the more you add super
> > interactive patterns, the more the scheduler will be exploitable. I
> > recommend you after doing changes to get this :
> >
> > http://www.xmailserver.org/linux-patches/irman2.c
> >
> > and run it with different -n (number of tasks) and -b (CPU burn ms time).
> > At the same time try to build a kernel for example. Then you will realize
> > that interactivity is not the bigger problem that the scheduler has right
> > now.
>
> Please don't assume I'm writing an xmms scheduler. I've done a lot more testing
> than xmms.
Ok, I'm feeling better already ;)
- Davide
Con Kolivas wrote:
> That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG. Fix will
> be posted very soon.
Should any harm come from running O6int without the phantom patch
mentioned?
On Thu, 17 Jul 2003 10:35, Davide Libenzi wrote:
> On Thu, 17 Jul 2003, Con Kolivas wrote:
> > > > + p->sleep_avg = (p->sleep_avg * MAX_BONUS / runtime + 1) * runtime / MAX_BONUS;
> > >
> > > I don't have the full code so I cannot see what "runtime" is, but if
> > > "runtime" is the time the task ran, this is:
> > >
> > > p->sleep_avg ~= p->sleep_avg + runtime / MAX_BONUS;
> > >
> > > (in any case a non-decreasing function of "runtime")
> > > Are you sure you want to reward tasks that actually ran more?
> >
> > That was the bug. Runtime was supposed to be limited to MAX_SLEEP_AVG.
> > Fix will be posted very soon.
>
> Con, it is not the limit. You're making sleep_avg a non-decreasing
> function of "runtime". Basically you are rewarding tasks that burned
> more CPU (if runtime is what the name suggests). Are you sure this is what
> you want?
It's not cpu runtime; it is time since starting the process.
> > Please don't assume I'm writing an xmms scheduler. I've done a lot more
> > testing than xmms.
>
> Ok, I'm feeling better already ;)
Me too :)
Con
On Thu, 17 Jul 2003 10:48, Wade wrote:
> Should any harm come from running O6int without the phantom patch
> mentioned?
No harm, but applications that have been running for a while (?hours) will
eventually not run quite as smoothly. I promise it is only one 10 minute
kernel compile away :)
Con
<quote sender="Con Kolivas">
> On Thu, 17 Jul 2003 10:48, Wade wrote:
> > Should any harm come from running O6int without the phantom patch
> > mentioned?
>
> No harm, but applications that have been running for a while (?hours) will
> eventually not run quite as smoothly. I promise it is only one 10 minute
> kernel compile away :)
I am compiling my kernel with your patch. Is there any particular reason
why applications that have been running for a while will eventually not run
smoothly? Ok, I shall find out myself :D
Eugene
This is much better than 2.5.75-mm3 with your patch. XMMS seems to work
well, and applications in X are much more responsive, which is more
important to me anyway. Part of the sluggishness with 2.5.75-mm3 was
that the radeon DRM code apparently had problems. X was using 20-30% of
the CPU time, which probably contributed to the laggy feeling. Still,
even though 2.5.75-mm3 was crippled, this is also better than any of the
other 2.5.7x patched kernels I've tried.
Windows are sometimes pretty slow to redraw under load, though. If I
have gimp scaling an image on workspace 2 and then switch back to 1, it
can take up to 7 seconds to redraw Mozilla or a terminal. This seems to
get better the longer X has been running; once X has been up for
~15 minutes I can't get it to occur anymore.
XMMS still skips when my xscreensaver changes to a different
screensaver, but it's not as bad as before and I'm not expecting
miracles. I would rather a program start fast than take forever to
start just to avoid stalling XMMS.
I'm running a K6-2 400 w/384MB.
I'll keep testing and let you know if I find any problems...
Thanks,
Wes
Con Kolivas wrote:
>O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
>desktops. It appears possible to do this without moving to nanosecond
>resolution.
>
>This one makes a massive difference... Please test this to death.
Con Kolivas, Wed, Jul 16, 2003 16:30:25 +0200:
> O*int patches trying to improve the interactivity of the 2.5/6 scheduler for
> desktops. It appears possible to do this without moving to nanosecond
> resolution.
>
> This one makes a massive difference... Please test this to death.
>
tar ztf file.tar.gz and make something somehow do not like each other.
Usually it is tar, which became very slow. And listings of directories
are slow if the system is under load (about 3), too (most annoying).
UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.
Quoting Alex Riesen <[email protected]>:
> tar ztf file.tar.gz and make something somehow do not like each other.
> Usually it is tar, which became very slow. And listings of directories
> are slow if system is under load (about 3), too (most annoying).
>
> UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.
Thanks for testing. It is distinctly possible that O6.1 addresses this problem.
Can you please test that? It applies on top of O6 and only requires a recompile
of sched.o.
http://kernel.kolivas.org/2.5
Con
At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
>http://www.xmailserver.org/linux-patches/irman2.c
>
>and run it with different -n (number of tasks) and -b (CPU burn ms time).
>At the same time try to build a kernel for example. Then you will realize
>that interactivity is not the bigger problem that the scheduler has right
>now.
I added an irman2 load to contest. Con's changes O6+O6.1 stomped it flat
[1]. irman2 is modified to run for 30s at a time, but with default parameters.
-Mike
[1] imho a little too flat. it also made a worrisome impression on apache
bench
Mike Galbraith wrote:
> At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
>
>> http://www.xmailserver.org/linux-patches/irman2.c
>>
>> and run it with different -n (number of tasks) and -b (CPU burn ms
>> time).
>> At the same time try to build a kernel for example. Then you will
>> realize
>> that interactivity is not the bigger problem that the scheduler has
>> right
>> now.
>
>
> I added an irman2 load to contest. Con's changes O6+O6.1 stomped it
> flat [1]. irman2 is modified to run for 30s at a time, but with
> default parameters.
>
> -Mike
>
> [1] imho a little too flat. it also made a worrisome impression on
> apache bench
>
>
>------------------------------------------------------------------------
>
>no_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 153 94.8 0.0 0.0 1.00
>2.5.70 1 153 94.1 0.0 0.0 1.00
>2.6.0-test1 1 153 94.1 0.0 0.0 1.00
>2.6.0-test1-mm1 1 152 94.7 0.0 0.0 1.00
>cacherun:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 146 98.6 0.0 0.0 0.95
>2.5.70 1 146 98.6 0.0 0.0 0.95
>2.6.0-test1 1 146 98.6 0.0 0.0 0.95
>2.6.0-test1-mm1 1 146 98.6 0.0 0.0 0.96
>process_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 331 43.8 90.0 55.3 2.16
>2.5.70 1 199 72.4 27.0 25.5 1.30
>2.6.0-test1 1 264 54.5 61.0 44.3 1.73
>2.6.0-test1-mm1 1 323 44.9 88.0 54.2 2.12
>ctar_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 190 77.9 0.0 0.0 1.24
>2.5.70 1 186 80.1 0.0 0.0 1.22
>2.6.0-test1 1 213 70.4 0.0 0.0 1.39
>2.6.0-test1-mm1 1 207 72.5 0.0 0.0 1.36
>xtar_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 196 75.0 0.0 3.1 1.28
>2.5.70 1 195 75.9 0.0 3.1 1.27
>2.6.0-test1 1 193 76.7 1.0 4.1 1.26
>2.6.0-test1-mm1 1 195 75.9 1.0 4.1 1.28
>io_load:
>Kernel [runs] Time CPU% Loads LCPU% Ratio
>2.5.69 1 437 34.6 69.1 15.1 2.86
>2.5.70 1 401 37.7 72.3 17.4 2.62
>2.6.0-test1 1 243 61.3 48.1 17.3 1.59
>2.6.0-test1-mm1 1 336 44.9 64.5 17.3 2.21
>
Looks like gcc is getting less priority after a read completes.
Keep an eye on this please.
Con Kolivas, Thu, Jul 17, 2003 11:14:55 +0200:
> > > O*int patches trying to improve the interactivity of the 2.5/6
> > > scheduler for desktops. It appears possible to do this without
> > > moving to nanosecond resolution.
> >
> > tar ztf file.tar.gz and make something somehow do not like each other.
> > Usually it is tar, which became very slow. And listings of directories
> > are slow if system is under load (about 3), too (most annoying).
> >
> > UP P3-700, preempt. 2.6.0-test1-mm1 + O6-int.
>
> Thanks for testing. It is distinctly possible that O6.1 addresses this
> problem. Can you please test that? It applies on top of O6 and only
> requires a recompile of sched.o.
Still no good. xine drops frames during the kernel's make -j2, and xmms
skips during a (local) bk pull. Updates (after switching desktops in
metacity) get delayed for seconds (a mozilla window redrawing with
http://www.kernel.org on it, for example).
Priorities of xine threads were around 15-16, with one of them
constantly at 16 (the one using the most cpu). gcc/as processes were at 20-21.
That said, it feels better than before. And the last changes in
the scheduler seem to reveal more races in applications (I found rxvt
not checking for errors when reading from the pty).
-alex
At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:
>Looks like gcc is getting less priority after a read completes.
>Keep an eye on this please.
That _might_ (add salt) be priorities of kernel threads dropping too low.
I'm also seeing occasional total stalls under heavy I/O in the order of
10-12 seconds (even the disk stops). I have no idea if that's something in
mm or the scheduler changes though, as I've yet to do any isolation and/or
tinkering. All I know at this point is that I haven't seen it in stock yet.
-Mike
On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> That _might_ (add salt) be priorities of kernel threads dropping too low.
>
> I'm also seeing occasional total stalls under heavy I/O in the order of
> 10-12 seconds (even the disk stops). I have no idea if that's something in
> mm or the scheduler changes though, as I've yet to do any isolation and/or
> tinkering. All I know at this point is that I haven't seen it in stock yet.
I've seen this too while doing a huge nfs transfer from a 2.6 machine to
a 2.4 machine (sparc32). Thought it'd be something with the recent nfs
changes; might be the scheduler, tho. Ah, and it is fully reproducible.
--
Regards,
Wiktor Wodecki
On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
> On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> > That _might_ (add salt) be priorities of kernel threads dropping too low.
> >
> > I'm also seeing occasional total stalls under heavy I/O in the order of
> > 10-12 seconds (even the disk stops). I have no idea if that's something
> > in mm or the scheduler changes though, as I've yet to do any isolation
> > and/or tinkering. All I know at this point is that I haven't seen it in
> > stock yet.
>
> I've seen this too while doing a huge nfs transfer from a 2.6 machine to
> a 2.4 machine (sparc32). Thought it'd be something with the recent nfs
> changes; might be the scheduler, tho. Ah, and it is fully reproducible.
Well I didn't want to post this yet because I'm not sure it's a good
workaround, but it looks like a reasonable compromise, and since you have a
testcase it will be interesting to see if it addresses it. It's possible that
a task is being requeued every millisecond; this is a little smarter. It
allows cpu hogs to run for 100ms before being round-robinned, but shorter for
interactive tasks. Can you try this O7, which applies on top of O6.1, please:
available here:
http://kernel.kolivas.org/2.5
and here:
--- linux-2.6.0-test1-mm1/kernel/sched.c 2003-07-17 19:59:16.000000000 +1000
+++ linux-2.6.0-testck1/kernel/sched.c 2003-07-18 00:10:55.000000000 +1000
@@ -1310,10 +1310,12 @@ void scheduler_tick(int user_ticks, int
enqueue_task(p, rq->expired);
} else
enqueue_task(p, rq->active);
- } else if (p->prio < effective_prio(p)){
+ } else if (!((task_timeslice(p) - p->time_slice) %
+ (MIN_TIMESLICE * (MAX_BONUS + 1 - p->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG)))){
/*
- * Tasks that have lowered their priority are put to the end
- * of the active array with their remaining timeslice
+ * Running tasks get requeued with their remaining timeslice
+ * after a period proportional to how cpu intensive they are to
+ * minimise the duration one interactive task can starve another
*/
dequeue_task(p, rq->active);
set_tsk_need_resched(p);
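To make the new modulus test concrete: assuming the stock 2.6 values
MIN_TIMESLICE = 10ms (at HZ=1000) and MAX_BONUS = 10, which a patched tree
may change, the requeue interval scales with sleep_avg roughly like this
sketch:

/* Illustrative only: assumes MIN_TIMESLICE = 10 (ms at HZ=1000) and
 * MAX_BONUS = 10; not taken from the patch above. */
static int requeue_interval_ms(unsigned long sleep_avg,
			       unsigned long max_sleep_avg)
{
	return 10 * (10 + 1 - sleep_avg * 10 / max_sleep_avg);
}
/*
 * sleep_avg == max_sleep_avg (fully interactive) -> 10 * (11 - 10) =  10ms
 * sleep_avg == max_sleep_avg / 2                 -> 10 * (11 -  5) =  60ms
 * sleep_avg == 0 (pure cpu hog)                  -> 10 * (11 -  0) = 110ms,
 * i.e. roughly the 100ms quoted above for cpu hogs.
 */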
On Fri, Jul 18, 2003 at 08:43:05PM +1000, Con Kolivas wrote:
> Well I didn't want to post this yet because I'm not sure it's a good
> workaround, but it looks like a reasonable compromise, and since you have a
> testcase it will be interesting to see if it addresses it. It's possible that
> a task is being requeued every millisecond; this is a little smarter. It
> allows cpu hogs to run for 100ms before being round-robinned, but shorter for
> interactive tasks. Can you try this O7, which applies on top of O6.1, please:
>
> available here:
> http://kernel.kolivas.org/2.5
sorry, the problem still persists. Aborting the cp takes less time, tho
(about 10 seconds now, before it was about 30 secs). I'm aborting during
a big file, FYI.
--
Regards,
Wiktor Wodecki
Wiktor Wodecki wrote:
>sorry, the problem still persists. Aborting the cp takes less time, tho
>(about 10 seconds now, before it was about 30 secs). I'm aborting during
>a big file, FYI.
>
OK, if the IO actually stops then it shouldn't be an IO scheduler or
request allocation problem, but could you try to capture a sysrq-T
trace for me during the freeze?
On Fri, 18 Jul 2003, Mike Galbraith wrote:
> At 03:12 PM 7/16/2003 -0700, Davide Libenzi wrote:
>
> >http://www.xmailserver.org/linux-patches/irman2.c
> >
> >and run it with different -n (number of tasks) and -b (CPU burn ms time).
> >At the same time try to build a kernel for example. Then you will realize
> >that interactivity is not the bigger problem that the scheduler has right
> >now.
>
> I added an irman2 load to contest. Con's changes 06+06.1 stomped it flat
> [1]. irman2 is modified to run for 30s at a time, but with default parameters.
In my case I cannot even estimate the time. It usually takes 8:33 to do a
bzImage, and after 15 minutes I ctrl-c with only two lines printed in the
console. If you consider the ratio against the total number of lines that
a kernel build spits out, this could have taken hours. Also, you might
want to try a low number of processes with a short burn, like the new
patch seems to do, to better hit mm players. Something like:
irman2 -n 10 -b 40
Guys, I'm saying this not because I do not appreciate the time Con is
spending on it; I just hate to see time spent on the wrong priorities.
Whatever super-privileged sleep->burn pattern you code, it can be
exploited w/out a global throttle for the CPU time assigned to interactive
and non-interactive tasks. This is Unix, guys, and it is used in multi-user
environments; we cannot ship with a flaw like this.
- Davide
On Fri, 18 Jul 2003 20:18, Mike Galbraith wrote:
> At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:
> >Mike Galbraith wrote:
> That _might_ (add salt) be priorities of kernel threads dropping too low.
Is there any good reason for the priorities of kernel threads to vary at all?
In the original design they are subject to the same interactivity changes as
other processes, and I've maintained that, but I can't see a good reason for
it and I plan to change it unless someone tells me otherwise.
Con
At 12:24 AM 7/19/2003 +1000, Con Kolivas wrote:
>On Fri, 18 Jul 2003 20:18, Mike Galbraith wrote:
> > At 04:34 PM 7/18/2003 +1000, Nick Piggin wrote:
> > >Mike Galbraith wrote:
> > That _might_ (add salt) be priorities of kernel threads dropping too low.
>
>Is there any good reason for the priorities of kernel threads to vary at all?
>In the original design they are subject to the same interactivity changes as
>other processes and I've maintained that but I can't see a good reason for it
>and plan to change it unless someone tells me otherwise.
They're so light nowadays that I never see them change. I set bonus
manually to MAX_BONUS/2 in the last numbers posted.
-Mike
At 06:46 AM 7/18/2003 -0700, Davide Libenzi wrote:
>In my case I cannot even estimate the time. It usually takes 8:33 to do a
>bzImage, and after 15 minutes I ctrl-c with only two lines printed in the
>console. If you consider the ratio against the total number of lines that
>a kernel build spits out, this could have taken hours. Also, you might
>want to try a low number of processes with a short burn, like the new
>patch seems to do, to better hit mm players. Something like:
>irman2 -n 10 -b 40
If I hadn't done the restart-after-30-seconds thing, I knew it would take
ages. I wanted something to show contrast, not a life sentence ;-)
>Guys, I'm saying this not because I do not appreciate the time Con is
>spending on it; I just hate to see time spent on the wrong priorities.
>Whatever super-privileged sleep->burn pattern you code, it can be
>exploited w/out a global throttle for the CPU time assigned to interactive
>and non-interactive tasks. This is Unix, guys, and it is used in multi-user
>environments; we cannot ship with a flaw like this.
(Oh, I agree that the problem is nasty. I like fair scheduling a lot...
when _I'm_ not the one starving things to death;)
-Mike
At 12:31 PM 7/18/2003 +0200, Wiktor Wodecki wrote:
>I've seen this too while doing a huge nfs transfer from a 2.6 machine to
>a 2.4 machine (sparc32). Thought it'd be something with the recent nfs
>changes; might be the scheduler, tho. Ah, and it is fully reproducible.
Telling it not to mess with my kernel threads seems to have fixed it here...
no stalls during the whole contest run. New contest numbers attached.
-Mike
On Fri, 18 Jul 2003, Mike Galbraith wrote:
> Telling it not to mess with my kernel threads seems to have fixed it here...
> no stalls during the whole contest run. New contest numbers attached.
It is ok to use unfairness towards kernel threads to avoid starvation. We
control them. It is right to apply uncontrolled unfairness to userspace
tasks though.
- Davide
On Fri, 18 Jul 2003, Davide Libenzi wrote:
> control them. It is right to apply uncontrolled unfairness to userspace
> tasks though.
s/It is right/It is not right/
- Davide
On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> On Fri, 18 Jul 2003, Davide Libenzi wrote:
>
> > control them. It is right to apply uncontrolled unfairness to userspace
> > tasks though.
>
> s/It is right/It is not right/
OK.. but is it right to apply *controlled* unfairness to userspace?
At 09:52 AM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003, Mike Galbraith wrote:
>
> > Telling it not to mess with my kernel threads seems to have fixed it here...
> > no stalls during the whole contest run. New contest numbers attached.
>
>It is ok to use unfairness towards kernel threads to avoid starvation. We
>control them. It is right to apply uncontrolled unfairness to userspace
>tasks though.
In this case, it appears that the lowered priority was causing
trouble. One test run isn't enough to say 100%, but what I read out of the
numbers is that at least kswapd needs to be able to preempt.
wrt the uncontrolled unfairness, I've muttered about this before. I've
also tried (quite) a few things, but nothing yet has been good enough...
everything so far would require trashing that I couldn't do here ;-)
-Mike
On Fri, 18 Jul 2003 [email protected] wrote:
> On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> > On Fri, 18 Jul 2003, Davide Libenzi wrote:
> >
> > > control them. It is right to apply uncontrolled unfairness to userspace
> > > tasks though.
> >
> > s/It is right/It is not right/
>
> OK.. but is it right to apply *controlled* unfairness to userspace?
I'm sorry to say that, guys, but I'm afraid it's what we have to do. We did
not think about it when this scheduler was dropped into 2.5, sadly. The
interactivity concept is based on the fact that a particular class of
tasks, characterized by certain sleep->burn patterns, is never expired and
eventually only oscillates between two (pretty high) priorities. Without
applying a global CPU throttle for interactive tasks, you can create a
small set of processes (like irman does) that hit the coded sleep->burn
pattern and make everything running at a priority lower than the bottom of
the oscillation range starve almost completely.
Controlled unfairness would mean throttling the CPU time we reserve for
interactive tasks so that we always reserve a minimum amount of time for
non-interactive processes.
- Davide
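The throttle Davide proposes could be as simple as counting how long
interactive tasks have monopolized a runqueue and forcing a pick from the
other side once a limit is hit. A rough sketch; int_ticks, INT_LIMIT, and
the pick helpers are all invented names, not a proposed patch:

/*
 * Sketch of "controlled unfairness": once interactive tasks have had
 * the CPU for INT_LIMIT consecutive ticks, the next pick is forced to
 * come from the non-interactive side. All identifiers are invented.
 */
#define INT_LIMIT       (HZ)    /* at most ~1s of interactive monopoly */

/* called from scheduler_tick() with p == rq->curr */
static inline void account_interactive(runqueue_t *rq, task_t *p)
{
        if (TASK_INTERACTIVE(p))
                rq->int_ticks++;        /* hypothetical runqueue field */
        else
                rq->int_ticks = 0;      /* a non-interactive task ran */
}

/* called from schedule() when choosing the next task */
static inline task_t *throttled_pick(runqueue_t *rq)
{
        if (rq->int_ticks >= INT_LIMIT && noninteractive_pending(rq))
                return pick_noninteractive(rq); /* hypothetical helper */
        return pick_highest_prio(rq);           /* the normal O(1) path */
}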
At 12:31 PM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003 [email protected] wrote:
>
> > On Fri, 18 Jul 2003 10:05:05 PDT, Davide Libenzi said:
> > > On Fri, 18 Jul 2003, Davide Libenzi wrote:
> > >
> > > > control them. It is right to apply uncontrolled unfairness to userspace
> > > > tasks though.
> > >
> > > s/It is right/It is not right/
> >
> > OK.. but is it right to apply *controlled* unfairness to userspace?
>
>I'm sorry to say that, guys, but I'm afraid it's what we have to do. We did
>not think about it when this scheduler was dropped into 2.5, sadly. The
>interactivity concept is based on the fact that a particular class of
>tasks, characterized by certain sleep->burn patterns, is never expired and
>eventually only oscillates between two (pretty high) priorities. Without
>applying a global CPU throttle for interactive tasks, you can create a
>small set of processes (like irman does) that hit the coded sleep->burn
>pattern and make everything running at a priority lower than the bottom of
>the oscillation range starve almost completely.
>Controlled unfairness would mean throttling the CPU time we reserve for
>interactive tasks so that we always reserve a minimum amount of time for
>non-interactive processes.
I'd like to find a way to prevent that instead. There's got to be a way.
It's easy to prevent irman-type things from starving others permanently (I
call this active starvation, or wakeup starvation), and this does something
fairly similar to what you're talking about. Just crawl down the queue
heads periodically, looking for the oldest task, instead of always taking
the highest queue. You can do that very fast, and it does prevent active
starvation.
-Mike
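Mike's crawl only has to inspect the head of each non-empty priority list,
so it stays cheap. A sketch of the idea against the 2.6 prio_array layout;
treat the age stamp (p->last_run) and the overall shape as approximations,
not his code:

/*
 * Sketch of the periodic anti-starvation scan: instead of taking the
 * highest-priority queue, walk the head of every non-empty queue in
 * the active array and pick the task that has waited longest.
 */
static task_t *pick_oldest_head(runqueue_t *rq)
{
        prio_array_t *array = rq->active;
        task_t *oldest = NULL;
        int idx;

        for (idx = 0; idx < MAX_PRIO; idx++) {
                task_t *p;

                if (list_empty(array->queue + idx))
                        continue;
                p = list_entry(array->queue[idx].next, task_t, run_list);
                if (!oldest || time_before(p->last_run, oldest->last_run))
                        oldest = p;
        }
        return oldest;  /* NULL if the active array is empty */
}

Run every so often in place of the usual highest-queue pick, this bounds
how long any runnable task in the active array can sit unserviced.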
On Fri, 18 Jul 2003, Mike Galbraith wrote:
> >I'm sorry to say that, guys, but I'm afraid it's what we have to do. We did
> >not think about it when this scheduler was dropped into 2.5, sadly. The
> >interactivity concept is based on the fact that a particular class of
> >tasks, characterized by certain sleep->burn patterns, is never expired and
> >eventually only oscillates between two (pretty high) priorities. Without
> >applying a global CPU throttle for interactive tasks, you can create a
> >small set of processes (like irman does) that hit the coded sleep->burn
> >pattern and make everything running at a priority lower than the bottom of
> >the oscillation range starve almost completely.
> >Controlled unfairness would mean throttling the CPU time we reserve for
> >interactive tasks so that we always reserve a minimum amount of time for
> >non-interactive processes.
>
> I'd like to find a way to prevent that instead. There's got to be a way.
Remember that this is computer science; that is, for every problem there
is "at least" one solution ;)
> It's easy to prevent irman-type things from starving others permanently (I
> call this active starvation, or wakeup starvation), and this does something
> fairly similar to what you're talking about. Just crawl down the queue
> heads periodically, looking for the oldest task, instead of always taking
> the highest queue. You can do that very fast, and it does prevent active
> starvation.
Everything that makes the scheduler say "ok, I gave enough time to
interactive tasks, now I'm really going to spin one from the masses" will
work. A clean solution does not seem to be an option here.
- Davide
On Fri, Jul 18, 2003 at 09:38:10PM +1000, Nick Piggin wrote:
>
>
> Wiktor Wodecki wrote:
>
> >On Fri, Jul 18, 2003 at 08:43:05PM +1000, Con Kolivas wrote:
> >
> >>On Fri, 18 Jul 2003 20:31, Wiktor Wodecki wrote:
> >>
> >>>On Fri, Jul 18, 2003 at 12:18:33PM +0200, Mike Galbraith wrote:
> >>>
> >>>>That _might_ (add salt) be priorities of kernel threads dropping too
> >>>>low.
> >>>>
> >>>>I'm also seeing occasional total stalls under heavy I/O in the order of
> >>>>10-12 seconds (even the disk stops). I have no idea if that's something
> >>>>in mm or the scheduler changes though, as I've yet to do any isolation
> >>>>and/or tinkering. All I know at this point is that I haven't seen it in
> >>>>stock yet.
> >>>>
> >>>I've seen this too while doing a huge nfs transfer from a 2.6 machine to
> >>>a 2.4 machine (sparc32). Thought it'd be something with the nfs changes
> >>>which went in recently; might be the scheduler, though. Ah, and it is fully
> >>>reproducible.
> >>>
> >>Well, I didn't want to post this yet because I'm not sure it's a good
> >>workaround, but it looks like a reasonable compromise, and since you have
> >>a testcase it will be interesting to see if it addresses it. It's possible
> >>that a task is being requeued every millisecond, and this is a little
> >>smarter. It allows CPU hogs to run for 100ms before being round-robinned,
> >>but shorter for interactive tasks. Can you try this O7, which applies on
> >>top of O6.1, please:
> >>
> >>available here:
> >>http://kernel.kolivas.org/2.5
> >>
> >
> >sorry, the problem still persists. Aborting the cp takes less time, though
> >(about 10 seconds now; before it was about 30 secs). I'm aborting during
> >a big file, FYI.
> >
>
> OK, if the IO actually stops then it shouldn't be an IO scheduler or
> request-allocation problem, but could you try to capture a sysrq-T
> trace for me during the freeze?
okay, here it is. I only pasted the output for cp; if you need the whole
thing, please tell me.
Jul 19 12:54:16 kakerlak kernel: cp D C0140F7B 6164 6160 (NOTLB)
Jul 19 12:54:16 kakerlak kernel: c2c6fec8 00200082 d3de680c c0140f7b d3de6800 c72ef000 c0477cc0 d3de681c
Jul 19 12:54:16 kakerlak kernel: d139ad60 c2c6e000 ce8dc3c0 ce8dc3dc 00000000 c01aba07 00000000 00000001
Jul 19 12:54:16 kakerlak kernel: 00000000 00000001 ce8153e4 00000000 c26706d0 c011cdb0 ce8dc3dc ce8dc3dc
Jul 19 12:54:16 kakerlak kernel: Call Trace:
Jul 19 12:54:16 kakerlak kernel: [free_block+203/256] free_block+0xcb/0x100
Jul 19 12:54:16 kakerlak kernel: [nfs_wait_on_request+151/336] nfs_wait_on_request+0x97/0x150
Jul 19 12:54:16 kakerlak kernel: [default_wake_function+0/48] default_wake_function+0x0/0x30
Jul 19 12:54:16 kakerlak kernel: [nfs_wait_on_requests+169/256] nfs_wait_on_requests+0xa9/0x100
Jul 19 12:54:16 kakerlak kernel: [nfs_sync_file+150/192] nfs_sync_file+0x96/0xc0
Jul 19 12:54:16 kakerlak kernel: [nfs_file_flush+88/208] nfs_file_flush+0x58/0xd0
Jul 19 12:54:16 kakerlak kernel: [sys_close+97/160] sys_close+0x61/0xa0
Jul 19 12:54:16 kakerlak kernel: [syscall_call+7/11] syscall_call+0x7/0xb
Jul 19 12:54:16 kakerlak kernel:
--
Regards,
Wiktor Wodecki
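Con's O7 quoted above (hogs round-robinned every 100ms, interactive tasks
sooner) presumably reduces to a variable requeue interval in
scheduler_tick(). A guessed shape, not the posted patch; the granularity
rule here is invented:

/*
 * Sketch of a variable requeue granularity: CPU hogs get requeued
 * behind their priority peers every ~100ms, interactive tasks every
 * ~10ms. The actual O7 formula may differ.
 */
static inline unsigned int requeue_granularity(task_t *p)
{
        if (TASK_INTERACTIVE(p))
                return HZ / 100;        /* interactive: ~10ms */
        return HZ / 10;                 /* CPU hogs: ~100ms */
}

/* in scheduler_tick(), for a task that still has timeslice left: */
unsigned int used = task_timeslice(p) - p->time_slice;
if (used && !(used % requeue_granularity(p))) {
        dequeue_task(p, rq->active);
        set_tsk_need_resched(p);
        enqueue_task(p, rq->active);
}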
At 01:38 PM 7/18/2003 -0700, Davide Libenzi wrote:
>On Fri, 18 Jul 2003, Mike Galbraith wrote:
>
> > >I'm sorry to say that, guys, but I'm afraid it's what we have to do. We did
> > >not think about it when this scheduler was dropped into 2.5, sadly. The
> > >interactivity concept is based on the fact that a particular class of
> > >tasks, characterized by certain sleep->burn patterns, is never expired and
> > >eventually only oscillates between two (pretty high) priorities. Without
> > >applying a global CPU throttle for interactive tasks, you can create a
> > >small set of processes (like irman does) that hit the coded sleep->burn
> > >pattern and make everything running at a priority lower than the bottom of
> > >the oscillation range starve almost completely.
> > >Controlled unfairness would mean throttling the CPU time we reserve for
> > >interactive tasks so that we always reserve a minimum amount of time for
> > >non-interactive processes.
> >
> > I'd like to find a way to prevent that instead. There's got to be a way.
>
>Remember that this is computer science; that is, for every problem there
>is "at least" one solution ;)
As incentive for other folks to think about the solution I haven't been
able to come up with, I think I'll post what I do about it here, and
threaten to submit it for inclusion ;-) ...
> > It's easy to prevent irman-type things from starving others permanently (I
> > call this active starvation, or wakeup starvation), and this does something
> > fairly similar to what you're talking about. Just crawl down the queue
> > heads periodically, looking for the oldest task, instead of always taking
> > the highest queue. You can do that very fast, and it does prevent active
> > starvation.
>
>Everything that makes the scheduler say "ok, I gave enough time to
>interactive tasks, now I'm really going to spin one from the masses" will
>work. A clean solution does not seem to be an option here.
... just as soon as I get my decidedly unclean work-around functioning at
least as well as it did for plain old irman. irman2 is _much_ more evil
than irman ever was (wow, good job!). I thought it'd be half an hour,
tops. This little bugger shows active starvation, expired starvation,
priority inflation, _and_ interactive starvation (I have to keep inventing
new terms to describe the things I see... jeez, this is a good testcase).
-Mike
On Sat, 19 Jul 2003, Mike Galbraith wrote:
> >Everything that makes the scheduler say "ok, I gave enough time to
> >interactive tasks, now I'm really going to spin one from the masses" will
> >work. A clean solution does not seem to be an option here.
>
> ... just as soon as I get my decidedly unclean work-around functioning at
> least as well as it did for plain old irman. irman2 is _much_ more evil
> than irman ever was (wow, good job!). I thought it'd be half an hour,
> tops. This little bugger shows active starvation, expired starvation,
> priority inflation, _and_ interactive starvation (I have to keep inventing
> new terms to describe the things I see... jeez, this is a good testcase).
Yes, the problem is not only expired-task starvation. Anything in
the active array that resides below the lower priority of the range
irman2 tasks oscillate between will experience a "CPU time eclipse".
And you do not even need smoked glass to look at it :)
- Davide
At 05:21 PM 7/20/2003 -0700, Davide Libenzi wrote:
>On Sat, 19 Jul 2003, Mike Galbraith wrote:
>
> > >Everything that makes the scheduler say "ok, I gave enough time to
> > >interactive tasks, now I'm really going to spin one from the masses" will
> > >work. A clean solution does not seem to be an option here.
> >
> > ... just as soon as I get my decidedly unclean work-around functioning at
> > least as well as it did for plain old irman. irman2 is _much_ more evil
> > than irman ever was (wow, good job!). I thought it'd be half an hour,
> > tops. This little bugger shows active starvation, expired starvation,
> > priority inflation, _and_ interactive starvation (I have to keep inventing
> > new terms to describe the things I see... jeez, this is a good testcase).
>
>Yes, the problem is not only expired-task starvation. Anything in
>the active array that resides below the lower priority of the range
>irman2 tasks oscillate between will experience a "CPU time eclipse".
>And you do not even need smoked glass to look at it :)
Here there's no oscillation that I can see. It climbs steadily to prio 16
and stays there forever, with the hog running down at the bottom. I added a
quick requirement that a non-interactive task must run at least once every
HZ ticks, with a sliding "select non-interactive" window staying open for
HZ/10 ticks, and retrieving an expired task if necessary instead of
expiring interactive tasks (or forcing the array switch), thinking it'd be
enough.
Wrong answer. For most things it would be good enough, I think, but with
the hog being part of irman2, I not only have to pull from the expired
array if no non-interactive task is available, I have to always pull once
the deadline is hit. I'm also going to have to add another check for queue
runtime to beat the darn thing. I ran irman2 with a bonnie -s 300 and a
kernel compile... After half an hour, the compile was making steady (but
too slow, because the irman2 periodic CPU hog was getting too much of what
gcc was meant to get ;) progress, but poor bonnie was starving at prio 17.
A sleep_avg vs cpu%*100 sanity check will help that, but not cure it.
All this to avoid the pain (agony, actually) of an array switch.
-Mike
(Someone should whack me upside the head with a clue-by-4. This darn thing
shouldn't be worth more than 10 lines of ugliness. I'm obviously past
that... and headed toward the twilight zone at warp 9. Wheee ;)
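The "always pull once the deadline is hit" rule Mike describes might reduce
to a per-runqueue timestamp checked at pick time. All names below
(ni_deadline, array_first_task()) are invented for illustration:

/*
 * Sketch of the deadline: guarantee that some expired / non-interactive
 * task runs at least once every HZ ticks, no matter what is sitting in
 * the active array.
 */
static task_t *deadline_pick(runqueue_t *rq)
{
        if (time_after(jiffies, rq->ni_deadline)) { /* hypothetical field */
                rq->ni_deadline = jiffies + HZ;
                if (rq->expired->nr_active)
                        return array_first_task(rq->expired); /* hypothetical */
        }
        return array_first_task(rq->active);
}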
Terribly sorry for the delay. I got a bit distracted by some elements of
testing (DVD playing).
Ingo Molnar, Fri, Jul 25, 2003 16:29:33 +0200:
> > Still no good. xine drops frames during the kernel's make -j2, and xmms
> > skips during a local bk pull. Updates (after switching desktops in
> > metacity) get delayed for seconds (the mozilla window redrawing with
> > http://www.kernel.org on it, for example).
>
> would you mind giving the attached sched-2.6.0-test1-G2 patch a go? (it's
> on top of vanilla 2.6.0-test1.) Do you still see audio skipping and/or
> other bad scheduling artifacts?
Started make -j2 (my machine is UP), a xine distractor, and gvim.
Moving the gvim window over xine (even when paused) is jerky, but tolerable.
No skips during playback. None at all. Redraws in MozillaFirebird are
delayed (I get trails all over the Firebird window), but again, no
annoyingly long delays.
I'll continue testing.
> (if you prefer -mm2 then please first unapply the second attached patch
> (Con's interactivity patchset) - they are mutually exclusive.)
I used the G2 on 2.6-test1.
-alex