Linus, this contains the sleep_time removal, a still lower ts and a ceiling
for dyn_prio in yield.
BTW, pre6 still has a kdev_t problem that prevents it from building.
- Davide
diff -Nru linux-2.5.2-pre6.vanilla/include/linux/sched.h linux-2.5.2-pre6.psch/include/linux/sched.h
--- linux-2.5.2-pre6.vanilla/include/linux/sched.h Wed Jan 2 11:26:58 2002
+++ linux-2.5.2-pre6.psch/include/linux/sched.h Wed Jan 2 12:22:28 2002
@@ -322,7 +322,6 @@
*/
struct list_head run_list;
long time_slice;
- unsigned long sleep_time;
/* recalculation loop checkpoint */
unsigned long rcl_last;
@@ -886,7 +885,6 @@
static inline void del_from_runqueue(struct task_struct * p)
{
nr_running--;
- p->sleep_time = jiffies;
list_del(&p->run_list);
p->run_list.next = NULL;
}
diff -Nru linux-2.5.2-pre6.vanilla/kernel/sched.c linux-2.5.2-pre6.psch/kernel/sched.c
--- linux-2.5.2-pre6.vanilla/kernel/sched.c Wed Jan 2 11:27:04 2002
+++ linux-2.5.2-pre6.psch/kernel/sched.c Wed Jan 2 12:28:32 2002
@@ -51,11 +51,11 @@
* NOTE! The unix "nice" value influences how long a process
* gets. The nice value ranges from -20 to +19, where a -20
* is a "high-priority" task, and a "+10" is a low-priority
- * task. The default time slice for zero-nice tasks will be 43ms.
+ * task. The default time slice for zero-nice tasks will be 37ms.
*/
#define NICE_RANGE 40
#define MIN_NICE_TSLICE 10000
-#define MAX_NICE_TSLICE 80000
+#define MAX_NICE_TSLICE 65000
#define TASK_TIMESLICE(p) ((int) ts_table[19 - (p)->nice])
static unsigned char ts_table[NICE_RANGE];
@@ -1070,7 +1070,8 @@
current->need_resched = 1;
current->time_slice = 0;
- current->dyn_prio++;
+ if (++current->dyn_prio > MAX_DYNPRIO)
+ current->dyn_prio = MAX_DYNPRIO;
}
return 0;
}
@@ -1325,7 +1326,8 @@
for (i = 0; i < NICE_RANGE; i++)
ts_table[i] = ((MIN_NICE_TSLICE +
- ((MAX_NICE_TSLICE - MIN_NICE_TSLICE) / NICE_RANGE) * i) * HZ) / 1000000;
+ ((MAX_NICE_TSLICE -
+ MIN_NICE_TSLICE) / (NICE_RANGE - 1)) * i) * HZ) / 1000000;
}
void __init sched_init(void)
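( For what it's worth, the 37ms figure is the nominal value: 10000 +
(55000/39)*19 usec ~= 36.8ms. With HZ=100 the table entry truncates to
3 ticks, so the real zero-nice slice is 30ms. )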
Davide Libenzi <[email protected]> writes:
> a still lower ts
This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
run two cpu hogs at nice values 0 and 19 respectively, the niced task
will get approximately 20% cpu time (on x86 with HZ=100) and this
patch will give even more cpu time to the niced task. Isn't 20% too
much?
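(If I read the slice table right, vanilla pre6 gives the nice 0 hog 4
ticks and the nice 19 hog 1 tick per round at HZ=100, i.e. 1/5 = 20%;
with the patch the nice 0 slice shrinks to 3 ticks, so the niced task
would get 1/4 = 25%.)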
--
Peter Osterlund - [email protected]
http://w1.894.telia.com/~u89404340
On 2 Jan 2002, Peter Osterlund wrote:
> Davide Libenzi <[email protected]> writes:
>
> > a still lower ts
>
> This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> run two cpu hogs at nice values 0 and 19 respectively, the niced task
> will get approximately 20% cpu time (on x86 with HZ=100) and this
> patch will give even more cpu time to the niced task. Isn't 20% too
> much?
The problem is that with HZ == 100 you don't have enough granularity to
correctly scale down nice time slices. Shorter time slices help the
interactive feel; that's why I'm pushing for this. Anyway, I'm currently
running experiments with 30-40ms time slices. Another thing to remember is
that cpu hog processes will sit at dyn_prio 0, while processes like, for
example, gcc during a kernel build will range from 5-8 up to 36, and in
that case their ts is effectively doubled by the fact that they can get
another extra ts. For all processes that do not sit at dyn_prio 0 ( 90% of
them ), the niced task's cpu time is going to be half.
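Just to put the granularity argument in numbers, here is a quick
userspace sketch ( not kernel code; the constants are copied from the
patch above and the two HZ values are only examples ) that prints the
resulting slice table in ticks:

/* userspace sketch: recompute the patched ts_table for two HZ values
 * to show how HZ=100 truncates the low end of the nice range */
#include <stdio.h>

#define NICE_RANGE	40
#define MIN_NICE_TSLICE	10000
#define MAX_NICE_TSLICE	65000

static void print_table(int hz)
{
	int i;

	printf("HZ=%4d:", hz);
	for (i = 0; i < NICE_RANGE; i++) {
		/* ts_table[i] is the slice, in ticks, for nice 19 - i */
		int ts = ((MIN_NICE_TSLICE +
			   ((MAX_NICE_TSLICE - MIN_NICE_TSLICE) /
			    (NICE_RANGE - 1)) * i) * hz) / 1000000;
		printf(" %d", ts);
	}
	printf("\n");
}

int main(void)
{
	print_table(100);
	print_table(1000);
	return 0;
}

With HZ=100 the top eight nice levels all collapse onto a single tick,
while with HZ=1000 nearly every nice level gets a distinct slice.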
- Davide
Davide Libenzi <[email protected]> writes:
> On 2 Jan 2002, Peter Osterlund wrote:
>
> > Davide Libenzi <[email protected]> writes:
> >
> > > a still lower ts
> >
> > This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> > run two cpu hogs at nice values 0 and 19 respectively, the niced task
> > will get approximately 20% cpu time (on x86 with HZ=100) and this
> > patch will give even more cpu time to the niced task. Isn't 20% too
> > much?
>
> The problem is that with HZ == 100 you don't have enough granularity to
> correctly scale down nice time slices. Shorter time slices help the
> interactive feel; that's why I'm pushing for this.
OK, but even architectures with bigger HZ values will suffer. Isn't it
better to set MIN_NICE_TSLICE to a smaller value (such as 1000) and
fix the calculation in fill_tslice_map to make sure ts_table[i] is
always non-zero? The current formula will break anyway if
HZ < 1000000 / MIN_NICE_TSLICE = 100,
but maybe HZ >= 100 is true for all architectures?
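(For example, with MIN_NICE_TSLICE = 1000 and HZ = 100 the nice 19
entry would come out as (1000 * 100) / 1000000 = 0 ticks unless it is
clamped somewhere.)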
--
Peter Osterlund - [email protected]
http://w1.894.telia.com/~u89404340
On 3 Jan 2002, Peter Osterlund wrote:
> Davide Libenzi <[email protected]> writes:
>
> > On 2 Jan 2002, Peter Osterlund wrote:
> >
> > > Davide Libenzi <[email protected]> writes:
> > >
> > > > a still lower ts
> > >
> > > This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> > > run two cpu hogs at nice values 0 and 19 respectively, the niced task
> > > will get approximately 20% cpu time (on x86 with HZ=100) and this
> > > patch will give even more cpu time to the niced task. Isn't 20% too
> > > much?
> >
> > The problem is that with HZ == 100 you don't have enough granularity to
> > correctly scale down nice time slices. Shorter time slices help the
> > interactive feel; that's why I'm pushing for this.
>
> OK, but even architectures with bigger HZ values will suffer. Isn't it
> better to set MIN_NICE_TSLICE to a smaller value (such as 1000) and
> fix the calculation in fill_tslice_map to make sure ts_table[i] is
> always non-zero? The current formula will break anyway if
>
> HZ < 1000000 / MIN_NICE_TSLICE = 100,
>
> but maybe HZ >= 100 is true for all architectures?
Yep, but we can do something like:
static void fill_tslice_map(void)
{
	int i;

	for (i = 0; i < NICE_RANGE; i++) {
		ts_table[i] = ((MIN_NICE_TSLICE +
				((MAX_NICE_TSLICE - MIN_NICE_TSLICE) /
				 (NICE_RANGE - 1)) * i) * HZ) / 1000000;
		/* never hand out a zero tick time slice, whatever HZ is */
		if (!ts_table[i])
			ts_table[i] = 1;
	}
}
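That way even a much smaller MIN_NICE_TSLICE ( or a low HZ build ) can
never end up handing out a zero-tick slice.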
- Davide
On Wed, 2 Jan 2002, Davide Libenzi wrote:
> On 2 Jan 2002, Peter Osterlund wrote:
> > Davide Libenzi <[email protected]> writes:
> >
> > > a still lower ts
> >
> > This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> > run two cpu hogs at nice values 0 and 19 respectively, the niced task
> > will get approximately 20% cpu time (on x86 with HZ=100) and this
> > patch will give even more cpu time to the niced task. Isn't 20% too
> > much?
>
> The problem is that with HZ == 100 you don't have enough granularity
> to correctly scale down nice time slices. Shorter time slices help
> the interactive feel; that's why I'm pushing for this.
So don't give the niced task a new timeslice each time,
but only once in a while.
regards,
Rik
--
Shortwave goes a long way: irc.starchat.net #swl
http://www.surriel.com/ http://distro.conectiva.com/
On Thu, 3 Jan 2002, Rik van Riel wrote:
> On Wed, 2 Jan 2002, Davide Libenzi wrote:
> > On 2 Jan 2002, Peter Osterlund wrote:
> > > Davide Libenzi <[email protected]> writes:
> > >
> > > > a still lower ts
> > >
> > > This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> > > run two cpu hogs at nice values 0 and 19 respectively, the niced task
> > > will get approximately 20% cpu time (on x86 with HZ=100) and this
> > > patch will give even more cpu time to the niced task. Isn't 20% too
> > > much?
> >
> > The problem is that with HZ == 100 you don't have enough granularity
> > to correctly scale down nice time slices. Shorter time slices help
> > the interactive feel; that's why I'm pushing for this.
>
> So don't give the niced task a new timeslice each time,
> but only once in a while.
Rik, this is part of the new architecture where tasks can spend the
virtual time they have accumulated ( if any, i.e. dyn_prio > 0 ), one
extra slice at a time. This helps in separating the time slice from the
dynamic priority.
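Roughly, the idea is something like this ( a sketch only, not the exact
code in the patch ):

	/* sketch: when the slice runs out, a task that has accumulated
	 * dynamic priority can cash one unit in for one more slice
	 * instead of being preempted right away */
	if (!current->time_slice) {
		if (current->dyn_prio > 0) {
			current->dyn_prio--;
			current->time_slice = TASK_TIMESLICE(current);
		} else
			current->need_resched = 1;
	}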
- Davide