* Davide Libenzi <[email protected]> [20011031 18:53]:
> On Wed, 31 Oct 2001, Mike Kravetz wrote:
>
> > I'm going to try and merge your 'cache warmth' replacement for
> > PROC_CHANGE_PENALTY into the LSE MQ scheduler, as well as enable
> > the code to prevent task stealing during IPI delivery. This
> > should still be significantly different than your design because
> > MQ will still attempt to make global decisions. Results should
> > be interesting.
>
> I'm currently evaluating different weights for that.
> Right now I'm using :
>
>     if (p->cpu_jtime > jiffies)
>         weight += p->cpu_jtime - jiffies;
>
> that might be too much.
> Solutions :
>
> 1)
>     if (p->cpu_jtime > jiffies)
>         weight += (p->cpu_jtime - jiffies) >> 1;
>
> 2)
>     int wtable[];
>
>     if (p->cpu_jtime > jiffies)
>         weight += wtable[p->cpu_jtime - jiffies];
>
> Speed-wise, 1) is preferable.
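
For illustration, a standalone sketch comparing the two weighting options
quoted above; the meaning of cpu_jtime (taken here as the jiffy until which
the task's cached state is considered warm) and the table contents are
assumptions, not the actual patch:

    /*
     * Sketch of the two cache-warmth weighting options.
     * cpu_jtime semantics and the wtable values are illustrative
     * assumptions, not the real scheduler patch.
     */
    #include <stdio.h>

    #define WTABLE_SIZE 8  /* hypothetical bound on cpu_jtime - jiffies */

    /* Option 2's table lets the bonus decay non-linearly. */
    static const int wtable[WTABLE_SIZE] = { 0, 1, 1, 2, 2, 3, 3, 4 };

    /* Option 1: half of the remaining warmth window. */
    static int bonus_shift(unsigned long cpu_jtime, unsigned long jiffies)
    {
        return cpu_jtime > jiffies ? (int)((cpu_jtime - jiffies) >> 1) : 0;
    }

    /* Option 2: table lookup, clamped here because the quoted sketch
     * leaves the table bounds unspecified. */
    static int bonus_table(unsigned long cpu_jtime, unsigned long jiffies)
    {
        unsigned long d;

        if (cpu_jtime <= jiffies)
            return 0;
        d = cpu_jtime - jiffies;
        return wtable[d < WTABLE_SIZE ? d : WTABLE_SIZE - 1];
    }

    int main(void)
    {
        unsigned long d;

        for (d = 0; d <= 9; d++)
            printf("delta=%lu  shift=%d  table=%d\n", d,
                   bonus_shift(1000 + d, 1000), bonus_table(1000 + d, 1000));
        return 0;
    }

The shift costs one subtract and one shift per candidate; the table costs a
memory access but allows any decay curve, which is the tradeoff behind
"Speed-wise, 1) is preferable."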
> Another optimization concerns jiffies, which is volatile and therefore
> forces gcc to reload it on every access.
>
>     static inline int goodness(struct task_struct * p,
>                                struct mm_struct *this_mm,
>                                unsigned long jiff)
>
> might be better, with jiffies taken out of the goodness loop.
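
A rough userspace sketch of that suggestion follows; the types and the
selection loop are simplified stand-ins (the mm_struct argument is dropped),
not the 2.4 scheduler. The volatile jiffies counter is read once into a
local, and the snapshot is passed into goodness() so gcc does not reload it
on every iteration:

    #include <stddef.h>

    volatile unsigned long jiffies;  /* stand-in for the kernel tick counter */

    struct task_struct {
        unsigned long cpu_jtime;  /* assumed: cache-warm until this jiffy */
        int nice;
    };

    /* jiffies arrives as the snapshot 'jiff', so the loop below does a
     * single volatile load instead of one per candidate task. */
    static inline int goodness(const struct task_struct *p, unsigned long jiff)
    {
        int weight = 20 - p->nice;

        if (p->cpu_jtime > jiff)  /* cache-warmth bonus, option 1 above */
            weight += (int)((p->cpu_jtime - jiff) >> 1);
        return weight;
    }

    int pick_best(const struct task_struct *rq, size_t n)
    {
        unsigned long jiff = jiffies;  /* the one volatile load */
        int best_weight = -1000, best_idx = -1;
        size_t i;

        for (i = 0; i < n; i++) {
            int w = goodness(&rq[i], jiff);
            if (w > best_weight) {
                best_weight = w;
                best_idx = (int)i;
            }
        }
        return best_idx;
    }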
> Mike, I suggest you use the LatSched patch to 1) see how the scheduler
> is really performing, and 2) understand whether certain tests give
> certain results due to weird distributions.
>
One more. Throughout our MQ evaluation, it was also true that the
overall performance, particularly for large thread counts, was very
sensitive to the goodness function; that's why na_goodness_local was
introduced.

-- Hubertus

On Fri, 2 Nov 2001, Hubertus Franke wrote:
> One more. Throughout our MQ evaluation, it was also true that the
> overall performance, particularly for large thread counts, was very
> sensitive to the goodness function; that's why na_goodness_local was
> introduced.

Yes it is, but the real question is: is it better to save a few clock
cycles in goodness(), or to achieve better process scheduling decisions?
- Davide