This ends up saving a few math operations any time a child
process exits (i.e., whenever sched_exit(task_t *p) is called).
Here's my exact comment on the contents of the patch (left
out of the actual patch):
/*
 * The function below was originally this, for anyone
 * wondering what I changed. I merely used some algebra
 * to factor out a 1 / (EXIT_WEIGHT + 1):
 *
 * p->parent->sleep_avg = p->parent->sleep_avg /
 * (EXIT_WEIGHT + 1) * EXIT_WEIGHT + p->sleep_avg /
 * (EXIT_WEIGHT + 1);
 *
 * The only possible effects I see this having are:
 *
 * 1. fewer math operations for each child process exiting
 * 2. higher accuracy in the value of p->parent->sleep_avg,
 *    due to using only one division instead of two
 *
 */
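To see point 2 concretely, here's a minimal user-space sketch (not kernel
code) of the two expressions. EXIT_WEIGHT is assumed to be 3 here, to match
the 2.6.0-test sched.c, and the sleep_avg values are made up for
illustration:

#include <stdio.h>

#define EXIT_WEIGHT 3

int main(void)
{
	/* made-up example values, just to show the rounding difference */
	unsigned long parent_avg = 700000001UL;
	unsigned long child_avg  = 123456789UL;

	/* original form: two truncating divisions */
	unsigned long old_val = parent_avg / (EXIT_WEIGHT + 1) * EXIT_WEIGHT +
				child_avg / (EXIT_WEIGHT + 1);

	/* factored form: one multiply, one add, a single truncating division */
	unsigned long new_val = (parent_avg * EXIT_WEIGHT + child_avg) /
				(EXIT_WEIGHT + 1);

	/* prints old=555864197 new=555864198: the factored form only
	 * rounds once, so it loses less to integer truncation */
	printf("old=%lu new=%lu\n", old_val, new_val);
	return 0;
}

One thing worth noting about the factored form is that the multiply now
happens before the division, so sleep_avg * EXIT_WEIGHT has to fit in an
unsigned long.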
Patches cleanly (a little offset, but no fuzz) against test9, test9-mms,
test10, and test10-mm1.
Pat Erley
/*************** patch follows ******************/
--- linux-2.6.0-test9/kernel/sched.c 2003-11-23 02:33:34.000000000 -0500
+++ linux/kernel/sched.c 2003-11-23 02:47:29.730649061 -0500
@@ -720,8 +720,8 @@
* the sleep_avg of the parent as well.
*/
if (p->sleep_avg < p->parent->sleep_avg)
- p->parent->sleep_avg = p->parent->sleep_avg /
- (EXIT_WEIGHT + 1) * EXIT_WEIGHT + p->sleep_avg /
+ p->parent->sleep_avg = ( p->parent->sleep_avg *
+ EXIT_WEIGHT + p->sleep_avg ) /
(EXIT_WEIGHT + 1);
}
--
On Wed, Nov 26, 2003 at 12:27:13AM -0500, Pat Erley wrote:
> This ends up saving a few math operations any time a child
> process exits (i.e., whenever sched_exit(task_t *p) is called).
Yes, but does it have any noticeable effect on performance whatsoever?
premature optimization, root of all evil, etc.
Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/
"the nucleus of linux oscillates my world" - gccbot@#offtopic
> > This ends up saving a few math operations any time a child
> > process exits (i.e., whenever sched_exit(task_t *p) is called).
>
> Yes, but does it have any noticeable effect on performance whatsoever?
> premature optimization, root of all evil, etc.
I'm not on a system that I can take down for long enough, or risk
crashing, to check this right now. And, to be honest, I can't think
of anything other than a fork bomb that would do a good job of testing
this. I just remembered helping Con with his O(3)int scheduler hacks,
and he seemed concerned with how many math operations take place in
sched.c, since it is in the core.
If you can suggest a way to test this, I will test it on my system
tomorrow.
Pat Erley
On Wed, Nov 26, 2003 at 01:07:01AM -0500, s0be wrote:
> If you can suggest a way to test this, I will test it on my system
> tomorrow.
Just off the top of my head, you could try something like a kernel
compilation with and without it. I doubt you'll see any improvement,
though.. there are very few places in the kernel where such
micro-optimizations are worth it, IMVHO.
Cheers,
Muli
--
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/
"the nucleus of linux oscillates my world" - gccbot@#offtopic
> > If you can suggest a way to test this, I will test it on my system
> > tomorrow.
>
> Just off the top of my head, you could try something like a kernel
> compilation with and without it. I doubt you'll see any improvement,
> though.. there are very few places in the kernel where such
> micro-optimizations are worth it, IMVHO.
Well, I ran about six compiles each on a patched and an unpatched kernel,
and my 'average' savings were about 1 second per 6-minute compile, which is
negligible. I do wonder whether a make is a good test of this, though. Can
anyone else out there come up with another way to check this with a
non-CPU-hog application? I'd like some other cases to test this in. I mean,
when you do a 'large' number of compiles that each take a half second, I can
see that saving a division and an addition really wouldn't make a big
difference, but in a situation with a large number of short-lived threads in
a child process, it may end up saving a bit more.
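For what it's worth, here is the kind of fork-heavy microbenchmark I have in
mind (just a user-space sketch; the iteration count is arbitrary). It forks a
lot of short-lived children and reaps each one immediately, so the exit path
that calls sched_exit() gets exercised once per child:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	long i, n;
	pid_t pid;

	/* number of short-lived children to create; default is arbitrary */
	n = (argc > 1) ? atol(argv[1]) : 100000;

	for (i = 0; i < n; i++) {
		pid = fork();
		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)
			_exit(0);		/* child exits right away */
		waitpid(pid, NULL, 0);		/* parent reaps it immediately */
	}
	return 0;
}

Timing a few runs of that with and without the patch would at least hit the
exit path far more often than a kernel compile does, though the fork() and
waitpid() overhead will probably still dwarf the saved division.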
Pat Erley