I ran the following lots_of_forks.sh script for several kernels.
http://people.nl.linux.org/~phillips/patches/lots_of_forks.sh
using time -v sh lots_of_forks.sh
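The script itself is not reproduced here; purely for illustration (this is an assumed sketch of the kind of fork-heavy workload such a script exercises, not the actual lots_of_forks.sh), it might look something like:

  #!/bin/sh
  # Illustrative only: spawn many short-lived shells in parallel and
  # wait for them all, so the kernel does a lot of fork/exit work.
  for i in `seq 1 100`; do
      (
          for j in `seq 1 100`; do
              /bin/true    # each iteration forks and execs a trivial child
          done
      ) &
  done
  wait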
The results for 2.4.20-pre2 and 2.4.20-pre2-ac1 are very different.
2.4.20-pre2:
Command being timed: "sh lots_of_forks.sh"
User time (seconds): 18.15
System time (seconds): 24.96
Percent of CPU this job got: 181%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:23.71
2.4.20-pre2-ac1:
Command being timed: "sh lots_of_forks.sh"
User time (seconds): 28.04
System time (seconds): 53.18
Percent of CPU this job got: 187%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:43.32
I ran this test 8 times in a row with no pause between runs.
The numbers below are System time in seconds, as reported by time -v.
The test machine is a 2-way P3; all kernels are SMP builds, configured
identically, with no tweaks to /proc/sys/vm.
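The exact harness isn't shown in the original mail; the runs could be collected with something along these lines (GNU time writes its report to stderr, so that is what gets piped to grep):

  #!/bin/sh
  # Run the benchmark 8 times back to back and pull out the
  # "System time" line that /usr/bin/time -v prints on stderr.
  for run in 1 2 3 4 5 6 7 8; do
      /usr/bin/time -v sh lots_of_forks.sh 2>&1 >/dev/null | grep 'System time'
  done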
Run   2.4.20-pre2   2.4.20-pre2-ac1   2.5.28   2.5.31
  1      24.96           53.18         39.91    37.04
  2      24.92           52.42         44.91    45.88
  3      24.69           50.69         48.63    44.89
  4      24.39           51.9          58.12    55.8
  5      24.72           46.14         49.81    43.18
  6      24.34           47.99         57.62    40.93
  7      24.64           52.33         50.42    47.27
  8      24.53           52.84         45       36.49
This may be a very unfair benchmark. Or it may show something
worth investigating further.
Steven
Steven Cole wrote:
>
> I ran the following lots_of_forks.sh script for several kernels.
>
> http://people.nl.linux.org/~phillips/patches/lots_of_forks.sh
>
> ...
> 2.4.20-pre2 2.4.20-pre2-ac1 2.5.28 2.5.31
>
> 1 24.96 53.18 39.91 37.04
> 2 24.92 52.42 44.91 45.88
> 3 24.69 50.69 48.63 44.89
> 4 24.39 51.9 58.12 55.8
> 5 24.72 46.14 49.81 43.18
> 6 24.34 47.99 57.62 40.93
> 7 24.64 52.33 50.42 47.27
> 8 24.53 52.84 45 36.49
>
That's the page_add_rmap/page_remove_rmap thing. Could you retest
2.5.31 with
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.31/stuff-sent-to-linus/everything.gz
applied?
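A typical way to apply such a rolled-up patch against a clean 2.5.31 tree would be (the source path is illustrative):

  cd ~/src/linux-2.5.31
  wget http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.31/stuff-sent-to-linus/everything.gz
  gzip -dc everything.gz | patch -p1
  # then rebuild and reboot into the patched kernel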
On Wed, 2002-08-14 at 13:36, Andrew Morton wrote:
> Steven Cole wrote:
> >
> > I ran the following lots_of_forks.sh script for several kernels.
> >
> > http://people.nl.linux.org/~phillips/patches/lots_of_forks.sh
> >
> > ...
> > 2.4.20-pre2 2.4.20-pre2-ac1 2.5.28 2.5.31
> >
> > 1 24.96 53.18 39.91 37.04
> > 2 24.92 52.42 44.91 45.88
> > 3 24.69 50.69 48.63 44.89
> > 4 24.39 51.9 58.12 55.8
> > 5 24.72 46.14 49.81 43.18
> > 6 24.34 47.99 57.62 40.93
> > 7 24.64 52.33 50.42 47.27
> > 8 24.53 52.84 45 36.49
> >
>
> That's the page_add_rmap/page_remove_rmap thing. Could you retest
> 2.5.31 with
> http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.31/stuff-sent-to-linus/everything.gz
> applied?
Yes, here are the results:
Run   2.5.31-vanilla   2.5.31-akpm-all
  1       37.04              35.98
  2       45.88              36.94
  3       44.89              35.23
  4       55.8               38.49
  5       43.18              51.43
  6       40.93              46.3
  7       47.27              46.94
  8       36.49              47.19
Steven
On Wed, 2002-08-14 at 20:17, Steven Cole wrote:
> I ran the following lots_of_forks.sh script for several kernels.
>
> http://people.nl.linux.org/~phillips/patches/lots_of_forks.sh
>
> using time -v sh lots_of_forks.sh
>
> The results for 2.4.20-pre2 and 2.4.20-pre2-ac1 are very different.
I'd expect that to be the case. Rmap is a huge win for many things, but
it's not a good win on fork times. The question is whether fork bombs
dominate your workload, and what the tradeoffs are between saner VM
behaviour and less accounting overhead.

It's not clear what the answer is.