2001-11-04 14:09:04

by Lorenzo Allegrucci

Subject: VM: qsbench numbers


I'll begin with the latest Linus kernel: three runs each, with kswapd CPU
time appended.
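
For reference, the workload is roughly this: qsbench allocates one large
array of random ints per process and qsorts it. With -n 90000000 the array
is about 360MB against 256MB of RAM, so the runs are dominated by paging
(the "pf" field in the time output below is page faults). A simplified
single-process sketch (the real benchmark's option parsing and
random-number generation differ in detail):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
	int x = *(const int *)a, y = *(const int *)b;
	return (x > y) - (x < y);
}

int main(void)
{
	long n = 90000000;             /* -n: number of ints to sort */
	unsigned int seed = 140175100; /* -s: PRNG seed */
	int *v = malloc(n * sizeof(int));

	if (!v) {
		perror("malloc");
		return 1;
	}
	srand(seed);
	for (long i = 0; i < n; i++)   /* ~360MB: far more than RAM */
		v[i] = rand();
	qsort(v, n, sizeof(int), cmp); /* touches every page, repeatedly */
	printf("sorted %ld ints\n", n);
	free(v);
	return 0;
}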

Linux-2.4.14-pre8:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.270u 7.330s 2:33.29 50.6% 0+0k 0+0io 19670pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.090u 6.890s 2:32.29 50.5% 0+0k 0+0io 18337pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.510u 6.660s 2:29.29 51.6% 0+0k 0+0io 18463pf+0w
0:01 kswapd

Double swap space (from 200M to 400M):
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.510u 6.390s 2:24.39 53.2% 0+0k 0+0io 17902pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.600u 7.600s 2:56.97 44.1% 0+0k 0+0io 23599pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.370u 7.340s 2:50.26 45.6% 0+0k 0+0io 22295pf+0w
0:03 kswapd

This is interesting.
Runs 2 and 3 are slower even with more swap space, and the new
VM seems to have lost its proverbial performance stability.

Old results below, for performance and behaviour comparisons.

Linux-2.4.14-pre7:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 224 (qsbench).
17.770u 3.160s 1:19.95 26.1% 0+0k 0+0io 13294pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 226 (qsbench).
26.030u 15.530s 1:39.39 41.8% 0+0k 0+0io 13283pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 228 (qsbench).
29.350u 41.360s 2:27.63 47.8% 0+0k 0+0io 15214pf+0w
0:12 kswapd

Double swap space:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.530u 2.920s 2:16.35 53.8% 0+0k 0+0io 17575pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.510u 3.160s 2:19.79 52.7% 0+0k 0+0io 17639pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.540u 3.270s 2:17.39 53.7% 0+0k 0+0io 17544pf+0w
0:01 kswapd


Linux-2.4.14-pre6:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 224 (qsbench).
69.890u 3.430s 2:12.48 55.3% 0+0k 0+0io 16374pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 226 (qsbench).
69.550u 2.990s 2:11.31 55.2% 0+0k 0+0io 15374pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
Out of Memory: Killed process 228 (qsbench).
69.480u 3.100s 2:13.33 54.4% 0+0k 0+0io 15950pf+0w
0:01 kswapd


Linux-2.4.14-pre5:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.340u 3.450s 2:13.62 55.2% 0+0k 0+0io 16829pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.590u 2.940s 2:15.48 54.2% 0+0k 0+0io 17182pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.140u 3.480s 2:14.66 54.6% 0+0k 0+0io 17122pf+0w
0:01 kswapd

2.4.14-pre5 has the best VM for qsbench :)


Linux-2.4.13:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.260u 2.150s 2:20.68 52.1% 0+0k 0+0io 20173pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.020u 2.050s 2:18.78 52.6% 0+0k 0+0io 20353pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.810u 2.080s 2:19.50 52.2% 0+0k 0+0io 20413pf+0w
0:06 kswapd


Linux-2.4.11:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.020u 1.650s 2:20.74 51.6% 0+0k 0+0io 10652pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.070u 1.650s 2:21.51 51.3% 0+0k 0+0io 10499pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.790u 1.670s 2:21.01 51.3% 0+0k 0+0io 10641pf+0w
0:04 kswapd


Linux-2.4.10:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.410u 1.870s 2:45.25 43.7% 0+0k 0+0io 16088pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.910u 1.840s 2:45.16 44.0% 0+0k 0+0io 16338pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.310u 1.910s 2:45.20 44.3% 0+0k 0+0io 16211pf+0w
0:03 kswapd


Linux-2.4.13-ac4:
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.800u 3.470s 3:04.15 40.3% 0+0k 0+0io 13916pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.530u 3.930s 3:13.90 38.9% 0+0k 0+0io 14101pf+0w
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.260u 3.640s 3:03.54 40.8% 0+0k 0+0io 13047pf+0w
0:08 kswapd



--
Lorenzo


2001-11-04 17:21:38

by Linus Torvalds

Subject: Re: VM: qsbench numbers


On Sun, 4 Nov 2001, Lorenzo Allegrucci wrote:
>
> I'll begin with the latest Linus kernel: three runs each, with kswapd CPU
> time appended.

It's interesting how your numbers degrade with more swap space. That,
together with the fact that the "more swap space" case also degrades the
second time around, seems to imply that we leave swap-cache pages around
after they are no longer used.

Does "free" after a run has completed imply that there's still lots of
swap used? We _should_ have gotten rid of it at "free_swap_and_cache()"
time, but if we missed it..

What happens if you make the "vm_swap_full()" define in <linux/swap.h> be
unconditionally defined to "1"? That should make us be more aggressive
about freeing those swap-cache pages, and it would be interesting to see
if it also stabilizes your numbers.
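
IOW, something like this in <linux/swap.h> (a sketch; the stock definition
here is quoted from memory, so double-check it against your tree):

/* stock definition: swap counts as "full" once it is more than half used */
#define vm_swap_full() (nr_swap_pages*2 < total_swap_pages)

/* the experiment: treat swap as always full, so swap-cache pages
 * are dropped eagerly instead of being left around */
#define vm_swap_full() 1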

Linus

2001-11-04 21:15:38

by Lorenzo Allegrucci

Subject: Re: VM: qsbench numbers

>Does "free" after a run has completed imply that there's still lots of
>swap used? We _should_ have gotten rid of it at "free_swap_and_cache()"
>time, but if we missed it..

70.590u 7.640s 2:31.06 51.7% 0+0k 0+0io 19036pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984       6008     249976          0        100       1096
-/+ buffers/cache:       4812     251172
Swap:       195512       5080     190432

>What happens if you make the "vm_swap_full()" define in <linux/swap.h> be
>unconditionally defined to "1"?

70.530u 7.290s 2:33.26 50.7% 0+0k 0+0io 19689pf+0w
70.830u 7.100s 2:29.52 52.1% 0+0k 0+0io 18488pf+0w
70.560u 6.840s 2:28.66 52.0% 0+0k 0+0io 18203pf+0w

Performance improved and the numbers stabilized.



--
Lorenzo

2001-11-05 01:07:02

by Linus Torvalds

Subject: Re: VM: qsbench numbers


On Sun, 4 Nov 2001, Lorenzo Allegrucci wrote:
> >
> >Does "free" after a run has completed imply that there's still lots of
> >swap used? We _should_ have gotten rid of it at "free_swap_and_cache()"
> >time, but if we missed it..
>
> 70.590u 7.640s 2:31.06 51.7% 0+0k 0+0io 19036pf+0w
> lenstra:~/src/qsort> free
>              total       used       free     shared    buffers     cached
> Mem:        255984       6008     249976          0        100       1096
> -/+ buffers/cache:       4812     251172
> Swap:       195512       5080     190432

That's not a noticeable amount, and is perfectly explainable by simply
having daemons that got swapped out with truly inactive pages. So a
swap-cache leak does not seem to be the reason for the unstable numbers.

> >What happens if you make the "vm_swap_full()" define in <linux/swap.h> be
> >unconditionally defined to "1"?
>
> 70.530u 7.290s 2:33.26 50.7% 0+0k 0+0io 19689pf+0w
> 70.830u 7.100s 2:29.52 52.1% 0+0k 0+0io 18488pf+0w
> 70.560u 6.840s 2:28.66 52.0% 0+0k 0+0io 18203pf+0w
>
> Performance improved and the numbers stabilized.

Indeed.

Mind doing some more tests? In particular, the "vm_swap_full()" macro is
only used in two places: mm/memory.c and mm/swapfile.c. Are you willing to
test _which_ one (or whether it is both together) seems to bring on the
unstable numbers?

Linus

2001-11-05 15:28:02

by Lorenzo Allegrucci

Subject: Re: VM: qsbench numbers


[Forgot to CC linux-kernel people, sorry]

At 17.03 04/11/01 -0800, you wrote:
>
>On Sun, 4 Nov 2001, Lorenzo Allegrucci wrote:
>> >
>> >Does "free" after a run has completed imply that there's still lots of
>> >swap used? We _should_ have gotten rid of it at "free_swap_and_cache()"
>> >time, but if we missed it..
>>
>> 70.590u 7.640s 2:31.06 51.7% 0+0k 0+0io 19036pf+0w
>> lenstra:~/src/qsort> free
>>              total       used       free     shared    buffers     cached
>> Mem:        255984       6008     249976          0        100       1096
>> -/+ buffers/cache:       4812     251172
>> Swap:       195512       5080     190432
>
>That's not a noticeable amount, and is perfectly explainable by simply
>having daemons that got swapped out with truly inactive pages. So a
>swap-cache leak does not seem to be the reason for the unstable numbers.
>
>> >What happens if you make the "vm_swap_full()" define in <linux/swap.h> be
>> >unconditionally defined to "1"?
>>
>> 70.530u 7.290s 2:33.26 50.7% 0+0k 0+0io 19689pf+0w
>> 70.830u 7.100s 2:29.52 52.1% 0+0k 0+0io 18488pf+0w
>> 70.560u 6.840s 2:28.66 52.0% 0+0k 0+0io 18203pf+0w
>>
>> Performance improved and the numbers stabilized.
>
>Indeed.
>
>Mind doing some more tests? In particular, the "vm_swap_full()" macro is
>only used in two places: mm/memory.c and mm/swapfile.c. Are you willing to
>test _which_ one (or whether it is both together) seems to bring on the
>unstable numbers?

mm/memory.c:
/* in this file only: pretend swap is always full */
#undef vm_swap_full
#define vm_swap_full() 1

swap=200M
70.480u 7.440s 2:35.74 50.0% 0+0k 0+0io 19897pf+0w
70.640u 7.280s 2:28.87 52.3% 0+0k 0+0io 18453pf+0w
70.750u 7.170s 2:36.26 49.8% 0+0k 0+0io 19719pf+0w

swap=400M
70.120u 6.940s 2:29.55 51.5% 0+0k 0+0io 18598pf+0w
70.160u 7.320s 2:37.34 49.2% 0+0k 0+0io 19720pf+0w
70.020u 11.310s 3:15.09 41.6% 0+0k 0+0io 28330pf+0w


mm/memory.c:
/* #undef vm_swap_full */
/* #define vm_swap_full() 1 */

mm/swapfile.c:
/* in this file only: pretend swap is always full */
#undef vm_swap_full
#define vm_swap_full() 1

swap=200M
69.610u 7.830s 2:33.47 50.4% 0+0k 0+0io 19630pf+0w
70.260u 7.810s 2:54.06 44.8% 0+0k 0+0io 22816pf+0w
70.420u 7.420s 2:42.71 47.8% 0+0k 0+0io 20655pf+0w

swap=400M
70.240u 6.980s 2:40.37 48.1% 0+0k 0+0io 20437pf+0w
70.430u 6.450s 2:25.36 52.8% 0+0k 0+0io 18400pf+0w
70.270u 6.420s 2:25.52 52.7% 0+0k 0+0io 18267pf+0w
70.850u 6.530s 2:35.82 49.6% 0+0k 0+0io 19481pf+0w

These are bad numbers, but the worst is still to come..

I also repeated the original test, with the "vm_swap_full()" define in
<linux/swap.h> unconditionally defined to "1".

swap=200M
70.510u 7.510s 2:33.91 50.6% 0+0k 0+0io 19584pf+0w
70.100u 7.620s 2:42.20 47.9% 0+0k 0+0io 20562pf+0w
69.840u 7.910s 2:51.61 45.3% 0+0k 0+0io 22541pf+0w
70.370u 7.910s 2:52.06 45.4% 0+0k 0+0io 22793pf+0w

swap=400M
70.560u 7.580s 2:37.38 49.6% 0+0k 0+0io 19962pf+0w
70.120u 7.560s 2:45.04 47.0% 0+0k 0+0io 20403pf+0w
70.390u 7.130s 2:29.82 51.7% 0+0k 0+0io 18159pf+0w <-
70.080u 7.190s 2:29.63 51.6% 0+0k 0+0io 18580pf+0w <-
70.300u 6.810s 2:29.70 51.5% 0+0k 0+0io 18267pf+0w <-
69.770u 7.670s 2:49.68 45.6% 0+0k 0+0io 20980pf+0w

Well, the numbers are unstable again.

Either I made some error patching the kernel, or my previous test:
>> 70.530u 7.290s 2:33.26 50.7% 0+0k 0+0io 19689pf+0w
>> 70.830u 7.100s 2:29.52 52.1% 0+0k 0+0io 18488pf+0w
>> 70.560u 6.840s 2:28.66 52.0% 0+0k 0+0io 18203pf+0w
was a statistical "fluctuation".
I don't know, and I would be very grateful if somebody could
confirm or deny these results.
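
For what it's worth, one way to put a number on "unstable" is the sample
mean and standard deviation of the elapsed times. A throwaway C snippet
with the six swap=400M runs above hard-coded (compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
	/* elapsed seconds for the six swap=400M runs above */
	double t[] = { 157.38, 165.04, 149.82, 149.63, 149.70, 169.68 };
	int n = sizeof(t) / sizeof(t[0]);
	double sum = 0.0, var = 0.0, mean;

	for (int i = 0; i < n; i++)
		sum += t[i];
	mean = sum / n;
	for (int i = 0; i < n; i++)
		var += (t[i] - mean) * (t[i] - mean);
	printf("mean %.1fs, stddev %.1fs\n", mean, sqrt(var / (n - 1)));
	return 0;
}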


--
Lorenzo