2001-04-28 18:24:22

by Frank de Lange

Subject: Severe thrashing in 2.4.4

Hi'all,

There seems to be something amiss with 2.4.4 wrt. memory management, which
leads to severe thrashing under *light* load on my SMP box (Abit BP-6 with
Maciej's IO-APIC patch). I've had my box become almost unusable twice within a
4-hour period now. One of those times it was running a compile session, the
other time it ran nothing besides some X-terminals and xmms.

There's nothing in the log which seems to be related to this thrashing. There
were no memory-intensive tasks running during the thrashing (X was the biggest,
at 40 MB, including 32 MB for the video card).

When thrashing, the system was running bdflush and kswapd, both of which used
between 30% and 60% CPU. System load was between 8 and 11, but the actual load
felt more like 50-60...

I have not found anything yet which triggers the problem, and there's nothing
in the logs (and nothing sensible to be seen through sysrq). Judging from the
thread on the list about the problems with the fork patch (which seems to hit
people with SMP systems), this might be related, but that's only guessing.

Any ideas? Things to try?

Cheers//Frank
--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]


2001-04-29 16:18:28

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

OK,

I seem to have found the culprit, although I'm still in the dark as to the
'why' and 'how'.

First, some info:

2.4.4 with Maciej's IO-APIC patch
Abit BP-6, dual Celeron466@466
256MB RAM

So, 'yes, SMP...'

Running 'nget v0.7' (a command-line NNTP 'grabber') on 2.4.4 leads to massive
amounts of memory disappearing into thin air. I'm currently running a single
instance of this app, and I'm seeing the memory drain away. The system has 256
MB of physical memory, and access to 500 MB of swap. Swap is not really being
used now, but it soon will be. Have a look at the current /proc/meminfo:

[frank@behemoth mozilla]$ cat /proc/meminfo
total: used: free: shared: buffers: cached:
Mem: 262049792 259854336 2195456 0 1773568 31211520
Swap: 511926272 4096 511922176
MemTotal: 255908 kB
MemFree: 2144 kB
MemShared: 0 kB
Buffers: 1732 kB
Cached: 30480 kB
Active: 26944 kB
Inact_dirty: 2384 kB
Inact_clean: 2884 kB
Inact_target: 984 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255908 kB
LowFree: 2144 kB
SwapTotal: 499928 kB
SwapFree: 499924 kB

Also look at the top 10 memory users:

[frank@behemoth mp3]$ ps -xao rss,vsz,pid,command|sort -rn|head
6388 55320 1310 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
3604 8972 1438 gnome-terminal -t [email protected]
3116 8356 1405 panel --sm-config-prefix /panel.d/default-ZTNCVS/ --sm-client-i
3084 5484 1401 sawfish --sm-client-id 11c0a80105000098495218600000010240115 --
2940 8388 1696 gnome-terminal --tclass=Remote -x ssh -v ostrogoth.localnet
2748 7536 1432 mini_commander_applet --activate-goad-server mini-commander_app
2692 7656 1413 tasklist_applet --activate-goad-server tasklist_applet --goad-f
2536 7588 1411 deskguide_applet --activate-goad-server deskguide_applet --goad
2320 7388 1383 /usr/bin/gnome-session
2232 7660 1421 multiload_applet --activate-goad-server multiload_applet --goad

(the rest is mostly small stuff, < 1 MB, total of 89 processes)

[ swap is being hit at this moment... ]

Where did my memory go?

A few minutes later, with the same process load (minimal), a look at
/proc/meminfo:

[frank@behemoth mozilla]$ cat /proc/meminfo
total: used: free: shared: buffers: cached:
Mem: 262049792 260108288 1941504 0 1380352 11689984
Swap: 511926272 34279424 477646848
MemTotal: 255908 kB
MemFree: 1896 kB
MemShared: 0 kB
Buffers: 1348 kB
Cached: 11416 kB
Active: 9164 kB
Inact_dirty: 1240 kB
Inact_clean: 2360 kB
Inact_target: 996 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255908 kB
LowFree: 1896 kB
SwapTotal: 499928 kB
SwapFree: 466452 kB

Already 34MB in swap...

Start xmms, and this is the result:

[frank@behemoth mozilla]$ cat /proc/meminfo
total: used: free: shared: buffers: cached:
Mem: 262049792 260411392 1638400 0 1380352 10063872
Swap: 511926272 38449152 473477120
MemTotal: 255908 kB
MemFree: 1600 kB
MemShared: 0 kB
Buffers: 1348 kB
Cached: 9828 kB
Active: 6400 kB
Inact_dirty: 1236 kB
Inact_clean: 3540 kB
Inact_target: 2128 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255908 kB
LowFree: 1600 kB
SwapTotal: 499928 kB
SwapFree: 462380 kB

(top 10 memory users)

[frank@behemoth mp3]$ ps -xao rss,vsz,pid,command|sort -rn|head
2340 56604 1310 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
1592 5484 1401 sawfish --sm-client-id 11c0a80105000098495218600000010240115 --
1452 33784 1780 xmms
1436 33784 1785 xmms
1436 33784 1782 xmms
1436 33784 1781 xmms
1296 9000 1438 gnome-terminal -t [email protected]
1184 2936 1790 ps -xao rss,vsz,pid,command
1060 7656 1413 tasklist_applet --activate-goad-server tasklist_applet --goad-f
1008 8388 1696 gnome-terminal --tclass=Remote -x ssh -v ostrogoth.localnet

The memory is going somewhere, but nowhere I can find or pinpoint: not in
buffers, not in the cache, not in any process I can see in ps, top or /proc.
And it does not come back either. Shooting down the nget process and xmms
frees up some swap, but the disappeared memory stays that way, as can be seen
from this (final) /proc/meminfo / ps combo:

[frank@behemoth mozilla]$ cat /proc/meminfo
total: used: free: shared: buffers: cached:
Mem: 262049792 260411392 1638400 0 1388544 8568832
Swap: 511926272 36360192 475566080
MemTotal: 255908 kB
MemFree: 1600 kB
MemShared: 0 kB
Buffers: 1356 kB
Cached: 8368 kB
Active: 6284 kB
Inact_dirty: 1236 kB
Inact_clean: 2204 kB
Inact_target: 632 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255908 kB
LowFree: 1600 kB
SwapTotal: 499928 kB
SwapFree: 464420 kB

[frank@behemoth mp3]$ ps -xao rss,vsz,pid,command|sort -rn|head
2244 55304 1310 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
1644 5484 1401 sawfish --sm-client-id 11c0a80105000098495218600000010240115 --
1252 9008 1438 gnome-terminal -t [email protected]
1172 2924 1796 ps -xao rss,vsz,pid,command
956 7656 1413 tasklist_applet --activate-goad-server tasklist_applet --goad-f
944 8388 1696 gnome-terminal --tclass=Remote -x ssh -v ostrogoth.localnet
776 7588 1411 deskguide_applet --activate-goad-server deskguide_applet --goad
556 3012 1797 sort -rn
504 7436 1419 asclock_applet --activate-goad-server asclock_applet --goad-fd
464 8356 1405 panel --sm-config-prefix /panel.d/default-ZTNCVS/ --sm-client-i

[ system just started thrashing again, had to sysrq-reboot ]

So, there's something wrong here... Wish I knew what...

2.4.3 runs fine on the same box with the same apps.

Any clues?

Cheers//Frank
--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-29 16:27:58

by Alexander Viro

Subject: Re: Severe thrashing in 2.4.4



On Sun, 29 Apr 2001, Frank de Lange wrote:

> Running 'nget v0.7' (a command-line NNTP 'grabber') on 2.4.4 leads to massive
> amounts of memory disappearing into thin air. I'm currently running a single
> instance of this app, and I'm seeing the memory drain away. The system has 256
> MB of physical memory, and access to 500 MB of swap. Swap is not really being
> used now, but it soon will be. Have a look at the current /proc/meminfo:
>
> [frank@behemoth mozilla]$ cat /proc/meminfo
> total: used: free: shared: buffers: cached:
> Mem: 262049792 259854336 2195456 0 1773568 31211520
> Swap: 511926272 4096 511922176
> MemTotal: 255908 kB
> MemFree: 2144 kB
> MemShared: 0 kB
> Buffers: 1732 kB
> Cached: 30480 kB
> Active: 26944 kB
> Inact_dirty: 2384 kB
> Inact_clean: 2884 kB
> Inact_target: 984 kB
> HighTotal: 0 kB
> HighFree: 0 kB
> LowTotal: 255908 kB
> LowFree: 2144 kB
> SwapTotal: 499928 kB
> SwapFree: 499924 kB

What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
has a bug in prune_icache() that makes it underestimate the amount of
freeable inodes.

2001-04-29 17:47:01

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> has a bug in prune_icache() that makes it underestimate the amount of
> freeable inodes.

Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
allocate skbuff_head_cache entries, not how to free them. Here's the last
/proc/slabinfo entry before I sysRQ'd the box:

slabinfo - version: 1.1 (SMP)
kmem_cache 68 68 232 4 4 1 : 252 126
nfs_read_data 10 10 384 1 1 1 : 124 62
nfs_write_data 10 10 384 1 1 1 : 124 62
nfs_page 40 40 96 1 1 1 : 252 126
urb_priv 1 113 32 1 1 1 : 252 126
uhci_desc 1074 1239 64 21 21 1 : 252 126
ip_mrt_cache 0 0 96 0 0 1 : 252 126
tcp_tw_bucket 0 0 96 0 0 1 : 252 126
tcp_bind_bucket 16 226 32 2 2 1 : 252 126
tcp_open_request 0 0 64 0 0 1 : 252 126
inet_peer_cache 1 59 64 1 1 1 : 252 126
ip_fib_hash 20 226 32 2 2 1 : 252 126
ip_dst_cache 13 48 160 2 2 1 : 252 126
arp_cache 2 60 128 2 2 1 : 252 126
blkdev_requests 1024 1040 96 26 26 1 : 252 126
dnotify cache 0 0 20 0 0 1 : 252 126
file lock cache 126 126 92 3 3 1 : 252 126
fasync cache 3 202 16 1 1 1 : 252 126
uid_cache 5 226 32 2 2 1 : 252 126
skbuff_head_cache 341136 341136 160 14214 14214 1 : 252 126
sock 201 207 832 23 23 2 : 124 62
inode_cache 741 1640 480 205 205 1 : 124 62
bdev_cache 7 59 64 1 1 1 : 252 126
sigqueue 58 58 132 2 2 1 : 252 126
dentry_cache 790 3240 128 108 108 1 : 252 126
dquot 0 0 96 0 0 1 : 252 126
filp 1825 1880 96 47 47 1 : 252 126
names_cache 9 9 4096 9 9 1 : 60 30
buffer_head 891 2880 96 72 72 1 : 252 126
mm_struct 180 180 128 6 6 1 : 252 126
vm_area_struct 4033 4248 64 72 72 1 : 252 126
fs_cache 207 236 64 4 4 1 : 252 126
files_cache 132 135 416 15 15 1 : 124 62
signal_act 108 111 1312 37 37 1 : 60 30
size-131072(DMA) 0 0 131072 0 0 32 : 0 0
size-131072 0 0 131072 0 0 32 : 0 0
size-65536(DMA) 0 0 65536 0 0 16 : 0 0
size-65536 0 0 65536 0 0 16 : 0 0
size-32768(DMA) 0 0 32768 0 0 8 : 0 0
size-32768 3 3 32768 3 3 8 : 0 0
size-16384(DMA) 0 0 16384 0 0 4 : 0 0
size-16384 8 9 16384 8 9 4 : 0 0
size-8192(DMA) 0 0 8192 0 0 2 : 0 0
size-8192 1 1 8192 1 1 2 : 0 0
size-4096(DMA) 0 0 4096 0 0 1 : 60 30
size-4096 73 73 4096 73 73 1 : 60 30
size-2048(DMA) 0 0 2048 0 0 1 : 60 30
size-2048 66338 66338 2048 33169 33169 1 : 60 30
size-1024(DMA) 0 0 1024 0 0 1 : 124 62
size-1024 6372 6372 1024 1593 1593 1 : 124 62
size-512(DMA) 0 0 512 0 0 1 : 124 62
size-512 22776 22776 512 2847 2847 1 : 124 62
size-256(DMA) 0 0 256 0 0 1 : 252 126
size-256 75300 75300 256 5020 5020 1 : 252 126
size-128(DMA) 0 0 128 0 0 1 : 252 126
size-128 1309 1410 128 47 47 1 : 252 126
size-64(DMA) 0 0 64 0 0 1 : 252 126
size-64 4838 4838 64 82 82 1 : 252 126
size-32(DMA) 0 0 32 0 0 1 : 252 126
size-32 33900 33900 32 300 300 1 : 252 126
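
(Quick back-of-the-envelope sum, reading the slabinfo columns as objects x
object size: 341136 x 160 bytes is roughly 52 MB in skbuff heads alone,
66338 x 2048 is another ~130 MB, and the size-1024/size-512/size-256 caches
add ~35 MB more. That accounts for pretty much all of the memory that has
gone missing.)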

From the same moment, the contents of /proc/meminfo:

total: used: free: shared: buffers: cached:
Mem: 262049792 260423680 1626112 0 1351680 6348800
Swap: 511926272 39727104 472199168
MemTotal: 255908 kB
MemFree: 1588 kB
MemShared: 0 kB
Buffers: 1320 kB
Cached: 6200 kB
Active: 2648 kB
Inact_dirty: 1260 kB
Inact_clean: 3604 kB
Inact_target: 3960 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 255908 kB
LowFree: 1588 kB
SwapTotal: 499928 kB
SwapFree: 461132 kB

And the top-10 memory hogs:

892 54696 2279 /usr/bin/X11/XFree86 -depth 16 -gamma 1.6 -auth /var/lib/gdm/:0
632 2932 11363 ps -ax -o rss,vsz,pid,command
600 8988 2785 gnome-terminal -t [email protected]
368 7660 2685 multiload_applet --activate-goad-server multiload_applet --goad
312 2100 4731 top
308 7528 2675 gnomexmms --activate-goad-server gnomexmms --goad-fd 10
244 7660 2701 multiload_applet --activate-goad-server multiload_applet --goad
240 7436 2682 asclock_applet --activate-goad-server asclock_applet --goad-fd
4 11740 1110 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=my
4 11740 1109 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=my

I've got a ton of logging from /proc/slabinfo, one entry a second. If someone
wants to peruse it, you can find it here:

http://www.unternet.org/~frank/projects/linux2404/2404-meminfo/

The .diff files are diffs between 'current' and 'previous' (one second
interval) snapshots. slabinfo and meminfo are self-explanatory I guess. The
'memhogs' entry is the top-10 memory users list for each second of logging.

Cheers//Frank
--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-29 17:59:16

by Alexander Viro

Subject: Re: Severe thrashing in 2.4.4



On Sun, 29 Apr 2001, Frank de Lange wrote:

> On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> > What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> > has a bug in prune_icache() that makes it underestimate the amount of
> > freeable inodes.
>
> Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
> allocate skbuff_head_cache entries, not how to free them. Here's the last
> /proc/slabinfo entry before I sysRQ'd the box:

> skbuff_head_cache 341136 341136 160 14214 14214 1 : 252 126
> size-2048 66338 66338 2048 33169 33169 1 : 60 30

Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
in 1K--2K range. From your logs it looks like the thing never shrinks and
grows pretty fast...

2001-04-29 18:02:37

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

Yeah, those as well. I kinda guessed they were related...

Cheers//Frank
--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-29 18:05:46

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

Same goes for buffer_head:

buffer_head 44236 48520 96 1188 1213 1 : 252 126

quite high I think. 2.4.3 shows this, after about the same time and activity:

buffer_head 891 2880 96 72 72 1 : 252 126
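
(In bytes that is about 44236 x 96 = ~4 MB of buffer heads, versus well under
300 kB on 2.4.3 -- small next to the skbuff numbers, but the same
"only grows, never shrinks" pattern.)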

Cheers//Frank

--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-29 22:07:47

by Manfred Spraul

Subject: Re: Severe thrashing in 2.4.4

> On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> > Hmm... I'd say that you also have a leak in kmalloc()'ed stuff -
> > something in 1K--2K range. From your logs it looks like the
> > thing never shrinks and grows pretty fast...

You could enable STATS in mm/slab.c, then the number of alloc and free
calls would be printed in /proc/slabinfo.
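
(If I remember correctly STATS is just a compile-time define near the top of
mm/slab.c -- the exact surrounding lines may differ, but the change is on the
order of:

        /* mm/slab.c, near the top: turn on the statistics code so that
         * per-cache alloc/free counters show up in /proc/slabinfo */
        #define STATS   1       /* default is 0 */

then rebuild; the extra counters should appear as additional columns in
/proc/slabinfo.)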

> Yeah, those as well. I kinda guessed they were related...

Could you check /proc/sys/net/core/hot_list_length and skb_head_pool
(not available in /proc, use gdb --core /proc/kcore)? I doubt that this
causes your problems, but the skb_head code uses a special per-cpu
linked list for even faster allocations.
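
(Roughly -- this is a simplified sketch from memory, not the exact 2.4.4
source -- the idea is that freed skb heads go onto a small per-cpu list
instead of straight back to the slab, and allocations try that list first:

        /* simplified illustration of the per-cpu skb head "hot list" */
        static struct sk_buff_head skb_head_pool[NR_CPUS];
        int sysctl_hot_list_len = 128;  /* /proc/sys/net/core/hot_list_length */

        static struct sk_buff *skb_head_from_pool(void)
        {
                struct sk_buff_head *list = &skb_head_pool[smp_processor_id()];

                /* reuse a cached head if one is available */
                return skb_queue_len(list) ? __skb_dequeue(list) : NULL;
        }

        static void skb_head_to_pool(struct sk_buff *skb)
        {
                struct sk_buff_head *list = &skb_head_pool[smp_processor_id()];

                /* keep at most hot_list_length heads per cpu, free the rest */
                if (skb_queue_len(list) < sysctl_hot_list_len)
                        __skb_queue_head(list, skb);
                else
                        kmem_cache_free(skbuff_head_cache, skb);
        }

At most a few hundred heads per cpu can hide there, so it cannot explain the
340000+ entries you are seeing, but it is easy to rule out.)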

Which network card do you use? Perhaps a bug in the zero-copy code of
the driver?

--
Manfred

2001-04-29 22:15:17

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

On Mon, Apr 30, 2001 at 12:06:52AM +0200, Manfred Spraul wrote:
> You could enable STATS in mm/slab.c, then the number of alloc and free
> calls would be printed in /proc/slabinfo.
>
> > Yeah, those as well. I kinda guessed they were related...
>
> Could you check /proc/sys/net/core/hot_list_length and skb_head_pool
> (not available in /proc, use gdb --core /proc/kcore)? I doubt that this
> causes your problems, but the skb_head code uses a special per-cpu
> linked list for even faster allocations.
>
> Which network card do you use? Perhaps a bug in the zero-copy code of
> the driver?

I'll give it a go once I reboot into 2.4.4 again (now in 2.4.3 to get some
'work' done). Using the dreaded ne2k cards (two of them), which have caused me
more than one headache already...

I'll have a look at the driver for these cards.

Cheers//Frank

--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-30 00:25:55

by Frank de Lange

Subject: Re: Severe thrashing in 2.4.4

On Sun, Apr 29, 2001 at 04:45:00PM -0700, David S. Miller wrote:
>
> Frank de Lange writes:
> > What do you want me to check for? /proc/net/netstat is a rather busy place...
>
> Just show us the contents after you reproduce the problem.
> We just want to see if a certain event is being triggered.

Hm, 'twould be nice to know WHAT to look for (if only for educational
purposes), but ok:

http://www.unternet.org/~frank/projects/linux2404/2404-meminfo/

It contains an extra set of files, named p_n_netstat.*. Same as before, the
.diff files contain one-second-interval diffs.

Cheers//Frank
--
WWWWW _______________________
## o o\ / Frank de Lange \
}# \| / \
##---# _/ <Hacker for Hire> \
#### \ +31-320-252965 /
\ [email protected] /
-------------------------
[ "Omnis enim res, quae dando non deficit, dum habetur
et non datur, nondum habetur, quomodo habenda est." ]

2001-04-30 02:01:01

by David Miller

Subject: Re: Severe thrashing in 2.4.4


Frank de Lange writes:
> Hm, 'twould be nice to know WHAT to look for (if only for educational
> purposes), but ok:

We're looking to see if queue collapsing is occurring on receive.

Later,
David S. Miller
[email protected]

2001-04-30 06:16:39

by Mike Galbraith

Subject: Re: Severe thrashing in 2.4.4

On Sun, 29 Apr 2001, Alexander Viro wrote:

> On Sun, 29 Apr 2001, Frank de Lange wrote:
>
> > On Sun, Apr 29, 2001 at 12:27:29PM -0400, Alexander Viro wrote:
> > > What about /proc/slabinfo? Notice that 2.4.4 (and a couple of the 2.4.4-pre)
> > > has a bug in prune_icache() that makes it underestimate the amount of
> > > freeable inodes.
> >
> > Gotcha, wrt. slabinfo. Seems 2.4.4 (at least on my box) only knows how to
> > allocate skbuff_head_cache entries, not how to free them. Here's the last
> > /proc/slabinfo entry before I sysRQ'd the box:
>
> > skbuff_head_cache 341136 341136 160 14214 14214 1 : 252 126
> > size-2048 66338 66338 2048 33169 33169 1 : 60 30
>
> Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> in 1K--2K range. From your logs it looks like the thing never shrinks and
> grows pretty fast...

If it turns out to be difficult to track down, holler and I'll expedite
updating my IKD tree to 2.4.4.

-Mike (memleak maintenance weenie)

2001-04-30 07:46:31

by Mike Galbraith

Subject: Re: Severe thrashing in 2.4.4

On Sun, 29 Apr 2001, Frank de Lange wrote:

> On Sun, Apr 29, 2001 at 01:58:52PM -0400, Alexander Viro wrote:
> > Hmm... I'd say that you also have a leak in kmalloc()'ed stuff - something
> > in 1K--2K range. From your logs it looks like the thing never shrinks and
> > grows pretty fast...
>
> Same goes for buffer_head:
>
> buffer_head 44236 48520 96 1188 1213 1 : 252 126
>
> quite high I think. 2.4.3 shows this, after about the same time and activity:
>
> buffer_head 891 2880 96 72 72 1 : 252 126

hmm: do_try_to_free_pages() doesn't call kmem_cache_reap() unless
there's no free page shortage. If you've got a leak...

        if (free_shortage()) {
                shrink_dcache_memory(DEF_PRIORITY, gfp_mask);
                shrink_icache_memory(DEF_PRIORITY, gfp_mask);
        } else {
                /*
                 * Illogical, but true. At least for now.
                 *
                 * If we're _not_ under shortage any more, we
                 * reap the caches. Why? Because a noticeable
                 * part of the caches are the buffer-heads,
                 * which we'll want to keep if under shortage.
                 */
                kmem_cache_reap(gfp_mask);
        }

You might try calling it if free_shortage() + inactive_shortage() >
freepages.high or some such, and then see what sticks out. Or, for
troubleshooting the leak, just always call it.
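
Something like this (untested sketch, just to illustrate the idea):

        /* untested: also reap the slab caches when the combined shortage
         * gets large, instead of only when there is no shortage at all */
        if (free_shortage() + inactive_shortage() > freepages.high)
                kmem_cache_reap(gfp_mask);

        if (free_shortage()) {
                shrink_dcache_memory(DEF_PRIORITY, gfp_mask);
                shrink_icache_memory(DEF_PRIORITY, gfp_mask);
        } else {
                kmem_cache_reap(gfp_mask);
        }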

Printk says we fail to totally cure the shortage most of the time
once you start swapping.. likely the same for any sustained IO.

-Mike

(if you hoard IO until you can't avoid it, there're no cleanable pages
left in the laundry chute [bye-bye cache] except IO pages.. i digress;)