Hi Guys,
I updated three boxes today to 2.4.28 (from .27), one at work, and two here at
home (Redhat 7.1+, Slackware 10)
I am terribly intrigued by how small the memory footprint is now. I have
gone through the changes file, but (to me, a n00b) I can see nothing
that would explain it.
Can anyone enlighten me?
As always, great work too :)
Nick
--
"When you're chewing on life's gristle,
Don't grumble, Give a whistle..."
On Tue, Nov 23, 2004 at 09:36:36PM +0000, Nick Warne wrote:
> Hi Guys,
>
> I updated three boxes today to 2.4.28 (from .27), one at work, and two here at
> home (Redhat 7.1+, Slackware 10)
>
> I am terribly intrigued by how small the memory footprint is now. I have
> gone through the changes file, but (to me, a n00b) I can see nothing
> that would explain it.
>
> Can anyone enlighten me?
What do you mean by "memory usage"? SLAB (/proc/slabinfo) buffers
or pagecache?
What's your workload, and what drivers are you using?
Nothing that I am aware of explains this.
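A quick before/after look at the slab caches and the pagecache would narrow
it down. Something like this (just a rough sketch, assuming the 2.4
slabinfo 1.1 layout of name / active objs / total objs / object size):

  # biggest slab caches, roughly, in kB (total objects * object size)
  awk 'NR > 1 { printf "%-24s %8.0f kB\n", $1, $3 * $4 / 1024 }' /proc/slabinfo | sort -rn -k2 | head

  # pagecache and buffer figures
  grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo

Run both on .27 and .28 under the same sort of load and compare.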
On Wed, 24 Nov 2004, Marcelo Tosatti wrote:
> On Tue, Nov 23, 2004 at 09:36:36PM +0000, Nick Warne wrote:
> >
> > I updated three boxes today to 2.4.28 (from .27), one at work, and two here at
> > home (Redhat 7.1+, Slackware 10)
> >
> > I am terribly intrigued by how small the memory footprint is now. I have
> > gone through the changes file, but (to me, a n00b) I can see nothing
> > that would explain it.
> >
> > Can anyone enlighten me?
>
> What do you mean by "memory usage"? SLAB (/proc/slabinfo) buffers
> or pagecache?
>
> What's your workload, and what drivers are you using?
>
> Nothing that I am aware of explains this.
_If_ it's a reduction in /proc/slabinfo's dentry_cache, and
_if_ these boxes do a lot of removing files from tmpfs,
then it would be the "tmpfs: stop negative dentries" change.
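Easy enough to check: watch the dentry_cache line of /proc/slabinfo around
a burst of tmpfs create/remove. A rough sketch (it assumes /dev/shm is a
tmpfs mount on those boxes):

  grep dentry_cache /proc/slabinfo
  # create and immediately remove a pile of tmpfs files
  for i in $(seq 1 10000); do touch /dev/shm/junk$i; rm /dev/shm/junk$i; done
  grep dentry_cache /proc/slabinfo

On 2.4.27 I'd expect the object count to jump after the loop; on 2.4.28 it
should stay put.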
Hugh
On Wednesday 24 November 2004 17:48, Hugh Dickins wrote:
> On Wed, 24 Nov 2004, Marcelo Tosatti wrote:
> > On Tue, Nov 23, 2004 at 09:36:36PM +0000, Nick Warne wrote:
> > > I updated three boxes today to 2.4.28 (from .27), one at work, and two
> > > here at home (Redhat 7.1+, Slackware 10)
> > >
> > > I am terribly intrigued by how small the memory footprint is now. I have
> > > gone through the changes file, but (to me, a n00b) I can see nothing
> > > that would explain it.
> > >
> > > Can anyone enlighten me?
> >
> > What do you mean by "memory usage"? SLAB (/proc/slabinfo) buffers
> > or pagecache?
> >
> > What's your workload, and what drivers are you using?
> >
> > Nothing that I am aware of explains this.
>
> _If_ it's a reduction in /proc/slabinfo's dentry_cache, and
> _if_ these boxes do a lot of removing files from tmpfs,
> then it would be the "tmpfs: stop negative dentries" change.
I dunno, no real scientific measures at all, but using 'free' I have noticed
that all boxes load with something like 40% less memory in use from boot. As
time goes on, memory usage grows (of course), but now it 'drops off' when the
box is not being used... 2.4.2x never did that.
I tested today on my Slackware box especially:
Linux linuxamd 2.4.28 #1 Tue Nov 23 17:46:52 GMT 2004 i686 unknown unknown
GNU/Linux (HM, append="1280M").
An Athlon 1.2GHz running fully up-to-date Slack 10 stable with the KDE 3.3.0
upgrade.
I ran Celestia for over an hour, burned a few Knoppix ISOs, and then made an
ISO of a big directory to burn, all using 'BashBurn'.
I just played Quake2 for three maps running full chat.
Normally memory slowly fills up, perhaps using swap for a bit under these
circumstances - but looking afterwards:
root@linuxamd:~# free
             total       used       free     shared    buffers     cached
Mem:       1292348     520012     772336          0      38596     327304
-/+ buffers/cache:      154112    1138236
Swap:      1959888          0    1959888
I would normally expect 'free' to report 900000-odd used (with Celestia
pushing toward swap) by now... but it doesn't.
Another box:
Linux quake.ddayuk.dyndns.org 2.4.28 #1 Tue Nov 23 17:28:32 GMT 2004 i686
unknown
It runs a Quake2 server and TeamSpeak. Again, memory usage usually gets close
to its peak after 2 hours of uptime, but now:
             total       used       free     shared    buffers     cached
Mem:        516440      45368     471072          0       6296      25556
-/+ buffers/cache:       13516     502924
Swap:       265032          0     265032
The box at work is a back-up httpd (Apache) web server running NTPD for the
whole sub-net, mrtg, and a lot of other stuff (I use it for testing until I
push things to the main web server)... it always has 30-40MB of swap in use.
Today, only 6MB.
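(For the record, I am just reading the swap figures straight out of the
kernel, e.g.:

  cat /proc/swaps
  grep -E '^Swap(Total|Free)' /proc/meminfo

nothing clever.)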
I build all kernels with no modules, everything built in (except USB for
memory sticks on Slack). The only change I made this time from previous
kernel upgrades was to download the full 2.4.28 bz2 file rather than apply
patches to an existing build tree (make oldconfig).
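(My usual routine on an existing tree is roughly this - just a sketch, the
paths are whatever your tree lives in:

  cd /usr/src/linux-2.4.27
  bzcat patch-2.4.28.bz2 | patch -p1
  make oldconfig && make dep && make bzImage

- this time I unpacked the full tarball instead.)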
But whatever, I am impressed indeed - something's changed for the good!!!
Nick
--
"When you're chewing on life's gristle,
Don't grumble, Give a whistle..."
On 24 Nov 2004, Nick Warne mused:
> Normally memory slowly fills up, perhaps using swap for a bit under these
> circumstances - but looking afterwards:
This is a feature, not a bug. Free memory is wasted memory (although
some has to be kept free for drivers that need GFP_ATOMIC allocations:
i.e. `memory *now* dammit *now*').
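The number worth watching is the `-/+ buffers/cache' line, not the raw
`used' column. A rough sketch of pulling the interesting figures out (column
positions assume the usual free(1) layout):

  free | awk '/^Mem:/ { printf "apps: %d kB   buffers+cache: %d kB\n", $3 - $6 - $7, $6 + $7 }'
  vmstat 5    # watch the cache column grow under load and shrink again when memory is wanted

and cache being reclaimed on demand is exactly what you want to see.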
> root@linuxamd:~# free
>              total       used       free     shared    buffers     cached
> Mem:       1292348     520012     772336          0      38596     327304
> -/+ buffers/cache:      154112    1138236
> Swap:      1959888          0    1959888
The only thing I can think of that causes this is something very
memory-hungry that's just been killed, releasing a pile of pages back to
the system.
> But whatever, I am impressed indeed - something's changed for the good!!!
I see no signs of such a change on my 2.4.28 boxes:
(UltraSPARC II)
             total       used       free     shared    buffers     cached
Mem:        509360     498432      10928          0     133568      52656
-/+ buffers/cache:      312208     197152
Swap:      1557264     143992    1413272
(Athlon IV)
             total       used       free     shared    buffers     cached
Mem:        775072     762744      12328          0      88304     322740
-/+ buffers/cache:      351700     423372
Swap:      1048560      81020     967540
(i586)
             total       used       free     shared    buffers     cached
Mem:        126992     123640       3352          0      11792      42012
-/+ buffers/cache:       69836      57156
Swap:      1245168     155560    1089608
The only suspiciously high free figure is on a 2.4.28 UML instance
(2.4.28 + forward-ported 2.4.27-1 patches) on one of those machines:
             total       used       free     shared    buffers     cached
Mem:         94000      66512      27488          0       2940      31260
-/+ buffers/cache:       32312      61688
Swap:            0          0          0
and that is quite obviously caused by the instance's lack of swap :)
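(Trivial to fix if I cared; roughly - sizes arbitrary, just a sketch:

  dd if=/dev/zero of=/swap bs=1M count=128
  mkswap /swap
  swapon /swap

but the instance gets on fine without.)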
--
`The sword we forged has turned upon us
Only now, at the end of all things do we see
The lamp-bearer dies; only the lamp burns on.'
On Friday 26 November 2004 11:53, Nix wrote:
> On 24 Nov 2004, Nick Warne mused:
> > Normally memory slowly fills up, perhaps using swap for a bit under these
> > circumstances - but looking afterwards:
>
> This is a feature, not a bug. Free memory is wasted memory (although
> some has to be kept free for drivers that need GFP_ATOMIC allocations:
> i.e. `memory *now* dammit *now*').
>
> > root@linuxamd:~# free
> >              total       used       free     shared    buffers     cached
> > Mem:       1292348     520012     772336          0      38596     327304
> > -/+ buffers/cache:      154112    1138236
> > Swap:      1959888          0    1959888
>
> The only thing I can think of that causes this is something very
> memory-hungry that's just been killed, releasing a pile of pages back to
> the system.
>
> > But whatever, I am impressed indeed - something's changed for the good!!!
>
> I see no signs of such a change on my 2.4.28 boxes:
~snip~
Ummm. I do, though. Maybe the cause is that I am getting more experienced and
now _do_ the right things when building kernels. But for some reason I do
find 2.4.28 a lot more responsive in its memory usage - I really do.
Nick
--
"When you're chewing on life's gristle,
Don't grumble, Give a whistle..."