2004-10-22 14:15:52

by Kristian Sørensen

Subject: Gigantic memory leak in linux-2.6.[789]!

Hi all!

After some more testing following the previous post of the OOPS in
generic_delete_inode, we have now found a gigantic memory leak in
Linux 2.6.[789]. The scenario is the same:

File system: EXT3
Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:

let "i = 0"
while [ "$i" -lt 10 ]; do
tar jxf linux-2.6.8.1.tar.bz2;
rm -fr linux-2.6.8.1;
let "i = i + 1"
done

When the loop has completed, the system uses 124 MB more memory _each_ time...
so it is pretty easy to mount a denial-of-service attack :-(

We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4 kernel)
- and there is no problem.


Any ideas?

--
Kristian Sørensen
- The Umbrella Project
http://umbrella.sourceforge.net

E-mail: [email protected], Phone: +45 29723816


2004-10-22 14:32:18

by Kasper Sandberg

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
> Hi all!
>
> After some more testing after the previous post of the OOPS in
> generic_delete_inode, we have now found a gigantic memory leak in Linux 2.6.
> [789]. The scenario is the same:
>
> File system: EXT3
> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>
> let "i = 0"
> while [ "$i" -lt 10 ]; do
> tar jxf linux-2.6.8.1.tar.bz2;
> rm -fr linux-2.6.8.1;
> let "i = i + 1"
> done
>
> When the loop has completed, the system use 124 MB memory more _each_ time....
> so it is pretty easy to make a denial-of-service attack :-(
Well... I could understand if it used the total size of an unpacked Linux
kernel even after the loop stopped, since it would just keep it cached;
however, that does not seem to be the case when it adds 124 MB each time...
>
> We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4 kernel)
> - and there is no problem.
>
>
> Any deas?
>

2004-10-22 15:08:07

by Richard B. Johnson

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Fri, 22 Oct 2004, Kasper Sandberg wrote:

> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>> Hi all!
>>
>> After some more testing after the previous post of the OOPS in
>> generic_delete_inode, we have now found a gigantic memory leak in Linux 2.6.
>> [789]. The scenario is the same:
>>
>> File system: EXT3
>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>>
>> let "i = 0"
>> while [ "$i" -lt 10 ]; do
>> tar jxf linux-2.6.8.1.tar.bz2;
>> rm -fr linux-2.6.8.1;
>> let "i = i + 1"
>> done
>>
>> When the loop has completed, the system use 124 MB memory more _each_ time....
>> so it is pretty easy to make a denial-of-service attack :-(


Do something like this with your favorite kernel version.....

while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ; vmstat ; done

You can watch this for as long as you want. If there is no other
activity, the values reported by vmstat remain, on average, stable.
If you throw in a `sync` command, the values rapidly converge to
little memory usage as the disk data gets flushed to disk.


> well.. i could understand if it used the total size of a unpacked linux
> kernel, even after the loop stopped, since it would just keep it cached,
> however, it might not be that case when it adds 124mb each time...
>>
>> We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4 kernel)
>> - and there is no problem.
>>
>>
>> Any deas?
>>
>

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.

2004-10-22 15:32:14

by Kristian Sørensen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Richard B. Johnson wrote:
> On Fri, 22 Oct 2004, Kasper Sandberg wrote:
>
>> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>>
>>> Hi all!
>>>
>>> After some more testing after the previous post of the OOPS in
>>> generic_delete_inode, we have now found a gigantic memory leak in
>>> Linux 2.6.
>>> [789]. The scenario is the same:
>>>
>>> File system: EXT3
>>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>>>
>>> let "i = 0"
>>> while [ "$i" -lt 10 ]; do
>>> tar jxf linux-2.6.8.1.tar.bz2;
>>> rm -fr linux-2.6.8.1;
>>> let "i = i + 1"
>>> done
>>>
>>> When the loop has completed, the system use 124 MB memory more _each_
>>> time....
>>> so it is pretty easy to make a denial-of-service attack :-(
>
>
>
> Do something like this with your favorite kernel version.....
>
> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
> vmstat ; done
>
> You can watch this for as long as you want. If there is no other
> activity, the values reported by vmstat remain, on the average, stable.
> If you throw in a `sync` command, the values rapidly converge to
> little memory usage as the disk-data gets flused to disk.
The problem is that the free memory reported by vmstat is decreasing by
124 MB for each 10 iterations...

The allocated memory does not get freed even if the system has been left
alone for three hours!


Cheers, Kristian.

--
Kristian Sørensen
- The Umbrella Project
http://umbrella.sourceforge.net

E-mail: [email protected], Phone: +45 29723816

2004-10-22 16:12:40

by Richard B. Johnson

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Fri, 22 Oct 2004, Kristian Sørensen wrote:

> Richard B. Johnson wrote:
>> On Fri, 22 Oct 2004, Kasper Sandberg wrote:
>>
>>> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>>>
>>>> Hi all!
>>>>
>>>> After some more testing after the previous post of the OOPS in
>>>> generic_delete_inode, we have now found a gigantic memory leak in Linux
>>>> 2.6.
>>>> [789]. The scenario is the same:
>>>>
>>>> File system: EXT3
>>>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>>>>
>>>> let "i = 0"
>>>> while [ "$i" -lt 10 ]; do
>>>> tar jxf linux-2.6.8.1.tar.bz2;
>>>> rm -fr linux-2.6.8.1;
>>>> let "i = i + 1"
>>>> done
>>>>
>>>> When the loop has completed, the system use 124 MB memory more _each_
>>>> time....
>>>> so it is pretty easy to make a denial-of-service attack :-(
>>
>>
>>
>> Do something like this with your favorite kernel version.....
>>
>> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ; vmstat ;
>> done
>>
>> You can watch this for as long as you want. If there is no other
>> activity, the values reported by vmstat remain, on the average, stable.
>> If you throw in a `sync` command, the values rapidly converge to
>> little memory usage as the disk-data gets flused to disk.
> The problem is, that the free memory reported by vmstat is decresing by 124mb
> for each 10-iterations....
>
> The allocated memory does not get freed even if the system has been left
> alone for three hours!
>

Yes. So? Why would it be freed? It's left as it is until it
is needed. Freeing it would waste CPU cycles.

This cannot be a problem unless you are inventing some sort
of hot-swap memory thing. If so, you need to make a module that
tells the kernel memory manager to free everything so you
can remove and replace the RAM.


>
> Cheers, Kristian.


>
> --
> Kristian Sørensen
> - The Umbrella Project
> http://umbrella.sourceforge.net
>
> E-mail: [email protected], Phone: +45 29723816
>

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.

2004-10-22 16:15:39

by Gene Heskett

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Friday 22 October 2004 11:07, Richard B. Johnson wrote:
>while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
> vmstat ; done

Stable, yes. But only after about 3 or 4 iterations. The first 3
rather handily used 500+ megs of memory that I did not get back when
I stopped it and cleaned up the mess.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
99.28% setiathome rank, not too shabby for a WV hillbilly
Yahoo.com attorneys please note, additions to this message
by Gene Heskett are:
Copyright 2004 by Maurice Eugene Heskett, all rights reserved.

2004-10-22 16:28:12

by Andre Tomt

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Gene Heskett wrote:
> On Friday 22 October 2004 11:07, Richard B. Johnson wrote:
>
>>while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
>>vmstat ; done
>
>
> Stable, yes. But only after about 3 or 4 iterations. The first 3
> rather handily used 500+ megs of memory that I did not get back when
> I stopped it and cleaned up the mess.


It should get freed when something else needs it. Usually not before.

2004-10-22 16:32:53

by Chris Friesen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Gene Heskett wrote:

> Stable, yes. But only after about 3 or 4 iterations. The first 3
> rather handily used 500+ megs of memory that I did not get back when
> I stopped it and cleaned up the mess.

Did you run a memory hog to put memory pressure on the system?

The following is with 2.6.9-rc4

-bash-2.05b$ while true ; do tar -xjf linux-2.6.7.tar.bz2 ; rm -rf linux-2.6.7 ;
vmstat ; done
procs                        memory      swap          io     system         cpu
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 1  0      0 1675768 104004 112576    0    0     0     1   11     2  0  0  0 10
procs                        memory      swap          io     system         cpu
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 1  1      0 1649032 110792 112724    0    0     0     1   11     3  0  0  0 10
procs                        memory      swap          io     system         cpu
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 1  0      0 1630472 118580 112620    0    0     0     2   11     3  0  0  0 10
procs                        memory      swap          io     system         cpu
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 1  0      0 1607560 125500 112636    0    0     0     2   11     3  0  0  0 10


After running a memory hog,

-bash-2.05b$ vmstat
procs                        memory      swap          io     system         cpu
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy wa id
 0  0      0 1890248    672   4836    0    0     0     3   11     3  0  0  0 10


Looks like the cached memory all got freed, which is exactly as expected.

Chris

2004-10-22 19:11:56

by Kristian Sørensen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Richard B. Johnson wrote:

> On Fri, 22 Oct 2004, Kristian Sørensen wrote:
>
>> Richard B. Johnson wrote:
>>
>>> On Fri, 22 Oct 2004, Kasper Sandberg wrote:
>>>
>>>> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>>>>
>>>>> Hi all!
>>>>>
>>>>> After some more testing after the previous post of the OOPS in
>>>>> generic_delete_inode, we have now found a gigantic memory leak in
>>>>> Linux 2.6.
>>>>> [789]. The scenario is the same:
>>>>>
>>>>> File system: EXT3
>>>>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>>>>>
>>>>> let "i = 0"
>>>>> while [ "$i" -lt 10 ]; do
>>>>> tar jxf linux-2.6.8.1.tar.bz2;
>>>>> rm -fr linux-2.6.8.1;
>>>>> let "i = i + 1"
>>>>> done
>>>>>
>>>>> When the loop has completed, the system use 124 MB memory more
>>>>> _each_ time....
>>>>> so it is pretty easy to make a denial-of-service attack :-(
>>>>
>>>
>>>
>>>
>>> Do something like this with your favorite kernel version.....
>>>
>>> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ;
>>> vmstat ; done
>>>
>>> You can watch this for as long as you want. If there is no other
>>> activity, the values reported by vmstat remain, on the average, stable.
>>> If you throw in a `sync` command, the values rapidly converge to
>>> little memory usage as the disk-data gets flused to disk.
>>
>> The problem is, that the free memory reported by vmstat is decresing
>> by 124mb for each 10-iterations....
>>
>> The allocated memory does not get freed even if the system has been
>> left alone for three hours!
>>
>
> Yes. So? Why would it be freed? It's left how it was until it
> is needed. Freeing it would waste CPU cycles.

Okay :-) So it looks like two of you say we have been mistaken :-D (and
the behaviour has changed since linux-2.4).

Anyway - how does this work in practice? Does the file system
implementation use a wrapper for kfree, or something else?
Is there any way to force an instant free of kernel memory once it is freed?
Otherwise it is quite hard to test for possible memory leaks in our Umbrella
kernel module ... :-/


Best regards,

--
Kristian Sørensen
- The Umbrella Project
http://umbrella.sourceforge.net

E-mail: [email protected], Phone: +45 29723816


2004-10-22 19:24:32

by Richard B. Johnson

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Fri, 22 Oct 2004, Kristian Sørensen wrote:

> Richard B. Johnson wrote:
>
>> On Fri, 22 Oct 2004, Kristian Sørensen wrote:
>>
>>> Richard B. Johnson wrote:
>>>
>>>> On Fri, 22 Oct 2004, Kasper Sandberg wrote:
>>>>
>>>>> On Fri, 2004-10-22 at 16:13 +0200, Kristian Sørensen wrote:
>>>>>
>>>>>> Hi all!
>>>>>>
>>>>>> After some more testing after the previous post of the OOPS in
>>>>>> generic_delete_inode, we have now found a gigantic memory leak in Linux
>>>>>> 2.6.
>>>>>> [789]. The scenario is the same:
>>>>>>
>>>>>> File system: EXT3
>>>>>> Unpack and delete linux-2.6.8.1.tar.bz2 with this Bash while loop:
>>>>>>
>>>>>> let "i = 0"
>>>>>> while [ "$i" -lt 10 ]; do
>>>>>> tar jxf linux-2.6.8.1.tar.bz2;
>>>>>> rm -fr linux-2.6.8.1;
>>>>>> let "i = i + 1"
>>>>>> done
>>>>>>
>>>>>> When the loop has completed, the system use 124 MB memory more _each_
>>>>>> time....
>>>>>> so it is pretty easy to make a denial-of-service attack :-(
>>>>>
>>>>
>>>>
>>>>
>>>> Do something like this with your favorite kernel version.....
>>>>
>>>> while true ; do tar -xzf linux-2.6.9.tar.gz ; rm -rf linux-2.6.9 ; vmstat
>>>> ; done
>>>>
>>>> You can watch this for as long as you want. If there is no other
>>>> activity, the values reported by vmstat remain, on the average, stable.
>>>> If you throw in a `sync` command, the values rapidly converge to
>>>> little memory usage as the disk-data gets flused to disk.
>>>
>>> The problem is, that the free memory reported by vmstat is decresing by
>>> 124mb for each 10-iterations....
>>>
>>> The allocated memory does not get freed even if the system has been left
>>> alone for three hours!
>>>
>>
>> Yes. So? Why would it be freed? It's left how it was until it
>> is needed. Freeing it would waste CPU cycles.
>
> Okay :-) So it looks like two of you says we have been mistaken :-D (and the
> behaviour has been changed since linux-2.4)
>
> Anyway - How does this work in practice? Does the file system implementation
> use a wrapper for kfree or?
> Is there any way to force instant free of kernel memory - when freed? Else it
> is quite hard testing for possible memory leaks in our Umbrella kernel module
> ... :-/
>
>
> Best regards,
>

First, you can always execute sync() and flush most of the file buffers
to disk. This frees up a lot.

In the kernel....
If you are doing a lot of kmalloc() allocation and kfree(), you
can write out the pointer values using printk(). I use '0' before
such ... printk("0 %p\n", ptr); then do `dmesg | sort >xxx.xxx`. Now
you can look at the file and see the sorted pointer values. If
they repeat, chances are pretty good that you are not leaking
memory.

In user space...
Periodically look at the break address.
If it keeps going up, you may have a leak. If it's stable
you probably are okay.

The only sure way of detecting a memory leak is to use
some substitute code (maybe a macro) that substitutes
for (intercepts) the allocator and deallocator. It eventually
executes the real allocator and deallocator after saving information
somewhere you define (array, file, etc). You can sort that
information and determine whether there are as many (k)malloc()s as
there are (k)free()s (for instance) of the same pointer values.


Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 GrumpyMips).
98.36% of all statistics are fiction.

2004-10-23 01:06:47

by David Lang

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Fri, 22 Oct 2004, Kristian Sørensen wrote:

> Hi all!
>
> After some more testing after the previous post of the OOPS in
> generic_delete_inode, we have now found a gigantic memory leak in Linux 2.6.
> [789]. The scenario is the same:
<SNIP>
> When the loop has completed, the system use 124 MB memory more _each_ time....
> so it is pretty easy to make a denial-of-service attack :-(
>
> We have tried the same test on a RHEL WS 3 host (running a RedHat 2.4 kernel)
> - and there is no problem.
>
>
> Any deas?
>
> --
> Kristian Sørensen
> - The Umbrella Project
> http://umbrella.sourceforge.net

This is a common mistake that many people make when first looking at the
Linux stats.

Linux starts off with most of the memory free, but rapidly uses it up. It
keeps a small amount (a few megs) free at all times, but for the rest it
counts on freeing memory (possibly by swapping) when a new program asks
for memory and there is less than the minimum amount left free.

It does this because there is a chance that the memory will be re-used (in
your example, where you were untarring the kernel source, there is a chance
that someone else would be reading that source, and if they did it would
already be in memory and not have to be re-read from disk) and because
there is a chance that nothing will ever need to use that memory before
the computer is shut off, so it would be a waste of time to do the free
(which includes zeroing out the memory, not just marking it as available).

This puts the cost of zeroing out and freeing memory on new programs that
are allocating memory, which tends to scatter the work over time rather
than having a large burst of work kick in when a program exits (it seems
odd to think that when a large program exits the machine would be pegged
for a little while while it frees up and zeros the memory, not exactly
what you would expect when you killed a program :-)

David Lang

--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare

2004-10-23 01:50:11

by Bernd Eckenfels

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

In article <[email protected]> you wrote:
> When the loop has completed, the system use 124 MB memory more _each_ time....
> so it is pretty easy to make a denial-of-service attack :-(

For starters I recommend looking at "free", and only at the marked
numbers:

             total       used       free     shared    buffers     cached
Mem:        126368     108432      17936          0       6532      42104
-/+ buffers/cache:      59796*     66572*
Swap:       262128      43400     218728

or at the swap numbers if you have low memory (like I do).

Gruss
Bernd
--
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/

2004-10-23 03:45:07

by Chris Friesen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Kristian Sørensen wrote:

> Anyway - How does this work in practice? Does the file system
> implementation use a wrapper for kfree or?

When an app faults in new memory and there is no unused memory, the system will
page out apps and/or filesystem data from the page cache so the memory can be
given to the app requesting it.

> Is there any way to force instant free of kernel memory - when freed?

It's not free; it's in use by the page cache. This is a performance
feature: we try to keep around as much data as possible that might be
needed by running apps.

> Else it is quite hard testing for possible memory leaks in our Umbrella
> kernel module ... :-/

Such is life. As a crude workaround, on a swapless system you can start one or
two memory hogs and they will force the system to free up as much memory as
possible.

Chris

2004-10-24 14:02:44

by Bill Davidsen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

David Lang wrote:

> This puts the cost of zeroing out and freeing memory on new programs
> that are allocating memory, which tends to scatter the work over time
> rather then having a large burst of work kick in when a program exits
> (it seems odd to think that if a large computer exits the machine would
> be pegged for a little while while it frees up and zeros the memory, not
> exactly what you would expect when you killed a program :-)

And this partially explains why response is bad every morning when
starting daily operation. Instead of using the totally unproductive time
in the idle loop to zero and free those pages, when it would not hurt
response, the kernel saves that work for the next time the memory is
needed, lest it do work which might not be needed before the system is
shut down.

With all the work Nick, Ingo, Con and others are putting into latency and
responsiveness, I don't understand why anyone thinks this is desirable
behavior. The idle loop is the perfect place to perform things like
this: to convert non-productive cycles into tasks which will directly
improve response and performance when the work MUST be done. Things like
zeroing these pages, perhaps defragmenting memory: anything which can be
done in small parts.

It would seem that doing things like this in small inefficient steps in
idle moments is still better than doing them efficiently while a process
is waiting for the resources being freed.

--
bill davidsen <[email protected]>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

2004-10-24 16:10:01

by Tommy Reynolds

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Uttered Bill Davidsen <[email protected]>, spake thus:

> With all the work Nick, Ingo,Con and others are putting into latency and
> responsiveness, I don't understand why anyone thinks this is desirable
> behavior. The idle loop is the perfect place to perform things like
> this, to convert non-productive cycles into performing tasks which will
> directly improve response and performance when the task MUST be done.

Bill, with respect,

The idle loop is, by definition, the place to go when there is
nothing else to do. Scrubbing memory is, by definition, not
"nothing", so leave the idle loop alone.

That's why God, or maybe it was Linus, invented kernel threads.

Cheers!



2004-10-25 22:14:06

by Bill Davidsen

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

Tommy Reynolds wrote:
> Uttered Bill Davidsen <[email protected]>, spake thus:
>
>
>>With all the work Nick, Ingo,Con and others are putting into latency and
>>responsiveness, I don't understand why anyone thinks this is desirable
>>behavior. The idle loop is the perfect place to perform things like
>>this, to convert non-productive cycles into performing tasks which will
>>directly improve response and performance when the task MUST be done.
>
>
> Bill, with respect,
>
> The idle loop is, by definition, the place to go when there is
> nothing else to do. Scrubbing memory is, by definition, not
> "nothing", so leave the idle loop alone.
>
> That's why God, or maybe it was Linus, invented kernel threads.

Did you really not know what I meant here, or are you being pedantic
about the nomenclature? Yes, obviously implement it with thread(s) of
priority lower than whale shit, the object of which is to do the work
when no process is waiting for the CPU, and in very small steps so the
CPU isn't tied up.

--
-bill davidsen ([email protected])
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me

2004-10-26 04:46:24

by David Lang

Subject: Re: Gigantic memory leak in linux-2.6.[789]!

On Sun, 24 Oct 2004, Bill Davidsen wrote:

> David Lang wrote:
>
>> This puts the cost of zeroing out and freeing memory on new programs that
>> are allocating memory, which tends to scatter the work over time rather
>> then having a large burst of work kick in when a program exits (it seems
>> odd to think that if a large computer exits the machine would be pegged
>> for a little while while it frees up and zeros the memory, not exactly
>> what you would expect when you killed a program :-)
>
> Any this partially explains why response is bad every morning when starting
> daily operation. Instead of using the totally unproductive time in the idle
> loop to zero and free those pages when it would not hurt response, the kernel
> saves that work for the next time the memory is needed lest it do work which
> might not be needed before the system is shutdown.

actually, what useually has happened is that updatedb ran overnight and
used all your memory for it's work so all your application stuff got
thrown away or swapped out as it appeared to be less useful then the
then-active process. so first thing in the morning you need to do a lot of
disk reads to get your desktop working set into memory. the cost of
zeroing the pages is minor compared to the disk IO

> With all the work Nick, Ingo,Con and others are putting into latency and
> responsiveness, I don't understand why anyone thinks this is desirable
> behavior. The idle loop is the perfect place to perform things like this, to
> convert non-productive cycles into performing tasks which will directly
> improve response and performance when the task MUST be done. Things like
> zeroing these pages, perhaps defragmenting memory, anything which can be done
> in small parts.
>
> It would seem that doing things like this in small inefficient steps in idle
> moments is still better than doing them efficiently while a process is
> waiting for the resources being freed.

The problem is that you don't know that you need to throw away the data.
The next thing that you try to do could re-use the data that's in RAM;
how can the system know?

David Lang


--
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
-- C.A.R. Hoare