2013-04-05 11:02:28

by Vassilis Virvilis

Subject: Debugging COW (copy on write) memory after fork: Is it possible to dump only the private anonymous memory of a process?

Hello, sorry if this is off topic. Just point me to the right direction.
Please cc me also in the reply.

Question
--------

Is it possible to dump only the private anonymous memory of a process?

Background
----------

I have a process that reads and initializes a large portion of
memory (around 2.3GB). This memory is effectively read-only from
that point on. After the initialization I fork the process into
several children in order to take advantage of the multicore
architecture of modern CPUs. The problem is that the program ends
up requiring number_of_processes * 2.3GB of memory, effectively
entering swap thrashing and destroying the performance.

Steps so far
------------

The first thing I did was to monitor the memory. I found out about
/proc/$pid/smaps and http://wingolog.org/pub/mem_usage.py.

What happens is the following:

- The program starts, reads from disk, and ends up with 2.3GB of
  private mappings.
- The program forks. Immediately the 2.3GB become shared mappings
  between the parent and the children. Excellent so far.
- As time goes on and the children start performing their tasks, the
  shared memory slowly migrates into the private mappings of each
  process, effectively blowing up the memory requirements.

I thought that if I could see (dump) the private mappings of each
process, I could tell from the data why the shared mappings are being
touched. So I tried to dump the core with gcore, playing with
/proc/$pid/coredump_filter like this:

echo 0x1 > /proc/$pid/coredump_filter
gcore $pid

Unfortunately it always dumps 2.3GB despite the setting in
/proc/$pid/coredump_filter, whose 0x1 bit means anonymous private
mappings only.

I have researched the question on Google.

I even posted it on Stack Overflow.

Any other ideas?

Thanks in advance

Vassilis Virvilis


2013-04-06 18:22:03

by Bruno Prémont

Subject: Re: Debugging COW (copy on write) memory after fork: Is it possible to dump only the private anonymous memory of a process?

On Fri, 05 April 2013 Vassilis Virvilis <[email protected]> wrote:
> Hello, sorry if this is off topic. Just point me to the right direction.
> Please cc me also in the reply.
>
> Question
> --------
>
> Is it possible to dump only the private anonymous memory of a process?

I don't know if that's possible, but given your background description
you could probably work around it by mmap()ing the memory you need and,
once it is initialized, marking all of that memory read-only with
mprotect() (if you mmap very large chunks you can even benefit from
huge pages).

Any forked process that tried to write to the data would then get a
signal (instead of silently unsharing it).


If you allocate and initialize all of your memory in little malloc()'ed
chunks, it is possibly glibc's memory housekeeping that unshares all
those pages over time.

Bruno

> Background
> ----------
>
> I have a process that reads and initializes a large portion of
> memory (around 2.3GB). This memory is effectively read-only from
> that point on. After the initialization I fork the process into
> several children in order to take advantage of the multicore
> architecture of modern CPUs. The problem is that the program ends
> up requiring number_of_processes * 2.3GB of memory, effectively
> entering swap thrashing and destroying the performance.
>
> Steps so far
> ------------
>
> The first thing I did was to monitor the memory. I found out about
> /proc/$pid/smaps and http://wingolog.org/pub/mem_usage.py.
>
> What happens is the following:
>
> - The program starts, reads from disk, and ends up with 2.3GB of
>   private mappings.
> - The program forks. Immediately the 2.3GB become shared mappings
>   between the parent and the children. Excellent so far.
> - As time goes on and the children start performing their tasks, the
>   shared memory slowly migrates into the private mappings of each
>   process, effectively blowing up the memory requirements.
>
> I thought that if I could see (dump) the private mappings of each
> process, I could tell from the data why the shared mappings are being
> touched. So I tried to dump the core with gcore, playing with
> /proc/$pid/coredump_filter like this:
>
> echo 0x1 > /proc/$pid/coredump_filter
> gcore $pid
>
> Unfortunately it always dumps 2.3GB despite the setting in
> /proc/$pid/coredump_filter, whose 0x1 bit means anonymous private
> mappings only.
>
> I have researched the question on Google.
>
> I even posted it on Stack Overflow.
>
> Any other ideas?
>
> Thanks in advance
>
> Vassilis Virvilis

2013-04-08 07:41:53

by Vassilis Virvilis

Subject: Re: Debugging COW (copy on write) memory after fork: Is it possible to dump only the private anonymous memory of a process?

On 04/06/2013 09:11 PM, Bruno Prémont wrote:
> On Fri, 05 April 2013 Vassilis Virvilis <[email protected]> wrote:
>>
>> Question
>> --------
>>
>> Is it possible to dump only the private anonymous memory of a process?
>
> I don't know if that's possible, but given your background description
> you could probably work around it by mmap()ing the memory you need and,
> once it is initialized, marking all of that memory read-only with
> mprotect() (if you mmap very large chunks you can even benefit from
> huge pages).
>
> Any forked process that tried to write to the data would then get a
> signal (instead of silently unsharing it).
>

I can't do that. We are talking about an existing system (in Perl with
C modules) that was parallelized in a second step.

> If you allocate and initialize all of your memory in little malloc()'ed
> chunks, it is possibly glibc's memory housekeeping that unshares all
> those pages over time.

Yes, I suppose it is a series of mallocs. I could easily verify that
with strace. However, if glibc's memory housekeeping undermines the COW
behaviour, that would be very bad.

In my unit tests I was able to work around the usual Perl problems
that cause memory unsharing, such as reference counting and hash
access. The garbage collector shouldn't be a problem because there is
nothing to collect from the shared memory, only private local variables
that go out of scope. The problem is that when I employ these
workarounds in the live system (with considerable IO) I get massive
unsharing. So I thought to have a look and see what's going on in
two or three consecutive private memory dumps.

The point is that I need to locate the source of the memory unsharing.
Any ideas how this can be done?

At this point I could try in-house compiled kernels, if I can enable
some logging to track this behavior. Does any such knob exist? Even as
an #ifdef?

Vassilis