2004-10-28 10:37:03

by Denis Vlasenko

Subject: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

I am playing with 'small/beautiful stuff' like
bbox/uclibc.

I ran oom_trigger soon after boot and took a
"top b n 1" snapshot after the OOM kill.

The output puzzles me: the total virtual space taken by *all*
processes is ~23mb, yet swap usage is ~90mb.

How can that be? *What* is there? Surely it can't
be filesystem cache, because an OOM condition reduces that
to nearly zero.

top output
(note: some of them are busybox'ed, others are compiled
against uclibc, some are statically built with dietlibc,
and the rest are plain old shared binaries built against glibc):

top - 13:19:32 up 48 min, 1 user, load average: 0.25, 0.22, 0.09
Tasks: 80 total, 1 running, 79 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2% us, 0.3% sy, 0.0% ni, 98.7% id, 0.8% wa, 0.0% hi, 0.0% si
Mem: 112376k total, 109620k used, 2756k free, 6460k buffers
Swap: 262136k total, 91156k used, 170980k free, 4700k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1204 root 15 0 1652 788 1520 R 2.0 0.7 0:00.01 top
1 root 16 0 968 12 892 S 0.0 0.0 0:01.27 init
2 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
3 root 5 -10 0 0 0 S 0.0 0.0 0:00.03 events/0
4 root 8 -10 0 0 0 S 0.0 0.0 0:00.00 khelper
23 root 5 -10 0 0 0 S 0.0 0.0 0:00.01 kblockd/0
47 root 15 0 0 0 0 S 0.0 0.0 0:00.00 pdflush
48 root 15 0 0 0 0 S 0.0 0.0 0:00.03 pdflush
50 root 14 -10 0 0 0 S 0.0 0.0 0:00.00 aio/0
49 root 15 0 0 0 0 S 0.0 0.0 0:02.15 kswapd0
51 root 15 0 0 0 0 S 0.0 0.0 0:00.00 cifsoplockd
125 root 25 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
251 root 5 -10 0 0 0 S 0.0 0.0 0:00.00 reiserfs/0
273 root 18 0 1212 4 1180 S 0.0 0.0 0:00.00 udevd
479 rpc 16 0 1360 4 1308 S 0.0 0.0 0:00.00 rpc.portmap
543 root 16 0 52 16 16 S 0.0 0.0 0:00.01 svscan
556 root 17 0 348 4 316 S 0.0 0.0 0:00.00 sleep
557 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
558 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
559 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
560 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
561 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
563 root 16 0 1260 128 1228 S 0.0 0.1 0:00.02 gpm
564 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
565 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
566 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
567 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
568 user0 17 0 1660 772 1520 S 0.0 0.7 0:06.87 top
569 daemon 18 0 28 4 20 S 0.0 0.0 0:00.00 multilog
579 root 16 0 3076 3076 2456 S 0.0 2.7 0:00.02 ntpd
580 daemon 17 0 28 24 20 S 0.0 0.0 0:00.00 multilog
586 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
588 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
589 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
590 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
594 daemon 18 0 28 4 20 S 0.0 0.0 0:00.00 multilog
599 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
600 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
605 root 18 0 2492 700 2400 S 0.0 0.6 0:00.10 sshd
617 logger 17 0 28 28 20 S 0.0 0.0 0:00.00 multilog
624 root 17 0 36 28 16 S 0.0 0.0 0:00.01 socklog
632 daemon 17 0 28 4 20 S 0.0 0.0 0:00.00 multilog
636 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
637 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
643 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
644 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
647 root 16 0 44 4 36 S 0.0 0.0 0:00.00 tcpserver
648 apache 15 0 28 4 20 S 0.0 0.0 0:00.00 multilog
658 daemon 18 0 28 4 20 S 0.0 0.0 0:00.00 multilog
665 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
666 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
667 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
668 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
669 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
670 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
671 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
672 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
673 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
674 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
675 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
676 logger 16 0 36 4 16 S 0.0 0.0 0:00.00 socklog
677 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
678 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
679 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
680 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
681 root 16 0 20 4 16 S 0.0 0.0 0:00.00 supervise
685 root 16 0 1400 512 1320 S 0.0 0.5 0:00.02 automount
698 daemon 19 0 28 4 20 S 0.0 0.0 0:00.00 multilog
699 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
704 root 17 0 1824 508 1528 S 0.0 0.5 0:00.01 login
715 logger 15 0 28 4 20 S 0.0 0.0 0:00.01 multilog
716 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
721 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
725 logger 16 0 28 4 20 S 0.0 0.0 0:00.00 multilog
729 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
730 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
731 root 16 0 708 4 576 S 0.0 0.0 0:00.00 getty
1097 root 15 0 1216 576 892 S 0.0 0.5 0:00.01 bash
1196 root 15 0 0 0 0 S 0.0 0.0 0:00.00 rpciod
1197 root 19 0 0 0 0 S 0.0 0.0 0:00.00 lockd

oom_trigger.c:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
	void *p;
	unsigned size = 1 << 20;
	unsigned long total = 0;

	while (size) {
		p = malloc(size);
		if (!p)
			size >>= 1;	/* can't get this much - try smaller */
		else {
			/* touch every page so it is really allocated */
			memset(p, 0x77, size);
			total += size;
			printf("Allocated %9u bytes, %12lu total\n", size, total);
		}
	}
	return 0;
}

--
vda


2004-10-28 10:39:15

by Denis Vlasenko

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

On Thursday 28 October 2004 13:33, Denis Vlasenko wrote:
> top output
> (note: some of them are busybox'ed, others are compiled
> against uclibc, some are statically built with dietlibc,
> rest is plain old shared binaries built against glibc):
>
> top - 13:19:32 up 48 min, 1 user, load average: 0.25, 0.22, 0.09
> Tasks: 80 total, 1 running, 79 sleeping, 0 stopped, 0 zombie
> Cpu(s): 0.2% us, 0.3% sy, 0.0% ni, 98.7% id, 0.8% wa, 0.0% hi, 0.0% si
> Mem: 112376k total, 109620k used, 2756k free, 6460k buffers
> Swap: 262136k total, 91156k used, 170980k free, 4700k cached

Forgot to post /proc/meminfo:

MemTotal: 112376 kB
MemFree: 3116 kB
Buffers: 6672 kB
Cached: 5104 kB
SwapCached: 88340 kB
Active: 7964 kB
Inactive: 92720 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 112376 kB
LowFree: 3116 kB
SwapTotal: 262136 kB
SwapFree: 172240 kB
Dirty: 8 kB
Writeback: 0 kB
Mapped: 5192 kB
Slab: 5284 kB
Committed_AS: 6020 kB
PageTables: 648 kB
VmallocTotal: 917476 kB
VmallocUsed: 2048 kB
VmallocChunk: 915384 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
--
vda

2004-10-28 11:19:26

by Denis Vlasenko

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

On Thursday 28 October 2004 14:01, Jan Engelhardt wrote:
> I have removed Cc: for I don't know how useful this reply is for the others.
>
> >I ran oom_trigger soon after boot and took
> >"top b n 1" snapshot after OOM kill.
> >
> >Output puzzles me: total virtual space taken by *all*
> >processes is ~23mb yet swap usage is ~90mb.
>
> Well, if you allocate more and more space (and actually use it), other
> applications must be swapped out. Even in a very calm system, the RSS(RES)
> field can drop to 4 (usual value) or even 0, which means they're
> almost/completely swapped out, and swap usage is "high".
>
> And if VIRT is so small, well then I guess, that's it. The VIRT values for the
> processes listed below all seem normal. Also, because you use µClibc and
> busybox, as you say.

I think VIRT is the total virtual space taken by a process, part of
which may be swapped out. VIRT can't be reduced by swapping out -
correct me if I'm wrong.
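
This can be checked against /proc directly (a sketch; it assumes
the usual 2.6-era /proc/PID/status fields):

```shell
# VmSize (top's VIRT) counts all mapped virtual space; VmRSS only the
# resident part. Swapping pages out lowers VmRSS, not VmSize.
grep -E '^Vm(Size|RSS):' /proc/self/status
```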

But I believe even if I'm wrong on that, I simply do not have
90 mbytes to be swapped out here!

Look again at the top output - most processes are below 1mb,
and many are below 50k thanks to dietlibc.

> Exception is "supervise", but I do not know that one and how much RAM it takes
> when run in a normal production environment.

It's a rather nifty utility from the daemontools package.
In my case, it is built with dietlibc. Yes, it really
takes only 20k of virtual memory when running!

# ldd supervise
not a dynamic executable
# ls -l supervise
-rwxr-xr-x 1 root root 9668 Oct 19 06:48 supervise

More info at:

http://cr.yp.to/daemontools.html

If you don't like its license, then look here
for an alternative implementation:

http://smarden.org/runit/

> >How that can be? *What* is there? Surely it can't
> >be a filesystem cache because OOM condition reduces that
> >to nearly zero.
> >
> >top output
> >(note: some of them are busybox'ed, others are compiled
> >against uclibc, some are statically built with dietlibc,
> >rest is plain old shared binaries built against glibc):
>
> What if you do the oom with a pure glibc?

I think I will lose the "it's impossible" argument,
because all processes will be more than 1 mbyte in VIRT.

I will try nevertheless.
--
vda

2004-10-28 11:34:19

by Jan Engelhardt

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?


>I think VIRT is a total virtual space taken by process, part of
>which may be swapped. VIRT can't be reduced by swapping out -
>correct me if I'm wrong.

I always went by:
VIRT = RES + SWAPPED OUT

$ ps aufwwx | grep mingetty
#user pid %cpu %mem vsz rsz
root 2490 0.0 0.2 1548 552 tty1 Ss+ Oct25 0:00 /sbin/mingetty tty1
$ swapoff -a ## for fun
$ ps aufwwx | grep mingetty
root 2490 0.0 0.2 1548 632 tty1 Ss+ Oct25 0:00 /sbin/mingetty tty1

So, probably: VIRT = RES + SHR + SWAPPED OUT.

>But I believe even if I'm wrong on that, I simply do not have
>90 mbytes to be swapped out here!

Do you have <= 128 MB RAM? Or a heavily loaded system (even with >= 128)?

># ldd supervise
> not a dynamic executable
># ls -l supervise
>-rwxr-xr-x 1 root root 9668 Oct 19 06:48 supervise

Ph... you're missing upx -9 on supervise ;)


>I think I will lose "it's impossible" argument,
>because all processes will be more than 1 mbyte in VIRT.

Remember that glibc might be shared amongst processes, so *each* process will
have the 1 mb listed, though swap might stay at 0 if there is nothing else
which causes swapping.



Jan Engelhardt
--
Gesellschaft für Wissenschaftliche Datenverarbeitung
Am Fassberg, 37077 Göttingen, http://www.gwdg.de

2004-10-28 11:51:37

by Denis Vlasenko

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

On Thursday 28 October 2004 14:34, Jan Engelhardt wrote:
> >I think VIRT is a total virtual space taken by process, part of
> >which may be swapped. VIRT can't be reduced by swapping out -
> >correct me if I'm wrong.
>
> I always went by:
> VIRT = RES + SWAPPED OUT
>
> $ ps aufwwx | grep mingetty
> #user pid %cpu %mem vsz rsz
> root 2490 0.0 0.2 1548 552 tty1 Ss+ Oct25 0:00 /sbin/mingetty tty1
> $ swapoff -a ## for fun
> $ ps aufwwx | grep mingetty
> root 2490 0.0 0.2 1548 632 tty1 Ss+ Oct25 0:00 /sbin/mingetty tty1
>
> So to say, VIRT = RES + SHR + SWAPPED OUT, probably.
>
> >But I believe even if I'm wrong on that, I simply do not have
> >90 mbytes to be swapped out here!
>
> Have <= 128 MB RAM? Have a heavy busy system (even with >= 128)?

128mb. System was idle, fresh after boot.

Seems I wasn't clear enough. I will try harder now:

Even if I add up the size of every process, *counting libc shared pages
once per process* (which overestimates memory usage), I arrive at
23mb *total memory required by all processes*. How come the kernel
found 90mb to swap out? There is NOTHING to swap out except those
23mb!

(Of course, while oom_trigger was running, the kernel first swapped out
those 23mb and then started swapping out memory taken by oom_trigger
itself, but when oom_trigger was killed, its RAM *and* swap space
should have been deallocated. Thus I expected to see ~20 mb swap usage).
--
vda

2004-10-28 15:24:31

by William Lee Irwin III

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

On Thu, Oct 28, 2004 at 01:33:53PM +0300, Denis Vlasenko wrote:
> I am playing with 'small/beautiful stuff' like
> bbox/uclibc.
> I ran oom_trigger soon after boot and took
> "top b n 1" snapshot after OOM kill.
> Output puzzles me: total virtual space taken by *all*
> processes is ~23mb yet swap usage is ~90mb.
> How that can be? *What* is there? Surely it can't
> be a filesystem cache because OOM condition reduces that
> to nearly zero.
> top output
> (note: some of them are busybox'ed, others are compiled
> against uclibc, some are statically built with dietlibc,
> rest is plain old shared binaries built against glibc):

Let's get top(1) out of the equation. Could you grab VSZ directly
from /proc/?
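
Something along these lines would do it (a sketch; kernel threads
have no VmSize line in their status files and are skipped):

```shell
# Sum VmSize over all processes, in kB, straight from /proc.
# A process may exit between the glob and the read, hence 2>/dev/null.
total=0
for f in /proc/[0-9]*/status; do
    kb=$(awk '/^VmSize:/ { print $2 }' "$f" 2>/dev/null)
    if [ -n "$kb" ]; then
        total=$((total + kb))
    fi
done
echo "total VmSize: $total kB"
```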


Thanks.


-- wli

2004-10-28 19:19:59

by Dave Dodge

Subject: Re: [uClibc] Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

On Thu, Oct 28, 2004 at 01:33:53PM +0300, Denis Vlasenko wrote:
> Output puzzles me: total virtual space taken by *all*
> processes is ~23mb yet swap usage is ~90mb.
>
> How that can be? *What* is there? Surely it can't
> be a filesystem cache because OOM condition reduces that
> to nearly zero.

Just a thought: do you have a tmpfs mounted anywhere?

-Dave Dodge

2004-10-29 10:31:09

by Denis Vlasenko

Subject: Re: Swap strangeness: total VIRT ~23mb for all processes, swap 91156k used - impossible?

> > Have <= 128 MB RAM? Have a heavy busy system (even with >= 128)?
>
> 128mb. System was idle, fresh after boot.
>
> Seems I wasn't clear enough. I will try harder now:
>
> Even if I add up size of every process, *counting libc shared pages
> once per process* (which will overestimate memory usage), I arrive at
> 23mb *total memory required by all processes*. How come kernel
> found 90mb to swap out? There is NOTHING to swap out except those
> 23mb!
>
> (Of course when oom_trigger was running, kernel first swapped out
> those 23mb and then started swapping out memory taken by oom_trigger
> itself, but when oom_trigger was killed, its RAM *and* swapspace
> should be deallocated. Thus I expected to see ~20 mb swap usage).

I did more testing. It does not happen right after boot.

It is not 100% reproducible; I *think* I need to fill the pagecache
with filesystem data first
(grep -rF qjklwmhflakwghfjklah $source_tree does this nicely)
and let the box sit idle for a minute or two.

Then I run oom_trigger. Typically it eats ~350mb and is killed.
That is, when things are working normally.

But sometimes it eats only ~250mb, and top before/after that looks
like this:

before oom:
top - 08:14:12 up 15 min, 1 user, load average: 0.37, 0.14, 0.06
Tasks: 78 total, 1 running, 77 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.4% us, 1.1% sy, 0.0% ni, 94.2% id, 4.2% wa, 0.1% hi, 0.0% si
Mem: 112376k total, 108564k used, 3812k free, 14968k buffers
Swap: 262136k total, 2592k used, 259544k free, 80516k cached

after oom:
top - 08:14:27 up 15 min, 1 user, load average: 0.52, 0.18, 0.07
Tasks: 78 total, 1 running, 77 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.4% us, 1.3% sy, 0.0% ni, 93.4% id, 4.8% wa, 0.1% hi, 0.0% si
Mem: 112376k total, 107704k used, 4672k free, 10336k buffers
Swap: 262136k total, 84168k used, 177968k free, 4056k cached

Both mem and swap 'used' values look strange.

Complete top outputs are attached.

Subsequently I automated the thing, and another such
event was captured (a tarball of /proc/N/status before
and after the OOM is attached):

#!/bin/bash
# bash, not sh: $'\t' is a bashism
cd status_before || exit 1
echo "PID.... Before......................... After......................"
for b in *; do
	a=../status_after/$b
	echo "$b:"$'\t'$(grep VmSize "$b")$'\t'$(grep VmRSS "$b")$'\t'$(grep VmSize "$a" 2>/dev/null)$'\t'$(grep VmRSS "$a" 2>/dev/null)
done

PID.... Before......................... After......................
1: VmSize: 968 kB VmRSS: 56 kB VmSize: 968 kB VmRSS: 4 kB
1094: VmSize: 1212 kB VmRSS: 4 kB VmSize: 1212 kB VmRSS: 0 kB
1101: VmSize: 1548 kB VmRSS: 960 kB VmSize: 1548 kB VmRSS: 0 kB
1102: VmSize: 340 kB VmRSS: 28 kB VmSize: 340 kB VmRSS: 0 kB
1208: VmSize: 1660 kB VmRSS: 896 kB VmSize: 1660 kB VmRSS: 772 kB
125:
1459: VmSize: 960 kB VmRSS: 548 kB VmSize: 960 kB VmRSS: 484 kB
1460: VmSize: 968 kB VmRSS: 596 kB
2:
23:
247:
269: VmSize: 1212 kB VmRSS: 76 kB VmSize: 1212 kB VmRSS: 0 kB
3:
4:
47:
471: VmSize: 1352 kB VmRSS: 4 kB VmSize: 1352 kB VmRSS: 0 kB
48:
49:
50:
51:
539: VmSize: 52 kB VmRSS: 16 kB VmSize: 52 kB VmRSS: 4 kB
549: VmSize: 348 kB VmRSS: 4 kB VmSize: 348 kB VmRSS: 0 kB
555: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
556: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
557: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
558: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
559: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
560: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
561: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
562: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
563: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
565: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
572: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
577: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
578: VmSize: 1260 kB VmRSS: 4 kB VmSize: 1260 kB VmRSS: 0 kB
588: VmSize: 3076 kB VmRSS: 3076 kB VmSize: 3076 kB VmRSS: 3076 kB
589: VmSize: 28 kB VmRSS: 24 kB VmSize: 28 kB VmRSS: 24 kB
593: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
594: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
595: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
602: VmSize: 36 kB VmRSS: 32 kB VmSize: 36 kB VmRSS: 28 kB
604: VmSize: 28 kB VmRSS: 28 kB VmSize: 28 kB VmRSS: 28 kB
611: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
612: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
633: VmSize: 2492 kB VmRSS: 920 kB VmSize: 2492 kB VmRSS: 760 kB
634: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
640: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
641: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
642: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
648: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
649: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
650: VmSize: 44 kB VmRSS: 4 kB VmSize: 44 kB VmRSS: 0 kB
656: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
657: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
658: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
659: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
660: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
663: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
664: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
665: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
666: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
667: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
668: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
669: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
672: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
673: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
674: VmSize: 1416 kB VmRSS: 368 kB VmSize: 1416 kB VmRSS: 180 kB
678: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
679: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
680: VmSize: 20 kB VmRSS: 16 kB VmSize: 20 kB VmRSS: 0 kB
682: VmSize: 36 kB VmRSS: 24 kB VmSize: 36 kB VmRSS: 0 kB
695: VmSize: 28 kB VmRSS: 4 kB VmSize: 28 kB VmRSS: 0 kB
696: VmSize: 1824 kB VmRSS: 508 kB VmSize: 1824 kB VmRSS: 504 kB
715: VmSize: 28 kB VmRSS: 24 kB VmSize: 28 kB VmRSS: 0 kB
718: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
723: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
724: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
731: VmSize: 28 kB VmRSS: 28 kB VmSize: 28 kB VmRSS: 0 kB
750: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
754: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
759: VmSize: 708 kB VmRSS: 4 kB VmSize: 708 kB VmRSS: 0 kB
--
vda





Attachments:
(No filename) (6.46 kB)
after (6.60 kB)
before (6.60 kB)
proc.tar.bz2 (3.36 kB)