VMware has been killed on me a couple of times, mysteriously.
My laptop has 256MB of memory and no swap, running 2.4.16, and for
some reason VMware has been killed with the following syslog
entries:
Dec 17 23:33:23 puck kernel: Out of Memory: Killed process 28803 (vmware).
Dec 17 23:33:35 puck kernel: Out of Memory: Killed process 28804 (vmware).
Dec 17 23:33:37 puck kernel: /dev/vmmon: Vmx86_ReleaseVM: unlocked pages: 75286, unlocked dirty pages: 51084
What I find odd is that I am quite certain this machine did _not_ run
out of memory when this happened. Just a few minutes ago I had an idle
VMware session and started an XEmacs to edit a large file. Out of
curiosity, I happened to run 'free' a few seconds before VMware got
killed:
$ free
             total       used       free     shared    buffers     cached
Mem:        256224     219748      36476          0      12956      48816
-/+ buffers/cache:     157976      98248
Swap:            0          0          0
Boom, it died at about the same time I exited XEmacs. After that, I
ran 'free' again:
$ free
             total       used       free     shared    buffers     cached
Mem:        256224     203644      52580          0      13148      81620
-/+ buffers/cache:     108876     147348
Swap:            0          0          0
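For what it's worth, the '-/+ buffers/cache' line in 'free' output is just arithmetic on the 'Mem:' row, so the numbers above do add up. A quick sh sketch with the values from the first snapshot:

```shell
# values (in kB) taken from the first 'free' snapshot above
used=219748; free=36476; buffers=12956; cached=48816

# 'used' minus buffers/cache = memory actually held by applications
echo "app used: $((used - buffers - cached))"   # 157976, matching the -/+ line

# 'free' plus buffers/cache = memory the kernel could in principle reclaim
echo "app free: $((free + buffers + cached))"   # 98248, matching the -/+ line
```

So roughly 98MB was reclaimable in principle when the OOM kill fired, which is what makes the kill look wrong.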
I may be missing something obvious here, but I just can't figure out
why the kernel kills VMware in this situation.
If anyone's interested, I think I can reproduce this and - if someone
will kindly instruct me a bit - produce some more information. I
_think_ this is the point where experienced kernel hackers start
talking about running 'vmstat'. And where I usually start having
trouble understanding what it is that people are talking about...
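For anyone in the same boat, a simple starting point (just a sketch; vmstat comes with the procps package and the exact columns vary a little between versions) is to log vmstat while reproducing the kill:

```shell
# sample memory/IO statistics once a second, ten times,
# keeping a copy in vmstat.log for later inspection
vmstat 1 10 | tee vmstat.log
```

The 'free', 'buff' and 'cache' columns then show whether the kernel was reclaiming cache in the seconds before the OOM kill, or killed with the cache still full.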
Suonpää...
On 17 Dec 2001, Samuli Suonpaa wrote:
> I've got VMWare killed a couple of times mysteriously.
>
> I've got 256MB memory and no swap on my laptop running 2.4.16 and for
> some reason VMWare has got killed with the following syslog
> information:
>
> Dec 17 23:33:23 puck kernel: Out of Memory: Killed process 28803 (vmware).
> Dec 17 23:33:35 puck kernel: Out of Memory: Killed process 28804 (vmware).
> Dec 17 23:33:37 puck kernel: /dev/vmmon: Vmx86_ReleaseVM: unlocked pages: 75286, unlocked dirty pages: 51084
Samuli,
The problem is that buffer/cache/{i,d}cache pages are not being freed
easily; instead the kernel swaps out anonymous memory.
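One way to watch for this in practice (a rough sketch; field names as they appear in /proc/meminfo) is to sample the cache counters while the workload runs and see whether Buffers/Cached actually shrink under pressure:

```shell
# print the reclaimable-cache counters a few times;
# if reclaim is working, Buffers/Cached should drop as MemFree drops
for i in 1 2 3; do
    grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
    echo ---
    sleep 1
done
```

If MemFree hits bottom while Buffers/Cached stay large, the cache is not being released and the OOM killer fires prematurely, which matches the reports in this thread.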
Could you please try 2.4.17-rc1 and tell me if it makes a difference for
you ?
Thanks
Hi,
Having tested 2.4.17-rc1, I think the OOM killer is still not
working right. In one session StarOffice got killed while
there was still a lot of memory available. Strangely
enough, 2.4.16 seems to behave better in the same
situation. Another issue: after that I restarted
StarOffice, mounted a floppy and read a file from it.
Because of a bug in StarOffice (I think), when I saved the
file to the floppy from StarOffice, it hung and was
not responsive at all. I tried to kill StarOffice but it
didn't work, even with signal 9. When I tried to
repeat this under 2.4.16, it did not happen (kill
worked).
Regards,
=====
S.KIEU
On Mon, Dec 17, 2001 at 07:10:54PM -0200, you [Marcelo Tosatti] claimed:
>
>
> On 17 Dec 2001, Samuli Suonpaa wrote:
>
> > I've got VMWare killed a couple of times mysteriously.
> >
> > I've got 256MB memory and no swap on my laptop running 2.4.16 and for
> > some reason VMWare has got killed with the following syslog
> > information:
> >
> > Dec 17 23:33:23 puck kernel: Out of Memory: Killed process 28803 (vmware).
> > Dec 17 23:33:35 puck kernel: Out of Memory: Killed process 28804 (vmware).
> > Dec 17 23:33:37 puck kernel: /dev/vmmon: Vmx86_ReleaseVM: unlocked pages: 75286, unlocked dirty pages: 51084
>
> Samuli,
>
> The problem is that buffer/cache/{i,d}cache pages are not getting freed
> easily, and instead the kernel swapouts anonymous memory.
>
> Could you please try 2.4.17-rc1 and tell me if it makes a difference for
> you ?
See my report on what happens on a 2GB box with .16 or .17rc1. Buffers are
still not released as they should be.
http://marc.theaimsgroup.com/?l=linux-kernel&m=100849985518543&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=100857274818037&w=2
Perhaps someone could test on x86 with less memory (I can do that later, but
right now I don't have any throw-away box with a recent kernel on it). On ia64
with 2GB RAM + 256MB swap this results in an OOM kill when trying to allocate
and touch 1.7GB, even though real memory usage (minus buffers) is less than 200MB.
Basically the test is: first fill the cache:
find / -type f -exec cat {} \; > /dev/null
updatedb
then run this with a suitable argument (megabytes to allocate):
#include <stdio.h>
#include <stdlib.h>

#define BKSP "\010\010\010\010\010\010"

int main(int argc, char **argv)
{
	unsigned long megs = 512;
	unsigned long size, i;
	unsigned char *buf;

	if (argc > 1)
		megs = atol(argv[1]);
	size = megs * 1024 * 1024;

	fprintf(stderr, "Allocating %lu megs...\n\n ", megs);
	buf = malloc(size);
	if (!buf) {
		fprintf(stderr, "malloc(%lu", size);
		perror(")");
		exit(1);
	}

	/* Touch every byte so the pages are actually faulted in,
	 * not just reserved by the allocator. */
	for (i = 0; i < size; i++) {
		buf[i] = 42;
		if ((i + 1) % (1024 * 1024) == 0)
			fprintf(stderr, BKSP "%4luMB", (i + 1) / 1024 / 1024);
	}
	fprintf(stderr, "\n Success.\n");
	return 0;
}
-- v --
[email protected]