2006-09-27 20:54:43

by Dave Jones

Subject: oom kill oddness.

So I have two boxes that are very similar.
Both have 2GB of RAM & 1GB of swap space.
One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.

The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
without incident. (Although it takes ~4 minutes longer than a -j2)

The faster box goes absolutely nuts, oomkilling everything in sight,
until eventually after about 10 minutes, the box locks up dead,
and won't even respond to pings.

Oh, the only other difference - the slower box has 1 disk, whereas the
faster box has two in RAID0. I'm not surprised that stuff is getting
oom-killed given the pathological scenario, but the fact that the
box never recovered at all is a little odd. Does md lack some means
of dealing with low memory scenarios ?

Dave


2006-09-28 00:00:12

by Andrew Morton

Subject: Re: oom kill oddness.

On Wed, 27 Sep 2006 16:54:35 -0400
Dave Jones <[email protected]> wrote:

> So I have two boxes that are very similar.
> Both have 2GB of RAM & 1GB of swap space.
> One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
>
> The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> without incident. (Although it takes ~4 minutes longer than a -j2)
>
> The faster box goes absolutely nuts, oomkilling everything in sight,
> until eventually after about 10 minutes, the box locks up dead,
> and won't even respond to pings.
>
> Oh, the only other difference - the slower box has 1 disk, whereas the
> faster box has two in RAID0. I'm not surprised that stuff is getting
> oom-killed given the pathological scenario, but the fact that the
> box never recovered at all is a little odd. Does md lack some means
> of dealing with low memory scenarios ?

Are you sure it isn't a memory leak?

Suggest you kill things just before it locks up, have a look at
/proc/meminfo, /proc/slabinfo, sysrq-M, echo 3 > /proc/sys/vm/drop_caches,
etc.
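
Something like the following, run on the console just before the hang,
would capture that (sysrq-M assumes CONFIG_MAGIC_SYSRQ; drop_caches is
there since 2.6.16):

	cat /proc/meminfo /proc/slabinfo > mem-before.txt
	echo m > /proc/sysrq-trigger        # sysrq-M: dump memory state to dmesg
	echo 3 > /proc/sys/vm/drop_caches   # drop pagecache and slab caches
	cat /proc/meminfo                   # compare: does the memory come back?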

2006-09-28 23:03:22

by Roman Zippel

Subject: Re: oom kill oddness.

Hi,

On Wed, 27 Sep 2006, Dave Jones wrote:

> So I have two boxes that are very similar.
> Both have 2GB of RAM & 1GB of swap space.
> One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
>
> The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> without incident. (Although it takes ~4 minutes longer than a -j2)
>
> The faster box goes absolutely nuts, oomkilling everything in sight,
> until eventually after about 10 minutes, the box locks up dead,
> and won't even respond to pings.
>
> Oh, the only other difference - the slower box has 1 disk, whereas the
> faster box has two in RAID0. I'm not surprised that stuff is getting
> oom-killed given the pathological scenario, but the fact that the
> box never recovered at all is a little odd. Does md lack some means
> of dealing with low memory scenarios ?

I think I see the same thing at the other end, on slow machines: here it
only takes a single compile job which doesn't quite fit into memory, plus
another task (like top) which occasionally wakes up, tries to allocate
memory, and then kills the compile job - that's very annoying.

AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
rather local information: other processes can still make progress and keep
this process from making any, so it gets grumpy and starts killing.
What's happening here is that most memory is either mapped or in the swap
cache, so we have a race between processes trying to free memory from the
cache and processes mapping memory back into their address space.
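
For reference, the 2.6.18 slow path in __alloc_pages() looks roughly like
this (a heavily simplified sketch, only the progress/OOM decision is
shown); the point is that did_some_progress only reflects this task's own
reclaim attempt, not what the rest of the system is doing:

	did_some_progress = try_to_free_pages(zonelist->zones, gfp_mask);

	if (likely(did_some_progress)) {
		/* Our own reclaim pass freed something: retry the allocation. */
		page = get_page_from_freelist(gfp_mask, order,
						zonelist, alloc_flags);
		if (page)
			goto got_pg;
	} else if ((gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY)) {
		/*
		 * No progress from *this* task's point of view.  Other tasks
		 * may be reclaiming pages and faulting them straight back in,
		 * but this path only sees its own failure and declares the
		 * whole system out of memory.
		 */
		out_of_memory(zonelist, gfp_mask, order);
		goto restart;
	}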

If someone wants to play with the problem, the example program below
triggers the problem relatively easily (booting with only a little RAM
helps). It starts a number of readers, which should touch a bit more
memory than is available, and a few writers, which occasionally allocate
memory.

bye, Roman


#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define MEM_SIZE (24 << 20)	/* default working set: 24MB */

int main(int ac, char **av)
{
	volatile char *mem;
	int i, memsize;

	memsize = MEM_SIZE;
	if (ac > 1)
		memsize = atoi(av[1]) << 20;	/* size in MB from the command line */
	mem = malloc(memsize);

	memset((char *)mem, 0, memsize);

	/*
	 * Readers: touch random pages of the region so that a bit more
	 * memory than is available keeps getting faulted back in.
	 */
	for (i = 0; i < 32; i++) {
		if (!fork()) {
			while (1) {
				*(mem + random() % memsize);
			}
		}
	}

	/*
	 * Writers: wake up every few seconds and allocate a little memory,
	 * competing with the readers for free pages.
	 */
	for (i = 0; i < 5; i++) {
		if (!fork()) {
			while (1) {
				volatile char *p;
				struct timespec ts;
				int t = random() % 5000;

				ts.tv_sec = t / 1000;
				ts.tv_nsec = (t % 1000) * 1000000;
				nanosleep(&ts, NULL);
				p = malloc(1 << 16);
				memset((char *)p, 0, 1 << 16);
				free((void *)p);
			}
		}
	}
	while (1)
		pause();
}

2006-09-29 00:17:32

by Andrew Morton

Subject: Re: oom kill oddness.

On Fri, 29 Sep 2006 01:03:16 +0200 (CEST)
Roman Zippel <[email protected]> wrote:

> Hi,
>
> On Wed, 27 Sep 2006, Dave Jones wrote:
>
> > So I have two boxes that are very similar.
> > Both have 2GB of RAM & 1GB of swap space.
> > One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
> >
> > The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> > without incident. (Although it takes ~4 minutes longer than a -j2)
> >
> > The faster box goes absolutely nuts, oomkilling everything in sight,
> > until eventually after about 10 minutes, the box locks up dead,
> > and won't even respond to pings.
> >
> > Oh, the only other difference - the slower box has 1 disk, whereas the
> > faster box has two in RAID0. I'm not surprised that stuff is getting
> > oom-killed given the pathological scenario, but the fact that the
> > box never recovered at all is a little odd. Does md lack some means
> > of dealing with low memory scenarios ?
>
> I think I see the same thing at the other end, on slow machines: here it
> only takes a single compile job which doesn't quite fit into memory, plus
> another task (like top) which occasionally wakes up, tries to allocate
> memory, and then kills the compile job - that's very annoying.
>
> AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
> rather local information: other processes can still make progress and keep
> this process from making any, so it gets grumpy and starts killing.
> What's happening here is that most memory is either mapped or in the swap
> cache, so we have a race between processes trying to free memory from the
> cache and processes mapping memory back into their address space.

Kernel versions please, guys. There have been a lot of oom-killer changes
post-2.6.18.

> If someone wants to play with the problem, the example program below
> triggers the problem relatively easily (booting with only a little RAM
> helps). It starts a number of readers, which should touch a bit more
> memory than is available, and a few writers, which occasionally allocate
> memory.
>

How much ram, how much swap?

2006-09-29 00:22:29

by Dave Jones

Subject: Re: oom kill oddness.

On Thu, Sep 28, 2006 at 05:17:06PM -0700, Andrew Morton wrote:
> On Fri, 29 Sep 2006 01:03:16 +0200 (CEST)
> Roman Zippel <[email protected]> wrote:
>
> > Hi,
> >
> > On Wed, 27 Sep 2006, Dave Jones wrote:
> >
> > > So I have two boxes that are very similar.
> > > Both have 2GB of RAM & 1GB of swap space.
> > > One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
> > >
> > > The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> > > without incident. (Although it takes ~4 minutes longer than a -j2)
> > >
> > > The faster box goes absolutely nuts, oomkilling everything in sight,
> > > until eventually after about 10 minutes, the box locks up dead,
> > > and won't even respond to pings.
> > >
> > > Oh, the only other difference - the slower box has 1 disk, whereas the
> > > faster box has two in RAID0. I'm not surprised that stuff is getting
> > > oom-killed given the pathological scenario, but the fact that the
> > > box never recovered at all is a little odd. Does md lack some means
> > > of dealing with low memory scenarios ?
> >
> > I think I see the same thing at the other end, on slow machines: here it
> > only takes a single compile job which doesn't quite fit into memory, plus
> > another task (like top) which occasionally wakes up, tries to allocate
> > memory, and then kills the compile job - that's very annoying.
> >
> > AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
> > rather local information: other processes can still make progress and keep
> > this process from making any, so it gets grumpy and starts killing.
> > What's happening here is that most memory is either mapped or in the swap
> > cache, so we have a race between processes trying to free memory from the
> > cache and processes mapping memory back into their address space.
>
> Kernel versions please, guys. There have been a lot of oom-killer changes
> post-2.6.18.

Sorry, I've been stuck on 2.6.18 as that's what we're shipping in FC6 soon.

Dave

2006-09-29 00:58:09

by Roman Zippel

Subject: Re: oom kill oddness.

Hi,

On Thu, 28 Sep 2006, Andrew Morton wrote:

> Kernel versions please, guys. There have been a lot of oom-killer changes
> post-2.6.18.

Last I tested this was with 2.6.18.
The latest changes to vmscan.c should help...

> > If someone wants to play with the problem, the example program below
> > triggers the problem relatively easily (booting with only a little RAM
> > helps). It starts a number of readers, which should touch a bit more
> > memory than is available, and a few writers, which occasionally allocate
> > memory.
> >
>
> How much ram, how much swap?

I tested it with 32MB and 64MB and plenty of swap.

bye, Roman

2006-09-29 01:39:48

by Nick Piggin

Subject: Re: oom kill oddness.

Roman Zippel wrote:

>Hi,
>
>On Thu, 28 Sep 2006, Andrew Morton wrote:
>
>
>>Kernel versions please, guys. There have been a lot of oom-killer changes
>>post-2.6.18.
>>
>
>Last I tested this was with 2.6.18.
>The latest changes to vmscan.c should help...
>

It would be good if you could confirm that. I basically got the kernel to
the point where it used up all swap before going OOM on the workload I
was looking at (MySQL running in virtual machines).


2006-09-29 19:58:48

by Larry Woodman

Subject: Re: oom kill oddness.

--- linux-2.6.18.noarch/mm/oom_kill.c.orig
+++ linux-2.6.18.noarch/mm/oom_kill.c
@@ -306,6 +306,69 @@ static int oom_kill_process(struct task_
 	return oom_kill_task(p, message);
 }
 
+int should_oom_kill(void)
+{
+	static spinlock_t oom_lock = SPIN_LOCK_UNLOCKED;
+	static unsigned long first, last, count, lastkill;
+	unsigned long now, since;
+	int ret = 0;
+
+	spin_lock(&oom_lock);
+	now = jiffies;
+	since = now - last;
+	last = now;
+
+	/*
+	 * If it's been a long time since last failure,
+	 * we're not oom.
+	 */
+	if (since > 5*HZ)
+		goto reset;
+
+	/*
+	 * If we haven't tried for at least one second,
+	 * we're not really oom.
+	 */
+	since = now - first;
+	if (since < HZ)
+		goto out_unlock;
+
+	/*
+	 * If we have gotten only a few failures,
+	 * we're not really oom.
+	 */
+	if (++count < 10)
+		goto out_unlock;
+
+	/*
+	 * If we just killed a process, wait a while
+	 * to give that task a chance to exit. This
+	 * avoids killing multiple processes needlessly.
+	 */
+	since = now - lastkill;
+	if (since < HZ*5)
+		goto out_unlock;
+
+	/*
+	 * Ok, really out of memory. Kill something.
+	 */
+	lastkill = now;
+	ret = 1;
+
+reset:
+/*
+ * We dropped the lock above, so check to be sure the variable
+ * first only ever increases to prevent false OOM's.
+ */
+	if (time_after(now, first))
+		first = now;
+	count = 0;
+
+out_unlock:
+	spin_unlock(&oom_lock);
+	return ret;
+}
+
 /**
  * out_of_memory - kill the "best" process when we run out of memory
  *
@@ -326,6 +389,9 @@ void out_of_memory(struct zonelist *zone
 		show_mem();
 	}
 
+	if (!should_oom_kill())
+		return;
+
 	cpuset_lock();
 	read_lock(&tasklist_lock);
 
--- linux-2.6.18.noarch/mm/vmscan.c.orig
+++ linux-2.6.18.noarch/mm/vmscan.c
@@ -999,10 +999,8 @@ unsigned long try_to_free_pages(struct z
 			reclaim_state->reclaimed_slab = 0;
 		}
 		total_scanned += sc.nr_scanned;
-		if (nr_reclaimed >= sc.swap_cluster_max) {
-			ret = 1;
+		if (nr_reclaimed >= sc.swap_cluster_max)
 			goto out;
-		}
 
 		/*
 		 * Try to write back as many pages as we just scanned. This
@@ -1030,6 +1028,8 @@ out:
 
 		zone->prev_priority = zone->temp_priority;
 	}
+	if (nr_reclaimed)
+		ret = 1;
 	return ret;
 }



2006-09-29 21:34:15

by Dave Jones

Subject: Re: oom kill oddness.

On Fri, Sep 29, 2006 at 04:03:14PM -0400, Larry Woodman wrote:

> Dave, this has been a problem since the out_of_memory() function was
> changed between 2.6.10 and 2.6.11. Before this change, out_of_memory()
> required multiple calls within 5 seconds before actually OOM killing a
> process. After the change (in 2.6.11), a single call to out_of_memory()
> results in OOM killing a process. The following patch allows the 2.6.18
> system to run under much more memory pressure before it OOM kills.

Some of these tests do seem to have been re-added in Linus' current tree.

[PATCH] oom: don't kill current when another OOM in progress

went in earlier today, for example.
I'm curious why these checks were ever removed in the first place, though.

Dave