2002-10-29 21:30:57

by Matt Reppert

Subject: poll-related "scheduling while atomic", 2.5.44-mm6

Debug: sleeping function called from illegal context at mm/slab.c:1304
Call Trace:
[<c0113f98>] __might_sleep+0x54/0x5c
[<c012e342>] kmem_flagcheck+0x1e/0x50
[<c012ec4b>] kmalloc+0x4b/0x114
[<c014c2cd>] sys_poll+0x91/0x284
[<c0106eb3>] syscall_call+0x7/0xb

This one comes from calling kmalloc with GFP_KERNEL in sys_poll.
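
For context: kmalloc(GFP_KERNEL) is allowed to sleep, so the slab code runs
a debug check roughly along the lines of the sketch below whenever the
caller still counts as atomic (nonzero preempt count or interrupts off).
This is only an illustration, not the exact -mm6 source.

	/*
	 * Rough sketch, not the actual 2.5.44-mm6 code: GFP_KERNEL
	 * allocations may sleep, so complain if the caller is atomic.
	 */
	void __might_sleep(char *file, int line)
	{
		if (in_atomic() || irqs_disabled()) {
			printk(KERN_ERR "Debug: sleeping function called "
				"from illegal context at %s:%d\n", file, line);
			dump_stack();
		}
	}

With a leaked preempt count (see the fix later in the thread), this check
fires even though sys_poll itself is doing nothing wrong here.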

bad: scheduling while atomic!
Call Trace:
[<c0112ba1>] do_schedule+0x3d/0x2c8
[<c011d14e>] add_timer+0x36/0x124
[<c011ddb0>] schedule_timeout+0x84/0xa4
[<c011dd20>] process_timeout+0x0/0xc
[<c014c216>] do_poll+0xc2/0xe8
[<c014c3ca>] sys_poll+0x18e/0x284
[<c0106eb3>] syscall_call+0x7/0xb

Another little tidbit. I was in X11 while this was happening, and I
happened to stop a process (nautilus) just before I looked in my logs
about this ... and caught a "Notice: process nautilus exited with
preempt_count 2". So my guess is somewhere between -mm5 and -mm6 we
screwed up the atomicity count. (Funny I didn't see that for more
processes, though.)
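
That notice comes from an exit-time sanity check; a rough sketch of the
idea (not the exact -mm6 code) is:

	/*
	 * Sketch only: a nonzero preempt_count at task exit means some
	 * code path disabled preemption and never re-enabled it.
	 */
	if (unlikely(preempt_count()))
		printk("Notice: process %s exited with preempt_count %d\n",
			current->comm, preempt_count());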

Matt


2002-10-29 22:12:53

by Andrew Morton

Subject: Re: poll-related "scheduling while atomic", 2.5.44-mm6

Matt Reppert wrote:
>
> So my guess is somewhere between -mm5 and -mm6 we
> screwed up the atomicity count.

Mine too. I'll check it out, thanks.

Do you have preemption enabled?

2002-10-29 22:59:20

by Paolo Ciarrocchi

Subject: Re: poll-related "scheduling while atomic", 2.5.44-mm6

>> So my guess is somewhere between -mm5 and -mm6 we
>> screwed up the atomicity count.
>Mine too. I'll check it out, thanks.

The same here as well

>Do you have preemption enabled?

yes

Paolo

2002-10-29 23:49:09

by Matt Reppert

Subject: Re: poll-related "scheduling while atomic", 2.5.44-mm6

On Tue, 29 Oct 2002 14:19:04 -0800
Andrew Morton <[email protected]> wrote:

> Matt Reppert wrote:
> >
> > So my guess is somewhere between -mm5 and -mm6 we
> > screwed up the atomicity count.
>
> Mine too. I'll check it out, thanks.
>
> Do you have preemption enabled?

Sure do.

Thanks,
Matt

2002-10-30 06:20:54

by Andrew Morton

Subject: Re: poll-related "scheduling while atomic", 2.5.44-mm6

Paolo Ciarrocchi wrote:
>
> >> So my guess is somewhere between -mm5 and -mm6 we
> >> screwed up the atomicity count.
> >Mine too. I'll check it out, thanks.
>
> The same here as well
>

This'll fix it up. Whoever invented cut-n-paste has a lot to
answer for.


--- 25/mm/swap.c~preempt-count-fix	Tue Oct 29 22:19:54 2002
+++ 25-akpm/mm/swap.c	Tue Oct 29 22:20:16 2002
@@ -90,11 +90,12 @@ void lru_cache_add_active(struct page *p
 
 void lru_add_drain(void)
 {
-	struct pagevec *pvec = &per_cpu(lru_add_pvecs, get_cpu());
+	int cpu = get_cpu();
+	struct pagevec *pvec = &per_cpu(lru_add_pvecs, cpu);
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
-	pvec = &per_cpu(lru_add_active_pvecs, get_cpu());
+	pvec = &per_cpu(lru_add_active_pvecs, cpu);
 	if (pagevec_count(pvec))
 		__pagevec_lru_add_active(pvec);
 	put_cpu();
.
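
For the record, the reason the old code leaked a preempt count: get_cpu()
disables preemption and put_cpu() re-enables it, and lru_add_drain() was
calling get_cpu() twice but put_cpu() only once. A sketch of the usual
definitions (the exact 2.5 macros may differ slightly):

	/*
	 * get_cpu() disables preemption and returns the current CPU
	 * number; put_cpu() re-enables preemption.  Two get_cpu() calls
	 * paired with a single put_cpu() leave preempt_count elevated,
	 * which is what the "exited with preempt_count" notice and the
	 * "scheduling while atomic" warnings were showing.
	 */
	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	#define put_cpu()	preempt_enable()

Caching the cpu number in a local variable keeps the get_cpu()/put_cpu()
pair balanced while still using the same CPU's pagevecs for both passes.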

I had a crash while testing SMP+preempt btw. Nasty one - took a
pagefault from userspace but do_page_fault() decided that the
fault was in-kernel or something. It fell all the way through
to die() and, well, died. I saw the same happen some months ago.