I've been trying to use oprofile on an RT kernel to look at some
performance issues. While running I notice the following sent to
the console:
BUG: scheduling with irqs disabled: java/0x00000000/4521
caller is rt_mutex_slowlock+0x156/0x1dd
[<c032051a>] schedule+0x65/0xd2 (8)
[<c0321338>] rt_mutex_slowlock+0x156/0x1dd (12)
[<c032142a>] rt_mutex_lock+0x24/0x28 (72)
[<c0134904>] rt_down_read+0x38/0x3b (20)
[<c0322a89>] do_page_fault+0xe3/0x52d (12)
[<c03229a6>] do_page_fault+0x0/0x52d (76)
[<c01033bb>] error_code+0x4f/0x54 (8)
[<c01ce6d0>] __copy_from_user_ll+0x55/0x7c (44)
[<f89be7ef>] dump_user_backtrace+0x2e/0x56 [oprofile] (24)
[<c0134869>] rt_up_read+0x3e/0x41 (20)
[<f89be864>] x86_backtrace+0x4a/0x5a [oprofile] (20)
[<f89bd53a>] oprofile_add_sample+0x73/0x89 [oprofile] (20)
[<f89beea3>] athlon_check_ctrs+0x22/0x4a [oprofile] (32)
[<f89be8c5>] nmi_callback+0x18/0x1b [oprofile] (28)
[<c01041ff>] do_nmi+0x24/0x33 (12)
[<c0103462>] nmi_stack_correct+0x1d/0x22 (16)
It seems strange to me that oprofile would be calling
'__copy_from_user_ll' in this context. I can see why the
changes made for RT locking expose this. But, doesn't this
issue also exist on non-RT (default) kernels? What happens
when we generate a page fault in this context on non-RT kernels?
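To spell out what I mean by the RT locking changes, here is a rough
sketch (illustrative only, not the actual -rt source; the struct layout
is just my shorthand) of what happens to the mmap_sem read lock that
do_page_fault() takes, which is why the trace above ends up in
rt_mutex_slowlock() and schedule():

#include <linux/rtmutex.h>

/*
 * Illustrative sketch only -- the struct layout is an assumption, not
 * the real -rt data structure.  The point: on PREEMPT_RT the rwsem
 * behind mm->mmap_sem is backed by an rtmutex, so a contended
 * down_read() in the fault path goes rt_down_read() -> rt_mutex_lock()
 * -> rt_mutex_slowlock() -> schedule(), exactly as the trace shows.
 */
struct rt_rw_semaphore_sketch {
	struct rt_mutex lock;		/* sleeping lock replaces the rwsem */
};

static void rt_down_read_sketch(struct rt_rw_semaphore_sketch *sem)
{
	/* the contended case falls into rt_mutex_slowlock(), which sleeps */
	rt_mutex_lock(&sem->lock);
}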
--
Mike
On Mon, 2006-10-23 at 14:23 -0700, Mike Kravetz wrote:
> I've been trying to use oprofile on an RT kernel to look at some
> performance issues. While running I notice the following sent to
> the console:
>
> BUG: scheduling with irqs disabled: java/0x00000000/4521
> caller is rt_mutex_slowlock+0x156/0x1dd
> [<c032051a>] schedule+0x65/0xd2 (8)
> [<c0321338>] rt_mutex_slowlock+0x156/0x1dd (12)
> [<c032142a>] rt_mutex_lock+0x24/0x28 (72)
> [<c0134904>] rt_down_read+0x38/0x3b (20)
> [<c0322a89>] do_page_fault+0xe3/0x52d (12)
> [<c03229a6>] do_page_fault+0x0/0x52d (76)
> [<c01033bb>] error_code+0x4f/0x54 (8)
> [<c01ce6d0>] __copy_from_user_ll+0x55/0x7c (44)
> [<f89be7ef>] dump_user_backtrace+0x2e/0x56 [oprofile] (24)
> [<c0134869>] rt_up_read+0x3e/0x41 (20)
> [<f89be864>] x86_backtrace+0x4a/0x5a [oprofile] (20)
> [<f89bd53a>] oprofile_add_sample+0x73/0x89 [oprofile] (20)
> [<f89beea3>] athlon_check_ctrs+0x22/0x4a [oprofile] (32)
> [<f89be8c5>] nmi_callback+0x18/0x1b [oprofile] (28)
> [<c01041ff>] do_nmi+0x24/0x33 (12)
> [<c0103462>] nmi_stack_correct+0x1d/0x22 (16)
>
> It seems strange to me that oprofile would be calling
> '__copy_from_user_ll' in this context. I can see why the
> changes made for RT locking expose this. But, doesn't this
> issue also exist on non-RT (default) kernels? What happens
> when we generate a page fault in this context on non-RT kernels?
>
As Mike has pointed out here, oprofile _can_ cause the NMI handler to
schedule. Here's the path (looking at vanilla 2.6.18):

In arch/i386/oprofile/nmi_int.c, nmi_callback() does:

	return model->check_ctrs(regs, &cpu_msrs[cpu]);

Here model == &op_athlon_spec (the other models could have the same
problem, but I'm only looking at this one), and
op_athlon_spec.check_ctrs = &athlon_check_ctrs.
Here's the calling path:
athlon_check_ctrs
==> oprofile_add_sample
==> oprofile_add_ext_sample
==> oprofile_ops.backtrace
== x86_backtrace
==> dump_user_backtrace
==> __copy_from_user_inatomic
Don't let the name fool you: this _can_ schedule (and the comments
above it say so).

Now, perhaps on a vanilla kernel oprofile_add_ext_sample is not likely
to have log_sample fail. I don't know, but this path exists, so we can
indeed schedule in an NMI interrupt.
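To make that last step concrete, here is a simplified sketch of the
user-backtrace copy (not the literal 2.6.18 oprofile source; the struct
and function names below are just illustrative):

#include <linux/oprofile.h>
#include <asm/uaccess.h>

/* saved frame pointer + return address, as laid out on the user stack */
struct frame_head_sketch {
	unsigned long ebp;
	unsigned long ret;
};

/*
 * Simplified sketch of one step of the oprofile user backtrace, running
 * in NMI context.  If the user stack page is not resident, the
 * "inatomic" copy still faults; whether that fault can end up sleeping
 * is decided in do_page_fault(), which is the whole question here.
 */
static unsigned long sample_user_frame(const void __user *head)
{
	struct frame_head_sketch buf;

	if (!access_ok(VERIFY_READ, head, sizeof(buf)))
		return 0;

	/* may fault -> do_page_fault() -> (on -rt) rt_down_read(&mm->mmap_sem) */
	if (__copy_from_user_inatomic(&buf, head, sizeof(buf)))
		return 0;		/* short copy: give up on the backtrace */

	oprofile_add_trace(buf.ret);	/* record the return address */
	return buf.ebp;			/* next frame to walk */
}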
Mike, thanks for pointing this out.
-- Steve
On Mon, Oct 23, 2006 at 10:24:34PM -0400, Steven Rostedt wrote:
> > caller is rt_mutex_slowlock+0x156/0x1dd
> > [<c032051a>] schedule+0x65/0xd2 (8)
> > [<c0321338>] rt_mutex_slowlock+0x156/0x1dd (12)
> > [<c032142a>] rt_mutex_lock+0x24/0x28 (72)
> > [<c0134904>] rt_down_read+0x38/0x3b (20)
> > [<c0322a89>] do_page_fault+0xe3/0x52d (12)
> > [<c03229a6>] do_page_fault+0x0/0x52d (76)
> > [<c01033bb>] error_code+0x4f/0x54 (8)
> > [<c01ce6d0>] __copy_from_user_ll+0x55/0x7c (44)
> > [<f89be7ef>] dump_user_backtrace+0x2e/0x56 [oprofile] (24)
> > [<c0134869>] rt_up_read+0x3e/0x41 (20)
> > [<f89be864>] x86_backtrace+0x4a/0x5a [oprofile] (20)
> > [<f89bd53a>] oprofile_add_sample+0x73/0x89 [oprofile] (20)
> > [<f89beea3>] athlon_check_ctrs+0x22/0x4a [oprofile] (32)
> > [<f89be8c5>] nmi_callback+0x18/0x1b [oprofile] (28)
> > [<c01041ff>] do_nmi+0x24/0x33 (12)
> > [<c0103462>] nmi_stack_correct+0x1d/0x22 (16)
> >
> > It seems strange to me that oprofile would be calling
> > '__copy_from_user_ll' in this context. I can see why the
> > changes made for RT locking expose this. But, doesn't this
> > issue also exist on non-RT (default) kernels? What happens
> > when we generate a page fault in this context on non-RT kernels?
> >
>
> As Mike has pointed out here, oprofile _can_ cause the NMI handler to schedule.
in_atomic() is supposed to be true in this context, so the test in
do_page_fault() catches it.
john
On Tue, 24 Oct 2006, John Levon wrote:
> On Mon, Oct 23, 2006 at 10:24:34PM -0400, Steven Rostedt wrote:
>
>
> in_atomic() is supposed to be true in this context, so the test in
> do_page_fault() catches it.
>
	/*
	 * If we're in an interrupt, have no user context or are running in an
	 * atomic region then we must not take the fault..
	 */
	if (in_atomic() || !mm)
		goto bad_area_nosemaphore;
Ahh, missed that one. So this is an issue that _only_ rt needs to fix.
OK, thanks for pointing that out.
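For completeness, here is a sketch of what that looks like from the
caller's side on mainline (illustrative only, not kernel source):

#include <linux/kernel.h>
#include <linux/hardirq.h>
#include <asm/uaccess.h>

/*
 * Sketch of the mainline caller's view -- not kernel source.  With
 * in_atomic() true (as John points out it should be in this context),
 * do_page_fault() never touches mmap_sem; the fault is fixed up via the
 * exception table and the copy simply comes back short.
 */
static int copy_sample_from_nmi(void *dst, const void __user *src,
				unsigned long n)
{
	WARN_ON(!in_atomic());		/* expected in NMI context */

	if (__copy_from_user_inatomic(dst, src, n))
		return -EFAULT;		/* caller just drops the sample */
	return 0;
}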
-- Steve
On Tue, Oct 24, 2006 at 08:54:42AM -0400, Steven Rostedt wrote:
> On Tue, 24 Oct 2006, John Levon wrote:
>
> > On Mon, Oct 23, 2006 at 10:24:34PM -0400, Steven Rostedt wrote:
> > in_atomic() is supposed to be true in this context, so the test in
> > do_page_fault() catches it.
>
> Ahh, missed that one. So this is an issue that _only_ rt needs to fix.
> OK, thanks for pointing that out.
Thanks! This issue is with an older RT kernel. In the version I am
running, nmi_enter() and nmi_exit() are commented out, as described
here:
http://www.ussg.iu.edu/hypermail/linux/kernel/0508.1/1714.html
Newer RT kernels (such as linux-2.6.18-rt5) have re-enabled the
add_preempt_count/sub_preempt_count calls in nmi_enter/nmi_exit. If I
understand correctly, the reason one could not modify preempt_count
from NMI code is that it could have been in the process of being
modified by non-NMI code. But in recent RT kernels it appears that
preempt_count is still a single word modified by both NMI and
non-NMI code. What am I missing that now makes this safe?
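To spell out the race I'm worried about, here is a schematic (not
actual kernel source; add_preempt_count() is essentially a plain
increment of the thread_info word):

#include <linux/thread_info.h>

/*
 * Schematic of the lost-update race I have in mind -- not kernel
 * source.  preempt_count lives in thread_info and is updated with an
 * ordinary (non-atomic) read-modify-write.
 */
static void add_preempt_count_sketch(int val)
{
	int tmp = current_thread_info()->preempt_count;		/* (1) load  */

	/*
	 * If an NMI lands between (1) and (2) and nmi_enter()/nmi_exit()
	 * also modify preempt_count, the store at (2) overwrites whatever
	 * the NMI wrote to the same word.
	 */
	current_thread_info()->preempt_count = tmp + val;	/* (2) store */
}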
--
Mike
On Wed, 2006-10-25 at 11:58 -0700, Mike Kravetz wrote:
> Newer RT kernels (such as linux-2.6.18-rt5) have re-enabled the
> add_preempt_count/sub_preempt_count calls in nmi_enter/nmi_exit. If I
> understand correctly, the reason one could not modify preempt_count
> from NMI code is that it could have been in the process of being
> modified by non-NMI code. But in recent RT kernels it appears that
> preempt_count is still a single word modified by both NMI and
> non-NMI code. What am I missing that now makes this safe?
>
It's not safe. NMI causes hard lockups on 2.6.18-rt5. Get 2.6.18-rt7.
Lee