It is reasonable to call the __ftrace_dump() function more than once,
so remove the dump_ran variable check.
Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 090eddb..4cec7b8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5106,17 +5106,12 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
/* use static because iter can be a bit big for the stack */
static struct trace_iterator iter;
unsigned int old_userobj;
- static int dump_ran;
unsigned long flags;
int cnt = 0, cpu;
/* only one dump */
local_irq_save(flags);
arch_spin_lock(&ftrace_dump_lock);
- if (dump_ran)
- goto out;
-
- dump_ran = 1;
tracing_off();
@@ -5206,7 +5201,6 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
tracing_on();
}
- out:
arch_spin_unlock(&ftrace_dump_lock);
local_irq_restore(flags);
}
--
1.7.9.7
On Mon, 2013-03-11 at 15:13 +0800, zhangwei(Jovi) wrote:
> It is reasonable to call the __ftrace_dump() function more than once,
> so remove the dump_ran variable check.
This needs a little more work. On an oops, I only want it dumped once,
because a crash can cause another crash while it's dumping, and without
that check it will corrupt the buffer.
Now, we have things like ctrl^z that also do a dump, where we don't
want to disable it. Cleaning this up has been on my todo list for a
while. I may go ahead and clean that up myself.
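Just to sketch the idea (made-up names, not actual kernel code, and not
necessarily what the eventual cleanup will look like): the oops path
could stay one-shot behind an atomic, while explicit dumps keep working:

        /*
         * Sketch only.  Guard just the oops case with an atomic so a
         * crash that happens while we are already dumping returns
         * instead of touching the half-read ring buffer again; a
         * non-oops dump is free to run more than once.
         */
        #include <linux/atomic.h>

        static atomic_t oops_dump_count = ATOMIC_INIT(0);

        static void example_ftrace_dump(bool from_oops)
        {
                /* Only the first oops dump gets through. */
                if (from_oops && atomic_inc_return(&oops_dump_count) != 1)
                        return;

                /* ... take the dump lock, iterate and print the buffer ... */
        }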
-- Steve
>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---
> kernel/trace/trace.c | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 090eddb..4cec7b8 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -5106,17 +5106,12 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
> /* use static because iter can be a bit big for the stack */
> static struct trace_iterator iter;
> unsigned int old_userobj;
> - static int dump_ran;
> unsigned long flags;
> int cnt = 0, cpu;
>
> /* only one dump */
> local_irq_save(flags);
> arch_spin_lock(&ftrace_dump_lock);
> - if (dump_ran)
> - goto out;
> -
> - dump_ran = 1;
>
> tracing_off();
>
> @@ -5206,7 +5201,6 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
> tracing_on();
> }
>
> - out:
> arch_spin_unlock(&ftrace_dump_lock);
> local_irq_restore(flags);
> }
On Mon, Mar 11, 2013 at 10:08 PM, Steven Rostedt <[email protected]> wrote:
> On Mon, 2013-03-11 at 15:13 +0800, zhangwei(Jovi) wrote:
>> It is reasonable to call the __ftrace_dump() function more than once,
>> so remove the dump_ran variable check.
>
> This needs a little more work. On an oops, I only want it dumped once,
> because a crash can cause another crash while it's dumping, and without
> that check it will corrupt the buffer.
For recursive dumping, it's already under the protection of the
ftrace_dump_lock spinlock. Have I missed something? Would you explain
more about this case?
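(For reference, the serialization I mean is roughly this after the
patch; a simplified sketch, not the exact code:)

        /*
         * Simplified shape of __ftrace_dump() after this patch: every
         * caller serializes on ftrace_dump_lock, so two dumps never run
         * at the same time; each caller performs a full dump in turn.
         */
        #include <linux/spinlock.h>
        #include <linux/irqflags.h>

        static arch_spinlock_t ftrace_dump_lock =
                (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

        static void simplified_ftrace_dump(void)
        {
                unsigned long flags;

                local_irq_save(flags);
                arch_spin_lock(&ftrace_dump_lock);

                /* ... tracing_off(), iterate and print the ring buffer ... */

                arch_spin_unlock(&ftrace_dump_lock);
                local_irq_restore(flags);
        }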
>
> Now, we have things like ctrl^z that also do a dump, where we don't
> want to disable it. Cleaning this up has been on my todo list for a
> while. I may go ahead and clean that up myself.
Nice, please ignore this patch if that code will be cleaned up.
Thanks.
>
> -- Steve
>
>>
>> Signed-off-by: zhangwei(Jovi) <[email protected]>
>> ---
>> kernel/trace/trace.c | 6 ------
>> 1 file changed, 6 deletions(-)
>>
>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
>> index 090eddb..4cec7b8 100644
>> --- a/kernel/trace/trace.c
>> +++ b/kernel/trace/trace.c
>> @@ -5106,17 +5106,12 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
>> /* use static because iter can be a bit big for the stack */
>> static struct trace_iterator iter;
>> unsigned int old_userobj;
>> - static int dump_ran;
>> unsigned long flags;
>> int cnt = 0, cpu;
>>
>> /* only one dump */
>> local_irq_save(flags);
>> arch_spin_lock(&ftrace_dump_lock);
>> - if (dump_ran)
>> - goto out;
>> -
>> - dump_ran = 1;
>>
>> tracing_off();
>>
>> @@ -5206,7 +5201,6 @@ __ftrace_dump(bool disable_tracing, enum ftrace_dump_mode oops_dump_mode)
>> tracing_on();
>> }
>>
>> - out:
>> arch_spin_unlock(&ftrace_dump_lock);
>> local_irq_restore(flags);
>> }
>
>
On Mon, 2013-03-11 at 23:35 +0800, Jovi Zhang wrote:
> On Mon, Mar 11, 2013 at 10:08 PM, Steven Rostedt <[email protected]> wrote:
> > On Mon, 2013-03-11 at 15:13 +0800, zhangwei(Jovi) wrote:
> >> It is reasonable to call the __ftrace_dump() function more than once,
> >> so remove the dump_ran variable check.
> >
> > This needs a little more work. On an oops, I only want it dumped once,
> > because a crash can cause another crash while it's dumping, and without
> > that check it will corrupt the buffer.
> For recursive dumping, it's already under the protection of the
> ftrace_dump_lock spinlock. Have I missed something? Would you explain
> more about this case?
Actually, it matters whether it was called from NMI context or not. If an NMI
triggered and did a dump while a reader was reading the buffer, the NMI
can corrupt the buffer. It will print fine for the NMI, but if the
reader continues, there's a chance it can get messed up and corrupt the
buffer. The "dump once" is a paranoid method to say we only dump it once
on oops and stop (hence the ftrace_kill() there too).
Now I think there's a possible deadlock here as well. If the dump was
caused by something other than an NMI lockup, and while it is dumping
an NMI goes off and triggers a bug, it too can enter this path, and that
arch_spin_lock() will cause a deadlock. This will be something I need to
clean up as well.
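One possible way to avoid that (again, only a sketch with made-up names,
not necessarily the eventual fix) would be to try the lock and bail out
instead of spinning:

        /*
         * Sketch only.  If an NMI-triggered dump interrupts a CPU that
         * already holds the dump lock, spinning on arch_spin_lock() can
         * never succeed, so try the lock and give up rather than deadlock.
         */
        #include <linux/spinlock.h>
        #include <linux/irqflags.h>

        static arch_spinlock_t example_dump_lock =
                (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

        static void example_dump_from_nmi(void)
        {
                unsigned long flags;

                local_irq_save(flags);
                if (!arch_spin_trylock(&example_dump_lock)) {
                        /* A dump is already in progress; waiting here in
                         * NMI context could deadlock, so skip this dump. */
                        local_irq_restore(flags);
                        return;
                }

                /* ... dump the ring buffer ... */

                arch_spin_unlock(&example_dump_lock);
                local_irq_restore(flags);
        }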
Thanks for calling my attention to this.
-- Steve