2020-08-19 12:50:59

by Mark Brown

Subject: [PATCH v2 0/3] arm64: Convert to ARCH_STACKWALK

This series updates the arm64 stacktrace code to use the newer and much
simpler arch_stack_walk() interface, the main benefit being a single
entry point to the arch code with no need for the arch code to worry
about skipping frames. Along the way I noticed that the reliable
parameter to the arch_stack_walk() callback appears to be redundant
so there's also a patch here removing that from the existing code to
simplify the interface.

This is preparatory work for implementing reliable stack trace for
arm64.
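For reference, the contract being adopted can be modelled in plain user-space C: the arch code is the single entry point, it hands each return address to a bool-returning callback together with an opaque cookie, and it stops when the callback returns false. Everything below (`fake_arch_stack_walk`, `trace_cookie`, the frame addresses) is an illustrative stand-in, not the kernel API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Same shape as the generic stack_trace_consume_fn: true means "entry
 * consumed or skipped, keep walking", false means "no space left, stop". */
typedef bool (*consume_fn)(void *cookie, unsigned long addr);

struct trace_cookie {
	unsigned long entries[8];
	unsigned int nr, max;	/* max must be <= 8 in this toy model */
};

static bool consume_entry(void *cookie, unsigned long addr)
{
	struct trace_cookie *c = cookie;

	c->entries[c->nr++] = addr;
	return c->nr < c->max;	/* ask the walker to stop once full */
}

/* Stand-in for the single arch entry point: walks a fixed list of
 * "return addresses" instead of real stack frames. */
static void fake_arch_stack_walk(consume_fn fn, void *cookie)
{
	static const unsigned long frames[] = {
		0x1000, 0x1004, 0x1008, 0x100c, 0x1010,
	};
	size_t i;

	for (i = 0; i < sizeof(frames) / sizeof(frames[0]); i++)
		if (!fn(cookie, frames[i]))
			break;
}
```

Note the caller never tells the walker how many frames to skip; any "skip our own helper frames" logic lives with the callback in the generic code, which is the point of the conversion.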

v2: Rebase onto v5.9-rc1.

Mark Brown (3):
stacktrace: Remove reliable argument from arch_stack_walk() callback
arm64: stacktrace: Make stack walk callback consistent with generic
code
arm64: stacktrace: Convert to ARCH_STACKWALK

arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/stacktrace.h | 2 +-
arch/arm64/kernel/perf_callchain.c | 6 +--
arch/arm64/kernel/return_address.c | 8 +--
arch/arm64/kernel/stacktrace.c | 84 ++++-------------------------
arch/s390/kernel/stacktrace.c | 4 +-
arch/x86/kernel/stacktrace.c | 10 ++--
include/linux/stacktrace.h | 5 +-
kernel/stacktrace.c | 8 ++-
9 files changed, 30 insertions(+), 98 deletions(-)

--
2.20.1


2020-08-19 12:51:21

by Mark Brown

Subject: [PATCH v2 2/3] arm64: stacktrace: Make stack walk callback consistent with generic code

As with the generic arch_stack_walk() code, the arm64 stack walk code takes
a callback that is called per stack frame. Currently the arm64 code always
passes a struct stackframe to the callback while the generic code just passes
the pc; however, none of the users ever reference anything in the struct
other than the pc value. The arm64 code also uses a return type of int
while the generic code uses bool, though in both cases the return value
is boolean.

In order to reduce code duplication when arm64 is converted to use
arch_stack_walk() change the signature of the arm64 specific callback to
match that of the generic code.

Signed-off-by: Mark Brown <[email protected]>
---
arch/arm64/include/asm/stacktrace.h | 2 +-
arch/arm64/kernel/perf_callchain.c | 6 +++---
arch/arm64/kernel/return_address.c | 8 ++++----
arch/arm64/kernel/stacktrace.c | 11 +++++------
4 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index fc7613023c19..eb29b1fe8255 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -63,7 +63,7 @@ struct stackframe {

extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
- int (*fn)(struct stackframe *, void *), void *data);
+ bool (*fn)(void *, unsigned long), void *data);
extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl);

diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index b0e03e052dd1..bd2a91bd9e9e 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -137,11 +137,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
* whist unwinding the stackframe and is like a subroutine return so we use
* the PC.
*/
-static int callchain_trace(struct stackframe *frame, void *data)
+static bool callchain_trace(void *data, unsigned long pc)
{
struct perf_callchain_entry_ctx *entry = data;
- perf_callchain_store(entry, frame->pc);
- return 0;
+ perf_callchain_store(entry, pc);
+ return false;
}

void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index a5e8b3b9d798..6434427a827a 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -18,16 +18,16 @@ struct return_address_data {
void *addr;
};

-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
{
struct return_address_data *data = d;

if (!data->level) {
- data->addr = (void *)frame->pc;
- return 1;
+ data->addr = (void *)pc;
+ return true;
} else {
--data->level;
- return 0;
+ return false;
}
}
NOKPROBE_SYMBOL(save_return_addr);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 2dd8e3b8b94b..743cf11fbfca 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -118,12 +118,12 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
NOKPROBE_SYMBOL(unwind_frame);

void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
- int (*fn)(struct stackframe *, void *), void *data)
+ bool (*fn)(void *, unsigned long), void *data)
{
while (1) {
int ret;

- if (fn(frame, data))
+ if (fn(data, frame->pc))
break;
ret = unwind_frame(tsk, frame);
if (ret < 0)
@@ -139,17 +139,16 @@ struct stack_trace_data {
unsigned int skip;
};

-static int save_trace(struct stackframe *frame, void *d)
+static bool save_trace(void *d, unsigned long addr)
{
struct stack_trace_data *data = d;
struct stack_trace *trace = data->trace;
- unsigned long addr = frame->pc;

if (data->no_sched_functions && in_sched_functions(addr))
- return 0;
+ return false;
if (data->skip) {
data->skip--;
- return 0;
+ return false;
}

trace->entries[trace->nr_entries++] = addr;
--
2.20.1

2020-08-19 12:51:42

by Mark Brown

Subject: [PATCH v2 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK

Historically architectures have had duplicated code in their stack trace
implementations for filtering what gets traced. In order to avoid this
duplication some generic code has been provided using a new interface
arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which
factors all this out into the generic stack trace code. Convert arm64
to use this common infrastructure.

Signed-off-by: Mark Brown <[email protected]>
---
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/stacktrace.c | 79 ++++------------------------------
2 files changed, 9 insertions(+), 71 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbee..d1ba52e4b976 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -29,6 +29,7 @@ config ARM64
select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SET_DIRECT_MAP
select ARCH_HAS_SET_MEMORY
+ select ARCH_STACKWALK
select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_STRICT_MODULE_RWX
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 743cf11fbfca..a33fba048954 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -133,82 +133,19 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
NOKPROBE_SYMBOL(walk_stackframe);

#ifdef CONFIG_STACKTRACE
-struct stack_trace_data {
- struct stack_trace *trace;
- unsigned int no_sched_functions;
- unsigned int skip;
-};

-static bool save_trace(void *d, unsigned long addr)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+ struct task_struct *task, struct pt_regs *regs)
{
- struct stack_trace_data *data = d;
- struct stack_trace *trace = data->trace;
-
- if (data->no_sched_functions && in_sched_functions(addr))
- return false;
- if (data->skip) {
- data->skip--;
- return false;
- }
-
- trace->entries[trace->nr_entries++] = addr;
-
- return trace->nr_entries >= trace->max_entries;
-}
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
- struct stack_trace_data data;
- struct stackframe frame;
-
- data.trace = trace;
- data.skip = trace->skip;
- data.no_sched_functions = 0;
-
- start_backtrace(&frame, regs->regs[29], regs->pc);
- walk_stackframe(current, &frame, save_trace, &data);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_regs);
-
-static noinline void __save_stack_trace(struct task_struct *tsk,
- struct stack_trace *trace, unsigned int nosched)
-{
- struct stack_trace_data data;
struct stackframe frame;

- if (!try_get_task_stack(tsk))
- return;
+ if (regs)
+ start_backtrace(&frame, regs->regs[29], regs->pc);
+ else
+ start_backtrace(&frame, thread_saved_fp(task),
+ thread_saved_pc(task));

- data.trace = trace;
- data.skip = trace->skip;
- data.no_sched_functions = nosched;
-
- if (tsk != current) {
- start_backtrace(&frame, thread_saved_fp(tsk),
- thread_saved_pc(tsk));
- } else {
- /* We don't want this function nor the caller */
- data.skip += 2;
- start_backtrace(&frame,
- (unsigned long)__builtin_frame_address(0),
- (unsigned long)__save_stack_trace);
- }
-
- walk_stackframe(tsk, &frame, save_trace, &data);
-
- put_task_stack(tsk);
-}
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
- __save_stack_trace(tsk, trace, 1);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-void save_stack_trace(struct stack_trace *trace)
-{
- __save_stack_trace(current, trace, 0);
+ walk_stackframe(task, &frame, consume_entry, cookie);
}

-EXPORT_SYMBOL_GPL(save_stack_trace);
#endif
--
2.20.1

2020-08-19 12:53:07

by Mark Brown

Subject: [PATCH v2 1/3] stacktrace: Remove reliable argument from arch_stack_walk() callback

Currently the callback passed to arch_stack_walk() takes an argument,
reliable, which indicates if the stack entry is reliable; a comment
says that this is used by some printk() consumers. However, in the current
kernel none of the arch_stack_walk() implementations ever set this flag to
true, and the only callback implementation we have is in the generic
stacktrace code, which ignores the flag. This flag therefore appears
redundant, so we can simplify and clarify things by removing it.

Signed-off-by: Mark Brown <[email protected]>
---
arch/s390/kernel/stacktrace.c | 4 ++--
arch/x86/kernel/stacktrace.c | 10 +++++-----
include/linux/stacktrace.h | 5 +----
kernel/stacktrace.c | 8 +++-----
4 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
index fc5419ac64c8..7f1266c24f6b 100644
--- a/arch/s390/kernel/stacktrace.c
+++ b/arch/s390/kernel/stacktrace.c
@@ -19,7 +19,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,

unwind_for_each_frame(&state, task, regs, 0) {
addr = unwind_get_return_address(&state);
- if (!addr || !consume_entry(cookie, addr, false))
+ if (!addr || !consume_entry(cookie, addr))
break;
}
}
@@ -56,7 +56,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
return -EINVAL;
#endif

- if (!consume_entry(cookie, addr, false))
+ if (!consume_entry(cookie, addr))
return -EINVAL;
}

diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 2fd698e28e4d..8627fda8d993 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -18,13 +18,13 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
struct unwind_state state;
unsigned long addr;

- if (regs && !consume_entry(cookie, regs->ip, false))
+ if (regs && !consume_entry(cookie, regs->ip))
return;

for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
unwind_next_frame(&state)) {
addr = unwind_get_return_address(&state);
- if (!addr || !consume_entry(cookie, addr, false))
+ if (!addr || !consume_entry(cookie, addr))
break;
}
}
@@ -72,7 +72,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
if (!addr)
return -EINVAL;

- if (!consume_entry(cookie, addr, false))
+ if (!consume_entry(cookie, addr))
return -EINVAL;
}

@@ -114,7 +114,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
{
const void __user *fp = (const void __user *)regs->bp;

- if (!consume_entry(cookie, regs->ip, false))
+ if (!consume_entry(cookie, regs->ip))
return;

while (1) {
@@ -128,7 +128,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
break;
if (!frame.ret_addr)
break;
- if (!consume_entry(cookie, frame.ret_addr, false))
+ if (!consume_entry(cookie, frame.ret_addr))
break;
fp = frame.next_fp;
}
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index b7af8cc13eda..50e2df30b0aa 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -29,14 +29,11 @@ unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
* stack_trace_consume_fn - Callback for arch_stack_walk()
* @cookie: Caller supplied pointer handed back by arch_stack_walk()
* @addr: The stack entry address to consume
- * @reliable: True when the stack entry is reliable. Required by
- * some printk based consumers.
*
* Return: True, if the entry was consumed or skipped
* False, if there is no space left to store
*/
-typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
- bool reliable);
+typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);
/**
* arch_stack_walk - Architecture specific function to walk the stack
* @consume_entry: Callback which is invoked by the architecture code for
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index 946f44a9e86a..9f8117c7cfdd 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -78,8 +78,7 @@ struct stacktrace_cookie {
unsigned int len;
};

-static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
- bool reliable)
+static bool stack_trace_consume_entry(void *cookie, unsigned long addr)
{
struct stacktrace_cookie *c = cookie;

@@ -94,12 +93,11 @@ static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
return c->len < c->size;
}

-static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
- bool reliable)
+static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr)
{
if (in_sched_functions(addr))
return true;
- return stack_trace_consume_entry(cookie, addr, reliable);
+ return stack_trace_consume_entry(cookie, addr);
}

/**
--
2.20.1

2020-09-01 16:09:55

by Mark Rutland

Subject: Re: [PATCH v2 0/3] arm64: Convert to ARCH_STACKWALK

Hi,

On Wed, Aug 19, 2020 at 01:49:10PM +0100, Mark Brown wrote:
> This series updates the arm64 stacktrace code to use the newer and much
> simpler arch_stack_walk() interface, the main benefit being a single
> entry point to the arch code with no need for the arch code to worry
> about skipping frames. Along the way I noticed that the reliable
> parameter to the arch_stack_walk() callback appears to be redundant
> so there's also a patch here removing that from the existing code to
> simplify the interface.
>
> This is preparatory work for implementing reliable stack trace for
> arm64.

This all looks pretty nice!

Just to check, has the skipping logic been tested to work equivalently
to what we had before? By inspection I think it should, but since it
relies on function call boundaries it always strikes me as fragile.

If you could confirm that (e.g. with LKDTM perhaps?) that'd be great.
Assuming that looks right, for the series:

Acked-by: Mark Rutland <[email protected]>

Mark.

>
> v2: Rebase onto v5.9-rc1.
>
> Mark Brown (3):
> stacktrace: Remove reliable argument from arch_stack_walk() callback
> arm64: stacktrace: Make stack walk callback consistent with generic
> code
> arm64: stacktrace: Convert to ARCH_STACKWALK
>
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/stacktrace.h | 2 +-
> arch/arm64/kernel/perf_callchain.c | 6 +--
> arch/arm64/kernel/return_address.c | 8 +--
> arch/arm64/kernel/stacktrace.c | 84 ++++-------------------------
> arch/s390/kernel/stacktrace.c | 4 +-
> arch/x86/kernel/stacktrace.c | 10 ++--
> include/linux/stacktrace.h | 5 +-
> kernel/stacktrace.c | 8 ++-
> 9 files changed, 30 insertions(+), 98 deletions(-)
>
> --
> 2.20.1
>

2020-09-02 09:28:00

by Miroslav Benes

Subject: Re: [PATCH v2 1/3] stacktrace: Remove reliable argument from arch_stack_walk() callback

On Wed, 19 Aug 2020, Mark Brown wrote:

> Currently the callback passed to arch_stack_walk() takes an argument,
> reliable, which indicates if the stack entry is reliable; a comment
> says that this is used by some printk() consumers. However, in the current
> kernel none of the arch_stack_walk() implementations ever set this flag to
> true, and the only callback implementation we have is in the generic
> stacktrace code, which ignores the flag. This flag therefore appears
> redundant, so we can simplify and clarify things by removing it.
>
> Signed-off-by: Mark Brown <[email protected]>

Reviewed-by: Miroslav Benes <[email protected]>

M

2020-09-02 09:28:35

by Miroslav Benes

Subject: Re: [PATCH v2 2/3] arm64: stacktrace: Make stack walk callback consistent with generic code

On Wed, 19 Aug 2020, Mark Brown wrote:

> As with the generic arch_stack_walk() code, the arm64 stack walk code takes
> a callback that is called per stack frame. Currently the arm64 code always
> passes a struct stackframe to the callback while the generic code just passes
> the pc; however, none of the users ever reference anything in the struct
> other than the pc value. The arm64 code also uses a return type of int
> while the generic code uses bool, though in both cases the return value
> is boolean.
>
> In order to reduce code duplication when arm64 is converted to use
> arch_stack_walk() change the signature of the arm64 specific callback to
> match that of the generic code.
>
> Signed-off-by: Mark Brown <[email protected]>

Reviewed-by: Miroslav Benes <[email protected]>

M

2020-09-02 09:35:23

by Miroslav Benes

Subject: Re: [PATCH v2 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK

Hi,

it could be a silly question, but better to ask...

> + if (regs)
> + start_backtrace(&frame, regs->regs[29], regs->pc);
> + else
> + start_backtrace(&frame, thread_saved_fp(task),
> + thread_saved_pc(task));

Would this also work for task == current? Given that the original code had

> - start_backtrace(&frame,
> - (unsigned long)__builtin_frame_address(0),
> - (unsigned long)__save_stack_trace);

for the case, which seems correct (but I don't know much about arm64 arch
in the kernel).

Otherwise, I did not spot anything suspicious or wrong.

Regards
Miroslav

2020-09-02 18:53:46

by Mark Rutland

Subject: Re: [PATCH v2 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK

On Wed, Sep 02, 2020 at 11:32:13AM +0200, Miroslav Benes wrote:
> Hi,
>
> it could be a silly question, but better to ask...
>
> > + if (regs)
> > + start_backtrace(&frame, regs->regs[29], regs->pc);
> > + else
> > + start_backtrace(&frame, thread_saved_fp(task),
> > + thread_saved_pc(task));
>
> Would this also work for task == current? Given that the original code had
>
> > - start_backtrace(&frame,
> > - (unsigned long)__builtin_frame_address(0),
> > - (unsigned long)__save_stack_trace);

Oh whoops; I'm annoyed I didn't spot that.

With that gone this cannot work for (task == current && regs == NULL), as
we'll erroneously use stale values from the task struct.

It looks like the LKDTM tests only trigger cases with non-NULL regs, but
IIUC this should show up with show_stack(NULL, NULL, KERN_INFO), as
drivers/tty/sysrq.c does for other cpus.

Thanks,
Mark.

2020-09-02 19:04:38

by Mark Rutland

Subject: Re: [PATCH v2 0/3] arm64: Convert to ARCH_STACKWALK

On Wed, Sep 02, 2020 at 06:38:03PM +0100, Mark Brown wrote:
> On Tue, Sep 01, 2020 at 05:06:26PM +0100, Mark Rutland wrote:
>
> > Just to check, has the skipping logic been tested to work equivalently
> > to what we had before? By inspection I think it should, but since it
> > relies on function call boundaries it always strikes me as fragile.
>
> > If you could confirm that (e.g. with LKDTM perhaps?) that'd be great.
> > Assuming that looks right, for the series:
>
> I've tested this with LKDTM and otherwise and didn't spot any issues
> (and just did a bit of retesting) but it is a pretty manual process so
> it's possible I missed something.

It looks like the case Miroslav pointed out (a self-backtrace with NULL
regs) isn't triggered by LKDTM (since it always causes a backtrace from
an exception with non-NULL regs), but we do rely on that elsewhere in
the kernel.

It might be worth adding an LKDTM test to trigger that.

I'll wait for a respin that handles that -- please drop the ack for now.

Thanks,
Mark.

2020-09-02 19:29:05

by Mark Brown

Subject: Re: [PATCH v2 0/3] arm64: Convert to ARCH_STACKWALK

On Tue, Sep 01, 2020 at 05:06:26PM +0100, Mark Rutland wrote:

> Just to check, has the skipping logic been tested to work equivalently
> to what we had before? By inspection I think it should, but since it
> relies on function call boundaries it always strikes me as fragile.

> If you could confirm that (e.g. with LKDTM perhaps?) that'd be great.
> Assuming that looks right, for the series:

I've tested this with LKDTM and otherwise and didn't spot any issues
(and just did a bit of retesting) but it is a pretty manual process so
it's possible I missed something.



2020-09-10 17:41:18

by Mark Brown

Subject: Re: [PATCH v2 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK

On Wed, Sep 02, 2020 at 07:50:27PM +0100, Mark Rutland wrote:
> On Wed, Sep 02, 2020 at 11:32:13AM +0200, Miroslav Benes wrote:

> > > - start_backtrace(&frame,
> > > - (unsigned long)__builtin_frame_address(0),
> > > - (unsigned long)__save_stack_trace);

> Oh whoops; I'm annoyed I didn't spot that.

> With that gone this cannot work for (task == current && regs == NULL), as
> we'll erroneously use stale values from the task struct.

I remember somehow convincing myself at the time I originally did this
that doing the above was redundant with the new interface, but that was
quite some time ago and I can't reconstruct my reasoning any more; I'm
pretty sure I was just mistaken. I've added it back in, thanks for
spotting this.

> It looks like the LKDTM tests only trigger cases with non-NULL regs, but
> IIUC this should show up with show_stack(NULL, NULL, KERN_INFO), as
> drivers/tty/sysrq.c does for other cpus.

show_stack() doesn't go through this bit of the stacktrace code; it goes
through dump_backtrace() in traps.c, which uses the underlying arch-specific
unwinder directly and so is unaffected by arch_stack_walk().
Actually, now that I look at LKDTM, it ends up using show_stack() mostly
if not entirely, so my testing with it was not exercising this change
as much as might be expected (the modified code was being hit
by other things, like /proc/N/stack).
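The fix being discussed amounts to a three-way dispatch at the start of arch_stack_walk(): an exception context unwinds from regs, the current task must start from the live frame via __builtin_frame_address(0) (the values saved in the task struct are stale while the task is running), and only a blocked task can use its saved fp/pc. A rough user-space model, with all struct and function names as stand-ins for the kernel ones:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of where a backtrace should start; not the kernel API. */
enum start_kind { FROM_REGS, FROM_LIVE_FRAME, FROM_SAVED };

struct fake_regs { unsigned long fp, pc; };
struct fake_task { unsigned long saved_fp, saved_pc; bool is_current; };

static enum start_kind pick_start(const struct fake_regs *regs,
				  const struct fake_task *task)
{
	if (regs)
		return FROM_REGS;	/* exception context: use regs' fp/pc */
	if (task->is_current)
		return FROM_LIVE_FRAME;	/* __builtin_frame_address(0):
					   the saved values are stale */
	return FROM_SAVED;		/* blocked task: its switch-out state */
}
```

The stale-values case is exactly the (task == current && regs == NULL) hole identified above: without the middle branch, a self-backtrace with NULL regs would fall through to the saved values.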

