Hi,
This fixes up wchan which is various degrees of broken across the
architectures.
Patch 4 fixes wchan for x86, which has been returning 0 for the past many
releases.
Patch 5 fixes the fundamental race against scheduling.
Patch 6 deletes a lot and makes STACKTRACE unconditional.
Patch 7 fixes up a few STACKTRACE arch oddities.
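For patch 5 specifically, the shape of the fix is roughly the below: a generic get_wchan() wrapper that only walks the stack while it can keep the task blocked, deferring the actual walk to a per-arch helper (here called __get_wchan()). This is a hedged sketch; the exact checks in the queued patch may differ.

unsigned long get_wchan(struct task_struct *p)
{
	unsigned long ip = 0;
	unsigned int state;

	if (!p || p == current)
		return 0;

	/* Only report a wchan if the task is blocked and stays blocked. */
	raw_spin_lock_irq(&p->pi_lock);
	state = READ_ONCE(p->__state);
	smp_rmb(); /* pairs with the barriers in try_to_wake_up() */
	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
		ip = __get_wchan(p);
	raw_spin_unlock_irq(&p->pi_lock);

	return ip;
}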
0day says all builds are good, so it must be perfect :-) I'm planning on
queueing up at least the first 5 patches, but I'm hoping the last two
can go in too.
Also available here:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wchan
---
arch/alpha/include/asm/processor.h | 2 +-
arch/alpha/kernel/process.c | 5 ++-
arch/arc/include/asm/processor.h | 2 --
arch/arc/kernel/stacktrace.c | 19 +---------
arch/arm/include/asm/processor.h | 2 --
arch/arm/kernel/process.c | 24 -------------
arch/arm64/include/asm/processor.h | 2 --
arch/arm64/kernel/process.c | 28 ---------------
arch/csky/include/asm/processor.h | 2 --
arch/csky/kernel/stacktrace.c | 26 ++++----------
arch/h8300/include/asm/processor.h | 2 +-
arch/h8300/kernel/process.c | 5 +--
arch/hexagon/include/asm/processor.h | 3 --
arch/hexagon/kernel/process.c | 28 ---------------
arch/ia64/include/asm/processor.h | 3 --
arch/ia64/kernel/process.c | 31 -----------------
arch/m68k/include/asm/processor.h | 2 +-
arch/m68k/kernel/process.c | 4 +--
arch/microblaze/include/asm/processor.h | 2 --
arch/microblaze/kernel/process.c | 6 ----
arch/mips/include/asm/processor.h | 2 --
arch/mips/kernel/process.c | 31 +----------------
arch/mips/kernel/stacktrace.c | 27 ++++++++------
arch/nds32/include/asm/processor.h | 2 --
arch/nds32/kernel/process.c | 28 ---------------
arch/nds32/kernel/stacktrace.c | 21 +++++------
arch/nios2/include/asm/processor.h | 2 +-
arch/nios2/kernel/process.c | 5 +--
arch/openrisc/include/asm/processor.h | 1 -
arch/openrisc/kernel/process.c | 6 ----
arch/parisc/include/asm/processor.h | 2 --
arch/parisc/kernel/process.c | 27 --------------
arch/powerpc/include/asm/processor.h | 2 --
arch/powerpc/kernel/process.c | 40 ---------------------
arch/riscv/include/asm/processor.h | 3 --
arch/riscv/kernel/stacktrace.c | 23 ------------
arch/s390/include/asm/processor.h | 1 -
arch/s390/kernel/process.c | 29 ---------------
arch/sh/include/asm/processor_32.h | 2 --
arch/sh/kernel/process_32.c | 22 ------------
arch/sparc/include/asm/processor_32.h | 2 +-
arch/sparc/include/asm/processor_64.h | 2 --
arch/sparc/kernel/process_32.c | 5 +--
arch/sparc/kernel/process_64.c | 31 -----------------
arch/um/include/asm/processor-generic.h | 1 -
arch/um/kernel/process.c | 35 -------------------
arch/x86/include/asm/processor.h | 2 --
arch/x86/kernel/process.c | 62 ---------------------------------
arch/xtensa/include/asm/processor.h | 2 --
arch/xtensa/kernel/process.c | 32 -----------------
fs/proc/array.c | 7 ++--
fs/proc/base.c | 19 +++++-----
include/linux/sched.h | 1 +
kernel/sched/core.c | 34 ++++++++++++++++++
lib/Kconfig.debug | 7 +---
scripts/leaking_addresses.pl | 3 +-
56 files changed, 97 insertions(+), 622 deletions(-)
On Thu, Oct 14, 2021 at 01:02:34PM +0100, Russell King (Oracle) wrote:
> On Fri, Oct 08, 2021 at 01:15:27PM +0200, Peter Zijlstra wrote:
> > Hi,
> >
> > This fixes up wchan which is various degrees of broken across the
> > architectures.
> >
> > Patch 4 fixes wchan for x86, which has been returning 0 for the past many
> > releases.
> >
> > Patch 5 fixes the fundamental race against scheduling.
> >
> > Patch 6 deletes a lot and makes STACKTRACE unconditional
> >
> > patch 7 fixes up a few STACKTRACE arch oddities
> >
> > 0day says all builds are good, so it must be perfect :-) I'm planning on
> > queueing up at least the first 5 patches, but I'm hoping the last two patches
> > can be too.
> >
> > Also available here:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wchan
>
> These patches introduce a regression on ARM. Whereas before, I have
> /proc/*/wchan populated with non-zero values, with these patches they
> _all_ contain "0":
>
> root@clearfog21:~# cat /proc/*/wchan
> 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000root@clearfog21:~#
>
> I'll try to investigate what is going on later today.
What is going on here is that the ARM stacktrace code refuses to trace
non-current tasks in an SMP environment due to the racy nature of doing
so while those tasks may be running.
When walking the stack with frame pointers, we (a rough sketch follows
the list):
- validate that the frame pointer is between the stack pointer and the
top of stack defined by that stack pointer;
- load the next stack pointer and next frame pointer from the stack.
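A minimal sketch of one such step, loosely modelled on an APCS-style
frame record (offsets and names are illustrative, not the exact arch/arm
unwinder):

struct frame_sketch {
	unsigned long fp, sp, pc;
};

static int unwind_one_frame(struct frame_sketch *frame, unsigned long stack_top)
{
	unsigned long fp = frame->fp;

	/* the frame record must lie within the stack implied by sp */
	if (fp < frame->sp + 12 || fp > stack_top - 4)
		return -EINVAL;

	/* load the next fp/sp/pc from the frame record on the stack */
	frame->fp = *(unsigned long *)(fp - 12);
	frame->sp = *(unsigned long *)(fp - 8);
	frame->pc = *(unsigned long *)(fp - 4);

	return 0;
}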
The reason this is unsafe when the task is not blocked is that the stack
can change at any moment, which can cause the value read as a stack
pointer to be wildly different. If the frame pointer value read alongside
it still roughly agrees (and so passes the bounds check), we can end up
reading any part of memory, which would be an information leak.
The table-based unwinding is much more complex, being essentially a set
of instructions to the unwinder code about which values to read from
the stack into a set of pseudo-registers, corrections to the stack
pointer, or transfers from the pseudo-registers. I haven't analysed
this code enough to really know the implications of what could be
possible if the values on the stack change, because the task is running
on another CPU, while this code is processing them (it's not my code!).
There is an attempt to bounds-limit the virtual stack pointer after each
unwind instruction is processed, to catch the unwinder doing anything
silly, so it may be safe insofar as it will fail should it encounter
anything "stupid".
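That bounds-limiting amounts to something like the following check after
each unwind instruction (a hedged sketch; the names are illustrative and
the real arch/arm/kernel/unwind.c is considerably more involved):

struct unwind_ctrl_sketch {
	unsigned long vrs[16];		/* pseudo-registers; vrs[13] is the vsp */
	unsigned long sp_low, sp_high;	/* valid stack range for the task */
};

static int check_vsp(const struct unwind_ctrl_sketch *ctrl)
{
	/* bail out if an unwind instruction moved the vsp off the stack */
	if (ctrl->vrs[13] < ctrl->sp_low || ctrl->vrs[13] >= ctrl->sp_high)
		return -EFAULT;
	return 0;
}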
get_wchan(), however, is a different case: we know for certain that the
task is blocked, so it won't be running on another CPU, and with your
patch 4 we have this guarantee. That is not true of all callers of the
stacktracing code, though, so I don't see how we can sanely switch to
using the stacktracing code for this.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
On Fri, Oct 08, 2021 at 01:15:27PM +0200, Peter Zijlstra wrote:
> Hi,
>
> This fixes up wchan which is various degrees of broken across the
> architectures.
>
> Patch 4 fixes wchan for x86, which has been returning 0 for the past many
> releases.
>
> Patch 5 fixes the fundamental race against scheduling.
>
> Patch 6 deletes a lot and makes STACKTRACE unconditional
>
> patch 7 fixes up a few STACKTRACE arch oddities
>
> 0day says all builds are good, so it must be perfect :-) I'm planning on
> queueing up at least the first 5 patches, but I'm hoping the last two patches
> can be too.
>
> Also available here:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wchan
These patches introduce a regression on ARM. Whereas before, I have
/proc/*/wchan populated with non-zero values, with these patches they
_all_ contain "0":
root@clearfog21:~# cat /proc/*/wchan
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000root@clearfog21:~#
I'll try to investigate what is going on later today.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
On Thu, Oct 14, 2021 at 02:38:19PM +0100, Russell King (Oracle) wrote:
> What is going on here is that the ARM stacktrace code refuses to trace
> non-current tasks in a SMP environment due to the racy nature of doing
> so if the non-current tasks are running.
>
> When walking the stack with frame pointers, we:
>
> - validate that the frame pointer is between the stack pointer and the
> top of stack defined by that stack pointer.
> - we then load the next stack pointer and next frame pointer from the
> stack.
>
> The reason this is unsafe when the task is not blocked is the stack can
> change at any moment, which can cause the value read as a stack pointer
> to be wildly different. If the read frame pointer value is roughly in
> agreement, we can end up reading any part of memory, which would be an
> information leak.
It would be a good idea to add some guardrails to prevent that
regardless. If there's stack corruption for any reason, the unwinder
shouldn't make things worse.
On x86 the unwinder relies on the caller to ensure the task is blocked
(or current). If the caller doesn't do that, they might get garbage,
and they get to keep the pieces.
But an important part of that is that the unwinder has guardrails to
ensure it handles stack corruption gracefully by never accessing out of
bounds of the stack.
When multiple stacks are involved in a kernel execution path (task, irq,
exception, etc.), the stacks link to each other (e.g., the last word on
the irq stack might point to the task stack). Also the irq/exception
stack addresses are stored in percpu variables, and the task stack is in
the task struct. So the unwinder can easily make sure it's in-bounds.
See get_stack_info() in arch/x86/kernel/dumpstack_64.c.
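A simplified sketch of that guardrail: once the unwinder knows which
stack it is on, every access is checked against that stack's bounds
before it is dereferenced (field and function names below are
illustrative, not the exact x86 definitions):

struct stack_bounds {
	unsigned long begin;
	unsigned long end;
};

static bool access_is_on_stack(const struct stack_bounds *info,
			       unsigned long addr, size_t len)
{
	/* reject zero-length, wrapping, and out-of-bounds accesses */
	return addr >= info->begin &&
	       addr + len > addr &&
	       addr + len <= info->end;
}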
--
Josh