The unwinder reports the boot CPU idle task's stack on XEN PV as
unreliable, which affects at least live patching. There are two reasons
for this. First, the task does not follow the x86 convention that its
stack starts right below the saved pt_regs; that convention allows the
unwinder to easily detect the end of the stack and verify it. Second,
the startup_xen() function does not store a return address before
jumping to xen_start_kernel(), which confuses the unwinder.
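
For context, the regular x86-64 boot path already provides such a stack
top: initial_stack is defined in arch/x86/kernel/head_64.S along the
lines of the following (quoted from memory, so treat it as a sketch
rather than the exact source):

	SYM_DATA(initial_stack, .quad init_thread_union + THREAD_SIZE - SIZEOF_PTREGS)

i.e. a SIZEOF_PTREGS gap is deliberately left at the top of the stack
so that the unwinder can reliably detect where the stack ends.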
Fix both issues by moving the starting point of the initial stack in
startup_xen() and by storing the return address before the jump, which
is exactly what the call instruction does.
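
For readers less familiar with x86: the only difference between call
and jmp is that call first pushes the address of the next instruction.
A conceptual x86-64 expansion, purely for illustration (not part of the
patch, and ignoring the scratch register clobber):

	call	xen_start_kernel

	/* behaves roughly like: */
	lea	1f(%rip), %rax		/* return address */
	push	%rax
	jmp	xen_start_kernel
1:

With a return address stored on the stack, the unwinder no longer gets
confused at the bottom of the initial stack.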
Signed-off-by: Miroslav Benes <[email protected]>
---
arch/x86/xen/xen-head.S | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 1d0cee3163e4..edc776af0e0a 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -35,7 +35,11 @@ SYM_CODE_START(startup_xen)
rep __ASM_SIZE(stos)

mov %_ASM_SI, xen_start_info
- mov $init_thread_union+THREAD_SIZE, %_ASM_SP
+#ifdef CONFIG_X86_64
+ mov initial_stack(%rip), %_ASM_SP
+#else
+ mov pa(initial_stack), %_ASM_SP
+#endif

#ifdef CONFIG_X86_64
/* Set up %gs.
@@ -51,7 +55,7 @@ SYM_CODE_START(startup_xen)
wrmsr
#endif

- jmp xen_start_kernel
+ call xen_start_kernel
SYM_CODE_END(startup_xen)
__FINIT
#endif
--
2.25.1
On 19.03.2020 10:56, Miroslav Benes wrote:
> The unwinder reports the boot CPU idle task's stack on XEN PV as
> unreliable, which affects at least live patching. There are two reasons
> for this. First, the task does not follow the x86 convention that its
> stack starts right below the saved pt_regs; that convention allows the
> unwinder to easily detect the end of the stack and verify it. Second,
> the startup_xen() function does not store a return address before
> jumping to xen_start_kernel(), which confuses the unwinder.
>
> Fix both issues by moving the starting point of the initial stack in
> startup_xen() and by storing the return address before the jump, which
> is exactly what the call instruction does.
>
> Signed-off-by: Miroslav Benes <[email protected]>
> ---
> arch/x86/xen/xen-head.S | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> index 1d0cee3163e4..edc776af0e0a 100644
> --- a/arch/x86/xen/xen-head.S
> +++ b/arch/x86/xen/xen-head.S
> @@ -35,7 +35,11 @@ SYM_CODE_START(startup_xen)
> rep __ASM_SIZE(stos)
>
> mov %_ASM_SI, xen_start_info
> - mov $init_thread_union+THREAD_SIZE, %_ASM_SP
> +#ifdef CONFIG_X86_64
> + mov initial_stack(%rip), %_ASM_SP
> +#else
> + mov pa(initial_stack), %_ASM_SP
> +#endif
If you need to distinguish the two anyway, why not use %rsp and
%esp respectively?
Jan
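
For reference, the variant Jan suggests would presumably look like this
(an untested sketch, not something that was posted in the thread):

#ifdef CONFIG_X86_64
	mov	initial_stack(%rip), %rsp
#else
	mov	pa(initial_stack), %esp
#endif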
On Thu, 19 Mar 2020, Jan Beulich wrote:
> On 19.03.2020 10:56, Miroslav Benes wrote:
> > The unwinder reports the boot CPU idle task's stack on XEN PV as
> > unreliable, which affects at least live patching. There are two reasons
> > for this. First, the task does not follow the x86 convention that its
> > stack starts right below the saved pt_regs; that convention allows the
> > unwinder to easily detect the end of the stack and verify it. Second,
> > the startup_xen() function does not store a return address before
> > jumping to xen_start_kernel(), which confuses the unwinder.
> >
> > Fix both issues by moving the starting point of the initial stack in
> > startup_xen() and by storing the return address before the jump, which
> > is exactly what the call instruction does.
> >
> > Signed-off-by: Miroslav Benes <[email protected]>
> > ---
> > arch/x86/xen/xen-head.S | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> > index 1d0cee3163e4..edc776af0e0a 100644
> > --- a/arch/x86/xen/xen-head.S
> > +++ b/arch/x86/xen/xen-head.S
> > @@ -35,7 +35,11 @@ SYM_CODE_START(startup_xen)
> > rep __ASM_SIZE(stos)
> >
> > mov %_ASM_SI, xen_start_info
> > - mov $init_thread_union+THREAD_SIZE, %_ASM_SP
> > +#ifdef CONFIG_X86_64
> > + mov initial_stack(%rip), %_ASM_SP
> > +#else
> > + mov pa(initial_stack), %_ASM_SP
> > +#endif
>
> If you need to distinguish the two anyway, why not use %rsp and
> %esp respectively?
I could; I just preferred the unified form. I will change it if you
think that would be better.
Miroslav