From: Josh Poimboeuf
To: Jiri Kosina, Jessica Yu, Miroslav Benes
Cc: linux-kernel@vger.kernel.org, live-patching@vger.kernel.org, Vojtech Pavlik
Subject: [RFC PATCH v1.9 03/14] x86/asm/head: standardize the bottom of the stack for idle tasks
Date: Fri, 25 Mar 2016 14:34:50 -0500

Thanks to all the recent x86 entry code refactoring, most tasks' kernel
stacks start at the same offset right above their saved pt_regs,
regardless of which syscall was used to enter the kernel.  That creates
a useful convention which makes it straightforward to identify the
"bottom" of the stack, which helps stack walking code verify that a
stack is sane.

However, there are still a few types of tasks which don't yet follow
that convention:

1) CPU idle tasks, aka the "swapper" tasks

2) freshly forked tasks which haven't run yet and have TIF_FORK set

Make the idle tasks conform to the new stack bottom convention by
starting their stacks at a sizeof(pt_regs) offset from the end of the
stack page.
Signed-off-by: Josh Poimboeuf
---
 arch/x86/kernel/head_64.S | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index f2deae7..255fe54 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include "../entry/calling.h"

 #ifdef CONFIG_PARAVIRT
 #include
@@ -287,8 +288,9 @@ ENTRY(start_cpu)
	 * REX.W + FF /5 JMP m16:64 Jump far, absolute indirect,
	 * address given in m16:64.
	 */
-	movq	initial_code(%rip),%rax
-	pushq	$0		# fake return address to stop unwinder
+	call	1f		# put return address on stack for unwinder
+1:	xorq	%rbp, %rbp	# clear frame pointer
+	movq	initial_code(%rip), %rax
	pushq	$__KERNEL_CS	# set correct cs
	pushq	%rax		# target address in negative space
	lretq
@@ -316,7 +318,7 @@ ENDPROC(start_cpu0)
 GLOBAL(initial_gs)
	.quad	INIT_PER_CPU_VAR(irq_stack_union)
 GLOBAL(initial_stack)
-	.quad	init_thread_union+THREAD_SIZE-8
+	.quad	init_thread_union + THREAD_SIZE - SIZEOF_PTREGS
 __FINITDATA

bad_address:
-- 
2.4.3