From: Andy Lutomirski
Date: Tue, 16 Jan 2018 14:37:08 -0800
Subject: Re: [PATCH 02/16] x86/entry/32: Enter the kernel via trampoline stack
To: Thomas Gleixner
Cc: Joerg Roedel, Ingo Molnar, "H. Peter Anvin", X86 ML, LKML,
 linux-mm@kvack.org, Linus Torvalds, Andy Lutomirski, Dave Hansen,
 Josh Poimboeuf, Juergen Gross, Peter Zijlstra, Borislav Petkov,
 Jiri Kosina, Boris Ostrovsky, Brian Gerst, David Laight,
 Denys Vlasenko, Eduardo Valentin, Greg KH, Will Deacon,
 "Liguori, Anthony", Daniel Gruss, Hugh Dickins, Kees Cook,
 Andrea Arcangeli, Waiman Long, Joerg Roedel
References: <1516120619-1159-1-git-send-email-joro@8bytes.org>
 <1516120619-1159-3-git-send-email-joro@8bytes.org>

On Tue, Jan 16, 2018 at 12:30 PM, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Joerg Roedel wrote:
>> @@ -89,13 +89,9 @@ static inline void refresh_sysenter_cs(struct thread_struct *thread)
>>  /* This is used when switching tasks or entering/exiting vm86 mode. */
>>  static inline void update_sp0(struct task_struct *task)
>>  {
>> -	/* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */
>> -#ifdef CONFIG_X86_32
>> -	load_sp0(task->thread.sp0);
>> -#else
>> +	/* sp0 always points to the entry trampoline stack, which is constant: */
>>  	if (static_cpu_has(X86_FEATURE_XENPV))
>>  		load_sp0(task_top_of_stack(task));
>> -#endif
>>  }
>>
>>  #endif /* _ASM_X86_SWITCH_TO_H */
>> diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
>> index 654229bac2fc..7270dd834f4b 100644
>> --- a/arch/x86/kernel/asm-offsets_32.c
>> +++ b/arch/x86/kernel/asm-offsets_32.c
>> @@ -47,9 +47,11 @@ void foo(void)
>>  	BLANK();
>>
>>  	/* Offset from the sysenter stack to tss.sp0 */
>> -	DEFINE(TSS_sysenter_stack, offsetof(struct cpu_entry_area, tss.x86_tss.sp0) -
>> +	DEFINE(TSS_sysenter_stack, offsetof(struct cpu_entry_area, tss.x86_tss.sp1) -
>>  	       offsetofend(struct cpu_entry_area, entry_stack_page.stack));

I was going to say that this is just too magical.  The convention is
that STRUCT_member refers to "member" of "STRUCT".  Here you're
encoding a more complicated calculation.  How about putting just the
needed offsets in asm-offsets and doing the actual calculation in the
asm code or a header?  (Rough sketch at the end of this mail.)

>>
>> +	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);

This belongs in asm-offsets.c.  Just move the asm-offsets_64.c version
there and call it a day.
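
For the TSS_sysenter_stack part, I mean something along these lines
(untested sketch; the CEA_* names are made up):

	/* in asm-offsets_32.c: export only the raw struct offsets */
	OFFSET(CEA_tss_sp1, cpu_entry_area, tss.x86_tss.sp1);
	DEFINE(CEA_entry_stack_end,
	       offsetofend(struct cpu_entry_area, entry_stack_page.stack));

and then in the asm code (or a header shared by it):

	/* distance from the top of the entry stack to tss.sp1 */
	#define TSS_sysenter_stack	(CEA_tss_sp1 - CEA_entry_stack_end)

That keeps asm-offsets limited to plain struct offsets and makes the
subtraction visible right where it's actually used.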
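
And for TSS_sp1, roughly (untested):

	/* in arch/x86/kernel/asm-offsets.c, shared by 32-bit and 64-bit */
	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);

with the duplicate definition in asm-offsets_64.c deleted.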