From: Thomas Garnier
To: Martin Schwidefsky, Heiko Carstens, Dave Hansen, David Howells,
	Al Viro, Arnd Bergmann, Thomas Garnier, René Nyffenegger,
	Andrew Morton, Paul E. McKenney, Ingo Molnar, Thomas Gleixner,
	Oleg Nesterov, Pavel Tikhomirov, Stephen Smalley, Ingo Molnar,
	H. Peter Anvin, Andy Lutomirski, Paolo Bonzini, Rik van Riel,
	Kees Cook, Josh Poimboeuf, Borislav Petkov, Brian Gerst,
	Kirill A. Shutemov, Christian Borntraeger, Russell King,
	Vladimir Murzin, Will Deacon, Catalin Marinas, Mark Rutland,
	James Morse
Cc: linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-api@vger.kernel.org, x86@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	kernel-hardening@lists.openwall.com
Subject: [PATCH v4 2/4] x86/syscalls: Specific usage of verify_pre_usermode_state
Date: Wed, 22 Mar 2017 13:38:32 -0700
Message-Id: <20170322203834.67556-2-thgarnie@google.com>
X-Mailer: git-send-email 2.12.1.500.gab5fba24ee-goog
In-Reply-To: <20170322203834.67556-1-thgarnie@google.com>
References: <20170322203834.67556-1-thgarnie@google.com>

Implement the specific usage of verify_pre_usermode_state for
user-mode returns on x86. On the 64-bit syscall fast path, entry_64.S
compares the address limit against TASK_SIZE_MAX and branches to the
slow path on a mismatch; the slow path (prepare_exit_to_usermode)
calls the generic verify_pre_usermode_state() check. TASK_SIZE_MAX is
moved to pgtable_64_types.h so that it can be referenced from the
entry assembly.
---
Based on next-20170322
---
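Note for reviewers: the fast-path assembly added below performs,
before SYSRET, roughly the check sketched here in C. This is only an
illustrative sketch of what the generic verify_pre_usermode_state()
helper amounts to, not the code added by this series; it relies on the
existing get_fs()/set_fs(), segment_eq() and USER_DS uaccess
primitives, and on the fact that USER_DS on x86-64 is defined from
TASK_SIZE_MAX, which is why the assembly can compare TASK_addr_limit
against $TASK_SIZE_MAX directly.

#include <linux/uaccess.h>
#include <linux/bug.h>

/* Illustrative sketch only -- not the implementation in this series. */
static inline void verify_pre_usermode_state_sketch(void)
{
	/*
	 * Before returning to user mode, the address limit must be back
	 * at USER_DS; a stale KERNEL_DS (e.g. a missing set_fs(USER_DS)
	 * after a set_fs(KERNEL_DS) section) would let a later
	 * copy_*_user() reach kernel memory.
	 */
	if (unlikely(!segment_eq(get_fs(), USER_DS))) {
		WARN_ONCE(1, "address limit not restored before user-mode return");
		set_fs(USER_DS);
	}
}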
 arch/x86/Kconfig                        |  1 +
 arch/x86/entry/common.c                 |  3 +++
 arch/x86/entry/entry_64.S               |  8 ++++++++
 arch/x86/include/asm/pgtable_64_types.h | 11 +++++++++++
 arch/x86/include/asm/processor.h        | 11 -----------
 5 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0e1bdadc8222..f48c96b834b5 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -63,6 +63,7 @@ config X86
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index cdefcfdd9e63..76ef050255c9 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -183,6 +184,8 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 cached_flags;
 
+	verify_pre_usermode_state();
+
 	if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled()))
 		local_irq_disable();
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d2b2a2948ffe..c079b010205c 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -218,6 +218,14 @@ entry_SYSCALL_64_fastpath:
 	testl	$_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
 	jnz	1f
 
+	/*
+	 * If address limit is not based on user-mode, jump to slow path for
+	 * additional security checks.
+	 */
+	movq	$TASK_SIZE_MAX, %rcx
+	cmp	%rcx, TASK_addr_limit(%r11)
+	jnz	1f
+
 	LOCKDEP_SYS_EXIT
 	TRACE_IRQS_ON		/* user mode is traced as IRQs on */
 	movq	RIP(%rsp), %rcx
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 516593e66bd6..12fa851c7fa8 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -78,4 +78,15 @@ typedef struct { pteval_t pte; } pte_t;
 
 #define EARLY_DYNAMIC_PAGE_TABLES	64
 
+/*
+ * User space process size. 47bits minus one guard page. The guard
+ * page is necessary on Intel CPUs: if a SYSCALL instruction is at
+ * the highest possible canonical userspace address, then that
+ * syscall will enter the kernel with a non-canonical return
+ * address, and SYSRET will explode dangerously. We avoid this
+ * particular problem by preventing anything from being mapped
+ * at the maximum canonical address.
+ */
+#define TASK_SIZE_MAX	((_AC(1, UL) << 47) - PAGE_SIZE)
+
 #endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 3cada998a402..e80822582d3e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -825,17 +825,6 @@ static inline void spin_lock_prefetch(const void *x)
 #define KSTK_ESP(task)		(task_pt_regs(task)->sp)
 
 #else
-/*
- * User space process size. 47bits minus one guard page. The guard
- * page is necessary on Intel CPUs: if a SYSCALL instruction is at
- * the highest possible canonical userspace address, then that
- * syscall will enter the kernel with a non-canonical return
- * address, and SYSRET will explode dangerously. We avoid this
- * particular problem by preventing anything from being mapped
- * at the maximum canonical address.
- */
-#define TASK_SIZE_MAX	((1UL << 47) - PAGE_SIZE)
-
 /* This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
  */
-- 
2.12.1.500.gab5fba24ee-goog