From: Andy Lutomirski
Date: Tue, 8 Nov 2016 08:01:39 -0800
Subject: Re: [PATCH 2/4] x86: Prepare vm86 tasks to handle User-Mode Instruction Prevention
To: Ricardo Neri
Cc: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", linux-kernel@vger.kernel.org, X86 ML, linux-doc@vger.kernel.org, Andrew Morton, Borislav Petkov, Brian Gerst, Chen Yucong, Chris Metcalf, Dave Hansen, Fenghua Yu, Huang Rui, Jiri Slaby, Jonathan Corbet, "Michael S. Tsirkin", Paul Gortmaker, Peter Zijlstra, "Ravi V. Shankar", Shuah Khan, Vlastimil Babka

On Mon, Nov 7, 2016 at 10:12 PM, Ricardo Neri wrote:
> User-Mode Instruction Prevention (UMIP) is a security feature in new Intel
> processors that causes a general protection exception if certain
> instructions are executed in user mode (CPL > 0).
>
> Unfortunately, some of the instructions that are protected by UMIP (i.e.,
> SGDT, SIDT and SMSW) are used by certain applications running in
> virtual-8086 mode (e.g., DOSEMU and Wine). Thus, UMIP needs to be disabled
> in virtual-8086 tasks for such applications to run correctly.
> However, unconditionally disabling UMIP for virtual-8086 tasks could be
> abused by malicious applications. Hence, UMIP can only be disabled for
> this particular kind of task if requested at boot time via
> vm86_disable_x86_umip.
>
> If disabling UMIP is allowed, it is done in the following two code paths:
> 1) entering virtual-8086 mode via a system call, and 2) task switch. For
> task switching, a new member is added to struct vm86 to keep track of the
> UMIP disabling selection; it is set in the vm86 system call as per the
> selection made at boot time.
>
> If supported by the CPU, UMIP is re-enabled as soon as we exit
> virtual-8086 mode via interrupt/exception or task switch. To determine
> that we are switching to a virtual-8086 mode task, we rely on the fact
> that virtual-8086 mode tasks keep a copy of the supervisor-mode stack
> pointer from before entering virtual-8086 mode.
>
> Since the X86_UMIP config option is not defined yet, this code remains
> dormant until that option is enabled in a subsequent patch. That patch
> will also introduce code to disable UMIP for virtual-8086 tasks via a
> kernel parameter.
>
> Cc: Andy Lutomirski
> Cc: Andrew Morton
> Cc: Borislav Petkov
> Cc: Brian Gerst
> Cc: Chen Yucong
> Cc: Chris Metcalf
> Cc: Dave Hansen
> Cc: Fenghua Yu
> Cc: Huang Rui
> Cc: Jiri Slaby
> Cc: Jonathan Corbet
> Cc: Michael S. Tsirkin
> Cc: Paul Gortmaker
> Cc: Peter Zijlstra
> Cc: Ravi V. Shankar
> Cc: Shuah Khan
> Cc: Vlastimil Babka
> Signed-off-by: Ricardo Neri
> ---
>  arch/x86/include/asm/vm86.h | 3 +++
>  arch/x86/kernel/process.c   | 10 ++++++++++
>  arch/x86/kernel/vm86_32.c   | 20 ++++++++++++++++++++
>  3 files changed, 33 insertions(+)
>
> diff --git a/arch/x86/include/asm/vm86.h b/arch/x86/include/asm/vm86.h
> index 1e491f3..bd14cbc 100644
> --- a/arch/x86/include/asm/vm86.h
> +++ b/arch/x86/include/asm/vm86.h
> @@ -40,6 +40,7 @@ struct vm86 {
>  	struct revectored_struct int_revectored;
>  	struct revectored_struct int21_revectored;
>  	struct vm86plus_info_struct vm86plus;
> +	bool disable_x86_umip;
>  };
>
>  #ifdef CONFIG_VM86
> @@ -47,6 +48,7 @@ struct vm86 {
>  void handle_vm86_fault(struct kernel_vm86_regs *, long);
>  int handle_vm86_trap(struct kernel_vm86_regs *, long, int);
>  void save_v86_state(struct kernel_vm86_regs *, int);
> +void __init vm86_disable_x86_umip(void);
>
>  struct task_struct;
>
> @@ -76,6 +78,7 @@ void release_vm86_irqs(struct task_struct *);
>
>  #define handle_vm86_fault(a, b)
>  #define release_vm86_irqs(a)
> +#define vm86_disable_x86_umip()
>
>  static inline int handle_vm86_trap(struct kernel_vm86_regs *a, long b, int c)
>  {
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 0888a87..32b7301 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -233,6 +233,16 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>  	 */
>  	memset(tss->io_bitmap, 0xff, prev->io_bitmap_max);
>  	}
> +
> +#if defined(CONFIG_VM86) && defined(CONFIG_X86_INTEL_UMIP)
> +	if (next->vm86 && next->vm86->saved_sp0 && next->vm86->disable_x86_umip)
> +		cr4_clear_bits(X86_CR4_UMIP);
> +	else {
> +		if (static_cpu_has(X86_FEATURE_UMIP))
> +			cr4_set_bits(X86_CR4_UMIP);
> +	}
> +#endif
> +

NAK.  If this code is going to exist, it needs to be deeply buried in
some unlikely if statement that already exists.
There's no good reason to penalize all context switches to support some nonsensical vm86 use case.
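The condition Andy objects to is evaluated on every context switch that reaches __switch_to_xtra(), even for tasks that have nothing to do with vm86. As a minimal user-space sketch of just that decision logic — struct vm86_state and next_task_disables_umip() are hypothetical stand-ins, not the kernel's struct vm86 or any real helper — the test the quoted hunk performs amounts to:

```c
/*
 * User-space model of the CR4.UMIP decision in the quoted
 * __switch_to_xtra() hunk. struct vm86_state stands in for the kernel's
 * struct vm86 and next_task_disables_umip() is a made-up helper;
 * neither name exists in the kernel.
 */
#include <stdbool.h>
#include <stddef.h>

struct vm86_state {
	unsigned long saved_sp0;   /* non-zero while the task is in vm86 mode */
	bool disable_x86_umip;     /* opt-out recorded by the vm86 syscall */
};

/*
 * True when CR4.UMIP should be cleared for the incoming task; in every
 * other case the hunk re-sets the bit if the CPU supports UMIP.
 */
bool next_task_disables_umip(const struct vm86_state *vm86)
{
	return vm86 != NULL && vm86->saved_sp0 != 0 && vm86->disable_x86_umip;
}
```

Only a task that has vm86 state, is currently inside virtual-8086 mode (saved_sp0 set), and opted out of UMIP takes the clear path; every other switch still pays for the test and the else branch, which is the per-switch cost the NAK is about.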