From: David Matlack <dmatlack@google.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org, kvm@vger.kernel.org
Cc: pbonzini@redhat.com, mingo@redhat.com, luto@kernel.org, hpa@zytor.com,
    digitaleric@google.com, David Matlack <dmatlack@google.com>
Subject: [PATCH 0/1] KVM: x86: using the fpu in interrupt context with a guest's xcr0
Date: Fri, 11 Mar 2016 12:47:19 -0800
Message-Id: <1457729240-3846-1-git-send-email-dmatlack@google.com>

We've found that an interrupt handler that uses the fpu can kill a KVM
VM, if it runs under the following conditions:

 - the guest's xcr0 register is loaded on the cpu
 - the guest's fpu context is not loaded
 - the host is using eagerfpu

Note that the guest's xcr0 register and fpu context are not loaded as
part of the atomic world switch into "guest mode". They are loaded by
KVM while the cpu is still in "host mode".

Usage of the fpu in interrupt context is gated by irq_fpu_usable(). The
interrupt handler will look something like this:

        if (irq_fpu_usable()) {
                kernel_fpu_begin();

                [... code that uses the fpu ...]

                kernel_fpu_end();
        }

As long as the guest's fpu is not loaded and the host is using eagerfpu,
irq_fpu_usable() returns true (interrupted_kernel_fpu_idle() returns
true). The interrupt handler then proceeds to use the fpu with the
guest's xcr0 live.

kernel_fpu_begin() saves the current fpu context. If this uses
XSAVE[OPT], it may leave the xsave area in an undesirable state.
According to the SDM, during XSAVE bit i of XSTATE_BV is not modified
if bit i is 0 in xcr0. So it's possible that XSTATE_BV[i] == 1 and
xcr0[i] == 0 following an XSAVE.

kernel_fpu_end() restores the fpu context. Now if any bit i in XSTATE_BV
is 1 while xcr0[i] is 0, XRSTOR generates a #GP fault. (This #GP gets
trapped and turned into a SIGSEGV, which kills the VM.)

In guests that have access to the same CPU features as the host, this
bug is more likely to reproduce during VM boot, while the guest's xcr0
is still 1 (only the legacy x87 bit set). Once the guest's xcr0 is
indistinguishable from the host's, there is no issue.

I have not been able to trigger this bug on Linux 4.3, and I suspect
that is due to this commit from Linux 4.2:

    653f52c ("kvm,x86: load guest FPU context more eagerly")

With this commit, as long as the host is using eagerfpu, the guest's
fpu is always loaded just before the guest's xcr0 (vcpu->fpu_active is
always 1 in the following snippet):

    6569        if (vcpu->fpu_active)
    6570                kvm_load_guest_fpu(vcpu);
    6571        kvm_load_guest_xcr0(vcpu);

When the guest's fpu is loaded, irq_fpu_usable() returns false.

We've included our workaround for this bug, which applies to Linux 3.11.
It does not apply cleanly to HEAD since the fpu subsystem was refactored
in Linux 4.2. While the latest kernel does not look vulnerable, we may
want to apply a fix to the vulnerable stable kernels. An equally
effective solution may be to just backport 653f52c to stable.
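To make the review easier, here is roughly the shape of the workaround
(a minimal sketch with made-up names, not the patch itself; see patch
1/1 for the real interface). The idea is a per-cpu flag that KVM raises
while a guest's xcr0 may be live, consulted by irq_fpu_usable() before
it lets irq context touch the fpu:

        /*
         * Sketch only, as if applied to arch/x86/kernel/i387.c on
         * Linux 3.11. Names here are illustrative.
         */
        static DEFINE_PER_CPU(bool, guest_xcr0_loaded);

        /*
         * Hypothetical helper for KVM to call around its xcr0
         * switches in arch/x86/kvm/x86.c.
         */
        void set_guest_xcr0_loaded(bool loaded)
        {
                this_cpu_write(guest_xcr0_loaded, loaded);
        }
        EXPORT_SYMBOL_GPL(set_guest_xcr0_loaded);

        bool irq_fpu_usable(void)
        {
                /*
                 * With a guest xcr0 live, the XSAVE in
                 * kernel_fpu_begin() can leave XSTATE_BV[i] == 1
                 * where xcr0[i] == 0, and the XRSTOR in
                 * kernel_fpu_end() then takes a #GP. Refuse the fpu.
                 */
                if (this_cpu_read(guest_xcr0_loaded))
                        return false;

                /* the pre-existing checks */
                return !in_interrupt() ||
                        interrupted_user_mode() ||
                        interrupted_kernel_fpu_idle();
        }

Tracking this with a per-cpu flag keeps the irq_fpu_usable() check cheap
and keeps the fpu code from having to know anything about how KVM
manages xcr0; KVM just flips the flag at the points where it loads and
restores xcr0, which is why the diffstat at the end of this mail touches
asm/i387.h, i387.c and kvm/x86.c.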
Attached here is a stress module we used to reproduce the bug. It fires
IPIs at all online CPUs and uses the fpu in the IPI handler. We found
that running this module while booting a VM was an extremely effective
way to kill said VM :). For kernel developers who are working to make
eagerfpu the global default, this module might be a useful stress test,
especially when run in the background during other tests.

--- 8< ---
 irq_fpu_stress.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)
 create mode 100644 irq_fpu_stress.c

diff --git a/irq_fpu_stress.c b/irq_fpu_stress.c
new file mode 100644
index 0000000..faa6ba3
--- /dev/null
+++ b/irq_fpu_stress.c
@@ -0,0 +1,95 @@
+/*
+ * For as long as this module is loaded, it fires IPIs at all CPUs
+ * and tries to use the FPU in irq context on each of them.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <linux/cpumask.h>
+#include <linux/hardirq.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/workqueue.h>
+
+#include <asm/fpu-internal.h>
+#include <asm/i387.h>
+#include <asm/processor.h>
+
+MODULE_LICENSE("GPL");
+
+#define MODNAME "irq_fpu_stress"
+#undef pr_fmt
+#define pr_fmt(fmt) MODNAME": "fmt
+
+struct workqueue_struct *work_queue;
+struct work_struct work;
+
+struct {
+        atomic_t irq_fpu_usable;
+        atomic_t irq_fpu_unusable;
+        unsigned long num_tests;
+} stats;
+
+bool done;
+
+static void test_irq_fpu(void *info)
+{
+        BUG_ON(!in_interrupt());
+
+        if (irq_fpu_usable()) {
+                atomic_inc(&stats.irq_fpu_usable);
+
+                kernel_fpu_begin();
+                kernel_fpu_end();
+        } else {
+                atomic_inc(&stats.irq_fpu_unusable);
+        }
+}
+
+static void do_work(struct work_struct *w)
+{
+        pr_info("starting test\n");
+
+        stats.num_tests = 0;
+        atomic_set(&stats.irq_fpu_usable, 0);
+        atomic_set(&stats.irq_fpu_unusable, 0);
+
+        while (!ACCESS_ONCE(done)) {
+                preempt_disable();
+                smp_call_function_many(
+                        cpu_online_mask, test_irq_fpu, NULL, 1 /* wait */);
+                preempt_enable();
+
+                stats.num_tests++;
+
+                if (need_resched())
+                        schedule();
+        }
+
+        pr_info("finished test\n");
+}
+
+int init_module(void)
+{
+        work_queue = create_singlethread_workqueue(MODNAME);
+
+        INIT_WORK(&work, do_work);
+        queue_work(work_queue, &work);
+
+        return 0;
+}
+
+void cleanup_module(void)
+{
+        ACCESS_ONCE(done) = true;
+
+        flush_workqueue(work_queue);
+        destroy_workqueue(work_queue);
+
+        pr_info("num_tests %lu, irq_fpu_usable %d, irq_fpu_unusable %d\n",
+                stats.num_tests,
+                atomic_read(&stats.irq_fpu_usable),
+                atomic_read(&stats.irq_fpu_unusable));
+}
--- 8< ---

Eric Northup (1):
  KVM: don't allow irq_fpu_usable when the VCPU's XCR0 is loaded

 arch/x86/include/asm/i387.h |  3 +++
 arch/x86/kernel/i387.c      | 10 ++++++++--
 arch/x86/kvm/x86.c          |  4 ++++
 3 files changed, 15 insertions(+), 2 deletions(-)

-- 
2.7.0.rc3.207.g0ac5344