From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Paolo Bonzini, Peter Zijlstra, Nicolai Stange, Thomas Gleixner, David Woodhouse
Subject: [PATCH 4.9 087/107] x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
Date: Tue, 14 Aug 2018 19:17:50 +0200
Message-Id: <20180814171526.119684380@linuxfoundation.org>
In-Reply-To: <20180814171520.883143803@linuxfoundation.org>
References: <20180814171520.883143803@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.9-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Nicolai Stange

commit 45b575c00d8e72d69d75dd8c112f044b7b01b069 upstream

Part of the L1TF mitigation for vmx includes flushing the L1D cache upon
VMENTRY.

L1D flushes are costly and two modes of operation are provided to users:
"always" and the more selective "conditional" mode.

If operating in the latter, the cache would get flushed only if a host
side code path considered unconfined had been traversed. "Unconfined" in
this context means that it might have pulled in sensitive data like user
data or kernel crypto keys.

The need for L1D flushes is tracked by means of the per-vcpu flag
l1tf_flush_l1d. KVM exit handlers considered unconfined set it. A
vmx_l1d_flush() subsequently invoked before the next VMENTER will conduct
an L1D flush based on its value and reset that flag again.

Currently, interrupts delivered "normally" while in root operation between
VMEXIT and VMENTER are not taken into account. Part of the reason is that
these don't leave any traces and thus, the vmx code is unable to tell
whether any such interrupt has happened.

As proposed by Paolo Bonzini, prepare for tracking all interrupts by
introducing a new per-cpu flag, "kvm_cpu_l1tf_flush_l1d". It will be in
strong analogy to the per-vcpu ->l1tf_flush_l1d. A later patch will make
interrupt handlers set it.

For the sake of cache locality, group kvm_cpu_l1tf_flush_l1d into x86's
per-cpu irq_cpustat_t as suggested by Peter Zijlstra.

Provide the helpers kvm_set_cpu_l1tf_flush_l1d(),
kvm_clear_cpu_l1tf_flush_l1d() and kvm_get_cpu_l1tf_flush_l1d(). Make them
trivial or non-existent for !CONFIG_KVM_INTEL, as appropriate.

Let vmx_l1d_flush() handle kvm_cpu_l1tf_flush_l1d in the same way as
l1tf_flush_l1d.

Suggested-by: Paolo Bonzini
Suggested-by: Peter Zijlstra
Signed-off-by: Nicolai Stange
Signed-off-by: Thomas Gleixner
Reviewed-by: Paolo Bonzini
Signed-off-by: David Woodhouse
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/hardirq.h |   23 +++++++++++++++++++++++
 arch/x86/kvm/vmx.c             |   17 +++++++++++++----
 2 files changed, 36 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -6,6 +6,9 @@
 
 typedef struct {
 	u16	__softirq_pending;
+#if IS_ENABLED(CONFIG_KVM_INTEL)
+	u8	kvm_cpu_l1tf_flush_l1d;
+#endif
 	unsigned int __nmi_count;	/* arch dependent */
 #ifdef CONFIG_X86_LOCAL_APIC
 	unsigned int apic_timer_irqs;	/* arch dependent */
@@ -60,4 +63,24 @@ extern u64 arch_irq_stat_cpu(unsigned in
 extern u64 arch_irq_stat(void);
 #define arch_irq_stat arch_irq_stat
 
+
+#if IS_ENABLED(CONFIG_KVM_INTEL)
+static inline void kvm_set_cpu_l1tf_flush_l1d(void)
+{
+	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
+}
+
+static inline void kvm_clear_cpu_l1tf_flush_l1d(void)
+{
+	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
+}
+
+static inline bool kvm_get_cpu_l1tf_flush_l1d(void)
+{
+	return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
+}
+#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
+static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
+#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
+
 #endif /* _ASM_X86_HARDIRQ_H */
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8667,14 +8667,23 @@ static void vmx_l1d_flush(struct kvm_vcp
 	 * 'always'
 	 */
 	if (static_branch_likely(&vmx_l1d_flush_cond)) {
-		bool flush_l1d = vcpu->arch.l1tf_flush_l1d;
+		bool flush_l1d;
 
 		/*
-		 * Clear the flush bit, it gets set again either from
-		 * vcpu_run() or from one of the unsafe VMEXIT
-		 * handlers.
+		 * Clear the per-vcpu flush bit, it gets set again
+		 * either from vcpu_run() or from one of the unsafe
+		 * VMEXIT handlers.
 		 */
+		flush_l1d = vcpu->arch.l1tf_flush_l1d;
 		vcpu->arch.l1tf_flush_l1d = false;
+
+		/*
+		 * Clear the per-cpu flush bit, it gets set again from
+		 * the interrupt handlers.
+		 */
+		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
+		kvm_clear_cpu_l1tf_flush_l1d();
+
 		if (!flush_l1d)
 			return;
 	}
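
For anyone following the logic outside the kernel tree: after this patch,
the "conditional" branch of vmx_l1d_flush() takes the per-vcpu flag, ORs in
the new per-cpu flag, clears both, and skips the flush only if neither was
set. Below is a minimal, self-contained user-space sketch of that decision.
It is illustrative only; the model_* names are hypothetical stand-ins for
the kvm_*_cpu_l1tf_flush_l1d() helpers and the ->l1tf_flush_l1d handling,
and no actual cache flush is modeled.

#include <stdbool.h>
#include <stdio.h>

/* Models irq_stat.kvm_cpu_l1tf_flush_l1d for one host CPU. */
static bool model_cpu_flag;
/* Models vcpu->arch.l1tf_flush_l1d for one vcpu. */
static bool model_vcpu_flag;

/* Hypothetical stand-in for kvm_set_cpu_l1tf_flush_l1d(). */
static void model_set_cpu_flag(void)
{
	model_cpu_flag = true;
}

/* Mirrors the "conditional" decision in vmx_l1d_flush() after this patch. */
static bool model_need_flush(void)
{
	bool flush_l1d;

	/* Consume the per-vcpu flag set by unconfined VMEXIT handlers. */
	flush_l1d = model_vcpu_flag;
	model_vcpu_flag = false;

	/* Consume the per-cpu flag, to be set by interrupt handlers. */
	flush_l1d |= model_cpu_flag;
	model_cpu_flag = false;

	return flush_l1d;
}

int main(void)
{
	printf("no flag set       -> flush? %d\n", model_need_flush());

	/* An interrupt between VMEXIT and VMENTER marks this CPU. */
	model_set_cpu_flag();
	printf("per-cpu flag set  -> flush? %d\n", model_need_flush());

	/* Both flags were consumed, so the next check is clean again. */
	printf("flags consumed    -> flush? %d\n", model_need_flush());
	return 0;
}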