From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, liran.alon@oracle.com, jmattson@google.com,
	aliguori@amazon.com, thomas.lendacky@amd.com, dwmw@amazon.co.uk,
	bp@alien8.de, x86@kernel.org, Tim Chen
Subject: [PATCH 4/8] kvm: vmx: Set IBPB when running a different VCPU
Date: Tue, 9 Jan 2018 13:03:06 +0100
Message-Id: <20180109120311.27565-5-pbonzini@redhat.com>
In-Reply-To: <20180109120311.27565-1-pbonzini@redhat.com>
References: <20180109120311.27565-1-pbonzini@redhat.com>

From: Tim Chen

Ensure an IBPB (Indirect Branch Prediction Barrier) before every VCPU
switch.
Signed-off-by: Tim Chen
Reviewed-by: Liran Alon
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/vmx.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ef603692aa98..49b4a2d61603 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2375,6 +2375,8 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
 		vmcs_load(vmx->loaded_vmcs->vmcs);
+		if (have_spec_ctrl)
+			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
 	}
 
 	if (!already_loaded) {
@@ -4029,6 +4031,13 @@ static void free_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
 	free_vmcs(loaded_vmcs->vmcs);
 	loaded_vmcs->vmcs = NULL;
 	WARN_ON(loaded_vmcs->shadow_vmcs != NULL);
+
+	/*
+	 * The VMCS could be recycled, causing a false negative in
+	 * vmx_vcpu_load; block speculative execution.
+	 */
+	if (have_spec_ctrl)
+		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
 }
 
 static void vmx_nested_free_vmcs02(struct vcpu_vmx *vmx)
-- 
1.8.3.1