From: Jim Mattson
Date: Mon, 8 Jan 2018 11:36:51 -0800
Subject: Re: [PATCH 4/7] kvm: vmx: Set IBPB when running a different VCPU
To: Paolo Bonzini
Cc: LKML, kvm list, aliguori@amazon.com, Tom Lendacky, dwmw@amazon.co.uk, bp@alien8.de, Tim Chen
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Shouldn't there be an IBPB on *any* context switch away from a VCPU
thread, even if it is to a non-VCPU thread?

On Mon, Jan 8, 2018 at 10:08 AM, Paolo Bonzini wrote:
> From: Tim Chen
>
> Ensure an IBPB (Indirect branch prediction barrier) before every VCPU
> switch.
>
> Signed-off-by: Tim Chen
> Signed-off-by: Paolo Bonzini
> ---
>  arch/x86/kvm/vmx.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d00bcad7336e..bf127c570675 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2375,6 +2375,8 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
>  		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
>  		vmcs_load(vmx->loaded_vmcs->vmcs);
> +		if (have_spec_ctrl)
> +			wrmsrl(MSR_IA32_PRED_CMD, FEATURE_SET_IBPB);
>  	}
>
>  	if (!already_loaded) {
> @@ -4029,6 +4031,13 @@ static void free_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
>  	free_vmcs(loaded_vmcs->vmcs);
>  	loaded_vmcs->vmcs = NULL;
>  	WARN_ON(loaded_vmcs->shadow_vmcs != NULL);
> +
> +	/*
> +	 * The VMCS could be recycled, causing a false negative in
> +	 * vmx_vcpu_load; block speculative execution.
> +	 */
> +	if (have_spec_ctrl)
> +		wrmsrl(MSR_IA32_PRED_CMD, FEATURE_SET_IBPB);
>  }
>
>  static void vmx_nested_free_vmcs02(struct vcpu_vmx *vmx)
> --
> 1.8.3.1