Subject: Re: [PATCH v4 2/5] KVM: x86: Add IBPB support
To: KarimAllah Ahmed, x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Ashok Raj, Asit Mallick, Dave Hansen, Arjan Van De Ven, Tim Chen,
 Linus Torvalds, Andrea Arcangeli, Andi Kleen, Thomas Gleixner,
 Dan Williams, Jun Nakajima, Andy Lutomirski, Greg KH, Peter Zijlstra,
 David Woodhouse
References: <1517404231-22406-1-git-send-email-karahmed@amazon.de>
 <1517404231-22406-3-git-send-email-karahmed@amazon.de>
From: Paolo Bonzini
Date: Wed, 31 Jan 2018 12:32:41 -0500
In-Reply-To: <1517404231-22406-3-git-send-email-karahmed@amazon.de>

Looking at SVM now...

On 31/01/2018 08:10, KarimAllah Ahmed wrote:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index f40d0da..89495cf 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -529,6 +529,7 @@ struct svm_cpu_data {
>  	struct kvm_ldttss_desc *tss_desc;
>  
>  	struct page *save_area;
> +	struct vmcb *current_vmcb;
>  };
>  
>  static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
> @@ -1703,11 +1704,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
>  	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
>  	kvm_vcpu_uninit(vcpu);
>  	kmem_cache_free(kvm_vcpu_cache, svm);
> +	/*
> +	 * The vmcb page can be recycled, causing a false negative in
> +	 * svm_vcpu_load(). So do a full IBPB now.
> +	 */
> +	indirect_branch_prediction_barrier();
>  }
>  
>  static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
>  	int i;
>  
>  	if (unlikely(cpu != vcpu->cpu)) {
> @@ -1736,6 +1743,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (static_cpu_has(X86_FEATURE_RDTSCP))
>  		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
>  
> +	if (sd->current_vmcb != svm->vmcb) {
> +		sd->current_vmcb = svm->vmcb;
> +		indirect_branch_prediction_barrier();
> +	}
>  	avic_vcpu_load(vcpu, cpu);
>  }
>  
> @@ -3684,6 +3695,22 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr);
>  		break;
> +	case MSR_IA32_PRED_CMD:
> +		if (!msr->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
> +			return 1;
> +
> +		if (data & ~PRED_CMD_IBPB)
> +			return 1;
> +
> +		if (!data)
> +			break;
> +
> +		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);

This is missing an

	{ .index = MSR_IA32_PRED_CMD, .always = false },

in the direct_access_msrs array.  Once you do this, nested is taken care
of automatically.  Likewise for MSR_IA32_SPEC_CTRL in patch 5.
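For concreteness, the entry slots into the existing table near the top of
svm.c.  The surrounding lines below are reproduced from memory, so treat
this as a sketch of the table rather than the exact hunk to apply:

	static const struct svm_direct_access_msrs {
		u32 index;	/* MSR number */
		bool always;	/* pass the MSR through unconditionally? */
	} direct_access_msrs[] = {
		{ .index = MSR_STAR,			.always = true  },
		/* ... existing entries elided in this sketch ... */
		{ .index = MSR_IA32_PRED_CMD,		.always = false },
		{ .index = MSR_IA32_SPEC_CTRL,		.always = false },	/* patch 5 */
		{ MSR_INVALID, false },
	};

If I read the code right, that entry also satisfies the
valid_msr_intercept() check in set_msr_interception(), and
init_msrpm_offsets() feeds the same table into the offsets that
nested_svm_vmrun_msrpm() merges, which is the "nested is taken care of
automatically" part.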
Paolo

> +		if (is_guest_mode(vcpu))
> +			break;
> +		set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
> +		break;
>  	case MSR_STAR:
>  		svm->vmcb->save.star = data;
>  		break;
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index d46a61b..96e672e 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2285,6 +2285,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
>  		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
>  		vmcs_load(vmx->loaded_vmcs->vmcs);
> +		indirect_branch_prediction_barrier();
>  	}
>  
>  	if (!already_loaded) {
> @@ -3342,6 +3343,25 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr_info);
>  		break;
> +	case MSR_IA32_PRED_CMD:
> +		if (!msr_info->host_initiated &&
> +		    !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
> +			return 1;
> +
> +		if (data & ~PRED_CMD_IBPB)
> +			return 1;
> +
> +		if (!data)
> +			break;
> +
> +		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
> +
> +		if (is_guest_mode(vcpu))
> +			break;
> +
> +		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
> +					      MSR_TYPE_W);
> +		break;
>  	case MSR_IA32_CR_PAT:
>  		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
>  			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> @@ -10046,7 +10066,8 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>  	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
>  
>  	/* This shortcut is ok because we support only x2APIC MSRs so far. */
> -	if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
> +	if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
> +	    !to_vmx(vcpu)->save_spec_ctrl_on_exit)
>  		return false;
>  
>  	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
> @@ -10079,6 +10100,14 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>  						MSR_TYPE_W);
>  		}
>  	}
> +
> +	if (to_vmx(vcpu)->save_spec_ctrl_on_exit) {
> +		nested_vmx_disable_intercept_for_msr(
> +				msr_bitmap_l1, msr_bitmap_l0,
> +				MSR_IA32_PRED_CMD,
> +				MSR_TYPE_R);
> +	}
> +
>  	kunmap(page);
>  	kvm_release_page_clean(page);
> 
> 