From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Borislav Petkov, Konrad Rzeszutek Wilk
Subject: [PATCH 4.16 100/110] x86/bugs, KVM: Extend speculation control for VIRT_SPEC_CTRL
Date: Mon, 21 May 2018 23:12:37 +0200
Message-Id: <20180521210514.544607886@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180521210503.823249477@linuxfoundation.org>
References: <20180521210503.823249477@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner

commit ccbcd2674472a978b48c91c1fbfb66c0ff959f24 upstream

AMD is proposing a VIRT_SPEC_CTRL MSR to handle the Speculative Store
Bypass Disable via MSR_AMD64_LS_CFG so that guests do not have to care
about the bit position of the SSBD bit and thus facilitate migration.
Also, the sibling coordination on Family 17H CPUs can only be done on
the host.

Extend x86_spec_ctrl_set_guest() and x86_spec_ctrl_restore_host() with
an extra argument for the VIRT_SPEC_CTRL MSR. Hand in 0 from VMX and in
SVM add a new virt_spec_ctrl member to the CPU data structure which is
going to be used in later patches for the actual implementation.

Signed-off-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/include/asm/spec-ctrl.h |    9 ++++++---
 arch/x86/kernel/cpu/bugs.c       |   20 ++++++++++++++++++--
 arch/x86/kvm/svm.c               |   11 +++++++++--
 arch/x86/kvm/vmx.c               |    5 +++--
 4 files changed, 36 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -10,10 +10,13 @@
  * the guest has, while on VMEXIT we restore the host view. This
  * would be easier if SPEC_CTRL were architecturally maskable or
  * shadowable for guests but this is not (currently) the case.
- * Takes the guest view of SPEC_CTRL MSR as a parameter.
+ * Takes the guest view of SPEC_CTRL MSR as a parameter and also
+ * the guest's version of VIRT_SPEC_CTRL, if emulated.
  */
-extern void x86_spec_ctrl_set_guest(u64);
-extern void x86_spec_ctrl_restore_host(u64);
+extern void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl,
+                                    u64 guest_virt_spec_ctrl);
+extern void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl,
+                                       u64 guest_virt_spec_ctrl);
 
 /* AMD specific Speculative Store Bypass MSR data */
 extern u64 x86_amd_ls_cfg_base;
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -151,7 +151,15 @@ u64 x86_spec_ctrl_get_default(void)
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_get_default);
 
-void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl)
+/**
+ * x86_spec_ctrl_set_guest - Set speculation control registers for the guest
+ * @guest_spec_ctrl:         The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:    The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *                           (may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
 {
         u64 host = x86_spec_ctrl_base;
 
@@ -168,7 +176,15 @@ void x86_spec_ctrl_set_guest(u64 guest_s
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
 
-void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
+/**
+ * x86_spec_ctrl_restore_host - Restore host speculation control registers
+ * @guest_spec_ctrl:         The guest content of MSR_SPEC_CTRL
+ * @guest_virt_spec_ctrl:    The guest controlled bits of MSR_VIRT_SPEC_CTRL
+ *                           (may get translated to MSR_AMD64_LS_CFG bits)
+ *
+ * Avoids writing to the MSR if the content/bits are the same
+ */
+void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl)
 {
         u64 host = x86_spec_ctrl_base;
 
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -192,6 +192,12 @@ struct vcpu_svm {
         } host;
 
         u64 spec_ctrl;
+        /*
+         * Contains guest-controlled bits of VIRT_SPEC_CTRL, which will be
+         * translated into the appropriate L2_CFG bits on the host to
+         * perform speculative control.
+         */
+        u64 virt_spec_ctrl;
 
         u32 *msrpm;
 
@@ -1910,6 +1916,7 @@ static void svm_vcpu_reset(struct kvm_vc
 
         vcpu->arch.microcode_version = 0x01000065;
         svm->spec_ctrl = 0;
+        svm->virt_spec_ctrl = 0;
 
         if (!init_event) {
                 svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
@@ -5401,7 +5408,7 @@ static void svm_vcpu_run(struct kvm_vcpu
          * is no need to worry about the conditional branch over the wrmsr
          * being speculatively taken.
          */
-        x86_spec_ctrl_set_guest(svm->spec_ctrl);
+        x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
 
         asm volatile (
                 "push %%" _ASM_BP "; \n\t"
@@ -5525,7 +5532,7 @@ static void svm_vcpu_run(struct kvm_vcpu
         if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
                 svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-        x86_spec_ctrl_restore_host(svm->spec_ctrl);
+        x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
 
         reload_tss(vcpu);
 
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9465,9 +9465,10 @@ static void __noclone vmx_vcpu_run(struc
          * is no need to worry about the conditional branch over the wrmsr
          * being speculatively taken.
          */
-        x86_spec_ctrl_set_guest(vmx->spec_ctrl);
+        x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
 
         vmx->__launched = vmx->loaded_vmcs->launched;
+
         asm(
                 /* Store host registers */
                 "push %%" _ASM_DX "; push %%" _ASM_BP ";"
@@ -9603,7 +9604,7 @@ static void __noclone vmx_vcpu_run(struc
         if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
                 vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-        x86_spec_ctrl_restore_host(vmx->spec_ctrl);
+        x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
 
         /* Eliminate branch target predictions from guest mode */
         vmexit_fill_RSB();
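
For readers following the series, a minimal sketch (not part of the patch, and
not actual KVM code) of how the extended helpers are intended to bracket guest
entry once SVM wires up virt_spec_ctrl; struct vcpu_example and run_guest()
are made-up placeholders for the real vcpu structures and the VMRUN/VMLAUNCH
path, and VMX simply passes 0 for the second argument as in the hunks above.

/* Illustrative only -- simplified vcpu-run path, not real KVM code. */
struct vcpu_example {
        u64 spec_ctrl;          /* guest view of MSR_IA32_SPEC_CTRL */
        u64 virt_spec_ctrl;     /* guest view of VIRT_SPEC_CTRL (SVM only, 0 on VMX) */
};

static void example_vcpu_run(struct vcpu_example *v)
{
        /*
         * Load the guest speculation-control state before entering the
         * guest; the helper skips the MSR write if nothing changes.
         */
        x86_spec_ctrl_set_guest(v->spec_ctrl, v->virt_spec_ctrl);

        run_guest(v);           /* placeholder for the actual guest entry */

        /* Restore the host view after VMEXIT with the same pair of values. */
        x86_spec_ctrl_restore_host(v->spec_ctrl, v->virt_spec_ctrl);
}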