Reply-To: Sean Christopherson
Date: Wed, 21 Apr 2021 17:17:36 -0700
Message-Id: <20210422001736.3255735-1-seanjc@google.com>
Subject: [PATCH v2] KVM: SVM: Delay restoration of host MSR_TSC_AUX until return to userspace
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org

Use KVM's "user return MSRs" framework to defer restoring the host's
MSR_TSC_AUX until the CPU returns to userspace.
Add/improve comments to clarify why MSR_TSC_AUX is intercepted on both
RDMSR and WRMSR, and why it's safe for KVM to keep the guest's value
loaded even if KVM is scheduled out.

Signed-off-by: Sean Christopherson
---
v2: Rebased to kvm/queue (ish), commit 0e91d1992235 ("KVM: SVM: Allocate
    SEV command structures on local stack")

 arch/x86/kvm/svm/svm.c | 50 ++++++++++++++++++------------------------
 arch/x86/kvm/svm/svm.h |  7 ------
 2 files changed, 21 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cd8c333ed2dc..596361449f25 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -213,6 +213,15 @@ struct kvm_ldttss_desc {
 
 DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
 
+/*
+ * Only MSR_TSC_AUX is switched via the user return hook. EFER is switched via
+ * the VMCB, and the SYSCALL/SYSENTER MSRs are handled by VMLOAD/VMSAVE.
+ *
+ * RDTSCP and RDPID are not used in the kernel, specifically to allow KVM to
+ * defer the restoration of TSC_AUX until the CPU returns to userspace.
+ */
+#define TSC_AUX_URET_SLOT	0
+
 static const u32 msrpm_ranges[] = {0, 0xc0000000, 0xc0010000};
 
 #define NUM_MSR_MAPS ARRAY_SIZE(msrpm_ranges)
@@ -958,6 +967,9 @@ static __init int svm_hardware_setup(void)
 		kvm_tsc_scaling_ratio_frac_bits = 32;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_RDTSCP))
+		kvm_define_user_return_msr(TSC_AUX_URET_SLOT, MSR_TSC_AUX);
+
 	/* Check for pause filtering support */
 	if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
 		pause_filter_count = 0;
@@ -1423,19 +1435,10 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
-	unsigned int i;
 
 	if (svm->guest_state_loaded)
 		return;
 
-	/*
-	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
-	 * area (non-sev-es). Save ones that aren't so we can restore them
-	 * individually later.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
 	/*
 	 * Save additional host state that will be restored on VMEXIT (sev-es)
 	 * or subsequent vmload of host save area.
@@ -1454,29 +1457,15 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	/* This assumes that the kernel never uses MSR_TSC_AUX */
 	if (static_cpu_has(X86_FEATURE_RDTSCP))
-		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
+		kvm_set_user_return_msr(TSC_AUX_URET_SLOT, svm->tsc_aux, -1ull);
 
 	svm->guest_state_loaded = true;
 }
 
 static void svm_prepare_host_switch(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
-	unsigned int i;
-
-	if (!svm->guest_state_loaded)
-		return;
-
-	/*
-	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
-	 * area (non-sev-es). Restore the ones that weren't.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
-	svm->guest_state_loaded = false;
+	to_svm(vcpu)->guest_state_loaded = false;
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -2893,12 +2882,15 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			return 1;
 
 		/*
-		 * This is rare, so we update the MSR here instead of using
-		 * direct_access_msrs. Doing that would require a rdmsr in
-		 * svm_vcpu_put.
+		 * TSC_AUX is usually changed only during boot and never read
+		 * directly. Intercept TSC_AUX instead of exposing it to the
+		 * guest via direct_access_msrs, and switch it via user return.
 		 */
 		svm->tsc_aux = data;
-		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
+
+		preempt_disable();
+		kvm_set_user_return_msr(TSC_AUX_URET_SLOT, data, -1ull);
+		preempt_enable();
 		break;
 	case MSR_IA32_DEBUGCTLMSR:
 		if (!boot_cpu_has(X86_FEATURE_LBRV)) {
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 454da1c1d9b7..9dce6f290041 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -23,11 +23,6 @@
 
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
 
-static const u32 host_save_user_msrs[] = {
-	MSR_TSC_AUX,
-};
-#define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
-
 #define IOPM_SIZE PAGE_SIZE * 3
 #define MSRPM_SIZE PAGE_SIZE * 2
 
@@ -128,8 +123,6 @@ struct vcpu_svm {
 
 	u64 next_rip;
 
-	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
-
 	u64 spec_ctrl;
 	/*
 	 * Contains guest-controlled bits of VIRT_SPEC_CTRL, which will be
-- 
2.31.1.498.g6c1eba8ee3d-goog
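
For readers who aren't familiar with the framework the changelog refers
to, below is a rough, hypothetical sketch of how a "user return MSR"
works. It is not part of the patch and is heavily simplified: the real
implementation lives in arch/x86/kvm/x86.c (kvm_define_user_return_msr(),
kvm_set_user_return_msr() and their per-CPU bookkeeping, shared with VMX),
supports multiple slots and a write mask, and handles wrmsr faults. The
uret_sketch_* names and single-MSR layout are illustrative only; the
sketch just shows how loading a guest MSR value can be paired with a
user-return notifier that lazily puts the host value back.

/*
 * Hypothetical, simplified model of a single "user return MSR".
 * Loosely modeled on the KVM machinery this patch plugs into.
 */
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/user-return-notifier.h>
#include <asm/msr.h>

struct uret_msr_sketch {
	struct user_return_notifier urn;
	bool registered;
	u32 msr;		/* e.g. MSR_TSC_AUX */
	u64 host_value;		/* host value, restored lazily */
	u64 curr_value;		/* value currently loaded in hardware */
};

static DEFINE_PER_CPU(struct uret_msr_sketch, uret_msr_sketch);

/* Runs when the vCPU task returns to userspace: undo the guest's value. */
static void uret_sketch_on_user_return(struct user_return_notifier *urn)
{
	struct uret_msr_sketch *s = container_of(urn, struct uret_msr_sketch, urn);

	if (s->curr_value != s->host_value) {
		wrmsrl(s->msr, s->host_value);
		s->curr_value = s->host_value;
	}
	user_return_notifier_unregister(urn);
	s->registered = false;
}

/* Loosely analogous to kvm_define_user_return_msr(): record the MSR and
 * snapshot the host's value so it can be restored later. */
static void uret_sketch_define(u32 msr)
{
	struct uret_msr_sketch *s = this_cpu_ptr(&uret_msr_sketch);

	s->msr = msr;
	rdmsrl(msr, s->host_value);
	s->curr_value = s->host_value;
}

/* Loosely analogous to kvm_set_user_return_msr(): load the guest value now
 * and arm the notifier so the host value is written back on the next return
 * to userspace.  Must be called with preemption disabled, which is why the
 * patch wraps the call in preempt_disable()/preempt_enable() in svm_set_msr().
 */
static void uret_sketch_set(u64 value)
{
	struct uret_msr_sketch *s = this_cpu_ptr(&uret_msr_sketch);

	if (value != s->curr_value) {
		wrmsrl(s->msr, value);
		s->curr_value = value;
	}
	if (!s->registered) {
		s->urn.on_user_return = uret_sketch_on_user_return;
		user_return_notifier_register(&s->urn);
		s->registered = true;
	}
}

The point of the pattern is visible in the diff above:
svm_prepare_guest_switch() only has to load the guest's TSC_AUX, and
svm_prepare_host_switch() no longer restores anything, because the
user-return notifier, rather than the KVM_RUN path, writes the host value
back, and only when the vCPU task actually returns to userspace.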