From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Kai Huang, Zhi Wang, chen.bo@intel.com
Subject: [PATCH v14 003/113] KVM: x86/vmx: Refactor KVM VMX module init/exit functions
Date: Sun, 28 May 2023 21:18:45 -0700
Message-Id: <4ef61085333e97e0ae48c3d7603042b9801e3608.1685333727.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

Currently, the KVM VMX module initialization and exit functions are a
single function each.  Refactor the KVM VMX module initialization
functions into a KVM common part and a VMX part so that the TDX-specific
part can be added cleanly.  Opportunistically refactor the module exit
function as well.

The current module initialization flow is:

  0.) check whether VMX is supported,
  1.) Hyper-V specific initialization,
  2.) system-wide x86-specific and vendor-specific initialization,
  3.) final VMX-specific system-wide initialization,
  4.) calculate the sizes of the VMX kvm structure and the VMX vcpu
      structure,
  5.) report those sizes to the KVM common layer and perform the KVM
      common initialization.

Refactor the KVM VMX module initialization function by introducing
vt_init(), a wrapper around vmx_init(), so that the VMX-specific logic
stays in vmx.c while main.c holds the code shared between VMX and TDX.

The KVM architecture-common layer allocates struct kvm with the size
reported by the architecture-specific code.  The KVM VMX module defines
its structure as struct vmx_kvm { struct kvm; VMX-specific members; }
and uses it as struct vmx_kvm.  The same applies to the vcpu structure.
TDX KVM patches will define TDX-specific kvm and vcpu structures.
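For illustration only, a minimal sketch of that embedding pattern, assuming
the kernel's container_of() helper; the member layout and the to_vmx_kvm()
helper below are hypothetical, not necessarily what this series ends up
defining:

  struct vmx_kvm {
          struct kvm kvm;                 /* KVM common part; kept first */
          /* VMX-specific members follow */
  };

  static inline struct vmx_kvm *to_vmx_kvm(struct kvm *kvm)
  {
          return container_of(kvm, struct vmx_kvm, kvm);
  }

The vcpu side works the same way: struct vcpu_vmx embeds struct kvm_vcpu,
and vt_init() reports the size and alignment of the outer structure to
kvm_init().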
The current module exit function is also a single function, a combination
of VMX-specific logic and common KVM logic.  Refactor it into VMX-specific
logic and KVM common logic as well.  This is just refactoring to keep the
VMX-specific logic in vmx.c separate from main.c.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 50 +++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c     | 54 +++++---------------------------------
 arch/x86/kvm/vmx/x86_ops.h | 13 ++++++++-
 3 files changed, 68 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index a59559ff140e..791ee271393d 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -165,3 +165,53 @@ struct kvm_x86_init_ops vt_init_ops __initdata = {
         .runtime_ops = &vt_x86_ops,
         .pmu_ops = &intel_pmu_ops,
 };
+
+static int __init vt_init(void)
+{
+        unsigned int vcpu_size, vcpu_align;
+        int r;
+
+        if (!kvm_is_vmx_supported())
+                return -EOPNOTSUPP;
+
+        /*
+         * Note, hv_init_evmcs() touches only VMX knobs, i.e. there's nothing
+         * to unwind if a later step fails.
+         */
+        hv_init_evmcs();
+
+        r = kvm_x86_vendor_init(&vt_init_ops);
+        if (r)
+                return r;
+
+        r = vmx_init();
+        if (r)
+                goto err_vmx_init;
+
+        /*
+         * Common KVM initialization _must_ come last, after this, /dev/kvm is
+         * exposed to userspace!
+         */
+        vcpu_size = sizeof(struct vcpu_vmx);
+        vcpu_align = __alignof__(struct vcpu_vmx);
+        r = kvm_init(vcpu_size, vcpu_align, THIS_MODULE);
+        if (r)
+                goto err_kvm_init;
+
+        return 0;
+
+err_kvm_init:
+        vmx_exit();
+err_vmx_init:
+        kvm_x86_vendor_exit();
+        return r;
+}
+module_init(vt_init);
+
+static void vt_exit(void)
+{
+        kvm_exit();
+        kvm_x86_vendor_exit();
+        vmx_exit();
+}
+module_exit(vt_exit);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ed7bf8fc55a8..9e4def64495b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -554,7 +554,7 @@ static int hv_enable_l2_tlb_flush(struct kvm_vcpu *vcpu)
         return 0;
 }
 
-static __init void hv_init_evmcs(void)
+__init void hv_init_evmcs(void)
 {
         int cpu;
 
@@ -590,7 +590,7 @@ static __init void hv_init_evmcs(void)
         }
 }
 
-static void hv_reset_evmcs(void)
+void hv_reset_evmcs(void)
 {
         struct hv_vp_assist_page *vp_ap;
 
@@ -614,10 +614,6 @@ static void hv_reset_evmcs(void)
         vp_ap->current_nested_vmcs = 0;
         vp_ap->enlighten_vmentry = 0;
 }
-
-#else /* IS_ENABLED(CONFIG_HYPERV) */
-static void hv_init_evmcs(void) {}
-static void hv_reset_evmcs(void) {}
 #endif /* IS_ENABLED(CONFIG_HYPERV) */
 
 /*
@@ -2715,7 +2711,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
         return 0;
 }
 
-static bool kvm_is_vmx_supported(void)
+bool kvm_is_vmx_supported(void)
 {
         int cpu = raw_smp_processor_id();
 
@@ -8381,7 +8377,7 @@ static void vmx_cleanup_l1d_flush(void)
         l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
 }
 
-static void __vmx_exit(void)
+void vmx_exit(void)
 {
         allow_smaller_maxphyaddr = false;
 
@@ -8392,32 +8388,10 @@ static void __vmx_exit(void)
         vmx_cleanup_l1d_flush();
 }
 
-static void vmx_exit(void)
-{
-        kvm_exit();
-        kvm_x86_vendor_exit();
-
-        __vmx_exit();
-}
-module_exit(vmx_exit);
-
-static int __init vmx_init(void)
+int __init vmx_init(void)
 {
         int r, cpu;
 
-        if (!kvm_is_vmx_supported())
-                return -EOPNOTSUPP;
-
-        /*
-         * Note, hv_init_evmcs() touches only VMX knobs, i.e. there's nothing
-         * to unwind if a later step fails.
-         */
-        hv_init_evmcs();
-
-        r = kvm_x86_vendor_init(&vt_init_ops);
-        if (r)
-                return r;
-
         /*
          * Must be called after common x86 init so enable_ept is properly set
          * up. Hand the parameter mitigation value in which was stored in
@@ -8427,7 +8401,7 @@ static int __init vmx_init(void)
          */
         r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
         if (r)
-                goto err_l1d_flush;
+                return r;
 
         vmx_setup_fb_clear_ctrl();
 
@@ -8448,21 +8422,5 @@ static int __init vmx_init(void)
         if (!enable_ept)
                 allow_smaller_maxphyaddr = true;
 
-        /*
-         * Common KVM initialization _must_ come last, after this, /dev/kvm is
-         * exposed to userspace!
-         */
-        r = kvm_init(sizeof(struct vcpu_vmx), __alignof__(struct vcpu_vmx),
-                     THIS_MODULE);
-        if (r)
-                goto err_kvm_init;
-
         return 0;
-
-err_kvm_init:
-        __vmx_exit();
-err_l1d_flush:
-        kvm_x86_vendor_exit();
-        return r;
 }
-module_init(vmx_init);
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index e9ec4d259ff5..051b5c4b5c2f 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -8,11 +8,22 @@
 
 #include "x86.h"
 
-__init int vmx_hardware_setup(void);
+#if IS_ENABLED(CONFIG_HYPERV)
+__init void hv_init_evmcs(void);
+void hv_reset_evmcs(void);
+#else /* IS_ENABLED(CONFIG_HYPERV) */
+static inline void hv_init_evmcs(void) {}
+static inline void hv_reset_evmcs(void) {}
+#endif /* IS_ENABLED(CONFIG_HYPERV) */
+
+bool kvm_is_vmx_supported(void);
+int __init vmx_init(void);
+void vmx_exit(void);
 
 extern struct kvm_x86_ops vt_x86_ops __initdata;
 extern struct kvm_x86_init_ops vt_init_ops __initdata;
 
+__init int vmx_hardware_setup(void);
 void vmx_hardware_unsetup(void);
 int vmx_check_processor_compat(void);
 int vmx_hardware_enable(void);
-- 
2.25.1
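To illustrate the point of the split, here is a minimal sketch, not part of
this patch, of how a TDX-specific step could later slot into vt_init() in
main.c without touching vmx.c; the tdx_init()/tdx_exit() hooks and their
placement are assumptions for illustration only, and the actual TDX patches
in this series define their own interfaces:

  /*
   * Illustrative only: tdx_init()/tdx_exit() are hypothetical hooks.
   * The TDX leg would sit between the VMX setup and the final, common
   * kvm_init(), and unwind in reverse order on failure.  A TDX-aware
   * version would also report the larger of the VMX and TDX vcpu
   * sizes/alignments to kvm_init() instead of the VMX ones alone.
   */
  static int __init vt_init(void)
  {
          int r;

          if (!kvm_is_vmx_supported())
                  return -EOPNOTSUPP;

          hv_init_evmcs();

          r = kvm_x86_vendor_init(&vt_init_ops);
          if (r)
                  return r;

          r = vmx_init();
          if (r)
                  goto err_vmx_init;

          r = tdx_init();                         /* hypothetical hook */
          if (r)
                  goto err_tdx_init;

          /* Common KVM init still comes last; /dev/kvm appears here. */
          r = kvm_init(sizeof(struct vcpu_vmx), __alignof__(struct vcpu_vmx),
                       THIS_MODULE);
          if (r)
                  goto err_kvm_init;

          return 0;

  err_kvm_init:
          tdx_exit();                             /* hypothetical hook */
  err_tdx_init:
          vmx_exit();
  err_vmx_init:
          kvm_x86_vendor_exit();
          return r;
  }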