From: Peng Hao
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] kvm: x86: Improve virtual machine startup performance
Date: Tue, 1 Mar 2022 14:37:56 +0800
Message-Id: <20220301063756.16817-1-flyingpeng@tencent.com>

vcpu 0 repeatedly enters and exits SMM state during the startup phase, and kvm_init_mmu() is called on every one of these transitions.
Parts of the MMU initialization code do not need to be redone after the first initialization. Measured on my server, vcpu 0 calls kvm_init_mmu() more than 600 times while the virtual machine starts (due to SMM state switching); this patch saves about 36 microseconds in total.

Signed-off-by: Peng Hao
---
 arch/x86/kvm/mmu.h        |  2 +-
 arch/x86/kvm/mmu/mmu.c    | 39 ++++++++++++++++++++++-----------------
 arch/x86/kvm/svm/nested.c |  2 +-
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/x86.c        |  2 +-
 5 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 9ae6168d381e..d263a8ca6d5e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -67,7 +67,7 @@ static __always_inline u64 rsvd_bits(int s, int e)
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask,
                                 u64 access_mask);
 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
-void kvm_init_mmu(struct kvm_vcpu *vcpu);
+void kvm_init_mmu(struct kvm_vcpu *vcpu, bool init);
 void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
                              unsigned long cr4, u64 efer, gpa_t nested_cr3);
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 33794379949e..fedc71d9bee2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4738,7 +4738,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
         return role;
 }

-static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
+static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu, bool init)
 {
         struct kvm_mmu *context = &vcpu->arch.root_mmu;
         struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);
@@ -4749,14 +4749,17 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
                 return;

         context->mmu_role.as_u64 = new_role.as_u64;
-        context->page_fault = kvm_tdp_page_fault;
-        context->sync_page = nonpaging_sync_page;
-        context->invlpg = NULL;
-        context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
-        context->direct_map = true;
-        context->get_guest_pgd = get_cr3;
-        context->get_pdptr = kvm_pdptr_read;
-        context->inject_page_fault = kvm_inject_page_fault;
+
+        if (init) {
+                context->page_fault = kvm_tdp_page_fault;
+                context->sync_page = nonpaging_sync_page;
+                context->invlpg = NULL;
+                context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
+                context->direct_map = true;
+                context->get_guest_pgd = get_cr3;
+                context->get_pdptr = kvm_pdptr_read;
+                context->inject_page_fault = kvm_inject_page_fault;
+        }
         context->root_level = role_regs_to_root_level(&regs);

         if (!is_cr0_pg(context))
@@ -4924,16 +4927,18 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);

-static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
+static void init_kvm_softmmu(struct kvm_vcpu *vcpu, bool init)
 {
         struct kvm_mmu *context = &vcpu->arch.root_mmu;
         struct kvm_mmu_role_regs regs = vcpu_to_role_regs(vcpu);

         kvm_init_shadow_mmu(vcpu, &regs);

-        context->get_guest_pgd = get_cr3;
-        context->get_pdptr = kvm_pdptr_read;
-        context->inject_page_fault = kvm_inject_page_fault;
+        if (init) {
+                context->get_guest_pgd = get_cr3;
+                context->get_pdptr = kvm_pdptr_read;
+                context->inject_page_fault = kvm_inject_page_fault;
+        }
 }

 static union kvm_mmu_role
@@ -4994,14 +4999,14 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
         reset_guest_paging_metadata(vcpu, g_context);
 }

-void kvm_init_mmu(struct kvm_vcpu *vcpu)
+void kvm_init_mmu(struct kvm_vcpu *vcpu, bool init)
 {
         if (mmu_is_nested(vcpu))
                 init_kvm_nested_mmu(vcpu);
         else if (tdp_enabled)
-                init_kvm_tdp_mmu(vcpu);
+                init_kvm_tdp_mmu(vcpu, init);
         else
-                init_kvm_softmmu(vcpu);
+                init_kvm_softmmu(vcpu, init);
 }
 EXPORT_SYMBOL_GPL(kvm_init_mmu);

@@ -5054,7 +5059,7 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
 {
         kvm_mmu_unload(vcpu);
-        kvm_init_mmu(vcpu);
+        kvm_init_mmu(vcpu, false);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_reset_context);

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f8b7bc04b3e7..66d70a48e35e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -447,7 +447,7 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
         kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);

         /* Re-initialize the MMU, e.g. to pick up CR4 MMU role changes. */
-        kvm_init_mmu(vcpu);
+        kvm_init_mmu(vcpu, true);

         return 0;
 }
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b213ca966d41..28ce73da9150 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1101,7 +1101,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
         kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);

         /* Re-initialize the MMU, e.g. to pick up CR4 MMU role changes. */
-        kvm_init_mmu(vcpu);
+        kvm_init_mmu(vcpu, true);

         return 0;
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dc7eb5fddfd3..fb1e3e945b72 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10895,7 +10895,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
         vcpu_load(vcpu);
         kvm_set_tsc_khz(vcpu, max_tsc_khz);
         kvm_vcpu_reset(vcpu, false);
-        kvm_init_mmu(vcpu);
+        kvm_init_mmu(vcpu, true);
         vcpu_put(vcpu);

         return 0;
--
2.27.0