From: "Maciej S. Szmigiero"
To: Paolo Bonzini, Vitaly Kuznetsov
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose,
	Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, David Hildenbrand,
	Cornelia Huck, Claudio Imbrenda, Joerg Roedel,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/8] KVM: x86: Cache total page count to avoid traversing the memslot array
Date: Tue, 13 Apr 2021 16:10:07 +0200
Message-Id: <7a3e21252b29f6703efee5d68ed543376d65cd9a.1618322003.git.maciej.szmigiero@oracle.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: "Maciej S. Szmigiero"

There is no point in recalculating from scratch the total number of
pages in all memslots each time a memslot is created or deleted.
Just cache the value and update it accordingly on each such operation
so the code doesn't need to traverse the whole memslot array each time.

Signed-off-by: Maciej S. Szmigiero
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 24 ------------------------
 arch/x86/kvm/x86.c              | 18 +++++++++++++++---
 3 files changed, 16 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 44f893043a3c..24356d3f7d01 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -939,6 +939,7 @@ enum kvm_irqchip_mode {
 #define APICV_INHIBIT_REASON_X2APIC 5

 struct kvm_arch {
+	unsigned long n_memslots_pages;
 	unsigned long n_used_mmu_pages;
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
@@ -1426,7 +1427,6 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index efb41f31e80a..762314b04a39 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5869,30 +5869,6 @@ int kvm_mmu_module_init(void)
 	return ret;
 }

-/*
- * Calculate mmu pages needed for kvm.
- */
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
-{
-	unsigned long nr_mmu_pages;
-	unsigned long nr_pages = 0;
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
-	int i;
-
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-
-		kvm_for_each_memslot(memslot, slots)
-			nr_pages += memslot->npages;
-	}
-
-	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
-	nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
-
-	return nr_mmu_pages;
-}
-
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 16fb39503296..c73e5c05be6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10959,9 +10959,21 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	if (!kvm->arch.n_requested_mmu_pages)
-		kvm_mmu_change_mmu_pages(kvm,
-				kvm_mmu_calculate_default_mmu_pages(kvm));
+	if (change == KVM_MR_CREATE)
+		kvm->arch.n_memslots_pages += new->npages;
+	else if (change == KVM_MR_DELETE) {
+		WARN_ON(kvm->arch.n_memslots_pages < old->npages);
+		kvm->arch.n_memslots_pages -= old->npages;
+	}
+
+	if (!kvm->arch.n_requested_mmu_pages) {
+		unsigned long nr_mmu_pages;
+
+		nr_mmu_pages = kvm->arch.n_memslots_pages *
+			       KVM_PERMILLE_MMU_PAGES / 1000;
+		nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
+		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+	}

 	/*
 	 * FIXME: const-ify all uses of struct kvm_memory_slot.