From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Sean Christopherson, Vitaly
    Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-mips@vger.kernel.org, kvm@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
    Ben Gardon, Peter Feiner, Peter Shier, Junaid Shahid, Christoffer Dall
Subject: [PATCH v2 08/21] KVM: x86/mmu: Clean up the gorilla math in mmu_topup_memory_caches()
Date: Mon, 22 Jun 2020 13:08:09 -0700
Message-Id: <20200622200822.4426-9-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200622200822.4426-1-sean.j.christopherson@intel.com>
References: <20200622200822.4426-1-sean.j.christopherson@intel.com>

Clean up the minimums in mmu_topup_memory_caches() to document the
driving mechanisms behind the minimums.  Now that encountering an empty
cache is unlikely to trigger BUG_ON(), it is less dangerous to be more
precise when defining the minimums.

For rmaps, the logic is 1 parent PTE per level, plus a single rmap, and
prefetched rmaps.  The extra objects in the current '8 + PREFETCH'
minimum came about due to an abundance of paranoia in commit
c41ef344de212 ("KVM: MMU: increase per-vcpu rmap cache alloc size"),
i.e. it could have increased the minimum to 2 rmaps.  Furthermore, the
unexpected extra rmap case was killed off entirely by commits
f759e2b4c728c ("KVM: MMU: avoid pte_list_desc running out in
kvm_mmu_pte_write") and f5a1e9f89504f ("KVM: MMU: remove call to
kvm_mmu_pte_write from walk_addr").

For the so called page cache, replace '8' with 2*PT64_ROOT_MAX_LEVEL.
The 2x multiplier is needed because the cache is used for both shadow
pages and gfn arrays for indirect MMUs.

And finally, for page headers, replace '4' with PT64_ROOT_MAX_LEVEL.

Note, KVM now supports 5-level paging, i.e. the old minimums that used
a baseline derived from 4-level paging were technically wrong.  But,
KVM always allocates roots in a separate flow, e.g. it's impossible in
the current implementation to actually need 5 new shadow pages in a
single flow.  Use PT64_ROOT_MAX_LEVEL unmodified instead of subtracting
1, as the direct usage is likely more intuitive to uninformed readers,
and the inflated minimum is unlikely to affect functionality in
practice.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4b4c3234d623..451e0365e5dd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1103,14 +1103,17 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 {
 	int r;
 
+	/* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
 	r = mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-				   8 + PTE_PREFETCH_NUM);
+				   1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache, 8);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
+				   2 * PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
-	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, 4);
+	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
+				      PT64_ROOT_MAX_LEVEL);
 }
 
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
-- 
2.26.0
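
For readers who want the concrete numbers behind the new expressions, the
standalone sketch below evaluates the old and new minimums.  It assumes the
current x86 definitions PT64_ROOT_MAX_LEVEL == 5 and PTE_PREFETCH_NUM == 8,
which this patch does not change and does not restate, so treat the printed
values as illustrative rather than authoritative.

/*
 * Illustrative sketch only: computes the cache minimums before and after
 * this patch, assuming PT64_ROOT_MAX_LEVEL == 5 and PTE_PREFETCH_NUM == 8.
 */
#include <stdio.h>

#define PT64_ROOT_MAX_LEVEL 5   /* assumed value, not part of this patch */
#define PTE_PREFETCH_NUM    8   /* assumed value, not part of this patch */

int main(void)
{
        /* 1 rmap + 1 parent PTE per level + prefetched rmaps. */
        printf("pte_list_desc_cache: %d (was %d)\n",
               1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM,
               8 + PTE_PREFETCH_NUM);

        /* Shadow pages plus gfn arrays for indirect MMUs, hence the 2x. */
        printf("page_cache:          %d (was 8)\n", 2 * PT64_ROOT_MAX_LEVEL);

        /* One page header per level. */
        printf("page_header_cache:   %d (was 4)\n", PT64_ROOT_MAX_LEVEL);

        return 0;
}

Under those assumptions the rmap cache minimum drops from 16 to 14 objects,
the page cache grows from 8 to 10, and the page header cache grows from 4
to 5, matching the "more precise, slightly inflated" trade-off described in
the changelog.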