From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-mips@vger.kernel.org, kvm@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org, Ben Gardon, Peter Feiner, Peter Shier,
 Junaid Shahid, Christoffer Dall
Subject: [PATCH v3 12/21] KVM: x86/mmu: Skip filling the gfn cache for guaranteed direct MMU topups
Date: Thu, 2 Jul 2020 19:35:36 -0700
Message-Id: <20200703023545.8771-13-sean.j.christopherson@intel.com>
In-Reply-To: <20200703023545.8771-1-sean.j.christopherson@intel.com>
References: <20200703023545.8771-1-sean.j.christopherson@intel.com>

Don't bother filling the gfn array cache when the caller is a fully
direct MMU, i.e. one that won't need a gfn array for its shadow pages.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 18 ++++++++++--------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 60b0d460bbf5..586d63de0e78 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1101,7 +1101,7 @@ static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
 	}
 }
 
-static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
+static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
 
@@ -1114,10 +1114,12 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 				   PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
-				   PT64_ROOT_MAX_LEVEL);
-	if (r)
-		return r;
+	if (maybe_indirect) {
+		r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+					   PT64_ROOT_MAX_LEVEL);
+		if (r)
+			return r;
+	}
 	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
 				      PT64_ROOT_MAX_LEVEL);
 }
@@ -4107,7 +4109,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 	if (fast_page_fault(vcpu, gpa, error_code))
 		return RET_PF_RETRY;
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, false);
 	if (r)
 		return r;
 
@@ -5142,7 +5144,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 {
 	int r;
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->direct_map);
 	if (r)
 		goto out;
 	r = mmu_alloc_roots(vcpu);
@@ -5336,7 +5338,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	 * or not since pte prefetch is skiped if it does not have
 	 * enough objects in the cache.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memory_caches(vcpu, true);
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 451d7aa7d959..8d2159ae3bdf 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -815,7 +815,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 		return RET_PF_EMULATE;
 	}
 
-	r = mmu_topup_memory_caches(vcpu);
+	r = mmu_topup_memory_caches(vcpu, true);
 	if (r)
 		return r;
 
@@ -902,7 +902,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 	 * No need to check return value here, rmap_can_add() can
 	 * help us to skip pte prefetch later.
 	 */
-	mmu_topup_memory_caches(vcpu);
+	mmu_topup_memory_caches(vcpu, true);
 
 	if (!VALID_PAGE(root_hpa)) {
 		WARN_ON(1);
-- 
2.26.0
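
For readers following along outside the kernel tree, here is a minimal
userspace sketch of the topup flow this patch creates. The function and
cache names and the PT64_ROOT_MAX_LEVEL capacity mirror the patch;
everything else (the struct layout, the calloc() backend, the object
size, the main() driver) is an illustrative stand-in, not KVM code.

/*
 * Standalone model of the maybe_indirect topup logic; cache and
 * function names mirror the patch, the allocation backend and the
 * backing object size are illustrative stand-ins.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define PT64_ROOT_MAX_LEVEL 5

struct memory_cache {
	int nobjs;
	void *objects[PT64_ROOT_MAX_LEVEL];
};

/* Fill @mc up to @min preallocated objects, like mmu_topup_memory_cache(). */
static int topup_memory_cache(struct memory_cache *mc, int min)
{
	while (mc->nobjs < min) {
		void *obj = calloc(1, 64);

		if (!obj)
			return -ENOMEM;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

struct vcpu_caches {
	struct memory_cache pte_list_desc;
	struct memory_cache shadow_page;
	struct memory_cache gfn_array;
	struct memory_cache page_header;
};

/*
 * Mirrors the reworked mmu_topup_memory_caches(): the gfn array cache
 * is topped up only when the MMU may be indirect.  A fully direct MMU
 * never attaches a gfn array to its shadow pages, so its page fault
 * path passes maybe_indirect=false and skips those allocations.
 */
static int topup_caches(struct vcpu_caches *c, bool maybe_indirect)
{
	int r;

	r = topup_memory_cache(&c->pte_list_desc, PT64_ROOT_MAX_LEVEL);
	if (r)
		return r;
	r = topup_memory_cache(&c->shadow_page, PT64_ROOT_MAX_LEVEL);
	if (r)
		return r;
	if (maybe_indirect) {
		r = topup_memory_cache(&c->gfn_array, PT64_ROOT_MAX_LEVEL);
		if (r)
			return r;
	}
	return topup_memory_cache(&c->page_header, PT64_ROOT_MAX_LEVEL);
}

int main(void)
{
	struct vcpu_caches direct = { 0 }, indirect = { 0 };

	topup_caches(&direct, false);	/* direct_page_fault() */
	topup_caches(&indirect, true);	/* FNAME(page_fault), pte write */

	printf("direct MMU gfn_array objects:   %d\n", direct.gfn_array.nobjs);
	printf("indirect MMU gfn_array objects: %d\n", indirect.gfn_array.nobjs);
	return 0;
}

The sketch prints 0 objects for the direct caches and 5 for the
indirect ones. In the kernel the caches persist across faults and are
only refilled as objects are consumed, so the practical saving for a
direct MMU is the memory and initial fills for objects it would never
consume, on what is the hot fault path for TDP guests.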