From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-mips@vger.kernel.org, kvm@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon,
    Peter Feiner, Peter Shier, Junaid Shahid, Christoffer Dall
Subject: [PATCH v3 09/21] KVM: x86/mmu: Separate the memory caches for shadow pages and gfn arrays
Date: Thu, 2 Jul 2020 19:35:33 -0700
Message-Id: <20200703023545.8771-10-sean.j.christopherson@intel.com>
In-Reply-To: <20200703023545.8771-1-sean.j.christopherson@intel.com>
References: <20200703023545.8771-1-sean.j.christopherson@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Use separate caches for allocating shadow pages versus gfn arrays.  This
sets the stage for specifying __GFP_ZERO when allocating shadow pages
without incurring extra cost for gfn arrays.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu/mmu.c          | 15 ++++++++++-----
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 71bc32e00d7e..b71a4e77f65a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -636,7 +636,8 @@ struct kvm_vcpu_arch {
 	struct kvm_mmu *walk_mmu;
 
 	struct kvm_mmu_memory_cache mmu_pte_list_desc_cache;
-	struct kvm_mmu_memory_cache mmu_page_cache;
+	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
+	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
 	struct kvm_mmu_memory_cache mmu_page_header_cache;
 
 	/*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cf02ad93c249..8e1b55d8a728 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1108,8 +1108,12 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 				   1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
-				   2 * PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+				   PT64_ROOT_MAX_LEVEL);
+	if (r)
+		return r;
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache,
+				   PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
 	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
@@ -1119,7 +1123,8 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
 	mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache);
 	mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
@@ -2096,9 +2101,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 	struct kvm_mmu_page *sp;
 
 	sp = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+	sp->spt = mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	if (!direct)
-		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache);
+		sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
 	/*
-- 
2.26.0