From: Sean Christopherson
To: Marc Zyngier, Paolo Bonzini, Arnd Bergmann
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Sean Christopherson, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon,
    Peter Feiner, Peter Shier, Junaid Shahid, Christoffer Dall
Subject: [PATCH v2 01/21] KVM: x86/mmu: Track the associated kmem_cache in the MMU caches
Date: Mon, 22 Jun 2020 13:08:02 -0700
Message-Id: <20200622200822.4426-2-sean.j.christopherson@intel.com>
In-Reply-To: <20200622200822.4426-1-sean.j.christopherson@intel.com>
References: <20200622200822.4426-1-sean.j.christopherson@intel.com>

Track the kmem_cache used for non-page KVM MMU memory caches instead of
passing in the associated kmem_cache when filling the cache.  This will
allow consolidating code and other cleanups.

No functional change intended.

Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          | 24 +++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8998e97457f..7b6ac8fad9c2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -251,6 +251,7 @@ struct kvm_kernel_irq_routing_entry;
  */
 struct kvm_mmu_memory_cache {
 	int nobjs;
+	struct kmem_cache *kmem_cache;
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fdd05c233308..0830c195c9ed 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1060,15 +1060,14 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	local_irq_enable();
 }
 
-static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
-				  struct kmem_cache *base_cache, int min)
+static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache, int min)
 {
 	void *obj;
 
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		obj = kmem_cache_zalloc(base_cache, GFP_KERNEL_ACCOUNT);
+		obj = kmem_cache_zalloc(cache->kmem_cache, GFP_KERNEL_ACCOUNT);
 		if (!obj)
 			return cache->nobjs >= min ? 0 : -ENOMEM;
 		cache->objects[cache->nobjs++] = obj;
@@ -1081,11 +1080,10 @@ static int mmu_memory_cache_free_objects(struct kvm_mmu_memory_cache *cache)
 	return cache->nobjs;
 }
 
-static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc,
-				  struct kmem_cache *cache)
+static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
 {
 	while (mc->nobjs)
-		kmem_cache_free(cache, mc->objects[--mc->nobjs]);
+		kmem_cache_free(mc->kmem_cache, mc->objects[--mc->nobjs]);
 }
 
 static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
@@ -1115,25 +1113,22 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 	int r;
 
 	r = mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-				   pte_list_desc_cache, 8 + PTE_PREFETCH_NUM);
+				   8 + PTE_PREFETCH_NUM);
 	if (r)
 		goto out;
 	r = mmu_topup_memory_cache_page(&vcpu->arch.mmu_page_cache, 8);
 	if (r)
 		goto out;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
-				   mmu_page_header_cache, 4);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, 4);
 out:
 	return r;
 }
 
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
-	mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-			      pte_list_desc_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
 	mmu_free_memory_cache_page(&vcpu->arch.mmu_page_cache);
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache,
-			      mmu_page_header_cache);
+	mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
 }
 
 static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
@@ -5684,6 +5679,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	uint i;
 	int ret;
 
+	vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache;
+	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
+
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 
-- 
2.26.0
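
As a purely illustrative aside for readers outside the kernel tree, below is a
minimal userspace C sketch of the pattern this patch introduces: the cache
records at creation time which allocator backs it, so the topup and free paths
take only the cache and a count instead of being handed the allocator at every
call site (which is what lets later cleanups share those helpers).  Every name
here (obj_allocator, obj_cache, obj_cache_topup, obj_cache_free,
OBJ_CACHE_CAPACITY) is invented for the example, and plain calloc()/free()
stand in for kmem_cache_zalloc()/kmem_cache_free(); this is a sketch of the
idea, not kernel code.

/* Userspace analogue of the kvm_mmu_memory_cache change in this patch. */
#include <stdio.h>
#include <stdlib.h>

#define OBJ_CACHE_CAPACITY 40	/* stands in for KVM_NR_MEM_OBJS */

struct obj_allocator {
	size_t obj_size;	/* stands in for a struct kmem_cache */
};

struct obj_cache {
	int nobjs;
	struct obj_allocator *allocator;	/* tracked, like ->kmem_cache */
	void *objects[OBJ_CACHE_CAPACITY];
};

/* Fill the cache to at least @min objects; the allocator comes from the cache. */
static int obj_cache_topup(struct obj_cache *cache, int min)
{
	void *obj;

	if (cache->nobjs >= min)
		return 0;
	while (cache->nobjs < OBJ_CACHE_CAPACITY) {
		obj = calloc(1, cache->allocator->obj_size);
		if (!obj)
			return cache->nobjs >= min ? 0 : -1;
		cache->objects[cache->nobjs++] = obj;
	}
	return 0;
}

/* Return every cached object; again, no allocator parameter is needed. */
static void obj_cache_free(struct obj_cache *cache)
{
	while (cache->nobjs)
		free(cache->objects[--cache->nobjs]);
}

int main(void)
{
	struct obj_allocator alloc = { .obj_size = 64 };
	struct obj_cache cache = { .allocator = &alloc };	/* set once, like kvm_mmu_create() */

	if (obj_cache_topup(&cache, 8))	/* callers pass only cache + count */
		return 1;
	printf("cached %d objects\n", cache.nobjs);
	obj_cache_free(&cache);
	return 0;
}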