From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: maz@kernel.org, anup@brainfault.org, seanjc@google.com, bgardon@google.com,
	peterx@redhat.com, maciej.szmigiero@oracle.com, kvmarm@lists.cs.columbia.edu,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, pfeiner@google.com,
	jiangshanlai@gmail.com, dmatlack@google.com
Subject: [PATCH v7 10/23] KVM: x86/mmu: Pass memory caches to allocate SPs separately
Date: Wed, 22 Jun 2022 15:26:57 -0400
Message-Id: <20220622192710.2547152-11-pbonzini@redhat.com>
In-Reply-To: <20220622192710.2547152-1-pbonzini@redhat.com>
References: <20220622192710.2547152-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: David Matlack <dmatlack@google.com>

Refactor kvm_mmu_alloc_shadow_page() to receive the caches from which it
will allocate the various pieces of memory for shadow pages as a
parameter, rather than deriving them from the vcpu pointer. This will be
useful in a future commit where shadow pages are allocated during VM
ioctls for eager page splitting, and thus will use a different set of
caches.

Preemptively pull the caches out all the way to kvm_mmu_get_shadow_page()
since eager page splitting will not be calling
kvm_mmu_alloc_shadow_page() directly.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220516232138.1783324-11-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2602c3642f23..fab417e7bf6c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2049,17 +2049,25 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+/* Caches used when allocating a new shadow page. */
+struct shadow_page_caches {
+	struct kvm_mmu_memory_cache *page_header_cache;
+	struct kvm_mmu_memory_cache *shadow_page_cache;
+	struct kvm_mmu_memory_cache *gfn_array_cache;
+};
+
 static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+						      struct shadow_page_caches *caches,
 						      gfn_t gfn,
 						      struct hlist_head *sp_list,
 						      union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 
-	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
+	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
 	if (!role.direct)
-		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+		sp->gfns = kvm_mmu_memory_cache_alloc(caches->gfn_array_cache);
 
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -2081,9 +2089,10 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
-						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						      struct shadow_page_caches *caches,
+						      gfn_t gfn,
+						      union kvm_mmu_page_role role)
 {
 	struct hlist_head *sp_list;
 	struct kvm_mmu_page *sp;
@@ -2094,13 +2103,26 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	sp = kvm_mmu_find_shadow_page(vcpu, gfn, sp_list, role);
 	if (!sp) {
 		created = true;
-		sp = kvm_mmu_alloc_shadow_page(vcpu, gfn, sp_list, role);
+		sp = kvm_mmu_alloc_shadow_page(vcpu, caches, gfn, sp_list, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
 	return sp;
 }
 
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
+{
+	struct shadow_page_caches caches = {
+		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
+		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
+		.gfn_array_cache = &vcpu->arch.mmu_gfn_array_cache,
+	};
+
+	return __kvm_mmu_get_shadow_page(vcpu, &caches, gfn, role);
+}
+
 static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
 						  unsigned int access)
 {
 	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
-- 
2.31.1
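
For readers who want the shape of the refactor without the KVM context, here is
a standalone sketch (plain C, compilable on its own; every name in it is
hypothetical and none of it is KVM code). It shows the same move the patch
makes: the allocator stops reaching into the per-vCPU context for its memory
caches and instead takes a small caches struct, so a future caller such as the
eager page splitting ioctl can pass a different set of caches while the
existing per-vCPU path keeps working through a thin wrapper.

/*
 * Standalone illustration only -- not KVM code; all names are made up.
 * An allocator that used to pull its object caches out of a per-vCPU
 * context now takes them as an explicit parameter struct, and the old
 * call path becomes a thin wrapper that packages the per-vCPU caches.
 */
#include <stdio.h>
#include <stdlib.h>

struct object_cache {
	void *objs[16];
	int nobjs;
};

/* Pop a preallocated object if one is available, else fall back to calloc(). */
static void *cache_alloc(struct object_cache *cache)
{
	if (cache->nobjs > 0)
		return cache->objs[--cache->nobjs];
	return calloc(1, 64);
}

/* Caches used when allocating a new "page" (plays the role of shadow_page_caches). */
struct page_caches {
	struct object_cache *header_cache;
	struct object_cache *data_cache;
};

struct vcpu_ctx {
	struct object_cache header_cache;
	struct object_cache data_cache;
};

/* The allocator no longer dereferences the vcpu; it only sees the caches. */
static void *alloc_page_from(struct page_caches *caches)
{
	void *hdr = cache_alloc(caches->header_cache);
	void *data = cache_alloc(caches->data_cache);

	printf("allocated header %p, data %p\n", hdr, data);
	return hdr;
}

/*
 * Existing call path: wrap the per-vCPU caches, just as the new
 * kvm_mmu_get_shadow_page() wraps __kvm_mmu_get_shadow_page().
 */
static void *alloc_page_for_vcpu(struct vcpu_ctx *vcpu)
{
	struct page_caches caches = {
		.header_cache = &vcpu->header_cache,
		.data_cache   = &vcpu->data_cache,
	};

	return alloc_page_from(&caches);
}

int main(void)
{
	struct vcpu_ctx vcpu = { 0 };
	struct object_cache split_hdr = { 0 }, split_data = { 0 };
	struct page_caches split_caches = {
		.header_cache = &split_hdr,
		.data_cache   = &split_data,
	};

	/* Per-vCPU path, unchanged for existing callers. */
	alloc_page_for_vcpu(&vcpu);

	/* A new caller (think: eager page splitting) supplies its own caches. */
	alloc_page_from(&split_caches);

	return 0;
}

Bundling the caches into one struct rather than adding three extra parameters
keeps the __kvm_mmu_get_shadow_page() signature stable if another cache is
needed later, which is presumably why the patch introduces
struct shadow_page_caches instead of passing each cache individually.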