Date: Tue, 6 Dec 2022 17:35:58 +0000
In-Reply-To: <20221206173601.549281-1-bgardon@google.com>
Mime-Version: 1.0
References: <20221206173601.549281-1-bgardon@google.com>
X-Mailer: git-send-email 2.39.0.rc0.267.gcb52ba06e7-goog
Message-ID: <20221206173601.549281-5-bgardon@google.com>
Subject: [PATCH 4/7] KVM: x86/MMU: Move rmap_can_add() and rmap_remove() to rmap.c
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu,
	Sean Christopherson, David Matlack, Vipin Sharma, Ben Gardon
Content-Type: text/plain; charset="UTF-8"

Move the functions that check whether an entry can be added to an rmap
and that remove elements from an rmap to rmap.(c|h).

No functional change intended.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 34 +--------------------------------
 arch/x86/kvm/mmu/mmu_internal.h |  1 +
 arch/x86/kvm/mmu/rmap.c         | 32 +++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/rmap.h         |  3 +++
 4 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f8d7201210c8..52e487d89d54 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -658,7 +658,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 
 static bool sp_has_gptes(struct kvm_mmu_page *sp);
 
-static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
 {
 	if (sp->role.passthrough)
 		return sp->gfn;
@@ -891,38 +891,6 @@ static bool kvm_zap_all_rmap_sptes(struct kvm *kvm,
 	return true;
 }
 
-static bool rmap_can_add(struct kvm_vcpu *vcpu)
-{
-	struct kvm_mmu_memory_cache *mc;
-
-	mc = &vcpu->arch.mmu_pte_list_desc_cache;
-	return kvm_mmu_memory_cache_nr_free_objects(mc);
-}
-
-static void rmap_remove(struct kvm *kvm, u64 *spte)
-{
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *slot;
-	struct kvm_mmu_page *sp;
-	gfn_t gfn;
-	struct kvm_rmap_head *rmap_head;
-
-	sp = sptep_to_sp(spte);
-	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
-
-	/*
-	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
-	 * so we have to determine which memslots to use based on context
-	 * information in sp->role.
-	 */
-	slots = kvm_memslots_for_spte_role(kvm, sp->role);
-
-	slot = __gfn_to_memslot(slots, gfn);
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-
-	pte_list_remove(spte, rmap_head);
-}
-
 static void drop_spte(struct kvm *kvm, u64 *sptep)
 {
 	u64 old_spte = mmu_spte_clear_track_bits(kvm, sptep);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index cd1c8f32269d..3de703c2a5d4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -318,4 +318,5 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/rmap.c b/arch/x86/kvm/mmu/rmap.c
index 272e89147d96..6833676aa9ea 100644
--- a/arch/x86/kvm/mmu/rmap.c
+++ b/arch/x86/kvm/mmu/rmap.c
@@ -208,3 +208,35 @@ struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 	idx = gfn_to_index(gfn, slot->base_gfn, level);
 	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
 }
+
+bool rmap_can_add(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_memory_cache *mc;
+
+	mc = &vcpu->arch.mmu_pte_list_desc_cache;
+	return kvm_mmu_memory_cache_nr_free_objects(mc);
+}
+
+void rmap_remove(struct kvm *kvm, u64 *spte)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	struct kvm_mmu_page *sp;
+	gfn_t gfn;
+	struct kvm_rmap_head *rmap_head;
+
+	sp = sptep_to_sp(spte);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
+
+	/*
+	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
+	 * so we have to determine which memslots to use based on context
+	 * information in sp->role.
+	 */
+	slots = kvm_memslots_for_spte_role(kvm, sp->role);
+
+	slot = __gfn_to_memslot(slots, gfn);
+	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+
+	pte_list_remove(spte, rmap_head);
+}
diff --git a/arch/x86/kvm/mmu/rmap.h b/arch/x86/kvm/mmu/rmap.h
index 45732eda57e5..81df186ba3c3 100644
--- a/arch/x86/kvm/mmu/rmap.h
+++ b/arch/x86/kvm/mmu/rmap.h
@@ -51,4 +51,7 @@ u64 *rmap_get_next(struct rmap_iterator *iter);
 
 struct kvm_rmap_head *gfn_to_rmap(gfn_t gfn, int level,
 				  const struct kvm_memory_slot *slot);
+
+bool rmap_can_add(struct kvm_vcpu *vcpu);
+void rmap_remove(struct kvm *kvm, u64 *spte);
 #endif /* __KVM_X86_MMU_RMAP_H */
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
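
Illustrative usage, not part of the patch: a hedged sketch of how MMU code
outside rmap.c could call the newly exported helpers once it includes rmap.h.
The wrapper functions below (example_*) are hypothetical and exist only to
show the calling convention; they assume a file living in arch/x86/kvm/mmu/
so that the local headers resolve.

    #include "mmu_internal.h"
    #include "rmap.h"

    /* Hypothetical caller: tear down the rmap entry for a present SPTE. */
    static void example_drop_parent_spte(struct kvm *kvm, u64 *sptep)
    {
    	/*
    	 * rmap_remove() derives the memslot from the shadow page's role,
    	 * so it can be called outside of vCPU context.
    	 */
    	rmap_remove(kvm, sptep);
    }

    /* Hypothetical caller: only grow an rmap when a cached desc is free. */
    static bool example_can_grow_rmap(struct kvm_vcpu *vcpu)
    {
    	/*
    	 * rmap_can_add() is true only while the vCPU's
    	 * mmu_pte_list_desc_cache still has preallocated objects, so a
    	 * subsequent rmap insertion will not need to allocate.
    	 */
    	return rmap_can_add(vcpu);
    }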