From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: David Matlack, Sean Christopherson, Paolo Bonzini, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/6] KVM: x86/mmu: Introduce helper function to do range-based flushing for given page
Date: Wed, 24 Aug 2022 17:29:22 +0800
X-Mailer: git-send-email 2.31.1

Flushing the TLB for one page (huge or not) is the main use case, so
introduce a helper function for this common operation to make the code
clearer.

Suggested-by: David Matlack
Signed-off-by: Hou Wenlong
---
 arch/x86/kvm/mmu/mmu.c          | 16 ++++++----------
 arch/x86/kvm/mmu/mmu_internal.h | 10 ++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++----
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e0b9432b9491..92ca76e11d96 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -268,16 +268,14 @@ static void kvm_flush_remote_tlbs_sptep(struct kvm *kvm, u64 *sptep)
 	struct kvm_mmu_page *sp = sptep_to_sp(sptep);
 	gfn_t gfn = kvm_mmu_page_get_gfn(sp, spte_index(sptep));
 
-	kvm_flush_remote_tlbs_with_address(kvm, gfn,
-					   KVM_PAGES_PER_HPAGE(sp->role.level));
+	kvm_flush_remote_tlbs_gfn(kvm, gfn, sp->role.level);
 }
 
 /* Flush all memory mapped by the given direct SP. */
 static void kvm_flush_remote_tlbs_direct_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	WARN_ON_ONCE(!sp->role.direct);
-	kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
-					   KVM_PAGES_PER_HPAGE(sp->role.level + 1));
+	kvm_flush_remote_tlbs_gfn(kvm, sp->gfn, sp->role.level + 1);
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
@@ -1449,8 +1447,8 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	}
 
 	if (need_flush && kvm_available_flush_tlb_with_range()) {
-		kvm_flush_remote_tlbs_with_address(kvm, gfn & -KVM_PAGES_PER_HPAGE(level),
-						   KVM_PAGES_PER_HPAGE(level));
+		kvm_flush_remote_tlbs_gfn(kvm, gfn & -KVM_PAGES_PER_HPAGE(level),
+					  level);
 		return false;
 	}
 
@@ -1618,8 +1616,7 @@ static void __rmap_add(struct kvm *kvm,
 
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_zap_all_rmap_sptes(kvm, rmap_head);
-		kvm_flush_remote_tlbs_with_address(
-				kvm, gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+		kvm_flush_remote_tlbs_gfn(kvm, gfn, sp->role.level);
 	}
 }
 
@@ -2844,8 +2841,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	}
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
-						   KVM_PAGES_PER_HPAGE(level));
+		kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
 
 	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 582def531d4d..6651c154f2e0 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -163,8 +163,18 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
+
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 					u64 start_gfn, u64 pages);
+
+/* Flush the given page (huge or not) of guest memory. */
+static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
+{
+	u64 pages = KVM_PAGES_PER_HPAGE(level);
+
+	kvm_flush_remote_tlbs_with_address(kvm, gfn, pages);
+}
+
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 08b7932122ec..567691440ab0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -673,8 +673,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	if (ret)
 		return ret;
 
-	kvm_flush_remote_tlbs_with_address(kvm, iter->gfn,
-					   KVM_PAGES_PER_HPAGE(iter->level));
+	kvm_flush_remote_tlbs_gfn(kvm, iter->gfn, iter->level);
 
 	/*
 	 * No other thread can overwrite the removed SPTE as they must either
@@ -1071,8 +1070,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))
-		kvm_flush_remote_tlbs_with_address(vcpu->kvm, iter->gfn,
-						   KVM_PAGES_PER_HPAGE(iter->level));
+		kvm_flush_remote_tlbs_gfn(vcpu->kvm, iter->gfn, iter->level);
 
 	/*
 	 * If the page fault was caused by a write but the page is write
-- 
2.31.1
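
[Editorial note, not part of the patch: a minimal, self-contained user-space
sketch of the refactor above, illustrating what the new helper wraps and how
a call site simplifies. The struct kvm stub, the simplified
KVM_PAGES_PER_HPAGE() definition, and the printf-based flush primitive are
illustrative stand-ins, not the kernel's definitions.]

#include <stdio.h>

typedef unsigned long long u64;
typedef u64 gfn_t;

/*
 * Simplified stand-in for the kernel macro: number of 4 KiB pages covered by
 * one mapping at the given level (level 1 = 4 KiB, 2 = 2 MiB, 3 = 1 GiB).
 */
#define KVM_PAGES_PER_HPAGE(level) (1ULL << (((level) - 1) * 9))

struct kvm { int dummy; };

/* Stand-in for the existing range-based flush primitive. */
static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
					       u64 pages)
{
	(void)kvm;
	printf("flush: start_gfn=0x%llx pages=%llu\n", start_gfn, pages);
}

/*
 * The helper introduced by this patch: flush the one (possibly huge) page
 * that maps @gfn at @level.
 */
static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
{
	u64 pages = KVM_PAGES_PER_HPAGE(level);

	kvm_flush_remote_tlbs_with_address(kvm, gfn, pages);
}

int main(void)
{
	struct kvm kvm = { 0 };

	/* Before: every call site spelled out the page count itself. */
	kvm_flush_remote_tlbs_with_address(&kvm, 0x1000, KVM_PAGES_PER_HPAGE(2));

	/* After: the same flush expressed through the helper. */
	kvm_flush_remote_tlbs_gfn(&kvm, 0x1000, 2);

	return 0;
}

The point of the helper is that the level-to-page-count conversion lives in
one place, so callers only name the gfn and the mapping level.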