From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
	Lan Tianyu, linux-kernel@vger.kernel.org
Subject: [PATCH 4/5] KVM: x86/mmu: Fix wrong start gfn of tlb flushing with range
Date: Fri, 24 Jun 2022 11:37:00 +0800
Message-Id: <1dc86beeb58c54ac027d9c67d7e1ad9252b4b2a4.1656039275.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a spte is dropped, the start gfn of the TLB flush should be the gfn
of the spte itself, not the base gfn of the shadow page (SP) that
contains the spte.

Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c         | 8 +++++---
 arch/x86/kvm/mmu/paging_tmpl.h | 3 ++-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 37bfc88ea212..577b85860891 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1145,7 +1145,8 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
 	drop_spte(kvm, sptep);
 
 	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+		kvm_flush_remote_tlbs_with_address(kvm,
+			kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
 			KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
@@ -1596,7 +1597,7 @@ static void __rmap_add(struct kvm *kvm,
 	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
 		kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
 		kvm_flush_remote_tlbs_with_address(
-				kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm, gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 	}
 }
 
@@ -6397,7 +6398,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 			pte_list_remove(kvm, rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
-				kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+				kvm_flush_remote_tlbs_with_address(kvm,
+					kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
 					KVM_PAGES_PER_HPAGE(sp->role.level));
 			else
 				need_tlb_flush = 1;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 2448fa8d8438..fa78ee0caffd 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -938,7 +938,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 		mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 		if (is_shadow_present_pte(old_spte))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm,
-				sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+				kvm_mmu_page_get_gfn(sp, sptep - sp->spt),
+				KVM_PAGES_PER_HPAGE(sp->role.level));
 
 		if (!rmap_can_add(vcpu))
 			break;
-- 
2.31.1
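
A note for readers less familiar with the KVM MMU: the sketch below is a
minimal, standalone illustration, not kernel code; PT64_LEVEL_BITS, the
spte_gfn() helper and the sample numbers are assumptions for this example.
It shows why sp->gfn, the base gfn of a whole shadow page, generally differs
from the gfn mapped by the spte at a given index, which is what the patch
resolves via kvm_mmu_page_get_gfn(sp, sptep - sp->spt) before flushing.

/*
 * Illustrative sketch only, not the kernel implementation.  A shadow page
 * holds 512 sptes; the gfn covered by the spte at 'index' is offset from
 * the page's base gfn by index entries, each spanning 2^((level-1)*9)
 * 4KiB pages.  Flushing KVM_PAGES_PER_HPAGE(level) pages starting at the
 * base gfn therefore only covers the range of index 0.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define PT64_LEVEL_BITS	9	/* 512 entries per shadow page */

/* gfn covered by the spte at 'index' in a direct shadow page at 'level' */
static gfn_t spte_gfn(gfn_t base_gfn, int index, int level)
{
	return base_gfn + ((gfn_t)index << ((level - 1) * PT64_LEVEL_BITS));
}

int main(void)
{
	gfn_t base_gfn = 0x100000;	/* hypothetical sp->gfn */
	int level = 2;			/* each spte covers 512 4KiB pages */
	int index = 37;			/* sptep - sp->spt */

	printf("base gfn of SP      = 0x%llx\n",
	       (unsigned long long)base_gfn);
	printf("gfn of dropped spte = 0x%llx\n",
	       (unsigned long long)spte_gfn(base_gfn, index, level));
	return 0;
}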