From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: David Matlack, Sean Christopherson, Paolo Bonzini, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Lan Tianyu, linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/6] KVM: x86/mmu: Fix wrong gfn range of tlb flushing in kvm_set_pte_rmap()
Date: Mon, 10 Oct 2022 20:19:13 +0800
Message-Id: <0ce24d7078fa5f1f8d64b0c59826c50f32f8065e.1665214747.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1

When the SPTE of a huge page is dropped in kvm_set_pte_rmap(), the whole
gfn range covered by the SPTE should be flushed. However,
rmap_walk_init_level() doesn't align the gfn down for the new level the
way the TDP iterator does, so the gfn used in kvm_set_pte_rmap() is not
the base gfn of the huge page, and the size of the flushed gfn range is
wrong as well. Use the base gfn and the size of the huge page when
flushing TLBs for it. Also introduce a helper function to flush a given
page (huge or not) of guest memory, which helps prevent future buggy use
of kvm_flush_remote_tlbs_with_address() in such cases.
Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c          |  4 +++-
 arch/x86/kvm/mmu/mmu_internal.h | 10 ++++++++++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7de3579d5a27..4874c603ed1c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1430,7 +1430,9 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 	}
 
 	if (need_flush && kvm_available_flush_tlb_with_range()) {
-		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+		gfn_t base = gfn_round_for_level(gfn, level);
+
+		kvm_flush_remote_tlbs_gfn(kvm, base, level);
 		return false;
 	}
 
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 17488d70f7da..249bfcd502b4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -168,8 +168,18 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
 				    int min_level);
+
 void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 					u64 start_gfn, u64 pages);
+
+/* Flush the given page (huge or not) of guest memory. */
+static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
+{
+	u64 pages = KVM_PAGES_PER_HPAGE(level);
+
+	kvm_flush_remote_tlbs_with_address(kvm, gfn, pages);
+}
+
 unsigned int pte_list_count(struct kvm_rmap_head *rmap_head);
 
 extern int nx_huge_pages;
-- 
2.31.1