From: Lai Jiangshan
Date: Wed, 14 Dec 2022 23:07:43 +0800
Subject: Re: [PATCH v4 2/6] KVM: x86/mmu: Fix wrong gfn range of tlb flushing in kvm_set_pte_rmapp()
To: Sean Christopherson
Cc: Hou Wenlong, kvm@vger.kernel.org, David Matlack, Paolo Bonzini,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", Lan Tianyu, linux-kernel@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 13, 2022 at 1:00 AM Sean Christopherson wrote:
>
> On Mon, Oct 10, 2022, Hou Wenlong wrote:
> > When the spte of a huge page is dropped in kvm_set_pte_rmapp(), the whole
> > gfn range covered by the spte should be flushed. However,
> > rmap_walk_init_level() doesn't align the gfn down for the new level the
> > way the tdp iterator does, so the gfn used in kvm_set_pte_rmapp() is not
> > the base gfn of the huge page, and the size of the gfn range is wrong as
> > well. Use the base gfn and the size of the huge page when flushing TLBs
> > for huge pages. Also introduce a helper function to flush a given page
> > (huge or not) of guest memory, which helps prevent future buggy use of
> > kvm_flush_remote_tlbs_with_address() in such cases.
> >
> > Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
> > Signed-off-by: Hou Wenlong
> > ---
> >  arch/x86/kvm/mmu/mmu.c          |  4 +++-
> >  arch/x86/kvm/mmu/mmu_internal.h | 10 ++++++++++
> >  2 files changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 7de3579d5a27..4874c603ed1c 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1430,7 +1430,9 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
> >  	}
> >
> >  	if (need_flush && kvm_available_flush_tlb_with_range()) {
> > -		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
> > +		gfn_t base = gfn_round_for_level(gfn, level);
> > +
> > +		kvm_flush_remote_tlbs_gfn(kvm, base, level);
> >  		return false;
> >  	}
> >
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 17488d70f7da..249bfcd502b4 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -168,8 +168,18 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
> >  bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
> >  				    struct kvm_memory_slot *slot, u64 gfn,
> >  				    int min_level);
> > +
> >  void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
> >  					u64 start_gfn, u64 pages);
> > +
> > +/* Flush the given page (huge or not) of guest memory. */
> > +static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
> > +{
> > +	u64 pages = KVM_PAGES_PER_HPAGE(level);
> > +
>
> Rather than require the caller to align gfn, what about doing gfn_round_for_level()
> in this helper? It's a little odd that the caller needs to align gfn but doesn't
> have to compute the size.
>
> I'm 99% certain kvm_set_pte_rmap() is the only path that doesn't already align
> the gfn, but it's nice to not have to worry about getting this right, e.g.
> alternatively this helper could WARN if the gfn is misaligned, but that's _more_
> work.
>
> 	kvm_flush_remote_tlbs_with_address(kvm, gfn_round_for_level(gfn, level),
> 					   KVM_PAGES_PER_HPAGE(level));
>
> If no one objects, this can be done when the series is applied, i.e. no need to
> send v5 just for this.

Hello Paolo, Sean, Hou,

It seems this patchset has not been queued yet. I believe it does fix real bugs.

Thanks,
Lai