Date: Wed, 21 Jun 2023 12:51:31 +1000
From: "Nicholas Piggin"
To: "Yu Zhao"
Cc: "Andrew Morton", "Paolo Bonzini", "Alistair Popple", "Anup Patel",
 "Ben Gardon", "Borislav Petkov", "Catalin Marinas", "Chao Peng",
 "Christophe Leroy", "Dave Hansen", "Fabiano Rosas", "Gaosheng Cui",
 "Gavin Shan", "H. Peter Anvin", "Ingo Molnar", "James Morse",
 "Jason A. Donenfeld", "Jason Gunthorpe", "Jonathan Corbet",
 "Marc Zyngier", "Masami Hiramatsu", "Michael Ellerman",
 "Michael Larabel", "Mike Rapoport", "Oliver Upton", "Paul Mackerras",
 "Peter Xu", "Sean Christopherson", "Steven Rostedt",
 "Suzuki K Poulose", "Thomas Gleixner", "Thomas Huth", "Will Deacon",
 "Zenghui Yu"
Subject: Re: [PATCH mm-unstable v2 07/10] kvm/powerpc: add kvm_arch_test_clear_young()
X-Mailer: aerc 0.14.0
References: <20230526234435.662652-1-yuzhao@google.com>
 <20230526234435.662652-8-yuzhao@google.com>

On Wed Jun 21, 2023 at 10:38 AM AEST, Yu Zhao wrote:
> On Tue, Jun 20, 2023 at 1:48 AM Nicholas Piggin wrote:
> >
> > On Sat May 27, 2023 at 9:44 AM AEST, Yu Zhao wrote:
> > > Implement kvm_arch_test_clear_young() to support the fast path in
> > > mmu_notifier_ops->test_clear_young().
> > >
> > > It focuses on a simple case, i.e., radix MMU sets the accessed bit in
> > > KVM PTEs and VMs are not nested, where it can rely on RCU and
> > > pte_xchg() to safely clear the accessed bit without taking
> > > kvm->mmu_lock. Complex cases fall back to the existing slow path
> > > where kvm->mmu_lock is then taken.
> > >
> > > Signed-off-by: Yu Zhao
> > > ---
> > >  arch/powerpc/include/asm/kvm_host.h    |  8 ++++
> > >  arch/powerpc/include/asm/kvm_ppc.h     |  1 +
> > >  arch/powerpc/kvm/book3s.c              |  6 +++
> > >  arch/powerpc/kvm/book3s.h              |  1 +
> > >  arch/powerpc/kvm/book3s_64_mmu_radix.c | 59 ++++++++++++++++++++++++++
> > >  arch/powerpc/kvm/book3s_hv.c           |  5 +++
> > >  6 files changed, 80 insertions(+)
> > >
> > > diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> > > index 14ee0dece853..75c260ea8a9e 100644
> > > --- a/arch/powerpc/include/asm/kvm_host.h
> > > +++ b/arch/powerpc/include/asm/kvm_host.h
> > > @@ -883,4 +883,12 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> > >  static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> > >  static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
> > >
> > > +#define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
> > > +static inline bool kvm_arch_has_test_clear_young(void)
> > > +{
> > > +        return IS_ENABLED(CONFIG_KVM_BOOK3S_HV_POSSIBLE) &&
> > > +               cpu_has_feature(CPU_FTR_HVMODE) && cpu_has_feature(CPU_FTR_ARCH_300) &&
> > > +               radix_enabled();
> >
> > This could probably be radix_enabled() && !kvmhv_on_pseries().
>
> Will do. (I used !kvmhv_on_pseries() in v1 but had second thoughts on
> moving kvmhv_on_pseries() into this file.)

That should be okay. kvmhv_on_pseries() is a property of the host, so it
seems reasonable to move it here if needed.
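Something along these lines, say (an untested sketch that just folds the
suggested condition into the hunk above; it assumes kvmhv_on_pseries()
has been made visible in kvm_host.h):

        #define kvm_arch_has_test_clear_young kvm_arch_has_test_clear_young
        static inline bool kvm_arch_has_test_clear_young(void)
        {
                /* Radix host MMU on a bare-metal (non-nested) ISA v3.0 HV */
                return IS_ENABLED(CONFIG_KVM_BOOK3S_HV_POSSIBLE) &&
                       cpu_has_feature(CPU_FTR_HVMODE) &&
                       cpu_has_feature(CPU_FTR_ARCH_300) &&
                       radix_enabled() && !kvmhv_on_pseries();
        }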
> > Although unclear why not nested hypervisor... I'd have to think about
> > it a bit more. Do you have any idea what might go wrong, or just
> > didn't have the time to consider it? (Not just powerpc nested but any
> > others.)
>
> Yes, this series excludes nested KVM support on all architectures. The
> common reason for such a decision on powerpc and x86 (aarch64 doesn't
> support nested yet) is that it's quite challenging to make the rmap, a
> complex data structure that maps one PFN to multiple GFNs, lockless.
> (See kvmhv_insert_nest_rmap().) It might be possible; however, the
> potential ROI would be in question.

Okay, just wondering. The rmap (at least the powerpc one) is just a
list, I think, with a few details. If that is all it is, it might not be
so hard to make it lock-free, or to put a fine-grained lock on the rmap
chains. But it's fine to ignore that to start with.

> > > +}
> > > +
> > >  #endif /* __POWERPC_KVM_HOST_H__ */
> > > diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> > > index 79a9c0bb8bba..ff1af6a7b44f 100644
> > > --- a/arch/powerpc/include/asm/kvm_ppc.h
> > > +++ b/arch/powerpc/include/asm/kvm_ppc.h
> > > @@ -287,6 +287,7 @@ struct kvmppc_ops {
> > >          bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range);
> > >          bool (*age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> > >          bool (*test_age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> > > +        bool (*test_clear_young)(struct kvm *kvm, struct kvm_gfn_range *range);
> > >          bool (*set_spte_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
> > >          void (*free_memslot)(struct kvm_memory_slot *slot);
> > >          int (*init_vm)(struct kvm *kvm);
> > > diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> > > index 686d8d9eda3e..37bf40b0c4ff 100644
> > > --- a/arch/powerpc/kvm/book3s.c
> > > +++ b/arch/powerpc/kvm/book3s.c
> > > @@ -899,6 +899,12 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> > >          return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
> > >  }
> > >
> > > +bool kvm_arch_test_clear_young(struct kvm *kvm, struct kvm_gfn_range *range)
> > > +{
> > > +        return !kvm->arch.kvm_ops->test_clear_young ||
> > > +               kvm->arch.kvm_ops->test_clear_young(kvm, range);
> > > +}
> > > +
> > >  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> > >  {
> > >          return kvm->arch.kvm_ops->set_spte_gfn(kvm, range);
> > > diff --git a/arch/powerpc/kvm/book3s.h b/arch/powerpc/kvm/book3s.h
> > > index 58391b4b32ed..fa2659e21ccc 100644
> > > --- a/arch/powerpc/kvm/book3s.h
> > > +++ b/arch/powerpc/kvm/book3s.h
> > > @@ -12,6 +12,7 @@ extern void kvmppc_core_flush_memslot_hv(struct kvm *kvm,
> > >  extern bool kvm_unmap_gfn_range_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > >  extern bool kvm_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > >  extern bool kvm_test_age_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > > +extern bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > >  extern bool kvm_set_spte_gfn_hv(struct kvm *kvm, struct kvm_gfn_range *range);
> > >
> > >  extern int kvmppc_mmu_init_pr(struct kvm_vcpu *vcpu);
> > > diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > > index 3b65b3b11041..0a392e9a100a 100644
> > > --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > > +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> > > @@ -1088,6 +1088,65 @@ bool kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
> > >          return ref;
> > >  }
> > >
> > > +bool kvm_test_clear_young_hv(struct kvm *kvm, struct kvm_gfn_range *range)
> > > +{
> > > +        bool err;
> > > +        gfn_t gfn = range->start;
> > > +
> > > +        rcu_read_lock();
> > > +
> > > +        err = !kvm_is_radix(kvm);
> > > +        if (err)
> > > +                goto unlock;
> > > +
> > > +        /*
> > > +         * Case 1:  This function           kvmppc_switch_mmu_to_hpt()
> > > +         *
> > > +         *          rcu_read_lock()
> > > +         *          Test kvm_is_radix()     kvm->arch.radix = 0
> > > +         *          Use kvm->arch.pgtable   synchronize_rcu()
> > > +         *          rcu_read_unlock()
> > > +         *                                  kvmppc_free_radix()
> > > +         *
> > > +         *
> > > +         * Case 2:  This function           kvmppc_switch_mmu_to_radix()
> > > +         *
> > > +         *                                  kvmppc_init_vm_radix()
> > > +         *                                  smp_wmb()
> > > +         *          Test kvm_is_radix()     kvm->arch.radix = 1
> > > +         *          smp_rmb()
> > > +         *          Use kvm->arch.pgtable
> > > +         */
> > > +        smp_rmb();
> >
> > Comment could stand to expand slightly on what you are solving, in
> > words.
>
> Will do.
>
> > If you use synchronize_rcu() on both sides, you wouldn't need the
> > barrier, right?
>
> Case 2 is about memory ordering, which is orthogonal to case 1 (RCU
> freeing). So we need the r/w barrier regardless.

RCU can take care of memory ordering too, though. If you had
synchronize_rcu() where the smp_wmb() is, then no smp_rmb() would be
needed here.
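That is, something like this on the switch-to-radix side (a rough,
untested sketch of the idea, reusing only the names that appear in the
comment above):

        /* kvmppc_switch_mmu_to_radix(), reworked: */
        kvmppc_init_vm_radix(kvm);      /* populate kvm->arch.pgtable */
        synchronize_rcu();              /* instead of smp_wmb() */
        kvm->arch.radix = 1;

Any reader inside rcu_read_lock() that then observes kvm->arch.radix == 1
is guaranteed by the grace period to also see the initialised page table,
so the reader-side smp_rmb() after testing kvm_is_radix() can go away.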
>
> > > +        while (gfn < range->end) {
> > > +                pte_t *ptep;
> > > +                pte_t old, new;
> > > +                unsigned int shift;
> > > +
> > > +                ptep = find_kvm_secondary_pte_unlocked(kvm, gfn * PAGE_SIZE, &shift);
> > > +                if (!ptep)
> > > +                        goto next;
> > > +
> > > +                VM_WARN_ON_ONCE(!page_count(virt_to_page(ptep)));
> >
> > Not really appropriate at the KVM level. mm enforces this kind of
> > thing (with notifiers).
>
> Will remove this.
>
> > > +
> > > +                old = READ_ONCE(*ptep);
> > > +                if (!pte_present(old) || !pte_young(old))
> > > +                        goto next;
> > > +
> > > +                new = pte_mkold(old);
> > > +
> > > +                if (kvm_should_clear_young(range, gfn))
> > > +                        pte_xchg(ptep, old, new);
> >
> > *Probably* will work. I can't think of a reason why not at the
> > moment anyway :)
>
> My reasoning:
> * It should work if we only change the dedicated A bit, i.e., one not
>   shared for other purposes, because races are safe (the case here).
> * It may not work for x86 EPT without the A bit (excluded in this
>   series), where accesses are trapped by the R/X bits, because races in
>   changing the R/X bits can be unsafe.

(For the benefit of others reading: it works because powerpc's pte_xchg()
is actually a cmpxchg, for some reason we really should fix.) Although it
can fail to clear the bit if the cmpxchg fails.

I think pte_xchg() is only used with the hash MMU in Linux before this
change. We may want to keep it that way and use something like
kvmppc_radix_update_pte() here to clear the bit.

But don't worry too much about the fine details before sorting out the
core changes; I will have a better look after that.

Thanks,
Nick