From: Fuad Tabba
Date: Thu, 2 Nov 2023 13:55:03 +0000
Subject: Re: [PATCH v13 10/35] KVM: Add a dedicated mmu_notifier flag for reclaiming freed memory
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
    Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Viro,
    Christian Brauner, "Matthew Wilcox (Oracle)", Andrew Morton,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun,
    Chao Peng, Jarkko Sakkinen, Anish Moorthy, David Matlack, Yu Zhang,
    Isaku Yamahata, Mickaël Salaün, Vlastimil Babka, Vishal Annapurve,
    Ackerley Tng, Maciej Szmigiero, David Hildenbrand, Quentin Perret,
    Michael Roth, Wang, Liam Merwick, Isaku Yamahata, "Kirill A. Shutemov"
References: <20231027182217.3615211-1-seanjc@google.com> <20231027182217.3615211-11-seanjc@google.com>
In-Reply-To: <20231027182217.3615211-11-seanjc@google.com>

On Fri, Oct 27, 2023 at 7:22 PM Sean Christopherson wrote:
>
> Handle AMD SEV's kvm_arch_guest_memory_reclaimed() hook by having
> __kvm_handle_hva_range() return whether or not an overlapping memslot
> was found, i.e. mmu_lock was acquired.  Using the .on_unlock() hook
> works, but kvm_arch_guest_memory_reclaimed() needs to run after dropping
> mmu_lock, which makes .on_lock() and .on_unlock() asymmetrical.
>
> Use a small struct to return the tuple of the notifier-specific return,
> plus whether or not overlap was found.  Because the iteration helpers are
> __always_inlined, practically speaking, the struct will never actually be
> returned from a function call (not to mention the size of the struct will
> be two bytes in practice).
>
> Signed-off-by: Sean Christopherson
> ---

Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba

Cheers,
/fuad

>  virt/kvm/kvm_main.c | 53 +++++++++++++++++++++++++++++++--------------
>  1 file changed, 37 insertions(+), 16 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 3f5b7c2c5327..2bc04c8ae1f4 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -561,6 +561,19 @@ struct kvm_mmu_notifier_range {
>  	bool may_block;
>  };
>
> +/*
> + * The inner-most helper returns a tuple containing the return value from the
> + * arch- and action-specific handler, plus a flag indicating whether or not at
> + * least one memslot was found, i.e. if the handler found guest memory.
> + *
> + * Note, most notifiers are averse to booleans, so even though KVM tracks the
> + * return from arch code as a bool, outer helpers will cast it to an int. :-(
> + */
> +typedef struct kvm_mmu_notifier_return {
> +	bool ret;
> +	bool found_memslot;
> +} kvm_mn_ret_t;
> +
>  /*
>   * Use a dedicated stub instead of NULL to indicate that there is no callback
>   * function/handler.  The compiler technically can't guarantee that a real
> @@ -582,22 +595,25 @@ static const union kvm_mmu_notifier_arg KVM_MMU_NOTIFIER_NO_ARG;
>  	     node;							\
>  	     node = interval_tree_iter_next(node, start, last))	\
>
> -static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
> -						  const struct kvm_mmu_notifier_range *range)
> +static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
> +							   const struct kvm_mmu_notifier_range *range)
>  {
> -	bool ret = false, locked = false;
> +	struct kvm_mmu_notifier_return r = {
> +		.ret = false,
> +		.found_memslot = false,
> +	};
>  	struct kvm_gfn_range gfn_range;
>  	struct kvm_memory_slot *slot;
>  	struct kvm_memslots *slots;
>  	int i, idx;
>
>  	if (WARN_ON_ONCE(range->end <= range->start))
> -		return 0;
> +		return r;
>
>  	/* A null handler is allowed if and only if on_lock() is provided. */
>  	if (WARN_ON_ONCE(IS_KVM_NULL_FN(range->on_lock) &&
>  			 IS_KVM_NULL_FN(range->handler)))
> -		return 0;
> +		return r;
>
>  	idx = srcu_read_lock(&kvm->srcu);
>
> @@ -631,8 +647,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
>  			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
>  			gfn_range.slot = slot;
>
> -			if (!locked) {
> -				locked = true;
> +			if (!r.found_memslot) {
> +				r.found_memslot = true;
>  				KVM_MMU_LOCK(kvm);
>  				if (!IS_KVM_NULL_FN(range->on_lock))
>  					range->on_lock(kvm);
> @@ -640,14 +656,14 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
>  				if (IS_KVM_NULL_FN(range->handler))
>  					break;
>  			}
> -			ret |= range->handler(kvm, &gfn_range);
> +			r.ret |= range->handler(kvm, &gfn_range);
>  		}
>  	}
>
> -	if (range->flush_on_ret && ret)
> +	if (range->flush_on_ret && r.ret)
>  		kvm_flush_remote_tlbs(kvm);
>
> -	if (locked) {
> +	if (r.found_memslot) {
>  		KVM_MMU_UNLOCK(kvm);
>  		if (!IS_KVM_NULL_FN(range->on_unlock))
>  			range->on_unlock(kvm);
> @@ -655,8 +671,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
>
>  	srcu_read_unlock(&kvm->srcu, idx);
>
> -	/* The notifiers are averse to booleans. :-( */
> -	return (int)ret;
> +	return r;
>  }
>
>  static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
> @@ -677,7 +692,7 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
>  		.may_block	= false,
>  	};
>
> -	return __kvm_handle_hva_range(kvm, &range);
> +	return __kvm_handle_hva_range(kvm, &range).ret;
>  }
>
>  static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
> @@ -696,7 +711,7 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn
>  		.may_block	= false,
>  	};
>
> -	return __kvm_handle_hva_range(kvm, &range);
> +	return __kvm_handle_hva_range(kvm, &range).ret;
>  }
>
>  static bool kvm_change_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> @@ -798,7 +813,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  		.end		= range->end,
>  		.handler	= kvm_mmu_unmap_gfn_range,
>  		.on_lock	= kvm_mmu_invalidate_begin,
> -		.on_unlock	= kvm_arch_guest_memory_reclaimed,
> +		.on_unlock	= (void *)kvm_null_fn,
>  		.flush_on_ret	= true,
>  		.may_block	= mmu_notifier_range_blockable(range),
>  	};
> @@ -830,7 +845,13 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
>  					  hva_range.may_block);
>
> -	__kvm_handle_hva_range(kvm, &hva_range);
> +	/*
> +	 * If one or more memslots were found and thus zapped, notify arch code
> +	 * that guest memory has been reclaimed.  This needs to be done *after*
> +	 * dropping mmu_lock, as x86's reclaim path is slooooow.
> +	 */
> +	if (__kvm_handle_hva_range(kvm, &hva_range).found_memslot)
> +		kvm_arch_guest_memory_reclaimed(kvm);
>
>  	return 0;
>  }
> --
> 2.42.0.820.g83a721a137-goog
>