From: Ben Gardon
Date: Mon, 28 Feb 2022 14:05:24 -0800
Subject: Re: [PATCH v2 3/7] KVM: Drop kvm_reload_remote_mmus(), open code request in x86 users
To: Sean Christopherson
Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank, David Hildenbrand,
    Claudio Imbrenda, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    kvm, LKML, Lai Jiangshan
In-Reply-To: <20220225182248.3812651-4-seanjc@google.com>
References: <20220225182248.3812651-1-seanjc@google.com> <20220225182248.3812651-4-seanjc@google.com>

On Fri, Feb 25, 2022 at 10:22 AM Sean Christopherson wrote:
>
> Remove the generic kvm_reload_remote_mmus() and open code its
> functionality into the two x86 callers.  x86 is (obviously) the only
> architecture that uses the hook, and is also the only architecture that
> uses KVM_REQ_MMU_RELOAD in a way that's consistent with the name.  That
> will change in a future patch, as x86's usage when zapping a single
> shadow page doesn't actually _need_ to reload all vCPUs' MMUs; only MMUs
> whose root is being zapped actually need to be reloaded.
>
> s390 also uses KVM_REQ_MMU_RELOAD, but for a slightly different purpose.
>
> Drop the generic code in anticipation of implementing s390 and x86 arch
> specific requests, which will allow dropping KVM_REQ_MMU_RELOAD entirely.
>
> Opportunistically reword the x86 TDP MMU comment to avoid making
> references to functions (and requests!) when possible, and to remove the
> rather ambiguous "this".
>
> No functional change intended.
>
> Cc: Ben Gardon

Reviewed-by: Ben Gardon

> Signed-off-by: Sean Christopherson
> ---
>  arch/x86/kvm/mmu/mmu.c   | 14 +++++++-------
>  include/linux/kvm_host.h |  1 -
>  virt/kvm/kvm_main.c      |  5 -----
>  3 files changed, 7 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b2c1c4eb6007..32c6d4b33d03 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2353,7 +2353,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
>  		 * treats invalid shadow pages as being obsolete.
>  		 */
>  		if (!is_obsolete_sp(kvm, sp))
> -			kvm_reload_remote_mmus(kvm);
> +			kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
>  	}
>
>  	if (sp->lpage_disallowed)
> @@ -5639,11 +5639,11 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  	 */
>  	kvm->arch.mmu_valid_gen = kvm->arch.mmu_valid_gen ? 0 : 1;
>
> -	/* In order to ensure all threads see this change when
> -	 * handling the MMU reload signal, this must happen in the
> -	 * same critical section as kvm_reload_remote_mmus, and
> -	 * before kvm_zap_obsolete_pages as kvm_zap_obsolete_pages
> -	 * could drop the MMU lock and yield.
> +	/*
> +	 * In order to ensure all vCPUs drop their soon-to-be invalid roots,
> +	 * invalidating TDP MMU roots must be done while holding mmu_lock for
> +	 * write and in the same critical section as making the reload request,
> +	 * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield.
>  	 */
>  	if (is_tdp_mmu_enabled(kvm))
>  		kvm_tdp_mmu_invalidate_all_roots(kvm);
> @@ -5656,7 +5656,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
>  	 * Note: we need to do this under the protection of mmu_lock,
>  	 * otherwise, vcpu would purge shadow page but miss tlb flush.
>  	 */
> -	kvm_reload_remote_mmus(kvm);
> +	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
>
>  	kvm_zap_obsolete_pages(kvm);
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index f11039944c08..0aeb47cffd43 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1325,7 +1325,6 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target);
>  void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible);
>
>  void kvm_flush_remote_tlbs(struct kvm *kvm);
> -void kvm_reload_remote_mmus(struct kvm *kvm);
>
>  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
>  int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 83c57bcc6eb6..66bb1631cb89 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -354,11 +354,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
>  EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
>  #endif
>
> -void kvm_reload_remote_mmus(struct kvm *kvm)
> -{
> -	kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
> -}
> -
>  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
>  static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
>  						gfp_t gfp_flags)
> --
> 2.35.1.574.g5d30c73bfb-goog
>
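
For readers following along: the helper being dropped was only a one-line
wrapper around kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD), as the
kvm_main.c hunk above shows, so the patch simply makes that request directly
at the two x86 call sites.  Below is a rough, standalone model of the
request pattern involved: a request bit is set on every vCPU, and each vCPU
services it on its next entry path.  All names in the sketch
(struct vm/vcpu, make_all_vcpus_request(), check_request(), REQ_MMU_RELOAD)
are hypothetical stand-ins for illustration, not the kernel's own symbols,
and the real code also kicks each vCPU out of guest mode, which the model
omits.

/*
 * Standalone model of the "make a request on every vCPU" pattern.
 * All names are hypothetical stand-ins, not kernel symbols.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_VCPUS	4
#define REQ_MMU_RELOAD	0	/* models KVM_REQ_MMU_RELOAD as a bit index */

struct vcpu {
	atomic_uint requests;	/* pending request bits, set by other threads */
};

struct vm {
	struct vcpu vcpus[NR_VCPUS];
};

/*
 * Models kvm_make_all_cpus_request(): set the request bit on every vCPU.
 * The real kernel also kicks each vCPU so the request is noticed promptly.
 */
static void make_all_vcpus_request(struct vm *vm, unsigned int req)
{
	for (int i = 0; i < NR_VCPUS; i++)
		atomic_fetch_or(&vm->vcpus[i].requests, 1u << req);
}

/*
 * Models the per-vCPU check done before re-entering the guest:
 * clear the bit and report whether it was pending.
 */
static bool check_request(struct vcpu *vcpu, unsigned int req)
{
	unsigned int mask = 1u << req;

	return atomic_fetch_and(&vcpu->requests, ~mask) & mask;
}

int main(void)
{
	struct vm vm = { 0 };

	/* What the patch open-codes at the two x86 call sites. */
	make_all_vcpus_request(&vm, REQ_MMU_RELOAD);

	/* Each vCPU services the request on its next entry path. */
	for (int i = 0; i < NR_VCPUS; i++)
		if (check_request(&vm.vcpus[i], REQ_MMU_RELOAD))
			printf("vCPU %d: reloading MMU (dropping its current root)\n", i);

	return 0;
}

Built with a C11 compiler, the model prints one reload line per vCPU,
mirroring how broadcasting the request makes every vCPU drop its current
root, which is exactly the coarse behavior the cover text says a later
patch will narrow to only the affected MMUs.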