Reply-To: Sean Christopherson
Date: Sat, 26 Feb 2022 00:15:24 +0000
In-Reply-To: <20220226001546.360188-1-seanjc@google.com>
Message-Id: <20220226001546.360188-7-seanjc@google.com>
Mime-Version: 1.0
References: <20220226001546.360188-1-seanjc@google.com>
X-Mailer: git-send-email 2.35.1.574.g5d30c73bfb-goog
Subject: [PATCH v3 06/28] KVM: x86/mmu: Require mmu_lock be held for write in unyielding root iter
From: Sean Christopherson
To: Paolo Bonzini, Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, David Hildenbrand, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack, Ben Gardon, Mingwei Zhang
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

Assert that mmu_lock is held for write by users of the yield-unfriendly
TDP iterator.  The nature of a shared walk means that the caller needs to
play nice with other tasks modifying the page tables, which is more or
less the same thing as playing nice with yielding.  Theoretically, KVM
could gain a flow where it could legitimately take mmu_lock for read in
a non-preemptible context, but that's highly unlikely and any such case
should be viewed with a fair amount of scrutiny.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 5994db5d5226..189f21e71c36 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -29,13 +29,16 @@ bool kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 	return true;
 }
 
-static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
+/* Arbitrarily returns true so that this may be used in if statements. */
+static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
 							     bool shared)
 {
 	if (shared)
 		lockdep_assert_held_read(&kvm->mmu_lock);
 	else
 		lockdep_assert_held_write(&kvm->mmu_lock);
+
+	return true;
 }
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
@@ -187,11 +190,17 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 #define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
 	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, ALL_ROOTS)
 
-#define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
-	list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link,	\
-				lockdep_is_held_type(&kvm->mmu_lock, 0) || \
-				lockdep_is_held(&kvm->arch.tdp_mmu_pages_lock)) \
-		if (kvm_mmu_page_as_id(_root) != _as_id) {		\
+/*
+ * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
+ * the implication being that any flow that holds mmu_lock for read is
+ * inherently yield-friendly and should use the yield-safe variant above.
+ * Holding mmu_lock for write obviates the need for RCU protection as the list
+ * is guaranteed to be stable.
+ */
+#define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
+	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)	\
+		if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) &&	\
+		    kvm_mmu_page_as_id(_root) != _as_id) {		\
 		} else
 
 static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
-- 
2.35.1.574.g5d30c73bfb-goog