From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 2/3] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
Date: Thu, 28 Sep 2023 12:29:58 -0400
Message-Id: <20230928162959.1514661-3-pbonzini@redhat.com>
In-Reply-To: <20230928162959.1514661-1-pbonzini@redhat.com>
References: <20230928162959.1514661-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
The "bool shared" argument is more or less unnecessary in the
for_each_*_tdp_mmu_root_yield_safe() macros.  Many users check for
the lock before calling them; all of them either call small functions
that do the check, or end up calling tdp_mmu_set_spte_atomic() and
tdp_mmu_iter_set_spte().  Add a few assertions to make up for the
lost check in for_each_*_tdp_mmu_root_yield_safe(), but even this is
probably overkill and mostly for documentation reasons.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 42 +++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ab0876015be7..b9abfa78808a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -155,23 +155,20 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
  * If shared is set, this function is operating under the MMU lock in read
  * mode.
  */
-#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
+#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _only_valid)\
 	for (_root = tdp_mmu_next_root(_kvm, NULL, _only_valid);	\
 	     _root;							\
 	     _root = tdp_mmu_next_root(_kvm, _root, _only_valid))	\
-		if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) &&	\
-		    kvm_mmu_page_as_id(_root) != _as_id) {		\
+		if (kvm_mmu_page_as_id(_root) != _as_id) {		\
 		} else
 
-#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
-	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
+#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id)	\
+	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, true)
 
-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)			\
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)			\
 	for (_root = tdp_mmu_next_root(_kvm, NULL, false);		\
 	     _root;							\
-	     _root = tdp_mmu_next_root(_kvm, _root, false))		\
-		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {	\
-		} else
+	     _root = tdp_mmu_next_root(_kvm, _root, false))
 
 /*
  * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
@@ -840,7 +837,8 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
 {
 	struct kvm_mmu_page *root;
 
-	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	for_each_tdp_mmu_root_yield_safe(kvm, root)
 		flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
 
 	return flush;
@@ -862,7 +860,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	 * is being destroyed or the userspace VMM has exited.  In both cases,
 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
 	 */
-	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	for_each_tdp_mmu_root_yield_safe(kvm, root)
 		tdp_mmu_zap_root(kvm, root, false);
 }
 
@@ -876,7 +875,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 
 	read_lock(&kvm->mmu_lock);
 
-	for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
+	for_each_tdp_mmu_root_yield_safe(kvm, root) {
 		if (!root->tdp_mmu_scheduled_root_to_zap)
 			continue;
 
@@ -899,7 +898,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 		 * the root must be reachable by mmu_notifiers while it's being
 		 * zapped
 		 */
-		kvm_tdp_mmu_put_root(kvm, root, true);
+		kvm_tdp_mmu_put_root(kvm, root);
 	}
 
 	read_unlock(&kvm->mmu_lock);
@@ -1133,7 +1132,9 @@ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
 {
 	struct kvm_mmu_page *root;
 
-	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false)
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
 		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
 					  range->may_block, flush);
 
@@ -1322,7 +1323,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
-	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
 		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
 			     slot->base_gfn + slot->npages, min_level);
 
@@ -1354,6 +1355,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 {
 	struct kvm_mmu_page *sp;
 
+	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
+
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
 	 * careful about GFP flags.  Use GFP_NOWAIT to avoid blocking on direct
@@ -1504,11 +1507,10 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
 	int r = 0;
 
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
-
-	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, shared) {
+	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id) {
 		r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level, shared);
 		if (r) {
-			kvm_tdp_mmu_put_root(kvm, root, shared);
+			kvm_tdp_mmu_put_root(kvm, root);
 			break;
 		}
 	}
@@ -1568,8 +1570,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
 	bool spte_set = false;
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
-
-	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
 		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
 				slot->base_gfn + slot->npages);
 
@@ -1703,8 +1704,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_mmu_page *root;
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
-
-	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
 		zap_collapsible_spte_range(kvm, root, slot);
 }
-- 
2.39.1
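
For context, the resulting calling convention looks roughly like this:
a caller documents its locking with an explicit lockdep assertion up
front instead of threading "bool shared" through the iterator.  This is
a minimal sketch modeled on the kvm_tdp_mmu_zap_leafs() hunk above; the
function name example_zap_roots() is made up for illustration and does
not exist in the tree.

	static bool example_zap_roots(struct kvm *kvm, gfn_t start,
				      gfn_t end, bool flush)
	{
		struct kvm_mmu_page *root;

		/*
		 * Replaces the check the iterator used to perform via
		 * its removed "bool shared" argument.
		 */
		lockdep_assert_held_write(&kvm->mmu_lock);

		/* Yield-safe walk over all roots; no lock-mode flag. */
		for_each_tdp_mmu_root_yield_safe(kvm, root)
			flush = tdp_mmu_zap_leafs(kvm, root, start, end,
						  true, flush);

		return flush;
	}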