Message-ID: <6bc63f82495501f9664b7d19bd8c7ba64329d37b.camel@redhat.com>
Subject: Re: [PATCH 2/3] KVM: x86/mmu: remove unnecessary "bool shared" argument from iterators
From: Maxim Levitsky
To: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Date: Thu, 28 Sep 2023 19:55:28 +0300
In-Reply-To: <20230928162959.1514661-3-pbonzini@redhat.com>
References: <20230928162959.1514661-1-pbonzini@redhat.com>
	 <20230928162959.1514661-3-pbonzini@redhat.com>

On Thu, 2023-09-28 at 12:29 -0400, Paolo Bonzini wrote:
> The "bool shared" argument is more or less unnecessary in the
> for_each_*_tdp_mmu_root_yield_safe() macros.  Many users check for
> the lock before calling it; all of them either call small functions
> that do the check, or end up calling tdp_mmu_set_spte_atomic() and
> tdp_mmu_iter_set_spte().  Add a few assertions to make up for the
> lost check in for_each_*_tdp_mmu_root_yield_safe(), but even this
> is probably overkill and mostly for documentation reasons.

Why not keep kvm_lockdep_assert_mmu_lock_held() but drop the "shared"
argument from it, and have it use lockdep_assert_held() instead?  If I
am not mistaken, lockdep_assert_held() asserts that the lock is held
either for read or for write.  (A sketch of this alternative follows
the quoted patch below.)

Best regards,
	Maxim Levitsky

>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 42 +++++++++++++++++++-------------------
>  1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index ab0876015be7..b9abfa78808a 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -155,23 +155,20 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>   * If shared is set, this function is operating under the MMU lock in read
>   * mode.
>   */
> -#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
> +#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _only_valid)\
>  	for (_root = tdp_mmu_next_root(_kvm, NULL, _only_valid);	\
>  	     _root;							\
>  	     _root = tdp_mmu_next_root(_kvm, _root, _only_valid))	\
> -		if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) &&	\
> -		    kvm_mmu_page_as_id(_root) != _as_id) {		\
> +		if (kvm_mmu_page_as_id(_root) != _as_id) {		\
>  		} else
>
> -#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
> -	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
> +#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id)	\
> +	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, true)
>
> -#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)		\
> +#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)			\
>  	for (_root = tdp_mmu_next_root(_kvm, NULL, false);		\
>  	     _root;							\
>  	     _root = tdp_mmu_next_root(_kvm, _root, false))
> -		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {	\
> -		} else
>
>  /*
>   * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
> @@ -840,7 +837,8 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
>  {
>  	struct kvm_mmu_page *root;
>
> -	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
> +	lockdep_assert_held_write(&kvm->mmu_lock);
> +	for_each_tdp_mmu_root_yield_safe(kvm, root)
>  		flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
>
>  	return flush;
> @@ -862,7 +860,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
>  	 * is being destroyed or the userspace VMM has exited.  In both cases,
>  	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
>  	 */
> -	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
> +	lockdep_assert_held_write(&kvm->mmu_lock);
> +	for_each_tdp_mmu_root_yield_safe(kvm, root)
>  		tdp_mmu_zap_root(kvm, root, false);
>  }
>
> @@ -876,7 +875,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
>
>  	read_lock(&kvm->mmu_lock);
>
> -	for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
> +	for_each_tdp_mmu_root_yield_safe(kvm, root) {
>  		if (!root->tdp_mmu_scheduled_root_to_zap)
>  			continue;
>
> @@ -899,7 +898,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
>  		 * the root must be reachable by mmu_notifiers while it's being
>  		 * zapped
>  		 */
> -		kvm_tdp_mmu_put_root(kvm, root, true);
> +		kvm_tdp_mmu_put_root(kvm, root);
>  	}
>
>  	read_unlock(&kvm->mmu_lock);
> @@ -1133,7 +1132,9 @@ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
>  {
>  	struct kvm_mmu_page *root;
>
> -	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false)
> +	lockdep_assert_held_write(&kvm->mmu_lock);
> +
> +	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
>  		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
>  					  range->may_block, flush);
>
> @@ -1322,7 +1323,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
>
> -	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
> +	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
>  		spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
>  			     slot->base_gfn + slot->npages, min_level);
>
> @@ -1354,6 +1355,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
>  {
>  	struct kvm_mmu_page *sp;
>
> +	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
> +
>  	/*
>  	 * Since we are allocating while under the MMU lock we have to be
>  	 * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
> @@ -1504,11 +1507,10 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
>  	int r = 0;
>
>  	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
> -
> -	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, shared) {
> +	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id) {
>  		r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level, shared);
>  		if (r) {
> -			kvm_tdp_mmu_put_root(kvm, root, shared);
> +			kvm_tdp_mmu_put_root(kvm, root);
>  			break;
>  		}
>  	}
> @@ -1568,8 +1570,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
>  	bool spte_set = false;
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
> -
> -	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
> +	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
>  		spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
>  				  slot->base_gfn + slot->npages);
>
> @@ -1703,8 +1704,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  	struct kvm_mmu_page *root;
>
>  	lockdep_assert_held_read(&kvm->mmu_lock);
> -
> -	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
> +	for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
>  		zap_collapsible_spte_range(kvm, root, slot);
>  }
>
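To make the suggestion above concrete, here is a minimal sketch of the
helper with the "shared" argument dropped. This is hypothetical code,
not part of the posted patch; it assumes the helper keeps its current
name and bool return value (so the iterator macros can keep calling it
inside an "if"), and that mmu_lock is the rwlock used by the TDP MMU:

	/*
	 * Sketch only: lockdep_assert_held() is satisfied whether the
	 * lock is held for read or for write, so a single assertion
	 * covers both the shared and the exclusive iterator flavors.
	 */
	static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm)
	{
		lockdep_assert_held(&kvm->mmu_lock);
		return true;
	}

With that, the macros could keep a single call such as
"if (kvm_lockdep_assert_mmu_lock_held(_kvm) && ...)" instead of adding
lockdep_assert_held_write() at each call site.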