Date: Wed, 30 Sep 2020 08:38:31 -0700
From: Sean Christopherson
To: Paolo Bonzini
Cc: Ben Gardon,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Cannon Matthews,
    Peter Xu, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson,
    Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong
Subject: Re: [PATCH 04/22] kvm: mmu: Allocate and free TDP MMU roots
Message-ID: <20200930153824.GA32672@linux.intel.com>
References: <20200925212302.3979661-1-bgardon@google.com>
 <20200925212302.3979661-5-bgardon@google.com>
 <20200930060610.GA29659@linux.intel.com>
 <6a5b78f8-0fbe-fbec-8313-f7759e2483b0@redhat.com>
In-Reply-To: <6a5b78f8-0fbe-fbec-8313-f7759e2483b0@redhat.com>

On Wed, Sep 30, 2020 at 08:26:28AM +0200, Paolo Bonzini wrote:
> On 30/09/20 08:06, Sean Christopherson wrote:
> >> +static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
> >> +{
> >> +	struct kvm_mmu_page *root;
> >> +	union kvm_mmu_page_role role;
> >> +
> >> +	role = vcpu->arch.mmu->mmu_role.base;
> >> +	role.level = vcpu->arch.mmu->shadow_root_level;
> >> +	role.direct = true;
> >> +	role.gpte_is_8_bytes = true;
> >> +	role.access = ACC_ALL;
> >> +
> >> +	spin_lock(&vcpu->kvm->mmu_lock);
> >> +
> >> +	/* Search for an already allocated root with the same role. */
> >> +	root = find_tdp_mmu_root_with_role(vcpu->kvm, role);
> >> +	if (root) {
> >> +		get_tdp_mmu_root(vcpu->kvm, root);
> >> +		spin_unlock(&vcpu->kvm->mmu_lock);
> >
> > Rather than manually unlock and return, this can be
> >
> > 	if (root)
> > 		get_tdp_mmu_root();
> >
> > 	spin_unlock()
> >
> > 	if (!root)
> > 		root = alloc_tdp_mmu_root();
> >
> > 	return root;
> >
> > You could also add a helper to do the "get" along with the "find".
> > Not sure if that's worth the code.
>
> All in all I don't think it's any clearer than Ben's code.  At least in
> his case the "if"s clearly point at the double-checked locking pattern.

Actually, why is this even dropping the lock to do the alloc?  The allocs
are coming from the caches, which are designed to be invoked while holding
the spin lock.

Also relevant is that, other than this code, the only user of
find_tdp_mmu_root_with_role() is kvm_tdp_mmu_root_hpa_for_role(), and that
helper is itself unused.  I.e. the "find" can be open coded.

Putting those two together yields this, which IMO is much cleaner.

static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
{
	union kvm_mmu_page_role role;
	struct kvm *kvm = vcpu->kvm;
	struct kvm_mmu_page *root;

	role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);

	spin_lock(&kvm->mmu_lock);

	/* Check for an existing root before allocating a new one. */
	for_each_tdp_mmu_root(kvm, root) {
		if (root->role.word == role.word) {
			get_tdp_mmu_root(root);
			spin_unlock(&kvm->mmu_lock);
			return root;
		}
	}

	root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level);
	root->root_count = 1;

	list_add(&root->link, &kvm->arch.tdp_mmu_roots);

	spin_unlock(&kvm->mmu_lock);

	return root;
}
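[Editor's note: for readers unfamiliar with the pattern Paolo names, below is
a minimal, self-contained userspace sketch of double-checked locking applied
to a find-or-allocate list, which is the shape of Ben's original code: search
under the lock, drop the lock to allocate, then re-take the lock and re-check
before inserting.  It is purely illustrative and not KVM code; a pthread
mutex stands in for kvm->mmu_lock and an int stands in for the page role.
Sean's point above is that KVM's allocations come from pre-filled caches that
are safe to use while holding the spin lock, so the re-check can be dropped
entirely, as in his version.]

/*
 * Hypothetical userspace illustration only -- not KVM code.
 * Build with: cc -o dcl dcl.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct root {
	int role;
	int refcount;
	struct root *next;
};

static struct root *roots;
static pthread_mutex_t roots_lock = PTHREAD_MUTEX_INITIALIZER;

/* Search the list and take a reference.  Caller must hold roots_lock. */
static struct root *find_root(int role)
{
	struct root *r;

	for (r = roots; r; r = r->next) {
		if (r->role == role) {
			r->refcount++;
			return r;
		}
	}
	return NULL;
}

static struct root *get_or_alloc_root(int role)
{
	struct root *r, *new_root;

	/* First check: reuse an existing root if one is already there. */
	pthread_mutex_lock(&roots_lock);
	r = find_root(role);
	pthread_mutex_unlock(&roots_lock);
	if (r)
		return r;

	/* Allocate with the lock dropped... */
	new_root = calloc(1, sizeof(*new_root));
	if (!new_root)
		return NULL;
	new_root->role = role;
	new_root->refcount = 1;

	/* ...so the list must be re-checked before inserting. */
	pthread_mutex_lock(&roots_lock);
	r = find_root(role);
	if (r) {
		/* Lost the race: another thread inserted the same role. */
		free(new_root);
	} else {
		new_root->next = roots;
		roots = new_root;
		r = new_root;
	}
	pthread_mutex_unlock(&roots_lock);

	return r;
}

int main(void)
{
	struct root *a = get_or_alloc_root(3);
	struct root *b = get_or_alloc_root(3);

	printf("same root: %d, refcount: %d\n", a == b, a->refcount);
	return 0;
}

[The cost of the pattern is the allocate-then-discard path on the losing side
of the race; holding the lock across a cheap, non-sleeping allocation, as in
Sean's version above, removes both the re-check and that path.]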