Date: Mon, 6 Nov 2023 15:11:25 +0100
From: Danilo Krummrich <dakr@redhat.com>
To: Christian König
Cc: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com,
    thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com,
    donald.robson@imgtec.com, boris.brezillon@collabora.com,
    faith@gfxstrand.net, dri-devel@lists.freedesktop.org,
    nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH drm-misc-next v8 09/12] drm/gpuvm: reference count drm_gpuvm structures
References: <20231101233113.8059-1-dakr@redhat.com>
 <20231101233113.8059-10-dakr@redhat.com>
 <51dea5f3-a18b-4797-b4fa-87da7db4624a@amd.com>
 <8e87d962-c80c-40d9-94d7-58b6cd9dd794@amd.com>
In-Reply-To: <8e87d962-c80c-40d9-94d7-58b6cd9dd794@amd.com>

On Mon, Nov 06, 2023 at 02:05:13PM +0100, Christian König wrote:
> On 06.11.23 at 13:16, Danilo Krummrich wrote:
> > [SNIP]
> > This reference count just prevents that the VM is freed as long as other
> > resources are attached to it that carry a VM pointer, such as mappings and
> > VM_BOs. The motivation for that is VM_BOs. For mappings it's indeed a bit
> > paranoid, but it doesn't hurt either and keeps it consistent.
>
> Ah! Yeah, we have similar semantics in amdgpu as well.
>
> But we keep the reference to the root GEM object and not the VM.
>
> Ok, that makes much more sense than keeping one reference for each mapping.
>
> > > Because of this the mapping should *never* have a reference to the VM, but
> > > rather the VM destroys all mappings when it is destroyed itself.
> > >
> > > > Hence, if the VM is still alive at a point where you don't expect it to
> > > > be, then it's simply a driver bug.
> > >
> > > Driver bugs are just what I try to prevent here. When individual mappings
> > > keep the VM structure alive, then drivers are responsible for cleaning
> > > them up; if the VM cleans up after itself, then we don't need to worry
> > > about it in the driver.
> >
> > Drivers are *always* responsible for that. This has nothing to do with whether
> > the VM is reference counted or not. GPUVM can't clean up mappings after itself.
>
> Why not?

I feel like we're talking past each other here, at least to some extent.
However, I can't yet see where exactly the misunderstanding resides.

> At least in amdgpu we have it exactly like that. E.g. the higher level can
> clean up the VM_BO structure at any time, even when there are mappings.

What do you mean with "clean up the VM_BO structure" exactly?

The VM_BO structure keeps track of all the mappings mapped in the VM_BO's VM
being backed by the VM_BO's GEM object. And the GEM object keeps a list of
the corresponding VM_BOs. Hence, as long as there are mappings that this
VM_BO keeps track of, this VM_BO should stay alive.
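To make those relationships concrete, a rough type sketch with illustrative
names (the real definitions are struct drm_gpuvm, struct drm_gpuvm_bo and
struct drm_gpuva in include/drm/drm_gpuvm.h; the list_head stand-in is a
simplification):

    /* Illustrative stand-ins for the tracking described above. */
    struct list_head {
            struct list_head *next, *prev;  /* minimal kernel-list stand-in */
    };

    struct vm {
            struct list_head gpuva_list;    /* all mappings in this VM */
    };

    struct gem_object {
            struct list_head vm_bo_list;    /* all VM_BOs backed by this GEM object */
    };

    struct vm_bo {                          /* connects one VM and one GEM object */
            struct vm *vm;
            struct gem_object *obj;
            struct list_head list_entry;    /* entry in obj->vm_bo_list */
            struct list_head mappings;      /* all mappings of obj within vm */
    };

    struct mapping {                        /* cf. struct drm_gpuva */
            struct vm *vm;
            struct vm_bo *vm_bo;
            struct list_head vm_bo_entry;   /* entry in vm_bo->mappings */
    };

The invariant argued for above falls out of the pointers: while
vm_bo->mappings is non-empty the VM_BO must stay alive, and every object
holding a vm pointer must in turn keep the VM alive, which is exactly what
the reference count introduced by the patch below enforces.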
> The VM then keeps track which areas still need to be invalidated
> in the physical representation of the page tables.

And the VM does that through its tree of mappings (struct drm_gpuva). Hence,
if the VM would just remove those structures on cleanup by itself, you'd lose
the ability to clean up the page tables. Unless you track this separately,
which would make the whole tracking of GPUVM itself kinda pointless.

> I would expect that the generalized GPU VM handling would need something
> similar. If we leave that to the driver, then each driver would have to
> implement that stuff on its own again.

Similar to what? What exactly do you think can be generalized here?

> > If the driver left mappings behind, GPUVM would just leak them without a
> > reference count. It doesn't know about the driver's surrounding structures,
> > nor does it know about attached resources such as PT(E)s.
>
> What are we talking about with the word "mapping"? The VM_BO structure? Or
> each individual mapping?

An individual mapping, represented by struct drm_gpuva.

> E.g. what we need to prevent is that the VM structure (or the root GEM
> object) is released while VM_BOs are still around. That's what I totally
> agree on.
>
> But each individual mapping is a different story. Userspace can create so
> many of them that we probably could even overrun a 32bit counter quite
> easily.

REFCOUNT_MAX is specified as 0x7fff_ffff. I agree there can be a lot of
mappings, but (including the VM_BO references) more than 2,147,483,647 per VM?
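For scale, a throwaway calculation (the 128-byte struct size below is an
assumption for illustration, not the real sizeof(struct drm_gpuva)):
saturating a 31-bit reference count with minimal 4 KiB mappings would take on
the order of 8 TiB of mapped GPU VA and hundreds of GiB of kernel memory for
the mapping structs alone. Note also that kref is built on refcount_t, which
saturates and warns instead of wrapping on overflow, so even an overrun would
degrade into a leak rather than a use-after-free.

    /* Back-of-the-envelope check on the 32-bit counter concern above. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            const uint64_t refcount_max = 0x7fffffff; /* REFCOUNT_MAX */
            const uint64_t page_sz      = 4096;       /* smallest possible mapping */
            const uint64_t gpuva_sz     = 128;        /* assumed per-mapping struct size */

            printf("references available per VM:  %llu\n",
                   (unsigned long long)refcount_max);
            printf("GPU VA to consume them all:   %llu GiB\n",
                   (unsigned long long)(refcount_max * page_sz >> 30));
            printf("memory for mapping structs:   ~%llu GiB\n",
                   (unsigned long long)(refcount_max * gpuva_sz >> 30));
            return 0;
    }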
> > > When the mapping is destroyed with the VM, drivers can't mess this common
> > > operation up. That's why this is more defensive.
> > >
> > > What is a possible requirement is that external code needs to keep
> > > references to the VM, but *never* the VM to itself through the mappings. I
> > > would consider that a major bug in the component.
> >
> > Obviously, you just (want to) apply different semantics to this reference
> > count. It is meant to reflect that the VM structure can be freed, not that
> > the VM can be cleaned up. If you want the latter, you can have a
> > driver-specific reference count for that in the exact same way as it was
> > before this patch.
>
> Yeah, it becomes clear that you try to solve a different problem than I
> expected.
>
> Regards,
> Christian.
>
> > > Regards,
> > > Christian.
> > >
> > > > > Reference counting is nice when you don't know who else is referring
> > > > > to your VM, but the cost is that you also don't know when the object
> > > > > will eventually be destroyed.
> > > > >
> > > > > I can trivially work around this by saying that the generic GPUVM
> > > > > object has a different lifetime than the amdgpu specific object, but
> > > > > that opens up doors for use after free again.
> > > >
> > > > If your driver never touches the VM's reference count and exits the VM
> > > > with a clean state (no mappings and no VM_BOs left), effectively, this
> > > > is the same as having no reference count.
> > > >
> > > > In the very worst case you could argue that we trade a potential UAF
> > > > *and* memory leak (no reference count) for *only* a memory leak (with
> > > > reference count), which to me seems reasonable.
> > > >
> > > > > Regards,
> > > > > Christian.
> > > > >
> > > > > > Thanks,
> > > > > > Christian.
> > > > > >
> > > > > > [1] https://lore.kernel.org/dri-devel/6fa058a4-20d3-44b9-af58-755cfb375d75@redhat.com/
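The UAF-versus-leak trade-off quoted above, as a self-contained sketch
(illustrative names; a plain counter stands in for struct kref):

    /* A buggy driver forgets to destroy one mapping before dropping the VM.
     * Without a reference count the VM would already be freed below and the
     * mapping would dangle (UAF + leak); with one, the VM merely leaks. */
    #include <stdio.h>
    #include <stdlib.h>

    struct vm { int refs; };
    struct mapping { struct vm *vm; };

    static void vm_put(struct vm *vm)
    {
            if (--vm->refs == 0) {
                    printf("last reference dropped, freeing VM\n");
                    free(vm);
            }
    }

    int main(void)
    {
            struct vm *vm = calloc(1, sizeof(*vm));
            struct mapping *leaked = calloc(1, sizeof(*leaked));

            vm->refs = 1;           /* creator's reference */
            leaked->vm = vm;
            vm->refs++;             /* the mapping pins the VM */

            /* Driver bug: 'leaked' is never destroyed. */
            vm_put(vm);             /* creator's reference gone ... */

            /* ... yet this access is still safe: the forgotten mapping kept
             * the VM alive, so two objects leak instead of dereferencing
             * freed memory. */
            printf("VM still alive with %d reference(s)\n", leaked->vm->refs);
            return 0;
    }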
> > > > > > > > Signed-off-by: Danilo Krummrich <dakr@redhat.com>
> > > > > > > > ---
> > > > > > > >  drivers/gpu/drm/drm_gpuvm.c            | 44 +++++++++++++++++++-------
> > > > > > > >  drivers/gpu/drm/nouveau/nouveau_uvmm.c | 20 +++++++++---
> > > > > > > >  include/drm/drm_gpuvm.h                | 31 +++++++++++++++++-
> > > > > > > >  3 files changed, 78 insertions(+), 17 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> > > > > > > > index 53e2c406fb04..6a88eafc5229 100644
> > > > > > > > --- a/drivers/gpu/drm/drm_gpuvm.c
> > > > > > > > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > > > > > > > @@ -746,6 +746,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> > > > > > > >  	gpuvm->rb.tree = RB_ROOT_CACHED;
> > > > > > > >  	INIT_LIST_HEAD(&gpuvm->rb.list);
> > > > > > > >
> > > > > > > > +	kref_init(&gpuvm->kref);
> > > > > > > > +
> > > > > > > >  	gpuvm->name = name ? name : "unknown";
> > > > > > > >  	gpuvm->flags = flags;
> > > > > > > >  	gpuvm->ops = ops;
> > > > > > > > @@ -770,15 +772,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> > > > > > > >  }
> > > > > > > >  EXPORT_SYMBOL_GPL(drm_gpuvm_init);
> > > > > > > >
> > > > > > > > -/**
> > > > > > > > - * drm_gpuvm_destroy() - cleanup a &drm_gpuvm
> > > > > > > > - * @gpuvm: pointer to the &drm_gpuvm to clean up
> > > > > > > > - *
> > > > > > > > - * Note that it is a bug to call this function on a manager that still
> > > > > > > > - * holds GPU VA mappings.
> > > > > > > > - */
> > > > > > > > -void
> > > > > > > > -drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> > > > > > > > +static void
> > > > > > > > +drm_gpuvm_fini(struct drm_gpuvm *gpuvm)
> > > > > > > >  {
> > > > > > > >  	gpuvm->name = NULL;
> > > > > > > >
> > > > > > > > @@ -790,7 +785,33 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> > > > > > > >
> > > > > > > >  	drm_gem_object_put(gpuvm->r_obj);
> > > > > > > >  }
> > > > > > > > -EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
> > > > > > > > +
> > > > > > > > +static void
> > > > > > > > +drm_gpuvm_free(struct kref *kref)
> > > > > > > > +{
> > > > > > > > +	struct drm_gpuvm *gpuvm = container_of(kref, struct drm_gpuvm, kref);
> > > > > > > > +
> > > > > > > > +	if (drm_WARN_ON(gpuvm->drm, !gpuvm->ops->vm_free))
> > > > > > > > +		return;
> > > > > > > > +
> > > > > > > > +	drm_gpuvm_fini(gpuvm);
> > > > > > > > +
> > > > > > > > +	gpuvm->ops->vm_free(gpuvm);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +/**
> > > > > > > > + * drm_gpuvm_put() - drop a struct drm_gpuvm reference
> > > > > > > > + * @gpuvm: the &drm_gpuvm to release the reference of
> > > > > > > > + *
> > > > > > > > + * This releases a reference to @gpuvm.
> > > > > > > > + */
> > > > > > > > +void
> > > > > > > > +drm_gpuvm_put(struct drm_gpuvm *gpuvm)
> > > > > > > > +{
> > > > > > > > +	if (gpuvm)
> > > > > > > > +		kref_put(&gpuvm->kref, drm_gpuvm_free);
> > > > > > > > +}
> > > > > > > > +EXPORT_SYMBOL_GPL(drm_gpuvm_put);
> > > > > > > >
> > > > > > > >  static int
> > > > > > > >  __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > > > > > > > @@ -843,7 +864,7 @@ drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > > > > > > >  	if (unlikely(!drm_gpuvm_range_valid(gpuvm, addr, range)))
> > > > > > > >  		return -EINVAL;
> > > > > > > >
> > > > > > > > -	return __drm_gpuva_insert(gpuvm, va);
> > > > > > > > +	return __drm_gpuva_insert(drm_gpuvm_get(gpuvm), va);
> > > > > > > >  }
> > > > > > > >  EXPORT_SYMBOL_GPL(drm_gpuva_insert);
> > > > > > > >
> > > > > > > > @@ -876,6 +897,7 @@ drm_gpuva_remove(struct drm_gpuva *va)
> > > > > > > >  	}
> > > > > > > >
> > > > > > > >  	__drm_gpuva_remove(va);
> > > > > > > > +	drm_gpuvm_put(va->vm);
> > > > > > > >  }
> > > > > > > >  EXPORT_SYMBOL_GPL(drm_gpuva_remove);
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > > > > > > > index 54be12c1272f..cb2f06565c46 100644
> > > > > > > > --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > > > > > > > +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > > > > > > > @@ -1780,6 +1780,18 @@ nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo)
> > > > > > > >  	}
> > > > > > > >  }
> > > > > > > >
> > > > > > > > +static void
> > > > > > > > +nouveau_uvmm_free(struct drm_gpuvm *gpuvm)
> > > > > > > > +{
> > > > > > > > +	struct nouveau_uvmm *uvmm = uvmm_from_gpuvm(gpuvm);
> > > > > > > > +
> > > > > > > > +	kfree(uvmm);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +static const struct drm_gpuvm_ops gpuvm_ops = {
> > > > > > > > +	.vm_free = nouveau_uvmm_free,
> > > > > > > > +};
> > > > > > > > +
> > > > > > > >  int
> > > > > > > >  nouveau_uvmm_ioctl_vm_init(struct drm_device *dev,
> > > > > > > >  			   void *data,
> > > > > > > > @@ -1830,7 +1842,7 @@ nouveau_uvmm_ioctl_vm_init(struct drm_device *dev,
> > > > > > > >  			   NOUVEAU_VA_SPACE_END,
> > > > > > > >  			   init->kernel_managed_addr,
> > > > > > > >  			   init->kernel_managed_size,
> > > > > > > > -			   NULL);
> > > > > > > > +			   &gpuvm_ops);
> > > > > > > >  	/* GPUVM takes care from here on. */
> > > > > > > >  	drm_gem_object_put(r_obj);
> > > > > > > >
> > > > > > > > @@ -1849,8 +1861,7 @@ nouveau_uvmm_ioctl_vm_init(struct drm_device *dev,
> > > > > > > >  	return 0;
> > > > > > > >
> > > > > > > >  out_gpuvm_fini:
> > > > > > > > -	drm_gpuvm_destroy(&uvmm->base);
> > > > > > > > -	kfree(uvmm);
> > > > > > > > +	drm_gpuvm_put(&uvmm->base);
> > > > > > > >  out_unlock:
> > > > > > > >  	mutex_unlock(&cli->mutex);
> > > > > > > >  	return ret;
> > > > > > > > @@ -1902,7 +1913,6 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
> > > > > > > >  	mutex_lock(&cli->mutex);
> > > > > > > >  	nouveau_vmm_fini(&uvmm->vmm);
> > > > > > > >
> > > > > > > > -	drm_gpuvm_destroy(&uvmm->base);
> > > > > > > > -	kfree(uvmm);
> > > > > > > > +	drm_gpuvm_put(&uvmm->base);
> > > > > > > >  	mutex_unlock(&cli->mutex);
> > > > > > > >  }
> > > > > > > > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> > > > > > > > index 0c2e24155a93..4e6e1fd3485a 100644
> > > > > > > > --- a/include/drm/drm_gpuvm.h
> > > > > > > > +++ b/include/drm/drm_gpuvm.h
> > > > > > > > @@ -247,6 +247,11 @@ struct drm_gpuvm {
> > > > > > > >  		struct list_head list;
> > > > > > > >  	} rb;
> > > > > > > >
> > > > > > > > +	/**
> > > > > > > > +	 * @kref: reference count of this object
> > > > > > > > +	 */
> > > > > > > > +	struct kref kref;
> > > > > > > > +
> > > > > > > >  	/**
> > > > > > > >  	 * @kernel_alloc_node:
> > > > > > > >  	 *
> > > > > > > > @@ -273,7 +278,23 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> > > > > > > >  		    u64 start_offset, u64 range,
> > > > > > > >  		    u64 reserve_offset, u64 reserve_range,
> > > > > > > >  		    const struct drm_gpuvm_ops *ops);
> > > > > > > > -void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
> > > > > > > > +
> > > > > > > > +/**
> > > > > > > > + * drm_gpuvm_get() - acquire a struct drm_gpuvm reference
> > > > > > > > + * @gpuvm: the &drm_gpuvm to acquire the reference of
> > > > > > > > + *
> > > > > > > > + * This function acquires an additional reference to @gpuvm. It is illegal to
> > > > > > > > + * call this without already holding a reference. No locks required.
> > > > > > > > + */
> > > > > > > > +static inline struct drm_gpuvm *
> > > > > > > > +drm_gpuvm_get(struct drm_gpuvm *gpuvm)
> > > > > > > > +{
> > > > > > > > +	kref_get(&gpuvm->kref);
> > > > > > > > +
> > > > > > > > +	return gpuvm;
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +void drm_gpuvm_put(struct drm_gpuvm *gpuvm);
> > > > > > > >
> > > > > > > >  bool drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
> > > > > > > >  bool drm_gpuvm_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
> > > > > > > >
> > > > > > > > @@ -673,6 +694,14 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> > > > > > > >   * operations to drivers.
> > > > > > > >   */
> > > > > > > >  struct drm_gpuvm_ops {
> > > > > > > > +	/**
> > > > > > > > +	 * @vm_free: called when the last reference of a struct drm_gpuvm is
> > > > > > > > +	 * dropped
> > > > > > > > +	 *
> > > > > > > > +	 * This callback is mandatory.
> > > > > > > > +	 */
> > > > > > > > +	void (*vm_free)(struct drm_gpuvm *gpuvm);
> > > > > > > > +
> > > > > > > >  	/**
> > > > > > > >  	 * @op_alloc: called when the &drm_gpuvm allocates
> > > > > > > >  	 * a struct drm_gpuva_op
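For reference, a compact sketch of what the new contract asks of a driver,
mirroring the nouveau hunks above (the "mydrv" names are hypothetical, not a
real driver; &mydrv_gpuvm_ops would be passed to drm_gpuvm_init(), as nouveau
does above):

    /* The driver embeds struct drm_gpuvm, provides the now-mandatory vm_free
     * callback, and only ever drops its reference instead of destroying the
     * VM directly. */
    #include <drm/drm_gpuvm.h>
    #include <linux/slab.h>

    struct mydrv_vm {
            struct drm_gpuvm base;      /* embedded GPUVM */
            /* ... driver-private state ... */
    };

    /* Invoked by GPUVM once the last reference is dropped. */
    static void mydrv_vm_free(struct drm_gpuvm *gpuvm)
    {
            struct mydrv_vm *vm = container_of(gpuvm, struct mydrv_vm, base);

            kfree(vm);
    }

    static const struct drm_gpuvm_ops mydrv_gpuvm_ops = {
            .vm_free = mydrv_vm_free,
    };

    static void mydrv_vm_fini(struct mydrv_vm *vm)
    {
            /* Tear down driver state and remove all mappings first; then drop
             * the driver's reference. If a mapping or VM_BO still holds a
             * reference (a driver bug), the VM leaks rather than being freed
             * under the remaining holder. */
            drm_gpuvm_put(&vm->base);
    }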