From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Zimmermann, Daniel Vetter, Sasha Levin
Subject: [PATCH 5.15 256/779] drm/shmem-helper: Pass GEM shmem object in public interfaces
Date: Mon, 15 Aug 2022 19:58:20 +0200
Message-Id: <20220815180348.284447299@linuxfoundation.org>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220815180337.130757997@linuxfoundation.org>
References: <20220815180337.130757997@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Thomas Zimmermann

[ Upstream commit a193f3b4e050e35c506a34d0870c838d8e0b0449 ]

Change all GEM SHMEM object functions that receive a GEM object of type
struct drm_gem_object to expect an object of type
struct drm_gem_shmem_object instead. This change reduces the number of
upcasts from struct drm_gem_object by moving them into callers. The C
compiler can now verify that the GEM SHMEM functions are called with the
correct type.

For consistency, the patch also renames drm_gem_shmem_free_object to
drm_gem_shmem_free. It further updates documentation for a number of
functions.
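
For illustration only, a minimal sketch (not part of this patch; the
my_driver_free_object callback name is hypothetical) of what the conversion
looks like from a driver's point of view: the &drm_gem_object_funcs callback
now performs the upcast itself before calling into the shmem helper, so a
wrong argument type becomes a compile error instead of a silent bad cast:

  #include <drm/drm_gem_shmem_helper.h>

  /* Before this change: the helper took the base GEM object and upcast
   * internally.
   */
  static void my_driver_free_object(struct drm_gem_object *obj)
  {
          drm_gem_shmem_free_object(obj);
  }

  /* After this change: the caller upcasts with to_drm_gem_shmem_obj(), so the
   * compiler checks that drm_gem_shmem_free() really receives a shmem object.
   */
  static void my_driver_free_object(struct drm_gem_object *obj)
  {
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          drm_gem_shmem_free(shmem);
  }

Drivers that need no extra cleanup of their own can instead point
&drm_gem_object_funcs.free at the drm_gem_shmem_object_free() wrapper, which
performs the same conversion (see the include/drm/drm_gem_shmem_helper.h
hunks below).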

v3:
	* fix docs for drm_gem_shmem_object_free()
v2:
	* mention _object_ callbacks in docs (Daniel)

Signed-off-by: Thomas Zimmermann
Acked-by: Daniel Vetter
Link: https://patchwork.freedesktop.org/patch/msgid/20211108093149.7226-4-tzimmermann@suse.de
Signed-off-by: Sasha Levin
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 83 ++++++++-----------
 drivers/gpu/drm/lima/lima_gem.c               | 10 +--
 drivers/gpu/drm/lima/lima_sched.c             |  4 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  2 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  8 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  5 +-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c   |  6 +-
 drivers/gpu/drm/v3d/v3d_bo.c                  |  8 +-
 drivers/gpu/drm/virtio/virtgpu_object.c       | 12 +--
 include/drm/drm_gem_shmem_helper.h            | 69 ++++++++-------
 11 files changed, 107 insertions(+), 102 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 05a924e29133..a30ffc07470c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -22,6 +22,11 @@
  *
  * This library provides helpers for GEM objects backed by shmem buffers
  * allocated using anonymous pageable memory.
+ *
+ * Functions that operate on the GEM object receive struct &drm_gem_shmem_object.
+ * For GEM callback helpers in struct &drm_gem_object functions, see likewise
+ * named functions with an _object_ infix (e.g., drm_gem_shmem_object_vmap() wraps
+ * drm_gem_shmem_vmap()). These helpers perform the necessary type conversion.
  */
 
 static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
@@ -112,15 +117,15 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
 /**
- * drm_gem_shmem_free_object - Free resources associated with a shmem GEM object
- * @obj: GEM object to free
+ * drm_gem_shmem_free - Free resources associated with a shmem GEM object
+ * @shmem: shmem GEM object to free
  *
  * This function cleans up the GEM object state and frees the memory used to
  * store the object itself.
  */
-void drm_gem_shmem_free_object(struct drm_gem_object *obj)
+void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	struct drm_gem_object *obj = &shmem->base;
 
 	WARN_ON(shmem->vmap_use_count);
 
@@ -144,7 +149,7 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_free_object);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
 static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -224,7 +229,7 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
- * @obj: GEM object
+ * @shmem: shmem GEM object
  *
  * This function makes sure the backing pages are pinned in memory while the
  * buffer is exported.
@@ -232,10 +237,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_pin(struct drm_gem_object *obj)
+int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	WARN_ON(shmem->base.import_attach);
 
 	return drm_gem_shmem_get_pages(shmem);
@@ -244,15 +247,13 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
 
 /**
  * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
- * @obj: GEM object
+ * @shmem: shmem GEM object
  *
  * This function removes the requirement that the backing pages are pinned in
  * memory.
  */
-void drm_gem_shmem_unpin(struct drm_gem_object *obj)
+void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	WARN_ON(shmem->base.import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
@@ -327,9 +328,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
@@ -375,10 +375,8 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
  * This function hides the differences between dma-buf imported and natively
  * allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	mutex_lock(&shmem->vmap_lock);
 	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
@@ -413,10 +411,8 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 /* Update madvise status, returns true if not purged, else
  * false or -errno.
  */
-int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv)
+int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	mutex_lock(&shmem->pages_lock);
 
 	if (shmem->madv >= 0)
@@ -430,14 +426,14 @@ int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv)
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_object *obj)
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 {
+	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
 
-	dma_unmap_sgtable(obj->dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
 	sg_free_table(shmem->sgt);
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
@@ -456,18 +452,15 @@ void drm_gem_shmem_purge_locked(struct drm_gem_object *obj)
 	 */
 	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
 
-	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
-				 0, (loff_t)-1);
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
-bool drm_gem_shmem_purge(struct drm_gem_object *obj)
+bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	if (!mutex_trylock(&shmem->pages_lock))
 		return false;
 
-	drm_gem_shmem_purge_locked(obj);
+	drm_gem_shmem_purge_locked(shmem);
 	mutex_unlock(&shmem->pages_lock);
 
 	return true;
@@ -575,7 +568,7 @@ static const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 
 /**
  * drm_gem_shmem_mmap - Memory-map a shmem GEM object
- * @obj: gem object
+ * @shmem: shmem GEM object
  * @vma: VMA for the area to be mapped
 *
 * This function implements an augmented version of the GEM DRM file mmap
@@ -584,9 +577,9 @@ static const struct vm_operations_struct drm_gem_shmem_vm_ops = {
 * Returns:
 * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
 {
-	struct drm_gem_shmem_object *shmem;
+	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
 	if (obj->import_attach) {
@@ -597,8 +590,6 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 		return dma_buf_mmap(obj->dma_buf, vma, 0);
 	}
 
-	shmem = to_drm_gem_shmem_obj(obj);
-
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret) {
 		drm_gem_vm_close(vma);
@@ -617,15 +608,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
 
 /**
  * drm_gem_shmem_print_info() - Print &drm_gem_shmem_object info for debugfs
+ * @shmem: shmem GEM object
  * @p: DRM printer
  * @indent: Tab indentation level
- * @obj: GEM object
  */
-void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent,
-			      const struct drm_gem_object *obj)
+void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
+			      struct drm_printer *p, unsigned int indent)
 {
-	const struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
@@ -635,7 +624,7 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
 /**
  * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned
  *                              pages for a shmem GEM object
- * @obj: GEM object
+ * @shmem: shmem GEM object
  *
 * This function exports a scatter/gather table suitable for PRIME usage by
 * calling the standard DMA mapping API.
@@ -646,9 +635,9 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
 * Returns:
 * A pointer to the scatter/gather table of pinned pages or NULL on failure.
 */
-struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
+struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	struct drm_gem_object *obj = &shmem->base;
 
 	WARN_ON(shmem->base.import_attach);
 
@@ -659,7 +648,7 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
  *                               scatter/gather table for a shmem GEM object.
- * @obj: GEM object
+ * @shmem: shmem GEM object
  *
 * This function returns a scatter/gather table suitable for driver usage. If
 * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
@@ -672,10 +661,10 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 * Returns:
 * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
-struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj)
+struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {
+	struct drm_gem_object *obj = &shmem->base;
 	int ret;
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	struct sg_table *sgt;
 
 	if (shmem->sgt)
@@ -687,7 +676,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj)
 	if (ret)
 		return ERR_PTR(ret);
 
-	sgt = drm_gem_shmem_get_sg_table(&shmem->base);
+	sgt = drm_gem_shmem_get_sg_table(shmem);
 	if (IS_ERR(sgt)) {
 		ret = PTR_ERR(sgt);
 		goto err_put_pages;
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 6590e1cae4ee..09ea621a4806 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -127,7 +127,7 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 		if (err)
 			goto out;
 	} else {
-		struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(obj);
+		struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(shmem);
 
 		if (IS_ERR(sgt)) {
 			err = PTR_ERR(sgt);
@@ -151,7 +151,7 @@ static void lima_gem_free_object(struct drm_gem_object *obj)
 	if (!list_empty(&bo->va))
 		dev_err(obj->dev->dev, "lima gem free bo still has va\n");
 
-	drm_gem_shmem_free_object(obj);
+	drm_gem_shmem_free(&bo->base);
 }
 
 static int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file)
@@ -179,7 +179,7 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	if (bo->heap_size)
 		return -EINVAL;
 
-	return drm_gem_shmem_pin(obj);
+	return drm_gem_shmem_pin(&bo->base);
 }
 
 static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
@@ -189,7 +189,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 	if (bo->heap_size)
 		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj, map);
+	return drm_gem_shmem_vmap(&bo->base, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
@@ -199,7 +199,7 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	if (bo->heap_size)
 		return -EINVAL;
 
-	return drm_gem_shmem_mmap(obj, vma);
+	return drm_gem_shmem_mmap(&bo->base, vma);
 }
 
 static const struct drm_gem_object_funcs lima_gem_funcs = {
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dba8329937a3..2e817dbdcad7 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -390,7 +390,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			ret = drm_gem_shmem_vmap(&bo->base, &map);
 			if (ret) {
 				kvfree(et);
 				goto out;
@@ -398,7 +398,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 
 			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, &map);
+			drm_gem_shmem_vunmap(&bo->base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index de533f372764..e48e357ea4f1 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -418,7 +418,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 		}
 	}
 
-	args->retained = drm_gem_shmem_madvise(gem_obj, args->madv);
+	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
 
 	if (args->retained) {
 		if (args->madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index be1cc6579a71..6d9bdb9180cb 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -49,7 +49,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 		kvfree(bo->sgts);
 	}
 
-	drm_gem_shmem_free_object(obj);
+	drm_gem_shmem_free(&bo->base);
 }
 
 struct panfrost_gem_mapping *
@@ -187,10 +187,12 @@ void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
 
 static int panfrost_gem_pin(struct drm_gem_object *obj)
 {
-	if (to_panfrost_bo(obj)->is_heap)
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (bo->is_heap)
 		return -EINVAL;
 
-	return drm_gem_shmem_pin(obj);
+	return drm_gem_shmem_pin(&bo->base);
 }
 
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 1b9f68d8e9aa..b0142341e223 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -52,7 +52,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 		goto unlock_mappings;
 
 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(obj);
+	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;
 
 	mutex_unlock(&shmem->pages_lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index c0189cc9a2f1..c3292a6bd1ae 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -288,7 +288,8 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
 int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
 {
 	struct panfrost_gem_object *bo = mapping->obj;
-	struct drm_gem_object *obj = &bo->base.base;
+	struct drm_gem_shmem_object *shmem = &bo->base;
+	struct drm_gem_object *obj = &shmem->base;
 	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
 	struct sg_table *sgt;
 	int prot = IOMMU_READ | IOMMU_WRITE;
@@ -299,7 +300,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
 	if (bo->noexec)
 		prot |= IOMMU_NOEXEC;
 
-	sgt = drm_gem_shmem_get_pages_sgt(obj);
+	sgt = drm_gem_shmem_get_pages_sgt(shmem);
 	if (WARN_ON(IS_ERR(sgt)))
 		return PTR_ERR(sgt);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index 5ab03d605f57..9d9c067c1d70 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -105,7 +105,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	ret = drm_gem_shmem_vmap(bo, &map);
 	if (ret)
 		goto err_put_mapping;
 	perfcnt->buf = map.vaddr;
@@ -164,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, &map);
+	drm_gem_shmem_vunmap(bo, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -194,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 			  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index b50677beb6ac..0d9af62f69ad 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -47,7 +47,7 @@ void v3d_free_object(struct drm_gem_object *obj)
 	/* GPU execution may have dirtied any pages in the BO. */
 	bo->base.pages_mark_dirty_on_put = true;
 
-	drm_gem_shmem_free_object(obj);
+	drm_gem_shmem_free(&bo->base);
 }
 
 static const struct drm_gem_object_funcs v3d_gem_funcs = {
@@ -95,7 +95,7 @@ v3d_bo_create_finish(struct drm_gem_object *obj)
 	/* So far we pin the BO in the MMU for its lifetime, so use
 	 * shmem's helper for getting a lifetime sgt.
 	 */
-	sgt = drm_gem_shmem_get_pages_sgt(&bo->base.base);
+	sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
 	if (IS_ERR(sgt))
 		return PTR_ERR(sgt);
 
@@ -141,7 +141,7 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
 	return bo;
 
 free_obj:
-	drm_gem_shmem_free_object(&shmem_obj->base);
+	drm_gem_shmem_free(shmem_obj);
 	return ERR_PTR(ret);
 }
 
@@ -159,7 +159,7 @@ v3d_prime_import_sg_table(struct drm_device *dev,
 
 	ret = v3d_bo_create_finish(obj);
 	if (ret) {
-		drm_gem_shmem_free_object(obj);
+		drm_gem_shmem_free(&to_v3d_bo(obj)->base);
 		return ERR_PTR(ret);
 	}
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 698431d233b8..187e10da2f17 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -79,10 +79,10 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 			sg_free_table(shmem->pages);
 			kfree(shmem->pages);
 			shmem->pages = NULL;
-			drm_gem_shmem_unpin(&bo->base.base);
+			drm_gem_shmem_unpin(&bo->base);
 		}
 
-		drm_gem_shmem_free_object(&bo->base.base);
+		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
 
@@ -156,7 +156,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	struct scatterlist *sg;
 	int si, ret;
 
-	ret = drm_gem_shmem_pin(&bo->base.base);
+	ret = drm_gem_shmem_pin(&bo->base);
 	if (ret < 0)
 		return -EINVAL;
 
@@ -166,9 +166,9 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	 * dma-ops. This is discouraged for other drivers, but should be fine
 	 * since virtio_gpu doesn't support dma-buf import from other devices.
 	 */
-	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base.base);
+	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
 	if (!shmem->pages) {
-		drm_gem_shmem_unpin(&bo->base.base);
+		drm_gem_shmem_unpin(&bo->base);
 		return -EINVAL;
 	}
 
@@ -276,6 +276,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
-	drm_gem_shmem_free_object(&shmem_obj->base);
+	drm_gem_shmem_free(shmem_obj);
 	return ret;
 }
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 4199877ae588..311d66c9cf4b 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -107,16 +107,17 @@ struct drm_gem_shmem_object {
 	container_of(obj, struct drm_gem_shmem_object, base)
 
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
-void drm_gem_shmem_free_object(struct drm_gem_object *obj);
+void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_pin(struct drm_gem_object *obj);
-void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map);
+int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
 
-int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
+int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
@@ -125,33 +126,31 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
 	       !shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_object *obj);
-bool drm_gem_shmem_purge(struct drm_gem_object *obj);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
+bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
-			      struct drm_mode_create_dumb *args);
-
-int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
-
-void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent,
-			      const struct drm_gem_object *obj);
+struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
+struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
 
-struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj);
+void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
+			      struct drm_printer *p, unsigned int indent);
 
 /*
  * GEM object functions
  */
 
 /**
- * drm_gem_shmem_object_free - GEM object function for drm_gem_shmem_free_object()
+ * drm_gem_shmem_object_free - GEM object function for drm_gem_shmem_free()
  * @obj: GEM object to free
  *
- * This function wraps drm_gem_shmem_free_object(). Drivers that employ the shmem helpers
+ * This function wraps drm_gem_shmem_free(). Drivers that employ the shmem helpers
  * should use it as their &drm_gem_object_funcs.free handler.
  */
 static inline void drm_gem_shmem_object_free(struct drm_gem_object *obj)
 {
-	drm_gem_shmem_free_object(obj);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	drm_gem_shmem_free(shmem);
 }
 
 /**
@@ -166,7 +165,9 @@ static inline void drm_gem_shmem_object_free(struct drm_gem_object *obj)
 static inline void drm_gem_shmem_object_print_info(struct drm_printer *p, unsigned int indent,
 						   const struct drm_gem_object *obj)
 {
-	drm_gem_shmem_print_info(p, indent, obj);
+	const struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	drm_gem_shmem_print_info(shmem, p, indent);
 }
 
 /**
@@ -178,7 +179,9 @@ static inline void drm_gem_shmem_object_print_info(struct drm_printer *p, unsign
  */
 static inline int drm_gem_shmem_object_pin(struct drm_gem_object *obj)
 {
-	return drm_gem_shmem_pin(obj);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	return drm_gem_shmem_pin(shmem);
 }
 
 /**
@@ -190,7 +193,9 @@ static inline int drm_gem_shmem_object_pin(struct drm_gem_object *obj)
  */
 static inline void drm_gem_shmem_object_unpin(struct drm_gem_object *obj)
 {
-	drm_gem_shmem_unpin(obj);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	drm_gem_shmem_unpin(shmem);
 }
 
 /**
@@ -205,7 +210,9 @@ static inline void drm_gem_shmem_object_unpin(struct drm_gem_object *obj)
  */
 static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_object *obj)
 {
-	return drm_gem_shmem_get_sg_table(obj);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	return drm_gem_shmem_get_sg_table(shmem);
 }
 
 /*
@@ -221,7 +228,9 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
  */
 static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return drm_gem_shmem_vmap(obj, map);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	return drm_gem_shmem_vmap(shmem, map);
 }
 
 /*
@@ -234,7 +243,9 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, struct d
  */
 static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	drm_gem_shmem_vunmap(obj, map);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	drm_gem_shmem_vunmap(shmem, map);
 }
 
 /**
@@ -250,7 +261,9 @@ static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj, struc
  */
 static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 {
-	return drm_gem_shmem_mmap(obj, vma);
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	return drm_gem_shmem_mmap(shmem, vma);
 }
 
 /*
@@ -261,8 +274,8 @@ struct drm_gem_object *
 drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 				    struct dma_buf_attachment *attach,
 				    struct sg_table *sgt);
-
-struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj);
+int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
+			      struct drm_mode_create_dumb *args);
 
 /**
  * DRM_GEM_SHMEM_DRIVER_OPS - Default shmem GEM operations
-- 
2.35.1