Hello,
This series moves all drivers to a dynamic dma-buf locking specification.
From now on, all dma-buf importers are made responsible for holding the
dma-buf's reservation lock around all operations performed on dma-bufs,
in accordance with the locking specification. This allows us to utilize
the reservation lock more broadly around the kernel without fear of
potential deadlocks.
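For illustration, here is a minimal sketch (not taken from the patches;
the helper name is made up) of what the importer-side convention looks
like once the series is applied:

  #include <linux/dma-buf.h>
  #include <linux/dma-resv.h>

  static int importer_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
  {
          int ret;

          /* the importer is responsible for holding the reservation lock */
          dma_resv_lock(dmabuf->resv, NULL);
          ret = dma_buf_vmap(dmabuf, map);
          dma_resv_unlock(dmabuf->resv);

          return ret;
  }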
This patchset passes all i915 selftests. It was also tested using the VirtIO,
Panfrost, Lima, Tegra, udmabuf, AMDGPU and Nouveau drivers. I tested the cases
of display+GPU, display+V4L and GPU+V4L dma-buf sharing (where appropriate),
which covers the majority of kernel drivers, since the rest of the drivers
share the same or similar code paths.
Changelog:
v3: - Factored out dma_buf_mmap_unlocked() and the attachment functions
into separate patches, as suggested by Christian König.
- Corrected and factored out the dma-buf locking documentation into
a separate patch, as suggested by Christian König.
- The Intel driver dropped the reservation locking from its BO-release
code path a few days ago, but we need that locking for the imported
GEMs because, in the end, that code path unmaps the imported GEM.
So I added back the locking needed by the imported GEMs, updating
the "dma-buf attachment locking specification" patch appropriately.
- Tested Nouveau+Intel dma-buf import/export combo.
- Tested udmabuf import to i915/Nouveau/AMDGPU.
- Fixed a few places in the Etnaviv, Panfrost and Lima drivers that I
missed to switch to locked dma-buf vmapping in the "drm/gem: Take
reservation lock for vmap/vunmap operations" patch. As a result, this
invalidated the r-b that Christian gave to v2.
- Added locked dma-buf vmap/vunmap functions that are needed for fixing
the vmapping of the Etnaviv, Panfrost and Lima drivers mentioned above.
I actually had this change stashed for the drm-shmem shrinker patchset,
but then realized that it's already needed by the dma-buf patches.
Also improved my tests to better cover these code paths.
v2: - Changed the locking specification to avoid problems with cross-driver
ww locking, as suggested by Christian König. Now the attach/detach
callbacks are invoked without the lock held, and the exporter should
take the lock itself.
- Added "locking convention" documentation that explains which dma-buf
functions and callbacks are locked/unlocked for importers and exporters,
which was requested by Christian König.
- Added the ack that Tomasz Figa gave to v1 of the V4L patches.
Dmitry Osipenko (9):
dma-buf: Add _unlocked postfix to function names
dma-buf: Add locked variant of dma_buf_vmap/vunmap()
drm/gem: Take reservation lock for vmap/vunmap operations
dma-buf: Move dma_buf_vmap/vunmap_unlocked() to dynamic locking
specification
dma-buf: Move dma_buf_mmap_unlocked() to dynamic locking specification
dma-buf: Move dma-buf attachment to dynamic locking specification
dma-buf: Document dynamic locking convention
media: videobuf2: Stop using internal dma-buf lock
dma-buf: Remove internal lock
Documentation/driver-api/dma-buf.rst | 6 +
drivers/dma-buf/dma-buf.c | 276 ++++++++++++++----
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 +-
drivers/gpu/drm/armada/armada_gem.c | 14 +-
drivers/gpu/drm/drm_client.c | 4 +-
drivers/gpu/drm/drm_gem.c | 24 ++
drivers/gpu/drm/drm_gem_dma_helper.c | 6 +-
drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
drivers/gpu/drm/drm_gem_ttm_helper.c | 9 +-
drivers/gpu/drm/drm_prime.c | 12 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 4 +-
drivers/gpu/drm/exynos/exynos_drm_gem.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 6 +-
drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +
.../drm/i915/gem/selftests/i915_gem_dmabuf.c | 20 +-
drivers/gpu/drm/lima/lima_sched.c | 4 +-
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 8 +-
drivers/gpu/drm/panfrost/panfrost_dump.c | 4 +-
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +-
drivers/gpu/drm/qxl/qxl_object.c | 17 +-
drivers/gpu/drm/qxl/qxl_prime.c | 4 +-
drivers/gpu/drm/tegra/gem.c | 27 +-
drivers/infiniband/core/umem_dmabuf.c | 11 +-
.../common/videobuf2/videobuf2-dma-contig.c | 26 +-
.../media/common/videobuf2/videobuf2-dma-sg.c | 23 +-
.../common/videobuf2/videobuf2-vmalloc.c | 17 +-
.../platform/nvidia/tegra-vde/dmabuf-cache.c | 12 +-
drivers/misc/fastrpc.c | 12 +-
drivers/xen/gntdev-dmabuf.c | 14 +-
include/drm/drm_gem.h | 3 +
include/linux/dma-buf.h | 57 ++--
32 files changed, 410 insertions(+), 242 deletions(-)
--
2.37.2
Add locked variants of dma_buf_vmap/vunmap() that will be utilized by
DRM drivers once the vmap/vunmap functions are moved to the new locking
convention.
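As an example (a hypothetical sketch of the intended end state, once a
later patch in this series makes dma_buf_vmap() the locked variant), a
DRM helper that already holds the reservation lock will be able to call
it directly:

  #include <linux/dma-buf.h>
  #include <linux/dma-resv.h>
  #include <drm/drm_gem.h>

  static int gem_vmap_imported(struct drm_gem_object *obj,
                               struct iosys_map *map)
  {
          int ret;

          /* for imported GEMs, obj->resv is the dma-buf's reservation object */
          dma_resv_lock(obj->resv, NULL);
          ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
          dma_resv_unlock(obj->resv);

          return ret;
  }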
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 42 +++++++++++++++++++++++++++++++++++----
include/linux/dma-buf.h | 2 ++
2 files changed, 40 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 452a6a1f1e60..34173aafe6c9 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1373,7 +1373,7 @@ int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
/**
- * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel
+ * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
* address space. Same restrictions as for vmap and friends apply.
* @dmabuf: [in] buffer to vmap
* @map: [out] returns the vmap pointer
@@ -1388,7 +1388,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
*
* Returns 0 on success, or a negative errno code otherwise.
*/
-int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
+int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
struct iosys_map ptr;
int ret = 0;
@@ -1424,14 +1424,34 @@ int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
mutex_unlock(&dmabuf->lock);
return ret;
}
+EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
+
+/**
+ * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel
+ * address space. Same restrictions as for vmap and friends apply.
+ * @dmabuf: [in] buffer to vmap
+ * @map: [out] returns the vmap pointer
+ *
+ * Unlocked version of dma_buf_vmap()
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ int ret;
+
+ ret = dma_buf_vmap(dmabuf, map);
+
+ return ret;
+}
EXPORT_SYMBOL_NS_GPL(dma_buf_vmap_unlocked, DMA_BUF);
/**
- * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap.
+ * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
* @dmabuf: [in] buffer to vunmap
* @map: [in] vmap pointer to vunmap
*/
-void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
+void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
if (WARN_ON(!dmabuf))
return;
@@ -1448,6 +1468,20 @@ void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
}
mutex_unlock(&dmabuf->lock);
}
+EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
+
+/**
+ * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap.
+ * @dmabuf: [in] buffer to vunmap
+ * @map: [in] vmap pointer to vunmap
+ */
+void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ if (WARN_ON(!dmabuf))
+ return;
+
+ dma_buf_vunmap(dmabuf, map);
+}
EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF);
#ifdef CONFIG_DEBUG_FS
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 9ab09569dec1..da2057569101 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -636,4 +636,6 @@ int dma_buf_mmap_unlocked(struct dma_buf *, struct vm_area_struct *,
unsigned long);
int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
+int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
+void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
#endif /* __DMA_BUF_H__ */
--
2.37.2
Add an _unlocked postfix to the dma-buf API function names in preparation
for moving all non-dynamic dma-buf users over to the dynamic locking
specification. This patch only renames API functions, preparing drivers
for the common locking convention. Later on, we will make the "unlocked"
functions take the reservation lock.
Acked-by: Christian König <[email protected]>
Suggested-by: Christian König <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 76 ++++++++++---------
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
drivers/gpu/drm/armada/armada_gem.c | 14 ++--
drivers/gpu/drm/drm_gem_dma_helper.c | 6 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 8 +-
drivers/gpu/drm/drm_prime.c | 12 +--
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 6 +-
drivers/gpu/drm/exynos/exynos_drm_gem.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 12 +--
.../drm/i915/gem/selftests/i915_gem_dmabuf.c | 20 ++---
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 8 +-
drivers/gpu/drm/tegra/gem.c | 27 +++----
drivers/infiniband/core/umem_dmabuf.c | 11 +--
.../common/videobuf2/videobuf2-dma-contig.c | 15 ++--
.../media/common/videobuf2/videobuf2-dma-sg.c | 12 +--
.../common/videobuf2/videobuf2-vmalloc.c | 6 +-
.../platform/nvidia/tegra-vde/dmabuf-cache.c | 12 +--
drivers/misc/fastrpc.c | 12 +--
drivers/xen/gntdev-dmabuf.c | 14 ++--
include/linux/dma-buf.h | 34 +++++----
21 files changed, 162 insertions(+), 153 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 1c912255c5d6..452a6a1f1e60 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -795,7 +795,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
}
/**
- * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
+ * dma_buf_dynamic_attach_unlocked - Add the device to dma_buf's attachments list
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
* @importer_ops: [in] importer operations for the attachment
@@ -817,9 +817,9 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
* indicated with the error code -EBUSY.
*/
struct dma_buf_attachment *
-dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
- const struct dma_buf_attach_ops *importer_ops,
- void *importer_priv)
+dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
+ const struct dma_buf_attach_ops *importer_ops,
+ void *importer_priv)
{
struct dma_buf_attachment *attach;
int ret;
@@ -892,25 +892,25 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_unlock(attach->dmabuf->resv);
- dma_buf_detach(dmabuf, attach);
+ dma_buf_detach_unlocked(dmabuf, attach);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach_unlocked, DMA_BUF);
/**
- * dma_buf_attach - Wrapper for dma_buf_dynamic_attach
+ * dma_buf_attach_unlocked - Wrapper for dma_buf_dynamic_attach
* @dmabuf: [in] buffer to attach device to.
* @dev: [in] device to be attached.
*
- * Wrapper to call dma_buf_dynamic_attach() for drivers which still use a static
- * mapping.
+ * Wrapper to call dma_buf_dynamic_attach_unlocked() for drivers which still
+ * use a static mapping.
*/
-struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
- struct device *dev)
+struct dma_buf_attachment *dma_buf_attach_unlocked(struct dma_buf *dmabuf,
+ struct device *dev)
{
- return dma_buf_dynamic_attach(dmabuf, dev, NULL, NULL);
+ return dma_buf_dynamic_attach_unlocked(dmabuf, dev, NULL, NULL);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_attach, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_attach_unlocked, DMA_BUF);
static void __unmap_dma_buf(struct dma_buf_attachment *attach,
struct sg_table *sg_table,
@@ -923,7 +923,7 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
}
/**
- * dma_buf_detach - Remove the given attachment from dmabuf's attachments list
+ * dma_buf_detach_unlocked - Remove the given attachment from dmabuf's attachments list
* @dmabuf: [in] buffer to detach from.
* @attach: [in] attachment to be detached; is free'd after this call.
*
@@ -931,7 +931,8 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
*
* Optionally this calls &dma_buf_ops.detach for device-specific detach.
*/
-void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
+void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attach)
{
if (WARN_ON(!dmabuf || !attach))
return;
@@ -956,14 +957,14 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
kfree(attach);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_detach_unlocked, DMA_BUF);
/**
* dma_buf_pin - Lock down the DMA-buf
* @attach: [in] attachment which should be pinned
*
- * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach()) may
- * call this, and only for limited use cases like scanout and not for temporary
+ * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach_unlocked())
+ * may call this, and only for limited use cases like scanout and not for temporary
* pin operations. It is not permitted to allow userspace to pin arbitrary
* amounts of buffers through this interface.
*
@@ -1010,7 +1011,7 @@ void dma_buf_unpin(struct dma_buf_attachment *attach)
EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
/**
- * dma_buf_map_attachment - Returns the scatterlist table of the attachment;
+ * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
* mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
* dma_buf_ops.
* @attach: [in] attachment whose scatterlist is to be returned
@@ -1030,8 +1031,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
* Important: Dynamic importers must wait for the exclusive fence of the struct
* dma_resv attached to the DMA-BUF first.
*/
-struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
- enum dma_data_direction direction)
+struct sg_table *
+dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
+ enum dma_data_direction direction)
{
struct sg_table *sg_table;
int r;
@@ -1097,10 +1099,10 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
#endif /* CONFIG_DMA_API_DEBUG */
return sg_table;
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
/**
- * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might
+ * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might
* deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
* dma_buf_ops.
* @attach: [in] attachment to unmap buffer from
@@ -1109,9 +1111,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
*
* This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment().
*/
-void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
- struct sg_table *sg_table,
- enum dma_data_direction direction)
+void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
+ struct sg_table *sg_table,
+ enum dma_data_direction direction)
{
might_sleep();
@@ -1133,7 +1135,7 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
dma_buf_unpin(attach);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF);
/**
* dma_buf_move_notify - notify attachments that DMA-buf is moving
@@ -1330,7 +1332,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
/**
- * dma_buf_mmap - Setup up a userspace mmap with the given vma
+ * dma_buf_mmap_unlocked - Setup up a userspace mmap with the given vma
* @dmabuf: [in] buffer that should back the vma
* @vma: [in] vma for the mmap
* @pgoff: [in] offset in pages where this mmap should start within the
@@ -1343,8 +1345,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
*
* Can return negative error values, returns 0 on success.
*/
-int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
- unsigned long pgoff)
+int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
+ unsigned long pgoff)
{
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1368,10 +1370,10 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
return dmabuf->ops->mmap(dmabuf, vma);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
/**
- * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
+ * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel
* address space. Same restrictions as for vmap and friends apply.
* @dmabuf: [in] buffer to vmap
* @map: [out] returns the vmap pointer
@@ -1386,7 +1388,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
*
* Returns 0 on success, or a negative errno code otherwise.
*/
-int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
{
struct iosys_map ptr;
int ret = 0;
@@ -1422,14 +1424,14 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
mutex_unlock(&dmabuf->lock);
return ret;
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_vmap_unlocked, DMA_BUF);
/**
- * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
+ * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap.
* @dmabuf: [in] buffer to vunmap
* @map: [in] vmap pointer to vunmap
*/
-void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
{
if (WARN_ON(!dmabuf))
return;
@@ -1446,7 +1448,7 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
}
mutex_unlock(&dmabuf->lock);
}
-EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
+EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF);
#ifdef CONFIG_DEBUG_FS
static int dma_buf_debug_show(struct seq_file *s, void *unused)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 782cbca37538..d9ed5a4fbc6f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -449,8 +449,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
if (IS_ERR(obj))
return obj;
- attach = dma_buf_dynamic_attach(dma_buf, dev->dev,
- &amdgpu_dma_buf_attach_ops, obj);
+ attach = dma_buf_dynamic_attach_unlocked(dma_buf, dev->dev,
+ &amdgpu_dma_buf_attach_ops, obj);
if (IS_ERR(attach)) {
drm_gem_object_put(obj);
return ERR_CAST(attach);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b1c455329023..ac1e2911b727 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -885,7 +885,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev,
struct sg_table *sgt;
attach = gtt->gobj->import_attach;
- sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+ sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
@@ -1010,7 +1010,7 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev,
struct dma_buf_attachment *attach;
attach = gtt->gobj->import_attach;
- dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(attach, ttm->sg, DMA_BIDIRECTIONAL);
ttm->sg = NULL;
}
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
index 5430265ad458..a499246ec28e 100644
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -66,8 +66,8 @@ void armada_gem_free_object(struct drm_gem_object *obj)
if (dobj->obj.import_attach) {
/* We only ever display imported data */
if (dobj->sgt)
- dma_buf_unmap_attachment(dobj->obj.import_attach,
- dobj->sgt, DMA_TO_DEVICE);
+ dma_buf_unmap_attachment_unlocked(dobj->obj.import_attach,
+ dobj->sgt, DMA_TO_DEVICE);
drm_prime_gem_destroy(&dobj->obj, NULL);
}
@@ -364,7 +364,7 @@ int armada_gem_pwrite_ioctl(struct drm_device *dev, void *data,
if (args->offset > dobj->obj.size ||
args->size > dobj->obj.size - args->offset) {
- DRM_ERROR("invalid size: object size %u\n", dobj->obj.size);
+ DRM_ERROR("invalid size: object size %zu\n", dobj->obj.size);
ret = -EINVAL;
goto unref;
}
@@ -514,13 +514,13 @@ armada_gem_prime_import(struct drm_device *dev, struct dma_buf *buf)
}
}
- attach = dma_buf_attach(buf, dev->dev);
+ attach = dma_buf_attach_unlocked(buf, dev->dev);
if (IS_ERR(attach))
return ERR_CAST(attach);
dobj = armada_gem_alloc_private_object(dev, buf->size);
if (!dobj) {
- dma_buf_detach(buf, attach);
+ dma_buf_detach_unlocked(buf, attach);
return ERR_PTR(-ENOMEM);
}
@@ -539,8 +539,8 @@ int armada_gem_map_import(struct armada_gem_object *dobj)
{
int ret;
- dobj->sgt = dma_buf_map_attachment(dobj->obj.import_attach,
- DMA_TO_DEVICE);
+ dobj->sgt = dma_buf_map_attachment_unlocked(dobj->obj.import_attach,
+ DMA_TO_DEVICE);
if (IS_ERR(dobj->sgt)) {
ret = PTR_ERR(dobj->sgt);
dobj->sgt = NULL;
diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
index f6901ff97bbb..1e658c448366 100644
--- a/drivers/gpu/drm/drm_gem_dma_helper.c
+++ b/drivers/gpu/drm/drm_gem_dma_helper.c
@@ -230,7 +230,7 @@ void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj)
if (gem_obj->import_attach) {
if (dma_obj->vaddr)
- dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map);
drm_prime_gem_destroy(gem_obj, dma_obj->sgt);
} else if (dma_obj->vaddr) {
if (dma_obj->map_noncoherent)
@@ -581,7 +581,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev,
struct iosys_map map;
int ret;
- ret = dma_buf_vmap(attach->dmabuf, &map);
+ ret = dma_buf_vmap_unlocked(attach->dmabuf, &map);
if (ret) {
DRM_ERROR("Failed to vmap PRIME buffer\n");
return ERR_PTR(ret);
@@ -589,7 +589,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev,
obj = drm_gem_dma_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj)) {
- dma_buf_vunmap(attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(attach->dmabuf, &map);
return obj;
}
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 35138f8a375c..5f572716306d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -299,10 +299,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
}
if (obj->import_attach) {
- ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ ret = dma_buf_vmap_unlocked(obj->import_attach->dmabuf, map);
if (!ret) {
if (WARN_ON(map->is_iomem)) {
- dma_buf_vunmap(obj->import_attach->dmabuf, map);
+ dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
ret = -EIO;
goto err_put_pages;
}
@@ -383,7 +383,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
return;
if (obj->import_attach) {
- dma_buf_vunmap(obj->import_attach->dmabuf, map);
+ dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
} else {
vunmap(shmem->vaddr);
drm_gem_shmem_put_pages(shmem);
@@ -618,7 +618,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
drm_gem_object_put(obj);
vma->vm_private_data = NULL;
- return dma_buf_mmap(obj->dma_buf, vma, 0);
+ return dma_buf_mmap_unlocked(obj->dma_buf, vma, 0);
}
ret = drm_gem_shmem_get_pages(shmem);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index eb09e86044c6..e9b7d3fa67f1 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -934,13 +934,13 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev,
if (!dev->driver->gem_prime_import_sg_table)
return ERR_PTR(-EINVAL);
- attach = dma_buf_attach(dma_buf, attach_dev);
+ attach = dma_buf_attach_unlocked(dma_buf, attach_dev);
if (IS_ERR(attach))
return ERR_CAST(attach);
get_dma_buf(dma_buf);
- sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+ sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto fail_detach;
@@ -958,9 +958,9 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev,
return obj;
fail_unmap:
- dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
fail_detach:
- dma_buf_detach(dma_buf, attach);
+ dma_buf_detach_unlocked(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret);
@@ -1056,9 +1056,9 @@ void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg)
attach = obj->import_attach;
if (sg)
- dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(attach, sg, DMA_BIDIRECTIONAL);
dma_buf = attach->dmabuf;
- dma_buf_detach(attach->dmabuf, attach);
+ dma_buf_detach_unlocked(attach->dmabuf, attach);
/* remove the reference */
dma_buf_put(dma_buf);
}
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 3fa2da149639..ae6c1eda0a72 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -65,7 +65,7 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr);
if (etnaviv_obj->vaddr)
- dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
/* Don't drop the pages for imported dmabuf, as they are not
* ours, just free the array we allocated:
@@ -82,7 +82,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
lockdep_assert_held(&etnaviv_obj->lock);
- ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
+ ret = dma_buf_vmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
if (ret)
return NULL;
return map.vaddr;
@@ -91,7 +91,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
struct vm_area_struct *vma)
{
- return dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
+ return dma_buf_mmap_unlocked(etnaviv_obj->base.dma_buf, vma, 0);
}
static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 3e493f48e0d4..8e95a3c5caf8 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -366,7 +366,7 @@ static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct
int ret;
if (obj->import_attach)
- return dma_buf_mmap(obj->dma_buf, vma, 0);
+ return dma_buf_mmap_unlocked(obj->dma_buf, vma, 0);
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index f5062d0c6333..5ecea7df98b1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
assert_object_held(obj);
- pages = dma_buf_map_attachment(obj->base.import_attach,
- DMA_BIDIRECTIONAL);
+ pages = dma_buf_map_attachment_unlocked(obj->base.import_attach,
+ DMA_BIDIRECTIONAL);
if (IS_ERR(pages))
return PTR_ERR(pages);
@@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
struct sg_table *pages)
{
- dma_buf_unmap_attachment(obj->base.import_attach, pages,
- DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(obj->base.import_attach, pages,
+ DMA_BIDIRECTIONAL);
}
static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
@@ -306,7 +306,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
return ERR_PTR(-E2BIG);
/* need to attach */
- attach = dma_buf_attach(dma_buf, dev->dev);
+ attach = dma_buf_attach_unlocked(dma_buf, dev->dev);
if (IS_ERR(attach))
return ERR_CAST(attach);
@@ -337,7 +337,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
return &obj->base;
fail_detach:
- dma_buf_detach(dma_buf, attach);
+ dma_buf_detach_unlocked(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
index 62c61af77a42..6053af920a22 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
@@ -207,13 +207,13 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
i915_gem_object_unlock(import_obj);
/* Now try a fake an importer */
- import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
+ import_attach = dma_buf_attach_unlocked(dmabuf, obj->base.dev->dev);
if (IS_ERR(import_attach)) {
err = PTR_ERR(import_attach);
goto out_import;
}
- st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);
+ st = dma_buf_map_attachment_unlocked(import_attach, DMA_BIDIRECTIONAL);
if (IS_ERR(st)) {
err = PTR_ERR(st);
goto out_detach;
@@ -226,9 +226,9 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
timeout = -ETIME;
}
err = timeout > 0 ? 0 : timeout;
- dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(import_attach, st, DMA_BIDIRECTIONAL);
out_detach:
- dma_buf_detach(dmabuf, import_attach);
+ dma_buf_detach_unlocked(dmabuf, import_attach);
out_import:
i915_gem_object_put(import_obj);
out_dmabuf:
@@ -296,7 +296,7 @@ static int igt_dmabuf_import(void *arg)
goto out_obj;
}
- err = dma_buf_vmap(dmabuf, &map);
+ err = dma_buf_vmap_unlocked(dmabuf, &map);
dma_map = err ? NULL : map.vaddr;
if (!dma_map) {
pr_err("dma_buf_vmap failed\n");
@@ -337,7 +337,7 @@ static int igt_dmabuf_import(void *arg)
err = 0;
out_dma_map:
- dma_buf_vunmap(dmabuf, &map);
+ dma_buf_vunmap_unlocked(dmabuf, &map);
out_obj:
i915_gem_object_put(obj);
out_dmabuf:
@@ -358,7 +358,7 @@ static int igt_dmabuf_import_ownership(void *arg)
if (IS_ERR(dmabuf))
return PTR_ERR(dmabuf);
- err = dma_buf_vmap(dmabuf, &map);
+ err = dma_buf_vmap_unlocked(dmabuf, &map);
ptr = err ? NULL : map.vaddr;
if (!ptr) {
pr_err("dma_buf_vmap failed\n");
@@ -367,7 +367,7 @@ static int igt_dmabuf_import_ownership(void *arg)
}
memset(ptr, 0xc5, PAGE_SIZE);
- dma_buf_vunmap(dmabuf, &map);
+ dma_buf_vunmap_unlocked(dmabuf, &map);
obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
if (IS_ERR(obj)) {
@@ -418,7 +418,7 @@ static int igt_dmabuf_export_vmap(void *arg)
}
i915_gem_object_put(obj);
- err = dma_buf_vmap(dmabuf, &map);
+ err = dma_buf_vmap_unlocked(dmabuf, &map);
ptr = err ? NULL : map.vaddr;
if (!ptr) {
pr_err("dma_buf_vmap failed\n");
@@ -435,7 +435,7 @@ static int igt_dmabuf_export_vmap(void *arg)
memset(ptr, 0xc5, dmabuf->size);
err = 0;
- dma_buf_vunmap(dmabuf, &map);
+ dma_buf_vunmap_unlocked(dmabuf, &map);
out:
dma_buf_put(dmabuf);
return err;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index 393f82e26927..a725a91c2ff9 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -119,13 +119,13 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
}
}
- attach = dma_buf_attach(dma_buf, dev->dev);
+ attach = dma_buf_attach_unlocked(dma_buf, dev->dev);
if (IS_ERR(attach))
return ERR_CAST(attach);
get_dma_buf(dma_buf);
- sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
+ sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto fail_detach;
@@ -142,9 +142,9 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
return obj;
fail_unmap:
- dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE);
+ dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE);
fail_detach:
- dma_buf_detach(dma_buf, attach);
+ dma_buf_detach_unlocked(dma_buf, attach);
dma_buf_put(dma_buf);
return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 81991090adcc..bbfe196ff6f6 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -78,15 +78,15 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
if (gem->import_attach) {
struct dma_buf *buf = gem->import_attach->dmabuf;
- map->attach = dma_buf_attach(buf, dev);
+ map->attach = dma_buf_attach_unlocked(buf, dev);
if (IS_ERR(map->attach)) {
err = PTR_ERR(map->attach);
goto free;
}
- map->sgt = dma_buf_map_attachment(map->attach, direction);
+ map->sgt = dma_buf_map_attachment_unlocked(map->attach, direction);
if (IS_ERR(map->sgt)) {
- dma_buf_detach(buf, map->attach);
+ dma_buf_detach_unlocked(buf, map->attach);
err = PTR_ERR(map->sgt);
map->sgt = NULL;
goto free;
@@ -160,8 +160,9 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
static void tegra_bo_unpin(struct host1x_bo_mapping *map)
{
if (map->attach) {
- dma_buf_unmap_attachment(map->attach, map->sgt, map->direction);
- dma_buf_detach(map->attach->dmabuf, map->attach);
+ dma_buf_unmap_attachment_unlocked(map->attach, map->sgt,
+ map->direction);
+ dma_buf_detach_unlocked(map->attach->dmabuf, map->attach);
} else {
dma_unmap_sgtable(map->dev, map->sgt, map->direction, 0);
sg_free_table(map->sgt);
@@ -181,7 +182,7 @@ static void *tegra_bo_mmap(struct host1x_bo *bo)
if (obj->vaddr) {
return obj->vaddr;
} else if (obj->gem.import_attach) {
- ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
+ ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map);
return ret ? NULL : map.vaddr;
} else {
return vmap(obj->pages, obj->num_pages, VM_MAP,
@@ -197,7 +198,7 @@ static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
if (obj->vaddr)
return;
else if (obj->gem.import_attach)
- dma_buf_vunmap(obj->gem.import_attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map);
else
vunmap(addr);
}
@@ -453,7 +454,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
if (IS_ERR(bo))
return bo;
- attach = dma_buf_attach(buf, drm->dev);
+ attach = dma_buf_attach_unlocked(buf, drm->dev);
if (IS_ERR(attach)) {
err = PTR_ERR(attach);
goto free;
@@ -461,7 +462,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
get_dma_buf(buf);
- bo->sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
+ bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE);
if (IS_ERR(bo->sgt)) {
err = PTR_ERR(bo->sgt);
goto detach;
@@ -479,9 +480,9 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
detach:
if (!IS_ERR_OR_NULL(bo->sgt))
- dma_buf_unmap_attachment(attach, bo->sgt, DMA_TO_DEVICE);
+ dma_buf_unmap_attachment_unlocked(attach, bo->sgt, DMA_TO_DEVICE);
- dma_buf_detach(buf, attach);
+ dma_buf_detach_unlocked(buf, attach);
dma_buf_put(buf);
free:
drm_gem_object_release(&bo->gem);
@@ -508,8 +509,8 @@ void tegra_bo_free_object(struct drm_gem_object *gem)
tegra_bo_iommu_unmap(tegra, bo);
if (gem->import_attach) {
- dma_buf_unmap_attachment(gem->import_attach, bo->sgt,
- DMA_TO_DEVICE);
+ dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt,
+ DMA_TO_DEVICE);
drm_prime_gem_destroy(gem, NULL);
} else {
tegra_bo_free(gem->dev, bo);
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index 04c04e6d24c3..667436a92b17 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -26,7 +26,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
if (umem_dmabuf->sgt)
goto wait_fence;
- sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL);
+ sgt = dma_buf_map_attachment_unlocked(umem_dmabuf->attach,
+ DMA_BIDIRECTIONAL);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
@@ -102,8 +103,8 @@ void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf)
umem_dmabuf->last_sg_trim = 0;
}
- dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt,
- DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt,
+ DMA_BIDIRECTIONAL);
umem_dmabuf->sgt = NULL;
}
@@ -149,7 +150,7 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
if (!ib_umem_num_pages(umem))
goto out_free_umem;
- umem_dmabuf->attach = dma_buf_dynamic_attach(
+ umem_dmabuf->attach = dma_buf_dynamic_attach_unlocked(
dmabuf,
device->dma_device,
ops,
@@ -228,7 +229,7 @@ void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf)
dma_buf_unpin(umem_dmabuf->attach);
dma_resv_unlock(dmabuf->resv);
- dma_buf_detach(dmabuf, umem_dmabuf->attach);
+ dma_buf_detach_unlocked(dmabuf, umem_dmabuf->attach);
dma_buf_put(dmabuf);
kfree(umem_dmabuf);
}
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 678b359717c4..de762dbdaf78 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -101,7 +101,7 @@ static void *vb2_dc_vaddr(struct vb2_buffer *vb, void *buf_priv)
if (buf->db_attach) {
struct iosys_map map;
- if (!dma_buf_vmap(buf->db_attach->dmabuf, &map))
+ if (!dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map))
buf->vaddr = map.vaddr;
return buf->vaddr;
@@ -711,7 +711,7 @@ static int vb2_dc_map_dmabuf(void *mem_priv)
}
/* get the associated scatterlist for this buffer */
- sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
+ sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir);
if (IS_ERR(sgt)) {
pr_err("Error getting dmabuf scatterlist\n");
return -EINVAL;
@@ -722,7 +722,8 @@ static int vb2_dc_map_dmabuf(void *mem_priv)
if (contig_size < buf->size) {
pr_err("contiguous chunk is too small %lu/%lu\n",
contig_size, buf->size);
- dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
+ dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt,
+ buf->dma_dir);
return -EFAULT;
}
@@ -750,10 +751,10 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv)
}
if (buf->vaddr) {
- dma_buf_vunmap(buf->db_attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map);
buf->vaddr = NULL;
}
- dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
+ dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir);
buf->dma_addr = 0;
buf->dma_sgt = NULL;
@@ -768,7 +769,7 @@ static void vb2_dc_detach_dmabuf(void *mem_priv)
vb2_dc_unmap_dmabuf(buf);
/* detach this attachment */
- dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
+ dma_buf_detach_unlocked(buf->db_attach->dmabuf, buf->db_attach);
kfree(buf);
}
@@ -792,7 +793,7 @@ static void *vb2_dc_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
buf->vb = vb;
/* create attachment for the dmabuf with the user device */
- dba = dma_buf_attach(dbuf, buf->dev);
+ dba = dma_buf_attach_unlocked(dbuf, buf->dev);
if (IS_ERR(dba)) {
pr_err("failed to attach dmabuf\n");
kfree(buf);
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index fa69158a65b1..39e11600304a 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -309,7 +309,7 @@ static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv)
if (!buf->vaddr) {
if (buf->db_attach) {
- ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
+ ret = dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map);
buf->vaddr = ret ? NULL : map.vaddr;
} else {
buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
@@ -565,7 +565,7 @@ static int vb2_dma_sg_map_dmabuf(void *mem_priv)
}
/* get the associated scatterlist for this buffer */
- sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
+ sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir);
if (IS_ERR(sgt)) {
pr_err("Error getting dmabuf scatterlist\n");
return -EINVAL;
@@ -594,10 +594,10 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv)
}
if (buf->vaddr) {
- dma_buf_vunmap(buf->db_attach->dmabuf, &map);
+ dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map);
buf->vaddr = NULL;
}
- dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
+ dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir);
buf->dma_sgt = NULL;
}
@@ -611,7 +611,7 @@ static void vb2_dma_sg_detach_dmabuf(void *mem_priv)
vb2_dma_sg_unmap_dmabuf(buf);
/* detach this attachment */
- dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
+ dma_buf_detach_unlocked(buf->db_attach->dmabuf, buf->db_attach);
kfree(buf);
}
@@ -633,7 +633,7 @@ static void *vb2_dma_sg_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
buf->dev = dev;
/* create attachment for the dmabuf with the user device */
- dba = dma_buf_attach(dbuf, buf->dev);
+ dba = dma_buf_attach_unlocked(dbuf, buf->dev);
if (IS_ERR(dba)) {
pr_err("failed to attach dmabuf\n");
kfree(buf);
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index 948152f1596b..7831bf545874 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -376,7 +376,7 @@ static int vb2_vmalloc_map_dmabuf(void *mem_priv)
struct iosys_map map;
int ret;
- ret = dma_buf_vmap(buf->dbuf, &map);
+ ret = dma_buf_vmap_unlocked(buf->dbuf, &map);
if (ret)
return -EFAULT;
buf->vaddr = map.vaddr;
@@ -389,7 +389,7 @@ static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
struct vb2_vmalloc_buf *buf = mem_priv;
struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr);
- dma_buf_vunmap(buf->dbuf, &map);
+ dma_buf_vunmap_unlocked(buf->dbuf, &map);
buf->vaddr = NULL;
}
@@ -399,7 +399,7 @@ static void vb2_vmalloc_detach_dmabuf(void *mem_priv)
struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr);
if (buf->vaddr)
- dma_buf_vunmap(buf->dbuf, &map);
+ dma_buf_vunmap_unlocked(buf->dbuf, &map);
kfree(buf);
}
diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
index 69c346148070..58e4595f3a10 100644
--- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
+++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
@@ -38,8 +38,8 @@ static void tegra_vde_release_entry(struct tegra_vde_cache_entry *entry)
if (entry->vde->domain)
tegra_vde_iommu_unmap(entry->vde, entry->iova);
- dma_buf_unmap_attachment(entry->a, entry->sgt, entry->dma_dir);
- dma_buf_detach(dmabuf, entry->a);
+ dma_buf_unmap_attachment_unlocked(entry->a, entry->sgt, entry->dma_dir);
+ dma_buf_detach_unlocked(dmabuf, entry->a);
dma_buf_put(dmabuf);
list_del(&entry->list);
@@ -95,14 +95,14 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde,
goto ref;
}
- attachment = dma_buf_attach(dmabuf, dev);
+ attachment = dma_buf_attach_unlocked(dmabuf, dev);
if (IS_ERR(attachment)) {
dev_err(dev, "Failed to attach dmabuf\n");
err = PTR_ERR(attachment);
goto err_unlock;
}
- sgt = dma_buf_map_attachment(attachment, dma_dir);
+ sgt = dma_buf_map_attachment_unlocked(attachment, dma_dir);
if (IS_ERR(sgt)) {
dev_err(dev, "Failed to get dmabufs sg_table\n");
err = PTR_ERR(sgt);
@@ -152,9 +152,9 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde,
err_free:
kfree(entry);
err_unmap:
- dma_buf_unmap_attachment(attachment, sgt, dma_dir);
+ dma_buf_unmap_attachment_unlocked(attachment, sgt, dma_dir);
err_detach:
- dma_buf_detach(dmabuf, attachment);
+ dma_buf_detach_unlocked(dmabuf, attachment);
err_unlock:
mutex_unlock(&vde->map_lock);
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index 93ebd174d848..558e8056eb80 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -310,9 +310,9 @@ static void fastrpc_free_map(struct kref *ref)
return;
}
}
- dma_buf_unmap_attachment(map->attach, map->table,
- DMA_BIDIRECTIONAL);
- dma_buf_detach(map->buf, map->attach);
+ dma_buf_unmap_attachment_unlocked(map->attach, map->table,
+ DMA_BIDIRECTIONAL);
+ dma_buf_detach_unlocked(map->buf, map->attach);
dma_buf_put(map->buf);
}
@@ -719,14 +719,14 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
goto get_err;
}
- map->attach = dma_buf_attach(map->buf, sess->dev);
+ map->attach = dma_buf_attach_unlocked(map->buf, sess->dev);
if (IS_ERR(map->attach)) {
dev_err(sess->dev, "Failed to attach dmabuf\n");
err = PTR_ERR(map->attach);
goto attach_err;
}
- map->table = dma_buf_map_attachment(map->attach, DMA_BIDIRECTIONAL);
+ map->table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL);
if (IS_ERR(map->table)) {
err = PTR_ERR(map->table);
goto map_err;
@@ -763,7 +763,7 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
return 0;
map_err:
- dma_buf_detach(map->buf, map->attach);
+ dma_buf_detach_unlocked(map->buf, map->attach);
attach_err:
dma_buf_put(map->buf);
get_err:
diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 940e5e9e8a54..5a50e2697e95 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -592,7 +592,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
gntdev_dmabuf->priv = priv;
gntdev_dmabuf->fd = fd;
- attach = dma_buf_attach(dma_buf, dev);
+ attach = dma_buf_attach_unlocked(dma_buf, dev);
if (IS_ERR(attach)) {
ret = ERR_CAST(attach);
goto fail_free_obj;
@@ -600,7 +600,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
gntdev_dmabuf->u.imp.attach = attach;
- sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+ sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt)) {
ret = ERR_CAST(sgt);
goto fail_detach;
@@ -658,9 +658,9 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
fail_end_access:
dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count);
fail_unmap:
- dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
fail_detach:
- dma_buf_detach(dma_buf, attach);
+ dma_buf_detach_unlocked(dma_buf, attach);
fail_free_obj:
dmabuf_imp_free_storage(gntdev_dmabuf);
fail_put:
@@ -708,10 +708,10 @@ static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd)
attach = gntdev_dmabuf->u.imp.attach;
if (gntdev_dmabuf->u.imp.sgt)
- dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt,
- DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment_unlocked(attach, gntdev_dmabuf->u.imp.sgt,
+ DMA_BIDIRECTIONAL);
dma_buf = attach->dmabuf;
- dma_buf_detach(attach->dmabuf, attach);
+ dma_buf_detach_unlocked(attach->dmabuf, attach);
dma_buf_put(dma_buf);
dmabuf_imp_free_storage(gntdev_dmabuf);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 71731796c8c3..9ab09569dec1 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -601,14 +601,16 @@ dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
return !!attach->importer_ops;
}
-struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
- struct device *dev);
+struct dma_buf_attachment *dma_buf_attach_unlocked(struct dma_buf *dmabuf,
+ struct device *dev);
struct dma_buf_attachment *
-dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
- const struct dma_buf_attach_ops *importer_ops,
- void *importer_priv);
-void dma_buf_detach(struct dma_buf *dmabuf,
- struct dma_buf_attachment *attach);
+dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
+ const struct dma_buf_attach_ops *importer_ops,
+ void *importer_priv);
+
+void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attach);
+
int dma_buf_pin(struct dma_buf_attachment *attach);
void dma_buf_unpin(struct dma_buf_attachment *attach);
@@ -618,18 +620,20 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags);
struct dma_buf *dma_buf_get(int fd);
void dma_buf_put(struct dma_buf *dmabuf);
-struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
- enum dma_data_direction);
-void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
- enum dma_data_direction);
+struct sg_table *dma_buf_map_attachment_unlocked(struct dma_buf_attachment *,
+ enum dma_data_direction);
+void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *,
+ struct sg_table *,
+ enum dma_data_direction);
+
void dma_buf_move_notify(struct dma_buf *dma_buf);
int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
enum dma_data_direction dir);
int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
enum dma_data_direction dir);
-int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
- unsigned long);
-int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
-void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
+int dma_buf_mmap_unlocked(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
+int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
+void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
#endif /* __DMA_BUF_H__ */
--
2.37.2
Move the dma_buf_vmap/vunmap_unlocked() functions to the dynamic locking
specification by making them take the reservation lock. All the affected
drivers were prepared for this change by a previous drm/gem patch.
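The resulting split, as a short sketch with hypothetical helper names:

  /* caller does not hold the reservation lock */
  static int vmap_without_lock(struct dma_buf *dmabuf, struct iosys_map *map)
  {
          /* the _unlocked wrapper now takes dmabuf->resv internally */
          return dma_buf_vmap_unlocked(dmabuf, map);
  }

  /* caller already holds the reservation lock, e.g. an exporter callback */
  static int vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map)
  {
          dma_resv_assert_held(dmabuf->resv);
          return dma_buf_vmap(dmabuf, map);
  }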
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 8 ++++++++
drivers/gpu/drm/drm_prime.c | 4 ++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 34173aafe6c9..f358af401360 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1398,6 +1398,8 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
if (WARN_ON(!dmabuf))
return -EINVAL;
+ dma_resv_assert_held(dmabuf->resv);
+
if (!dmabuf->ops->vmap)
return -EINVAL;
@@ -1440,7 +1442,9 @@ int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
{
int ret;
+ dma_resv_lock(dmabuf->resv, NULL);
ret = dma_buf_vmap(dmabuf, map);
+ dma_resv_unlock(dmabuf->resv);
return ret;
}
@@ -1456,6 +1460,8 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
if (WARN_ON(!dmabuf))
return;
+ dma_resv_assert_held(dmabuf->resv);
+
BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
BUG_ON(dmabuf->vmapping_counter == 0);
BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map));
@@ -1480,7 +1486,9 @@ void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
if (WARN_ON(!dmabuf))
return;
+ dma_resv_lock(dmabuf->resv, NULL);
dma_buf_vunmap(dmabuf, map);
+ dma_resv_unlock(dmabuf->resv);
}
EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 7a7710158880..e9b7d3fa67f1 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -682,7 +682,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- return drm_gem_vmap_unlocked(obj, map);
+ return drm_gem_vmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -698,7 +698,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_vunmap_unlocked(obj, map);
+ drm_gem_vunmap(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
--
2.37.2
Move dma-buf attachment API functions to the dynamic locking specification.
The strict locking convention prevents deadlock situations for dma-buf
importers and exporters.
Previously, the "unlocked" versions of the attachment API functions
weren't taking the reservation lock, and this patch makes them take
the lock.
The Intel and AMD GPU drivers were already mapping the attached dma-bufs
with the lock held during attachment, hence these drivers are updated to
use the locked functions.
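For reference, a hypothetical importer-side sketch of the resulting flow:
attach/detach manage the reservation lock internally, while the locked
map/unmap variants expect the caller to hold it.

  #include <linux/dma-buf.h>
  #include <linux/dma-resv.h>
  #include <linux/err.h>

  static struct sg_table *import_and_map(struct dma_buf *dmabuf,
                                         struct device *dev)
  {
          struct dma_buf_attachment *attach;
          struct sg_table *sgt;

          /* takes and drops the reservation lock internally */
          attach = dma_buf_attach_unlocked(dmabuf, dev);
          if (IS_ERR(attach))
                  return ERR_CAST(attach);

          /* the locked variant expects the caller to hold the lock */
          dma_resv_lock(dmabuf->resv, NULL);
          sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
          dma_resv_unlock(dmabuf->resv);

          if (IS_ERR(sgt))
                  dma_buf_detach_unlocked(dmabuf, attach);

          return sgt;
  }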
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 115 ++++++++++++++-------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
include/linux/dma-buf.h | 20 ++--
5 files changed, 110 insertions(+), 49 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 4556a12bd741..f2a5a122da4a 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
* 2. Userspace passes this file-descriptors to all drivers it wants this buffer
* to share with: First the file descriptor is converted to a &dma_buf using
* dma_buf_get(). Then the buffer is attached to the device using
- * dma_buf_attach().
+ * dma_buf_attach_unlocked().
*
* Up to this stage the exporter is still free to migrate or reallocate the
* backing storage.
@@ -569,8 +569,8 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
* dma_buf_map_attachment() and dma_buf_unmap_attachment().
*
* 4. Once a driver is done with a shared buffer it needs to call
- * dma_buf_detach() (after cleaning up any mappings) and then release the
- * reference acquired with dma_buf_get() by calling dma_buf_put().
+ * dma_buf_detach_unlocked() (after cleaning up any mappings) and then
+ * release the reference acquired with dma_buf_get() by calling dma_buf_put().
*
* For the detailed semantics exporters are expected to implement see
* &dma_buf_ops.
@@ -802,7 +802,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
* @importer_priv: [in] importer private pointer for the attachment
*
* Returns struct dma_buf_attachment pointer for this attachment. Attachments
- * must be cleaned up by calling dma_buf_detach().
+ * must be cleaned up by calling dma_buf_detach_unlocked().
*
* Optionally this calls &dma_buf_ops.attach to allow device-specific attach
* functionality.
@@ -858,8 +858,8 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
dma_buf_is_dynamic(dmabuf)) {
struct sg_table *sgt;
+ dma_resv_lock(attach->dmabuf->resv, NULL);
if (dma_buf_is_dynamic(attach->dmabuf)) {
- dma_resv_lock(attach->dmabuf->resv, NULL);
ret = dmabuf->ops->pin(attach);
if (ret)
goto err_unlock;
@@ -872,8 +872,7 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
ret = PTR_ERR(sgt);
goto err_unpin;
}
- if (dma_buf_is_dynamic(attach->dmabuf))
- dma_resv_unlock(attach->dmabuf->resv);
+ dma_resv_unlock(attach->dmabuf->resv);
attach->sgt = sgt;
attach->dir = DMA_BIDIRECTIONAL;
}
@@ -889,8 +888,7 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
dmabuf->ops->unpin(attach);
err_unlock:
- if (dma_buf_is_dynamic(attach->dmabuf))
- dma_resv_unlock(attach->dmabuf->resv);
+ dma_resv_unlock(attach->dmabuf->resv);
dma_buf_detach_unlocked(dmabuf, attach);
return ERR_PTR(ret);
@@ -927,7 +925,7 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
* @dmabuf: [in] buffer to detach from.
* @attach: [in] attachment to be detached; is free'd after this call.
*
- * Clean up a device attachment obtained by calling dma_buf_attach().
+ * Clean up a device attachment obtained by calling dma_buf_attach_unlocked().
*
* Optionally this calls &dma_buf_ops.detach for device-specific detach.
*/
@@ -937,21 +935,19 @@ void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
if (WARN_ON(!dmabuf || !attach))
return;
+ dma_resv_lock(attach->dmabuf->resv, NULL);
+
if (attach->sgt) {
- if (dma_buf_is_dynamic(attach->dmabuf))
- dma_resv_lock(attach->dmabuf->resv, NULL);
__unmap_dma_buf(attach, attach->sgt, attach->dir);
- if (dma_buf_is_dynamic(attach->dmabuf)) {
+ if (dma_buf_is_dynamic(attach->dmabuf))
dmabuf->ops->unpin(attach);
- dma_resv_unlock(attach->dmabuf->resv);
- }
}
-
- dma_resv_lock(dmabuf->resv, NULL);
list_del(&attach->node);
+
dma_resv_unlock(dmabuf->resv);
+
if (dmabuf->ops->detach)
dmabuf->ops->detach(dmabuf, attach);
@@ -1011,7 +1007,7 @@ void dma_buf_unpin(struct dma_buf_attachment *attach)
EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
/**
- * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
+ * dma_buf_map_attachment - Returns the scatterlist table of the attachment;
* mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
* dma_buf_ops.
* @attach: [in] attachment whose scatterlist is to be returned
@@ -1030,10 +1026,11 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
*
* Important: Dynamic importers must wait for the exclusive fence of the struct
* dma_resv attached to the DMA-BUF first.
+ *
+ * Importer is responsible for holding dmabuf's reservation lock.
*/
-struct sg_table *
-dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
- enum dma_data_direction direction)
+struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
+ enum dma_data_direction direction)
{
struct sg_table *sg_table;
int r;
@@ -1043,8 +1040,7 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
if (WARN_ON(!attach || !attach->dmabuf))
return ERR_PTR(-EINVAL);
- if (dma_buf_attachment_is_dynamic(attach))
- dma_resv_assert_held(attach->dmabuf->resv);
+ dma_resv_assert_held(attach->dmabuf->resv);
if (attach->sgt) {
/*
@@ -1059,7 +1055,6 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
}
if (dma_buf_is_dynamic(attach->dmabuf)) {
- dma_resv_assert_held(attach->dmabuf->resv);
if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
r = attach->dmabuf->ops->pin(attach);
if (r)
@@ -1099,10 +1094,38 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
#endif /* CONFIG_DMA_API_DEBUG */
return sg_table;
}
+EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
+
+/**
+ * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
+ * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
+ * dma_buf_ops.
+ * @attach: [in] attachment whose scatterlist is to be returned
+ * @direction: [in] direction of DMA transfer
+ *
+ * Unlocked variant of dma_buf_map_attachment().
+ */
+struct sg_table *
+dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
+ enum dma_data_direction direction)
+{
+ struct sg_table *sg_table;
+
+ might_sleep();
+
+ if (WARN_ON(!attach || !attach->dmabuf))
+ return ERR_PTR(-EINVAL);
+
+ dma_resv_lock(attach->dmabuf->resv, NULL);
+ sg_table = dma_buf_map_attachment(attach, direction);
+ dma_resv_unlock(attach->dmabuf->resv);
+
+ return sg_table;
+}
EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
/**
- * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might
+ * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer; might
* deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
* dma_buf_ops.
* @attach: [in] attachment to unmap buffer from
@@ -1110,31 +1133,51 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
* @direction: [in] direction of DMA transfer
*
* This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment().
+ *
+ * Importer is responsible for holding dmabuf's reservation lock.
*/
-void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
- struct sg_table *sg_table,
- enum dma_data_direction direction)
+void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
+ struct sg_table *sg_table,
+ enum dma_data_direction direction)
{
might_sleep();
- if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
- return;
-
- if (dma_buf_attachment_is_dynamic(attach))
- dma_resv_assert_held(attach->dmabuf->resv);
+ dma_resv_assert_held(attach->dmabuf->resv);
if (attach->sgt == sg_table)
return;
- if (dma_buf_is_dynamic(attach->dmabuf))
- dma_resv_assert_held(attach->dmabuf->resv);
-
__unmap_dma_buf(attach, sg_table, direction);
if (dma_buf_is_dynamic(attach->dmabuf) &&
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
dma_buf_unpin(attach);
}
+EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
+
+/**
+ * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer; might
+ * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
+ * dma_buf_ops.
+ * @attach: [in] attachment to unmap buffer from
+ * @sg_table: [in] scatterlist info of the buffer to unmap
+ * @direction: [in] direction of DMA transfer
+ *
+ * Unlocked variant of dma_buf_unmap_attachment().
+ */
+void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
+ struct sg_table *sg_table,
+ enum dma_data_direction direction)
+{
+ might_sleep();
+
+ if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
+ return;
+
+ dma_resv_lock(attach->dmabuf->resv, NULL);
+ dma_buf_unmap_attachment(attach, sg_table, direction);
+ dma_resv_unlock(attach->dmabuf->resv);
+}
EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF);
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index ac1e2911b727..b1c455329023 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -885,7 +885,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev,
struct sg_table *sgt;
attach = gtt->gobj->import_attach;
- sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
+ sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
@@ -1010,7 +1010,7 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev,
struct dma_buf_attachment *attach;
attach = gtt->gobj->import_attach;
- dma_buf_unmap_attachment_unlocked(attach, ttm->sg, DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
ttm->sg = NULL;
}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index cc54a5b1d6ae..276a74bc7fd1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
assert_object_held(obj);
- pages = dma_buf_map_attachment_unlocked(obj->base.import_attach,
- DMA_BIDIRECTIONAL);
+ pages = dma_buf_map_attachment(obj->base.import_attach,
+ DMA_BIDIRECTIONAL);
if (IS_ERR(pages))
return PTR_ERR(pages);
@@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
struct sg_table *pages)
{
- dma_buf_unmap_attachment_unlocked(obj->base.import_attach, pages,
- DMA_BIDIRECTIONAL);
+ dma_buf_unmap_attachment(obj->base.import_attach, pages,
+ DMA_BIDIRECTIONAL);
}
static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 389e9f157ca5..9fbef3aea7b1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -331,7 +331,19 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
continue;
}
+ /*
+ * dma_buf_unmap_attachment() requires the reservation to be
+ * locked. The imported GEM should share the reservation lock,
+ * so it's safe to take the lock.
+ */
+ if (obj->base.import_attach)
+ i915_gem_object_lock(obj, NULL);
+
__i915_gem_object_pages_fini(obj);
+
+ if (obj->base.import_attach)
+ i915_gem_object_unlock(obj);
+
__i915_gem_free_object(obj);
/* But keep the pointer alive for RCU-protected lookups */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index da2057569101..d48d534dc55c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -46,7 +46,7 @@ struct dma_buf_ops {
/**
* @attach:
*
- * This is called from dma_buf_attach() to make sure that a given
+ * This is called from dma_buf_attach_unlocked() to make sure that a given
* &dma_buf_attachment.dev can access the provided &dma_buf. Exporters
* which support buffer objects in special locations like VRAM or
* device-specific carveout areas should check whether the buffer could
@@ -74,7 +74,7 @@ struct dma_buf_ops {
/**
* @detach:
*
- * This is called by dma_buf_detach() to release a &dma_buf_attachment.
+ * This is called by dma_buf_detach_unlocked() to release a &dma_buf_attachment.
* Provided so that exporters can clean up any housekeeping for an
* &dma_buf_attachment.
*
@@ -94,7 +94,7 @@ struct dma_buf_ops {
* exclusive with @cache_sgt_mapping.
*
* This is called automatically for non-dynamic importers from
- * dma_buf_attach().
+ * dma_buf_attach_unlocked().
*
* Note that similar to non-dynamic exporters in their @map_dma_buf
* callback the driver must guarantee that the memory is available for
@@ -509,10 +509,10 @@ struct dma_buf_attach_ops {
* and its user device(s). The list contains one attachment struct per device
* attached to the buffer.
*
- * An attachment is created by calling dma_buf_attach(), and released again by
- * calling dma_buf_detach(). The DMA mapping itself needed to initiate a
- * transfer is created by dma_buf_map_attachment() and freed again by calling
- * dma_buf_unmap_attachment().
+ * An attachment is created by calling dma_buf_attach_unlocked(), and released
+ * again by calling dma_buf_detach_unlocked(). The DMA mapping itself needed to
+ * initiate a transfer is created by dma_buf_map_attachment() and freed
+ * again by calling dma_buf_unmap_attachment().
*/
struct dma_buf_attachment {
struct dma_buf *dmabuf;
@@ -626,6 +626,12 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *,
struct sg_table *,
enum dma_data_direction);
+struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
+ enum dma_data_direction);
+void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
+ struct sg_table *sg_table,
+ enum dma_data_direction direction);
+
void dma_buf_move_notify(struct dma_buf *dma_buf);
int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
enum dma_data_direction dir);
--
2.37.2
Add documentation for the dynamic locking convention. The documentation
tells dma-buf API users when they should take the reservation lock and
when not.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
Documentation/driver-api/dma-buf.rst | 6 +++
drivers/dma-buf/dma-buf.c | 63 ++++++++++++++++++++++++++++
2 files changed, 69 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 36a76cbe9095..622b8156d212 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -119,6 +119,12 @@ DMA Buffer ioctls
.. kernel-doc:: include/uapi/linux/dma-buf.h
+DMA-BUF locking convention
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-buf.c
+ :doc: locking convention
+
Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f2a5a122da4a..696d132b02f4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -794,6 +794,69 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
return sg_table;
}
+/**
+ * DOC: locking convention
+ *
+ * In order to avoid deadlock situations between dma-buf exporters and importers,
+ * all dma-buf API users must follow the common dma-buf locking convention.
+ *
+ * Convention for importers
+ *
+ * 1. Importers must hold the dma-buf reservation lock when calling these
+ * functions:
+ *
+ * - dma_buf_pin()
+ * - dma_buf_unpin()
+ * - dma_buf_map_attachment()
+ * - dma_buf_unmap_attachment()
+ * - dma_buf_vmap()
+ * - dma_buf_vunmap()
+ *
+ * 2. Importers must not hold the dma-buf reservation lock when calling these
+ * functions:
+ *
+ * - dma_buf_attach_unlocked()
+ * - dma_buf_dynamic_attach_unlocked()
+ * - dma_buf_detach_unlocked()
+ * - dma_buf_export()
+ * - dma_buf_fd()
+ * - dma_buf_get()
+ * - dma_buf_put()
+ * - dma_buf_begin_cpu_access()
+ * - dma_buf_end_cpu_access()
+ * - dma_buf_map_attachment_unlocked()
+ * - dma_buf_unmap_attachment_unlocked()
+ * - dma_buf_vmap_unlocked()
+ * - dma_buf_vunmap_unlocked()
+ *
+ * Convention for exporters
+ *
+ * 1. These &dma_buf_ops callbacks are invoked with the dma-buf reservation
+ * unlocked, and the exporter is allowed to take the lock:
+ *
+ * - &dma_buf_ops.attach()
+ * - &dma_buf_ops.detach()
+ * - &dma_buf_ops.release()
+ * - &dma_buf_ops.begin_cpu_access()
+ * - &dma_buf_ops.end_cpu_access()
+ *
+ * 2. These &dma_buf_ops callbacks are invoked with the dma-buf reservation
+ * locked, and the exporter must not take the lock:
+ *
+ * - &dma_buf_ops.pin()
+ * - &dma_buf_ops.unpin()
+ * - &dma_buf_ops.map_dma_buf()
+ * - &dma_buf_ops.unmap_dma_buf()
+ * - &dma_buf_ops.mmap()
+ * - &dma_buf_ops.vmap()
+ * - &dma_buf_ops.vunmap()
+ *
+ * 3. Exporters must hold the dma-buf reservation lock when calling these
+ * functions:
+ *
+ * - dma_buf_move_notify()
+ */
+
/**
* dma_buf_dynamic_attach_unlocked - Add the device to dma_buf's attachments list
* @dmabuf: [in] buffer to attach device to.
--
2.37.2
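To make the convention above concrete, here is a minimal importer sketch
(the function, the device handling and the error paths are hypothetical
and trimmed down; only the lock/unlock placement follows the documented
rules):

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

static int example_import(struct dma_buf *dmabuf, struct device *dev)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret = 0;

	/* attach/detach are called with the reservation unlocked */
	attach = dma_buf_attach_unlocked(dmabuf, dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* map/unmap are called with the reservation locked */
	dma_resv_lock(dmabuf->resv, NULL);

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
	} else {
		/* ... program the device using sgt ... */
		dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	}

	dma_resv_unlock(dmabuf->resv);

	dma_buf_detach_unlocked(dmabuf, attach);

	return ret;
}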
The new common dma-buf locking convention will require buffer importers
to hold the reservation lock around mapping operations. Make the DRM GEM core
take the lock around the vmapping operations and update DRM drivers to
use the locked functions where the DRM core now holds the lock.
This patch prepares DRM core and drivers for transitioning to the common
dma-buf locking convention.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/drm_client.c | 4 ++--
drivers/gpu/drm/drm_gem.c | 24 ++++++++++++++++++++
drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 ++---
drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ++---
drivers/gpu/drm/drm_gem_ttm_helper.c | 9 +-------
drivers/gpu/drm/drm_prime.c | 4 ++--
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 +-
drivers/gpu/drm/lima/lima_sched.c | 4 ++--
drivers/gpu/drm/panfrost/panfrost_dump.c | 4 ++--
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 ++---
drivers/gpu/drm/qxl/qxl_object.c | 17 +++++++-------
drivers/gpu/drm/qxl/qxl_prime.c | 4 ++--
include/drm/drm_gem.h | 3 +++
14 files changed, 58 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 2b230b4d6942..fbcb1e995384 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
* fd_install step out of the driver backend hooks, to make that
* final step optional for internal users.
*/
- ret = drm_gem_vmap(buffer->gem, map);
+ ret = drm_gem_vmap_unlocked(buffer->gem, map);
if (ret)
return ret;
@@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
{
struct iosys_map *map = &buffer->map;
- drm_gem_vunmap(buffer->gem, map);
+ drm_gem_vunmap_unlocked(buffer->gem, map);
}
EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ad068865ba20..9c55593d662d 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1156,6 +1156,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
{
int ret;
+ dma_resv_assert_held(obj->resv);
+
if (!obj->funcs->vmap)
return -EOPNOTSUPP;
@@ -1171,6 +1173,8 @@ EXPORT_SYMBOL(drm_gem_vmap);
void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
{
+ dma_resv_assert_held(obj->resv);
+
if (iosys_map_is_null(map))
return;
@@ -1182,6 +1186,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
}
EXPORT_SYMBOL(drm_gem_vunmap);
+int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+{
+ int ret;
+
+ dma_resv_lock(obj->resv, NULL);
+ ret = drm_gem_vmap(obj, map);
+ dma_resv_unlock(obj->resv);
+
+ return ret;
+}
+EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+
+void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+{
+ dma_resv_lock(obj->resv, NULL);
+ drm_gem_vunmap(obj, map);
+ dma_resv_unlock(obj->resv);
+}
+EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+
/**
* drm_gem_lock_reservations - Sets up the ww context and acquires
* the lock on an array of GEM objects.
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 880a4975507f..e35e224e6303 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -354,7 +354,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
ret = -EINVAL;
goto err_drm_gem_vunmap;
}
- ret = drm_gem_vmap(obj, &map[i]);
+ ret = drm_gem_vmap_unlocked(obj, &map[i]);
if (ret)
goto err_drm_gem_vunmap;
}
@@ -376,7 +376,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
obj = drm_gem_fb_get_obj(fb, i);
if (!obj)
continue;
- drm_gem_vunmap(obj, &map[i]);
+ drm_gem_vunmap_unlocked(obj, &map[i]);
}
return ret;
}
@@ -403,7 +403,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
continue;
if (iosys_map_is_null(&map[i]))
continue;
- drm_gem_vunmap(obj, &map[i]);
+ drm_gem_vunmap_unlocked(obj, &map[i]);
}
}
EXPORT_SYMBOL(drm_gem_fb_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5f572716306d..a3c549792915 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -299,10 +299,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
}
if (obj->import_attach) {
- ret = dma_buf_vmap_unlocked(obj->import_attach->dmabuf, map);
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
if (!ret) {
if (WARN_ON(map->is_iomem)) {
- dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
ret = -EIO;
goto err_put_pages;
}
@@ -383,7 +383,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
return;
if (obj->import_attach) {
- dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
vunmap(shmem->vaddr);
drm_gem_shmem_put_pages(shmem);
diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index e5fc875990c4..d5962a34c01d 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -64,13 +64,8 @@ int drm_gem_ttm_vmap(struct drm_gem_object *gem,
struct iosys_map *map)
{
struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
- int ret;
-
- dma_resv_lock(gem->resv, NULL);
- ret = ttm_bo_vmap(bo, map);
- dma_resv_unlock(gem->resv);
- return ret;
+ return ttm_bo_vmap(bo, map);
}
EXPORT_SYMBOL(drm_gem_ttm_vmap);
@@ -87,9 +82,7 @@ void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
{
struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
- dma_resv_lock(gem->resv, NULL);
ttm_bo_vunmap(bo, map);
- dma_resv_unlock(gem->resv);
}
EXPORT_SYMBOL(drm_gem_ttm_vunmap);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index e9b7d3fa67f1..7a7710158880 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -682,7 +682,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- return drm_gem_vmap(obj, map);
+ return drm_gem_vmap_unlocked(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -698,7 +698,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_vunmap(obj, map);
+ drm_gem_vunmap_unlocked(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index ae6c1eda0a72..9430d025c219 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -82,7 +82,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
lockdep_assert_held(&etnaviv_obj->lock);
- ret = dma_buf_vmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
+ ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
if (ret)
return NULL;
return map.vaddr;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 5ecea7df98b1..cc54a5b1d6ae 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf,
struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
void *vaddr;
- vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB);
+ vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr))
return PTR_ERR(vaddr);
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index e82931712d8a..ff003403fbbc 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- ret = drm_gem_shmem_vmap(&bo->base, &map);
+ ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
if (ret) {
kvfree(et);
goto out;
@@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_shmem_vunmap(&bo->base, &map);
+ drm_gem_vunmap_unlocked(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
index 89056a1aac7d..f62a019cc523 100644
--- a/drivers/gpu/drm/panfrost/panfrost_dump.c
+++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
@@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
goto dump_header;
}
- ret = drm_gem_shmem_vmap(&bo->base, &map);
+ ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
if (ret) {
dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
iter.hdr->bomap.valid = 0;
@@ -236,7 +236,7 @@ void panfrost_core_dump(struct panfrost_job *job)
vaddr = map.vaddr;
memcpy(iter.data, vaddr, bo->base.base.size);
- drm_gem_shmem_vunmap(&bo->base, &map);
+ drm_gem_vunmap_unlocked(&bo->base.base, &map);
iter.hdr->bomap.valid = cpu_to_le32(1);
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index bc0df93f7f21..ba9b6e2b2636 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- ret = drm_gem_shmem_vmap(bo, &map);
+ ret = drm_gem_vmap_unlocked(&bo->base, &map);
if (ret)
goto err_put_mapping;
perfcnt->buf = map.vaddr;
@@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_shmem_vunmap(bo, &map);
+ drm_gem_vunmap_unlocked(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base, &map);
+ drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 695d9308d1f0..06a58dad5f5c 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map)
bo->map_count++;
goto out;
}
- r = ttm_bo_vmap(&bo->tbo, &bo->map);
+
+ r = __qxl_bo_pin(bo);
if (r)
return r;
+
+ r = ttm_bo_vmap(&bo->tbo, &bo->map);
+ if (r) {
+ __qxl_bo_unpin(bo);
+ return r;
+ }
bo->map_count = 1;
/* TODO: Remove kptr in favor of map everywhere. */
@@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map)
if (r)
return r;
- r = __qxl_bo_pin(bo);
- if (r) {
- qxl_bo_unreserve(bo);
- return r;
- }
-
r = qxl_bo_vmap_locked(bo, map);
qxl_bo_unreserve(bo);
return r;
@@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo)
return;
bo->kptr = NULL;
ttm_bo_vunmap(&bo->tbo, &bo->map);
+ __qxl_bo_unpin(bo);
}
int qxl_bo_vunmap(struct qxl_bo *bo)
@@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo)
return r;
qxl_bo_vunmap_locked(bo);
- __qxl_bo_unpin(bo);
qxl_bo_unreserve(bo);
return 0;
}
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 142d01415acb..9169c26357d3 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
struct qxl_bo *bo = gem_to_qxl_bo(obj);
int ret;
- ret = qxl_bo_vmap(bo, map);
+ ret = qxl_bo_vmap_locked(bo, map);
if (ret < 0)
return ret;
@@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
{
struct qxl_bo *bo = gem_to_qxl_bo(obj);
- qxl_bo_vunmap(bo);
+ qxl_bo_vunmap_locked(bo);
}
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 58a18a17c67e..30096f9efdbf 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -420,4 +420,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
u32 handle, u64 *offset);
+int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+
#endif /* __DRM_GEM_H__ */
--
2.37.2
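For illustration, this is the usage pattern the new helpers enable: a
hypothetical dump/copy helper that doesn't already hold the reservation
lock simply uses the _unlocked wrappers, which take and drop obj->resv
internally (the names and the system-memory assumption are illustrative):

#include <drm/drm_gem.h>
#include <linux/iosys-map.h>
#include <linux/string.h>

static int example_copy_bo(struct drm_gem_object *obj, void *dst, size_t len)
{
	struct iosys_map map;
	int ret;

	ret = drm_gem_vmap_unlocked(obj, &map);	/* takes obj->resv */
	if (ret)
		return ret;

	/* assumes a system-memory BO, i.e. map.is_iomem == false */
	memcpy(dst, map.vaddr, len);

	drm_gem_vunmap_unlocked(obj, &map);	/* takes obj->resv again */

	return 0;
}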
The internal dma-buf lock isn't needed anymore because the updated
locking specification requires that the dma-buf reservation be locked
by importers, and thus the internal data is already protected by the
reservation lock. Remove the obsolete internal lock.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 14 ++++----------
include/linux/dma-buf.h | 9 ---------
2 files changed, 4 insertions(+), 19 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 696d132b02f4..a0406254f0ae 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -656,7 +656,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
dmabuf->file = file;
- mutex_init(&dmabuf->lock);
INIT_LIST_HEAD(&dmabuf->attachments);
mutex_lock(&db_list.lock);
@@ -1503,7 +1502,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
struct iosys_map ptr;
- int ret = 0;
+ int ret;
iosys_map_clear(map);
@@ -1515,28 +1514,25 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
if (!dmabuf->ops->vmap)
return -EINVAL;
- mutex_lock(&dmabuf->lock);
if (dmabuf->vmapping_counter) {
dmabuf->vmapping_counter++;
BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
*map = dmabuf->vmap_ptr;
- goto out_unlock;
+ return 0;
}
BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr));
ret = dmabuf->ops->vmap(dmabuf, &ptr);
if (WARN_ON_ONCE(ret))
- goto out_unlock;
+ return ret;
dmabuf->vmap_ptr = ptr;
dmabuf->vmapping_counter = 1;
*map = dmabuf->vmap_ptr;
-out_unlock:
- mutex_unlock(&dmabuf->lock);
- return ret;
+ return 0;
}
EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
@@ -1578,13 +1574,11 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
BUG_ON(dmabuf->vmapping_counter == 0);
BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map));
- mutex_lock(&dmabuf->lock);
if (--dmabuf->vmapping_counter == 0) {
if (dmabuf->ops->vunmap)
dmabuf->ops->vunmap(dmabuf, map);
iosys_map_clear(&dmabuf->vmap_ptr);
}
- mutex_unlock(&dmabuf->lock);
}
EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index d48d534dc55c..aed6695bbb50 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -326,15 +326,6 @@ struct dma_buf {
/** @ops: dma_buf_ops associated with this buffer object. */
const struct dma_buf_ops *ops;
- /**
- * @lock:
- *
- * Used internally to serialize list manipulation, attach/detach and
- * vmap/unmap. Note that in many cases this is superseeded by
- * dma_resv_lock() on @resv.
- */
- struct mutex lock;
-
/**
* @vmapping_counter:
*
--
2.37.2
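With the mutex gone, vmapping_counter and vmap_ptr are serialized purely
by the reservation lock, which callers of dma_buf_vmap()/dma_buf_vunmap()
must already hold. A sketch of such a caller (this is essentially what
the dma_buf_vmap_unlocked() wrapper added earlier in the series does):

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

static int example_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_vmap(dmabuf, map);	/* counter update is safe here */
	dma_resv_unlock(dmabuf->resv);

	return ret;
}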
Move the dma_buf_mmap_unlocked() function to the dynamic locking specification
by taking the reservation lock. None of today's drivers take the
reservation lock within the mmap() callback, hence it's safe to enforce
the locking.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/dma-buf/dma-buf.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f358af401360..4556a12bd741 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1348,6 +1348,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1368,7 +1370,11 @@ int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ dma_resv_lock(dmabuf->resv, NULL);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+ dma_resv_unlock(dmabuf->resv);
+
+ return ret;
}
EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
--
2.37.2
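A sketch of how a hypothetical importer would forward a userspace mmap
request under the new behavior; the exporter's ->mmap() callback now runs
with the reservation lock held by dma_buf_mmap_unlocked() itself (the
file->private_data layout is illustrative only):

#include <linux/dma-buf.h>
#include <linux/fs.h>
#include <linux/mm.h>

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct dma_buf *dmabuf = file->private_data;	/* illustrative */

	/* the exporter's ->mmap() is invoked under dmabuf->resv */
	return dma_buf_mmap_unlocked(dmabuf, vma, 0);
}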
All drivers that use dma-bufs have been moved to the updated locking
specification and now dma-buf reservation is guaranteed to be locked
by importers during the mapping operations. There is no need to take
the internal dma-buf lock anymore. Remove locking from the videobuf2
memory allocators.
Acked-by: Tomasz Figa <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/media/common/videobuf2/videobuf2-dma-contig.c | 11 +----------
drivers/media/common/videobuf2/videobuf2-dma-sg.c | 11 +----------
drivers/media/common/videobuf2/videobuf2-vmalloc.c | 11 +----------
3 files changed, 3 insertions(+), 30 deletions(-)
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index de762dbdaf78..2c69bf0470e7 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map(
struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
{
struct vb2_dc_attachment *attach = db_attach->priv;
- /* stealing dmabuf mutex to serialize map/unmap operations */
- struct mutex *lock = &db_attach->dmabuf->lock;
struct sg_table *sgt;
- mutex_lock(lock);
-
sgt = &attach->sgt;
/* return previously mapped sg table */
- if (attach->dma_dir == dma_dir) {
- mutex_unlock(lock);
+ if (attach->dma_dir == dma_dir)
return sgt;
- }
/* release any previous cache */
if (attach->dma_dir != DMA_NONE) {
@@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map(
if (dma_map_sgtable(db_attach->dev, sgt, dma_dir,
DMA_ATTR_SKIP_CPU_SYNC)) {
pr_err("failed to map scatterlist\n");
- mutex_unlock(lock);
return ERR_PTR(-EIO);
}
attach->dma_dir = dma_dir;
- mutex_unlock(lock);
-
return sgt;
}
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 39e11600304a..e63e718c0bf7 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
{
struct vb2_dma_sg_attachment *attach = db_attach->priv;
- /* stealing dmabuf mutex to serialize map/unmap operations */
- struct mutex *lock = &db_attach->dmabuf->lock;
struct sg_table *sgt;
- mutex_lock(lock);
-
sgt = &attach->sgt;
/* return previously mapped sg table */
- if (attach->dma_dir == dma_dir) {
- mutex_unlock(lock);
+ if (attach->dma_dir == dma_dir)
return sgt;
- }
/* release any previous cache */
if (attach->dma_dir != DMA_NONE) {
@@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
/* mapping to the client with new direction */
if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
pr_err("failed to map scatterlist\n");
- mutex_unlock(lock);
return ERR_PTR(-EIO);
}
attach->dma_dir = dma_dir;
- mutex_unlock(lock);
-
return sgt;
}
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index 7831bf545874..41db707e43a4 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
{
struct vb2_vmalloc_attachment *attach = db_attach->priv;
- /* stealing dmabuf mutex to serialize map/unmap operations */
- struct mutex *lock = &db_attach->dmabuf->lock;
struct sg_table *sgt;
- mutex_lock(lock);
-
sgt = &attach->sgt;
/* return previously mapped sg table */
- if (attach->dma_dir == dma_dir) {
- mutex_unlock(lock);
+ if (attach->dma_dir == dma_dir)
return sgt;
- }
/* release any previous cache */
if (attach->dma_dir != DMA_NONE) {
@@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
/* mapping to the client with new direction */
if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
pr_err("failed to map scatterlist\n");
- mutex_unlock(lock);
return ERR_PTR(-EIO);
}
attach->dma_dir = dma_dir;
- mutex_unlock(lock);
-
return sgt;
}
--
2.37.2
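As a result, an exporter-side map callback in the vb2 style reduces to
the sketch below; the struct and names are hypothetical, and the
dma_resv_assert_held() is an optional defensive check (not part of this
series) documenting that the importer now holds the lock:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-resv.h>

struct example_attachment {
	struct sg_table sgt;
	enum dma_data_direction dma_dir;
};

static struct sg_table *
example_map_dma_buf(struct dma_buf_attachment *db_attach,
		    enum dma_data_direction dma_dir)
{
	struct example_attachment *attach = db_attach->priv;
	struct sg_table *sgt = &attach->sgt;

	/* the importer holds the reservation lock, no internal mutex needed */
	dma_resv_assert_held(db_attach->dmabuf->resv);

	/* return a previously mapped sg table */
	if (attach->dma_dir == dma_dir)
		return sgt;

	if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0))
		return ERR_PTR(-EIO);

	attach->dma_dir = dma_dir;

	return sgt;
}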
Nice!
Acked-by: Hans Verkuil <[email protected]>
Regards,
Hans
On 24/08/2022 12:22, Dmitry Osipenko wrote:
> All drivers that use dma-bufs have been moved to the updated locking
> specification and now dma-buf reservation is guaranteed to be locked
> by importers during the mapping operations. There is no need to take
> the internal dma-buf lock anymore. Remove locking from the videobuf2
> memory allocators.
>
> Acked-by: Tomasz Figa <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/media/common/videobuf2/videobuf2-dma-contig.c | 11 +----------
> drivers/media/common/videobuf2/videobuf2-dma-sg.c | 11 +----------
> drivers/media/common/videobuf2/videobuf2-vmalloc.c | 11 +----------
> 3 files changed, 3 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index de762dbdaf78..2c69bf0470e7 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map(
> struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
> {
> struct vb2_dc_attachment *attach = db_attach->priv;
> - /* stealing dmabuf mutex to serialize map/unmap operations */
> - struct mutex *lock = &db_attach->dmabuf->lock;
> struct sg_table *sgt;
>
> - mutex_lock(lock);
> -
> sgt = &attach->sgt;
> /* return previously mapped sg table */
> - if (attach->dma_dir == dma_dir) {
> - mutex_unlock(lock);
> + if (attach->dma_dir == dma_dir)
> return sgt;
> - }
>
> /* release any previous cache */
> if (attach->dma_dir != DMA_NONE) {
> @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map(
> if (dma_map_sgtable(db_attach->dev, sgt, dma_dir,
> DMA_ATTR_SKIP_CPU_SYNC)) {
> pr_err("failed to map scatterlist\n");
> - mutex_unlock(lock);
> return ERR_PTR(-EIO);
> }
>
> attach->dma_dir = dma_dir;
>
> - mutex_unlock(lock);
> -
> return sgt;
> }
>
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> index 39e11600304a..e63e718c0bf7 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
> struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
> {
> struct vb2_dma_sg_attachment *attach = db_attach->priv;
> - /* stealing dmabuf mutex to serialize map/unmap operations */
> - struct mutex *lock = &db_attach->dmabuf->lock;
> struct sg_table *sgt;
>
> - mutex_lock(lock);
> -
> sgt = &attach->sgt;
> /* return previously mapped sg table */
> - if (attach->dma_dir == dma_dir) {
> - mutex_unlock(lock);
> + if (attach->dma_dir == dma_dir)
> return sgt;
> - }
>
> /* release any previous cache */
> if (attach->dma_dir != DMA_NONE) {
> @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
> /* mapping to the client with new direction */
> if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
> pr_err("failed to map scatterlist\n");
> - mutex_unlock(lock);
> return ERR_PTR(-EIO);
> }
>
> attach->dma_dir = dma_dir;
>
> - mutex_unlock(lock);
> -
> return sgt;
> }
>
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index 7831bf545874..41db707e43a4 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
> struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
> {
> struct vb2_vmalloc_attachment *attach = db_attach->priv;
> - /* stealing dmabuf mutex to serialize map/unmap operations */
> - struct mutex *lock = &db_attach->dmabuf->lock;
> struct sg_table *sgt;
>
> - mutex_lock(lock);
> -
> sgt = &attach->sgt;
> /* return previously mapped sg table */
> - if (attach->dma_dir == dma_dir) {
> - mutex_unlock(lock);
> + if (attach->dma_dir == dma_dir)
> return sgt;
> - }
>
> /* release any previous cache */
> if (attach->dma_dir != DMA_NONE) {
> @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
> /* mapping to the client with new direction */
> if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
> pr_err("failed to map scatterlist\n");
> - mutex_unlock(lock);
> return ERR_PTR(-EIO);
> }
>
> attach->dma_dir = dma_dir;
>
> - mutex_unlock(lock);
> -
> return sgt;
> }
>
Am 24.08.22 um 12:22 schrieb Dmitry Osipenko:
> The internal dma-buf lock isn't needed anymore because the updated
> locking specification requires that the dma-buf reservation be locked
> by importers, and thus the internal data is already protected by the
> reservation lock. Remove the obsolete internal lock.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
Reviewed-by: Christian König <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 14 ++++----------
> include/linux/dma-buf.h | 9 ---------
> 2 files changed, 4 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 696d132b02f4..a0406254f0ae 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -656,7 +656,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
>
> dmabuf->file = file;
>
> - mutex_init(&dmabuf->lock);
> INIT_LIST_HEAD(&dmabuf->attachments);
>
> mutex_lock(&db_list.lock);
> @@ -1503,7 +1502,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
> int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> {
> struct iosys_map ptr;
> - int ret = 0;
> + int ret;
>
> iosys_map_clear(map);
>
> @@ -1515,28 +1514,25 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> if (!dmabuf->ops->vmap)
> return -EINVAL;
>
> - mutex_lock(&dmabuf->lock);
> if (dmabuf->vmapping_counter) {
> dmabuf->vmapping_counter++;
> BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
> *map = dmabuf->vmap_ptr;
> - goto out_unlock;
> + return 0;
> }
>
> BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr));
>
> ret = dmabuf->ops->vmap(dmabuf, &ptr);
> if (WARN_ON_ONCE(ret))
> - goto out_unlock;
> + return ret;
>
> dmabuf->vmap_ptr = ptr;
> dmabuf->vmapping_counter = 1;
>
> *map = dmabuf->vmap_ptr;
>
> -out_unlock:
> - mutex_unlock(&dmabuf->lock);
> - return ret;
> + return 0;
> }
> EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
>
> @@ -1578,13 +1574,11 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> BUG_ON(dmabuf->vmapping_counter == 0);
> BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map));
>
> - mutex_lock(&dmabuf->lock);
> if (--dmabuf->vmapping_counter == 0) {
> if (dmabuf->ops->vunmap)
> dmabuf->ops->vunmap(dmabuf, map);
> iosys_map_clear(&dmabuf->vmap_ptr);
> }
> - mutex_unlock(&dmabuf->lock);
> }
> EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
>
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index d48d534dc55c..aed6695bbb50 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -326,15 +326,6 @@ struct dma_buf {
> /** @ops: dma_buf_ops associated with this buffer object. */
> const struct dma_buf_ops *ops;
>
> - /**
> - * @lock:
> - *
> - * Used internally to serialize list manipulation, attach/detach and
> - * vmap/unmap. Note that in many cases this is superseeded by
> - * dma_resv_lock() on @resv.
> - */
> - struct mutex lock;
> -
> /**
> * @vmapping_counter:
> *
This should work, but I'm really wondering if this makes a difference
for somebody.
Anyway the approach is fine with me: Acked-by: Christian König <[email protected]>
Regards,
Christian.
Am 24.08.22 um 12:22 schrieb Dmitry Osipenko:
> Move the dma_buf_mmap_unlocked() function to the dynamic locking specification
> by taking the reservation lock. None of today's drivers take the
> reservation lock within the mmap() callback, hence it's safe to enforce
> the locking.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index f358af401360..4556a12bd741 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1348,6 +1348,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
> int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> unsigned long pgoff)
> {
> + int ret;
> +
> if (WARN_ON(!dmabuf || !vma))
> return -EINVAL;
>
> @@ -1368,7 +1370,11 @@ int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> vma_set_file(vma, dmabuf->file);
> vma->vm_pgoff = pgoff;
>
> - return dmabuf->ops->mmap(dmabuf, vma);
> + dma_resv_lock(dmabuf->resv, NULL);
> + ret = dmabuf->ops->mmap(dmabuf, vma);
> + dma_resv_unlock(dmabuf->resv);
> +
> + return ret;
> }
> EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
>
Am 24.08.22 um 12:22 schrieb Dmitry Osipenko:
> Move dma-buf attachment API functions to the dynamic locking specification.
> The strict locking convention prevents deadlock situations for dma-buf
> importers and exporters.
>
> Previously, the "unlocked" versions of the attachment API functions
> weren't taking the reservation lock and this patch makes them take
> the lock.
Didn't we conclude that we need to keep the attach and detach callbacks
without the lock and only move the map/unmap callbacks over?
Otherwise it won't be possible for drivers to lock multiple buffers if
they have to shuffle things around for a specific attachment.
Regards,
Christian.
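To sketch the concern: an exporter that has to shuffle buffers for a
specific attachment may need to lock several reservations with a single
ww context, which it can't do if the core already holds this buffer's
lock when ->attach() runs. A minimal sketch, with the -EDEADLK slow path
(dma_resv_lock_slow() plus retry) omitted:

#include <linux/dma-resv.h>

static int example_lock_pair(struct dma_resv *a, struct dma_resv *b)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &reservation_ww_class);

	ret = dma_resv_lock(a, &ctx);
	if (ret)
		goto fini;

	ret = dma_resv_lock(b, &ctx);
	if (ret) {
		/* on -EDEADLK real code backs off and retries */
		dma_resv_unlock(a);
		goto fini;
	}

	/* ... shuffle the backing storage around ... */

	dma_resv_unlock(b);
	dma_resv_unlock(a);
fini:
	ww_acquire_fini(&ctx);
	return ret;
}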
>
> Intel and AMD GPU drivers were already mapping the attached dma-bufs under
> the held lock during attachment, hence these drivers are updated to use
> the locked functions.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 115 ++++++++++++++-------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
> drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
> include/linux/dma-buf.h | 20 ++--
> 5 files changed, 110 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 4556a12bd741..f2a5a122da4a 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
> * 2. Userspace passes this file-descriptors to all drivers it wants this buffer
> * to share with: First the file descriptor is converted to a &dma_buf using
> * dma_buf_get(). Then the buffer is attached to the device using
> - * dma_buf_attach().
> + * dma_buf_attach_unlocked().
> *
> * Up to this stage the exporter is still free to migrate or reallocate the
> * backing storage.
> @@ -569,8 +569,8 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
> * dma_buf_map_attachment() and dma_buf_unmap_attachment().
> *
> * 4. Once a driver is done with a shared buffer it needs to call
> - * dma_buf_detach() (after cleaning up any mappings) and then release the
> - * reference acquired with dma_buf_get() by calling dma_buf_put().
> + * dma_buf_detach_unlocked() (after cleaning up any mappings) and then
> + * release the reference acquired with dma_buf_get() by calling dma_buf_put().
> *
> * For the detailed semantics exporters are expected to implement see
> * &dma_buf_ops.
> @@ -802,7 +802,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
> * @importer_priv: [in] importer private pointer for the attachment
> *
> * Returns struct dma_buf_attachment pointer for this attachment. Attachments
> - * must be cleaned up by calling dma_buf_detach().
> + * must be cleaned up by calling dma_buf_detach_unlocked().
> *
> * Optionally this calls &dma_buf_ops.attach to allow device-specific attach
> * functionality.
> @@ -858,8 +858,8 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
> dma_buf_is_dynamic(dmabuf)) {
> struct sg_table *sgt;
>
> + dma_resv_lock(attach->dmabuf->resv, NULL);
> if (dma_buf_is_dynamic(attach->dmabuf)) {
> - dma_resv_lock(attach->dmabuf->resv, NULL);
> ret = dmabuf->ops->pin(attach);
> if (ret)
> goto err_unlock;
> @@ -872,8 +872,7 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
> ret = PTR_ERR(sgt);
> goto err_unpin;
> }
> - if (dma_buf_is_dynamic(attach->dmabuf))
> - dma_resv_unlock(attach->dmabuf->resv);
> + dma_resv_unlock(attach->dmabuf->resv);
> attach->sgt = sgt;
> attach->dir = DMA_BIDIRECTIONAL;
> }
> @@ -889,8 +888,7 @@ dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
> dmabuf->ops->unpin(attach);
>
> err_unlock:
> - if (dma_buf_is_dynamic(attach->dmabuf))
> - dma_resv_unlock(attach->dmabuf->resv);
> + dma_resv_unlock(attach->dmabuf->resv);
>
> dma_buf_detach_unlocked(dmabuf, attach);
> return ERR_PTR(ret);
> @@ -927,7 +925,7 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
> * @dmabuf: [in] buffer to detach from.
> * @attach: [in] attachment to be detached; is free'd after this call.
> *
> - * Clean up a device attachment obtained by calling dma_buf_attach().
> + * Clean up a device attachment obtained by calling dma_buf_attach_unlocked().
> *
> * Optionally this calls &dma_buf_ops.detach for device-specific detach.
> */
> @@ -937,21 +935,19 @@ void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
> if (WARN_ON(!dmabuf || !attach))
> return;
>
> + dma_resv_lock(attach->dmabuf->resv, NULL);
> +
> if (attach->sgt) {
> - if (dma_buf_is_dynamic(attach->dmabuf))
> - dma_resv_lock(attach->dmabuf->resv, NULL);
>
> __unmap_dma_buf(attach, attach->sgt, attach->dir);
>
> - if (dma_buf_is_dynamic(attach->dmabuf)) {
> + if (dma_buf_is_dynamic(attach->dmabuf))
> dmabuf->ops->unpin(attach);
> - dma_resv_unlock(attach->dmabuf->resv);
> - }
> }
> -
> - dma_resv_lock(dmabuf->resv, NULL);
> list_del(&attach->node);
> +
> dma_resv_unlock(dmabuf->resv);
> +
> if (dmabuf->ops->detach)
> dmabuf->ops->detach(dmabuf, attach);
>
> @@ -1011,7 +1007,7 @@ void dma_buf_unpin(struct dma_buf_attachment *attach)
> EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
>
> /**
> - * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
> + * dma_buf_map_attachment - Returns the scatterlist table of the attachment;
> * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
> * dma_buf_ops.
> * @attach: [in] attachment whose scatterlist is to be returned
> @@ -1030,10 +1026,11 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
> *
> * Important: Dynamic importers must wait for the exclusive fence of the struct
> * dma_resv attached to the DMA-BUF first.
> + *
> + * Importer is responsible for holding dmabuf's reservation lock.
> */
> -struct sg_table *
> -dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> - enum dma_data_direction direction)
> +struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> + enum dma_data_direction direction)
> {
> struct sg_table *sg_table;
> int r;
> @@ -1043,8 +1040,7 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> if (WARN_ON(!attach || !attach->dmabuf))
> return ERR_PTR(-EINVAL);
>
> - if (dma_buf_attachment_is_dynamic(attach))
> - dma_resv_assert_held(attach->dmabuf->resv);
> + dma_resv_assert_held(attach->dmabuf->resv);
>
> if (attach->sgt) {
> /*
> @@ -1059,7 +1055,6 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> }
>
> if (dma_buf_is_dynamic(attach->dmabuf)) {
> - dma_resv_assert_held(attach->dmabuf->resv);
> if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
> r = attach->dmabuf->ops->pin(attach);
> if (r)
> @@ -1099,10 +1094,38 @@ dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> #endif /* CONFIG_DMA_API_DEBUG */
> return sg_table;
> }
> +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
> +
> +/**
> + * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
> + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
> + * dma_buf_ops.
> + * @attach: [in] attachment whose scatterlist is to be returned
> + * @direction: [in] direction of DMA transfer
> + *
> + * Unlocked variant of dma_buf_map_attachment().
> + */
> +struct sg_table *
> +dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> + enum dma_data_direction direction)
> +{
> + struct sg_table *sg_table;
> +
> + might_sleep();
> +
> + if (WARN_ON(!attach || !attach->dmabuf))
> + return ERR_PTR(-EINVAL);
> +
> + dma_resv_lock(attach->dmabuf->resv, NULL);
> + sg_table = dma_buf_map_attachment(attach, direction);
> + dma_resv_unlock(attach->dmabuf->resv);
> +
> + return sg_table;
> +}
> EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
>
> /**
> - * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might
> + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer; might
> * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
> * dma_buf_ops.
> * @attach: [in] attachment to unmap buffer from
> @@ -1110,31 +1133,51 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
> * @direction: [in] direction of DMA transfer
> *
> * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment().
> + *
> + * Importer is responsible for holding dmabuf's reservation lock.
> */
> -void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
> - struct sg_table *sg_table,
> - enum dma_data_direction direction)
> +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> + struct sg_table *sg_table,
> + enum dma_data_direction direction)
> {
> might_sleep();
>
> - if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
> - return;
> -
> - if (dma_buf_attachment_is_dynamic(attach))
> - dma_resv_assert_held(attach->dmabuf->resv);
> + dma_resv_assert_held(attach->dmabuf->resv);
>
> if (attach->sgt == sg_table)
> return;
>
> - if (dma_buf_is_dynamic(attach->dmabuf))
> - dma_resv_assert_held(attach->dmabuf->resv);
> -
> __unmap_dma_buf(attach, sg_table, direction);
>
> if (dma_buf_is_dynamic(attach->dmabuf) &&
> !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
> dma_buf_unpin(attach);
> }
> +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
> +
> +/**
> + * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer; might
> + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
> + * dma_buf_ops.
> + * @attach: [in] attachment to unmap buffer from
> + * @sg_table: [in] scatterlist info of the buffer to unmap
> + * @direction: [in] direction of DMA transfer
> + *
> + * Unlocked variant of dma_buf_unmap_attachment().
> + */
> +void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
> + struct sg_table *sg_table,
> + enum dma_data_direction direction)
> +{
> + might_sleep();
> +
> + if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
> + return;
> +
> + dma_resv_lock(attach->dmabuf->resv, NULL);
> + dma_buf_unmap_attachment(attach, sg_table, direction);
> + dma_resv_unlock(attach->dmabuf->resv);
> +}
> EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF);
>
> /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index ac1e2911b727..b1c455329023 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -885,7 +885,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev,
> struct sg_table *sgt;
>
> attach = gtt->gobj->import_attach;
> - sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
> + sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt))
> return PTR_ERR(sgt);
>
> @@ -1010,7 +1010,7 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev,
> struct dma_buf_attachment *attach;
>
> attach = gtt->gobj->import_attach;
> - dma_buf_unmap_attachment_unlocked(attach, ttm->sg, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
> ttm->sg = NULL;
> }
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index cc54a5b1d6ae..276a74bc7fd1 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
>
> assert_object_held(obj);
>
> - pages = dma_buf_map_attachment_unlocked(obj->base.import_attach,
> - DMA_BIDIRECTIONAL);
> + pages = dma_buf_map_attachment(obj->base.import_attach,
> + DMA_BIDIRECTIONAL);
> if (IS_ERR(pages))
> return PTR_ERR(pages);
>
> @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
> static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
> struct sg_table *pages)
> {
> - dma_buf_unmap_attachment_unlocked(obj->base.import_attach, pages,
> - DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment(obj->base.import_attach, pages,
> + DMA_BIDIRECTIONAL);
> }
>
> static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> index 389e9f157ca5..9fbef3aea7b1 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
> @@ -331,7 +331,19 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
> continue;
> }
>
> + /*
> + * dma_buf_unmap_attachment() requires reservation to be
> + * locked. The imported GEM should share reservation lock,
> + * so it's safe to take the lock.
> + */
> + if (obj->base.import_attach)
> + i915_gem_object_lock(obj, NULL);
> +
> __i915_gem_object_pages_fini(obj);
> +
> + if (obj->base.import_attach)
> + i915_gem_object_unlock(obj);
> +
> __i915_gem_free_object(obj);
>
> /* But keep the pointer alive for RCU-protected lookups */
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index da2057569101..d48d534dc55c 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -46,7 +46,7 @@ struct dma_buf_ops {
> /**
> * @attach:
> *
> - * This is called from dma_buf_attach() to make sure that a given
> + * This is called from dma_buf_attach_unlocked() to make sure that a given
> * &dma_buf_attachment.dev can access the provided &dma_buf. Exporters
> * which support buffer objects in special locations like VRAM or
> * device-specific carveout areas should check whether the buffer could
> @@ -74,7 +74,7 @@ struct dma_buf_ops {
> /**
> * @detach:
> *
> - * This is called by dma_buf_detach() to release a &dma_buf_attachment.
> + * This is called by dma_buf_detach_unlocked() to release a &dma_buf_attachment.
> * Provided so that exporters can clean up any housekeeping for an
> * &dma_buf_attachment.
> *
> @@ -94,7 +94,7 @@ struct dma_buf_ops {
> * exclusive with @cache_sgt_mapping.
> *
> * This is called automatically for non-dynamic importers from
> - * dma_buf_attach().
> + * dma_buf_attach_unlocked().
> *
> * Note that similar to non-dynamic exporters in their @map_dma_buf
> * callback the driver must guarantee that the memory is available for
> @@ -509,10 +509,10 @@ struct dma_buf_attach_ops {
> * and its user device(s). The list contains one attachment struct per device
> * attached to the buffer.
> *
> - * An attachment is created by calling dma_buf_attach(), and released again by
> - * calling dma_buf_detach(). The DMA mapping itself needed to initiate a
> - * transfer is created by dma_buf_map_attachment() and freed again by calling
> - * dma_buf_unmap_attachment().
> + * An attachment is created by calling dma_buf_attach_unlocked(), and released
> + * again by calling dma_buf_detach_unlocked(). The DMA mapping itself needed to
> + * initiate a transfer is created by dma_buf_map_attachment() and freed
> + * again by calling dma_buf_unmap_attachment().
> */
> struct dma_buf_attachment {
> struct dma_buf *dmabuf;
> @@ -626,6 +626,12 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *,
> struct sg_table *,
> enum dma_data_direction);
>
> +struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
> + enum dma_data_direction);
> +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> + struct sg_table *sg_table,
> + enum dma_data_direction direction);
> +
> void dma_buf_move_notify(struct dma_buf *dma_buf);
> int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
> enum dma_data_direction dir);
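
For illustration, a minimal importer-side sketch of the convention the
patch above establishes (not code from the patch; "attach" is assumed to
be an existing dma-buf attachment): dma_buf_map_attachment() and
dma_buf_unmap_attachment() now expect the caller to hold the reservation
lock, while the _unlocked variants take it themselves.

	struct sg_table *sgt;

	dma_resv_lock(attach->dmabuf->resv, NULL);

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_resv_unlock(attach->dmabuf->resv);
		return PTR_ERR(sgt);
	}

	/* ... program the device using sgt ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_resv_unlock(attach->dmabuf->resv);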
On 8/24/22 12:22, Dmitry Osipenko wrote:
> The internal dma-buf lock isn't needed anymore because the updated
> locking specification requires that the dma-buf reservation be locked
> by importers, and thus the internal data is already protected by the
> reservation lock. Remove the obsolete internal lock.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
Reviewed-by: Christian König <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 14 ++++----------
> include/linux/dma-buf.h | 9 ---------
> 2 files changed, 4 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 696d132b02f4..a0406254f0ae 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -656,7 +656,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
>
> dmabuf->file = file;
>
> - mutex_init(&dmabuf->lock);
> INIT_LIST_HEAD(&dmabuf->attachments);
>
> mutex_lock(&db_list.lock);
> @@ -1503,7 +1502,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
> int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> {
> struct iosys_map ptr;
> - int ret = 0;
> + int ret;
>
> iosys_map_clear(map);
>
> @@ -1515,28 +1514,25 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> if (!dmabuf->ops->vmap)
> return -EINVAL;
>
> - mutex_lock(&dmabuf->lock);
> if (dmabuf->vmapping_counter) {
> dmabuf->vmapping_counter++;
> BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
> *map = dmabuf->vmap_ptr;
> - goto out_unlock;
> + return 0;
> }
>
> BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr));
>
> ret = dmabuf->ops->vmap(dmabuf, &ptr);
> if (WARN_ON_ONCE(ret))
> - goto out_unlock;
> + return ret;
>
> dmabuf->vmap_ptr = ptr;
> dmabuf->vmapping_counter = 1;
>
> *map = dmabuf->vmap_ptr;
>
> -out_unlock:
> - mutex_unlock(&dmabuf->lock);
> - return ret;
> + return 0;
> }
> EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
>
> @@ -1578,13 +1574,11 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> BUG_ON(dmabuf->vmapping_counter == 0);
> BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map));
>
> - mutex_lock(&dmabuf->lock);
> if (--dmabuf->vmapping_counter == 0) {
> if (dmabuf->ops->vunmap)
> dmabuf->ops->vunmap(dmabuf, map);
> iosys_map_clear(&dmabuf->vmap_ptr);
> }
> - mutex_unlock(&dmabuf->lock);
> }
> EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
>
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index d48d534dc55c..aed6695bbb50 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -326,15 +326,6 @@ struct dma_buf {
> /** @ops: dma_buf_ops associated with this buffer object. */
> const struct dma_buf_ops *ops;
>
> - /**
> - * @lock:
> - *
> - * Used internally to serialize list manipulation, attach/detach and
> - * vmap/unmap. Note that in many cases this is superseeded by
> - * dma_resv_lock() on @resv.
> - */
> - struct mutex lock;
> -
> /**
> * @vmapping_counter:
> *
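
For illustration, a minimal sketch of why the mutex can go away
(assuming the locking convention established earlier in the series; this
is not code from the patch): the caller of the locked dma_buf_vmap() and
dma_buf_vunmap() holds the reservation lock, which now serializes
vmapping_counter and vmap_ptr.

	struct iosys_map map;
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_vmap(dmabuf, &map);	/* counter bumped under resv */
	dma_resv_unlock(dmabuf->resv);
	if (ret)
		return ret;

	/* ... use map.vaddr; the mapping is kept alive by the counter ... */

	dma_resv_lock(dmabuf->resv, NULL);
	dma_buf_vunmap(dmabuf, &map);
	dma_resv_unlock(dmabuf->resv);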
Reviewed-by: Christian König <[email protected]> for patches #2-#4
On 8/24/22 17:08, Christian König wrote:
> On 8/24/22 12:22, Dmitry Osipenko wrote:
>> Move dma-buf attachment API functions to the dynamic locking
>> specification.
>> The strict locking convention prevents deadlock situations for dma-buf
>> importers and exporters.
>>
>> Previously, the "unlocked" versions of the attachment API functions
>> weren't taking the reservation lock and this patch makes them take
>> the lock.
>
> Didn't we conclude that we need to keep the attach and detach callbacks
> without the lock and only move the map/unmap callbacks over?
>
> Otherwise it won't be possible for drivers to lock multiple buffers if
> they have to shuffle things around for a specific attachment.
We did conclude that. The attach/detach dma-buf ops are unlocked, but
the map_dma_buf/unmap_dma_buf callbacks must be invoked with the lock
held, and dma_buf_dynamic_attach_unlocked() maps the dma-buf if either
the importer or the exporter can't handle dynamic mapping [1].
[1]
https://elixir.bootlin.com/linux/v6.0-rc2/source/drivers/dma-buf/dma-buf.c#L869
Hence I re-arranged the dma_resv_lock() in
dma_buf_dynamic_attach_unlocked() to move both pinning and mapping under
the held lock.
--
Best regards,
Dmitry
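
Condensed, the rearrangement described above looks roughly like this
inside dma_buf_dynamic_attach_unlocked() (a sketch of the internal flow
only; error unwinding omitted, see the patch for the exact code):

	/* dmabuf->ops->attach was already called without the reservation held */
	if (dma_buf_attachment_is_dynamic(attach) !=
	    dma_buf_is_dynamic(dmabuf)) {
		struct sg_table *sgt;

		dma_resv_lock(attach->dmabuf->resv, NULL);
		if (dma_buf_is_dynamic(attach->dmabuf))
			ret = dmabuf->ops->pin(attach);		/* pin under the lock */
		sgt = __map_dma_buf(attach, DMA_BIDIRECTIONAL);	/* map under the lock */
		dma_resv_unlock(attach->dmabuf->resv);

		attach->sgt = sgt;
		attach->dir = DMA_BIDIRECTIONAL;
	}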
On 8/24/22 12:22, Dmitry Osipenko wrote:
> Move dma-buf attachment API functions to the dynamic locking specification.
> The strict locking convention prevents deadlock situations for dma-buf
> importers and exporters.
>
> Previously, the "unlocked" versions of the attachment API functions
> weren't taking the reservation lock and this patch makes them take
> the lock.
>
> Intel and AMD GPU drivers were already mapping the attached dma-bufs under
> the held lock during attachment, hence these drivers are updated to use
> the locked functions.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 115 ++++++++++++++-------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
> drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
> include/linux/dma-buf.h | 20 ++--
> 5 files changed, 110 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 4556a12bd741..f2a5a122da4a 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
> * 2. Userspace passes this file-descriptors to all drivers it wants this buffer
> * to share with: First the file descriptor is converted to a &dma_buf using
> * dma_buf_get(). Then the buffer is attached to the device using
> - * dma_buf_attach().
> + * dma_buf_attach_unlocked().
Now I get why this is confusing me so much.
The _unlocked postfix implies that there is another function which
should be called with the locks already held, but this is not the case
for attach/detach (because they always need to grab the lock themselves).
So I suggest dropping the _unlocked postfix for the attach/detach
functions. Another step would then be to unify attach/detach with
dynamic_attach/dynamic_detach when both have the same locking convention
anyway.
Sorry that this is going back and forth so much; it's really complicated
to keep all the stuff in my head at the moment :)
Thanks a lot for looking into this,
Christian.
On 8/24/22 18:14, Christian König wrote:
> On 8/24/22 17:03, Dmitry Osipenko wrote:
>> On 8/24/22 17:08, Christian König wrote:
>>> On 8/24/22 12:22, Dmitry Osipenko wrote:
>>>> Move dma-buf attachment API functions to the dynamic locking
>>>> specification.
>>>> The strict locking convention prevents deadlock situations for dma-buf
>>>> importers and exporters.
>>>>
>>>> Previously, the "unlocked" versions of the attachment API functions
>>>> weren't taking the reservation lock and this patch makes them take
>>>> the lock.
>>> Didn't we conclude that we need to keep the attach and detach callbacks
>>> without the lock and only move the map/unmap callbacks over?
>>>
>>> Otherwise it won't be possible for drivers to lock multiple buffers if
>>> they have to shuffle things around for a specific attachment.
>> We did conclude that. The attach/detach dma-buf ops are unlocked, but
>> the map_dma_buf/unmap_dma_buf callbacks must be invoked with the lock
>> held, and dma_buf_dynamic_attach_unlocked() maps the dma-buf if either
>> the importer or the exporter can't handle dynamic mapping [1].
>
> Ah! You are confusing me over and over again with that :)
>
> OK, in that case this is fine; I just need to re-read the patch.
It's indeed not trivial to review this patch; I'm not sure we can make
it simpler. Maybe it's possible to factor out the changes related to
dynamic mapping, or maybe it's not worthwhile.
Anyway, thank you for helping to review it :)
--
Best regards,
Dmitry
On 8/24/22 17:03, Dmitry Osipenko wrote:
> On 8/24/22 17:08, Christian König wrote:
>> On 8/24/22 12:22, Dmitry Osipenko wrote:
>>> Move dma-buf attachment API functions to the dynamic locking
>>> specification.
>>> The strict locking convention prevents deadlock situations for dma-buf
>>> importers and exporters.
>>>
>>> Previously, the "unlocked" versions of the attachment API functions
>>> weren't taking the reservation lock and this patch makes them take
>>> the lock.
>> Didn't we conclude that we need to keep the attach and detach callbacks
>> without the lock and only move the map/unmap callbacks over?
>>
>> Otherwise it won't be possible for drivers to lock multiple buffers if
>> they have to shuffle things around for a specific attachment.
> We did conclude that. The attach/detach dma-buf ops are unlocked, but
> the map_dma_buf/unmap_dma_buf callbacks must be invoked with the lock
> held, and dma_buf_dynamic_attach_unlocked() maps the dma-buf if either
> the importer or the exporter can't handle dynamic mapping [1].
Ah! You are confusing me over and over again with that :)
OK, in that case this is fine; I just need to re-read the patch.
Thanks,
Christian.
>
> [1]
> https://elixir.bootlin.com/linux/v6.0-rc2/source/drivers/dma-buf/dma-buf.c#L869
>
> Hence I re-arranged the dma_resv_lock() in
> dma_buf_dynamic_attach_unlocked() to move both pinning and mapping under
> the held lock.
>
On 8/24/22 18:24, Christian König wrote:
> On 8/24/22 12:22, Dmitry Osipenko wrote:
>> Move dma-buf attachment API functions to the dynamic locking
>> specification.
>> The strict locking convention prevents deadlock situations for dma-buf
>> importers and exporters.
>>
>> Previously, the "unlocked" versions of the attachment API functions
>> weren't taking the reservation lock and this patch makes them take
>> the lock.
>>
>> Intel and AMD GPU drivers were already mapping the attached dma-bufs
>> under the held lock during attachment, hence these drivers are updated
>> to use the locked functions.
>>
>> Signed-off-by: Dmitry Osipenko <[email protected]>
>> ---
>> drivers/dma-buf/dma-buf.c | 115 ++++++++++++++-------
>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
>> drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
>> include/linux/dma-buf.h | 20 ++--
>> 5 files changed, 110 insertions(+), 49 deletions(-)
>>
>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>> index 4556a12bd741..f2a5a122da4a 100644
>> --- a/drivers/dma-buf/dma-buf.c
>> +++ b/drivers/dma-buf/dma-buf.c
>> @@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf
>> *dmabuf, int flags)
>> * 2. Userspace passes this file-descriptors to all drivers it wants
>> this buffer
>> * to share with: First the file descriptor is converted to a
>> &dma_buf using
>> * dma_buf_get(). Then the buffer is attached to the device using
>> - * dma_buf_attach().
>> + * dma_buf_attach_unlocked().
>
> Now I get why this is confusing me so much.
>
> The _unlocked postfix implies that there is another function which
> should be called with the locks already held, but this is not the case
> for attach/detach (because they always need to grab the lock themselves).
That's correct. The attach/detach ops of the exporter can take the lock
(as the i915 exporter does), hence the importer must not hold the lock
around the dma_buf_attach() invocation.
> So I suggest dropping the _unlocked postfix for the attach/detach
> functions. Another step would then be to unify attach/detach with
> dynamic_attach/dynamic_detach when both have the same locking convention
> anyway.
It's not a problem to change the name, but it's unclear to me why we
should do it. The _unlocked postfix tells the importer that the
reservation must be unlocked, and it must indeed be unlocked for
dma_buf_attach(). Dropping the postfix will make dma_buf_attach()
inconsistent with the rest of the _unlocked functions(?). Are you sure
we need to rename it?
> Sorry that this is going back and forth so much; it's really complicated
> to keep all the stuff in my head at the moment :)
Not a problem at all; I expected that it would take some time for this
patchset to settle down.
--
Best regards,
Dmitry
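
Put together, the importer-side convention Dmitry describes looks like
this (a sketch only, error handling trimmed; "dev" and "dmabuf" are
assumed to come from the surrounding driver, not from the series):

	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	/* attach/detach are called WITHOUT the reservation held, because
	 * the exporter's attach/detach ops may take the lock themselves */
	attach = dma_buf_attach_unlocked(dmabuf, dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* map/unmap are called WITH the reservation held */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);

	/* ... */

	dma_buf_detach_unlocked(dmabuf, attach);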
On 8/24/22 17:49, Dmitry Osipenko wrote:
> On 8/24/22 18:24, Christian König wrote:
>> On 8/24/22 12:22, Dmitry Osipenko wrote:
>>> Move dma-buf attachment API functions to the dynamic locking
>>> specification.
>>> The strict locking convention prevents deadlock situations for dma-buf
>>> importers and exporters.
>>>
>>> Previously, the "unlocked" versions of the attachment API functions
>>> weren't taking the reservation lock and this patch makes them take
>>> the lock.
>>>
>>> Intel and AMD GPU drivers were already mapping the attached dma-bufs
>>> under the held lock during attachment, hence these drivers are updated
>>> to use the locked functions.
>>>
>>> Signed-off-by: Dmitry Osipenko <[email protected]>
>>> ---
>>> drivers/dma-buf/dma-buf.c | 115 ++++++++++++++-------
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
>>> drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
>>> include/linux/dma-buf.h | 20 ++--
>>> 5 files changed, 110 insertions(+), 49 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index 4556a12bd741..f2a5a122da4a 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf
>>> *dmabuf, int flags)
>>> * 2. Userspace passes this file-descriptors to all drivers it wants
>>> this buffer
>>> * to share with: First the file descriptor is converted to a
>>> &dma_buf using
>>> * dma_buf_get(). Then the buffer is attached to the device using
>>> - * dma_buf_attach().
>>> + * dma_buf_attach_unlocked().
>> Now I get why this is confusing me so much.
>>
>> The _unlocked postfix implies that there is another function which
>> should be called with the locks already held, but this is not the case
>> for attach/detach (because they always need to grab the lock themselves).
> That's correct. The attach/detach ops of the exporter can take the lock
> (as the i915 exporter does), hence the importer must not hold the lock
> around the dma_buf_attach() invocation.
>
>> So I suggest dropping the _unlocked postfix for the attach/detach
>> functions. Another step would then be to unify attach/detach with
>> dynamic_attach/dynamic_detach when both have the same locking convention
>> anyway.
> It's not a problem to change the name, but it's unclear to me why we
> should do it. The _unlocked postfix tells the importer that the
> reservation must be unlocked, and it must indeed be unlocked for
> dma_buf_attach().
>
> Dropping the postfix will make dma_buf_attach() inconsistent with the
> rest of the _unlocked functions(?). Are you sure we need to rename it?
The idea of the postfix was to distinguish between two different
versions of the same function, e.g. dma_buf_vmap_unlocked() vs normal
dma_buf_vmap().
When we don't have those two variants of the same function, I don't
think it makes much sense to keep the postfix. We should just properly
document which functions expect what, and that's what your documentation
patch does.
Regards,
Christian.
>
>> Sorry that this is going back and forth so much; it's really complicated
>> to keep all the stuff in my head at the moment :)
> Not a problem at all; I expected that it would take some time for this
> patchset to settle down.
>
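
The pairing Christian refers to is just a lock-taking wrapper around the
locked variant; schematically (a sketch mirroring the pattern used in
this series, not verbatim kernel code):

	int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
	{
		int ret;

		dma_resv_lock(dmabuf->resv, NULL);
		ret = dma_buf_vmap(dmabuf, map);	/* locked variant */
		dma_resv_unlock(dmabuf->resv);

		return ret;
	}

dma_buf_attach() has no such locked twin, which is why the postfix
carries no information there.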
On 8/24/22 20:45, Christian König wrote:
> On 8/24/22 17:49, Dmitry Osipenko wrote:
>> On 8/24/22 18:24, Christian König wrote:
>>> On 8/24/22 12:22, Dmitry Osipenko wrote:
>>>> Move dma-buf attachment API functions to the dynamic locking
>>>> specification.
>>>> The strict locking convention prevents deadlock situations for dma-buf
>>>> importers and exporters.
>>>>
>>>> Previously, the "unlocked" versions of the attachment API functions
>>>> weren't taking the reservation lock and this patch makes them take
>>>> the lock.
>>>>
>>>> Intel and AMD GPU drivers were already mapping the attached dma-bufs
>>>> under the held lock during attachment, hence these drivers are updated
>>>> to use the locked functions.
>>>>
>>>> Signed-off-by: Dmitry Osipenko <[email protected]>
>>>> ---
>>>> drivers/dma-buf/dma-buf.c | 115
>>>> ++++++++++++++-------
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
>>>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 8 +-
>>>> drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 +++
>>>> include/linux/dma-buf.h | 20 ++--
>>>> 5 files changed, 110 insertions(+), 49 deletions(-)
>>>>
>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>> index 4556a12bd741..f2a5a122da4a 100644
>>>> --- a/drivers/dma-buf/dma-buf.c
>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>> @@ -559,7 +559,7 @@ static struct file *dma_buf_getfile(struct dma_buf
>>>> *dmabuf, int flags)
>>>> * 2. Userspace passes this file-descriptors to all drivers it wants
>>>> this buffer
>>>> * to share with: First the file descriptor is converted to a
>>>> &dma_buf using
>>>> * dma_buf_get(). Then the buffer is attached to the device using
>>>> - * dma_buf_attach().
>>>> + * dma_buf_attach_unlocked().
>>> Now I get why this is confusing me so much.
>>>
>>> The _unlocked postfix implies that there is another function which
>>> should be called with the locks already held, but this is not the case
>>> for attach/detach (because they always need to grab the lock
>>> themselves).
>> That's correct. The attach/detach ops of the exporter can take the lock
>> (as the i915 exporter does), hence the importer must not hold the lock
>> around the dma_buf_attach() invocation.
>>
>>> So I suggest dropping the _unlocked postfix for the attach/detach
>>> functions. Another step would then be to unify attach/detach with
>>> dynamic_attach/dynamic_detach when both have the same locking convention
>>> anyway.
>> It's not a problem to change the name, but it's unclear to me why we
>> should do it. The _unlocked postfix tells the importer that the
>> reservation must be unlocked, and it must indeed be unlocked for
>> dma_buf_attach().
>>
>> Dropping the postfix will make dma_buf_attach() inconsistent with the
>> rest of the _unlocked functions(?). Are you sure we need to rename it?
>
> The idea of the postfix was to distinguish between two different
> versions of the same function, e.g. dma_buf_vmap_unlocked() vs normal
> dma_buf_vmap().
>
> When we don't have those two variants of the same function, I don't
> think it makes much sense to keep the postfix. We should just properly
> document which functions expect what, and that's what your documentation
> patch does.
Thank you for the clarification. I'll change the names in v4 as you're
suggesting; we can always improve the naming later on if necessary.
--
Best regards,
Dmitry
On Wed, Aug 24, 2022 at 01:22:40PM +0300, Dmitry Osipenko wrote:
> Add an _unlocked postfix to the dma-buf API function names in
> preparation for moving all non-dynamic dma-buf users over to the dynamic
> locking specification. This patch only renames API functions, preparing
> drivers for the common locking convention. Later on, we will make the
> "unlocked" functions take the reservation lock.
>
> Acked-by: Christian König <[email protected]>
> Suggested-by: Christian König <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/dma-buf/dma-buf.c | 76 ++++++++++---------
> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 4 +-
> drivers/gpu/drm/armada/armada_gem.c | 14 ++--
> drivers/gpu/drm/drm_gem_dma_helper.c | 6 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 8 +-
> drivers/gpu/drm/drm_prime.c | 12 +--
> drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 6 +-
> drivers/gpu/drm/exynos/exynos_drm_gem.c | 2 +-
> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 12 +--
> .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 20 ++---
> drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 8 +-
> drivers/gpu/drm/tegra/gem.c | 27 +++----
> drivers/infiniband/core/umem_dmabuf.c | 11 +--
> .../common/videobuf2/videobuf2-dma-contig.c | 15 ++--
> .../media/common/videobuf2/videobuf2-dma-sg.c | 12 +--
> .../common/videobuf2/videobuf2-vmalloc.c | 6 +-
> .../platform/nvidia/tegra-vde/dmabuf-cache.c | 12 +--
> drivers/misc/fastrpc.c | 12 +--
> drivers/xen/gntdev-dmabuf.c | 14 ++--
> include/linux/dma-buf.h | 34 +++++----
> 21 files changed, 162 insertions(+), 153 deletions(-)
>
For drivers/media/videobuf2:
Acked-by: Tomasz Figa <[email protected]>
Best regards,
Tomasz
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 1c912255c5d6..452a6a1f1e60 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -795,7 +795,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
> }
>
> /**
> - * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
> + * dma_buf_dynamic_attach_unlocked - Add the device to dma_buf's attachments list
> * @dmabuf: [in] buffer to attach device to.
> * @dev: [in] device to be attached.
> * @importer_ops: [in] importer operations for the attachment
> @@ -817,9 +817,9 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
> * indicated with the error code -EBUSY.
> */
> struct dma_buf_attachment *
> -dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
> - const struct dma_buf_attach_ops *importer_ops,
> - void *importer_priv)
> +dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
> + const struct dma_buf_attach_ops *importer_ops,
> + void *importer_priv)
> {
> struct dma_buf_attachment *attach;
> int ret;
> @@ -892,25 +892,25 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
> if (dma_buf_is_dynamic(attach->dmabuf))
> dma_resv_unlock(attach->dmabuf->resv);
>
> - dma_buf_detach(dmabuf, attach);
> + dma_buf_detach_unlocked(dmabuf, attach);
> return ERR_PTR(ret);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach_unlocked, DMA_BUF);
>
> /**
> - * dma_buf_attach - Wrapper for dma_buf_dynamic_attach
> + * dma_buf_attach_unlocked - Wrapper for dma_buf_dynamic_attach
> * @dmabuf: [in] buffer to attach device to.
> * @dev: [in] device to be attached.
> *
> - * Wrapper to call dma_buf_dynamic_attach() for drivers which still use a static
> - * mapping.
> + * Wrapper to call dma_buf_dynamic_attach_unlocked() for drivers which still
> + * use a static mapping.
> */
> -struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
> - struct device *dev)
> +struct dma_buf_attachment *dma_buf_attach_unlocked(struct dma_buf *dmabuf,
> + struct device *dev)
> {
> - return dma_buf_dynamic_attach(dmabuf, dev, NULL, NULL);
> + return dma_buf_dynamic_attach_unlocked(dmabuf, dev, NULL, NULL);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_attach, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_attach_unlocked, DMA_BUF);
>
> static void __unmap_dma_buf(struct dma_buf_attachment *attach,
> struct sg_table *sg_table,
> @@ -923,7 +923,7 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
> }
>
> /**
> - * dma_buf_detach - Remove the given attachment from dmabuf's attachments list
> + * dma_buf_detach_unlocked - Remove the given attachment from dmabuf's attachments list
> * @dmabuf: [in] buffer to detach from.
> * @attach: [in] attachment to be detached; is free'd after this call.
> *
> @@ -931,7 +931,8 @@ static void __unmap_dma_buf(struct dma_buf_attachment *attach,
> *
> * Optionally this calls &dma_buf_ops.detach for device-specific detach.
> */
> -void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
> +void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attach)
> {
> if (WARN_ON(!dmabuf || !attach))
> return;
> @@ -956,14 +957,14 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
>
> kfree(attach);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_detach_unlocked, DMA_BUF);
>
> /**
> * dma_buf_pin - Lock down the DMA-buf
> * @attach: [in] attachment which should be pinned
> *
> - * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach()) may
> - * call this, and only for limited use cases like scanout and not for temporary
> + * Only dynamic importers (who set up @attach with dma_buf_dynamic_attach_unlocked())
> + * may call this, and only for limited use cases like scanout and not for temporary
> * pin operations. It is not permitted to allow userspace to pin arbitrary
> * amounts of buffers through this interface.
> *
> @@ -1010,7 +1011,7 @@ void dma_buf_unpin(struct dma_buf_attachment *attach)
> EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
>
> /**
> - * dma_buf_map_attachment - Returns the scatterlist table of the attachment;
> + * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
> * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
> * dma_buf_ops.
> * @attach: [in] attachment whose scatterlist is to be returned
> @@ -1030,8 +1031,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF);
> * Important: Dynamic importers must wait for the exclusive fence of the struct
> * dma_resv attached to the DMA-BUF first.
> */
> -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> - enum dma_data_direction direction)
> +struct sg_table *
> +dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
> + enum dma_data_direction direction)
> {
> struct sg_table *sg_table;
> int r;
> @@ -1097,10 +1099,10 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> #endif /* CONFIG_DMA_API_DEBUG */
> return sg_table;
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
>
> /**
> - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might
> + * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might
> * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
> * dma_buf_ops.
> * @attach: [in] attachment to unmap buffer from
> @@ -1109,9 +1111,9 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
> *
> * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment().
> */
> -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> - struct sg_table *sg_table,
> - enum dma_data_direction direction)
> +void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
> + struct sg_table *sg_table,
> + enum dma_data_direction direction)
> {
> might_sleep();
>
> @@ -1133,7 +1135,7 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
> dma_buf_unpin(attach);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF);
>
> /**
> * dma_buf_move_notify - notify attachments that DMA-buf is moving
> @@ -1330,7 +1332,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
>
>
> /**
> - * dma_buf_mmap - Setup up a userspace mmap with the given vma
> + * dma_buf_mmap_unlocked - Setup up a userspace mmap with the given vma
> * @dmabuf: [in] buffer that should back the vma
> * @vma: [in] vma for the mmap
> * @pgoff: [in] offset in pages where this mmap should start within the
> @@ -1343,8 +1345,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
> *
> * Can return negative error values, returns 0 on success.
> */
> -int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> - unsigned long pgoff)
> +int dma_buf_mmap_unlocked(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> + unsigned long pgoff)
> {
> if (WARN_ON(!dmabuf || !vma))
> return -EINVAL;
> @@ -1368,10 +1370,10 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
>
> return dmabuf->ops->mmap(dmabuf, vma);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap_unlocked, DMA_BUF);
>
> /**
> - * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
> + * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel
> * address space. Same restrictions as for vmap and friends apply.
> * @dmabuf: [in] buffer to vmap
> * @map: [out] returns the vmap pointer
> @@ -1386,7 +1388,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
> *
> * Returns 0 on success, or a negative errno code otherwise.
> */
> -int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
> {
> struct iosys_map ptr;
> int ret = 0;
> @@ -1422,14 +1424,14 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> mutex_unlock(&dmabuf->lock);
> return ret;
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_vmap_unlocked, DMA_BUF);
>
> /**
> - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
> + * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap.
> * @dmabuf: [in] buffer to vunmap
> * @map: [in] vmap pointer to vunmap
> */
> -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
> {
> if (WARN_ON(!dmabuf))
> return;
> @@ -1446,7 +1448,7 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> }
> mutex_unlock(&dmabuf->lock);
> }
> -EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
> +EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF);
>
> #ifdef CONFIG_DEBUG_FS
> static int dma_buf_debug_show(struct seq_file *s, void *unused)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 782cbca37538..d9ed5a4fbc6f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -449,8 +449,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
> if (IS_ERR(obj))
> return obj;
>
> - attach = dma_buf_dynamic_attach(dma_buf, dev->dev,
> - &amdgpu_dma_buf_attach_ops, obj);
> + attach = dma_buf_dynamic_attach_unlocked(dma_buf, dev->dev,
> + &amdgpu_dma_buf_attach_ops, obj);
> if (IS_ERR(attach)) {
> drm_gem_object_put(obj);
> return ERR_CAST(attach);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index b1c455329023..ac1e2911b727 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -885,7 +885,7 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev,
> struct sg_table *sgt;
>
> attach = gtt->gobj->import_attach;
> - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt))
> return PTR_ERR(sgt);
>
> @@ -1010,7 +1010,7 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev,
> struct dma_buf_attachment *attach;
>
> attach = gtt->gobj->import_attach;
> - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(attach, ttm->sg, DMA_BIDIRECTIONAL);
> ttm->sg = NULL;
> }
>
> diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
> index 5430265ad458..a499246ec28e 100644
> --- a/drivers/gpu/drm/armada/armada_gem.c
> +++ b/drivers/gpu/drm/armada/armada_gem.c
> @@ -66,8 +66,8 @@ void armada_gem_free_object(struct drm_gem_object *obj)
> if (dobj->obj.import_attach) {
> /* We only ever display imported data */
> if (dobj->sgt)
> - dma_buf_unmap_attachment(dobj->obj.import_attach,
> - dobj->sgt, DMA_TO_DEVICE);
> + dma_buf_unmap_attachment_unlocked(dobj->obj.import_attach,
> + dobj->sgt, DMA_TO_DEVICE);
> drm_prime_gem_destroy(&dobj->obj, NULL);
> }
>
> @@ -364,7 +364,7 @@ int armada_gem_pwrite_ioctl(struct drm_device *dev, void *data,
>
> if (args->offset > dobj->obj.size ||
> args->size > dobj->obj.size - args->offset) {
> - DRM_ERROR("invalid size: object size %u\n", dobj->obj.size);
> + DRM_ERROR("invalid size: object size %zu\n", dobj->obj.size);
> ret = -EINVAL;
> goto unref;
> }
> @@ -514,13 +514,13 @@ armada_gem_prime_import(struct drm_device *dev, struct dma_buf *buf)
> }
> }
>
> - attach = dma_buf_attach(buf, dev->dev);
> + attach = dma_buf_attach_unlocked(buf, dev->dev);
> if (IS_ERR(attach))
> return ERR_CAST(attach);
>
> dobj = armada_gem_alloc_private_object(dev, buf->size);
> if (!dobj) {
> - dma_buf_detach(buf, attach);
> + dma_buf_detach_unlocked(buf, attach);
> return ERR_PTR(-ENOMEM);
> }
>
> @@ -539,8 +539,8 @@ int armada_gem_map_import(struct armada_gem_object *dobj)
> {
> int ret;
>
> - dobj->sgt = dma_buf_map_attachment(dobj->obj.import_attach,
> - DMA_TO_DEVICE);
> + dobj->sgt = dma_buf_map_attachment_unlocked(dobj->obj.import_attach,
> + DMA_TO_DEVICE);
> if (IS_ERR(dobj->sgt)) {
> ret = PTR_ERR(dobj->sgt);
> dobj->sgt = NULL;
> diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> index f6901ff97bbb..1e658c448366 100644
> --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> @@ -230,7 +230,7 @@ void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj)
>
> if (gem_obj->import_attach) {
> if (dma_obj->vaddr)
> - dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map);
> drm_prime_gem_destroy(gem_obj, dma_obj->sgt);
> } else if (dma_obj->vaddr) {
> if (dma_obj->map_noncoherent)
> @@ -581,7 +581,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev,
> struct iosys_map map;
> int ret;
>
> - ret = dma_buf_vmap(attach->dmabuf, &map);
> + ret = dma_buf_vmap_unlocked(attach->dmabuf, &map);
> if (ret) {
> DRM_ERROR("Failed to vmap PRIME buffer\n");
> return ERR_PTR(ret);
> @@ -589,7 +589,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev,
>
> obj = drm_gem_dma_prime_import_sg_table(dev, attach, sgt);
> if (IS_ERR(obj)) {
> - dma_buf_vunmap(attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(attach->dmabuf, &map);
> return obj;
> }
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 35138f8a375c..5f572716306d 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -299,10 +299,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + ret = dma_buf_vmap_unlocked(obj->import_attach->dmabuf, map);
> if (!ret) {
> if (WARN_ON(map->is_iomem)) {
> - dma_buf_vunmap(obj->import_attach->dmabuf, map);
> + dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
> ret = -EIO;
> goto err_put_pages;
> }
> @@ -383,7 +383,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> return;
>
> if (obj->import_attach) {
> - dma_buf_vunmap(obj->import_attach->dmabuf, map);
> + dma_buf_vunmap_unlocked(obj->import_attach->dmabuf, map);
> } else {
> vunmap(shmem->vaddr);
> drm_gem_shmem_put_pages(shmem);
> @@ -618,7 +618,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
> drm_gem_object_put(obj);
> vma->vm_private_data = NULL;
>
> - return dma_buf_mmap(obj->dma_buf, vma, 0);
> + return dma_buf_mmap_unlocked(obj->dma_buf, vma, 0);
> }
>
> ret = drm_gem_shmem_get_pages(shmem);
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index eb09e86044c6..e9b7d3fa67f1 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -934,13 +934,13 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev,
> if (!dev->driver->gem_prime_import_sg_table)
> return ERR_PTR(-EINVAL);
>
> - attach = dma_buf_attach(dma_buf, attach_dev);
> + attach = dma_buf_attach_unlocked(dma_buf, attach_dev);
> if (IS_ERR(attach))
> return ERR_CAST(attach);
>
> get_dma_buf(dma_buf);
>
> - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt)) {
> ret = PTR_ERR(sgt);
> goto fail_detach;
> @@ -958,9 +958,9 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev,
> return obj;
>
> fail_unmap:
> - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
> fail_detach:
> - dma_buf_detach(dma_buf, attach);
> + dma_buf_detach_unlocked(dma_buf, attach);
> dma_buf_put(dma_buf);
>
> return ERR_PTR(ret);
> @@ -1056,9 +1056,9 @@ void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg)
>
> attach = obj->import_attach;
> if (sg)
> - dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(attach, sg, DMA_BIDIRECTIONAL);
> dma_buf = attach->dmabuf;
> - dma_buf_detach(attach->dmabuf, attach);
> + dma_buf_detach_unlocked(attach->dmabuf, attach);
> /* remove the reference */
> dma_buf_put(dma_buf);
> }
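
The drm_prime conversion above is the canonical importer flow, so it doubles
as a porting reference. For an out-of-tree importer, the same sequence under
the renamed API would look roughly like the sketch below. Illustrative only:
my_import/my_import_map are made-up names and the authoritative reference
remains drm_gem_prime_import_dev() as patched above (needs <linux/dma-buf.h>
and <linux/dma-mapping.h>):

	struct my_import {			/* hypothetical bookkeeping */
		struct dma_buf_attachment *attach;
		struct sg_table *sgt;
	};

	static int my_import_map(struct my_import *imp, struct device *dev,
				 struct dma_buf *buf)
	{
		imp->attach = dma_buf_attach_unlocked(buf, dev);
		if (IS_ERR(imp->attach))
			return PTR_ERR(imp->attach);

		/* keep the dma-buf alive for as long as the attachment exists */
		get_dma_buf(buf);

		imp->sgt = dma_buf_map_attachment_unlocked(imp->attach,
							   DMA_BIDIRECTIONAL);
		if (IS_ERR(imp->sgt)) {
			int err = PTR_ERR(imp->sgt);

			dma_buf_detach_unlocked(buf, imp->attach);
			dma_buf_put(buf);
			return err;
		}

		return 0;
	}

Teardown mirrors drm_prime_gem_destroy() above:
dma_buf_unmap_attachment_unlocked(), dma_buf_detach_unlocked(), then
dma_buf_put().
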
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 3fa2da149639..ae6c1eda0a72 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -65,7 +65,7 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
> struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr);
>
> if (etnaviv_obj->vaddr)
> - dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
>
> /* Don't drop the pages for imported dmabuf, as they are not
> * ours, just free the array we allocated:
> @@ -82,7 +82,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
>
> lockdep_assert_held(&etnaviv_obj->lock);
>
> - ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
> + ret = dma_buf_vmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
> if (ret)
> return NULL;
> return map.vaddr;
> @@ -91,7 +91,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
> static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> struct vm_area_struct *vma)
> {
> - return dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
> + return dma_buf_mmap_unlocked(etnaviv_obj->base.dma_buf, vma, 0);
> }
>
> static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index 3e493f48e0d4..8e95a3c5caf8 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -366,7 +366,7 @@ static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct
> int ret;
>
> if (obj->import_attach)
> - return dma_buf_mmap(obj->dma_buf, vma, 0);
> + return dma_buf_mmap_unlocked(obj->dma_buf, vma, 0);
>
> vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index f5062d0c6333..5ecea7df98b1 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
>
> assert_object_held(obj);
>
> - pages = dma_buf_map_attachment(obj->base.import_attach,
> - DMA_BIDIRECTIONAL);
> + pages = dma_buf_map_attachment_unlocked(obj->base.import_attach,
> + DMA_BIDIRECTIONAL);
> if (IS_ERR(pages))
> return PTR_ERR(pages);
>
> @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj)
> static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
> struct sg_table *pages)
> {
> - dma_buf_unmap_attachment(obj->base.import_attach, pages,
> - DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(obj->base.import_attach, pages,
> + DMA_BIDIRECTIONAL);
> }
>
> static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
> @@ -306,7 +306,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
> return ERR_PTR(-E2BIG);
>
> /* need to attach */
> - attach = dma_buf_attach(dma_buf, dev->dev);
> + attach = dma_buf_attach_unlocked(dma_buf, dev->dev);
> if (IS_ERR(attach))
> return ERR_CAST(attach);
>
> @@ -337,7 +337,7 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
> return &obj->base;
>
> fail_detach:
> - dma_buf_detach(dma_buf, attach);
> + dma_buf_detach_unlocked(dma_buf, attach);
> dma_buf_put(dma_buf);
>
> return ERR_PTR(ret);
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> index 62c61af77a42..6053af920a22 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
> @@ -207,13 +207,13 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
> i915_gem_object_unlock(import_obj);
>
> /* Now try a fake an importer */
> - import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);
> + import_attach = dma_buf_attach_unlocked(dmabuf, obj->base.dev->dev);
> if (IS_ERR(import_attach)) {
> err = PTR_ERR(import_attach);
> goto out_import;
> }
>
> - st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);
> + st = dma_buf_map_attachment_unlocked(import_attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(st)) {
> err = PTR_ERR(st);
> goto out_detach;
> @@ -226,9 +226,9 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915,
> timeout = -ETIME;
> }
> err = timeout > 0 ? 0 : timeout;
> - dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(import_attach, st, DMA_BIDIRECTIONAL);
> out_detach:
> - dma_buf_detach(dmabuf, import_attach);
> + dma_buf_detach_unlocked(dmabuf, import_attach);
> out_import:
> i915_gem_object_put(import_obj);
> out_dmabuf:
> @@ -296,7 +296,7 @@ static int igt_dmabuf_import(void *arg)
> goto out_obj;
> }
>
> - err = dma_buf_vmap(dmabuf, &map);
> + err = dma_buf_vmap_unlocked(dmabuf, &map);
> dma_map = err ? NULL : map.vaddr;
> if (!dma_map) {
> pr_err("dma_buf_vmap failed\n");
> @@ -337,7 +337,7 @@ static int igt_dmabuf_import(void *arg)
>
> err = 0;
> out_dma_map:
> - dma_buf_vunmap(dmabuf, &map);
> + dma_buf_vunmap_unlocked(dmabuf, &map);
> out_obj:
> i915_gem_object_put(obj);
> out_dmabuf:
> @@ -358,7 +358,7 @@ static int igt_dmabuf_import_ownership(void *arg)
> if (IS_ERR(dmabuf))
> return PTR_ERR(dmabuf);
>
> - err = dma_buf_vmap(dmabuf, &map);
> + err = dma_buf_vmap_unlocked(dmabuf, &map);
> ptr = err ? NULL : map.vaddr;
> if (!ptr) {
> pr_err("dma_buf_vmap failed\n");
> @@ -367,7 +367,7 @@ static int igt_dmabuf_import_ownership(void *arg)
> }
>
> memset(ptr, 0xc5, PAGE_SIZE);
> - dma_buf_vunmap(dmabuf, &map);
> + dma_buf_vunmap_unlocked(dmabuf, &map);
>
> obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf));
> if (IS_ERR(obj)) {
> @@ -418,7 +418,7 @@ static int igt_dmabuf_export_vmap(void *arg)
> }
> i915_gem_object_put(obj);
>
> - err = dma_buf_vmap(dmabuf, &map);
> + err = dma_buf_vmap_unlocked(dmabuf, &map);
> ptr = err ? NULL : map.vaddr;
> if (!ptr) {
> pr_err("dma_buf_vmap failed\n");
> @@ -435,7 +435,7 @@ static int igt_dmabuf_export_vmap(void *arg)
> memset(ptr, 0xc5, dmabuf->size);
>
> err = 0;
> - dma_buf_vunmap(dmabuf, &map);
> + dma_buf_vunmap_unlocked(dmabuf, &map);
> out:
> dma_buf_put(dmabuf);
> return err;
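
The selftest hunks also condense nicely into the minimal CPU-access pattern
under the new names. Roughly what igt_dmabuf_export_vmap() does above, with
the error reporting trimmed (a sketch, not the literal test code):

	struct iosys_map map;

	if (!dma_buf_vmap_unlocked(dmabuf, &map)) {
		/* importer-side CPU access; caller does not hold the resv lock */
		memset(map.vaddr, 0xc5, dmabuf->size);
		dma_buf_vunmap_unlocked(dmabuf, &map);
	}
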
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> index 393f82e26927..a725a91c2ff9 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> @@ -119,13 +119,13 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
> }
> }
>
> - attach = dma_buf_attach(dma_buf, dev->dev);
> + attach = dma_buf_attach_unlocked(dma_buf, dev->dev);
> if (IS_ERR(attach))
> return ERR_CAST(attach);
>
> get_dma_buf(dma_buf);
>
> - sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
> + sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE);
> if (IS_ERR(sgt)) {
> ret = PTR_ERR(sgt);
> goto fail_detach;
> @@ -142,9 +142,9 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev,
> return obj;
>
> fail_unmap:
> - dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE);
> + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE);
> fail_detach:
> - dma_buf_detach(dma_buf, attach);
> + dma_buf_detach_unlocked(dma_buf, attach);
> dma_buf_put(dma_buf);
>
> return ERR_PTR(ret);
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 81991090adcc..bbfe196ff6f6 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -78,15 +78,15 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
> if (gem->import_attach) {
> struct dma_buf *buf = gem->import_attach->dmabuf;
>
> - map->attach = dma_buf_attach(buf, dev);
> + map->attach = dma_buf_attach_unlocked(buf, dev);
> if (IS_ERR(map->attach)) {
> err = PTR_ERR(map->attach);
> goto free;
> }
>
> - map->sgt = dma_buf_map_attachment(map->attach, direction);
> + map->sgt = dma_buf_map_attachment_unlocked(map->attach, direction);
> if (IS_ERR(map->sgt)) {
> - dma_buf_detach(buf, map->attach);
> + dma_buf_detach_unlocked(buf, map->attach);
> err = PTR_ERR(map->sgt);
> map->sgt = NULL;
> goto free;
> @@ -160,8 +160,9 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
> static void tegra_bo_unpin(struct host1x_bo_mapping *map)
> {
> if (map->attach) {
> - dma_buf_unmap_attachment(map->attach, map->sgt, map->direction);
> - dma_buf_detach(map->attach->dmabuf, map->attach);
> + dma_buf_unmap_attachment_unlocked(map->attach, map->sgt,
> + map->direction);
> + dma_buf_detach_unlocked(map->attach->dmabuf, map->attach);
> } else {
> dma_unmap_sgtable(map->dev, map->sgt, map->direction, 0);
> sg_free_table(map->sgt);
> @@ -181,7 +182,7 @@ static void *tegra_bo_mmap(struct host1x_bo *bo)
> if (obj->vaddr) {
> return obj->vaddr;
> } else if (obj->gem.import_attach) {
> - ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map);
> + ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map);
> return ret ? NULL : map.vaddr;
> } else {
> return vmap(obj->pages, obj->num_pages, VM_MAP,
> @@ -197,7 +198,7 @@ static void tegra_bo_munmap(struct host1x_bo *bo, void *addr)
> if (obj->vaddr)
> return;
> else if (obj->gem.import_attach)
> - dma_buf_vunmap(obj->gem.import_attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map);
> else
> vunmap(addr);
> }
> @@ -453,7 +454,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
> if (IS_ERR(bo))
> return bo;
>
> - attach = dma_buf_attach(buf, drm->dev);
> + attach = dma_buf_attach_unlocked(buf, drm->dev);
> if (IS_ERR(attach)) {
> err = PTR_ERR(attach);
> goto free;
> @@ -461,7 +462,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
>
> get_dma_buf(buf);
>
> - bo->sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
> + bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE);
> if (IS_ERR(bo->sgt)) {
> err = PTR_ERR(bo->sgt);
> goto detach;
> @@ -479,9 +480,9 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm,
>
> detach:
> if (!IS_ERR_OR_NULL(bo->sgt))
> - dma_buf_unmap_attachment(attach, bo->sgt, DMA_TO_DEVICE);
> + dma_buf_unmap_attachment_unlocked(attach, bo->sgt, DMA_TO_DEVICE);
>
> - dma_buf_detach(buf, attach);
> + dma_buf_detach_unlocked(buf, attach);
> dma_buf_put(buf);
> free:
> drm_gem_object_release(&bo->gem);
> @@ -508,8 +509,8 @@ void tegra_bo_free_object(struct drm_gem_object *gem)
> tegra_bo_iommu_unmap(tegra, bo);
>
> if (gem->import_attach) {
> - dma_buf_unmap_attachment(gem->import_attach, bo->sgt,
> - DMA_TO_DEVICE);
> + dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt,
> + DMA_TO_DEVICE);
> drm_prime_gem_destroy(gem, NULL);
> } else {
> tegra_bo_free(gem->dev, bo);
> diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
> index 04c04e6d24c3..667436a92b17 100644
> --- a/drivers/infiniband/core/umem_dmabuf.c
> +++ b/drivers/infiniband/core/umem_dmabuf.c
> @@ -26,7 +26,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
> if (umem_dmabuf->sgt)
> goto wait_fence;
>
> - sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL);
> + sgt = dma_buf_map_attachment_unlocked(umem_dmabuf->attach,
> + DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt))
> return PTR_ERR(sgt);
>
> @@ -102,8 +103,8 @@ void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf)
> umem_dmabuf->last_sg_trim = 0;
> }
>
> - dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt,
> - DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt,
> + DMA_BIDIRECTIONAL);
>
> umem_dmabuf->sgt = NULL;
> }
> @@ -149,7 +150,7 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
> if (!ib_umem_num_pages(umem))
> goto out_free_umem;
>
> - umem_dmabuf->attach = dma_buf_dynamic_attach(
> + umem_dmabuf->attach = dma_buf_dynamic_attach_unlocked(
> dmabuf,
> device->dma_device,
> ops,
> @@ -228,7 +229,7 @@ void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf)
> dma_buf_unpin(umem_dmabuf->attach);
> dma_resv_unlock(dmabuf->resv);
>
> - dma_buf_detach(dmabuf, umem_dmabuf->attach);
> + dma_buf_detach_unlocked(dmabuf, umem_dmabuf->attach);
> dma_buf_put(dmabuf);
> kfree(umem_dmabuf);
> }
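
umem_dmabuf is a nice illustration of where the _unlocked postfix does and
does not apply: ib_umem_dmabuf_release() takes dmabuf->resv itself around
dma_buf_unpin(), so pin/unpin keep their plain names, while the detach that
follows runs without the lock and becomes dma_buf_detach_unlocked().
Schematically, the same shape as the quoted hunk:

	dma_resv_lock(dmabuf->resv, NULL);
	dma_buf_unpin(umem_dmabuf->attach);	/* resv held -> no postfix */
	dma_resv_unlock(dmabuf->resv);

	dma_buf_detach_unlocked(dmabuf, umem_dmabuf->attach);	/* resv not held */
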
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index 678b359717c4..de762dbdaf78 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -101,7 +101,7 @@ static void *vb2_dc_vaddr(struct vb2_buffer *vb, void *buf_priv)
> if (buf->db_attach) {
> struct iosys_map map;
>
> - if (!dma_buf_vmap(buf->db_attach->dmabuf, &map))
> + if (!dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map))
> buf->vaddr = map.vaddr;
>
> return buf->vaddr;
> @@ -711,7 +711,7 @@ static int vb2_dc_map_dmabuf(void *mem_priv)
> }
>
> /* get the associated scatterlist for this buffer */
> - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
> + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir);
> if (IS_ERR(sgt)) {
> pr_err("Error getting dmabuf scatterlist\n");
> return -EINVAL;
> @@ -722,7 +722,8 @@ static int vb2_dc_map_dmabuf(void *mem_priv)
> if (contig_size < buf->size) {
> pr_err("contiguous chunk is too small %lu/%lu\n",
> contig_size, buf->size);
> - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
> + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt,
> + buf->dma_dir);
> return -EFAULT;
> }
>
> @@ -750,10 +751,10 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv)
> }
>
> if (buf->vaddr) {
> - dma_buf_vunmap(buf->db_attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map);
> buf->vaddr = NULL;
> }
> - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
> + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir);
>
> buf->dma_addr = 0;
> buf->dma_sgt = NULL;
> @@ -768,7 +769,7 @@ static void vb2_dc_detach_dmabuf(void *mem_priv)
> vb2_dc_unmap_dmabuf(buf);
>
> /* detach this attachment */
> - dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
> + dma_buf_detach_unlocked(buf->db_attach->dmabuf, buf->db_attach);
> kfree(buf);
> }
>
> @@ -792,7 +793,7 @@ static void *vb2_dc_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
> buf->vb = vb;
>
> /* create attachment for the dmabuf with the user device */
> - dba = dma_buf_attach(dbuf, buf->dev);
> + dba = dma_buf_attach_unlocked(dbuf, buf->dev);
> if (IS_ERR(dba)) {
> pr_err("failed to attach dmabuf\n");
> kfree(buf);
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> index fa69158a65b1..39e11600304a 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> @@ -309,7 +309,7 @@ static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv)
>
> if (!buf->vaddr) {
> if (buf->db_attach) {
> - ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
> + ret = dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map);
> buf->vaddr = ret ? NULL : map.vaddr;
> } else {
> buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
> @@ -565,7 +565,7 @@ static int vb2_dma_sg_map_dmabuf(void *mem_priv)
> }
>
> /* get the associated scatterlist for this buffer */
> - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
> + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir);
> if (IS_ERR(sgt)) {
> pr_err("Error getting dmabuf scatterlist\n");
> return -EINVAL;
> @@ -594,10 +594,10 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv)
> }
>
> if (buf->vaddr) {
> - dma_buf_vunmap(buf->db_attach->dmabuf, &map);
> + dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map);
> buf->vaddr = NULL;
> }
> - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
> + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir);
>
> buf->dma_sgt = NULL;
> }
> @@ -611,7 +611,7 @@ static void vb2_dma_sg_detach_dmabuf(void *mem_priv)
> vb2_dma_sg_unmap_dmabuf(buf);
>
> /* detach this attachment */
> - dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
> + dma_buf_detach_unlocked(buf->db_attach->dmabuf, buf->db_attach);
> kfree(buf);
> }
>
> @@ -633,7 +633,7 @@ static void *vb2_dma_sg_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
>
> buf->dev = dev;
> /* create attachment for the dmabuf with the user device */
> - dba = dma_buf_attach(dbuf, buf->dev);
> + dba = dma_buf_attach_unlocked(dbuf, buf->dev);
> if (IS_ERR(dba)) {
> pr_err("failed to attach dmabuf\n");
> kfree(buf);
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index 948152f1596b..7831bf545874 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -376,7 +376,7 @@ static int vb2_vmalloc_map_dmabuf(void *mem_priv)
> struct iosys_map map;
> int ret;
>
> - ret = dma_buf_vmap(buf->dbuf, &map);
> + ret = dma_buf_vmap_unlocked(buf->dbuf, &map);
> if (ret)
> return -EFAULT;
> buf->vaddr = map.vaddr;
> @@ -389,7 +389,7 @@ static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
> struct vb2_vmalloc_buf *buf = mem_priv;
> struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr);
>
> - dma_buf_vunmap(buf->dbuf, &map);
> + dma_buf_vunmap_unlocked(buf->dbuf, &map);
> buf->vaddr = NULL;
> }
>
> @@ -399,7 +399,7 @@ static void vb2_vmalloc_detach_dmabuf(void *mem_priv)
> struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr);
>
> if (buf->vaddr)
> - dma_buf_vunmap(buf->dbuf, &map);
> + dma_buf_vunmap_unlocked(buf->dbuf, &map);
>
> kfree(buf);
> }
> diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
> index 69c346148070..58e4595f3a10 100644
> --- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
> +++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c
> @@ -38,8 +38,8 @@ static void tegra_vde_release_entry(struct tegra_vde_cache_entry *entry)
> if (entry->vde->domain)
> tegra_vde_iommu_unmap(entry->vde, entry->iova);
>
> - dma_buf_unmap_attachment(entry->a, entry->sgt, entry->dma_dir);
> - dma_buf_detach(dmabuf, entry->a);
> + dma_buf_unmap_attachment_unlocked(entry->a, entry->sgt, entry->dma_dir);
> + dma_buf_detach_unlocked(dmabuf, entry->a);
> dma_buf_put(dmabuf);
>
> list_del(&entry->list);
> @@ -95,14 +95,14 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde,
> goto ref;
> }
>
> - attachment = dma_buf_attach(dmabuf, dev);
> + attachment = dma_buf_attach_unlocked(dmabuf, dev);
> if (IS_ERR(attachment)) {
> dev_err(dev, "Failed to attach dmabuf\n");
> err = PTR_ERR(attachment);
> goto err_unlock;
> }
>
> - sgt = dma_buf_map_attachment(attachment, dma_dir);
> + sgt = dma_buf_map_attachment_unlocked(attachment, dma_dir);
> if (IS_ERR(sgt)) {
> dev_err(dev, "Failed to get dmabufs sg_table\n");
> err = PTR_ERR(sgt);
> @@ -152,9 +152,9 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde,
> err_free:
> kfree(entry);
> err_unmap:
> - dma_buf_unmap_attachment(attachment, sgt, dma_dir);
> + dma_buf_unmap_attachment_unlocked(attachment, sgt, dma_dir);
> err_detach:
> - dma_buf_detach(dmabuf, attachment);
> + dma_buf_detach_unlocked(dmabuf, attachment);
> err_unlock:
> mutex_unlock(&vde->map_lock);
>
> diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
> index 93ebd174d848..558e8056eb80 100644
> --- a/drivers/misc/fastrpc.c
> +++ b/drivers/misc/fastrpc.c
> @@ -310,9 +310,9 @@ static void fastrpc_free_map(struct kref *ref)
> return;
> }
> }
> - dma_buf_unmap_attachment(map->attach, map->table,
> - DMA_BIDIRECTIONAL);
> - dma_buf_detach(map->buf, map->attach);
> + dma_buf_unmap_attachment_unlocked(map->attach, map->table,
> + DMA_BIDIRECTIONAL);
> + dma_buf_detach_unlocked(map->buf, map->attach);
> dma_buf_put(map->buf);
> }
>
> @@ -719,14 +719,14 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
> goto get_err;
> }
>
> - map->attach = dma_buf_attach(map->buf, sess->dev);
> + map->attach = dma_buf_attach_unlocked(map->buf, sess->dev);
> if (IS_ERR(map->attach)) {
> dev_err(sess->dev, "Failed to attach dmabuf\n");
> err = PTR_ERR(map->attach);
> goto attach_err;
> }
>
> - map->table = dma_buf_map_attachment(map->attach, DMA_BIDIRECTIONAL);
> + map->table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(map->table)) {
> err = PTR_ERR(map->table);
> goto map_err;
> @@ -763,7 +763,7 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
> return 0;
>
> map_err:
> - dma_buf_detach(map->buf, map->attach);
> + dma_buf_detach_unlocked(map->buf, map->attach);
> attach_err:
> dma_buf_put(map->buf);
> get_err:
> diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
> index 940e5e9e8a54..5a50e2697e95 100644
> --- a/drivers/xen/gntdev-dmabuf.c
> +++ b/drivers/xen/gntdev-dmabuf.c
> @@ -592,7 +592,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
> gntdev_dmabuf->priv = priv;
> gntdev_dmabuf->fd = fd;
>
> - attach = dma_buf_attach(dma_buf, dev);
> + attach = dma_buf_attach_unlocked(dma_buf, dev);
> if (IS_ERR(attach)) {
> ret = ERR_CAST(attach);
> goto fail_free_obj;
> @@ -600,7 +600,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
>
> gntdev_dmabuf->u.imp.attach = attach;
>
> - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
> if (IS_ERR(sgt)) {
> ret = ERR_CAST(sgt);
> goto fail_detach;
> @@ -658,9 +658,9 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
> fail_end_access:
> dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count);
> fail_unmap:
> - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
> fail_detach:
> - dma_buf_detach(dma_buf, attach);
> + dma_buf_detach_unlocked(dma_buf, attach);
> fail_free_obj:
> dmabuf_imp_free_storage(gntdev_dmabuf);
> fail_put:
> @@ -708,10 +708,10 @@ static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd)
> attach = gntdev_dmabuf->u.imp.attach;
>
> if (gntdev_dmabuf->u.imp.sgt)
> - dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt,
> - DMA_BIDIRECTIONAL);
> + dma_buf_unmap_attachment_unlocked(attach, gntdev_dmabuf->u.imp.sgt,
> + DMA_BIDIRECTIONAL);
> dma_buf = attach->dmabuf;
> - dma_buf_detach(attach->dmabuf, attach);
> + dma_buf_detach_unlocked(attach->dmabuf, attach);
> dma_buf_put(dma_buf);
>
> dmabuf_imp_free_storage(gntdev_dmabuf);
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 71731796c8c3..9ab09569dec1 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -601,14 +601,16 @@ dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
> return !!attach->importer_ops;
> }
>
> -struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
> - struct device *dev);
> +struct dma_buf_attachment *dma_buf_attach_unlocked(struct dma_buf *dmabuf,
> + struct device *dev);
> struct dma_buf_attachment *
> -dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
> - const struct dma_buf_attach_ops *importer_ops,
> - void *importer_priv);
> -void dma_buf_detach(struct dma_buf *dmabuf,
> - struct dma_buf_attachment *attach);
> +dma_buf_dynamic_attach_unlocked(struct dma_buf *dmabuf, struct device *dev,
> + const struct dma_buf_attach_ops *importer_ops,
> + void *importer_priv);
> +
> +void dma_buf_detach_unlocked(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attach);
> +
> int dma_buf_pin(struct dma_buf_attachment *attach);
> void dma_buf_unpin(struct dma_buf_attachment *attach);
>
> @@ -618,18 +620,20 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags);
> struct dma_buf *dma_buf_get(int fd);
> void dma_buf_put(struct dma_buf *dmabuf);
>
> -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
> - enum dma_data_direction);
> -void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
> - enum dma_data_direction);
> +struct sg_table *dma_buf_map_attachment_unlocked(struct dma_buf_attachment *,
> + enum dma_data_direction);
> +void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *,
> + struct sg_table *,
> + enum dma_data_direction);
> +
> void dma_buf_move_notify(struct dma_buf *dma_buf);
> int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
> enum dma_data_direction dir);
> int dma_buf_end_cpu_access(struct dma_buf *dma_buf,
> enum dma_data_direction dir);
>
> -int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
> - unsigned long);
> -int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
> -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
> +int dma_buf_mmap_unlocked(struct dma_buf *, struct vm_area_struct *,
> + unsigned long);
> +int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
> +void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
> #endif /* __DMA_BUF_H__ */
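
Viewed as a whole, the header now encodes the convention directly: every
importer entry point that may be called without dmabuf->resv held carries the
_unlocked postfix. Once the later patches move these functions to the dynamic
locking specification (and the "Add locked variant of dma_buf_vmap/vunmap()"
patch lands), I'd expect each _unlocked wrapper to reduce to the obvious
shape. A sketch, assuming the locked variant keeps the plain dma_buf_vmap()
name -- not the literal dma-buf.c code:

	int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map)
	{
		int ret;

		dma_resv_lock(dmabuf->resv, NULL);
		ret = dma_buf_vmap(dmabuf, map);	/* assumed locked variant */
		dma_resv_unlock(dmabuf->resv);

		return ret;
	}

That shape is also why mixed users like umem_dmabuf above can keep calling
the unsuffixed helpers under their own dma_resv_lock().
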
> --
> 2.37.2
>