2023-01-08 21:14:32

by Dmitry Osipenko

Subject: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

This series:

1. Makes minor fixes for drm_gem_lru and Panfrost
2. Brings refactoring for older code
3. Adds common drm-shmem memory shrinker
4. Enables shrinker for VirtIO-GPU driver
5. Switches Panfrost driver to the common shrinker

Changelog:

v10: - Rebased on a recent linux-next.

- Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.

- Added Steven's ack/r-b/t-b for the Panfrost patches.

- Fixed missing export of the new drm_gem_object_evict() function.

- Added fixes tags to the first two patches that are making minor fixes,
for consistency.

v9: - Replaced struct drm_gem_shmem_shrinker with drm_gem_shmem and
moved it to drm_device, as suggested by Thomas Zimmermann.

- Replaced drm_gem_shmem_shrinker_register() with drmm_gem_shmem_init(),
as suggested by Thomas Zimmermann.

- Moved the evict() callback to drm_gem_object_funcs and added a common
drm_gem_object_evict() helper, as suggested by Thomas Zimmermann.

- The shmem object is now evictable by default, as suggested by
Thomas Zimmermann. Dropped the set_evictable/purgeable() functions
as well; drivers will decide whether a BO is evictable within their
madvise IOCTL.

- Added patches that convert the drm-shmem code to use drm_WARN_ON() and
drm_dbg_kms(), as requested by Thomas Zimmermann.

- Turned the drm_gem_shmem_object booleans into 1-bit bit fields, as
suggested by Thomas Zimmermann.

- Switched to use drm_dev->unique for the shmem shrinker name. Drivers
don't need to specify the name explicitly anymore.

- Re-added dma_resv_test_signaled() that was missing in v8 and also
fixed its argument to DMA_RESV_USAGE_READ. See comment to
dma_resv_usage_rw().

- Added new fix for Panfrost driver that silences lockdep warning
caused by shrinker. Both Panfrost old and new shmem shrinkers are
affected.

v8: - Rebased on top of a recent linux-next that now has the dma-buf locking
convention patches merged, which had been blocking the shmem shrinker.

- Shmem shrinker now uses new drm_gem_lru helper.

- Dropped Steven Price's t-b from the Panfrost patch because the code
changed significantly since v6 and should be re-tested.

v7: - dma-buf locking convention

v6: https://lore.kernel.org/dri-devel/[email protected]/

Related patches:

Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
igt: https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise

The Mesa and IGT patches will be sent out once the kernel part lands.

Dmitry Osipenko (11):
drm/msm/gem: Prevent blocking within shrinker loop
drm/panfrost: Don't sync rpm suspension after mmu flushing
drm/gem: Add evict() callback to drm_gem_object_funcs
drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
drm/shmem: Switch to use drm_* debug helpers
drm/shmem-helper: Don't use vmap_use_count for dma-bufs
drm/shmem-helper: Switch to reservation lock
drm/shmem-helper: Add memory shrinker
drm/gem: Add drm_gem_pin_unlocked()
drm/virtio: Support memory shrinking
drm/panfrost: Switch to generic memory shrinker

drivers/gpu/drm/drm_gem.c | 54 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 646 +++++++++++++-----
drivers/gpu/drm/lima/lima_gem.c | 8 +-
drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 +-
drivers/gpu/drm/panfrost/Makefile | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 4 -
drivers/gpu/drm/panfrost/panfrost_drv.c | 34 +-
drivers/gpu/drm/panfrost/panfrost_gem.c | 30 +-
drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 122 ----
drivers/gpu/drm/panfrost/panfrost_job.c | 18 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 21 +-
drivers/gpu/drm/virtio/virtgpu_drv.h | 18 +-
drivers/gpu/drm/virtio/virtgpu_gem.c | 52 ++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 37 +
drivers/gpu/drm/virtio/virtgpu_kms.c | 8 +
drivers/gpu/drm/virtio/virtgpu_object.c | 132 +++-
drivers/gpu/drm/virtio/virtgpu_plane.c | 22 +-
drivers/gpu/drm/virtio/virtgpu_vq.c | 40 ++
include/drm/drm_device.h | 10 +-
include/drm/drm_gem.h | 19 +-
include/drm/drm_gem_shmem_helper.h | 112 +--
include/uapi/drm/virtgpu_drm.h | 14 +
23 files changed, 1010 insertions(+), 409 deletions(-)
delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

--
2.38.1


2023-01-08 21:15:11

by Dmitry Osipenko

Subject: [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object

Group all 1-bit boolean members of struct drm_gem_shmem_object at the end
of the structure, allowing the compiler to pack the data better and making
the code look more consistent.

Suggested-by: Thomas Zimmermann <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
include/drm/drm_gem_shmem_helper.h | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index a2201b2488c5..5994fed5e327 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -60,20 +60,6 @@ struct drm_gem_shmem_object {
*/
struct list_head madv_list;

- /**
- * @pages_mark_dirty_on_put:
- *
- * Mark pages as dirty when they are put.
- */
- unsigned int pages_mark_dirty_on_put : 1;
-
- /**
- * @pages_mark_accessed_on_put:
- *
- * Mark pages as accessed when they are put.
- */
- unsigned int pages_mark_accessed_on_put : 1;
-
/**
* @sgt: Scatter/gather table for imported PRIME buffers
*/
@@ -97,10 +83,24 @@ struct drm_gem_shmem_object {
*/
unsigned int vmap_use_count;

+ /**
+ * @pages_mark_dirty_on_put:
+ *
+ * Mark pages as dirty when they are put.
+ */
+ bool pages_mark_dirty_on_put : 1;
+
+ /**
+ * @pages_mark_accessed_on_put:
+ *
+ * Mark pages as accessed when they are put.
+ */
+ bool pages_mark_accessed_on_put : 1;
+
/**
* @map_wc: map object write-combined (instead of using shmem defaults).
*/
- bool map_wc;
+ bool map_wc : 1;
};

#define to_drm_gem_shmem_obj(obj) \
--
2.38.1
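The effect of the regrouping above can be illustrated with a small userspace sketch (hypothetical layouts loosely mirroring the struct, not the actual kernel definition): 1-bit fields scattered between wider members each occupy their own storage unit plus padding, while grouped flags let the compiler merge them into a single byte.

```c
#include <stdbool.h>

/* Hypothetical "before" layout: flags interleaved with wider members,
 * so each bitfield gets its own storage unit and alignment padding. */
struct scattered {
	unsigned int dirty_on_put : 1;    /* own 4-byte unit + padding */
	void *sgt;
	unsigned int accessed_on_put : 1; /* another separate unit */
	unsigned int use_count;
	bool map_wc;
};

/* Hypothetical "after" layout: all 1-bit flags grouped at the end,
 * sharing a single byte of storage. */
struct grouped {
	void *sgt;
	unsigned int use_count;
	bool dirty_on_put : 1;
	bool accessed_on_put : 1;
	bool map_wc : 1;
};
```

On a typical LP64 ABI the grouped layout comes out noticeably smaller, though exact sizes are implementation-defined.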

2023-01-08 21:15:12

by Dmitry Osipenko

Subject: [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker

Introduce a common drm-shmem shrinker for DRM drivers.

To start using the drm-shmem shrinker, drivers should do the following:

1. Implement the evict() callback of the GEM object, where the driver should
check whether the object is purgeable or evictable using the drm-shmem
helpers and perform the shrinking action

2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
which will register the drm-shmem shrinker

3. Implement a madvise IOCTL that will use drm_gem_shmem_madvise()
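
The madvise state tracking performed by drm_gem_shmem_madvise() in step 3 can be sketched as a tiny userspace model (hypothetical struct and function names; a simplification of the patch below, not the kernel code itself): 0 means active, a positive value means purgeable, and a negative value means purged, after which the state is sticky.

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the madv field: 0 = active, >0 = purgeable, <0 = purged. */
struct shmem_model {
	int madv;
};

/* Mirrors the core of drm_gem_shmem_madvise(): a new value only sticks
 * while the object has not been purged; the return value tells the
 * caller whether backing storage is still present. */
static bool model_madvise(struct shmem_model *s, int madv)
{
	if (s->madv >= 0)
		s->madv = madv;

	return s->madv >= 0;
}
```

Once the object is marked purged (madv < 0), further madvise calls report failure and leave the state untouched, matching how the helper refuses to resurrect purged objects.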

Signed-off-by: Daniel Almeida <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 460 ++++++++++++++++--
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 9 +-
include/drm/drm_device.h | 10 +-
include/drm/drm_gem_shmem_helper.h | 61 ++-
4 files changed, 490 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a1f2f2158c50..3ab5ec325ddb 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_managed.h>
#include <drm/drm_prime.h>
#include <drm/drm_print.h>

@@ -128,6 +129,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

+static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
+{
+ /*
+ * Destroying the object is a special case.. drm_gem_shmem_free()
+ * calls many things that WARN_ON if the obj lock is not held. But
+ * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
+ * order inversion between reservation_ww_class_mutex and fs_reclaim.
+ *
+ * This deadlock is not actually possible, because no one should
+ * already be holding the lock when drm_gem_shmem_free() is called.
+ * Unfortunately lockdep is not aware of this detail. So when the
+ * refcount drops to zero, we pretend it is already locked.
+ */
+ if (kref_read(&shmem->base.refcount))
+ dma_resv_assert_held(shmem->base.resv);
+}
+
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+ dma_resv_assert_held(shmem->base.resv);
+
+ return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+ shmem->pages_use_count && !shmem->pages_pin_count &&
+ !shmem->base.dma_buf && !shmem->base.import_attach &&
+ shmem->sgt && !shmem->evicted;
+}
+
+static void
+drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+ struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+ struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+ drm_gem_shmem_resv_assert_held(shmem);
+
+ if (!gem_shrinker || obj->import_attach)
+ return;
+
+ if (shmem->madv < 0)
+ drm_gem_lru_remove(&shmem->base);
+ else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+ drm_gem_lru_move_tail(&gem_shrinker->lru_evictable, &shmem->base);
+ else if (shmem->evicted)
+ drm_gem_lru_move_tail(&gem_shrinker->lru_evicted, &shmem->base);
+ else if (!shmem->pages)
+ drm_gem_lru_remove(&shmem->base);
+ else
+ drm_gem_lru_move_tail(&gem_shrinker->lru_pinned, &shmem->base);
+}
+
/**
* drm_gem_shmem_free - Free resources associated with a shmem GEM object
* @shmem: shmem GEM object to free
@@ -142,7 +194,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else {
- dma_resv_lock(shmem->base.resv, NULL);
+ /* take out shmem GEM object from the memory shrinker */
+ drm_gem_shmem_madvise(shmem, -1);

drm_WARN_ON(obj->dev, shmem->vmap_use_count);

@@ -152,12 +205,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
sg_free_table(shmem->sgt);
kfree(shmem->sgt);
}
- if (shmem->pages)
+ if (shmem->pages_use_count)
drm_gem_shmem_put_pages(shmem);

drm_WARN_ON(obj->dev, shmem->pages_use_count);
-
- dma_resv_unlock(shmem->base.resv);
}

drm_gem_object_release(obj);
@@ -165,19 +216,31 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_free);

-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct page **pages;

- if (shmem->pages_use_count++ > 0)
+ dma_resv_assert_held(shmem->base.resv);
+
+ if (shmem->madv < 0) {
+ drm_WARN_ON(obj->dev, shmem->pages);
+ return -ENOMEM;
+ }
+
+ if (shmem->pages) {
+ drm_WARN_ON(obj->dev, !shmem->evicted);
return 0;
+ }
+
+ if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
+ return -EINVAL;

pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
PTR_ERR(pages));
- shmem->pages_use_count = 0;
return PTR_ERR(pages);
}

@@ -196,6 +259,58 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
return 0;
}

+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+ int err;
+
+ dma_resv_assert_held(shmem->base.resv);
+
+ if (shmem->madv < 0)
+ return -ENOMEM;
+
+ if (shmem->pages_use_count++ > 0) {
+ err = drm_gem_shmem_swap_in(shmem);
+ if (err)
+ goto err_zero_use;
+
+ return 0;
+ }
+
+ err = drm_gem_shmem_acquire_pages(shmem);
+ if (err)
+ goto err_zero_use;
+
+ drm_gem_shmem_update_pages_state(shmem);
+
+ return 0;
+
+err_zero_use:
+ shmem->pages_use_count = 0;
+
+ return err;
+}
+
+static void
+drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+
+ if (!shmem->pages) {
+ drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
+ return;
+ }
+
+#ifdef CONFIG_X86
+ if (shmem->map_wc)
+ set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+ drm_gem_put_pages(obj, shmem->pages,
+ shmem->pages_mark_dirty_on_put,
+ shmem->pages_mark_accessed_on_put);
+ shmem->pages = NULL;
+}
+
/*
* drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
@@ -206,7 +321,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

- dma_resv_assert_held(shmem->base.resv);
+ drm_gem_shmem_resv_assert_held(shmem);

if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
return;
@@ -214,15 +329,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
if (--shmem->pages_use_count > 0)
return;

-#ifdef CONFIG_X86
- if (shmem->map_wc)
- set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
+ drm_gem_shmem_release_pages(shmem);

- drm_gem_put_pages(obj, shmem->pages,
- shmem->pages_mark_dirty_on_put,
- shmem->pages_mark_accessed_on_put);
- shmem->pages = NULL;
+ drm_gem_shmem_update_pages_state(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_put_pages);

@@ -239,12 +348,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
+ int ret;

dma_resv_assert_held(shmem->base.resv);

drm_WARN_ON(obj->dev, obj->import_attach);

- return drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages(shmem);
+ if (!ret)
+ shmem->pages_pin_count++;
+
+ return ret;
}
EXPORT_SYMBOL(drm_gem_shmem_pin);

@@ -263,7 +377,12 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)

drm_WARN_ON(obj->dev, obj->import_attach);

+ if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
+ return;
+
drm_gem_shmem_put_pages(shmem);
+
+ shmem->pages_pin_count--;
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);

@@ -306,7 +425,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return 0;
}

- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_pin(shmem);
if (ret)
goto err_zero_use;

@@ -329,7 +448,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,

err_put_pages:
if (!obj->import_attach)
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_unpin(shmem);
err_zero_use:
shmem->vmap_use_count = 0;

@@ -366,7 +485,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
return;

vunmap(shmem->vaddr);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_unpin(shmem);
}

shmem->vaddr = NULL;
@@ -403,48 +522,84 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
*/
int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
{
- dma_resv_assert_held(shmem->base.resv);
+ drm_gem_shmem_resv_assert_held(shmem);

if (shmem->madv >= 0)
shmem->madv = madv;

madv = shmem->madv;

+ drm_gem_shmem_update_pages_state(shmem);
+
return (madv >= 0);
}
EXPORT_SYMBOL(drm_gem_shmem_madvise);

-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
+ * hardware access to the memory.
+ * @shmem: shmem GEM object
+ *
+ * This function moves shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
- struct drm_device *dev = obj->dev;
+ struct sg_table *sgt;
+ int err;

dma_resv_assert_held(shmem->base.resv);

- drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+ if (shmem->evicted) {
+ err = drm_gem_shmem_acquire_pages(shmem);
+ if (err)
+ return err;
+
+ sgt = drm_gem_shmem_get_sg_table(shmem);
+ if (IS_ERR(sgt))
+ return PTR_ERR(sgt);
+
+ err = dma_map_sgtable(obj->dev->dev, sgt,
+ DMA_BIDIRECTIONAL, 0);
+ if (err) {
+ sg_free_table(sgt);
+ kfree(sgt);
+ return err;
+ }

- dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
- sg_free_table(shmem->sgt);
- kfree(shmem->sgt);
- shmem->sgt = NULL;
+ shmem->sgt = sgt;
+ shmem->evicted = false;

- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_update_pages_state(shmem);
+ }

- shmem->madv = -1;
+ if (!shmem->pages)
+ return -ENOMEM;

- drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
- drm_gem_free_mmap_offset(obj);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in);

- /* Our goal here is to return as much of the memory as
- * is possible back to the system as we are called from OOM.
- * To do this we must instruct the shmfs to drop all of its
- * backing pages, *now*.
- */
- shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+ struct drm_device *dev = obj->dev;

- invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+ if (shmem->evicted)
+ return;
+
+ dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+ drm_gem_shmem_release_pages(shmem);
+ drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
+ sg_free_table(shmem->sgt);
+ kfree(shmem->sgt);
+ shmem->sgt = NULL;
}
-EXPORT_SYMBOL(drm_gem_shmem_purge);

/**
* drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -495,22 +650,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
vm_fault_t ret;
struct page *page;
pgoff_t page_offset;
+ bool pages_unpinned;
+ int err;

/* We don't use vmf->pgoff since that has the fake offset */
page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

dma_resv_lock(shmem->base.resv, NULL);

- if (page_offset >= num_pages ||
- drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
- shmem->madv < 0) {
+ /* Sanity-check that we have the pages pointer when it should be present */
+ pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
+ drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
+
+ if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
ret = VM_FAULT_SIGBUS;
} else {
+ err = drm_gem_shmem_swap_in(shmem);
+ if (err) {
+ ret = VM_FAULT_OOM;
+ goto unlock;
+ }
+
page = shmem->pages[page_offset];

ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
}

+unlock:
dma_resv_unlock(shmem->base.resv);

return ret;
@@ -533,6 +699,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;

+ drm_gem_shmem_update_pages_state(shmem);
dma_resv_unlock(shmem->base.resv);

drm_gem_vm_open(vma);
@@ -615,7 +782,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "vmap_use_count=%u\n",
shmem->vmap_use_count);

+ drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
+ drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
}
EXPORT_SYMBOL(drm_gem_shmem_print_info);

@@ -688,6 +857,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)

shmem->sgt = sgt;

+ drm_gem_shmem_update_pages_state(shmem);
+
dma_resv_unlock(shmem->base.resv);

return sgt;
@@ -738,6 +909,209 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);

+static struct drm_gem_shmem_shrinker *
+to_drm_shrinker(struct shrinker *shrinker)
+{
+ return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+ unsigned long count = gem_shrinker->lru_evictable.count;
+
+ if (count >= SHRINK_EMPTY)
+ return SHRINK_EMPTY - 1;
+
+ return count ?: SHRINK_EMPTY;
+}
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+
+ drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
+ drm_WARN_ON(obj->dev, shmem->evicted);
+
+ drm_gem_shmem_unpin_pages(shmem);
+
+ shmem->evicted = true;
+ drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
+
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+
+ drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+
+ drm_gem_shmem_unpin_pages(shmem);
+ drm_gem_free_mmap_offset(obj);
+
+ /* Our goal here is to return as much of the memory as
+ * is possible back to the system as we are called from OOM.
+ * To do this we must instruct the shmfs to drop all of its
+ * backing pages, *now*.
+ */
+ shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+ invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+ shmem->madv = -1;
+ shmem->evicted = false;
+ drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
+
+static bool drm_gem_is_busy(struct drm_gem_object *obj)
+{
+ return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
+}
+
+static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+ if (!drm_gem_shmem_is_evictable(shmem) ||
+ get_nr_swap_pages() < obj->size >> PAGE_SHIFT ||
+ drm_gem_is_busy(obj))
+ return false;
+
+ return drm_gem_object_evict(obj);
+}
+
+static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+ if (!drm_gem_shmem_is_purgeable(shmem) ||
+ drm_gem_is_busy(obj))
+ return false;
+
+ return drm_gem_object_evict(obj);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+ unsigned long nr_to_scan = sc->nr_to_scan;
+ unsigned long remaining = 0;
+ unsigned long freed = 0;
+
+ /* purge as many objects as we can */
+ freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+ nr_to_scan, &remaining,
+ drm_gem_shmem_shrinker_purge);
+
+ /* evict as many objects as we can */
+ if (freed < nr_to_scan)
+ freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+ nr_to_scan - freed, &remaining,
+ drm_gem_shmem_shrinker_evict);
+
+ return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
+}
+
+static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
+ const char *shrinker_name)
+{
+ struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+ int err;
+
+ gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+ gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+ gem_shrinker->base.seeks = DEFAULT_SEEKS;
+
+ mutex_init(&gem_shrinker->lock);
+ drm_gem_lru_init(&gem_shrinker->lru_evictable, &gem_shrinker->lock);
+ drm_gem_lru_init(&gem_shrinker->lru_evicted, &gem_shrinker->lock);
+ drm_gem_lru_init(&gem_shrinker->lru_pinned, &gem_shrinker->lock);
+
+ err = register_shrinker(&gem_shrinker->base, shrinker_name);
+ if (err) {
+ mutex_destroy(&gem_shrinker->lock);
+ return err;
+ }
+
+ return 0;
+}
+
+static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
+ struct drm_gem_shmem *shmem_mm)
+{
+ struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+ unregister_shrinker(&gem_shrinker->base);
+ drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evictable.list));
+ drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evicted.list));
+ drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_pinned.list));
+ mutex_destroy(&gem_shrinker->lock);
+}
+
+static int drm_gem_shmem_init(struct drm_device *dev)
+{
+ int err;
+
+ if (WARN_ON(dev->shmem_mm))
+ return -EBUSY;
+
+ dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
+ if (!dev->shmem_mm)
+ return -ENOMEM;
+
+ err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
+ if (err)
+ goto free_gem_shmem;
+
+ return 0;
+
+free_gem_shmem:
+ kfree(dev->shmem_mm);
+ dev->shmem_mm = NULL;
+
+ return err;
+}
+
+static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
+{
+ struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
+
+ drm_gem_shmem_shrinker_release(dev, shmem_mm);
+ dev->shmem_mm = NULL;
+ kfree(shmem_mm);
+}
+
+/**
+ * drmm_gem_shmem_init() - Initialize drm-shmem internals
+ * @dev: DRM device
+ *
+ * Cleanup is automatically managed as part of DRM device releasing.
+ * Calling this function multiple times will result in an error.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drmm_gem_shmem_init(struct drm_device *dev)
+{
+ int err;
+
+ err = drm_gem_shmem_init(dev);
+ if (err)
+ return err;
+
+ err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
+ if (err)
+ return err;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
+
MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
MODULE_IMPORT_NS(DMA_BUF);
MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 6a71a2555f85..865a989d67c8 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -15,6 +15,13 @@
#include "panfrost_gem.h"
#include "panfrost_mmu.h"

+static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+ return (shmem->madv > 0) &&
+ !shmem->pages_pin_count && shmem->sgt &&
+ !shmem->base.dma_buf && !shmem->base.import_attach;
+}
+
static unsigned long
panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
@@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
return 0;

list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
- if (drm_gem_shmem_is_purgeable(shmem))
+ if (panfrost_gem_shmem_is_purgeable(shmem))
count += shmem->base.size >> PAGE_SHIFT;
}

diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index a68c6a312b46..8acd455fc156 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
struct drm_vma_offset_manager;
struct drm_vram_mm;
struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;

struct inode;

@@ -277,8 +278,13 @@ struct drm_device {
/** @vma_offset_manager: GEM information */
struct drm_vma_offset_manager *vma_offset_manager;

- /** @vram_mm: VRAM MM memory manager */
- struct drm_vram_mm *vram_mm;
+ union {
+ /** @vram_mm: VRAM MM memory manager */
+ struct drm_vram_mm *vram_mm;
+
+ /** @shmem_mm: SHMEM GEM memory manager */
+ struct drm_gem_shmem *shmem_mm;
+ };

/**
* @switch_power_state:
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 20ddcd799df9..c264caf6c83b 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mutex.h>
+#include <linux/shrinker.h>

#include <drm/drm_file.h>
#include <drm/drm_gem.h>
@@ -15,6 +16,7 @@
struct dma_buf_attachment;
struct drm_mode_create_dumb;
struct drm_printer;
+struct drm_device;
struct sg_table;

/**
@@ -39,12 +41,21 @@ struct drm_gem_shmem_object {
*/
unsigned int pages_use_count;

+ /**
+ * @pages_pin_count:
+ *
+ * Reference count on the pinned pages table.
+ * The pages are allowed to be evicted by the memory shrinker
+ * only when the count is zero.
+ */
+ unsigned int pages_pin_count;
+
/**
* @madv: State for madvise
*
* 0 is active/inuse.
+ * 1 is not-needed/can-be-purged
* A negative value is the object is purged.
- * Positive values are driver specific and not used by the helpers.
*/
int madv;

@@ -91,6 +102,12 @@ struct drm_gem_shmem_object {
* @map_wc: map object write-combined (instead of using shmem defaults).
*/
bool map_wc : 1;
+
+ /**
+ * @evicted: True if shmem pages are evicted by the memory shrinker.
+ * Used internally by memory shrinker.
+ */
+ bool evicted : 1;
};

#define to_drm_gem_shmem_obj(obj) \
@@ -112,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);

static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
{
- return (shmem->madv > 0) &&
- !shmem->vmap_use_count && shmem->sgt &&
- !shmem->base.dma_buf && !shmem->base.import_attach;
+ dma_resv_assert_held(shmem->base.resv);
+
+ return (shmem->madv > 0) && shmem->base.funcs->evict &&
+ shmem->pages_use_count && !shmem->pages_pin_count &&
+ !shmem->base.dma_buf && !shmem->base.import_attach &&
+ (shmem->sgt || shmem->evicted);
}

+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);

struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
@@ -260,6 +283,36 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
return drm_gem_shmem_mmap(shmem, vma);
}

+/**
+ * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
+ */
+struct drm_gem_shmem_shrinker {
+ /** @base: Shrinker for purging shmem GEM objects */
+ struct shrinker base;
+
+ /** @lock: Protects @lru_* */
+ struct mutex lock;
+
+ /** @lru_pinned: List of pinned shmem GEM objects */
+ struct drm_gem_lru lru_pinned;
+
+ /** @lru_evictable: List of shmem GEM objects to be evicted */
+ struct drm_gem_lru lru_evictable;
+
+ /** @lru_evicted: List of evicted shmem GEM objects */
+ struct drm_gem_lru lru_evicted;
+};
+
+/**
+ * struct drm_gem_shmem - GEM shmem memory manager
+ */
+struct drm_gem_shmem {
+ /** @shrinker: GEM shmem shrinker */
+ struct drm_gem_shmem_shrinker shrinker;
+};
+
+int drmm_gem_shmem_init(struct drm_device *dev);
+
/*
* Driver ops
*/
--
2.38.1
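The LRU bookkeeping that drm_gem_shmem_update_pages_state() performs in the patch above can be sketched as a small userspace model (hypothetical names and simplified conditions; the real helpers also check funcs->evict, dma-buf attachment and the sg table):

```c
#include <assert.h>
#include <stdbool.h>

/* Which shrinker LRU list a shmem object ends up on. */
enum lru_bucket { LRU_NONE, LRU_EVICTABLE, LRU_EVICTED, LRU_PINNED };

struct shmem_state {
	int madv;                    /* <0 purged, 0 active, >0 purgeable */
	bool evicted;
	bool has_pages;
	unsigned int pages_use_count;
	unsigned int pages_pin_count;
};

/* Simplified drm_gem_shmem_is_evictable(): in use, not pinned,
 * not already evicted, and not marked for purging. */
static bool evictable(const struct shmem_state *s)
{
	return s->madv >= 0 && s->pages_use_count &&
	       !s->pages_pin_count && !s->evicted;
}

/* Simplified drm_gem_shmem_is_purgeable(): userspace marked the
 * object as not needed and nothing holds the pages pinned. */
static bool purgeable(const struct shmem_state *s)
{
	return s->madv > 0 && s->pages_use_count && !s->pages_pin_count;
}

/* Mirrors the if/else ladder in drm_gem_shmem_update_pages_state(). */
static enum lru_bucket pick_bucket(const struct shmem_state *s)
{
	if (s->madv < 0)
		return LRU_NONE;        /* purged: removed from all LRUs */
	if (evictable(s) || purgeable(s))
		return LRU_EVICTABLE;   /* shrinker may reclaim it */
	if (s->evicted)
		return LRU_EVICTED;
	if (!s->has_pages)
		return LRU_NONE;
	return LRU_PINNED;              /* backed but not reclaimable */
}
```

For example, pinning an object (pages_pin_count > 0) moves it off the evictable list, while purging it (madv < 0) drops it from the shrinker's bookkeeping entirely.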

2023-01-08 21:16:13

by Dmitry Osipenko

Subject: [PATCH v10 02/11] drm/panfrost: Don't sync rpm suspension after mmu flushing

Lockdep warns about a potential circular locking dependency between devfreq
and fs_reclaim, caused by immediate device suspension when a mapping is
released by the shrinker. Fix it by performing the suspension asynchronously.

Reviewed-by: Steven Price <[email protected]>
Fixes: ec7eba47da86 ("drm/panfrost: Rework page table flushing and runtime PM interaction")
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 4e83a1891f3e..666a5e53fe19 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -282,7 +282,7 @@ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
if (pm_runtime_active(pfdev->dev))
mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);

- pm_runtime_put_sync_autosuspend(pfdev->dev);
+ pm_runtime_put_autosuspend(pfdev->dev);
}

static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
--
2.38.1

2023-01-08 21:38:05

by Dmitry Osipenko

Subject: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers

Ease debugging of multi-GPU systems by using the drm_WARN_*() and
drm_dbg_kms() helpers, which print out the DRM device name corresponding
to the shmem GEM.

Suggested-by: Thomas Zimmermann <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 38 +++++++++++++++-----------
1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f21f47737817..5006f7da7f2d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

- WARN_ON(shmem->vmap_use_count);
+ drm_WARN_ON(obj->dev, shmem->vmap_use_count);

if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
@@ -156,7 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
drm_gem_shmem_put_pages(shmem);
}

- WARN_ON(shmem->pages_use_count);
+ drm_WARN_ON(obj->dev, shmem->pages_use_count);

drm_gem_object_release(obj);
mutex_destroy(&shmem->pages_lock);
@@ -175,7 +175,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)

pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
- DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
+ drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
+ PTR_ERR(pages));
shmem->pages_use_count = 0;
return PTR_ERR(pages);
}
@@ -207,9 +208,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
*/
int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
{
+ struct drm_gem_object *obj = &shmem->base;
int ret;

- WARN_ON(shmem->base.import_attach);
+ drm_WARN_ON(obj->dev, obj->import_attach);

ret = mutex_lock_interruptible(&shmem->pages_lock);
if (ret)
@@ -225,7 +227,7 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

- if (WARN_ON_ONCE(!shmem->pages_use_count))
+ if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
return;

if (--shmem->pages_use_count > 0)
@@ -268,7 +270,9 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
*/
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
- WARN_ON(shmem->base.import_attach);
+ struct drm_gem_object *obj = &shmem->base;
+
+ drm_WARN_ON(obj->dev, obj->import_attach);

return drm_gem_shmem_get_pages(shmem);
}
@@ -283,7 +287,9 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
*/
void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
{
- WARN_ON(shmem->base.import_attach);
+ struct drm_gem_object *obj = &shmem->base;
+
+ drm_WARN_ON(obj->dev, obj->import_attach);

drm_gem_shmem_put_pages(shmem);
}
@@ -303,7 +309,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
if (obj->import_attach) {
ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
if (!ret) {
- if (WARN_ON(map->is_iomem)) {
+ if (drm_WARN_ON(obj->dev, map->is_iomem)) {
dma_buf_vunmap(obj->import_attach->dmabuf, map);
ret = -EIO;
goto err_put_pages;
@@ -328,7 +334,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
}

if (ret) {
- DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
+ drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret);
goto err_put_pages;
}

@@ -378,7 +384,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
{
struct drm_gem_object *obj = &shmem->base;

- if (WARN_ON_ONCE(!shmem->vmap_use_count))
+ if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
return;

if (--shmem->vmap_use_count > 0)
@@ -463,7 +469,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;

- WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+ drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));

dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
sg_free_table(shmem->sgt);
@@ -555,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
mutex_lock(&shmem->pages_lock);

if (page_offset >= num_pages ||
- WARN_ON_ONCE(!shmem->pages) ||
+ drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
shmem->madv < 0) {
ret = VM_FAULT_SIGBUS;
} else {
@@ -574,7 +580,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
struct drm_gem_object *obj = vma->vm_private_data;
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

- WARN_ON(shmem->base.import_attach);
+ drm_WARN_ON(obj->dev, obj->import_attach);

mutex_lock(&shmem->pages_lock);

@@ -583,7 +589,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
* mmap'd, vm_open() just grabs an additional reference for the new
* mm the vma is getting copied into (ie. on fork()).
*/
- if (!WARN_ON_ONCE(!shmem->pages_use_count))
+ if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;

mutex_unlock(&shmem->pages_lock);
@@ -677,7 +683,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

- WARN_ON(shmem->base.import_attach);
+ drm_WARN_ON(obj->dev, obj->import_attach);

return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
}
@@ -708,7 +714,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
if (shmem->sgt)
return shmem->sgt;

- WARN_ON(obj->import_attach);
+ drm_WARN_ON(obj->dev, obj->import_attach);

ret = drm_gem_shmem_get_pages(shmem);
if (ret)
--
2.38.1

2023-01-08 21:41:09

by Dmitry Osipenko

Subject: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock

Replace all drm-shmem locks with a GEM reservation lock. This makes the
locking consistent with the dma-buf locking convention, where importers are
responsible for holding the reservation lock for all operations performed
over dma-bufs, preventing deadlocks between dma-buf importers and exporters.

Suggested-by: Daniel Vetter <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 185 +++++++-----------
drivers/gpu/drm/lima/lima_gem.c | 8 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 7 +-
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 6 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 19 +-
include/drm/drm_gem_shmem_helper.h | 14 +-
6 files changed, 94 insertions(+), 145 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1392cbd3cc02..a1f2f2158c50 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
if (ret)
goto err_release;

- mutex_init(&shmem->pages_lock);
- mutex_init(&shmem->vmap_lock);
INIT_LIST_HEAD(&shmem->madv_list);

if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

- drm_WARN_ON(obj->dev, shmem->vmap_use_count);
-
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else {
+ dma_resv_lock(shmem->base.resv, NULL);
+
+ drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+
if (shmem->sgt) {
dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
DMA_BIDIRECTIONAL, 0);
@@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
}
if (shmem->pages)
drm_gem_shmem_put_pages(shmem);
- }

- drm_WARN_ON(obj->dev, shmem->pages_use_count);
+ drm_WARN_ON(obj->dev, shmem->pages_use_count);
+
+ dma_resv_unlock(shmem->base.resv);
+ }

drm_gem_object_release(obj);
- mutex_destroy(&shmem->pages_lock);
- mutex_destroy(&shmem->vmap_lock);
kfree(shmem);
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_free);

-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
@@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
}

/*
- * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
*
- * This function makes sure that backing pages exists for the shmem GEM object
- * and increases the use count.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function decreases the use count and puts the backing pages when use drops to zero.
*/
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
- int ret;

- drm_WARN_ON(obj->dev, obj->import_attach);
-
- ret = mutex_lock_interruptible(&shmem->pages_lock);
- if (ret)
- return ret;
- ret = drm_gem_shmem_get_pages_locked(shmem);
- mutex_unlock(&shmem->pages_lock);
-
- return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
-
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
- struct drm_gem_object *obj = &shmem->base;
+ dma_resv_assert_held(shmem->base.resv);

if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
return;
@@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
}
-
-/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use drops to zero.
- */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
-{
- mutex_lock(&shmem->pages_lock);
- drm_gem_shmem_put_pages_locked(shmem);
- mutex_unlock(&shmem->pages_lock);
-}
EXPORT_SYMBOL(drm_gem_shmem_put_pages);

/**
@@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

+ dma_resv_assert_held(shmem->base.resv);
+
drm_WARN_ON(obj->dev, obj->import_attach);

return drm_gem_shmem_get_pages(shmem);
@@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;

+ dma_resv_assert_held(shmem->base.resv);
+
drm_WARN_ON(obj->dev, obj->import_attach);

drm_gem_shmem_put_pages(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_unpin);

-static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+/*
+ * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ * store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
int ret = 0;
@@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
} else {
pgprot_t prot = PAGE_KERNEL;

+ dma_resv_assert_held(shmem->base.resv);
+
if (shmem->vmap_use_count++ > 0) {
iosys_map_set_vaddr(map, shmem->vaddr);
return 0;
@@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,

return ret;
}
+EXPORT_SYMBOL(drm_gem_shmem_vmap);

/*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
- * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
- * store.
- *
- * This function makes sure that a contiguous kernel virtual address mapping
- * exists for the buffer backing the shmem GEM object. It hides the differences
- * between dma-buf imported and natively allocated objects.
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
*
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
+ * zero.
*
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
*/
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
-{
- int ret;
-
- ret = mutex_lock_interruptible(&shmem->vmap_lock);
- if (ret)
- return ret;
- ret = drm_gem_shmem_vmap_locked(shmem, map);
- mutex_unlock(&shmem->vmap_lock);
-
- return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
-
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;

if (obj->import_attach) {
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
+ dma_resv_assert_held(shmem->base.resv);
+
if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
return;

@@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,

shmem->vaddr = NULL;
}
-
-/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- * @shmem: shmem GEM object
- * @map: Kernel virtual address where the SHMEM GEM object was mapped
- *
- * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
- *
- * This function hides the differences between dma-buf imported and natively
- * allocated objects.
- */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
-{
- mutex_lock(&shmem->vmap_lock);
- drm_gem_shmem_vunmap_locked(shmem, map);
- mutex_unlock(&shmem->vmap_lock);
-}
EXPORT_SYMBOL(drm_gem_shmem_vunmap);

static struct drm_gem_shmem_object *
@@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
*/
int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
{
- mutex_lock(&shmem->pages_lock);
+ dma_resv_assert_held(shmem->base.resv);

if (shmem->madv >= 0)
shmem->madv = madv;

madv = shmem->madv;

- mutex_unlock(&shmem->pages_lock);
-
return (madv >= 0);
}
EXPORT_SYMBOL(drm_gem_shmem_madvise);

-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;

+ dma_resv_assert_held(shmem->base.resv);
+
drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));

dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
shmem->sgt = NULL;

- drm_gem_shmem_put_pages_locked(shmem);
+ drm_gem_shmem_put_pages(shmem);

shmem->madv = -1;

@@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)

invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
}
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
- if (!mutex_trylock(&shmem->pages_lock))
- return false;
- drm_gem_shmem_purge_locked(shmem);
- mutex_unlock(&shmem->pages_lock);
-
- return true;
-}
EXPORT_SYMBOL(drm_gem_shmem_purge);

/**
@@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
/* We don't use vmf->pgoff since that has the fake offset */
page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

- mutex_lock(&shmem->pages_lock);
+ dma_resv_lock(shmem->base.resv, NULL);

if (page_offset >= num_pages ||
drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
}

- mutex_unlock(&shmem->pages_lock);
+ dma_resv_unlock(shmem->base.resv);

return ret;
}
@@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)

drm_WARN_ON(obj->dev, obj->import_attach);

- mutex_lock(&shmem->pages_lock);
+ dma_resv_lock(shmem->base.resv, NULL);

/*
* We should have already pinned the pages when the buffer was first
@@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;

- mutex_unlock(&shmem->pages_lock);
+ dma_resv_unlock(shmem->base.resv);

drm_gem_vm_open(vma);
}
@@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
struct drm_gem_object *obj = vma->vm_private_data;
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

+ dma_resv_lock(shmem->base.resv, NULL);
drm_gem_shmem_put_pages(shmem);
+ dma_resv_unlock(shmem->base.resv);
+
drm_gem_vm_close(vma);
}

@@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
return dma_buf_mmap(obj->dma_buf, vma, 0);
}

+ dma_resv_lock(shmem->base.resv, NULL);
ret = drm_gem_shmem_get_pages(shmem);
+ dma_resv_unlock(shmem->base.resv);
+
if (ret)
return ret;

@@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)

drm_WARN_ON(obj->dev, obj->import_attach);

+ dma_resv_lock(shmem->base.resv, NULL);
+
ret = drm_gem_shmem_get_pages(shmem);
if (ret)
- return ERR_PTR(ret);
+ goto err_unlock;

sgt = drm_gem_shmem_get_sg_table(shmem);
if (IS_ERR(sgt)) {
@@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)

shmem->sgt = sgt;

+ dma_resv_unlock(shmem->base.resv);
+
return sgt;

err_free_sgt:
@@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
kfree(sgt);
err_put_pages:
drm_gem_shmem_put_pages(shmem);
+err_unlock:
+ dma_resv_unlock(shmem->base.resv);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)

new_size = min(new_size, bo->base.base.size);

- mutex_lock(&bo->base.pages_lock);
+ dma_resv_lock(bo->base.base.resv, NULL);

if (bo->base.pages) {
pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
if (!pages) {
- mutex_unlock(&bo->base.pages_lock);
+ dma_resv_unlock(bo->base.base.resv);
return -ENOMEM;
}

@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
struct page *page = shmem_read_mapping_page(mapping, i);

if (IS_ERR(page)) {
- mutex_unlock(&bo->base.pages_lock);
+ dma_resv_unlock(bo->base.base.resv);
return PTR_ERR(page);
}
pages[i] = page;
}

- mutex_unlock(&bo->base.pages_lock);
+ dma_resv_unlock(bo->base.base.resv);

ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index abb0dadd8f63..9f3f2283b67a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,

bo = to_panfrost_bo(gem_obj);

+ ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+ if (ret)
+ goto out_put_object;
+
mutex_lock(&pfdev->shrinker_lock);
mutex_lock(&bo->mappings.lock);
if (args->madv == PANFROST_MADV_DONTNEED) {
@@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
out_unlock_mappings:
mutex_unlock(&bo->mappings.lock);
mutex_unlock(&pfdev->shrinker_lock);
-
+ dma_resv_unlock(bo->base.base.resv);
+out_put_object:
drm_gem_object_put(gem_obj);
return ret;
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index bf0170782f25..6a71a2555f85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
if (!mutex_trylock(&bo->mappings.lock))
return false;

- if (!mutex_trylock(&shmem->pages_lock))
+ if (!dma_resv_trylock(shmem->base.resv))
goto unlock_mappings;

panfrost_gem_teardown_mappings_locked(bo);
- drm_gem_shmem_purge_locked(&bo->base);
+ drm_gem_shmem_purge(&bo->base);
ret = true;

- mutex_unlock(&shmem->pages_lock);
+ dma_resv_unlock(shmem->base.resv);

unlock_mappings:
mutex_unlock(&bo->mappings.lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 666a5e53fe19..0679df57f394 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
struct panfrost_gem_mapping *bomapping;
struct panfrost_gem_object *bo;
struct address_space *mapping;
+ struct drm_gem_object *obj;
pgoff_t page_offset;
struct sg_table *sgt;
struct page **pages;
@@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
page_offset = addr >> PAGE_SHIFT;
page_offset -= bomapping->mmnode.start;

- mutex_lock(&bo->base.pages_lock);
+ obj = &bo->base.base;
+
+ dma_resv_lock(obj->resv, NULL);

if (!bo->base.pages) {
bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
if (!bo->sgts) {
- mutex_unlock(&bo->base.pages_lock);
ret = -ENOMEM;
- goto err_bo;
+ goto err_unlock;
}

pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
@@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
if (!pages) {
kvfree(bo->sgts);
bo->sgts = NULL;
- mutex_unlock(&bo->base.pages_lock);
ret = -ENOMEM;
- goto err_bo;
+ goto err_unlock;
}
bo->base.pages = pages;
bo->base.pages_use_count = 1;
@@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
pages = bo->base.pages;
if (pages[page_offset]) {
/* Pages are already mapped, bail out. */
- mutex_unlock(&bo->base.pages_lock);
goto out;
}
}
@@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
pages[i] = shmem_read_mapping_page(mapping, i);
if (IS_ERR(pages[i])) {
- mutex_unlock(&bo->base.pages_lock);
ret = PTR_ERR(pages[i]);
goto err_pages;
}
}

- mutex_unlock(&bo->base.pages_lock);
-
sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
@@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);

out:
+ dma_resv_unlock(obj->resv);
+
panfrost_gem_mapping_put(bomapping);

return 0;
@@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
sg_free_table(sgt);
err_pages:
drm_gem_shmem_put_pages(&bo->base);
+err_unlock:
+ dma_resv_unlock(obj->resv);
err_bo:
panfrost_gem_mapping_put(bomapping);
return ret;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5994fed5e327..20ddcd799df9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
*/
struct drm_gem_object base;

- /**
- * @pages_lock: Protects the page table and use count
- */
- struct mutex pages_lock;
-
/**
* @pages: Page table
*/
@@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
*/
struct sg_table *sgt;

- /**
- * @vmap_lock: Protects the vmap address and use count
- */
- struct mutex vmap_lock;
-
/**
* @vaddr: Kernel virtual address of the backing memory
*/
@@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);

-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
@@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
!shmem->base.dma_buf && !shmem->base.import_attach;
}

-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);

struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
--
2.38.1

2023-01-08 21:44:21

by Dmitry Osipenko

Subject: [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop

Consider this scenario:

1. APP1 continuously creates lots of small GEMs
2. APP2 triggers `drop_caches`
3. Shrinker starts to evict APP1 GEMs, while APP1 produces new purgeable
GEMs
4. msm_gem_shrinker_scan() returns non-zero number of freed pages
and causes shrinker to try shrink more
5. msm_gem_shrinker_scan() returns non-zero number of freed pages again,
goto 4
6. The APP2 is blocked in `drop_caches` until APP1 stops producing
purgeable GEMs

To prevent this blocking scenario, check the number of remaining pages
that the GPU shrinker couldn't release due to GEM locking contention or
shrinking rejection. If no pages are left to shrink, then there is no
need to free up more pages and the shrinker may break out of the loop.

This problem was found during shrinker/madvise IOCTL testing of the
virtio-gpu driver. The MSM driver is affected in the same way.

Reviewed-by: Rob Clark <[email protected]>
Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/gpu/drm/drm_gem.c | 9 +++++++--
drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
include/drm/drm_gem.h | 4 +++-
3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 59a0bb5ebd85..c6bca5ac6e0f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
*
* @lru: The LRU to scan
* @nr_to_scan: The number of pages to try to reclaim
+ * @remaining: The number of pages left to reclaim
* @shrink: Callback to try to shrink/reclaim the object.
*/
unsigned long
-drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+ unsigned int nr_to_scan,
+ unsigned long *remaining,
bool (*shrink)(struct drm_gem_object *obj))
{
struct drm_gem_lru still_in_lru;
@@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
* hit shrinker in response to trying to get backing pages
* for this obj (ie. while it's lock is already held)
*/
- if (!dma_resv_trylock(obj->resv))
+ if (!dma_resv_trylock(obj->resv)) {
+ *remaining += obj->size >> PAGE_SHIFT;
goto tail;
+ }

if (shrink(obj)) {
freed += obj->size >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 051bdbc093cf..b7c1242014ec 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
};
long nr = sc->nr_to_scan;
unsigned long freed = 0;
+ unsigned long remaining = 0;

for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
if (!stages[i].cond)
continue;
stages[i].freed =
- drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
+ drm_gem_lru_scan(stages[i].lru, nr, &remaining,
+ stages[i].shrink);
nr -= stages[i].freed;
freed += stages[i].freed;
}
@@ -132,7 +134,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
stages[3].freed);
}

- return (freed > 0) ? freed : SHRINK_STOP;
+ return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
}

#ifdef CONFIG_DEBUG_FS
@@ -182,10 +184,12 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
NULL,
};
unsigned idx, unmapped = 0;
+ unsigned long remaining = 0;

for (idx = 0; lrus[idx] && unmapped < vmap_shrink_limit; idx++) {
unmapped += drm_gem_lru_scan(lrus[idx],
vmap_shrink_limit - unmapped,
+ &remaining,
vmap_shrink);
}

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 772a4adf5287..f1f00fc2dba6 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -476,7 +476,9 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
void drm_gem_lru_remove(struct drm_gem_object *obj);
void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
+unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
+ unsigned int nr_to_scan,
+ unsigned long *remaining,
bool (*shrink)(struct drm_gem_object *obj));

#endif /* __DRM_GEM_H__ */
--
2.38.1

2023-01-25 22:55:46

by Dmitry Osipenko

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

Hello Thomas and Gerd,

On 1/9/23 00:04, Dmitry Osipenko wrote:
> This series:
>
> 1. Makes minor fixes for drm_gem_lru and Panfrost
> 2. Brings refactoring for older code
> 3. Adds common drm-shmem memory shrinker
> 4. Enables shrinker for VirtIO-GPU driver
> 5. Switches Panfrost driver to the common shrinker
>
> Changelog:
>
> v10:- Rebased on a recent linux-next.
>
> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>
> - Added Steven's ack/r-b/t-b for the Panfrost patches.
>
> - Fixed missing export of the new drm_gem_object_evict() function.
>
> - Added fixes tags to the first two patches that are making minor fixes,
> for consistency.

Do you have comments on this version? Otherwise ack will be appreciated.
Thanks in advance!

--
Best regards,
Dmitry


2023-01-26 12:16:40

by Gerd Hoffmann

Subject: Re: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers

On Mon, Jan 09, 2023 at 12:04:39AM +0300, Dmitry Osipenko wrote:
> f a multi-GPU system by using drm_WARN_*() and
> drm_dbg_kms() helpers that print out DRM device name corresponding
> to shmem GEM.

That commit message looks truncated ...

take care,
Gerd


2023-01-27 08:14:31

by Gerd Hoffmann

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> Hello Thomas and Gerd,
>
> On 1/9/23 00:04, Dmitry Osipenko wrote:
> > This series:
> >
> > 1. Makes minor fixes for drm_gem_lru and Panfrost
> > 2. Brings refactoring for older code
> > 3. Adds common drm-shmem memory shrinker
> > 4. Enables shrinker for VirtIO-GPU driver
> > 5. Switches Panfrost driver to the common shrinker
> >
> > Changelog:
> >
> > v10:- Rebased on a recent linux-next.
> >
> > - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> >
> > - Added Steven's ack/r-b/t-b for the Panfrost patches.
> >
> > - Fixed missing export of the new drm_gem_object_evict() function.
> >
> > - Added fixes tags to the first two patches that are making minor fixes,
> > for consistency.
>
> Do you have comments on this version? Otherwise ack will be appreciated.
> Thanks in advance!

Don't feel like signing off on the locking changes, I'm not that
familiar with the drm locking rules. So someone else looking at them
would be good. Otherwise the series and specifically the virtio changes
look good to me.

Acked-by: Gerd Hoffmann <[email protected]>

take care,
Gerd


2023-01-30 12:02:21

by Dmitry Osipenko

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 1/27/23 11:13, Gerd Hoffmann wrote:
> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>> Hello Thomas and Gerd,
>>
>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>> This series:
>>>
>>> 1. Makes minor fixes for drm_gem_lru and Panfrost
>>> 2. Brings refactoring for older code
>>> 3. Adds common drm-shmem memory shrinker
>>> 4. Enables shrinker for VirtIO-GPU driver
>>> 5. Switches Panfrost driver to the common shrinker
>>>
>>> Changelog:
>>>
>>> v10:- Rebased on a recent linux-next.
>>>
>>> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>
>>> - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>
>>> - Fixed missing export of the new drm_gem_object_evict() function.
>>>
>>> - Added fixes tags to the first two patches that are making minor fixes,
>>> for consistency.
>>
>> Do you have comments on this version? Otherwise ack will be appreciated.
>> Thanks in advance!
>
> Don't feel like signing off on the locking changes, I'm not that
> familiar with the drm locking rules. So someone else looking at them
> would be good. Otherwise the series and specifically the virtio changes
> look good to me.
>
> Acked-by: Gerd Hoffmann <[email protected]>

Thomas was looking at the DRM core changes. I expect he'll ack them.

Thank you for reviewing the virtio patches!

--
Best regards,
Dmitry


2023-02-16 12:15:46

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
> On 1/27/23 11:13, Gerd Hoffmann wrote:
> > On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> >> Hello Thomas and Gerd,
> >>
> >> On 1/9/23 00:04, Dmitry Osipenko wrote:
> >>> This series:
> >>>
> >>> 1. Makes minor fixes for drm_gem_lru and Panfrost
> >>> 2. Brings refactoring for older code
> >>> 3. Adds common drm-shmem memory shrinker
> >>> 4. Enables shrinker for VirtIO-GPU driver
> >>> 5. Switches Panfrost driver to the common shrinker
> >>>
> >>> Changelog:
> >>>
> >>> v10:- Rebased on a recent linux-next.
> >>>
> >>> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> >>>
> >>> - Added Steven's ack/r-b/t-b for the Panfrost patches.
> >>>
> >>> - Fixed missing export of the new drm_gem_object_evict() function.
> >>>
> >>> - Added fixes tags to the first two patches that are making minor fixes,
> >>> for consistency.
> >>
> >> Do you have comments on this version? Otherwise ack will be appreciated.
> >> Thanks in advance!
> >
> > Don't feel like signing off on the locking changes, I'm not that
> > familiar with the drm locking rules. So someone else looking at them
> > would be good. Otherwise the series and specifically the virtio changes
> > look good to me.
> >
> > Acked-by: Gerd Hoffmann <[email protected]>
>
> Thomas was looking at the DRM core changes. I expect he'll ack them.
>
> Thank you for reviewing the virtio patches!

I think best-case would be an ack from msm people that this looks good
(even better a conversion for msm to start using this).

Otherwise I think the locking looks reasonable, I think the tricky bits
have been moving the dma-buf rules, but if you want I can try to take
another in-depth look. But would need to be in 2 weeks since I'm going on
vacations, pls ping me on irc if I'm needed.

Otherwise would be great if we can land this soon, so that it can soak the
entire linux-next cycle to catch any driver specific issues.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 16/02/23 13:15, Daniel Vetter wrote:
> On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
>> On 1/27/23 11:13, Gerd Hoffmann wrote:
>>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>>>> Hello Thomas and Gerd,
>>>>
>>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>>>> This series:
>>>>>
>>>>> 1. Makes minor fixes for drm_gem_lru and Panfrost
>>>>> 2. Brings refactoring for older code
>>>>> 3. Adds common drm-shmem memory shrinker
>>>>> 4. Enables shrinker for VirtIO-GPU driver
>>>>> 5. Switches Panfrost driver to the common shrinker
>>>>>
>>>>> Changelog:
>>>>>
>>>>> v10:- Rebased on a recent linux-next.
>>>>>
>>>>> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>>>
>>>>> - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>>>
>>>>> - Fixed missing export of the new drm_gem_object_evict() function.
>>>>>
>>>>> - Added fixes tags to the first two patches that are making minor fixes,
>>>>> for consistency.
>>>>
>>>> Do you have comments on this version? Otherwise ack will be appreciated.
>>>> Thanks in advance!
>>>
>>> Don't feel like signing off on the locking changes, I'm not that
>>> familiar with the drm locking rules. So someone else looking at them
>>> would be good. Otherwise the series and specifically the virtio changes
>>> look good to me.
>>>
>>> Acked-by: Gerd Hoffmann <[email protected]>
>>
>> Thomas was looking at the DRM core changes. I expect he'll ack them.
>>
>> Thank you for reviewing the virtio patches!
>


> I think best-case would be an ack from msm people that this looks good
> (even better a conversion for msm to start using this).
>

Dmitry B, Konrad, can you please help with this one?

Thanks!

Regards,
Angelo

> Otherwise I think the locking looks reasonable, I think the tricky bits
> have been moving the dma-buf rules, but if you want I can try to take
> another in-depth look. But would need to be in 2 weeks since I'm going on
> vacations, pls ping me on irc if I'm needed.
>
> Otherwise would be great if we can land this soon, so that it can soak the
> entire linux-next cycle to catch any driver specific issues.
> -Daniel


2023-02-16 20:43:50

by Dmitry Osipenko

[permalink] [raw]
Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 2/16/23 15:15, Daniel Vetter wrote:
> On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
>> On 1/27/23 11:13, Gerd Hoffmann wrote:
>>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
>>>> Hello Thomas and Gerd,
>>>>
>>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
>>>>> This series:
>>>>>
>>>>> 1. Makes minor fixes for drm_gem_lru and Panfrost
>>>>> 2. Brings refactoring for older code
>>>>> 3. Adds common drm-shmem memory shrinker
>>>>> 4. Enables shrinker for VirtIO-GPU driver
>>>>> 5. Switches Panfrost driver to the common shrinker
>>>>>
>>>>> Changelog:
>>>>>
>>>>> v10:- Rebased on a recent linux-next.
>>>>>
>>>>> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>>>>>
>>>>> - Added Steven's ack/r-b/t-b for the Panfrost patches.
>>>>>
>>>>> - Fixed missing export of the new drm_gem_object_evict() function.
>>>>>
>>>>> - Added fixes tags to the first two patches that are making minor fixes,
>>>>> for consistency.
>>>>
>>>> Do you have comments on this version? Otherwise ack will be appreciated.
>>>> Thanks in advance!
>>>
>>> Don't feel like signing off on the locking changes, I'm not that
>>> familiar with the drm locking rules. So someone else looking at them
>>> would be good. Otherwise the series and specifically the virtio changes
>>> look good to me.
>>>
>>> Acked-by: Gerd Hoffmann <[email protected]>
>>
>> Thomas was looking at the DRM core changes. I expect he'll ack them.
>>
>> Thank you for reviewing the virtio patches!
>
> I think best-case would be an ack from msm people that this looks good
> (even better a conversion for msm to start using this).

The MSM driver pretty much isn't touched by this patchset, apart from the
minor common shrinker fix. Moving the whole of MSM to use drm_shmem would
be a big change to the driver.

The Panfrost and VirtIO-GPU drivers already got the acks. I also tested
the Lima driver, which uses drm-shmem helpers. Other DRM drivers should
be unaffected by this series.

> Otherwise I think the locking looks reasonable, I think the tricky bits
> have been moving the dma-buf rules, but if you want I can try to take
> another in-depth look. But would need to be in 2 weeks since I'm going on
> vacations, pls ping me on irc if I'm needed.

The locking conversion is mostly a straightforward replacement of the
drm-shmem mutexes with the resv lock. The dma-buf rules were tricky;
another tricky part was fixing a bogus lockdep report of fs_reclaim vs
the GEM shrinker at GEM destroy time, for which I borrowed the
drm_gem_shmem_resv_assert_held() solution from the MSM driver, where Rob
had a similar issue.

> Otherwise would be great if we can land this soon, so that it can soak the
> entire linux-next cycle to catch any driver specific issues.

That will be great. I was waiting for Thomas to ack the shmem patches
since he reviewed the previous versions, but if you or anyone else could
ack them, that would be good too. Thanks!

--
Best regards,
Dmitry


2023-02-16 22:08:15

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On Thu, Feb 16, 2023 at 11:43:38PM +0300, Dmitry Osipenko wrote:
> On 2/16/23 15:15, Daniel Vetter wrote:
> > On Mon, Jan 30, 2023 at 03:02:10PM +0300, Dmitry Osipenko wrote:
> >> On 1/27/23 11:13, Gerd Hoffmann wrote:
> >>> On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> >>>> Hello Thomas and Gerd,
> >>>>
> >>>> On 1/9/23 00:04, Dmitry Osipenko wrote:
> >>>>> This series:
> >>>>>
> >>>>> 1. Makes minor fixes for drm_gem_lru and Panfrost
> >>>>> 2. Brings refactoring for older code
> >>>>> 3. Adds common drm-shmem memory shrinker
> >>>>> 4. Enables shrinker for VirtIO-GPU driver
> >>>>> 5. Switches Panfrost driver to the common shrinker
> >>>>>
> >>>>> Changelog:
> >>>>>
> >>>>> v10:- Rebased on a recent linux-next.
> >>>>>
> >>>>> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> >>>>>
> >>>>> - Added Steven's ack/r-b/t-b for the Panfrost patches.
> >>>>>
> >>>>> - Fixed missing export of the new drm_gem_object_evict() function.
> >>>>>
> >>>>> - Added fixes tags to the first two patches that are making minor fixes,
> >>>>> for consistency.
> >>>>
> >>>> Do you have comments on this version? Otherwise ack will be appreciated.
> >>>> Thanks in advance!
> >>>
> >>> Don't feel like signing off on the locking changes, I'm not that
> >>> familiar with the drm locking rules. So someone else looking at them
> >>> would be good. Otherwise the series and specifically the virtio changes
> >>> look good to me.
> >>>
> >>> Acked-by: Gerd Hoffmann <[email protected]>
> >>
> >> Thomas was looking at the DRM core changes. I expect he'll ack them.
> >>
> >> Thank you for reviewing the virtio patches!
> >
> > I think best-case would be an ack from msm people that this looks good
> > (even better a conversion for msm to start using this).
>
> The MSM driver pretty much isn't touched by this patchset, apart from the
> minor common shrinker fix. Moving the whole of MSM to use drm_shmem would
> be a big change to the driver.
>
> The Panfrost and VirtIO-GPU drivers already got the acks. I also tested
> the Lima driver, which uses drm-shmem helpers. Other DRM drivers should
> be unaffected by this series.

Ah, that sounds good. I somehow thought that etnaviv also uses the helpers,
but there we only had problems with dma-buf. So that's all sorted.

> > Otherwise I think the locking looks reasonable, I think the tricky bits
> > have been moving the dma-buf rules, but if you want I can try to take
> > another in-depth look. But would need to be in 2 weeks since I'm going on
> > vacations, pls ping me on irc if I'm needed.
>
> The locking conversion is mostly a straightforward replacement of the
> drm-shmem mutexes with the resv lock. The dma-buf rules were tricky;
> another tricky part was fixing a bogus lockdep report of fs_reclaim vs
> the GEM shrinker at GEM destroy time, for which I borrowed the
> drm_gem_shmem_resv_assert_held() solution from the MSM driver, where Rob
> had a similar issue.

Ah, I missed that detail. If msm solved it the same way, then I think there
are very high chances it all ends up being compatible. Which is really what
matters, not so much whether every last driver has actually converted
over.

> > Otherwise would be great if we can land this soon, so that it can soak the
> > entire linux-next cycle to catch any driver specific issues.
>
> That will be great. I was waiting for Thomas to ack the shmem patches
> since he reviewed the previous versions, but if you or anyone else could
> ack them, that would be good too. Thanks!

I'm good for an ack, but maybe ping Thomas for a review on irc since I'm
out next week. Also maybe Thomas has some series you can help land for
cross review.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2023-02-17 12:02:14

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Consider this scenario:
>
> 1. APP1 continuously creates lots of small GEMs
> 2. APP2 triggers `drop_caches`
> 3. Shrinker starts to evict APP1 GEMs, while APP1 produces new purgeable
> GEMs
> 4. msm_gem_shrinker_scan() returns non-zero number of freed pages
> and causes shrinker to try shrink more
> 5. msm_gem_shrinker_scan() returns non-zero number of freed pages again,
> goto 4
> 6. The APP2 is blocked in `drop_caches` until APP1 stops producing
> purgeable GEMs
>
> To prevent this blocking scenario, check number of remaining pages
> that GPU shrinker couldn't release due to a GEM locking contention
> or shrinking rejection. If there are no remaining pages left to shrink,
> then there is no need to free up more pages and shrinker may break out
> from the loop.
>
> This problem was found during shrinker/madvise IOCTL testing of
> virtio-gpu driver. The MSM driver is affected in the same way.
>
> Reviewed-by: Rob Clark <[email protected]>
> Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/gpu/drm/drm_gem.c | 9 +++++++--
> drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
> include/drm/drm_gem.h | 4 +++-
> 3 files changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 59a0bb5ebd85..c6bca5ac6e0f 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
> *
> * @lru: The LRU to scan
> * @nr_to_scan: The number of pages to try to reclaim
> + * @remaining: The number of pages left to reclaim
> * @shrink: Callback to try to shrink/reclaim the object.
> */
> unsigned long
> -drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
> +drm_gem_lru_scan(struct drm_gem_lru *lru,
> + unsigned int nr_to_scan,
> + unsigned long *remaining,
> bool (*shrink)(struct drm_gem_object *obj))
> {
> struct drm_gem_lru still_in_lru;
> @@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
> * hit shrinker in response to trying to get backing pages
> * for this obj (ie. while it's lock is already held)
> */
> - if (!dma_resv_trylock(obj->resv))
> + if (!dma_resv_trylock(obj->resv)) {
> + *remaining += obj->size >> PAGE_SHIFT;
> goto tail;
> + }
>
> if (shrink(obj)) {
> freed += obj->size >> PAGE_SHIFT;
> diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> index 051bdbc093cf..b7c1242014ec 100644
> --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
> +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> @@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> };
> long nr = sc->nr_to_scan;
> unsigned long freed = 0;
> + unsigned long remaining = 0;
>
> for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
> if (!stages[i].cond)
> continue;
> stages[i].freed =
> - drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
> + drm_gem_lru_scan(stages[i].lru, nr, &remaining,

This function relies on remaining being pre-initialized. That's not
obvious and is error prone. At least pass in something like
&stages[i].remaining that is then initialized to zero internally by
drm_gem_lru_scan(). And, similar to freed, sum up the individual stages'
remaining here.

TBH I somehow don't like the overall design of how all these functions
interact with each other. But I also can't really point to the actual
problem. So it's best to take what you have here; maybe with the change
I proposed.

Reviewed-by: Thomas Zimmermann <[email protected]>

Best regards
Thomas

> + stages[i].shrink);
> nr -= stages[i].freed;
> freed += stages[i].freed;
> }
> @@ -132,7 +134,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> stages[3].freed);
> }
>
> - return (freed > 0) ? freed : SHRINK_STOP;
> + return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
> }
>
> #ifdef CONFIG_DEBUG_FS
> @@ -182,10 +184,12 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
> NULL,
> };
> unsigned idx, unmapped = 0;
> + unsigned long remaining = 0;
>
> for (idx = 0; lrus[idx] && unmapped < vmap_shrink_limit; idx++) {
> unmapped += drm_gem_lru_scan(lrus[idx],
> vmap_shrink_limit - unmapped,
> + &remaining,
> vmap_shrink);
> }
>
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 772a4adf5287..f1f00fc2dba6 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -476,7 +476,9 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
> void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
> void drm_gem_lru_remove(struct drm_gem_object *obj);
> void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
> -unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
> +unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
> + unsigned int nr_to_scan,
> + unsigned long *remaining,
> bool (*shrink)(struct drm_gem_object *obj));
>
> #endif /* __DRM_GEM_H__ */

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev


Attachments:
OpenPGP_signature (840.00 B)
OpenPGP digital signature

2023-02-17 12:25:09

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 04/11] drm/shmem: Put booleans in the end of struct drm_gem_shmem_object



On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Group all 1-bit boolean members of struct drm_gem_shmem_object in the end
> of the structure, allowing compiler to pack data better and making code to
> look more consistent.
>
> Suggested-by: Thomas Zimmermann <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>

Reviewed-by: Thomas Zimmermann <[email protected]>

> ---
> include/drm/drm_gem_shmem_helper.h | 30 +++++++++++++++---------------
> 1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index a2201b2488c5..5994fed5e327 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -60,20 +60,6 @@ struct drm_gem_shmem_object {
> */
> struct list_head madv_list;
>
> - /**
> - * @pages_mark_dirty_on_put:
> - *
> - * Mark pages as dirty when they are put.
> - */
> - unsigned int pages_mark_dirty_on_put : 1;
> -
> - /**
> - * @pages_mark_accessed_on_put:
> - *
> - * Mark pages as accessed when they are put.
> - */
> - unsigned int pages_mark_accessed_on_put : 1;
> -
> /**
> * @sgt: Scatter/gather table for imported PRIME buffers
> */
> @@ -97,10 +83,24 @@ struct drm_gem_shmem_object {
> */
> unsigned int vmap_use_count;
>
> + /**
> + * @pages_mark_dirty_on_put:
> + *
> + * Mark pages as dirty when they are put.
> + */
> + bool pages_mark_dirty_on_put : 1;
> +
> + /**
> + * @pages_mark_accessed_on_put:
> + *
> + * Mark pages as accessed when they are put.
> + */
> + bool pages_mark_accessed_on_put : 1;
> +
> /**
> * @map_wc: map object write-combined (instead of using shmem defaults).
> */
> - bool map_wc;
> + bool map_wc : 1;
> };
>
> #define to_drm_gem_shmem_obj(obj) \

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev



2023-02-17 12:28:53

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers



On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Ease debugging of a multi-GPU system by using drm_WARN_*() and
> drm_dbg_kms() helpers that print out DRM device name corresponding
> to shmem GEM.
>
> Suggested-by: Thomas Zimmermann <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>

Reviewed-by: Thomas Zimmermann <[email protected]>

> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 38 +++++++++++++++-----------
> 1 file changed, 22 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index f21f47737817..5006f7da7f2d 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - WARN_ON(shmem->vmap_use_count);
> + drm_WARN_ON(obj->dev, shmem->vmap_use_count);
>
> if (obj->import_attach) {
> drm_prime_gem_destroy(obj, shmem->sgt);
> @@ -156,7 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> drm_gem_shmem_put_pages(shmem);
> }
>
> - WARN_ON(shmem->pages_use_count);
> + drm_WARN_ON(obj->dev, shmem->pages_use_count);
>
> drm_gem_object_release(obj);
> mutex_destroy(&shmem->pages_lock);
> @@ -175,7 +175,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>
> pages = drm_gem_get_pages(obj);
> if (IS_ERR(pages)) {
> - DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages));
> + drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
> + PTR_ERR(pages));
> shmem->pages_use_count = 0;
> return PTR_ERR(pages);
> }
> @@ -207,9 +208,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> */
> int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> {
> + struct drm_gem_object *obj = &shmem->base;
> int ret;
>
> - WARN_ON(shmem->base.import_attach);
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> ret = mutex_lock_interruptible(&shmem->pages_lock);
> if (ret)
> @@ -225,7 +227,7 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - if (WARN_ON_ONCE(!shmem->pages_use_count))
> + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> return;
>
> if (--shmem->pages_use_count > 0)
> @@ -268,7 +270,9 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
> */
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
> {
> - WARN_ON(shmem->base.import_attach);
> + struct drm_gem_object *obj = &shmem->base;
> +
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> return drm_gem_shmem_get_pages(shmem);
> }
> @@ -283,7 +287,9 @@ EXPORT_SYMBOL(drm_gem_shmem_pin);
> */
> void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
> {
> - WARN_ON(shmem->base.import_attach);
> + struct drm_gem_object *obj = &shmem->base;
> +
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> drm_gem_shmem_put_pages(shmem);
> }
> @@ -303,7 +309,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> if (obj->import_attach) {
> ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> if (!ret) {
> - if (WARN_ON(map->is_iomem)) {
> + if (drm_WARN_ON(obj->dev, map->is_iomem)) {
> dma_buf_vunmap(obj->import_attach->dmabuf, map);
> ret = -EIO;
> goto err_put_pages;
> @@ -328,7 +334,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> }
>
> if (ret) {
> - DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret);
> + drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret);
> goto err_put_pages;
> }
>
> @@ -378,7 +384,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - if (WARN_ON_ONCE(!shmem->vmap_use_count))
> + if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
> return;
>
> if (--shmem->vmap_use_count > 0)
> @@ -463,7 +469,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> struct drm_gem_object *obj = &shmem->base;
> struct drm_device *dev = obj->dev;
>
> - WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
> + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>
> dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> sg_free_table(shmem->sgt);
> @@ -555,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> mutex_lock(&shmem->pages_lock);
>
> if (page_offset >= num_pages ||
> - WARN_ON_ONCE(!shmem->pages) ||
> + drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> shmem->madv < 0) {
> ret = VM_FAULT_SIGBUS;
> } else {
> @@ -574,7 +580,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> struct drm_gem_object *obj = vma->vm_private_data;
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> - WARN_ON(shmem->base.import_attach);
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> mutex_lock(&shmem->pages_lock);
>
> @@ -583,7 +589,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> * mmap'd, vm_open() just grabs an additional reference for the new
> * mm the vma is getting copied into (ie. on fork()).
> */
> - if (!WARN_ON_ONCE(!shmem->pages_use_count))
> + if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> shmem->pages_use_count++;
>
> mutex_unlock(&shmem->pages_lock);
> @@ -677,7 +683,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - WARN_ON(shmem->base.import_attach);
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
> }
> @@ -708,7 +714,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
> if (shmem->sgt)
> return shmem->sgt;
>
> - WARN_ON(obj->import_attach);
> + drm_WARN_ON(obj->dev, obj->import_attach);
>
> ret = drm_gem_shmem_get_pages(shmem);
> if (ret)

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev



2023-02-17 12:52:56

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Replace all drm-shmem locks with a GEM reservation lock. This makes locks
> consistent with dma-buf locking convention where importers are responsible
> for holding reservation lock for all operations performed over dma-bufs,
> preventing deadlock between dma-buf importers and exporters.
>
> Suggested-by: Daniel Vetter <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>

How much testing has this patch seen?

I'm asking because when I tried to fix the locking in this code, I had
to review every importer to make sure that it acquired the lock. Has this
problem been resolved?

Best regards
Thomas

> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 185 +++++++-----------
> drivers/gpu/drm/lima/lima_gem.c | 8 +-
> drivers/gpu/drm/panfrost/panfrost_drv.c | 7 +-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 6 +-
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 19 +-
> include/drm/drm_gem_shmem_helper.h | 14 +-
> 6 files changed, 94 insertions(+), 145 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 1392cbd3cc02..a1f2f2158c50 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
> if (ret)
> goto err_release;
>
> - mutex_init(&shmem->pages_lock);
> - mutex_init(&shmem->vmap_lock);
> INIT_LIST_HEAD(&shmem->madv_list);
>
> if (!private) {
> @@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> -
> if (obj->import_attach) {
> drm_prime_gem_destroy(obj, shmem->sgt);
> } else {
> + dma_resv_lock(shmem->base.resv, NULL);
> +
> + drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> +
> if (shmem->sgt) {
> dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
> DMA_BIDIRECTIONAL, 0);
> @@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> }
> if (shmem->pages)
> drm_gem_shmem_put_pages(shmem);
> - }
>
> - drm_WARN_ON(obj->dev, shmem->pages_use_count);
> + drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +
> + dma_resv_unlock(shmem->base.resv);
> + }
>
> drm_gem_object_release(obj);
> - mutex_destroy(&shmem->pages_lock);
> - mutex_destroy(&shmem->vmap_lock);
> kfree(shmem);
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
> @@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> }
>
> /*
> - * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
> + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> * @shmem: shmem GEM object
> *
> - * This function makes sure that backing pages exists for the shmem GEM object
> - * and increases the use count.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function decreases the use count and puts the backing pages when use drops to zero.
> */
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> - int ret;
>
> - drm_WARN_ON(obj->dev, obj->import_attach);
> -
> - ret = mutex_lock_interruptible(&shmem->pages_lock);
> - if (ret)
> - return ret;
> - ret = drm_gem_shmem_get_pages_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_get_pages);
> -
> -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> -{
> - struct drm_gem_object *obj = &shmem->base;
> + dma_resv_assert_held(shmem->base.resv);
>
> if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> return;
> @@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> shmem->pages_mark_accessed_on_put);
> shmem->pages = NULL;
> }
> -
> -/*
> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> - * @shmem: shmem GEM object
> - *
> - * This function decreases the use count and puts the backing pages when use drops to zero.
> - */
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> -{
> - mutex_lock(&shmem->pages_lock);
> - drm_gem_shmem_put_pages_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -}
> EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>
> /**
> @@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> return drm_gem_shmem_get_pages(shmem);
> @@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> drm_gem_shmem_put_pages(shmem);
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +/*
> + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object. It hides the differences
> + * between dma-buf imported and natively allocated objects.
> + *
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> int ret = 0;
> @@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> if (shmem->vmap_use_count++ > 0) {
> iosys_map_set_vaddr(map, shmem->vaddr);
> return 0;
> @@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>
> return ret;
> }
> +EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> /*
> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> - * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> - * store.
> - *
> - * This function makes sure that a contiguous kernel virtual address mapping
> - * exists for the buffer backing the shmem GEM object. It hides the differences
> - * between dma-buf imported and natively allocated objects.
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + * This function cleans up a kernel virtual address mapping acquired by
> + * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> + * zero.
> *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function hides the differences between dma-buf imported and natively
> + * allocated objects.
> */
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> -{
> - int ret;
> -
> - ret = mutex_lock_interruptible(&shmem->vmap_lock);
> - if (ret)
> - return ret;
> - ret = drm_gem_shmem_vmap_locked(shmem, map);
> - mutex_unlock(&shmem->vmap_lock);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_vmap);
> -
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> if (obj->import_attach) {
> dma_buf_vunmap(obj->import_attach->dmabuf, map);
> } else {
> + dma_resv_assert_held(shmem->base.resv);
> +
> if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
> return;
>
> @@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>
> shmem->vaddr = NULL;
> }
> -
> -/*
> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> - * @shmem: shmem GEM object
> - * @map: Kernel virtual address where the SHMEM GEM object was mapped
> - *
> - * This function cleans up a kernel virtual address mapping acquired by
> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> - * zero.
> - *
> - * This function hides the differences between dma-buf imported and natively
> - * allocated objects.
> - */
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> -{
> - mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem, map);
> - mutex_unlock(&shmem->vmap_lock);
> -}
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>
> static struct drm_gem_shmem_object *
> @@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> */
> int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
> {
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_assert_held(shmem->base.resv);
>
> if (shmem->madv >= 0)
> shmem->madv = madv;
>
> madv = shmem->madv;
>
> - mutex_unlock(&shmem->pages_lock);
> -
> return (madv >= 0);
> }
> EXPORT_SYMBOL(drm_gem_shmem_madvise);
>
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct drm_device *dev = obj->dev;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>
> dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> @@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> kfree(shmem->sgt);
> shmem->sgt = NULL;
>
> - drm_gem_shmem_put_pages_locked(shmem);
> + drm_gem_shmem_put_pages(shmem);
>
> shmem->madv = -1;
>
> @@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>
> invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> }
> -EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
> -
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> -{
> - if (!mutex_trylock(&shmem->pages_lock))
> - return false;
> - drm_gem_shmem_purge_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -
> - return true;
> -}
> EXPORT_SYMBOL(drm_gem_shmem_purge);
>
> /**
> @@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> /* We don't use vmf->pgoff since that has the fake offset */
> page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_lock(shmem->base.resv, NULL);
>
> if (page_offset >= num_pages ||
> drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> @@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> }
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> return ret;
> }
> @@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_lock(shmem->base.resv, NULL);
>
> /*
> * We should have already pinned the pages when the buffer was first
> @@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> shmem->pages_use_count++;
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> drm_gem_vm_open(vma);
> }
> @@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> struct drm_gem_object *obj = vma->vm_private_data;
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> + dma_resv_lock(shmem->base.resv, NULL);
> drm_gem_shmem_put_pages(shmem);
> + dma_resv_unlock(shmem->base.resv);
> +
> drm_gem_vm_close(vma);
> }
>
> @@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
> return dma_buf_mmap(obj->dma_buf, vma, 0);
> }
>
> + dma_resv_lock(shmem->base.resv, NULL);
> ret = drm_gem_shmem_get_pages(shmem);
> + dma_resv_unlock(shmem->base.resv);
> +
> if (ret)
> return ret;
>
> @@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> + dma_resv_lock(shmem->base.resv, NULL);
> +
> ret = drm_gem_shmem_get_pages(shmem);
> if (ret)
> - return ERR_PTR(ret);
> + goto err_unlock;
>
> sgt = drm_gem_shmem_get_sg_table(shmem);
> if (IS_ERR(sgt)) {
> @@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>
> shmem->sgt = sgt;
>
> + dma_resv_unlock(shmem->base.resv);
> +
> return sgt;
>
> err_free_sgt:
> @@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
> kfree(sgt);
> err_put_pages:
> drm_gem_shmem_put_pages(shmem);
> +err_unlock:
> + dma_resv_unlock(shmem->base.resv);
> return ERR_PTR(ret);
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 0f1ca0b0db49..5008f0c2428f 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>
> new_size = min(new_size, bo->base.base.size);
>
> - mutex_lock(&bo->base.pages_lock);
> + dma_resv_lock(bo->base.base.resv, NULL);
>
> if (bo->base.pages) {
> pages = bo->base.pages;
> @@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
> if (!pages) {
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
> return -ENOMEM;
> }
>
> @@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> struct page *page = shmem_read_mapping_page(mapping, i);
>
> if (IS_ERR(page)) {
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
> return PTR_ERR(page);
> }
> pages[i] = page;
> }
>
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
>
> ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
> new_size, GFP_KERNEL);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index abb0dadd8f63..9f3f2283b67a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>
> bo = to_panfrost_bo(gem_obj);
>
> + ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
> + if (ret)
> + goto out_put_object;
> +
> mutex_lock(&pfdev->shrinker_lock);
> mutex_lock(&bo->mappings.lock);
> if (args->madv == PANFROST_MADV_DONTNEED) {
> @@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
> out_unlock_mappings:
> mutex_unlock(&bo->mappings.lock);
> mutex_unlock(&pfdev->shrinker_lock);
> -
> + dma_resv_unlock(bo->base.base.resv);
> +out_put_object:
> drm_gem_object_put(gem_obj);
> return ret;
> }
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index bf0170782f25..6a71a2555f85 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
> if (!mutex_trylock(&bo->mappings.lock))
> return false;
>
> - if (!mutex_trylock(&shmem->pages_lock))
> + if (!dma_resv_trylock(shmem->base.resv))
> goto unlock_mappings;
>
> panfrost_gem_teardown_mappings_locked(bo);
> - drm_gem_shmem_purge_locked(&bo->base);
> + drm_gem_shmem_purge(&bo->base);
> ret = true;
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> unlock_mappings:
> mutex_unlock(&bo->mappings.lock);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 666a5e53fe19..0679df57f394 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> struct panfrost_gem_mapping *bomapping;
> struct panfrost_gem_object *bo;
> struct address_space *mapping;
> + struct drm_gem_object *obj;
> pgoff_t page_offset;
> struct sg_table *sgt;
> struct page **pages;
> @@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> page_offset = addr >> PAGE_SHIFT;
> page_offset -= bomapping->mmnode.start;
>
> - mutex_lock(&bo->base.pages_lock);
> + obj = &bo->base.base;
> +
> + dma_resv_lock(obj->resv, NULL);
>
> if (!bo->base.pages) {
> bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
> sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
> if (!bo->sgts) {
> - mutex_unlock(&bo->base.pages_lock);
> ret = -ENOMEM;
> - goto err_bo;
> + goto err_unlock;
> }
>
> pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> @@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> if (!pages) {
> kvfree(bo->sgts);
> bo->sgts = NULL;
> - mutex_unlock(&bo->base.pages_lock);
> ret = -ENOMEM;
> - goto err_bo;
> + goto err_unlock;
> }
> bo->base.pages = pages;
> bo->base.pages_use_count = 1;
> @@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> pages = bo->base.pages;
> if (pages[page_offset]) {
> /* Pages are already mapped, bail out. */
> - mutex_unlock(&bo->base.pages_lock);
> goto out;
> }
> }
> @@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> pages[i] = shmem_read_mapping_page(mapping, i);
> if (IS_ERR(pages[i])) {
> - mutex_unlock(&bo->base.pages_lock);
> ret = PTR_ERR(pages[i]);
> goto err_pages;
> }
> }
>
> - mutex_unlock(&bo->base.pages_lock);
> -
> sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
> ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
> NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
> @@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
>
> out:
> + dma_resv_unlock(obj->resv);
> +
> panfrost_gem_mapping_put(bomapping);
>
> return 0;
> @@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> sg_free_table(sgt);
> err_pages:
> drm_gem_shmem_put_pages(&bo->base);
> +err_unlock:
> + dma_resv_unlock(obj->resv);
> err_bo:
> panfrost_gem_mapping_put(bomapping);
> return ret;
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5994fed5e327..20ddcd799df9 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
> */
> struct drm_gem_object base;
>
> - /**
> - * @pages_lock: Protects the page table and use count
> - */
> - struct mutex pages_lock;
> -
> /**
> * @pages: Page table
> */
> @@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
> */
> struct sg_table *sgt;
>
> - /**
> - * @vmap_lock: Protects the vmap address and use count
> - */
> - struct mutex vmap_lock;
> -
> /**
> * @vaddr: Kernel virtual address of the backing memory
> */
> @@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
> struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
> void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
> @@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
> !shmem->base.dma_buf && !shmem->base.import_attach;
> }
>
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>
> struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
> struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Managing Director (Geschäftsführer): Ivo Totev



2023-02-17 13:19:54

by Thomas Zimmermann

Subject: Re: [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Introduce common drm-shmem shrinker for DRM drivers.
>
> To start using drm-shmem shrinker drivers should do the following:
>
> 1. Implement evict() callback of GEM object where driver should check
> whether object is purgeable or evictable using drm-shmem helpers and
> perform the shrinking action
>
> 2. Initialize drm-shmem internals using drmm_gem_shmem_init(drm_device),
> which will register drm-shmem shrinker
>
> 3. Implement madvise IOCTL that will use drm_gem_shmem_madvise()

I left comments below, but it's complicated code and a fairly large
change. Is there any chance of splitting this up in a meaningful way?

>
> Signed-off-by: Daniel Almeida <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 460 ++++++++++++++++--
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 9 +-
> include/drm/drm_device.h | 10 +-
> include/drm/drm_gem_shmem_helper.h | 61 ++-
> 4 files changed, 490 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index a1f2f2158c50..3ab5ec325ddb 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -20,6 +20,7 @@
> #include <drm/drm_device.h>
> #include <drm/drm_drv.h>
> #include <drm/drm_gem_shmem_helper.h>
> +#include <drm/drm_managed.h>
> #include <drm/drm_prime.h>
> #include <drm/drm_print.h>
>
> @@ -128,6 +129,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
>
> +static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
> +{
> + /*
> + * Destroying the object is a special case.. drm_gem_shmem_free()
> + * calls many things that WARN_ON if the obj lock is not held. But
> + * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
> + * order inversion between reservation_ww_class_mutex and fs_reclaim.
> + *
> + * This deadlock is not actually possible, because no one should
> + * be already holding the lock when msm_gem_free_object() is called.
> + * Unfortunately lockdep is not aware of this detail. So when the
> + * refcount drops to zero, we pretend it is already locked.
> + */
> + if (kref_read(&shmem->base.refcount))
> + dma_resv_assert_held(shmem->base.resv);
> +}
> +
> +static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
> +{
> + dma_resv_assert_held(shmem->base.resv);
> +
> + return (shmem->madv >= 0) && shmem->base.funcs->evict &&
> + shmem->pages_use_count && !shmem->pages_pin_count &&
> + !shmem->base.dma_buf && !shmem->base.import_attach &&
> + shmem->sgt && !shmem->evicted;
> +}
> +
> +static void
> +drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> + struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
> + struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> +
> + drm_gem_shmem_resv_assert_held(shmem);
> +
> + if (!gem_shrinker || obj->import_attach)
> + return;
> +
> + if (shmem->madv < 0)
> + drm_gem_lru_remove(&shmem->base);
> + else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
> + drm_gem_lru_move_tail(&gem_shrinker->lru_evictable, &shmem->base);
> + else if (shmem->evicted)
> + drm_gem_lru_move_tail(&gem_shrinker->lru_evicted, &shmem->base);
> + else if (!shmem->pages)
> + drm_gem_lru_remove(&shmem->base);
> + else
> + drm_gem_lru_move_tail(&gem_shrinker->lru_pinned, &shmem->base);
> +}
> +
> /**
> * drm_gem_shmem_free - Free resources associated with a shmem GEM object
> * @shmem: shmem GEM object to free
> @@ -142,7 +194,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> if (obj->import_attach) {
> drm_prime_gem_destroy(obj, shmem->sgt);
> } else {
> - dma_resv_lock(shmem->base.resv, NULL);
> + /* take out shmem GEM object from the memory shrinker */
> + drm_gem_shmem_madvise(shmem, -1);
>
> drm_WARN_ON(obj->dev, shmem->vmap_use_count);
>
> @@ -152,12 +205,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> sg_free_table(shmem->sgt);
> kfree(shmem->sgt);
> }
> - if (shmem->pages)
> + if (shmem->pages_use_count)
> drm_gem_shmem_put_pages(shmem);
>
> drm_WARN_ON(obj->dev, shmem->pages_use_count);
> -
> - dma_resv_unlock(shmem->base.resv);
> }
>
> drm_gem_object_release(obj);
> @@ -165,19 +216,31 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +static int
> +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
>
> - if (shmem->pages_use_count++ > 0)
> + dma_resv_assert_held(shmem->base.resv);
> +
> + if (shmem->madv < 0) {
> + drm_WARN_ON(obj->dev, shmem->pages);
> + return -ENOMEM;
> + }
> +
> + if (shmem->pages) {
> + drm_WARN_ON(obj->dev, !shmem->evicted);
> return 0;
> + }
> +
> + if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
> + return -EINVAL;
>
> pages = drm_gem_get_pages(obj);
> if (IS_ERR(pages)) {
> drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
> PTR_ERR(pages));
> - shmem->pages_use_count = 0;
> return PTR_ERR(pages);
> }
>
> @@ -196,6 +259,58 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> return 0;
> }
>
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +{
> + int err;
> +
> + dma_resv_assert_held(shmem->base.resv);
> +
> + if (shmem->madv < 0)
> + return -ENOMEM;
> +
> + if (shmem->pages_use_count++ > 0) {
> + err = drm_gem_shmem_swap_in(shmem);
> + if (err)
> + goto err_zero_use;
> +
> + return 0;
> + }
> +
> + err = drm_gem_shmem_acquire_pages(shmem);
> + if (err)
> + goto err_zero_use;
> +
> + drm_gem_shmem_update_pages_state(shmem);
> +
> + return 0;
> +
> +err_zero_use:
> + shmem->pages_use_count = 0;
> +
> + return err;
> +}
> +
> +static void
> +drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> +
> + if (!shmem->pages) {
> + drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
> + return;
> + }
> +
> +#ifdef CONFIG_X86
> + if (shmem->map_wc)
> + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
> + drm_gem_put_pages(obj, shmem->pages,
> + shmem->pages_mark_dirty_on_put,
> + shmem->pages_mark_accessed_on_put);
> + shmem->pages = NULL;
> +}
> +
> /*
> * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> * @shmem: shmem GEM object
> @@ -206,7 +321,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> return;
> @@ -214,15 +329,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> if (--shmem->pages_use_count > 0)
> return;
>
> -#ifdef CONFIG_X86
> - if (shmem->map_wc)
> - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> -#endif
> + drm_gem_shmem_release_pages(shmem);
>
> - drm_gem_put_pages(obj, shmem->pages,
> - shmem->pages_mark_dirty_on_put,
> - shmem->pages_mark_accessed_on_put);
> - shmem->pages = NULL;
> + drm_gem_shmem_update_pages_state(shmem);
> }
> EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>
> @@ -239,12 +348,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> + int ret;
>
> dma_resv_assert_held(shmem->base.resv);
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> - return drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_get_pages(shmem);
> + if (!ret)
> + shmem->pages_pin_count++;
> +
> + return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_pin);
>
> @@ -263,7 +377,12 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
> + return;
> +
> drm_gem_shmem_put_pages(shmem);
> +
> + shmem->pages_pin_count--;
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> @@ -306,7 +425,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> return 0;
> }
>
> - ret = drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_pin(shmem);
> if (ret)
> goto err_zero_use;
>
> @@ -329,7 +448,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>
> err_put_pages:
> if (!obj->import_attach)
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_unpin(shmem);
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> @@ -366,7 +485,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> return;
>
> vunmap(shmem->vaddr);
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_unpin(shmem);
> }
>
> shmem->vaddr = NULL;
> @@ -403,48 +522,84 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> */
> int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
> {
> - dma_resv_assert_held(shmem->base.resv);
> + drm_gem_shmem_resv_assert_held(shmem);
>
> if (shmem->madv >= 0)
> shmem->madv = madv;
>
> madv = shmem->madv;
>
> + drm_gem_shmem_update_pages_state(shmem);
> +
> return (madv >= 0);
> }
> EXPORT_SYMBOL(drm_gem_shmem_madvise);
>
> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> +/**
> + * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
> + * hardware access to the memory.

Do we have a better name than _swap_in()? I suggest
drm_gem_shmem_unevict(), which suggests that it's the inverse of _evict().

> + * @shmem: shmem GEM object
> + *
> + * This function moves shmem GEM back to memory if it was previously evicted
> + * by the memory shrinker. The GEM is ready to use on success.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> - struct drm_device *dev = obj->dev;
> + struct sg_table *sgt;
> + int err;
>
> dma_resv_assert_held(shmem->base.resv);
>
> - drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
> + if (shmem->evicted) {
> + err = drm_gem_shmem_acquire_pages(shmem);
> + if (err)
> + return err;
> +
> + sgt = drm_gem_shmem_get_sg_table(shmem);
> + if (IS_ERR(sgt))
> + return PTR_ERR(sgt);
> +
> + err = dma_map_sgtable(obj->dev->dev, sgt,
> + DMA_BIDIRECTIONAL, 0);
> + if (err) {
> + sg_free_table(sgt);
> + kfree(sgt);
> + return err;
> + }
>
> - dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> - sg_free_table(shmem->sgt);
> - kfree(shmem->sgt);
> - shmem->sgt = NULL;
> + shmem->sgt = sgt;
> + shmem->evicted = false;
>
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_update_pages_state(shmem);
> + }
>
> - shmem->madv = -1;
> + if (!shmem->pages)
> + return -ENOMEM;
>
> - drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
> - drm_gem_free_mmap_offset(obj);
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in);
>
> - /* Our goal here is to return as much of the memory as
> - * is possible back to the system as we are called from OOM.
> - * To do this we must instruct the shmfs to drop all of its
> - * backing pages, *now*.
> - */
> - shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
> +static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> + struct drm_device *dev = obj->dev;
>
> - invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> + if (shmem->evicted)
> + return;
> +
> + dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> + drm_gem_shmem_release_pages(shmem);
> + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
> +
> + sg_free_table(shmem->sgt);
> + kfree(shmem->sgt);
> + shmem->sgt = NULL;
> }
> -EXPORT_SYMBOL(drm_gem_shmem_purge);
>
> /**
> * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
> @@ -495,22 +650,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> vm_fault_t ret;
> struct page *page;
> pgoff_t page_offset;
> + bool pages_unpinned;
> + int err;
>
> /* We don't use vmf->pgoff since that has the fake offset */
> page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>
> dma_resv_lock(shmem->base.resv, NULL);
>
> - if (page_offset >= num_pages ||
> - drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> - shmem->madv < 0) {
> + /* Sanity-check that we have the pages pointer when it should present */
> + pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
> + drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
> +
> + if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
> ret = VM_FAULT_SIGBUS;
> } else {
> + err = drm_gem_shmem_swap_in(shmem);
> + if (err) {
> + ret = VM_FAULT_OOM;
> + goto unlock;
> + }
> +
> page = shmem->pages[page_offset];
>
> ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> }
>
> +unlock:
> dma_resv_unlock(shmem->base.resv);
>
> return ret;
> @@ -533,6 +699,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> shmem->pages_use_count++;
>
> + drm_gem_shmem_update_pages_state(shmem);
> dma_resv_unlock(shmem->base.resv);
>
> drm_gem_vm_open(vma);
> @@ -615,7 +782,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
> drm_printf_indent(p, indent, "vmap_use_count=%u\n",
> shmem->vmap_use_count);
>
> + drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
> drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
> + drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
> }
> EXPORT_SYMBOL(drm_gem_shmem_print_info);
>
> @@ -688,6 +857,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>
> shmem->sgt = sgt;
>
> + drm_gem_shmem_update_pages_state(shmem);
> +
> dma_resv_unlock(shmem->base.resv);
>
> return sgt;
> @@ -738,6 +909,209 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
>
> +static struct drm_gem_shmem_shrinker *
> +to_drm_shrinker(struct shrinker *shrinker)

to_drm_gem_shmem_shrinker()

> +{
> + return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> + unsigned long count = gem_shrinker->lru_evictable.count;
> +
> + if (count >= SHRINK_EMPTY)
> + return SHRINK_EMPTY - 1;
> +
> + return count ?: SHRINK_EMPTY;
> +}
> +
> +void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> +
> + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
> + drm_WARN_ON(obj->dev, shmem->evicted);
> +
> + drm_gem_shmem_unpin_pages(shmem);
> +
> + shmem->evicted = true;
> + drm_gem_shmem_update_pages_state(shmem);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
> +
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> +
> + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
> +
> + drm_gem_shmem_unpin_pages(shmem);
> + drm_gem_free_mmap_offset(obj);
> +
> + /* Our goal here is to return as much of the memory as
> + * is possible back to the system as we are called from OOM.
> + * To do this we must instruct the shmfs to drop all of its
> + * backing pages, *now*.
> + */
> + shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
> +
> + invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> +
> + shmem->madv = -1;
> + shmem->evicted = false;
> + drm_gem_shmem_update_pages_state(shmem);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
> +
> +static bool drm_gem_is_busy(struct drm_gem_object *obj)
> +{
> + return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
> +}

This is a generic GEM function. But do we need it?

> +
> +static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
> +{
> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> + if (!drm_gem_shmem_is_evictable(shmem) ||
> + get_nr_swap_pages() < obj->size >> PAGE_SHIFT ||
> + drm_gem_is_busy(obj))

Because here and below, we call drm_gem_is_busy(). Could we test
dma_resv_test_signaled() directly in drm_gem_shmem_evict() once and for all?

> + return false;
> +
> + return drm_gem_object_evict(obj);

I complained about the use of booleans before. Here it should be

	int ret = _evict();
	if (ret)
		return false;
	return true;

and the shrink callback for drm_gem_lru_scan() should be changed to use
errno codes as well. That's for a later patchset.

> +}
> +
> +static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
> +{
> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> + if (!drm_gem_shmem_is_purgeable(shmem) ||
> + drm_gem_is_busy(obj))
> + return false;
> +
> + return drm_gem_object_evict(obj);
> +}
> +
> +static unsigned long
> +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
> + unsigned long nr_to_scan = sc->nr_to_scan;
> + unsigned long remaining = 0;
> + unsigned long freed = 0;
> +
> + /* purge as many objects as we can */
> + freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
> + nr_to_scan, &remaining,
> + drm_gem_shmem_shrinker_purge);
> +
> + /* evict as many objects as we can */
> + if (freed < nr_to_scan)
> + freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
> + nr_to_scan - freed, &remaining,
> + drm_gem_shmem_shrinker_evict);
> +
> + return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
> +}
> +
> +static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
> + const char *shrinker_name)
> +{
> + struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> + int err;
> +
> + gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
> + gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
> + gem_shrinker->base.seeks = DEFAULT_SEEKS;
> +
> + mutex_init(&gem_shrinker->lock);
> + drm_gem_lru_init(&gem_shrinker->lru_evictable, &gem_shrinker->lock);
> + drm_gem_lru_init(&gem_shrinker->lru_evicted, &gem_shrinker->lock);
> + drm_gem_lru_init(&gem_shrinker->lru_pinned, &gem_shrinker->lock);
> +
> + err = register_shrinker(&gem_shrinker->base, shrinker_name);
> + if (err) {
> + mutex_destroy(&gem_shrinker->lock);
> + return err;
> + }
> +
> + return 0;
> +}
> +
> +static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
> + struct drm_gem_shmem *shmem_mm)
> +{
> + struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
> +
> + unregister_shrinker(&gem_shrinker->base);
> + drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evictable.list));
> + drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evicted.list));
> + drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_pinned.list));
> + mutex_destroy(&gem_shrinker->lock);
> +}
> +
> +static int drm_gem_shmem_init(struct drm_device *dev)
> +{
> + int err;
> +
> + if (WARN_ON(dev->shmem_mm))

drm_WARN_ON()

> + return -EBUSY;
> +
> + dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
> + if (!dev->shmem_mm)
> + return -ENOMEM;
> +
> + err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
> + if (err)
> + goto free_gem_shmem;
> +
> + return 0;
> +
> +free_gem_shmem:
> + kfree(dev->shmem_mm);
> + dev->shmem_mm = NULL;
> +
> + return err;
> +}
> +
> +static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
> +{
> + struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
> +
> + drm_gem_shmem_shrinker_release(dev, shmem_mm);
> + dev->shmem_mm = NULL;
> + kfree(shmem_mm);
> +}
> +
> +/**
> + * drmm_gem_shmem_init() - Initialize drm-shmem internals
> + * @dev: DRM device
> + *
> + * Cleanup is automatically managed as part of DRM device releasing.
> + * Calling this function multiple times will result in an error.
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drmm_gem_shmem_init(struct drm_device *dev)
> +{
> + int err;
> +
> + err = drm_gem_shmem_init(dev);
> + if (err)
> + return err;
> +
> + err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
> + if (err)
> + return err;
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
> +
> MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
> MODULE_IMPORT_NS(DMA_BUF);
> MODULE_LICENSE("GPL v2");
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index 6a71a2555f85..865a989d67c8 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -15,6 +15,13 @@
> #include "panfrost_gem.h"
> #include "panfrost_mmu.h"
>
> +static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
> +{
> + return (shmem->madv > 0) &&
> + !shmem->pages_pin_count && shmem->sgt &&
> + !shmem->base.dma_buf && !shmem->base.import_attach;
> +}
> +
> static unsigned long
> panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
> {
> @@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
> return 0;
>
> list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
> - if (drm_gem_shmem_is_purgeable(shmem))
> + if (panfrost_gem_shmem_is_purgeable(shmem))
> count += shmem->base.size >> PAGE_SHIFT;
> }
>
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index a68c6a312b46..8acd455fc156 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -16,6 +16,7 @@ struct drm_vblank_crtc;
> struct drm_vma_offset_manager;
> struct drm_vram_mm;
> struct drm_fb_helper;
> +struct drm_gem_shmem_shrinker;
>
> struct inode;
>
> @@ -277,8 +278,13 @@ struct drm_device {
> /** @vma_offset_manager: GEM information */
> struct drm_vma_offset_manager *vma_offset_manager;
>
> - /** @vram_mm: VRAM MM memory manager */
> - struct drm_vram_mm *vram_mm;
> + union {
> + /** @vram_mm: VRAM MM memory manager */
> + struct drm_vram_mm *vram_mm;
> +
> + /** @shmem_mm: SHMEM GEM memory manager */
> + struct drm_gem_shmem *shmem_mm;
> + };
>
> /**
> * @switch_power_state:
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 20ddcd799df9..c264caf6c83b 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -6,6 +6,7 @@
> #include <linux/fs.h>
> #include <linux/mm.h>
> #include <linux/mutex.h>
> +#include <linux/shrinker.h>
>
> #include <drm/drm_file.h>
> #include <drm/drm_gem.h>
> @@ -15,6 +16,7 @@
> struct dma_buf_attachment;
> struct drm_mode_create_dumb;
> struct drm_printer;
> +struct drm_device;

Alphabetically, please.

> struct sg_table;
>
> /**
> @@ -39,12 +41,21 @@ struct drm_gem_shmem_object {
> */
> unsigned int pages_use_count;
>
> + /**
> + * @pages_pin_count:
> + *
> + * Reference count on the pinned pages table.
> + * The pages are allowed to be evicted by the memory shrinker
> + * only when the count is zero.
> + */
> + unsigned int pages_pin_count;
> +
> /**
> * @madv: State for madvise
> *
> * 0 is active/inuse.
> + * 1 is not-needed/can-be-purged
> + * A negative value means the object is purged.
> - * Positive values are driver specific and not used by the helpers.
> */
> int madv;
>
> @@ -91,6 +102,12 @@ struct drm_gem_shmem_object {
> * @map_wc: map object write-combined (instead of using shmem defaults).
> */
> bool map_wc : 1;
> +
> + /**
> + * @evicted: True if shmem pages are evicted by the memory shrinker.
> + * Used internally by memory shrinker.
> + */
> + bool evicted : 1;
> };
>
> #define to_drm_gem_shmem_obj(obj) \
> @@ -112,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
>
> static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
> {
> - return (shmem->madv > 0) &&
> - !shmem->vmap_use_count && shmem->sgt &&
> - !shmem->base.dma_buf && !shmem->base.import_attach;
> + dma_resv_assert_held(shmem->base.resv);
> +
> + return (shmem->madv > 0) && shmem->base.funcs->evict &&
> + shmem->pages_use_count && !shmem->pages_pin_count &&
> + !shmem->base.dma_buf && !shmem->base.import_attach &&
> + (shmem->sgt || shmem->evicted);
> }
>
> +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem);
> +
> +void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>
> struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
> @@ -260,6 +283,36 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
> return drm_gem_shmem_mmap(shmem, vma);
> }
>
> +/**
> + * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
> + */
> +struct drm_gem_shmem_shrinker {
> + /** @base: Shrinker for purging shmem GEM objects */
> + struct shrinker base;
> +
> + /** @lock: Protects @lru_* */
> + struct mutex lock;
> +
> + /** @lru_pinned: List of pinned shmem GEM objects */
> + struct drm_gem_lru lru_pinned;
> +
> + /** @lru_evictable: List of shmem GEM objects to be evicted */
> + struct drm_gem_lru lru_evictable;
> +
> + /** @lru_evicted: List of evicted shmem GEM objects */
> + struct drm_gem_lru lru_evicted;
> +};
> +
> +/**
> + * struct drm_gem_shmem - GEM shmem memory manager
> + */
> +struct drm_gem_shmem {
> + /** @shrinker: GEM shmem shrinker */
> + struct drm_gem_shmem_shrinker shrinker;
> +};
> +
> +int drmm_gem_shmem_init(struct drm_device *dev);
> +
> /*
> * Driver ops
> */

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev



2023-02-17 13:28:15

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

Hi,

I looked through the series. Most of the patches should have an r-b or
a-b at this point. I can't say much about patch 2 and had questions
about others.

Maybe you can already land patches 2 and 4 to 6? They look independent
from the shrinker changes. You could also attempt to land the locking
changes in patch 7. They need to get testing. I'll send you an a-b for
the patch.

Best regards
Thomas

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> This series:
>
> 1. Makes minor fixes for drm_gem_lru and Panfrost
> 2. Brings refactoring for older code
> 3. Adds common drm-shmem memory shrinker
> 4. Enables shrinker for VirtIO-GPU driver
> 5. Switches Panfrost driver to the common shrinker
>
> Changelog:
>
> v10: - Rebased on a recent linux-next.
>
> - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
>
> - Added Steven's ack/r-b/t-b for the Panfrost patches.
>
> - Fixed missing export of the new drm_gem_object_evict() function.
>
> - Added fixes tags to the first two patches that are making minor fixes,
> for consistency.
>
> v9: - Replaced struct drm_gem_shmem_shrinker with drm_gem_shmem and
> moved it to drm_device, like was suggested by Thomas Zimmermann.
>
> - Replaced drm_gem_shmem_shrinker_register() with drmm_gem_shmem_init(),
> like was suggested by Thomas Zimmermann.
>
> - Moved evict() callback to drm_gem_object_funcs and added common
> drm_gem_object_evict() helper, like was suggested by Thomas Zimmermann.
>
> - The shmem object now is evictable by default, like was suggested by
> Thomas Zimmermann. Dropped the set_evictable/purgeable() functions
> as well, drivers will decide whether BO is evictable within their
> madvise IOCTL.
>
> - Added patches that convert drm-shmem code to use drm_WARN_ON() and
> drm_dbg_kms(), like was requested by Thomas Zimmermann.
>
> - Turned drm_gem_shmem_object booleans into 1-bit bit fields, like was
> suggested by Thomas Zimmermann.
>
> - Switched to use drm_dev->unique for the shmem shrinker name. Drivers
> don't need to specify the name explicitly anymore.
>
> - Re-added dma_resv_test_signaled() that was missing in v8 and also
> fixed its argument to DMA_RESV_USAGE_READ. See comment to
> dma_resv_usage_rw().
>
> - Added new fix for Panfrost driver that silences lockdep warning
> caused by shrinker. Both Panfrost old and new shmem shrinkers are
> affected.
>
> v8: - Rebased on top of recent linux-next that now has dma-buf locking
> convention patches merged, which was blocking shmem shrinker before.
>
> - Shmem shrinker now uses new drm_gem_lru helper.
>
> - Dropped Steven Price t-b from the Panfrost patch because code
> changed significantly since v6 and should be re-tested.
>
> v7: - dma-buf locking convention
>
> v6: https://lore.kernel.org/dri-devel/[email protected]/
>
> Related patches:
>
> Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
> igt: https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
> https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise
>
> The Mesa and IGT patches will be sent out once the kernel part will land.
>
> Dmitry Osipenko (11):
> drm/msm/gem: Prevent blocking within shrinker loop
> drm/panfrost: Don't sync rpm suspension after mmu flushing
> drm/gem: Add evict() callback to drm_gem_object_funcs
> drm/shmem: Put booleans in the end of struct drm_gem_shmem_object
> drm/shmem: Switch to use drm_* debug helpers
> drm/shmem-helper: Don't use vmap_use_count for dma-bufs
> drm/shmem-helper: Switch to reservation lock
> drm/shmem-helper: Add memory shrinker
> drm/gem: Add drm_gem_pin_unlocked()
> drm/virtio: Support memory shrinking
> drm/panfrost: Switch to generic memory shrinker
>
> drivers/gpu/drm/drm_gem.c | 54 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 646 +++++++++++++-----
> drivers/gpu/drm/lima/lima_gem.c | 8 +-
> drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 +-
> drivers/gpu/drm/panfrost/Makefile | 1 -
> drivers/gpu/drm/panfrost/panfrost_device.h | 4 -
> drivers/gpu/drm/panfrost/panfrost_drv.c | 34 +-
> drivers/gpu/drm/panfrost/panfrost_gem.c | 30 +-
> drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 122 ----
> drivers/gpu/drm/panfrost/panfrost_job.c | 18 +-
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 21 +-
> drivers/gpu/drm/virtio/virtgpu_drv.h | 18 +-
> drivers/gpu/drm/virtio/virtgpu_gem.c | 52 ++
> drivers/gpu/drm/virtio/virtgpu_ioctl.c | 37 +
> drivers/gpu/drm/virtio/virtgpu_kms.c | 8 +
> drivers/gpu/drm/virtio/virtgpu_object.c | 132 +++-
> drivers/gpu/drm/virtio/virtgpu_plane.c | 22 +-
> drivers/gpu/drm/virtio/virtgpu_vq.c | 40 ++
> include/drm/drm_device.h | 10 +-
> include/drm/drm_gem.h | 19 +-
> include/drm/drm_gem_shmem_helper.h | 112 +--
> include/uapi/drm/virtgpu_drm.h | 14 +
> 23 files changed, 1010 insertions(+), 409 deletions(-)
> delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
>

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev



2023-02-17 13:30:39

by Thomas Zimmermann

[permalink] [raw]
Subject: Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock

Hi

On 08.01.23 at 22:04, Dmitry Osipenko wrote:
> Replace all drm-shmem locks with a GEM reservation lock. This makes locks
> consistent with dma-buf locking convention where importers are responsible
> for holding reservation lock for all operations performed over dma-bufs,
> preventing deadlock between dma-buf importers and exporters.
>
> Suggested-by: Daniel Vetter <[email protected]>
> Signed-off-by: Dmitry Osipenko <[email protected]>

I don't dare to r-b this, but take my

Acked-by: Thomas Zimmermann <[email protected]>

if you want to land this patch.

Best regards
Thomas

> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 185 +++++++-----------
> drivers/gpu/drm/lima/lima_gem.c | 8 +-
> drivers/gpu/drm/panfrost/panfrost_drv.c | 7 +-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 6 +-
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 19 +-
> include/drm/drm_gem_shmem_helper.h | 14 +-
> 6 files changed, 94 insertions(+), 145 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 1392cbd3cc02..a1f2f2158c50 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
> if (ret)
> goto err_release;
>
> - mutex_init(&shmem->pages_lock);
> - mutex_init(&shmem->vmap_lock);
> INIT_LIST_HEAD(&shmem->madv_list);
>
> if (!private) {
> @@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> - drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> -
> if (obj->import_attach) {
> drm_prime_gem_destroy(obj, shmem->sgt);
> } else {
> + dma_resv_lock(shmem->base.resv, NULL);
> +
> + drm_WARN_ON(obj->dev, shmem->vmap_use_count);
> +
> if (shmem->sgt) {
> dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
> DMA_BIDIRECTIONAL, 0);
> @@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> }
> if (shmem->pages)
> drm_gem_shmem_put_pages(shmem);
> - }
>
> - drm_WARN_ON(obj->dev, shmem->pages_use_count);
> + drm_WARN_ON(obj->dev, shmem->pages_use_count);
> +
> + dma_resv_unlock(shmem->base.resv);
> + }
>
> drm_gem_object_release(obj);
> - mutex_destroy(&shmem->pages_lock);
> - mutex_destroy(&shmem->vmap_lock);
> kfree(shmem);
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
> @@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> }
>
> /*
> - * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
> + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> * @shmem: shmem GEM object
> *
> - * This function makes sure that backing pages exists for the shmem GEM object
> - * and increases the use count.
> - *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function decreases the use count and puts the backing pages when use drops to zero.
> */
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> - int ret;
>
> - drm_WARN_ON(obj->dev, obj->import_attach);
> -
> - ret = mutex_lock_interruptible(&shmem->pages_lock);
> - if (ret)
> - return ret;
> - ret = drm_gem_shmem_get_pages_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_get_pages);
> -
> -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> -{
> - struct drm_gem_object *obj = &shmem->base;
> + dma_resv_assert_held(shmem->base.resv);
>
> if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> return;
> @@ -243,19 +224,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> shmem->pages_mark_accessed_on_put);
> shmem->pages = NULL;
> }
> -
> -/*
> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> - * @shmem: shmem GEM object
> - *
> - * This function decreases the use count and puts the backing pages when use drops to zero.
> - */
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> -{
> - mutex_lock(&shmem->pages_lock);
> - drm_gem_shmem_put_pages_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -}
> EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>
> /**
> @@ -272,6 +240,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> return drm_gem_shmem_get_pages(shmem);
> @@ -289,14 +259,31 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> drm_gem_shmem_put_pages(shmem);
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +/*
> + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object. It hides the differences
> + * between dma-buf imported and natively allocated objects.
> + *
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> int ret = 0;
> @@ -312,6 +299,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> if (shmem->vmap_use_count++ > 0) {
> iosys_map_set_vaddr(map, shmem->vaddr);
> return 0;
> @@ -346,45 +335,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>
> return ret;
> }
> +EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> /*
> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> - * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> - * store.
> - *
> - * This function makes sure that a contiguous kernel virtual address mapping
> - * exists for the buffer backing the shmem GEM object. It hides the differences
> - * between dma-buf imported and natively allocated objects.
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + * This function cleans up a kernel virtual address mapping acquired by
> + * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> + * zero.
> *
> - * Returns:
> - * 0 on success or a negative error code on failure.
> + * This function hides the differences between dma-buf imported and natively
> + * allocated objects.
> */
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> -{
> - int ret;
> -
> - ret = mutex_lock_interruptible(&shmem->vmap_lock);
> - if (ret)
> - return ret;
> - ret = drm_gem_shmem_vmap_locked(shmem, map);
> - mutex_unlock(&shmem->vmap_lock);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_shmem_vmap);
> -
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> if (obj->import_attach) {
> dma_buf_vunmap(obj->import_attach->dmabuf, map);
> } else {
> + dma_resv_assert_held(shmem->base.resv);
> +
> if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
> return;
>
> @@ -397,26 +371,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>
> shmem->vaddr = NULL;
> }
> -
> -/*
> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> - * @shmem: shmem GEM object
> - * @map: Kernel virtual address where the SHMEM GEM object was mapped
> - *
> - * This function cleans up a kernel virtual address mapping acquired by
> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> - * zero.
> - *
> - * This function hides the differences between dma-buf imported and natively
> - * allocated objects.
> - */
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> -{
> - mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem, map);
> - mutex_unlock(&shmem->vmap_lock);
> -}
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>
> static struct drm_gem_shmem_object *
> @@ -449,24 +403,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> */
> int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
> {
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_assert_held(shmem->base.resv);
>
> if (shmem->madv >= 0)
> shmem->madv = madv;
>
> madv = shmem->madv;
>
> - mutex_unlock(&shmem->pages_lock);
> -
> return (madv >= 0);
> }
> EXPORT_SYMBOL(drm_gem_shmem_madvise);
>
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct drm_device *dev = obj->dev;
>
> + dma_resv_assert_held(shmem->base.resv);
> +
> drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
>
> dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
> @@ -474,7 +428,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> kfree(shmem->sgt);
> shmem->sgt = NULL;
>
> - drm_gem_shmem_put_pages_locked(shmem);
> + drm_gem_shmem_put_pages(shmem);
>
> shmem->madv = -1;
>
> @@ -490,17 +444,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>
> invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> }
> -EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
> -
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> -{
> - if (!mutex_trylock(&shmem->pages_lock))
> - return false;
> - drm_gem_shmem_purge_locked(shmem);
> - mutex_unlock(&shmem->pages_lock);
> -
> - return true;
> -}
> EXPORT_SYMBOL(drm_gem_shmem_purge);
>
> /**
> @@ -556,7 +499,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> /* We don't use vmf->pgoff since that has the fake offset */
> page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_lock(shmem->base.resv, NULL);
>
> if (page_offset >= num_pages ||
> drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
> @@ -568,7 +511,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
> }
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> return ret;
> }
> @@ -580,7 +523,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> - mutex_lock(&shmem->pages_lock);
> + dma_resv_lock(shmem->base.resv, NULL);
>
> /*
> * We should have already pinned the pages when the buffer was first
> @@ -590,7 +533,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
> shmem->pages_use_count++;
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> drm_gem_vm_open(vma);
> }
> @@ -600,7 +543,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> struct drm_gem_object *obj = vma->vm_private_data;
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> + dma_resv_lock(shmem->base.resv, NULL);
> drm_gem_shmem_put_pages(shmem);
> + dma_resv_unlock(shmem->base.resv);
> +
> drm_gem_vm_close(vma);
> }
>
> @@ -635,7 +581,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
> return dma_buf_mmap(obj->dma_buf, vma, 0);
> }
>
> + dma_resv_lock(shmem->base.resv, NULL);
> ret = drm_gem_shmem_get_pages(shmem);
> + dma_resv_unlock(shmem->base.resv);
> +
> if (ret)
> return ret;
>
> @@ -721,9 +670,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>
> drm_WARN_ON(obj->dev, obj->import_attach);
>
> + dma_resv_lock(shmem->base.resv, NULL);
> +
> ret = drm_gem_shmem_get_pages(shmem);
> if (ret)
> - return ERR_PTR(ret);
> + goto err_unlock;
>
> sgt = drm_gem_shmem_get_sg_table(shmem);
> if (IS_ERR(sgt)) {
> @@ -737,6 +688,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>
> shmem->sgt = sgt;
>
> + dma_resv_unlock(shmem->base.resv);
> +
> return sgt;
>
> err_free_sgt:
> @@ -744,6 +697,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
> kfree(sgt);
> err_put_pages:
> drm_gem_shmem_put_pages(shmem);
> +err_unlock:
> + dma_resv_unlock(shmem->base.resv);
> return ERR_PTR(ret);
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 0f1ca0b0db49..5008f0c2428f 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
>
> new_size = min(new_size, bo->base.base.size);
>
> - mutex_lock(&bo->base.pages_lock);
> + dma_resv_lock(bo->base.base.resv, NULL);
>
> if (bo->base.pages) {
> pages = bo->base.pages;
> @@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
> if (!pages) {
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
> return -ENOMEM;
> }
>
> @@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> struct page *page = shmem_read_mapping_page(mapping, i);
>
> if (IS_ERR(page)) {
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
> return PTR_ERR(page);
> }
> pages[i] = page;
> }
>
> - mutex_unlock(&bo->base.pages_lock);
> + dma_resv_unlock(bo->base.base.resv);
>
> ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
> new_size, GFP_KERNEL);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index abb0dadd8f63..9f3f2283b67a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -414,6 +414,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>
> bo = to_panfrost_bo(gem_obj);
>
> + ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
> + if (ret)
> + goto out_put_object;
> +
> mutex_lock(&pfdev->shrinker_lock);
> mutex_lock(&bo->mappings.lock);
> if (args->madv == PANFROST_MADV_DONTNEED) {
> @@ -451,7 +455,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
> out_unlock_mappings:
> mutex_unlock(&bo->mappings.lock);
> mutex_unlock(&pfdev->shrinker_lock);
> -
> + dma_resv_unlock(bo->base.base.resv);
> +out_put_object:
> drm_gem_object_put(gem_obj);
> return ret;
> }
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index bf0170782f25..6a71a2555f85 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
> if (!mutex_trylock(&bo->mappings.lock))
> return false;
>
> - if (!mutex_trylock(&shmem->pages_lock))
> + if (!dma_resv_trylock(shmem->base.resv))
> goto unlock_mappings;
>
> panfrost_gem_teardown_mappings_locked(bo);
> - drm_gem_shmem_purge_locked(&bo->base);
> + drm_gem_shmem_purge(&bo->base);
> ret = true;
>
> - mutex_unlock(&shmem->pages_lock);
> + dma_resv_unlock(shmem->base.resv);
>
> unlock_mappings:
> mutex_unlock(&bo->mappings.lock);
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index 666a5e53fe19..0679df57f394 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> struct panfrost_gem_mapping *bomapping;
> struct panfrost_gem_object *bo;
> struct address_space *mapping;
> + struct drm_gem_object *obj;
> pgoff_t page_offset;
> struct sg_table *sgt;
> struct page **pages;
> @@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> page_offset = addr >> PAGE_SHIFT;
> page_offset -= bomapping->mmnode.start;
>
> - mutex_lock(&bo->base.pages_lock);
> + obj = &bo->base.base;
> +
> + dma_resv_lock(obj->resv, NULL);
>
> if (!bo->base.pages) {
> bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
> sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
> if (!bo->sgts) {
> - mutex_unlock(&bo->base.pages_lock);
> ret = -ENOMEM;
> - goto err_bo;
> + goto err_unlock;
> }
>
> pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
> @@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> if (!pages) {
> kvfree(bo->sgts);
> bo->sgts = NULL;
> - mutex_unlock(&bo->base.pages_lock);
> ret = -ENOMEM;
> - goto err_bo;
> + goto err_unlock;
> }
> bo->base.pages = pages;
> bo->base.pages_use_count = 1;
> @@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> pages = bo->base.pages;
> if (pages[page_offset]) {
> /* Pages are already mapped, bail out. */
> - mutex_unlock(&bo->base.pages_lock);
> goto out;
> }
> }
> @@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> pages[i] = shmem_read_mapping_page(mapping, i);
> if (IS_ERR(pages[i])) {
> - mutex_unlock(&bo->base.pages_lock);
> ret = PTR_ERR(pages[i]);
> goto err_pages;
> }
> }
>
> - mutex_unlock(&bo->base.pages_lock);
> -
> sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
> ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
> NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
> @@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
>
> out:
> + dma_resv_unlock(obj->resv);
> +
> panfrost_gem_mapping_put(bomapping);
>
> return 0;
> @@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
> sg_free_table(sgt);
> err_pages:
> drm_gem_shmem_put_pages(&bo->base);
> +err_unlock:
> + dma_resv_unlock(obj->resv);
> err_bo:
> panfrost_gem_mapping_put(bomapping);
> return ret;
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5994fed5e327..20ddcd799df9 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
> */
> struct drm_gem_object base;
>
> - /**
> - * @pages_lock: Protects the page table and use count
> - */
> - struct mutex pages_lock;
> -
> /**
> * @pages: Page table
> */
> @@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
> */
> struct sg_table *sgt;
>
> - /**
> - * @vmap_lock: Protects the vmap address and use count
> - */
> - struct mutex vmap_lock;
> -
> /**
> * @vaddr: Kernel virtual address of the backing memory
> */
> @@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
> struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
> void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>
> -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
> @@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
> !shmem->base.dma_buf && !shmem->base.import_attach;
> }
>
> -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
> -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>
> struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
> struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev



2023-02-17 13:33:57

by Dmitry Osipenko

Subject: Re: [PATCH v10 07/11] drm/shmem-helper: Switch to reservation lock

On 2/17/23 15:52, Thomas Zimmermann wrote:
> Hi
>
> Am 08.01.23 um 22:04 schrieb Dmitry Osipenko:
>> Replace all drm-shmem locks with a GEM reservation lock. This makes locks
>> consistent with dma-buf locking convention where importers are
>> responsible
>> for holding reservation lock for all operations performed over dma-bufs,
>> preventing deadlock between dma-buf importers and exporters.
>>
>> Suggested-by: Daniel Vetter <[email protected]>
>> Signed-off-by: Dmitry Osipenko <[email protected]>
>
> How much testing has this patch seen?
>
> I'm asking because when I tried to fix the locking in this code, I had
> to review every importer to make sure that it acquired the lock. Has this
> problem been resolved?

The dma-buf locking rules were merged in the v6.2 kernel.

I tested all the available importers that use drm-shmem. There were
deadlocks and lockdep warnings while I was working on and testing the
importer paths in the past, so I feel confident that these code paths
are tested well enough by now. Note that Lima and Panfrost always use
the importer paths for display, because display is handled by a
separate driver.

I checked that:

- desktop environment works
- 3d works
- video dec (v4l2) dmabuf sharing works
- shrinker works

I.e. I tested it all with the VirtIO-GPU, Panfrost and Lima drivers. For
VirtIO-GPU the importer paths aren't relevant because it can only share
buffers with another virtio driver, and in upstream VirtIO-GPU is the
only virtio driver supporting dma-bufs.

--
Best regards,
Dmitry


2023-02-17 13:41:47

by Dmitry Osipenko

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 2/17/23 16:28, Thomas Zimmermann wrote:
> Hi,
>
> I looked through the series. Most of the patches should have an r-b or
> a-b at this point. I can't say much about patch 2 and had questions
> about others.
>
> Maybe you can already land patches 2, and 4 to 6? They look independent
> from the shrinker changes. You could also attempt to land the locking
> changes in patch 7. They need to get testing. I'll send you an a-b for
> the patch.

Thank you, I'll apply the acked patches and then make v11 with the
remaining patches updated.

Not sure if it will be possible to split patch 8, but I'll think on it
for v11.

--
Best regards,
Dmitry


2023-02-27 04:19:54

by Dmitry Osipenko

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 2/17/23 16:41, Dmitry Osipenko wrote:
> On 2/17/23 16:28, Thomas Zimmermann wrote:
>> Hi,
>>
>> I looked through the series. Most of the patches should have an r-b or
>> a-b at this point. I can't say much about patch 2 and had questions
>> about others.
>>
>> Maybe you can already land patches 2, and 4 to 6? They look independent
>> from the shrinker changes. You could also attempt to land the locking
>> changes in patch 7. They need to get testing. I'll send you an a-b for
>> the patch.
>
> Thank you, I'll apply the acked patches and then make v11 with the
> remaining patches updated.
>
> Not sure if it will be possible to split patch 8, but I'll think on it
> for v11.
>

Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
review comments addressed.

--
Best regards,
Dmitry


2023-02-27 04:28:02

by Dmitry Osipenko

Subject: Re: [PATCH v10 01/11] drm/msm/gem: Prevent blocking within shrinker loop

On 2/17/23 15:02, Thomas Zimmermann wrote:
> Hi
>
> Am 08.01.23 um 22:04 schrieb Dmitry Osipenko:
>> Consider this scenario:
>>
>> 1. APP1 continuously creates lots of small GEMs
>> 2. APP2 triggers `drop_caches`
>> 3. Shrinker starts to evict APP1 GEMs, while APP1 produces new purgeable
>>     GEMs
>> 4. msm_gem_shrinker_scan() returns non-zero number of freed pages
>>     and causes shrinker to try shrink more
>> 5. msm_gem_shrinker_scan() returns non-zero number of freed pages again,
>>     goto 4
>> 6. The APP2 is blocked in `drop_caches` until APP1 stops producing
>>     purgeable GEMs
>>
>> To prevent this blocking scenario, check number of remaining pages
>> that GPU shrinker couldn't release due to a GEM locking contention
>> or shrinking rejection. If there are no remaining pages left to shrink,
>> then there is no need to free up more pages and shrinker may break out
>> from the loop.
>>
>> This problem was found during shrinker/madvise IOCTL testing of
>> virtio-gpu driver. The MSM driver is affected in the same way.
>>
>> Reviewed-by: Rob Clark <[email protected]>
>> Fixes: b352ba54a820 ("drm/msm/gem: Convert to using drm_gem_lru")
>> Signed-off-by: Dmitry Osipenko <[email protected]>
>> ---
>>   drivers/gpu/drm/drm_gem.c              | 9 +++++++--
>>   drivers/gpu/drm/msm/msm_gem_shrinker.c | 8 ++++++--
>>   include/drm/drm_gem.h                  | 4 +++-
>>   3 files changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 59a0bb5ebd85..c6bca5ac6e0f 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -1388,10 +1388,13 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
>>    *
>>    * @lru: The LRU to scan
>>    * @nr_to_scan: The number of pages to try to reclaim
>> + * @remaining: The number of pages left to reclaim
>>    * @shrink: Callback to try to shrink/reclaim the object.
>>    */
>>   unsigned long
>> -drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
>> +drm_gem_lru_scan(struct drm_gem_lru *lru,
>> +         unsigned int nr_to_scan,
>> +         unsigned long *remaining,
>>            bool (*shrink)(struct drm_gem_object *obj))
>>   {
>>       struct drm_gem_lru still_in_lru;
>> @@ -1430,8 +1433,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
>> unsigned nr_to_scan,
>>            * hit shrinker in response to trying to get backing pages
>>            * for this obj (ie. while it's lock is already held)
>>            */
>> -        if (!dma_resv_trylock(obj->resv))
>> +        if (!dma_resv_trylock(obj->resv)) {
>> +            *remaining += obj->size >> PAGE_SHIFT;
>>               goto tail;
>> +        }
>>             if (shrink(obj)) {
>>               freed += obj->size >> PAGE_SHIFT;
>> diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> b/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> index 051bdbc093cf..b7c1242014ec 100644
>> --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
>> @@ -116,12 +116,14 @@ msm_gem_shrinker_scan(struct shrinker *shrinker,
>> struct shrink_control *sc)
>>       };
>>       long nr = sc->nr_to_scan;
>>       unsigned long freed = 0;
>> +    unsigned long remaining = 0;
>>         for (unsigned i = 0; (nr > 0) && (i < ARRAY_SIZE(stages)); i++) {
>>           if (!stages[i].cond)
>>               continue;
>>           stages[i].freed =
>> -            drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink);
>> +            drm_gem_lru_scan(stages[i].lru, nr, &remaining,
>
> This function relies on remaining being pre-initialized. That's not
> obvious and error prone. At least, pass in something like
> &stages[i].remaining that is then initialized internally by
> drm_gem_lru_scan() to zero. And, similarly to freed, sum up the
> individual stages' remaining here.
>
> TBH I somehow don't like the overall design of how all these functions
> interact with each other. But I also can't really point to the actual
> problem. So it's best to take what you have here; maybe with the change
> I proposed.
>
> Reviewed-by: Thomas Zimmermann <[email protected]>

I had to keep the remaining count pre-initialized by the caller because
moving the initialization hurt the rest of the code. However, I updated
the MSM patch to use &stages[i].remaining.

--
Best regards,
Dmitry


2023-02-27 04:34:59

by Dmitry Osipenko

Subject: Re: [PATCH v10 08/11] drm/shmem-helper: Add memory shrinker

On 2/17/23 16:19, Thomas Zimmermann wrote:
>> +/**
>> + * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
>> + *                           hardware access to the memory.
>
> Do we have a better name than _swap_in()? I suggest
> drm_gem_shmem_unevict(), which suggest that it's the inverse to _evict().

The canonical naming scheme used by TTM and other DRM drivers is
_swapin(), without the underscore between "swap" and "in". I'll use that
variant in v11 for naming consistency with the rest of the DRM code.

--
Best regards,
Dmitry


2023-02-27 10:38:06

by Jani Nikula

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On Mon, 27 Feb 2023, Dmitry Osipenko <[email protected]> wrote:
> On 2/17/23 16:41, Dmitry Osipenko wrote:
>> On 2/17/23 16:28, Thomas Zimmermann wrote:
>>> Hi,
>>>
>>> I looked through the series. Most of the patches should have an r-b or
>>> a-b at this point. I can't say much about patch 2 and had questions
>>> about others.
>>>
>>> Maybe you can already land patches 2, and 4 to 6? They look independent
>>> from the shrinker changes. You could also attempt to land the locking
>>> changes in patch 7. They need to get testing. I'll send you an a-b for
>>> the patch.
>>
>> Thank you, I'll apply the acked patches and then make v11 with the
>> remaining patches updated.
>>
>> Not sure if it will be possible to split patch 8, but I'll think on it
>> for v11.
>>
>
> Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
> review comments addressed.

Please resolve the drm-tip rebuild conflict [1].

BR,
Jani.


[1] https://paste.debian.net/1272275/


--
Jani Nikula, Intel Open Source Graphics Center

2023-02-27 11:02:17

by Dmitry Osipenko

Subject: Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

On 2/27/23 13:37, Jani Nikula wrote:
> On Mon, 27 Feb 2023, Dmitry Osipenko <[email protected]> wrote:
>> On 2/17/23 16:41, Dmitry Osipenko wrote:
>>> On 2/17/23 16:28, Thomas Zimmermann wrote:
>>>> Hi,
>>>>
>>>> I looked through the series. Most of the patches should have an r-b or
>>>> a-b at this point. I can't say much about patch 2 and had questions
>>>> about others.
>>>>
>>>> Maybe you can already land patches 2, and 4 to 6? They look independent
>>>> from the shrinker changes. You could also attempt to land the locking
>>>> changes in patch 7. They need to get testing. I'll send you an a-b for
>>>> the patch.
>>>
>>> Thank you, I'll apply the acked patches and then make v11 with the
>>> remaining patches updated.
>>>
>>> Not sure if it will be possible to split patch 8, but I'll think on it
>>> for v11.
>>>
>>
>> Applied patches 1-2 to misc-fixes and patches 3-7 to misc-next, with the
>> review comments addressed.
>
> Please resolve the drm-tip rebuild conflict [1].
>
> BR,
> Jani.
>
>
> [1] https://paste.debian.net/1272275/

I don't see that conflict locally; perhaps somebody already fixed it?

--
Best regards,
Dmitry