From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1].
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption.
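
For illustration, a minimal sketch of the conversion pattern (not taken
from this patch), assuming 'page' is a struct page already pinned by the
caller:

	u32 *ptr;

	/* Old API: disables page faults and preemption while mapped. */
	ptr = kmap_atomic(page);
	/* ... access *ptr ... */
	kunmap_atomic(ptr);

	/* New API: mapping is thread-local and stays valid across preemption. */
	ptr = kmap_local_page(page);
	/* ... access *ptr; the task may be preempted or sleep ... */
	kunmap_local(ptr);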
In drm/i915/gem/selftests/huge_pages.c, the function __cpu_check_shmem()
mainly uses the mapping to flush the cache and check the value. There are
two reasons why __cpu_check_shmem() doesn't need to disable page faults
and preemption around the mapping:
1. The flush operation is safe for CPU hotplug even when preemption is
not disabled. __cpu_check_shmem() calls drm_clflush_virt_range(), which
flushes with CLFLUSHOPT or WBINVD. Since CLFLUSHOPT is global on x86 and
drm_clflush_virt_range() issues WBINVD on each CPU, the flush operation
is global and any CPUs being added or removed are handled safely.
2. Any context switch caused by preemption or sleep (a page fault may
cause sleep) doesn't affect the validity of the local mapping.
Therefore, __cpu_check_shmem() is a function where kmap_local_page()
can correctly replace kmap_atomic().
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation about
cache flush.
Fabio: Referred to his boiler plate commit message.
---
drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index c570cf780079..6f4efe905105 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1022,7 +1022,7 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
goto err_unlock;
for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
- u32 *ptr = kmap_atomic(i915_gem_object_get_page(obj, n));
+ u32 *ptr = kmap_local_page(i915_gem_object_get_page(obj, n));
if (needs_flush & CLFLUSH_BEFORE)
drm_clflush_virt_range(ptr, PAGE_SIZE);
@@ -1030,12 +1030,12 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
if (ptr[dword] != val) {
pr_err("n=%lu ptr[%u]=%u, val=%u\n",
n, dword, ptr[dword], val);
- kunmap_atomic(ptr);
+ kunmap_local(ptr);
err = -EINVAL;
break;
}
- kunmap_atomic(ptr);
+ kunmap_local(ptr);
}
i915_gem_object_finish_access(obj);
--
2.34.1