From: Zhao Liu <[email protected]>
Hi list,
Sorry for a long delay since v1 [1]. This patchset is based on 197b6b6
(Linux 6.3-rc4).
Welcome and thanks for your review and comments!
# Purpose of this patchset
The purpose of this patchset is to replace all uses of kmap_atomic() in
i915 with kmap_local_page(), because the use of kmap_atomic() is being
deprecated in favor of kmap_local_page() [2]. Commit 92b64bd (mm/highmem:
add notes about conversions from kmap{,_atomic}()) documents this
deprecation.
# Motivation for deprecating kmap_atomic() and using kmap_local_page()
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults and preemption disables.
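The conversion pattern applied throughout this series is mechanical; a minimal sketch for illustration only (not taken verbatim from any patch):

```c
/* Before: kmap_atomic() disables page faults and preemption for the
 * whole critical section, even when the call site doesn't need that.
 */
void *vaddr = kmap_atomic(page);
memcpy(dst, vaddr + offset, len);
kunmap_atomic(vaddr);

/* After: the mapping is still CPU-local and nesting-safe, but page
 * faults and preemption remain enabled.
 */
vaddr = kmap_local_page(page);
memcpy(dst, vaddr + offset, len);
kunmap_local(vaddr);
```

See the individual patches for the real call sites.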
# Patch summary
Patches 1, 4-6 and 8-9 replace kmap_atomic()/kunmap_atomic() with
kmap_local_page()/kunmap_local() directly. With these local
mappings, page faults and preemption are allowed.
Patches 2 and 7 use memcpy_from_page() and memcpy_to_page() to replace
kmap_atomic()/kunmap_atomic(). These two variants of memcpy()
are based on local mappings, so page faults and preemption
are also allowed in these two interfaces.
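These two helpers fold the map/copy/unmap sequence into a single call. Conceptually they behave like the following sketch (simplified from include/linux/highmem.h, with debug checks omitted):

```c
static inline void memcpy_from_page(char *to, struct page *page,
				    size_t offset, size_t len)
{
	char *from = kmap_local_page(page);

	memcpy(to, from + offset, len);
	kunmap_local(from);
}

static inline void memcpy_to_page(struct page *page, size_t offset,
				  const char *from, size_t len)
{
	char *to = kmap_local_page(page);

	memcpy(to + offset, from, len);
	flush_dcache_page(page);
	kunmap_local(to);
}
```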
Patch 3 replaces kmap_atomic()/kunmap_atomic() with kmap_local_page()/
kunmap_local() and also disables page faults, due to special
handling (please see the commit message).
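In that case the local mapping is paired with an explicit page-fault disable; the resulting pattern looks roughly like this (a sketch of what patch 3 does in shmem_pwrite()):

```c
vaddr = kmap_local_page(page);
pagefault_disable();
unwritten = __copy_from_user_inatomic(vaddr + pg, user_data, len);
pagefault_enable();
kunmap_local(vaddr);
```

Unlike kmap_atomic(), this keeps preemption enabled while still preventing the recursive page fault the call site has to guard against.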
# Changes since v1
* Dropped hot plug related description in commit message since it has
nothing to do with kmap_local_page().
* Emphasized the motivation for using kmap_local_page() in commit
message.
* Rebased patch 1 on f47e630 (drm/i915/gem: Typecheck page lookups) to
keep the "idx" variable of type pgoff_t here.
* Used memcpy_from_page() and memcpy_to_page() to replace
kmap_local_page() + memcpy() in patch 2.
# Reference
[1]: https://lore.kernel.org/lkml/[email protected]/
[2]: https://lore.kernel.org/all/[email protected]
---
Zhao Liu (9):
drm/i915: Use kmap_local_page() in gem/i915_gem_object.c
drm/i915: Use memcpy_[from/to]_page() in gem/i915_gem_phys.c
drm/i915: Use kmap_local_page() in gem/i915_gem_shmem.c
drm/i915: Use kmap_local_page() in gem/selftests/huge_pages.c
drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c
drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_context.c
drm/i915: Use memcpy_from_page() in gt/uc/intel_uc_fw.c
drm/i915: Use kmap_local_page() in i915_cmd_parser.c
drm/i915: Use kmap_local_page() in gem/i915_gem_execbuffer.c
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
drivers/gpu/drm/i915/gem/i915_gem_object.c | 8 +++-----
drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 ++--------
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++++--
drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
.../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
.../gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 5 +----
drivers/gpu/drm/i915/i915_cmd_parser.c | 4 ++--
9 files changed, 28 insertions(+), 41 deletions(-)
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults and preemption disables.
There are two reasons why i915_gem_object_read_from_page_kmap() doesn't
need to disable page faults and preemption for the mapping:
1. The flush operation is safe. In drm/i915/gem/i915_gem_object.c,
i915_gem_object_read_from_page_kmap() calls drm_clflush_virt_range() to
use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86
and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush
operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, i915_gem_object_read_from_page_kmap() is a function where
the use of kmap_local_page() in place of kmap_atomic() is correctly
suited.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
And remove the redundant variable that stores the address of the mapped
page since kunmap_local() can accept any pointer within the page.
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* Rebased on f47e630 (drm/i915/gem: Typecheck page lookups) to keep
the "idx" variable of type pgoff_t here.
* Added description of the motivation of using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/gem/i915_gem_object.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index e6d4efde4fc5..c0bfdd7784f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -428,17 +428,15 @@ static void
i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
{
pgoff_t idx = offset >> PAGE_SHIFT;
- void *src_map;
void *src_ptr;
- src_map = kmap_atomic(i915_gem_object_get_page(obj, idx));
-
- src_ptr = src_map + offset_in_page(offset);
+ src_ptr = kmap_local_page(i915_gem_object_get_page(obj, idx))
+ + offset_in_page(offset);
if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
drm_clflush_virt_range(src_ptr, size);
memcpy(dst, src_ptr, size);
- kunmap_atomic(src_map);
+ kunmap_local(src_ptr);
}
static void
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() + memcpy() to memcpy_[from/to]_page(), which use
kmap_local_page() to build local mapping and then do memcpy().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults and preemption disables.
In drm/i915/gem/i915_gem_phys.c, the functions
i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys()
don't need to disable page faults and preemption for the mapping, for
two reasons:
1. The flush operation is safe. In drm/i915/gem/i915_gem_phys.c,
i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys()
call drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush.
Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
drm_clflush_virt_range(), the flush operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, i915_gem_object_get_pages_phys() and
i915_gem_object_put_pages_phys() are two functions where the uses of
local mappings in place of atomic mappings are correctly suited.
Convert the calls of kmap_atomic() / kunmap_atomic() + memcpy() to
memcpy_from_page() and memcpy_to_page().
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Used memcpy_from_page() and memcpy_to_page() to replace
kmap_local_page() + memcpy().
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* Added description of the motivation of using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred. Also based on
his suggestion to use memcpy_[from/to]_page() directly.
---
drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 76efe98eaa14..4c6d3f07260a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -64,16 +64,13 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
dst = vaddr;
for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
struct page *page;
- void *src;
page = shmem_read_mapping_page(mapping, i);
if (IS_ERR(page))
goto err_st;
- src = kmap_atomic(page);
- memcpy(dst, src, PAGE_SIZE);
+ memcpy_from_page(dst, page, 0, PAGE_SIZE);
drm_clflush_virt_range(dst, PAGE_SIZE);
- kunmap_atomic(src);
put_page(page);
dst += PAGE_SIZE;
@@ -112,16 +109,13 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
struct page *page;
- char *dst;
page = shmem_read_mapping_page(mapping, i);
if (IS_ERR(page))
continue;
- dst = kmap_atomic(page);
drm_clflush_virt_range(src, PAGE_SIZE);
- memcpy(dst, src, PAGE_SIZE);
- kunmap_atomic(dst);
+ memcpy_to_page(page, 0, src, PAGE_SIZE);
set_page_dirty(page);
if (obj->mm.madv == I915_MADV_WILLNEED)
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults or preemption disables.
In drm/i915/gem/selftests/huge_pages.c, the function __cpu_check_shmem()
mainly uses the mapping to flush the cache and check values. There are
two reasons why __cpu_check_shmem() doesn't need to disable page faults
and preemption for the mapping:
1. The flush operation is safe. Function __cpu_check_shmem() calls
drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
drm_clflush_virt_range(), the flush operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, __cpu_check_shmem() is a function where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index defece0bcb81..3f9ea48a48d0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1026,7 +1026,7 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
goto err_unlock;
for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
- u32 *ptr = kmap_atomic(i915_gem_object_get_page(obj, n));
+ u32 *ptr = kmap_local_page(i915_gem_object_get_page(obj, n));
if (needs_flush & CLFLUSH_BEFORE)
drm_clflush_virt_range(ptr, PAGE_SIZE);
@@ -1034,12 +1034,12 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
if (ptr[dword] != val) {
pr_err("n=%lu ptr[%u]=%u, val=%u\n",
n, dword, ptr[dword], val);
- kunmap_atomic(ptr);
+ kunmap_local(ptr);
err = -EINVAL;
break;
}
- kunmap_atomic(ptr);
+ kunmap_local(ptr);
}
i915_gem_object_finish_access(obj);
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults or preemption disables.
In drm/i915/gem/selftests/i915_gem_coherency.c, the functions cpu_set()
and cpu_get() mainly use the mapping to flush the cache and access the
value. There are two reasons why cpu_set() and cpu_get() don't need to
disable page faults and preemption for the mapping:
1. The flush operation is safe. cpu_set() and cpu_get() call
drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
drm_clflush_virt_range(), the flush operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, cpu_set() and cpu_get() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
.../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 3bef1beec7cb..beeb3e12eccc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -24,7 +24,6 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
{
unsigned int needs_clflush;
struct page *page;
- void *map;
u32 *cpu;
int err;
@@ -34,8 +33,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
goto out;
page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
- map = kmap_atomic(page);
- cpu = map + offset_in_page(offset);
+ cpu = kmap_local_page(page) + offset_in_page(offset);
if (needs_clflush & CLFLUSH_BEFORE)
drm_clflush_virt_range(cpu, sizeof(*cpu));
@@ -45,7 +43,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
if (needs_clflush & CLFLUSH_AFTER)
drm_clflush_virt_range(cpu, sizeof(*cpu));
- kunmap_atomic(map);
+ kunmap_local(cpu);
i915_gem_object_finish_access(ctx->obj);
out:
@@ -57,7 +55,6 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
{
unsigned int needs_clflush;
struct page *page;
- void *map;
u32 *cpu;
int err;
@@ -67,15 +64,14 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
goto out;
page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
- map = kmap_atomic(page);
- cpu = map + offset_in_page(offset);
+ cpu = kmap_local_page(page) + offset_in_page(offset);
if (needs_clflush & CLFLUSH_BEFORE)
drm_clflush_virt_range(cpu, sizeof(*cpu));
*v = *cpu;
- kunmap_atomic(map);
+ kunmap_local(cpu);
i915_gem_object_finish_access(ctx->obj);
out:
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption.
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults or preemption disables.
In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
and cpu_check() mainly use the mapping to flush the cache and
check/assign values.
There are two reasons why cpu_fill() and cpu_check() don't need to
disable page faults and preemption for the mapping:
1. The flush operation is safe. cpu_fill() and cpu_check() call
drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
drm_clflush_virt_range(), the flush operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, cpu_fill() and cpu_check() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index a81fa6a20f5a..dcbc0b8e3323 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -481,12 +481,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
for (n = 0; n < real_page_count(obj); n++) {
u32 *map;
- map = kmap_atomic(i915_gem_object_get_page(obj, n));
+ map = kmap_local_page(i915_gem_object_get_page(obj, n));
for (m = 0; m < DW_PER_PAGE; m++)
map[m] = value;
if (!has_llc)
drm_clflush_virt_range(map, PAGE_SIZE);
- kunmap_atomic(map);
+ kunmap_local(map);
}
i915_gem_object_finish_access(obj);
@@ -512,7 +512,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
for (n = 0; n < real_page_count(obj); n++) {
u32 *map, m;
- map = kmap_atomic(i915_gem_object_get_page(obj, n));
+ map = kmap_local_page(i915_gem_object_get_page(obj, n));
if (needs_flush & CLFLUSH_BEFORE)
drm_clflush_virt_range(map, PAGE_SIZE);
@@ -538,7 +538,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
}
out_unmap:
- kunmap_atomic(map);
+ kunmap_local(map);
if (err)
break;
}
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults or preemption disables.
In drm/i915/gt/uc/intel_uc_fw.c, the function intel_uc_fw_copy_rsa()
just uses the mapping to copy memory, so it doesn't need to disable
page faults and preemption for the mapping. Thus a local mapping
without atomic context (page faults and preemption enabled) is enough.
Therefore, intel_uc_fw_copy_rsa() is a function where the use of
memcpy_from_page() with kmap_local_page() in place of memcpy() with
kmap_atomic() is correctly suited.
Convert the calls of memcpy() with kmap_atomic() / kunmap_atomic() to
memcpy_from_page() which uses local mapping to copy.
[1]: https://lore.kernel.org/all/[email protected]/T/#u
v2: No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Ira: Referred to his task document and suggestions about using
memcpy_from_page() directly.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index 65672ff82605..5bbde4abd565 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -1152,16 +1152,13 @@ size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len)
for_each_sgt_page(page, iter, uc_fw->obj->mm.pages) {
u32 len = min_t(u32, size, PAGE_SIZE - offset);
- void *vaddr;
if (idx > 0) {
idx--;
continue;
}
- vaddr = kmap_atomic(page);
- memcpy(dst, vaddr + offset, len);
- kunmap_atomic(vaddr);
+ memcpy_from_page(dst, page, offset, len);
offset = 0;
dst += len;
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1].
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults or preemption disables.
In drm/i915/gem/i915_gem_shmem.c, the function shmem_pwrite() needs to
disable page faults to eliminate a potential recursion fault [2]. But
__copy_from_user_inatomic() doesn't need preemption disabled, and the
local mapping remains valid across a context switch (sched in/out).
So it can use kmap_local_page() / kunmap_local() with
pagefault_disable() / pagefault_enable() to replace the atomic mapping.
[1]: https://lore.kernel.org/all/[email protected]
[2]: https://patchwork.freedesktop.org/patch/295840/
v2: No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Reviewed-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Ira: Referred to his suggestions about keeping pagefault_disable().
Fabio: Referred to his description about why kmap_local_page() should
be preferred.
---
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 37d1efcd3ca6..ad69a79c8b31 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -475,11 +475,13 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
if (err < 0)
return err;
- vaddr = kmap_atomic(page);
+ vaddr = kmap_local_page(page);
+ pagefault_disable();
unwritten = __copy_from_user_inatomic(vaddr + pg,
user_data,
len);
- kunmap_atomic(vaddr);
+ pagefault_enable();
+ kunmap_local(vaddr);
err = aops->write_end(obj->base.filp, mapping, offset, len,
len - unwritten, page, data);
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the calls from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults and preemption disables.
In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
kmap_atomic() in eb_relocate_entry() and unmapped by kunmap_atomic()
in reloc_cache_reset(). This mapping/unmapping occurs in two places:
eb_relocate_vma() and eb_relocate_vma_slow().
Neither eb_relocate_vma() nor eb_relocate_vma_slow() needs to disable
page faults and preemption during the above mapping/unmapping, so they
can simply use kmap_local_page() / kunmap_local(), which perform the
mapping/unmapping regardless of the context.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
v2: No code change since v1. Added description of the motivation of
using kmap_local_page() and "Suggested-by" tag of Fabio.
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Ira: Referred to his task document, review comments.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 9dce2957b4e5..805565edd148 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1151,7 +1151,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
vaddr = unmask_page(cache->vaddr);
if (cache->vaddr & KMAP)
- kunmap_atomic(vaddr);
+ kunmap_local(vaddr);
else
io_mapping_unmap_atomic((void __iomem *)vaddr);
}
@@ -1167,7 +1167,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
if (cache->vaddr & KMAP) {
struct page *page = i915_gem_object_get_page(obj, cache->page);
- vaddr = kmap_atomic(page);
+ vaddr = kmap_local_page(page);
cache->vaddr = unmask_flags(cache->vaddr) |
(unsigned long)vaddr;
} else {
@@ -1197,7 +1197,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
if (cache->vaddr & CLFLUSH_AFTER)
mb();
- kunmap_atomic(vaddr);
+ kunmap_local(vaddr);
i915_gem_object_finish_access(obj);
} else {
struct i915_ggtt *ggtt = cache_to_ggtt(cache);
@@ -1229,7 +1229,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
struct page *page;
if (cache->vaddr) {
- kunmap_atomic(unmask_page(cache->vaddr));
+ kunmap_local(unmask_page(cache->vaddr));
} else {
unsigned int flushes;
int err;
@@ -1251,7 +1251,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
if (!obj->mm.dirty)
set_page_dirty(page);
- vaddr = kmap_atomic(page);
+ vaddr = kmap_local_page(page);
cache->vaddr = unmask_flags(cache->vaddr) | (unsigned long)vaddr;
cache->page = pageno;
--
2.34.1
From: Zhao Liu <[email protected]>
The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration).
With kmap_local_page(), we can avoid the often unwanted side effect of
unnecessary page faults and preemption disables.
There are two reasons why copy_batch() doesn't need to disable page
faults and preemption for the mapping:
1. The flush operation is safe. In i915_cmd_parser.c, copy_batch() calls
drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush.
Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu
in drm_clflush_virt_range(), the flush operation is global.
2. Any context switch caused by preemption or page faults (page fault
may cause sleep) doesn't affect the validity of local mapping.
Therefore, copy_batch() is a function where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.
Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().
[1]: https://lore.kernel.org/all/[email protected]
v2:
* Dropped hot plug related description since it has nothing to do with
kmap_local_page().
* No code change since v1, and added description of the motivation of
using kmap_local_page().
Suggested-by: Dave Hansen <[email protected]>
Suggested-by: Ira Weiny <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Signed-off-by: Zhao Liu <[email protected]>
---
Suggested by credits:
Dave: Referred to his explanation about cache flush.
Ira: Referred to his task document, review comments and explanation
about cache flush.
Fabio: Referred to his boilerplate commit message and his description
about why kmap_local_page() should be preferred.
---
drivers/gpu/drm/i915/i915_cmd_parser.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index ddf49c2dbb91..2905df83e180 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -1211,11 +1211,11 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
for (n = offset >> PAGE_SHIFT; remain; n++) {
int len = min(remain, PAGE_SIZE - x);
- src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
+ src = kmap_local_page(i915_gem_object_get_page(src_obj, n));
if (src_needs_clflush)
drm_clflush_virt_range(src + x, len);
memcpy(ptr, src + x, len);
- kunmap_atomic(src);
+ kunmap_local(src);
ptr += len;
remain -= len;
--
2.34.1
On Wednesday, March 29, 2023 09:32:11 CEST Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> Hi list,
>
> Sorry for a long delay since v1 [1]. This patchset is based on 197b6b6
> (Linux 6.3-rc4).
>
> Welcome and thanks for your review and comments!
>
>
> # Purpose of this patchset
>
> The purpose of this patchset is to replace all uses of kmap_atomic() in
> i915 with kmap_local_page(), because the use of kmap_atomic() is being
> deprecated in favor of kmap_local_page() [2]. Commit 92b64bd (mm/highmem:
> add notes about conversions from kmap{,_atomic}()) documents this
> deprecation.
>
>
> # Motivation for deprecating kmap_atomic() and using kmap_local_page()
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (kmap_atomic() disables
> preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
> migration).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults and preemption disables.
>
>
> # Patch summary
>
> Patches 1, 4-6 and 8-9 replace kmap_atomic()/kunmap_atomic() with
> kmap_local_page()/kunmap_local() directly. With these local
> mappings, page faults and preemption are allowed.
>
> Patches 2 and 7 use memcpy_from_page() and memcpy_to_page() to replace
> kmap_atomic()/kunmap_atomic(). These two variants of memcpy()
> are based on local mappings, so page faults and preemption
> are also allowed in these two interfaces.
>
> Patch 3 replaces kmap_atomic()/kunmap_atomic() with kmap_local_page()/
> kunmap_local() and also disables page faults, since special handling
> is needed (please see the commit message).
>
>
> # Changes since v1
>
> * Dropped hot plug related description in commit message since it has
> nothing to do with kmap_local_page().
> * Emphasized the motivation for using kmap_local_page() in commit
> message.
> * Rebased patch 1 on f47e630 (drm/i915/gem: Typecheck page lookups) to
> keep the "idx" variable of type pgoff_t here.
> * Used memcpy_from_page() and memcpy_to_page() to replace
> kmap_local_page() + memcpy() in patch 2.
>
>
> # Reference
>
> [1]: https://lore.kernel.org/lkml/[email protected]/
> [1]: https://lore.kernel.org/all/[email protected]
> ---
> Zhao Liu (9):
> drm/i915: Use kmap_local_page() in gem/i915_gem_object.c
> drm/i915: Use memcpy_[from/to]_page() in gem/i915_gem_phys.c
> drm/i915: Use kmap_local_page() in gem/i915_gem_shmem.c
> drm/i915: Use kmap_local_page() in gem/selftests/huge_pages.c
> drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c
> drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_context.c
> drm/i915: Use memcpy_from_page() in gt/uc/intel_uc_fw.c
> drm/i915: Use kmap_local_page() in i915_cmd_parser.c
> drm/i915: Use kmap_local_page() in gem/i915_gem_execbuffer.c
>
I _think_ that the "long delay" you mentioned in the first sentence has paid
off in full.
I don't see things to improve (except all those "kamp_atomic()" typos in the
patch summary; however, the typos are only in the cover letter so I'm sure
they won't hurt anybody).
Each of the nine patches listed above looks good to me, so they are all…
Reviewed-by: Fabio M. De Francesco <[email protected]>
Thanks!
Fabio
PS: Obviously there was no need to reconfirm my tag for patch 3/9. A single
tag that catches all patches is easier for a lazy person like me :-)
>
> drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
> drivers/gpu/drm/i915/gem/i915_gem_object.c | 8 +++-----
> drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 ++--------
> drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++++--
> drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
> .../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
> .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
> drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 5 +----
> drivers/gpu/drm/i915/i915_cmd_parser.c | 4 ++--
> 9 files changed, 28 insertions(+), 41 deletions(-)
>
> --
> 2.34.1
Hi Fabio,
On Wed, Mar 29, 2023 at 06:03:38PM +0200, Fabio M. De Francesco wrote:
> Date: Wed, 29 Mar 2023 18:03:38 +0200
> From: "Fabio M. De Francesco" <[email protected]>
> Subject: Re: [PATCH v2 0/9] drm/i915: Replace kmap_atomic() with
> kmap_local_page()
>
> On Wednesday, 29 March 2023 at 09:32:11 CEST Zhao Liu wrote:
> > From: Zhao Liu <[email protected]>
> >
> > Hi list,
> >
> > Sorry for a long delay since v1 [1]. This patchset is based on 197b6b6
> > (Linux 6.3-rc4).
> >
> > Welcome and thanks for your review and comments!
> >
> >
> > # Purpose of this patchset
> >
> > The purpose of this patchset is to replace all uses of kmap_atomic() in
> > i915 with kmap_local_page() because the use of kmap_atomic() is being
> > deprecated in favor of kmap_local_page()[1]. And 92b64bd (mm/highmem:
> > add notes about conversions from kmap{,_atomic}()) has declared the
> > deprecation of kmap_atomic().
> >
> >
> > # Motivation for deprecating kmap_atomic() and using kmap_local_page()
> >
> > The main difference between atomic and local mappings is that local
> > mappings don't disable page faults or preemption (preemption is
> > disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
> >
> > With kmap_local_page(), we can avoid the often unwanted side effect of
> > unnecessary page faults and preemption disables.
> >
> >
> > # Patch summary
> >
> > Patches 1, 4-6 and 8-9 replace kmap_atomic()/kunmap_atomic() with
> > kmap_local_page()/kunmap_local() directly. With these local
> > mappings, page faults and preemption are allowed.
> >
> > Patches 2 and 7 use memcpy_from_page() and memcpy_to_page() to replace
> > kmap_atomic()/kunmap_atomic(). These two variants of memcpy()
> > are based on local mappings, so page faults and preemption
> > are also allowed in these two interfaces.
> >
> > Patch 3 replaces kmap_atomic()/kunmap_atomic() with kmap_local_page()/
> > kunmap_local() and also disables page faults, since special handling
> > is needed (please see the commit message).
> >
> >
> > # Changes since v1
> >
> > * Dropped hot plug related description in commit message since it has
> > nothing to do with kmap_local_page().
> > * Emphasized the motivation for using kmap_local_page() in commit
> > message.
> > * Rebased patch 1 on f47e630 (drm/i915/gem: Typecheck page lookups) to
> > keep the "idx" variable of type pgoff_t here.
> > * Used memcpy_from_page() and memcpy_to_page() to replace
> > kmap_local_page() + memcpy() in patch 2.
> >
> >
> > # Reference
> >
> > [1]: https://lore.kernel.org/lkml/[email protected]/
> > [1]: https://lore.kernel.org/all/[email protected]
> > ---
> > Zhao Liu (9):
> > drm/i915: Use kmap_local_page() in gem/i915_gem_object.c
> > drm/i915: Use memcpy_[from/to]_page() in gem/i915_gem_phys.c
> > drm/i915: Use kmap_local_page() in gem/i915_gem_shmem.c
> > drm/i915: Use kmap_local_page() in gem/selftests/huge_pages.c
> > drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c
> > drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_context.c
> > drm/i915: Use memcpy_from_page() in gt/uc/intel_uc_fw.c
> > drm/i915: Use kmap_local_page() in i915_cmd_parser.c
> > drm/i915: Use kmap_local_page() in gem/i915_gem_execbuffer.c
> >
>
> I _think_ that the "long delay" you mentioned in the first sentence has paid
> off in full.
>
> I don't see things to improve (except all those "kamp_atomic()" typos in the
> patch summary; however, the typos are only in the cover letter so I'm sure
> they won't hurt anybody).
Thanks a lot for your patience and your help! :-)
>
> Each of the nine patches listed above looks good to me, so they are all…
>
> Reviewed-by: Fabio M. De Francesco <[email protected]>
>
> Thanks!
>
> Fabio
>
> PS: Obviously there was no need to reconfirm my tag for patch 3/9. A single
> tag that catches all patches is easier for a lazy person like me :-)
The typos and the description can still be improved. I'll pay more
attention in the future!
Thanks,
Zhao
>
> >
> > drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
> > drivers/gpu/drm/i915/gem/i915_gem_object.c | 8 +++-----
> > drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 ++--------
> > drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++++--
> > drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
> > .../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
> > .../gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
> > drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 5 +----
> > drivers/gpu/drm/i915/i915_cmd_parser.c | 4 ++--
> > 9 files changed, 28 insertions(+), 41 deletions(-)
> >
> > --
> > 2.34.1
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults and preemption disables.
>
> There are two reasons why i915_gem_object_read_from_page_kmap() doesn't
> need to disable page faults and preemption for the mapping:
>
> 1. The flush operation is safe. In drm/i915/gem/i915_gem_object.c,
> i915_gem_object_read_from_page_kmap() calls drm_clflush_virt_range() to
> use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86
> and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush
> operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, i915_gem_object_read_from_page_kmap() is a function where
> the use of kmap_local_page() in place of kmap_atomic() is correctly
> suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> And remove the redundant variable that stores the address of the mapped
> page since kunmap_local() can accept any pointer within the page.
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * Rebased on f47e630 (drm/i915/gem: Typecheck page lookups) to keep
> the "idx" variable of type pgoff_t here.
> * Added description of the motivation of using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() + memcpy() to memcpy_[from/to]_page(), which use
> kmap_local_page() to build local mapping and then do memcpy().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults and preemption disables.
>
> In drm/i915/gem/i915_gem_phys.c, the functions
> i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys()
> don't need to disable page faults and preemption for the mapping, for
> two reasons:
>
> 1. The flush operation is safe. In drm/i915/gem/i915_gem_phys.c,
> i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys()
> call drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush.
> Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
> drm_clflush_virt_range(), the flush operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, i915_gem_object_get_pages_phys() and
> i915_gem_object_put_pages_phys() are two functions where the uses of
> local mappings in place of atomic mappings are correctly suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() + memcpy() to
> memcpy_from_page() and memcpy_to_page().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Used memcpy_from_page() and memcpy_to_page() to replace
> kmap_local_page() + memcpy().
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * Added description of the motivation of using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults or preemption disables.
>
> In drm/i915/gem/selftests/huge_pages.c, function __cpu_check_shmem()
> mainly uses the mapping to flush the cache and check values. There are
> two reasons why __cpu_check_shmem() doesn't need to disable page faults
> and preemption for the mapping:
>
> 1. The flush operation is safe. Function __cpu_check_shmem() calls
> drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
> CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
> drm_clflush_virt_range(), the flush operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, __cpu_check_shmem() is a function where the use of
> kmap_local_page() in place of kmap_atomic() is correctly suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * No code change since v1, and added description of the motivation of
> using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults or preemption disables.
>
> In drm/i915/gem/selftests/i915_gem_coherency.c, the functions cpu_set()
> and cpu_get() mainly use the mapping to flush the cache and assign values.
> There are two reasons why cpu_set() and cpu_get() don't need to disable
> page faults and preemption for the mapping:
>
> 1. The flush operation is safe. cpu_set() and cpu_get() call
> drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
> CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
> drm_clflush_virt_range(), the flush operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, cpu_set() and cpu_get() are functions where the use of
> kmap_local_page() in place of kmap_atomic() is correctly suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * No code change since v1, and added description of the motivation of
> using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption.
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults or preemption disables.
>
> In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
> and cpu_check() mainly use the mapping to flush the cache and
> check/assign values.
>
> There are two reasons why cpu_fill() and cpu_check() don't need to
> disable page faults and preemption for the mapping:
>
> 1. The flush operation is safe. cpu_fill() and cpu_check() call
> drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
> CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
> drm_clflush_virt_range(), the flush operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, cpu_fill() and cpu_check() are functions where the use of
> kmap_local_page() in place of kmap_atomic() is correctly suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * No code change since v1, and added description of the motivation of
> using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
First off I think this is fine.
But as I looked at this final selftests patch I began to wonder how the
memory being mapped here and in the previous selftests patches is
allocated. Does highmem need to be considered at all? Unfortunately, I
could not determine where the memory in the SG list of this test gem
object was allocated.
AFAICS cpu_fill() is only called in create_test_object(). Digging into
huge_gem_object() did not reveal where these pages were allocated from.
I wonder if these kmap_local_page() calls could be removed entirely based
on knowing that the pages were allocated from low mem? Removing yet
another user of highmem altogether would be best if possible.
Do you know how these test objects are created? Do the pages come from
user space somehow?
Regardless this is still a step in the right direction so:
Reviewed-by: Ira Weiny <[email protected]>
> Suggested-by: Fabio M. De Francesco <[email protected]>
> Signed-off-by: Zhao Liu <[email protected]>
> ---
> Suggested by credits:
> Dave: Referred to his explanation about cache flush.
> Ira: Referred to his task document, review comments and explanation
> about cache flush.
> Fabio: Referred to his boiler plate commit message and his description
> about why kmap_local_page() should be preferred.
> ---
> drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> index a81fa6a20f5a..dcbc0b8e3323 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> @@ -481,12 +481,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
> for (n = 0; n < real_page_count(obj); n++) {
> u32 *map;
>
> - map = kmap_atomic(i915_gem_object_get_page(obj, n));
> + map = kmap_local_page(i915_gem_object_get_page(obj, n));
> for (m = 0; m < DW_PER_PAGE; m++)
> map[m] = value;
> if (!has_llc)
> drm_clflush_virt_range(map, PAGE_SIZE);
> - kunmap_atomic(map);
> + kunmap_local(map);
> }
>
> i915_gem_object_finish_access(obj);
> @@ -512,7 +512,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
> for (n = 0; n < real_page_count(obj); n++) {
> u32 *map, m;
>
> - map = kmap_atomic(i915_gem_object_get_page(obj, n));
> + map = kmap_local_page(i915_gem_object_get_page(obj, n));
> if (needs_flush & CLFLUSH_BEFORE)
> drm_clflush_virt_range(map, PAGE_SIZE);
>
> @@ -538,7 +538,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
> }
>
> out_unmap:
> - kunmap_atomic(map);
> + kunmap_local(map);
> if (err)
> break;
> }
> --
> 2.34.1
>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the call from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults and preemption disables.
>
> There are two reasons why copy_batch() doesn't need to disable
> page faults and preemption for the mapping:
>
> 1. The flush operation is safe. In i915_cmd_parser.c, copy_batch() calls
> drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush.
> Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu
> in drm_clflush_virt_range(), the flush operation is global.
>
> 2. Any context switch caused by preemption or page faults (page fault
> may cause sleep) doesn't affect the validity of local mapping.
>
> Therefore, copy_batch() is a function where the use of
> kmap_local_page() in place of kmap_atomic() is correctly suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2:
> * Dropped hot plug related description since it has nothing to do with
> kmap_local_page().
> * No code change since v1, and added description of the motivation of
> using kmap_local_page().
>
> Suggested-by: Dave Hansen <[email protected]>
> Suggested-by: Ira Weiny <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Zhao Liu wrote:
> From: Zhao Liu <[email protected]>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1], and this patch converts the calls from
> kmap_atomic() to kmap_local_page().
>
> The main difference between atomic and local mappings is that local
> mappings don't disable page faults or preemption (preemption is
> disabled in the !PREEMPT_RT case; otherwise only migration is disabled).
>
> With kmap_local_page(), we can avoid the often unwanted side effect of
> unnecessary page faults and preemption disables.
>
> In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
> kmap_atomic() in eb_relocate_entry(), and is unmapped by
> kunmap_atomic() in reloc_cache_reset().
First off thanks for the series and sticking with this. That said this
patch kind of threw me for a loop because tracing the map/unmap calls did
not make sense to me. See below.
>
> And this mapping/unmapping occurs in two places: one is in
> eb_relocate_vma(), and another is in eb_relocate_vma_slow().
>
> Neither eb_relocate_vma() nor eb_relocate_vma_slow() needs to
> disable page faults and preemption during the above mapping/
> unmapping.
>
> So it can simply use kmap_local_page() / kunmap_local() that can
> instead do the mapping / unmapping regardless of the context.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/[email protected]
>
> v2: No code change since v1. Added description of the motivation of
> using kmap_local_page() and "Suggested-by" tag of Fabio.
>
> Suggested-by: Ira Weiny <[email protected]>
> Suggested-by: Fabio M. De Francesco <[email protected]>
> Signed-off-by: Zhao Liu <[email protected]>
> ---
> Suggested by credits:
> Ira: Referred to his task document, review comments.
> Fabio: Referred to his boiler plate commit message and his description
> about why kmap_local_page() should be preferred.
> ---
> drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> index 9dce2957b4e5..805565edd148 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> @@ -1151,7 +1151,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
>
> vaddr = unmask_page(cache->vaddr);
> if (cache->vaddr & KMAP)
> - kunmap_atomic(vaddr);
> + kunmap_local(vaddr);
In the cover letter you don't mention this unmap path. Rather you mention
only reloc_cache_reset().
After digging into this and considering these are kmap_atomic() calls I
_think_ what you have is ok. But I think I'd like to see the call paths
documented a bit more clearly. Or perhaps cleaned up a lot.
For example I see the following call possibility from a user ioctl. In
this trace I see 2 examples where something is unmapped first. I don't
understand why that is required? I would assume reloc_cache_unmap() and
reloc_kmap() are helpers called from somewhere else requiring a remapping
of the cache but I don't see it.
i915_gem_execbuffer2_ioctl()
eb_relocate_parse()
eb_relocate_parse_slow()
eb_relocate_vma_slow()
eb_relocate_entry()
reloc_cache_unmap()
kunmap_atomic() <=== HERE!
reloc_cache_remap()
kmap_atomic()
relocate_entry()
reloc_vaddr()
reloc_kmap()
kunmap_atomic() <== HERE!
kmap_atomic()
reloc_cache_reset()
kunmap_atomic()
Could these mappings be cleaned up a lot more? Perhaps by removing some
of the helper functions which AFAICT are left over from older versions of
the code?
Also as an aside I think it is really bad that eb_relocate_entry() returns
negative errors in a u64. Better to get the types right IMO.
Thanks for the series!
Ira
> else
> io_mapping_unmap_atomic((void __iomem *)vaddr);
> }
> @@ -1167,7 +1167,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
> if (cache->vaddr & KMAP) {
> struct page *page = i915_gem_object_get_page(obj, cache->page);
>
> - vaddr = kmap_atomic(page);
> + vaddr = kmap_local_page(page);
> cache->vaddr = unmask_flags(cache->vaddr) |
> (unsigned long)vaddr;
> } else {
> @@ -1197,7 +1197,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
> if (cache->vaddr & CLFLUSH_AFTER)
> mb();
>
> - kunmap_atomic(vaddr);
> + kunmap_local(vaddr);
> i915_gem_object_finish_access(obj);
> } else {
> struct i915_ggtt *ggtt = cache_to_ggtt(cache);
> @@ -1229,7 +1229,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
> struct page *page;
>
> if (cache->vaddr) {
> - kunmap_atomic(unmask_page(cache->vaddr));
> + kunmap_local(unmask_page(cache->vaddr));
> } else {
> unsigned int flushes;
> int err;
> @@ -1251,7 +1251,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
> if (!obj->mm.dirty)
> set_page_dirty(page);
>
> - vaddr = kmap_atomic(page);
> + vaddr = kmap_local_page(page);
> cache->vaddr = unmask_flags(cache->vaddr) | (unsigned long)vaddr;
> cache->page = pageno;
>
> --
> 2.34.1
>
On 31/03/2023 04:33, Ira Weiny wrote:
> Zhao Liu wrote:
>> From: Zhao Liu <[email protected]>
>>
>> The use of kmap_atomic() is being deprecated in favor of
>> kmap_local_page()[1], and this patch converts the call from
>> kmap_atomic() to kmap_local_page().
>>
>> The main difference between atomic and local mappings is that local
>> mappings don't disable page faults or preemption.
>>
>> With kmap_local_page(), we can avoid the often unwanted side effect of
>> unnecessary page faults or preemption disables.
>>
>> In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
>> and cpu_check() mainly use the mapping to flush the cache and
>> check/assign values.
>>
>> There are two reasons why cpu_fill() and cpu_check() don't need to
>> disable page faults and preemption for the mapping:
>>
>> 1. The flush operation is safe. cpu_fill() and cpu_check() call
>> drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since
>> CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in
>> drm_clflush_virt_range(), the flush operation is global.
>>
>> 2. Any context switch caused by preemption or page faults (page fault
>> may cause sleep) doesn't affect the validity of local mapping.
>>
>> Therefore, cpu_fill() and cpu_check() are functions where the use of
>> kmap_local_page() in place of kmap_atomic() is correctly suited.
>>
>> Convert the calls of kmap_atomic() / kunmap_atomic() to
>> kmap_local_page() / kunmap_local().
>>
>> [1]: https://lore.kernel.org/all/[email protected]
>>
>> v2:
>> * Dropped hot plug related description since it has nothing to do with
>> kmap_local_page().
>> * No code change since v1, and added description of the motivation of
>> using kmap_local_page().
>>
>> Suggested-by: Dave Hansen <[email protected]>
>> Suggested-by: Ira Weiny <[email protected]>
>
> First off I think this is fine.
>
> But as I looked at this final selftests patch I began to wonder how the
> memory being mapped here and in the previous selftests patches is
> allocated. Does highmem need to be considered at all? Unfortunately, I
> could not determine where the memory in the SG list of this test gem
> object was allocated.
>
> AFAICS cpu_fill() is only called in create_test_object(). Digging into
> huge_gem_object() did not reveal where these pages were allocated from.
>
> I wonder if these kmap_local_page() calls could be removed entirely based
> on knowing that the pages were allocated from low mem? Removing yet
> another user of highmem altogether would be best if possible.
>
> Do you know how these test objects are created? Do the pages come from
> user space somehow?
FWIW
create_test_object
-> huge_gem_object
-> i915_gem_object_init(obj, &huge_ops, &lock_class, 0);
Which is:
static const struct drm_i915_gem_object_ops huge_ops = {
.name = "huge-gem",
.get_pages = huge_get_pages,
.put_pages = huge_put_pages,
};
And:
huge_get_pages()
...
#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL)
...
page = alloc_page(GFP | __GFP_HIGHMEM);
>
> Regardless this is still a step in the right direction so:
>
> Reviewed-by: Ira Weiny <[email protected]>
Yeah LGTM.
FYI I am yet to read through the rest of the series, but I don't think
there will be anything problematic and it passed our CI so likely is
good to pull in.
Regards,
Tvrtko
>
>> Suggested-by: Fabio M. De Francesco <[email protected]>
>> Signed-off-by: Zhao Liu <[email protected]>
>> ---
>> Suggested by credits:
>> Dave: Referred to his explanation about cache flush.
>> Ira: Referred to his task document, review comments and explanation
>> about cache flush.
>> Fabio: Referred to his boiler plate commit message and his description
>> about why kmap_local_page() should be preferred.
>> ---
>> drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> index a81fa6a20f5a..dcbc0b8e3323 100644
>> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> @@ -481,12 +481,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
>> for (n = 0; n < real_page_count(obj); n++) {
>> u32 *map;
>>
>> - map = kmap_atomic(i915_gem_object_get_page(obj, n));
>> + map = kmap_local_page(i915_gem_object_get_page(obj, n));
>> for (m = 0; m < DW_PER_PAGE; m++)
>> map[m] = value;
>> if (!has_llc)
>> drm_clflush_virt_range(map, PAGE_SIZE);
>> - kunmap_atomic(map);
>> + kunmap_local(map);
>> }
>>
>> i915_gem_object_finish_access(obj);
>> @@ -512,7 +512,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
>> for (n = 0; n < real_page_count(obj); n++) {
>> u32 *map, m;
>>
>> - map = kmap_atomic(i915_gem_object_get_page(obj, n));
>> + map = kmap_local_page(i915_gem_object_get_page(obj, n));
>> if (needs_flush & CLFLUSH_BEFORE)
>> drm_clflush_virt_range(map, PAGE_SIZE);
>>
>> @@ -538,7 +538,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
>> }
>>
>> out_unmap:
>> - kunmap_atomic(map);
>> + kunmap_local(map);
>> if (err)
>> break;
>> }
>> --
>> 2.34.1
>>
>
>
On 31/03/2023 05:18, Ira Weiny wrote:
> Zhao Liu wrote:
>> From: Zhao Liu <[email protected]>
>>
>> The use of kmap_atomic() is being deprecated in favor of
>> kmap_local_page()[1], and this patch converts the calls from
>> kmap_atomic() to kmap_local_page().
>>
>> The main difference between atomic and local mappings is that local
>> mappings doesn't disable page faults or preemption (the preemption is
>> disabled for !PREEMPT_RT case, otherwise it only disables migration).
>>
>> With kmap_local_page(), we can avoid the often unwanted side effect of
>> unnecessary page faults and preemption disables.
>>
>> In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
>> kmap_atomic() in eb_relocate_entry(), and is unmapped by
>> kunmap_atomic() in reloc_cache_reset().
>
> First off thanks for the series and sticking with this. That said this
> patch kind of threw me for a loop because tracing the map/unmap calls did
> not make sense to me. See below.
>
>>
>> And this mapping/unmapping occurs in two places: one is in
>> eb_relocate_vma(), and another is in eb_relocate_vma_slow().
>>
>> The function eb_relocate_vma() or eb_relocate_vma_slow() doesn't
>> need to disable pagefaults and preemption during the above mapping/
>> unmapping.
>>
>> So it can simply use kmap_local_page() / kunmap_local() that can
>> instead do the mapping / unmapping regardless of the context.
>>
>> Convert the calls of kmap_atomic() / kunmap_atomic() to
>> kmap_local_page() / kunmap_local().
>>
>> [1]: https://lore.kernel.org/all/[email protected]
>>
>> v2: No code change since v1. Added description of the motivation of
>> using kmap_local_page() and "Suggested-by" tag of Fabio.
>>
>> Suggested-by: Ira Weiny <[email protected]>
>> Suggested-by: Fabio M. De Francesco <[email protected]>
>> Signed-off-by: Zhao Liu <[email protected]>
>> ---
>> Suggested by credits:
>> Ira: Referred to his task document, review comments.
>> Fabio: Referred to his boiler plate commit message and his description
>> about why kmap_local_page() should be preferred.
>> ---
>> drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
>> 1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> index 9dce2957b4e5..805565edd148 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
>> @@ -1151,7 +1151,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
>>
>> vaddr = unmask_page(cache->vaddr);
>> if (cache->vaddr & KMAP)
>> - kunmap_atomic(vaddr);
>> + kunmap_local(vaddr);
>
> In the cover letter you don't mention this unmap path. Rather you mention
> only reloc_cache_reset().
>
> After digging into this and considering these are kmap_atomic() calls I
> _think_ what you have is ok. But I think I'd like to see the call paths
> documented a bit more clearly. Or perhaps cleaned up a lot.
>
> For example I see the following call possibility from a user ioctl. In
> this trace I see 2 examples where something is unmapped first. I don't
> understand why that is required? I would assume reloc_cache_unmap() and
> reloc_kmap() are helpers called from somewhere else requiring a remapping
> of the cache but I don't see it.
reloc_cache_unmap() is called from eb_relocate_entry().
The confusing part, that the unmap appears first, is just because reloc_cache
is a stateful setup. The previous mapping is kept around until reset (when the
caller moves to a different parent object), and unmapped/remapped once it moves
to a different page within that object.
However I am unsure if disabling pagefaulting is needed or not. Thomas,
Matt, being the last to touch this area, perhaps you could have a look?
Because I notice we have a fallback iomap path which still uses
io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
safe, does the iomap side also need converting to
io_mapping_map_local_wc? Or they have separate requirements?
Regards,
Tvrtko
>
> i915_gem_execbuffer2_ioctl()
> eb_relocate_parse()
> eb_relocate_parse_slow()
> eb_relocate_vma_slow()
> eb_relocate_entry()
> reloc_cache_unmap()
> kunmap_atomic() <=== HERE!
> reloc_cache_remap()
> kmap_atomic()
> relocate_entry()
> reloc_vaddr()
> reloc_kmap()
> kunmap_atomic() <== HERE!
> kmap_atomic()
>
> reloc_cache_reset()
> kunmap_atomic()
>
> Could these mappings be cleaned up a lot more? Perhaps by removing some
> of the helper functions which AFAICT are left over from older versions of
> the code?
>
> Also as an aside I think it is really bad that eb_relocate_entry() returns
> negative errors in a u64. Better to get the types right IMO.
>
> Thanks for the series!
> Ira
>
>> else
>> io_mapping_unmap_atomic((void __iomem *)vaddr);
>> }
>> @@ -1167,7 +1167,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
>> if (cache->vaddr & KMAP) {
>> struct page *page = i915_gem_object_get_page(obj, cache->page);
>>
>> - vaddr = kmap_atomic(page);
>> + vaddr = kmap_local_page(page);
>> cache->vaddr = unmask_flags(cache->vaddr) |
>> (unsigned long)vaddr;
>> } else {
>> @@ -1197,7 +1197,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
>> if (cache->vaddr & CLFLUSH_AFTER)
>> mb();
>>
>> - kunmap_atomic(vaddr);
>> + kunmap_local(vaddr);
>> i915_gem_object_finish_access(obj);
>> } else {
>> struct i915_ggtt *ggtt = cache_to_ggtt(cache);
>> @@ -1229,7 +1229,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
>> struct page *page;
>>
>> if (cache->vaddr) {
>> - kunmap_atomic(unmask_page(cache->vaddr));
>> + kunmap_local(unmask_page(cache->vaddr));
>> } else {
>> unsigned int flushes;
>> int err;
>> @@ -1251,7 +1251,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
>> if (!obj->mm.dirty)
>> set_page_dirty(page);
>>
>> - vaddr = kmap_atomic(page);
>> + vaddr = kmap_local_page(page);
>> cache->vaddr = unmask_flags(cache->vaddr) | (unsigned long)vaddr;
>> cache->page = pageno;
>>
>> --
>> 2.34.1
>>
>
>
On venerdì 31 marzo 2023 13:30:20 CEST Tvrtko Ursulin wrote:
> On 31/03/2023 05:18, Ira Weiny wrote:
> > Zhao Liu wrote:
> >> From: Zhao Liu <[email protected]>
> >>
> >> The use of kmap_atomic() is being deprecated in favor of
> >> kmap_local_page()[1], and this patch converts the calls from
> >> kmap_atomic() to kmap_local_page().
> >>
> >> The main difference between atomic and local mappings is that local
> >> mappings doesn't disable page faults or preemption (the preemption is
> >> disabled for !PREEMPT_RT case, otherwise it only disables migration).
> >>
> >> With kmap_local_page(), we can avoid the often unwanted side effect of
> >> unnecessary page faults and preemption disables.
> >>
> >> In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
> >> kmap_atomic() in eb_relocate_entry(), and is unmapped by
> >> kunmap_atomic() in reloc_cache_reset().
> >
> > First off thanks for the series and sticking with this. That said this
> > patch kind of threw me for a loop because tracing the map/unmap calls did
> > not make sense to me. See below.
> >
> >> And this mapping/unmapping occurs in two places: one is in
> >> eb_relocate_vma(), and another is in eb_relocate_vma_slow().
> >>
> >> The function eb_relocate_vma() or eb_relocate_vma_slow() doesn't
> >> need to disable pagefaults and preemption during the above mapping/
> >> unmapping.
> >>
> >> So it can simply use kmap_local_page() / kunmap_local() that can
> >> instead do the mapping / unmapping regardless of the context.
> >>
> >> Convert the calls of kmap_atomic() / kunmap_atomic() to
> >> kmap_local_page() / kunmap_local().
> >>
> >> [1]:
> >> https://lore.kernel.org/all/[email protected]
> >>
> >> v2: No code change since v1. Added description of the motivation of
> >>
> >> using kmap_local_page() and "Suggested-by" tag of Fabio.
> >>
> >> Suggested-by: Ira Weiny <[email protected]>
> >> Suggested-by: Fabio M. De Francesco <[email protected]>
> >> Signed-off-by: Zhao Liu <[email protected]>
> >> ---
> >>
> >> Suggested by credits:
> >> Ira: Referred to his task document, review comments.
> >> Fabio: Referred to his boiler plate commit message and his description
> >>
> >> about why kmap_local_page() should be preferred.
> >>
> >> ---
> >>
> >> drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
> >> 1 file changed, 5 insertions(+), 5 deletions(-)
> >>
[snip]
> However I am unsure if disabling pagefaulting is needed or not. Thomas,
> Matt, being the last to touch this area, perhaps you could have a look?
> Because I notice we have a fallback iomap path which still uses
> io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
> safe, does the iomap side also needs converting to
> io_mapping_map_local_wc? Or they have separate requirements?
AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
kmap_local_page(): the kernel virtual address is _only_ valid in the caller
context, and map/unmap nesting must be done in stack-based ordering (LIFO).
I think a follow up patch could safely switch to io_mapping_map_local_wc() /
io_mapping_unmap_local_wc() since the address is local to the context.
However, not being an expert, reading your note now I suspect that I'm missing
something. Can I ask why you think that page-faults disabling might be
necessary?
Thanks,
Fabio
> Regards,
>
> Tvrtko
Thanks all for your review!
On Fri, Mar 31, 2023 at 05:32:17PM +0200, Fabio M. De Francesco wrote:
> Date: Fri, 31 Mar 2023 17:32:17 +0200
> From: "Fabio M. De Francesco" <[email protected]>
> Subject: Re: [PATCH v2 9/9] drm/i915: Use kmap_local_page() in
> gem/i915_gem_execbuffer.c
>
> On venerdì 31 marzo 2023 13:30:20 CEST Tvrtko Ursulin wrote:
> > On 31/03/2023 05:18, Ira Weiny wrote:
>
[snip]
>
> > However I am unsure if disabling pagefaulting is needed or not. Thomas,
> > Matt, being the last to touch this area, perhaps you could have a look?
> > Because I notice we have a fallback iomap path which still uses
> > io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
> > safe, does the iomap side also needs converting to
> > io_mapping_map_local_wc? Or they have separate requirements?
>
> AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
> kmap_local_page(): the kernel virtual address is _only_ valid in the caller
> context, and map/unmap nesting must be done in stack-based ordering (LIFO).
>
> I think a follow up patch could safely switch to io_mapping_map_local_wc() /
> io_mapping_unmap_local_wc since the address is local to context.
>
> However, not being an expert, reading your note now I suspect that I'm missing
> something. Can I ask why you think that page-faults disabling might be
> necessary?
About the disabling of page faults here, could you please say more about
it? :-)
From previous discussions and the commit history, I didn't find relevant
information, and I lack background knowledge about it...
If there is a reason to disable page faults, I will fix it and refresh the
new version.
Thanks,
Zhao
>
> Thanks,
>
> Fabio
>
> > Regards,
> >
> > Tvrtko
>
>
>
On 31/03/2023 16:32, Fabio M. De Francesco wrote:
> On venerdì 31 marzo 2023 13:30:20 CEST Tvrtko Ursulin wrote:
>> On 31/03/2023 05:18, Ira Weiny wrote:
>>> Zhao Liu wrote:
>>>> From: Zhao Liu <[email protected]>
>>>>
>>>> The use of kmap_atomic() is being deprecated in favor of
>>>> kmap_local_page()[1], and this patch converts the calls from
>>>> kmap_atomic() to kmap_local_page().
>>>>
>>>> The main difference between atomic and local mappings is that local
>>>> mappings doesn't disable page faults or preemption (the preemption is
>>>> disabled for !PREEMPT_RT case, otherwise it only disables migration).
>>>>
>>>> With kmap_local_page(), we can avoid the often unwanted side effect of
>>>> unnecessary page faults and preemption disables.
>>>>
>>>> In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
>>>> kmap_atomic() in eb_relocate_entry(), and is unmapped by
>>>> kunmap_atomic() in reloc_cache_reset().
>>>
>>> First off thanks for the series and sticking with this. That said this
>>> patch kind of threw me for a loop because tracing the map/unmap calls did
>>> not make sense to me. See below.
>>>
>>>> And this mapping/unmapping occurs in two places: one is in
>>>> eb_relocate_vma(), and another is in eb_relocate_vma_slow().
>>>>
>>>> The function eb_relocate_vma() or eb_relocate_vma_slow() doesn't
>>>> need to disable pagefaults and preemption during the above mapping/
>>>> unmapping.
>>>>
>>>> So it can simply use kmap_local_page() / kunmap_local() that can
>>>> instead do the mapping / unmapping regardless of the context.
>>>>
>>>> Convert the calls of kmap_atomic() / kunmap_atomic() to
>>>> kmap_local_page() / kunmap_local().
>>>>
>>>> [1]:
>>>> https://lore.kernel.org/all/[email protected]
>>>>
>>>> v2: No code change since v1. Added description of the motivation of
>>>>
>>>> using kmap_local_page() and "Suggested-by" tag of Fabio.
>>>>
>>>> Suggested-by: Ira Weiny <[email protected]>
>>>> Suggested-by: Fabio M. De Francesco <[email protected]>
>>>> Signed-off-by: Zhao Liu <[email protected]>
>>>> ---
>>>>
>>>> Suggested by credits:
>>>> Ira: Referred to his task document, review comments.
>>>> Fabio: Referred to his boiler plate commit message and his description
>>>>
>>>> about why kmap_local_page() should be preferred.
>>>>
>>>> ---
>>>>
>>>> drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
>>>> 1 file changed, 5 insertions(+), 5 deletions(-)
>>>>
>
> [snip]
>
>> However I am unsure if disabling pagefaulting is needed or not. Thomas,
>> Matt, being the last to touch this area, perhaps you could have a look?
>> Because I notice we have a fallback iomap path which still uses
>> io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
>> safe, does the iomap side also needs converting to
>> io_mapping_map_local_wc? Or they have separate requirements?
>
> AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
> kmap_local_page(): the kernel virtual address is _only_ valid in the caller
> context, and map/unmap nesting must be done in stack-based ordering (LIFO).
>
> I think a follow up patch could safely switch to io_mapping_map_local_wc() /
> io_mapping_unmap_local_wc since the address is local to context.
>
> However, not being an expert, reading your note now I suspect that I'm missing
> something. Can I ask why you think that page-faults disabling might be
> necessary?
I am not saying it is; I was just unsure and wanted some of the people who worked on this code most recently to take a look and confirm.
I guess it will work since the copying is done like this anyway:
/*
* This is the fast path and we cannot handle a pagefault
* whilst holding the struct mutex lest the user pass in the
* relocations contained within a mmaped bo. For in such a case
* we, the page fault handler would call i915_gem_fault() and
* we would try to acquire the struct mutex again. Obviously
* this is bad and so lockdep complains vehemently.
*/
pagefault_disable();
copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
pagefault_enable();
if (unlikely(copied)) {
remain = -EFAULT;
goto out;
}
The comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy it, we seem to have code already to handle it. So the kmap_local conversion I *think* can't regress anything.
Patch to convert the io_mapping_map_atomic_wc can indeed come later.
In terms of logistics - if we landed this series in our branch it would be queued only for 6.5. Would that work for you?
Regards,
Tvrtko
Hi Tvrtko,
On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
[snip]
> >
> > [snip]
> > > However I am unsure if disabling pagefaulting is needed or not. Thomas,
> > > Matt, being the last to touch this area, perhaps you could have a look?
> > > Because I notice we have a fallback iomap path which still uses
> > > io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
> > > safe, does the iomap side also needs converting to
> > > io_mapping_map_local_wc? Or they have separate requirements?
> >
> > AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
> > kmap_local_page(): the kernel virtual address is _only_ valid in the caller
> > context, and map/unmap nesting must be done in stack-based ordering (LIFO).
> >
> > I think a follow up patch could safely switch to io_mapping_map_local_wc() /
> > io_mapping_unmap_local_wc since the address is local to context.
> >
> > However, not being an expert, reading your note now I suspect that I'm missing
> > something. Can I ask why you think that page-faults disabling might be
> > necessary?
>
> I am not saying it is, was just unsure and wanted some people who worked on this code most recently to take a look and confirm.
>
> I guess it will work since the copying is done like this anyway:
>
> /*
> * This is the fast path and we cannot handle a pagefault
> * whilst holding the struct mutex lest the user pass in the
> * relocations contained within a mmaped bo. For in such a case
> * we, the page fault handler would call i915_gem_fault() and
> * we would try to acquire the struct mutex again. Obviously
> * this is bad and so lockdep complains vehemently.
> */
> pagefault_disable();
> copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
> pagefault_enable();
> if (unlikely(copied)) {
> remain = -EFAULT;
> goto out;
> }
>
> Comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy if, we seem to have code already to handle it. So kmap_local conversion I *think* can't regress anything.
Thanks for your explanation!
>
> Patch to convert the io_mapping_map_atomic_wc can indeed come later.
Okay, I will also look at this.
>
> In terms of logistics - if we landed this series to out branch it would be queued only for 6.5. Would that work for you?
Yeah, it's ok for me. But could I ask, did I miss the 6.4 merge time?
Thanks,
Zhao
>
> Regards,
>
> Tvrtko
On 14/04/2023 11:45, Zhao Liu wrote:
> Hi Tvrtko,
>
> On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
>
> [snip]
>
>>>
>>> [snip]
>>>> However I am unsure if disabling pagefaulting is needed or not. Thomas,
>>>> Matt, being the last to touch this area, perhaps you could have a look?
>>>> Because I notice we have a fallback iomap path which still uses
>>>> io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
>>>> safe, does the iomap side also needs converting to
>>>> io_mapping_map_local_wc? Or they have separate requirements?
>>>
>>> AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
>>> kmap_local_page(): the kernel virtual address is _only_ valid in the caller
>>> context, and map/unmap nesting must be done in stack-based ordering (LIFO).
>>>
>>> I think a follow up patch could safely switch to io_mapping_map_local_wc() /
>>> io_mapping_unmap_local_wc since the address is local to context.
>>>
>>> However, not being an expert, reading your note now I suspect that I'm missing
>>> something. Can I ask why you think that page-faults disabling might be
>>> necessary?
>>
>> I am not saying it is, was just unsure and wanted some people who worked on this code most recently to take a look and confirm.
>>
>> I guess it will work since the copying is done like this anyway:
>>
>> /*
>> * This is the fast path and we cannot handle a pagefault
>> * whilst holding the struct mutex lest the user pass in the
>> * relocations contained within a mmaped bo. For in such a case
>> * we, the page fault handler would call i915_gem_fault() and
>> * we would try to acquire the struct mutex again. Obviously
>> * this is bad and so lockdep complains vehemently.
>> */
>> pagefault_disable();
>> copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
>> pagefault_enable();
>> if (unlikely(copied)) {
>> remain = -EFAULT;
>> goto out;
>> }
>>
>> Comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy if, we seem to have code already to handle it. So kmap_local conversion I *think* can't regress anything.
>
> Thanks for your explanation!
>
>>
>> Patch to convert the io_mapping_map_atomic_wc can indeed come later.
>
> Okay, I will also look at this.
>
>>
>> In terms of logistics - if we landed this series to out branch it would be queued only for 6.5. Would that work for you?
>
> Yeah, it's ok for me. But could I ask, did I miss the 6.4 merge time?
Yes, but just because we failed to review and merge in time, not because
you did not provide patches in time.
Regards,
Tvrtko
On Mon, Apr 17, 2023 at 12:24:45PM +0100, Tvrtko Ursulin wrote:
>
> On 14/04/2023 11:45, Zhao Liu wrote:
> > Hi Tvrtko,
> >
> > On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
> >
> > [snip]
> >
> > > >
> > > > [snip]
> > > > > However I am unsure if disabling pagefaulting is needed or not. Thomas,
> > > > > Matt, being the last to touch this area, perhaps you could have a look?
> > > > > Because I notice we have a fallback iomap path which still uses
> > > > > io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
> > > > > safe, does the iomap side also needs converting to
> > > > > io_mapping_map_local_wc? Or they have separate requirements?
> > > >
> > > > AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
> > > > kmap_local_page(): the kernel virtual address is _only_ valid in the caller
> > > > context, and map/unmap nesting must be done in stack-based ordering (LIFO).
> > > >
> > > > I think a follow up patch could safely switch to io_mapping_map_local_wc() /
> > > > io_mapping_unmap_local_wc since the address is local to context.
> > > >
> > > > However, not being an expert, reading your note now I suspect that I'm missing
> > > > something. Can I ask why you think that page-faults disabling might be
> > > > necessary?
> > >
> > > I am not saying it is, was just unsure and wanted some people who worked on this code most recently to take a look and confirm.
> > >
> > > I guess it will work since the copying is done like this anyway:
> > >
> > > /*
> > > * This is the fast path and we cannot handle a pagefault
> > > * whilst holding the struct mutex lest the user pass in the
> > > * relocations contained within a mmaped bo. For in such a case
> > > * we, the page fault handler would call i915_gem_fault() and
> > > * we would try to acquire the struct mutex again. Obviously
> > > * this is bad and so lockdep complains vehemently.
> > > */
> > > pagefault_disable();
> > > copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
> > > pagefault_enable();
> > > if (unlikely(copied)) {
> > > remain = -EFAULT;
> > > goto out;
> > > }
> > >
> > > Comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy if, we seem to have code already to handle it. So kmap_local conversion I *think* can't regress anything.
> >
> > Thanks for your explanation!
> >
> > >
> > > Patch to convert the io_mapping_map_atomic_wc can indeed come later.
> >
> > Okay, I will also look at this.
> >
> > >
> > > In terms of logistics - if we landed this series to out branch it would be queued only for 6.5. Would that work for you?
> >
> > Yeah, it's ok for me. But could I ask, did I miss the 6.4 merge time?
>
> Yes, but just because we failed to review and merge in time, not because you
> did not provide patches in time.
It is worth mentioning that under drm we close the merge window earlier.
Around -rc5.
So, Linus' merge window for 6.4 didn't happen yet. But our drm-next that
is going to be sent there is already closed.
>
> Regards,
>
> Tvrtko
>
Hi Rodrigo and Tvrtko,
It seems this series was missed in v6.5.
This work should not be forgotten, so let me rebase and refresh the version.
Regards,
Zhao
On Mon, Apr 17, 2023 at 10:53:28AM -0400, Rodrigo Vivi wrote:
> Date: Mon, 17 Apr 2023 10:53:28 -0400
> From: Rodrigo Vivi <[email protected]>
> Subject: Re: [PATCH v2 9/9] drm/i915: Use kmap_local_page() in
> gem/i915_gem_execbuffer.c
>
> On Mon, Apr 17, 2023 at 12:24:45PM +0100, Tvrtko Ursulin wrote:
> >
> > On 14/04/2023 11:45, Zhao Liu wrote:
> > > Hi Tvrtko,
> > >
> > > On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
> > >
> > > [snip]
> > >
> > > > >
> > > > > [snip]
> > > > > > However I am unsure if disabling pagefaulting is needed or not. Thomas,
> > > > > > Matt, being the last to touch this area, perhaps you could have a look?
> > > > > > Because I notice we have a fallback iomap path which still uses
> > > > > > io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
> > > > > > safe, does the iomap side also needs converting to
> > > > > > io_mapping_map_local_wc? Or they have separate requirements?
> > > > >
> > > > > AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
> > > > > kmap_local_page(): the kernel virtual address is _only_ valid in the caller
> > > > > context, and map/unmap nesting must be done in stack-based ordering (LIFO).
> > > > >
> > > > > I think a follow up patch could safely switch to io_mapping_map_local_wc() /
> > > > > io_mapping_unmap_local_wc since the address is local to context.
> > > > >
> > > > > However, not being an expert, reading your note now I suspect that I'm missing
> > > > > something. Can I ask why you think that page-faults disabling might be
> > > > > necessary?
> > > >
> > > > I am not saying it is, was just unsure and wanted some people who worked on this code most recently to take a look and confirm.
> > > >
> > > > I guess it will work since the copying is done like this anyway:
> > > >
> > > > /*
> > > > * This is the fast path and we cannot handle a pagefault
> > > > * whilst holding the struct mutex lest the user pass in the
> > > > * relocations contained within a mmaped bo. For in such a case
> > > > * we, the page fault handler would call i915_gem_fault() and
> > > > * we would try to acquire the struct mutex again. Obviously
> > > > * this is bad and so lockdep complains vehemently.
> > > > */
> > > > pagefault_disable();
> > > > copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
> > > > pagefault_enable();
> > > > if (unlikely(copied)) {
> > > > remain = -EFAULT;
> > > > goto out;
> > > > }
> > > >
> > > > Comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy if, we seem to have code already to handle it. So kmap_local conversion I *think* can't regress anything.
> > >
> > > Thanks for your explanation!
> > >
> > > >
> > > > Patch to convert the io_mapping_map_atomic_wc can indeed come later.
> > >
> > > Okay, I will also look at this.
> > >
> > > >
> > > > In terms of logistics - if we landed this series to out branch it would be queued only for 6.5. Would that work for you?
> > >
> > > Yeah, it's ok for me. But could I ask, did I miss the 6.4 merge time?
> >
> > Yes, but just because we failed to review and merge in time, not because you
> > did not provide patches in time.
>
> It is worth mentioning that under drm we close the merge window earlier.
> Around -rc5.
>
> So, Linus' merge window for 6.4 didn't happen yet. But our drm-next that
> is going to be sent there is already closed.
>
> >
> > Regards,
> >
> > Tvrtko
> >
Hi,
On 18/10/2023 17:19, Zhao Liu wrote:
> Hi Rodrigo and Tvrtko,
>
> It seems this series is missed in v6.5.
> This work should not be forgotten. Let me rebase and refresh the version.
Right, it seems we did not manage to social engineer any reviews. Please
do respin and we will try again.
Regards,
Tvrtko
>
> Regards,
> Zhao
>
> On Mon, Apr 17, 2023 at 10:53:28AM -0400, Rodrigo Vivi wrote:
>> Date: Mon, 17 Apr 2023 10:53:28 -0400
>> From: Rodrigo Vivi <[email protected]>
>> Subject: Re: [PATCH v2 9/9] drm/i915: Use kmap_local_page() in
>> gem/i915_gem_execbuffer.c
>>
>> On Mon, Apr 17, 2023 at 12:24:45PM +0100, Tvrtko Ursulin wrote:
>>>
>>> On 14/04/2023 11:45, Zhao Liu wrote:
>>>> Hi Tvrtko,
>>>>
>>>> On Wed, Apr 12, 2023 at 04:45:13PM +0100, Tvrtko Ursulin wrote:
>>>>
>>>> [snip]
>>>>
>>>>>>
>>>>>> [snip]
>>>>>>> However I am unsure if disabling pagefaulting is needed or not. Thomas,
>>>>>>> Matt, being the last to touch this area, perhaps you could have a look?
>>>>>>> Because I notice we have a fallback iomap path which still uses
>>>>>>> io_mapping_map_atomic_wc. So if kmap_atomic to kmap_local conversion is
>>>>>>> safe, does the iomap side also needs converting to
>>>>>>> io_mapping_map_local_wc? Or they have separate requirements?
>>>>>>
>>>>>> AFAIK, the requirements for io_mapping_map_local_wc() are the same as for
>>>>>> kmap_local_page(): the kernel virtual address is _only_ valid in the caller
>>>>>> context, and map/unmap nesting must be done in stack-based ordering (LIFO).
>>>>>>
>>>>>> I think a follow up patch could safely switch to io_mapping_map_local_wc() /
>>>>>> io_mapping_unmap_local_wc since the address is local to context.
>>>>>>
>>>>>> However, not being an expert, reading your note now I suspect that I'm missing
>>>>>> something. Can I ask why you think that page-faults disabling might be
>>>>>> necessary?
>>>>>
>>>>> I am not saying it is, was just unsure and wanted some people who worked on this code most recently to take a look and confirm.
>>>>>
>>>>> I guess it will work since the copying is done like this anyway:
>>>>>
>>>>> /*
>>>>> * This is the fast path and we cannot handle a pagefault
>>>>> * whilst holding the struct mutex lest the user pass in the
>>>>> * relocations contained within a mmaped bo. For in such a case
>>>>> * we, the page fault handler would call i915_gem_fault() and
>>>>> * we would try to acquire the struct mutex again. Obviously
>>>>> * this is bad and so lockdep complains vehemently.
>>>>> */
>>>>> pagefault_disable();
>>>>> copied = __copy_from_user_inatomic(r, urelocs, count * sizeof(r[0]));
>>>>> pagefault_enable();
>>>>> if (unlikely(copied)) {
>>>>> remain = -EFAULT;
>>>>> goto out;
>>>>> }
>>>>>
>>>>> Comment is a bit outdated since we don't use that global "struct mutex" any longer, but in any case, if there is a page fault on the mapping where we need to recurse into i915 again to satisfy if, we seem to have code already to handle it. So kmap_local conversion I *think* can't regress anything.
>>>>
>>>> Thanks for your explanation!
>>>>
>>>>>
>>>>> Patch to convert the io_mapping_map_atomic_wc can indeed come later.
>>>>
>>>> Okay, I will also look at this.
>>>>
>>>>>
>>>>> In terms of logistics - if we landed this series to out branch it would be queued only for 6.5. Would that work for you?
>>>>
>>>> Yeah, it's ok for me. But could I ask, did I miss the 6.4 merge time?
>>>
>>> Yes, but just because we failed to review and merge in time, not because you
>>> did not provide patches in time.
>>
>> It is worth mentioning that under drm we close the merge window earlier.
>> Around -rc5.
>>
>> So, Linus' merge window for 6.4 didn't happen yet. But our drm-next that
>> is going to be sent there is already closed.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>