Hello!
This patchset removes some unnecessary checks and adds a clarifying
comment. While analyzing the zs_object_copy() function code, I spent
some time understanding what the kunmap_atomic(d_addr) call is for.
This point turned out to be non-trivial, so it seems worth
a comment.
Alexey Romanov (2):
zsmalloc: zs_object_copy: add clarifying comment
zsmalloc: remove unnecessary size_class NULL check
mm/zsmalloc.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
--
2.30.1
It's not obvious why the kunmap_atomic(d_addr) call is needed; add a
comment explaining it.
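To illustrate the rule the comment describes, here is a minimal sketch
(s_page and d_page are hypothetical struct page pointers, not the
actual zsmalloc variables): kmap_atomic() mappings are stacked, so
they must be released in the reverse (LIFO) order of acquisition:

	void *s_addr = kmap_atomic(s_page);	/* mapped first */
	void *d_addr = kmap_atomic(d_page);	/* mapped second */

	memcpy(d_addr, s_addr, PAGE_SIZE);

	kunmap_atomic(d_addr);	/* last mapped, so unmapped first */
	kunmap_atomic(s_addr);	/* first mapped, so unmapped last */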
Signed-off-by: Alexey Romanov <[email protected]>
---
mm/zsmalloc.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d5fc04385b8..5efa8c592193 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1564,6 +1564,12 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
d_off += size;
d_size -= size;
+ /* The kunmap_atomic(d_addr) call here is necessary: kunmap_atomic()
+ * calls must occur in reverse order of the kmap_atomic() calls.
+ * So, to call kunmap_atomic(s_addr), we must first call kunmap_atomic(d_addr).
+ * For more details see:
+ * https://lore.kernel.org/linux-mm/[email protected]/
+ */
if (s_off >= PAGE_SIZE) {
kunmap_atomic(d_addr);
kunmap_atomic(s_addr);
--
2.30.1
Elements of the pool->size_class array can't be NULL, so this check
is not needed.
Throughout the code we only assign non-NULL values to
pool->size_class[i]. The memory for these classes is released in
zs_destroy_pool(), which also destroys the pool itself.
In addition, zs_stats_size_show() and async_free_zspage() iterate
over the array in the same way without checking for a NULL
pointer.
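For reference, a simplified sketch of the relevant zs_create_pool()
logic (abbreviated, not the verbatim kernel code): every slot of
pool->size_class[] gets filled, either with a newly allocated class or
with a pointer shared with a previous, mergeable class, which is why a
NULL check when iterating is redundant (shared slots are instead
recognized by class->index != i):

	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
		...
		if (prev_class &&
		    can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
			/* shared slot: keeps prev_class->index != i */
			pool->size_class[i] = prev_class;
			continue;
		}

		class = kzalloc(sizeof(struct size_class), GFP_KERNEL);
		if (!class)
			goto err;	/* the whole pool is destroyed on failure */

		class->index = i;
		pool->size_class[i] = class;
		prev_class = class;
	}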
Signed-off-by: Alexey Romanov <[email protected]>
---
mm/zsmalloc.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5efa8c592193..f3cf0d0925a3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2154,8 +2154,6 @@ unsigned long zs_compact(struct zs_pool *pool)
for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
class = pool->size_class[i];
- if (!class)
- continue;
if (class->index != i)
continue;
pages_freed += __zs_compact(pool, class);
@@ -2200,8 +2198,6 @@ static unsigned long zs_shrinker_count(struct shrinker *shrinker,
for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
class = pool->size_class[i];
- if (!class)
- continue;
if (class->index != i)
continue;
@@ -2361,9 +2357,6 @@ void zs_destroy_pool(struct zs_pool *pool)
int fg;
struct size_class *class = pool->size_class[i];
- if (!class)
- continue;
-
if (class->index != i)
continue;
--
2.30.1
On (22/08/11 18:37), Alexey Romanov wrote:
> @@ -1564,6 +1564,12 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
> d_off += size;
> d_size -= size;
>
> + /* The kunmap_atomic(d_addr) call here is necessary: kunmap_atomic()
> + * calls must occur in reverse order of the kmap_atomic() calls.
> + * So, to call kunmap_atomic(s_addr), we must first call kunmap_atomic(d_addr).
> + * For more details see:
> + * https://lore.kernel.org/linux-mm/[email protected]/
> + */
I'd drop the link part. Emails are not documentation; kmap_atomic
is documented in Documentation/mm/highmem.
Hi, Sergey.
On Mon, Aug 15, 2022 at 11:52:47AM +0900, Sergey Senozhatsky wrote:
> On (22/08/11 18:37), Alexey Romanov wrote:
> > @@ -1564,6 +1564,12 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
> > d_off += size;
> > d_size -= size;
> >
> > + /* The kunmap_atomic(d_addr) call here is necessary: kunmap_atomic()
> > + * calls must occur in reverse order of the kmap_atomic() calls.
> > + * So, to call kunmap_atomic(s_addr), we must first call kunmap_atomic(d_addr).
> > + * For more details see:
> > + * https://lore.kernel.org/linux-mm/[email protected]/
> > + */
>
> I'd drop the link part. Emails are not documentation; kmap_atomic
> is documented in Documentation/mm/highmem.
I'll fix this in a v2 patchset, but Andrew has already merged the v1
patchset into the mm-unstable branch.
--
Thank you,
Alexey