2023-06-24 05:32:19

by Sergey Senozhatsky

Subject: [PATCHv2 0/3] zsmalloc: small compaction improvements

Hi,
A tiny series that reduces the number of find_alloced_obj()
invocations (each of which performs a linear scan of a sub-page)
during compaction. Inspired by Alexey Romanov's findings.
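
For context, find_alloced_obj() is a thin wrapper around
find_tagged_obj(), which walks the objects packed into one
sub-page. A schematic sketch of that scan, abridged from
mm/zsmalloc.c as of this series (treat the exact signatures
as illustrative, not authoritative):

	/*
	 * Walk the objects in @page starting at index @*obj_idx
	 * and return the handle of the first one carrying @tag
	 * (e.g. OBJ_ALLOCATED_TAG). Every invocation is a linear
	 * scan of the sub-page, which is the cost this series
	 * tries to pay less often.
	 */
	static unsigned long find_tagged_obj(struct size_class *class,
					     struct page *page,
					     int *obj_idx, int tag)
	{
		unsigned int offset = get_first_obj_offset(page);
		int index = *obj_idx;
		unsigned long handle = 0;
		void *addr = kmap_atomic(page);

		offset += class->size * index;
		while (offset < PAGE_SIZE) {
			if (obj_tagged(page, addr + offset, &handle, tag))
				break;
			offset += class->size;
			index++;
		}
		kunmap_atomic(addr);

		*obj_idx = index;
		return handle;
	}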

v2:
-- picked up a patch from Minchan

Minchan Kim (1):
zsmalloc: remove zs_compact_control

Sergey Senozhatsky (2):
zsmalloc: do not scan for allocated objects in empty zspage
zsmalloc: move migration destination zspage inuse check

mm/zsmalloc.c | 50 ++++++++++++++++++++------------------------------
1 file changed, 20 insertions(+), 30 deletions(-)

--
2.41.0.162.gfafddb0af9-goog



2023-06-24 06:04:12

by Sergey Senozhatsky

Subject: [PATCHv2 2/3] zsmalloc: move migration destination zspage inuse check

The destination zspage fullness check needs to be done after
zs_object_copy(), because that is where the source and destination
zspage fullness changes. Checking destination zspage fullness
before zs_object_copy() may cause migration to loop through
source zspage sub-pages scanning for allocated objects, only to
find out at the end that the destination zspage is full.
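
Schematically, the migration loop after this change looks as
follows (reconstructed from the hunks below, simplified for
illustration):

	while (1) {
		handle = find_alloced_obj(class, s_page, &obj_idx);
		if (!handle) {
			s_page = get_next_page(s_page);
			if (!s_page)
				break;
			obj_idx = 0;
			continue;
		}

		used_obj = handle_to_obj(handle);
		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
		/* fullness of both zspages changes here... */
		zs_object_copy(class, free_obj, used_obj);
		obj_idx++;
		record_obj(handle, free_obj);
		obj_free(class->size, used_obj);

		/* ...so the destination-full check is only valid here */
		if (zspage_full(class, get_zspage(d_page)))
			break;

		/* Stop if there are no more objects to migrate */
		if (zspage_empty(get_zspage(s_page)))
			break;
	}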

Signed-off-by: Sergey Senozhatsky <[email protected]>
---
mm/zsmalloc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d60eaedc3b7..4a84f7877669 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1620,10 +1620,6 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
continue;
}

- /* Stop if there is no more space */
- if (zspage_full(class, get_zspage(d_page)))
- break;
-
used_obj = handle_to_obj(handle);
free_obj = obj_malloc(pool, get_zspage(d_page), handle);
zs_object_copy(class, free_obj, used_obj);
@@ -1631,6 +1627,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
record_obj(handle, free_obj);
obj_free(class->size, used_obj);

+ /* Stop if there is no more space */
+ if (zspage_full(class, get_zspage(d_page)))
+ break;
+
/* Stop if there are no more objects to migrate */
if (zspage_empty(get_zspage(s_page)))
break;
--
2.41.0.162.gfafddb0af9-goog


2023-06-24 06:04:55

by Sergey Senozhatsky

Subject: [PATCHv2 1/3] zsmalloc: do not scan for allocated objects in empty zspage

zspage migration can terminate as soon as it moves the last
allocated object from the source zspage. Add a simple helper
zspage_empty() that tests zspage ->inuse on each migration
iteration.
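
The test is cheap because ->inuse is the zspage's live-object
counter, so emptiness is a single counter read rather than another
linear sub-page scan. Roughly (accessor shown schematically, as in
mm/zsmalloc.c):

	/* ->inuse counts live objects in the zspage */
	static inline int get_zspage_inuse(struct zspage *zspage)
	{
		return zspage->inuse;
	}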

Suggested-by: Alexey Romanov <[email protected]>
Signed-off-by: Sergey Senozhatsky <[email protected]>
---
mm/zsmalloc.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3f057970504e..5d60eaedc3b7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
return get_zspage_inuse(zspage) == class->objs_per_zspage;
}

+static bool zspage_empty(struct zspage *zspage)
+{
+ return get_zspage_inuse(zspage) == 0;
+}
+
/**
* zs_lookup_class_index() - Returns index of the zsmalloc &size_class
* that hold objects of the provided size.
@@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
obj_idx++;
record_obj(handle, free_obj);
obj_free(class->size, used_obj);
+
+ /* Stop if there are no more objects to migrate */
+ if (zspage_empty(get_zspage(s_page)))
+ break;
}

/* Remember last position in this iteration */
--
2.41.0.162.gfafddb0af9-goog


2023-06-24 06:18:33

by Sergey Senozhatsky

Subject: [PATCHv2 3/3] zsmalloc: remove zs_compact_control

From: Minchan Kim <[email protected]>

__zs_compact() always puts src_zspage back into the class list
after migrate_zspage(). Thus, we no longer need to keep the last
position of src_zspage. Let's remove it.
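
In other words, the call sequence in __zs_compact() (abridged from
the diff below) puts the source zspage back right after every
migrate_zspage() call, so a scan position saved across calls would
never be read again:

	migrate_write_lock_nested(src_zspage);
	migrate_zspage(pool, src_zspage, dst_zspage);
	fg = putback_zspage(class, src_zspage);
	migrate_write_unlock(src_zspage);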

Signed-off-by: Minchan Kim <[email protected]>
---
mm/zsmalloc.c | 37 +++++++++----------------------------
1 file changed, 9 insertions(+), 28 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 4a84f7877669..84beadc088b8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1590,25 +1590,14 @@ static unsigned long find_alloced_obj(struct size_class *class,
return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG);
}

-struct zs_compact_control {
- /* Source spage for migration which could be a subpage of zspage */
- struct page *s_page;
- /* Destination page for migration which should be a first page
- * of zspage. */
- struct page *d_page;
- /* Starting object index within @s_page which used for live object
- * in the subpage. */
- int obj_idx;
-};
-
-static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
- struct zs_compact_control *cc)
+static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
+ struct zspage *dst_zspage)
{
unsigned long used_obj, free_obj;
unsigned long handle;
- struct page *s_page = cc->s_page;
- struct page *d_page = cc->d_page;
- int obj_idx = cc->obj_idx;
+ int obj_idx = 0;
+ struct page *s_page = get_first_page(src_zspage);
+ struct size_class *class = pool->size_class[src_zspage->class];

while (1) {
handle = find_alloced_obj(class, s_page, &obj_idx);
@@ -1621,24 +1610,20 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
}

used_obj = handle_to_obj(handle);
- free_obj = obj_malloc(pool, get_zspage(d_page), handle);
+ free_obj = obj_malloc(pool, dst_zspage, handle);
zs_object_copy(class, free_obj, used_obj);
obj_idx++;
record_obj(handle, free_obj);
obj_free(class->size, used_obj);

/* Stop if there is no more space */
- if (zspage_full(class, get_zspage(d_page)))
+ if (zspage_full(class, dst_zspage))
break;

/* Stop if there are no more objects to migrate */
- if (zspage_empty(get_zspage(s_page)))
+ if (zspage_empty(src_zspage))
break;
}
-
- /* Remember last position in this iteration */
- cc->s_page = s_page;
- cc->obj_idx = obj_idx;
}

static struct zspage *isolate_src_zspage(struct size_class *class)
@@ -2013,7 +1998,6 @@ static unsigned long zs_can_compact(struct size_class *class)
static unsigned long __zs_compact(struct zs_pool *pool,
struct size_class *class)
{
- struct zs_compact_control cc;
struct zspage *src_zspage = NULL;
struct zspage *dst_zspage = NULL;
unsigned long pages_freed = 0;
@@ -2031,7 +2015,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
if (!dst_zspage)
break;
migrate_write_lock(dst_zspage);
- cc.d_page = get_first_page(dst_zspage);
}

src_zspage = isolate_src_zspage(class);
@@ -2040,9 +2023,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,

migrate_write_lock_nested(src_zspage);

- cc.obj_idx = 0;
- cc.s_page = get_first_page(src_zspage);
- migrate_zspage(pool, class, &cc);
+ migrate_zspage(pool, src_zspage, dst_zspage);
fg = putback_zspage(class, src_zspage);
migrate_write_unlock(src_zspage);

--
2.41.0.162.gfafddb0af9-goog


2023-06-25 06:29:13

by Sergey Senozhatsky

Subject: Re: [PATCHv2 3/3] zsmalloc: remove zs_compact_control

On (23/06/24 14:12), Sergey Senozhatsky wrote:
> __zs_compact() always puts src_zspage back into the class list
> after migrate_zspage(). Thus, we no longer need to keep the last
> position of src_zspage. Let's remove it.
>
> Signed-off-by: Minchan Kim <[email protected]>

Acked-by: Sergey Senozhatsky <[email protected]>

2023-06-26 11:21:18

by Alexey Romanov

Subject: Re: [PATCHv2 1/3] zsmalloc: do not scan for allocated objects in empty zspage

Hi,

On Sat, Jun 24, 2023 at 02:12:14PM +0900, Sergey Senozhatsky wrote:
> zspage migration can terminate as soon as it moves the last
> allocated object from the source zspage. Add a simple helper
> zspage_empty() that tests zspage ->inuse on each migration
> iteration.
>
> Suggested-by: Alexey Romanov <[email protected]>
> Signed-off-by: Sergey Senozhatsky <[email protected]>
> ---
> mm/zsmalloc.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3f057970504e..5d60eaedc3b7 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
> return get_zspage_inuse(zspage) == class->objs_per_zspage;
> }
>
> +static bool zspage_empty(struct zspage *zspage)
> +{
> + return get_zspage_inuse(zspage) == 0;
> +}
> +
> /**
> * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
> * that hold objects of the provided size.
> @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
> obj_idx++;
> record_obj(handle, free_obj);
> obj_free(class->size, used_obj);
> +
> + /* Stop if there are no more objects to migrate */
> + if (zspage_empty(get_zspage(s_page)))
> + break;
> }
>
> /* Remember last position in this iteration */
> --
> 2.41.0.162.gfafddb0af9-goog
>

Not sure if I can keep this tag, but

Reviewed-by: Alexey Romanov <[email protected]>

--
Thank you,
Alexey

2023-06-26 17:17:38

by Minchan Kim

Subject: Re: [PATCHv2 2/3] zsmalloc: move migration destination zspage inuse check

On Sat, Jun 24, 2023 at 02:12:15PM +0900, Sergey Senozhatsky wrote:
> The destination zspage fullness check needs to be done after
> zs_object_copy(), because that is where the source and destination
> zspage fullness changes. Checking destination zspage fullness
> before zs_object_copy() may cause migration to loop through
> source zspage sub-pages scanning for allocated objects, only to
> find out at the end that the destination zspage is full.
>
> Signed-off-by: Sergey Senozhatsky <[email protected]>
Acked-by: Minchan Kim <[email protected]>

2023-06-26 17:27:23

by Minchan Kim

Subject: Re: [PATCHv2 3/3] zsmalloc: remove zs_compact_control

On Sat, Jun 24, 2023 at 02:12:16PM +0900, Sergey Senozhatsky wrote:
> From: Minchan Kim <[email protected]>
>
> __zs_compact() always puts src_zspage back into the class list
> after migrate_zspage(). Thus, we no longer need to keep the last
> position of src_zspage. Let's remove it.
>
> Signed-off-by: Minchan Kim <[email protected]>

Thanks for picking it up, Sergey!

2023-06-26 17:34:18

by Minchan Kim

Subject: Re: [PATCHv2 1/3] zsmalloc: do not scan for allocated objects in empty zspage

On Sat, Jun 24, 2023 at 02:12:14PM +0900, Sergey Senozhatsky wrote:
> zspage migration can terminate as soon as it moves the last
> allocated object from the source zspage. Add a simple helper
> zspage_empty() that tests zspage ->inuse on each migration
> iteration.
>
> Suggested-by: Alexey Romanov <[email protected]>
> Signed-off-by: Sergey Senozhatsky <[email protected]>
Acked-by: Minchan Kim <[email protected]>

2023-07-01 11:04:06

by Sergey Senozhatsky

Subject: Re: [PATCHv2 1/3] zsmalloc: do not scan for allocated objects in empty zspage

On (23/06/26 10:57), Alexey Romanov wrote:
> Not sure if I can keep this tag, but

Sure, why not

>
> Reviewed-by: Alexey Romanov <[email protected]>

2023-07-01 12:03:49

by Sergey Senozhatsky

Subject: Re: [PATCHv2 3/3] zsmalloc: remove zs_compact_control

On (23/06/26 10:14), Minchan Kim wrote:
> On Sat, Jun 24, 2023 at 02:12:16PM +0900, Sergey Senozhatsky wrote:
> > From: Minchan Kim <[email protected]>
> >
> > __zs_compact() always puts src_zspage back into the class list
> > after migrate_zspage(). Thus, we no longer need to keep the last
> > position of src_zspage. Let's remove it.
> >
> > Signed-off-by: Minchan Kim <[email protected]>
>
> Thanks for picking it up, Sergey!

Thank you for clearing up the code!

2023-07-05 13:44:16

by Sergey Senozhatsky

Subject: Re: [PATCHv2 0/3] zsmalloc: small compaction improvements

On (23/06/24 14:12), Sergey Senozhatsky wrote:
> Hi,
> A tiny series that reduces the number of find_alloced_obj()
> invocations (each of which performs a linear scan of a sub-page)
> during compaction. Inspired by Alexey Romanov's findings.
>
> v2:
> -- picked up a patch from Minchan
>
> Minchan Kim (1):
> zsmalloc: remove zs_compact_control
>
> Sergey Senozhatsky (2):
> zsmalloc: do not scan for allocated objects in empty zspage
> zsmalloc: move migration destination zspage inuse check

Just for the record,

A synthetic zsmalloc fragmentation+compaction test (100% reproducible)

          num find_tagged_obj() calls   num iterations in find_tagged_obj()
base                           545699                                812899
patch #1                       460701                                691821
patch #2                       422372                                651372

// lower is better


patch #1 is "zsmalloc: do not scan for allocated objects in empty zspage"
patch #2 is "zsmalloc: move migration destination zspage inuse check"