2023-06-19 15:01:27

by Alexey Romanov

Subject: [PATCH v1 0/2] Add obj allocated counter for subpages

This patch series adds a counter of allocated objects for each zspage
subpage. The main idea is that we can use the spare bytes of the
page_type field: with PAGE_SIZE = 4096, only the first two bytes
there are currently used.

By storing the number of allocated objects, we can, for example,
optimize the running time of find_alloced_obj(), and with it the
compaction algorithm as a whole. Maintaining the counter has no
effect on the performance of the rest of zsmalloc: the bitwise
operations are fast and no extra memory is used.
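For illustration, a minimal sketch of what the counter helpers could
look like, assuming the count lives in the upper 16 bits of
page->page_type while the lower two bytes keep their current meaning.
get_obj_allocated() is the name used by patch 2/2; the set/inc/dec
helpers and the exact bit layout shown here are illustrative only,
the real definitions are in patch 1/2:

/*
 * Illustrative sketch (mm/zsmalloc.c context): assume the low 16 bits
 * of page->page_type keep their existing meaning and the upper 16 bits
 * hold the number of allocated objects on this subpage.
 */
#define OBJ_ALLOCATED_SHIFT	16
#define OBJ_ALLOCATED_MASK	0xffffU

static inline unsigned int get_obj_allocated(struct page *page)
{
	return (page->page_type >> OBJ_ALLOCATED_SHIFT) & OBJ_ALLOCATED_MASK;
}

static inline void set_obj_allocated(struct page *page, unsigned int count)
{
	page->page_type &= ~(OBJ_ALLOCATED_MASK << OBJ_ALLOCATED_SHIFT);
	page->page_type |= (count & OBJ_ALLOCATED_MASK) << OBJ_ALLOCATED_SHIFT;
}

/* Bumped when an object is allocated from this subpage. */
static inline void inc_obj_allocated(struct page *page)
{
	set_obj_allocated(page, get_obj_allocated(page) + 1);
}

/* Dropped when an object on this subpage is freed. */
static inline void dec_obj_allocated(struct page *page)
{
	set_obj_allocated(page, get_obj_allocated(page) - 1);
}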

I believe this counter can also be used for other things in the
future, which would speed up the allocator even more.

Alexey Romanov (2):
zsmalloc: add allocated objects counter for subpage
zsmalloc: check empty page in find_alloced_obj

mm/zsmalloc.c | 41 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 38 insertions(+), 3 deletions(-)

--
2.38.1



2023-06-19 15:01:56

by Alexey Romanov

Subject: [PATCH v1 2/2] zsmalloc: check empty page in find_alloced_obj

It makes no sense to search for an allocated object if the page
contains none. With this check we avoid the extra kmap_atomic()
as well as the scan for a tagged object. On my synthetic test data,
this change reduces zsmalloc compaction time by up to 10%.

Signed-off-by: Alexey Romanov <[email protected]>
---
mm/zsmalloc.c | 3 +++
1 file changed, 3 insertions(+)
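
For context, the win comes from the compaction path, which walks every
subpage of the source zspage. A rough sketch of the caller, paraphrased
from the upstream migrate_zspage() loop (migration details omitted, not
part of this patch):

	while (1) {
		handle = find_alloced_obj(class, s_page, &obj_idx);
		if (!handle) {
			/*
			 * With the new check, a fully empty subpage
			 * bails out here without being mapped at all.
			 */
			s_page = get_next_page(s_page);
			if (!s_page)
				break;
			obj_idx = 0;
			continue;
		}
		/* ... migrate the object found at obj_idx ... */
	}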

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index dd6e2c3429e0..d0ce579dcde5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1764,6 +1764,9 @@ static unsigned long find_tagged_obj(struct size_class *class,
static unsigned long find_alloced_obj(struct size_class *class,
struct page *page, int *obj_idx)
{
+ if (!get_obj_allocated(page))
+ return 0;
+
return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG);
}

--
2.38.1