Kmemleak reported many leaks while under memory pressure in

	slots = alloc_slots(pool, gfp);

which is referenced by "zhdr" in init_z3fold_page(),

	zhdr->slots = slots;

However, "zhdr" could be gone without "slots" being freed, as the latter
is freed separately in free_handle() once the last handle in its "slot"
array is freed. Each handle is an address within "slots", which is
always aligned.
unreferenced object 0xc000000fdadc1040 (size 104):
comm "oom04", pid 140476, jiffies 4295359280 (age 3454.970s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<00000000d1f0f5eb>] z3fold_zpool_malloc+0x7b0/0xe10
alloc_slots at mm/z3fold.c:214
(inlined by) init_z3fold_page at mm/z3fold.c:412
(inlined by) z3fold_alloc at mm/z3fold.c:1161
(inlined by) z3fold_zpool_malloc at mm/z3fold.c:1735
[<0000000064a2e969>] zpool_malloc+0x34/0x50
[<00000000af63e491>] zswap_frontswap_store+0x60c/0xda0
zswap_frontswap_store at mm/zswap.c:1093
[<00000000af5e07e0>] __frontswap_store+0x128/0x330
[<00000000de2f582b>] swap_writepage+0x58/0x110
[<000000000120885f>] pageout+0x16c/0xa40
[<00000000444c1f68>] shrink_page_list+0x1ac8/0x25c0
[<00000000d19e8610>] shrink_inactive_list+0x270/0x730
[<00000000e17df726>] shrink_lruvec+0x444/0xf30
[<000000005f02ab35>] shrink_node+0x2a4/0x9c0
[<00000000014cabbd>] do_try_to_free_pages+0x158/0x640
[<00000000dcfaba07>] try_to_free_pages+0x1bc/0x5f0
[<00000000fa207ab8>] __alloc_pages_slowpath.constprop.60+0x4dc/0x15a0
[<000000003669f1d2>] __alloc_pages_nodemask+0x520/0x650
[<0000000011fa4168>] alloc_pages_vma+0xc0/0x420
[<0000000098b376f2>] handle_mm_fault+0x1174/0x1bf0
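This is a false positive: the only reference to "slots" is stored in
"zhdr", which lives inside the z3fold page itself, and kmemleak does not
track page allocations, so it never sees that reference. The object is
released later from free_handle(), roughly like this (condensed from
mm/z3fold.c, with locking and error handling omitted, so not the exact
code):

	slots = handle_to_slots(handle);  /* handle & ~(SLOTS_ALIGN - 1) */
	*(unsigned long *)handle = 0;
	for (i = 0; i <= BUDDY_MASK; i++)
		if (slots->slot[i])
			break;		/* another handle is still live */
	if (i > BUDDY_MASK)		/* last handle gone, free the object */
		kmem_cache_free(slots_to_pool(slots)->c_handle, slots);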
Signed-off-by: Qian Cai <[email protected]>
---
mm/z3fold.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 8c3bb5e508b8..460b0feced26 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -43,6 +43,7 @@
#include <linux/spinlock.h>
#include <linux/zpool.h>
#include <linux/magic.h>
+#include <linux/kmemleak.h>

/*
* NCHUNKS_ORDER determines the internal allocation granularity, effectively
@@ -215,6 +216,8 @@ static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
(gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));

if (slots) {
+ /* It will be freed separately in free_handle(). */
+ kmemleak_not_leak(slots);
memset(slots->slot, 0, sizeof(slots->slot));
slots->pool = (unsigned long)pool;
rwlock_init(&slots->lock);
--
2.17.2 (Apple Git-113)
On Fri, May 22, 2020 at 06:00:52PM -0400, Qian Cai wrote:
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 8c3bb5e508b8..460b0feced26 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -43,6 +43,7 @@
> #include <linux/spinlock.h>
> #include <linux/zpool.h>
> #include <linux/magic.h>
> +#include <linux/kmemleak.h>
>
> /*
> * NCHUNKS_ORDER determines the internal allocation granularity, effectively
> @@ -215,6 +216,8 @@ static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
> (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
>
> if (slots) {
> + /* It will be freed separately in free_handle(). */
> + kmemleak_not_leak(slots);
> memset(slots->slot, 0, sizeof(slots->slot));
> slots->pool = (unsigned long)pool;
> rwlock_init(&slots->lock);
Acked-by: Catalin Marinas <[email protected]>
An alternative would have been a kmemleak_alloc(zhdr, sizeof(*zhdr), 1)
in init_z3fold_page() and a corresponding kmemleak_free() in
free_z3fold_page() (if !headless) since kmemleak doesn't track page
allocations. The advantage is that it would track the slots in case
there is a leak. But if the code is clear enough that the slots are
freed, just keep the kmemleak_not_leak() annotation.
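Roughly (an untested sketch; kmemleak_alloc() also takes a gfp argument,
and "zhdr" here is just page_address(page)):

	/* in init_z3fold_page(), after zhdr = page_address(page): */
	kmemleak_alloc(zhdr, sizeof(*zhdr), 1, gfp);

	/* in free_z3fold_page(), before __free_page(page): */
	if (!headless)
		kmemleak_free(page_address(page));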
--
Catalin