2023-01-19 05:03:46

by Waiman Long

Subject: [RESEND PATCH v2 0/2] mm/kmemleak: Simplify kmemleak_cond_resched() & fix UAF

A KASAN use-after-free error was reported in the kmemleak_scan()
function. Further examination showed that even though a reference is
taken on the current object, it does not prevent the object pointed to
by the next pointer from going away after a cond_resched().
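
Roughly, the race looks like this (a simplified sketch of the code
paths involved, not the exact code):

  scan thread                            another task
  -----------                            ------------
  rcu_read_lock();
  list_for_each_entry_rcu(object, &object_list, object_list) {
          get_object(object);            <- pins only the current object
          rcu_read_unlock();
          cond_resched();                delete_object_full() frees the
                                         *next* object once the RCU grace
                                         period ends; nothing pins it
          rcu_read_lock();
          put_object(object);
  }

The iterator then advances through object->object_list.next, which now
points into freed memory, producing the KASAN-reported use-after-free.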

To fix that, additional flags are added to make sure that the current
object won't be removed from the object_list for the duration of the
cond_resched(), ensuring the validity of the next pointer.
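
For illustration only, the pinning could look roughly like the sketch
below. The OBJECT_NO_DELETE flag and the deferred-removal handling are
assumptions made for this sketch, not necessarily what patch 2 does;
see the patch itself for the actual mechanism.

	static void kmemleak_cond_resched(struct kmemleak_object *object)
	{
		if (!get_object(object))
			return;		/* Try next object */

		/* Pin the object on object_list across the resched window */
		raw_spin_lock_irq(&kmemleak_lock);
		object->flags |= OBJECT_NO_DELETE;	/* hypothetical flag */
		raw_spin_unlock_irq(&kmemleak_lock);

		rcu_read_unlock();
		cond_resched();
		rcu_read_lock();

		raw_spin_lock_irq(&kmemleak_lock);
		object->flags &= ~OBJECT_NO_DELETE;
		/*
		 * A concurrent deletion would have seen the flag, kept the
		 * object on object_list and only marked it for removal;
		 * the deferred list removal would be completed here.
		 */
		raw_spin_unlock_irq(&kmemleak_lock);
		put_object(object);
	}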

While making the change, I also simplified the usage of
kmemleak_cond_resched() to make it easier to understand.

Waiman Long (2):
mm/kmemleak: Simplify kmemleak_cond_resched() usage
mm/kmemleak: Fix UAF bug in kmemleak_scan()

[v2: Update patch 2 to prevent object_list removal of current object]

mm/kmemleak.c | 83 +++++++++++++++++++++++++--------------------------
1 file changed, 41 insertions(+), 42 deletions(-)

--
2.31.1


2023-01-19 05:05:04

by Waiman Long

Subject: [RESEND PATCH v2 1/2] mm/kmemleak: Simplify kmemleak_cond_resched() usage

The presence of a pinned argument and the 64k loop count make
kmemleak_cond_resched() a bit harder to read. The pinned argument is
used only by the first kmemleak_scan() loop.

Simplify the usage of kmemleak_cond_resched() by removing the pinned
argument and always doing a get_object()/put_object() sequence. In
addition, the 64k loop count is removed by using need_resched() to
decide if kmemleak_cond_resched() should be called.
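
For reference, the helper and its call sites after this patch reduce to
the following pattern (consolidated from the diff below):

	static void kmemleak_cond_resched(struct kmemleak_object *object)
	{
		if (!get_object(object))
			return;	/* Try next object */

		rcu_read_unlock();
		cond_resched();
		rcu_read_lock();
		put_object(object);
	}

	/* in each object_list iteration loop */
	if (need_resched())
		kmemleak_cond_resched(object);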

Signed-off-by: Waiman Long <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
mm/kmemleak.c | 48 ++++++++++++------------------------------------
1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 267332904354..e7cb521236bf 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1463,22 +1463,17 @@ static void scan_gray_list(void)
/*
* Conditionally call resched() in an object iteration loop while making sure
* that the given object won't go away without RCU read lock by performing a
- * get_object() if !pinned.
- *
- * Return: false if can't do a cond_resched() due to get_object() failure
- * true otherwise
+ * get_object() if necessary.
*/
-static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned)
+static void kmemleak_cond_resched(struct kmemleak_object *object)
{
- if (!pinned && !get_object(object))
- return false;
+ if (!get_object(object))
+ return; /* Try next object */

rcu_read_unlock();
cond_resched();
rcu_read_lock();
- if (!pinned)
- put_object(object);
- return true;
+ put_object(object);
}

/*
@@ -1492,15 +1487,12 @@ static void kmemleak_scan(void)
struct zone *zone;
int __maybe_unused i;
int new_leaks = 0;
- int loop_cnt = 0;

jiffies_last_scan = jiffies;

/* prepare the kmemleak_object's */
rcu_read_lock();
list_for_each_entry_rcu(object, &object_list, object_list) {
- bool obj_pinned = false;
-
raw_spin_lock_irq(&object->lock);
#ifdef DEBUG
/*
@@ -1526,19 +1518,13 @@ static void kmemleak_scan(void)

/* reset the reference count (whiten the object) */
object->count = 0;
- if (color_gray(object) && get_object(object)) {
+ if (color_gray(object) && get_object(object))
list_add_tail(&object->gray_list, &gray_list);
- obj_pinned = true;
- }

raw_spin_unlock_irq(&object->lock);

- /*
- * Do a cond_resched() every 64k objects to avoid soft lockup.
- */
- if (!(++loop_cnt & 0xffff) &&
- !kmemleak_cond_resched(object, obj_pinned))
- loop_cnt--; /* Try again on next object */
+ if (need_resched())
+ kmemleak_cond_resched(object);
}
rcu_read_unlock();

@@ -1605,14 +1591,9 @@ static void kmemleak_scan(void)
* scan and color them gray until the next scan.
*/
rcu_read_lock();
- loop_cnt = 0;
list_for_each_entry_rcu(object, &object_list, object_list) {
- /*
- * Do a cond_resched() every 64k objects to avoid soft lockup.
- */
- if (!(++loop_cnt & 0xffff) &&
- !kmemleak_cond_resched(object, false))
- loop_cnt--; /* Try again on next object */
+ if (need_resched())
+ kmemleak_cond_resched(object);

/*
* This is racy but we can save the overhead of lock/unlock
@@ -1647,14 +1628,9 @@ static void kmemleak_scan(void)
* Scanning result reporting.
*/
rcu_read_lock();
- loop_cnt = 0;
list_for_each_entry_rcu(object, &object_list, object_list) {
- /*
- * Do a cond_resched() every 64k objects to avoid soft lockup.
- */
- if (!(++loop_cnt & 0xffff) &&
- !kmemleak_cond_resched(object, false))
- loop_cnt--; /* Try again on next object */
+ if (need_resched())
+ kmemleak_cond_resched(object);

/*
* This is racy but we can save the overhead of lock/unlock
--
2.31.1