Hi,
There is a false-positive case in which a pointer is calculated by
methods other than the usual container_of macro. "kmemleak_ignore" can
cover such a false positive, but it would lose the advantage of memory
leak detection. This patch allows kmemleak to work with such false
positives by introducing a new special memory block with a specified
calculation formula. A client module can register its area together
with a conversion function, which the kmemleak scan can use to
calculate the correct pointer.
In this version 2, the module reference count is used to prevent a
client kernel module from being unloaded before it unregisters its
special conversion. This was pointed out by Phil Carmody.
A typical use case could be the IOMMU pagetable allocation, which
stores pointers to the second level of page tables with some
conversion applied, for example a physical address with attribute
bits. Right now I don't have other use cases, but I hope there could
be others that this special scan works with.
Test:
# echo scan > kmemleak
# modprobe kmemleak-special-test
[ 1328.260162] Stored 1024@dfc5ac00 -> 9fc5ac01
[ 1328.264984] Stored 1024@dfc5b800 -> 9fc5b801
[ 1328.269500] Stored 1024@dfc5b400 -> 9fc5b401
[ 1328.273895] Stored 1024@dfc5b000 -> 9fc5b001
[ 1328.278381] Stored 1024@deb9bc00 -> 9eb9bc01
[ 1328.282714] Stored 1024@deea6c00 -> 9eea6c01
[ 1328.287139] Stored 1024@deea7c00 -> 9eea7c01
[ 1328.291473] Stored 1024@deea7800 -> 9eea7801
# echo scan > kmemleak
[ 1344.062591] kmemleak: 8 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
# rmmod kmemleak-special-test
# echo scan > kmemleak
# modprobe kmemleak-special-test timeout=60
[ 71.758850] Stored 1024@dfc5b000 -> 9fc5b001
[ 71.763702] Stored 1024@dfc5b400 -> 9fc5b401
[ 71.768066] Stored 1024@dfc5b800 -> 9fc5b801
[ 71.772583] Stored 1024@dfc5bc00 -> 9fc5bc01
[ 71.776977] Stored 1024@deea6000 -> 9eea6001
[ 71.781341] Stored 1024@deea6400 -> 9eea6401
[ 71.785736] Stored 1024@deea6800 -> 9eea6801
[ 71.790069] Stored 1024@deea6c00 -> 9eea6c01
[ 71.794433] kmemleak_special_init: Registered special scan: bf000360
# echo scan > kmemleak
[ 79.588836] custom_conversion: Converted 9fc5b001 -> dfc5b000
[ 79.594696] custom_conversion: Converted 9fc5b401 -> dfc5b400
[ 79.600494] custom_conversion: Converted 9fc5b801 -> dfc5b800
[ 79.606292] custom_conversion: Converted 9fc5bc01 -> dfc5bc00
[ 79.612060] custom_conversion: Converted 9eea6001 -> deea6000
[ 79.617889] custom_conversion: Converted 9eea6401 -> deea6400
[ 79.623687] custom_conversion: Converted 9eea6801 -> deea6800
[ 79.629486] custom_conversion: Converted 9eea6c01 -> deea6c00
# rmmod kmemleak-special-test
rmmod: cannot unload 'kmemleak_special_test': Resource temporarily unavailable
# lsmod kmemleak-special-test
Module Size Used by Not tainted
kmemleak_special_test 1467 1
# [ 131.800354] no_special_func: Unregistered special scan bf000360
# lsmod kmemleak-special-test
Module Size Used by Not tainted
kmemleak_special_test 1467 0
# rmmod kmemleak-special-test
Hiroshi DOYU (3):
kmemleak: Fix false positives with special scan
kmemleak: Add special scan test case
omap iommu: kmemleak: Fix false positive with special scan
arch/arm/plat-omap/iommu.c | 19 +++++++
include/linux/kmemleak.h | 5 ++
mm/Makefile | 2 +-
mm/kmemleak-special-test.c | 94 ++++++++++++++++++++++++++++++++++++
mm/kmemleak.c | 114 ++++++++++++++++++++++++++++++++++++++++++-
5 files changed, 230 insertions(+), 4 deletions(-)
create mode 100644 mm/kmemleak-special-test.c
Hi,
Sorry for the delay, I eventually got the time to look at your patches.
On Tue, 2010-06-01 at 11:25 +0100, Hiroshi DOYU wrote:
> There is a false-positive case in which a pointer is calculated by
> methods other than the usual container_of macro. "kmemleak_ignore" can
> cover such a false positive, but it would lose the advantage of memory
> leak detection. This patch allows kmemleak to work with such false
> positives by introducing a new special memory block with a specified
> calculation formula. A client module can register its area together
> with a conversion function, which the kmemleak scan can use to
> calculate the correct pointer.
While something needs to be done to cover these situations, I'm not so
convinced about the method as it complicates the code requiring such
conversion by having to insert two kmemleak hooks and a callback
function.
Can we not add a new prio tree (or just use the existing one) for
pointer aliases? The advantage is that you only have a single function
to call, something like kmemleak_add_alias() and you do it at the point
the value was converted.
Thanks.
--
Catalin
From: ext Catalin Marinas <[email protected]>
Subject: Re: [PATCH v2 0/3] kmemleak: Fix false positive with special scan
Date: Wed, 2 Jun 2010 12:01:24 +0200
> Hi,
>
> Sorry for the delay, I eventually got the time to look at your patches.
Thank you for your review.
> On Tue, 2010-06-01 at 11:25 +0100, Hiroshi DOYU wrote:
>> There is a false-positive case in which a pointer is calculated by
>> methods other than the usual container_of macro. "kmemleak_ignore" can
>> cover such a false positive, but it would lose the advantage of memory
>> leak detection. This patch allows kmemleak to work with such false
>> positives by introducing a new special memory block with a specified
>> calculation formula. A client module can register its area together
>> with a conversion function, which the kmemleak scan can use to
>> calculate the correct pointer.
>
> While something needs to be done to cover these situations, I'm not so
> convinced about the method as it complicates the code requiring such
> conversion by having to insert two kmemleak hooks and a callback
> function.
>
> Can we not add a new prio tree (or just use the existing one) for
> pointer aliases? The advantage is that you only have a single function
> to call, something like kmemleak_add_alias() and you do it at the point
> the value was converted.
Actually I considered the above aliasing a little bit, but I gave up
soon.
I was afraid that this method might consume much more memory, since it
adds another member to "struct kmemleak_object", and that member would
be added to every object, even though only a few need it. The number
of kmemleak_object instances is usually large.
Do you think that this increase in memory consumption is acceptable?
From: Hiroshi DOYU <[email protected]>
Subject: Re: [PATCH v2 0/3] kmemleak: Fix false positive with special scan
Date: Wed, 02 Jun 2010 14:34:58 +0300 (EEST)
> From: ext Catalin Marinas <[email protected]>
> Subject: Re: [PATCH v2 0/3] kmemleak: Fix false positive with special scan
> Date: Wed, 2 Jun 2010 12:01:24 +0200
>
>> Hi,
>>
>> Sorry for the delay, I eventually got the time to look at your patches.
>
> Thank you for your review.
>
>> On Tue, 2010-06-01 at 11:25 +0100, Hiroshi DOYU wrote:
>>> There is a false-positive case in which a pointer is calculated by
>>> methods other than the usual container_of macro. "kmemleak_ignore" can
>>> cover such a false positive, but it would lose the advantage of memory
>>> leak detection. This patch allows kmemleak to work with such false
>>> positives by introducing a new special memory block with a specified
>>> calculation formula. A client module can register its area together
>>> with a conversion function, which the kmemleak scan can use to
>>> calculate the correct pointer.
>>
>> While something needs to be done to cover these situations, I'm not so
>> convinced about the method as it complicates the code requiring such
>> conversion by having to insert two kmemleak hooks and a callback
>> function.
>>
>> Can we not add a new prio tree (or just use the existing one) for
>> pointer aliases? The advantage is that you only have a single function
>> to call, something like kmemleak_add_alias() and you do it at the point
>> the value was converted.
Ok, I understand now. Please ignore my previous mail. I'll try the above.
On Wed, 2010-06-02 at 12:34 +0100, Hiroshi DOYU wrote:
> From: ext Catalin Marinas <[email protected]>
> > Can we not add a new prio tree (or just use the existing one) for
> > pointer aliases? The advantage is that you only have a single function
> > to call, something like kmemleak_add_alias() and you do it at the point
> > the value was converted.
>
> Actually I considered the above aliasing a little bit, but I gave up
> soon.
>
> I was afraid that this method might consume much more memory, since it
> adds another member to "struct kmemleak_object", and that member would
> be added to every object, even though only a few need it. The number
> of kmemleak_object instances is usually large.
We could use a different tree with a "struct kmemleak_alias" structure
which is much smaller. Something like below:
struct kmemleak_alias {
struct list_head alias_list;
struct prio_tree_node tree_node;
struct kmemleak_object *object;
};
And an alias_list member would be added to kmemleak_object as well.
Would the alias tree need to allow overlapping? Like different IOMMU
mappings with the same address (but pointing to different physical
memory).
--
Catalin
From: ext Catalin Marinas <[email protected]>
Subject: Re: [PATCH v2 0/3] kmemleak: Fix false positive with special scan
Date: Wed, 2 Jun 2010 16:12:29 +0200
> On Wed, 2010-06-02 at 12:34 +0100, Hiroshi DOYU wrote:
>> From: ext Catalin Marinas <[email protected]>
>> > Can we not add a new prio tree (or just use the existing one) for
>> > pointer aliases? The advantage is that you only have a single function
>> > to call, something like kmemleak_add_alias() and you do it at the point
>> > the value was converted.
>>
>> Actually I considered the above aliasing a little bit, but I gave up
>> soon.
>>
>> I was afraid that this method might consume much more memory, since it
>> adds another member to "struct kmemleak_object", and that member would
>> be added to every object, even though only a few need it. The number
>> of kmemleak_object instances is usually large.
>
> We could use a different tree with a "struct kmemleak_alias" structure
> which is much smaller. Something like below:
>
> struct kmemleak_alias {
> struct list_head alias_list;
> struct prio_tree_node tree_node;
> struct kmemleak_object *object;
> };
The above seems to be better than I thought. I'll give this a try.
> And an alias_list member would be added to kmemleak_object as well.
>
> Would the alias tree need to allow overlapping? Like different IOMMU
> mappings with the same address (but pointing to different physical
> memory).
Not for omap iommu.
omap iommu can have multiple instances; multiple devices can each
have their own address space. This doesn't affect this kmemleak
false positive.
Hi,
This is another version of "kmemleak: Fix false positive", which
introduces another alias tree to keep track of all alias addresses
of each object, based on the discussion (*1).
You can also find the previous one (*2), which uses a special scan
area for alias addresses with a conversion function.
Comparing the two methods, the current one seems to take a bit
longer to scan, as shown below, tested with 512 elements of (*3).
"kmemleak: Fix false positive with alias":
# time echo scan > /mnt/kmemleak
real 0m 8.40s
user 0m 0.00s
sys 0m 8.40s
"kmemleak: Fix false positive with special scan":
# time echo scan > /mnt/kmemleak
real 0m 3.96s
user 0m 0.00s
sys 0m 3.96s
For our case (*4), to reduce false positives for the 2nd level IOMMU
pagetable allocation, the previous special scan seems to be
lightweight enough, although there might be room to improve the alias
one, and I might also have misunderstood the original proposal of
aliasing.
Any comment would be appreciated.
*1: http://lkml.org/lkml/2010/6/2/282
*2: kmemleak: Fix false positive with special scan
http://lkml.org/lkml/2010/6/1/137
*3: kmemleak: Add special scan test case
http://lkml.org/lkml/2010/6/1/134
*4: http://lkml.org/lkml/2010/6/1/136
Hiroshi DOYU (1):
kmemleak: Fix false positive with alias
include/linux/kmemleak.h | 4 +
mm/kmemleak.c | 198 ++++++++++++++++++++++++++++++++++++++--------
2 files changed, 168 insertions(+), 34 deletions(-)
There is a false-positive case in which a pointer is calculated by
methods other than the usual container_of macro. "kmemleak_ignore" can
cover such a false positive, but it would lose the advantage of memory
leak detection. This patch allows kmemleak to work with such false
positives by address aliasing. A client module can register an alias
address for an original pointer.
A typical use case could be the IOMMU pagetable allocation, which
stores pointers to the second level of page tables with some
conversion applied, for example a physical address with attribute
bits. Right now I don't have other use cases, but I hope there could
be others that this aliasing works with.
Signed-off-by: Hiroshi DOYU <[email protected]>
Cc: Phil Carmody <[email protected]>
---
include/linux/kmemleak.h | 4 +
mm/kmemleak.c | 198 ++++++++++++++++++++++++++++++++++++++--------
2 files changed, 168 insertions(+), 34 deletions(-)
diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
index 99d9a67..0a4ccee 100644
--- a/include/linux/kmemleak.h
+++ b/include/linux/kmemleak.h
@@ -34,6 +34,7 @@ extern void kmemleak_not_leak(const void *ptr) __ref;
extern void kmemleak_ignore(const void *ptr) __ref;
extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref;
extern void kmemleak_no_scan(const void *ptr) __ref;
+extern void kmemleak_add_alias(const void *ptr, const void *new) __ref;
static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
int min_count, unsigned long flags,
@@ -92,6 +93,9 @@ static inline void kmemleak_erase(void **ptr)
static inline void kmemleak_no_scan(const void *ptr)
{
}
+static inline void kmemleak_add_alias(const void *ptr, const void *new)
+{
+}
#endif /* CONFIG_DEBUG_KMEMLEAK */
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 2c0d032..fa20304 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -157,6 +157,13 @@ struct kmemleak_object {
unsigned long jiffies; /* creation timestamp */
pid_t pid; /* pid of the current task */
char comm[TASK_COMM_LEN]; /* executable name */
+ struct kmemleak_alias *alias; /* if a pointer is modified */
+};
+
+struct kmemleak_alias {
+ struct list_head alias_list;
+ struct prio_tree_node tree_node;
+ struct kmemleak_object *object;
};
/* flag representing the memory block allocation status */
@@ -179,13 +186,18 @@ struct kmemleak_object {
static LIST_HEAD(object_list);
/* the list of gray-colored objects (see color_gray comment below) */
static LIST_HEAD(gray_list);
+/* the list of objects with alias (see alias comment below) */
+static LIST_HEAD(alias_list);
/* prio search tree for object boundaries */
static struct prio_tree_root object_tree_root;
+/* prio search tree for alias object boundaries */
+static struct prio_tree_root alias_tree_root;
/* rw_lock protecting the access to object_list and prio_tree_root */
static DEFINE_RWLOCK(kmemleak_lock);
/* allocation caches for kmemleak internal data */
static struct kmem_cache *object_cache;
+static struct kmem_cache *alias_cache;
static struct kmem_cache *scan_area_cache;
/* set if tracing memory operations is enabled */
@@ -269,6 +281,8 @@ static void kmemleak_disable(void);
kmemleak_disable(); \
} while (0)
+#define to_address(obj) ((obj)->tree_node.start)
+
/*
* Printing of the objects hex dump to the seq file. The number of lines to be
* printed is limited to HEX_MAX_LINES to prevent seq file spamming. The
@@ -369,7 +383,7 @@ static void dump_object_info(struct kmemleak_object *object)
trace.entries = object->trace;
pr_notice("Object 0x%08lx (size %zu):\n",
- object->tree_node.start, object->size);
+ to_address(object), object->size);
pr_notice(" comm \"%s\", pid %d, jiffies %lu\n",
object->comm, object->pid, object->jiffies);
pr_notice(" min_count = %d\n", object->min_count);
@@ -436,6 +450,8 @@ static void free_object_rcu(struct rcu_head *rcu)
hlist_del(elem);
kmem_cache_free(scan_area_cache, area);
}
+ if (object->alias)
+ kmem_cache_free(alias_cache, object->alias);
kmem_cache_free(object_cache, object);
}
@@ -479,6 +495,41 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
return object;
}
+static struct kmemleak_object *find_and_get_object_by_alias(unsigned long ptr,
+ int alias)
+{
+ struct kmemleak_object *object = NULL;
+ struct kmemleak_alias *ao = NULL;
+ struct prio_tree_node *node;
+ struct prio_tree_iter iter;
+ unsigned long flags;
+
+ if (likely(prio_tree_empty(&alias_tree_root)))
+ return NULL;
+
+ rcu_read_lock();
+ read_lock_irqsave(&kmemleak_lock, flags);
+
+ prio_tree_iter_init(&iter, &alias_tree_root, ptr, ptr);
+ node = prio_tree_next(&iter);
+ if (node) {
+ ao = prio_tree_entry(node, struct kmemleak_alias, tree_node);
+ if (!alias && to_address(ao) != ptr) {
+ kmemleak_warn("Found object by alias");
+ ao = NULL;
+ }
+ }
+
+ read_unlock_irqrestore(&kmemleak_lock, flags);
+
+ if (ao && get_object(ao->object))
+ object = ao->object;
+
+ rcu_read_unlock();
+
+ return object;
+}
+
/*
* Save stack trace to the given array of MAX_TRACE size.
*/
@@ -524,6 +575,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
object->count = 0; /* white color initially */
object->jiffies = jiffies;
object->checksum = 0;
+ object->alias = NULL;
/* task information */
if (in_irq()) {
@@ -547,7 +599,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
object->trace_len = __save_stack_trace(object->trace);
INIT_PRIO_TREE_NODE(&object->tree_node);
- object->tree_node.start = ptr;
+ to_address(object) = ptr;
object->tree_node.last = ptr + size - 1;
write_lock_irqsave(&kmemleak_lock, flags);
@@ -577,6 +629,40 @@ out:
return object;
}
+static void create_alias_object(struct kmemleak_object *object,
+ unsigned long ptr)
+{
+ struct kmemleak_alias *alias;
+ struct prio_tree_node *node;
+ unsigned long flags;
+
+ alias = kmem_cache_alloc(alias_cache, GFP_KERNEL);
+ if (!alias) {
+ kmemleak_stop("Cannot allocate a kmemleak_alias structure\n");
+ return;
+ }
+ INIT_LIST_HEAD(&alias->alias_list);
+ INIT_PRIO_TREE_NODE(&alias->tree_node);
+ to_address(alias) = ptr;
+
+ spin_lock_irqsave(&object->lock, flags);
+ alias->tree_node.last = ptr + object->size - 1;
+ alias->object = object;
+ object->alias = alias;
+ spin_unlock_irqrestore(&object->lock, flags);
+
+ write_lock_irqsave(&kmemleak_lock, flags);
+
+ node = prio_tree_insert(&alias_tree_root, &alias->tree_node);
+ if (node != &alias->tree_node) {
+ kmemleak_warn("Cannot insert alias into the alias tree\n");
+ kmem_cache_free(alias_cache, alias);
+ } else
+ list_add_tail_rcu(&alias->alias_list, &alias_list);
+
+ write_unlock_irqrestore(&kmemleak_lock, flags);
+}
+
/*
* Remove the metadata (struct kmemleak_object) for a memory block from the
* object_list and object_tree_root and decrement its use_count.
@@ -588,6 +674,10 @@ static void __delete_object(struct kmemleak_object *object)
write_lock_irqsave(&kmemleak_lock, flags);
prio_tree_remove(&object_tree_root, &object->tree_node);
list_del_rcu(&object->object_list);
+ if (object->alias) {
+ prio_tree_remove(&alias_tree_root, &object->alias->tree_node);
+ list_del_rcu(&object->alias->alias_list);
+ }
write_unlock_irqrestore(&kmemleak_lock, flags);
WARN_ON(!(object->flags & OBJECT_ALLOCATED));
@@ -630,7 +720,7 @@ static void delete_object_full(unsigned long ptr)
*/
static void delete_object_part(unsigned long ptr, size_t size)
{
- struct kmemleak_object *object;
+ struct kmemleak_object *object, *new;
unsigned long start, end;
object = find_and_get_object(ptr, 1);
@@ -652,12 +742,24 @@ static void delete_object_part(unsigned long ptr, size_t size)
*/
start = object->pointer;
end = object->pointer + object->size;
- if (ptr > start)
- create_object(start, ptr - start, object->min_count,
- GFP_KERNEL);
- if (ptr + size < end)
- create_object(ptr + size, end - ptr - size, object->min_count,
- GFP_KERNEL);
+ if (ptr > start) {
+ new = create_object(start, ptr - start, object->min_count,
+ GFP_KERNEL);
+ if (new && unlikely(object->alias))
+ create_alias_object(new, to_address(object->alias));
+ }
+ if (ptr + size < end) {
+ new = create_object(ptr + size, end - ptr - size,
+ object->min_count, GFP_KERNEL);
+ if (new && unlikely(object->alias)) {
+ unsigned long alias_ptr;
+
+ alias_ptr = to_address(object->alias);
+ alias_ptr += ptr - start + size;
+
+ create_alias_object(new, alias_ptr);
+ }
+ }
put_object(object);
}
@@ -944,6 +1046,22 @@ void __ref kmemleak_no_scan(const void *ptr)
}
EXPORT_SYMBOL(kmemleak_no_scan);
+void kmemleak_add_alias(const void *ptr, const void *alias)
+{
+ struct kmemleak_object *object;
+
+ pr_debug("%s(0x%p -> 0x%p)\n", __func__, ptr, alias);
+
+ object = find_and_get_object((unsigned long)ptr, 0);
+ if (!object) {
+ kmemleak_warn("Aliasing unknown object at 0x%p\n", ptr);
+ return;
+ }
+ create_alias_object(object, (unsigned long)alias);
+ put_object(object);
+}
+EXPORT_SYMBOL(kmemleak_add_alias);
+
/*
* Update an object's checksum and return true if it was modified.
*/
@@ -979,37 +1097,21 @@ static int scan_should_stop(void)
return 0;
}
-/*
- * Scan a memory block (exclusive range) for valid pointers and add those
- * found to the gray list.
- */
-static void scan_block(void *_start, void *_end,
- struct kmemleak_object *scanned, int allow_resched)
+static void __scan_block(unsigned long *ptr, struct kmemleak_object *scanned)
{
- unsigned long *ptr;
- unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
- unsigned long *end = _end - (BYTES_PER_POINTER - 1);
+ int i;
+ unsigned long pointer = *ptr;
+ typedef struct kmemleak_object *(*fn_t)(unsigned long, int);
+ fn_t fns[] = { find_and_get_object, find_and_get_object_by_alias, };
- for (ptr = start; ptr < end; ptr++) {
- struct kmemleak_object *object;
+ for (i = 0; i < ARRAY_SIZE(fns); i++) {
unsigned long flags;
- unsigned long pointer;
-
- if (allow_resched)
- cond_resched();
- if (scan_should_stop())
- break;
-
- /* don't scan uninitialized memory */
- if (!kmemcheck_is_obj_initialized((unsigned long)ptr,
- BYTES_PER_POINTER))
- continue;
-
- pointer = *ptr;
+ struct kmemleak_object *object;
- object = find_and_get_object(pointer, 1);
+ object = fns[i](pointer, 1);
if (!object)
continue;
+
if (object == scanned) {
/* self referenced, ignore */
put_object(object);
@@ -1049,6 +1151,32 @@ static void scan_block(void *_start, void *_end,
}
/*
+ * Scan a memory block (exclusive range) for valid pointers and add those
+ * found to the gray list.
+ */
+static void scan_block(void *_start, void *_end,
+ struct kmemleak_object *scanned, int allow_resched)
+{
+ unsigned long *ptr;
+ unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
+ unsigned long *end = _end - (BYTES_PER_POINTER - 1);
+
+ for (ptr = start; ptr < end; ptr++) {
+ if (allow_resched)
+ cond_resched();
+ if (scan_should_stop())
+ break;
+
+ /* don't scan uninitialized memory */
+ if (!kmemcheck_is_obj_initialized((unsigned long)ptr,
+ BYTES_PER_POINTER))
+ continue;
+
+ __scan_block(ptr, scanned);
+ }
+}
+
+/*
* Scan a memory block corresponding to a kmemleak_object. A condition is
* that object->use_count >= 1.
*/
@@ -1620,8 +1748,10 @@ void __init kmemleak_init(void)
jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000);
object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
+ alias_cache = KMEM_CACHE(kmemleak_alias, SLAB_NOLEAKTRACE);
scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
INIT_PRIO_TREE_ROOT(&object_tree_root);
+ INIT_PRIO_TREE_ROOT(&alias_tree_root);
/* the kernel is still in UP mode, so disabling the IRQs is enough */
local_irq_save(flags);
--
1.7.1.rc2
On 18/06/10 08:04 +0200, Doyu Hiroshi (Nokia-D/Helsinki) wrote:
> Hi,
>
> This is another version of "kmemleak: Fix false positive", which
> introduces another alias tree to keep track of all alias addresses
> of each object, based on the discussion (*1).
>
> You can also find the previous one (*2), which uses a special scan
> area for alias addresses with a conversion function.
>
> Comparing the two methods, the current one seems to take a bit
> longer to scan, as shown below, tested with 512 elements of (*3).
>
> "kmemleak: Fix false positive with alias":
> # time echo scan > /mnt/kmemleak
> real 0m 8.40s
> user 0m 0.00s
> sys 0m 8.40s
>
> "kmemleak: Fix false positive with special scan":
> # time echo scan > /mnt/kmemleak
> real 0m 3.96s
> user 0m 0.00s
> sys 0m 3.96s
>
> For our case (*4), to reduce false positives for the 2nd level IOMMU
> pagetable allocation, the previous special scan seems to be
> lightweight enough, although there might be room to improve the
> alias one, and I might also have misunderstood the original proposal
> of aliasing.
>
> Any comment would be appreciated.
After comparing the two, my Ack would still strongly be behind the
first one, the special scan. The additional work over a normal scan
is limited strictly to those regions that need it, which is a much
more clinical approach to the problem. Your timing data bears that
out.
Phil
> *1: http://lkml.org/lkml/2010/6/2/282
> *2: kmemleak: Fix false positive with special scan
> http://lkml.org/lkml/2010/6/1/137
> *3: kmemleak: Add special scan test case
> http://lkml.org/lkml/2010/6/1/134
> *4: http://lkml.org/lkml/2010/6/1/136
>
> Hiroshi DOYU (1):
> kmemleak: Fix false positive with alias
>
> include/linux/kmemleak.h | 4 +
> mm/kmemleak.c | 198 ++++++++++++++++++++++++++++++++++++++--------
> 2 files changed, 168 insertions(+), 34 deletions(-)
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-omap" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi,
(and sorry for the delay)
On Fri, 2010-06-18 at 07:04 +0100, Hiroshi DOYU wrote:
> This is another version of "kmemleak: Fix false positive", which
> introduces another alias tree to keep track of all alias addresses
> of each object, based on the discussion (*1).
>
> You can also find the previous one (*2), which uses a special scan
> area for alias addresses with a conversion function.
>
> Comparing the two methods, the current one seems to take a bit
> longer to scan, as shown below, tested with 512 elements of (*3).
>
> "kmemleak: Fix false positive with alias":
> # time echo scan > /mnt/kmemleak
> real 0m 8.40s
> user 0m 0.00s
> sys 0m 8.40s
>
> "kmemleak: Fix false positive with special scan":
> # time echo scan > /mnt/kmemleak
> real 0m 3.96s
> user 0m 0.00s
> sys 0m 3.96s
Have you tried without your patches (just the test module but without
aliasing the pointers)? I'm curious what's the impact of your first set
of patches.
> For our case (*4), to reduce false positives for the 2nd level IOMMU
> pagetable allocation, the previous special scan seems to be
> lightweight enough, although there might be room to improve the
> alias one, and I might also have misunderstood the original proposal
> of aliasing.
The performance impact is indeed pretty high, though some parts of the
code look over-engineered to me (the __scan_block function with a loop
going through an array of two function pointers - I think the compiler
cannot figure out what to inline). You could just extend the
find_and_get_object() to search both trees under a single spinlock
region (as locking also takes time).
Anyway, you still get to search two trees for any pointer, so there
would always be some performance impact. I just hoped it wasn't as bad.
In a normal system (not test module), how many elements would the alias
tree have?
Another approach - if we assume that there is a single alias per object
and such aliases don't overlap, we could just move (delete + re-insert)
the corresponding kmemleak_object in the tree to the alias position.
This way we only keep a single tree and a single object for an allocated
block. But in your use-case, the physical address of an object may
actually match the virtual address of a different object, so
lookup_object() needs to be iterative. You need two new kmemleak API
functions, e.g. kmemleak_alias() and kmemleak_unalias(), to be called
after allocation and before freeing a memory block.
If we can't make the performance hit acceptable, we could go for the
first approach and maybe just extend kmemleak_scan_area with a function
pointer structure rather than adding a new one. But as I said
previously, my main issue with your original approach is that I would
prefer to call the kmemleak API at the point where the false positive is
allocated rather than where the parent object was.
Thanks for working on this. Regards.
--
Catalin
From: ext Catalin Marinas <[email protected]>
Subject: Re: [RFC][PATCH 0/1] kmemleak: Fix false positive with alias
Date: Mon, 28 Jun 2010 16:46:12 +0200
> Hi,
>
> (and sorry for the delay)
>
> On Fri, 2010-06-18 at 07:04 +0100, Hiroshi DOYU wrote:
>> This is another version of "kmemleak: Fix false positive", which
>> introduces another alias tree to keep track of all alias addresses
>> of each object, based on the discussion (*1).
>>
>> You can also find the previous one (*2), which uses a special scan
>> area for alias addresses with a conversion function.
>>
>> Comparing the two methods, the current one seems to take a bit
>> longer to scan, as shown below, tested with 512 elements of (*3).
>>
>> "kmemleak: Fix false positive with alias":
>> # time echo scan > /mnt/kmemleak
>> real 0m 8.40s
>> user 0m 0.00s
>> sys 0m 8.40s
>>
>> "kmemleak: Fix false positive with special scan":
>> # time echo scan > /mnt/kmemleak
>> real 0m 3.96s
>> user 0m 0.00s
>> sys 0m 3.96s
>
> Have you tried without your patches (just the test module but without
> aliasing the pointers)? I'm curious what's the impact of your first set
> of patches.
IIRC, there was not much difference compared with the first patches.
I'll measure it again later.
>> For our case (*4), to reduce false positives for the 2nd level IOMMU
>> pagetable allocation, the previous special scan seems to be
>> lightweight enough, although there might be room to improve the
>> alias one, and I might also have misunderstood the original proposal
>> of aliasing.
>
> The performance impact is indeed pretty high, though some parts of the
> code look over-engineered to me (the __scan_block function with a loop
> going through an array of two function pointers - I think the compiler
> cannot figure out what to inline). You could just extend the
> find_and_get_object() to search both trees under a single spinlock
> region (as locking also takes time).
Ok, a good point.
> Anyway, you still get to search two trees for any pointer, so there
> would always be some performance impact. I just hoped it wasn't as bad.
> In a normal system (not test module), how many elements would the alias
> tree have?
Just in our case, it's 512 at most.
> Another approach - if we assume that there is a single alias per object
> and such aliases don't overlap, we could just move (delete + re-insert)
> the corresponding kmemleak_object in the tree to the alias position.
> This way we only keep a single tree and a single object for an allocated
> block. But in your use-case, the physical address of an object may
> actually match the virtual address of a different object, so
> lookup_object() needs to be iterative. You need two new kmemleak API
> functions, e.g. kmemleak_alias() and kmemleak_unalias(), to be called
> after allocation and before freeing a memory block.
>
> If we can't make the performance hit acceptable, we could go for the
> first approach and maybe just extend kmemleak_scan_area with a function
> pointer structure rather than adding a new one. But as I said
> previously, my main issue with your original approach is that I would
> prefer to call the kmemleak API at the point where the false positive is
> allocated rather than where the parent object was.
I'll try both and measure their impact again. Thank you for your comment.
Hi Catalin,
From: Hiroshi DOYU <[email protected]>
Subject: Re: [RFC][PATCH 0/1] kmemleak: Fix false positive with alias
Date: Tue, 29 Jun 2010 07:44:23 +0300 (EEST)
> From: ext Catalin Marinas <[email protected]>
> Subject: Re: [RFC][PATCH 0/1] kmemleak: Fix false positive with alias
> Date: Mon, 28 Jun 2010 16:46:12 +0200
>
>> Hi,
>>
>> (and sorry for the delay)
>>
>> On Fri, 2010-06-18 at 07:04 +0100, Hiroshi DOYU wrote:
>>> This is another version of "kmemleak: Fix false positive", which
>>> introduces another alias tree to keep track of all alias address of
>>> each objects, based on the discussion(*1)
>>>
>>> You can also find the previous one(*2), which uses special scan area
>>> for alias addresses with a conversion function.
>>>
>>> Comparing the two methods, it seems that the current one takes a bit
>>> longer to scan, as shown below, tested with 512 elements of (*3).
>>>
>>> "kmemleak: Fix false positive with alias":
>>> # time echo scan > /mnt/kmemleak
>>> real 0m 8.40s
>>> user 0m 0.00s
>>> sys 0m 8.40s
>>>
>>> "kmemleak: Fix false positive with special scan":
>>> # time echo scan > /mnt/kmemleak
>>> real 0m 3.96s
>>> user 0m 0.00s
>>> sys 0m 3.96s
>>
>> Have you tried without your patches (just the test module but without
>> aliasing the pointers)? I'm curious what's the impact of your first set
>> of patches.
>
> IIRC, not much difference compared to the one with the first patches. I'll
> measure it again later.
>
>>> For our case(*4), reducing false positives for the 2nd-level IOMMU
>>> pagetable allocation, the previous special scan seems to be
>>> lightweight enough, although there might be a possibility to improve
>>> the alias one, and I might also have misunderstood the original
>>> proposal of aliasing.
>>
>> The performance impact is indeed pretty high, though some parts of the
>> code look over-engineered to me (the __scan_block function with a loop
>> going through an array of two function pointers - I think the compiler
>> cannot figure out what to inline). You could just extend the
>> find_and_get_object() to search both trees under a single spinlock
>> region (as locking also takes time).
>
> Ok, a good point.
Now there's not much difference with the attached patch, a new version
of the alias approach.
/ # modprobe kmemleak-special-test use_alias=0
/ # time echo scan > /sys/kernel/debug/kmemleak
real 0m 2.30s
user 0m 0.00s
sys 0m 2.30s
/ # modprobe kmemleak-special-test use_alias=1
/ # time echo scan > /sys/kernel/debug/kmemleak
real 0m 3.91s
user 0m 0.00s
sys 0m 3.91s
From a5670d69b2cafe85f6f26f6951097210d3b9917f Mon Sep 17 00:00:00 2001
From: Hiroshi DOYU <[email protected]>
Date: Thu, 17 Jun 2010 13:36:45 +0300
Subject: [PATCH 1/1] kmemleak: Fix false positive with address alias
There is a false positive case where a pointer is calculated by
methods other than the usual container_of macro. "kmemleak_ignore" can
cover such a false positive, but it would lose the advantage of memory
leak detection. This patch allows kmemleak to work with such false
positives by address aliasing. A client module can register an alias
address for an original pointer.
A typical use case could be the IOMMU pagetable allocation, which
stores pointers to the second level of page tables with some
conversion, for example, a physical address with attribute bits. Right
now I don't have other use cases, but I hope there are others that
this special scan can handle.
Signed-off-by: Hiroshi DOYU <[email protected]>
Cc: Phil Carmody <[email protected]>
---
include/linux/kmemleak.h | 8 ++
mm/kmemleak.c | 208 +++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 204 insertions(+), 12 deletions(-)
diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
index 99d9a67..9e2af3a 100644
--- a/include/linux/kmemleak.h
+++ b/include/linux/kmemleak.h
@@ -34,6 +34,8 @@ extern void kmemleak_not_leak(const void *ptr) __ref;
extern void kmemleak_ignore(const void *ptr) __ref;
extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref;
extern void kmemleak_no_scan(const void *ptr) __ref;
+extern void kmemleak_add_alias(const void *ptr, const void *new) __ref;
+extern void kmemleak_unalias(const void *alias) __ref;
static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
int min_count, unsigned long flags,
@@ -92,6 +94,12 @@ static inline void kmemleak_erase(void **ptr)
static inline void kmemleak_no_scan(const void *ptr)
{
}
+static inline void kmemleak_add_alias(const void *ptr, const void *new)
+{
+}
+static inline void kmemleak_unalias(const void *alias)
+{
+}
#endif /* CONFIG_DEBUG_KMEMLEAK */
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 2c0d032..3875cb7 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -157,6 +157,13 @@ struct kmemleak_object {
unsigned long jiffies; /* creation timestamp */
pid_t pid; /* pid of the current task */
char comm[TASK_COMM_LEN]; /* executable name */
+ struct kmemleak_alias *alias; /* if a pointer is modified */
+};
+
+struct kmemleak_alias {
+ struct list_head alias_list;
+ struct prio_tree_node tree_node;
+ struct kmemleak_object *object;
};
/* flag representing the memory block allocation status */
@@ -179,13 +186,18 @@ struct kmemleak_object {
static LIST_HEAD(object_list);
/* the list of gray-colored objects (see color_gray comment below) */
static LIST_HEAD(gray_list);
+/* the list of objects with alias (see alias comment below) */
+static LIST_HEAD(alias_list);
/* prio search tree for object boundaries */
static struct prio_tree_root object_tree_root;
+/* prio search tree for alias object boundaries */
+static struct prio_tree_root alias_tree_root;
/* rw_lock protecting the access to object_list and prio_tree_root */
static DEFINE_RWLOCK(kmemleak_lock);
/* allocation caches for kmemleak internal data */
static struct kmem_cache *object_cache;
+static struct kmem_cache *alias_cache;
static struct kmem_cache *scan_area_cache;
/* set if tracing memory operations is enabled */
@@ -269,6 +281,8 @@ static void kmemleak_disable(void);
kmemleak_disable(); \
} while (0)
+#define to_address(obj) ((obj)->tree_node.start)
+
/*
* Printing of the objects hex dump to the seq file. The number of lines to be
* printed is limited to HEX_MAX_LINES to prevent seq file spamming. The
@@ -369,7 +383,7 @@ static void dump_object_info(struct kmemleak_object *object)
trace.entries = object->trace;
pr_notice("Object 0x%08lx (size %zu):\n",
- object->tree_node.start, object->size);
+ to_address(object), object->size);
pr_notice(" comm \"%s\", pid %d, jiffies %lu\n",
object->comm, object->pid, object->jiffies);
pr_notice(" min_count = %d\n", object->min_count);
@@ -436,6 +450,8 @@ static void free_object_rcu(struct rcu_head *rcu)
hlist_del(elem);
kmem_cache_free(scan_area_cache, area);
}
+ if (object->alias)
+ kmem_cache_free(alias_cache, object->alias);
kmem_cache_free(object_cache, object);
}
@@ -460,12 +476,11 @@ static void put_object(struct kmemleak_object *object)
/*
* Look up an object in the prio search tree and increase its use_count.
*/
-static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
+static struct kmemleak_object *__find_and_get_object(unsigned long ptr, int alias)
{
unsigned long flags;
struct kmemleak_object *object = NULL;
- rcu_read_lock();
read_lock_irqsave(&kmemleak_lock, flags);
if (ptr >= min_addr && ptr < max_addr)
object = lookup_object(ptr, alias);
@@ -474,6 +489,75 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
/* check whether the object is still available */
if (object && !get_object(object))
object = NULL;
+
+ return object;
+}
+
+static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
+{
+ struct kmemleak_object *object;
+
+ rcu_read_lock();
+ object = __find_and_get_object(ptr, alias);
+ rcu_read_unlock();
+
+ return object;
+}
+
+static struct kmemleak_object *__find_and_get_alias(unsigned long ptr, int alias)
+{
+ struct kmemleak_object *object = NULL;
+ struct kmemleak_alias *ao = NULL;
+ struct prio_tree_node *node;
+ struct prio_tree_iter iter;
+ unsigned long flags;
+
+ read_lock_irqsave(&kmemleak_lock, flags);
+
+ prio_tree_iter_init(&iter, &alias_tree_root, ptr, ptr);
+ node = prio_tree_next(&iter);
+ if (node) {
+ ao = prio_tree_entry(node, struct kmemleak_alias, tree_node);
+ if (!alias && to_address(ao) != ptr) {
+			kmemleak_warn("Found object by alias\n");
+ ao = NULL;
+ }
+ }
+
+ read_unlock_irqrestore(&kmemleak_lock, flags);
+
+ if (ao && get_object(ao->object))
+ object = ao->object;
+
+ return object;
+}
+
+static struct kmemleak_object *find_and_get_alias(unsigned long ptr, int alias)
+{
+ struct kmemleak_object *object = NULL;
+
+ rcu_read_lock();
+ object = __find_and_get_alias(ptr, alias);
+ rcu_read_unlock();
+
+ return object;
+}
+
+/*
+ * Try to find object first, and then with alias address if not found.
+ */
+static struct kmemleak_object *find_and_get_object_with_alias(unsigned long ptr,
+ int alias)
+{
+ struct kmemleak_object *object = NULL;
+
+ rcu_read_lock();
+
+ object = __find_and_get_object(ptr, alias);
+ if (!object &&
+ !prio_tree_empty(&alias_tree_root))
+ object = __find_and_get_alias(ptr, alias);
+
rcu_read_unlock();
return object;
@@ -524,6 +608,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
object->count = 0; /* white color initially */
object->jiffies = jiffies;
object->checksum = 0;
+ object->alias = NULL;
/* task information */
if (in_irq()) {
@@ -547,7 +632,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
object->trace_len = __save_stack_trace(object->trace);
INIT_PRIO_TREE_NODE(&object->tree_node);
- object->tree_node.start = ptr;
+ to_address(object) = ptr;
object->tree_node.last = ptr + size - 1;
write_lock_irqsave(&kmemleak_lock, flags);
@@ -577,6 +662,57 @@ out:
return object;
}
+static void create_alias_object(struct kmemleak_object *object,
+ unsigned long ptr)
+{
+ struct kmemleak_alias *alias;
+ struct prio_tree_node *node;
+ unsigned long flags;
+
+ alias = kmem_cache_alloc(alias_cache, GFP_KERNEL);
+ if (!alias) {
+ kmemleak_stop("Cannot allocate a kmemleak_alias structure\n");
+ return;
+ }
+ INIT_LIST_HEAD(&alias->alias_list);
+ INIT_PRIO_TREE_NODE(&alias->tree_node);
+ to_address(alias) = ptr;
+
+	spin_lock_irqsave(&object->lock, flags);
+	alias->tree_node.last = ptr + object->size - 1;
+	alias->object = object;
+	spin_unlock_irqrestore(&object->lock, flags);
+
+	write_lock_irqsave(&kmemleak_lock, flags);
+	node = prio_tree_insert(&alias_tree_root, &alias->tree_node);
+	if (node) {
+		object->alias = alias;
+		list_add_tail_rcu(&alias->alias_list, &alias_list);
+	} else {
+		kmemleak_warn("Cannot insert alias into the alias tree\n");
+		kmem_cache_free(alias_cache, alias);
+	}
+
+	write_unlock_irqrestore(&kmemleak_lock, flags);
+}
+
+static void __delete_alias_object(struct kmemleak_object *object)
+{
+ prio_tree_remove(&alias_tree_root, &object->alias->tree_node);
+ list_del_rcu(&object->alias->alias_list);
+ kmem_cache_free(alias_cache, object->alias);
+ object->alias = NULL;
+}
+
+static void delete_alias_object(struct kmemleak_object *object)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&kmemleak_lock, flags);
+ __delete_alias_object(object);
+ write_unlock_irqrestore(&kmemleak_lock, flags);
+}
+
/*
* Remove the metadata (struct kmemleak_object) for a memory block from the
* object_list and object_tree_root and decrement its use_count.
@@ -588,6 +724,8 @@ static void __delete_object(struct kmemleak_object *object)
write_lock_irqsave(&kmemleak_lock, flags);
prio_tree_remove(&object_tree_root, &object->tree_node);
list_del_rcu(&object->object_list);
+ if (object->alias)
+ __delete_alias_object(object);
write_unlock_irqrestore(&kmemleak_lock, flags);
WARN_ON(!(object->flags & OBJECT_ALLOCATED));
@@ -630,7 +768,7 @@ static void delete_object_full(unsigned long ptr)
*/
static void delete_object_part(unsigned long ptr, size_t size)
{
- struct kmemleak_object *object;
+ struct kmemleak_object *object, *new;
unsigned long start, end;
object = find_and_get_object(ptr, 1);
@@ -652,12 +790,24 @@ static void delete_object_part(unsigned long ptr, size_t size)
*/
start = object->pointer;
end = object->pointer + object->size;
- if (ptr > start)
- create_object(start, ptr - start, object->min_count,
- GFP_KERNEL);
- if (ptr + size < end)
- create_object(ptr + size, end - ptr - size, object->min_count,
- GFP_KERNEL);
+ if (ptr > start) {
+ new = create_object(start, ptr - start, object->min_count,
+ GFP_KERNEL);
+ if (new && unlikely(object->alias))
+ create_alias_object(new, to_address(object->alias));
+ }
+ if (ptr + size < end) {
+ new = create_object(ptr + size, end - ptr - size,
+ object->min_count, GFP_KERNEL);
+ if (new && unlikely(object->alias)) {
+ unsigned long alias_ptr;
+
+ alias_ptr = to_address(object->alias);
+ alias_ptr += ptr - start + size;
+
+ create_alias_object(new, alias_ptr);
+ }
+ }
put_object(object);
}
@@ -944,6 +1094,38 @@ void __ref kmemleak_no_scan(const void *ptr)
}
EXPORT_SYMBOL(kmemleak_no_scan);
+void kmemleak_add_alias(const void *ptr, const void *alias)
+{
+ struct kmemleak_object *object;
+
+ pr_debug("%s(0x%p -> 0x%p)\n", __func__, ptr, alias);
+
+ object = find_and_get_object((unsigned long)ptr, 0);
+ if (!object) {
+ kmemleak_warn("Aliasing unknown object at 0x%p\n", ptr);
+ return;
+ }
+ create_alias_object(object, (unsigned long)alias);
+ put_object(object);
+}
+EXPORT_SYMBOL(kmemleak_add_alias);
+
+void kmemleak_unalias(const void *alias)
+{
+ struct kmemleak_object *object;
+
+ pr_debug("%s(0x%p)\n", __func__, alias);
+
+ object = find_and_get_alias((unsigned long)alias, 0);
+ if (!object) {
+		kmemleak_warn("Unaliasing unknown object at 0x%p\n", alias);
+ return;
+ }
+ delete_alias_object(object);
+ put_object(object);
+}
+EXPORT_SYMBOL(kmemleak_unalias);
+
/*
* Update an object's checksum and return true if it was modified.
*/
@@ -1007,7 +1189,7 @@ static void scan_block(void *_start, void *_end,
pointer = *ptr;
- object = find_and_get_object(pointer, 1);
+ object = find_and_get_object_with_alias(pointer, 1);
if (!object)
continue;
if (object == scanned) {
@@ -1620,8 +1802,10 @@ void __init kmemleak_init(void)
jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000);
object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
+ alias_cache = KMEM_CACHE(kmemleak_alias, SLAB_NOLEAKTRACE);
scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
INIT_PRIO_TREE_ROOT(&object_tree_root);
+ INIT_PRIO_TREE_ROOT(&alias_tree_root);
/* the kernel is still in UP mode, so disabling the IRQs is enough */
local_irq_save(flags);
--
1.7.1.rc2
Hi Catalin,
From: Hiroshi DOYU <[email protected]>
Subject: Re: [RFC][PATCH 0/1] kmemleak: Fix false positive with alias
Date: Tue, 10 Aug 2010 18:49:03 +0300 (EEST)
<snip>
>>> The performance impact is indeed pretty high, though some parts of the
>>> code look over-engineered to me (the __scan_block function with a loop
>>> going through an array of two function pointers - I think the compiler
>>> cannot figure out what to inline). You could just extend the
>>> find_and_get_object() to search both trees under a single spinlock
>>> region (as locking also takes time).
>>
>> Ok, a good point.
>
> Now there's not much difference with the attached patch, a new version
> of alias.
>
> / # modprobe kmemleak-special-test use_alias=0
> / # time echo scan > /sys/kernel/debug/kmemleak
> real 0m 2.30s
> user 0m 0.00s
> sys 0m 2.30s
>
> / # modprobe kmemleak-special-test use_alias=1
> / # time echo scan > /sys/kernel/debug/kmemleak
> real 0m 3.91s
> user 0m 0.00s
> sys 0m 3.91s
It would be nice if you could have some time to take a look at this
patch and give some comments.
> From a5670d69b2cafe85f6f26f6951097210d3b9917f Mon Sep 17 00:00:00 2001
> From: Hiroshi DOYU <[email protected]>
> Date: Thu, 17 Jun 2010 13:36:45 +0300
> Subject: [PATCH 1/1] kmemleak: Fix false positive with address alias
<snip>
On Fri, Aug 27, 2010 at 09:12:24AM +0300, Hiroshi DOYU wrote:
> Hi Catalin,
>
> From: Hiroshi DOYU <[email protected]>
> Subject: Re: [RFC][PATCH 0/1] kmemleak: Fix false positive with alias
> Date: Tue, 10 Aug 2010 18:49:03 +0300 (EEST)
>
> <snip>
> >>> The performance impact is indeed pretty high, though some parts of the
> >>> code look over-engineered to me (the __scan_block function with a loop
> >>> going through an array of two function pointers - I think the compiler
> >>> cannot figure out what to inline). You could just extend the
> >>> find_and_get_object() to search both trees under a single spinlock
> >>> region (as locking also takes time).
> >>
> >> Ok, a good point.
> >
> > Now there's not much difference with the attached patch, a new version
> > of alias.
> >
> > / # modprobe kmemleak-special-test use_alias=0
> > / # time echo scan > /sys/kernel/debug/kmemleak
> > real 0m 2.30s
> > user 0m 0.00s
> > sys 0m 2.30s
> >
> > / # modprobe kmemleak-special-test use_alias=1
> > / # time echo scan > /sys/kernel/debug/kmemleak
> > real 0m 3.91s
> > user 0m 0.00s
> > sys 0m 3.91s
>
> It would be nice if you could have some time to take a look at this
> patch and give some comments.
>
> > From a5670d69b2cafe85f6f26f6951097210d3b9917f Mon Sep 17 00:00:00 2001
> > From: Hiroshi DOYU <[email protected]>
> > Date: Thu, 17 Jun 2010 13:36:45 +0300
> > Subject: [PATCH 1/1] kmemleak: Fix false positive with address alias
> >
> > There is a false positive case where a pointer is calculated by
> > methods other than the usual container_of macro. "kmemleak_ignore" can
> > cover such a false positive, but it would lose the advantage of memory
> > leak detection. This patch allows kmemleak to work with such false
> > positives by address aliasing. A client module can register an alias
> > address for an original pointer.
> >
> > A typical use case could be the IOMMU pagetable allocation, which
> > stores pointers to the second level of page tables with some
> > conversion, for example, a physical address with attribute bits. Right
> > now I don't have other use cases, but I hope there are others that
> > this special scan can handle.
A few questions below...
Thanx, Paul
> > Signed-off-by: Hiroshi DOYU <[email protected]>
> > Cc: Phil Carmody <[email protected]>
<snip>
> > @@ -436,6 +450,8 @@ static void free_object_rcu(struct rcu_head *rcu)
> > hlist_del(elem);
> > kmem_cache_free(scan_area_cache, area);
> > }
> > + if (object->alias)
> > + kmem_cache_free(alias_cache, object->alias);
> > kmem_cache_free(object_cache, object);
> > }
> >
> > @@ -460,12 +476,11 @@ static void put_object(struct kmemleak_object *object)
> > /*
> > * Look up an object in the prio search tree and increase its use_count.
> > */
> > -static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
> > +static struct kmemleak_object *__find_and_get_object(unsigned long ptr, int alias)
> > {
> > unsigned long flags;
> > struct kmemleak_object *object = NULL;
> >
> > - rcu_read_lock();
> > read_lock_irqsave(&kmemleak_lock, flags);
> > if (ptr >= min_addr && ptr < max_addr)
> > object = lookup_object(ptr, alias);
> > @@ -474,6 +489,75 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
> > /* check whether the object is still available */
> > if (object && !get_object(object))
> > object = NULL;
> > +
> > + return object;
> > +}
> > +
> > +static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
> > +{
> > + struct kmemleak_object *object;
> > +
> > + rcu_read_lock();
> > + object = __find_and_get_object(ptr, alias);
> > + rcu_read_unlock();
> > +
> > + return object;
> > +}
> > +
> > +static struct kmemleak_object *__find_and_get_alias(unsigned long ptr, int alias)
> > +{
> > + struct kmemleak_object *object = NULL;
> > + struct kmemleak_alias *ao = NULL;
> > + struct prio_tree_node *node;
> > + struct prio_tree_iter iter;
> > + unsigned long flags;
> > +
> > + read_lock_irqsave(&kmemleak_lock, flags);
If we hold this readlock, how is RCU helping us? Or are there updates
that do not write-hold kmemleak_lock?
> > +
> > + prio_tree_iter_init(&iter, &alias_tree_root, ptr, ptr);
> > + node = prio_tree_next(&iter);
> > + if (node) {
> > + ao = prio_tree_entry(node, struct kmemleak_alias, tree_node);
> > + if (!alias && to_address(ao) != ptr) {
> > + kmemleak_warn("Found object by alias\n");
> > + ao = NULL;
> > + }
> > + }
> > +
> > + read_unlock_irqrestore(&kmemleak_lock, flags);
> > +
> > + if (ao && get_object(ao->object))
> > + object = ao->object;
> > +
> > + return object;
> > +}
> > +
> > +static struct kmemleak_object *find_and_get_alias(unsigned long ptr, int alias)
> > +{
> > + struct kmemleak_object *object = NULL;
> > +
> > + rcu_read_lock();
> > + object = __find_and_get_alias(ptr, alias);
> > + rcu_read_unlock();
> > +
> > + return object;
> > +}
> > +
> > +/*
> > + * Try to find object first, and then with alias address if not found.
> > + */
> > +static struct kmemleak_object *find_and_get_object_with_alias(unsigned long ptr,
> > + int alias)
> > +{
> > + struct kmemleak_object *object = NULL;
> > +
> > + rcu_read_lock();
> > +
> > + object = __find_and_get_object(ptr, alias);
> > + if (!object &&
> > + !prio_tree_empty(&alias_tree_root))
> > + object = __find_and_get_alias(ptr, alias);
> > +
> > rcu_read_unlock();
> >
> > return object;
> > @@ -524,6 +608,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> > object->count = 0; /* white color initially */
> > object->jiffies = jiffies;
> > object->checksum = 0;
> > + object->alias = NULL;
> >
> > /* task information */
> > if (in_irq()) {
> > @@ -547,7 +632,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> > object->trace_len = __save_stack_trace(object->trace);
> >
> > INIT_PRIO_TREE_NODE(&object->tree_node);
> > - object->tree_node.start = ptr;
> > + to_address(object) = ptr;
> > object->tree_node.last = ptr + size - 1;
> >
> > write_lock_irqsave(&kmemleak_lock, flags);
> > @@ -577,6 +662,57 @@ out:
> > return object;
> > }
> >
> > +static void create_alias_object(struct kmemleak_object *object,
> > + unsigned long ptr)
> > +{
> > + struct kmemleak_alias *alias;
> > + struct prio_tree_node *node;
> > + unsigned long flags;
> > +
> > + alias = kmem_cache_alloc(alias_cache, GFP_KERNEL);
> > + if (!alias) {
> > + kmemleak_stop("Cannot allocate a kmemleak_alias structure\n");
> > + return;
> > + }
> > + INIT_LIST_HEAD(&alias->alias_list);
> > + INIT_PRIO_TREE_NODE(&alias->tree_node);
> > + to_address(alias) = ptr;
> > +
> > + spin_lock_irqsave(&object->lock, flags);
> > + alias->tree_node.last = ptr + object->size - 1;
> > + alias->object = object;
> > + object->alias = alias;
> > + spin_unlock_irqrestore(&object->lock, flags);
> > +
> > + write_lock_irqsave(&kmemleak_lock, flags);
> > +
> > + node = prio_tree_insert(&alias_tree_root, &alias->tree_node);
> > + if (!node) {
> > + kmemleak_warn("Cannot insert alias into the prio tree\n");
> > + kmem_cache_free(alias_cache, alias);
> > + object->alias = NULL;
> > + } else {
> > + list_add_tail_rcu(&alias->alias_list, &alias_list);
> > + }
> > +
> > + write_unlock_irqrestore(&kmemleak_lock, flags);
> > +}
> > +
> > +static void __delete_alias_object(struct kmemleak_object *object)
> > +{
> > + prio_tree_remove(&alias_tree_root, &object->alias->tree_node);
> > + list_del_rcu(&object->alias->alias_list);
Don't we need an RCU grace period here, based on either synchronize_rcu()
or call_rcu()? Perhaps calling free_object_rcu(), though perhaps it frees
up more than you would like.
> > + kmem_cache_free(alias_cache, object->alias);
> > + object->alias = NULL;
> > +}
> > +
> > +static void delete_alias_object(struct kmemleak_object *object)
> > +{
> > + unsigned long flags;
> > +
> > + write_lock_irqsave(&kmemleak_lock, flags);
> > + __delete_alias_object(object);
> > + write_unlock_irqrestore(&kmemleak_lock, flags);
> > +}
> > +
> > /*
> > * Remove the metadata (struct kmemleak_object) for a memory block from the
> > * object_list and object_tree_root and decrement its use_count.
> > @@ -588,6 +724,8 @@ static void __delete_object(struct kmemleak_object *object)
> > write_lock_irqsave(&kmemleak_lock, flags);
> > prio_tree_remove(&object_tree_root, &object->tree_node);
> > list_del_rcu(&object->object_list);
> > + if (object->alias)
> > + __delete_alias_object(object);
> > write_unlock_irqrestore(&kmemleak_lock, flags);
> >
> > WARN_ON(!(object->flags & OBJECT_ALLOCATED));
> > @@ -630,7 +768,7 @@ static void delete_object_full(unsigned long ptr)
> > */
> > static void delete_object_part(unsigned long ptr, size_t size)
> > {
> > - struct kmemleak_object *object;
> > + struct kmemleak_object *object, *new;
> > unsigned long start, end;
> >
> > object = find_and_get_object(ptr, 1);
> > @@ -652,12 +790,24 @@ static void delete_object_part(unsigned long ptr, size_t size)
> > */
> > start = object->pointer;
> > end = object->pointer + object->size;
> > - if (ptr > start)
> > - create_object(start, ptr - start, object->min_count,
> > - GFP_KERNEL);
> > - if (ptr + size < end)
> > - create_object(ptr + size, end - ptr - size, object->min_count,
> > - GFP_KERNEL);
> > + if (ptr > start) {
> > + new = create_object(start, ptr - start, object->min_count,
> > + GFP_KERNEL);
> > + if (new && unlikely(object->alias))
> > + create_alias_object(new, to_address(object->alias));
> > + }
> > + if (ptr + size < end) {
> > + new = create_object(ptr + size, end - ptr - size,
> > + object->min_count, GFP_KERNEL);
> > + if (new && unlikely(object->alias)) {
> > + unsigned long alias_ptr;
> > +
> > + alias_ptr = to_address(object->alias);
> > + alias_ptr += ptr - start + size;
> > +
> > + create_alias_object(new, alias_ptr);
> > + }
> > + }
> >
> > put_object(object);
> > }
> > @@ -944,6 +1094,38 @@ void __ref kmemleak_no_scan(const void *ptr)
> > }
> > EXPORT_SYMBOL(kmemleak_no_scan);
> >
> > +void kmemleak_add_alias(const void *ptr, const void *alias)
> > +{
> > + struct kmemleak_object *object;
> > +
> > + pr_debug("%s(0x%p -> 0x%p)\n", __func__, ptr, alias);
> > +
> > + object = find_and_get_object((unsigned long)ptr, 0);
> > + if (!object) {
> > + kmemleak_warn("Aliasing unknown object at 0x%p\n", ptr);
> > + return;
> > + }
> > + create_alias_object(object, (unsigned long)alias);
> > + put_object(object);
> > +}
> > +EXPORT_SYMBOL(kmemleak_add_alias);
> > +
> > +void kmemleak_unalias(const void *alias)
> > +{
> > + struct kmemleak_object *object;
> > +
> > + pr_debug("%s(0x%p)\n", __func__, alias);
> > +
> > + object = find_and_get_alias((unsigned long)alias, 0);
> > + if (!object) {
> > + kmemleak_warn("Unaliasing unknown object at 0x%p\n", alias);
> > + return;
> > + }
> > + delete_alias_object(object);
> > + put_object(object);
> > +}
> > +EXPORT_SYMBOL(kmemleak_unalias);
> > +
> > /*
> > * Update an object's checksum and return true if it was modified.
> > */
> > @@ -1007,7 +1189,7 @@ static void scan_block(void *_start, void *_end,
> >
> > pointer = *ptr;
> >
> > - object = find_and_get_object(pointer, 1);
> > + object = find_and_get_object_with_alias(pointer, 1);
> > if (!object)
> > continue;
> > if (object == scanned) {
> > @@ -1620,8 +1802,10 @@ void __init kmemleak_init(void)
> > jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000);
> >
> > object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
> > + alias_cache = KMEM_CACHE(kmemleak_alias, SLAB_NOLEAKTRACE);
> > scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
> > INIT_PRIO_TREE_ROOT(&object_tree_root);
> > + INIT_PRIO_TREE_ROOT(&alias_tree_root);
> >
> > /* the kernel is still in UP mode, so disabling the IRQs is enough */
> > local_irq_save(flags);
> > --
> > 1.7.1.rc2
> >