If __GFP_ZERO is used, the whole allocated memory should be cleared, not
just the first part of it.
Fixes: cc86e0c2f306 ("radix tree test suite: add support for slab bulk APIs")
Signed-off-by: Christophe JAILLET <[email protected]>
---
tools/testing/radix-tree/linux.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/radix-tree/linux.c b/tools/testing/radix-tree/linux.c
index d587a558997f..8ab162c48629 100644
--- a/tools/testing/radix-tree/linux.c
+++ b/tools/testing/radix-tree/linux.c
@@ -172,7 +172,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
if (cachep->ctor)
cachep->ctor(p[i]);
else if (gfp & __GFP_ZERO)
- memset(p[i], 0, cachep->size);
+ memset(p[i], 0, cachep->size * size);
}
}
--
2.34.1
* Christophe JAILLET <[email protected]> [230928 17:11]:
> If __GFP_ZERO is used, the whole allocated memory should be cleared, not
> just the first part of it.
>
> Fixes: cc86e0c2f306 ("radix tree test suite: add support for slab bulk APIs")
> Signed-off-by: Christophe JAILLET <[email protected]>
> ---
> tools/testing/radix-tree/linux.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/testing/radix-tree/linux.c b/tools/testing/radix-tree/linux.c
> index d587a558997f..8ab162c48629 100644
> --- a/tools/testing/radix-tree/linux.c
> +++ b/tools/testing/radix-tree/linux.c
> @@ -172,7 +172,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
> if (cachep->ctor)
> cachep->ctor(p[i]);
> else if (gfp & __GFP_ZERO)
> - memset(p[i], 0, cachep->size);
> + memset(p[i], 0, cachep->size * size);
> }
> }
This doesn't look right. In fact, I think you have taken a bug and
extended it to clear the bug's over-allocation worth of memory.
The bulk allocator fills the given array (here it's p) with a given
number (size) of allocations. Each kmem_cache has a preset object size,
that is, each allocation from that cache is of size cachep->size.
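Roughly, the calling contract looks like this (the object size and count
below are made up for illustration; kmem_cache_create() and
kmem_cache_alloc_bulk() are the real interfaces):

	/* A cache whose objects are, say, 64 bytes each. */
	struct kmem_cache *cachep = kmem_cache_create("demo", 64, 0, 0, NULL);
	void *objs[16];
	int n;

	/*
	 * Ask for 16 objects.  On success each objs[i] points at ONE object
	 * of cachep->size (64) bytes, so zeroing cachep->size * 16 bytes from
	 * objs[i] would run far past the end of that single object.
	 */
	n = kmem_cache_alloc_bulk(cachep, GFP_KERNEL | __GFP_ZERO, 16, objs);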
What you are doing is zeroing, for each pointer, the entire range's
worth of memory and not just one element's worth. This only avoids
crashing because we are allocating too much memory to begin with.
Note that this zeroing is in a for loop:
for (i = 0; i < size; i++) {
...
}
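Spelled out a bit more (the allocation call here is from memory, so take
it as approximate), each iteration handles exactly one object:

	for (i = 0; i < size; i++) {
		/*
		 * If the allocation here really passes cachep->size * size,
		 * that over-allocation is the bug to fix, not the memset.
		 */
		p[i] = malloc(cachep->size * size);	/* approximate */
		if (cachep->ctor)
			cachep->ctor(p[i]);
		else if (gfp & __GFP_ZERO)
			/* One object's worth is all that belongs to p[i]. */
			memset(p[i], 0, cachep->size);
	}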
So, really the bug is that we are over-allocating and not that we aren't
zeroing the entire thing.
If you drop the size multiplier from those allocation-size arguments and
use LSAN to check that nothing is overrun, then you will see that we can
reduce the allocation by a lot and still run correctly.
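i.e. something along these lines (sketch only, ignoring error handling
and alignment):

	for (i = 0; i < size; i++) {
		/* Exactly one object's worth of memory per element... */
		p[i] = malloc(cachep->size);
		if (cachep->ctor)
			cachep->ctor(p[i]);
		else if (gfp & __GFP_ZERO)
			/* ...so the existing memset is already the right size. */
			memset(p[i], 0, cachep->size);
	}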
Thanks,
Liam