Given large enough allocations and a machine with low enough memory (i.e.,
a default QEMU VM), it's entirely possible that
kmsan_init_alloc_meta_for_range's shadow+origin allocation fails.
Instead of eating a NULL deref kernel oops, check explicitly for
memblock_alloc() failure and panic with a nice error message.
Signed-off-by: Pedro Falcato <[email protected]>
---
v2:
Address checkpatch warnings, namely:
- Unsplit a user-visible string
- Split an overly long line in the commit message
mm/kmsan/shadow.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
index 87318f9170f..b9d05aff313 100644
--- a/mm/kmsan/shadow.c
+++ b/mm/kmsan/shadow.c
@@ -285,12 +285,17 @@ void __init kmsan_init_alloc_meta_for_range(void *start, void *end)
size = PAGE_ALIGN((u64)end - (u64)start);
shadow = memblock_alloc(size, PAGE_SIZE);
origin = memblock_alloc(size, PAGE_SIZE);
+
+ if (!shadow || !origin)
+ panic("%s: Failed to allocate metadata memory for early boot range of size %llu",
+ __func__, size);
+
for (u64 addr = 0; addr < size; addr += PAGE_SIZE) {
page = virt_to_page_or_null((char *)start + addr);
- shadow_p = virt_to_page_or_null((char *)shadow + addr);
+ shadow_p = virt_to_page((char *)shadow + addr);
set_no_shadow_origin_page(shadow_p);
shadow_page_for(page) = shadow_p;
- origin_p = virt_to_page_or_null((char *)origin + addr);
+ origin_p = virt_to_page((char *)origin + addr);
set_no_shadow_origin_page(origin_p);
origin_page_for(page) = origin_p;
}
--
2.42.0
On Mon, Oct 16, 2023 at 5:34 PM Pedro Falcato <[email protected]> wrote:
>
> Given large enough allocations and a machine with low enough memory (i.e.,
> a default QEMU VM), it's entirely possible that
> kmsan_init_alloc_meta_for_range's shadow+origin allocation fails.
>
> Instead of eating a NULL deref kernel oops, check explicitly for
> memblock_alloc() failure and panic with a nice error message.
For posterity: it is generally quite important for the allocated
shadow and origin to be contiguous; otherwise an unaligned memory
write may result in memory corruption (the corresponding unaligned
shadow write assumes that shadow pages are adjacent).
So instead of panicking we could have split the range into smaller
ones until the allocation succeeded, but that would have led to
hard-to-debug problems in the future.
>
> Signed-off-by: Pedro Falcato <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>