2020-07-02 11:53:21

by Christophe Leroy

Subject: [PATCH 1/2] Revert "powerpc/kasan: Fix shadow pages allocation failure"

This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210.

This commit moved too much work into kasan_init(). The allocation
of shadow pages has to be moved for the reason explained in that
patch, but the allocation of page tables still needs to be done
before switching to the final hash table.

First revert the incorrect commit; the following patch redoes it
properly.

Reported-by: Erhard F. <[email protected]>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181
Fixes: d2a91cef9bbd ("powerpc/kasan: Fix shadow pages allocation failure")
Cc: [email protected]
Signed-off-by: Christophe Leroy <[email protected]>
---
arch/powerpc/include/asm/kasan.h      | 2 ++
arch/powerpc/mm/init_32.c             | 2 ++
arch/powerpc/mm/kasan/kasan_init_32.c | 4 +---
3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index be85c7005fb1..d635b96c7ea6 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -27,10 +27,12 @@

#ifdef CONFIG_KASAN
void kasan_early_init(void);
+void kasan_mmu_init(void);
void kasan_init(void);
void kasan_late_init(void);
#else
static inline void kasan_init(void) { }
+static inline void kasan_mmu_init(void) { }
static inline void kasan_late_init(void) { }
#endif

diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 5a5469eb3174..bf1717f8d5f4 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -171,6 +171,8 @@ void __init MMU_init(void)
btext_unmap();
#endif

+ kasan_mmu_init();
+
setup_kup();

/* Shortly after that, the entire linear mapping will be available */
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 0760e1e754e4..4813c6d50889 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -117,7 +117,7 @@ static void __init kasan_unmap_early_shadow_vmalloc(void)
kasan_update_early_region(k_start, k_end, __pte(0));
}

-static void __init kasan_mmu_init(void)
+void __init kasan_mmu_init(void)
{
int ret;
struct memblock_region *reg;
@@ -146,8 +146,6 @@ static void __init kasan_mmu_init(void)

void __init kasan_init(void)
{
- kasan_mmu_init();
-
kasan_remap_early_shadow_ro();

clear_page(kasan_early_shadow_page);
--
2.25.0
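
The call ordering this series restores can be sketched in plain C. This is
a minimal standalone sketch with printf stand-ins, not the real powerpc
implementation: the names mirror the patch, kasan_mmu_init() stands for the
early shadow page-table allocation done from MMU_init(), and kasan_init()
stands for the later shadow setup once the linear mapping is available.

/*
 * Minimal standalone sketch of the boot ordering described above.
 * Function names mirror the patch; the bodies are printf stand-ins
 * (hypothetical), not the real powerpc code.
 */
#include <stdio.h>

/* Early: allocate shadow page tables. Per the commit message, this
 * must happen before the switch to the final hash table, so it is
 * called from MMU_init(). */
static void kasan_mmu_init(void)
{
	printf("kasan_mmu_init: allocate shadow page tables\n");
}

/* Later: finish shadow setup (remap the early shadow read-only,
 * clear the shadow page) once the linear mapping is available. */
static void kasan_init(void)
{
	printf("kasan_init: finish shadow setup\n");
}

static void MMU_init(void)
{
	/* The revert moves this call back here, before the switch
	 * to the final hash table. */
	kasan_mmu_init();
	printf("MMU_init: switch to final hash table\n");
}

int main(void)
{
	MMU_init();
	/* kasan_init() runs later in boot than MMU_init(). */
	kasan_init();
	return 0;
}

The point of the split, as described in the commit message, is that page
table allocation cannot wait until kasan_init(), while the allocation of
the shadow pages themselves has to wait, for the reason explained in the
reverted commit; the second patch in the series redoes that part properly.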


2020-07-16 12:57:23

by Michael Ellerman

Subject: Re: [PATCH 1/2] Revert "powerpc/kasan: Fix shadow pages allocation failure"

On Thu, 2 Jul 2020 11:52:02 +0000 (UTC), Christophe Leroy wrote:
> This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210.
>
> This commit moved too much work into kasan_init(). The allocation
> of shadow pages has to be moved for the reason explained in that
> patch, but the allocation of page tables still needs to be done
> before switching to the final hash table.
>
> [...]

Applied to powerpc/next.

[1/2] Revert "powerpc/kasan: Fix shadow pages allocation failure"
https://git.kernel.org/powerpc/c/b506923ee44ae87fc9f4de16b53feb313623e146
[2/2] powerpc/kasan: Fix shadow pages allocation failure
https://git.kernel.org/powerpc/c/41ea93cf7ba4e0f0cc46ebfdda8b6ff27c67bc91

cheers