2022-10-25 05:21:45

by Yujie Liu

Subject: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

Hi Peter,

We noticed that the commit below changed the value of
CPU_ENTRY_AREA_MAP_SIZE. KASAN appears to use this value to allocate
shadow memory, and it failed during initialization after this change, so
we are sending this mail and Cc'ing the KASAN folks. Please check the
report below for more details. Thanks.


Greetings,

FYI, we noticed Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page due to commit (built with gcc-11):

commit: 1248fb6a8201ddac1c86a202f05a0a1765efbfce ("x86/mm: Randomize per-cpu entry area")
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git x86/mm

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):


[ 7.114808][ T0] Kernel panic - not syncing: kasan_populate_pmd+0x142/0x1d2: Failed to allocate page, nid=0 from=1000000
[ 7.119742][ T0] CPU: 0 PID: 0 Comm: swapper Not tainted 6.1.0-rc1-00001-g1248fb6a8201 #1
[ 7.122122][ T0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 7.124976][ T0] Call Trace:
[ 7.125849][ T0] <TASK>
[ 7.126642][ T0] ? dump_stack_lvl+0x45/0x5d
[ 7.127908][ T0] ? panic+0x21e/0x46a
[ 7.129009][ T0] ? panic_print_sys_info+0x77/0x77
[ 7.130618][ T0] ? memblock_alloc_try_nid_raw+0x106/0x106
[ 7.132224][ T0] ? memblock_alloc_try_nid+0xd9/0x118
[ 7.133717][ T0] ? memblock_alloc_try_nid_raw+0x106/0x106
[ 7.135252][ T0] ? kasan_populate_pmd+0x142/0x1d2
[ 7.136655][ T0] ? early_alloc+0x95/0x9d
[ 7.137738][ T0] ? kasan_populate_pmd+0x142/0x1d2
[ 7.138936][ T0] ? kasan_populate_pud+0x182/0x19f
[ 7.140335][ T0] ? kasan_populate_shadow+0x1e0/0x233
[ 7.141759][ T0] ? kasan_init+0x3be/0x57f
[ 7.142942][ T0] ? setup_arch+0x101d/0x11f0
[ 7.144229][ T0] ? start_kernel+0x6f/0x3d0
[ 7.145449][ T0] ? secondary_startup_64_no_verify+0xe0/0xeb
[ 7.147051][ T0] </TASK>
[ 7.147868][ T0] ---[ end Kernel panic - not syncing: kasan_populate_pmd+0x142/0x1d2: Failed to allocate page, nid=0 from=1000000 ]---


If you fix the issue, kindly add following tag
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/r/[email protected]


To reproduce:

# build kernel
cd linux
cp config-6.1.0-rc1-00001-g1248fb6a8201 .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz


git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email

# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp dirs to run from a clean state.


--
0-DAY CI Kernel Test Service
https://01.org/lkp


Attachments:
(No filename) (3.09 kB)
config-6.1.0-rc1-00001-g1248fb6a8201 (172.24 kB)
job-script (4.66 kB)
dmesg.xz (3.18 kB)

2022-10-25 11:25:50

by Peter Zijlstra

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

On Tue, Oct 25, 2022 at 12:54:40PM +0800, kernel test robot wrote:
> [...]
>
> [ 7.114808][ T0] Kernel panic - not syncing: kasan_populate_pmd+0x142/0x1d2: Failed to allocate page, nid=0 from=1000000
> [...]

Ufff, no idea about what KASAN wants here; Andrey, you have clue?

Are you trying to allocate backing space for .5T of vspace and failing
that because the kvm thing doesn't have enough memory?


2022-10-25 14:37:39

by Yin, Fengwei

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

Hi Peter,

On 10/25/2022 6:33 PM, Peter Zijlstra wrote:
> On Tue, Oct 25, 2022 at 12:54:40PM +0800, kernel test robot wrote:
>> [...]
>
> Ufff, no idea about what KASAN wants here; Andrey, you have clue?
>
> Are you trying to allocate backing space for .5T of vspace and failing
> that because the kvm thing doesn't have enough memory?
Here is what I found when I checked whether this report is valid:

KASAN creates shadow for the cpu entry area:

        shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE;
        shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin);
        shadow_cpu_entry_begin = (void *)round_down(
                (unsigned long)shadow_cpu_entry_begin, PAGE_SIZE);

        shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE +
                CPU_ENTRY_AREA_MAP_SIZE);
                ^^^^^^^^^^^^^^^^^^^^^^^
        shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end);
        shadow_cpu_entry_end = (void *)round_up(
                (unsigned long)shadow_cpu_entry_end, PAGE_SIZE);

        ...
        kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
                              (unsigned long)shadow_cpu_entry_end, 0);

Before the patch, CPU_ENTRY_AREA_MAP_SIZE is:
(CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_ARRAY_SIZE - CPU_ENTRY_AREA_BASE)

After the patch, it is unchanged for 32-bit, but for 64-bit it becomes
P4D_SIZE, which makes kasan_populate_shadow() cover a very large range.

Hope this info could help somehow. Thanks.

Regards
Yin, Fengwei

>

2022-10-25 16:24:54

by Andrey Ryabinin

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page



On 10/25/22 13:33, Peter Zijlstra wrote:
> On Tue, Oct 25, 2022 at 12:54:40PM +0800, kernel test robot wrote:
>> [...]
>
> Ufff, no idea about what KASAN wants here; Andrey, you have clue?
>
> Are you trying to allocate backing space for .5T of vspace and failing
> that because the kvm thing doesn't have enough memory?
>

KASAN tries to allocate shadow memory for the whole cpu entry area.
The size is CPU_ENTRY_AREA_MAP_SIZE/8, and this obviously fails after your patch.
A fix might be something like this:


---
arch/x86/include/asm/kasan.h | 2 ++
arch/x86/mm/cpu_entry_area.c | 3 +++
arch/x86/mm/kasan_init_64.c | 16 +++++++++++++---
3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da38bed..77dd8b57f1e2 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,11 @@
#ifdef CONFIG_KASAN
void __init kasan_early_init(void);
void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size);
#else
static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size) { }
#endif

#endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index ad1f750517a1..602daa550543 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/fixmap.h>
#include <asm/desc.h>
+#include <asm/kasan.h>

static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);

@@ -91,6 +92,8 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
static void __init
cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
{
+ kasan_populate_shadow_for_vaddr(cea_vaddr, pages*PAGE_SIZE);
+
for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
}
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e7b9b464a82f..dbee52f14700 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,6 +316,19 @@ void __init kasan_early_init(void)
kasan_map_early_shadow(init_top_pgt);
}

+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size)
+{
+ unsigned long shadow_start, shadow_end;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+ shadow_start = round_down(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+ shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+ kasan_populate_shadow(shadow_start, shadow_end,
+ early_pfn_to_nid(__pa(va)));
+}
+
void __init kasan_init(void)
{
int i;
@@ -393,9 +406,6 @@ void __init kasan_init(void)
kasan_mem_to_shadow((void *)VMALLOC_END + 1),
shadow_cpu_entry_begin);

- kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
- (unsigned long)shadow_cpu_entry_end, 0);
-
kasan_populate_early_shadow(shadow_cpu_entry_end,
kasan_mem_to_shadow((void *)__START_KERNEL_map));

--
2.37.4


2022-10-27 10:21:04

by Peter Zijlstra

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

On Thu, Oct 27, 2022 at 06:02:14PM +0800, Yujie Liu wrote:
> Thanks for posting the fix. The issue is resolved after applying the fix.
>
> Tested-by: Yujie Liu <[email protected]>

Excellent; I'll talk to Dave about whether we want to amend the original
commit or stuff this on top, but we'll get it sorted.

Thanks!

2022-10-27 10:45:29

by Yujie Liu

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

On Tue, Oct 25, 2022 at 06:39:07PM +0300, Andrey Ryabinin wrote:
>
>
> On 10/25/22 13:33, Peter Zijlstra wrote:
> > On Tue, Oct 25, 2022 at 12:54:40PM +0800, kernel test robot wrote:
> >> [...]
> >
> > Ufff, no idea about what KASAN wants here; Andrey, you have clue?
> >
> > Are you trying to allocate backing space for .5T of vspace and failing
> > that because the kvm thing doesn't have enough memory?
> >
>
> KASAN tries to allocate shadow memory for the whole cpu entry area.
> The size is CPU_ENTRY_AREA_MAP_SIZE/8, and this obviously fails after your patch.
> A fix might be something like this:

Hi Andrey,

Thanks for posting the fix. The issue is resolved after applying the fix.

Tested-by: Yujie Liu <[email protected]>

=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-11/x86_64-rhel-8.3-kselftests/yocto-i386-minimal-20190520.cgz/1/vm-snb/boot

commit:
v6.1-rc1
1248fb6a8201d ("x86/mm: Randomize per-cpu entry area")
5e25ad77cfd4a ("Fix "KASAN allocate shadow memory for cpu entry area"")

v6.1-rc1 1248fb6a8201ddac1c86a202f05 5e25ad77cfd4a0e089a1f370fbf
---------------- --------------------------- ---------------------------
fail:runs %reproduction fail:runs %reproduction fail:runs
| | | | |
:12 100% 10:10 0% :8 dmesg.Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page,nid=#from=
:12 100% 10:10 0% :8 dmesg.boot_failures


Best Regards,
Yujie

> [...]

2022-10-27 15:34:22

by Dave Hansen

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page

On 10/25/22 08:39, Andrey Ryabinin wrote:
> KASAN tries to allocate shadow memory for the whole cpu entry area.
> The size is CPU_ENTRY_AREA_MAP_SIZE/8, and this obviously fails after your patch.
> A fix might be something like this:
>
> ---
> arch/x86/include/asm/kasan.h | 2 ++
> arch/x86/mm/cpu_entry_area.c | 3 +++
> arch/x86/mm/kasan_init_64.c | 16 +++++++++++++---
> 3 files changed, 18 insertions(+), 3 deletions(-)

Andrey, if you have a minute, could you send this as a real patch, with
a SoB?

Alternatively, do you have any issues if we add your SoB and apply it?

2022-10-27 21:52:27

by Andrey Ryabinin

Subject: Re: [tip:x86/mm] [x86/mm] 1248fb6a82: Kernel_panic-not_syncing:kasan_populate_pmd:Failed_to_allocate_page



On 10/27/22 18:12, Dave Hansen wrote:
> On 10/25/22 08:39, Andrey Ryabinin wrote:
>> KASAN tries to allocate shadow memory for the whole cpu entry area.
>> The size is CPU_ENTRY_AREA_MAP_SIZE/8, and this obviously fails after your patch.
>> A fix might be something like this:
>>
>> ---
>> arch/x86/include/asm/kasan.h | 2 ++
>> arch/x86/mm/cpu_entry_area.c | 3 +++
>> arch/x86/mm/kasan_init_64.c | 16 +++++++++++++---
>> 3 files changed, 18 insertions(+), 3 deletions(-)
>
> Andrey, if you have a minute, could you send this as a real patch, with
> a SoB?

Done. It's slightly different because there was a bug in the vaddr->nid calculation.

2022-10-27 22:11:24

by Andrey Ryabinin

Subject: [PATCH] x86/kasan: map shadow for percpu pages on demand

KASAN maps shadow for the entire CPU-entry-area:
[CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This explodes after commit 1248fb6a8201 ("x86/mm: Randomize per-cpu entry area")
since it increases CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails
to allocate shadow for such big area.

Fix this by allocating KASAN shadow only for really used cpu entry area
addresses mapped by cea_map_percpu_pages()

Fixes: 1248fb6a8201 ("x86/mm: Randomize per-cpu entry area")
Reported-by: kernel test robot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Yujie Liu <[email protected]>
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/include/asm/kasan.h | 3 +++
arch/x86/mm/cpu_entry_area.c | 8 +++++++-
arch/x86/mm/kasan_init_64.c | 15 ++++++++++++---
3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da38bed..de75306b932e 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,12 @@
#ifdef CONFIG_KASAN
void __init kasan_early_init(void);
void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
#else
static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
+ int nid) { }
#endif

#endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index ad1f750517a1..ac2e952186c3 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/fixmap.h>
#include <asm/desc.h>
+#include <asm/kasan.h>

static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);

@@ -91,8 +92,13 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
static void __init
cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
{
+ phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+
+ kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
+ early_pfn_to_nid(PFN_DOWN(pa)));
+
for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+ cea_set_pte(cea_vaddr, pa, prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e7b9b464a82f..d1416926ad52 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,6 +316,18 @@ void __init kasan_early_init(void)
kasan_map_early_shadow(init_top_pgt);
}

+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
+{
+ unsigned long shadow_start, shadow_end;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+ shadow_start = round_down(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+ shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+ kasan_populate_shadow(shadow_start, shadow_end, nid);
+}
+
void __init kasan_init(void)
{
int i;
@@ -393,9 +405,6 @@ void __init kasan_init(void)
kasan_mem_to_shadow((void *)VMALLOC_END + 1),
shadow_cpu_entry_begin);

- kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
- (unsigned long)shadow_cpu_entry_end, 0);
-
kasan_populate_early_shadow(shadow_cpu_entry_end,
kasan_mem_to_shadow((void *)__START_KERNEL_map));

--
2.37.4


2022-10-28 03:09:14

by Yin, Fengwei

Subject: Re: [PATCH] x86/kasan: map shadow for percpu pages on demand

Hi Andrey,

On 10/28/2022 5:31 AM, Andrey Ryabinin wrote:
> [...]
> @@ -393,9 +405,6 @@ void __init kasan_init(void)
> kasan_mem_to_shadow((void *)VMALLOC_END + 1),
> shadow_cpu_entry_begin);
>
> - kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
> - (unsigned long)shadow_cpu_entry_end, 0);
> -
There will be addresses in the range (shadow_cpu_entry_begin, shadow_cpu_entry_end)
that have no KASAN shadow mapping populated after this patch. Not sure whether
that could be a problem. Thanks.

Regards
Yin, Fengwei

> kasan_populate_early_shadow(shadow_cpu_entry_end,
> kasan_mem_to_shadow((void *)__START_KERNEL_map));
>

2022-10-28 07:24:01

by tip-bot2 for Jacob Pan

Subject: [tip: x86/mm] x86/kasan: Map shadow for percpu pages on demand

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 9fd429c28073fa40f5465cd6e4769a0af80bf398
Gitweb: https://git.kernel.org/tip/9fd429c28073fa40f5465cd6e4769a0af80bf398
Author: Andrey Ryabinin <[email protected]>
AuthorDate: Fri, 28 Oct 2022 00:31:04 +03:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Thu, 27 Oct 2022 15:00:24 -07:00

x86/kasan: Map shadow for percpu pages on demand

KASAN maps shadow for the entire CPU-entry-area:
[CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This will explode once the per-cpu entry areas are randomized since it
will increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails to
allocate shadow for such big area.

Fix this by allocating KASAN shadow only for really used cpu entry area
addresses mapped by cea_map_percpu_pages()

Thanks to the 0day folks for finding and reporting this to be an issue.

[ dhansen: tweak changelog since this will get committed before peterz's
actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Tested-by: Yujie Liu <[email protected]>
Cc: kernel test robot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/kasan.h | 3 +++
arch/x86/mm/cpu_entry_area.c | 8 +++++++-
arch/x86/mm/kasan_init_64.c | 15 ++++++++++++---
3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da..de75306 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,12 @@
#ifdef CONFIG_KASAN
void __init kasan_early_init(void);
void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
#else
static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
+ int nid) { }
#endif

#endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 6c2f1b7..d7081b1 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/fixmap.h>
#include <asm/desc.h>
+#include <asm/kasan.h>

static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);

@@ -53,8 +54,13 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
static void __init
cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
{
+ phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+
+ kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
+ early_pfn_to_nid(PFN_DOWN(pa)));
+
for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+ cea_set_pte(cea_vaddr, pa, prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e7b9b46..d141692 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,6 +316,18 @@ void __init kasan_early_init(void)
kasan_map_early_shadow(init_top_pgt);
}

+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
+{
+ unsigned long shadow_start, shadow_end;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+ shadow_start = round_down(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+ shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+ kasan_populate_shadow(shadow_start, shadow_end, nid);
+}
+
void __init kasan_init(void)
{
int i;
@@ -393,9 +405,6 @@ void __init kasan_init(void)
kasan_mem_to_shadow((void *)VMALLOC_END + 1),
shadow_cpu_entry_begin);

- kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
- (unsigned long)shadow_cpu_entry_end, 0);
-
kasan_populate_early_shadow(shadow_cpu_entry_end,
kasan_mem_to_shadow((void *)__START_KERNEL_map));


2022-10-28 14:37:43

by Andrey Ryabinin

Subject: Re: [PATCH] x86/kasan: map shadow for percpu pages on demand



On 10/28/22 05:51, Yin, Fengwei wrote:
> Hi Andrey,
>

>> void __init kasan_init(void)
>> {
>> int i;
>> @@ -393,9 +405,6 @@ void __init kasan_init(void)
>> kasan_mem_to_shadow((void *)VMALLOC_END + 1),
>> shadow_cpu_entry_begin);
>>
>> - kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
>> - (unsigned long)shadow_cpu_entry_end, 0);
>> -
> There will be addresses in the range (shadow_cpu_entry_begin, shadow_cpu_entry_end)
> which have no KASAN shadow mapping populated after this patch. Not sure whether
> that could be a problem. Thanks.
>


This shouldn't be a problem. It's valid to have shadow *only* for addresses with
mapped memory. A shadow address is accessed only if the address itself is accessed,
so the difference between not having shadow for an unmapped address and having it
is whether we crash on the access to the KASAN shadow or crash a few instructions
later on the access to the address itself.

2022-10-31 00:35:12

by Yin, Fengwei

Subject: Re: [PATCH] x86/kasan: map shadow for percpu pages on demand



On 10/28/2022 10:20 PM, Andrey Ryabinin wrote:
>
>
> On 10/28/22 05:51, Yin, Fengwei wrote:
>> Hi Andrey,
>>
>
>>> void __init kasan_init(void)
>>> {
>>> int i;
>>> @@ -393,9 +405,6 @@ void __init kasan_init(void)
>>> kasan_mem_to_shadow((void *)VMALLOC_END + 1),
>>> shadow_cpu_entry_begin);
>>>
>>> - kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
>>> - (unsigned long)shadow_cpu_entry_end, 0);
>>> -
>> There will be addresses in the range (shadow_cpu_entry_begin, shadow_cpu_entry_end)
>> which have no KASAN shadow mapping populated after this patch. Not sure whether
>> that could be a problem. Thanks.
>>
>
>
> This shouldn't be a problem. It's valid to have shadow *only* for addresses with
> mapped memory. A shadow address is accessed only if the address itself is accessed,
> so the difference between not having shadow for an unmapped address and having it
> is whether we crash on the access to the KASAN shadow or crash a few instructions
> later on the access to the address itself.

Thanks for clarification.

Regards
Yin, Fengwei


2022-12-17 19:09:42

by tip-bot2 for Jacob Pan

Subject: [tip: x86/mm] x86/kasan: Map shadow for percpu pages on demand

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: 3f148f3318140035e87decc1214795ff0755757b
Gitweb: https://git.kernel.org/tip/3f148f3318140035e87decc1214795ff0755757b
Author: Andrey Ryabinin <[email protected]>
AuthorDate: Fri, 28 Oct 2022 00:31:04 +03:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Thu, 15 Dec 2022 10:37:26 -08:00

x86/kasan: Map shadow for percpu pages on demand

KASAN maps shadow for the entire CPU-entry-area:
[CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]

This will explode once the per-cpu entry areas are randomized since it
will increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails to
allocate shadow for such big area.

Fix this by allocating KASAN shadow only for really used cpu entry area
addresses mapped by cea_map_percpu_pages()

Thanks to the 0day folks for finding and reporting this to be an issue.

[ dhansen: tweak changelog since this will get committed before peterz's
actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Tested-by: Yujie Liu <[email protected]>
Cc: kernel test robot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/kasan.h | 3 +++
arch/x86/mm/cpu_entry_area.c | 8 +++++++-
arch/x86/mm/kasan_init_64.c | 15 ++++++++++++---
3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 13e70da..de75306 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -28,9 +28,12 @@
#ifdef CONFIG_KASAN
void __init kasan_early_init(void);
void __init kasan_init(void);
+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
#else
static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
+static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
+ int nid) { }
#endif

#endif
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 6c2f1b7..d7081b1 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -9,6 +9,7 @@
#include <asm/cpu_entry_area.h>
#include <asm/fixmap.h>
#include <asm/desc.h>
+#include <asm/kasan.h>

static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);

@@ -53,8 +54,13 @@ void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags)
static void __init
cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
{
+ phys_addr_t pa = per_cpu_ptr_to_phys(ptr);
+
+ kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
+ early_pfn_to_nid(PFN_DOWN(pa)));
+
for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
- cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
+ cea_set_pte(cea_vaddr, pa, prot);
}

static void __init percpu_setup_debug_store(unsigned int cpu)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e7b9b46..d141692 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -316,6 +316,18 @@ void __init kasan_early_init(void)
kasan_map_early_shadow(init_top_pgt);
}

+void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
+{
+ unsigned long shadow_start, shadow_end;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(va);
+ shadow_start = round_down(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
+ shadow_end = round_up(shadow_end, PAGE_SIZE);
+
+ kasan_populate_shadow(shadow_start, shadow_end, nid);
+}
+
void __init kasan_init(void)
{
int i;
@@ -393,9 +405,6 @@ void __init kasan_init(void)
kasan_mem_to_shadow((void *)VMALLOC_END + 1),
shadow_cpu_entry_begin);

- kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
- (unsigned long)shadow_cpu_entry_end, 0);
-
kasan_populate_early_shadow(shadow_cpu_entry_end,
kasan_mem_to_shadow((void *)__START_KERNEL_map));