Hi KASAN folks,
Currently, x86 disables (most) KASLR when KASAN is enabled:
> /*
> * Apply no randomization if KASLR was disabled at boot or if KASAN
> * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
> */
> static inline bool kaslr_memory_enabled(void)
> {
> return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> }
I'm a bit confused by this, though. This code predates 5-level paging
so a PGD should be assumed to be 512G. The kernel_randomize_memory()
granularity seems to be 1 TB, which *is* PGD-aligned.
Are KASAN and kernel_randomize_memory()/KASLR (modules and
cpu_entry_area randomization is separate) really incompatible? Does
anyone have a more thorough explanation than that comment?
This isn't a big deal since KASAN is a debugging option after all. But,
I'm trying to unravel why this:
> if (kaslr_enabled()) {
> pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> kaslr_offset(),
> __START_KERNEL,
> __START_KERNEL_map,
> MODULES_VADDR-1);
for instance uses kaslr_enabled() which includes just randomizing
module_load_offset, but *not* __START_KERNEL. I think this case should
be using kaslr_memory_enabled() to match up with the check in
kernel_randomize_memory(). But this really boils down to what the
difference is between kaslr_memory_enabled() and kaslr_enabled().
On Fri, Mar 3, 2023 at 11:35 PM Dave Hansen <[email protected]> wrote:
>
> Hi KASAN folks,
>
> Currently, x86 disables (most) KASLR when KASAN is enabled:
>
> > /*
> > * Apply no randomization if KASLR was disabled at boot or if KASAN
> > * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
> > */
> > static inline bool kaslr_memory_enabled(void)
> > {
> > return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> > }
>
> I'm a bit confused by this, though. This code predates 5-level paging
> so a PGD should be assumed to be 512G. The kernel_randomize_memory()
> granularity seems to be 1 TB, which *is* PGD-aligned.
>
> Are KASAN and kernel_randomize_memory()/KASLR (modules and
> cpu_entry_area randomization is separate) really incompatible? Does
> anyone have a more thorough explanation than that comment?
>
Yeah, I agree with you here; the comment doesn't make sense to me either.
However, I do see one problem with KASAN and kernel_randomize_memory()
compatibility: the vaddr_start - vaddr_end range includes the KASAN
shadow memory (Documentation/x86/x86_64/mm.rst):
 ffffea0000000000 | -22 TB | ffffeaffffffffff |  1 TB | virtual memory map (vmemmap_base)
 ffffeb0000000000 | -21 TB | ffffebffffffffff |  1 TB | ... unused hole
 ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
 fffffc0000000000 |  -4 TB | fffffdffffffffff |  2 TB | ... unused hole
                  |        |                  |       | vaddr_end for KASLR
So the vmemmap_base and probably some part of vmalloc could easily end
up in KASAN shadow.
> This isn't a big deal since KASAN is a debugging option after all. But,
> I'm trying to unravel why this:
>
> > if (kaslr_enabled()) {
> > pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> > kaslr_offset(),
> > __START_KERNEL,
> > __START_KERNEL_map,
> > MODULES_VADDR-1);
>
> for instance uses kaslr_enabled() which includes just randomizing
> module_load_offset, but *not* __START_KERNEL. I think this case should
> be using kaslr_memory_enabled() to match up with the check in
> kernel_randomize_memory(). But this really boils down to what the
> difference is between kaslr_memory_enabled() and kaslr_enabled().
This code looks correct to me. __START_KERNEL is just a constant; it's
never randomized. The location of the kernel image (.text, .data, ...),
however, is randomized; kaslr_offset() is the random offset here.
So:
 - kaslr_enabled() - randomization of the kernel image and modules.
 - kaslr_memory_enabled() - randomization of the linear mapping
   (__PAGE_OFFSET), vmalloc (VMALLOC_START) and vmemmap (VMEMMAP_START).
On Wed, Mar 08, 2023 at 06:24:05PM +0100, Andrey Ryabinin <[email protected]> wrote:
> So the vmemmap_base and probably some part of vmalloc could easily end
> up in KASAN shadow.
Would it help to (conditionally) reduce vaddr_end to the beginning of
KASAN shadow memory?
(I'm not that familiar with KASAN, so IOW, would KASAN handle
randomized: linear mapping (__PAGE_OFFSET), vmalloc (VMALLOC_START) and
vmemmap (VMEMMAP_START) in that smaller range.)
Thanks,
Michal
On Mon, Mar 13, 2023 at 10:41 AM Michal Koutný <[email protected]> wrote:
>
> On Wed, Mar 08, 2023 at 06:24:05PM +0100, Andrey Ryabinin <[email protected]> wrote:
> > So the vmemmap_base and probably some part of vmalloc could easily end
> > up in KASAN shadow.
>
> Would it help to (conditionally) reduce vaddr_end to the beginning of
> KASAN shadow memory?
> (I'm not that familiar with KASAN, so IOW, would KASAN handle
> randomized: linear mapping (__PAGE_OFFSET), vmalloc (VMALLOC_START) and
> vmemmap (VMEMMAP_START) in that smaller range.)
>
Yes, with vaddr_end = KASAN_SHADOW_START it should work, and
kaslr_memory_enabled() could then be removed in favor of just kaslr_enabled().
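Something along these lines in arch/x86/mm/kaslr.c (an untested sketch; today vaddr_end is a const initializer there, and whether KASAN_SHADOW_START is directly usable in that file needs checking):

```c
	/* Stop the KASLR region walk before the KASAN shadow instead
	 * of disabling memory randomization entirely. */
	if (IS_ENABLED(CONFIG_KASAN))
		vaddr_end = KASAN_SHADOW_START;
	else
		vaddr_end = CPU_ENTRY_AREA_BASE;
```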
> Thanks,
> Michal
On Mon, Mar 13, 2023 at 02:40:33PM +0100, Andrey Ryabinin <[email protected]> wrote:
> Yes, with vaddr_end = KASAN_SHADOW_START it should work, and
> kaslr_memory_enabled() could then be removed in favor of just kaslr_enabled().
Thanks. FWIW, I've found the cautionary comment at vaddr_end from
commit 1dddd2512511 ("x86/kaslr: Fix the vaddr_end mess"), so I'm not
removing kaslr_memory_enabled() now.
Michal