2019-02-08 16:21:14

by Daniel Axtens

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

Hi Christophe,

I've been attempting to port this to 64-bit Book3e nohash (e6500),
although I think I've ended up with an approach more similar to Aneesh's
much earlier (2015) series for book3s.

Part of this is just due to the changes between 32 and 64 bits - we need
to hack around the discontiguous mappings - but one thing that I'm
particularly puzzled by is what the kasan_early_init is supposed to do.

> +void __init kasan_early_init(void)
> +{
> + unsigned long addr = KASAN_SHADOW_START;
> + unsigned long end = KASAN_SHADOW_END;
> + unsigned long next;
> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> + int i;
> + phys_addr_t pa = __pa(kasan_early_shadow_page);
> +
> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> +
> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> + panic("KASAN not supported with Hash MMU\n");
> +
> + for (i = 0; i < PTRS_PER_PTE; i++)
> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> + kasan_early_shadow_pte + i,
> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> +
> + do {
> + next = pgd_addr_end(addr, end);
> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> + } while (pmd++, addr = next, addr != end);
> +}

As far as I can tell it's mapping the early shadow page, read-only, over
the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
shadow PTE array from the generic code.

I haven't been able to find an answer to why this is in the docs, so I
was wondering if you or anyone else could explain the early part of
kasan init a bit better.

At the moment, I don't do any early init, and like Aneesh's series for
book3s, I end up needing a special flag to disable kasan until after
kasan_init. Also, as with Balbir's series for Radix, some tests didn't
fire, although my missing tests are a superset of his. I suspect the
early init has something to do with these...?

(I'm happy to collate answers into a patch to the docs, btw!)

In the long term I hope to revive Aneesh's and Balbir's series for hash
and radix as well.

Regards,
Daniel

> +
> +static void __init kasan_init_region(struct memblock_region *reg)
> +{
> + void *start = __va(reg->base);
> + void *end = __va(reg->base + reg->size);
> + unsigned long k_start, k_end, k_cur, k_next;
> + pmd_t *pmd;
> +
> + if (start >= end)
> + return;
> +
> + k_start = (unsigned long)kasan_mem_to_shadow(start);
> + k_end = (unsigned long)kasan_mem_to_shadow(end);
> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> +
> + for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> + k_next = pgd_addr_end(k_cur, k_end);
> + if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> + pte_t *new = pte_alloc_one_kernel(&init_mm);
> +
> + if (!new)
> + panic("kasan: pte_alloc_one_kernel() failed");
> + memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> + pmd_populate_kernel(&init_mm, pmd, new);
> + }
> + };
> +
> + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> + void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> + pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> +
> + if (!va)
> + panic("kasan: memblock_alloc() failed");
> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> + pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> + }
> + flush_tlb_kernel_range(k_start, k_end);
> +}
> +
> +void __init kasan_init(void)
> +{
> + struct memblock_region *reg;
> +
> + for_each_memblock(memory, reg)
> + kasan_init_region(reg);
> +
> + kasan_init_tags();
> +
> + /* At this point kasan is fully initialized. Enable error messages */
> + init_task.kasan_depth = 0;
> + pr_info("KASAN init done\n");
> +}
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 33cc6f676fa6..ae7db88b72d6 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -369,6 +369,10 @@ void __init mem_init(void)
> pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
> PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
> #endif /* CONFIG_HIGHMEM */
> +#ifdef CONFIG_KASAN
> + pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
> + KASAN_SHADOW_START, KASAN_SHADOW_END);
> +#endif
> #ifdef CONFIG_NOT_COHERENT_CACHE
> pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
> IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> --
> 2.13.3


2019-02-08 17:17:59

by Christophe Leroy

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

Hi Daniel,

On 08/02/2019 at 17:18, Daniel Axtens wrote:
> Hi Christophe,
>
> I've been attempting to port this to 64-bit Book3e nohash (e6500),
> although I think I've ended up with an approach more similar to Aneesh's
> much earlier (2015) series for book3s.
>
> Part of this is just due to the changes between 32 and 64 bits - we need
> to hack around the discontiguous mappings - but one thing that I'm
> particularly puzzled by is what the kasan_early_init is supposed to do.

It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
reg)' loop.

>
>> +void __init kasan_early_init(void)
>> +{
>> + unsigned long addr = KASAN_SHADOW_START;
>> + unsigned long end = KASAN_SHADOW_END;
>> + unsigned long next;
>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>> + int i;
>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>> +
>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>> +
>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>> + panic("KASAN not supported with Hash MMU\n");
>> +
>> + for (i = 0; i < PTRS_PER_PTE; i++)
>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>> + kasan_early_shadow_pte + i,
>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>> +
>> + do {
>> + next = pgd_addr_end(addr, end);
>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>> + } while (pmd++, addr = next, addr != end);
>> +}
>
> As far as I can tell it's mapping the early shadow page, read-only, over
> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> shadow PTE array from the generic code.
>
> I haven't been able to find an answer to why this is in the docs, so I
> was wondering if you or anyone else could explain the early part of
> kasan init a bit better.

See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
explanation of the shadow.

When shadow is 0, it means the memory area is entirely accessible.

It is necessary to set up a shadow area as soon as possible because all
data accesses check the shadow area, from the beginning (except for a few
files where sanitizing has been disabled in Makefiles).

Until the real shadow area is set, all accesses are granted thanks to the
zero shadow area being full of zeros.
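
To make that concrete, here is a rough sketch of the generic check
(simplified from mm/kasan, names approximate, not the exact kernel code):

/* One shadow byte tracks an 8-byte granule of kernel memory. */
static inline s8 *mem_to_shadow(unsigned long addr)
{
        return (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) +
                      KASAN_SHADOW_OFFSET);
}

/*
 * 0 means the whole granule is accessible, 1..7 means only that many
 * leading bytes are valid, negative values mark redzones/freed memory.
 */
static inline bool byte_is_poisoned(unsigned long addr)
{
        s8 shadow = *mem_to_shadow(addr);

        if (likely(!shadow))
                return false;
        return (s8)(addr & KASAN_SHADOW_MASK) >= shadow;
}

So while every PMD entry still points at the zero-filled early shadow
page, every check hits the 'shadow == 0' fast path and all accesses pass.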

I mainly used the ARM arch as an example when I implemented KASAN for ppc32.

>
> At the moment, I don't do any early init, and like Aneesh's series for
> book3s, I end up needing a special flag to disable kasan until after
> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> fire, although my missing tests are a superset of his. I suspect the
> early init has something to do with these...?

I think you should really focus on establishing a zero shadow area as
early as possible instead of trying to hack the core parts of KASAN.

>
> (I'm happy to collate answers into a patch to the docs, btw!)

We can also have the discussion going via
https://github.com/linuxppc/issues/issues/106

>
> In the long term I hope to revive Aneesh's and Balbir's series for hash
> and radix as well.

Great.

Christophe

>
> Regards,
> Daniel
>
>> +
>> +static void __init kasan_init_region(struct memblock_region *reg)
>> +{
>> + void *start = __va(reg->base);
>> + void *end = __va(reg->base + reg->size);
>> + unsigned long k_start, k_end, k_cur, k_next;
>> + pmd_t *pmd;
>> +
>> + if (start >= end)
>> + return;
>> +
>> + k_start = (unsigned long)kasan_mem_to_shadow(start);
>> + k_end = (unsigned long)kasan_mem_to_shadow(end);
>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>> +
>> + for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>> + k_next = pgd_addr_end(k_cur, k_end);
>> + if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>> + pte_t *new = pte_alloc_one_kernel(&init_mm);
>> +
>> + if (!new)
>> + panic("kasan: pte_alloc_one_kernel() failed");
>> + memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>> + pmd_populate_kernel(&init_mm, pmd, new);
>> + }
>> + };
>> +
>> + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>> + void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>> + pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>> +
>> + if (!va)
>> + panic("kasan: memblock_alloc() failed");
>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>> + pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>> + }
>> + flush_tlb_kernel_range(k_start, k_end);
>> +}
>> +
>> +void __init kasan_init(void)
>> +{
>> + struct memblock_region *reg;
>> +
>> + for_each_memblock(memory, reg)
>> + kasan_init_region(reg);
>> +
>> + kasan_init_tags();
>> +
>> + /* At this point kasan is fully initialized. Enable error messages */
>> + init_task.kasan_depth = 0;
>> + pr_info("KASAN init done\n");
>> +}
>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 33cc6f676fa6..ae7db88b72d6 100644
>> --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>> pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
>> PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>> #endif /* CONFIG_HIGHMEM */
>> +#ifdef CONFIG_KASAN
>> + pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
>> + KASAN_SHADOW_START, KASAN_SHADOW_END);
>> +#endif
>> #ifdef CONFIG_NOT_COHERENT_CACHE
>> pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
>> IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>> --
>> 2.13.3

2019-02-08 17:42:20

by Andrey Konovalov

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>
> Hi Daniel,
>
> On 08/02/2019 at 17:18, Daniel Axtens wrote:
> > Hi Christophe,
> >
> > I've been attempting to port this to 64-bit Book3e nohash (e6500),
> > although I think I've ended up with an approach more similar to Aneesh's
> > much earlier (2015) series for book3s.
> >
> > Part of this is just due to the changes between 32 and 64 bits - we need
> > to hack around the discontiguous mappings - but one thing that I'm
> > particularly puzzled by is what the kasan_early_init is supposed to do.
>
> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
> reg)' loop.
>
> >
> >> +void __init kasan_early_init(void)
> >> +{
> >> + unsigned long addr = KASAN_SHADOW_START;
> >> + unsigned long end = KASAN_SHADOW_END;
> >> + unsigned long next;
> >> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> >> + int i;
> >> + phys_addr_t pa = __pa(kasan_early_shadow_page);
> >> +
> >> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> >> +
> >> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> >> + panic("KASAN not supported with Hash MMU\n");
> >> +
> >> + for (i = 0; i < PTRS_PER_PTE; i++)
> >> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> >> + kasan_early_shadow_pte + i,
> >> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> >> +
> >> + do {
> >> + next = pgd_addr_end(addr, end);
> >> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> >> + } while (pmd++, addr = next, addr != end);
> >> +}
> >
> > As far as I can tell it's mapping the early shadow page, read-only, over
> > the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> > shadow PTE array from the generic code.
> >
> > I haven't been able to find an answer to why this is in the docs, so I
> > was wondering if you or anyone else could explain the early part of
> > kasan init a bit better.
>
> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
> explanation of the shadow.
>
> When shadow is 0, it means the memory area is entirely accessible.
>
> It is necessary to set up a shadow area as soon as possible because all
> data accesses check the shadow area, from the beginning (except for a few
> files where sanitizing has been disabled in Makefiles).
>
> Until the real shadow area is set, all accesses are granted thanks to the
> zero shadow area being full of zeros.

Not entirely correct. kasan_early_init() indeed maps the whole shadow
memory range to the same kasan_early_shadow_page. However as kernel
loads and memory gets allocated this shadow page gets rewritten with
non-zero values by different KASAN allocator hooks. Since these values
come from completely different parts of the kernel, but all land on
the same page, kasan_early_shadow_page's content can be considered
garbage. When KASAN checks memory accesses for validity it detects
these garbage shadow values, but doesn't print any reports, as the
reporting routine bails out on the current->kasan_depth check (which
has the value of 1 initially). Only after kasan_init() completes, when
the proper shadow memory is mapped, current->kasan_depth gets set to 0
and we start reporting bad accesses.
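
In code terms, the gating looks roughly like this (an approximation of
the mm/kasan/report.c logic, not the literal source):

void kasan_report(unsigned long addr, size_t size, bool is_write,
                  unsigned long ip)
{
        /*
         * init_task starts out with kasan_depth == 1, so any hit on
         * garbage early shadow is silently swallowed here until
         * kasan_init() resets the depth to 0.
         */
        if (current->kasan_depth)
                return;

        /* ... print the access info and the shadow dump ... */
}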

>
> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
>
> >
> > At the moment, I don't do any early init, and like Aneesh's series for
> > book3s, I end up needing a special flag to disable kasan until after
> > kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> > fire, although my missing tests are a superset of his. I suspect the
> > early init has something to do with these...?
>
> I think you should really focus on establishing a zero shadow area as
> early as possible instead of trying to hack the core parts of KASAN.
>
> >
> > (I'm happy to collate answers into a patch to the docs, btw!)
>
> We can also have the discussion going via
> https://github.com/linuxppc/issues/issues/106
>
> >
> > In the long term I hope to revive Aneesh's and Balbir's series for hash
> > and radix as well.
>
> Great.
>
> Christophe
>
> >
> > Regards,
> > Daniel
> >
> >> +
> >> +static void __init kasan_init_region(struct memblock_region *reg)
> >> +{
> >> + void *start = __va(reg->base);
> >> + void *end = __va(reg->base + reg->size);
> >> + unsigned long k_start, k_end, k_cur, k_next;
> >> + pmd_t *pmd;
> >> +
> >> + if (start >= end)
> >> + return;
> >> +
> >> + k_start = (unsigned long)kasan_mem_to_shadow(start);
> >> + k_end = (unsigned long)kasan_mem_to_shadow(end);
> >> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> >> +
> >> + for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> >> + k_next = pgd_addr_end(k_cur, k_end);
> >> + if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> >> + pte_t *new = pte_alloc_one_kernel(&init_mm);
> >> +
> >> + if (!new)
> >> + panic("kasan: pte_alloc_one_kernel() failed");
> >> + memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> >> + pmd_populate_kernel(&init_mm, pmd, new);
> >> + }
> >> + };
> >> +
> >> + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> >> + void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> >> + pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> >> +
> >> + if (!va)
> >> + panic("kasan: memblock_alloc() failed");
> >> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> >> + pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> >> + }
> >> + flush_tlb_kernel_range(k_start, k_end);
> >> +}
> >> +
> >> +void __init kasan_init(void)
> >> +{
> >> + struct memblock_region *reg;
> >> +
> >> + for_each_memblock(memory, reg)
> >> + kasan_init_region(reg);
> >> +
> >> + kasan_init_tags();
> >> +
> >> + /* At this point kasan is fully initialized. Enable error messages */
> >> + init_task.kasan_depth = 0;
> >> + pr_info("KASAN init done\n");
> >> +}
> >> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> >> index 33cc6f676fa6..ae7db88b72d6 100644
> >> --- a/arch/powerpc/mm/mem.c
> >> +++ b/arch/powerpc/mm/mem.c
> >> @@ -369,6 +369,10 @@ void __init mem_init(void)
> >> pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
> >> PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
> >> #endif /* CONFIG_HIGHMEM */
> >> +#ifdef CONFIG_KASAN
> >> + pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
> >> + KASAN_SHADOW_START, KASAN_SHADOW_END);
> >> +#endif
> >> #ifdef CONFIG_NOT_COHERENT_CACHE
> >> pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
> >> IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> >> --
> >> 2.13.3

2019-02-09 11:57:30

by Christophe Leroy

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

Hi Andrey,

On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>>
>> Hi Daniel,
>>
>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>> Hi Christophe,
>>>
>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>> although I think I've ended up with an approach more similar to Aneesh's
>>> much earlier (2015) series for book3s.
>>>
>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>> to hack around the discontiguous mappings - but one thing that I'm
>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>
>> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
>> reg)' loop.
>>
>>>
>>>> +void __init kasan_early_init(void)
>>>> +{
>>>> + unsigned long addr = KASAN_SHADOW_START;
>>>> + unsigned long end = KASAN_SHADOW_END;
>>>> + unsigned long next;
>>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>> + int i;
>>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>> +
>>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>> +
>>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>> + panic("KASAN not supported with Hash MMU\n");
>>>> +
>>>> + for (i = 0; i < PTRS_PER_PTE; i++)
>>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>> + kasan_early_shadow_pte + i,
>>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>> +
>>>> + do {
>>>> + next = pgd_addr_end(addr, end);
>>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>> + } while (pmd++, addr = next, addr != end);
>>>> +}
>>>
>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>> shadow PTE array from the generic code.
>>>
>>> I haven't been able to find an answer to why this is in the docs, so I
>>> was wondering if you or anyone else could explain the early part of
>>> kasan init a bit better.
>>
>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>> explanation of the shadow.
>>
>> When shadow is 0, it means the memory area is entirely accessible.
>>
>> It is necessary to set up a shadow area as soon as possible because all
>> data accesses check the shadow area, from the beginning (except for a few
>> files where sanitizing has been disabled in Makefiles).
>>
>> Until the real shadow area is set, all accesses are granted thanks to the
>> zero shadow area being full of zeros.
>
> Not entirely correct. kasan_early_init() indeed maps the whole shadow
> memory range to the same kasan_early_shadow_page. However as kernel
> loads and memory gets allocated this shadow page gets rewritten with
> non-zero values by different KASAN allocator hooks. Since these values
> come from completely different parts of the kernel, but all land on
> the same page, kasan_early_shadow_page's content can be considered
> garbage. When KASAN checks memory accesses for validity it detects
> these garbage shadow values, but doesn't print any reports, as the
> reporting routine bails out on the current->kasan_depth check (which
> has the value of 1 initially). Only after kasan_init() completes, when
> the proper shadow memory is mapped, current->kasan_depth gets set to 0
> and we start reporting bad accesses.

That's surprising, because in the early phase I map the shadow area
read-only, so I do not expect it to get modified unless RO protection is
failing for some reason.

Next week I'll add a test in early_init() to check the content of the
early shadow area.

Christophe

>
>>
>> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
>>
>>>
>>> At the moment, I don't do any early init, and like Aneesh's series for
>>> book3s, I end up needing a special flag to disable kasan until after
>>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
>>> fire, although my missing tests are a superset of his. I suspect the
>>> early init has something to do with these...?
>>
>> I think you should really focus on establishing a zero shadow area as
>> early as possible instead of trying to hack the core parts of KASAN.
>>
>>>
>>> (I'm happy to collate answers into a patch to the docs, btw!)
>>
>> We can also have the discussion going via
>> https://github.com/linuxppc/issues/issues/106
>>
>>>
>>> In the long term I hope to revive Aneesh's and Balbir's series for hash
>>> and radix as well.
>>
>> Great.
>>
>> Christophe
>>
>>>
>>> Regards,
>>> Daniel
>>>
>>>> +
>>>> +static void __init kasan_init_region(struct memblock_region *reg)
>>>> +{
>>>> + void *start = __va(reg->base);
>>>> + void *end = __va(reg->base + reg->size);
>>>> + unsigned long k_start, k_end, k_cur, k_next;
>>>> + pmd_t *pmd;
>>>> +
>>>> + if (start >= end)
>>>> + return;
>>>> +
>>>> + k_start = (unsigned long)kasan_mem_to_shadow(start);
>>>> + k_end = (unsigned long)kasan_mem_to_shadow(end);
>>>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>>>> +
>>>> + for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>>>> + k_next = pgd_addr_end(k_cur, k_end);
>>>> + if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>>>> + pte_t *new = pte_alloc_one_kernel(&init_mm);
>>>> +
>>>> + if (!new)
>>>> + panic("kasan: pte_alloc_one_kernel() failed");
>>>> + memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>>>> + pmd_populate_kernel(&init_mm, pmd, new);
>>>> + }
>>>> + };
>>>> +
>>>> + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>>>> + void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>>>> + pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>>> +
>>>> + if (!va)
>>>> + panic("kasan: memblock_alloc() failed");
>>>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>>>> + pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>>>> + }
>>>> + flush_tlb_kernel_range(k_start, k_end);
>>>> +}
>>>> +
>>>> +void __init kasan_init(void)
>>>> +{
>>>> + struct memblock_region *reg;
>>>> +
>>>> + for_each_memblock(memory, reg)
>>>> + kasan_init_region(reg);
>>>> +
>>>> + kasan_init_tags();
>>>> +
>>>> + /* At this point kasan is fully initialized. Enable error messages */
>>>> + init_task.kasan_depth = 0;
>>>> + pr_info("KASAN init done\n");
>>>> +}
>>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>>> index 33cc6f676fa6..ae7db88b72d6 100644
>>>> --- a/arch/powerpc/mm/mem.c
>>>> +++ b/arch/powerpc/mm/mem.c
>>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>>> pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
>>>> PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>>> #endif /* CONFIG_HIGHMEM */
>>>> +#ifdef CONFIG_KASAN
>>>> + pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
>>>> + KASAN_SHADOW_START, KASAN_SHADOW_END);
>>>> +#endif
>>>> #ifdef CONFIG_NOT_COHERENT_CACHE
>>>> pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
>>>> IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>>>> --
>>>> 2.13.3



2019-02-11 12:26:15

by Andrey Konovalov

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
<[email protected]> wrote:
>
> Hi Andrey,
>
> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> > On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
> >>
> >> Hi Daniel,
> >>
> >> On 08/02/2019 at 17:18, Daniel Axtens wrote:
> >>> Hi Christophe,
> >>>
> >>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
> >>> although I think I've ended up with an approach more similar to Aneesh's
> >>> much earlier (2015) series for book3s.
> >>>
> >>> Part of this is just due to the changes between 32 and 64 bits - we need
> >>> to hack around the discontiguous mappings - but one thing that I'm
> >>> particularly puzzled by is what the kasan_early_init is supposed to do.
> >>
> >> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
> >> reg)' loop.
> >>
> >>>
> >>>> +void __init kasan_early_init(void)
> >>>> +{
> >>>> + unsigned long addr = KASAN_SHADOW_START;
> >>>> + unsigned long end = KASAN_SHADOW_END;
> >>>> + unsigned long next;
> >>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
> >>>> + int i;
> >>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
> >>>> +
> >>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
> >>>> +
> >>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
> >>>> + panic("KASAN not supported with Hash MMU\n");
> >>>> +
> >>>> + for (i = 0; i < PTRS_PER_PTE; i++)
> >>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
> >>>> + kasan_early_shadow_pte + i,
> >>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
> >>>> +
> >>>> + do {
> >>>> + next = pgd_addr_end(addr, end);
> >>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
> >>>> + } while (pmd++, addr = next, addr != end);
> >>>> +}
> >>>
> >>> As far as I can tell it's mapping the early shadow page, read-only, over
> >>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> >>> shadow PTE array from the generic code.
> >>>
> >>> I haven't been able to find an answer to why this is in the docs, so I
> >>> was wondering if you or anyone else could explain the early part of
> >>> kasan init a bit better.
> >>
> >> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
> >> explanation of the shadow.
> >>
> >> When shadow is 0, it means the memory area is entirely accessible.
> >>
> >> It is necessary to set up a shadow area as soon as possible because all
> >> data accesses check the shadow area, from the beginning (except for a few
> >> files where sanitizing has been disabled in Makefiles).
> >>
> >> Until the real shadow area is set, all accesses are granted thanks to the
> >> zero shadow area being full of zeros.
> >
> > Not entirely correct. kasan_early_init() indeed maps the whole shadow
> > memory range to the same kasan_early_shadow_page. However as kernel
> > loads and memory gets allocated this shadow page gets rewritten with
> > non-zero values by different KASAN allocator hooks. Since these values
> > come from completely different parts of the kernel, but all land on
> > the same page, kasan_early_shadow_page's content can be considered
> > garbage. When KASAN checks memory accesses for validity it detects
> > these garbage shadow values, but doesn't print any reports, as the
> > reporting routine bails out on the current->kasan_depth check (which
> > has the value of 1 initially). Only after kasan_init() completes, when
> > the proper shadow memory is mapped, current->kasan_depth gets set to 0
> > and we start reporting bad accesses.
>
> That's surprising, because in the early phase I map the shadow area
> read-only, so I do not expect it to get modified unless RO protection is
> failing for some reason.

Actually it might be that the allocator hooks don't modify shadow at
this point, as the allocator is not yet initialized. However stack
should be getting poisoned and unpoisoned from the very start. But the
generic statement that early shadow gets dirtied should be correct.
Might it be that you don't use stack instrumentation?

>
> Next week I'll add a test in early_init() to check the content of the
> early shadow area.
>
> Christophe
>
> >
> >>
> >> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
> >>
> >>>
> >>> At the moment, I don't do any early init, and like Aneesh's series for
> >>> book3s, I end up needing a special flag to disable kasan until after
> >>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> >>> fire, although my missing tests are a superset of his. I suspect the
> >>> early init has something to do with these...?
> >>
> >> I think you should really focus on establishing a zero shadow area as
> >> early as possible instead of trying to hack the core parts of KASAN.
> >>
> >>>
> >>> (I'm happy to collate answers into a patch to the docs, btw!)
> >>
> >> We can also have the discussion going via
> >> https://github.com/linuxppc/issues/issues/106
> >>
> >>>
> >>> In the long term I hope to revive Aneesh's and Balbir's series for hash
> >>> and radix as well.
> >>
> >> Great.
> >>
> >> Christophe
> >>
> >>>
> >>> Regards,
> >>> Daniel
> >>>
> >>>> +
> >>>> +static void __init kasan_init_region(struct memblock_region *reg)
> >>>> +{
> >>>> + void *start = __va(reg->base);
> >>>> + void *end = __va(reg->base + reg->size);
> >>>> + unsigned long k_start, k_end, k_cur, k_next;
> >>>> + pmd_t *pmd;
> >>>> +
> >>>> + if (start >= end)
> >>>> + return;
> >>>> +
> >>>> + k_start = (unsigned long)kasan_mem_to_shadow(start);
> >>>> + k_end = (unsigned long)kasan_mem_to_shadow(end);
> >>>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
> >>>> +
> >>>> + for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
> >>>> + k_next = pgd_addr_end(k_cur, k_end);
> >>>> + if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
> >>>> + pte_t *new = pte_alloc_one_kernel(&init_mm);
> >>>> +
> >>>> + if (!new)
> >>>> + panic("kasan: pte_alloc_one_kernel() failed");
> >>>> + memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
> >>>> + pmd_populate_kernel(&init_mm, pmd, new);
> >>>> + }
> >>>> + };
> >>>> +
> >>>> + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
> >>>> + void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> >>>> + pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
> >>>> +
> >>>> + if (!va)
> >>>> + panic("kasan: memblock_alloc() failed");
> >>>> + pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
> >>>> + pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
> >>>> + }
> >>>> + flush_tlb_kernel_range(k_start, k_end);
> >>>> +}
> >>>> +
> >>>> +void __init kasan_init(void)
> >>>> +{
> >>>> + struct memblock_region *reg;
> >>>> +
> >>>> + for_each_memblock(memory, reg)
> >>>> + kasan_init_region(reg);
> >>>> +
> >>>> + kasan_init_tags();
> >>>> +
> >>>> + /* At this point kasan is fully initialized. Enable error messages */
> >>>> + init_task.kasan_depth = 0;
> >>>> + pr_info("KASAN init done\n");
> >>>> +}
> >>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> >>>> index 33cc6f676fa6..ae7db88b72d6 100644
> >>>> --- a/arch/powerpc/mm/mem.c
> >>>> +++ b/arch/powerpc/mm/mem.c
> >>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
> >>>> pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
> >>>> PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
> >>>> #endif /* CONFIG_HIGHMEM */
> >>>> +#ifdef CONFIG_KASAN
> >>>> + pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
> >>>> + KASAN_SHADOW_START, KASAN_SHADOW_END);
> >>>> +#endif
> >>>> #ifdef CONFIG_NOT_COHERENT_CACHE
> >>>> pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
> >>>> IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
> >>>> --
> >>>> 2.13.3

2019-02-11 16:30:08

by Andrey Ryabinin

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support



On 2/11/19 3:25 PM, Andrey Konovalov wrote:
> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
> <[email protected]> wrote:
>>
>> Hi Andrey,
>>
>> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>>>>
>>>> Hi Daniel,
>>>>
>>>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>>>> Hi Christophe,
>>>>>
>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>> much earlier (2015) series for book3s.
>>>>>
>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>
>>>> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
>>>> reg)' loop.
>>>>
>>>>>
>>>>>> +void __init kasan_early_init(void)
>>>>>> +{
>>>>>> + unsigned long addr = KASAN_SHADOW_START;
>>>>>> + unsigned long end = KASAN_SHADOW_END;
>>>>>> + unsigned long next;
>>>>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>> + int i;
>>>>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>> +
>>>>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>> +
>>>>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>> + panic("KASAN not supported with Hash MMU\n");
>>>>>> +
>>>>>> + for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>> + kasan_early_shadow_pte + i,
>>>>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>> +
>>>>>> + do {
>>>>>> + next = pgd_addr_end(addr, end);
>>>>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>> + } while (pmd++, addr = next, addr != end);
>>>>>> +}
>>>>>
>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>> shadow PTE array from the generic code.
>>>>>
>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>> was wondering if you or anyone else could explain the early part of
>>>>> kasan init a bit better.
>>>>
>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>> explanation of the shadow.
>>>>
>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>
>>>> It is necessary to set up a shadow area as soon as possible because all
>>>> data accesses check the shadow area, from the beginning (except for a few
>>>> files where sanitizing has been disabled in Makefiles).
>>>>
>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>> zero shadow area being full of zeros.
>>>
>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>> memory range to the same kasan_early_shadow_page. However as kernel
>>> loads and memory gets allocated this shadow page gets rewritten with
>>> non-zero values by different KASAN allocator hooks. Since these values
>>> come from completely different parts of the kernel, but all land on
>>> the same page, kasan_early_shadow_page's content can be considered
>>> garbage. When KASAN checks memory accesses for validity it detects
>>> these garbage shadow values, but doesn't print any reports, as the
>>> reporting routine bails out on the current->kasan_depth check (which
>>> has the value of 1 initially). Only after kasan_init() completes, when
>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>> and we start reporting bad accesses.
>>
>> That's surprising, because in the early phase I map the shadow area
>> read-only, so I do not expect it to get modified unless RO protection is
>> failing for some reason.
>
> Actually it might be that the allocator hooks don't modify shadow at
> this point, as the allocator is not yet initialized. However stack
> should be getting poisoned and unpoisoned from the very start. But the
> generic statement that early shadow gets dirtied should be correct.
> Might it be that you don't use stack instrumentation?
>

Yes, stack instrumentation is not used here, because shadow offset which we pass to
the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
just falls back to CFLAGS_KASAN_MINIMAL, which is outline and without stack instrumentation.

Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
to shadow memory in function prologue/epilogue.
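
To illustrate why the mapping must be writable, here is a hand-written
approximation of what the compiler emits with stack instrumentation
(magic values and shadow layout simplified; mem_to_shadow() is the
hypothetical helper from the sketch earlier in the thread):

void instrumented_func(void)
{
        char buf[8];
        u8 *shadow = (u8 *)mem_to_shadow((unsigned long)buf);

        /*
         * Prologue: poison the redzones around buf. These are plain
         * stores to shadow memory, so a read-only early shadow page
         * faults here.
         */
        shadow[-1] = 0xF1;      /* left redzone  */
        shadow[0]  = 0x00;      /* buf itself    */
        shadow[1]  = 0xF3;      /* right redzone */

        /* ... function body, each access checked against shadow ... */

        /* Epilogue: unpoison the frame before returning. */
        shadow[-1] = 0x00;
        shadow[1]  = 0x00;
}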

2019-02-12 01:09:07

by Daniel Axtens

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

Andrey Ryabinin <[email protected]> writes:

> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>> <[email protected]> wrote:
>>>
>>> Hi Andrey,
>>>
>>> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>>>>>
>>>>> Hi Daniel,
>>>>>
>>>>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>>>>> Hi Christophe,
>>>>>>
>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>> much earlier (2015) series for book3s.
>>>>>>
>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>
>>>>> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
>>>>> reg)' loop.
>>>>>
>>>>>>
>>>>>>> +void __init kasan_early_init(void)
>>>>>>> +{
>>>>>>> + unsigned long addr = KASAN_SHADOW_START;
>>>>>>> + unsigned long end = KASAN_SHADOW_END;
>>>>>>> + unsigned long next;
>>>>>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>> + int i;
>>>>>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>> +
>>>>>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>> +
>>>>>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>> + panic("KASAN not supported with Hash MMU\n");
>>>>>>> +
>>>>>>> + for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>> + kasan_early_shadow_pte + i,
>>>>>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>> +
>>>>>>> + do {
>>>>>>> + next = pgd_addr_end(addr, end);
>>>>>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>> + } while (pmd++, addr = next, addr != end);
>>>>>>> +}
>>>>>>
>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>> shadow PTE array from the generic code.
>>>>>>
>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>> kasan init a bit better.
>>>>>
>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>> explanation of the shadow.
>>>>>
>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>
>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>
>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>> zero shadow area being full of zeros.
>>>>
>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>> come from completely different parts of the kernel, but all land on
>>>> the same page, kasan_early_shadow_page's content can be considered
>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>> these garbage shadow values, but doesn't print any reports, as the
>>>> reporting routine bails out on the current->kasan_depth check (which
>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>> and we start reporting bad accesses.
>>>
>>> That's surprising, because in the early phase I map the shadow area
>>> read-only, so I do not expect it to get modified unless RO protection is
>>> failing for some reason.
>>
>> Actually it might be that the allocator hooks don't modify shadow at
>> this point, as the allocator is not yet initialized. However stack
>> should be getting poisoned and unpoisoned from the very start. But the
>> generic statement that early shadow gets dirtied should be correct.
>> Might it be that you don't use stack instrumentation?
>>
>
> Yes, stack instrumentation is not used here, because shadow offset which we pass to
> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
> just falls back to CFLAGS_KASAN_MINIMAL, which is outline and without stack instrumentation.
>
> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
> to shadow memory in function prologue/epilogue.

Hmm. Is this limitation just that compilers have not implemented
out-of-line support for stack instrumentation, or is there a deeper
reason that stack/global instrumentation relies upon inline
instrumentation?

I ask because it's very common on ppc64 to have the virtual address
space split up into discontiguous blocks. I know this means we lose
inline instrumentation, but I didn't realise we'd also lose stack and
global instrumentation...

I wonder if it would be worth, in the distant future, trying to
implement a smarter scheme in compilers where we could insert more
complex inline mapping schemes.

Regards,
Daniel

2019-02-12 12:01:42

by Christophe Leroy

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support



On 12/02/2019 at 02:08, Daniel Axtens wrote:
> Andrey Ryabinin <[email protected]> writes:
>
>> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>>> <[email protected]> wrote:
>>>>
>>>> Hi Andrey,
>>>>
>>>> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
>>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>>>>>>
>>>>>> Hi Daniel,
>>>>>>
>>>>>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>>>>>> Hi Christophe,
>>>>>>>
>>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>>> much earlier (2015) series for book3s.
>>>>>>>
>>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>>
>>>>>> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
>>>>>> reg)' loop.
>>>>>>
>>>>>>>
>>>>>>>> +void __init kasan_early_init(void)
>>>>>>>> +{
>>>>>>>> + unsigned long addr = KASAN_SHADOW_START;
>>>>>>>> + unsigned long end = KASAN_SHADOW_END;
>>>>>>>> + unsigned long next;
>>>>>>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>>> + int i;
>>>>>>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>>> +
>>>>>>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>>> +
>>>>>>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>>> + panic("KASAN not supported with Hash MMU\n");
>>>>>>>> +
>>>>>>>> + for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>>> + kasan_early_shadow_pte + i,
>>>>>>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>>> +
>>>>>>>> + do {
>>>>>>>> + next = pgd_addr_end(addr, end);
>>>>>>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>>> + } while (pmd++, addr = next, addr != end);
>>>>>>>> +}
>>>>>>>
>>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>>> shadow PTE array from the generic code.
>>>>>>>
>>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>>> kasan init a bit better.
>>>>>>
>>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>>> explanation of the shadow.
>>>>>>
>>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>>
>>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>>
>>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>>> zero shadow area being full of zeros.
>>>>>
>>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>>> come from completely different parts of the kernel, but all land on
>>>>> the same page, kasan_early_shadow_page's content can be considered
>>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>>> these garbage shadow values, but doesn't print any reports, as the
>>>>> reporting routine bails out on the current->kasan_depth check (which
>>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>>> and we start reporting bad accesses.
>>>>
>>>> That's surprising, because in the early phase I map the shadow area
>>>> read-only, so I do not expect it to get modified unless RO protection is
>>>> failing for some reason.
>>>
>>> Actually it might be that the allocator hooks don't modify shadow at
>>> this point, as the allocator is not yet initialized. However stack
>>> should be getting poisoned and unpoisoned from the very start. But the
>>> generic statement that early shadow gets dirtied should be correct.
>>> Might it be that you don't use stack instrumentation?
>>>
>>
>> Yes, stack instrumentation is not used here, because shadow offset which we pass to
>> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
>> just falls back to CFLAGS_KASAN_MINIMAL, which is outline and without stack instrumentation.
>>
>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>> to shadow memory in function prologue/epilogue.
>
> Hmm. Is this limitation just that compilers have not implemented
> out-of-line support for stack instrumentation, or is there a deeper
> reason that stack/global instrumentation relies upon inline
> instrumentation?

No, it looks like as soon as we define KASAN_SHADOW_OFFSET in Makefile
in addition to asm/kasan.h, stack instrumentation works with out-of-line.

I'll send series v5 soon.

Christophe

>
> I ask because it's very common on ppc64 to have the virtual address
> space split up into discontiguous blocks. I know this means we lose
> inline instrumentation, but I didn't realise we'd also lose stack and
> global instrumentation...
>
> I wonder if it would be worth, in the distant future, trying to
> implement a smarter scheme in compilers where we could insert more
> complex inline mapping schemes.
>
> Regards,
> Daniel
>

2019-02-12 12:10:06

by Andrey Ryabinin

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support



On 2/12/19 4:08 AM, Daniel Axtens wrote:
> Andrey Ryabinin <[email protected]> writes:
>
>> On 2/11/19 3:25 PM, Andrey Konovalov wrote:
>>> On Sat, Feb 9, 2019 at 12:55 PM christophe leroy
>>> <[email protected]> wrote:
>>>>
>>>> Hi Andrey,
>>>>
>>>> On 08/02/2019 at 18:40, Andrey Konovalov wrote:
>>>>> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <[email protected]> wrote:
>>>>>>
>>>>>> Hi Daniel,
>>>>>>
>>>>>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>>>>>> Hi Christophe,
>>>>>>>
>>>>>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>>>>>> although I think I've ended up with an approach more similar to Aneesh's
>>>>>>> much earlier (2015) series for book3s.
>>>>>>>
>>>>>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>>>>>> to hack around the discontiguous mappings - but one thing that I'm
>>>>>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>>>>>
>>>>>> It shouldn't be a problem as my patch uses a 'for_each_memblock(memory,
>>>>>> reg)' loop.
>>>>>>
>>>>>>>
>>>>>>>> +void __init kasan_early_init(void)
>>>>>>>> +{
>>>>>>>> + unsigned long addr = KASAN_SHADOW_START;
>>>>>>>> + unsigned long end = KASAN_SHADOW_END;
>>>>>>>> + unsigned long next;
>>>>>>>> + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>>>>>> + int i;
>>>>>>>> + phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>>>>>> +
>>>>>>>> + BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>>>>>> +
>>>>>>>> + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>>>>>> + panic("KASAN not supported with Hash MMU\n");
>>>>>>>> +
>>>>>>>> + for (i = 0; i < PTRS_PER_PTE; i++)
>>>>>>>> + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>>>>>> + kasan_early_shadow_pte + i,
>>>>>>>> + pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>>>>>> +
>>>>>>>> + do {
>>>>>>>> + next = pgd_addr_end(addr, end);
>>>>>>>> + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>>>>>> + } while (pmd++, addr = next, addr != end);
>>>>>>>> +}
>>>>>>>
>>>>>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>>>>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>>>>>> shadow PTE array from the generic code.
>>>>>>>
>>>>>>> I haven't been able to find an answer to why this is in the docs, so I
>>>>>>> was wondering if you or anyone else could explain the early part of
>>>>>>> kasan init a bit better.
>>>>>>
>>>>>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>>>>>> explanation of the shadow.
>>>>>>
>>>>>> When shadow is 0, it means the memory area is entirely accessible.
>>>>>>
>>>>>> It is necessary to set up a shadow area as soon as possible because all
>>>>>> data accesses check the shadow area, from the beginning (except for a few
>>>>>> files where sanitizing has been disabled in Makefiles).
>>>>>>
>>>>>> Until the real shadow area is set, all accesses are granted thanks to the
>>>>>> zero shadow area being full of zeros.
>>>>>
>>>>> Not entirely correct. kasan_early_init() indeed maps the whole shadow
>>>>> memory range to the same kasan_early_shadow_page. However as kernel
>>>>> loads and memory gets allocated this shadow page gets rewritten with
>>>>> non-zero values by different KASAN allocator hooks. Since these values
>>>>> come from completely different parts of the kernel, but all land on
>>>>> the same page, kasan_early_shadow_page's content can be considered
>>>>> garbage. When KASAN checks memory accesses for validity it detects
>>>>> these garbage shadow values, but doesn't print any reports, as the
>>>>> reporting routine bails out on the current->kasan_depth check (which
>>>>> has the value of 1 initially). Only after kasan_init() completes, when
>>>>> the proper shadow memory is mapped, current->kasan_depth gets set to 0
>>>>> and we start reporting bad accesses.
>>>>
>>>> That's surprising, because in the early phase I map the shadow area
>>>> read-only, so I do not expect it to get modified unless RO protection is
>>>> failing for some reason.
>>>
>>> Actually it might be that the allocator hooks don't modify shadow at
>>> this point, as the allocator is not yet initialized. However stack
>>> should be getting poisoned and unpoisoned from the very start. But the
>>> generic statement that early shadow gets dirtied should be correct.
>>> Might it be that you don't use stack instrumentation?
>>>
>>
>> Yes, stack instrumentation is not used here, because shadow offset which we pass to
>> the -fasan-shadow-offset= cflag is not specified here. So the logic in scripts/Makefile.kasan
>> just falls back to CFLAGS_KASAN_MINIMAL, which is outline and without stack instrumentation.
>>
>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>> to shadow memory in function prologue/epilogue.
>
> Hmm. Is this limitation just that compilers have not implemented
> out-of-line support for stack instrumentation, or is there a deeper
> reason that stack/global instrumentation relies upon inline
> instrumentation?
>

Yes, it simply wasn't implemented in compilers. Stack [un]poisoning code is always inlined.

But globals are the opposite of that: they are all poisoned out-of-line via the __asan_register_globals() call.

> I ask because it's very common on ppc64 to have the virtual address
> space split up into discontiguous blocks. I know this means we lose
> inline instrumentation, but I didn't realise we'd also lose stack and
> global instrumentation...
>
> I wonder if it would be worth, in the distant future, trying to
> implement a smarter scheme in compilers where we could insert more
> complex inline mapping schemes.
>

I'd say it depends on the performance boost that inline might give for those complex inline schemes.
The whole inline instrumentation thing exists only because it gives better performance.


> Regards,
> Daniel
>

2019-02-12 12:14:58

by Andrey Ryabinin

Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

On 2/12/19 2:38 PM, Christophe Leroy wrote:
>
>
> On 12/02/2019 at 02:08, Daniel Axtens wrote:
>> Andrey Ryabinin <[email protected]> writes:
>>
>>>
>>> Christophe, you can specify KASAN_SHADOW_OFFSET either in Kconfig (e.g. x86_64) or
>>> in Makefile (e.g. arm64). And make early mapping writable, because compiler generated code will write
>>> to shadow memory in function prologue/epilogue.
>>
>> Hmm. Is this limitation just that compilers have not implemented
>> out-of-line support for stack instrumentation, or is there a deeper
>> reason that stack/global instrumentation relies upon inline
>> instrumentation?
>
> No, it looks like as soon as we define KASAN_SHADOW_OFFSET in Makefile in addition to asm/kasan.h, stack instrumentation works with out-of-line.
>


I think you're confusing two different things.
CONFIG_KASAN_INLINE/CONFIG_KASAN_OUTLINE only affects the generation of the code that checks memory accesses,
i.e. whether we call __asan_loadx()/__asan_storex() or directly insert code checking shadow memory.

But with stack instrumentation we also need to poison redzones around stack variables and unpoison them
when we leave the function. That poisoning/unpoisoning thing is always inlined. Currently there is no option to make it out-of-line.

So you can have stack instrumentation with KASAN_OUTLINE, but it just means that checking shadow memory on memory access will be outlined,
while poisoning/unpoisoning stack redzones will remain inlined.
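
Roughly, for a 4-byte store the two modes come down to this (a
hand-written approximation, not actual compiler output):

static void store_val_outline(u32 *p, u32 val)
{
        /* KASAN_OUTLINE: the check is a call into the kasan runtime. */
        __asan_store4((unsigned long)p);
        *p = val;
}

static void store_val_inline(u32 *p, u32 val)
{
        /*
         * KASAN_INLINE: the check is open-coded at the access site,
         * which is why it needs a compile-time constant shadow offset.
         */
        s8 shadow = *(s8 *)(((unsigned long)p >> 3) + KASAN_SHADOW_OFFSET);

        if (unlikely(shadow && (s8)(((unsigned long)p & 7) + 3) >= shadow))
                __asan_report_store4_noabort((unsigned long)p);
        *p = val;
}

In both modes the stack redzone poisoning in the prologue/epilogue stays
inline, and globals are registered through the out-of-line
__asan_register_globals() call.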