Date: Sat, 23 Dec 2017 17:22:04 -0800
From: Jakub Kicinski
To: Andrey Ryabinin
Cc: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, LKML
Subject: Re: linux/master crashes on boot with KASAN=y
Message-ID: <20171223172204.1eb623cd@cakuba.netronome.com>
In-Reply-To: <41c68406-ad05-1db7-b0dd-a2e616448ee1@virtuozzo.com>
References: <20171223000143.0af3366d@cakuba.netronome.com> <41c68406-ad05-1db7-b0dd-a2e616448ee1@virtuozzo.com>

On Sat, 23 Dec 2017 15:41:27 +0300, Andrey Ryabinin wrote:
> On 12/23/2017 11:01 AM, Jakub Kicinski wrote:
> > Hi!
> >
> > I bisected a crash on boot to this:
> >
> > commit 21506525fb8ddb0342f2a2370812d47f6a1f3833 (HEAD, refs/bisect/bad)
> > Author: Andy Lutomirski
> > Date:   Mon Dec 4 15:07:16 2017 +0100
> >
> >     x86/kasan/64: Teach KASAN about the cpu_entry_area
>
> Thanks.
> There is nothing wrong with this patch, it just uncovered an older bug.
> The actual problem comes from f06bdd4001c2 ("x86/mm: Adapt MODULES_END
> based on fixmap section size"), which made kasan_mem_to_shadow(MODULES_END)
> potentially unaligned to a page boundary. And when we feed an unaligned
> address to kasan_populate_zero_shadow(), it doesn't do the right thing.
>
> Could you tell me if the fix below works for you?

Works for me, thank you!
Tested-by: Jakub Kicinski

> arch/x86/include/asm/kasan.h            | 8 ++++++++
> arch/x86/include/asm/pgtable_64_types.h | 4 +++-
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index b577dd0916aa..0c580e4b2ccc 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -5,6 +5,14 @@
>  #include <linux/const.h>
>  #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>
> +#ifndef KASAN_SHADOW_SCALE_SHIFT
> +# ifdef CONFIG_KASAN
> +# define KASAN_SHADOW_SCALE_SHIFT 3
> +# else
> +# define KASAN_SHADOW_SCALE_SHIFT 0
> +# endif
> +#endif
> +
>  /*
>   * Compiler uses shadow offset assuming that addresses start
>   * from 0. Kernel addresses don't start from 0, so shadow
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 6d5f45dcd4a1..d34a90d6c374 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -6,6 +6,7 @@
>
>  #ifndef __ASSEMBLY__
>  #include <linux/types.h>
> +#include <asm/kasan.h>
>  #include <asm/kaslr.h>
>
>  /*
> @@ -96,7 +97,8 @@ typedef struct { pteval_t pte; } pte_t;
>  #define VMALLOC_END	(VMALLOC_START + _AC((VMALLOC_SIZE_TB << 40) - 1, UL))
>  #define MODULES_VADDR	(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
>  /* The module sections ends with the start of the fixmap */
> -#define MODULES_END	__fix_to_virt(__end_of_fixed_addresses + 1)
> +#define MODULES_END	(__fix_to_virt(__end_of_fixed_addresses + 1) & \
> +			 ~((PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) - 1))
>  #define MODULES_LEN	(MODULES_END - MODULES_VADDR)
>  #define ESPFIX_PGD_ENTRY _AC(-2, UL)
>  #define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << P4D_SHIFT)