From: Daniel Vacek
Date: Fri, 30 Mar 2018 11:08:50 +0200
Subject: Re: [PATCH v4 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64
To: Jia He
Cc: Russell King, Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman,
	Will Deacon, Mark Rutland, "H. Peter Anvin", Pavel Tatashin,
	Daniel Jordan, AKASHI Takahiro, Gioh Kim, Steven Sistare,
	Eugeniu Rosca, Vlastimil Babka, open list, linux-mm@kvack.org,
	James Morse, Ard Biesheuvel, Steve Capper, Thomas Gleixner,
	Ingo Molnar, x86@kernel.org, Greg Kroah-Hartman, Kate Stewart,
	Philippe Ombredanne, Johannes Weiner, Kemi Wang, Petr Tesarik,
	YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov,
	richard.weiyang@gmail.com, Jia He
In-Reply-To: <1522397755-33393-2-git-send-email-hejianet@gmail.com>
References: <1522397755-33393-1-git-send-email-hejianet@gmail.com>
	<1522397755-33393-2-git-send-email-hejianet@gmail.com>
List-ID: linux-kernel@vger.kernel.org

On Fri, Mar 30, 2018 at 10:15 AM, Jia He wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(), but it caused
> a possible panic, so Daniel Vacek later reverted it.
>
> However, as Daniel Vacek suggested, it is fine to use memblock to skip
> gaps and find the next valid pfn when CONFIG_HAVE_ARCH_PFN_VALID is set.
>
> On arm and arm64, memblock is used by default, but the generic version of
> pfn_valid() is based on mem sections, and memblock_next_valid_pfn() does
> not always return the next valid pfn: it can skip further ahead, so some
> valid pfns were treated as invalid. That is why the kernel eventually
> crashed on some !arm machines.
>
> As verified by Eugeniu Rosca, arm can benefit from commit b92df1de5d28.
> So retain memblock_next_valid_pfn() on arm{,64} and move the related code
> to the arm64 arch directory.
>
> Suggested-by: Daniel Vacek
> Signed-off-by: Jia He
> ---
>  arch/arm/mm/init.c   | 31 ++++++++++++++++++++++++++++++-
>  arch/arm64/mm/init.c | 31 ++++++++++++++++++++++++++++++-
>  mm/page_alloc.c      | 13 ++++++++++++-
>  3 files changed, 72 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index a1f11a7..0fb85ca 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -198,7 +198,36 @@ int pfn_valid(unsigned long pfn)
>  	return memblock_is_map_memory(__pfn_to_phys(pfn));
>  }
>  EXPORT_SYMBOL(pfn_valid);
> -#endif
> +
> +/* HAVE_MEMBLOCK is always enabled on arm */
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
> +{
> +	struct memblock_type *type = &memblock.memory;
> +	unsigned int right = type->cnt;
> +	unsigned int mid, left = 0;
> +	phys_addr_t addr = PFN_PHYS(++pfn);
> +
> +	do {
> +		mid = (right + left) / 2;
> +
> +		if (addr < type->regions[mid].base)
> +			right = mid;
> +		else if (addr >= (type->regions[mid].base +
> +				  type->regions[mid].size))
> +			left = mid + 1;
> +		else {
> +			/* addr is within the region, so pfn is valid */
> +			return pfn;
> +		}
> +	} while (left < right);
> +
> +	if (right == type->cnt)
> +		return -1UL;
> +	else
> +		return PHYS_PFN(type->regions[right].base);
> +}
> +EXPORT_SYMBOL(memblock_next_valid_pfn);
> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>
>  #ifndef CONFIG_SPARSEMEM
>  static void __init arm_memory_present(void)
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 00e7b90..13e43ff 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -290,7 +290,36 @@ int pfn_valid(unsigned long pfn)
>  	return memblock_is_map_memory(pfn << PAGE_SHIFT);
>  }
>  EXPORT_SYMBOL(pfn_valid);
> -#endif
> +
> +/* HAVE_MEMBLOCK is always enabled on arm64 */
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
> +{
> +	struct memblock_type *type = &memblock.memory;
> +	unsigned int right = type->cnt;
> +	unsigned int mid, left = 0;
> +	phys_addr_t addr = PFN_PHYS(++pfn);
> +
> +	do {
> +		mid = (right + left) / 2;
> +
> +		if (addr < type->regions[mid].base)
> +			right = mid;
> +		else if (addr >= (type->regions[mid].base +
> +				  type->regions[mid].size))
> +			left = mid + 1;
> +		else {
> +			/* addr is within the region, so pfn is valid */
> +			return pfn;
> +		}
> +	} while (left < right);
> +
> +	if (right == type->cnt)
> +		return -1UL;
> +	else
> +		return PHYS_PFN(type->regions[right].base);
> +}
> +EXPORT_SYMBOL(memblock_next_valid_pfn);
> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>
>  #ifndef CONFIG_SPARSEMEM
>  static void __init arm64_memory_present(void)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c19f5ac..8a92df7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5452,6 +5452,15 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
>   * up by free_all_bootmem() once the early boot process is
>   * done. Non-atomic initialization, single-pass.
>   */
> +#if (defined CONFIG_HAVE_MEMBLOCK) && (defined CONFIG_HAVE_ARCH_PFN_VALID)
> +extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
> +#define skip_to_last_invalid_pfn(pfn) (memblock_next_valid_pfn(pfn) - 1)
> +#endif
> +

This should go to arch/arm{,64}/include/asm/page.h.

> +#ifndef skip_to_last_invalid_pfn
> +#define skip_to_last_invalid_pfn(pfn) (pfn)
> +#endif

And this to include/linux/mmzone.h. Something like this?
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index 4355f0ec44d6..489875cf3889 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -158,6 +158,8 @@ extern void __cpu_copy_user_highpage(struct page *to, struct page *from,

 #ifdef CONFIG_HAVE_ARCH_PFN_VALID
 extern int pfn_valid(unsigned long);
+extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
+#define skip_to_last_invalid_pfn(pfn) (memblock_next_valid_pfn(pfn) - 1)
 #endif

 #include

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 60d02c81a3a2..e57d3f2e2dbd 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -39,6 +39,8 @@ extern void __cpu_copy_user_page(void *to, const void *from,

 #ifdef CONFIG_HAVE_ARCH_PFN_VALID
 extern int pfn_valid(unsigned long);
+extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
+#define skip_to_last_invalid_pfn(pfn) (memblock_next_valid_pfn(pfn) - 1)
 #endif

 #include

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 356a814e7c8e..40d51bab6fc0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1222,6 +1222,7 @@ static inline struct mem_section *__pfn_to_section(unsigned long pfn)
 extern int __highest_present_section_nr;

 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
+#define skip_to_last_invalid_pfn(pfn) (pfn)
 static inline int pfn_valid(unsigned long pfn)
 {
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)

--nX

>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		unsigned long start_pfn, enum memmap_context context,
>  		struct vmem_altmap *altmap)
> @@ -5483,8 +5492,10 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		if (context != MEMMAP_EARLY)
>  			goto not_early;
>
> -		if (!early_pfn_valid(pfn))
> +		if (!early_pfn_valid(pfn)) {
> +			pfn = skip_to_last_invalid_pfn(pfn);
>  			continue;
> +		}
>  		if (!early_pfn_in_nid(pfn, nid))
>  			continue;
>  		if (!update_defer_init(pgdat, pfn,
> 					   end_pfn, &nr_initialised))
> --
> 2.7.4
>