From: Ard Biesheuvel
Date: Mon, 2 Apr 2018 08:55:59 +0200
Subject: Re: [PATCH v5 1/5] mm: page_alloc: remain memblock_next_valid_pfn() on arm and arm64
To: Jia He
Cc: Russell King, Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton,
 Michal Hocko, Wei Yang, Kees Cook, Laura Abbott, Vladimir Murzin,
 Philip Derrin, Grygorii Strashko, AKASHI Takahiro, James Morse,
 Steve Capper, Pavel Tatashin, Gioh Kim, Vlastimil Babka, Mel Gorman,
 Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU,
 Andrey Ryabinin, Nikolay Borisov, Daniel Jordan, Daniel Vacek,
 Eugeniu Rosca, linux-arm-kernel, Linux Kernel Mailing List, Linux-MM, Jia He
In-Reply-To: <1522636236-12625-2-git-send-email-hejianet@gmail.com>
References: <1522636236-12625-1-git-send-email-hejianet@gmail.com> <1522636236-12625-2-git-send-email-hejianet@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2 April 2018 at 04:30, Jia He wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(), but it could
> cause a panic, so Daniel Vacek later reverted it.
>
> However, as suggested by Daniel Vacek, it is fine to use memblock to
> skip gaps and find the next valid pfn when CONFIG_HAVE_ARCH_PFN_VALID
> is set.
>
> On arm and arm64, memblock is used by default.
> But the generic version of pfn_valid() is based on mem sections, and
> memblock_next_valid_pfn() does not always return the next valid pfn: it
> can skip further ahead, causing some valid pfns to be skipped as if
> they were invalid. That is why the kernel was eventually crashing on
> some !arm machines.
>
> And as verified by Eugeniu Rosca, arm can benefit from commit
> b92df1de5d28. So retain memblock_next_valid_pfn() on arm{,64} and move
> the related code to the arm64 arch directory.
>
> Suggested-by: Daniel Vacek
> Signed-off-by: Jia He

Hello Jia,

Apologies for chiming in late.

If we are going to rearchitect this, I'd rather we change the loop in
memmap_init_zone() so that we skip to the next valid PFN directly,
rather than skipping to the last invalid PFN so that the pfn++ in the
for () results in the next value. Can we replace the pfn++ there with a
function call that defaults to 'return pfn + 1', but does the skip for
architectures that implement it?

> ---
>  arch/arm/include/asm/page.h   |  2 ++
>  arch/arm/mm/init.c            | 31 ++++++++++++++++++++++++++++++-
>  arch/arm64/include/asm/page.h |  2 ++
>  arch/arm64/mm/init.c          | 31 ++++++++++++++++++++++++++++++-
>  include/linux/mmzone.h        |  1 +
>  mm/page_alloc.c               |  4 +++-
>  6 files changed, 68 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
> index 4355f0e..489875c 100644
> --- a/arch/arm/include/asm/page.h
> +++ b/arch/arm/include/asm/page.h
> @@ -158,6 +158,8 @@ typedef struct page *pgtable_t;
>
>  #ifdef CONFIG_HAVE_ARCH_PFN_VALID
>  extern int pfn_valid(unsigned long);
> +extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
> +#define skip_to_last_invalid_pfn(pfn) (memblock_next_valid_pfn(pfn) - 1)
>  #endif
>
>  #include
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index a1f11a7..0fb85ca 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -198,7 +198,36 @@ int pfn_valid(unsigned long pfn)
>  	return memblock_is_map_memory(__pfn_to_phys(pfn));
>  }
>  EXPORT_SYMBOL(pfn_valid);
> -#endif
> +
> +/* HAVE_MEMBLOCK is always enabled on arm */
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
> +{
> +	struct memblock_type *type = &memblock.memory;
> +	unsigned int right = type->cnt;
> +	unsigned int mid, left = 0;
> +	phys_addr_t addr = PFN_PHYS(++pfn);
> +
> +	do {
> +		mid = (right + left) / 2;
> +
> +		if (addr < type->regions[mid].base)
> +			right = mid;
> +		else if (addr >= (type->regions[mid].base +
> +				  type->regions[mid].size))
> +			left = mid + 1;
> +		else {
> +			/* addr is within the region, so pfn is valid */
> +			return pfn;
> +		}
> +	} while (left < right);
> +
> +	if (right == type->cnt)
> +		return -1UL;
> +	else
> +		return PHYS_PFN(type->regions[right].base);
> +}
> +EXPORT_SYMBOL(memblock_next_valid_pfn);
> +#endif /* CONFIG_HAVE_ARCH_PFN_VALID */
>
>  #ifndef CONFIG_SPARSEMEM
>  static void __init arm_memory_present(void)
>
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 60d02c8..e57d3f2 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -39,6 +39,8 @@ typedef struct page *pgtable_t;
>
>  #ifdef CONFIG_HAVE_ARCH_PFN_VALID
>  extern int pfn_valid(unsigned long);
> +extern unsigned long memblock_next_valid_pfn(unsigned long pfn);
> +#define skip_to_last_invalid_pfn(pfn) (memblock_next_valid_pfn(pfn) - 1)
>  #endif
>
>  #include
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 00e7b90..13e43ff 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -290,7 +290,36 @@ int pfn_valid(unsigned long pfn)
>  	return memblock_is_map_memory(pfn << PAGE_SHIFT);
>  }
>  EXPORT_SYMBOL(pfn_valid);
> -#endif
> +
> +/* HAVE_MEMBLOCK is always enabled on arm64 */
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
> +{
> +	struct memblock_type *type = &memblock.memory;
> +	unsigned int right = type->cnt;
> +	unsigned int mid, left = 0;
> +	phys_addr_t addr = PFN_PHYS(++pfn);
> +
> +	do {
> +		mid = (right + left) / 2;
> +
> +		if (addr < type->regions[mid].base)
> +			right = mid;
> +		else if (addr >= (type->regions[mid].base +
> +				  type->regions[mid].size))
> +			left = mid + 1;
> +		else {
> +			/* addr is within the region, so pfn is valid */
> +			return pfn;
> +		}
> +	} while (left < right);
> +
> +	if (right == type->cnt)
> +		return -1UL;
> +	else
> +		return PHYS_PFN(type->regions[right].base);
> +}
> +EXPORT_SYMBOL(memblock_next_valid_pfn);
> +#endif /* CONFIG_HAVE_ARCH_PFN_VALID */
>
>  #ifndef CONFIG_SPARSEMEM
>  static void __init arm64_memory_present(void)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index d797716..f9c0c46 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1245,6 +1245,7 @@ static inline int pfn_valid(unsigned long pfn)
>  		return 0;
>  	return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
>  }
> +#define skip_to_last_invalid_pfn(pfn) (pfn)
>  #endif
>
>  static inline int pfn_present(unsigned long pfn)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c19f5ac..30f7d76 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5483,8 +5483,10 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		if (context != MEMMAP_EARLY)
>  			goto not_early;
>
> -		if (!early_pfn_valid(pfn))
> +		if (!early_pfn_valid(pfn)) {
> +			pfn = skip_to_last_invalid_pfn(pfn);
>  			continue;
> +		}
>  		if (!early_pfn_in_nid(pfn, nid))
>  			continue;
>  		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
> --
> 2.7.4
>
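For what it's worth, the binary search in the patch's memblock_next_valid_pfn() can be exercised in userspace. The sketch below is hypothetical (not the kernel code): it models memblock.memory as a plain sorted array of non-overlapping [base, base + size) pfn ranges and works in pfn units directly, instead of phys_addr_t with PFN_PHYS()/PHYS_PFN(), and it adds a guard for an empty table that the kernel version does not need:

```c
#include <assert.h>

/* Hypothetical stand-in for memblock regions: sorted, non-overlapping
 * [base, base + size) ranges of valid pfns. */
struct region {
	unsigned long base, size;
};

/* Same shape as the patch's binary search: return the next valid pfn
 * strictly after @pfn, or -1UL if @pfn is at or past the last region. */
unsigned long next_valid_pfn(const struct region *regions, unsigned int cnt,
			     unsigned long pfn)
{
	unsigned int mid, left = 0, right = cnt;

	if (!cnt)		/* empty table: nothing is valid */
		return -1UL;

	++pfn;			/* we want the pfn *after* the current one */
	do {
		mid = (right + left) / 2;

		if (pfn < regions[mid].base)
			right = mid;		/* target is below this region */
		else if (pfn >= regions[mid].base + regions[mid].size)
			left = mid + 1;		/* target is above this region */
		else
			return pfn;		/* inside a region: valid */
	} while (left < right);

	if (right == cnt)
		return -1UL;			/* past the last region */
	return regions[right].base;		/* first pfn of the next region */
}
```

This also illustrates the alternative loop shape suggested above: memmap_init_zone() could advance with `pfn = next_valid_pfn(pfn)` directly, instead of pfn++ combined with skip_to_last_invalid_pfn().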