Date: Fri, 30 Mar 2018 09:43:40 +0800
From: Wei Yang
To: Jia He
Cc: Wei Yang, Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman,
 Will Deacon, Mark Rutland, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Pavel Tatashin, Daniel Jordan, AKASHI Takahiro, Gioh Kim,
 Steven Sistare, Daniel Vacek, Eugeniu Rosca, Vlastimil Babka,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, James Morse, Steve Capper,
 x86@kernel.org, Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne,
 Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU,
 Andrey Ryabinin, Nikolay Borisov, Jia He
Subject: Re: [PATCH v3 2/5] mm: page_alloc: reduce unnecessary binary search
 in memblock_next_valid_pfn()
Message-ID: <20180330014340.GB14446@WeideMacBook-Pro.local>
References: <1522033340-6575-1-git-send-email-hejianet@gmail.com>
 <1522033340-6575-3-git-send-email-hejianet@gmail.com>
 <20180328092620.GA98648@WeideMacBook-Pro.local>
 <2c41a24b-1fa6-7115-c312-a11157619a16@gmail.com>
In-Reply-To: <2c41a24b-1fa6-7115-c312-a11157619a16@gmail.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Mar 29, 2018 at 04:06:38PM +0800, Jia He wrote:
>
>
>On 3/28/2018 5:26 PM, Wei Yang Wrote:
>> On Sun, Mar 25, 2018 at 08:02:16PM -0700, Jia He wrote:
>> > Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>> > where possible") optimized the loop in memmap_init_zone(). But there is
>> > still some room for improvement. E.g. if pfn and pfn+1 are in the same
>> > memblock region, we can simply pfn++ instead of doing the binary search
>> > in memblock_next_valid_pfn. This patch only works when
>> > CONFIG_HAVE_ARCH_PFN_VALID is enable.
>> >
>> > Signed-off-by: Jia He
>> > ---
>> > include/linux/memblock.h |  2 +-
>> > mm/memblock.c            | 73 +++++++++++++++++++++++++++++-------------------
>> > mm/page_alloc.c          |  3 +-
>> > 3 files changed, 47 insertions(+), 31 deletions(-)
>> >
>> > diff --git a/include/linux/memblock.h b/include/linux/memblock.h
>> > index efbbe4b..a8fb2ab 100644
>> > --- a/include/linux/memblock.h
>> > +++ b/include/linux/memblock.h
>> > @@ -204,7 +204,7 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
>> > #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>> >
>> > #ifdef CONFIG_HAVE_ARCH_PFN_VALID
>> > -unsigned long memblock_next_valid_pfn(unsigned long pfn);
>> > +unsigned long memblock_next_valid_pfn(unsigned long pfn, int *idx);
>> > #endif
>> >
>> > /**
>> > diff --git a/mm/memblock.c b/mm/memblock.c
>> > index bea5a9c..06c1a08 100644
>> > --- a/mm/memblock.c
>> > +++ b/mm/memblock.c
>> > @@ -1102,35 +1102,6 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
>> > 	*out_nid = r->nid;
>> > }
>> >
>> > -#ifdef CONFIG_HAVE_ARCH_PFN_VALID
>> > -unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
>> > -{
>> > -	struct memblock_type *type = &memblock.memory;
>> > -	unsigned int right = type->cnt;
>> > -	unsigned int mid, left = 0;
>> > -	phys_addr_t addr = PFN_PHYS(++pfn);
>> > -
>> > -	do {
>> > -		mid = (right + left) / 2;
>> > -
>> > -		if (addr < type->regions[mid].base)
>> > -			right = mid;
>> > -		else if (addr >= (type->regions[mid].base +
>> > -				  type->regions[mid].size))
>> > -			left = mid + 1;
>> > -		else {
>> > -			/* addr is within the region, so pfn is valid */
>> > -			return pfn;
>> > -		}
>> > -	} while (left < right);
>> > -
>> > -	if (right == type->cnt)
>> > -		return -1UL;
>> > -	else
>> > -		return PHYS_PFN(type->regions[right].base);
>> > -}
>> > -#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>> > -
>> > /**
>> >  * memblock_set_node - set node ID on memblock regions
>> >  * @base: base of area to set node ID for
>> > @@ -1162,6 +1133,50 @@ int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size,
>> > }
>> > #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>> >
>> > +#ifdef CONFIG_HAVE_ARCH_PFN_VALID
>> > +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
>> > +						      int *last_idx)
>> > +{
>> > +	struct memblock_type *type = &memblock.memory;
>> > +	unsigned int right = type->cnt;
>> > +	unsigned int mid, left = 0;
>> > +	unsigned long start_pfn, end_pfn;
>> > +	phys_addr_t addr = PFN_PHYS(++pfn);
>> > +
>> > +	/* fast path, return pfh+1 if next pfn is in the same region */
>>                                    ^^^ pfn
>
>Thanks
>
>> > +	if (*last_idx != -1) {
>> > +		start_pfn = PFN_DOWN(type->regions[*last_idx].base);
>>
>> To me, it should be PFN_UP().
>
>hmm.., seems all the base of memory region is pfn aligned (0x10000
>aligned). So PFN_UP is the same as PFN_DOWN here?
>I got this logic from memblock_search_pfn_nid()

Ok, I guess some buggy code hides here.

When you look at __next_mem_pfn_range(), it uses PFN_UP() for the base. The
reason is to clip off any un-page-aligned head of the region, while
PFN_DOWN() would report memory to the system that is not actually
available. Even though those addresses are mostly page-aligned, we need to
be careful about this.

Let me send a patch to fix the original code.

>
>Cheers,
>Jia
>
>>
>> > +		end_pfn = PFN_DOWN(type->regions[*last_idx].base +
>> > +				   type->regions[*last_idx].size);
>> > +
>> > +		if (pfn < end_pfn && pfn > start_pfn)
>>
>> Could it be (pfn < end_pfn && pfn >= start_pfn)?
>>
>> pfn == start_pfn is also a valid address.
>
>No, pfn=pfn+1 at the beginning, so pfn != start_pfn

This is a little bit tricky. There is no requirement to pass a valid pfn to
memblock_next_valid_pfn(). So suppose we have a memory layout like this:

    [0x100, 0x1ff]  [0x300, 0x3ff]

If I call memblock_next_valid_pfn(0x2ff, 1), would this fit the fast path
logic?

Well, since memblock_next_valid_pfn() is only used by memmap_init_zone(),
the situation I mentioned seems unlikely to happen in practice.
Even so, I suggest changing this; otherwise the logic in your slow path and
fast path differs. In the case above, your slow path finally returns 0x300.

>>
>> > +			return pfn;
>> > +	}
>> > +
>> > +	/* slow path, do the binary searching */
>> > +	do {
>> > +		mid = (right + left) / 2;
>> > +
>> > +		if (addr < type->regions[mid].base)
>> > +			right = mid;
>> > +		else if (addr >= (type->regions[mid].base +
>> > +				  type->regions[mid].size))
>> > +			left = mid + 1;
>> > +		else {
>> > +			*last_idx = mid;
>> > +			return pfn;
>> > +		}
>> > +	} while (left < right);
>> > +
>> > +	if (right == type->cnt)
>> > +		return -1UL;
>> > +
>> > +	*last_idx = right;
>> > +
>> > +	return PHYS_PFN(type->regions[*last_idx].base);
>> > +}
>> > +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>>
>> The same comment as Daniel: you are moving the function out of
>> CONFIG_HAVE_MEMBLOCK_NODE_MAP.
>>
>> > +
>> > static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>> > 					phys_addr_t align, phys_addr_t start,
>> > 					phys_addr_t end, int nid, ulong flags)
>> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> > index 2a967f7..0bb0274 100644
>> > --- a/mm/page_alloc.c
>> > +++ b/mm/page_alloc.c
>> > @@ -5459,6 +5459,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>> > 	unsigned long end_pfn = start_pfn + size;
>> > 	pg_data_t *pgdat = NODE_DATA(nid);
>> > 	unsigned long pfn;
>> > +	int idx = -1;
>> > 	unsigned long nr_initialised = 0;
>> > 	struct page *page;
>> > #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>> > @@ -5490,7 +5491,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>> > 	 * end_pfn), such that we hit a valid pfn (or end_pfn)
>> > 	 * on our next iteration of the loop.
>> > 	 */
>> > -	pfn = memblock_next_valid_pfn(pfn) - 1;
>> > +	pfn = memblock_next_valid_pfn(pfn, &idx) - 1;
>> > #endif
>> > 	continue;
>> > }
>> > --
>> > 2.7.4

-- 
Wei Yang
Help you, Help me