From: Daniel Vacek
Date: Tue, 27 Mar 2018 19:17:38 +0200
Subject: Re: [PATCH v3 2/5] mm: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn()
To: Jia He
Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon,
    Mark Rutland, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
    "H. Peter Anvin", Pavel Tatashin, Daniel Jordan, AKASHI Takahiro,
    Gioh Kim, Steven Sistare, Eugeniu Rosca, Vlastimil Babka,
    open list, linux-mm@kvack.org, James Morse, Steve Capper,
    x86@kernel.org, Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne,
    Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU,
    Andrey Ryabinin, Nikolay Borisov, Jia He
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 26, 2018 at 5:02 AM, Jia He wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(). But there is
> still some room for improvement. E.g. if pfn and pfn+1 are in the same
> memblock region, we can simply pfn++ instead of doing the binary search
> in memblock_next_valid_pfn. This patch only works when
> CONFIG_HAVE_ARCH_PFN_VALID is enable.
>
> Signed-off-by: Jia He
> ---
>  include/linux/memblock.h |  2 +-
>  mm/memblock.c            | 73 +++++++++++++++++++++++++++++-------------------
>  mm/page_alloc.c          |  3 +-
>  3 files changed, 47 insertions(+), 31 deletions(-)
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index efbbe4b..a8fb2ab 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -204,7 +204,7 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
>  #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>
>  #ifdef CONFIG_HAVE_ARCH_PFN_VALID
> -unsigned long memblock_next_valid_pfn(unsigned long pfn);
> +unsigned long memblock_next_valid_pfn(unsigned long pfn, int *idx);
>  #endif
>
>  /**
> diff --git a/mm/memblock.c b/mm/memblock.c
> index bea5a9c..06c1a08 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1102,35 +1102,6 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
>  	*out_nid = r->nid;
>  }
>
> -#ifdef CONFIG_HAVE_ARCH_PFN_VALID
> -unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
> -{
> -	struct memblock_type *type = &memblock.memory;
> -	unsigned int right = type->cnt;
> -	unsigned int mid, left = 0;
> -	phys_addr_t addr = PFN_PHYS(++pfn);
> -
> -	do {
> -		mid = (right + left) / 2;
> -
> -		if (addr < type->regions[mid].base)
> -			right = mid;
> -		else if (addr >= (type->regions[mid].base +
> -				  type->regions[mid].size))
> -			left = mid + 1;
> -		else {
> -			/* addr is within the region, so pfn is valid */
> -			return pfn;
> -		}
> -	} while (left < right);
> -
> -	if (right == type->cnt)
> -		return -1UL;
> -	else
> -		return PHYS_PFN(type->regions[right].base);
> -}
> -#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
> -
>  /**
>   * memblock_set_node - set node ID on memblock regions
>   * @base: base of area to set node ID for
> @@ -1162,6 +1133,50 @@ int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size,
>  }
>  #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>
> +#ifdef CONFIG_HAVE_ARCH_PFN_VALID
> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
> +						      int *last_idx)
> +{
> +	struct memblock_type *type = &memblock.memory;
> +	unsigned int right = type->cnt;
> +	unsigned int mid, left = 0;
> +	unsigned long start_pfn, end_pfn;
> +	phys_addr_t addr = PFN_PHYS(++pfn);
> +
> +	/* fast path, return pfh+1 if next pfn is in the same region */
> +	if (*last_idx != -1) {
> +		start_pfn = PFN_DOWN(type->regions[*last_idx].base);
> +		end_pfn = PFN_DOWN(type->regions[*last_idx].base +
> +				   type->regions[*last_idx].size);
> +
> +		if (pfn < end_pfn && pfn > start_pfn)
> +			return pfn;
> +	}
> +
> +	/* slow path, do the binary searching */
> +	do {
> +		mid = (right + left) / 2;
> +
> +		if (addr < type->regions[mid].base)
> +			right = mid;
> +		else if (addr >= (type->regions[mid].base +
> +				  type->regions[mid].size))
> +			left = mid + 1;
> +		else {
> +			*last_idx = mid;
> +			return pfn;
> +		}
> +	} while (left < right);
> +
> +	if (right == type->cnt)
> +		return -1UL;
> +
> +	*last_idx = right;
> +
> +	return PHYS_PFN(type->regions[*last_idx].base);
> +}
> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
> +
>  static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>  					phys_addr_t align, phys_addr_t start,
>  					phys_addr_t end, int nid, ulong flags)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2a967f7..0bb0274 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5459,6 +5459,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  	unsigned long end_pfn = start_pfn + size;
>  	pg_data_t *pgdat = NODE_DATA(nid);
>  	unsigned long pfn;
> +	int idx = -1;
>  	unsigned long nr_initialised = 0;
>  	struct page *page;
>  #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> @@ -5490,7 +5491,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		 * end_pfn), such that we hit a valid pfn (or end_pfn)
>  		 * on our next iteration of the loop.
>  		 */
> -		pfn = memblock_next_valid_pfn(pfn) - 1;
> +		pfn = memblock_next_valid_pfn(pfn, &idx) - 1;
>  #endif
>  			continue;
>  		}
> --
> 2.7.4
>

So the function is only defined with CONFIG_HAVE_ARCH_PFN_VALID, but it is called under CONFIG_HAVE_MEMBLOCK_NODE_MAP? The definition should likely depend on both options, as the function really relies on both conditions. Otherwise it should be defined as a nop.

--nX
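A minimal user-space sketch of that suggestion (only the CONFIG_* names and the function signature are taken from the patch; the fallback body and its semantics are illustrative assumptions, not the actual kernel code): guard the real declaration on both options, and otherwise provide a static-inline nop so callers compile unchanged. Without CONFIG_HAVE_ARCH_PFN_VALID every pfn is treated as valid, so the nop can simply advance by one:

```c
/* Sketch of the guard the review proposes (hypothetical, not kernel code). */
#if defined(CONFIG_HAVE_ARCH_PFN_VALID) && defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
/* Real implementation: binary search over memblock regions with a
 * cached region index, as in the patch above. */
unsigned long memblock_next_valid_pfn(unsigned long pfn, int *last_idx);
#else
/* Nop fallback: with no arch pfn_valid() every pfn counts as valid,
 * so the "next valid pfn" is just the next pfn; the cached index is
 * left untouched. */
static inline unsigned long memblock_next_valid_pfn(unsigned long pfn,
                                                    int *last_idx)
{
	return pfn + 1;
}
#endif
```

With neither macro defined, the fallback keeps the memmap_init_zone() call site (`pfn = memblock_next_valid_pfn(pfn, &idx) - 1;` followed by the loop's `pfn++`) behaving as a plain linear walk.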