Subject: Re: [PATCH v3 2/5] mm: page_alloc: reduce unnecessary binary search
 in memblock_next_valid_pfn()
To: Wei Yang
Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon,
 Mark Rutland, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Pavel Tatashin, Daniel Jordan, AKASHI Takahiro,
 Gioh Kim, Steven Sistare, Daniel Vacek, Eugeniu Rosca, Vlastimil Babka,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, James Morse,
 Steve Capper, x86@kernel.org, Greg Kroah-Hartman, Kate Stewart,
 Philippe Ombredanne, Johannes Weiner, Kemi Wang, Petr Tesarik,
 YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov, Jia He
References: <1522033340-6575-1-git-send-email-hejianet@gmail.com>
 <1522033340-6575-3-git-send-email-hejianet@gmail.com>
 <20180328092620.GA98648@WeideMacBook-Pro.local>
 <2c41a24b-1fa6-7115-c312-a11157619a16@gmail.com>
 <20180330014340.GB14446@WeideMacBook-Pro.local>
From: Jia He
Message-ID: <21229f59-8f71-bce3-5705-d491e61476de@gmail.com>
Date: Fri, 30 Mar 2018 10:12:17 +0800
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.6.0
In-Reply-To: <20180330014340.GB14446@WeideMacBook-Pro.local>
X-Mailing-List: linux-kernel@vger.kernel.org

On 3/30/2018 9:43 AM, Wei Yang wrote:
> On Thu, Mar 29, 2018 at 04:06:38PM +0800, Jia He wrote:
>>
>> On 3/28/2018 5:26 PM, Wei Yang wrote:
>>> On Sun, Mar 25, 2018 at 08:02:16PM -0700, Jia He wrote:
>>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>>> where possible") optimized the loop in memmap_init_zone(). But there is
>>>> still some room for improvement. E.g. if pfn and pfn+1 are in the same
>>>> memblock region, we can simply pfn++ instead of doing the binary search
>>>> in memblock_next_valid_pfn(). This patch only works when
>>>> CONFIG_HAVE_ARCH_PFN_VALID is enabled.
>>>>
>>>> Signed-off-by: Jia He
>>>> ---
>>>>  include/linux/memblock.h |  2 +-
>>>>  mm/memblock.c            | 73 +++++++++++++++++++++++++++++-------------------
>>>>  mm/page_alloc.c          |  3 +-
>>>>  3 files changed, 47 insertions(+), 31 deletions(-)
>>>>
>>>> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
>>>> index efbbe4b..a8fb2ab 100644
>>>> --- a/include/linux/memblock.h
>>>> +++ b/include/linux/memblock.h
>>>> @@ -204,7 +204,7 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
>>>>  #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>>>>
>>>>  #ifdef CONFIG_HAVE_ARCH_PFN_VALID
>>>> -unsigned long memblock_next_valid_pfn(unsigned long pfn);
>>>> +unsigned long memblock_next_valid_pfn(unsigned long pfn, int *idx);
>>>>  #endif
>>>>
>>>>  /**
>>>> diff --git a/mm/memblock.c b/mm/memblock.c
>>>> index bea5a9c..06c1a08 100644
>>>> --- a/mm/memblock.c
>>>> +++ b/mm/memblock.c
>>>> @@ -1102,35 +1102,6 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
>>>>  	*out_nid = r->nid;
>>>>  }
>>>>
>>>> -#ifdef CONFIG_HAVE_ARCH_PFN_VALID
>>>> -unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
>>>> -{
>>>> -	struct memblock_type *type = &memblock.memory;
>>>> -	unsigned int right = type->cnt;
>>>> -	unsigned int mid, left = 0;
>>>> -	phys_addr_t addr = PFN_PHYS(++pfn);
>>>> -
>>>> -	do {
>>>> -		mid = (right + left) / 2;
>>>> -
>>>> -		if (addr < type->regions[mid].base)
>>>> -			right = mid;
>>>> -		else if (addr >= (type->regions[mid].base +
>>>> -				  type->regions[mid].size))
>>>> -			left = mid + 1;
>>>> -		else {
>>>> -			/* addr is within the region, so pfn is valid */
>>>> -			return pfn;
>>>> -		}
>>>> -	} while (left < right);
>>>> -
>>>> -	if (right == type->cnt)
>>>> -		return -1UL;
>>>> -	else
>>>> -		return PHYS_PFN(type->regions[right].base);
>>>> -}
>>>> -#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>>>> -
>>>>  /**
>>>>   * memblock_set_node - set node ID on memblock regions
>>>>   * @base: base of area to set node ID for
>>>> @@ -1162,6 +1133,50 @@ int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size,
>>>>  }
>>>>  #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_PFN_VALID
>>>> +unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
>>>> +					int *last_idx)
>>>> +{
>>>> +	struct memblock_type *type = &memblock.memory;
>>>> +	unsigned int right = type->cnt;
>>>> +	unsigned int mid, left = 0;
>>>> +	unsigned long start_pfn, end_pfn;
>>>> +	phys_addr_t addr = PFN_PHYS(++pfn);
>>>> +
>>>> +	/* fast path, return pfh+1 if next pfn is in the same region */
>>>                            ^^^ pfn
>> Thanks
>>>> +	if (*last_idx != -1) {
>>>> +		start_pfn = PFN_DOWN(type->regions[*last_idx].base);
>>> To me, it should be PFN_UP().
>> hmm.., it seems all the memory region bases are pfn aligned (0x10000
>> aligned), so
>>
>> PFN_UP is the same as PFN_DOWN here?
>> I got this logic from memblock_search_pfn_nid()
> Ok, I guess some buggy code hides here.
>
> When you look at __next_mem_pfn_range(), it uses PFN_UP() for base. The
> reason is to clip off un-page-aligned memory, while PFN_DOWN() would
> introduce some unavailable memory to the system.
>
> Even though those addresses are mostly page-aligned, we need to be careful
> about this.
>
> Let me drop a patch to fix the original one.
Ok, please cc me, I will change the related code when your patch is
accepted. ;-)
>> Cheers,
>> Jia
>>
>>>> +		end_pfn = PFN_DOWN(type->regions[*last_idx].base +
>>>> +				type->regions[*last_idx].size);
>>>> +
>>>> +		if (pfn < end_pfn && pfn > start_pfn)
>>> Could it be (pfn < end_pfn && pfn >= start_pfn)?
>>>
>>> pfn == start_pfn is also a valid address.
>> No, pfn=pfn+1 at the beginning, so pfn != start_pfn
> This is a little bit tricky.
>
> There is no requirement to pass a valid pfn to memblock_next_valid_pfn().
> So suppose we have a memory layout like this:
>
> [0x100, 0x1ff]
> [0x300, 0x3ff]
>
> And I call memblock_next_valid_pfn(0x2ff, 1), would this fit the fast path
> logic?
>
> Well, since memblock_next_valid_pfn() is only used in memmap_init_zone(),
> the situation I mentioned seems unlikely to happen.
>
> Even so, I suggest changing this, otherwise your logic in the slow path
> and fast path differs. In the case above, your slow path returns 0x300 in
> the end.
Ok. It looks like it does no harm, even though I thought the code would
guarantee skipping from 0x1ff to 0x300 directly. I will change it after a
functional test.

--
Cheers,
Jia

>>>> +			return pfn;
>>>> +	}
>>>> +
>>>> +	/* slow path, do the binary searching */
>>>> +	do {
>>>> +		mid = (right + left) / 2;
>>>> +
>>>> +		if (addr < type->regions[mid].base)
>>>> +			right = mid;
>>>> +		else if (addr >= (type->regions[mid].base +
>>>> +				type->regions[mid].size))
>>>> +			left = mid + 1;
>>>> +		else {
>>>> +			*last_idx = mid;
>>>> +			return pfn;
>>>> +		}
>>>> +	} while (left < right);
>>>> +
>>>> +	if (right == type->cnt)
>>>> +		return -1UL;
>>>> +
>>>> +	*last_idx = right;
>>>> +
>>>> +	return PHYS_PFN(type->regions[*last_idx].base);
>>>> +}
>>>> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>>> The same comment as Daniel: you are moving the function out of
>>> CONFIG_HAVE_MEMBLOCK_NODE_MAP.
>>>> +
>>>>  static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>>>>  					phys_addr_t align, phys_addr_t start,
>>>>  					phys_addr_t end, int nid, ulong flags)
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index 2a967f7..0bb0274 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -5459,6 +5459,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>>>>  	unsigned long end_pfn = start_pfn + size;
>>>>  	pg_data_t *pgdat = NODE_DATA(nid);
>>>>  	unsigned long pfn;
>>>> +	int idx = -1;
>>>>  	unsigned long nr_initialised = 0;
>>>>  	struct page *page;
>>>>  #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>>>> @@ -5490,7 +5491,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>>>>  			 * end_pfn), such that we hit a valid pfn (or end_pfn)
>>>>  			 * on our next iteration of the loop.
>>>>  			 */
>>>> -			pfn = memblock_next_valid_pfn(pfn) - 1;
>>>> +			pfn = memblock_next_valid_pfn(pfn, &idx) - 1;
>>>>  #endif
>>>>  			continue;
>>>>  		}
>>>> --
>>>> 2.7.4