Subject: Re: [PATCH v3 0/5] optimize memblock_next_valid_pfn and early_pfn_valid
To: Wei Yang
Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon,
 Mark Rutland, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
 Pavel Tatashin, Daniel Jordan, AKASHI Takahiro, Gioh Kim, Steven Sistare,
 Daniel Vacek, Eugeniu Rosca, Vlastimil Babka, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, James Morse, Steve Capper, x86@kernel.org,
 Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne, Johannes Weiner,
 Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov
References: <1522033340-6575-1-git-send-email-hejianet@gmail.com>
 <20180327010213.GA80447@WeideMacBook-Pro.local>
 <20180328003012.GA91956@WeideMacBook-Pro.local>
From: Jia He
Message-ID: <49fefc1c-81dd-98f8-7da5-5b5e85d919e4@gmail.com>
Date: Wed, 28 Mar 2018 09:45:33 +0800
In-Reply-To: <20180328003012.GA91956@WeideMacBook-Pro.local>
X-Mailing-List: linux-kernel@vger.kernel.org

On 3/28/2018 8:30 AM, Wei Yang Wrote:
> On Tue, Mar 27, 2018 at 03:15:08PM +0800, Jia He wrote:
>>
>> On 3/27/2018 9:02 AM, Wei Yang Wrote:
>>> On Sun, Mar 25, 2018 at 08:02:14PM -0700, Jia He wrote:
>>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>>> where possible") tried to optimize the loop in memmap_init_zone(), but
>>>> there is still some room for improvement.
>>>>
>>>> Patch 1 retains memblock_next_valid_pfn() when CONFIG_HAVE_ARCH_PFN_VALID
>>>> is enabled.
>>>> Patch 2 optimizes memblock_next_valid_pfn().
>>>> Patches 3~5 optimize early_pfn_valid(); they are split into parts because
>>>> the changes span several subsystems.
>>>>
>>>> I tested the pfn loop in memmap_init() the same way as before.
>>>> As for the performance improvement, with this series the time overhead of
>>>> memmap_init() is reduced from 41313 us to 24345 us on my armv8a server
>>>> (QDF2400 with 96G memory).
>>>>
>>>> The memblock region information from my server is attached below.
>>>> [   86.956758] Zone ranges:
>>>> [   86.959452]   DMA      [mem 0x0000000000200000-0x00000000ffffffff]
>>>> [   86.966041]   Normal   [mem 0x0000000100000000-0x00000017ffffffff]
>>>> [   86.972631] Movable zone start for each node
>>>> [   86.977179] Early memory node ranges
>>>> [   86.980985]   node   0: [mem 0x0000000000200000-0x000000000021ffff]
>>>> [   86.987666]   node   0: [mem 0x0000000000820000-0x000000000307ffff]
>>>> [   86.994348]   node   0: [mem 0x0000000003080000-0x000000000308ffff]
>>>> [   87.001029]   node   0: [mem 0x0000000003090000-0x00000000031fffff]
>>>> [   87.007710]   node   0: [mem 0x0000000003200000-0x00000000033fffff]
>>>> [   87.014392]   node   0: [mem 0x0000000003410000-0x000000000563ffff]
>>>> [   87.021073]   node   0: [mem 0x0000000005640000-0x000000000567ffff]
>>>> [   87.027754]   node   0: [mem 0x0000000005680000-0x00000000056dffff]
>>>> [   87.034435]   node   0: [mem 0x00000000056e0000-0x00000000086fffff]
>>>> [   87.041117]   node   0: [mem 0x0000000008700000-0x000000000871ffff]
>>>> [   87.047798]   node   0: [mem 0x0000000008720000-0x000000000894ffff]
>>>> [   87.054479]   node   0: [mem 0x0000000008950000-0x0000000008baffff]
>>>> [   87.061161]   node   0: [mem 0x0000000008bb0000-0x0000000008bcffff]
>>>> [   87.067842]   node   0: [mem 0x0000000008bd0000-0x0000000008c4ffff]
>>>> [   87.074524]   node   0: [mem 0x0000000008c50000-0x0000000008e2ffff]
>>>> [   87.081205]   node   0: [mem 0x0000000008e30000-0x0000000008e4ffff]
>>>> [   87.087886]   node   0: [mem 0x0000000008e50000-0x0000000008fcffff]
>>>> [   87.094568]   node   0: [mem 0x0000000008fd0000-0x000000000910ffff]
>>>> [   87.101249]   node   0: [mem 0x0000000009110000-0x00000000092effff]
>>>> [   87.107930]   node   0: [mem 0x00000000092f0000-0x000000000930ffff]
>>>> [   87.114612]   node   0: [mem 0x0000000009310000-0x000000000963ffff]
>>>> [   87.121293]   node   0: [mem 0x0000000009640000-0x000000000e61ffff]
>>>> [   87.127975]   node   0: [mem 0x000000000e620000-0x000000000e64ffff]
>>>> [   87.134657]   node   0: [mem 0x000000000e650000-0x000000000fffffff]
>>>> [   87.141338]   node   0: [mem 0x0000000010800000-0x0000000017feffff]
>>>> [   87.148019]   node   0: [mem 0x000000001c000000-0x000000001c00ffff]
>>>> [   87.154701]   node   0: [mem 0x000000001c010000-0x000000001c7fffff]
>>>> [   87.161383]   node   0: [mem 0x000000001c810000-0x000000007efbffff]
>>>> [   87.168064]   node   0: [mem 0x000000007efc0000-0x000000007efdffff]
>>>> [   87.174746]   node   0: [mem 0x000000007efe0000-0x000000007efeffff]
>>>> [   87.181427]   node   0: [mem 0x000000007eff0000-0x000000007effffff]
>>>> [   87.188108]   node   0: [mem 0x000000007f000000-0x00000017ffffffff]
>>>
>>> Hi, Jia
>>>
>>> I haven't taken a deep look into your code, just one curious question about
>>> your memory layout.
>>>
>>> The log above is printed out in free_area_init_nodes(), which iterates over
>>> memblock.memory and prints the regions. If I am not wrong, memory regions
>>> added to memblock.memory are ordered and merged where possible.
>>>
>>> But from your log, I see many regions that could be merged yet stay
>>> isolated. For example, the last two regions:
>>>
>>>   node   0: [mem 0x000000007eff0000-0x000000007effffff]
>>>   node   0: [mem 0x000000007f000000-0x00000017ffffffff]
>>>
>>> So I am curious why they are isolated instead of combined into one.
>>>
>>> From the code, the possible reason is that the regions' flags differ from
>>> each other. If you have time, would you mind taking a look into this?
>> Hi Wei,
>>
>> I think these two have different flags:
>>
>> [    0.000000] idx=30,region [7eff0000:10000]flag=4     <--- aka MEMBLOCK_NOMAP
>> [    0.000000]   node   0: [mem 0x000000007eff0000-0x000000007effffff]
>> [    0.000000] idx=31,region [7f000000:81000000]flag=0  <--- aka MEMBLOCK_NONE
>> [    0.000000]   node   0: [mem 0x000000007f000000-0x00000017ffffffff]
> Thanks.
>
> Hmm, I am not that familiar with those flags, but they look like they describe
> the physical capability of the range:
>
>   MEMBLOCK_NONE      no special capability
>   MEMBLOCK_HOTPLUG   hotplug-able
>   MEMBLOCK_MIRROR    mirrored (highly reliable)
>   MEMBLOCK_NOMAP     no direct map
>
> These flags are not set when the ranges are first added to memblock.memory:
> if you look at memblock_add_range(), the last parameter passed is always 0.
> So the current set of separated ranges reflects the physical memory
> capability layout.
>
> Then why is this layout so scattered? As you can see, several ranges are
> smaller than 1M.
>
> If (just my assumption) we could merge some of them, we could get better
> performance: fewer ranges mean less searching time.

Thanks for your suggestions, Wei.
This needs further digging; I will consider improving it in another patchset.

-- 
Cheers,
Jia
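For context on the merge rule discussed above: memblock_merge_regions() in
mm/memblock.c only folds two neighbouring regions together when they are
physically contiguous and carry the same node id and the same flags, which is
why the MEMBLOCK_NOMAP range at 0x7eff0000 stays separate from its
MEMBLOCK_NONE neighbour. The helper below is a simplified illustrative sketch
of that condition (the name regions_can_merge() is made up here), not the
exact upstream code:

#include <linux/memblock.h>

/*
 * Illustrative sketch only: adjacent memblock regions are merged when
 * they are contiguous and share both node id and flags. A range marked
 * MEMBLOCK_NOMAP therefore never merges with a MEMBLOCK_NONE neighbour.
 */
static bool regions_can_merge(const struct memblock_region *this,
			      const struct memblock_region *next)
{
	return this->base + this->size == next->base &&
	       memblock_get_region_node(this) ==
	       memblock_get_region_node(next) &&
	       this->flags == next->flags;
}

Because merging ranges with different flags would change their semantics (a
MEMBLOCK_NOMAP range must stay out of the direct map), any merge-based cleanup
would presumably be limited to neighbouring ranges whose flags already match.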