Subject: Re: [PATCH v3 5/5] mm: page_alloc: reduce unnecessary binary search in early_pfn_valid()
To: Daniel Vacek
Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon,
    Mark Rutland, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
Peter Anvin" , Pavel Tatashin , Daniel Jordan , AKASHI Takahiro , Gioh Kim , Steven Sistare , Eugeniu Rosca , Vlastimil Babka , open list , linux-mm@kvack.org, James Morse , Steve Capper , x86@kernel.org, Greg Kroah-Hartman , Kate Stewart , Philippe Ombredanne , Johannes Weiner , Kemi Wang , Petr Tesarik , YASUAKI ISHIMATSU , Andrey Ryabinin , Nikolay Borisov , Jia He References: <1522033340-6575-1-git-send-email-hejianet@gmail.com> <1522033340-6575-6-git-send-email-hejianet@gmail.com> From: Jia He Message-ID: <55b15841-bac7-7576-6da8-edff0fe0e9b2@gmail.com> Date: Wed, 28 Mar 2018 10:10:25 +0800 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 3/28/2018 1:51 AM, Daniel Vacek Wrote: > On Mon, Mar 26, 2018 at 5:02 AM, Jia He wrote: >> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns >> where possible") optimized the loop in memmap_init_zone(). But there is >> still some room for improvement. E.g. in early_pfn_valid(), if pfn and >> pfn+1 are in the same memblock region, we can record the last returned >> memblock region index and check check pfn++ is still in the same region. >> >> Currently it only improve the performance on arm64 and will have no >> impact on other arches. >> >> Signed-off-by: Jia He >> --- >> arch/x86/include/asm/mmzone_32.h | 2 +- >> include/linux/mmzone.h | 12 +++++++++--- >> mm/page_alloc.c | 2 +- >> 3 files changed, 11 insertions(+), 5 deletions(-) >> >> diff --git a/arch/x86/include/asm/mmzone_32.h b/arch/x86/include/asm/mmzone_32.h >> index 73d8dd1..329d3ba 100644 >> --- a/arch/x86/include/asm/mmzone_32.h >> +++ b/arch/x86/include/asm/mmzone_32.h >> @@ -49,7 +49,7 @@ static inline int pfn_valid(int pfn) >> return 0; >> } >> >> -#define early_pfn_valid(pfn) pfn_valid((pfn)) >> +#define early_pfn_valid(pfn, last_region_idx) pfn_valid((pfn)) >> >> #endif /* CONFIG_DISCONTIGMEM */ >> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h >> index d797716..3a686af 100644 >> --- a/include/linux/mmzone.h >> +++ b/include/linux/mmzone.h >> @@ -1267,9 +1267,15 @@ static inline int pfn_present(unsigned long pfn) >> }) >> #else >> #define pfn_to_nid(pfn) (0) >> -#endif >> +#endif /*CONFIG_NUMA*/ >> + >> +#ifdef CONFIG_HAVE_ARCH_PFN_VALID >> +#define early_pfn_valid(pfn, last_region_idx) \ >> + pfn_valid_region(pfn, last_region_idx) >> +#else >> +#define early_pfn_valid(pfn, last_region_idx) pfn_valid(pfn) >> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/ >> >> -#define early_pfn_valid(pfn) pfn_valid(pfn) >> void sparse_init(void); >> #else >> #define sparse_init() do {} while (0) >> @@ -1288,7 +1294,7 @@ struct mminit_pfnnid_cache { >> }; >> >> #ifndef early_pfn_valid >> -#define early_pfn_valid(pfn) (1) >> +#define early_pfn_valid(pfn, last_region_idx) (1) >> #endif >> >> void memory_present(int nid, unsigned long start, unsigned long end); >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c >> index 0bb0274..debccf3 100644 >> --- a/mm/page_alloc.c >> +++ b/mm/page_alloc.c >> @@ -5484,7 +5484,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone, >> if (context != MEMMAP_EARLY) >> goto not_early; >> >> - if (!early_pfn_valid(pfn)) { >> + if (!early_pfn_valid(pfn, &idx)) { >> #if (defined CONFIG_HAVE_MEMBLOCK) && (defined CONFIG_HAVE_ARCH_PFN_VALID) >> 
>>  			/*
>>  			 * Skip to the pfn preceding the next valid one (or
>> --
>> 2.7.4
>>
> Hmm, what about making the index a global variable instead of changing
> all the prototypes? Similar to early_pfnnid_cache, for example.
> Something like:
>
> #ifdef CONFIG_HAVE_ARCH_PFN_VALID
> extern int early_region_idx __meminitdata;
> #define early_pfn_valid(pfn) \
> 	pfn_valid_region(pfn, &early_region_idx)
> #else
> #define early_pfn_valid(pfn) pfn_valid(pfn)
> #endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
>
> And move this to arch/arm64/include/asm/page.h?
>
> --nX
>
Yes, OK with me.

-- 
Cheers,
Jia
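
For readers following the thread, here is a minimal, self-contained sketch of
the caching idea under discussion. It is not the kernel implementation:
struct mem_region, regions[], region_search() and region_valid_pfn() are
hypothetical stand-ins for memblock's sorted region array, its binary search,
and the pfn_valid_region() helper introduced earlier in this series. The point
is only that consecutive pfns usually fall in the same region, so checking the
cached index first lets most calls skip the binary search.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for memblock.memory.regions: sorted, non-overlapping. */
struct mem_region {
	unsigned long start_pfn;	/* first valid pfn in the region */
	unsigned long end_pfn;		/* one past the last valid pfn */
};

static const struct mem_region regions[] = {
	{ 0x00000, 0x10000 },
	{ 0x20000, 0x28000 },
	{ 0x40000, 0x48000 },
};
static const int nr_regions = sizeof(regions) / sizeof(regions[0]);

/* Slow path: binary search for the region holding pfn, -1 if none. */
static int region_search(unsigned long pfn)
{
	int lo = 0, hi = nr_regions - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (pfn < regions[mid].start_pfn)
			hi = mid - 1;
		else if (pfn >= regions[mid].end_pfn)
			lo = mid + 1;
		else
			return mid;
	}
	return -1;
}

/*
 * Fast path: try the region found by the previous call first; only fall
 * back to the binary search when pfn has left that region.
 */
static bool region_valid_pfn(unsigned long pfn, int *last_idx)
{
	int idx = *last_idx;

	if (idx >= 0 && pfn >= regions[idx].start_pfn &&
	    pfn < regions[idx].end_pfn)
		return true;

	idx = region_search(pfn);
	if (idx < 0)
		return false;

	*last_idx = idx;	/* remember the hit for the next pfn */
	return true;
}

int main(void)
{
	/* Walk a pfn range the way memmap_init_zone() walks its zone. */
	int cached_idx = -1;
	unsigned long valid = 0;

	for (unsigned long pfn = 0; pfn < 0x48000; pfn++)
		if (region_valid_pfn(pfn, &cached_idx))
			valid++;

	printf("%lu valid pfns\n", valid);
	return 0;
}

Whether the cached index is threaded through as a parameter (as in the patch)
or kept in a single __meminitdata global (as Daniel suggests with
early_region_idx) only changes where *last_idx lives; the lookup logic stays
the same.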