Subject: Re: [mm PATCH v3 2/6] mm: Drop meminit_pfn_in_nid as it is redundant
To: Alexander Duyck, linux-mm@kvack.org, akpm@linux-foundation.org
Cc: pavel.tatashin@microsoft.com, mhocko@suse.com, dave.jiang@intel.com,
 linux-kernel@vger.kernel.org, willy@infradead.org, davem@davemloft.net,
 yi.z.zhang@linux.intel.com, khalid.aziz@oracle.com, rppt@linux.vnet.ibm.com,
 vbabka@suse.cz, sparclinux@vger.kernel.org, dan.j.williams@intel.com,
 ldufour@linux.vnet.ibm.com, mgorman@techsingularity.net, mingo@kernel.org,
 kirill.shutemov@linux.intel.com
References: <20181015202456.2171.88406.stgit@localhost.localdomain>
 <20181015202703.2171.40829.stgit@localhost.localdomain>
From: Pavel Tatashin
Date: Tue, 16 Oct 2018 16:33:10 -0400
In-Reply-To: <20181015202703.2171.40829.stgit@localhost.localdomain>

On 10/15/18 4:27 PM, Alexander Duyck wrote:
> As best as I can tell, the meminit_pfn_in_nid call is completely
> redundant. The deferred memory initialization already makes use of
> for_each_free_mem_range, which in turn calls into __next_mem_range,
> which will only return a memory range if it matches the node ID
> provided, assuming that ID is not NUMA_NO_NODE.
>
> I am operating on the assumption that there are no zones or pg_data_t
> structures that have a NUMA node of NUMA_NO_NODE associated with them.
> If that is the case, then __next_mem_range will never return a memory
> range that doesn't match the zone's node ID, and as such the check is
> redundant.
>
> One piece I would like to verify is whether this works for ia64.
> Technically, ia64 was using a different approach to get the node ID,
> but it seems to have the node ID also encoded into the memblock, so I
> am assuming this is okay, but would like to get confirmation on that.
>
> Signed-off-by: Alexander Duyck
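For reference, the node filtering being relied on above boils down to a
check like the one sketched below. This is a standalone toy model of the
memblock range iterator's node check, not the kernel code itself; the
struct name and the three-region layout are made up purely for
illustration:

/*
 * Toy model of the filter inside __next_mem_range(): each memblock
 * region carries a node ID, and the iterator skips regions whose node
 * does not match the one requested (unless NUMA_NO_NODE was passed).
 * Standalone user-space sketch with a hypothetical layout.
 */
#include <stdio.h>

#define NUMA_NO_NODE (-1)

struct toy_region {
        unsigned long start_pfn;
        unsigned long end_pfn;
        int nid;
};

/* Interleaved layout: a node 1 range sits inside node 0's pfn span. */
static const struct toy_region regions[] = {
        { 0x0000, 0x1000, 0 },
        { 0x1000, 0x2000, 1 },
        { 0x2000, 0x3000, 0 },
};

int main(void)
{
        int nid = 0;    /* node being initialized, as in deferred_init_memmap() */
        unsigned int i;

        for (i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
                /* the per-range filter for_each_free_mem_range() relies on */
                if (nid != NUMA_NO_NODE && nid != regions[i].nid)
                        continue;
                printf("node %d range: [%#lx, %#lx)\n", regions[i].nid,
                       regions[i].start_pfn, regions[i].end_pfn);
        }
        return 0;
}

With nid == 0, the iterator never yields the node 1 range in the middle,
so a per-pfn node check on top of it would indeed be a no-op.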
If I am not mistaken, this code is for systems with memory interleaving,
where a pfn can lie between the start and end of a node's span yet belong
to a different node (a toy illustration follows after the quoted diff).
A quick look shows that x86, powerpc, s390, and sparc have
CONFIG_NODES_SPAN_OTHER_NODES set.

I am not sure about other arches, but at least on SPARC there are some
processors with a memory interleaving feature:
http://www.fujitsu.com/global/products/computing/servers/unix/sparc-enterprise/technology/performance/memory.html

Pavel

> ---
>  mm/page_alloc.c | 50 ++++++++++++++------------------------------------
>  1 file changed, 14 insertions(+), 36 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4bd858d1c3ba..a766a15fad81 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1301,36 +1301,22 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
>  #endif
>  
>  #ifdef CONFIG_NODES_SPAN_OTHER_NODES
> -static inline bool __meminit __maybe_unused
> -meminit_pfn_in_nid(unsigned long pfn, int node,
> -                   struct mminit_pfnnid_cache *state)
> +/* Only safe to use early in boot when initialisation is single-threaded */
> +static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
>  {
>          int nid;
>  
> -        nid = __early_pfn_to_nid(pfn, state);
> +        nid = __early_pfn_to_nid(pfn, &early_pfnnid_cache);
>          if (nid >= 0 && nid != node)
>                  return false;
>          return true;
>  }
>  
> -/* Only safe to use early in boot when initialisation is single-threaded */
> -static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
> -{
> -        return meminit_pfn_in_nid(pfn, node, &early_pfnnid_cache);
> -}
> -
>  #else
> -
>  static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
>  {
>          return true;
>  }
> -static inline bool __meminit __maybe_unused
> -meminit_pfn_in_nid(unsigned long pfn, int node,
> -                   struct mminit_pfnnid_cache *state)
> -{
> -        return true;
> -}
>  #endif
>  
>  
> @@ -1459,21 +1445,13 @@ static inline void __init pgdat_init_report_one_done(void)
>   *
>   * Then, we check if a current large page is valid by only checking the validity
>   * of the head pfn.
> - *
> - * Finally, meminit_pfn_in_nid is checked on systems where pfns can interleave
> - * within a node: a pfn is between start and end of a node, but does not belong
> - * to this memory node.
>   */
> -static inline bool __init
> -deferred_pfn_valid(int nid, unsigned long pfn,
> -                   struct mminit_pfnnid_cache *nid_init_state)
> +static inline bool __init deferred_pfn_valid(unsigned long pfn)
>  {
>          if (!pfn_valid_within(pfn))
>                  return false;
>          if (!(pfn & (pageblock_nr_pages - 1)) && !pfn_valid(pfn))
>                  return false;
> -        if (!meminit_pfn_in_nid(pfn, nid, nid_init_state))
> -                return false;
>          return true;
>  }
>  
> @@ -1481,15 +1459,14 @@ static inline void __init pgdat_init_report_one_done(void)
>   * Free pages to buddy allocator. Try to free aligned pages in
>   * pageblock_nr_pages sizes.
>   */
> -static void __init deferred_free_pages(int nid, int zid, unsigned long pfn,
> +static void __init deferred_free_pages(unsigned long pfn,
>                                         unsigned long end_pfn)
>  {
> -        struct mminit_pfnnid_cache nid_init_state = { };
>          unsigned long nr_pgmask = pageblock_nr_pages - 1;
>          unsigned long nr_free = 0;
>  
>          for (; pfn < end_pfn; pfn++) {
> -                if (!deferred_pfn_valid(nid, pfn, &nid_init_state)) {
> +                if (!deferred_pfn_valid(pfn)) {
>                          deferred_free_range(pfn - nr_free, nr_free);
>                          nr_free = 0;
>                  } else if (!(pfn & nr_pgmask)) {
> @@ -1509,17 +1486,18 @@ static void __init deferred_free_pages(int nid, int zid, unsigned long pfn,
>   * by performing it only once every pageblock_nr_pages.
>   * Return number of pages initialized.
>   */
> -static unsigned long __init deferred_init_pages(int nid, int zid,
> +static unsigned long __init deferred_init_pages(struct zone *zone,
>                                                  unsigned long pfn,
>                                                  unsigned long end_pfn)
>  {
> -        struct mminit_pfnnid_cache nid_init_state = { };
>          unsigned long nr_pgmask = pageblock_nr_pages - 1;
> +        int nid = zone_to_nid(zone);
>          unsigned long nr_pages = 0;
> +        int zid = zone_idx(zone);
>          struct page *page = NULL;
>  
>          for (; pfn < end_pfn; pfn++) {
> -                if (!deferred_pfn_valid(nid, pfn, &nid_init_state)) {
> +                if (!deferred_pfn_valid(pfn)) {
>                          page = NULL;
>                          continue;
>                  } else if (!page || !(pfn & nr_pgmask)) {
> @@ -1582,12 +1560,12 @@ static int __init deferred_init_memmap(void *data)
>          for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &spa, &epa, NULL) {
>                  spfn = max_t(unsigned long, first_init_pfn, PFN_UP(spa));
>                  epfn = min_t(unsigned long, zone_end_pfn(zone), PFN_DOWN(epa));
> -                nr_pages += deferred_init_pages(nid, zid, spfn, epfn);
> +                nr_pages += deferred_init_pages(zone, spfn, epfn);
>          }
>          for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &spa, &epa, NULL) {
>                  spfn = max_t(unsigned long, first_init_pfn, PFN_UP(spa));
>                  epfn = min_t(unsigned long, zone_end_pfn(zone), PFN_DOWN(epa));
> -                deferred_free_pages(nid, zid, spfn, epfn);
> +                deferred_free_pages(spfn, epfn);
>          }
>          pgdat_resize_unlock(pgdat, &flags);
>  
> @@ -1676,7 +1654,7 @@ static int __init deferred_init_memmap(void *data)
>          while (spfn < epfn && nr_pages < nr_pages_needed) {
>                  t = ALIGN(spfn + PAGES_PER_SECTION, PAGES_PER_SECTION);
>                  first_deferred_pfn = min(t, epfn);
> -                nr_pages += deferred_init_pages(nid, zid, spfn,
> +                nr_pages += deferred_init_pages(zone, spfn,
>                                                  first_deferred_pfn);
>                  spfn = first_deferred_pfn;
>          }
> @@ -1688,7 +1666,7 @@ static int __init deferred_init_memmap(void *data)
>          for_each_free_mem_range(i, nid, MEMBLOCK_NONE, &spa, &epa, NULL) {
>                  spfn = max_t(unsigned long, first_init_pfn, PFN_UP(spa));
>                  epfn = min_t(unsigned long, first_deferred_pfn, PFN_DOWN(epa));
> -                deferred_free_pages(nid, zid, spfn, epfn);
> +                deferred_free_pages(spfn, epfn);
>  
>                  if (first_deferred_pfn == epfn)
>                          break;
>
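Coming back to the interleaving point before the diff, here is the toy
illustration mentioned there, reusing the same hypothetical three-region
layout as the earlier sketch. It shows what meminit_pfn_in_nid used to
guard against when code walked a node's entire pfn span rather than its
memblock ranges; this is a standalone sketch, not kernel code, and
toy_pfn_to_nid is only a stand-in for __early_pfn_to_nid:

/*
 * Hypothetical layout: node 0 spans pfns [0x0000, 0x3000), but the
 * middle block belongs to node 1.  A walk over the whole span needs a
 * per-pfn node check; a walk over node 0's memblock ranges does not,
 * because the foreign pfns are never visited at all.
 */
#include <stdio.h>

struct toy_region {
        unsigned long start_pfn;
        unsigned long end_pfn;
        int nid;
};

static const struct toy_region layout[] = {
        { 0x0000, 0x1000, 0 },
        { 0x1000, 0x2000, 1 },
        { 0x2000, 0x3000, 0 },
};

/* stand-in for __early_pfn_to_nid(): find the node owning a given pfn */
static int toy_pfn_to_nid(unsigned long pfn)
{
        unsigned int i;

        for (i = 0; i < sizeof(layout) / sizeof(layout[0]); i++)
                if (pfn >= layout[i].start_pfn && pfn < layout[i].end_pfn)
                        return layout[i].nid;
        return -1;
}

int main(void)
{
        unsigned long pfn;

        /* old scheme: walk node 0's whole span, checking every block */
        for (pfn = 0x0000; pfn < 0x3000; pfn += 0x1000)
                printf("pfn %#lx belongs to node %d: %s for node 0\n",
                       pfn, toy_pfn_to_nid(pfn),
                       toy_pfn_to_nid(pfn) == 0 ? "init" : "skip");
        return 0;
}

The pfns in [0x1000, 0x2000) sit between node 0's start and end but
belong to node 1, which is exactly the case the removed comment
described. Whether dropping the check is safe therefore hinges on the
patch's assumption that every range walked now comes from memblock with
a matching node ID.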