From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Pavel Tatashin, Abdul Haleem, Baoquan He, Daniel Jordan, Dan Williams,
	Dave Hansen, David Rientjes, Greg Kroah-Hartman, Ingo Molnar, Jan Kara,
	Jérôme Glisse, Kirill A. Shutemov, Michael Ellerman, Michal Hocko,
	Souptick Joarder, Steven Sistare, Vlastimil Babka, Wei Yang,
	Pasha Tatashin, Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH AUTOSEL 4.19 03/57] mm: calculate deferred pages after skipping mirrored memory
Date: Sun, 4 Nov 2018 08:50:50 -0500
Message-Id: <20181104135144.88324-3-sashal@kernel.org>
In-Reply-To: <20181104135144.88324-1-sashal@kernel.org>
References: <20181104135144.88324-1-sashal@kernel.org>

From: Pavel Tatashin <pasha.tatashin@oracle.com>

[ Upstream commit d3035be4ce2345d98633a45f93a74e526e94b802 ]

update_defer_init() should be called only when a struct page is about to be
initialized: it counts the number of initialized struct pages, but struct
pages may be skipped when there is mirrored memory. So move
update_defer_init() after the check for mirrored memory.

Also, rename update_defer_init() to defer_init() and reverse the returned
boolean to emphasize that this is a predicate: it tells whether the rest of
memmap initialization should be deferred.

Make this function self-contained: instead of having the caller pass in the
number of already-initialized pages in this zone, track it with static
counters.
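[Editor's illustration, not part of the patch: below is a minimal,
self-contained userspace sketch of the static-counter predicate described
above. The constants PAGES_PER_SECTION, STATIC_INIT_PGCNT, and NODE_END_PFN
are made-up stand-ins for the real per-arch and per-node values, and the
single-node setup is a simplification; the pattern of resetting static state
when the zone changes is only safe because the real defer_init() runs
single-threaded, before smp_init().]

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real kernel values. */
#define PAGES_PER_SECTION  4096UL   /* assumed section size in pages */
#define STATIC_INIT_PGCNT  8192UL   /* assumed early-init page budget */
#define NODE_END_PFN       65536UL  /* assumed end pfn of the node */

/*
 * Returns true when the remaining initialisation should be deferred.
 * Static counters carry state across calls; safe here only because the
 * real function is called single-threaded, very early in boot.
 */
static bool defer_init_sketch(unsigned long pfn, unsigned long end_pfn)
{
	static unsigned long prev_end_pfn, nr_initialised;

	/* A new zone: reset the per-zone counter. */
	if (prev_end_pfn != end_pfn) {
		prev_end_pfn = end_pfn;
		nr_initialised = 0;
	}

	/* Always fully initialise low zones. */
	if (end_pfn < NODE_END_PFN)
		return false;

	nr_initialised++;
	/* Defer once over budget, but only at a section boundary. */
	if (nr_initialised > STATIC_INIT_PGCNT &&
	    (pfn & (PAGES_PER_SECTION - 1)) == 0)
		return true;
	return false;
}

int main(void)
{
	/* Walk a fake zone ending at the node end so deferral can trigger. */
	for (unsigned long pfn = 0; pfn < NODE_END_PFN; pfn++) {
		if (defer_init_sketch(pfn, NODE_END_PFN)) {
			printf("deferring from pfn %lu\n", pfn);
			break;
		}
	}
	return 0;
}

[In the real kernel the threshold and deferred-start pfn live in per-node
state reached via NODE_DATA(nid); the sketch collapses that to one node for
brevity.]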
I found this bug by reading the code. The effect is that fewer struct pages
than expected are initialized early in boot, and in some corner cases we may
fail to boot when mirrored pages are used. The deferred on-demand
initialization code should somewhat mitigate this, but it still introduces
inconsistencies compared to booting without mirrored pages, so it is better
to fix.

[pasha.tatashin@oracle.com: add comment about defer_init's lack of locking]
  Link: http://lkml.kernel.org/r/20180726193509.3326-3-pasha.tatashin@oracle.com
[akpm@linux-foundation.org: make defer_init non-inline, __meminit]
  Link: http://lkml.kernel.org/r/20180724235520.10200-3-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Oscar Salvador
Cc: Abdul Haleem
Cc: Baoquan He
Cc: Daniel Jordan
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Rientjes
Cc: Greg Kroah-Hartman
Cc: Ingo Molnar
Cc: Jan Kara
Cc: Jérôme Glisse
Cc: Kirill A. Shutemov
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Souptick Joarder
Cc: Steven Sistare
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Pasha Tatashin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/page_alloc.c | 45 +++++++++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2ef1c17942f..63f990b73750 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -306,24 +306,33 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
 }
 
 /*
- * Returns false when the remaining initialisation should be deferred until
+ * Returns true when the remaining initialisation should be deferred until
  * later in the boot cycle when it can be parallelised.
  */
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static bool __meminit
+defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
+	static unsigned long prev_end_pfn, nr_initialised;
+
+	/*
+	 * prev_end_pfn static that contains the end of previous zone
+	 * No need to protect because called very early in boot before smp_init.
+	 */
+	if (prev_end_pfn != end_pfn) {
+		prev_end_pfn = end_pfn;
+		nr_initialised = 0;
+	}
+
 	/* Always populate low zones for address-constrained allocations */
-	if (zone_end < pgdat_end_pfn(pgdat))
-		return true;
-	(*nr_initialised)++;
-	if ((*nr_initialised > pgdat->static_init_pgcnt) &&
-	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
-		pgdat->first_deferred_pfn = pfn;
+	if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
 		return false;
+	nr_initialised++;
+	if ((nr_initialised > NODE_DATA(nid)->static_init_pgcnt) &&
+	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
+		NODE_DATA(nid)->first_deferred_pfn = pfn;
+		return true;
 	}
-
-	return true;
+	return false;
 }
 #else
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -331,11 +340,9 @@ static inline bool early_page_uninitialised(unsigned long pfn)
 	return false;
 }
 
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
-	return true;
+	return false;
 }
 #endif
 
@@ -5459,9 +5466,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		struct vmem_altmap *altmap)
 {
 	unsigned long end_pfn = start_pfn + size;
-	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long pfn;
-	unsigned long nr_initialised = 0;
 	struct page *page;
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 	struct memblock_region *r = NULL, *tmp;
@@ -5489,8 +5494,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			continue;
 		if (!early_pfn_in_nid(pfn, nid))
 			continue;
-		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
-			break;
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 		/*
@@ -5513,6 +5516,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			}
 		}
 #endif
+		if (defer_init(nid, pfn, end_pfn))
+			break;
 
 not_early:
 		page = pfn_to_page(pfn);
-- 
2.17.1