From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Hocko, Bharata B Rao, Pavel Tatashin, Steven Sistare, Daniel Jordan, Bob Picco, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.15 039/163] mm, memory_hotplug: fix memmap initialization
Date: Wed, 21 Feb 2018 13:47:48 +0100
Message-Id: <20180221124532.485499209@linuxfoundation.org>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180221124529.931834518@linuxfoundation.org>
References: <20180221124529.931834518@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.15-stable review patch.
If anyone has any objections, please let me know.

------------------

From: Michal Hocko

commit 9bb5a391f9a5707e04763cf14298fc4cc29bfecd upstream.

Bharata has noticed that onlining newly added memory doesn't increase the
total memory, pointing to commit f7f99100d8d9 ("mm: stop zeroing memory
during allocation in vmemmap") as the culprit.  That commit changed how the
memory for memmaps is initialized, moving the work from allocation time to
initialization time.  This works properly for the early memmap init path.

It doesn't work for memory hotplug, though, because there we need to mark
the page as reserved when the sparsemem section is created and only later
initialize it completely during onlining.  memmap_init_zone is called in
the early stage of onlining.  With the current code it calls
__init_single_page and as such it clears up the whole struct page, so
online_pages_range skips those pages.

Fix this by skipping mm_zero_struct_page in __init_single_page for the
memory hotplug path.  This is quite ugly, but unifying the early init and
memory hotplug init paths is a large project.  Make sure we plug the
regression at least.

Link: http://lkml.kernel.org/r/20180130101141.GW21609@dhcp22.suse.cz
Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap")
Signed-off-by: Michal Hocko
Reported-by: Bharata B Rao
Tested-by: Bharata B Rao
Reviewed-by: Pavel Tatashin
Cc: Steven Sistare
Cc: Daniel Jordan
Cc: Bob Picco
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/page_alloc.c |   22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1177,9 +1177,10 @@ static void free_one_page(struct zone *z
 }
 
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
-				unsigned long zone, int nid)
+				unsigned long zone, int nid, bool zero)
 {
-	mm_zero_struct_page(page);
+	if (zero)
+		mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
@@ -1194,9 +1195,9 @@ static void __meminit __init_single_page
 }
 
 static void __meminit __init_single_pfn(unsigned long pfn, unsigned long zone,
-					int nid)
+					int nid, bool zero)
 {
-	return __init_single_page(pfn_to_page(pfn), pfn, zone, nid);
+	return __init_single_page(pfn_to_page(pfn), pfn, zone, nid, zero);
 }
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1217,7 +1218,7 @@ static void __meminit init_reserved_page
 		if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone))
 			break;
 	}
-	__init_single_pfn(pfn, zid, nid);
+	__init_single_pfn(pfn, zid, nid, true);
 }
 #else
 static inline void init_reserved_page(unsigned long pfn)
@@ -1514,7 +1515,7 @@ static unsigned long __init deferred_ini
 			page++;
 		else
 			page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zid, nid);
+		__init_single_page(page, pfn, zid, nid, true);
 		cond_resched();
 	}
 }
@@ -5393,15 +5394,20 @@ not_early:
 		 * can be created for invalid pages (for alignment)
 		 * check here not to call set_pageblock_migratetype() against
 		 * pfn out of zone.
+		 *
+		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+		 * because this is done early in sparse_add_one_section
 		 */
 		if (!(pfn & (pageblock_nr_pages - 1))) {
 			struct page *page = pfn_to_page(pfn);
 
-			__init_single_page(page, pfn, zone, nid);
+			__init_single_page(page, pfn, zone, nid,
+					context != MEMMAP_HOTPLUG);
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 			cond_resched();
 		} else {
-			__init_single_pfn(pfn, zone, nid);
+			__init_single_pfn(pfn, zone, nid,
+					context != MEMMAP_HOTPLUG);
 		}
 	}
 }
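
For reviewers who want the accounting problem at a glance, below is a minimal
userspace sketch of the interaction the patch fixes.  This is not kernel code:
the one-field struct page, the PG_reserved handling and the function names are
simplified assumptions that only mirror the shape of the real paths (the
sparsemem section creation marking pages reserved, __init_single_page
optionally zeroing, onlining counting pages that are still reserved).

	/*
	 * Toy model (assumed names, not the kernel implementation):
	 * hotplug marks each struct page reserved when the section is
	 * created; onlining later accounts only pages still reserved.
	 * Unconditional zeroing in the init path wipes the reserved bit,
	 * so the onlined memory is never added to the total.
	 */
	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	#define NR_PAGES	4
	#define PG_reserved	(1UL << 0)

	struct page { unsigned long flags; };

	static struct page memmap[NR_PAGES];

	/* Hotplug path: section creation marks the backing pages reserved. */
	static void toy_sparse_add_section(void)
	{
		for (int i = 0; i < NR_PAGES; i++)
			memmap[i].flags = PG_reserved;
	}

	/* Mirrors the fix: only the early init path zeroes the struct page. */
	static void toy_init_single_page(struct page *page, bool zero)
	{
		if (zero)
			memset(page, 0, sizeof(*page));
		/* ...link setup, refcount init, etc. would follow here... */
	}

	/* Onlining accounts a page only if it is still marked reserved. */
	static long toy_online_pages_range(void)
	{
		long onlined = 0;

		for (int i = 0; i < NR_PAGES; i++)
			if (memmap[i].flags & PG_reserved)
				onlined++;
		return onlined;
	}

	int main(void)
	{
		bool hotplug = true;

		toy_sparse_add_section();
		for (int i = 0; i < NR_PAGES; i++)
			toy_init_single_page(&memmap[i], /* zero = */ !hotplug);

		/* With the fix: 4; with unconditional zeroing: 0. */
		printf("onlined pages: %ld\n", toy_online_pages_range());
		return 0;
	}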