From: Daniel Vacek <neelx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, Michal Hocko, Vlastimil Babka, Mel Gorman, Pavel Tatashin, Paul Burton, Daniel Vacek, stable@vger.kernel.org
Subject: [PATCH v2] mm/page_alloc: fix memmap_init_zone pageblock alignment
Date: Fri, 2 Mar 2018 12:01:37 +0100
Message-Id: <1519988497-28941-1-git-send-email-neelx@redhat.com>
In-Reply-To: <1519908465-12328-1-git-send-email-neelx@redhat.com>
References: <1519908465-12328-1-git-send-email-neelx@redhat.com>
BUG at mm/page_alloc.c:1913

> VM_BUG_ON(page_zone(start_page) != page_zone(end_page));

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") introduced a bug where move_freepages() triggers a
VM_BUG_ON() on an uninitialized page structure due to pageblock alignment.
To fix this, simply align the skipped pfns in memmap_init_zone() the same
way as in move_freepages_block().

Fixes: b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns where possible")
Signed-off-by: Daniel Vacek <neelx@redhat.com>
Cc: stable@vger.kernel.org
---
 mm/memblock.c   | 13 ++++++-------
 mm/page_alloc.c |  9 +++++++--
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 5a9ca2a1751b..2a5facd236bb 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1101,13 +1101,12 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	*out_nid = r->nid;
 }
 
-unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
-						      unsigned long max_pfn)
+unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
 {
 	struct memblock_type *type = &memblock.memory;
 	unsigned int right = type->cnt;
 	unsigned int mid, left = 0;
-	phys_addr_t addr = PFN_PHYS(pfn + 1);
+	phys_addr_t addr = PFN_PHYS(++pfn);
 
 	do {
 		mid = (right + left) / 2;
@@ -1118,15 +1117,15 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
 				  type->regions[mid].size))
 			left = mid + 1;
 		else {
-			/* addr is within the region, so pfn + 1 is valid */
-			return min(pfn + 1, max_pfn);
+			/* addr is within the region, so pfn is valid */
+			return pfn;
 		}
 	} while (left < right);
 
 	if (right == type->cnt)
-		return max_pfn;
+		return -1UL;
 	else
-		return min(PHYS_PFN(type->regions[right].base), max_pfn);
+		return PHYS_PFN(type->regions[right].base);
 }
 
 /**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cb416723538f..eb27ccb50928 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5359,9 +5359,14 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-			 * on our next iteration of the loop.
+			 * on our next iteration of the loop. Note that it needs
+			 * to be pageblock aligned even when the region itself
+			 * is not as move_freepages_block() can shift ahead of
+			 * the valid region but still depends on correct page
+			 * metadata.
 			 */
-			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
+			pfn = (memblock_next_valid_pfn(pfn) &
+			       ~(pageblock_nr_pages-1)) - 1;
 #endif
 			continue;
 		}
-- 
2.16.2
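
For illustration, a minimal userspace sketch of the rounding the fix relies
on. This is not kernel code: pageblock_nr_pages is hard-coded to 512 here
(4 KiB pages, 2 MiB pageblocks) and the pfn values are made up, both purely
assumptions for the example. It only shows how masking with
~(pageblock_nr_pages - 1) pulls the resume point of memmap_init_zone() back
to the start of the pageblock, so every struct page that
move_freepages_block() may later walk over has been initialized.

  /* Standalone sketch, not kernel code; assumed 512-page pageblocks. */
  #include <stdio.h>

  #define PAGEBLOCK_NR_PAGES 512UL  /* assumption: 2 MiB blocks, 4 KiB pages */

  int main(void)
  {
          /* hypothetical first valid pfn found by memblock_next_valid_pfn() */
          unsigned long next_valid = 0x2140a;

          /* old behaviour: resume initialization right at the next valid pfn,
           * leaving struct pages 0x21400..0x21409 of its pageblock untouched */
          unsigned long old_resume = (next_valid - 1) + 1;

          /* fixed behaviour: round down to the pageblock boundary first, so
           * the whole pageblock containing the valid pfn gets initialized */
          unsigned long new_resume =
                  ((next_valid & ~(PAGEBLOCK_NR_PAGES - 1)) - 1) + 1;

          printf("old: init resumes at pfn 0x%lx\n", old_resume); /* 0x2140a */
          printf("new: init resumes at pfn 0x%lx\n", new_resume); /* 0x21400 */
          return 0;
  }

The "- 1 ... + 1" mirrors the patch: the skip sets pfn to one below the
target because the for loop in memmap_init_zone() increments pfn before the
next iteration.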