From: James Morse <james.morse@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Robert Richter, Will Deacon, Catalin Marinas, Ard Biesheuvel,
	David Daney, Mark Rutland, Hanjun Guo, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] arm64: hibernate: report nomap regions as being pfn_nosave
Date: Fri, 2 Dec 2016 14:49:09 +0000
Message-Id: <20161202144909.18405-3-james.morse@arm.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161202144909.18405-1-james.morse@arm.com>
References: <20161202144909.18405-1-james.morse@arm.com>
List-ID: <linux-kernel.vger.kernel.org>

pfn_valid() needs to be changed so that all struct pages in a numa node
have the same node-id. Currently 'nomap' pages are skipped, and retain
their pre-numa node-ids, which leads to a later BUG_ON().

Once this change happens, hibernate's code will try to save/restore the
nomap pages.

Report the memblock nomap regions to the hibernate core code as
'pfn_nosave' ranges. This only works if all pages in the nomap region
are also marked with PG_reserved.
Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/kernel/hibernate.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index d55a7b09959b..9e901658a123 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -17,6 +17,7 @@
 #define pr_fmt(x) "hibernate: " x
 #include <linux/cpu.h>
 #include <linux/kvm_host.h>
+#include <linux/memblock.h>
 #include <linux/mm.h>
 #include <linux/pm.h>
 #include <linux/sched.h>
@@ -105,7 +106,10 @@ int pfn_is_nosave(unsigned long pfn)
 	unsigned long nosave_begin_pfn = virt_to_pfn(&__nosave_begin);
 	unsigned long nosave_end_pfn = virt_to_pfn(&__nosave_end - 1);
 
-	return (pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn);
+	if ((pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn))
+		return 1;
+
+	return !memblock_is_map_memory(pfn << PAGE_SHIFT);
 }
 
 void notrace save_processor_state(void)
-- 
2.10.1