From: Mike Rapoport
To: Andrew Morton
Cc: Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christoph Hellwig, Dave Hansen, Ingo Molnar, Marek Szyprowski,
	Max Filippov, Michael Ellerman, Michal Simek, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Peter Zijlstra, Russell King, Stafford Horne, Thomas Gleixner,
	Will Deacon, Yoshinori Sato, clang-built-linux@googlegroups.com,
	iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
	linux-c6x-dev@linux-c6x.org, linux-kernel@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, linux-xtensa@linux-xtensa.org,
	linuxppc-dev@lists.ozlabs.org, openrisc@lists.librecores.org,
	sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp,
	x86@kernel.org
Subject: [PATCH 03/15] arm, xtensa: simplify initialization of high memory pages
Date: Tue, 28 Jul 2020 08:11:41 +0300
Message-Id: <20200728051153.1590-4-rppt@kernel.org>
In-Reply-To: <20200728051153.1590-1-rppt@kernel.org>
References: <20200728051153.1590-1-rppt@kernel.org>

From: Mike Rapoport

The function free_highpages() in both arm and xtensa essentially
open-codes the for_each_free_mem_range() loop to detect high memory pages
that were not reserved and that should be initialized and passed to the
buddy allocator.

Replace the open-coded implementation with the for_each_free_mem_range()
iterator from the memblock API to simplify the code.
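After the conversion, the loop in both files reduces to the same pattern; a
minimal sketch, assembled from the + lines in the hunks below (and assuming
CONFIG_HIGHMEM=y), looks like this:

static void __init free_highpages(void)
{
	unsigned long max_low = max_low_pfn;
	phys_addr_t range_start, range_end;
	u64 i;

	/*
	 * for_each_free_mem_range() walks only memory that memblock
	 * considers free, so reserved and NOMAP regions are already
	 * excluded and the old nested "reserved" loop is unnecessary.
	 */
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_PFN(range_start);
		unsigned long end = PHYS_PFN(range_end);

		/* Ignore complete lowmem entries */
		if (end <= max_low)
			continue;

		/* Truncate partial highmem entries */
		if (start < max_low)
			start = max_low;

		/* Hand every remaining highmem page to the buddy allocator */
		for (; start < end; start++)
			free_highmem_page(pfn_to_page(start));
	}
}

Dropping the explicit memblock_is_nomap() check and the reserved-region
exclusion is safe because the iterator already skips both.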
Signed-off-by: Mike Rapoport
---
 arch/arm/mm/init.c    | 48 +++++++------------------------
 arch/xtensa/mm/init.c | 55 ++++++++-----------------------------
 2 files changed, 18 insertions(+), 85 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 01e18e43b174..626af348eb8f 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -352,61 +352,29 @@ static void __init free_unused_memmap(void)
 #endif
 }
 
-#ifdef CONFIG_HIGHMEM
-static inline void free_area_high(unsigned long pfn, unsigned long end)
-{
-	for (; pfn < end; pfn++)
-		free_highmem_page(pfn_to_page(pfn));
-}
-#endif
-
 static void __init free_highpages(void)
 {
 #ifdef CONFIG_HIGHMEM
 	unsigned long max_low = max_low_pfn;
-	struct memblock_region *mem, *res;
+	phys_addr_t range_start, range_end;
+	u64 i;
 
 	/* set highmem page free */
-	for_each_memblock(memory, mem) {
-		unsigned long start = memblock_region_memory_base_pfn(mem);
-		unsigned long end = memblock_region_memory_end_pfn(mem);
+	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
+				&range_start, &range_end, NULL) {
+		unsigned long start = PHYS_PFN(range_start);
+		unsigned long end = PHYS_PFN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
 			continue;
 
-		if (memblock_is_nomap(mem))
-			continue;
-
 		/* Truncate partial highmem entries */
 		if (start < max_low)
 			start = max_low;
 
-		/* Find and exclude any reserved regions */
-		for_each_memblock(reserved, res) {
-			unsigned long res_start, res_end;
-
-			res_start = memblock_region_reserved_base_pfn(res);
-			res_end = memblock_region_reserved_end_pfn(res);
-
-			if (res_end < start)
-				continue;
-			if (res_start < start)
-				res_start = start;
-			if (res_start > end)
-				res_start = end;
-			if (res_end > end)
-				res_end = end;
-			if (res_start != start)
-				free_area_high(start, res_start);
-			start = res_end;
-			if (start == end)
-				break;
-		}
-
-		/* And now free anything which remains */
-		if (start < end)
-			free_area_high(start, end);
+		for (; start < end; start++)
+			free_highmem_page(pfn_to_page(start));
 	}
 #endif
 }
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index a05b306cf371..ad9d59d93f39 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -79,67 +79,32 @@ void __init zones_init(void)
 	free_area_init(max_zone_pfn);
 }
 
-#ifdef CONFIG_HIGHMEM
-static void __init free_area_high(unsigned long pfn, unsigned long end)
-{
-	for (; pfn < end; pfn++)
-		free_highmem_page(pfn_to_page(pfn));
-}
-
 static void __init free_highpages(void)
 {
+#ifdef CONFIG_HIGHMEM
 	unsigned long max_low = max_low_pfn;
-	struct memblock_region *mem, *res;
+	phys_addr_t range_start, range_end;
+	u64 i;
 
-	reset_all_zones_managed_pages();
 	/* set highmem page free */
-	for_each_memblock(memory, mem) {
-		unsigned long start = memblock_region_memory_base_pfn(mem);
-		unsigned long end = memblock_region_memory_end_pfn(mem);
+	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
+				&range_start, &range_end, NULL) {
+		unsigned long start = PHYS_PFN(range_start);
+		unsigned long end = PHYS_PFN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
 			continue;
 
-		if (memblock_is_nomap(mem))
-			continue;
-
 		/* Truncate partial highmem entries */
 		if (start < max_low)
 			start = max_low;
 
-		/* Find and exclude any reserved regions */
-		for_each_memblock(reserved, res) {
-			unsigned long res_start, res_end;
-
-			res_start = memblock_region_reserved_base_pfn(res);
-			res_end = memblock_region_reserved_end_pfn(res);
-
-			if (res_end < start)
-				continue;
-			if (res_start < start)
-				res_start = start;
-			if (res_start > end)
-				res_start = end;
-			if (res_end > end)
-				res_end = end;
-			if (res_start != start)
-				free_area_high(start, res_start);
-			start = res_end;
-			if (start == end)
-				break;
-		}
-
-		/* And now free anything which remains */
-		if (start < end)
-			free_area_high(start, end);
+		for (; start < end; start++)
+			free_highmem_page(pfn_to_page(start));
 	}
-}
-#else
-static void __init free_highpages(void)
-{
-}
 #endif
+}
 
 /*
  * Initialize memory pages.
-- 
2.26.2