From: Mike Rapoport
To: Andrew Morton
Cc: Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Christoph Hellwig, Dave Hansen, Ingo Molnar,
    Marek Szyprowski, Max Filippov, Michael Ellerman, Michal Simek,
    Mike Rapoport, Mike Rapoport, Palmer Dabbelt, Paul Mackerras,
    Paul Walmsley, Peter Zijlstra, Russell King, Stafford Horne,
    Thomas Gleixner, Will Deacon, Yoshinori Sato,
    clang-built-linux@googlegroups.com, iommu@lists.linux-foundation.org,
    linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org,
    linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    linux-xtensa@linux-xtensa.org, linuxppc-dev@lists.ozlabs.org,
    openrisc@lists.librecores.org, sparclinux@vger.kernel.org,
    uclinux-h8-devel@lists.sourceforge.jp, x86@kernel.org
Subject: [PATCH 15/15] memblock: remove 'type' parameter from for_each_memblock()
Date: Tue, 28 Jul 2020 08:11:53 +0300
Message-Id: <20200728051153.1590-16-rppt@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200728051153.1590-1-rppt@kernel.org>
References: <20200728051153.1590-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Mike Rapoport

for_each_memblock() is used exclusively to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.

Remove the type parameter from the for_each_memblock() iterator to improve
encapsulation of memblock internals from its users.
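With the type parameter gone, a caller only names the loop cursor and the
iterator always walks memblock.memory. For illustration only, usage would
look roughly like the sketch below; dump_memory_regions() is a hypothetical
helper, not something this patch adds:

/*
 * Illustrative sketch only: dump_memory_regions() is a made-up helper,
 * not part of this patch.  Assumes <linux/memblock.h> and <linux/printk.h>
 * are available.
 */
static void __init dump_memory_regions(void)
{
        struct memblock_region *region;
        phys_addr_t end;

        /* Walks memblock.memory; the old form was
         * for_each_memblock(memory, region). */
        for_each_memblock(region) {
                end = region->base + region->size;
                pr_info("memory: %pa..%pa flags %#lx\n",
                        &region->base, &end,
                        (unsigned long)region->flags);
        }
}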
Signed-off-by: Mike Rapoport
---
 arch/arm64/kernel/setup.c      |  2 +-
 arch/arm64/mm/numa.c           |  2 +-
 arch/mips/netlogic/xlp/setup.c |  2 +-
 include/linux/memblock.h       | 10 +++++++---
 mm/memblock.c                  |  4 ++--
 mm/page_alloc.c                |  8 ++++----
 6 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 93b3844cf442..23da7908cbed 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -217,7 +217,7 @@ static void __init request_standard_resources(void)
         if (!standard_resources)
                 panic("%s: Failed to allocate %zu bytes\n", __func__, res_size);
 
-        for_each_memblock(memory, region) {
+        for_each_memblock(region) {
                 res = &standard_resources[i++];
                 if (memblock_is_nomap(region)) {
                         res->name  = "reserved";
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 0cbdbcc885fb..08721d2c0b79 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -350,7 +350,7 @@ static int __init numa_register_nodes(void)
         struct memblock_region *mblk;
 
         /* Check that valid nid is set to memblks */
-        for_each_memblock(memory, mblk) {
+        for_each_memblock(mblk) {
                 int mblk_nid = memblock_get_region_node(mblk);
 
                 if (mblk_nid == NUMA_NO_NODE || mblk_nid >= MAX_NUMNODES) {
diff --git a/arch/mips/netlogic/xlp/setup.c b/arch/mips/netlogic/xlp/setup.c
index 1a0fc5b62ba4..e69d9fc468cf 100644
--- a/arch/mips/netlogic/xlp/setup.c
+++ b/arch/mips/netlogic/xlp/setup.c
@@ -70,7 +70,7 @@ static void nlm_fixup_mem(void)
         const int pref_backup = 512;
         struct memblock_region *mem;
 
-        for_each_memblock(memory, mem) {
+        for_each_memblock(mem) {
                 memblock_remove(mem->base + mem->size - pref_backup,
                         pref_backup);
         }
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index d70c2835e913..c901cb8ecf92 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -527,9 +527,13 @@ static inline unsigned long memblock_region_reserved_end_pfn(const struct memblo
         return PFN_UP(reg->base + reg->size);
 }
 
-#define for_each_memblock(memblock_type, region)                                        \
-        for (region = memblock.memblock_type.regions;                                   \
-             region < (memblock.memblock_type.regions + memblock.memblock_type.cnt);    \
+/**
+ * for_each_memblock - iterate over registered memory regions
+ * @region: loop variable
+ */
+#define for_each_memblock(region)                                       \
+        for (region = memblock.memory.regions;                          \
+             region < (memblock.memory.regions + memblock.memory.cnt);  \
              region++)
 
 extern void *alloc_large_system_hash(const char *tablename,
diff --git a/mm/memblock.c b/mm/memblock.c
index 2ad5e6e47215..550bb72cf6cb 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1694,7 +1694,7 @@ static phys_addr_t __init_memblock __find_max_addr(phys_addr_t limit)
          * the memory memblock regions, if the @limit exceeds the total size
          * of those regions, max_addr will keep original value PHYS_ADDR_MAX
          */
-        for_each_memblock(memory, r) {
+        for_each_memblock(r) {
                 if (limit <= r->size) {
                         max_addr = r->base + limit;
                         break;
@@ -1864,7 +1864,7 @@ void __init_memblock memblock_trim_memory(phys_addr_t align)
         phys_addr_t start, end, orig_start, orig_end;
         struct memblock_region *r;
 
-        for_each_memblock(memory, r) {
+        for_each_memblock(r) {
                 orig_start = r->base;
                 orig_end = r->base + r->size;
                 start = round_up(orig_start, align);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95af111d69d3..8a19f46dc86e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5927,7 +5927,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
 
         if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
                 if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
-                        for_each_memblock(memory, r) {
+                        for_each_memblock(r) {
                                 if (*pfn < memblock_region_memory_end_pfn(r))
                                         break;
                         }
@@ -6528,7 +6528,7 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
                 unsigned long start_pfn, end_pfn;
                 struct memblock_region *r;
 
-                for_each_memblock(memory, r) {
+                for_each_memblock(r) {
                         start_pfn = clamp(memblock_region_memory_base_pfn(r),
                                           zone_start_pfn, zone_end_pfn);
                         end_pfn = clamp(memblock_region_memory_end_pfn(r),
@@ -7122,7 +7122,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
          * options.
          */
         if (movable_node_is_enabled()) {
-                for_each_memblock(memory, r) {
+                for_each_memblock(r) {
                         if (!memblock_is_hotpluggable(r))
                                 continue;
 
@@ -7143,7 +7143,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
         if (mirrored_kernelcore) {
                 bool mem_below_4gb_not_mirrored = false;
 
-                for_each_memblock(memory, r) {
+                for_each_memblock(r) {
                         if (memblock_is_mirror(r))
                                 continue;
 
-- 
2.26.2