From: Wei Li
To: ,
CC: , , , , , , , , , , ,
Subject: [PATCH v2] arm64: mm: free unused memmap for sparse memory model that define VMEMMAP
Date: Wed, 12 Aug 2020 09:06:55 +0800
Message-ID: <20200812010655.96339-1-liwei213@huawei.com>
X-Mailer: git-send-email 2.15.0
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP defined
does not free the reserved memory for the page map (vmemmap); this patch
frees it.

Signed-off-by: Wei Li
Signed-off-by: Chen Feng
Signed-off-by: Xia Qing

v2: fix the compile errors in v1, which was not based on the latest
mainline.
---
 arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 71 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..600889945cd0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,48 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_PAGE_INUSE 0xFD
+static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long addr, end;
+	unsigned long next;
+	pmd_t *pmd;
+	void *page_addr;
+	phys_addr_t phys_addr;
+
+	addr = (unsigned long)pfn_to_page(start_pfn);
+	end = (unsigned long)pfn_to_page(end_pfn);
+
+	pmd = pmd_off_k(addr);
+	for (; addr < end; addr = next, pmd++) {
+		next = pmd_addr_end(addr, end);
+
+		if (!pmd_present(*pmd))
+			continue;
+
+		if (IS_ALIGNED(addr, PMD_SIZE) &&
+		    IS_ALIGNED(next, PMD_SIZE)) {
+			phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+			memblock_free(phys_addr, PMD_SIZE);
+			pmd_clear(pmd);
+		} else {
+			/* If here, we are freeing vmemmap pages. */
+			memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
+			page_addr = page_address(pmd_page(*pmd));
+
+			if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
+					PMD_SIZE)) {
+				phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+				memblock_free(phys_addr, PMD_SIZE);
+				pmd_clear(pmd);
+			}
+		}
+	}
+
+	flush_tlb_all();
+}
+#else
 static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct page *start_pg, *end_pg;
@@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 	memblock_free(pg, pgend - pg);
 }
+#endif
+
 /*
  * The mem_map array can get very big. Free the unused area of the memory map.
  */
 static void __init free_unused_memmap(void)
 {
-	unsigned long start, prev_end = 0;
+	unsigned long start, cur_start, prev_end = 0;
 	struct memblock_region *reg;
 
 	for_each_memblock(memory, reg) {
-		start = __phys_to_pfn(reg->base);
+		cur_start = __phys_to_pfn(reg->base);
 #ifdef CONFIG_SPARSEMEM
 		/*
 		 * Take care not to free memmap entries that don't exist due
 		 * to SPARSEMEM sections which aren't present.
 		 */
-		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
+		start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
+
 		/*
-		 * If we had a previous bank, and there is a space between the
-		 * current bank and the previous, free it.
+		 * Free memory in the case of:
+		 * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
+		 * free prev_end ~ cur_start.
+		 * 2. if cur_start - prev_end > PAGES_PER_SECTION,
+		 * free prev_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
 		 */
 		if (prev_end && prev_end < start)
 			free_memmap(prev_end, start);
+		/*
+		 * Free memory in the case of:
+		 * if cur_start - prev_end > PAGES_PER_SECTION,
+		 * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
+		 */
+		if (cur_start > start &&
+		    !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
+			free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
+				    cur_start);
+#else
+		/*
+		 * If we had a previous bank, and there is a space between the
+		 * current bank and the previous, free it.
+		 */
+		if (prev_end && prev_end < cur_start)
+			free_memmap(prev_end, cur_start);
+#endif
 		/*
 		 * Align up here since the VM subsystem insists that the
 		 * memmap entries are valid from the bank end aligned to
@@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
 		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
 #endif
 }
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
 
 /*
  * mem_init() marks the free areas in the mem_map and tells us how much memory
@@ -524,9 +586,8 @@ void __init mem_init(void)
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
 	free_unused_memmap();
-#endif
+
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
-- 
2.15.0