From: Wei Li
To:
Cc:
Subject: [PATCH] arm64: mm: free unused memmap for sparse memory model that defines VMEMMAP
Date: Tue, 21 Jul 2020 15:32:03 +0800
Message-ID: <20200721073203.107862-1-liwei213@huawei.com>
X-Mailer: git-send-email 2.15.0
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP enabled
does not free the reserved memory for the page map. This patch frees it.
Signed-off-by: Wei Li
Signed-off-by: Chen Feng
Signed-off-by: Xia Qing
---
 arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 71 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..d1b56b47d5ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,48 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_PAGE_INUSE 0xFD
+static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long addr, end;
+	unsigned long next;
+	pmd_t *pmd;
+	void *page_addr;
+	phys_addr_t phys_addr;
+
+	addr = (unsigned long)pfn_to_page(start_pfn);
+	end = (unsigned long)pfn_to_page(end_pfn);
+
+	pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
+	for (; addr < end; addr = next, pmd++) {
+		next = pmd_addr_end(addr, end);
+
+		if (!pmd_present(*pmd))
+			continue;
+
+		if (IS_ALIGNED(addr, PMD_SIZE) &&
+		    IS_ALIGNED(next, PMD_SIZE)) {
+			phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+			free_bootmem(phys_addr, PMD_SIZE);
+			pmd_clear(pmd);
+		} else {
+			/* If here, we are freeing vmemmap pages. */
+			memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
+			page_addr = page_address(pmd_page(*pmd));
+
+			if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
+					PMD_SIZE)) {
+				phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+				free_bootmem(phys_addr, PMD_SIZE);
+				pmd_clear(pmd);
+			}
+		}
+	}
+
+	flush_tlb_all();
+}
+#else
 static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct page *start_pg, *end_pg;
@@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 	memblock_free(pg, pgend - pg);
 }
+#endif
+
 /*
  * The mem_map array can get very big. Free the unused area of the memory map.
  */
 static void __init free_unused_memmap(void)
 {
-	unsigned long start, prev_end = 0;
+	unsigned long start, cur_start, prev_end = 0;
 	struct memblock_region *reg;
 
 	for_each_memblock(memory, reg) {
-		start = __phys_to_pfn(reg->base);
+		cur_start = __phys_to_pfn(reg->base);
 #ifdef CONFIG_SPARSEMEM
 		/*
 		 * Take care not to free memmap entries that don't exist due
 		 * to SPARSEMEM sections which aren't present.
 		 */
-		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
+		start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
+
 		/*
-		 * If we had a previous bank, and there is a space between the
-		 * current bank and the previous, free it.
+		 * Free memory in the case of:
+		 * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
+		 *    free prev_end ~ cur_start.
+		 * 2. if cur_start - prev_end > PAGES_PER_SECTION,
+		 *    free prev_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
 		 */
 		if (prev_end && prev_end < start)
 			free_memmap(prev_end, start);
 
+		/*
+		 * Free memory in the case of:
+		 * if cur_start - prev_end > PAGES_PER_SECTION,
+		 * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
+		 */
+		if (cur_start > start &&
+		    !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
+			free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
+				    cur_start);
+#else
+		/*
+		 * If we had a previous bank, and there is a space between the
+		 * current bank and the previous, free it.
+		 */
+		if (prev_end && prev_end < cur_start)
+			free_memmap(prev_end, cur_start);
+#endif
 		/*
 		 * Align up here since the VM subsystem insists that the
 		 * memmap entries are valid from the bank end aligned to
@@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
 		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
 #endif
 }
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
 
 /*
  * mem_init() marks the free areas in the mem_map and tells us how much memory
@@ -524,9 +586,8 @@ void __init mem_init(void)
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
 	free_unused_memmap();
-#endif
+
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
-- 
2.15.0