From: Muchun Song
Date: Mon, 1 Mar 2021 13:29:16 +0800
Subject: Re: [PATCH v17 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
To: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com, bp@alien8.de,
    x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org,
    Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap,
    oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry,
    David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
    "Song Bao Hua (Barry Song)", David Hildenbrand, HORIGUCHI NAOYA(堀口 直也),
    Joao Martins
Cc: Xiongchun duan, linux-doc@vger.kernel.org, LKML,
    Linux Memory Management List, linux-fsdevel
References: <20210225132130.26451-1-songmuchun@bytedance.com> <20210225132130.26451-5-songmuchun@bytedance.com>
In-Reply-To: <20210225132130.26451-5-songmuchun@bytedance.com>

On Thu, Feb 25, 2021 at 9:24 PM Muchun Song wrote:
>
> When we free a HugeTLB page to the buddy allocator, we should allocate
> the vmemmap pages associated with it. But we may be unable to allocate
> vmemmap pages when the system is under memory pressure; in this case,
> we just refuse to free the HugeTLB page instead of looping forever
> trying to allocate the pages. This changes the behavior in some corner
> cases (listed below).
>
> 1) Failing to free a huge page triggered by the user (decrease nr_pages).
>
>    The user needs to try again later.
>
> 2) Failing to free a surplus huge page when freed by the application.
>
>    It will be tried again the next time a huge page is freed.
>
> 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
>    offline_pages().
>
>    This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
>    but are low on kernel memory. For example, migration of huge pages
>    would still work; however, dissolving the free page does not. This
>    is a corner case. When the system is that much under memory
>    pressure, offlining/unplug can be expected to fail. This is
>    unfortunate because it prevents memory offlining, which shouldn't
>    happen for movable zones. People depending on memory hotplug and
>    movable zones should carefully consider whether savings on unmovable
>    memory are worth losing their hotplug functionality in some
>    situations.
>
> 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
>    alloc_contig_range() - once we have that handling in place. Mainly
>    affects CMA and virtio-mem.
>
>    Similar to 3). virtio-mem will handle migration errors gracefully.
>    CMA might be able to fall back on other free areas within the CMA
>    region.
>
> Vmemmap pages are allocated from the page freeing context. In order for
> those allocations not to be disruptive (e.g. trigger the OOM killer),
> __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> because a non-sleeping allocation would be too fragile and it could fail
> too easily under memory pressure. GFP_ATOMIC or other modes that access
> memory reserves are not used because we want to prevent consuming
> reserves under heavy hugetlb freeing.

Hi, since this is the only patch in the series that has no Reviewed-by tag,
I hope someone (e.g. Mike, Oscar, David or Michal) could review it.
Thanks a lot.
>
> Signed-off-by: Muchun Song
> ---
>  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
>  include/linux/mm.h                           |  2 +
>  mm/hugetlb.c                                 | 92 +++++++++++++++++++++-------
>  mm/hugetlb_vmemmap.c                         | 32 ++++++----
>  mm/hugetlb_vmemmap.h                         | 23 +++++++
>  mm/sparse-vmemmap.c                          | 75 ++++++++++++++++++++++-
>  6 files changed, 197 insertions(+), 35 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..6988895d09a8 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -60,6 +60,10 @@ HugePages_Surp
>          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
>          maximum number of surplus huge pages is controlled by
>          ``/proc/sys/vm/nr_overcommit_hugepages``.
> +        Note: When the feature of freeing unused vmemmap pages associated
> +        with each hugetlb page is enabled, the number of surplus huge pages
> +        may be temporarily larger than the maximum number of surplus huge
> +        pages when the system is under memory pressure.
>  Hugepagesize
>          is the default hugepage size (in Kb).
>  Hugetlb
> @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task. A user with root
>  privileges can dynamically allocate more or free some persistent huge pages
>  by increasing or decreasing the value of ``nr_hugepages``.
>
> +Note: When the feature of freeing unused vmemmap pages associated with each
> +hugetlb page is enabled, we can fail to free the huge pages triggered by
> +the user when ths system is under memory pressure. Please try again later.
> +
>  Pages that are used as huge pages are reserved inside the kernel and cannot
>  be used for other purposes. Huge pages cannot be swapped out under
>  memory pressure.
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4ddfc31f21c6..77693c944a36 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2973,6 +2973,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
>
>  void vmemmap_remap_free(unsigned long start, unsigned long end,
>                          unsigned long reuse);
> +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +                        unsigned long reuse, gfp_t gfp_mask);
>
>  void *sparse_buffer_alloc(unsigned long size);
>  struct page * __populate_section_memmap(unsigned long pfn,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 43fed6785322..b6e4e3f31ad2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1304,16 +1304,59 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>                                                   unsigned int order) { }
>  #endif
>
> -static void update_and_free_page(struct hstate *h, struct page *page)
> +static int update_and_free_page(struct hstate *h, struct page *page)
> +        __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
>  {
>          int i;
>          struct page *subpage = page;
> +        int nid = page_to_nid(page);
>
>          if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> -                return;
> +                return 0;
>
>          h->nr_huge_pages--;
> -        h->nr_huge_pages_node[page_to_nid(page)]--;
> +        h->nr_huge_pages_node[nid]--;
> +        VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> +        VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> +        set_page_refcounted(page);
> +        set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> +
> +        /*
> +         * If the vmemmap pages associated with the HugeTLB page can be
> +         * optimized or the page is gigantic, we might block in
> +         * alloc_huge_page_vmemmap() or free_gigantic_page(). In both
> +         * cases, drop the hugetlb_lock.
> +         */
> +        if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> +                spin_unlock(&hugetlb_lock);
> +
> +        if (alloc_huge_page_vmemmap(h, page)) {
> +                spin_lock(&hugetlb_lock);
> +                INIT_LIST_HEAD(&page->lru);
> +                set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> +                h->nr_huge_pages++;
> +                h->nr_huge_pages_node[nid]++;
> +
> +                /*
> +                 * If we cannot allocate vmemmap pages, just refuse to free the
> +                 * page and put the page back on the hugetlb free list and treat
> +                 * as a surplus page.
> +                 */
> +                h->surplus_huge_pages++;
> +                h->surplus_huge_pages_node[nid]++;
> +
> +                /*
> +                 * The refcount can be perfectly increased by memory-failure or
> +                 * soft_offline handlers.
> +                 */
> +                if (likely(put_page_testzero(page))) {
> +                        arch_clear_hugepage_flags(page);
> +                        enqueue_huge_page(h, page);
> +                }
> +
> +                return -ENOMEM;
> +        }
> +
>          for (i = 0; i < pages_per_huge_page(h);
>               i++, subpage = mem_map_next(subpage, page, i)) {
>                  subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> @@ -1321,22 +1364,18 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>                                  1 << PG_active | 1 << PG_private |
>                                  1 << PG_writeback);
>          }
> -        VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> -        VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> -        set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> -        set_page_refcounted(page);
> +
>          if (hstate_is_gigantic(h)) {
> -                /*
> -                 * Temporarily drop the hugetlb_lock, because
> -                 * we might block in free_gigantic_page().
> -                 */
> -                spin_unlock(&hugetlb_lock);
>                  destroy_compound_gigantic_page(page, huge_page_order(h));
>                  free_gigantic_page(page, huge_page_order(h));
> -                spin_lock(&hugetlb_lock);
>          } else {
>                  __free_pages(page, huge_page_order(h));
>          }
> +
> +        if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> +                spin_lock(&hugetlb_lock);
> +
> +        return 0;
>  }
>
>  struct hstate *size_to_hstate(unsigned long size)
> @@ -1404,9 +1443,9 @@ static void __free_huge_page(struct page *page)
>          } else if (h->surplus_huge_pages_node[nid]) {
>                  /* remove the page from active list */
>                  list_del(&page->lru);
> -                update_and_free_page(h, page);
>                  h->surplus_huge_pages--;
>                  h->surplus_huge_pages_node[nid]--;
> +                update_and_free_page(h, page);
>          } else {
>                  arch_clear_hugepage_flags(page);
>                  enqueue_huge_page(h, page);
> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
>          /*
>           * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
>           */
> -        if (!in_task()) {
> +        if (!in_atomic()) {
>                  /*
>                   * Only call schedule_work() if hpage_freelist is previously
>                   * empty. Otherwise, schedule_work() had been called but the
> @@ -1699,8 +1738,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>                          h->surplus_huge_pages--;
>                          h->surplus_huge_pages_node[node]--;
>                  }
> -                update_and_free_page(h, page);
> -                ret = 1;
> +                ret = !update_and_free_page(h, page);
>                  break;
>          }
>  }
> @@ -1713,10 +1751,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>   * nothing for in-use hugepages and non-hugepages.
>   * This function returns values like below:
>   *
> - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> - *          (allocated or reserved.)
> - *       0: successfully dissolved free hugepages or the page is not a
> - *          hugepage (considered as already dissolved)
> + * -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> + *          when the system is under memory pressure and the feature of
> + *          freeing unused vmemmap pages associated with each hugetlb page
> + *          is enabled.
> + *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> + *          (allocated or reserved.)
> + *       0: successfully dissolved free hugepages or the page is not a
> + *          hugepage (considered as already dissolved)
>   */
>  int dissolve_free_huge_page(struct page *page)
>  {
> @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
>                  h->free_huge_pages--;
>                  h->free_huge_pages_node[nid]--;
>                  h->max_huge_pages--;
> -                update_and_free_page(h, head);
> -                rc = 0;
> +                rc = update_and_free_page(h, head);
> +                if (rc) {
> +                        h->surplus_huge_pages--;
> +                        h->surplus_huge_pages_node[nid]--;
> +                        h->max_huge_pages++;
> +                }
>          }
>  out:
>          spin_unlock(&hugetlb_lock);
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 0209b736e0b4..f7ab3d99250a 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -181,21 +181,31 @@
>  #define RESERVE_VMEMMAP_NR 2U
>  #define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
>
> -/*
> - * How many vmemmap pages associated with a HugeTLB page that can be freed
> - * to the buddy allocator.
> - *
> - * Todo: Returns zero for now, which means the feature is disabled. We will
> - * enable it once all the infrastructure is there.
> - */
> -static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
>  {
> -        return 0;
> +        return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
>  }
>
> -static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
>  {
> -        return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> +        unsigned long vmemmap_addr = (unsigned long)head;
> +        unsigned long vmemmap_end, vmemmap_reuse;
> +
> +        if (!free_vmemmap_pages_per_hpage(h))
> +                return 0;
> +
> +        vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> +        vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> +        vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> +        /*
> +         * The pages which the vmemmap virtual address range [@vmemmap_addr,
> +         * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> +         * the range is mapped to the page which @vmemmap_reuse is mapped to.
> +         * When a HugeTLB page is freed to the buddy allocator, previously
> +         * discarded vmemmap pages must be allocated and remapping.
> +         */
> +        return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> +                                   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
>  }
>
>  void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 6923f03534d5..a37771b0b82a 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -11,10 +11,33 @@
>  #include
>
>  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
>  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> +
> +/*
> + * How many vmemmap pages associated with a HugeTLB page that can be freed
> + * to the buddy allocator.
> + *
> + * Todo: Returns zero for now, which means the feature is disabled. We will
> + * enable it once all the infrastructure is there.
> + */
> +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> +{
> +        return 0;
> +}
>  #else
> +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +        return 0;
> +}
> +
>  static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
>  {
>  }
> +
> +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> +{
> +        return 0;
> +}
>  #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
>  #endif /* _LINUX_HUGETLB_VMEMMAP_H */
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index d3076a7a3783..60fc6cd6cd23 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -40,7 +40,8 @@
>   * @remap_pte: called for each lowest-level entry (PTE).
>   * @reuse_page: the page which is reused for the tail vmemmap pages.
>   * @reuse_addr: the virtual address of the @reuse_page page.
> - * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
> + * @vmemmap_pages: the list head of the vmemmap pages that can be freed
> + *                 or is mapped from.
>   */
>  struct vmemmap_remap_walk {
>          void (*remap_pte)(pte_t *pte, unsigned long addr,
> @@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
>          free_vmemmap_page_list(&vmemmap_pages);
>  }
>
> +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> +                                struct vmemmap_remap_walk *walk)
> +{
> +        pgprot_t pgprot = PAGE_KERNEL;
> +        struct page *page;
> +        void *to;
> +
> +        BUG_ON(pte_page(*pte) != walk->reuse_page);
> +
> +        page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> +        list_del(&page->lru);
> +        to = page_to_virt(page);
> +        copy_page(to, (void *)walk->reuse_addr);
> +
> +        set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> +}
> +
> +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> +                                   gfp_t gfp_mask, struct list_head *list)
> +{
> +        unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
> +        int nid = page_to_nid((struct page *)start);
> +        struct page *page, *next;
> +
> +        while (nr_pages--) {
> +                page = alloc_pages_node(nid, gfp_mask, 0);
> +                if (!page)
> +                        goto out;
> +                list_add_tail(&page->lru, list);
> +        }
> +
> +        return 0;
> +out:
> +        list_for_each_entry_safe(page, next, list, lru)
> +                __free_pages(page, 0);
> +        return -ENOMEM;
> +}
> +
> +/**
> + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> + *                       to the page which is from the @vmemmap_pages
> + *                       respectively.
> + * @start:      start address of the vmemmap virtual address range that we want
> + *              to remap.
> + * @end:        end address of the vmemmap virtual address range that we want to
> + *              remap.
> + * @reuse:      reuse address.
> + * @gpf_mask:   GFP flag for allocating vmemmap pages.
> + */
> +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +                        unsigned long reuse, gfp_t gfp_mask)
> +{
> +        LIST_HEAD(vmemmap_pages);
> +        struct vmemmap_remap_walk walk = {
> +                .remap_pte = vmemmap_restore_pte,
> +                .reuse_addr = reuse,
> +                .vmemmap_pages = &vmemmap_pages,
> +        };
> +
> +        /* See the comment in the vmemmap_remap_free(). */
> +        BUG_ON(start - reuse != PAGE_SIZE);
> +
> +        might_sleep_if(gfpflags_allow_blocking(gfp_mask));
> +
> +        if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
> +                return -ENOMEM;
> +
> +        vmemmap_remap_range(reuse, end, &walk);
> +
> +        return 0;
> +}
> +
>  /*
>   * Allocate a block of memory to be used to back the virtual memory map
>   * or to back the page tables that are used to create the mapping.
> --
> 2.11.0
>
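
One usage note, not part of the patch itself: the new vmemmap_remap_alloc()
is the inverse of vmemmap_remap_free(), and a caller is expected to pass a
reuse address exactly one page below the remapped range, as the BUG_ON()
above enforces. Below is a hedged sketch of a hypothetical caller;
restore_vmemmap_range() is a made-up name, and the GFP mask simply mirrors
what alloc_huge_page_vmemmap() passes in this patch.

/*
 * Hypothetical caller sketch (not in this patch): restore a previously
 * discarded vmemmap range [start, end).  The reuse page must sit one
 * page below @start, matching the BUG_ON() in vmemmap_remap_alloc().
 */
static int restore_vmemmap_range(unsigned long start, unsigned long end)
{
        /* Same allocation policy the patch uses for HugeTLB pages. */
        gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE;

        return vmemmap_remap_alloc(start, end, start - PAGE_SIZE, gfp_mask);
}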