From: Zi Yan
To: linux-mm@kvack.org, Roman Gushchin
Cc: Rik van Riel, "Kirill A. Shutemov", Matthew Wilcox, Shakeel Butt,
	Yang Shi, David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 16/16] mm: thp: use cma reservation for pud thp allocation.
Date: Wed, 2 Sep 2020 14:06:28 -0400
Message-Id: <20200902180628.4052244-17-zi.yan@sent.com>
In-Reply-To: <20200902180628.4052244-1-zi.yan@sent.com>
References: <20200902180628.4052244-1-zi.yan@sent.com>
Reply-To: Zi Yan

From: Zi Yan

Share the hugepage_cma reservation with hugetlb for PUD THP allocation. The
reserved CMA regions can still be used for movable page allocations. During a
1GB page split, all subpages are cleared from the CMA bitmap, since they are
no longer 1GB pages and will be freed via the normal path instead of
cma_release().

Signed-off-by: Zi Yan
---
An illustrative sketch of how the new helpers are meant to be called is
appended after the patch.

 include/linux/cma.h     |  3 +++
 include/linux/huge_mm.h | 10 ++++++++++
 mm/cma.c                | 31 +++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 30 ++++++++++++++++++++++++++++++
 mm/mempolicy.c          | 12 +++++++++---
 mm/page_alloc.c         |  3 ++-
 6 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index abcf7ab712f9..b765d19e4052 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,9 @@ extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			      bool no_warn);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
+extern bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *pages,
+					 unsigned int count);
+
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
 
 extern void cma_reserve(int min_order, unsigned long requested_size,
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3bf8d8a09f08..5a45877055bb 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -24,6 +24,8 @@ extern struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 					   unsigned long addr, pud_t *pud,
 					   unsigned int flags);
+extern struct page *alloc_thp_pud_page(int nid);
+extern bool free_thp_pud_page(struct page *page, int order);
 #else
 static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
@@ -43,6 +45,14 @@ struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+static inline struct page *alloc_thp_pud_page(int nid)
+{
+	return NULL;
+}
+static inline bool free_thp_pud_page(struct page *page, int order)
+{
+	return false;
+}
 #endif
 
 extern vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
diff --git a/mm/cma.c b/mm/cma.c
index aa3a17d8a191..3f721b8f7ccd 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -532,6 +532,37 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 	return true;
 }
 
+/**
+ * cma_clear_bitmap_if_in_range() - clear the CMA bitmap for a given page range
+ * @cma:   Contiguous memory region for which the allocation was performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function clears the bitmap of memory allocated by cma_alloc().
+ * It returns false when the provided pages do not belong to the contiguous
+ * area and true otherwise.
+ */
+bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *pages,
+				  unsigned int count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	if (pfn + count > cma->base_pfn + cma->count)
+		return false;
+
+	cma_clear_bitmap(cma, pfn, count);
+
+	return true;
+}
+
 int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 {
 	int i;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e1440a13da63..2020b843fd97 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/cma.h>
 #include
 #include
@@ -64,6 +65,10 @@ static struct shrinker deferred_split_shrinker;
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 
+#ifdef CONFIG_CMA
+extern struct cma *hugepage_cma[MAX_NUMNODES];
+#endif
+
 bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
 	/* The addr is used to check if the vma size fits */
@@ -2526,6 +2531,13 @@ static void __split_huge_pud_page(struct page *page, struct list_head *list,
 	/* no file-back page support yet */
 	VM_BUG_ON(!PageAnon(page));
 
+	/* Clear the CMA bitmap; the subpages will be freed via the normal path. */
+	if (IS_ENABLED(CONFIG_CMA)) {
+		struct cma *cma = hugepage_cma[page_to_nid(head)];
+		VM_BUG_ON(!cma_clear_bitmap_if_in_range(cma, head,
+							thp_nr_pages(head)));
+	}
+
 	for (i = HPAGE_PUD_NR - HPAGE_PMD_NR; i >= 1; i -= HPAGE_PMD_NR) {
 		__split_huge_pud_page_tail(head, i, lruvec, list);
 	}
@@ -3753,3 +3765,21 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
 }
 #endif
+
+struct page *alloc_thp_pud_page(int nid)
+{
+	struct page *page = NULL;
+#ifdef CONFIG_CMA
+	page = cma_alloc(hugepage_cma[nid], HPAGE_PUD_NR, HPAGE_PUD_ORDER, true);
+#endif
+	return page;
+}
+
+bool free_thp_pud_page(struct page *page, int order)
+{
+	bool ret = false;
+#ifdef CONFIG_CMA
+	ret = cma_release(hugepage_cma[page_to_nid(page)], page, 1 << order);
+#endif
+	return ret;
+}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
[...]
 	if (order > MAX_ORDER) {
-		page = alloc_contig_pages(1UL << [...]
[...]
 	if (order > MAX_ORDER) {
-		page = alloc_contig_pages(1UL << [...]
[...]
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
[...]
 	if (order >= MAX_ORDER) {
 		destroy_compound_gigantic_page(page, order);
-		free_contig_range(page_to_pfn(page), 1 << order);
+		if (!free_thp_pud_page(page, order))
+			free_contig_range(page_to_pfn(page), 1 << order);
 	} else {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 		local_irq_save(flags);
-- 
2.28.0
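
For reviewers skimming the series, here is a minimal, illustrative sketch of
how the helpers added above are intended to be used at an allocation/free
site. It is not part of the patch: example_alloc_pud_thp() and
example_free_pud_thp() are hypothetical names, and the fallback logic only
approximates the mm/mempolicy.c and mm/page_alloc.c hunks (which are
truncated in this copy of the mail).

	/* Illustrative only; not part of this patch. */
	#include <linux/gfp.h>
	#include <linux/huge_mm.h>
	#include <linux/mm.h>

	/* Hypothetical allocation site: try the per-node hugepage CMA reservation. */
	static struct page *example_alloc_pud_thp(int nid)
	{
		struct page *page = alloc_thp_pud_page(nid);

		if (page)
			prep_transhuge_page(page);	/* set up compound/THP metadata */
		return page;	/* NULL means the caller should fall back to PMD THPs */
	}

	/* Hypothetical free site, mirroring the page_alloc.c change above. */
	static void example_free_pud_thp(struct page *page)
	{
		/*
		 * free_thp_pud_page() returns false when the page was not
		 * allocated from the CMA reservation; free the range the
		 * usual way in that case.
		 */
		if (!free_thp_pud_page(page, HPAGE_PUD_ORDER))
			free_contig_range(page_to_pfn(page), 1 << HPAGE_PUD_ORDER);
	}

Because cma_release() only succeeds for pages that actually came from a CMA
area, the free path can always attempt free_thp_pud_page() first and fall
back, which is how the page_alloc.c hunk above is structured.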