From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: linux-mm@kvack.org
Cc: "Kirill A. Shutemov", Hugh Dickins, Andrew Morton, Dave Hansen,
	Andrea Arcangeli, Mel Gorman, Michal Hocko, Vlastimil Babka,
	Pavel Emelyanov, Zi Yan, Balbir Singh, linux-kernel@vger.kernel.org,
	Naoya Horiguchi, Naoya Horiguchi
Subject: [PATCH v2 12/12] mm: memory_hotplug: memory hotremove supports thp migration
Date: Tue, 8 Nov 2016 08:31:57 +0900
Message-Id: <1478561517-4317-13-git-send-email-n-horiguchi@ah.jp.nec.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1478561517-4317-1-git-send-email-n-horiguchi@ah.jp.nec.com>
References: <1478561517-4317-1-git-send-email-n-horiguchi@ah.jp.nec.com>

This patch enables thp migration for memory hotremove.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
ChangeLog v1->v2:
- base code switched from alloc_migrate_target() to new_node_page()
---
 mm/memory_hotplug.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory_hotplug.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory_hotplug.c
index b18dab40..a9c3fe1 100644
--- v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory_hotplug.c
+++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory_hotplug.c
@@ -1543,6 +1543,7 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
 	struct page *new_page = NULL;
+	unsigned int order = 0;
 
 	/*
 	 * TODO: allocate a destination hugepage from a nearest neighbor node,
@@ -1553,6 +1554,11 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
 					next_node_in(nid, nmask));
 
+	if (thp_migration_supported() && PageTransHuge(page)) {
+		order = HPAGE_PMD_ORDER;
+		gfp_mask |= GFP_TRANSHUGE;
+	}
+
 	node_clear(nid, nmask);
 
 	if (PageHighMem(page)
@@ -1560,12 +1566,15 @@
 		gfp_mask |= __GFP_HIGHMEM;
 
 	if (!nodes_empty(nmask))
-		new_page = __alloc_pages_nodemask(gfp_mask, 0,
+		new_page = __alloc_pages_nodemask(gfp_mask, order,
 				node_zonelist(nid, gfp_mask), &nmask);
 	if (!new_page)
-		new_page = __alloc_pages(gfp_mask, 0,
+		new_page = __alloc_pages(gfp_mask, order,
 				node_zonelist(nid, gfp_mask));
 
+	if (new_page && order == HPAGE_PMD_ORDER)
+		prep_transhuge_page(new_page);
+
 	return new_page;
 }
 
@@ -1595,7 +1604,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			if (isolate_huge_page(page, &source))
 				move_pages -= 1 << compound_order(head);
 			continue;
-		}
+		} else if (thp_migration_supported() && PageTransHuge(page))
+			pfn = page_to_pfn(compound_head(page)) +
+				HPAGE_PMD_NR - 1;
 
 		if (!get_page_unless_zero(page))
 			continue;
-- 
2.7.0
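
[Illustrative notes, not part of the patch as posted]

The allocation-side change above boils down to one pattern: a THP
destination page must be allocated at HPAGE_PMD_ORDER with GFP_TRANSHUGE
added to the mask, and then run through prep_transhuge_page() so the
compound destructor and deferred-split state are initialized before
migrate_pages() copies into it. Below is a minimal sketch of that pattern
as a standalone helper, assuming a v4.9-era tree with this series applied
(so that thp_migration_supported() exists); alloc_thp_dst() is a
hypothetical name, the highmem handling is elided, and the starting
gfp_mask mirrors what new_node_page() begins with (GFP_USER |
__GFP_MOVABLE).

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/huge_mm.h>
#include <linux/migrate.h>

/*
 * Sketch only: allocate a migration destination for @page from the
 * nodes in @nmask, with @nid (the source node) already cleared from it.
 */
static struct page *alloc_thp_dst(struct page *page, int nid,
				  nodemask_t *nmask)
{
	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
	unsigned int order = 0;
	struct page *new_page;

	if (thp_migration_supported() && PageTransHuge(page)) {
		/* The destination must match the source's PMD order. */
		order = HPAGE_PMD_ORDER;
		gfp_mask |= GFP_TRANSHUGE;
	}

	new_page = __alloc_pages_nodemask(gfp_mask, order,
				node_zonelist(nid, gfp_mask), nmask);

	/* A compound destination needs its THP metadata set up. */
	if (new_page && order == HPAGE_PMD_ORDER)
		prep_transhuge_page(new_page);

	return new_page;
}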
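
On the scan side, the new else-branch in do_migrate_range() does not
isolate anything by itself; it only fast-forwards the loop cursor so the
pfn walk never lands on the THP's tail pages (the head page is isolated
as a whole by the code just below it). As a worked example, assuming
x86_64 with 4KB base pages and 2MB PMD mappings (HPAGE_PMD_NR == 512):
for a head page at pfn 0x10000, pfn becomes 0x10000 + 512 - 1 = 0x101ff,
and the loop's pfn++ resumes the scan at 0x10200, the first pfn past the
huge page.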