From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: linux-mm@kvack.org
Cc: "Kirill A. Shutemov", Hugh Dickins, Andrew Morton, Dave Hansen,
    Andrea Arcangeli, Mel Gorman, Michal Hocko, Vlastimil Babka,
    Pavel Emelyanov, linux-kernel@vger.kernel.org, Naoya Horiguchi
Subject: [PATCH v1 11/11] mm: memory_hotplug: memory hotremove supports thp migration
Date: Thu, 3 Mar 2016 16:41:58 +0900
Message-Id: <1456990918-30906-12-git-send-email-n-horiguchi@ah.jp.nec.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1456990918-30906-1-git-send-email-n-horiguchi@ah.jp.nec.com>
References: <1456990918-30906-1-git-send-email-n-horiguchi@ah.jp.nec.com>

This patch enables thp migration for memory hotremove. A stub
definition of prep_transhuge_page() is added for
CONFIG_TRANSPARENT_HUGEPAGE=n so that the new call site in
alloc_migrate_target() still compiles.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 include/linux/huge_mm.h | 3 +++
 mm/memory_hotplug.c     | 8 ++++++++
 mm/page_isolation.c     | 8 ++++++++
 3 files changed, 19 insertions(+)

diff --git v4.5-rc5-mmotm-2016-02-24-16-18/include/linux/huge_mm.h v4.5-rc5-mmotm-2016-02-24-16-18_patched/include/linux/huge_mm.h
index 09b215d..7944346 100644
--- v4.5-rc5-mmotm-2016-02-24-16-18/include/linux/huge_mm.h
+++ v4.5-rc5-mmotm-2016-02-24-16-18_patched/include/linux/huge_mm.h
@@ -175,6 +175,9 @@ static inline bool thp_migration_supported(void)
 #define transparent_hugepage_enabled(__vma) 0
 #define transparent_hugepage_flags 0UL
 
+static inline void prep_transhuge_page(struct page *page)
+{
+}
 static inline int split_huge_page_to_list(struct page *page,
 		struct list_head *list)
 {
diff --git v4.5-rc5-mmotm-2016-02-24-16-18/mm/memory_hotplug.c v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/memory_hotplug.c
index e62aa07..b4b23d5 100644
--- v4.5-rc5-mmotm-2016-02-24-16-18/mm/memory_hotplug.c
+++ v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/memory_hotplug.c
@@ -1511,6 +1511,14 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			if (isolate_huge_page(page, &source))
 				move_pages -= 1 << compound_order(head);
 			continue;
+		} else if (thp_migration_supported() && PageTransHuge(page)) {
+			struct page *head = compound_head(page);
+
+			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
+			if (compound_order(head) > PFN_SECTION_SHIFT) {
+				ret = -EBUSY;
+				break;
+			}
 		}
 
 		if (!get_page_unless_zero(page))
diff --git v4.5-rc5-mmotm-2016-02-24-16-18/mm/page_isolation.c v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/page_isolation.c
index 92c4c36..b2d22e8 100644
--- v4.5-rc5-mmotm-2016-02-24-16-18/mm/page_isolation.c
+++ v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/page_isolation.c
@@ -294,6 +294,14 @@ struct page *alloc_migrate_target(struct page *page, unsigned long private,
 		nodes_complement(dst, src);
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
 					    next_node(page_to_nid(page), dst));
+	} else if (thp_migration_supported() && PageTransHuge(page)) {
+		struct page *thp;
+
+		thp = alloc_pages(GFP_TRANSHUGE, HPAGE_PMD_ORDER);
+		if (!thp)
+			return NULL;
+		prep_transhuge_page(thp);
+		return thp;
 	}
 
 	if (PageHighMem(page))
-- 
2.7.0
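
As background on the huge_mm.h hunk: with CONFIG_TRANSPARENT_HUGEPAGE=n,
prep_transhuge_page() must still exist so that the new call site in
alloc_migrate_target() compiles, and the empty static inline costs nothing at
runtime. Below is a minimal, self-contained sketch of that stub pattern; the
struct page stand-in and main() are invented scaffolding for illustration,
not kernel code.

/*
 * Standalone sketch (not kernel code): an empty static inline stub
 * under #else keeps call sites free of #ifdef clutter when the
 * config option is not set.
 */
#include <stdio.h>

struct page {
	unsigned int flags;	/* stand-in for the real struct page */
};

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline void prep_transhuge_page(struct page *page)
{
	/* the real kernel initializes compound/THP state here */
	page->flags |= 1U;	/* pretend "this is a THP" flag */
}
#else
/* CONFIG_TRANSPARENT_HUGEPAGE=n: empty stub, compiles away */
static inline void prep_transhuge_page(struct page *page)
{
}
#endif

int main(void)
{
	struct page p = { 0 };

	/* identical call site whether THP is configured or not */
	prep_transhuge_page(&p);
	printf("flags = %u\n", p.flags);
	return 0;
}

Building with "cc -DCONFIG_TRANSPARENT_HUGEPAGE stub.c" versus plain
"cc stub.c" shows the two behaviors; the caller never changes.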