From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Mel Gorman, Hugh Dickins, KOSAKI Motohiro, Andi Kleen,
    Hillf Danton, Michal Hocko, Rik van Riel, "Aneesh Kumar K.V",
    linux-kernel@vger.kernel.org, Naoya Horiguchi
Subject: [PATCH 5/8] mbind: add hugepage migration code to mbind()
Date: Thu, 18 Jul 2013 17:34:29 -0400
Message-Id: <1374183272-10153-6-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1374183272-10153-1-git-send-email-n-horiguchi@ah.jp.nec.com>
References: <1374183272-10153-1-git-send-email-n-horiguchi@ah.jp.nec.com>

This patch extends do_mbind() to handle vmas with VM_HUGETLB set. Once the
enablement patch later in this series is applied, we will be able to migrate
hugepages with mbind(2).
ChangeLog v3:
 - revert introducing migrate_movable_pages
 - added alloc_huge_page_noerr free from ERR_VALUE

ChangeLog v2:
 - updated description and renamed patch title

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 14 ++++++++++++++
 mm/mempolicy.c          |  4 +++-
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git v3.11-rc1.orig/include/linux/hugetlb.h v3.11-rc1/include/linux/hugetlb.h
index 0b7a9e7..768ebbe 100644
--- v3.11-rc1.orig/include/linux/hugetlb.h
+++ v3.11-rc1/include/linux/hugetlb.h
@@ -267,6 +267,8 @@ struct huge_bootmem_page {
 };

 struct page *alloc_huge_page_node(struct hstate *h, int nid);
+struct page *alloc_huge_page_noerr(struct vm_area_struct *vma,
+				unsigned long addr, int avoid_reserve);

 /* arch callback */
 int __init alloc_bootmem_huge_page(struct hstate *h);
@@ -380,6 +382,7 @@ static inline pgoff_t basepage_index(struct page *page)
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 #define alloc_huge_page_node(h, nid) NULL
+#define alloc_huge_page_noerr(v, a, r) NULL
 #define alloc_bootmem_huge_page(h) NULL
 #define hstate_file(f) NULL
 #define hstate_sizelog(s) NULL
diff --git v3.11-rc1.orig/mm/hugetlb.c v3.11-rc1/mm/hugetlb.c
index 4c48a70..fab29a1 100644
--- v3.11-rc1.orig/mm/hugetlb.c
+++ v3.11-rc1/mm/hugetlb.c
@@ -1195,6 +1195,20 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
 	return page;
 }

+/*
+ * alloc_huge_page()'s wrapper which simply returns the page if allocation
+ * succeeds, otherwise NULL. This function is called from new_vma_page(),
+ * where no ERR_VALUE is expected to be returned.
+ */
+struct page *alloc_huge_page_noerr(struct vm_area_struct *vma,
+				unsigned long addr, int avoid_reserve)
+{
+	struct page *page = alloc_huge_page(vma, addr, avoid_reserve);
+	if (IS_ERR(page))
+		page = NULL;
+	return page;
+}
+
 int __weak alloc_bootmem_huge_page(struct hstate *h)
 {
 	struct huge_bootmem_page *m;
diff --git v3.11-rc1.orig/mm/mempolicy.c v3.11-rc1/mm/mempolicy.c
index f3b65c0..d8ced3e 100644
--- v3.11-rc1.orig/mm/mempolicy.c
+++ v3.11-rc1/mm/mempolicy.c
@@ -1180,6 +1180,8 @@ static struct page *new_vma_page(struct page *page, unsigned long private, int *
 		vma = vma->vm_next;
 	}

+	if (PageHuge(page))
+		return alloc_huge_page_noerr(vma, address, 1);
 	/*
 	 * if !vma, alloc_page_vma() will use task or system default policy
 	 */
@@ -1290,7 +1292,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 				(unsigned long)vma,
 				MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
 		if (nr_failed)
-			putback_lru_pages(&pagelist);
+			putback_movable_pages(&pagelist);
 	}

 	if (nr_failed && (flags & MPOL_MF_STRICT))
--
1.8.3.1