From: Naoya Horiguchi
To: linux-mm@kvack.org
Cc: "Kirill A. Shutemov", Hugh Dickins, Andrew Morton, Dave Hansen,
	Andrea Arcangeli, Mel Gorman, Michal Hocko, Vlastimil Babka,
	Pavel Emelyanov, linux-kernel@vger.kernel.org, Naoya Horiguchi,
	Naoya Horiguchi
Subject: [PATCH v1 01/11] mm: mempolicy: add queue_pages_node_check()
Date: Thu, 3 Mar 2016 16:41:48 +0900
Message-Id: <1456990918-30906-2-git-send-email-n-horiguchi@ah.jp.nec.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1456990918-30906-1-git-send-email-n-horiguchi@ah.jp.nec.com>
References: <1456990918-30906-1-git-send-email-n-horiguchi@ah.jp.nec.com>

Introduce a separate check routine for the MPOL_MF_INVERT flag. This
patch is a cleanup only; there is no behavioral change.

Signed-off-by: Naoya Horiguchi
---
 mm/mempolicy.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git v4.5-rc5-mmotm-2016-02-24-16-18/mm/mempolicy.c v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/mempolicy.c
index 8c5fd08..840a0ad 100644
--- v4.5-rc5-mmotm-2016-02-24-16-18/mm/mempolicy.c
+++ v4.5-rc5-mmotm-2016-02-24-16-18_patched/mm/mempolicy.c
@@ -478,6 +478,15 @@ struct queue_pages {
 	struct vm_area_struct *prev;
 };
 
+static inline bool queue_pages_node_check(struct page *page,
+					struct queue_pages *qp)
+{
+	int nid = page_to_nid(page);
+	unsigned long flags = qp->flags;
+
+	return node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT);
+}
+
 /*
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
@@ -529,8 +538,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (PageReserved(page))
 			continue;
-		nid = page_to_nid(page);
-		if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+		if (queue_pages_node_check(page, qp))
 			continue;
 		if (PageTail(page) && PageAnon(page)) {
 			get_page(page);
@@ -562,7 +570,6 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int nid;
 	struct page *page;
 	spinlock_t *ptl;
 	pte_t entry;
@@ -572,8 +579,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	page = pte_page(entry);
-	nid = page_to_nid(page);
-	if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
+	if (queue_pages_node_check(page, qp))
 		goto unlock;
 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-- 
2.7.0
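
The helper's return value means "skip this page": without MPOL_MF_INVERT,
pages whose node is outside qp->nmask are skipped; with the flag set, the
sense inverts and in-mask pages are skipped. A minimal userspace sketch of
that predicate follows; the nodemask type, node_isset(), and the
MPOL_MF_INVERT value here are simplified stand-ins for the kernel-internal
definitions, used only to keep the snippet self-contained.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel-internal flag; the real value differs. */
#define MPOL_MF_INVERT	0x1000

/* Simplified stand-in for nodemask_t / node_isset(). */
typedef unsigned long nodemask;

static bool node_isset(int nid, nodemask mask)
{
	return mask & (1UL << nid);
}

/* Mirrors queue_pages_node_check(): true means "skip this page". */
static bool queue_pages_node_check(int nid, nodemask mask,
				   unsigned long flags)
{
	return node_isset(nid, mask) == !!(flags & MPOL_MF_INVERT);
}

int main(void)
{
	nodemask mask = 1UL << 1;	/* only node 1 is requested */

	/* No invert: node 0 is outside the mask, so it is skipped. */
	printf("node 0: skip=%d\n", queue_pages_node_check(0, mask, 0));
	printf("node 1: skip=%d\n", queue_pages_node_check(1, mask, 0));
	/* MPOL_MF_INVERT flips the sense: in-mask node 1 is skipped. */
	printf("node 1 (invert): skip=%d\n",
	       queue_pages_node_check(1, mask, MPOL_MF_INVERT));
	return 0;
}

Folding the comparison into a single boolean expression, rather than two
if-branches, is what lets the patch replace the two open-coded call sites
with one inline helper.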