From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
    Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
    Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v4 06/11] mm/migrate: make a standard migration target allocation function
Date: Tue, 7 Jul 2020 16:44:44 +0900
Message-Id: <1594107889-32228-7-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1594107889-32228-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1594107889-32228-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There are several similar functions for migration target allocation. Since
there is no fundamental difference between them, it's better to keep just
one rather than keeping all the variants. This patch implements the base
migration target allocation function. In the following patches, the
variants will be converted to use this function.
The changes should be mechanical, but there are some differences. First,
some callers' nodemask is assigned NULL, since a NULL nodemask is
considered to mean all available nodes, that is, &node_states[N_MEMORY].
Second, for hugetlb page allocation, gfp_mask is ORed in, since a user can
now provide a gfp_mask. Third, if the provided node id is NUMA_NO_NODE,
the node id is set to the node where the migration source lives. Note that
the PageHighMem() call in the previous function is changed to an
open-coded "is_highmem_idx()" check, since it provides more readability.

Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 include/linux/migrate.h |  9 +++++----
 mm/internal.h           |  7 +++++++
 mm/memory-failure.c     |  7 +++++--
 mm/memory_hotplug.c     | 14 +++++++++-----
 mm/migrate.c            | 27 +++++++++++++++++----------
 mm/page_isolation.c     |  7 +++++--
 6 files changed, 48 insertions(+), 23 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1d70b4a..cc56f0d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -10,6 +10,8 @@
 typedef struct page *new_page_t(struct page *page, unsigned long private);
 typedef void free_page_t(struct page *page, unsigned long private);
 
+struct migration_target_control;
+
 /*
  * Return values from addresss_space_operations.migratepage():
  * - negative errno on page migration failure;
@@ -39,8 +41,7 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
-extern struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask);
+extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
@@ -59,8 +60,8 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
 		int reason)
 	{ return -ENOSYS; }
-static inline struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask)
+static inline struct page *alloc_migration_target(struct page *page,
+		unsigned long private)
 	{ return NULL; }
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
diff --git a/mm/internal.h b/mm/internal.h
index dd14c53..0beacf3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -614,4 +614,11 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 
 void setup_zone_pageset(struct zone *zone);
 extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+
+struct migration_target_control {
+	int nid;		/* preferred node id */
+	nodemask_t *nmask;
+	gfp_t gfp_mask;
+};
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index c5e4cee..609d42b6 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1679,9 +1679,12 @@ EXPORT_SYMBOL(unpoison_memory);
 
 static struct page *new_page(struct page *p, unsigned long private)
 {
-	int nid = page_to_nid(p);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(p),
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
-	return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(p, (unsigned long)&mtc);
 }
 
 /*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index cafe65eb..86bc2ad 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1267,19 +1267,23 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 
 static struct page *new_node_page(struct page *page, unsigned long private)
 {
-	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.nmask = &nmask,
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
 	/*
 	 * try to allocate from a different node but reuse this node if there
 	 * are no other online nodes to be used (e.g. we are offlining a part
 	 * of the only existing node)
 	 */
-	node_clear(nid, nmask);
-	if (nodes_empty(nmask))
-		node_set(nid, nmask);
+	node_clear(mtc.nid, *mtc.nmask);
+	if (nodes_empty(*mtc.nmask))
+		node_set(mtc.nid, *mtc.nmask);
 
-	return new_page_nodemask(page, nid, &nmask);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
 
 static int
diff --git a/mm/migrate.c b/mm/migrate.c
index ecd7615..00cd81c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1531,19 +1531,27 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
-struct page *new_page_nodemask(struct page *page,
-				int preferred_nid, nodemask_t *nodemask)
+struct page *alloc_migration_target(struct page *page, unsigned long private)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	struct migration_target_control *mtc;
+	gfp_t gfp_mask;
 	unsigned int order = 0;
 	struct page *new_page = NULL;
+	int nid;
+	int zidx;
+
+	mtc = (struct migration_target_control *)private;
+	gfp_mask = mtc->gfp_mask;
+	nid = mtc->nid;
+	if (nid == NUMA_NO_NODE)
+		nid = page_to_nid(page);
 
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(compound_head(page));
 
-		gfp_mask = htlb_alloc_mask(h);
-		return alloc_huge_page_nodemask(h, preferred_nid,
-						nodemask, gfp_mask, false);
+		gfp_mask |= htlb_alloc_mask(h);
+		return alloc_huge_page_nodemask(h, nid, mtc->nmask,
+						gfp_mask, false);
 	}
 
 	if (PageTransHuge(page)) {
@@ -1555,12 +1563,11 @@ struct page *new_page_nodemask(struct page *page,
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
-
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
+	zidx = zone_idx(page_zone(page));
+	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
+	new_page = __alloc_pages_nodemask(gfp_mask, order, nid, mtc->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index aec26d9..f25c66e 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -309,7 +309,10 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private)
 {
-	int nid = page_to_nid(page);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
-	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
-- 
2.7.4