From: Zi Yan
To: Dave Hansen, Yang Shi, Keith Busch, Fengguang Wu,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Daniel Jordan, Michal Hocko,
Shutemov" , Andrew Morton , Vlastimil Babka , Mel Gorman , John Hubbard , Mark Hairgrove , Nitin Gupta , Javier Cabezas , David Nellans , Zi Yan Subject: [RFC PATCH 08/25] mm: migrate: Add copy_page_dma into migrate_page_copy. Date: Wed, 3 Apr 2019 19:00:29 -0700 Message-Id: <20190404020046.32741-9-zi.yan@sent.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190404020046.32741-1-zi.yan@sent.com> References: <20190404020046.32741-1-zi.yan@sent.com> Reply-To: ziy@nvidia.com MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Zi Yan Fallback to copy_highpage when it fails. Signed-off-by: Zi Yan --- include/linux/migrate_mode.h | 1 + include/uapi/linux/mempolicy.h | 1 + mm/migrate.c | 31 +++++++++++++++++++++---------- 3 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h index 5bc8a77..4f7f5557 100644 --- a/include/linux/migrate_mode.h +++ b/include/linux/migrate_mode.h @@ -23,6 +23,7 @@ enum migrate_mode { MIGRATE_MODE_MASK = 3, MIGRATE_SINGLETHREAD = 0, MIGRATE_MT = 1<<4, + MIGRATE_DMA = 1<<5, }; #endif /* MIGRATE_MODE_H_INCLUDED */ diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 890269b..49573a6 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -48,6 +48,7 @@ enum { #define MPOL_MF_LAZY (1<<3) /* Modifies '_MOVE: lazy migrate on fault */ #define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */ +#define MPOL_MF_MOVE_DMA (1<<5) /* Use DMA page copy routine */ #define MPOL_MF_MOVE_MT (1<<6) /* Use multi-threaded page copy routine */ #define MPOL_MF_VALID (MPOL_MF_STRICT | \ diff --git a/mm/migrate.c b/mm/migrate.c index 8a344e2..09114d3 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -553,15 +553,21 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, * specialized. 
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index 5bc8a77..4f7f5557 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -23,6 +23,7 @@ enum migrate_mode {
 	MIGRATE_MODE_MASK = 3,
 	MIGRATE_SINGLETHREAD	= 0,
 	MIGRATE_MT		= 1<<4,
+	MIGRATE_DMA		= 1<<5,
 };
 
 #endif		/* MIGRATE_MODE_H_INCLUDED */
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 890269b..49573a6 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -48,6 +48,7 @@ enum {
 #define MPOL_MF_LAZY	 (1<<3)	/* Modifies '_MOVE:  lazy migrate on fault */
 #define MPOL_MF_INTERNAL (1<<4)	/* Internal flags start here */
 
+#define MPOL_MF_MOVE_DMA (1<<5)	/* Use DMA page copy routine */
 #define MPOL_MF_MOVE_MT  (1<<6)	/* Use multi-threaded page copy routine */
 
 #define MPOL_MF_VALID	(MPOL_MF_STRICT   |	\
diff --git a/mm/migrate.c b/mm/migrate.c
index 8a344e2..09114d3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -553,15 +553,21 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
  * specialized.
  */
 static void __copy_gigantic_page(struct page *dst, struct page *src,
-				int nr_pages)
+				int nr_pages, enum migrate_mode mode)
 {
 	int i;
 	struct page *dst_base = dst;
 	struct page *src_base = src;
+	int rc = -EFAULT;
 
 	for (i = 0; i < nr_pages; ) {
 		cond_resched();
-		copy_highpage(dst, src);
+
+		if (mode & MIGRATE_DMA)
+			rc = copy_page_dma(dst, src, 1);
+
+		if (rc)
+			copy_highpage(dst, src);
 
 		i++;
 		dst = mem_map_next(dst, dst_base, i);
@@ -582,7 +588,7 @@ static void copy_huge_page(struct page *dst, struct page *src,
 		nr_pages = pages_per_huge_page(h);
 
 		if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) {
-			__copy_gigantic_page(dst, src, nr_pages);
+			__copy_gigantic_page(dst, src, nr_pages, mode);
 			return;
 		}
 	} else {
@@ -597,6 +603,8 @@ static void copy_huge_page(struct page *dst, struct page *src,
 
 	if (mode & MIGRATE_MT)
 		rc = copy_page_multithread(dst, src, nr_pages);
+	else if (mode & MIGRATE_DMA)
+		rc = copy_page_dma(dst, src, nr_pages);
 
 	if (rc)
 		for (i = 0; i < nr_pages; i++) {
@@ -674,8 +682,9 @@ void migrate_page_copy(struct page *newpage, struct page *page,
 {
 	if (PageHuge(page) || PageTransHuge(page))
 		copy_huge_page(newpage, page, mode);
-	else
+	else {
 		copy_highpage(newpage, page);
+	}
 
 	migrate_page_states(newpage, page);
 }
@@ -1511,7 +1520,8 @@ static int store_status(int __user *status, int start, int value, int nr)
 }
 
 static int do_move_pages_to_node(struct mm_struct *mm,
-		struct list_head *pagelist, int node, bool migrate_mt)
+		struct list_head *pagelist, int node,
+		bool migrate_mt, bool migrate_dma)
 {
 	int err;
 
@@ -1519,7 +1529,8 @@ static int do_move_pages_to_node(struct mm_struct *mm,
 		return 0;
 
 	err = migrate_pages(pagelist, alloc_new_node_page, NULL, node,
-		MIGRATE_SYNC | (migrate_mt ? MIGRATE_MT : MIGRATE_SINGLETHREAD),
+		MIGRATE_SYNC | (migrate_mt ? MIGRATE_MT : MIGRATE_SINGLETHREAD) |
+			(migrate_dma ? MIGRATE_DMA : MIGRATE_SINGLETHREAD),
 		MR_SYSCALL);
 	if (err)
 		putback_movable_pages(pagelist);
@@ -1642,7 +1653,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 			start = i;
 		} else if (node != current_node) {
 			err = do_move_pages_to_node(mm, &pagelist, current_node,
-				flags & MPOL_MF_MOVE_MT);
+				flags & MPOL_MF_MOVE_MT, flags & MPOL_MF_MOVE_DMA);
 			if (err)
 				goto out;
 			err = store_status(status, start, current_node, i - start);
@@ -1666,7 +1677,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 			goto out_flush;
 
 		err = do_move_pages_to_node(mm, &pagelist, current_node,
-			flags & MPOL_MF_MOVE_MT);
+			flags & MPOL_MF_MOVE_MT, flags & MPOL_MF_MOVE_DMA);
 		if (err)
 			goto out;
 		if (i > start) {
@@ -1682,7 +1693,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 
 	/* Make sure we do not overwrite the existing error */
 	err1 = do_move_pages_to_node(mm, &pagelist, current_node,
-			flags & MPOL_MF_MOVE_MT);
+			flags & MPOL_MF_MOVE_MT, flags & MPOL_MF_MOVE_DMA);
 	if (!err1)
 		err1 = store_status(status, start, current_node, i - start);
 	if (!err)
@@ -1778,7 +1789,7 @@ static int kernel_move_pages(pid_t pid, unsigned long nr_pages,
 	nodemask_t task_nodes;
 
 	/* Check flags */
-	if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL|MPOL_MF_MOVE_MT))
+	if (flags & ~(MPOL_MF_MOVE|MPOL_MF_MOVE_ALL|MPOL_MF_MOVE_MT|MPOL_MF_MOVE_DMA))
 		return -EINVAL;
 
 	if ((flags & MPOL_MF_MOVE_ALL) && !capable(CAP_SYS_NICE))
-- 
2.7.4
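For completeness, a minimal in-kernel usage sketch (not part of this
patch): a caller that already holds a list of isolated pages can request
the DMA copy path by OR-ing MIGRATE_DMA into the migrate_mode argument,
mirroring what do_move_pages_to_node() does above. The function name
migrate_list_with_dma() is made up for illustration; alloc_new_node_page,
MR_SYSCALL and putback_movable_pages() are the same helpers the existing
move_pages() path uses.

#include <linux/migrate.h>
#include <linux/migrate_mode.h>

static int migrate_list_with_dma(struct list_head *pagelist, int node)
{
	int err;

	if (list_empty(pagelist))
		return 0;

	/*
	 * MIGRATE_SYNC selects synchronous migration; MIGRATE_DMA selects
	 * the DMA copy routine, with copy_highpage() as the fallback.
	 */
	err = migrate_pages(pagelist, alloc_new_node_page, NULL, node,
			    MIGRATE_SYNC | MIGRATE_DMA, MR_SYSCALL);
	if (err)
		putback_movable_pages(pagelist);

	return err;
}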