From: Zi Yan
To: Dave Hansen, Yang Shi, Keith Busch, Fengguang Wu,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Daniel Jordan, Michal Hocko, "Kirill A . Shutemov", Andrew Morton,
	Vlastimil Babka, Mel Gorman, John Hubbard, Mark Hairgrove,
	Nitin Gupta, Javier Cabezas, David Nellans, Zi Yan
Subject: [RFC PATCH 03/25] mm: migrate: Add a multi-threaded page migration function.
Date: Wed, 3 Apr 2019 19:00:24 -0700
Message-Id: <20190404020046.32741-4-zi.yan@sent.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190404020046.32741-1-zi.yan@sent.com>
References: <20190404020046.32741-1-zi.yan@sent.com>
Reply-To: ziy@nvidia.com
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Zi Yan

Add copy_page_multithread() to migrate huge pages using multiple
threads, which provides higher throughput than a single-threaded copy.
Internally, copy_page_multithread() splits a huge page into chunks and
distributes the chunks across multiple threads by queueing them as
jobs on system_highpri_wq.

Signed-off-by: Zi Yan
---
 include/linux/highmem.h |   2 +
 mm/Makefile             |   2 +
 mm/copy_page.c          | 128 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 132 insertions(+)
 create mode 100644 mm/copy_page.c

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ea5cdbd8c..0f50dc5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -276,4 +276,6 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+int copy_page_multithread(struct page *to, struct page *from, int nr_pages);
+
 #endif /* _LINUX_HIGHMEM_H */
diff --git a/mm/Makefile b/mm/Makefile
index d210cc9..fa02a9f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -44,6 +44,8 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
 obj-y += init-mm.o
 obj-y += memblock.o
 
+obj-y += copy_page.o
+
 ifdef CONFIG_MMU
 obj-$(CONFIG_ADVISE_SYSCALLS) += madvise.o
 endif
diff --git a/mm/copy_page.c b/mm/copy_page.c
new file mode 100644
index 0000000..9cf849c
--- /dev/null
+++ b/mm/copy_page.c
@@ -0,0 +1,128 @@
+/*
+ * Enhanced page copy routine.
+ *
+ * Copyright 2019 by NVIDIA.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Authors: Zi Yan
+ *
+ */
+
+#include <linux/highmem.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/freezer.h>
+
+
+const unsigned int limit_mt_num = 4;
+
+/* ======================== multi-threaded copy page ======================== */
+
+struct copy_item {
+	char *to;
+	char *from;
+	unsigned long chunk_size;
+};
+
+struct copy_page_info {
+	struct work_struct copy_page_work;
+	unsigned long num_items;
+	struct copy_item item_list[0];
+};
+
+static void copy_page_routine(char *vto, char *vfrom,
+	unsigned long chunk_size)
+{
+	memcpy(vto, vfrom, chunk_size);
+}
+
+static void copy_page_work_queue_thread(struct work_struct *work)
+{
+	struct copy_page_info *my_work = (struct copy_page_info *)work;
+	int i;
+
+	for (i = 0; i < my_work->num_items; ++i)
+		copy_page_routine(my_work->item_list[i].to,
+				  my_work->item_list[i].from,
+				  my_work->item_list[i].chunk_size);
+}
+
+int copy_page_multithread(struct page *to, struct page *from, int nr_pages)
+{
+	unsigned int total_mt_num = limit_mt_num;
+	int to_node = page_to_nid(to);
+	int i;
+	struct copy_page_info *work_items[NR_CPUS] = {0};
+	char *vto, *vfrom;
+	unsigned long chunk_size;
+	const struct cpumask *per_node_cpumask = cpumask_of_node(to_node);
+	int cpu_id_list[NR_CPUS] = {0};
+	int cpu;
+	int err = 0;
+
+	total_mt_num = min_t(unsigned int, total_mt_num,
+			     cpumask_weight(per_node_cpumask));
+	if (total_mt_num > 1)
+		total_mt_num = (total_mt_num / 2) * 2;
+
+	if (total_mt_num > num_online_cpus() || total_mt_num <= 1)
+		return -ENODEV;
+
+	for (cpu = 0; cpu < total_mt_num; ++cpu) {
+		work_items[cpu] = kzalloc(sizeof(struct copy_page_info) +
+					  sizeof(struct copy_item), GFP_KERNEL);
+		if (!work_items[cpu]) {
+			err = -ENOMEM;
+			goto free_work_items;
+		}
+	}
+
+	i = 0;
+	for_each_cpu(cpu, per_node_cpumask) {
+		if (i >= total_mt_num)
+			break;
+		cpu_id_list[i] = cpu;
+		++i;
+	}
+
+	vfrom = kmap(from);
+	vto = kmap(to);
+	chunk_size = PAGE_SIZE * nr_pages / total_mt_num;
+
+	for (i = 0; i < total_mt_num; ++i) {
+		INIT_WORK((struct work_struct *)work_items[i],
+			  copy_page_work_queue_thread);
+
+		work_items[i]->num_items = 1;
+		work_items[i]->item_list[0].to = vto + i * chunk_size;
+		work_items[i]->item_list[0].from = vfrom + i * chunk_size;
+		work_items[i]->item_list[0].chunk_size = chunk_size;
+
+		queue_work_on(cpu_id_list[i],
+			      system_highpri_wq,
+			      (struct work_struct *)work_items[i]);
+	}
+
+	/* Wait until it finishes */
+	for (i = 0; i < total_mt_num; ++i)
+		flush_work((struct work_struct *)work_items[i]);
+
+	kunmap(to);
+	kunmap(from);
+
+free_work_items:
+	for (cpu = 0; cpu < total_mt_num; ++cpu)
+		if (work_items[cpu])
+			kfree(work_items[cpu]);
+
+	return err;
+}
-- 
2.7.4