Date: Mon, 10 Apr 2017 13:11:56 -0500
From: Reza Arbab
To: Jérôme Glisse
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, John Hubbard,
 Anshuman Khandual, Balbir Singh, Benjamin Herrenschmidt, Aneesh Kumar,
 "Paul E. McKenney", Srikar Dronamraju, Haren Myneni, Dan Williams
Subject: Re: [RFC HMM CDM 3/3] mm/migrate: memory migration using a device DMA engine
References: <1491596933-21669-1-git-send-email-jglisse@redhat.com>
 <1491596933-21669-4-git-send-email-jglisse@redhat.com>
In-Reply-To: <1491596933-21669-4-git-send-email-jglisse@redhat.com>
Message-Id: <20170410181156.hxwfsqhodbhachpu@arbab-laptop.localdomain>
Organization: IBM Linux Technology Center

(Had sent this to you directly. Reposting for the whole cc list.)

On Fri, Apr 07, 2017 at 04:28:53PM -0400, Jérôme Glisse wrote:
>--- a/include/linux/migrate.h
>+++ b/include/linux/migrate.h
>@@ -212,28 +215,25 @@ static inline unsigned long migrate_pfn(unsigned long pfn)
> * THE finalize_and_map() CALLBACK MUST NOT CHANGE ANY OF THE SRC OR DST ARRAY
> * ENTRIES OR BAD THINGS WILL HAPPEN !
> */
>-struct migrate_vma_ops {
>-	void (*alloc_and_copy)(struct vm_area_struct *vma,
>-			       const unsigned long *src,
>-			       unsigned long *dst,
>-			       unsigned long start,
>-			       unsigned long end,
>-			       void *private);
>-	void (*finalize_and_map)(struct vm_area_struct *vma,
>-				 const unsigned long *src,
>-				 const unsigned long *dst,
>-				 unsigned long start,
>-				 unsigned long end,
>-				 void *private);
>+struct migrate_dma_ops {
>+	void (*alloc_and_copy)(struct migrate_dma_ctx *ctx);
>+	void (*finalize_and_map)(struct migrate_dma_ctx *ctx);
>+};
>+
>+struct migrate_dma_ctx {
>+	const struct migrate_dma_ops *ops;
>+	unsigned long *dst;
>+	unsigned long *src;
>+	unsigned long cpages;
>+	unsigned long npages;

Could you add this so we can still pass arguments to the callbacks?

	void *private;
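
For instance, the driver could then stash its own state in the context
and retrieve it from the callbacks. A rough sketch of what I have in
mind, assuming the field above is added (all of the mydev_* names are
made up, just for illustration):

	#include <linux/migrate.h>

	/* Hypothetical driver types and helpers, for illustration only. */
	struct mydev_device;
	void mydev_copy_pages(struct mydev_device *dev,
			      struct migrate_dma_ctx *ctx);
	void mydev_map_pages(struct mydev_device *dev,
			     struct migrate_dma_ctx *ctx);

	struct mydev_migrate_state {
		struct mydev_device *dev;	/* state the callbacks need */
	};

	static void mydev_alloc_and_copy(struct migrate_dma_ctx *ctx)
	{
		struct mydev_migrate_state *state = ctx->private;

		/* Allocate destination memory, fill ctx->dst[], and copy
		 * the data from ctx->src[] with the device DMA engine. */
		mydev_copy_pages(state->dev, ctx);
	}

	static void mydev_finalize_and_map(struct migrate_dma_ctx *ctx)
	{
		struct mydev_migrate_state *state = ctx->private;

		/* Update device mappings for the pages that migrated. */
		mydev_map_pages(state->dev, ctx);
	}

	static const struct migrate_dma_ops mydev_migrate_ops = {
		.alloc_and_copy		= mydev_alloc_and_copy,
		.finalize_and_map	= mydev_finalize_and_map,
	};

	static int mydev_migrate(struct mydev_device *dev, unsigned long *src,
				 unsigned long *dst, unsigned long npages)
	{
		struct mydev_migrate_state state = { .dev = dev };
		struct migrate_dma_ctx ctx = {
			.ops	 = &mydev_migrate_ops,
			.src	 = src,		/* locked migrate pfn entries */
			.dst	 = dst,
			.npages	 = npages,
			.private = &state,	/* the field suggested above */
		};

		return migrate_dma(&ctx);
	}

Without something like this, the callbacks have no way to get at
driver-specific data, since they now take only the context.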
> };
>
>-int migrate_vma(const struct migrate_vma_ops *ops,
>+int migrate_vma(struct migrate_dma_ctx *ctx,
> 		struct vm_area_struct *vma,
> 		unsigned long start,
>-		unsigned long end,
>-		unsigned long *src,
>-		unsigned long *dst,
>-		void *private);
>+		unsigned long end);
>+int migrate_dma(struct migrate_dma_ctx *migrate_ctx);
>+
>
> #endif /* CONFIG_MIGRATION */
>

...%<...

>--- a/mm/migrate.c
>+++ b/mm/migrate.c
>@@ -2803,16 +2761,76 @@ int migrate_vma(const struct migrate_vma_ops *ops,
> 	 * Note that migration can fail in migrate_vma_struct_page() for each
> 	 * individual page.
> 	 */
>-	ops->alloc_and_copy(vma, src, dst, start, end, private);
>+	migrate_ctx->ops->alloc_and_copy(migrate_ctx);
>
> 	/* This does the real migration of struct page */
>-	migrate_vma_pages(&migrate);
>+	migrate_dma_pages(migrate_ctx, vma, start, end);
>
>-	ops->finalize_and_map(vma, src, dst, start, end, private);
>+	migrate_ctx->ops->finalize_and_map(migrate_ctx);
>
> 	/* Unlock and remap pages */
>-	migrate_vma_finalize(&migrate);
>+	migrate_dma_finalize(migrate_ctx);
>
> 	return 0;
> }
> EXPORT_SYMBOL(migrate_vma);
>+
>+/*
>+ * migrate_dma() - migrate an array of pages using a device DMA engine
>+ *
>+ * @migrate_ctx: migrate context structure
>+ *
>+ * The context structure must have its src fields pointing to an array of
>+ * migrate pfn entry each corresponding to a valid page and each page being
>+ * lock. The dst entry must by an array as big as src, it will be use during
>+ * migration to store the destination pfn.
>+ *
>+ */
>+int migrate_dma(struct migrate_dma_ctx *migrate_ctx)
>+{
>+	unsigned long i;
>+
>+	/* Sanity check the arguments */
>+	if (!migrate_ctx->ops || !migrate_ctx->src || !migrate_ctx->dst)
>+		return -EINVAL;
>+
>+	/* Below code should be hidden behind some DEBUG config */
>+	for (i = 0; i < migrate_ctx->npages; ++i) {
>+		const unsigned long mask = MIGRATE_PFN_VALID |
>+					   MIGRATE_PFN_LOCKED;

This check runs before the pages are locked (that happens below, in
migrate_dma_prepare()), so I think the mask should be
MIGRATE_PFN_MIGRATE instead.

>+
>+		if (!(migrate_ctx->src[i] & mask))
>+			return -EINVAL;
>+	}
>+
>+	/* Lock and isolate page */
>+	migrate_dma_prepare(migrate_ctx);
>+	if (!migrate_ctx->cpages)
>+		return 0;
>+
>+	/* Unmap pages */
>+	migrate_dma_unmap(migrate_ctx);
>+	if (!migrate_ctx->cpages)
>+		return 0;
>+
>+	/*
>+	 * At this point pages are locked and unmapped, and thus they have
>+	 * stable content and can safely be copied to destination memory that
>+	 * is allocated by the callback.
>+	 *
>+	 * Note that migration can fail in migrate_vma_struct_page() for each
>+	 * individual page.
>+	 */
>+	migrate_ctx->ops->alloc_and_copy(migrate_ctx);
>+
>+	/* This does the real migration of struct page */
>+	migrate_dma_pages(migrate_ctx, NULL, 0, 0);
>+
>+	migrate_ctx->ops->finalize_and_map(migrate_ctx);
>+
>+	/* Unlock and remap pages */
>+	migrate_dma_finalize(migrate_ctx);
>+
>+	return 0;
>+}
>+EXPORT_SYMBOL(migrate_dma);

-- 
Reza Arbab