From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dan Williams,
    Christoph Hellwig, Doug Ledford, Hal Rosenstock, Inki Dae, Jan Kara,
    Jason Gunthorpe, Jeff Moyer, Joonyoung Shim, Kyungmin Park,
    Mauro Carvalho Chehab, Mel Gorman, Ross Zwisler, Sean Hefty,
    Seung-Woo Kim, Vlastimil Babka, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.9 33/39] mm: introduce get_user_pages_longterm
Date: Mon, 26 Feb 2018 21:20:54 +0100
Message-Id: <20180226201645.125678735@linuxfoundation.org>
In-Reply-To: <20180226201643.660109883@linuxfoundation.org>
References: <20180226201643.660109883@linuxfoundation.org>
X-Mailer: git-send-email 2.16.2
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Dan Williams

commit 2bb6d2837083de722bfdc369cb0d76ce188dd9b4 upstream.

Patch series "introduce get_user_pages_longterm()", v2.

Here is a new get_user_pages API for cases where a driver intends to
keep an elevated page count indefinitely.  This is distinct from usages
like iov_iter_get_pages, where the elevated page counts are transient:
the iov_iter_get_pages cases immediately turn around and submit the
pages to a device driver, which will put_page() when the I/O operation
completes (under kernel control).

In the longterm case userspace is responsible for dropping the page
reference at some undefined point in the future.  This is untenable for
the filesystem-dax case, where the filesystem is in control of the
lifetime of the block/page and needs reasonable limits on how long it
can wait for pages in a mapping to become idle.

Fixing filesystems to actually wait for dax pages to be idle before
blocks from a truncate/hole-punch operation are repurposed is saved for
a later patch series.

Also, allowing longterm registration of dax mappings is a future patch
series that introduces a "map with lease" semantic, where the kernel can
revoke a lease and force userspace to drop its page references.  I have
also tagged these for -stable to purposely break cases that might assume
that longterm memory registrations for filesystem-dax mappings were
supported by the kernel.  The behavior regression this policy change
implies is one of the reasons we maintain the "dax enabled. Warning:
EXPERIMENTAL, use at your own risk" notification when mounting a
filesystem in dax mode.

It is worth noting that the device-dax interface does not suffer the
same constraints, since it does not support file space management
operations like hole-punch.

This patch (of 4):

Until there is a solution to the dma-to-dax-vs-truncate problem it is
not safe to allow long-standing memory registrations against
filesystem-dax vmas.  Device-dax vmas do not have this problem and are
explicitly allowed.  This is temporary until a "memory registration with
layout-lease" mechanism can be implemented for the affected sub-systems
(RDMA and V4L2).
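
To make the new calling convention concrete, here is a minimal,
hypothetical driver-side sketch (not part of this patch; the helper name
example_pin_user_buffer and its error handling are illustrative
assumptions, but the get_user_pages_longterm() signature is the one
introduced below):

	#include <linux/mm.h>
	#include <linux/sched.h>
	#include <linux/slab.h>

	/*
	 * Pin nr_pages of user memory for an indefinite,
	 * userspace-controlled lifetime, e.g. an RDMA memory
	 * registration.
	 */
	static long example_pin_user_buffer(unsigned long start,
					    unsigned long nr_pages,
					    struct page ***pagesp)
	{
		struct page **pages;
		long pinned;

		pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return -ENOMEM;

		down_read(&current->mm->mmap_sem);
		/*
		 * Unlike get_user_pages(), this fails with -EOPNOTSUPP
		 * if any vma in the range is filesystem-dax, because the
		 * elevated page counts would outlive the filesystem's
		 * control of the blocks backing the pages.
		 */
		pinned = get_user_pages_longterm(start, nr_pages,
						 FOLL_WRITE, pages, NULL);
		up_read(&current->mm->mmap_sem);

		if (pinned < 0) {
			kfree(pages);
			return pinned;
		}
		/* May be a short pin; the caller sees the actual count. */
		*pagesp = pages;
		return pinned;
	}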

[akpm@linux-foundation.org: use kcalloc()]
Link: http://lkml.kernel.org/r/151068939435.7446.13560129395419350737.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams
Suggested-by: Christoph Hellwig
Cc: Doug Ledford
Cc: Hal Rosenstock
Cc: Inki Dae
Cc: Jan Kara
Cc: Jason Gunthorpe
Cc: Jeff Moyer
Cc: Joonyoung Shim
Cc: Kyungmin Park
Cc: Mauro Carvalho Chehab
Cc: Mel Gorman
Cc: Ross Zwisler
Cc: Sean Hefty
Cc: Seung-Woo Kim
Cc: Vlastimil Babka
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/dax.h |    5 ----
 include/linux/fs.h  |   20 ++++++++++++++++
 include/linux/mm.h  |   13 ++++++++++
 mm/gup.c            |   64 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 97 insertions(+), 5 deletions(-)

--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -61,11 +61,6 @@ static inline int dax_pmd_fault(struct v
 int dax_pfn_mkwrite(struct vm_area_struct *, struct vm_fault *);
 #define dax_mkwrite(vma, vmf, gb)	dax_fault(vma, vmf, gb)
 
-static inline bool vma_is_dax(struct vm_area_struct *vma)
-{
-	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
-}
-
 static inline bool dax_mapping(struct address_space *mapping)
 {
 	return mapping->host && IS_DAX(mapping->host);
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -18,6 +18,7 @@
 #include <linux/bug.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
+#include <linux/mm_types.h>
 #include <linux/capability.h>
 #include <linux/semaphore.h>
 #include <linux/fcntl.h>
@@ -3033,6 +3034,25 @@ static inline bool io_is_direct(struct f
 	return (filp->f_flags & O_DIRECT) || IS_DAX(filp->f_mapping->host);
 }
 
+static inline bool vma_is_dax(struct vm_area_struct *vma)
+{
+	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
+}
+
+static inline bool vma_is_fsdax(struct vm_area_struct *vma)
+{
+	struct inode *inode;
+
+	if (!vma->vm_file)
+		return false;
+	if (!vma_is_dax(vma))
+		return false;
+	inode = file_inode(vma->vm_file);
+	if (inode->i_mode == S_IFCHR)
+		return false; /* device-dax */
+	return true;
+}
+
 static inline int iocb_flags(struct file *file)
 {
 	int res = 0;
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1288,6 +1288,19 @@ long __get_user_pages_unlocked(struct ta
 		    struct page **pages, unsigned int gup_flags);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		    struct page **pages, unsigned int gup_flags);
+#ifdef CONFIG_FS_DAX
+long get_user_pages_longterm(unsigned long start, unsigned long nr_pages,
+			    unsigned int gup_flags, struct page **pages,
+			    struct vm_area_struct **vmas);
+#else
+static inline long get_user_pages_longterm(unsigned long start,
+		unsigned long nr_pages, unsigned int gup_flags,
+		struct page **pages, struct vm_area_struct **vmas)
+{
+	return get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+}
+#endif /* CONFIG_FS_DAX */
+
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			struct page **pages);
 
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -982,6 +982,70 @@ long get_user_pages(unsigned long start,
 }
 EXPORT_SYMBOL(get_user_pages);
 
+#ifdef CONFIG_FS_DAX
+/*
+ * This is the same as get_user_pages() in that it assumes we are
+ * operating on the current task's mm, but it goes further to validate
+ * that the vmas associated with the address range are suitable for
+ * longterm elevated page reference counts.  For example, filesystem-dax
+ * mappings are subject to the lifetime enforced by the filesystem and
+ * we need guarantees that longterm users like RDMA and V4L2 only
+ * establish mappings that have a kernel enforced revocation mechanism.
+ *
+ * "longterm" == userspace controlled elevated page count lifetime.
+ * Contrast this to iov_iter_get_pages() usages which are transient.
+ */
+long get_user_pages_longterm(unsigned long start, unsigned long nr_pages,
+			    unsigned int gup_flags, struct page **pages,
+			    struct vm_area_struct **vmas_arg)
+{
+	struct vm_area_struct **vmas = vmas_arg;
+	struct vm_area_struct *vma_prev = NULL;
+	long rc, i;
+
+	if (!pages)
+		return -EINVAL;
+
+	if (!vmas) {
+		vmas = kcalloc(nr_pages, sizeof(struct vm_area_struct *),
+			       GFP_KERNEL);
+		if (!vmas)
+			return -ENOMEM;
+	}
+
+	rc = get_user_pages(start, nr_pages, gup_flags, pages, vmas);
+
+	for (i = 0; i < rc; i++) {
+		struct vm_area_struct *vma = vmas[i];
+
+		if (vma == vma_prev)
+			continue;
+
+		vma_prev = vma;
+
+		if (vma_is_fsdax(vma))
+			break;
+	}
+
+	/*
+	 * Either get_user_pages() failed, or the vma validation
+	 * succeeded, in either case we don't need to put_page() before
+	 * returning.
+	 */
+	if (i >= rc)
+		goto out;
+
+	for (i = 0; i < rc; i++)
+		put_page(pages[i]);
+	rc = -EOPNOTSUPP;
+out:
+	if (vmas != vmas_arg)
+		kfree(vmas);
+	return rc;
+}
+EXPORT_SYMBOL(get_user_pages_longterm);
+#endif /* CONFIG_FS_DAX */
+
 /**
  * populate_vma_page_range() - populate a range of pages in the vma.
  * @vma: target vma
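
For completeness, the matching (equally hypothetical) teardown for the
sketch above.  In the longterm case this put_page() may happen
arbitrarily far in the future, under userspace control, which is exactly
what the vma_is_fsdax() check refuses to allow for filesystem-dax
mappings:

	static void example_unpin_user_buffer(struct page **pages,
					      long nr_pinned)
	{
		long i;

		/* Drop the references taken by get_user_pages_longterm(). */
		for (i = 0; i < nr_pinned; i++)
			put_page(pages[i]);
		kfree(pages);
	}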