Date: Mon, 17 Dec 2018 11:51:51 -0800
From: Matthew Wilcox
To: Jerome Glisse
Cc: Dave Chinner, Jan Kara, John Hubbard, Dan Williams, Andrew Morton,
	Linux MM, tom@talpey.com, Al Viro, benve@cisco.com,
	Christoph Hellwig, Christopher Lameter, "Dalessandro, Dennis",
	Doug Ledford, Jason Gunthorpe, Michal Hocko,
	mike.marciniszyn@intel.com, rcampbell@nvidia.com,
	Linux Kernel Mailing List, linux-fsdevel
Subject: Re: [PATCH 1/2] mm: introduce put_user_page*(), placeholder versions
Message-ID: <20181217195150.GP10600@bombadil.infradead.org>
In-Reply-To: <20181217194759.GB3341@redhat.com>

On Mon, Dec 17, 2018 at 02:48:00PM -0500, Jerome Glisse wrote:
> On Mon, Dec 17, 2018 at 10:34:43AM -0800, Matthew Wilcox wrote:
> > On Mon, Dec 17, 2018 at 01:11:50PM -0500, Jerome Glisse wrote:
> > > On Mon, Dec 17, 2018 at 08:58:19AM +1100, Dave Chinner wrote:
> > > > Sure, that's a possibility, but that doesn't close off any race
> > > > conditions, because there can be DMA into the page in progress
> > > > while the page is being bounced, right? AFAICT this ext3+DIF/DIX
> > > > case is different in that there is no 3rd-party access to the
> > > > page while it is under IO (ext3 arbitrates all access to its
> > > > metadata), and so nothing can actually race for modification of
> > > > the page between submission and bouncing at the block layer.
> > > >
> > > > In this case, the moment the page is unlocked, anyone else can
> > > > map it and start (R)DMA on it, and that can happen before the
> > > > bio is bounced by the block layer. So AFAICT, block layer
> > > > bouncing doesn't solve the problem of racing writeback and DMA
> > > > direct to the page we are doing IO on. Yes, it reduces the race
> > > > window substantially, but it doesn't get rid of it.
> > >
> > > So the event flow is:
> > > - userspace creates an object that matches a range of virtual
> > >   addresses against a given kernel sub-system (let's say
> > >   infiniband), and let's assume that the range is an mmap() of a
> > >   regular file
> > > - the device driver does GUP on the range (let's assume it is a
> > >   write GUP), so if the page is not already mapped with write
> > >   permission in the page table then a page fault is triggered and
> > >   page_mkwrite happens
> > > - once GUP returns the page to the device driver, and once the
> > >   device driver has updated the hardware state to allow access to
> > >   this page, then from that point on the hardware can write to the
> > >   page at _any_ time; it is fully disconnected from any fs event
> > >   like writeback, and it fully ignores things like page_mkclean
> > >
> > > This is how it is today; we allowed people to push such users of
> > > GUP upstream. This is a fact we have to live with: we can not stop
> > > hardware access to the page, and we can not force the hardware to
> > > follow page_mkclean and force a page_mkwrite once writeback ends.
> > > This is the situation we are inheriting (and I am personally not
> > > happy with that).

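For concreteness, the flow described above looks roughly like this on
the driver side.  This is only a sketch: the hmm_device_*() helpers are
made-up stand-ins for whatever programs the hardware, and the final
put_page() is the call this series proposes to replace with
put_user_page():

#include <linux/mm.h>

/* Illustrative only -- not taken from any real driver. */
static int example_pin_range(unsigned long uaddr, struct page **pages,
			     int npages)
{
	int i, pinned;

	/* May fault and run page_mkwrite() if not yet writably mapped */
	pinned = get_user_pages_fast(uaddr, npages, 1 /* write */, pages);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	for (i = 0; i < pinned; i++)
		hmm_device_map_page(pages[i]);	/* made up: program the HW */

	/* From here on the device may write these pages at any time. */
	return pinned;
}

static void example_unpin_range(struct page **pages, int npages)
{
	int i;

	for (i = 0; i < npages; i++) {
		hmm_device_unmap_page(pages[i]);	/* made up: stop the HW */
		set_page_dirty(pages[i]);
		put_page(pages[i]);	/* put_user_page() with this series */
	}
}
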
> > > From my point of view we are left with 2 choices:
> > > [C1] break all drivers that do not abide by page_mkclean and
> > >      page_mkwrite
> > > [C2] mitigate the issue as much as possible
> > >
> > > For [C2] the idea is to keep track of GUP per page so we know
> > > whether to expect the page to be written to at any time. Here is
> > > the event flow:
> > > - the driver GUPs the page and programs the hardware; the page is
> > >   marked as GUPed
> > >   ...
> > > - writeback kicks in on the dirty page, locks the page and
> > >   everything as usual, sees it is GUPed, and informs the block
> > >   layer to use a bounce page
> >
> > No. The solution John, Dan & I have been looking at is to take the
> > dirty page off the LRU while it is pinned by GUP. It will never be
> > found for writeback.
> >
> > That's not the end of the story though. Other parts of the kernel
> > (e.g. msync) also need to be taught to stay away from pages which
> > are pinned by GUP. But the idea is that no page gets written back
> > to storage while it's pinned by GUP. Only when the last GUP ends is
> > the page returned to the list of dirty pages.
> >
> > > - the block layer copies the page to a bounce page, effectively
> > >   creating a snapshot of the real page's contents. This allows
> > >   everything in the block layer that needs stable content (RAID,
> > >   striping, encryption, ...) to work on the bounce page
> > > - once writeback is done the page is not marked clean but stays
> > >   dirty; this effectively disables things like COW for the
> > >   filesystem and other features that expect page_mkwrite between
> > >   writebacks. AFAIK it is believed that this is acceptable
> >
> > So none of this is necessary.
>
> With the solution you are proposing we lose GUP-fast, we have to
> allocate a structure for each page that is under GUP, and the LRU
> changes too. Moreover, by not writing back there is a greater chance
> of data loss.

Why can't you store the hmm_data in a side data structure?  Why does it
have to be in struct page?

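For example, something along these lines (a completely untested sketch;
the names are made up, and it assumes a single global XArray keyed by
pfn, though it could just as well be per-device or per-mm):

#include <linux/mm.h>
#include <linux/xarray.h>

/* Made-up side store for what is currently page->hmm_data. */
static DEFINE_XARRAY(hmm_page_data);

static int hmm_data_set(struct page *page, void *data)
{
	/* xa_store() returns the previous entry or an xa_err() pointer */
	void *old = xa_store(&hmm_page_data, page_to_pfn(page), data,
			     GFP_KERNEL);

	return xa_is_err(old) ? xa_err(old) : 0;
}

static void *hmm_data_get(struct page *page)
{
	return xa_load(&hmm_page_data, page_to_pfn(page));
}

static void *hmm_data_clear(struct page *page)
{
	return xa_erase(&hmm_page_data, page_to_pfn(page));
}

The lookup costs a little more than dereferencing a struct page field,
but only HMM users pay it, and struct page stays untouched.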