Date: Thu, 17 Jan 2019 10:17:59 -0500
From: Jerome Glisse
To: Jan Kara
Cc: John Hubbard, Matthew Wilcox, Dave Chinner, Dan Williams, John Hubbard,
 Andrew Morton, Linux MM, tom@talpey.com, Al Viro, benve@cisco.com,
 Christoph Hellwig, Christopher Lameter, "Dalessandro, Dennis",
 Doug Ledford, Jason Gunthorpe, Michal Hocko, mike.marciniszyn@intel.com,
 rcampbell@nvidia.com, Linux Kernel Mailing List, linux-fsdevel
Subject: Re: [PATCH 1/2] mm: introduce put_user_page*(), placeholder versions
Message-ID: <20190117151759.GA3550@redhat.com>
References: <20190112020228.GA5059@redhat.com>
 <294bdcfa-5bf9-9c09-9d43-875e8375e264@nvidia.com>
 <20190112024625.GB5059@redhat.com> <20190114145447.GJ13316@quack2.suse.cz>
 <20190114172124.GA3702@redhat.com> <20190115080759.GC29524@quack2.suse.cz>
 <20190116113819.GD26069@quack2.suse.cz> <20190116130813.GA3617@redhat.com>
 <20190117093047.GB9378@quack2.suse.cz>
In-Reply-To: <20190117093047.GB9378@quack2.suse.cz>

On Thu, Jan 17, 2019 at 10:30:47AM +0100, Jan Kara wrote:
> On Wed 16-01-19 08:08:14, Jerome Glisse wrote:
> > On Wed, Jan 16, 2019 at 12:38:19PM +0100, Jan Kara wrote:
> > > On Tue 15-01-19 09:07:59, Jan Kara wrote:
> > > > Agreed. So with page lock it would actually look like:
> > > >
> > > > get_page_pin()
> > > > 	lock_page(page);
> > > > 	wait_for_stable_page();
> > > > 	atomic_add(&page->_refcount, PAGE_PIN_BIAS);
> > > > 	unlock_page(page);
> > > >
> > > > And if we perform the page_pinned() check under the page lock, then if
> > > > page_pinned() returned false, we are sure the page is not and will not
> > > > be pinned until we drop the page lock (and also until page writeback
> > > > is completed, if needed).
> > >
> > > After some more thought, why do we even need wait_for_stable_page() and
> > > lock_page() in get_page_pin()?
> > >
> > > During writepage, page_mkclean() will write protect all page tables. So
> > > there can be no new writeable GUP pins until we unlock the page, as all
> > > such GUPs will have to first go through fault and the ->page_mkwrite()
> > > handler. And that will wait on the page lock and do
> > > wait_for_stable_page() for us anyway.
> > > Am I just confused?
> >
> > Yeah, with the page lock it should synchronize on the pte, but you still
> > need to check for writeback. IIRC the page is unlocked after the file
> > system has queued up the write, and thus the page can be unlocked with
> > writeback pending (and PageWriteback() == true), and I am not sure that
> > in that state we can safely let anyone write to that page. I am assuming
> > that in some cases the block device also expects stable page content
> > (RAID stuff).
> >
> > So the PageWriteback() test is not only for racing page_mkclean()/
> > test_set_page_writeback() and GUP, but also for pending writeback.
>
> But this is prevented by wait_for_stable_page() that is already present in
> ->page_mkwrite() handlers. Look:
>
> ->writepage()
>   /* Page is locked here */
>   clear_page_dirty_for_io(page)
>     page_mkclean(page)
>       -> page tables get writeprotected
>   /* The following line will be added by our patches */
>   if (page_pinned(page)) -> bounce
>   TestClearPageDirty(page)
>   set_page_writeback(page);
>   unlock_page(page);
>   ...submit_io...
>
> IRQ
>   - IO completion
>   end_page_writeback()
>
> So if GUP happens before page_mkclean() writeprotects the corresponding
> PTE (and these two actions are synchronized on the PTE lock),
> page_pinned() will see the increment and report the page as pinned.
>
> If GUP happens after page_mkclean() writeprotects the corresponding PTE,
> it will fault:
>   handle_mm_fault()
>     do_wp_page()
>       wp_page_shared()
>         do_page_mkwrite()
>           ->page_mkwrite() - that is block_page_mkwrite() or
>             iomap_page_mkwrite() or whatever the filesystem provides
>             lock_page(page)
>             ... prepare page ...
>             wait_for_stable_page(page) -> this blocks until IO completes
>               if someone cares about pages not being modified while
>               under IO.
The case I am worried about is when GUP sees a pte with the write flag set
but has not locked the page yet (GUP gets the pte first, goes from pte to
page, then locks the page). It then locks the page, but the page lock can
make it wait behind a racing page_mkclean()/writeback that has not yet
write protected the pte GUP just read. So by the time GUP has the page
locked, the pte it read might no longer have the write flag set. Hence why
you also need to check for writeback after taking the page lock.

Alternatively you could recheck the pte after a successful try_lock on the
page. You can optimize by also checking before trying to lock the page, to
bail out early and avoid an unnecessary lock wait. But that is an
optimization; for correctness you need to check after taking the page lock.

> > > That actually touches on another question I wanted to get opinions
> > > on. GUP can be for read and GUP can be for write (that is one of the
> > > GUP flags). Filesystems with a page cache generally have issues only
> > > with GUP for write, as it can currently corrupt data, unexpectedly
> > > dirty pages, etc. DAX & memory hotplug have issues with both (DAX
> > > cannot truncate a page pinned in any way, memory hotplug will just
> > > loop in the kernel until the page gets unpinned). So we probably want
> > > to track both types of GUP pins, and page-cache based filesystems
> > > will take the hit even if they don't have to for read-pins?
> >
> > Yes, the distinction between read and write would be nice. With the map
> > count solution you can only increment the mapcount for GUP(write=true).
>
> Well, but if we track only pins for write, DAX or memory hotplug will not
> be able to use this mechanism. So at this point I'm more leaning towards
> tracking all pins. It will cost some performance needlessly for read pins
> and for filesystems using the page cache when bouncing such pages, but
> it's not like writeback of pinned pages is some performance critical
> operation...
> But I wanted to spell this out so that people are aware of it.

No, they would know for a regular pin; it is just like the page migrate
code. If the refcount (minus the extra references held by the code doing
the check) is greater than the mapcount, then you know someone has an
extra reference on your page. Those extra references are either some
regular fs event taking place (some code doing find_get_page() for
instance) or a GUP reference (whether it is a write pin or a read pin).

So the only issue is false positives, i.e. thinking the page is under GUP
while it just has an elevated refcount because of some other regular fs/mm
event. To minimize false positives for a more accurate pin test (write or
read) you can enforce a few things:
  1 - first lock the page
  2 - then freeze the page with the expected count

With that it should minimize false positives. In the end, even with the
bias scheme you can also have false positives.

Cheers,
Jérôme