Date: Mon, 17 Dec 2018 09:56:53 +0100
From: Jan Kara
To: Dan Williams
Cc: John Hubbard, david, Jérôme Glisse, Jan Kara, Matthew Wilcox,
	John Hubbard, Andrew Morton, Linux MM, tom@talpey.com, Al Viro,
	benve@cisco.com, Christoph Hellwig, Christopher Lameter,
	"Dalessandro, Dennis", Doug Ledford, Jason Gunthorpe, Michal Hocko,
	Mike Marciniszyn, rcampbell@nvidia.com, Linux Kernel Mailing List,
	linux-fsdevel, Dave Hansen
Subject: Re: [PATCH 1/2] mm: introduce put_user_page*(), placeholder versions
Message-ID: <20181217085653.GB28270@quack2.suse.cz>
References: <20181208022445.GA7024@redhat.com>
 <20181210102846.GC29289@quack2.suse.cz>
 <20181212150319.GA3432@redhat.com>
 <20181212214641.GB29416@dastard>
 <20181212215931.GG5037@redhat.com>
 <20181213005119.GD29416@dastard>
 <05a68829-6e6d-b766-11b4-99e1ba4bc87b@nvidia.com>
 <01cf4e0c-b2d6-225a-3ee9-ef0f7e53684d@nvidia.com>

On Fri 14-12-18 11:38:59, Dan Williams wrote:
> On Thu, Dec 13, 2018 at 10:11 PM John Hubbard wrote:
> >
> > On 12/13/18 9:21 PM, Dan Williams wrote:
> > > On Thu, Dec 13, 2018 at 7:53 PM John Hubbard wrote:
> > >>
> > >> On 12/12/18 4:51 PM, Dave Chinner wrote:
> > >>> On Wed, Dec 12, 2018 at 04:59:31PM -0500, Jerome Glisse wrote:
> > >>>> On Thu, Dec 13, 2018 at 08:46:41AM +1100, Dave Chinner wrote:
> > >>>>> On Wed, Dec 12, 2018 at 10:03:20AM -0500, Jerome Glisse wrote:
> > >>>>>> On Mon, Dec 10, 2018 at 11:28:46AM +0100, Jan Kara wrote:
> > >>>>>>> On Fri 07-12-18 21:24:46, Jerome Glisse wrote:
> > >>>>>>> So this approach doesn't look like a win to me over using counter in struct
> > >>>>>>> page and I'd rather try looking into squeezing HMM public page usage of
> > >>>>>>> struct page so that we can fit that gup counter there as well. I know that
> > >>>>>>> it may be easier said than done...
> > >>>>>>
> > >>
> > >> Agreed. After all the discussion this week, I'm thinking that the original idea
> > >> of a per-struct-page counter is better. Fortunately, we can do the moral equivalent
> > >> of that, unless I'm overlooking something: Jerome had another proposal that he
> > >> described, off-list, for doing that counting, and his idea avoids the problem of
> > >> finding space in struct page. (And in fact, when I responded yesterday, I initially
> > >> thought that's where he was going with this.)
> > >>
> > >> So how about this hybrid solution:
> > >>
> > >> 1. Stay with the basic RFC approach of using a per-page counter, but actually
> > >> store the counter(s) in the mappings instead of the struct page. We can use
> > >> !PageAnon and page_mapping to look up all the mappings, stash the dma_pinned_count
> > >> there. So the total pinned count is scattered across mappings. Probably still need
> > >> a PageDmaPinned bit.
> > >
> > > How do you safely look at page->mapping from the get_user_pages_fast()
> > > path? You'll be racing invalidation disconnecting the page from the
> > > mapping.
> > >
> >
> > I don't have an answer for that, so maybe the page->mapping idea is dead already.
> >
> > So in that case, there is still one more way to do all of this, which is to
> > combine ZONE_DEVICE, HMM, and gup/dma information in a per-page struct, and get
> > there via basically page->private, more or less like this:
>
> If we're going to allocate something new out-of-line then maybe we
> should go even further to allow for a page "proxy" object to front a
> real struct page. This idea arose from Dave Hansen as I explained to
> him the dax-reflink problem, and dovetails with Dave Chinner's
> suggestion earlier in this thread for dax-reflink.
>
> Have get_user_pages() allocate a proxy object that gets passed around
> to drivers. Something like a struct page pointer with bit 0 set. This
> would add a conditional branch and pointer chase to many page
> operations, like page_to_pfn(), I thought something like it would be
> unacceptable a few years ago, but then HMM went and added similar
> overhead to put_page() and nobody balked.
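
[ For illustration only: a rough sketch of what such a bit-0-tagged proxy
  could look like. struct page_proxy and the helper names below are made
  up for this sketch and are not from any posted patch. ]

#include <linux/mm.h>
#include <linux/mm_types.h>

struct page_proxy {
        struct page *real_page;         /* the canonical struct page */
        struct address_space *mapping;  /* overrides page->mapping */
        pgoff_t index;                  /* overrides page->index */
        atomic_t pin_count;             /* per-GUP pin count lives here */
};

static inline bool is_page_proxy(const struct page *page)
{
        /* Proxies are handed out as a struct page pointer with bit 0 set. */
        return (unsigned long)page & 1UL;
}

static inline struct page *proxy_to_page(struct page *page)
{
        if (is_page_proxy(page)) {
                struct page_proxy *proxy;

                proxy = (struct page_proxy *)((unsigned long)page & ~1UL);
                return proxy->real_page;
        }
        return page;
}

/*
 * Helpers like page_to_pfn() would then pay the extra test-and-branch
 * plus pointer chase mentioned above:
 */
static inline unsigned long proxy_page_to_pfn(struct page *page)
{
        return page_to_pfn(proxy_to_page(page));
}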
>
> This has the additional benefit of catching cases that might be doing
> a get_page() on a get_user_pages() result and should instead switch to
> a "ref_user_page()" (opposite of put_user_page()) as the API to take
> additional references on a get_user_pages() result.
>
> page->index and page->mapping could be overridden by similar
> attributes in the proxy, and allow an N:1 relationship of proxy
> instances to actual pages. Filesystems could generate dynamic proxies
> as well.
>
> The auxiliary information (dev_pagemap, hmm_data, etc...) moves to the
> proxy and stops polluting the base struct page which remains the
> canonical location for dirty-tracking and dma operations.
>
> The difficulties are reconciling the source of the proxies as both
> get_user_pages() and filesystem may want to be the source of the
> allocation. In the get_user_pages_fast() path we may not be able to
> ask the filesystem for the proxy, at least not without destroying the
> performance expectations of get_user_pages_fast().

What you describe here sounds almost like the page_ext mechanism we
already have? Or do you really aim at a per-pin allocated structure?

								Honza
-- 
Jan Kara
SUSE Labs, CR
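
[ For reference, a rough sketch of how a GUP pin counter could sit behind
  the existing page_ext mechanism mentioned above. The names gup_pin_ext,
  gup_pin_ops and page_dma_pinned_count are invented for this sketch, and
  a real version would also need an entry in the page_ext_ops[] array in
  mm/page_ext.c. ]

#include <linux/page_ext.h>
#include <linux/atomic.h>

struct gup_pin_ext {
        atomic_t dma_pinned_count;      /* outstanding GUP pins on the page */
};

static bool need_gup_pin(void)
{
        /* Unconditionally reserve the extra per-page space in this sketch. */
        return true;
}

struct page_ext_operations gup_pin_ops = {
        .size = sizeof(struct gup_pin_ext),
        .need = need_gup_pin,
};

static inline atomic_t *page_dma_pinned_count(struct page *page)
{
        struct page_ext *page_ext = lookup_page_ext(page);

        if (!page_ext)
                return NULL;
        return &((struct gup_pin_ext *)((void *)page_ext +
                                        gup_pin_ops.offset))->dma_pinned_count;
}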