Subject: Re: [PATCH 2/2] mm: set PG_dma_pinned on get_user_pages*()
From: John Hubbard
To: Jan Kara
CC: Matthew Wilcox, Dan Williams, Christoph Hellwig, Jason Gunthorpe,
    John Hubbard, Michal Hocko, Christopher Lameter, Linux MM, LKML,
    linux-rdma
Date: Wed, 20 Jun 2018 15:55:41 -0700
Message-ID: <151edbf3-66ff-df0c-c1cc-5998de50111e@nvidia.com>
In-Reply-To: <20180620120824.bghoklv7qu2z5wgy@quack2.suse.cz>
References: <20180618081258.GB16991@lst.de>
 <3898ef6b-2fa0-e852-a9ac-d904b47320d5@nvidia.com>
 <0e6053b3-b78c-c8be-4fab-e8555810c732@nvidia.com>
 <20180619082949.wzoe42wpxsahuitu@quack2.suse.cz>
 <20180619090255.GA25522@bombadil.infradead.org>
 <20180619104142.lpilc6esz7w3a54i@quack2.suse.cz>
 <70001987-3938-d33e-11e0-de5b19ca3bdf@nvidia.com>
 <20180620120824.bghoklv7qu2z5wgy@quack2.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/20/2018 05:08 AM, Jan Kara wrote:
> On Tue 19-06-18 11:11:48, John Hubbard wrote:
>> On 06/19/2018 03:41 AM, Jan Kara wrote:
>>> On Tue 19-06-18 02:02:55, Matthew Wilcox wrote:
>>>> On Tue, Jun 19, 2018 at 10:29:49AM +0200, Jan Kara wrote:
[...]
>>> I'm also still pondering the idea of inserting a "virtual" VMA into the
>>> vma interval tree in the inode - as the GUP references are IMHO closest
>>> to an mlocked mapping - and that would achieve all the functionality we
>>> need as well. I just didn't have time to experiment with it.
>>
>> How would this work? Would it have the same virtual address range? And how
>> does it avoid the problems we've been discussing? Sorry to be a bit slow
>> here. :)
>
> The range covered by the virtual mapping would be the one sent to
> get_user_pages() to get page references. And then we would need to teach
> page_mkclean() to check for these virtual VMAs and block / skip / report
> (different situations would need different behavior) such a page. But this
> second part is the same regardless of how we identify a page that is pinned
> by get_user_pages().

OK. That neatly avoids the need for a new page flag, I think. But of course
it is somewhat more extensive to implement. Sounds like something to keep in
mind, in case it has better tradeoffs than the direction I'm heading so far.
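Just to make sure I'm picturing the same thing: here is a rough, untested
sketch of what I imagine the lookup side could look like. Everything below is
my guess at the details: VM_PINNED is just a name I made up for whatever flag
the dummy VMA would carry, and mapping_range_is_pinned() is a hypothetical
helper; presumably the real check would be folded into page_mkclean()'s rmap
walk rather than living on its own.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: report whether any "virtual" (GUP-pinned) VMA in
 * the file's i_mmap interval tree covers the given page offset range.
 */
static bool mapping_range_is_pinned(struct address_space *mapping,
				    pgoff_t first, pgoff_t last)
{
	struct vm_area_struct *vma;
	bool pinned = false;

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, first, last) {
		/* VM_PINNED: placeholder flag marking the dummy GUP VMA */
		if (vma->vm_flags & VM_PINNED) {
			pinned = true;
			break;
		}
	}
	i_mmap_unlock_read(mapping);

	return pinned;
}

If that's roughly right, then page_mkclean() (or whatever walks the rmap)
could consult something like this and decide whether to block, skip, or just
report the page, as you describe.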
>>> And then there's the aspect that both these approaches are a bit too
>>> heavyweight for some get_user_pages_fast() users (e.g. direct IO) - Al
>>> Viro had an idea to use the page lock for that path, but e.g.
>>> fs/direct-io.c would have problems due to lock ordering constraints (the
>>> filesystem ->get_block would suddenly get called with the page lock
>>> held). But we can probably leave performance optimizations for phase two.
>>
>> So I assume that phase one would be to apply this approach only to
>> get_user_pages_longterm(). (Please let me know if that's wrong.)
>
> No, I meant phase 1 would be to apply this to all get_user_pages() flavors.
> Then phase 2 is to try to find a way to make get_user_pages_fast() fast
> again. And then in parallel to that, we also need to find a way for
> get_user_pages_longterm() to signal to the user that pinned pages must be
> released soon. Because after phase 1, pinned pages will block page
> writeback, and such a system won't oops but will become unusable sooner
> rather than later. And again, this problem needs to be solved regardless of
> the mechanism for identifying pinned pages.

OK, thanks, that does help. I had the priorities of these get_user_pages*()
changes all scrambled, but between your and Dan's explanations, I finally
understand the preferred ordering of this work.
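For my own notes, then, phase 1 (what this patch is attempting) boils down to
something like the sketch below. The names and call site are illustrative
only; SetPageDmaPinned() exists only if the PG_dma_pinned flag from this
series is applied (assuming the usual PAGEFLAG() naming), and the helper is
not the literal patch.

#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Illustrative sketch: mark pages returned by get_user_pages*() so that
 * later writeback / page_mkclean() can tell they are pinned for DMA.
 * SetPageDmaPinned() would come from the proposed PG_dma_pinned page
 * flag, which is not in mainline.
 */
static void mark_pages_dma_pinned(struct page **pages, long nr)
{
	long i;

	for (i = 0; i < nr; i++)
		SetPageDmaPinned(pages[i]);
}

A caller would then do something along these lines:

	ret = get_user_pages(start, nr_pages, gup_flags, pages, vmas);
	if (ret > 0)
		mark_pages_dma_pinned(pages, ret);

with the marking eventually applying to all get_user_pages() flavors, per
your phase-1 description above.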