Subject: Re: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder versions
From: Tom Talpey
To: Jerome Glisse
Cc: Ira Weiny, Jan Kara, "Kirill A. Shutemov", john.hubbard@gmail.com,
    Andrew Morton, linux-mm@kvack.org, Al Viro, Christian Benvenuti,
    Christoph Hellwig, Christopher Lameter, Dan Williams, Dave Chinner,
    Dennis Dalessandro, Doug Ledford, Jason Gunthorpe, Matthew Wilcox,
    Michal Hocko, Mike Rapoport, Mike Marciniszyn, Ralph Campbell, LKML,
    linux-fsdevel@vger.kernel.org, John Hubbard, Andrea Arcangeli
Date: Tue, 19 Mar 2019 15:55:23 -0500
Message-ID: <0bdce970-1ec4-6bda-b82a-015fa68535a3@talpey.com>
In-Reply-To: <20190319204512.GB3096@redhat.com>
References: <20190308213633.28978-1-jhubbard@nvidia.com>
 <20190308213633.28978-2-jhubbard@nvidia.com>
 <20190319120417.yzormwjhaeuu7jpp@kshutemo-mobl1>
 <20190319134724.GB3437@redhat.com>
 <20190319141416.GA3879@redhat.com>
 <20190319142918.6a5vom55aeojapjp@kshutemo-mobl1>
 <20190319153644.GB26099@quack2.suse.cz>
 <20190319090322.GE7485@iweiny-DESK2.sc.intel.com>
 <20190319204512.GB3096@redhat.com>
On 3/19/2019 3:45 PM, Jerome Glisse wrote:
> On Tue, Mar 19, 2019 at 03:43:44PM -0500, Tom Talpey wrote:
>> On 3/19/2019 4:03 AM, Ira Weiny wrote:
>>> On Tue, Mar 19, 2019 at 04:36:44PM +0100, Jan Kara wrote:
>>>> On Tue 19-03-19 17:29:18, Kirill A. Shutemov wrote:
>>>>> On Tue, Mar 19, 2019 at 10:14:16AM -0400, Jerome Glisse wrote:
>>>>>> On Tue, Mar 19, 2019 at 09:47:24AM -0400, Jerome Glisse wrote:
>>>>>>> On Tue, Mar 19, 2019 at 03:04:17PM +0300, Kirill A. Shutemov wrote:
>>>>>>>> On Fri, Mar 08, 2019 at 01:36:33PM -0800, john.hubbard@gmail.com wrote:
>>>>>>>>> From: John Hubbard
>>>>>>>
>>>>>>> [...]
>>>>>>>
>>>>>>>>> diff --git a/mm/gup.c b/mm/gup.c
>>>>>>>>> index f84e22685aaa..37085b8163b1 100644
>>>>>>>>> --- a/mm/gup.c
>>>>>>>>> +++ b/mm/gup.c
>>>>>>>>> @@ -28,6 +28,88 @@ struct follow_page_context {
>>>>>>>>>  	unsigned int page_mask;
>>>>>>>>>  };
>>>>>>>>> +
>>>>>>>>> +typedef int (*set_dirty_func_t)(struct page *page);
>>>>>>>>> +
>>>>>>>>> +static void __put_user_pages_dirty(struct page **pages,
>>>>>>>>> +				   unsigned long npages,
>>>>>>>>> +				   set_dirty_func_t sdf)
>>>>>>>>> +{
>>>>>>>>> +	unsigned long index;
>>>>>>>>> +
>>>>>>>>> +	for (index = 0; index < npages; index++) {
>>>>>>>>> +		struct page *page = compound_head(pages[index]);
>>>>>>>>> +
>>>>>>>>> +		if (!PageDirty(page))
>>>>>>>>> +			sdf(page);
>>>>>>>>
>>>>>>>> How is this safe? What prevents the page from being cleared under you?
>>>>>>>>
>>>>>>>> If it's safe to race with clear_page_dirty*(), that has to be stated
>>>>>>>> explicitly, with a reason why. It's not very clear to me as it is.
>>>>>>>
>>>>>>> The PageDirty() optimization above is fine to race with clearing of
>>>>>>> the page flag, because that means it is racing after a page_mkclean()
>>>>>>> and the GUP user is done with the page, so the page is about to be
>>>>>>> written back. That is, if the !PageDirty(page) check sees the page as
>>>>>>> dirty and skips the sdf() call, while a split second later
>>>>>>> TestClearPageDirty() happens, then the racing clear is about to write
>>>>>>> the page back, so all is fine (the page was dirty and it is being
>>>>>>> cleared for writeback).
>>>>>>>
>>>>>>> If it does call sdf() while racing with writeback, then we have just
>>>>>>> redirtied the page, exactly as clear_page_dirty_for_io() would have
>>>>>>> done had page_mkclean() failed, so nothing harmful comes of that
>>>>>>> either. The page stays dirty despite the writeback; it just means the
>>>>>>> page might be written back twice in a row.
>>>>>>
>>>>>> Forgot to mention one thing: we had a discussion with Andrea and Jan
>>>>>> about set_page_dirty(), and Andrea had the good idea of maybe doing
>>>>>> the set_page_dirty() at GUP time (when GUPing with write), not when
>>>>>> the GUP user calls put_page(). We can do that by setting the dirty
>>>>>> bit in the pte, for instance. There are a few bonuses to doing things
>>>>>> that way:
>>>>>>   - it amortizes the cost of calling set_page_dirty() (ie one call
>>>>>>     for GUP and page_mkclean())
>>>>>>   - it is always safe to do so at GUP time (ie the pte has write
>>>>>>     permission and thus the page is in the correct state)
>>>>>>   - it is safe from truncate races
>>>>>>   - there is no need to ever lock the page
>>>>>>
>>>>>> Extra bonus from my point of view: it simplifies things for my
>>>>>> generic page protection patchset (KSM for file-backed pages).
>>>>>>
>>>>>> So maybe we should explore that? It would also be a lot less code.
>>>>>
>>>>> Yes, please. It sounds more sensible to me to dirty the page on get,
>>>>> not on put.
>>>>
>>>> I fully agree this is a desirable final state of affairs.
>>>
>>> I'm glad to see this presented, because it has crossed my mind more
>>> than once that effectively a GUP-pinned page should be considered
>>> "dirty" at all times until the pin is removed. This is especially true
>>> in the RDMA case.
>>
>> But what if the RDMA registration is read-only? That's not uncommon,
>> and marking dirty unconditionally would add needless overhead to such
>> pages.
>
> Yes, and this is only when FOLL_WRITE is set, ie when you are doing GUP
> and asking for write. Doing GUP and asking for read is always safe.

Aha, ok, great. I guess it does introduce something for callers to be
aware of, if they GUP very large regions. I suppose if they're
sufficiently aware of the situation, e.g. pnfs LAYOUT_COMMIT
notifications, they could walk the lists and reset page_dirty for
untouched pages before releasing. That's their issue, though, and agreed
it's safest for the GUP layer to mark.

Tom.