Received: by 2002:ac0:bc90:0:0:0:0:0 with SMTP id a16csp1021img; Tue, 19 Mar 2019 13:03:25 -0700 (PDT) X-Google-Smtp-Source: APXvYqyOYlUS8hW8MyfybyJCugQYVBVFmge5bjtaoEv4rHmBiA/zJDoW1gq3KSai3F9mllhgOLvu X-Received: by 2002:a17:902:b404:: with SMTP id x4mr4382106plr.232.1553025805897; Tue, 19 Mar 2019 13:03:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553025805; cv=none; d=google.com; s=arc-20160816; b=iD1j81rW7K8SnV1/WkOk4OcAANk1SihnmAWh6s5uT6Qrrg98dNiukrip5KywqY2OgP +nOXDqDxjNB1OzkNj+k+x4D8O8iAs3zdGtkqGbwW8fbS0kWeaE9tkXToM6yV1FpqSG4S LH0ajOmiXk9uJKHnHOs33/XM13zJf6HdfsSeTyoy9vTte5vOxdH1SVPspIad8By78b6f 6zJdNyxXyhqHM3TmJEN/tx77JxTwMYU0WKvhYZagTsnGghhZ2k7mJ3Hi0IIsOso6YE5r jp7906kraXiLbk+kR98a6aio2ot7AUZjoHCAO20/lAb7YeZ3yGtMtRX6fNeydofnMcbB oXFQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:dkim-signature:content-transfer-encoding :content-language:in-reply-to:mime-version:user-agent:date :message-id:from:references:cc:to:subject; bh=HSpyNkpOUN54LoSVErAzdS1E1kZB+HgZS0Abyi+9vjU=; b=FLofJCRSo93x1uV0A97R/9E3s+0p7QXAYDhAojwVbzaJeGUnY841ZEnT/wiQ70rlgK olebVMND0lq1Xidz7PxUD1uqKOuxjri2fh7cUzIYdHTumlPG0j2J16uNLq0p3ZPw1Bbx F70cmr0hyXSjAO3Tzwc8c+PiyLydyVe253BoLZSUHdmQaUUFKmewOIL8z6rA51QeXwjL S22/uw/1n6s2FPPbC+q/0JyuC5S6vm940+OG9E0grz++JPinURquhqyk56tG35rE0B89 x/qikohIH4F/sERqHnM0Moy1RwrEt2BfPLntpzJV+AGYHJAlnbhyyFTnIJVfHav5X+WC DcyA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@nvidia.com header.s=n1 header.b=oN2BBuMk; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=nvidia.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id p21si12554129plo.170.2019.03.19.13.03.09; Tue, 19 Mar 2019 13:03:25 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@nvidia.com header.s=n1 header.b=oN2BBuMk; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=nvidia.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727394AbfCSUBE (ORCPT + 99 others); Tue, 19 Mar 2019 16:01:04 -0400 Received: from hqemgate15.nvidia.com ([216.228.121.64]:14983 "EHLO hqemgate15.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726939AbfCSUBE (ORCPT ); Tue, 19 Mar 2019 16:01:04 -0400 Received: from hqpgpgate102.nvidia.com (Not Verified[216.228.121.13]) by hqemgate15.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 19 Mar 2019 13:00:47 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate102.nvidia.com (PGP Universal service); Tue, 19 Mar 2019 13:01:02 -0700 X-PGP-Universal: processed; by hqpgpgate102.nvidia.com on Tue, 19 Mar 2019 13:01:02 -0700 Received: from [10.110.48.28] (172.20.13.39) by HQMAIL101.nvidia.com (172.20.187.10) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Tue, 19 Mar 2019 20:01:01 +0000 Subject: Re: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder versions To: "Kirill A. Shutemov" , Jerome Glisse CC: , Andrew Morton , , Al Viro , Christian Benvenuti , Christoph Hellwig , Christopher Lameter , Dan Williams , Dave Chinner , Dennis Dalessandro , Doug Ledford , Ira Weiny , Jan Kara , Jason Gunthorpe , Matthew Wilcox , Michal Hocko , Mike Rapoport , Mike Marciniszyn , Ralph Campbell , Tom Talpey , LKML , References: <20190308213633.28978-1-jhubbard@nvidia.com> <20190308213633.28978-2-jhubbard@nvidia.com> <20190319120417.yzormwjhaeuu7jpp@kshutemo-mobl1> <20190319134724.GB3437@redhat.com> <20190319140623.tblqyb4dcjabjn3o@kshutemo-mobl1> From: John Hubbard X-Nvconfidentiality: public Message-ID: <6aa32cca-d97a-a3e5-b998-c67d0a6cc52a@nvidia.com> Date: Tue, 19 Mar 2019 13:01:01 -0700 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.5.3 MIME-Version: 1.0 In-Reply-To: <20190319140623.tblqyb4dcjabjn3o@kshutemo-mobl1> X-Originating-IP: [172.20.13.39] X-ClientProxiedBy: HQMAIL106.nvidia.com (172.18.146.12) To HQMAIL101.nvidia.com (172.20.187.10) Content-Type: text/plain; charset="utf-8" Content-Language: en-US-large Content-Transfer-Encoding: 7bit DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1553025647; bh=HSpyNkpOUN54LoSVErAzdS1E1kZB+HgZS0Abyi+9vjU=; h=X-PGP-Universal:Subject:To:CC:References:From:X-Nvconfidentiality: Message-ID:Date:User-Agent:MIME-Version:In-Reply-To: X-Originating-IP:X-ClientProxiedBy:Content-Type:Content-Language: Content-Transfer-Encoding; b=oN2BBuMkNCIXH77NHOjjmKZpQZ9GitLjMPLc2DOfX/7d7jvPcisxSIGFy4WGrRix3 M3o1wPgnNpS7wLfDpqv8vDHGdBJ3Jgn1qYOZS8VUalLX+wIMtxCPpfyltItaQMwleQ ptCwtHyq3H4alTuMTew3Bm37SkGH4we0MK4ZFg1q8mY4s1ytDbJUzflFw/PvaCvQwq +B7R0o7qjiUzOFIzZZudvjtnmC/Slf7qFeUYuCzuzGH7oTU8Xg2R7E4cNe9H4Yfb1R MfGwkr7LIvwWCXBzRD2lXqk+JnXl5TLWnKHpaDRyxW9PQF+tUQJXrDQ44VAS1NWP5B 8cEOAcfT606Ag== Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 3/19/19 7:06 AM, Kirill A. Shutemov wrote: > On Tue, Mar 19, 2019 at 09:47:24AM -0400, Jerome Glisse wrote: >> On Tue, Mar 19, 2019 at 03:04:17PM +0300, Kirill A. Shutemov wrote: >>> On Fri, Mar 08, 2019 at 01:36:33PM -0800, john.hubbard@gmail.com wrote: >>>> From: John Hubbard >> >> [...] >> >>>> diff --git a/mm/gup.c b/mm/gup.c >>>> index f84e22685aaa..37085b8163b1 100644 >>>> --- a/mm/gup.c >>>> +++ b/mm/gup.c >>>> @@ -28,6 +28,88 @@ struct follow_page_context { >>>> unsigned int page_mask; >>>> }; >>>> >>>> +typedef int (*set_dirty_func_t)(struct page *page); >>>> + >>>> +static void __put_user_pages_dirty(struct page **pages, >>>> + unsigned long npages, >>>> + set_dirty_func_t sdf) >>>> +{ >>>> + unsigned long index; >>>> + >>>> + for (index = 0; index < npages; index++) { >>>> + struct page *page = compound_head(pages[index]); >>>> + >>>> + if (!PageDirty(page)) >>>> + sdf(page); >>> >>> How is this safe? What prevents the page to be cleared under you? >>> >>> If it's safe to race clear_page_dirty*() it has to be stated explicitly >>> with a reason why. It's not very clear to me as it is. >> >> The PageDirty() optimization above is fine to race with clear the >> page flag as it means it is racing after a page_mkclean() and the >> GUP user is done with the page so page is about to be write back >> ie if (!PageDirty(page)) see the page as dirty and skip the sdf() >> call while a split second after TestClearPageDirty() happens then >> it means the racing clear is about to write back the page so all >> is fine (the page was dirty and it is being clear for write back). >> >> If it does call the sdf() while racing with write back then we >> just redirtied the page just like clear_page_dirty_for_io() would >> do if page_mkclean() failed so nothing harmful will come of that >> neither. Page stays dirty despite write back it just means that >> the page might be write back twice in a row. > > Fair enough. Should we get it into a comment here? How's this read to you? I reworded and slightly expanded Jerome's description: diff --git a/mm/gup.c b/mm/gup.c index d1df7b8ba973..86397ae23922 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -61,6 +61,24 @@ static void __put_user_pages_dirty(struct page **pages, for (index = 0; index < npages; index++) { struct page *page = compound_head(pages[index]); + /* + * Checking PageDirty at this point may race with + * clear_page_dirty_for_io(), but that's OK. Two key cases: + * + * 1) This code sees the page as already dirty, so it skips + * the call to sdf(). That could happen because + * clear_page_dirty_for_io() called page_mkclean(), + * followed by set_page_dirty(). However, now the page is + * going to get written back, which meets the original + * intention of setting it dirty, so all is well: + * clear_page_dirty_for_io() goes on to call + * TestClearPageDirty(), and write the page back. + * + * 2) This code sees the page as clean, so it calls sdf(). + * The page stays dirty, despite being written back, so it + * gets written back again in the next writeback cycle. + * This is harmless. + */ if (!PageDirty(page)) sdf(page); > >>>> +void put_user_pages(struct page **pages, unsigned long npages) >>>> +{ >>>> + unsigned long index; >>>> + >>>> + for (index = 0; index < npages; index++) >>>> + put_user_page(pages[index]); >>> >>> I believe there's an room for improvement for compound pages. >>> >>> If there's multiple consequential pages in the array that belong to the >>> same compound page we can get away with a single atomic operation to >>> handle them all. >> >> Yes maybe just add a comment with that for now and leave this kind of >> optimization to latter ? > > Sounds good to me. > Here's a comment for that: @@ -127,6 +145,11 @@ void put_user_pages(struct page **pages, unsigned long npages) { unsigned long index; + /* + * TODO: this can be optimized for huge pages: if a series of pages is + * physically contiguous and part of the same compound page, then a + * single operation to the head page should suffice. + */ for (index = 0; index < npages; index++) put_user_page(pages[index]); } thanks, -- John Hubbard NVIDIA