Date: Tue, 19 Mar 2019 09:47:24 -0400
From: Jerome Glisse
To: "Kirill A. Shutemov"
Shutemov" Cc: john.hubbard@gmail.com, Andrew Morton , linux-mm@kvack.org, Al Viro , Christian Benvenuti , Christoph Hellwig , Christopher Lameter , Dan Williams , Dave Chinner , Dennis Dalessandro , Doug Ledford , Ira Weiny , Jan Kara , Jason Gunthorpe , Matthew Wilcox , Michal Hocko , Mike Rapoport , Mike Marciniszyn , Ralph Campbell , Tom Talpey , LKML , linux-fsdevel@vger.kernel.org, John Hubbard Subject: Re: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder versions Message-ID: <20190319134724.GB3437@redhat.com> References: <20190308213633.28978-1-jhubbard@nvidia.com> <20190308213633.28978-2-jhubbard@nvidia.com> <20190319120417.yzormwjhaeuu7jpp@kshutemo-mobl1> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190319120417.yzormwjhaeuu7jpp@kshutemo-mobl1> User-Agent: Mutt/1.10.0 (2018-05-17) X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.29]); Tue, 19 Mar 2019 13:47:29 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Mar 19, 2019 at 03:04:17PM +0300, Kirill A. Shutemov wrote: > On Fri, Mar 08, 2019 at 01:36:33PM -0800, john.hubbard@gmail.com wrote: > > From: John Hubbard [...] > > diff --git a/mm/gup.c b/mm/gup.c > > index f84e22685aaa..37085b8163b1 100644 > > --- a/mm/gup.c > > +++ b/mm/gup.c > > @@ -28,6 +28,88 @@ struct follow_page_context { > > unsigned int page_mask; > > }; > > > > +typedef int (*set_dirty_func_t)(struct page *page); > > + > > +static void __put_user_pages_dirty(struct page **pages, > > + unsigned long npages, > > + set_dirty_func_t sdf) > > +{ > > + unsigned long index; > > + > > + for (index = 0; index < npages; index++) { > > + struct page *page = compound_head(pages[index]); > > + > > + if (!PageDirty(page)) > > + sdf(page); > > How is this safe? What prevents the page to be cleared under you? > > If it's safe to race clear_page_dirty*() it has to be stated explicitly > with a reason why. It's not very clear to me as it is. The PageDirty() optimization above is fine to race with clear the page flag as it means it is racing after a page_mkclean() and the GUP user is done with the page so page is about to be write back ie if (!PageDirty(page)) see the page as dirty and skip the sdf() call while a split second after TestClearPageDirty() happens then it means the racing clear is about to write back the page so all is fine (the page was dirty and it is being clear for write back). If it does call the sdf() while racing with write back then we just redirtied the page just like clear_page_dirty_for_io() would do if page_mkclean() failed so nothing harmful will come of that neither. Page stays dirty despite write back it just means that the page might be write back twice in a row. > > + > > + put_user_page(page); > > + } > > +} > > + > > +/** > > + * put_user_pages_dirty() - release and dirty an array of gup-pinned pages > > + * @pages: array of pages to be marked dirty and released. > > + * @npages: number of pages in the @pages array. > > + * > > + * "gup-pinned page" refers to a page that has had one of the get_user_pages() > > + * variants called on that page. > > + * > > + * For each page in the @pages array, make that page (or its head page, if a > > + * compound page) dirty, if it was previously listed as clean. Then, release > > + * the page using put_user_page(). 
> > +
> > +		put_user_page(page);
> > +	}
> > +}
> > +
> > +/**
> > + * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * "gup-pinned page" refers to a page that has had one of the get_user_pages()
> > + * variants called on that page.
> > + *
> > + * For each page in the @pages array, make that page (or its head page, if a
> > + * compound page) dirty, if it was previously listed as clean. Then, release
> > + * the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + *
> > + * set_page_dirty(), which does not lock the page, is used here.
> > + * Therefore, it is the caller's responsibility to ensure that this is
> > + * safe. If not, then put_user_pages_dirty_lock() should be called instead.
> > + *
> > + */
> > +void put_user_pages_dirty(struct page **pages, unsigned long npages)
> > +{
> > +	__put_user_pages_dirty(pages, npages, set_page_dirty);
> 
> Have you checked if the compiler is clever enough to eliminate the
> indirect function call here? Maybe it's better to go with an open-coded
> approach and get rid of the callbacks?

Good point, dunno if John checked that.

> > +}
> > +EXPORT_SYMBOL(put_user_pages_dirty);
> > +
> > +/**
> > + * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * For each page in the @pages array, make that page (or its head page, if a
> > + * compound page) dirty, if it was previously listed as clean. Then, release
> > + * the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + *
> > + * This is just like put_user_pages_dirty(), except that it invokes
> > + * set_page_dirty_lock(), instead of set_page_dirty().
> > + *
> > + */
> > +void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
> > +{
> > +	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
> > +}
> > +EXPORT_SYMBOL(put_user_pages_dirty_lock);
> > +
> > +/**
> > + * put_user_pages() - release an array of gup-pinned pages.
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * For each page in the @pages array, release the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + */
> > +void put_user_pages(struct page **pages, unsigned long npages)
> > +{
> > +	unsigned long index;
> > +
> > +	for (index = 0; index < npages; index++)
> > +		put_user_page(pages[index]);
> 
> I believe there's room for improvement for compound pages.
> 
> If there are multiple consecutive pages in the array that belong to the
> same compound page, we can get away with a single atomic operation to
> handle them all.

Yes, maybe just add a comment about that for now and leave this kind of
optimization for later?
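For reference, the batching could look something like the sketch below
(just an illustration of the idea, not part of the patch;
put_user_pages_coalesced() is a made-up name, and it mirrors what
put_page() does for the last reference while ignoring the ZONE_DEVICE
special case that put_page() also handles):

/*
 * Hypothetical sketch of the coalescing idea: consecutive entries that
 * share a head page are released with a single atomic operation.
 */
void put_user_pages_coalesced(struct page **pages, unsigned long npages)
{
	unsigned long index = 0;

	while (index < npages) {
		struct page *head = compound_head(pages[index]);
		int refs = 1;

		/* Count following entries with the same head page. */
		while (index + refs < npages &&
		       compound_head(pages[index + refs]) == head)
			refs++;

		/* One reference drop for the whole run. */
		if (page_ref_sub_and_test(head, refs))
			__put_page(head);

		index += refs;
	}
}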