Subject: Re: [PATCH v3 3/4] mm/gup: add a range variant of unpin_user_pages_dirty_lock()
From: John Hubbard
To: Joao Martins
CC: Andrew Morton, Jason Gunthorpe, Doug Ledford, Matthew Wilcox
Date: Wed, 10 Feb 2021 15:19:40 -0800
Message-ID: <6ce67c15-3bb3-3ccb-3c45-edb0efb3a38f@nvidia.com>
In-Reply-To: <20210205204127.29441-4-joao.m.martins@oracle.com>
References: <20210205204127.29441-1-joao.m.martins@oracle.com> <20210205204127.29441-4-joao.m.martins@oracle.com>

On 2/5/21 12:41 PM, Joao Martins wrote:
> Add an unpin_user_page_range_dirty_lock() API which takes a starting page
> and how many consecutive pages we want to unpin and optionally dirty.
> 
> To that end, define another iterator, for_each_compound_range(),
> that operates on page ranges as opposed to page arrays.
> 
> For users (like RDMA mr_dereg) where each sg represents a
> contiguous set of pages, we're able to more efficiently unpin
> pages without having to supply an array of pages, as happens
> today with unpin_user_pages().
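Just to double-check my reading of the mr_dereg use case described above: for an sg
entry whose pages were pinned with FOLL_PIN and are contiguous, a caller could now do
something roughly like the sketch below instead of building up a struct page **array.
(Untested; example_unpin_sg() and the page-aligned-start assumption are mine, purely
for illustration, not part of this patch.)

        /*
         * Sketch only: assumes the sg entry starts page-aligned and covers a
         * FOLL_PIN'd, contiguous range of pages.
         */
        static void example_unpin_sg(struct scatterlist *sg, bool dirty)
        {
                struct page *first = sg_page(sg);
                unsigned long npages = DIV_ROUND_UP(sg->length, PAGE_SIZE);

                /* One call covers the whole range; no page array required. */
                unpin_user_page_range_dirty_lock(first, npages, dirty);
        }
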
> 
> Suggested-by: Jason Gunthorpe
> Signed-off-by: Joao Martins
> ---
>  include/linux/mm.h |  2 ++
>  mm/gup.c           | 62 ++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 64 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a608feb0d42e..b76063f7f18a 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1265,6 +1265,8 @@ static inline void put_page(struct page *page)
>  void unpin_user_page(struct page *page);
>  void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>                                   bool make_dirty);
> +void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
> +                                      bool make_dirty);
>  void unpin_user_pages(struct page **pages, unsigned long npages);
>  
>  /**
> diff --git a/mm/gup.c b/mm/gup.c
> index 467a11df216d..938964d31494 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -215,6 +215,32 @@ void unpin_user_page(struct page *page)
>  }
>  EXPORT_SYMBOL(unpin_user_page);
>  
> +static inline void compound_range_next(unsigned long i, unsigned long npages,
> +                                       struct page **list, struct page **head,
> +                                       unsigned int *ntails)

Yes, the new names look good, and I have failed to find any logic errors, so:

Reviewed-by: John Hubbard

thanks,
-- 
John Hubbard
NVIDIA

> +{
> +        struct page *next, *page;
> +        unsigned int nr = 1;
> +
> +        if (i >= npages)
> +                return;
> +
> +        next = *list + i;
> +        page = compound_head(next);
> +        if (PageCompound(page) && compound_order(page) >= 1)
> +                nr = min_t(unsigned int,
> +                           page + compound_nr(page) - next, npages - i);
> +
> +        *head = page;
> +        *ntails = nr;
> +}
> +
> +#define for_each_compound_range(__i, __list, __npages, __head, __ntails) \
> +        for (__i = 0, \
> +             compound_range_next(__i, __npages, __list, &(__head), &(__ntails)); \
> +             __i < __npages; __i += __ntails, \
> +             compound_range_next(__i, __npages, __list, &(__head), &(__ntails)))
> +
>  static inline void compound_next(unsigned long i, unsigned long npages,
>                                   struct page **list, struct page **head,
>                                   unsigned int *ntails)
> @@ -303,6 +329,42 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>  }
>  EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
>  
> +/**
> + * unpin_user_page_range_dirty_lock() - release and optionally dirty
> + * gup-pinned page range
> + *
> + * @page: the starting page of a range maybe marked dirty, and definitely released.
> + * @npages: number of consecutive pages to release.
> + * @make_dirty: whether to mark the pages dirty
> + *
> + * "gup-pinned page range" refers to a range of pages that has had one of the
> + * get_user_pages() variants called on that page.
> + *
> + * For the page ranges defined by [page .. page+npages], make that range (or
> + * its head pages, if a compound page) dirty, if @make_dirty is true, and if the
> + * page range was previously listed as clean.
> + *
> + * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
> + * required, then the caller should a) verify that this is really correct,
> + * because _lock() is usually required, and b) hand code it:
> + * set_page_dirty_lock(), unpin_user_page().
> + *
> + */
> +void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
> +                                      bool make_dirty)
> +{
> +        unsigned long index;
> +        struct page *head;
> +        unsigned int ntails;
> +
> +        for_each_compound_range(index, &page, npages, head, ntails) {
> +                if (make_dirty && !PageDirty(head))
> +                        set_page_dirty_lock(head);
> +                put_compound_head(head, ntails, FOLL_PIN);
> +        }
> +}
> +EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
> +
>  /**
>   * unpin_user_pages() - release an array of gup-pinned pages.
>   * @pages: array of pages to be marked dirty and released.
> 
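For anyone else reading along, here is the worked example I used to convince myself
that compound_range_next() handles the case I was most worried about, a range that
starts in the middle of a compound page. Assume a 512-subpage THP, a starting @page
at subpage 500 of that THP, and npages = 100 (numbers picked only for illustration):

        /* My reading of the iteration, not part of the patch:               */
        /* i = 0:  next = head + 500, ntails = min(512 - 500, 100 - 0) = 12  */
        /*         -> put_compound_head(head, 12, FOLL_PIN)                  */
        /* i = 12: next is the first page past that THP; if it is a normal   */
        /*         order-0 page, ntails = 1, and so on until i reaches 100.  */

So each compound page in the range gets one batched put against its head, and a run
of order-0 pages degenerates to one put per page, matching the existing array-based
unpin_user_pages_dirty_lock() behavior.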