From: John Hubbard
Subject: Re: [PATCH v2 1/4] mm/gup: add compound page list iterator
To: Joao Martins
CC: Andrew Morton, Jason Gunthorpe, Doug Ledford, Matthew Wilcox
Date: Thu, 4 Feb 2021 20:11:46 -0800
Message-ID: <74edd971-a80c-78b6-7ab2-5c1f6ba4ade9@nvidia.com>
In-Reply-To: <20210204202500.26474-2-joao.m.martins@oracle.com>
References: <20210204202500.26474-1-joao.m.martins@oracle.com> <20210204202500.26474-2-joao.m.martins@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2/4/21 12:24 PM, Joao Martins wrote:
> Add an helper that iterates over head pages in a list of pages. It
> essentially counts the tails until the next page to process has a
> different head that the current. This is going to be used by
> unpin_user_pages() family of functions, to batch the head page refcount
> updates once for all passed consecutive tail pages.
>
> Suggested-by: Jason Gunthorpe
> Signed-off-by: Joao Martins
> ---
>  mm/gup.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index d68bcb482b11..d1549c61c2f6 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -215,6 +215,35 @@ void unpin_user_page(struct page *page)
>  }
>  EXPORT_SYMBOL(unpin_user_page);
>
> +static inline void compound_next(unsigned long i, unsigned long npages,
> +				 struct page **list, struct page **head,
> +				 unsigned int *ntails)
> +{
> +	struct page *page;
> +	unsigned int nr;
> +
> +	if (i >= npages)
> +		return;
> +
> +	list += i;
> +	npages -= i;

It is worth noting that this is slightly more complex to read than it
needs to be. You are changing both endpoints of a loop at once. That's
hard to read for a human.
And you're only doing it in order to gain the small benefit of being
able to use nr directly at the end of the routine. If instead you keep
npages constant like it naturally wants to be, you could just do a
"(*ntails)++" in the loop, to take care of *ntails.

However, given that the patch is correct and works as-is, the above is
really just an optional idea, so please feel free to add:

Reviewed-by: John Hubbard

thanks,
--
John Hubbard
NVIDIA

> +	page = compound_head(*list);
> +
> +	for (nr = 1; nr < npages; nr++) {
> +		if (compound_head(list[nr]) != page)
> +			break;
> +	}
> +
> +	*head = page;
> +	*ntails = nr;
> +}
> +
> +#define for_each_compound_head(__i, __list, __npages, __head, __ntails) \
> +	for (__i = 0, \
> +	     compound_next(__i, __npages, __list, &(__head), &(__ntails)); \
> +	     __i < __npages; __i += __ntails, \
> +	     compound_next(__i, __npages, __list, &(__head), &(__ntails)))
> +
>  /**
>   * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
>   * @pages: array of pages to be maybe marked dirty, and definitely released.
>