From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
	Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
	David Airlie, "David S. Miller", Ira Weiny, Jan Kara,
	Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
	Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
	Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
	Vlastimil Babka, LKML, John Hubbard, Christoph Hellwig,
	"Aneesh Kumar K.V"
Subject: [PATCH 02/19] mm/gup: factor out duplicate code from four routines
Date: Wed, 30 Oct 2019 15:49:13 -0700
Message-ID: <20191030224930.3990755-3-jhubbard@nvidia.com>
In-Reply-To: <20191030224930.3990755-1-jhubbard@nvidia.com>
References: <20191030224930.3990755-1-jhubbard@nvidia.com>

There are four locations in gup.c that have a fair amount of code
duplication. This means that changing one requires making the same
changes in four places, not to mention reading the same code four
times, and wondering if there are subtle differences.

Factor out the common code into static functions, thus reducing the
overall line count and the code's complexity.

Also, take the opportunity to slightly improve the efficiency of the
error cases, by doing a mass subtraction of the refcount, surrounded
by get_page()/put_page().

Also, further simplify (slightly), by waiting until the successful end
of each routine to increment *nr.
Cc: Christoph Hellwig
Cc: Aneesh Kumar K.V
Signed-off-by: John Hubbard
---
 mm/gup.c | 113 ++++++++++++++++++++++---------------------------------
 1 file changed, 46 insertions(+), 67 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 85caf76b3012..8fb0d9cdfaf5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1969,6 +1969,35 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
+static int __record_subpages(struct page *page, unsigned long addr,
+			     unsigned long end, struct page **pages, int nr)
+{
+	int nr_recorded_pages = 0;
+
+	do {
+		pages[nr] = page;
+		nr++;
+		page++;
+		nr_recorded_pages++;
+	} while (addr += PAGE_SIZE, addr != end);
+	return nr_recorded_pages;
+}
+
+static void __remove_refs_from_head(struct page *page, int refs)
+{
+	/* Do a get_page() first, in case refs == page->_refcount */
+	get_page(page);
+	page_ref_sub(page, refs);
+	put_page(page);
+}
+
+static int __huge_pt_done(struct page *head, int nr_recorded_pages, int *nr)
+{
+	*nr += nr_recorded_pages;
+	SetPageReferenced(head);
+	return 1;
+}
+
 #ifdef CONFIG_ARCH_HAS_HUGEPD
 static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
 				      unsigned long sz)
@@ -1998,34 +2027,19 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	refs = 0;
 	head = pte_page(pte);
-
 	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
-	do {
-		VM_BUG_ON(compound_head(page) != head);
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages, *nr);
 
 	head = try_get_compound_head(head, refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-		/* Could be optimized better */
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		__remove_refs_from_head(head, refs);
 		return 0;
 	}
-
-	SetPageReferenced(head);
-	return 1;
+	return __huge_pt_done(head, refs, nr);
 }
 
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
@@ -2071,30 +2085,18 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	refs = 0;
 	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages, *nr);
 
 	head = try_get_compound_head(pmd_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		__remove_refs_from_head(head, refs);
 		return 0;
 	}
-
-	SetPageReferenced(head);
-	return 1;
+	return __huge_pt_done(head, refs, nr);
 }
 
 static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
@@ -2114,30 +2116,18 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	refs = 0;
 	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages, *nr);
 
 	head = try_get_compound_head(pud_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		__remove_refs_from_head(head, refs);
 		return 0;
 	}
-
-	SetPageReferenced(head);
-	return 1;
+	return __huge_pt_done(head, refs, nr);
 }
 
 static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
@@ -2151,30 +2141,19 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 		return 0;
 
 	BUILD_BUG_ON(pgd_devmap(orig));
-	refs = 0;
+
 	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages, *nr);
 
 	head = try_get_compound_head(pgd_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		__remove_refs_from_head(head, refs);
 		return 0;
 	}
-
-	SetPageReferenced(head);
-	return 1;
+	return __huge_pt_done(head, refs, nr);
 }
 
 static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
-- 
2.23.0