From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Souptick Joarder, Matthew Wilcox, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, David Airlie, Daniel Vetter, Chris Wilson, Tvrtko Ursulin, Matthew Auld, LKML, John Hubbard
Subject: [PATCH v2 1/4] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
Date: Thu, 21 May 2020 22:19:28 -0700
Message-ID: <20200522051931.54191-2-jhubbard@nvidia.com>
In-Reply-To: <20200522051931.54191-1-jhubbard@nvidia.com>
References: <20200522051931.54191-1-jhubbard@nvidia.com>

Move __get_user_pages_fast() down in gup.c, in order to avoid a forward
declaration of internal_get_user_pages_fast() in the next patch. This is
code movement only; all generated code should be identical.
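As context for the commit message above, here is a minimal sketch of the
C rule at play (plain illustrative code, not from the kernel tree;
helper() and caller() are hypothetical names): a static function that is
called before its definition needs a forward declaration, so defining
the callee earlier in the file removes that need.

/* Callee defined first: callers below need no forward declaration. */
static int helper(int x)
{
	return x * 2;
}

static int caller(int x)
{
	/*
	 * Had helper() been defined after this function instead, a
	 * forward declaration ("static int helper(int x);") would be
	 * required above caller() for this call to compile.
	 */
	return helper(x) + 1;
}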
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 112 +++++++++++++++++++++++++++----------------------------
 1 file changed, 56 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 50cd9323efff..4502846d57f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2666,62 +2666,6 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	unsigned long len, end;
-	unsigned long flags;
-	int nr_pinned = 0;
-	/*
-	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-	 * because gup fast is always a "pin with a +1 page refcount" request.
-	 */
-	unsigned int gup_flags = FOLL_GET;
-
-	if (write)
-		gup_flags |= FOLL_WRITE;
-
-	start = untagged_addr(start) & PAGE_MASK;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	if (end <= start)
-		return 0;
-	if (unlikely(!access_ok((void __user *)start, len)))
-		return 0;
-
-	/*
-	 * Disable interrupts. We use the nested form as we can already have
-	 * interrupts disabled by get_futex_key.
-	 *
-	 * With interrupts disabled, we block page table pages from being
-	 * freed from under us. See struct mmu_table_batch comments in
-	 * include/asm-generic/tlb.h for more details.
-	 *
-	 * We do not adopt an rcu_read_lock(.) here as we also want to
-	 * block IPIs that come from THPs splitting.
-	 */
-
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-	    gup_fast_permitted(start, end)) {
-		local_irq_save(flags);
-		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-		local_irq_restore(flags);
-	}
-
-	return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 				   unsigned int gup_flags, struct page **pages)
 {
@@ -2794,6 +2738,62 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			  struct page **pages)
+{
+	unsigned long len, end;
+	unsigned long flags;
+	int nr_pinned = 0;
+	/*
+	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+	 * because gup fast is always a "pin with a +1 page refcount" request.
+	 */
+	unsigned int gup_flags = FOLL_GET;
+
+	if (write)
+		gup_flags |= FOLL_WRITE;
+
+	start = untagged_addr(start) & PAGE_MASK;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
+
+	if (end <= start)
+		return 0;
+	if (unlikely(!access_ok((void __user *)start, len)))
+		return 0;
+
+	/*
+	 * Disable interrupts. We use the nested form as we can already have
+	 * interrupts disabled by get_futex_key.
+	 *
+	 * With interrupts disabled, we block page table pages from being
+	 * freed from under us. See struct mmu_table_batch comments in
+	 * include/asm-generic/tlb.h for more details.
+	 *
+	 * We do not adopt an rcu_read_lock(.) here as we also want to
+	 * block IPIs that come from THPs splitting.
+	 */
+
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
+		local_irq_save(flags);
+		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+		local_irq_restore(flags);
+	}
+
+	return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:	starting user address
-- 
2.26.2
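A hedged usage sketch of the function moved above (demo_pin_user_buffer()
is a hypothetical name; only __get_user_pages_fast() and put_page() are
real kernel APIs as of this series). It illustrates the contract in the
comment block: the call never falls back to slow GUP, so it is safe from
atomic context; it returns the number of pages pinned, 0 if none; and
FOLL_GET semantics mean the caller must drop the +1 refcount on each
returned page.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/errno.h>

/* Hypothetical caller, not part of this patch. */
static int demo_pin_user_buffer(unsigned long uaddr, int write)
{
	struct page *pages[4];
	int i, nr;

	/* Pin up to 4 pages starting at uaddr; never sleeps. */
	nr = __get_user_pages_fast(uaddr, ARRAY_SIZE(pages), write, pages);
	if (nr == 0)
		return -EFAULT;	/* nothing could be pinned */

	/* ... access pages[0..nr-1] while they are pinned ... */

	for (i = 0; i < nr; i++)
		put_page(pages[i]);	/* drop the +1 refcount taken by GUP */

	return nr;
}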