From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Souptick Joarder, Matthew Wilcox, Jani Nikula, Joonas Lahtinen,
    Rodrigo Vivi, David Airlie, Daniel Vetter, Chris Wilson,
    Tvrtko Ursulin, Matthew Auld, LKML, John Hubbard
Subject: [PATCH 1/4] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
Date: Mon, 18 May 2020 17:21:21 -0700
Message-ID: <20200519002124.2025955-2-jhubbard@nvidia.com>
In-Reply-To: <20200519002124.2025955-1-jhubbard@nvidia.com>
References: <20200519002124.2025955-1-jhubbard@nvidia.com>

This is in order to avoid a forward declaration of
internal_get_user_pages_fast() in the next patch.

This is code movement only; all generated code should be identical.
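As background for the reordering, here is a minimal sketch with made-up
names (caller()/callee() are illustrative, not the actual gup.c symbols):
a static C function that is referenced before its definition must be
forward-declared, so keeping a caller below its callee avoids the extra
declaration.

/* With the caller defined first, a forward declaration is required: */
static int callee(int x);

static int caller(int x)
{
	return callee(x) + 1;
}

static int callee(int x)
{
	return x * 2;
}

If callee() is instead defined above caller(), the declaration can simply
be dropped. The next patch makes __get_user_pages_fast() call
internal_get_user_pages_fast(), so moving it below that function now
avoids exactly this kind of declaration, with no change to the generated
code.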
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 112 +++++++++++++++++++++++++++----------------------------
 1 file changed, 56 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 50cd9323efff..4502846d57f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2666,62 +2666,6 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	unsigned long len, end;
-	unsigned long flags;
-	int nr_pinned = 0;
-	/*
-	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-	 * because gup fast is always a "pin with a +1 page refcount" request.
-	 */
-	unsigned int gup_flags = FOLL_GET;
-
-	if (write)
-		gup_flags |= FOLL_WRITE;
-
-	start = untagged_addr(start) & PAGE_MASK;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	if (end <= start)
-		return 0;
-	if (unlikely(!access_ok((void __user *)start, len)))
-		return 0;
-
-	/*
-	 * Disable interrupts. We use the nested form as we can already have
-	 * interrupts disabled by get_futex_key.
-	 *
-	 * With interrupts disabled, we block page table pages from being
-	 * freed from under us. See struct mmu_table_batch comments in
-	 * include/asm-generic/tlb.h for more details.
-	 *
-	 * We do not adopt an rcu_read_lock(.) here as we also want to
-	 * block IPIs that come from THPs splitting.
-	 */
-
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-	    gup_fast_permitted(start, end)) {
-		local_irq_save(flags);
-		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-		local_irq_restore(flags);
-	}
-
-	return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 				   unsigned int gup_flags, struct page **pages)
 {
@@ -2794,6 +2738,62 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			  struct page **pages)
+{
+	unsigned long len, end;
+	unsigned long flags;
+	int nr_pinned = 0;
+	/*
+	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+	 * because gup fast is always a "pin with a +1 page refcount" request.
+	 */
+	unsigned int gup_flags = FOLL_GET;
+
+	if (write)
+		gup_flags |= FOLL_WRITE;
+
+	start = untagged_addr(start) & PAGE_MASK;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
+
+	if (end <= start)
+		return 0;
+	if (unlikely(!access_ok((void __user *)start, len)))
+		return 0;
+
+	/*
+	 * Disable interrupts. We use the nested form as we can already have
+	 * interrupts disabled by get_futex_key.
+	 *
+	 * With interrupts disabled, we block page table pages from being
+	 * freed from under us. See struct mmu_table_batch comments in
+	 * include/asm-generic/tlb.h for more details.
+	 *
+	 * We do not adopt an rcu_read_lock(.) here as we also want to
+	 * block IPIs that come from THPs splitting.
+	 */
+
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
+		local_irq_save(flags);
+		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+		local_irq_restore(flags);
+	}
+
+	return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:	starting user address
-- 
2.26.2
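
As context for reviewers unfamiliar with this interface, here is a
hypothetical caller-side sketch (pin_user_buffer() is an illustrative
name, not part of this series). It assumes the contract documented in
the comment above: the function returns the number of pages actually
pinned, each held with a +1 page refcount that the caller later drops
with put_page().

static int pin_user_buffer(unsigned long uaddr, int nr_pages,
			   struct page **pages)
{
	/* IRQ-safe pin attempt; returns how many pages were pinned, 0 if none. */
	int nr_pinned = __get_user_pages_fast(uaddr, nr_pages, 1, pages);

	if (nr_pinned != nr_pages) {
		/* Partial success: release the refcounts we did take. */
		while (nr_pinned-- > 0)
			put_page(pages[nr_pinned]);
		return -EFAULT;
	}

	return 0;
}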