From: John Hubbard
To: LKML
CC: John Hubbard, Frank Haverkamp, Arnd Bergmann, Greg Kroah-Hartman
Subject: [PATCH] genwqe: convert get_user_pages() --> pin_user_pages()
Date: Sun, 17 May 2020 18:52:37 -0700
Message-ID: <20200518015237.1568940-1-jhubbard@nvidia.com>

This code was using get_user_pages*() in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means it's time to
convert the get_user_pages*() + put_page() calls to pin_user_pages*()
+ unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: Frank Haverkamp
Cc: Arnd Bergmann
Cc: Greg Kroah-Hartman
Signed-off-by: John Hubbard
---

Hi,

Note that I have only compile-tested this patch, although that does
include cross-compiling for a few other arches; of those, the driver
only appeared to be buildable on x86_64 and sparc.

thanks,
John Hubbard
NVIDIA

 drivers/misc/genwqe/card_utils.c | 42 +++++++-------------------------
 1 file changed, 9 insertions(+), 33 deletions(-)

diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index 2e1c4d2905e8..60460a053b2d 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -514,30 +514,6 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl)
 	return rc;
 }
 
-/**
- * genwqe_free_user_pages() - Give pinned pages back
- *
- * Documentation of get_user_pages is in mm/gup.c:
- *
- * If the page is written to, set_page_dirty (or set_page_dirty_lock,
- * as appropriate) must be called after the page is finished with, and
- * before put_page is called.
- */
-static int genwqe_free_user_pages(struct page **page_list,
-				  unsigned int nr_pages, int dirty)
-{
-	unsigned int i;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (page_list[i] != NULL) {
-			if (dirty)
-				set_page_dirty_lock(page_list[i]);
-			put_page(page_list[i]);
-		}
-	}
-	return 0;
-}
-
 /**
  * genwqe_user_vmap() - Map user-space memory to virtual kernel memory
  * @cd: pointer to genwqe device
@@ -597,18 +573,18 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
 	m->dma_list = (dma_addr_t *)(m->page_list + m->nr_pages);
 
 	/* pin user pages in memory */
-	rc = get_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
+	rc = pin_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
				 m->nr_pages,
				 m->write ? FOLL_WRITE : 0,	/* readable/writable */
				 m->page_list);	/* ptrs to pages */
 	if (rc < 0)
-		goto fail_get_user_pages;
+		goto fail_pin_user_pages;
 
-	/* assumption: get_user_pages can be killed by signals. */
+	/* assumption: pin_user_pages can be killed by signals. */
 	if (rc < m->nr_pages) {
-		genwqe_free_user_pages(m->page_list, rc, m->write);
+		unpin_user_pages_dirty_lock(m->page_list, rc, m->write);
 		rc = -EFAULT;
-		goto fail_get_user_pages;
+		goto fail_pin_user_pages;
 	}
 
 	rc = genwqe_map_pages(cd, m->page_list, m->nr_pages, m->dma_list);
@@ -618,9 +594,9 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
 	return 0;
 
  fail_free_user_pages:
-	genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
+	unpin_user_pages_dirty_lock(m->page_list, m->nr_pages, m->write);
 
- fail_get_user_pages:
+ fail_pin_user_pages:
 	kfree(m->page_list);
 	m->page_list = NULL;
 	m->dma_list = NULL;
@@ -650,8 +626,8 @@ int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m)
 	genwqe_unmap_pages(cd, m->dma_list, m->nr_pages);
 
 	if (m->page_list) {
-		genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
-
+		unpin_user_pages_dirty_lock(m->page_list, m->nr_pages,
+					    m->write);
 		kfree(m->page_list);
 		m->page_list = NULL;
 		m->dma_list = NULL;

base-commit: b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce
-- 
2.26.2
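
P.S.: for anyone less familiar with the newer API, here is a minimal,
illustrative sketch (not part of the patch; the example_* names are
made up) of the pin/unpin calling pattern that this conversion
produces, using the same mm calls as above:

/*
 * Illustrative sketch only -- not part of this patch. The example_*
 * names are invented; the mm calls are the real ones used above.
 */
#include <linux/errno.h>
#include <linux/mm.h>

static int example_pin_user_buffer(unsigned long uaddr, int nr_pages,
				   bool writable, struct page **pages)
{
	int rc;

	/*
	 * pin_user_pages_fast() is the "Case 2" (DMA/RDMA) replacement
	 * for get_user_pages_fast(). It returns the number of pages
	 * actually pinned, or a negative errno.
	 */
	rc = pin_user_pages_fast(uaddr & PAGE_MASK, nr_pages,
				 writable ? FOLL_WRITE : 0, pages);
	if (rc < 0)
		return rc;

	/* A partial pin must be fully undone before failing. */
	if (rc < nr_pages) {
		unpin_user_pages_dirty_lock(pages, rc, writable);
		return -EFAULT;
	}
	return 0;
}

static void example_unpin_user_buffer(struct page **pages, int nr_pages,
				      bool dirty)
{
	/*
	 * One call replaces the old set_page_dirty_lock() + put_page()
	 * loop: it dirties each page (when requested) and drops the pin.
	 */
	unpin_user_pages_dirty_lock(pages, nr_pages, dirty);
}

This is also why genwqe_free_user_pages() can simply be deleted:
unpin_user_pages_dirty_lock() covers the dirty-and-release sequence in
one call, and the old NULL checks are unneeded because a short pin is
unpinned immediately, so only valid page pointers ever reach it.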