From: John Hubbard <jhubbard@nvidia.com>
To: LKML <linux-kernel@vger.kernel.org>
CC: Souptick Joarder, John Hubbard, "Kai Mäkisara (Kolumbus)",
    Bart Van Assche, "James E. J. Bottomley", "Martin K. Petersen"
Subject: [PATCH v2] scsi: st: convert get_user_pages() --> pin_user_pages()
Date: Tue, 26 May 2020 11:27:09 -0700
Message-ID: <20200526182709.99599-1-jhubbard@nvidia.com>

This code was using get_user_pages*(), in a "Case 1" scenario (Direct IO),
using the categorization from [1]. That means that it's time to convert the
get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small part of
fixing a long-standing disconnect between pinning pages, and file systems'
use of those pages.

Note that this effectively changes the code's behavior as well: it now
ultimately calls set_page_dirty_lock(), instead of SetPageDirty(). This is
probably more accurate. As Christoph Hellwig put it, "set_page_dirty() is
only safe if we are dealing with a file backed page where we have reference
on the inode it hangs off."
[3]

Also, this deletes one of the two FIXME comments (about refcounting),
because there is nothing wrong with the refcounting at this point.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/
[3] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Cc: "Kai Mäkisara (Kolumbus)"
Cc: Bart Van Assche
Cc: James E.J. Bottomley
Cc: Martin K. Petersen
Cc: linux-scsi@vger.kernel.org
Signed-off-by: John Hubbard
---

Hi,

As mentioned in the v1 review thread, we probably still want/need this.
Or so I claim. :) Please see what you think...

Changes since v1: changed the commit log to refer to Direct IO (Case 1)
instead of DMA/RDMA (Case 2), and added Bart to Cc.

v1: https://lore.kernel.org/linux-scsi/20200519045525.2446851-1-jhubbard@nvidia.com/

thanks,
John Hubbard
NVIDIA

 drivers/scsi/st.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index c5f9b348b438..1e3eda9fa231 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -4922,7 +4922,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	unsigned long end = (uaddr + count + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long start = uaddr >> PAGE_SHIFT;
 	const int nr_pages = end - start;
-	int res, i, j;
+	int res, i;
 	struct page **pages;
 	struct rq_map_data *mdata = &STbp->map_data;
 
@@ -4944,7 +4944,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 
 	/* Try to fault in all of the necessary pages */
 	/* rw==READ means read from drive, write into memory area */
-	res = get_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
+	res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
 				  pages);
 
 	/* Errors and no page mapped should return here */
@@ -4964,8 +4964,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	return nr_pages;
  out_unmap:
 	if (res > 0) {
-		for (j=0; j < res; j++)
-			put_page(pages[j]);
+		unpin_user_pages(pages, res);
 		res = 0;
 	}
 	kfree(pages);
@@ -4977,18 +4976,9 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 static int sgl_unmap_user_pages(struct st_buffer *STbp,
 				const unsigned int nr_pages, int dirtied)
 {
-	int i;
-
-	for (i=0; i < nr_pages; i++) {
-		struct page *page = STbp->mapped_pages[i];
+	/* FIXME: cache flush missing for rw==READ */
+	unpin_user_pages_dirty_lock(STbp->mapped_pages, nr_pages, dirtied);
 
-		if (dirtied)
-			SetPageDirty(page);
-		/* FIXME: cache flush missing for rw==READ
-		 * FIXME: call the correct reference counting function
-		 */
-		put_page(page);
-	}
 	kfree(STbp->mapped_pages);
 	STbp->mapped_pages = NULL;
 
-- 
2.26.2