From: John Hubbard <jhubbard@nvidia.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Hubbard, Alex Williamson, Cornelia Huck
Subject: [PATCH v2] vfio/spapr_tce: convert get_user_pages() --> pin_user_pages()
Date: Wed, 24 Jun 2020 23:37:17 -0700
Message-ID: <20200625063717.834923-1-jhubbard@nvidia.com>

This code was using get_user_pages*() in a "Case 2" scenario (DMA/RDMA),
using the categorization from [1]. That means it's time to convert the
get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small part
of fixing a long-standing disconnect between pinning pages and file
systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/

Cc: Alex Williamson
Cc: Cornelia Huck
Cc: kvm@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---

Hi,

Changes since v1: rebased onto Linux-5.8-rc2.

thanks,
John Hubbard

 drivers/vfio/vfio_iommu_spapr_tce.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 16b3adc508db..fe888b5dcc00 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -383,7 +383,7 @@ static void tce_iommu_unuse_page(struct tce_container *container,
 	struct page *page;
 
 	page = pfn_to_page(hpa >> PAGE_SHIFT);
-	put_page(page);
+	unpin_user_page(page);
 }
 
 static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
@@ -486,7 +486,7 @@ static int tce_iommu_use_page(unsigned long tce, unsigned long *hpa)
 	struct page *page = NULL;
 	enum dma_data_direction direction = iommu_tce_direction(tce);
 
-	if (get_user_pages_fast(tce & PAGE_MASK, 1,
+	if (pin_user_pages_fast(tce & PAGE_MASK, 1,
 			direction != DMA_TO_DEVICE ? FOLL_WRITE : 0, &page) != 1)
 		return -EFAULT;
-- 
2.27.0
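
[Editor's note, not part of the patch: the conversion above generalizes to
any "Case 2" (DMA/RDMA) call site. The sketch below shows the general
pattern under the same assumptions; pin_one_page_for_dma() and
unpin_one_page_after_dma() are hypothetical helpers, not functions in this
driver, and the error handling is illustrative only.]

	#include <linux/mm.h>
	#include <linux/errno.h>

	/*
	 * Hypothetical helper, illustrating the pattern this patch
	 * applies: pin_user_pages_fast() must be paired with
	 * unpin_user_page(), just as get_user_pages_fast() was
	 * paired with put_page().
	 */
	static int pin_one_page_for_dma(unsigned long vaddr, bool writable,
					struct page **page)
	{
		unsigned int gup_flags = writable ? FOLL_WRITE : 0;

		/* Pin exactly one page; report -EFAULT on any failure. */
		if (pin_user_pages_fast(vaddr & PAGE_MASK, 1, gup_flags,
					page) != 1)
			return -EFAULT;
		return 0;
	}

	/* When the DMA mapping is torn down, release the pin: */
	static void unpin_one_page_after_dma(struct page *page)
	{
		unpin_user_page(page);
	}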