Subject: Re: [PATCH v2] tee: convert get_user_pages() --> pin_user_pages()
To:
CC: Sumit Semwal
References: <20200824183641.632126-1-jhubbard@nvidia.com>
From: John Hubbard
Date: Mon, 24 Aug 2020 11:42:51 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20200824183641.632126-1-jhubbard@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 8/24/20 11:36 AM, John Hubbard wrote:
> This code was using get_user_pages*(), in a "Case 2" scenario
> (DMA/RDMA), using the categorization from [1]. That means that it's
> time to convert the get_user_pages*() + put_page() calls to
> pin_user_pages*() + unpin_user_pages() calls.
>
> There is some helpful background in [2]: basically, this is a small
> part of fixing a long-standing disconnect between pinning pages, and
> file systems' use of those pages.
>
> [1] Documentation/core-api/pin_user_pages.rst
>
> [2] "Explicit pinning of user-space pages":
>     https://lwn.net/Articles/807108/
>
> Cc: Jens Wiklander
> Cc: Sumit Semwal
> Cc: tee-dev@lists.linaro.org
> Cc: linux-media@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: linaro-mm-sig@lists.linaro.org
> Signed-off-by: John Hubbard
> ---
>
> OK, this should be identical to v1 [1], but now rebased against
> Linux 5.9-rc2.

...ohhh, wait, I should have read the earlier message from Jens more
carefully: "The conflict isn't trivial, I guess we need to handle the
different types of pages differently when releasing them."

So it's not good to have a logically identical patch. argghhh. Let me
see how hard it is to track these memory types separately and handle
the release accordingly, just a sec. Sorry about the false move here.

thanks,
--
John Hubbard
NVIDIA

> As before, I've compile-tested it again with a cross compiler, but that's
> the only testing I'm set up for with CONFIG_TEE.
>
> [1] https://lore.kernel.org/r/20200519051850.2845561-1-jhubbard@nvidia.com
>
> thanks,
> John Hubbard
> NVIDIA
>
>  drivers/tee/tee_shm.c | 12 +++---------
>  1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
> index 827ac3d0fea9..3c29e6c3ebe8 100644
> --- a/drivers/tee/tee_shm.c
> +++ b/drivers/tee/tee_shm.c
> @@ -32,16 +32,13 @@ static void tee_shm_release(struct tee_shm *shm)
>
>  		poolm->ops->free(poolm, shm);
>  	} else if (shm->flags & TEE_SHM_REGISTER) {
> -		size_t n;
>  		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
>
>  		if (rc)
>  			dev_err(teedev->dev.parent,
>  				"unregister shm %p failed: %d", shm, rc);
>
> -		for (n = 0; n < shm->num_pages; n++)
> -			put_page(shm->pages[n]);
> -
> +		unpin_user_pages(shm->pages, shm->num_pages);
>  		kfree(shm->pages);
>  	}
>
> @@ -228,7 +225,7 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
>  	}
>
>  	if (flags & TEE_SHM_USER_MAPPED) {
> -		rc = get_user_pages_fast(start, num_pages, FOLL_WRITE,
> +		rc = pin_user_pages_fast(start, num_pages, FOLL_WRITE,
>  					 shm->pages);
>  	} else {
>  		struct kvec *kiov;
> @@ -292,16 +289,13 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
>  		return shm;
>  err:
>  	if (shm) {
> -		size_t n;
> -
>  		if (shm->id >= 0) {
>  			mutex_lock(&teedev->mutex);
>  			idr_remove(&teedev->idr, shm->id);
>  			mutex_unlock(&teedev->mutex);
>  		}
>  		if (shm->pages) {
> -			for (n = 0; n < shm->num_pages; n++)
> -				put_page(shm->pages[n]);
> +			unpin_user_pages(shm->pages, shm->num_pages);
>  			kfree(shm->pages);
>  		}
>  	}