Subject: Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions
To: Andrew Morton, Jan Kara
CC: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, LKML, linux-rdma, Al Viro, Jerome Glisse,
    Christoph Hellwig, Ralph Campbell
References: <20181008211623.30796-1-jhubbard@nvidia.com>
    <20181008211623.30796-3-jhubbard@nvidia.com>
    <20181008171442.d3b3a1ea07d56c26d813a11e@linux-foundation.org>
    <20181009083025.GE11150@quack2.suse.cz>
    <20181009162012.c662ef0b041993557e150035@linux-foundation.org>
From: John Hubbard
Message-ID: <62492f47-d51f-5c41-628c-ff17de21829e@nvidia.com>
Date: Tue, 9 Oct 2018 17:32:16 -0700
In-Reply-To: <20181009162012.c662ef0b041993557e150035@linux-foundation.org>

On 10/9/18 4:20 PM, Andrew Morton wrote:
> On Tue, 9 Oct 2018 10:30:25 +0200 Jan Kara wrote:
>
>>> Also, maintainability. What happens if someone now uses put_page() by
>>> mistake? Kernel fails in some mysterious fashion? How can we prevent
>>> this from occurring as code evolves? Is there a cheap way of detecting
>>> this bug at runtime?
>>
>> The same will happen as with any other reference counting bug - the special
>> user reference will leak. It will be pretty hard to debug I agree. I was
>> thinking about whether we could provide some type safety against such bugs
>> such as get_user_pages() not returning struct page pointers but rather some
>> other special type but it would result in a big amount of additional churn
>> as we'd have to propagate this different type e.g. through the IO path so
>> that IO completion routines could properly call put_user_pages(). So I'm
>> not sure it's really worth it.
>
> I'm not really understanding. Patch 3/3 changes just one infiniband
> driver to use put_user_page(). But the changelogs here imply (to me)
> that every user of get_user_pages() needs to be converted to
> s/put_page/put_user_page/.
>
> Methinks a bit more explanation is needed in these changelogs?

OK, yes, it does sound like the explanation is falling short. I'll work on
something clearer. Did the proposed steps in the changelogs, such as:

  [2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
      Proposed steps for fixing get_user_pages() + DMA problems.

help at all, or is it just too many references, and should I write the words
directly in the changelog?

Anyway, patch 3/3 is just a working example (which we do want to submit,
though), and many more conversions will follow. But they don't have to be
done all upfront--they can be done in follow-up patchsets.

The put_user_page*() routines are, at this point, not going to significantly
change behavior.

I'm working on an RFC that will show what the long-term fix to
get_user_pages() and put_user_pages() will look like. But meanwhile it's good
to get started on converting all of the call sites.

thanks,
--
John Hubbard
NVIDIA
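
[For context, the "special type" idea Jan floats above (and sets aside as
too much churn) could look roughly like the following. This is only an
illustrative sketch with invented names, not code proposed in the thread:]

    /* Hypothetical wrapper type; not from the patch series. */
    struct user_page {
            struct page *page;
    };

    /*
     * If get_user_pages() handed back struct user_page instead of
     * struct page *, passing one of these to put_page() would no longer
     * compile, so the "wrong put" bug is caught at build time instead of
     * silently leaking a pinned-page reference.
     */
    static inline void put_user_page(struct user_page upage)
    {
            put_page(upage.page);
    }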
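
[By contrast, the conversion John describes (s/put_page/put_user_page/ at
get_user_pages() call sites) just swaps the release call. A minimal sketch,
assuming the placeholder routines simply forward to put_page() for now; the
exact signatures in the merged patch may differ:]

    /* Placeholder helpers, assumed to forward to put_page() for now. */
    static inline void put_user_page(struct page *page)
    {
            put_page(page);
    }

    static inline void put_user_pages(struct page **pages, unsigned long npages)
    {
            unsigned long i;

            for (i = 0; i < npages; i++)
                    put_user_page(pages[i]);
    }

    /* A call-site conversion then looks like: */

            /* before */
            for (i = 0; i < npages; i++)
                    put_page(pages[i]);

            /* after */
            put_user_pages(pages, npages);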