Subject: Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions
From: John Hubbard
To: Jason Gunthorpe, Jan Kara
CC: Andrew Morton, Matthew Wilcox, Michal Hocko, Christopher Lameter,
    Dan Williams, LKML, linux-rdma, Al Viro, Jerome Glisse,
    Christoph Hellwig, Ralph Campbell
Date: Thu, 11 Oct 2018 20:53:34 -0700
In-Reply-To: <97e89e08-5b94-240a-56e9-ece2b91f6dbc@nvidia.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 10/11/18 6:23 PM, John Hubbard wrote:
> On 10/11/18 6:20 AM, Jason Gunthorpe wrote:
>> On Thu, Oct 11, 2018 at 10:49:29AM +0200, Jan Kara wrote:
>>
>>>> This is a real worry.
>>>> If someone uses a mistaken put_page() then how
>>>> will that bug manifest at runtime? Under what set of circumstances
>>>> will the kernel trigger the bug?
>>>
>>> At runtime such a bug will manifest as a page that can never be evicted
>>> from memory. We could warn in put_page() if the page reference count drops
>>> below the bare minimum for the given user pin count, which would catch
>>> some issues, but it won't be 100% reliable. So at this point I'm leaning
>>> more towards making get_user_pages() return a different type than just
>>> struct page *, to make it much harder for the refcount to go wrong...
>>
>> At least for the infiniband code being used as an example here, we take
>> the struct page from get_user_pages(), then stick it in an sgl, and at
>> put_page() time we get the page back out of the sgl via sg_page().
>>
>> So type safety will not help this case... I wonder how many other
>> users are similar? I think this is a pretty reasonable flow for DMA
>> with user pages.
>>
>
> That is true. The infiniband code, fortunately, never mixes the two page
> types into the same pool (or sg list), so it's actually an easier example
> than some other subsystems. But, yes, type safety doesn't help there. I can
> take a moment to look around at the other areas, to quantify how much a
> type safety change might help.
>
> Back to page flags again, out of desperation:
>
> How much do we know about the page types that all of these subsystems
> use? In other words, can we, for example, use bit 1 of page->lru.next
> (see [1] for context) as the "dma-pinned" page flag, while tracking pages
> within parts of the kernel that call a mix of alloc_pages, get_user_pages,
> and other allocators? In order for that to work, page->index, page->private,
> and bit 1 of page->mapping must not be used. I doubt that this is always
> going to hold, but... does it?
>

Oops, pardon me, please ignore that nonsense about page->index and
page->private and page->mapping; those are actually fine (I was seeing
"union" where "struct" was written -- too much staring at this code).

So actually, I think we can just use bit 1 in page->lru.next to sort out
which pages are dma-pinned, in the calling code, just like we're going to
do in writeback situations.

This should also allow the run-time checking that Andrew was hoping for:

    put_user_page(): assert that the page *is* dma-pinned
    put_page():      assert that the page is *not* dma-pinned

...both of which depend on that bit being, essentially, available as a sort
of general page flag. And in fact, if it's not, then the whole approach is
dead anyway.

Am I missing anything? This avoids the need to change the get_user_pages
interface.

thanks,
--
John Hubbard
NVIDIA