Date: Wed, 2 Dec 2020 11:14:26 +0100
From: Christoph Hellwig
To: Ralph Campbell
Cc: Christoph Hellwig, linux-mm@kvack.org, nouveau@lists.freedesktop.org,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
    Jerome Glisse, John Hubbard, Alistair Popple, Jason Gunthorpe,
    Bharata B Rao, Zi Yan, Kirill A. Shutemov, Yang Shi, Ben Skeggs,
    Shuah Khan, Andrew Morton, Logan Gunthorpe, Dan Williams,
    linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v3 3/6] mm: support THP migration to device private memory
Message-ID: <20201202101426.GC7597@lst.de>
References: <20201106005147.20113-1-rcampbell@nvidia.com>
 <20201106005147.20113-4-rcampbell@nvidia.com>
 <20201106080322.GE31341@lst.de>
 <20201109091415.GC28918@lst.de>

[adding a few of the usual suspects]

On Wed, Nov 11, 2020 at 03:38:42PM -0800, Ralph Campbell wrote:
> There are 4 types of ZONE_DEVICE struct pages: MEMORY_DEVICE_PRIVATE,
> MEMORY_DEVICE_FS_DAX, MEMORY_DEVICE_GENERIC, and MEMORY_DEVICE_PCI_P2PDMA.
>
> Currently, memremap_pages() allocates struct pages for a physical address
> range with a page_ref_count(page) of one and increments the pgmap->ref
> per-CPU reference count by the number of pages created, since each
> ZONE_DEVICE struct page has a pointer to the pgmap.
>
> The struct pages are not freed until memunmap_pages() is called, which
> calls put_page(), which calls put_dev_pagemap(), which releases a
> reference to pgmap->ref. memunmap_pages() blocks waiting for the
> pgmap->ref reference count to reach zero. As far as I can tell, the
> put_page() in memunmap_pages() has to be the *last* put_page() (see
> MEMORY_DEVICE_PCI_P2PDMA).
> My RFC [1] breaks this put_page() -> put_dev_pagemap() connection so that
> the struct page reference count can go to zero and back to non-zero
> without changing the pgmap->ref reference count.
>
> Q1: Is that safe? Is there some code that depends on put_page() dropping
> the pgmap->ref reference count as part of memunmap_pages()?
> My testing of [1] seems OK, but I'm sure there are lots of cases I didn't
> test.
It should be safe, but the audit you've done is important to make sure we
do not miss anything important.

> MEMORY_DEVICE_PCI_P2PDMA:
> Struct pages are created in pci_p2pdma_add_resource() and represent
> device memory accessible through PCIe BAR address space. Memory is
> allocated with pci_alloc_p2pmem() based on a byte length, but the
> gen_pool_alloc_owner() call will allocate memory in a minimum of
> PAGE_SIZE units.
> Reference counting is +1 per *allocation* on the pgmap->ref reference
> count. Note that this is not +1 per page, which is what put_page()
> expects. So currently a get_page()/put_page() pair works OK because the
> page reference count only goes 1->2 and 2->1. If it went to zero, the
> pgmap->ref reference count would be incorrect if the allocation size was
> greater than one page.
>
> I see pci_alloc_p2pmem() is called by nvme_alloc_sq_cmds() and
> pci_p2pmem_alloc_sgl() to create a command queue and a struct
> scatterlist *. It looks like sg_page(sg) returns the ZONE_DEVICE struct
> page of the scatterlist. There are a huge number of places sg_page() is
> called, so it is hard to tell whether or not get_page()/put_page() is
> ever called on MEMORY_DEVICE_PCI_P2PDMA pages.

Nothing should call get_page/put_page on them, as they are not treated as
refcountable memory. More importantly, nothing is allowed to keep a
reference longer than the duration of the I/O.

> pci_p2pmem_virt_to_bus() will return the physical address, and I guess
> pfn_to_page(physaddr >> PAGE_SHIFT) could return the struct page.
>
> Since there is a clear allocation/free, pci_alloc_p2pmem() can probably
> be modified to increment/decrement the MEMORY_DEVICE_PCI_P2PDMA struct
> page reference count. Or maybe just leave it at one like it is now.

And yes, doing that is probably a sensible safeguard.

> MEMORY_DEVICE_FS_DAX:
> Struct pages are created in pmem_attach_disk() and virtio_fs_setup_dax()
> with an initial reference count of one.
> The problem I see is that there are 3 states that are important:
> a) memory is free and not allocated to any file (page_ref_count() == 0).
> b) memory is allocated to a file and in the page cache
> (page_ref_count() == 1).
> c) some gup() or I/O has a reference even after calling
> unmap_mapping_pages() (page_ref_count() > 1). ext4_break_layouts()
> basically waits until page_ref_count() == 1, with put_page() calling
> wake_up_var(&page->_refcount) to wake up ext4_break_layouts().
> The current code doesn't seem to distinguish (a) and (b). If we want to
> use the 0->1 reference count transition to signal (c), then the page
> cache would have to hold entries with a page_ref_count() == 0, which
> doesn't match the general page cache model.

I think the sensible model here is to grab a reference when the page is
added to the page cache. That is exactly how normal system memory pages
work.