Date: Wed, 2 Dec 2020 11:08:54 +0100
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Ralph Campbell, Christoph Hellwig, linux-mm@kvack.org,
 nouveau@lists.freedesktop.org, linux-kselftest@vger.kernel.org,
 linux-kernel@vger.kernel.org, Jerome Glisse, John Hubbard,
 Alistair Popple, Bharata B Rao, Zi Yan, "Kirill A. Shutemov",
 Yang Shi, Ben Skeggs, Shuah Khan, Andrew Morton, Roger Pau Monne
Subject: Re: [PATCH v3 3/6] mm: support THP migration to device private memory
Message-ID: <20201202100854.GB7597@lst.de>
References: <20201106005147.20113-1-rcampbell@nvidia.com>
 <20201106005147.20113-4-rcampbell@nvidia.com>
 <20201106080322.GE31341@lst.de>
 <20201109091415.GC28918@lst.de>
 <20201120200133.GH917484@nvidia.com>
In-Reply-To: <20201120200133.GH917484@nvidia.com>

On Fri, Nov 20, 2020 at 04:01:33PM -0400, Jason Gunthorpe wrote:
> On Wed, Nov 11, 2020 at 03:38:42PM -0800, Ralph Campbell wrote:
>
> > MEMORY_DEVICE_GENERIC:
> > Struct pages are created in dev_dax_probe() and represent non-volatile
> > memory.  The device can be mmap()'ed, which calls dax_mmap(), which
> > sets vma->vm_flags |= VM_HUGEPAGE.
> > A CPU page fault will result in a PTE-, PMD-, or PUD-sized page (but
> > not compound) being inserted by vmf_insert_mixed(), which will call
> > either insert_pfn() or insert_page().
> > Neither insert_pfn() nor insert_page() increments the page reference
> > count.
>
> But why was this done?  It seems very strange to put a pfn with a
> struct page into a VMA and then deliberately not take the refcount for
> the duration of that pfn being in the VMA.
>
> What prevents memunmap_pages() from progressing while VMAs still point
> at the memory?

Agreed.  Adding Roger, who added MEMORY_DEVICE_GENERIC and is its only
user.

> > I think just leaving the page reference count at one is better than
> > trying to use the mmu_interval_notifier or changing vmf_insert_mixed()
> > and invalidations of pfn_t_devmap(pfn) to adjust the page reference
> > count.
>
> Why so?  The entire point of getting struct pages for this stuff was
> to be able to follow the struct page flow.  I never did learn a reason
> why there is devmap stuff all over the place in the page table code...

Exactly.
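
To make the discussion concrete, here is roughly what the PTE-sized
dev_dax fault path being described looks like.  This is a simplified
sketch from memory of drivers/dax/device.c, not the literal kernel
code: error handling, alignment checks and the PMD/PUD variants are
elided, and dax_pgoff_to_phys() stands in for the driver's
offset-to-physical translation.

#include <linux/mm.h>
#include <linux/pfn_t.h>

/* simplified sketch, not the actual __dev_dax_pte_fault() */
static vm_fault_t dev_dax_pte_fault_sketch(struct dev_dax *dev_dax,
                                           struct vm_fault *vmf)
{
        phys_addr_t phys;
        pfn_t pfn;

        /* translate the faulting offset to a device pfn */
        phys = dax_pgoff_to_phys(dev_dax, vmf->pgoff, PAGE_SIZE);
        pfn = phys_to_pfn_t(phys, PFN_DEV | PFN_MAP);

        /*
         * pfn_t_devmap(pfn) is true here, so this takes the
         * insert_pfn() path inside vmf_insert_mixed(): the pte is
         * written directly and, as noted above, no page reference is
         * taken, so the mapping itself does nothing to hold off
         * memunmap_pages().
         */
        return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
}

In other words nothing in the fault path ties the lifetime of the
mapping to the struct page, which is exactly the memunmap_pages()
question raised above.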