Subject: Re: [PATCH V5 4/4] kvm: add a check if pfn is from NVDIMM pmem.
To: Dan Williams, Zhang Yi
Cc: KVM list, Linux Kernel Mailing List, linux-nvdimm, Paolo Bonzini, Dave Jiang, "Zhang, Yu C", Pankaj Gupta, Jan Kara, Christoph Hellwig, Linux MM, rkrcmar@redhat.com, Jérôme Glisse, "Zhang, Yi Z"
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat GmbH
Date: Wed, 19 Sep 2018 09:20:25 +0200

On 19.09.18 04:53, Dan Williams wrote:
> On Fri, Sep 7, 2018 at 2:25 AM Zhang Yi wrote:
>>
>> For device specific memory space, when we move these areas of pfns to
>> a memory zone, we set the page reserved flag at that time; some of
>> these are reserved for device MMIO, and some of these are not, such as
>> NVDIMM pmem.
>>
>> Now, we map these dev_dax or fs_dax pages to kvm for a DIMM/NVDIMM
>> backend. Since these pages are reserved, the check in
>> kvm_is_reserved_pfn() mistakes those pages for MMIO. Therefore, we
>> introduce 2 page map types, MEMORY_DEVICE_FS_DAX/MEMORY_DEVICE_DEV_DAX,
>> to identify that these pages are from NVDIMM pmem and let kvm treat
>> them as normal pages.
>>
>> Without this patch, many operations will be missed due to this
>> mistreatment of pmem pages. For example, a page may not get a chance
>> to be unpinned for the KVM guest (in kvm_release_pfn_clean) or to be
>> marked as dirty/accessed (in kvm_set_pfn_dirty/accessed), etc.
>>
>> Signed-off-by: Zhang Yi
>> Acked-by: Pankaj Gupta
>> ---
>>  virt/kvm/kvm_main.c | 16 ++++++++++++++--
>>  1 file changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index c44c406..9c49634 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -147,8 +147,20 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
>>
>>  bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
>>  {
>> -        if (pfn_valid(pfn))
>> -                return PageReserved(pfn_to_page(pfn));
>> +        struct page *page;
>> +
>> +        if (pfn_valid(pfn)) {
>> +                page = pfn_to_page(pfn);
>> +
>> +                /*
>> +                 * For device specific memory space, there is a case
>> +                 * which we need pass MEMORY_DEVICE_FS[DEV]_DAX pages
>> +                 * to kvm, these pages marked reserved flag as it is a
>> +                 * zone device memory, we need to identify these pages
>> +                 * and let kvm treat these as normal pages
>> +                 */
>> +                return PageReserved(page) && !is_dax_page(page);
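(For reference: the new check relies on an is_dax_page() helper introduced
by an earlier patch of this series, not shown here. A minimal sketch of what
such a helper could look like, assuming it simply keys off the two
dev_pagemap types named in the commit message above; the actual definition
lives in the earlier patch and may differ:

/*
 * Illustrative sketch only; the real helper is defined in an earlier
 * patch of this series.  The assumption is that it reports ZONE_DEVICE
 * pages whose dev_pagemap was registered as FS_DAX or DEV_DAX, so that
 * such pmem pages are not treated as reserved/MMIO by KVM.
 */
static inline bool is_dax_page(const struct page *page)
{
        return is_zone_device_page(page) &&
               (page->pgmap->type == MEMORY_DEVICE_FS_DAX ||
                page->pgmap->type == MEMORY_DEVICE_DEV_DAX);
}

With such a helper, kvm_is_reserved_pfn() keeps returning true for genuinely
reserved pages such as MMIO, while fs_dax/dev_dax pmem pages pass through as
normal memory.)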
> Should we consider just not setting PageReserved for
> devm_memremap_pages()? Perhaps kvm is not the only component making
> these assumptions about this flag?

I was asking the exact same question in v3 or so.

I was recently going through all PageReserved users, trying to clean up
and document how it is used.

PG_reserved used to be a marker "not available for the page allocator".
This is only partially true and not really helpful, I think. My current
understanding:

"
PG_reserved is set for special pages; struct pages of such pages should
in general not be touched except by their owner.

Pages marked as reserved include:
- Kernel image (including vDSO) and similar (e.g. BIOS, initrd)
- Pages allocated early during boot (bootmem, memblock)
- Zero pages
- Pages that have been associated with a zone but were not onlined
  (e.g. NVDIMM/pmem, online_page_callback used by XEN)
- Pages to exclude from the hibernation image (e.g. loaded kexec images)
- MCA (memory error) pages on ia64
- Offline pages

Some architectures don't allow ioremapping RAM pages that are not marked
as reserved. Allocated pages might have to be set reserved to allow for
that - if there is a good reason to enforce this.

Consequently, PG_reserved, as part of a user space table, might be the
indicator for the zero page, pmem or MMIO pages.
"

Swapping code does not care about PageReserved at all, as far as I
remember. This seems to be fine, as it only looks at the way pages have
been mapped into user space.

I don't really see a good reason to set pmem pages as reserved. One
question would be how/if to exclude them from the hibernation image.
But that could also be solved differently (we would have to double-check
how they are handled in the hibernation code).

A similar user of PageReserved to look at is
drivers/vfio/vfio_iommu_type1.c:is_invalid_reserved_pfn(). It will not
mark pages dirty if they are reserved, similar to the KVM code (see the
sketch at the end of this mail).

> Why is MEMORY_DEVICE_PUBLIC memory specifically excluded?
>
> This has less to do with "dax" pages and more to do with
> devm_memremap_pages() established ranges. P2PDMA is another producer
> of these pages. If either MEMORY_DEVICE_PUBLIC or P2PDMA pages can be
> used in these kvm paths, then I think this points to considering
> clearing the Reserved flag.
>
> That said, I haven't audited all the locations that test PageReserved().
>
> Sorry for not responding sooner; I was on extended leave.

-- 

Thanks,

David / dhildenb
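(For reference, a simplified sketch of the VFIO check mentioned above. This
is an approximation for illustration only; the in-tree implementation in
drivers/vfio/vfio_iommu_type1.c may differ in detail, e.g. in how it handles
compound pages:

/*
 * Simplified sketch of the VFIO helper referenced above.  The decision it
 * makes is the same as in the KVM path: a pfn backed by a PG_reserved page
 * is treated as "invalid or reserved" and is therefore never marked dirty.
 */
static bool is_invalid_reserved_pfn(unsigned long pfn)
{
        if (pfn_valid(pfn))
                return PageReserved(pfn_to_page(pfn));

        /* No struct page at all: treat as invalid/reserved. */
        return true;
}

So reserved pmem pages would be skipped for dirty tracking there as well,
which is why clearing PG_reserved for devm_memremap_pages() ranges would
need an audit of such users, as noted above.)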