Received: by 2002:a25:8b91:0:0:0:0:0 with SMTP id j17csp4356133ybl; Mon, 3 Feb 2020 17:26:01 -0800 (PST) X-Google-Smtp-Source: APXvYqzbrO3OHTvtsrfjOFFgaE0KEaSwFwrFjSO14bqUoIRywtEtc6I4aaAWyPcGvvF+riI8hYMW X-Received: by 2002:a05:6830:1689:: with SMTP id k9mr20599958otr.311.1580779561550; Mon, 03 Feb 2020 17:26:01 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1580779561; cv=none; d=google.com; s=arc-20160816; b=moa7gHFEaEzDIPsR6TcNTmTF3jQXxKccRVs0h8CF8CyaATIIH/fv7MFraaaZz+mceC UeyP076SH0UhSEHLOmWNFHCcFtZTu7CXXGvKyj+3Gh/5KwEnOZ8dAilDu4FPt+xKBObV UGx2bUhuEekQm3kOA7CtNm6TTdJ0/cm4qwxHhOdPkijBAjlLQJMA4OEjqjVM2JSR5pi7 EjMKlL0uE2d5Hgqr18E0gSyASzRx5Ts+NkXWKK/HzygmXtehqeVhYsqoyxM4+ElOoVqd CTRqmL4jUCMAgReqixDb6m0WngTED8KhZ39JqO3UeAodw3FIcwHVGqGakhXAhS2eWpLt 9N6A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:cc:to:subject:message-id:date:from :in-reply-to:references:mime-version:dkim-signature; bh=zaDofTomGFH7niMxAvlM4SlRadN36KR0v+tIdhJo7+Q=; b=0w3qsUEhZK82RGjmAWNPZgk1+Gc5Hzyrr9tTpIgmx1CFqg7iXiGczRBzVSsvsmxPeM Nx4l21pKSYpIcNxj54iAQ5NHnrPQ5mxGX58fc8i/U4bsnTUbEpIsEPVLAsSQ3Dx6gtJL LJ+Z7DtYZfLMbZAKOTvJ7rrLu5LSk1Z4ovoRi9sfG3qUL/VhWmNGMtjmTgiArcsf8NsW QboqSLLj8Y+9hoOspS2hfh+2AX0MD9e4ciba50L+/MNHXr3zw7fuWMfvIroT3ZZ3wD9o gM/ioDZQp3RqwHUouy396R2aa8O4XIY2tTwUSiIp4Ttx+jLUELKroKj/hO1gdSjF9wYQ GUxQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel-com.20150623.gappssmtp.com header.s=20150623 header.b=IsfOtT9+; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id w4si11029448otl.214.2020.02.03.17.25.49; Mon, 03 Feb 2020 17:26:01 -0800 (PST) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@intel-com.20150623.gappssmtp.com header.s=20150623 header.b=IsfOtT9+; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727063AbgBDBYe (ORCPT + 99 others); Mon, 3 Feb 2020 20:24:34 -0500 Received: from mail-oi1-f194.google.com ([209.85.167.194]:34366 "EHLO mail-oi1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726369AbgBDBYd (ORCPT ); Mon, 3 Feb 2020 20:24:33 -0500 Received: by mail-oi1-f194.google.com with SMTP id l136so16830452oig.1 for ; Mon, 03 Feb 2020 17:24:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel-com.20150623.gappssmtp.com; s=20150623; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=zaDofTomGFH7niMxAvlM4SlRadN36KR0v+tIdhJo7+Q=; b=IsfOtT9+NHHnq/sKg7+EGZj8H1jckeyOm5iMgQuY93l7vwexXpoM0gI8kVp2DFyCVP aBgE/cTY6vrnR2MaC5QILyfCNFv9AIq/986DLz/syRYLB9a5SVa/AeVxohJSnv3e3bvV dPx+gMnFD39YXULqYcXGvEV8+eDltTXa3qQgwvBNNzNRz8er2J2JmL/h3JfA8fRHDblz 6kt72a040EluXNLpv56fFzhkVmpekmFUDV/xB/mk+ay9eZU09EBifqzI493jIszzyxk5 5iQbeJHlA2IyrtQHWbudM6Vl1M4ChcY2PNFkV+yfOYcGwYHEe8lt+Bd2oHpXUW3GQo8e EnqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=zaDofTomGFH7niMxAvlM4SlRadN36KR0v+tIdhJo7+Q=; b=CZltCBj6M7wWqSii+K91xvqljU5+NE80Y839bVvzLowy66O1B1+s+RQDxrgsfRGyHk SnI5TyZrYwHq+gzk2z7BQUkIdjWOqo2zXAtDGrYRd7xlX1liqcFTN8JQmsYrdkgeCyxa 32I6Bil35m4xe3QUbSqo082nJ9UUaUfN9YQhuDW3Mk8RN4tU+/Mzbdo2y15yBjRWGVdB RWJ9N7yoG2aXQZnY+mbK1PuujhX6XAetnZCpHx2nmfkrqh4N97sGw7/vcU1ae7e/GSQZ 7E1n+BuTJ88uDRQgc3bbFxSRa79/2KNvaFgk5EIWk0blBOFhNYf+TZkccDZlIUzA6L9J 7oUw== X-Gm-Message-State: APjAAAXAIhU4JX0THeItyN1IuUPE6IBkHIrtBZN/X8JGRvCZ68UDLqwC tM3rIt3/19Y/EFHtGJm6bHM7lS7cRYkOv0GUIWNN8w== X-Received: by 2002:aca:aa0e:: with SMTP id t14mr1557357oie.149.1580779471936; Mon, 03 Feb 2020 17:24:31 -0800 (PST) MIME-Version: 1.0 References: <20200110190313.17144-1-joao.m.martins@oracle.com> In-Reply-To: <20200110190313.17144-1-joao.m.martins@oracle.com> From: Dan Williams Date: Mon, 3 Feb 2020 17:24:20 -0800 Message-ID: Subject: Re: [PATCH RFC 00/10] device-dax: Support devices without PFN metadata To: Joao Martins Cc: linux-nvdimm , Vishal Verma , Dave Jiang , Ira Weiny , Alex Williamson , Cornelia Huck , KVM list , Andrew Morton , Linux MM , Linux Kernel Mailing List , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H . Peter Anvin" , X86 ML , Liran Alon , Nikita Leshenko , Barret Rhoden , Boris Ostrovsky , Matthew Wilcox , Konrad Rzeszutek Wilk Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jan 10, 2020 at 11:06 AM Joao Martins wrote: > > Hey, Hi Joao, > > Presented herewith a small series which allows device-dax to work without > struct page to be used to back KVM guests memory. It's an RFC, and there's > still some items we're looking at (see TODO below); So it turns out I already have some patches in flight that address discontiguous allocation item. Here's a WIP branch that I'll be sending out after the merge window closes. https://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git/log/?h=libnvdimm-pending > but wondering if folks > would be OK carving some time out of their busy schedules to provide feedback > direction-wise on this work. ...apologies I did not get to this sooner. Please feel free to ping me after a week if you're awaiting comment on anything in the nvdimm or device-dax area. > In virtualized environments (specially those with no kernel-backed PV > interfaces, and just SR-IOV), memory is largelly assigned to guests: either > persistent with NVDIMMs or volatile for regular RAM. The kernel > (hypervisor) tracks it with 'struct page' (64b) for each 4K page. Overall > we're spending 16GB for each 1Tb of host memory tracked that the kernel won't > need which could instead be used to create other guests. One of motivations of > this series is to then get that memory used for 'struct page', when it is meant > to solely be used by userspace. Do you mean reclaim it for the guest to use for 'struct page' capacity since the host hypervisor has reduced need for it? > This is also useful for the case of memory > backing guests virtual NVDIMMs. The other neat side effect is that the > hypervisor has no virtual mapping of the guest and hence code gadgets (if > found) are limited in their effectiveness. You mean just the direct-map? qemu still gets a valid virtual address, or are you proposing it's not mapped there either? > It is expected that a smaller (instead of total) amount of host memory is > defined for the kernel (with mem=X or memmap=X!Y). For KVM userspace VMM (e.g. > QEMU), the main thing that is needed is a device which open + mmap + close with > a certain alignment (4K, 2M, 1G). That made us look at device-dax which does > just that and so the work comprised here was improving what's there and the > interfaces it uses. In general I think this "'struct page'-less device-dax" capability makes sense, why suffer 1.6% capacity loss when that memory is going unused? My main concerns are: 1/ Use of memmap=nn!ss when none of the persistent memory infrastructure is needed. 2/ Auditing the new usage of VM_PFNMAP and what that means for memory-error handling and applications that expect 'struct page' services to be present. 3/ "page-less" dreams have been dashed on the rocks in the past. The slow drip of missing features (ptrace(), direct-I/O, fork()...) is why the kernel now requires them by default for dax. For 1/ I have a proposal, for 2/ I need to dig in to what you have here, but maybe I can trade you some review on the discontiguous allocation patches. For 3/ can you say a bit more about why the overhead is intolerable? The proposal for 1/ is please let's not build more on top of memmap=nn!ss. It's fragile because there's no facility to validate that it is correct, the assumption is that the user overriding the memmap knows to pick address that don't collide and have real memory backing. Practice has shown that people get it wrong often enough that we need something better. It's also confusing that applications start seeing "Persistent Memory" in /proc/iomem when it's only there to shunt memory over to device-dax. The alternative is "efi_fake_mem=nn@ss:0x40000" and the related 'soft-reservation' enabling that bypasses the persistent memory enabling and assigns memory to device-dax directly. EFI attribute override is safer because this attribute is only honored when applied to existing EFI conventional memory. Have a look at that branch just be warned it's not polished, or just wait for me to send them out with a proper cover letter, and I'll take a look at what you have below. > > The series is divided as follows: > > * Patch 1 , 3: Preparatory work for patch 7 for adding support for > vmf_insert_{pmd,pud} with dax pfn flags PFN_DEV|PFN_SPECIAL > > * Patch 2 , 4: Preparatory work for patch 7 for adding support for > follow_pfn() to work with 2M/1G huge pages, which is > what KVM uses for VM_PFNMAP. > > * Patch 5 - 7: One bugfix and device-dax support for PFN_DEV|PFN_SPECIAL, > which encompasses mainly dealing with the lack of devmap, > and creating a VM_PFNMAP vma. > > * Patch 8: PMEM support for no PFN metadata only for device-dax namespaces. > At the very end of the cover letter (after scissors mark), > there's a patch for ndctl to be able to create namespaces > with '--mode devdax --map none'. > > * Patch 9: Let VFIO handle VM_PFNMAP without relying on vm_pgoff being > a PFN. > > * Patch 10: The actual end consumer example for RAM case. The patch just adds a > label storage area which consequently allows namespaces to be > created. We picked PMEM legacy for starters. > > Thoughts, coments appreciated. > Joao > > P.S. As an example to try this out: > > 1) add 'memmap=48G!16G' to the kernel command line, on a host with 64G, > and kernel has 16G. > > 2) create a devdax namespace with 1G hugepages: > > $ ndctl create-namespace --verbose --mode devdax --map none --size 32G --align 1G -r 0 > { > "dev":"namespace0.0", > "mode":"devdax", > "map":"none", > "size":"32.00 GiB (34.36 GB)", > "uuid":"dfdd05cd-2611-46ac-8bcd-10b6194f32d4", > "daxregion":{ > "id":0, > "size":"32.00 GiB (34.36 GB)", > "align":1073741824, > "devices":[ > { > "chardev":"dax0.0", > "size":"32.00 GiB (34.36 GB)", > "target_node":0, > "mode":"devdax" > } > ] > }, > "align":1073741824 > } > > 3) Add this to your qemu params: > -m 32G > -object memory-backend-file,id=mem,size=32G,mem-path=/dev/dax0.0,share=on,align=1G > -numa node,memdev=mem > > TODO: > > * Discontiguous regions/namespaces: The work above is limited to max > contiguous extent, coming from nvdimm dpa allocation heuristics -- which I take > is because of what specs allow for persistent namespaces. But for volatile RAM > case we would need handling of discontiguous extents (hence a region would represent > more than a resource) to be less bound to how guests are placed on the system. > I played around with multi-resource for device-dax, but I'm wondering about > UABI: 1) whether nvdimm DPA allocation heuristics should be relaxed for RAM > case (under certain nvdimm region bits); or if 2) device-dax would have it's > own separate UABI to be used by daxctl (which would be also useful for hmem > devices?). > > * MCE handling: For contiguous regions vm_pgoff could be set to the pfn in > device-dax, which would allow collect_procs() to find the processes solely based > on the PFN. But for discontiguous namespaces, not sure if this would work; perhaps > looking at the dax-region pfn range for each DAX vma. You mean, make the memory error handling device-dax aware? > > * NUMA: For now excluded setting the target_node; while these two patches > are being worked on[1][2]. > > [1] https://lore.kernel.org/lkml/157401276776.43284.12396353118982684546.stgit@dwillia2-desk3.amr.corp.intel.com/ > [2] https://lore.kernel.org/lkml/157401277293.43284.3805106435228534675.stgit@dwillia2-desk3.amr.corp.intel.com/ I'll ping x86 folks again after the merge window. I expect they have just not had time to ack them yet.