From: Dan Williams
Date: Wed, 19 Aug 2020 18:53:57 -0700
Subject: Re: [PATCH v4 00/23] device-dax: Support sub-dividing soft-reserved ranges
To: David Hildenbrand
Cc: Andrew Morton, Ira Weiny, Ard Biesheuvel, Mike Rapoport, Borislav Petkov,
    Vishal Verma, David Airlie, Will Deacon, Catalin Marinas, Joao Martins,
    Tom Lendacky, Dave Jiang, "Rafael J. Wysocki", Jonathan Cameron, Wei Yang,
    X86 ML, "H. Peter Anvin", Thomas Gleixner, Greg Kroah-Hartman,
    Pavel Tatashin, Peter Zijlstra, Ben Skeggs, Benjamin Herrenschmidt,
    Jason Gunthorpe, Jia He, Ingo Molnar, Dave Hansen, Paul Mackerras,
    Brice Goglin, Jeff Moyer, Michael Ellerman, Daniel Vetter,
    Andy Lutomirski, Linux MM, linux-nvdimm, Linux Kernel Mailing List,
    Linux ACPI, Mailing list - DRI developers
X-Mailing-List: linux-kernel@vger.kernel.org
On Mon, Aug 3, 2020 at 12:48 AM David Hildenbrand wrote:
>
> [...]
>
> > Well, no v5.8-rc8 to line this up for v5.9, so next best is early
> > integration into -mm before other collisions develop.
> >
> > Chatted with Justin offline and it currently appears that the missing
> > NUMA information is the fault of the platform firmware failing to
> > populate all the necessary NUMA data in the NFIT.
>
> I'm planning on looking at some bits of this series this week, but some
> questions upfront ...
>
> >
> > ---
> > Cover:
> >
> > The device-dax facility allows an address range to be directly mapped
> > through a chardev, or optionally hotplugged to the core kernel page
> > allocator as System-RAM. It is the mechanism for converting persistent
> > memory (pmem) into another volatile memory pool, i.e. the current
> > Memory Tiering hot topic on linux-mm.
> >
> > In the case of pmem the nvdimm-namespace-label mechanism can sub-divide
> > it, but that labeling mechanism is not available / applicable to
> > soft-reserved ("EFI specific purpose") memory [3]. This series provides
> > a sysfs mechanism for the daxctl utility to enable provisioning of
> > volatile soft-reserved memory ranges.
> >
> > The motivations for this facility are:
> >
> > 1/ Allow performance-differentiated memory ranges to be split between
> > kernel-managed and directly-accessed use cases.
> >
> > 2/ Allow physical memory to be provisioned along performance-relevant
> > address boundaries. For example, divide a memory-side cache [4] along
> > cache-color boundaries.
> >
> > 3/ Parcel out soft-reserved memory to VMs using device-dax as a
> > security / permissions boundary [5]. Specifically, I have seen people
> > (ab)using memmap=nn!ss (mark System-RAM as Persistent Memory) just to
> > get the device-dax interface on custom address ranges. A follow-on for
> > the VM use case is to teach device-dax to dynamically allocate
> > 'struct page' at runtime to reduce the duplication of 'struct page'
> > space in both the guest and the host kernel for the same physical
> > pages.
>
> I think I am missing some important pieces. Bear with me.

No worries, also bear with me, I'm going to be offline intermittently
until at least mid-September. Hopefully Joao and/or Vishal can jump in
on this discussion.

> 1. On x86-64, e820 indicates "soft-reserved" memory. This memory is not
> automatically used in the buddy during boot, but remains untouched
> (similar to pmem). But as it involves ACPI as well, it could also be
> used on arm64 (-e820), correct?

Correct, arm64 also gets the EFI support for enumerating memory this
way. However, I would clarify that whether soft-reserved is given to
the buddy allocator by default or not is the kernel's policy choice;
"buddy-by-default" is ok and is what will happen anyway with older
kernels on platforms that enumerate a memory range this way.
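For illustration, the enumeration really is just the EFI "specific
purpose" attribute on an otherwise conventional-memory range. A rough,
self-contained sketch of that policy check (not the kernel's actual
code; the constants match the UEFI spec / include/linux/efi.h, the
struct is a hypothetical stand-in for efi_memory_desc_t):

#include <stdbool.h>
#include <stdint.h>

/* Constants as defined by the UEFI spec / include/linux/efi.h */
#define EFI_CONVENTIONAL_MEMORY	7
#define EFI_MEMORY_SP		0x0000000000040000ULL	/* "specific purpose" */

struct efi_mem_desc {		/* hypothetical stand-in for efi_memory_desc_t */
	uint32_t type;
	uint64_t phys_addr;
	uint64_t num_pages;
	uint64_t attribute;
};

/*
 * A conventional-memory range carrying the SP attribute is what ends
 * up reported as "Soft Reserved" rather than being handed to the buddy
 * allocator by default (illustration only, not the kernel's code).
 */
static bool range_is_soft_reserved(const struct efi_mem_desc *md)
{
	return md->type == EFI_CONVENTIONAL_MEMORY &&
	       (md->attribute & EFI_MEMORY_SP);
}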
> 2. Soft-reserved memory is volatile RAM with differing performance
> characteristics ("performance differentiated memory"). What would be
> examples of such memory?

Likely the most prominent one that drove the creation of the "EFI
Specific Purpose" attribute bit is high-bandwidth memory. One concrete
example of that was a platform called Knights Landing [1] that ended up
shipping firmware that lied to the OS about the latency characteristics
of the memory in order to steer the OS away from allocating from that
range by default. With the EFI attribute, firmware performance tables
can tell the truth about the performance characteristics of the memory
range *and* indicate that the OS not use it for general purpose
allocations by default.

[1]: https://software.intel.com/content/www/us/en/develop/blogs/an-intro-to-mcdram-high-bandwidth-memory-on-knights-landing.html

> Like, memory that is faster than RAM (scratch
> pad), or slower (pmem)? Or both? :)

Both, but note that PMEM is already hard-reserved by default.
Soft-reserved is about a memory range that, for example, an
administrator may want to reserve 100% for a weather simulation, where
if even a small amount of memory were stolen for the page cache the
application might not meet its performance targets. It could also be a
memory range that is so slow that only applications with higher latency
tolerances would be prepared to consume it.

In other words, soft-reserved memory can be used to indicate memory
that is either too precious, or too slow, for general purpose OS
allocations.

> Is it a valid use case to use pmem
> in a hypervisor to back this memory?

Depends on the pmem. That performance capability is indicated by the
ACPI HMAT, not the EFI soft-reserved designation.

> 3. There seem to be use cases where "soft-reserved" memory is used via
> DAX. What is an example use case? I assume it's *not* to treat it like
> PMEM but instead, e.g., to use it as a fast buffer inside applications
> or similar.

Right, in that weather-simulation example the application could just
mmap /dev/daxX.Y (see the sketch at the end of this mail) and never
worry about contending for the "fast memory" resource on the platform.
Alternatively, if that resource needs to be shared and/or
over-committed, then kernel memory-management services are needed and
that dax-device can be assigned to kmem.

> 4. There seem to be use cases where some part of "soft-reserved" memory
> is used via DAX, some other is given to the buddy. What is an example
> use case? Is this really necessary or only some theoretical use case?

It's as necessary as pmem namespace partitioning, or the inclusion of
dax-kmem upstream in the first place. In that kmem case the motivation
was that some users want a portion of pmem provisioned for storage and
some for volatile usage. The motivation is similar here: platform
firmware can only identify memory attributes on coarse boundaries,
finer-grained provisioning decisions are up to the administrator /
platform-owner, and the kernel is just a facilitator of that policy.

> 5. The "provisioned along performance relevant address boundaries." part
> is unclear to me. Can you give an example of what this would look like
> from user space? Like, split that memory in blocks of size X with
> alignment Y and give them to separate applications?

One example of platform address boundaries is the set of memory address
ranges that alias in a direct-mapped memory-side cache. In the
direct-mapped case, aliasing may repeat every N GB, where N is the
ratio of far-to-near memory ("near memory" == cache, "far memory" ==
backing memory). Also refer back to the background in the page
allocator shuffling patches [2]. With this partitioning mechanism you
could, for one example use case, assign different VMs to exclusive
colors in the memory-side cache.

[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e900a918b098
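To make the coloring idea concrete, here is a rough sketch of the
aliasing arithmetic (illustration only; near_size and nr_colors are
hypothetical parameters chosen by the administrator, not something the
kernel exposes under these names):

#include <stdint.h>

/*
 * Direct-mapped memory-side-cache aliasing: two far-memory addresses
 * contend for the same cache lines when they are congruent modulo the
 * near-memory (cache) size.  A "color" is a slice of that aliasing
 * period, so giving each VM an exclusive color keeps the VMs from
 * evicting each other's cache contents.
 */
static unsigned int cache_color(uint64_t phys_addr, uint64_t near_size,
				unsigned int nr_colors)
{
	uint64_t offset = phys_addr % near_size; /* position within one aliasing period */

	return offset / (near_size / nr_colors); /* which slice (color) that position hits */
}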
> 6. If you add such memory to the buddy, is there any way the system can
> differentiate it from other memory? E.g., via fake/other NUMA nodes?

NUMA node numbers are how performance-differentiated memory ranges are
enumerated. The expectation is that all distinct performance memory
targets have unique ACPI proximity domains, and unique Linux NUMA node
numbers as a result.

> Also, can you give examples of how kmem-added memory is represented in
> /proc/iomem for a) pmem and b) soft-reserved memory after this series
> (skimming over the patches, I think there is a change for pmem, right?)?

I don't expect a change. The only difference is the parent resource
will be marked "Soft Reserved" instead of "Persistent Memory".

> I am really wondering if it's the right approach to squeeze this into
> our pmem/nvdimm infrastructure just because it's easy to do. E.g., man
> "ndctl" - "ndctl - Manage "libnvdimm" subsystem devices (Non-volatile
> Memory)" speaks explicitly about non-volatile memory.

In fact it's not squeezed into PMEM infrastructure. dax-kmem and
device-dax are independent of PMEM. PMEM is one source of potential
device-dax instances; soft-reserved memory is another, orthogonal
source. This is why device-dax needs its own userspace, policy-directed
partitioning mechanism: there is no PMEM to store the configuration for
partitioned high-bandwidth memory. The userspace tooling for this
mechanism is targeted for a tool called daxctl that has no PMEM
dependencies. Look to Joao's use case that is using this infrastructure
independent of PMEM, with manual soft-reservations specified on the
kernel command-line.
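To close the loop on the direct-access path from the weather-simulation
example above, consuming a device-dax instance from an application
looks roughly like the sketch below. The device path and mapping length
are hypothetical, and the length/offset must respect the device's base
alignment (2MB is the typical default):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device-dax instance, e.g. partitioned out of a
	 * soft-reserved range by daxctl. */
	const char *path = "/dev/dax0.0";
	size_t len = 2UL << 20;	/* must be a multiple of the device alignment */

	int fd = open(path, O_RDWR);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* Device-dax mappings are MAP_SHARED mappings of the device
	 * itself; the pages come straight from the reserved range
	 * rather than from the page allocator. */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return EXIT_FAILURE;
	}

	memset(buf, 0, len);	/* application-private use of the "fast memory" */

	munmap(buf, len);
	close(fd);
	return EXIT_SUCCESS;
}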