Subject: [PATCH v9 00/12] mm: Sub-section memory hotplug support
From: Dan Williams
To: akpm@linux-foundation.org
Cc: David Hildenbrand, Mike Rapoport, Jane Chu, Michael Ellerman,
    Pavel Tatashin, Benjamin Herrenschmidt, Jonathan Corbet,
    Logan Gunthorpe, Paul Mackerras, Toshi Kani, Oscar Salvador,
    Jeff Moyer, Michal Hocko, Vlastimil Babka, stable@vger.kernel.org,
    Jérôme Glisse, linux-mm@kvack.org, linux-nvdimm@lists.01.org,
    linux-kernel@vger.kernel.org, osalvador@suse.de, mhocko@suse.com
Date: Wed, 05 Jun 2019 14:57:49 -0700
Message-ID: <155977186863.2443951.9036044808311959913.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

Changes since v8 [1]:

- Rebase on next-20190604 to incorporate the removal of the
  MHP_MEMBLOCK_API flag and other cleanups from David.

- Move the definition of subsection_mask_set() earlier into "mm/sparsemem:
  Add helpers track active portions of a section at boot" (Oscar)

- Clean up an unnecessary IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) check in
  section_deactivate() in response to a request (declined) to split the
  pure CONFIG_SPARSEMEM bits out of section_{de,}activate(). I submit
  that maintenance is less error prone, especially when modifying common
  logic, if the implementations remain unified. (Oscar)

- Clean up the sparse_add_section() vs sparse_index_init() return codes.
  (Oscar)

- Document ZONE_DEVICE and subsection semantics relative to
  CONFIG_SPARSEMEM_VMEMMAP in Documentation/vm/memory-model.rst. (Mike)

[1]: https://lore.kernel.org/lkml/155718596657.130019.17139634728875079809.stgit@dwillia2-desk3.amr.corp.intel.com/

---

The section is an arbitrary, convenient unit for memory hotplug.
'Section-size' units have bled into the user interface ('memblock'
sysfs) and cannot be changed without breaking existing userspace. The
section-size constraint, while mostly benign for typical memory hotplug,
has wreaked, and continues to wreak, havoc with 'device-memory' use
cases, persistent memory (pmem) in particular.

Recall that pmem uses devm_memremap_pages(), and subsequently
arch_add_memory(), to allocate a 'struct page' memmap for pmem. However,
it does not use the 'bottom half' of memory hotplug, i.e. it never marks
pmem pages online and never exposes the userspace memblock interface for
pmem. This leaves an opening to redress the section-size constraint.
To date, the libnvdimm subsystem has attempted to inject padding to
satisfy the internal constraints of arch_add_memory(). Beyond
complicating the code, leading to bugs [2], wasting memory, and limiting
configuration flexibility, the padding hack is broken when the platform
changes the physical memory alignment of pmem from one boot to the next.
Device failure (intermittent or permanent) and physical reconfiguration
are events that can cause the platform firmware to change the physical
placement of pmem on a subsequent boot, and device failure is an
everyday event in a data-center.

It turns out that sections are only a hard requirement of the
user-facing interface for memory hotplug, and with a bit more
infrastructure sub-section arch_add_memory() support can be added for
kernel-internal usages like devm_memremap_pages(). Here is an analysis
of the design assumptions in the current code and how they are addressed
in the new implementation:

Current design assumptions:

- Sections that describe boot memory (early sections) are never
  unplugged / removed.

- pfn_valid(), in the CONFIG_SPARSEMEM_VMEMMAP=y case, devolves to a
  valid_section() check.

- __add_pages() and helper routines assume all operations occur in
  PAGES_PER_SECTION units.

- The memblock sysfs interface only comprehends full sections.

New design assumptions:

- Sections are instrumented with a sub-section bitmask to track (on x86)
  individual 2MB sub-divisions of a 128MB section.

- Partially populated early sections can be extended with additional
  sub-sections, and those sub-sections can be removed with
  arch_remove_memory(). With this in place we no longer lose usable
  memory capacity to padding.

- pfn_valid() is updated to look deeper than valid_section() and also
  check the active sub-section mask. This indication is in the same
  cacheline as valid_section(), so the performance impact is expected to
  be negligible. So far the lkp robot has not reported any regressions.
- Outside of the core vmemmap population routines, which are replaced,
  other helper routines like shrink_{zone,pgdat}_span() are updated to
  handle the smaller granularity. Core memory hotplug routines that deal
  with online memory are not touched.

- The existing memblock sysfs user API guarantees / assumptions are not
  touched, since this capability is limited to !online,
  !memblock-sysfs-accessible sections.

Meanwhile, issue reports continue to roll in from users who do not
understand when and how the 128MB constraint will bite them. The current
implementation relies on being able to support at least one misaligned
namespace, but that immediately falls over on any moderately complex
namespace-creation attempt. Beyond the initial problem of 'System RAM'
colliding with pmem, and the unsolvable problem of physical alignment
changes, Linux is now being exposed to platforms that collide pmem
ranges with other pmem ranges by default [3]. In short,
devm_memremap_pages() has pushed the venerable section-size constraint
past the breaking point, and the simplicity of section-aligned
arch_add_memory() is no longer tenable.

These patches are exposed to the kbuild robot on my libnvdimm-pending
branch [4], and a preview of the unit test for this functionality is
available on the 'subsection-pending' branch of ndctl [5].
[2]: https://lore.kernel.org/r/155000671719.348031.2347363160141119237.stgit@dwillia2-desk3.amr.corp.intel.com
[3]: https://github.com/pmem/ndctl/issues/76
[4]: https://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git/log/?h=libnvdimm-pending
[5]: https://github.com/pmem/ndctl/commit/7c59b4867e1c

---

Dan Williams (12):
      mm/sparsemem: Introduce struct mem_section_usage
      mm/sparsemem: Add helpers track active portions of a section at boot
      mm/hotplug: Prepare shrink_{zone,pgdat}_span for sub-section removal
      mm/sparsemem: Convert kmalloc_section_memmap() to populate_section_memmap()
      mm/hotplug: Kill is_dev_zone() usage in __remove_pages()
      mm: Kill is_dev_zone() helper
      mm/sparsemem: Prepare for sub-section ranges
      mm/sparsemem: Support sub-section hotplug
      mm: Document ZONE_DEVICE memory-model implications
      mm/devm_memremap_pages: Enable sub-section remap
      libnvdimm/pfn: Fix fsdax-mode namespace info-block zero-fields
      libnvdimm/pfn: Stop padding pmem namespaces to section alignment

 Documentation/vm/memory-model.rst    |   39 ++++
 arch/powerpc/include/asm/sparsemem.h |    3 
 arch/x86/mm/init_64.c                |    4 
 drivers/nvdimm/dax_devs.c            |    2 
 drivers/nvdimm/pfn.h                 |   15 -
 drivers/nvdimm/pfn_devs.c            |   95 +++------
 include/linux/memory_hotplug.h       |    7 -
 include/linux/mm.h                   |    4 
 include/linux/mmzone.h               |   92 +++++++--
 kernel/memremap.c                    |   61 ++----
 mm/memory_hotplug.c                  |  171 +++++++++-------
 mm/page_alloc.c                      |   10 +
 mm/sparse-vmemmap.c                  |   21 +-
 mm/sparse.c                          |  359 +++++++++++++++++++++++-----------
 14 files changed, 534 insertions(+), 349 deletions(-)