Subject: [PATCH v3 3/6] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions
From: Dan Williams
To: linux-mm@kvack.org
Cc: Andrew Morton, Michal Hocko, David Hildenbrand, Oscar Salvador, vishal.l.verma@intel.com, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org
Date: Tue, 12 Jan 2021 23:35:33 -0800
Message-ID:
 <161052333339.1805594.262356571080399636.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <161052331545.1805594.2356512831689786960.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <161052331545.1805594.2356512831689786960.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

While pfn_to_online_page() is able to determine pfn_valid() at
subsection granularity it is not able to reliably determine if a given
pfn is also online if the section mixes ZONE_{NORMAL,MOVABLE} with
ZONE_DEVICE. This means that pfn_to_online_page() may return invalid
@page objects. For example, with a memory map like:

100000000-1fbffffff : System RAM
  142000000-143002e16 : Kernel code
  143200000-143713fff : Kernel rodata
  143800000-143b15b7f : Kernel data
  144227000-144ffffff : Kernel bss
1fc000000-2fbffffff : Persistent Memory (legacy)
  1fc000000-2fbffffff : namespace0.0

This command:

echo 0x1fc000000 > /sys/devices/system/memory/soft_offline_page

...succeeds when it should fail. When it succeeds it touches an
uninitialized page and may crash or cause other damage (see
dissolve_free_huge_page()).

While the memory map above is contrived via the memmap=ss!nn kernel
command line option, the collision happens in practice on shipping
platforms. The memory controller resources that decode spans of
physical address space are a limited resource. One technique
platform-firmware uses to conserve those resources is to share a
decoder across 2 devices to keep the address range contiguous.
Unfortunately the unit of operation of a decoder is 64MiB while the
Linux section size is 128MiB. This results in situations where, without
subsection hotplug, memory mappings with different lifetimes collide
into one object that can only express one lifetime.
Update move_pfn_range_to_zone() to flag (SECTION_TAINT_ZONE_DEVICE) a
section that mixes ZONE_DEVICE pfns with other online pfns. With
SECTION_TAINT_ZONE_DEVICE to delineate, pfn_to_online_page() can fall
back to a slow-path check for ZONE_DEVICE pfns in an online section. In
the fast path, online_section() for a full ZONE_DEVICE section returns
false.

Because the collision case is rare, and for simplicity, the
SECTION_TAINT_ZONE_DEVICE flag is never cleared once set.

Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
Cc: Andrew Morton
Reported-by: Michal Hocko
Reported-by: David Hildenbrand
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Signed-off-by: Dan Williams
---
 include/linux/mmzone.h |   22 +++++++++++++++-------
 mm/memory_hotplug.c    |   38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..0b5c44f730b4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1273,13 +1273,14 @@ extern size_t mem_section_usage_size(void);
  * which results in PFN_SECTION_SHIFT equal 6.
  * To sum it up, at least 6 bits are available.
  */
-#define SECTION_MARKED_PRESENT	(1UL<<0)
-#define SECTION_HAS_MEM_MAP	(1UL<<1)
-#define SECTION_IS_ONLINE	(1UL<<2)
-#define SECTION_IS_EARLY	(1UL<<3)
-#define SECTION_MAP_LAST_BIT	(1UL<<4)
-#define SECTION_MAP_MASK	(~(SECTION_MAP_LAST_BIT-1))
-#define SECTION_NID_SHIFT	3
+#define SECTION_MARKED_PRESENT		(1UL<<0)
+#define SECTION_HAS_MEM_MAP		(1UL<<1)
+#define SECTION_IS_ONLINE		(1UL<<2)
+#define SECTION_IS_EARLY		(1UL<<3)
+#define SECTION_TAINT_ZONE_DEVICE	(1UL<<4)
+#define SECTION_MAP_LAST_BIT		(1UL<<5)
+#define SECTION_MAP_MASK		(~(SECTION_MAP_LAST_BIT-1))
+#define SECTION_NID_SHIFT	3
 
 static inline struct page *__section_mem_map_addr(struct mem_section *section)
 {
@@ -1318,6 +1319,13 @@ static inline int online_section(struct mem_section *section)
 	return (section && (section->section_mem_map & SECTION_IS_ONLINE));
 }
 
+static inline int online_device_section(struct mem_section *section)
+{
+	unsigned long flags = SECTION_IS_ONLINE | SECTION_TAINT_ZONE_DEVICE;
+
+	return section && ((section->section_mem_map & flags) == flags);
+}
+
 static inline int online_section_nr(unsigned long nr)
 {
 	return online_section(__nr_to_section(nr));
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9f37f8a68da4..889d58523fa1 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -308,6 +308,7 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
 struct page *pfn_to_online_page(unsigned long pfn)
 {
 	unsigned long nr = pfn_to_section_nr(pfn);
+	struct dev_pagemap *pgmap;
 	struct mem_section *ms;
 
 	if (nr >= NR_MEM_SECTIONS)
@@ -328,6 +329,22 @@ struct page *pfn_to_online_page(unsigned long pfn)
 	if (!pfn_section_valid(ms, pfn))
 		return NULL;
 
+	if (!online_device_section(ms))
+		return pfn_to_page(pfn);
+
+	/*
+	 * Slowpath: when ZONE_DEVICE collides with
+	 * ZONE_{NORMAL,MOVABLE} within the same section some pfns in
+	 * the section may be 'offline' but 'valid'. Only
+	 * get_dev_pagemap() can determine sub-section online status.
+	 */
+	pgmap = get_dev_pagemap(pfn, NULL);
+	put_dev_pagemap(pgmap);
+
+	/* The presence of a pgmap indicates ZONE_DEVICE offline pfn */
+	if (pgmap)
+		return NULL;
+
 	return pfn_to_page(pfn);
 }
 EXPORT_SYMBOL_GPL(pfn_to_online_page);
@@ -710,6 +727,14 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
 	pgdat->node_spanned_pages = max(start_pfn + nr_pages, old_end_pfn) -
 					pgdat->node_start_pfn;
 }
+
+static void section_taint_zone_device(unsigned long pfn)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+
+	ms->section_mem_map |= SECTION_TAINT_ZONE_DEVICE;
+}
+
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
@@ -739,6 +764,19 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	resize_pgdat_range(pgdat, start_pfn, nr_pages);
 	pgdat_resize_unlock(pgdat, &flags);
 
+	/*
+	 * Subsection population requires care in pfn_to_online_page().
+	 * Set the taint to enable the slow path detection of
+	 * ZONE_DEVICE pages in an otherwise ZONE_{NORMAL,MOVABLE}
+	 * section.
+	 */
+	if (zone_idx(zone) == ZONE_DEVICE) {
+		if (!IS_ALIGNED(start_pfn, PAGES_PER_SECTION))
+			section_taint_zone_device(start_pfn);
+		if (!IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION))
+			section_taint_zone_device(start_pfn + nr_pages);
+	}
+
 	/*
 	 * TODO now we have a visible range of pages which are not associated
 	 * with their zone properly. Not nice but set_pfnblock_flags_mask