2018-09-20 22:24:42

by Alexander Duyck

Subject: [PATCH v4 0/5] Address issues slowing persistent memory initialization

This patch set is meant to be a v4 of my earlier patch set "Address issues
slowing memory init"[1], and a follow-up to my earlier patch set "Address
issues slowing persistent memory initialization"[2].

Excluding any gains seen from using the vm_debug option to disable page
init poisoning I see a total reduction in file-system init time of about
two and a half minutes, or 65%, for a system initializing btrfs on a 12TB
block of persistent memory split evenly over 4 NUMA nodes.

Since the last patch set I have reworked the first patch to provide a more
generic disable implementation that can be extended in the future.

I tweaked the commit message for the second patch slightly to reflect why
we might want to use a non-atomic __set_bit versus the atomic set_bit.

I have modified the third patch so that it can merge onto either the linux
git tree or the linux-next git tree. The patch set that Dan Williams has
outstanding may end up conflicting with this patch depending on the merge
order. If his patches are merged first, I believe the code I changed in
mm/hmm.c could be dropped entirely.

The fourth patch has been split into two patches focused more on the async
scheduling portion of the nvdimm code. The result is much cleaner than the
original approach: instead of having two threads running, we are now
getting the thread running where we wanted it to be.

The last change for all patches is that I have updated my email address to
[email protected] to reflect the fact that I have changed
teams within Intel. I will be trying to use that for correspondence going
forward instead of my gmail account.

[1]: https://lkml.org/lkml/2018/9/5/924
[2]: https://lkml.org/lkml/2018/9/11/10
[3]: https://lkml.org/lkml/2018/9/13/104

---

Alexander Duyck (5):
mm: Provide kernel parameter to allow disabling page init poisoning
mm: Create non-atomic version of SetPageReserved for init use
mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap
async: Add support for queueing on specific node
nvdimm: Schedule device registration on node local to the device


Documentation/admin-guide/kernel-parameters.txt | 12 +++
drivers/nvdimm/bus.c | 19 ++++
include/linux/async.h | 20 ++++-
include/linux/mm.h | 2
include/linux/page-flags.h | 9 ++
kernel/async.c | 36 ++++++--
kernel/memremap.c | 24 ++---
mm/debug.c | 46 ++++++++++
mm/hmm.c | 12 ++-
mm/memblock.c | 5 -
mm/page_alloc.c | 101 ++++++++++++++++++++++-
mm/sparse.c | 4 -
12 files changed, 243 insertions(+), 47 deletions(-)

--


2018-09-20 22:27:49

by Alexander Duyck

Subject: [PATCH v4 1/5] mm: Provide kernel parameter to allow disabling page init poisoning

On systems with a large amount of memory it can take a significant amount
of time to initialize all of the page structs with the PAGE_POISON_PATTERN
value. I have seen it take over 2 minutes to initialize a system with
over 12TB of RAM.

To work around the issue I had to disable CONFIG_DEBUG_VM, and the boot
time then returned to something much more reasonable, with the
arch_add_memory call completing in milliseconds rather than seconds.
However, in doing that I had to disable all of the other VM debugging on
the system as well.

To keep the rest of that debugging available on kernels that have
CONFIG_DEBUG_VM enabled while running on a system with a large amount of
memory, I have added a new kernel parameter named "vm_debug" that can be
set to "-" in order to disable page init poisoning.
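
For example, with the parser in this patch applied, booting with:

	vm_debug=-

disables all of the options the interface controls (currently just the
page init poisoning), while booting with "vm_debug=P", or "vm_debug" with
no arguments, leaves the poisoning enabled.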

Signed-off-by: Alexander Duyck <[email protected]>
---

v3: Switched from kernel config option to parameter
v4: Added comment to parameter handler to record when option is disabled
    Updated parameter description based on feedback from Michal Hocko
    Fixed GB vs TB typo in patch description
    Switched to vm_debug option similar to slub_debug

Documentation/admin-guide/kernel-parameters.txt | 12 ++++++
include/linux/page-flags.h | 8 ++++
mm/debug.c | 46 +++++++++++++++++++++++
mm/memblock.c | 5 +--
mm/sparse.c | 4 +-
5 files changed, 69 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index dfe3d7b99abf..ee257b5b584f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4811,6 +4811,18 @@
This is actually a boot loader parameter; the value is
passed to the kernel using a special protocol.

+ vm_debug[=options] [KNL] Available with CONFIG_DEBUG_VM=y.
+ May slow down system boot speed, especially when
+ enabled on systems with a large amount of memory.
+ All options are enabled by default, and this
+ interface is meant to allow for selectively
+ enabling or disabling specific virtual memory
+ debugging features.
+
+ Available options are:
+ P Enable page structure init time poisoning
+ - Disable all of the above options
+
vmalloc=nn[KMG] [KNL,BOOT] Forces the vmalloc area to have an exact
size of <nn>. This can be used to increase the
minimum size (128MB on x86). It can also be used to
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4d99504f6496..934f91ef3f54 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -163,6 +163,14 @@ static inline int PagePoisoned(const struct page *page)
return page->flags == PAGE_POISON_PATTERN;
}

+#ifdef CONFIG_DEBUG_VM
+void page_init_poison(struct page *page, size_t size);
+#else
+static inline void page_init_poison(struct page *page, size_t size)
+{
+}
+#endif
+
/*
* Page flags policies wrt compound pages
*
diff --git a/mm/debug.c b/mm/debug.c
index bd10aad8539a..cdacba12e09a 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -13,6 +13,7 @@
#include <trace/events/mmflags.h>
#include <linux/migrate.h>
#include <linux/page_owner.h>
+#include <linux/ctype.h>

#include "internal.h"

@@ -175,4 +176,49 @@ void dump_mm(const struct mm_struct *mm)
);
}

+static bool page_init_poisoning __read_mostly = true;
+
+static int __init setup_vm_debug(char *str)
+{
+ bool __page_init_poisoning = true;
+
+ /*
+ * Calling vm_debug with no arguments is equivalent to requesting
+ * to enable all debugging options we can control.
+ */
+ if (*str++ != '=' || !*str)
+ goto out;
+
+ __page_init_poisoning = false;
+ if (*str == '-')
+ goto out;
+
+ while (*str) {
+ switch (tolower(*str)) {
+ case 'p':
+ __page_init_poisoning = true;
+ break;
+ default:
+ pr_err("vm_debug option '%c' unknown. skipped\n",
+ *str);
+ }
+
+ str++;
+ }
+out:
+ if (page_init_poisoning && !__page_init_poisoning)
+ pr_warn("Page struct poisoning disabled by kernel command line option 'vm_debug'\n");
+
+ page_init_poisoning = __page_init_poisoning;
+
+ return 1;
+}
+__setup("vm_debug", setup_vm_debug);
+
+void page_init_poison(struct page *page, size_t size)
+{
+ if (page_init_poisoning)
+ memset(page, PAGE_POISON_PATTERN, size);
+}
+EXPORT_SYMBOL_GPL(page_init_poison);
#endif /* CONFIG_DEBUG_VM */
diff --git a/mm/memblock.c b/mm/memblock.c
index f7981098537b..b1017ec1b167 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1495,10 +1495,9 @@ void * __init memblock_virt_alloc_try_nid_raw(

ptr = memblock_virt_alloc_internal(size, align,
min_addr, max_addr, nid);
-#ifdef CONFIG_DEBUG_VM
if (ptr && size > 0)
- memset(ptr, PAGE_POISON_PATTERN, size);
-#endif
+ page_init_poison(ptr, size);
+
return ptr;
}

diff --git a/mm/sparse.c b/mm/sparse.c
index 10b07eea9a6e..67ad061f7fb8 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -696,13 +696,11 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
goto out;
}

-#ifdef CONFIG_DEBUG_VM
/*
* Poison uninitialized struct pages in order to catch invalid flags
* combinations.
*/
- memset(memmap, PAGE_POISON_PATTERN, sizeof(struct page) * PAGES_PER_SECTION);
-#endif
+ page_init_poison(memmap, sizeof(struct page) * PAGES_PER_SECTION);

section_mark_present(ms);
sparse_init_one_section(ms, section_nr, memmap, usemap);


2018-09-20 22:28:29

by Alexander Duyck

Subject: [PATCH v4 2/5] mm: Create non-atomic version of SetPageReserved for init use

It doesn't make much sense to use the atomic SetPageReserved at init time
when we are using memset to clear the memory and manipulating the page
flags via simple "&=" and "|=" operations in __init_single_page.

This patch adds a non-atomic version __SetPageReserved that can be used
during page init and shows about a 10% improvement in initialization times
on the systems I have available for testing. On those systems I saw
initialization times drop from around 35 seconds to around 32 seconds to
initialize a 3TB block of persistent memory. I believe the main advantage
of this is that it allows for more compiler optimization as the __set_bit
operation can be reordered whereas the atomic version cannot.

I tried adding a bit of documentation based on commit f1dd2cd13c4 ("mm,
memory_hotplug: do not associate hotadded memory to zones until online").

Ideally the reserved flag should be set earlier, since there is a brief
window where the page has been initialized via __init_single_page but the
PG_reserved flag has not yet been set. I'm leaving that for a future patch
set as it will require a more significant refactor.
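
As a rough illustration of the difference (simplified from the generic
bitops code, not necessarily what a given architecture does):

	/*
	 * __set_bit(): a plain read-modify-write. The compiler is free
	 * to reorder it and fold it into the surrounding "&=" and "|="
	 * flag updates performed by __init_single_page().
	 */
	static inline void __set_bit(long nr, volatile unsigned long *addr)
	{
		unsigned long mask = BIT_MASK(nr);
		unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

		*p |= mask;
	}

	/*
	 * set_bit() performs the same store with atomic semantics, e.g. a
	 * lock-prefixed instruction on x86, which the compiler cannot
	 * reorder or combine with neighboring stores.
	 */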

Acked-by: Michal Hocko <[email protected]>
Signed-off-by: Alexander Duyck <[email protected]>
---

v4: Added comment about __set_bit vs set_bit to the patch description

include/linux/page-flags.h | 1 +
mm/page_alloc.c | 9 +++++++--
2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 934f91ef3f54..50ce1bddaf56 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -303,6 +303,7 @@ static inline void page_init_poison(struct page *page, size_t size)

PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
__CLEARPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
+ __SETPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
PAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
__CLEARPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
__SETPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 712cab17f86f..29bd662fffd7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1239,7 +1239,12 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
/* Avoid false-positive PageTail() */
INIT_LIST_HEAD(&page->lru);

- SetPageReserved(page);
+ /*
+ * no need for atomic set_bit because the struct
+ * page is not visible yet so nobody should
+ * access it yet.
+ */
+ __SetPageReserved(page);
}
}
}
@@ -5513,7 +5518,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
page = pfn_to_page(pfn);
__init_single_page(page, pfn, zone, nid);
if (context == MEMMAP_HOTPLUG)
- SetPageReserved(page);
+ __SetPageReserved(page);

/*
* Mark the block movable so that blocks are reserved for


2018-09-20 22:30:39

by Alexander Duyck

Subject: [PATCH v4 4/5] async: Add support for queueing on specific node

This patch introduces two new variants of the async_schedule_ functions
that allow scheduling on a specific node. These functions are
async_schedule_on and async_schedule_on_domain which end up mapping to
async_schedule and async_schedule_domain but provide NUMA node specific
functionality. The original functions were moved to inline function
definitions that call the new functions while passing NUMA_NO_NODE.

The main motivation behind this is to address the need to be able to
schedule NVDIMM init work on specific NUMA nodes in order to improve
performance of memory initialization.

One additional change I made is I dropped the "extern" from the function
prototypes in the async.h kernel header since they aren't needed.
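
As a purely hypothetical usage sketch (my_init_work and dev are
placeholders, not part of this patch), a caller would do something like:

	static void my_init_work(void *data, async_cookie_t cookie)
	{
		struct device *dev = data;

		/* runs on an unbound workqueue near dev's NUMA node */
	}

	/* schedule the callback close to the device's memory */
	async_schedule_on(my_init_work, dev, dev_to_node(dev));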

Signed-off-by: Alexander Duyck <[email protected]>
---
include/linux/async.h | 20 +++++++++++++++++---
kernel/async.c | 36 +++++++++++++++++++++++++-----------
2 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/async.h b/include/linux/async.h
index 6b0226bdaadc..9878b99cbb01 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -14,6 +14,7 @@

#include <linux/types.h>
#include <linux/list.h>
+#include <linux/numa.h>

typedef u64 async_cookie_t;
typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@@ -37,9 +38,22 @@ struct async_domain {
struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
.registered = 0 }

-extern async_cookie_t async_schedule(async_func_t func, void *data);
-extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
- struct async_domain *domain);
+async_cookie_t async_schedule_on(async_func_t func, void *data, int node);
+async_cookie_t async_schedule_on_domain(async_func_t func, void *data, int node,
+ struct async_domain *domain);
+
+static inline async_cookie_t async_schedule(async_func_t func, void *data)
+{
+ return async_schedule_on(func, data, NUMA_NO_NODE);
+}
+
+static inline async_cookie_t
+async_schedule_domain(async_func_t func, void *data,
+ struct async_domain *domain)
+{
+ return async_schedule_on_domain(func, data, NUMA_NO_NODE, domain);
+}
+
void async_unregister_domain(struct async_domain *domain);
extern void async_synchronize_full(void);
extern void async_synchronize_full_domain(struct async_domain *domain);
diff --git a/kernel/async.c b/kernel/async.c
index a893d6170944..1d7ce81c1949 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -56,6 +56,7 @@ synchronization with the async_synchronize_full() function, before returning
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
+#include <linux/cpu.h>

#include "workqueue_internal.h"

@@ -149,8 +150,11 @@ static void async_run_entry_fn(struct work_struct *work)
wake_up(&async_done);
}

-static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
+static async_cookie_t __async_schedule(async_func_t func, void *data,
+ struct async_domain *domain,
+ int node)
{
+ int cpu = WORK_CPU_UNBOUND;
struct async_entry *entry;
unsigned long flags;
async_cookie_t newcookie;
@@ -194,30 +198,40 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
/* mark that this task has queued an async job, used by module init */
current->flags |= PF_USED_ASYNC;

+ /* guarantee cpu_online_mask doesn't change during scheduling */
+ get_online_cpus();
+
+ if (node >= 0 && node < MAX_NUMNODES && node_online(node))
+ cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
+
/* schedule for execution */
- queue_work(system_unbound_wq, &entry->work);
+ queue_work_on(cpu, system_unbound_wq, &entry->work);
+
+ put_online_cpus();

return newcookie;
}

/**
- * async_schedule - schedule a function for asynchronous execution
+ * async_schedule_on - schedule a function for asynchronous execution
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
+ * @node: NUMA node to complete the work on
*
* Returns an async_cookie_t that may be used for checkpointing later.
* Note: This function may be called from atomic or non-atomic contexts.
*/
-async_cookie_t async_schedule(async_func_t func, void *data)
+async_cookie_t async_schedule_on(async_func_t func, void *data, int node)
{
- return __async_schedule(func, data, &async_dfl_domain);
+ return __async_schedule(func, data, &async_dfl_domain, node);
}
-EXPORT_SYMBOL_GPL(async_schedule);
+EXPORT_SYMBOL_GPL(async_schedule_on);

/**
- * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
+ * async_schedule_on_domain - schedule a function for asynchronous execution within a certain domain
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
+ * @node: NUMA node to complete the work on
* @domain: the domain
*
* Returns an async_cookie_t that may be used for checkpointing later.
@@ -226,12 +240,12 @@ async_cookie_t async_schedule(async_func_t func, void *data)
* synchronization domain is specified via @domain. Note: This function
* may be called from atomic or non-atomic contexts.
*/
-async_cookie_t async_schedule_domain(async_func_t func, void *data,
- struct async_domain *domain)
+async_cookie_t async_schedule_on_domain(async_func_t func, void *data, int node,
+ struct async_domain *domain)
{
- return __async_schedule(func, data, domain);
+ return __async_schedule(func, data, domain, node);
}
-EXPORT_SYMBOL_GPL(async_schedule_domain);
+EXPORT_SYMBOL_GPL(async_schedule_on_domain);

/**
* async_synchronize_full - synchronize all asynchronous function calls


2018-09-20 22:31:25

by Alexander Duyck

Subject: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

This patch is meant to force the device registration for nvdimm devices to
be closer to the actual device. This is achieved by using either the NUMA
node ID of the region, or of the parent. By doing this we can have
everything above the region based on the region, and everything below the
region based on the nvdimm bus.

One additional change I made is that we hold onto a reference to the parent
while we are going through registration. By doing this we can guarantee we
can complete the registration before we have the parent device removed.

By guaranteeing NUMA locality I see an improvement of as high as 25% for
per-node init of a system with 12TB of persistent memory.

Signed-off-by: Alexander Duyck <[email protected]>
---
drivers/nvdimm/bus.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
index 8aae6dcc839f..ca935296d55e 100644
--- a/drivers/nvdimm/bus.c
+++ b/drivers/nvdimm/bus.c
@@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
dev_err(dev, "%s: failed\n", __func__);
put_device(dev);
}
+
put_device(dev);
+ put_device(dev->parent);
}

static void nd_async_device_unregister(void *d, async_cookie_t cookie)
@@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)

void __nd_device_register(struct device *dev)
{
+ int node;
+
if (!dev)
return;
+
dev->bus = &nvdimm_bus_type;
+ get_device(dev->parent);
get_device(dev);
- async_schedule_domain(nd_async_device_register, dev,
- &nd_async_domain);
+
+ /*
+ * For a region we can break away from the parent node,
+ * otherwise for all other devices we just inherit the node from
+ * the parent.
+ */
+ node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
+ dev_to_node(dev->parent);
+
+ async_schedule_on_domain(nd_async_device_register, dev, node,
+ &nd_async_domain);
}

void nd_device_register(struct device *dev)


2018-09-20 22:31:30

by Alexander Duyck

Subject: [PATCH v4 3/5] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap

The ZONE_DEVICE pages were being initialized in two locations. One was with
the memory_hotplug lock held and another was outside of that lock. The
problem with this is that it was nearly doubling the memory initialization
time. Instead of doing this twice, once while holding a global lock and
once without, I am opting to defer the initialization to the one outside of
the lock. This allows us to avoid serializing the overhead for memory init
and we can instead focus on per-node init times.

One issue I encountered is that devm_memremap_pages and
hmm_devmem_pages_create were only initializing the pgmap field the same
way. One wasn't initializing hmm_data, and the other was initializing it
to a poison value. Since hmm_data is something that is exposed to the
driver in the case of hmm, I am opting for a third option and just
initializing hmm_data to 0, since it is going to be exposed to unknown
third party drivers.

Signed-off-by: Alexander Duyck <[email protected]>
---

v4: Moved memmap_init_zone_device below memmap_init_zone to avoid merge
conflicts with other changes in the kernel.

include/linux/mm.h | 2 +
kernel/memremap.c | 24 +++++---------
mm/hmm.c | 12 ++++---
mm/page_alloc.c | 92 ++++++++++++++++++++++++++++++++++++++++++++++++++--
4 files changed, 107 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d63d163f341d..25c89615d303 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -848,6 +848,8 @@ static inline bool is_zone_device_page(const struct page *page)
{
return page_zonenum(page) == ZONE_DEVICE;
}
+extern void memmap_init_zone_device(struct zone *, unsigned long,
+ unsigned long, struct dev_pagemap *);
#else
static inline bool is_zone_device_page(const struct page *page)
{
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5b8600d39931..d0c32e473f82 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -175,10 +175,10 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
struct vmem_altmap *altmap = pgmap->altmap_valid ?
&pgmap->altmap : NULL;
struct resource *res = &pgmap->res;
- unsigned long pfn, pgoff, order;
+ struct dev_pagemap *conflict_pgmap;
pgprot_t pgprot = PAGE_KERNEL;
+ unsigned long pgoff, order;
int error, nid, is_ram;
- struct dev_pagemap *conflict_pgmap;

align_start = res->start & ~(SECTION_SIZE - 1);
align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
@@ -256,19 +256,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
if (error)
goto err_add_memory;

- for_each_device_pfn(pfn, pgmap) {
- struct page *page = pfn_to_page(pfn);
-
- /*
- * ZONE_DEVICE pages union ->lru with a ->pgmap back
- * pointer. It is a bug if a ZONE_DEVICE page is ever
- * freed or placed on a driver-private list. Seed the
- * storage with LIST_POISON* values.
- */
- list_del(&page->lru);
- page->pgmap = pgmap;
- percpu_ref_get(pgmap->ref);
- }
+ /*
+ * Initialization of the pages has been deferred until now in order
+ * to allow us to do the work while not holding the hotplug lock.
+ */
+ memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
+ align_start >> PAGE_SHIFT,
+ align_size >> PAGE_SHIFT, pgmap);

devm_add_action(dev, devm_memremap_pages_release, pgmap);

diff --git a/mm/hmm.c b/mm/hmm.c
index c968e49f7a0c..774d684fa2b4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1024,7 +1024,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
resource_size_t key, align_start, align_size, align_end;
struct device *device = devmem->device;
int ret, nid, is_ram;
- unsigned long pfn;

align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1);
align_size = ALIGN(devmem->resource->start +
@@ -1109,11 +1108,14 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
align_size >> PAGE_SHIFT, NULL);
mem_hotplug_done();

- for (pfn = devmem->pfn_first; pfn < devmem->pfn_last; pfn++) {
- struct page *page = pfn_to_page(pfn);
+ /*
+ * Initialization of the pages has been deferred until now in order
+ * to allow us to do the work while not holding the hotplug lock.
+ */
+ memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
+ align_start >> PAGE_SHIFT,
+ align_size >> PAGE_SHIFT, &devmem->pagemap);

- page->pgmap = &devmem->pagemap;
- }
return 0;

error_add_memory:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29bd662fffd7..ac1fa0efdea0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5490,12 +5490,23 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
if (highest_memmap_pfn < end_pfn - 1)
highest_memmap_pfn = end_pfn - 1;

+#ifdef CONFIG_ZONE_DEVICE
/*
* Honor reservation requested by the driver for this ZONE_DEVICE
- * memory
+ * memory. We limit the total number of pages to initialize to just
+ * those that might contain the memory mapping. We will defer the
+ * ZONE_DEVICE page initialization until after we have released
+ * the hotplug lock.
*/
- if (altmap && start_pfn == altmap->base_pfn)
- start_pfn += altmap->reserve;
+ if (zone == ZONE_DEVICE) {
+ if (!altmap)
+ return;
+
+ if (start_pfn == altmap->base_pfn)
+ start_pfn += altmap->reserve;
+ end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+ }
+#endif

for (pfn = start_pfn; pfn < end_pfn; pfn++) {
/*
@@ -5539,6 +5550,81 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
}
}

+#ifdef CONFIG_ZONE_DEVICE
+void __ref memmap_init_zone_device(struct zone *zone,
+ unsigned long start_pfn,
+ unsigned long size,
+ struct dev_pagemap *pgmap)
+{
+ unsigned long pfn, end_pfn = start_pfn + size;
+ struct pglist_data *pgdat = zone->zone_pgdat;
+ unsigned long zone_idx = zone_idx(zone);
+ unsigned long start = jiffies;
+ int nid = pgdat->node_id;
+
+ if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
+ return;
+
+ /*
+ * The call to memmap_init_zone should have already taken care
+ * of the pages reserved for the memmap, so we can just jump to
+ * the end of that region and start processing the device pages.
+ */
+ if (pgmap->altmap_valid) {
+ struct vmem_altmap *altmap = &pgmap->altmap;
+
+ start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+ size = end_pfn - start_pfn;
+ }
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+ struct page *page = pfn_to_page(pfn);
+
+ __init_single_page(page, pfn, zone_idx, nid);
+
+ /*
+ * Mark page reserved as it will need to wait for onlining
+ * phase for it to be fully associated with a zone.
+ *
+ * We can use the non-atomic __set_bit operation for setting
+ * the flag as we are still initializing the pages.
+ */
+ __SetPageReserved(page);
+
+ /*
+ * ZONE_DEVICE pages union ->lru with a ->pgmap back
+ * pointer and hmm_data. It is a bug if a ZONE_DEVICE
+ * page is ever freed or placed on a driver-private list.
+ */
+ page->pgmap = pgmap;
+ page->hmm_data = 0;
+
+ /*
+ * Mark the block movable so that blocks are reserved for
+ * movable at startup. This will force kernel allocations
+ * to reserve their blocks rather than leaking throughout
+ * the address space during boot when many long-lived
+ * kernel allocations are made.
+ *
+ * bitmap is created for zone's valid pfn range. but memmap
+ * can be created for invalid pages (for alignment)
+ * check here not to call set_pageblock_migratetype() against
+ * pfn out of zone.
+ *
+ * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+ * because this is done early in sparse_add_one_section
+ */
+ if (!(pfn & (pageblock_nr_pages - 1))) {
+ set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ cond_resched();
+ }
+ }
+
+ pr_info("%s initialised, %lu pages in %ums\n", dev_name(pgmap->dev),
+ size, jiffies_to_msecs(jiffies - start));
+}
+
+#endif
static void __meminit zone_init_free_lists(struct zone *zone)
{
unsigned int order, t;


2018-09-20 23:00:58

by Dan Williams

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
<[email protected]> wrote:
>
> This patch is meant to force the device registration for nvdimm devices to
> be closer to the actual device. This is achieved by using either the NUMA
> node ID of the region, or of the parent. By doing this we can have
> everything above the region based on the region, and everything below the
> region based on the nvdimm bus.
>
> One additional change I made is that we hold onto a reference to the parent
> while we are going through registration. By doing this we can guarantee we
> can complete the registration before we have the parent device removed.
>
> By guaranteeing NUMA locality I see an improvement of as high as 25% for
> per-node init of a system with 12TB of persistent memory.
>
> Signed-off-by: Alexander Duyck <[email protected]>
> ---
> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
> 1 file changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
> index 8aae6dcc839f..ca935296d55e 100644
> --- a/drivers/nvdimm/bus.c
> +++ b/drivers/nvdimm/bus.c
> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
> dev_err(dev, "%s: failed\n", __func__);
> put_device(dev);
> }
> +
> put_device(dev);
> + put_device(dev->parent);

Good catch. The child does not pin the parent until registration, but
we need to make sure the parent isn't gone while we're waiting for the
registration work to run.

Let's break this reference count fix out into its own separate patch,
because this looks to be covering a gap that may need to be
recommended for -stable.


>
> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>
> void __nd_device_register(struct device *dev)
> {
> + int node;
> +
> if (!dev)
> return;
> +
> dev->bus = &nvdimm_bus_type;
> + get_device(dev->parent);
> get_device(dev);
> - async_schedule_domain(nd_async_device_register, dev,
> - &nd_async_domain);
> +
> + /*
> + * For a region we can break away from the parent node,
> + * otherwise for all other devices we just inherit the node from
> + * the parent.
> + */
> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
> + dev_to_node(dev->parent);

Devices already automatically inherit the node of their parent, so I'm
not understanding why this is needed?

2018-09-21 00:17:04

by Alexander Duyck

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

On 9/20/2018 3:59 PM, Dan Williams wrote:
> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
> <[email protected]> wrote:
>>
>> This patch is meant to force the device registration for nvdimm devices to
>> be closer to the actual device. This is achieved by using either the NUMA
>> node ID of the region, or of the parent. By doing this we can have
>> everything above the region based on the region, and everything below the
>> region based on the nvdimm bus.
>>
>> One additional change I made is that we hold onto a reference to the parent
>> while we are going through registration. By doing this we can guarantee we
>> can complete the registration before we have the parent device removed.
>>
>> By guaranteeing NUMA locality I see an improvement of as high as 25% for
>> per-node init of a system with 12TB of persistent memory.
>>
>> Signed-off-by: Alexander Duyck <[email protected]>
>> ---
>> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
>> 1 file changed, 17 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
>> index 8aae6dcc839f..ca935296d55e 100644
>> --- a/drivers/nvdimm/bus.c
>> +++ b/drivers/nvdimm/bus.c
>> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
>> dev_err(dev, "%s: failed\n", __func__);
>> put_device(dev);
>> }
>> +
>> put_device(dev);
>> + put_device(dev->parent);
>
> Good catch. The child does not pin the parent until registration, but
> we need to make sure the parent isn't gone while we're waiting for the
> registration work to run.
>
> Let's break this reference count fix out into its own separate patch,
> because this looks to be covering a gap that may need to be
> recommended for -stable.

Okay, I guess I can do that.

>
>>
>> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>>
>> void __nd_device_register(struct device *dev)
>> {
>> + int node;
>> +
>> if (!dev)
>> return;
>> +
>> dev->bus = &nvdimm_bus_type;
>> + get_device(dev->parent);
>> get_device(dev);
>> - async_schedule_domain(nd_async_device_register, dev,
>> - &nd_async_domain);
>> +
>> + /*
>> + * For a region we can break away from the parent node,
>> + * otherwise for all other devices we just inherit the node from
>> + * the parent.
>> + */
>> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
>> + dev_to_node(dev->parent);
>
> Devices already automatically inherit the node of their parent, so I'm
> not understanding why this is needed?

That doesn't happen until you call device_add, which you don't call
until nd_async_device_register. All that has been called on the device
up to now is device_initialize which leaves the node at NUMA_NO_NODE.
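
For reference, the ordering being described is roughly (paraphrasing the
driver core, not code from this patch):

	device_initialize(dev);	/* leaves dev->numa_node at NUMA_NO_NODE */
	...
	/* only later, inside device_add(): */
	if (parent && dev_to_node(dev) == NUMA_NO_NODE)
		set_dev_node(dev, dev_to_node(parent));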

2018-09-21 00:37:20

by Dan Williams

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

On Thu, Sep 20, 2018 at 5:26 PM Alexander Duyck
<[email protected]> wrote:
>
> On 9/20/2018 3:59 PM, Dan Williams wrote:
> > On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
> > <[email protected]> wrote:
> >>
> >> This patch is meant to force the device registration for nvdimm devices to
> >> be closer to the actual device. This is achieved by using either the NUMA
> >> node ID of the region, or of the parent. By doing this we can have
> >> everything above the region based on the region, and everything below the
> >> region based on the nvdimm bus.
> >>
> >> One additional change I made is that we hold onto a reference to the parent
> >> while we are going through registration. By doing this we can guarantee we
> >> can complete the registration before we have the parent device removed.
> >>
> >> By guaranteeing NUMA locality I see an improvement of as high as 25% for
> >> per-node init of a system with 12TB of persistent memory.
> >>
> >> Signed-off-by: Alexander Duyck <[email protected]>
> >> ---
> >> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
> >> 1 file changed, 17 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
> >> index 8aae6dcc839f..ca935296d55e 100644
> >> --- a/drivers/nvdimm/bus.c
> >> +++ b/drivers/nvdimm/bus.c
> >> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
> >> dev_err(dev, "%s: failed\n", __func__);
> >> put_device(dev);
> >> }
> >> +
> >> put_device(dev);
> >> + put_device(dev->parent);
> >
> > Good catch. The child does not pin the parent until registration, but
> > we need to make sure the parent isn't gone while we're waiting for the
> > registration work to run.
> >
> > Let's break this reference count fix out into its own separate patch,
> > because this looks to be covering a gap that may need to be
> > recommended for -stable.
>
> Okay, I guess I can do that.
>
> >
> >>
> >> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
> >> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
> >>
> >> void __nd_device_register(struct device *dev)
> >> {
> >> + int node;
> >> +
> >> if (!dev)
> >> return;
> >> +
> >> dev->bus = &nvdimm_bus_type;
> >> + get_device(dev->parent);
> >> get_device(dev);
> >> - async_schedule_domain(nd_async_device_register, dev,
> >> - &nd_async_domain);
> >> +
> >> + /*
> >> + * For a region we can break away from the parent node,
> >> + * otherwise for all other devices we just inherit the node from
> >> + * the parent.
> >> + */
> >> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
> >> + dev_to_node(dev->parent);
> >
> > Devices already automatically inherit the node of their parent, so I'm
> > not understanding why this is needed?
>
> That doesn't happen until you call device_add, which you don't call
> until nd_async_device_register. All that has been called on the device
> up to now is device_initialize which leaves the node at NUMA_NO_NODE.

Ooh, yeah, missed that. I think I'd prefer this policy to be moved out to
where we set the dev->parent before calling __nd_device_register, or
at least a comment here about *why* we know region devices are special
(i.e. because the nd_region_desc specified the node at region creation
time).

2018-09-21 01:36:05

by Alexander Duyck

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device



On 9/20/2018 5:36 PM, Dan Williams wrote:
> On Thu, Sep 20, 2018 at 5:26 PM Alexander Duyck
> <[email protected]> wrote:
>>
>> On 9/20/2018 3:59 PM, Dan Williams wrote:
>>> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
>>> <[email protected]> wrote:
>>>>
>>>> This patch is meant to force the device registration for nvdimm devices to
>>>> be closer to the actual device. This is achieved by using either the NUMA
>>>> node ID of the region, or of the parent. By doing this we can have
>>>> everything above the region based on the region, and everything below the
>>>> region based on the nvdimm bus.
>>>>
>>>> One additional change I made is that we hold onto a reference to the parent
>>>> while we are going through registration. By doing this we can guarantee we
>>>> can complete the registration before we have the parent device removed.
>>>>
>>>> By guaranteeing NUMA locality I see an improvement of as high as 25% for
>>>> per-node init of a system with 12TB of persistent memory.
>>>>
>>>> Signed-off-by: Alexander Duyck <[email protected]>
>>>> ---
>>>> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
>>>> 1 file changed, 17 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
>>>> index 8aae6dcc839f..ca935296d55e 100644
>>>> --- a/drivers/nvdimm/bus.c
>>>> +++ b/drivers/nvdimm/bus.c
>>>> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
>>>> dev_err(dev, "%s: failed\n", __func__);
>>>> put_device(dev);
>>>> }
>>>> +
>>>> put_device(dev);
>>>> + put_device(dev->parent);
>>>
>>> Good catch. The child does not pin the parent until registration, but
>>> we need to make sure the parent isn't gone while we're waiting for the
>>> registration work to run.
>>>
>>> Let's break this reference count fix out into its own separate patch,
>>> because this looks to be covering a gap that may need to be
>>> recommended for -stable.
>>
>> Okay, I guess I can do that.
>>
>>>
>>>>
>>>> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>>>> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>>>>
>>>> void __nd_device_register(struct device *dev)
>>>> {
>>>> + int node;
>>>> +
>>>> if (!dev)
>>>> return;
>>>> +
>>>> dev->bus = &nvdimm_bus_type;
>>>> + get_device(dev->parent);
>>>> get_device(dev);
>>>> - async_schedule_domain(nd_async_device_register, dev,
>>>> - &nd_async_domain);
>>>> +
>>>> + /*
>>>> + * For a region we can break away from the parent node,
>>>> + * otherwise for all other devices we just inherit the node from
>>>> + * the parent.
>>>> + */
>>>> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
>>>> + dev_to_node(dev->parent);
>>>
>>> Devices already automatically inherit the node of their parent, so I'm
>>> not understanding why this is needed?
>>
>> That doesn't happen until you call device_add, which you don't call
>> until nd_async_device_register. All that has been called on the device
>> up to now is device_initialize which leaves the node at NUMA_NO_NODE.
>
> Ooh, yeah, missed that. I think I'd prefer this policy to be moved out to
> where we set the dev->parent before calling __nd_device_register, or
> at least a comment here about *why* we know region devices are special
> (i.e. because the nd_region_desc specified the node at region creation
> time).
>

Are you talking about pulling the scheduling out or just adding a node
value to the nd_device_register call so it can be set directly from the
caller?

If you wanted, what I could do is pull the set_dev_node call from
nvdimm_bus_uevent and place it in nd_device_register. That should stick,
as the node doesn't get overwritten by the parent if it is set after
device_initialize. If I did that along with the parent bit I was already
doing, then all that would be left to do is just use the dev_to_node
call on the device itself.
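
Roughly, as an untested sketch of that idea (with "node" standing in for
the region/parent node value computed in this patch):

	void nd_device_register(struct device *dev)
	{
		device_initialize(dev);
		/*
		 * Set after device_initialize() so the node is not
		 * clobbered, and before device_add() so it is not
		 * overwritten by inheritance from the parent.
		 */
		set_dev_node(dev, node);
		__nd_device_register(dev);
	}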

2018-09-21 02:46:38

by Dan Williams

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

On Thu, Sep 20, 2018 at 6:34 PM Alexander Duyck
<[email protected]> wrote:
>
>
>
> On 9/20/2018 5:36 PM, Dan Williams wrote:
> > On Thu, Sep 20, 2018 at 5:26 PM Alexander Duyck
> > <[email protected]> wrote:
> >>
> >> On 9/20/2018 3:59 PM, Dan Williams wrote:
> >>> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
> >>> <[email protected]> wrote:
> >>>>
> >>>> This patch is meant to force the device registration for nvdimm devices to
> >>>> be closer to the actual device. This is achieved by using either the NUMA
> >>>> node ID of the region, or of the parent. By doing this we can have
> >>>> everything above the region based on the region, and everything below the
> >>>> region based on the nvdimm bus.
> >>>>
> >>>> One additional change I made is that we hold onto a reference to the parent
> >>>> while we are going through registration. By doing this we can guarantee we
> >>>> can complete the registration before we have the parent device removed.
> >>>>
> >>>> By guaranteeing NUMA locality I see an improvement of as high as 25% for
> >>>> per-node init of a system with 12TB of persistent memory.
> >>>>
> >>>> Signed-off-by: Alexander Duyck <[email protected]>
> >>>> ---
> >>>> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
> >>>> 1 file changed, 17 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
> >>>> index 8aae6dcc839f..ca935296d55e 100644
> >>>> --- a/drivers/nvdimm/bus.c
> >>>> +++ b/drivers/nvdimm/bus.c
> >>>> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
> >>>> dev_err(dev, "%s: failed\n", __func__);
> >>>> put_device(dev);
> >>>> }
> >>>> +
> >>>> put_device(dev);
> >>>> + put_device(dev->parent);
> >>>
> >>> Good catch. The child does not pin the parent until registration, but
> >>> we need to make sure the parent isn't gone while we're waiting for the
> >>> registration work to run.
> >>>
> >>> Let's break this reference count fix out into its own separate patch,
> >>> because this looks to be covering a gap that may need to be
> >>> recommended for -stable.
> >>
> >> Okay, I guess I can do that.
> >>
> >>>
> >>>>
> >>>> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
> >>>> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
> >>>>
> >>>> void __nd_device_register(struct device *dev)
> >>>> {
> >>>> + int node;
> >>>> +
> >>>> if (!dev)
> >>>> return;
> >>>> +
> >>>> dev->bus = &nvdimm_bus_type;
> >>>> + get_device(dev->parent);
> >>>> get_device(dev);
> >>>> - async_schedule_domain(nd_async_device_register, dev,
> >>>> - &nd_async_domain);
> >>>> +
> >>>> + /*
> >>>> + * For a region we can break away from the parent node,
> >>>> + * otherwise for all other devices we just inherit the node from
> >>>> + * the parent.
> >>>> + */
> >>>> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
> >>>> + dev_to_node(dev->parent);
> >>>
> >>> Devices already automatically inherit the node of their parent, so I'm
> >>> not understanding why this is needed?
> >>
> >> That doesn't happen until you call device_add, which you don't call
> >> until nd_async_device_register. All that has been called on the device
> >> up to now is device_initialize which leaves the node at NUMA_NO_NODE.
> >
> > Ooh, yeah, missed that. I think I'd prefer this policy to be moved out to
> > where we set the dev->parent before calling __nd_device_register, or
> > at least a comment here about *why* we know region devices are special
> > (i.e. because the nd_region_desc specified the node at region creation
> > time).
> >
>
> Are you talking about pulling the scheduling out or just adding a node
> value to the nd_device_register call so it can be set directly from the
> caller?

I was thinking everywhere we set dev->parent before registering, also
set the node...

> If you wanted, what I could do is pull the set_dev_node call from
> nvdimm_bus_uevent and place it in nd_device_register. That should stick,
> as the node doesn't get overwritten by the parent if it is set after
> device_initialize. If I did that along with the parent bit I was already
> doing, then all that would be left to do is just use the dev_to_node
> call on the device itself.

...but this is even better.

2018-09-21 14:47:25

by Alexander Duyck

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device



On 9/20/2018 7:46 PM, Dan Williams wrote:
> On Thu, Sep 20, 2018 at 6:34 PM Alexander Duyck
> <[email protected]> wrote:
>>
>>
>>
>> On 9/20/2018 5:36 PM, Dan Williams wrote:
>>> On Thu, Sep 20, 2018 at 5:26 PM Alexander Duyck
>>> <[email protected]> wrote:
>>>>
>>>> On 9/20/2018 3:59 PM, Dan Williams wrote:
>>>>> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> This patch is meant to force the device registration for nvdimm devices to
>>>>>> be closer to the actual device. This is achieved by using either the NUMA
>>>>>> node ID of the region, or of the parent. By doing this we can have
>>>>>> everything above the region based on the region, and everything below the
>>>>>> region based on the nvdimm bus.
>>>>>>
>>>>>> One additional change I made is that we hold onto a reference to the parent
>>>>>> while we are going through registration. By doing this we can guarantee we
>>>>>> can complete the registration before we have the parent device removed.
>>>>>>
>>>>>> By guaranteeing NUMA locality I see an improvement of as high as 25% for
>>>>>> per-node init of a system with 12TB of persistent memory.
>>>>>>
>>>>>> Signed-off-by: Alexander Duyck <[email protected]>
>>>>>> ---
>>>>>> drivers/nvdimm/bus.c | 19 +++++++++++++++++--
>>>>>> 1 file changed, 17 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
>>>>>> index 8aae6dcc839f..ca935296d55e 100644
>>>>>> --- a/drivers/nvdimm/bus.c
>>>>>> +++ b/drivers/nvdimm/bus.c
>>>>>> @@ -487,7 +487,9 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
>>>>>> dev_err(dev, "%s: failed\n", __func__);
>>>>>> put_device(dev);
>>>>>> }
>>>>>> +
>>>>>> put_device(dev);
>>>>>> + put_device(dev->parent);
>>>>>
>>>>> Good catch. The child does not pin the parent until registration, but
>>>>> we need to make sure the parent isn't gone while we're waiting for the
>>>>> registration work to run.
>>>>>
>>>>> Let's break this reference count fix out into its own separate patch,
>>>>> because this looks to be covering a gap that may need to be
>>>>> recommended for -stable.
>>>>
>>>> Okay, I guess I can do that.
>>>>
>>>>>
>>>>>>
>>>>>> static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>>>>>> @@ -504,12 +506,25 @@ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
>>>>>>
>>>>>> void __nd_device_register(struct device *dev)
>>>>>> {
>>>>>> + int node;
>>>>>> +
>>>>>> if (!dev)
>>>>>> return;
>>>>>> +
>>>>>> dev->bus = &nvdimm_bus_type;
>>>>>> + get_device(dev->parent);
>>>>>> get_device(dev);
>>>>>> - async_schedule_domain(nd_async_device_register, dev,
>>>>>> - &nd_async_domain);
>>>>>> +
>>>>>> + /*
>>>>>> + * For a region we can break away from the parent node,
>>>>>> + * otherwise for all other devices we just inherit the node from
>>>>>> + * the parent.
>>>>>> + */
>>>>>> + node = is_nd_region(dev) ? to_nd_region(dev)->numa_node :
>>>>>> + dev_to_node(dev->parent);
>>>>>
>>>>> Devices already automatically inherit the node of their parent, so I'm
>>>>> not understanding why this is needed?
>>>>
>>>> That doesn't happen until you call device_add, which you don't call
>>>> until nd_async_device_register. All that has been called on the device
>>>> up to now is device_initialize which leaves the node at NUMA_NO_NODE.
>>>
>>> Ooh, yeah, missed that. I think I'd prefer this policy to be moved out to
>>> where we set the dev->parent before calling __nd_device_register, or
>>> at least a comment here about *why* we know region devices are special
>>> (i.e. because the nd_region_desc specified the node at region creation
>>> time).
>>>
>>
>> Are you talking about pulling the scheduling out or just adding a node
>> value to the nd_device_register call so it can be set directly from the
>> caller?
>
> I was thinking everywhere we set dev->parent before registering, also
> set the node...

That will not work unless we move the call to device_initialize to
somewhere before you are setting the node. That is why I was thinking it
might work to put the node assignment in nd_device_register itself since
it looks like the regions don't call __nd_device_register directly.

I guess we could get rid of nd_device_register if we wanted to go that
route.

>> If you wanted, what I could do is pull the set_dev_node call from
>> nvdimm_bus_uevent and place it in nd_device_register. That should stick,
>> as the node doesn't get overwritten by the parent if it is set after
>> device_initialize. If I did that along with the parent bit I was already
>> doing, then all that would be left to do is just use the dev_to_node
>> call on the device itself.
>
> ...but this is even better.
>

I'm not sure it adds that much. Basically my thought was we just need to
make sure to set the device node after the call to device_initialize but
before the call to device_add. This just seems like a bunch more work to
spread the device_initialize calls all over, and could introduce possible
regressions.

2018-09-21 14:57:12

by Dan Williams

Subject: Re: [PATCH v4 5/5] nvdimm: Schedule device registration on node local to the device

On Fri, Sep 21, 2018 at 7:48 AM Alexander Duyck
<[email protected]> wrote:
[..]
> > I was thinking everywhere we set dev->parent before registering, also
> > set the node...
>
> That will not work unless we move the call to device_initialize to
> somewhere before you are setting the node. That is why I was thinking it
> might work to put the node assignment in nd_device_register itself since
> it looks like the regions don't call __nd_device_register directly.
>
> I guess we could get rid of nd_device_register if we wanted to go that
> route.
>
> >> If you wanted, what I could do is pull the set_dev_node call from
> >> nvdimm_bus_uevent and place it in nd_device_register. That should stick,
> >> as the node doesn't get overwritten by the parent if it is set after
> >> device_initialize. If I did that along with the parent bit I was already
> >> doing, then all that would be left to do is just use the dev_to_node
> >> call on the device itself.
> >
> > ...but this is even better.
> >
>
> I'm not sure it adds that much. Basically my thought was we just need to
> make sure to set the device node after the call to device_initialize but
> before the call to device_add. This just seems like a bunch more work to
> spread the device_initialize calls all over, and could introduce possible
> regressions.

Yeah, device_initialize() clobbering the numa_node makes it awkward.
Let's go with what you have presently and fix up the comment to say why
region devices are special.

2018-09-21 14:58:27

by Dan Williams

Subject: Re: [PATCH v4 4/5] async: Add support for queueing on specific node

On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
<[email protected]> wrote:
>
> This patch introduces two new variants of the async_schedule_ functions
> that allow scheduling on a specific node. These functions are
> async_schedule_on and async_schedule_on_domain which end up mapping to
> async_schedule and async_schedule_domain but provide NUMA node specific
> functionality. The original functions were moved to inline function
> definitions that call the new functions while passing NUMA_NO_NODE.
>
> The main motivation behind this is to address the need to be able to
> schedule NVDIMM init work on specific NUMA nodes in order to improve
> performance of memory initialization.
>
> One additional change I made is I dropped the "extern" from the function
> prototypes in the async.h kernel header since they aren't needed.
>
> Signed-off-by: Alexander Duyck <[email protected]>
> ---
> include/linux/async.h | 20 +++++++++++++++++---
> kernel/async.c | 36 +++++++++++++++++++++++++-----------
> 2 files changed, 42 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/async.h b/include/linux/async.h
> index 6b0226bdaadc..9878b99cbb01 100644
> --- a/include/linux/async.h
> +++ b/include/linux/async.h
> @@ -14,6 +14,7 @@
>
> #include <linux/types.h>
> #include <linux/list.h>
> +#include <linux/numa.h>
>
> typedef u64 async_cookie_t;
> typedef void (*async_func_t) (void *data, async_cookie_t cookie);
> @@ -37,9 +38,22 @@ struct async_domain {
> struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
> .registered = 0 }
>
> -extern async_cookie_t async_schedule(async_func_t func, void *data);
> -extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
> - struct async_domain *domain);
> +async_cookie_t async_schedule_on(async_func_t func, void *data, int node);
> +async_cookie_t async_schedule_on_domain(async_func_t func, void *data, int node,
> + struct async_domain *domain);

I would expect this to take a cpu instead of a node to not surprise
users coming from queue_work_on() / schedule_work_on()...

> +
> +static inline async_cookie_t async_schedule(async_func_t func, void *data)
> +{
> + return async_schedule_on(func, data, NUMA_NO_NODE);
> +}
> +
> +static inline async_cookie_t
> +async_schedule_domain(async_func_t func, void *data,
> + struct async_domain *domain)
> +{
> + return async_schedule_on_domain(func, data, NUMA_NO_NODE, domain);
> +}
> +
> void async_unregister_domain(struct async_domain *domain);
> extern void async_synchronize_full(void);
> extern void async_synchronize_full_domain(struct async_domain *domain);
> diff --git a/kernel/async.c b/kernel/async.c
> index a893d6170944..1d7ce81c1949 100644
> --- a/kernel/async.c
> +++ b/kernel/async.c
> @@ -56,6 +56,7 @@ synchronization with the async_synchronize_full() function, before returning
> #include <linux/sched.h>
> #include <linux/slab.h>
> #include <linux/workqueue.h>
> +#include <linux/cpu.h>
>
> #include "workqueue_internal.h"
>
> @@ -149,8 +150,11 @@ static void async_run_entry_fn(struct work_struct *work)
> wake_up(&async_done);
> }
>
> -static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
> +static async_cookie_t __async_schedule(async_func_t func, void *data,
> + struct async_domain *domain,
> + int node)
> {
> + int cpu = WORK_CPU_UNBOUND;
> struct async_entry *entry;
> unsigned long flags;
> async_cookie_t newcookie;
> @@ -194,30 +198,40 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
> /* mark that this task has queued an async job, used by module init */
> current->flags |= PF_USED_ASYNC;
>
> + /* guarantee cpu_online_mask doesn't change during scheduling */
> + get_online_cpus();
> +
> + if (node >= 0 && node < MAX_NUMNODES && node_online(node))
> + cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);

...I think this node-to-cpu helper should be up-leveled for callers. I
suspect using get_online_cpus() may cause lockdep problems, since it takes
the cpu_hotplug_lock() within a "do_something_on()" routine. For example,
I found this when auditing queue_work_on() users:

/*
* Doesn't need any cpu hotplug locking because we do rely on per-cpu
* kworkers being shut down before our page_alloc_cpu_dead callback is
* executed on the offlined cpu.
* Calling this function with cpu hotplug locks held can actually lead
* to obscure indirect dependencies via WQ context.
*/
void lru_add_drain_all(void)

I think it's a gotcha waiting to happen if async_schedule_on() has
more restrictive calling contexts than queue_work_on().

2018-09-21 17:03:23

by Alexander Duyck

Subject: Re: [PATCH v4 4/5] async: Add support for queueing on specific node



On 9/21/2018 7:57 AM, Dan Williams wrote:
> On Thu, Sep 20, 2018 at 3:31 PM Alexander Duyck
> <[email protected]> wrote:
>>
>> This patch introduces two new variants of the async_schedule_ functions
>> that allow scheduling on a specific node. These functions are
>> async_schedule_on and async_schedule_on_domain which end up mapping to
>> async_schedule and async_schedule_domain but provide NUMA node specific
>> functionality. The original functions were moved to inline function
>> definitions that call the new functions while passing NUMA_NO_NODE.
>>
>> The main motivation behind this is to address the need to be able to
>> schedule NVDIMM init work on specific NUMA nodes in order to improve
>> performance of memory initialization.
>>
>> One additional change I made is I dropped the "extern" from the function
>> prototypes in the async.h kernel header since they aren't needed.
>>
>> Signed-off-by: Alexander Duyck <[email protected]>
>> ---
>> include/linux/async.h | 20 +++++++++++++++++---
>> kernel/async.c | 36 +++++++++++++++++++++++++-----------
>> 2 files changed, 42 insertions(+), 14 deletions(-)
>>
>> diff --git a/include/linux/async.h b/include/linux/async.h
>> index 6b0226bdaadc..9878b99cbb01 100644
>> --- a/include/linux/async.h
>> +++ b/include/linux/async.h
>> @@ -14,6 +14,7 @@
>>
>> #include <linux/types.h>
>> #include <linux/list.h>
>> +#include <linux/numa.h>
>>
>> typedef u64 async_cookie_t;
>> typedef void (*async_func_t) (void *data, async_cookie_t cookie);
>> @@ -37,9 +38,22 @@ struct async_domain {
>> struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
>> .registered = 0 }
>>
>> -extern async_cookie_t async_schedule(async_func_t func, void *data);
>> -extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
>> - struct async_domain *domain);
>> +async_cookie_t async_schedule_on(async_func_t func, void *data, int node);
>> +async_cookie_t async_schedule_on_domain(async_func_t func, void *data, int node,
>> + struct async_domain *domain);
>
> I would expect this to take a cpu instead of a node to not surprise
> users coming from queue_work_on() / schedule_work_on()...

The thing is queue_work_on actually queues the work on a CPU in most
cases. The problem is that we are running on an unbound workqueue, so
what we actually get is node-specific behavior instead of CPU-specific
behavior. That is why I opted for this.
https://elixir.bootlin.com/linux/v4.19-rc4/source/kernel/workqueue.c#L1390
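
For reference, the relevant logic, paraphrased from __queue_work() in
kernel/workqueue.c around v4.19 (see the link above): for WQ_UNBOUND
workqueues the requested CPU is only used to look up the node's
pool_workqueue, so the effective granularity is the node, not the CPU.

	/* paraphrased excerpt, not a verbatim quote */
	if (req_cpu == WORK_CPU_UNBOUND)
		cpu = wq_select_unbound_cpu(raw_smp_processor_id());

	/* pwq which will be used unless @work is executing elsewhere */
	if (!(wq->flags & WQ_UNBOUND))
		pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
	else
		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));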

>> +
>> +static inline async_cookie_t async_schedule(async_func_t func, void *data)
>> +{
>> + return async_schedule_on(func, data, NUMA_NO_NODE);
>> +}
>> +
>> +static inline async_cookie_t
>> +async_schedule_domain(async_func_t func, void *data,
>> + struct async_domain *domain)
>> +{
>> + return async_schedule_on_domain(func, data, NUMA_NO_NODE, domain);
>> +}
>> +
>> void async_unregister_domain(struct async_domain *domain);
>> extern void async_synchronize_full(void);
>> extern void async_synchronize_full_domain(struct async_domain *domain);
>> diff --git a/kernel/async.c b/kernel/async.c
>> index a893d6170944..1d7ce81c1949 100644
>> --- a/kernel/async.c
>> +++ b/kernel/async.c
>> @@ -56,6 +56,7 @@ synchronization with the async_synchronize_full() function, before returning
>> #include <linux/sched.h>
>> #include <linux/slab.h>
>> #include <linux/workqueue.h>
>> +#include <linux/cpu.h>
>>
>> #include "workqueue_internal.h"
>>
>> @@ -149,8 +150,11 @@ static void async_run_entry_fn(struct work_struct *work)
>> wake_up(&async_done);
>> }
>>
>> -static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
>> +static async_cookie_t __async_schedule(async_func_t func, void *data,
>> + struct async_domain *domain,
>> + int node)
>> {
>> + int cpu = WORK_CPU_UNBOUND;
>> struct async_entry *entry;
>> unsigned long flags;
>> async_cookie_t newcookie;
>> @@ -194,30 +198,40 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
>> /* mark that this task has queued an async job, used by module init */
>> current->flags |= PF_USED_ASYNC;
>>
>> + /* guarantee cpu_online_mask doesn't change during scheduling */
>> + get_online_cpus();
>> +
>> + if (node >= 0 && node < MAX_NUMNODES && node_online(node))
>> + cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
>
> ...I think this node to cpu helper should be up-leveled for callers. I
> suspect using get_online_cpus() may cause lockdep problems to take the
> cpu_hotplug_lock() within a "do_something_on()" routine. For example,
> I found this when auditing queue_work_on() users:

Yeah, after looking over the code I think I do see an issue. I will
probably need to add something like an "unbound_cpu_by_node" type of
function in order to pair it up with the unbound_pwq_by_node call that
is in __queue_work. Otherwise I run the risk of scheduling on a CPU that
I shouldn't be scheduling on.

> /*
> * Doesn't need any cpu hotplug locking because we do rely on per-cpu
> * kworkers being shut down before our page_alloc_cpu_dead callback is
> * executed on the offlined cpu.
> * Calling this function with cpu hotplug locks held can actually lead
> * to obscure indirect dependencies via WQ context.
> */
> void lru_add_drain_all(void)
>
> I think it's a gotcha waiting to happen if async_schedule_on() has
> more restrictive calling contexts than queue_work_on().

I can look into that. If nothing else, it looks like queue_work_on does
put the onus on the caller to make certain the CPU cannot go away, so I
can push the responsibility up the call chain in order to maintain parity.
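
As a rough sketch of what pushing that up might look like (the helper
name and shape here are hypothetical, not settled code), the caller
would own CPU hotplug exclusion and the helper would only do the
node-to-CPU mapping:

	/*
	 * Hypothetical helper: map a NUMA node to some online CPU.
	 * The caller is responsible for CPU hotplug exclusion, the
	 * same contract queue_work_on() places on its callers.
	 */
	static int async_cpu_for_node(int node)
	{
		int cpu;

		if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
			return WORK_CPU_UNBOUND;

		cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);

		return cpu < nr_cpu_ids ? cpu : WORK_CPU_UNBOUND;
	}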


2018-09-21 19:04:57

by Pasha Tatashin

[permalink] [raw]
Subject: Re: [PATCH v4 1/5] mm: Provide kernel parameter to allow disabling page init poisoning


> + pr_err("vm_debug option '%c' unknown. skipped\n",
> + *str);
> + }
> +
> + str++;
> + }
> +out:
> + if (page_init_poisoning && !__page_init_poisoning)
> + pr_warn("Page struct poisoning disabled by kernel command line option 'vm_debug'\n");

Newlines '\n' can be removed; they are not needed for kprintfs.


Reviewed-by: Pavel Tatashin <[email protected]>

2018-09-21 19:07:56

by Pasha Tatashin

[permalink] [raw]
Subject: Re: [PATCH v4 2/5] mm: Create non-atomic version of SetPageReserved for init use



On 9/20/18 6:27 PM, Alexander Duyck wrote:
> It doesn't make much sense to use the atomic SetPageReserved at init time
> when we are using memset to clear the memory and manipulating the page
> flags via simple "&=" and "|=" operations in __init_single_page.
>
> This patch adds a non-atomic version __SetPageReserved that can be used
> during page init and shows about a 10% improvement in initialization times
> on the systems I have available for testing. On those systems I saw
> initialization times drop from around 35 seconds to around 32 seconds to
> initialize a 3TB block of persistent memory. I believe the main advantage
> of this is that it allows for more compiler optimization as the __set_bit
> operation can be reordered whereas the atomic version cannot.
>
> I tried adding a bit of documentation based on commit f1dd2cd13c4 ("mm,
> memory_hotplug: do not associate hotadded memory to zones until online").
>
> Ideally the reserved flag should be set earlier since there is a brief
> window where the page is initialized via __init_single_page and we have
> not set the PG_reserved flag. I'm leaving that for a future patch set as
> that will require a more significant refactor.
>
> Acked-by: Michal Hocko <[email protected]>
> Signed-off-by: Alexander Duyck <[email protected]>

Reviewed-by: Pavel Tatashin <[email protected]>

> ---
>
> v4: Added comment about __set_bit vs set_bit to the patch description
>
> include/linux/page-flags.h | 1 +
> mm/page_alloc.c | 9 +++++++--
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 934f91ef3f54..50ce1bddaf56 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -303,6 +303,7 @@ static inline void page_init_poison(struct page *page, size_t size)
>
> PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
> __CLEARPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
> + __SETPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
> PAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
> __CLEARPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
> __SETPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 712cab17f86f..29bd662fffd7 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1239,7 +1239,12 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
> /* Avoid false-positive PageTail() */
> INIT_LIST_HEAD(&page->lru);
>
> - SetPageReserved(page);
> + /*
> + * no need for atomic set_bit because the struct
> + * page is not visible yet so nobody should
> + * access it yet.
> + */
> + __SetPageReserved(page);
> }
> }
> }
> @@ -5513,7 +5518,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> page = pfn_to_page(pfn);
> __init_single_page(page, pfn, zone, nid);
> if (context == MEMMAP_HOTPLUG)
> - SetPageReserved(page);
> + __SetPageReserved(page);
>
> /*
> * Mark the block movable so that blocks are reserved for
>

2018-09-21 19:41:54

by Logan Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH v4 1/5] mm: Provide kernel parameter to allow disabling page init poisoning

On 2018-09-21 1:04 PM, Pasha Tatashin wrote:
>
>> + pr_err("vm_debug option '%c' unknown. skipped\n",
>> + *str);
>> + }
>> +
>> + str++;
>> + }
>> +out:
>> + if (page_init_poisoning && !__page_init_poisoning)
>> + pr_warn("Page struct poisoning disabled by kernel command line option 'vm_debug'\n");
>
> Newlines '\n' can be removed; they are not needed for kprintfs.

No, that's not correct.

A printk without a newline termination is not emitted as output until
the next printk call (in order to support KERN_CONT). Therefore
removing the '\n' causes a printk not to be printed when it is called,
which can cause long-delayed messages and subtle problems when
debugging. Always keep the newline in place even though the kernel will
add one for you if it's missing.
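
As a quick illustration (the messages here are made up):

	pr_info("probing widget...");	/* no '\n': may sit in the
					 * cont buffer rather than
					 * reaching the console */
	do_long_running_probe();	/* message above can show up
					 * seconds late or interleaved
					 * with other output */
	pr_info("widget ready\n");	/* '\n' terminates the line */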

Logan


2018-09-21 19:50:54

by Pasha Tatashin

[permalink] [raw]
Subject: Re: [PATCH v4 3/5] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap



On 9/20/18 6:29 PM, Alexander Duyck wrote:
> The ZONE_DEVICE pages were being initialized in two locations. One was with
> the memory_hotplug lock held and another was outside of that lock. The
> problem with this is that it was nearly doubling the memory initialization
> time. Instead of doing this twice, once while holding a global lock and
> once without, I am opting to defer the initialization to the one outside of
> the lock. This allows us to avoid serializing the overhead for memory init
> and we can instead focus on per-node init times.
>
> One issue I encountered is that devm_memremap_pages and
> hmm_devmem_pages_create were initializing only the pgmap field the same
> way. One wasn't initializing hmm_data, and the other was initializing it to
> a poison value. Since this is something that is exposed to the driver in
> the case of hmm I am opting for a third option and just initializing
> hmm_data to 0 since this is going to be exposed to unknown third party
> drivers.
>
> Signed-off-by: Alexander Duyck <[email protected]>

> +void __ref memmap_init_zone_device(struct zone *zone,
> + unsigned long start_pfn,
> + unsigned long size,
> + struct dev_pagemap *pgmap)
> +{
> + unsigned long pfn, end_pfn = start_pfn + size;
> + struct pglist_data *pgdat = zone->zone_pgdat;
> + unsigned long zone_idx = zone_idx(zone);
> + unsigned long start = jiffies;
> + int nid = pgdat->node_id;
> +
> + if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
> + return;
> +
> + /*
> + * The call to memmap_init_zone should have already taken care
> + * of the pages reserved for the memmap, so we can just jump to
> + * the end of that region and start processing the device pages.
> + */
> + if (pgmap->altmap_valid) {
> + struct vmem_altmap *altmap = &pgmap->altmap;
> +
> + start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> + size = end_pfn - start_pfn;
> + }
> +
> + for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> + struct page *page = pfn_to_page(pfn);
> +
> + __init_single_page(page, pfn, zone_idx, nid);
> +
> + /*
> + * Mark page reserved as it will need to wait for onlining
> + * phase for it to be fully associated with a zone.
> + *
> + * We can use the non-atomic __set_bit operation for setting
> + * the flag as we are still initializing the pages.
> + */
> + __SetPageReserved(page);
> +
> + /*
> + * ZONE_DEVICE pages union ->lru with a ->pgmap back
> + * pointer and hmm_data. It is a bug if a ZONE_DEVICE
> + * page is ever freed or placed on a driver-private list.
> + */
> + page->pgmap = pgmap;
> + page->hmm_data = 0;

__init_single_page()
mm_zero_struct_page()

Takes care of zeroing, no need to do another store here.


Looks good otherwise.

Reviewed-by: Pavel Tatashin <[email protected]>

2018-09-21 19:53:46

by Pasha Tatashin

[permalink] [raw]
Subject: Re: [PATCH v4 1/5] mm: Provide kernel parameter to allow disabling page init poisoning



On 9/21/18 3:41 PM, Logan Gunthorpe wrote:
> On 2018-09-21 1:04 PM, Pasha Tatashin wrote:
>>
>>> + pr_err("vm_debug option '%c' unknown. skipped\n",
>>> + *str);
>>> + }
>>> +
>>> + str++;
>>> + }
>>> +out:
>>> + if (page_init_poisoning && !__page_init_poisoning)
>>> + pr_warn("Page struct poisoning disabled by kernel command line option 'vm_debug'\n");
>>
>> Newlines '\n' can be removed; they are not needed for kprintfs.
>
> No, that's not correct.
>
> A printk without a newline termination is not emitted as output until
> the next printk call (in order to support KERN_CONT). Therefore
> removing the '\n' causes a printk not to be printed when it is called,
> which can cause long-delayed messages and subtle problems when
> debugging. Always keep the newline in place even though the kernel
> will add one for you if it's missing.

OK. Thank you for clarifying, Logan. I've seen newlines being removed
in other patches.

Pavel

2018-09-21 20:05:26

by Alexander Duyck

[permalink] [raw]
Subject: Re: [PATCH v4 3/5] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap



On 9/21/2018 12:50 PM, Pasha Tatashin wrote:
>
>
> On 9/20/18 6:29 PM, Alexander Duyck wrote:
>> The ZONE_DEVICE pages were being initialized in two locations. One was with
>> the memory_hotplug lock held and another was outside of that lock. The
>> problem with this is that it was nearly doubling the memory initialization
>> time. Instead of doing this twice, once while holding a global lock and
>> once without, I am opting to defer the initialization to the one outside of
>> the lock. This allows us to avoid serializing the overhead for memory init
>> and we can instead focus on per-node init times.
>>
>> One issue I encountered is that devm_memremap_pages and
>> hmm_devmem_pages_create were initializing only the pgmap field the same
>> way. One wasn't initializing hmm_data, and the other was initializing it to
>> a poison value. Since this is something that is exposed to the driver in
>> the case of hmm I am opting for a third option and just initializing
>> hmm_data to 0 since this is going to be exposed to unknown third party
>> drivers.
>>
>> Signed-off-by: Alexander Duyck <[email protected]>
>
>> +void __ref memmap_init_zone_device(struct zone *zone,
>> + unsigned long start_pfn,
>> + unsigned long size,
>> + struct dev_pagemap *pgmap)
>> +{
>> + unsigned long pfn, end_pfn = start_pfn + size;
>> + struct pglist_data *pgdat = zone->zone_pgdat;
>> + unsigned long zone_idx = zone_idx(zone);
>> + unsigned long start = jiffies;
>> + int nid = pgdat->node_id;
>> +
>> + if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
>> + return;
>> +
>> + /*
>> + * The call to memmap_init_zone should have already taken care
>> + * of the pages reserved for the memmap, so we can just jump to
>> + * the end of that region and start processing the device pages.
>> + */
>> + if (pgmap->altmap_valid) {
>> + struct vmem_altmap *altmap = &pgmap->altmap;
>> +
>> + start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
>> + size = end_pfn - start_pfn;
>> + }
>> +
>> + for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>> + struct page *page = pfn_to_page(pfn);
>> +
>> + __init_single_page(page, pfn, zone_idx, nid);
>> +
>> + /*
>> + * Mark page reserved as it will need to wait for onlining
>> + * phase for it to be fully associated with a zone.
>> + *
>> + * We can use the non-atomic __set_bit operation for setting
>> + * the flag as we are still initializing the pages.
>> + */
>> + __SetPageReserved(page);
>> +
>> + /*
>> + * ZONE_DEVICE pages union ->lru with a ->pgmap back
>> + * pointer and hmm_data. It is a bug if a ZONE_DEVICE
>> + * page is ever freed or placed on a driver-private list.
>> + */
>> + page->pgmap = pgmap;
>> + page->hmm_data = 0;
>
> __init_single_page()
> mm_zero_struct_page()
>
> Takes care of zeroing, no need to do another store here.

The problem is __init_single_page also calls INIT_LIST_HEAD which I
believe sets the prev pointer which overlaps with hmm_data.
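
For reference, the overlap in question, simplified from struct page in
include/linux/mm_types.h around v4.19 (other union members and padding
omitted):

	struct page {
		unsigned long flags;
		union {
			struct {	/* page cache and anon pages */
				struct list_head lru;
				/* lru.next aliases pgmap and
				 * lru.prev aliases hmm_data */
				/* ... mapping, index, private ... */
			};
			struct {	/* ZONE_DEVICE pages */
				struct dev_pagemap *pgmap;
				unsigned long hmm_data;
				/* ... */
			};
		};
		/* ... */
	};

So INIT_LIST_HEAD(&page->lru) in __init_single_page() writes lru.prev,
clobbering whatever mm_zero_struct_page() put in hmm_data.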

>
> Looks good otherwise.
>
> Reviewed-by: Pavel Tatashin <[email protected]>
>

Thanks for the review.

2018-09-21 20:14:55

by Pasha Tatashin

[permalink] [raw]
Subject: Re: [PATCH v4 3/5] mm: Defer ZONE_DEVICE page initialization to the point where we init pgmap


>>> +		page->pgmap = pgmap;
>>> +		page->hmm_data = 0;
>>
>> __init_single_page()
>>    mm_zero_struct_page()
>>
>> Takes care of zeroing, no need to do another store here.
>
> The problem is __init_single_page also calls INIT_LIST_HEAD which I
> believe sets the prev pointer which overlaps with hmm_data.

Indeed it does:

INIT_LIST_HEAD(&page->lru) overlaps with hmm_data, and previously
list_del(&page->lru) was called to remove the page from the list.

And now I see you also mentioned this in the comments. I also prefer
having it zeroed instead of left poisoned or uninitialized. The change
looks good.

Thank you,
Pavel

>
>>
>> Looks good otherwise.
>>
>> Reviewed-by: Pavel Tatashin <[email protected]>
>>
>
> Thanks for the review.
>

2018-09-29 09:17:15

by Chen, Rong A

[permalink] [raw]
Subject: [LKP] [async] 06f4f5bfb3: BUG:sleeping_function_called_from_invalid_context_at_include/linux/percpu-rwsem.h

FYI, we noticed the following commit (built with gcc-7):

commit: 06f4f5bfb3404db7b4c45b0e4757b1e9a76cdd9a ("[PATCH v4 4/5] async: Add support for queueing on specific node")
url: https://github.com/0day-ci/linux/commits/Alexander-Duyck/Address-issues-slowing-persistent-memory-initialization/20180921-225440


in testcase: fio-basic
with following parameters:

runtime: 300s
disk: 1HDD
fs: xfs
nr_task: 1
test_size: 128G
rw: write
bs: 4k
ioengine: sync
ucode: 0x42d
cpufreq_governor: performance
fs2: nfsv4

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio


on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):


+-----------------------------------------------------------------------------------+------------+------------+
| | 0f537b5505 | 06f4f5bfb3 |
+-----------------------------------------------------------------------------------+------------+------------+
| boot_successes | 3 | 4 |
| boot_failures | 11 | 9 |
| WARNING:at#for_ip_interrupt_entry/0x | 8 | 5 |
| WARNING:stack_recursion | 7 | 5 |
| WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x | 3 | 3 |
| BUG:sleeping_function_called_from_invalid_context_at_include/linux/percpu-rwsem.h | 0 | 9 |
+-----------------------------------------------------------------------------------+------------+------------+



[ 16.233052] BUG: sleeping function called from invalid context at include/linux/percpu-rwsem.h:34
[ 16.245303] in_atomic(): 1, irqs_disabled(): 1, pid: 555, name: scsi_eh_0
[ 16.245306] CPU: 1 PID: 555 Comm: scsi_eh_0 Not tainted 4.19.0-rc4-00184-g06f4f5b #1
[ 16.245309] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 16.275455] Call Trace:
[ 16.279747] dump_stack+0x5c/0x7b
[ 16.284992] ___might_sleep+0xf1/0x110
[ 16.289548] cpus_read_lock+0x18/0x50
[ 16.289552] __async_schedule+0x163/0x210
[ 16.289566] ? scsi_try_target_reset+0x90/0x90
[ 16.289569] ? sas_scsi_recover_host+0x2b9/0x390 [libsas]
[ 16.289571] sas_scsi_recover_host+0x2b9/0x390 [libsas]
[ 16.289576] ? scsi_error_handler+0x3b/0x620
[ 16.333060] ? scsi_error_handler+0x9a/0x620
[ 16.333064] ? scsi_try_target_reset+0x90/0x90
[ 16.349819] ? __wake_up_common+0x76/0x170
[ 16.355945] ? scsi_eh_get_sense+0x240/0x240
[ 16.362271] kthread+0x11e/0x140
[ 16.367041] ? kthread_associate_blkcg+0xb0/0xb0
[ 16.367048] ret_from_fork+0x35/0x40
[ 16.547582] ata7.00: ATA-8: WDC WD1003FBYZ-010FB0, 01.01V03, max UDMA/133
[ 16.555799] ata7.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 32)
[ 16.566308] ata7.00: configured for UDMA/133
[ 16.571765] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
[ 16.582543] scsi 0:0:0:0: Direct-Access ATA WDC WD1003FBYZ-0 1V03 PQ: 0 ANSI: 5
[ 16.592293] sas: DONE DISCOVERY on port 0, pid:698, result:0
[ 16.602838] scsi 0:0:0:0: Attached scsi generic sg0 type 0
[ 16.611425] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
[ 16.620677] sd 0:0:0:0: [sda] Write Protect is off
[ 16.626659] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 16.632977] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 16.692308] sda: sda1 sda2 sda3
[ 16.697284] sd 0:0:0:0: [sda] Attached SCSI disk
[ 17.073992] raid6: sse2x1 gen() 7636 MB/s
[ 17.095989] raid6: sse2x1 xor() 5824 MB/s
[ 17.117991] raid6: sse2x2 gen() 9554 MB/s
[ 17.139990] raid6: sse2x2 xor() 6521 MB/s
[ 17.161987] raid6: sse2x4 gen() 11550 MB/s
[ 17.183991] raid6: sse2x4 xor() 7769 MB/s
[ 17.189383] raid6: using algorithm sse2x4 gen() 11550 MB/s
[ 17.196124] raid6: .... xor() 7769 MB/s, rmw enabled
[ 17.202273] raid6: using ssse3x2 recovery algorithm
[ 17.212489] xor: automatically using best checksumming function avx
[ 17.247296] Btrfs loaded, crc32c=crc32c-generic
[ 17.254019] BTRFS: device fsid 83e57bc1-35de-4d61-8929-dc8aa3d711c2 devid 1 transid 4454 /dev/sda3
[ 20.961795] Kernel tests: Boot OK!
[ 22.781499] BTRFS info (device sda3): disk space caching is enabled
[ 22.788537] BTRFS info (device sda3): has skinny extents
[ 22.878137] netpoll: netconsole: local port 6665
[ 22.883329] netpoll: netconsole: local IPv4 address 0.0.0.0
[ 22.889559] netpoll: netconsole: interface 'eth0'
[ 22.894835] netpoll: netconsole: remote port 6644
[ 22.900116] netpoll: netconsole: remote IPv4 address 192.168.2.1
[ 22.906837] netpoll: netconsole: remote ethernet address ff:ff:ff:ff:ff:ff
[ 22.914544] netpoll: netconsole: local IP 192.168.2.17
[ 22.920389] console [netcon0] enabled
[ 22.924517] netconsole: network logging started
[ 24.381587] install debs round one: dpkg -i --force-confdef --force-depends /opt/deb/sysstat_11.4.3-2_amd64.deb
[ 24.395562] /opt/deb/gawk_1%3a4.1.4+dfsg-1_amd64.deb
[ 24.403952] Selecting previously unselected package sysstat.
[ 24.413629] (Reading database ... 16106 files and directories currently installed.)
[ 24.425242] Preparing to unpack .../deb/sysstat_11.4.3-2_amd64.deb ...
[ 24.434957] Unpacking sysstat (11.4.3-2) ...
[ 24.442519] Selecting previously unselected package gawk.
[ 24.451624] Preparing to unpack .../gawk_1%3a4.1.4+dfsg-1_amd64.deb ...
[ 24.461553] Unpacking gawk (1:4.1.4+dfsg-1) ...
[ 24.469073] Setting up sysstat (11.4.3-2) ...
[ 24.476496] Setting up gawk (1:4.1.4+dfsg-1) ...
[ 24.484539] Processing triggers for systemd (232-25+deb9u2) ...
[ 24.494683] 23 Sep 03:19:18 ntpdate[865]: step time server 192.168.1.1 offset 18.120275 sec
[ 24.506357] /lkp/lkp/src/bin/run-lkp
[ 24.597525] device-mapper: uevent: version 1.0.3
[ 24.602963] device-mapper: ioctl: 4.39.0-ioctl (2018-04-03) initialised: [email protected]
[ 24.615614] random: crng init done


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml



Thanks,
Rong Chen


Attachments:
(No filename) (7.20 kB)
config-4.19.0-rc4-00184-g06f4f5b (170.48 kB)
job-script (7.74 kB)
dmesg.xz (24.58 kB)
job.yaml (5.32 kB)
reproduce (1.02 kB)
Download all attachments