From: Oscar Salvador <[email protected]>
This patchset aims to solve the issues reported in [1] and [2].
Due to the lack of feedback on previous versions, I decided to play it safe,
so I reverted some of the changes I made in RFCv3:
1) It is no longer based on [3], although that would make the code simpler
and the changes smaller.
2) The hotplug lock stays in HMM/devm, mainly because I am not sure whether
it is ok to leave the kasan calls outside the lock or not.
If we think that this can be done, the hotplug lock can be moved
within add/del_device_memory, which would be nicer IMHO.
3) Although I think that init_currently_empty_zone should be protected
by the span_seqlock since it touches zone_start_pfn, I decided to leave
it as it is right now.
The main point of moving it within the lock was to be able to move
move_pfn_range_to_zone out of the hotplug lock for HMM/devm code.
The main point of this patchset is to move all the page/zone handling
from the hot-remove path back to the offlining stage.
In this way, we can better split up what each part does:
* hot-add path:
- Create a new resource for the hot-added memory
- Create memory sections for the hot-added memory
- Create the memblocks representing the hot-added memory
* online path:
- Re-adjust zone/pgdat nr of pages (managed, spanned, present)
- Initialize the pages from the new memory-range
- Online memory sections
* offline path:
- Offline memory sections
- Re-adjust zone/pgdat nr of pages (managed, spanned, present)
* hot-remove path:
- Remove memory sections
- Remove memblocks
- Remove resources
So, the hot-add/remove stages should only care about sections and memblocks,
while all the zone/page handling should belong to the online/offline stage.
Another thing: for the sake of reviewability, I split the patchset
into 5 parts, but patch3 could be combined into patch4.
This patchset is based on top of mmotm.
[1] https://patchwork.kernel.org/patch/10547445/
[2] https://www.spinics.net/lists/linux-mm/msg161316.html
[3] https://patchwork.kernel.org/cover/10613425/
Oscar Salvador (5):
mm/memory_hotplug: Add nid parameter to arch_remove_memory
mm/memory_hotplug: Create add/del_device_memory functions
mm/memory_hotplug: Check for IORESOURCE_SYSRAM in
release_mem_region_adjustable
mm/memory_hotplug: Move zone/pages handling to offline stage
mm/memory-hotplug: Rework unregister_mem_sect_under_nodes
arch/ia64/mm/init.c | 6 +-
arch/powerpc/mm/mem.c | 14 +---
arch/s390/mm/init.c | 2 +-
arch/sh/mm/init.c | 6 +-
arch/x86/mm/init_32.c | 6 +-
arch/x86/mm/init_64.c | 11 +---
drivers/base/memory.c | 9 ++-
drivers/base/node.c | 38 ++---------
include/linux/memory.h | 2 +-
include/linux/memory_hotplug.h | 21 ++++--
include/linux/node.h | 9 ++-
kernel/memremap.c | 13 ++--
kernel/resource.c | 16 +++++
mm/hmm.c | 35 +++++-----
mm/memory_hotplug.c | 142 +++++++++++++++++++++++++----------------
mm/sparse.c | 6 +-
16 files changed, 177 insertions(+), 159 deletions(-)
--
2.13.6
From: Oscar Salvador <[email protected]>
This tries to address another issue about accessing
uninitialized pages.
Jonathan reported a problem [1] where we can access stale pages
in case we hot-remove memory without onlining it first.
This time it is in unregister_mem_sect_under_nodes.
This function tries to get the nid from the pfn and then
tries to remove the symlink between mem_blk <-> nid and vice versa.
Since we already know the nid in remove_memory(), we can pass
it down the chain to unregister_mem_sect_under_nodes.
There we can just remove the symlinks without needing
to look at the pages.
[1] https://www.spinics.net/lists/linux-mm/msg161316.html
Signed-off-by: Oscar Salvador <[email protected]>
---
drivers/base/memory.c | 9 ++++-----
drivers/base/node.c | 38 +++++++-------------------------------
include/linux/memory.h | 2 +-
include/linux/node.h | 9 ++++-----
mm/memory_hotplug.c | 2 +-
5 files changed, 17 insertions(+), 43 deletions(-)
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 0e5985682642..3d8c65d84bea 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -744,8 +744,7 @@ unregister_memory(struct memory_block *memory)
device_unregister(&memory->dev);
}
-static int remove_memory_section(unsigned long node_id,
- struct mem_section *section, int phys_device)
+static int remove_memory_section(unsigned long nid, struct mem_section *section)
{
struct memory_block *mem;
@@ -759,7 +758,7 @@ static int remove_memory_section(unsigned long node_id,
if (!mem)
goto out_unlock;
- unregister_mem_sect_under_nodes(mem, __section_nr(section));
+ unregister_mem_sect_under_nodes(nid, mem);
mem->section_count--;
if (mem->section_count == 0)
@@ -772,12 +771,12 @@ static int remove_memory_section(unsigned long node_id,
return 0;
}
-int unregister_memory_section(struct mem_section *section)
+int unregister_memory_section(int nid, struct mem_section *section)
{
if (!present_section(section))
return -EINVAL;
- return remove_memory_section(0, section, 0);
+ return remove_memory_section(nid, section);
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
diff --git a/drivers/base/node.c b/drivers/base/node.c
index 86d6cd92ce3d..65bc5920bd3d 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -453,40 +453,16 @@ int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg)
return 0;
}
-/* unregister memory section under all nodes that it spans */
-int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
- unsigned long phys_index)
+/*
+ * This mem_blk is going to be removed, so let us remove the link
+ * to the node and vice versa
+ */
+void unregister_mem_sect_under_nodes(int nid, struct memory_block *mem_blk)
{
- NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL);
- unsigned long pfn, sect_start_pfn, sect_end_pfn;
-
- if (!mem_blk) {
- NODEMASK_FREE(unlinked_nodes);
- return -EFAULT;
- }
- if (!unlinked_nodes)
- return -ENOMEM;
- nodes_clear(*unlinked_nodes);
-
- sect_start_pfn = section_nr_to_pfn(phys_index);
- sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1;
- for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
- int nid;
-
- nid = get_nid_for_pfn(pfn);
- if (nid < 0)
- continue;
- if (!node_online(nid))
- continue;
- if (node_test_and_set(nid, *unlinked_nodes))
- continue;
- sysfs_remove_link(&node_devices[nid]->dev.kobj,
+ sysfs_remove_link(&node_devices[nid]->dev.kobj,
kobject_name(&mem_blk->dev.kobj));
- sysfs_remove_link(&mem_blk->dev.kobj,
+ sysfs_remove_link(&mem_blk->dev.kobj,
kobject_name(&node_devices[nid]->dev.kobj));
- }
- NODEMASK_FREE(unlinked_nodes);
- return 0;
}
int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
diff --git a/include/linux/memory.h b/include/linux/memory.h
index a6ddefc60517..d75ec88ca09d 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -113,7 +113,7 @@ extern int register_memory_isolate_notifier(struct notifier_block *nb);
extern void unregister_memory_isolate_notifier(struct notifier_block *nb);
int hotplug_memory_register(int nid, struct mem_section *section);
#ifdef CONFIG_MEMORY_HOTREMOVE
-extern int unregister_memory_section(struct mem_section *);
+extern int unregister_memory_section(int nid, struct mem_section *);
#endif
extern int memory_dev_init(void);
extern int memory_notify(unsigned long val, void *v);
diff --git a/include/linux/node.h b/include/linux/node.h
index 257bb3d6d014..dddead9937ab 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -72,8 +72,8 @@ extern int register_cpu_under_node(unsigned int cpu, unsigned int nid);
extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid);
extern int register_mem_sect_under_node(struct memory_block *mem_blk,
void *arg);
-extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
- unsigned long phys_index);
+extern void unregister_mem_sect_under_nodes(int nid,
+ struct memory_block *mem_blk);
#ifdef CONFIG_HUGETLBFS
extern void register_hugetlbfs_with_node(node_registration_func_t doregister,
@@ -105,10 +105,9 @@ static inline int register_mem_sect_under_node(struct memory_block *mem_blk,
{
return 0;
}
-static inline int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
- unsigned long phys_index)
+static inline void unregister_mem_sect_under_nodes(int nid,
+ struct memory_block *mem_blk)
{
- return 0;
}
static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 6b98321aa52f..66ccbb5b8a88 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -528,7 +528,7 @@ static int __remove_section(int nid, struct mem_section *ms,
if (!valid_section(ms))
return ret;
- ret = unregister_memory_section(ms);
+ ret = unregister_memory_section(nid, ms);
if (ret)
return ret;
--
2.13.6
From: Oscar Salvador <[email protected]>
This patch is only a preparation for the follow-up patches.
Passing the nid will allow us to get rid
of the zone parameter in the patches that follow.
Signed-off-by: Oscar Salvador <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
---
arch/ia64/mm/init.c | 2 +-
arch/powerpc/mm/mem.c | 3 ++-
arch/s390/mm/init.c | 2 +-
arch/sh/mm/init.c | 2 +-
arch/x86/mm/init_32.c | 2 +-
arch/x86/mm/init_64.c | 3 ++-
include/linux/memory_hotplug.h | 2 +-
kernel/memremap.c | 4 +++-
mm/hmm.c | 4 +++-
mm/memory_hotplug.c | 2 +-
10 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index d5e12ff1d73c..904fe55e10fc 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -661,7 +661,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
}
#ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5551f5870dcc..6db8f527babb 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -139,7 +139,8 @@ int __meminit arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *
}
#ifdef CONFIG_MEMORY_HOTREMOVE
-int __meminit arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int __meminit arch_remove_memory(int nid, u64 start, u64 size,
+ struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 76d0708438e9..b7503f8b7725 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -242,7 +242,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
}
#ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
/*
* There is no hardware or firmware interface which could trigger a
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index c8c13c777162..a8e5c0e00fca 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -443,7 +443,7 @@ EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif
#ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = PFN_DOWN(start);
unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 49ecf5ecf6d3..85c94f9a87f8 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -860,7 +860,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
}
#ifdef CONFIG_MEMORY_HOTREMOVE
-int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5fab264948c2..449958da97a4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1147,7 +1147,8 @@ kernel_physical_mapping_remove(unsigned long start, unsigned long end)
remove_pagetable(start, end, true, NULL);
}
-int __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+int __ref arch_remove_memory(int nid, u64 start, u64 size,
+ struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 84e9ae205930..786cdfc9a974 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -107,7 +107,7 @@ static inline bool movable_node_is_enabled(void)
}
#ifdef CONFIG_MEMORY_HOTREMOVE
-extern int arch_remove_memory(u64 start, u64 size,
+extern int arch_remove_memory(int nid, u64 start, u64 size,
struct vmem_altmap *altmap);
extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
unsigned long nr_pages, struct vmem_altmap *altmap);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 9eced2cc9f94..c95df6ed2d4a 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -87,6 +87,7 @@ static void devm_memremap_pages_release(void *data)
struct resource *res = &pgmap->res;
resource_size_t align_start, align_size;
unsigned long pfn;
+ int nid;
for_each_device_pfn(pfn, pgmap)
put_page(pfn_to_page(pfn));
@@ -100,9 +101,10 @@ static void devm_memremap_pages_release(void *data)
align_start = res->start & ~(SECTION_SIZE - 1);
align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
- align_start;
+ nid = dev_to_node(dev);
mem_hotplug_begin();
- arch_remove_memory(align_start, align_size, pgmap->altmap_valid ?
+ arch_remove_memory(nid, align_start, align_size, pgmap->altmap_valid ?
&pgmap->altmap : NULL);
kasan_remove_zero_shadow(__va(align_start), align_size);
mem_hotplug_done();
diff --git a/mm/hmm.c b/mm/hmm.c
index 774d684fa2b4..42d79bcc8aab 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -995,6 +995,7 @@ static void hmm_devmem_release(struct device *dev, void *data)
unsigned long start_pfn, npages;
struct zone *zone;
struct page *page;
+ int nid;
if (percpu_ref_tryget_live(&devmem->ref)) {
dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
@@ -1007,12 +1008,13 @@ static void hmm_devmem_release(struct device *dev, void *data)
page = pfn_to_page(start_pfn);
zone = page_zone(page);
+ nid = zone->zone_pgdat->node_id;
mem_hotplug_begin();
if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
__remove_pages(zone, start_pfn, npages, NULL);
else
- arch_remove_memory(start_pfn << PAGE_SHIFT,
+ arch_remove_memory(nid, start_pfn << PAGE_SHIFT,
npages << PAGE_SHIFT, NULL);
mem_hotplug_done();
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index dbbb94547ad0..33d448314b3f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1875,7 +1875,7 @@ void __ref __remove_memory(int nid, u64 start, u64 size)
memblock_free(start, size);
memblock_remove(start, size);
- arch_remove_memory(start, size, NULL);
+ arch_remove_memory(nid, start, size, NULL);
try_offline_node(nid);
--
2.13.6
From: Oscar Salvador <[email protected]>
Currently, we decrement zone/node spanned_pages during the
hot-remove path.
The picture we have now is:
- hot-add memory:
a) Allocate a new resource based on the hot-added memory
b) Add memory sections for the hot-added memory
- online memory:
c) Re-adjust zone/pgdat nr of pages (managed, spanned, present)
d) Initialize the pages from the new memory-range
e) Online memory sections
- offline memory:
f) Offline memory sections
g) Re-adjust zone/pgdat nr of managed/present pages
- hot-remove memory:
h) Re-adjust zone/pgdat nr of spanned pages
i) Remove memory sections
j) Release resource
Besides being inconsistent with the current code, this
implies that we can access stale pages if that memory is never
onlined.
So we should move h) to the offline stage.
Hot-remove path should only care about memory sections and memory
blocks.
There is one particularity, and that is HMM/devm.
When the memory is being handled by HMM/devm, this memory is moved to
ZONE_DEVICE by means of move_pfn_range_to_zone, but since this memory
does not get "onlined", the sections do not get onlined either.
This is a problem because shrink_zone_pgdat_pages will now look for
online sections, so we need to explicitly offline the sections
before calling it.
add_device_memory takes care of onlining them, and del_device_memory
offlines them.
Finally, shrink_zone_pgdat_pages() is moved to offline_pages(), so now
all page/zone handling is taken care of in the online/offline stage.
So now we will have:
- hot-add memory:
a) Allocate a new resource based on the hot-added memory
b) Add memory sections for the hot-added memory
- online memory:
c) Re-adjust zone/pgdat nr of pages (managed, spanned, present)
d) Initialize the pages from the new memory-range
e) Online memory sections
- offline memory:
f) Offline memory sections
g) Re-adjust zone/pgdat nr of managed/present/spanned pages
- hot-remove memory:
h) Remove memory sections
i) Release resource
Signed-off-by: Oscar Salvador <[email protected]>
---
arch/ia64/mm/init.c | 4 +-
arch/powerpc/mm/mem.c | 11 +----
arch/sh/mm/init.c | 4 +-
arch/x86/mm/init_32.c | 4 +-
arch/x86/mm/init_64.c | 8 +---
include/linux/memory_hotplug.h | 8 ++--
mm/memory_hotplug.c | 99 ++++++++++++++++++++----------------------
mm/sparse.c | 6 +--
8 files changed, 58 insertions(+), 86 deletions(-)
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 904fe55e10fc..a6b5f351620c 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -665,11 +665,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
- struct zone *zone;
int ret;
- zone = page_zone(pfn_to_page(start_pfn));
- ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+ ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
if (ret)
pr_warn("%s: Problem encountered in __remove_pages() as"
" ret=%d\n", __func__, ret);
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 6db8f527babb..14b6c859bc04 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -144,18 +144,9 @@ int __meminit arch_remove_memory(int nid, u64 start, u64 size,
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
- struct page *page;
int ret;
- /*
- * If we have an altmap then we need to skip over any reserved PFNs
- * when querying the zone.
- */
- page = pfn_to_page(start_pfn);
- if (altmap)
- page += vmem_altmap_offset(altmap);
-
- ret = __remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
+ ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
if (ret)
return ret;
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index a8e5c0e00fca..6e80a7a50f8b 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -447,11 +447,9 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = PFN_DOWN(start);
unsigned long nr_pages = size >> PAGE_SHIFT;
- struct zone *zone;
int ret;
- zone = page_zone(pfn_to_page(start_pfn));
- ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+ ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
if (unlikely(ret))
pr_warn("%s: Failed, __remove_pages() == %d\n", __func__,
ret);
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 85c94f9a87f8..4719af142c10 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -864,10 +864,8 @@ int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
- struct zone *zone;
- zone = page_zone(pfn_to_page(start_pfn));
- return __remove_pages(zone, start_pfn, nr_pages, altmap);
+ return __remove_pages(nid, start_pfn, nr_pages, altmap);
}
#endif
#endif
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 449958da97a4..5f73b6deb794 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1152,15 +1152,9 @@ int __ref arch_remove_memory(int nid, u64 start, u64 size,
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
- struct page *page = pfn_to_page(start_pfn);
- struct zone *zone;
int ret;
- /* With altmap the first mapped page is offset from @start */
- if (altmap)
- page += vmem_altmap_offset(altmap);
- zone = page_zone(page);
- ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+ ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
WARN_ON_ONCE(ret);
kernel_physical_mapping_remove(start, start + size);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index cf014d5edbb2..a1be5e76be14 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -109,8 +109,8 @@ static inline bool movable_node_is_enabled(void)
#ifdef CONFIG_MEMORY_HOTREMOVE
extern int arch_remove_memory(int nid, u64 start, u64 size,
struct vmem_altmap *altmap);
-extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
- unsigned long nr_pages, struct vmem_altmap *altmap);
+extern int __remove_pages(int nid, unsigned long start_pfn,
+ unsigned long nr_pages, struct vmem_altmap *altmap);
#ifdef CONFIG_ZONE_DEVICE
extern int del_device_memory(int nid, unsigned long start, unsigned long size,
@@ -346,8 +346,8 @@ extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
extern bool is_memblock_offlined(struct memory_block *mem);
extern int sparse_add_one_section(struct pglist_data *pgdat,
unsigned long start_pfn, struct vmem_altmap *altmap);
-extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
- unsigned long map_offset, struct vmem_altmap *altmap);
+extern void sparse_remove_one_section(int nid, struct mem_section *ms,
+ unsigned long map_offset, struct vmem_altmap *altmap);
extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
unsigned long pnum);
extern bool allow_online_pfn_range(int nid, unsigned long pfn, unsigned long nr_pages,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5874aceb81ac..6b98321aa52f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -319,12 +319,10 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
unsigned long start_pfn,
unsigned long end_pfn)
{
- struct mem_section *ms;
-
for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
- ms = __pfn_to_section(start_pfn);
+ struct mem_section *ms = __pfn_to_section(start_pfn);
- if (unlikely(!valid_section(ms)))
+ if (unlikely(!online_section(ms)))
continue;
if (unlikely(pfn_to_nid(start_pfn) != nid))
@@ -344,15 +342,14 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
unsigned long start_pfn,
unsigned long end_pfn)
{
- struct mem_section *ms;
unsigned long pfn;
/* pfn is the end pfn of a memory section. */
pfn = end_pfn - 1;
for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
- ms = __pfn_to_section(pfn);
+ struct mem_section *ms = __pfn_to_section(pfn);
- if (unlikely(!valid_section(ms)))
+ if (unlikely(!online_section(ms)))
continue;
if (unlikely(pfn_to_nid(pfn) != nid))
@@ -414,7 +411,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
ms = __pfn_to_section(pfn);
- if (unlikely(!valid_section(ms)))
+ if (unlikely(!online_section(ms)))
continue;
if (page_zone(pfn_to_page(pfn)) != zone)
@@ -482,7 +479,7 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
ms = __pfn_to_section(pfn);
- if (unlikely(!valid_section(ms)))
+ if (unlikely(!online_section(ms)))
continue;
if (pfn_to_nid(pfn) != nid)
@@ -501,23 +498,31 @@ static void shrink_pgdat_span(struct pglist_data *pgdat,
pgdat->node_spanned_pages = 0;
}
-static void __remove_zone(struct zone *zone, unsigned long start_pfn)
+static void shrink_zone_pgdat_pages(struct zone *zone, unsigned long start_pfn,
+ unsigned long end_pfn, unsigned long offlined_pages)
{
struct pglist_data *pgdat = zone->zone_pgdat;
int nr_pages = PAGES_PER_SECTION;
unsigned long flags;
+ unsigned long pfn;
- pgdat_resize_lock(zone->zone_pgdat, &flags);
- shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
- shrink_pgdat_span(pgdat, start_pfn, start_pfn + nr_pages);
- pgdat_resize_unlock(zone->zone_pgdat, &flags);
+ zone->present_pages -= offlined_pages;
+ clear_zone_contiguous(zone);
+
+ pgdat_resize_lock(pgdat, &flags);
+ pgdat->node_present_pages -= offlined_pages;
+ for (pfn = start_pfn; pfn < end_pfn; pfn += nr_pages) {
+ shrink_zone_span(zone, pfn, pfn + nr_pages);
+ shrink_pgdat_span(pgdat, pfn, pfn + nr_pages);
+ }
+ pgdat_resize_unlock(pgdat, &flags);
+
+ set_zone_contiguous(zone);
}
-static int __remove_section(struct zone *zone, struct mem_section *ms,
- unsigned long map_offset, struct vmem_altmap *altmap)
+static int __remove_section(int nid, struct mem_section *ms,
+ unsigned long map_offset, struct vmem_altmap *altmap)
{
- unsigned long start_pfn;
- int scn_nr;
int ret = -EINVAL;
if (!valid_section(ms))
@@ -527,17 +532,13 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
if (ret)
return ret;
- scn_nr = __section_nr(ms);
- start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
- __remove_zone(zone, start_pfn);
-
- sparse_remove_one_section(zone, ms, map_offset, altmap);
+ sparse_remove_one_section(nid, ms, map_offset, altmap);
return 0;
}
/**
* __remove_pages() - remove sections of pages from a zone
- * @zone: zone from which pages need to be removed
+ * @nid: nid from which pages belong to
* @phys_start_pfn: starting pageframe (must be aligned to start of a section)
* @nr_pages: number of pages to remove (must be multiple of section size)
* @altmap: alternative device page map or %NULL if default memmap is used
@@ -547,35 +548,28 @@ static int __remove_section(struct zone *zone, struct mem_section *ms,
* sure that pages are marked reserved and zones are adjust properly by
* calling offline_pages().
*/
-int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
- unsigned long nr_pages, struct vmem_altmap *altmap)
+int __remove_pages(int nid, unsigned long phys_start_pfn,
+ unsigned long nr_pages, struct vmem_altmap *altmap)
{
unsigned long i;
unsigned long map_offset = 0;
int sections_to_remove, ret = 0;
+ resource_size_t start, size;
- /* In the ZONE_DEVICE case device driver owns the memory region */
- if (is_dev_zone(zone)) {
- if (altmap)
- map_offset = vmem_altmap_offset(altmap);
- } else {
- resource_size_t start, size;
+ start = phys_start_pfn << PAGE_SHIFT;
+ size = nr_pages * PAGE_SIZE;
- start = phys_start_pfn << PAGE_SHIFT;
- size = nr_pages * PAGE_SIZE;
+ if (altmap)
+ map_offset = vmem_altmap_offset(altmap);
- ret = release_mem_region_adjustable(&iomem_resource, start,
- size);
- if (ret) {
- resource_size_t endres = start + size - 1;
+ ret = release_mem_region_adjustable(&iomem_resource, start, size);
+ if (ret) {
+ resource_size_t endres = start + size - 1;
- pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
- &start, &endres, ret);
- }
+ pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
+ &start, &endres, ret);
}
- clear_zone_contiguous(zone);
-
/*
* We can only remove entire sections
*/
@@ -586,15 +580,13 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
for (i = 0; i < sections_to_remove; i++) {
unsigned long pfn = phys_start_pfn + i*PAGES_PER_SECTION;
- ret = __remove_section(zone, __pfn_to_section(pfn), map_offset,
- altmap);
+ ret = __remove_section(nid, __pfn_to_section(pfn), map_offset,
+ altmap);
map_offset = 0;
if (ret)
break;
}
- set_zone_contiguous(zone);
-
return ret;
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
@@ -1566,7 +1558,6 @@ static int __ref __offline_pages(unsigned long start_pfn,
unsigned long pfn, nr_pages;
long offlined_pages;
int ret, node;
- unsigned long flags;
unsigned long valid_start, valid_end;
struct zone *zone;
struct memory_notify arg;
@@ -1644,11 +1635,9 @@ static int __ref __offline_pages(unsigned long start_pfn,
undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
/* removal success */
adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
- zone->present_pages -= offlined_pages;
- pgdat_resize_lock(zone->zone_pgdat, &flags);
- zone->zone_pgdat->node_present_pages -= offlined_pages;
- pgdat_resize_unlock(zone->zone_pgdat, &flags);
+ /* Here we will shrink zone/node's spanned/present_pages */
+ shrink_zone_pgdat_pages(zone, valid_start, valid_end, offlined_pages);
init_per_zone_wmark_min();
@@ -1899,10 +1888,13 @@ int del_device_memory(int nid, unsigned long start, unsigned long size,
unsigned long nr_pages = size >> PAGE_SHIFT;
struct zone *zone = page_zone(pfn_to_page(pfn));
+ offline_mem_sections(start_pfn, start_pfn + nr_pages);
+ shrink_zone_pgdat_pages(zone, start_pfn, start_pfn + nr_pages, 0);
+
if (mapping)
ret = arch_remove_memory(nid, start, size, altmap);
else
- ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+ ret = __remove_pages(nid, start_pfn, nr_pages, altmap);
return ret;
}
@@ -1925,6 +1917,7 @@ int add_device_memory(int nid, unsigned long start, unsigned long size,
if (!ret) {
struct zone *zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
+ online_mem_sections(start_pfn, start_pfn + nr_pages);
move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);
}
diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..76974479a14b 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -765,12 +765,12 @@ static void free_section_usemap(struct page *memmap, unsigned long *usemap,
free_map_bootmem(memmap);
}
-void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
- unsigned long map_offset, struct vmem_altmap *altmap)
+void sparse_remove_one_section(int nid, struct mem_section *ms,
+ unsigned long map_offset, struct vmem_altmap *altmap)
{
struct page *memmap = NULL;
unsigned long *usemap = NULL, flags;
- struct pglist_data *pgdat = zone->zone_pgdat;
+ struct pglist_data *pgdat = NODE_DATA(nid);
pgdat_resize_lock(pgdat, &flags);
if (ms->section_mem_map) {
--
2.13.6
From: Oscar Salvador <[email protected]>
This is a preparation for the next patch.
Currently, we only call release_mem_region_adjustable() in __remove_pages
if the zone is not ZONE_DEVICE, because resources that belong to
HMM/devm are released by those subsystems themselves via
devm_release_mem_region.
Since we do not want to touch any zone/page state during the removal
of the memory (only during the offlining), we do not want to check for
the zone here.
So we need another way to tell release_mem_region_adjustable() not to
release the resource in case it belongs to HMM/devm.
HMM/devm acquires/releases a resource through
devm_request_mem_region/devm_release_mem_region.
These resources have the flag IORESOURCE_MEM, while resources acquired by
hot-add memory path (register_memory_resource()) contain
IORESOURCE_SYSTEM_RAM.
So, we can check for this flag in release_mem_region_adjustable, and if
the resource does not contain that flag, we know that we are dealing with
an HMM/devm resource, so we can back off.
Signed-off-by: Oscar Salvador <[email protected]>
---
kernel/resource.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/kernel/resource.c b/kernel/resource.c
index 81937830a42f..c45decd7d6af 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1272,6 +1272,22 @@ int release_mem_region_adjustable(struct resource *parent,
continue;
}
+ /*
+ * All memory regions added from memory-hotplug path
+ * have the flag IORESOURCE_SYSTEM_RAM.
+ * If the resource does not have this flag, we know that
+ * we are dealing with a resource coming from HMM/devm.
+ * HMM/devm use another mechanism to add/release a resource.
+ * This goes via devm_request_mem_region and
+ * devm_release_mem_region.
+ * HMM/devm take care to release their resources when they want,
+ * so if we are dealing with them, let us just back off here.
+ */
+ if (!(res->flags & IORESOURCE_SYSRAM)) {
+ ret = 0;
+ break;
+ }
+
if (!(res->flags & IORESOURCE_MEM))
break;
--
2.13.6
From: Oscar Salvador <[email protected]>
HMM/devm have a particular handling of memory-hotplug.
They do not go through the common path, and so, they do not
call either offline_pages() or online_pages().
The operations they perform are the following ones:
1) Create the linear mapping in case the memory is not private
2) Initialize the pages and add the sections
3) Move the pages to ZONE_DEVICE
Due to this particular handling of hot-add/remove memory from HMM/devm,
I think it would be nice to provide a helper function in order to
make this cleaner, and not populate other regions with code
that should belong to memory-hotplug.
The helpers are named:
del_device_memory
add_device_memory
The idea is that add_device_memory will be in charge of:
a) call either arch_add_memory() or add_pages(), depending on whether
we want a linear mapping
b) online the memory sections that correspond to the pfn range
c) call move_pfn_range_to_zone() being zone ZONE_DEVICE to
expand zone/pgdat spanned pages and initialize its pages
del_device_memory, on the other hand, will be in charge of:
a) offline the memory sections that correspond to the pfn range
b) call shrink_zone_pgdat_pages(), which shrinks node/zone spanned pages.
c) call either arch_remove_memory() or __remove_pages(), depending on
whether we need to tear down the linear mapping or not
The reason behind step b) from add_device_memory() and step a)
from del_device_memory is that now find_smallest/biggest_section_pfn
will have to check for online sections, and not for valid sections as
they used to do, because we call offline_mem_sections() in
offline_pages().
In order to better split up the patches and ease the review,
this patch only makes case a) work for add_device_memory(),
and case c) for del_device_memory.
The other cases will be added in the next patch.
These two functions have to be called from devm/HMM code:
add_device_memory:
- devm_memremap_pages()
- hmm_devmem_pages_create()
del_device_memory:
- hmm_devmem_release
- devm_memremap_pages_release
One thing I do not know is whether we can move kasan calls out of the
hotplug lock or not.
If we can, we could move the hotplug lock within add/del_device_memory().
Signed-off-by: Oscar Salvador <[email protected]>
---
include/linux/memory_hotplug.h | 11 +++++++++++
kernel/memremap.c | 11 ++++-------
mm/hmm.c | 33 +++++++++++++++++----------------
mm/memory_hotplug.c | 41 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 73 insertions(+), 23 deletions(-)
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 786cdfc9a974..cf014d5edbb2 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -111,8 +111,19 @@ extern int arch_remove_memory(int nid, u64 start, u64 size,
struct vmem_altmap *altmap);
extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
unsigned long nr_pages, struct vmem_altmap *altmap);
+
+#ifdef CONFIG_ZONE_DEVICE
+extern int del_device_memory(int nid, unsigned long start, unsigned long size,
+ struct vmem_altmap *altmap, bool private_mem);
+#endif
+
#endif /* CONFIG_MEMORY_HOTREMOVE */
+#ifdef CONFIG_ZONE_DEVICE
+extern int add_device_memory(int nid, unsigned long start, unsigned long size,
+ struct vmem_altmap *altmap, bool private_mem);
+#endif
+
/* reasonably generic interface to expand the physical pages */
extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
struct vmem_altmap *altmap, bool want_memblock);
diff --git a/kernel/memremap.c b/kernel/memremap.c
index c95df6ed2d4a..b86bba8713b9 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -86,6 +86,8 @@ static void devm_memremap_pages_release(void *data)
struct device *dev = pgmap->dev;
struct resource *res = &pgmap->res;
resource_size_t align_start, align_size;
+ struct vmem_altmap *altmap = pgmap->altmap_valid ?
+ &pgmap->altmap : NULL;
unsigned long pfn;
int nid;
@@ -104,8 +106,7 @@ static void devm_memremap_pages_release(void *data)
nid = dev_to_node(dev);
mem_hotplug_begin();
- arch_remove_memory(nid, align_start, align_size, pgmap->altmap_valid ?
- &pgmap->altmap : NULL);
+ del_device_memory(nid, align_start, align_size, altmap, true);
kasan_remove_zero_shadow(__va(align_start), align_size);
mem_hotplug_done();
@@ -204,11 +205,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
goto err_kasan;
}
- error = arch_add_memory(nid, align_start, align_size, altmap, false);
- if (!error)
- move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
- align_start >> PAGE_SHIFT,
- align_size >> PAGE_SHIFT, altmap);
+ error = add_device_memory(nid, align_start, align_size, altmap, true);
mem_hotplug_done();
if (error)
goto err_add_memory;
diff --git a/mm/hmm.c b/mm/hmm.c
index 42d79bcc8aab..d3e52ae71bd9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -996,6 +996,7 @@ static void hmm_devmem_release(struct device *dev, void *data)
struct zone *zone;
struct page *page;
int nid;
+ bool mapping;
if (percpu_ref_tryget_live(&devmem->ref)) {
dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
@@ -1010,12 +1011,15 @@ static void hmm_devmem_release(struct device *dev, void *data)
zone = page_zone(page);
nid = zone->zone_pgdat->node_id;
- mem_hotplug_begin();
if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
- __remove_pages(zone, start_pfn, npages, NULL);
+ mapping = false;
else
- arch_remove_memory(nid, start_pfn << PAGE_SHIFT,
- npages << PAGE_SHIFT, NULL);
+ mapping = true;
+
+ mem_hotplug_begin();
+ del_device_memory(nid, start_pfn << PAGE_SHIFT, npages << PAGE_SHIFT,
+ NULL,
+ mapping);
mem_hotplug_done();
hmm_devmem_radix_release(resource);
@@ -1026,6 +1030,7 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
resource_size_t key, align_start, align_size, align_end;
struct device *device = devmem->device;
int ret, nid, is_ram;
+ bool mapping;
align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1);
align_size = ALIGN(devmem->resource->start +
@@ -1084,7 +1089,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
if (nid < 0)
nid = numa_mem_id();
- mem_hotplug_begin();
/*
* For device private memory we call add_pages() as we only need to
* allocate and initialize struct page for the device memory. More-
@@ -1096,20 +1100,17 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
* want the linear mapping and thus use arch_add_memory().
*/
if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC)
- ret = arch_add_memory(nid, align_start, align_size, NULL,
- false);
+ mapping = true;
else
- ret = add_pages(nid, align_start >> PAGE_SHIFT,
- align_size >> PAGE_SHIFT, NULL, false);
- if (ret) {
- mem_hotplug_done();
- goto error_add_memory;
- }
- move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
- align_start >> PAGE_SHIFT,
- align_size >> PAGE_SHIFT, NULL);
+ mapping = false;
+
+ mem_hotplug_begin();
+ ret = add_device_memory(nid, align_start, align_size, NULL, mapping);
mem_hotplug_done();
+ if (ret)
+ goto error_add_memory;
+
/*
* Initialization of the pages has been deferred until now in order
* to allow us to do the work while not holding the hotplug lock.
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 33d448314b3f..5874aceb81ac 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1889,4 +1889,45 @@ void remove_memory(int nid, u64 start, u64 size)
unlock_device_hotplug();
}
EXPORT_SYMBOL_GPL(remove_memory);
+
+#ifdef CONFIG_ZONE_DEVICE
+int del_device_memory(int nid, unsigned long start, unsigned long size,
+ struct vmem_altmap *altmap, bool mapping)
+{
+ int ret;
+ unsigned long start_pfn = PHYS_PFN(start);
+ unsigned long nr_pages = size >> PAGE_SHIFT;
+ struct zone *zone = page_zone(pfn_to_page(start_pfn));
+
+ if (mapping)
+ ret = arch_remove_memory(nid, start, size, altmap);
+ else
+ ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
+
+ return ret;
+}
+#endif
#endif /* CONFIG_MEMORY_HOTREMOVE */
+
+#ifdef CONFIG_ZONE_DEVICE
+int add_device_memory(int nid, unsigned long start, unsigned long size,
+ struct vmem_altmap *altmap, bool mapping)
+{
+ int ret;
+ unsigned long start_pfn = PHYS_PFN(start);
+ unsigned long nr_pages = size >> PAGE_SHIFT;
+
+ if (mapping)
+ ret = arch_add_memory(nid, start, size, altmap, false);
+ else
+ ret = add_pages(nid, start_pfn, nr_pages, altmap, false);
+
+ if (!ret) {
+ struct zone *zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
+
+ move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);
+ }
+
+ return ret;
+}
+#endif
--
2.13.6
> index 42d79bcc8aab..d3e52ae71bd9 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -996,6 +996,7 @@ static void hmm_devmem_release(struct device *dev, void *data)
> struct zone *zone;
> struct page *page;
> int nid;
> + bool mapping;
>
> if (percpu_ref_tryget_live(&devmem->ref)) {
> dev_WARN(dev, "%s: page mapping is still live!\n", __func__);
> @@ -1010,12 +1011,15 @@ static void hmm_devmem_release(struct device *dev, void *data)
> zone = page_zone(page);
> nid = zone->zone_pgdat->node_id;
>
> - mem_hotplug_begin();
> if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
> - __remove_pages(zone, start_pfn, npages, NULL);
> + mapping = false;
> else
> - arch_remove_memory(nid, start_pfn << PAGE_SHIFT,
> - npages << PAGE_SHIFT, NULL);
> + mapping = true;
> +
> + mem_hotplug_begin();
> + del_device_memory(nid, start_pfn << PAGE_SHIFT, npages << PAGE_SHIFT,
> + NULL,
> + mapping);
> mem_hotplug_done();
>
> hmm_devmem_radix_release(resource);
> @@ -1026,6 +1030,7 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
> resource_size_t key, align_start, align_size, align_end;
> struct device *device = devmem->device;
> int ret, nid, is_ram;
> + bool mapping;
>
> align_start = devmem->resource->start & ~(PA_SECTION_SIZE - 1);
> align_size = ALIGN(devmem->resource->start +
> @@ -1084,7 +1089,6 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
> if (nid < 0)
> nid = numa_mem_id();
>
> - mem_hotplug_begin();
> /*
> * For device private memory we call add_pages() as we only need to
> * allocate and initialize struct page for the device memory. More-
> @@ -1096,20 +1100,17 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
> * want the linear mapping and thus use arch_add_memory().
> */
Some parts of this comment should be moved into add_device_memory now.
(e.g. we call add_pages() ...)
> if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC)
> - ret = arch_add_memory(nid, align_start, align_size, NULL,
> - false);
> + mapping = true;
> else
> - ret = add_pages(nid, align_start >> PAGE_SHIFT,
> - align_size >> PAGE_SHIFT, NULL, false);
> - if (ret) {
> - mem_hotplug_done();
> - goto error_add_memory;
> - }
> - move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
> - align_start >> PAGE_SHIFT,
> - align_size >> PAGE_SHIFT, NULL);
> + mapping = false;
> +
> + mem_hotplug_begin();
> + ret = add_device_memory(nid, align_start, align_size, NULL, mapping);
> mem_hotplug_done();
>
> + if (ret)
> + goto error_add_memory;
> +
> /*
> * Initialization of the pages has been deferred until now in order
> * to allow us to do the work while not holding the hotplug lock.
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 33d448314b3f..5874aceb81ac 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1889,4 +1889,45 @@ void remove_memory(int nid, u64 start, u64 size)
> unlock_device_hotplug();
> }
> EXPORT_SYMBOL_GPL(remove_memory);
> +
> +#ifdef CONFIG_ZONE_DEVICE
> +int del_device_memory(int nid, unsigned long start, unsigned long size,
> + struct vmem_altmap *altmap, bool mapping)
> +{
> + int ret;
nit: personally I prefer short parameters last in the list.
> + unsigned long start_pfn = PHYS_PFN(start);
> + unsigned long nr_pages = size >> PAGE_SHIFT;
> + struct zone *zone = page_zone(pfn_to_page(start_pfn));
> +
> + if (mapping)
> + ret = arch_remove_memory(nid, start, size, altmap);
> + else
> + ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
> +
> + return ret;
> +}
> +#endif
> #endif /* CONFIG_MEMORY_HOTREMOVE */
> +
> +#ifdef CONFIG_ZONE_DEVICE
> +int add_device_memory(int nid, unsigned long start, unsigned long size,
> + struct vmem_altmap *altmap, bool mapping)
> +{
> + int ret;
ditto
> + unsigned long start_pfn = PHYS_PFN(start);
> + unsigned long nr_pages = size >> PAGE_SHIFT;
> +
> + if (mapping)
> + ret = arch_add_memory(nid, start, size, altmap, false);
> + else
> + ret = add_pages(nid, start_pfn, nr_pages, altmap, false);
> +
> + if (!ret) {
> + struct zone *zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
> +
> + move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);
> + }
> +
> + return ret;
> +}
> +#endif
>
Can you document for both functions that they should be called with the
memory hotplug lock in write?
Apart from that looks good to me.
--
Thanks,
David / dhildenb
On 15/10/2018 17:30, Oscar Salvador wrote:
> From: Oscar Salvador <[email protected]>
>
> This is a preparation for the next patch.
>
> Currently, we only call release_mem_region_adjustable() in __remove_pages
> if the zone is not ZONE_DEVICE, because resources that belong to
> HMM/devm are being released by themselves with devm_release_mem_region.
>
> Since we do not want to touch any zone/page stuff during the removing
> of the memory (but during the offlining), we do not want to check for
> the zone here.
> So we need another way to tell release_mem_region_adjustable() to not
> release the resource in case it belongs to HMM/devm.
>
> HMM/devm acquires/releases a resource through
> devm_request_mem_region/devm_release_mem_region.
>
> These resources have the flag IORESOURCE_MEM, while resources acquired by
> hot-add memory path (register_memory_resource()) contain
> IORESOURCE_SYSTEM_RAM.
>
> So, we can check for this flag in release_mem_region_adjustable, and if
> the resource does not contain such flag, we know that we are dealing with
> a HMM/devm resource, so we can back off.
>
> Signed-off-by: Oscar Salvador <[email protected]>
> ---
> kernel/resource.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/kernel/resource.c b/kernel/resource.c
> index 81937830a42f..c45decd7d6af 100644
> --- a/kernel/resource.c
> +++ b/kernel/resource.c
> @@ -1272,6 +1272,22 @@ int release_mem_region_adjustable(struct resource *parent,
> continue;
> }
>
> + /*
> + * All memory regions added from memory-hotplug path
> + * have the flag IORESOURCE_SYSTEM_RAM.
> + * If the resource does not have this flag, we know that
> + * we are dealing with a resource coming from HMM/devm.
> + * HMM/devm use another mechanism to add/release a resource.
> + * This goes via devm_request_mem_region and
> + * devm_release_mem_region.
> + * HMM/devm take care to release their resources when they want,
> + * so if we are dealing with them, let us just back off here.
> + */
> + if (!(res->flags & IORESOURCE_SYSRAM)) {
> + ret = 0;
> + break;
> + }
> +
> if (!(res->flags & IORESOURCE_MEM))
> break;
>
>
Reviewed-by: David Hildenbrand <[email protected]>
--
Thanks,
David / dhildenb
> > /*
> > * For device private memory we call add_pages() as we only need to
> > * allocate and initialize struct page for the device memory. More-
> > @@ -1096,20 +1100,17 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
> > * want the linear mapping and thus use arch_add_memory().
> > */
>
> Some parts of this comment should be moved into add_device_memory now.
> (e.g. we call add_pages() ...)
I agree.
> > +#ifdef CONFIG_ZONE_DEVICE
> > +int del_device_memory(int nid, unsigned long start, unsigned long size,
> > + struct vmem_altmap *altmap, bool mapping)
> > +{
> > + int ret;
>
> nit: personally I prefer short parameters last in the list.
I do not have a strong opinion here.
If people think that long parameters should be placed at the end because
it improves readability, I am ok with moving them there.
> Can you document for both functions that they should be called with the
> memory hotplug lock in write?
Sure, I will do that in the next version, once I get some more feedback.
> Apart from that looks good to me.
Thanks for reviewing it David ;-)!
May I assume your Reviewed-by here (if the above comments are addressed)?
Thanks
--
Oscar Salvador
SUSE L3
On 17/10/2018 11:33, Oscar Salvador wrote:
>>> /*
>>> * For device private memory we call add_pages() as we only need to
>>> * allocate and initialize struct page for the device memory. More-
>>> @@ -1096,20 +1100,17 @@ static int hmm_devmem_pages_create(struct hmm_devmem *devmem)
>>> * want the linear mapping and thus use arch_add_memory().
>>> */
>>
>> Some parts of this comment should be moved into add_device_memory now.
>> (e.g. we call add_pages() ...)
>
> I agree.
>
>>> +#ifdef CONFIG_ZONE_DEVICE
>>> +int del_device_memory(int nid, unsigned long start, unsigned long size,
>>> + struct vmem_altmap *altmap, bool mapping)
>>> +{
>>> + int ret;
>>
>> nit: personally I prefer short parameters last in the list.
>
> I do not have a strong opinion here.
> If people think that long parameters should be placed at the end because
> it improves readability, I am ok with moving them there.
>
>> Can you document for both functions that they should be called with the
>> memory hotplug lock in write?
>
> Sure, I will do that in the next version, once I get some more feedback.
>
>> Apart from that looks good to me.
>
> Thanks for reviewing it David ;-)!
> May I assume your Reviewed-by here (if the above comments are addressed)?
Here you go ;)
Reviewed-by: David Hildenbrand <[email protected]>
I'm planning to look into the other patches as well, but I'll be busy
with traveling and KVM forum the next 1.5 weeks.
Cheers!
--
Thanks,
David / dhildenb
On Wed, Oct 17, 2018 at 11:45:50AM +0200, David Hildenbrand wrote:
> Here you go ;)
>
> Reviewed-by: David Hildenbrand <[email protected]>
thanks!
> I'm planning to look into the other patches as well, but I'll be busy
> with traveling and KVM forum the next 1.5 weeks.
No need to hurry, this can wait.
--
Oscar Salvador
SUSE L3
On Mon, 15 Oct 2018 17:30:34 +0200
Oscar Salvador <[email protected]> wrote:
> From: Oscar Salvador <[email protected]>
>
> This tries to address another issue about accessing
> uninitialized pages.
>
> Jonathan reported a problem [1] where we can access stale pages
> in case we hot-remove memory without onlining it first.
>
> This time it is in unregister_mem_sect_under_nodes.
> This function tries to get the nid from the pfn and then
> tries to remove the symlink between mem_blk <-> nid and vice versa.
>
> Since we already know the nid in remove_memory(), we can pass
> it down the chain to unregister_mem_sect_under_nodes.
> There we can just remove the symlinks without the need
> to look into the pages.
>
> [1] https://www.spinics.net/lists/linux-mm/msg161316.html
>
> Signed-off-by: Oscar Salvador <[email protected]>
Hi Oscar,
I've not gone through these in detail, but I have run some
quick smoke tests and all seems fine with the out of tree arm64
patches we are carrying (with the obvious changes to match
up with those for other architectures that you make in this set).
Tested-by: Jonathan Cameron <[email protected]>
In particular (as expected) it fixes the issue you mention above
so I can drop the hack I had in place to avoid that :)
Thanks,
Jonathan
> ---
> drivers/base/memory.c | 9 ++++-----
> drivers/base/node.c | 38 +++++++-------------------------------
> include/linux/memory.h | 2 +-
> include/linux/node.h | 9 ++++-----
> mm/memory_hotplug.c | 2 +-
> 5 files changed, 17 insertions(+), 43 deletions(-)
>
> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> index 0e5985682642..3d8c65d84bea 100644
> --- a/drivers/base/memory.c
> +++ b/drivers/base/memory.c
> @@ -744,8 +744,7 @@ unregister_memory(struct memory_block *memory)
> device_unregister(&memory->dev);
> }
>
> -static int remove_memory_section(unsigned long node_id,
> - struct mem_section *section, int phys_device)
> +static int remove_memory_section(unsigned long nid, struct mem_section *section)
> {
> struct memory_block *mem;
>
> @@ -759,7 +758,7 @@ static int remove_memory_section(unsigned long node_id,
> if (!mem)
> goto out_unlock;
>
> - unregister_mem_sect_under_nodes(mem, __section_nr(section));
> + unregister_mem_sect_under_nodes(nid, mem);
>
> mem->section_count--;
> if (mem->section_count == 0)
> @@ -772,12 +771,12 @@ static int remove_memory_section(unsigned long node_id,
> return 0;
> }
>
> -int unregister_memory_section(struct mem_section *section)
> +int unregister_memory_section(int nid, struct mem_section *section)
> {
> if (!present_section(section))
> return -EINVAL;
>
> - return remove_memory_section(0, section, 0);
> + return remove_memory_section(nid, section);
> }
> #endif /* CONFIG_MEMORY_HOTREMOVE */
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 86d6cd92ce3d..65bc5920bd3d 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -453,40 +453,16 @@ int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg)
> return 0;
> }
>
> -/* unregister memory section under all nodes that it spans */
> -int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
> - unsigned long phys_index)
> +/*
> + * This mem_blk is going to be removed, so let us remove the link
> + * to the node and vice versa
> + */
> +void unregister_mem_sect_under_nodes(int nid, struct memory_block *mem_blk)
> {
> - NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL);
> - unsigned long pfn, sect_start_pfn, sect_end_pfn;
> -
> - if (!mem_blk) {
> - NODEMASK_FREE(unlinked_nodes);
> - return -EFAULT;
> - }
> - if (!unlinked_nodes)
> - return -ENOMEM;
> - nodes_clear(*unlinked_nodes);
> -
> - sect_start_pfn = section_nr_to_pfn(phys_index);
> - sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1;
> - for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
> - int nid;
> -
> - nid = get_nid_for_pfn(pfn);
> - if (nid < 0)
> - continue;
> - if (!node_online(nid))
> - continue;
> - if (node_test_and_set(nid, *unlinked_nodes))
> - continue;
> - sysfs_remove_link(&node_devices[nid]->dev.kobj,
> + sysfs_remove_link(&node_devices[nid]->dev.kobj,
> kobject_name(&mem_blk->dev.kobj));
> - sysfs_remove_link(&mem_blk->dev.kobj,
> + sysfs_remove_link(&mem_blk->dev.kobj,
> kobject_name(&node_devices[nid]->dev.kobj));
> - }
> - NODEMASK_FREE(unlinked_nodes);
> - return 0;
> }
>
> int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
> diff --git a/include/linux/memory.h b/include/linux/memory.h
> index a6ddefc60517..d75ec88ca09d 100644
> --- a/include/linux/memory.h
> +++ b/include/linux/memory.h
> @@ -113,7 +113,7 @@ extern int register_memory_isolate_notifier(struct notifier_block *nb);
> extern void unregister_memory_isolate_notifier(struct notifier_block *nb);
> int hotplug_memory_register(int nid, struct mem_section *section);
> #ifdef CONFIG_MEMORY_HOTREMOVE
> -extern int unregister_memory_section(struct mem_section *);
> +extern int unregister_memory_section(int nid, struct mem_section *);
> #endif
> extern int memory_dev_init(void);
> extern int memory_notify(unsigned long val, void *v);
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 257bb3d6d014..dddead9937ab 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -72,8 +72,8 @@ extern int register_cpu_under_node(unsigned int cpu, unsigned int nid);
> extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid);
> extern int register_mem_sect_under_node(struct memory_block *mem_blk,
> void *arg);
> -extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
> - unsigned long phys_index);
> +extern void unregister_mem_sect_under_nodes(int nid,
> + struct memory_block *mem_blk);
>
> #ifdef CONFIG_HUGETLBFS
> extern void register_hugetlbfs_with_node(node_registration_func_t doregister,
> @@ -105,10 +105,9 @@ static inline int register_mem_sect_under_node(struct memory_block *mem_blk,
> {
> return 0;
> }
> -static inline int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
> - unsigned long phys_index)
> +static inline void unregister_mem_sect_under_nodes(int nid,
> + struct memory_block *mem_blk)
> {
> - return 0;
> }
>
> static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 6b98321aa52f..66ccbb5b8a88 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -528,7 +528,7 @@ static int __remove_section(int nid, struct mem_section *ms,
> if (!valid_section(ms))
> return ret;
>
> - ret = unregister_memory_section(ms);
> + ret = unregister_memory_section(nid, ms);
> if (ret)
> return ret;
>
On Thu, Oct 18, 2018 at 03:24:34PM +0100, Jonathan Cameron wrote:
> Tested-by: Jonathan Cameron <[email protected]>
Thanks a lot Jonathan for having tested it!
Did you test the whole series or only this patch?
Since you have caught some bugs testing the memory-hotplug code
on ARM64, I wonder if you could test it with the whole series
applied (if you got some free time, of course).
thanks again!
--
Oscar Salvador
SUSE L3
On Thu, 18 Oct 2018 17:02:21 +0200
Oscar Salvador <[email protected]> wrote:
> On Thu, Oct 18, 2018 at 03:24:34PM +0100, Jonathan Cameron wrote:
> > Tested-by: Jonathan Cameron <[email protected]>
>
> Thanks a lot Jonathan for having tested it!
>
> Did you test the whole series or only this patch?
> Since you have caught some bugs testing the memory-hotplug code
> on ARM64, I wonder if you could test it with the whole series
> applied (if you got some free time, of course).
>
>
> thanks again!
Sorry I should have said. Whole series on latest mmotm as of yesterday.
Obviously that only tested some of the code paths as I didn't test
hmm at all.
There are a few more quirks to chase down on my list, but nothing
related to this series and all superficial stuff.
I'm away from my boards (or the remote connection to them anyway) until
the 29th so any other tests will probably have to wait until then.
It's not clear if we'll take the actual arm64 support forwards but
hopefully someone will pick it up in the near future if we don't.
The complexity around pfn_valid on arm64 may take some time to unwind.
Thanks,
Jonathan
On 18-10-15 17:30:30, Oscar Salvador wrote:
> From: Oscar Salvador <[email protected]>
>
> This patch is only a preparation for the follow-up patches.
> The idea of passing the nid is that it will allow us to get rid
> of the zone parameter in the patches that follow.
>
> Signed-off-by: Oscar Salvador <[email protected]>
> Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Pavel Tatashin <[email protected]>
On 18-10-15 17:30:31, Oscar Salvador wrote:
> From: Oscar Salvador <[email protected]>
>
> HMM/devm have a particular handling of memory-hotplug.
> They do not go through the common path, and so, they do not
> call either offline_pages() or online_pages().
>
> The operations they perform are the following ones:
>
> 1) Create the linear mapping in case the memory is not private
> 2) Initialize the pages and add the sections
> 3) Move the pages to ZONE_DEVICE
>
> Due to this particular handling of hot-add/remove memory from HMM/devm,
> I think it would be nice to provide a helper function in order to
> make this cleaner, and not populate other regions with code
> that should belong to memory-hotplug.
>
> The helpers are named:
>
> del_device_memory
> add_device_memory
>
> The idea is that add_device_memory will be in charge of:
>
> a) call either arch_add_memory() or add_pages(), depending on whether
> we want a linear mapping
> b) online the memory sections that correspond to the pfn range
> c) call move_pfn_range_to_zone() being zone ZONE_DEVICE to
> expand zone/pgdat spanned pages and initialize its pages
>
> del_device_memory, on the other hand, will be in charge of:
>
> a) offline the memory sections that correspond to the pfn range
> b) call shrink_zone_pgdat_pages(), which shrinks node/zone spanned pages.
> c) call either arch_remove_memory() or __remove_pages(), depending on
> whether we need to tear down the linear mapping or not
>
> The reason behind step b) from add_device_memory() and step a)
> from del_device_memory is that now find_smallest/biggest_section_pfn
> will have to check for online sections, and not for valid sections as
> they used to do, because we call offline_mem_sections() in
> offline_pages().
>
> In order to split up better the patches and ease the review,
> this patch will only make a) case work for add_device_memory(),
> and case c) for del_device_memory.
>
> The other cases will be added in the next patch.
>
> These two functions have to be called from devm/HMM code:
>
> add_device_memory:
> - devm_memremap_pages()
> - hmm_devmem_pages_create()
>
> del_device_memory:
> - hmm_devmem_release
> - devm_memremap_pages_release
>
> One thing I do not know is whether we can move kasan calls out of the
> hotplug lock or not.
> If we can, we could move the hotplug lock within add/del_device_memory().
>
> Signed-off-by: Oscar Salvador <[email protected]>
Looks good to me, thank you.
Reviewed-by: Pavel Tatashin <[email protected]>
Pasha
On Mon, Oct 15, 2018 at 8:31 AM Oscar Salvador
<[email protected]> wrote:
>
> From: Oscar Salvador <[email protected]>
>
> HMM/devm have a particular handling of memory-hotplug.
> They do not go through the common path, and so, they do not
> call either offline_pages() or online_pages().
>
> The operations they perform are the following ones:
>
> 1) Create the linear mapping in case the memory is not private
> 2) Initialize the pages and add the sections
> 3) Move the pages to ZONE_DEVICE
>
> Due to this particular handling of hot-add/remove memory from HMM/devm,
> I think it would be nice to provide a helper function in order to
> make this cleaner, and not populate other regions with code
> that should belong to memory-hotplug.
>
> The helpers are named:
>
> del_device_memory
> add_device_memory
>
> The idea is that add_device_memory will be in charge of:
>
> a) call either arch_add_memory() or add_pages(), depending on whether
> we want a linear mapping
> b) online the memory sections that correspond to the pfn range
> c) call move_pfn_range_to_zone() being zone ZONE_DEVICE to
> expand zone/pgdat spanned pages and initialize its pages
>
> del_device_memory, on the other hand, will be in charge of:
>
> a) offline the memory sections that correspond to the pfn range
> b) call shrink_zone_pgdat_pages(), which shrinks node/zone spanned pages.
> c) call either arch_remove_memory() or __remove_pages(), depending on
> whether we need to tear down the linear mapping or not
>
> The reason behind step b) from add_device_memory() and step a)
> from del_device_memory is that now find_smallest/biggest_section_pfn
> will have to check for online sections, and not for valid sections as
> they used to do, because we call offline_mem_sections() in
> offline_pages().
>
> In order to split up better the patches and ease the review,
> this patch will only make a) case work for add_device_memory(),
> and case c) for del_device_memory.
>
> The other cases will be added in the next patch.
>
> These two functions have to be called from devm/HMM code:
>
> add_device_memory:
> - devm_memremap_pages()
> - hmm_devmem_pages_create()
>
> del_device_memory:
> - hmm_devmem_release
> - devm_memremap_pages_release
>
> One thing I do not know is whether we can move kasan calls out of the
> hotplug lock or not.
> If we can, we could move the hotplug lock within add/del_device_memory().
>
> Signed-off-by: Oscar Salvador <[email protected]>
> ---
> include/linux/memory_hotplug.h | 11 +++++++++++
> kernel/memremap.c | 11 ++++-------
> mm/hmm.c | 33 +++++++++++++++++----------------
> mm/memory_hotplug.c | 41 +++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 73 insertions(+), 23 deletions(-)
This collides with the refactoring of hmm, to be done in terms of
devm_memremap_pages(). I'd rather not introduce another common
function *beneath* hmm and devm_memremap_pages() and rather make
devm_memremap_pages() the common function.
I plan to resubmit that cleanup after Plumbers. So, unless I'm
misunderstanding some other benefit a nak from me on this patch as it
stands currently.
On 18-10-15 17:30:32, Oscar Salvador wrote:
> From: Oscar Salvador <[email protected]>
>
> This is a preparation for the next patch.
>
> Currently, we only call release_mem_region_adjustable() in __remove_pages
> if the zone is not ZONE_DEVICE, because resources that belong to
> HMM/devm are being released by themselves with devm_release_mem_region.
>
> Since we do not want to touch any zone/page stuff during the removal
> of the memory (but rather during the offlining), we do not want to
> check for the zone here.
> So we need another way to tell release_mem_region_adjustable() not to
> release the resource in case it belongs to HMM/devm.
>
> HMM/devm acquires/releases a resource through
> devm_request_mem_region/devm_release_mem_region.
>
> These resources have the flag IORESOURCE_MEM, while resources acquired by
> hot-add memory path (register_memory_resource()) contain
> IORESOURCE_SYSTEM_RAM.
>
> So, we can check for this flag in release_mem_region_adjustable, and if
> the resource does not contain such flag, we know that we are dealing with
> a HMM/devm resource, so we can back off.
>
> Signed-off-by: Oscar Salvador <[email protected]>
> ---
> kernel/resource.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/kernel/resource.c b/kernel/resource.c
> index 81937830a42f..c45decd7d6af 100644
> --- a/kernel/resource.c
> +++ b/kernel/resource.c
> @@ -1272,6 +1272,22 @@ int release_mem_region_adjustable(struct resource *parent,
> continue;
> }
>
> + /*
> + * All memory regions added from memory-hotplug path
> + * have the flag IORESOURCE_SYSTEM_RAM.
> + * If the resource does not have this flag, we know that
> + * we are dealing with a resource coming from HMM/devm.
> + * HMM/devm use another mechanism to add/release a resource.
> + * This goes via devm_request_mem_region and
> + * devm_release_mem_region.
> + * HMM/devm take care to release their resources when they want,
> + * so if we are dealing with them, let us just back off here.
> + */
> + if (!(res->flags & IORESOURCE_SYSRAM)) {
> + ret = 0;
> + break;
> + }
> +
> if (!(res->flags & IORESOURCE_MEM))
> break;
Reviewed-by: Pavel Tatashin <[email protected]>
A couple nits, re-format above comment block to fill 80-char limit:
/*
* All memory regions added from memory-hotplug path have the
* flag IORESOURCE_SYSTEM_RAM. If the resource does not have
* this flag, we know that we are dealing with a resource coming
* from HMM/devm. HMM/devm use another mechanism to add/release
* a resource. This goes via devm_request_mem_region and
* devm_release_mem_region. HMM/devm take care to release their
* resources when they want, so if we are dealing with them, let
* us just back off here.
*/
I would set ret = 0, at the beginning instead of -EINVAL, and change
returns accordingly.
Thank you,
Pasha
>
> This collides with the refactoring of hmm, to be done in terms of
> devm_memremap_pages(). I'd rather not introduce another common
> function *beneath* hmm and devm_memremap_pages() and rather make
> devm_memremap_pages() the common function.
>
> I plan to resubmit that cleanup after Plumbers. So, unless I'm
> misunderstanding some other benefit a nak from me on this patch as it
> stands currently.
>
Ok, Dan, I will wait for your new refactoring series before continuing
reviewing this series.
Thank you,
Pasha
> This collides with the refactoring of hmm, to be done in terms of
> devm_memremap_pages(). I'd rather not introduce another common
> function *beneath* hmm and devm_memremap_pages() and rather make
> devm_memremap_pages() the common function.
Hi Dan,
That is true.
Previous version of this patchset was based on yours (hmm-refactor),
but then I decided not to base it on that due to lack of feedback.
But if devm_memremap_pages() is going to be the main/common function,
I am totally fine with putting the memory-hotplug specifics there.
> I plan to resubmit that cleanup after Plumbers. So, unless I'm
> misunderstanding some other benefit a nak from me on this patch as it
> stands currently.
Great, then I will wait for your new patchset, and then I will base
this one on that.
Thanks
Oscar Salvador
On Mon, 2018-11-12 at 21:28 +0000, Pavel Tatashin wrote:
> >
> > This collides with the refactoring of hmm, to be done in terms of
> > devm_memremap_pages(). I'd rather not introduce another common
> > function *beneath* hmm and devm_memremap_pages() and rather make
> > devm_memremap_pages() the common function.
> >
> > I plan to resubmit that cleanup after Plumbers. So, unless I'm
> > misunderstanding some other benefit a nak from me on this patch as it
> > stands currently.
> >
>
> Ok, Dan, I will wait for your new refactoring series before continuing
> reviewing this series.
Hi Pavel,
thanks for reviewing the other patches.
You could still check patch4 and patch5, as they are not strictly
related to this one.
(Not asking for your Reviewed-by, but I would still like you to check
them.)
I could use your eyes there if you have time ;-)
Thanks
Oscar Salvador