The first 2 patches are comment fixes.
The second 2 add pgdat_resize_lock()/unlock() usage per the existing documentation.
--
Since v2 (http://comments.gmane.org/gmane.linux.kernel.mm/99316):
- add ack on patch 1 from rientjes.
- quote the documentation in patches 3 & 4.
--
Since v1 (http://thread.gmane.org/gmane.linux.kernel.mm/99297):
- drop making lock_memory_hotplug() required (old patch #1)
- fix __offline_pages() in the same manner as online_pages() (rientjes)
- make comment regarding pgdat_resize_lock()/unlock() usage more clear (rientjes)
Cody P Schafer (4):
mm: fix comment referring to non-existent size_seqlock, change to
span_seqlock
mmzone: note that node_size_lock should be manipulated via
pgdat_resize_lock()
memory_hotplug: use pgdat_resize_lock() in online_pages()
memory_hotplug: use pgdat_resize_lock() in __offline_pages()
include/linux/mmzone.h | 5 ++++-
mm/memory_hotplug.c | 9 +++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
--
1.8.2.2
Signed-off-by: Cody P Schafer <[email protected]>
---
include/linux/mmzone.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fc859a0c..41557be 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,6 +716,9 @@ typedef struct pglist_data {
* or node_spanned_pages stay constant. Holding this will also
* guarantee that any pfn_valid() stays that way.
*
+ * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
+ * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG.
+ *
* Nests above zone->lock and zone->span_seqlock
*/
spinlock_t node_size_lock;
--
1.8.2.2
Signed-off-by: Cody P Schafer <[email protected]>
Acked-by: David Rientjes <[email protected]>
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5c76737..fc859a0c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,7 +716,7 @@ typedef struct pglist_data {
* or node_spanned_pages stay constant. Holding this will also
* guarantee that any pfn_valid() stays that way.
*
- * Nests above zone->lock and zone->size_seqlock.
+ * Nests above zone->lock and zone->span_seqlock
*/
spinlock_t node_size_lock;
#endif
--
1.8.2.2
mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
follows:
* Must be held any time you expect node_start_pfn, node_present_pages
* or node_spanned_pages stay constant. [...]
So actually hold it when we update node_present_pages in online_pages().
Signed-off-by: Cody P Schafer <[email protected]>
---
mm/memory_hotplug.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a221fac..0bdca10 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -915,6 +915,7 @@ static void node_states_set_node(int node, struct memory_notify *arg)
int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_type)
{
+ unsigned long flags;
unsigned long onlined_pages = 0;
struct zone *zone;
int need_zonelists_rebuild = 0;
@@ -993,7 +994,11 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
zone->managed_pages += onlined_pages;
zone->present_pages += onlined_pages;
+
+ pgdat_resize_lock(zone->zone_pgdat, &flags);
zone->zone_pgdat->node_present_pages += onlined_pages;
+ pgdat_resize_unlock(zone->zone_pgdat, &flags);
+
if (onlined_pages) {
node_states_set_node(zone_to_nid(zone), &arg);
if (need_zonelists_rebuild)
--
1.8.2.2
mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
follows:
* Must be held any time you expect node_start_pfn, node_present_pages
* or node_spanned_pages stay constant. [...]
So actually hold it when we update node_present_pages in __offline_pages().
Signed-off-by: Cody P Schafer <[email protected]>
---
mm/memory_hotplug.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0bdca10..b59a695 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1582,7 +1582,11 @@ repeat:
/* removal success */
zone->managed_pages -= offlined_pages;
zone->present_pages -= offlined_pages;
+
+ pgdat_resize_lock(zone->zone_pgdat, &flags);
zone->zone_pgdat->node_present_pages -= offlined_pages;
+ pgdat_resize_unlock(zone->zone_pgdat, &flags);
+
totalram_pages -= offlined_pages;
init_per_zone_wmark_min();
--
1.8.2.2
On Mon, 13 May 2013 16:13:06 -0700 Cody P Schafer <[email protected]> wrote:
> mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
> follows:
>
> * Must be held any time you expect node_start_pfn, node_present_pages
> * or node_spanned_pages stay constant. [...]
Yeah, I suppose so. Although no present code sites actually do that.
> So actually hold it when we update node_present_pages in online_pages().
>
> ...
>
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -915,6 +915,7 @@ static void node_states_set_node(int node, struct memory_notify *arg)
>
> int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_type)
> {
> + unsigned long flags;
> unsigned long onlined_pages = 0;
> struct zone *zone;
> int need_zonelists_rebuild = 0;
> @@ -993,7 +994,11 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
>
> zone->managed_pages += onlined_pages;
> zone->present_pages += onlined_pages;
> +
> + pgdat_resize_lock(zone->zone_pgdat, &flags);
> zone->zone_pgdat->node_present_pages += onlined_pages;
> + pgdat_resize_unlock(zone->zone_pgdat, &flags);
> +
> if (onlined_pages) {
> node_states_set_node(zone_to_nid(zone), &arg);
> if (need_zonelists_rebuild)
afaict the only benefits of making this change are
a) so that code which does
a = p->node_present_pages;
...
b = p->node_present_pages;
can ensure that `a' and `b' are equal, by taking pgdat_resize_lock().
Which is somewhat odd, and
b) to make the comment truthful ;)
On Wed, 15 May 2013, Andrew Morton wrote:
> On Mon, 13 May 2013 16:13:06 -0700 Cody P Schafer <[email protected]> wrote:
>
> > mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
> > follows:
> >
> > * Must be held any time you expect node_start_pfn, node_present_pages
> > * or node_spanned_pages stay constant. [...]
>
> Yeah, I suppose so. Although no present code sites actually do that.
>
Agreed, I see no purpose in patches 2-4 in this series as mentioned in v2.