2019-06-21 11:44:45

by Alan Jenkins

Subject: [PATCH] mm: fix setting the high and low watermarks

When setting the low and high watermarks we use min_wmark_pages(zone).
I guess this is to reduce the line length. But we forgot that this macro
includes zone->watermark_boost. We need to reset zone->watermark_boost
first. Otherwise the watermarks will be set inconsistently.
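
For reference (not part of this patch), after 1c30844d2dfe the watermark
accessors in include/linux/mmzone.h fold the boost into every lookup,
roughly:

	/* watermark accessors after 1c30844d2dfe (sketch) */
	#define min_wmark_pages(z)  (z->_watermark[WMARK_MIN]  + z->watermark_boost)
	#define low_wmark_pages(z)  (z->_watermark[WMARK_LOW]  + z->watermark_boost)
	#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)

So evaluating min_wmark_pages(zone) before watermark_boost is reset bakes
any leftover boost into the new low and high values.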

E.g. this could cause inconsistent values if the watermarks have been
boosted, and then you change a sysctl which triggers
__setup_per_zone_wmarks().

I strongly suspect this explains why I have seen slightly high watermarks.
Suspicious-looking zoneinfo below - notice high-low != low-min.

Node 0, zone Normal
pages free 74597
min 9582
low 34505
high 36900

https://unix.stackexchange.com/questions/525674/my-low-and-high-watermarks-seem-higher-than-predicted-by-documentation-sysctl-vm/525687

Signed-off-by: Alan Jenkins <[email protected]>
Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
fragmentation event occurs")
Cc: [email protected]
---

Tested by compiler :-).

Ideally the commit message would be clear about what happens the
*first* time __setup_per_zone_wmarks() is called. I guess that
zone->watermark_boost is *usually* zero, or we would have noticed
some wild problems :-). However I am not familiar with how the zone
structures are allocated & initialized. Maybe there is a case where
zone->watermark_boost could contain an arbitrary uninitialized value
at this point. Can we rule that out?

mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c02cff1ed56e..db9758cda6f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7606,9 +7606,9 @@ static void __setup_per_zone_wmarks(void)
 			    mult_frac(zone_managed_pages(zone),
 				      watermark_scale_factor, 10000));
 
+		zone->watermark_boost = 0;
 		zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp;
 		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
-		zone->watermark_boost = 0;
 
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
--
2.20.1


2019-06-21 12:11:26

by Vlastimil Babka

Subject: Re: [PATCH] mm: fix setting the high and low watermarks

On 6/21/19 1:43 PM, Alan Jenkins wrote:
> When setting the low and high watermarks we use min_wmark_pages(zone).
> I guess this is to reduce the line length. But we forgot that this macro
> includes zone->watermark_boost. We need to reset zone->watermark_boost
> first. Otherwise the watermarks will be set inconsistently.
>
> E.g. this could cause inconsistent values if the watermarks have been
> boosted, and then you change a sysctl which triggers
> __setup_per_zone_wmarks().
>
> I strongly suspect this explains why I have seen slightly high watermarks.
> Suspicious-looking zoneinfo below - notice high-low != low-min.
>
> Node 0, zone Normal
> pages free 74597
> min 9582
> low 34505
> high 36900
>
> https://unix.stackexchange.com/questions/525674/my-low-and-high-watermarks-seem-higher-than-predicted-by-documentation-sysctl-vm/525687
>
> Signed-off-by: Alan Jenkins <[email protected]>
> Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
> fragmentation event occurs")
> Cc: [email protected]

Nice catch, thanks!

Acked-by: Vlastimil Babka <[email protected]>

Personally I would implement it a bit differently, see below. If you
agree, it's fine if you keep the authorship of the whole patch.

> ---
>
> Tested by compiler :-).
>
> Ideally the commit message would be clear about what happens the
> *first* time __setup_per_zone_wmarks() is called. I guess that
> zone->watermark_boost is *usually* zero, or we would have noticed
> some wild problems :-). However I am not familiar with how the zone
> structures are allocated & initialized. Maybe there is a case where
> zone->watermark_boost could contain an arbitrary uninitialized value
> at this point. Can we rule that out?

Dunno if there's some arch override, but generic_alloc_nodedata() uses
kzalloc() so it's zeroed.
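
Roughly, ignoring a possible arch override, the hotplugged node data
(which embeds the zone structs) comes from a zeroing allocation,
something like:

	/* generic_alloc_nodedata() boils down to roughly this */
	pg_data_t *pgdat = kzalloc(sizeof(pg_data_t), GFP_KERNEL);
	/* pgdat->node_zones[] is part of this allocation, so
	 * zone->watermark_boost starts out as 0 */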

-----8<-----
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d66bc8abe0af..3b2f0cedf78e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7624,6 +7624,7 @@ static void __setup_per_zone_wmarks(void)
 
 	for_each_zone(zone) {
 		u64 tmp;
+		unsigned long wmark_min;
 
 		spin_lock_irqsave(&zone->lock, flags);
 		tmp = (u64)pages_min * zone_managed_pages(zone);
@@ -7642,13 +7643,13 @@ static void __setup_per_zone_wmarks(void)
 
 			min_pages = zone_managed_pages(zone) / 1024;
 			min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
-			zone->_watermark[WMARK_MIN] = min_pages;
+			wmark_min = min_pages;
 		} else {
 			/*
 			 * If it's a lowmem zone, reserve a number of pages
 			 * proportionate to the zone's size.
 			 */
-			zone->_watermark[WMARK_MIN] = tmp;
+			wmark_min = tmp;
 		}
 
 		/*
@@ -7660,8 +7661,9 @@ static void __setup_per_zone_wmarks(void)
 			    mult_frac(zone_managed_pages(zone),
 				      watermark_scale_factor, 10000));
 
-		zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp;
-		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->_watermark[WMARK_MIN] = wmark_min;
+		zone->_watermark[WMARK_LOW] = wmark_min + tmp;
+		zone->_watermark[WMARK_HIGH] = wmark_min + tmp * 2;
 		zone->watermark_boost = 0;
 
 		spin_unlock_irqrestore(&zone->lock, flags);

2019-06-21 14:07:52

by Bharath Vedartham

Subject: Re: [PATCH] mm: fix setting the high and low watermarks

On Fri, Jun 21, 2019 at 02:09:31PM +0200, Vlastimil Babka wrote:
> On 6/21/19 1:43 PM, Alan Jenkins wrote:
> > When setting the low and high watermarks we use min_wmark_pages(zone).
> > I guess this is to reduce the line length. But we forgot that this macro
> > includes zone->watermark_boost. We need to reset zone->watermark_boost
> > first. Otherwise the watermarks will be set inconsistently.
> >
> > E.g. this could cause inconsistent values if the watermarks have been
> > boosted, and then you change a sysctl which triggers
> > __setup_per_zone_wmarks().
> >
> > I strongly suspect this explains why I have seen slightly high watermarks.
> > Suspicious-looking zoneinfo below - notice high-low != low-min.
> >
> > Node 0, zone Normal
> > pages free 74597
> > min 9582
> > low 34505
> > high 36900
> >
> > https://unix.stackexchange.com/questions/525674/my-low-and-high-watermarks-seem-higher-than-predicted-by-documentation-sysctl-vm/525687
> >
> > Signed-off-by: Alan Jenkins <[email protected]>
> > Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
> > fragmentation event occurs")
> > Cc: [email protected]
>
> Nice catch, thanks!
>
> Acked-by: Vlastimil Babka <[email protected]>
>
> Personally I would implement it a bit differently, see below. If you
> agree, it's fine if you keep the authorship of the whole patch.
>
> > ---
> >
> > Tested by compiler :-).
> >
> > Ideally the commit message would be clear about what happens the
> > *first* time __setup_per_zone_wmarks() is called. I guess that
> > zone->watermark_boost is *usually* zero, or we would have noticed
> > some wild problems :-). However I am not familiar with how the zone
> > structures are allocated & initialized. Maybe there is a case where
> > zone->watermark_boost could contain an arbitrary uninitialized value
> > at this point. Can we rule that out?
>
> Dunno if there's some arch override, but generic_alloc_nodedata() uses
> kzalloc() so it's zeroed.
>
> -----8<-----
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d66bc8abe0af..3b2f0cedf78e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7624,6 +7624,7 @@ static void __setup_per_zone_wmarks(void)
> 
>  	for_each_zone(zone) {
>  		u64 tmp;
> +		unsigned long wmark_min;
> 
>  		spin_lock_irqsave(&zone->lock, flags);
>  		tmp = (u64)pages_min * zone_managed_pages(zone);
> @@ -7642,13 +7643,13 @@ static void __setup_per_zone_wmarks(void)
> 
>  			min_pages = zone_managed_pages(zone) / 1024;
>  			min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
> -			zone->_watermark[WMARK_MIN] = min_pages;
> +			wmark_min = min_pages;
>  		} else {
>  			/*
>  			 * If it's a lowmem zone, reserve a number of pages
>  			 * proportionate to the zone's size.
>  			 */
> -			zone->_watermark[WMARK_MIN] = tmp;
> +			wmark_min = tmp;
>  		}
> 
>  		/*
> @@ -7660,8 +7661,9 @@ static void __setup_per_zone_wmarks(void)
>  			    mult_frac(zone_managed_pages(zone),
>  				      watermark_scale_factor, 10000));
> 
> -		zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp;
> -		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +		zone->_watermark[WMARK_MIN] = wmark_min;
> +		zone->_watermark[WMARK_LOW] = wmark_min + tmp;
> +		zone->_watermark[WMARK_HIGH] = wmark_min + tmp * 2;
>  		zone->watermark_boost = 0;

Do you think this could cause a race between __setup_per_zone_wmarks()
and pgdat_watermark_boosted(), which checks whether the watermark_boost
of each zone is non-zero? pgdat_watermark_boosted() is not called under
the zone lock.
Here is a possible scenario:
The watermarks are boosted in steal_suitable_fallback() (which happens
under the zone lock). After that, kswapd is woken up by
wakeup_kswapd(zone, 0, 0, zone_idx(zone)) in rmqueue(), without holding
the zone lock. Let's say someone then modifies min_free_kbytes; that
resets every zone->watermark_boost to 0. This may cause
pgdat_watermark_boosted() to return false, so kswapd is not woken up as
the boost intended - the same outcome as waking kswapd for an
already-balanced node.
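
For reference, pgdat_watermark_boosted() (mm/vmscan.c) just reads each
zone's watermark_boost without taking the zone lock, roughly:

	/* trimmed sketch of pgdat_watermark_boosted() */
	for (i = classzone_idx; i >= 0; i--) {
		zone = pgdat->node_zones + i;
		if (!managed_zone(zone))
			continue;
		if (zone->watermark_boost)
			return true;
	}
	return false;

So if __setup_per_zone_wmarks() clears the boosts in between, the check
above sees nothing to do.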

Also, suppose kswapd was woken up successfully because the watermarks
were boosted. In balance_pgdat(), we use nr_boost_reclaim to count the
number of pages to reclaim because of boosting; it is calculated as:
	nr_boost_reclaim = 0;
	for (i = 0; i <= classzone_idx; i++) {
		zone = pgdat->node_zones + i;
		if (!managed_zone(zone))
			continue;

		nr_boost_reclaim += zone->watermark_boost;
		zone_boosts[i] = zone->watermark_boost;
	}
	boosted = nr_boost_reclaim;

This is not done under the zone lock. It could lead to nr_boost_reclaim
being 0 if min_free_kbytes is set to 0, which would wake up kcompactd
without reclaiming memory.
Wouldn't kcompactd's compaction be spurious if the memory reclaim step
does not happen?

Any thoughts?
>  		spin_unlock_irqrestore(&zone->lock, flags);
> 

2019-06-21 14:21:26

by Mel Gorman

Subject: Re: [PATCH] mm: fix setting the high and low watermarks

On Fri, Jun 21, 2019 at 12:43:25PM +0100, Alan Jenkins wrote:
> When setting the low and high watermarks we use min_wmark_pages(zone).
> I guess this is to reduce the line length. But we forgot that this macro
> includes zone->watermark_boost. We need to reset zone->watermark_boost
> first. Otherwise the watermarks will be set inconsistently.
>
> E.g. this could cause inconsistent values if the watermarks have been
> boosted, and then you change a sysctl which triggers
> __setup_per_zone_wmarks().
>
> I strongly suspect this explains why I have seen slightly high watermarks.
> Suspicious-looking zoneinfo below - notice high-low != low-min.
>
> Node 0, zone Normal
> pages free 74597
> min 9582
> low 34505
> high 36900
>
> https://unix.stackexchange.com/questions/525674/my-low-and-high-watermarks-seem-higher-than-predicted-by-documentation-sysctl-vm/525687
>
> Signed-off-by: Alan Jenkins <[email protected]>
> Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
> fragmentation event occurs")
> Cc: [email protected]

Either way

Acked-by: Mel Gorman <[email protected]>

--
Mel Gorman
SUSE Labs

2019-06-21 15:33:00

by Alan Jenkins

Subject: [PATCH v2] mm: avoid inconsistent "boosts" when updating the high and low watermarks

When setting the low and high watermarks we use min_wmark_pages(zone).
I guess this was to reduce the line length. Then this macro was modified
to include zone->watermark_boost. So we needed to set watermark_boost
before we set the high and low watermarks... but we did not.

It seems mostly harmless. It might set the watermarks a bit higher than
needed: when 1) the watermarks have been "boosted" and 2) you then
triggered __setup_per_zone_wmarks() (by setting one of the sysctls, or
hotplugging memory...).

I noticed it because it also breaks the documented equality
(high - low == low - min). Below is an example of reproducing the bug.

First sample. Equality is met (high - low == low - min):

Node 0, zone Normal
pages free 11962
min 9531
low 11913
high 14295
spanned 1173504
present 1173504
managed 1134235

A later sample. Something has caused us to boost the watermarks:

Node 0, zone Normal
pages free 12614
min 10043
low 12425
high 14807

Now trigger the watermarks to be recalculated. "cd /proc/sys/vm" and
"cat watermark_scale_factor > watermark_scale_factor". Then the watermarks
are boosted inconsistently. The equality is broken:

Node 0, zone Normal
pages free 12412
min 9531
low 12425
high 14807

14807 - 12425 = 2382
12425 - 9531 = 2894
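
To spell out where the skew comes from: at that point WMARK_MIN has
already been set, but watermark_boost still holds the old boost B, so the
old ordering computes

	low  = min_wmark_pages(zone) + tmp     = WMARK_MIN + B + tmp
	high = min_wmark_pages(zone) + tmp * 2 = WMARK_MIN + B + 2 * tmp

i.e. (low - min) - (high - low) = B. Here B = 2894 - 2382 = 512 pages,
which also matches the boosted sample above (10043 - 9531 = 512),
presumably one pageblock's worth of boost on this machine.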

Co-developed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Signed-off-by: Alan Jenkins <[email protected]>
Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
fragmentation event occurs")
Acked-by: Mel Gorman <[email protected]>

---

Changes since v1:

Use Vlastimil's suggested code. It is much cleaner, thanks :-).
I considered this to merit "Co-developed-by" and s-o-b credit.

Update commit message to be specific about expected effects.

Node data is always allocated with kzalloc(), so there is no risk of
the code reading arbitrary uninitialized data from ->watermark_boost
the first time it is run.

AFAICT the bug is mostly harmless. I do not require a -stable port.
I leave it to anyone else, if they think it's worth adding
"Cc: [email protected]".


mm/page_alloc.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c02cff1ed56e..01233705e490 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7570,6 +7570,7 @@ static void __setup_per_zone_wmarks(void)
 
 	for_each_zone(zone) {
 		u64 tmp;
+		unsigned long wmark_min;
 
 		spin_lock_irqsave(&zone->lock, flags);
 		tmp = (u64)pages_min * zone_managed_pages(zone);
@@ -7588,13 +7589,13 @@ static void __setup_per_zone_wmarks(void)
 
 			min_pages = zone_managed_pages(zone) / 1024;
 			min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
-			zone->_watermark[WMARK_MIN] = min_pages;
+			wmark_min = min_pages;
 		} else {
 			/*
 			 * If it's a lowmem zone, reserve a number of pages
 			 * proportionate to the zone's size.
 			 */
-			zone->_watermark[WMARK_MIN] = tmp;
+			wmark_min = tmp;
 		}
 
 		/*
@@ -7606,8 +7607,9 @@ static void __setup_per_zone_wmarks(void)
 			    mult_frac(zone_managed_pages(zone),
 				      watermark_scale_factor, 10000));
 
-		zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp;
-		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->_watermark[WMARK_MIN] = wmark_min;
+		zone->_watermark[WMARK_LOW] = wmark_min + tmp;
+		zone->_watermark[WMARK_HIGH] = wmark_min + tmp * 2;
 		zone->watermark_boost = 0;
 
 		spin_unlock_irqrestore(&zone->lock, flags);
--
2.20.1

2019-06-21 20:58:36

by David Rientjes

Subject: Re: [PATCH v2] mm: avoid inconsistent "boosts" when updating the high and low watermarks

On Fri, 21 Jun 2019, Alan Jenkins wrote:

> When setting the low and high watermarks we use min_wmark_pages(zone).
> I guess this was to reduce the line length. Then this macro was modified
> to include zone->watermark_boost. So we needed to set watermark_boost
> before we set the high and low watermarks... but we did not.
>
> It seems mostly harmless. It might set the watermarks a bit higher than
> needed: when 1) the watermarks have been "boosted" and 2) you then
> triggered __setup_per_zone_wmarks() (by setting one of the sysctls, or
> hotplugging memory...).
>
> I noticed it because it also breaks the documented equality
> (high - low == low - min). Below is an example of reproducing the bug.
>
> First sample. Equality is met (high - low == low - min):
>
> Node 0, zone Normal
> pages free 11962
> min 9531
> low 11913
> high 14295
> spanned 1173504
> present 1173504
> managed 1134235
>
> A later sample. Something has caused us to boost the watermarks:
>
> Node 0, zone Normal
> pages free 12614
> min 10043
> low 12425
> high 14807
>
> Now trigger the watermarks to be recalculated. "cd /proc/sys/vm" and
> "cat watermark_scale_factor > watermark_scale_factor". Then the watermarks
> are boosted inconsistently. The equality is broken:
>
> Node 0, zone Normal
> pages free 12412
> min 9531
> low 12425
> high 14807
>
> 14807 - 12425 = 2382
> 12425 - 9531 = 2894
>
> Co-developed-by: Vlastimil Babka <[email protected]>
> Signed-off-by: Vlastimil Babka <[email protected]>
> Signed-off-by: Alan Jenkins <[email protected]>
> Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external
> fragmentation event occurs")
> Acked-by: Mel Gorman <[email protected]>

Acked-by: David Rientjes <[email protected]>

2019-06-24 05:48:34

by Vlastimil Babka

Subject: Re: [PATCH] mm: fix setting the high and low watermarks

On 6/21/19 4:07 PM, Bharath Vedartham wrote:
> Do you think this could cause a race between __setup_per_zone_wmarks()
> and pgdat_watermark_boosted(), which checks whether the watermark_boost
> of each zone is non-zero? pgdat_watermark_boosted() is not called under
> the zone lock.
> Here is a possible scenario:
> The watermarks are boosted in steal_suitable_fallback() (which happens
> under the zone lock). After that, kswapd is woken up by
> wakeup_kswapd(zone, 0, 0, zone_idx(zone)) in rmqueue(), without holding
> the zone lock. Let's say someone then modifies min_free_kbytes; that
> resets every zone->watermark_boost to 0. This may cause
> pgdat_watermark_boosted() to return false, so kswapd is not woken up as
> the boost intended - the same outcome as waking kswapd for an
> already-balanced node.

Not waking up kswapd shouldn't cause significant trouble.

> Also, suppose kswapd was woken up successfully because the watermarks
> were boosted. In balance_pgdat(), we use nr_boost_reclaim to count the
> number of pages to reclaim because of boosting; it is calculated as:
> 	nr_boost_reclaim = 0;
> 	for (i = 0; i <= classzone_idx; i++) {
> 		zone = pgdat->node_zones + i;
> 		if (!managed_zone(zone))
> 			continue;
> 
> 		nr_boost_reclaim += zone->watermark_boost;
> 		zone_boosts[i] = zone->watermark_boost;
> 	}
> 	boosted = nr_boost_reclaim;
>
> This is not done under the zone lock. It could lead to nr_boost_reclaim
> being 0 if min_free_kbytes is set to 0, which would wake up kcompactd
> without reclaiming memory.

Setting min_free_kbytes to 0 is asking for problems regardless of this
check. Much more trouble than waking up kcompactd spuriously, which is
just a few wasted CPU cycles.

> Wouldn't kcompactd's compaction be spurious if the memory reclaim step does not happen?
>
> Any thoughts?

Unless the races cause either some data corruption or e.g. spurious
allocation failures, I don't think they are worth adding new spinlock
sections for.

Thanks,
Vlastimil

>>  		spin_unlock_irqrestore(&zone->lock, flags);
>> 
>