From: Jakub Kicinski <[email protected]>
If a cgroup violates its memory.high constraints, we may end
up unduly penalising it. For example, for the following hierarchy:

A: max high, 20 usage
A/B: 9 high, 10 usage
A/C: max high, 10 usage

We would end up doing the following calculation when calculating the
high delay for A/B:

A/B: 10 - 9 = 1...
A: 20 - PAGE_COUNTER_MAX underflows to a huge bogus value, so
max_overage can be polluted by a cgroup that is not over its high at all.

This gets worse with higher disparities in usage in the parent.
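To make the wraparound concrete, here is a minimal userspace sketch of the
pre-fix arithmetic (the shift mirrors MEMCG_DELAY_PRECISION_SHIFT in
mm/memcontrol.c; the numbers are illustrative, and this is not part of the
patch):

#include <stdio.h>
#include <stdint.h>

#define MEMCG_DELAY_PRECISION_SHIFT 20	/* as in mm/memcontrol.c */

/* Mimics the pre-fix per-level overage calculation. */
static uint64_t overage(uint64_t usage, uint64_t high)
{
	uint64_t o = usage - high;	/* wraps around when usage < high */

	o <<= MEMCG_DELAY_PRECISION_SHIFT;
	return o / high;
}

int main(void)
{
	/* A/B: genuinely 1 page over a high of 9 -> 116508 */
	printf("A/B: %llu\n", (unsigned long long)overage(10, 9));
	/* an ancestor 15600 pages *below* a finite high -> ~7.2e14 */
	printf("A: %llu\n", (unsigned long long)overage(10000, 25600));
	return 0;
}

With a finite ancestor high, the wrapped value survives the division and
dominates max_overage; with an enormous high, it is usually divided back
down to 0.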
I have no idea how this disappeared from the final version of the patch,
but it is certainly Not Good(tm). This wasn't obvious in testing because,
for a simple cgroup hierarchy with only one child, the result is usually
roughly the same. It's only in more complex hierarchies that things go
really awry (although even then, the effect is capped at the 2 second
maximum sleep in schedule_timeout_killable).
[[email protected]: changelog]
Fixes: e26733e0d0ec ("mm, memcg: throttle allocators based on ancestral memory.high")
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Chris Down <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: [email protected] # 5.4.x
---
mm/memcontrol.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index eecf003b0c56..75a978307863 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2336,6 +2336,9 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
 	usage = page_counter_read(&memcg->memory);
 	high = READ_ONCE(memcg->high);
 
+	if (usage <= high)
+		continue;
+
 	/*
 	 * Prevent division by 0 in overage calculation by acting as if
 	 * it was a threshold of 1 page
--
2.26.0
Andrew, this is a pretty bad one that could definitely affect memory.high
users. We should probably expedite it going in.
Sorry for the trouble, especially on a -stable patch...
On Tue 31-03-20 16:24:24, Chris Down wrote:
> From: Jakub Kicinski <[email protected]>
>
> If a cgroup violates its memory.high constraints, we may end
> up unduly penalising it. For example, for the following hierarchy:
>
> A: max high, 20 usage
> A/B: 9 high, 10 usage
> A/C: max high, 10 usage
>
> We would end up doing the following calculation when calculating the
> high delay for A/B:
>
> A/B: 10 - 9 = 1...
> A: 20 - PAGE_COUNTER_MAX underflows to a huge bogus value, so
> max_overage can be polluted by a cgroup that is not over its high at all.
>
> This gets worse with higher disparities in usage in the parent.
>
> I have no idea how this disappeared from the final version of the patch,
> but it is certainly Not Good(tm). This wasn't obvious in testing
> because, for a simple cgroup hierarchy with only one child, the result
> is usually roughly the same. It's only in more complex hierarchies that
> things go really awry (although even then, the effect is capped at the
> 2 second maximum sleep in schedule_timeout_killable).
I find this paragraph rather confusing. This is essentially an unsigned
underflow when any of the memcgs up the hierarchy is below its high
limit, right? There doesn't really seem to be anything complex in such a
hierarchy.
> [[email protected]: changelog]
>
> Fixes: e26733e0d0ec ("mm, memcg: throttle allocators based on ancestral memory.high")
> Signed-off-by: Jakub Kicinski <[email protected]>
> Signed-off-by: Chris Down <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: [email protected] # 5.4.x
To the patch
Acked-by: Michal Hocko <[email protected]>
> ---
> mm/memcontrol.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index eecf003b0c56..75a978307863 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2336,6 +2336,9 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
> usage = page_counter_read(&memcg->memory);
> high = READ_ONCE(memcg->high);
>
> + if (usage <= high)
> + continue;
> +
> /*
> * Prevent division by 0 in overage calculation by acting as if
> * it was a threshold of 1 page
> --
> 2.26.0
>
--
Michal Hocko
SUSE Labs
Michal Hocko writes:
>I find this paragraph rather confusing. This is essentially an unsigned
>underflow when any of the memcgs up the hierarchy is below its high
>limit, right? There doesn't really seem to be anything complex in such a
>hierarchy.
The conditions to trigger the bug itself are easy, but having it obviously
visible in tests requires a moderately complex hierarchy, since in the basic
case ancestor_usage is "similar enough" to the test leaf cgroup's usage.
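For instance (illustrative numbers): with a single test child B (10 usage,
9 high) directly under A, either A is over a similar high of its own and
computes roughly the same overage as B, or A has no high configured and its
wrapped overage is divided back down to 0 by the enormous limit -- so in
both cases max_overage ends up being B's own overage and the delay looks
correct.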
Chris Down writes:
>Michal Hocko writes:
>>I find this paragraph rather confusing. This is essentially an unsigned
>>underflow when any of the memcgs up the hierarchy is below its high
>>limit, right? There doesn't really seem to be anything complex in such a
>>hierarchy.
>
>The conditions to trigger the bug itself are easy, but having it
>obviously visible in tests requires a moderately complex hierarchy,
>since in the basic case ancestor_usage is "similar enough" to the test
>leaf cgroup's usage.
Here is another reason why this wasn't caught -- when an ancestor's high is
enormous (e.g. not configured), the division by it usually renders the
wrapped overage 0 anyway.
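Concretely, plugging the numbers from the trace below into the same
arithmetic (an illustrative userspace sketch, not the kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* usage and high as reported in the trace below */
	uint64_t usage = 18641, high = 2251799813685247ULL;
	uint64_t o = usage - high;	/* wraps to 18444492273895885010 */

	o <<= 20;			/* truncates to 19547553792 */
	o /= high;			/* 0 -- the bogus overage is masked */
	printf("overage: %llu\n", (unsigned long long)o);
	return 0;
}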
With the attached patch applied before this fix, you can see that the
division usually results in an overage of 0, so the result is the same.
Here's an example where pid 213 is in a cgroup under system.slice/foo.service
that is hitting its own memory.high, while system.slice has no memory.high
configured:
[root@ktst ~]# cat /sys/kernel/debug/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 33/33 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
(bash)-213 [002] .N.. 58.873988: mem_cgroup_handle_over_high: usage: 32, high: 1
(bash)-213 [002] .N.. 58.873993: mem_cgroup_handle_over_high: 1 overage before shifting (31)
(bash)-213 [002] .N.. 58.873994: mem_cgroup_handle_over_high: 1 overage after shifting (32505856)
(bash)-213 [002] .N.. 58.873995: mem_cgroup_handle_over_high: 1 overage after div (32505856)
(bash)-213 [002] .N.. 58.873996: mem_cgroup_handle_over_high: 1 cgroup new overage (32505856)
(bash)-213 [002] .N.. 58.873998: mem_cgroup_handle_over_high: usage: 18641, high: 2251799813685247
(bash)-213 [002] .N.. 58.873998: mem_cgroup_handle_over_high: 2 overage before shifting (18444492273895885010)
(bash)-213 [002] .N.. 58.873999: mem_cgroup_handle_over_high: 2 overage after shifting (19547553792)
(bash)-213 [002] .N.. 58.874000: mem_cgroup_handle_over_high: 2 overage after div (0)
(bash)-213 [002] .N.. 58.874001: mem_cgroup_handle_over_high: 2 cgroup too low (0)
(bash)-213 [002] .N.. 58.874002: mem_cgroup_handle_over_high: Used 1 from leaf to get result
On Tue, Mar 31, 2020 at 04:24:24PM +0100, Chris Down wrote:
> From: Jakub Kicinski <[email protected]>
>
> If a cgroup violates its memory.high constraints, we may end
> up unduly penalising it. For example, for the following hierarchy:
>
> A: max high, 20 usage
> A/B: 9 high, 10 usage
> A/C: max high, 10 usage
>
> We would end up doing the following calculation when calculating the
> high delay for A/B:
>
> A/B: 10 - 9 = 1...
> A: 20 - PAGE_COUNTER_MAX underflows to a huge bogus value, so
> max_overage can be polluted by a cgroup that is not over its high at all.
>
> This gets worse with higher disparities in usage in the parent.
>
> I have no idea how this disappeared from the final version of the patch,
> but it is certainly Not Good(tm). This wasn't obvious in testing
> because, for a simple cgroup hierarchy with only one child, the result
> is usually roughly the same. It's only in more complex hierarchies that
> things go really awry (although even then, the effect is capped at the
> 2 second maximum sleep in schedule_timeout_killable).
>
> [[email protected]: changelog]
>
> Fixes: e26733e0d0ec ("mm, memcg: throttle allocators based on ancestral memory.high")
> Signed-off-by: Jakub Kicinski <[email protected]>
> Signed-off-by: Chris Down <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: [email protected] # 5.4.x
Oops.
Acked-by: Johannes Weiner <[email protected]>