When the unsigned page_counter underflows, even just by a few pages, a
cgroup will not be able to run anything afterwards and will trigger the OOM
killer in a loop.
Underflows shouldn't happen, but when they do in practice, we may just
be off by a small amount that doesn't interfere with the normal
operation - consequences don't need to be that dire.
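As a rough userspace illustration of why that is (made-up numbers, not
the kernel code itself - the point is just that a slightly negative
usage turns into an enormous value once it is compared as unsigned):

/* not kernel code: a tiny underflow looks huge when read as unsigned */
#include <stdio.h>

int main(void)
{
	long usage = -3;              /* underflowed by three pages */
	unsigned long limit = 262144; /* e.g. a 1G limit in 4k pages */

	printf("usage as the limit check sees it: %lu\n",
	       (unsigned long)usage);
	if ((unsigned long)usage >= limit)
		printf("permanently over limit, every charge fails\n");
	return 0;
}
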
Reset the page_counter to 0 upon underflow. We'll issue a warning that
the accounting will be off and then try to keep limping along.
[ We used to do this with the original res_counter, where it was a
more straight-forward correction inside the spinlock section. I
didn't carry it forward into the lockless page counters for
simplicity, but it turns out this is quite useful in practice. ]
Signed-off-by: Johannes Weiner <[email protected]>
---
mm/page_counter.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/page_counter.c b/mm/page_counter.c
index c6860f51b6c6..7d83641eb86b 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -52,9 +52,13 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
 	long new;
 
 	new = atomic_long_sub_return(nr_pages, &counter->usage);
-	propagate_protected_usage(counter, new);
 	/* More uncharges than charges? */
-	WARN_ON_ONCE(new < 0);
+	if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
+		      new, nr_pages)) {
+		new = 0;
+		atomic_long_set(&counter->usage, new);
+	}
+	propagate_protected_usage(counter, new);
 }
 
 /**
--
2.31.1
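
For what it's worth, a standalone userspace mock-up of the clamping
idea (C11 stdatomic; the mock_* names are invented and the
protected-usage propagation is left out, so this is only a sketch of
the behaviour, not the kernel implementation):

/* standalone mock-up; mock_* names are invented for the sketch */
#include <stdatomic.h>
#include <stdio.h>

struct mock_counter {
	atomic_long usage;
};

static void mock_cancel(struct mock_counter *c, unsigned long nr_pages)
{
	long new = atomic_fetch_sub(&c->usage, (long)nr_pages) - (long)nr_pages;

	/* More uncharges than charges?  Warn and clamp back to 0. */
	if (new < 0) {
		fprintf(stderr, "counter underflow: %ld nr_pages=%lu\n",
			new, nr_pages);
		atomic_store(&c->usage, 0);
	}
}

int main(void)
{
	struct mock_counter c = { .usage = 5 };

	mock_cancel(&c, 8);	/* uncharge more than was ever charged */
	printf("usage after bogus uncharge: %ld\n", atomic_load(&c.usage));
	return 0;
}

With the numbers in main(), the counter ends up at 0 after the bogus
uncharge instead of being stuck at a huge unsigned value, so later
charges can still be accounted.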
On Thu 08-04-21 10:31:55, Johannes Weiner wrote:
> When the unsigned page_counter underflows, even just by a few pages, a
> cgroup will not be able to run anything afterwards and will trigger the OOM
> killer in a loop.
>
> Underflows shouldn't happen, but when they do in practice, we may just
> be off by a small amount that doesn't interfere with the normal
> operation - consequences don't need to be that dire.
Yes, I do agree.
> Reset the page_counter to 0 upon underflow. We'll issue a warning that
> the accounting will be off and then try to keep limping along.
I do not remember any reports about the existing WARN_ON, but it is not
hard to imagine a charging imbalance being introduced.
> [ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
The lack of external synchronization makes this trickier, because
certain charges might just get lost depending on the ordering. This
sucks, but considering that the system is already botched and the
counters cannot be trusted anyway, it is definitely better than a
potentially completely unusable memcg. It would be nice to mention
that as a caveat in the paragraph above.
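
To make the ordering concrete, here is one possible interleaving
replayed sequentially (plain variables instead of the atomics, purely
illustrative):

/* replay of one bad ordering; illustrative only */
#include <stdio.h>

int main(void)
{
	long usage = 1;		/* one stale charge is left over */
	long a_new;

	a_new = usage -= 4;	/* CPU A: cancel 4 pages, underflows to -3 */
	usage += 8;		/* CPU B: charges 8 pages in the window */

	if (a_new < 0)		/* CPU A: sees the underflow, resets to 0 */
		usage = 0;	/* ...and CPU B's 8 pages are wiped out */

	printf("final usage: %ld\n", usage);
	return 0;
}

The 8 pages CPU B charged are still in use but no longer reflected in
the counter; that is the charge that gets lost.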
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
> ---
> mm/page_counter.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_counter.c b/mm/page_counter.c
> index c6860f51b6c6..7d83641eb86b 100644
> --- a/mm/page_counter.c
> +++ b/mm/page_counter.c
> @@ -52,9 +52,13 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
>  	long new;
>  
>  	new = atomic_long_sub_return(nr_pages, &counter->usage);
> -	propagate_protected_usage(counter, new);
>  	/* More uncharges than charges? */
> -	WARN_ON_ONCE(new < 0);
> +	if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
> +		      new, nr_pages)) {
> +		new = 0;
> +		atomic_long_set(&counter->usage, new);
> +	}
> +	propagate_protected_usage(counter, new);
>  }
>  
>  /**
> --
> 2.31.1
--
Michal Hocko
SUSE Labs
Johannes Weiner writes:
>When the unsigned page_counter underflows, even just by a few pages, a
>cgroup will not be able to run anything afterwards and will trigger the OOM
>killer in a loop.
>
>Underflows shouldn't happen, but when they do in practice, we may just
>be off by a small amount that doesn't interfere with the normal
>operation - consequences don't need to be that dire.
>
>Reset the page_counter to 0 upon underflow. We'll issue a warning that
>the accounting will be off and then try to keep limping along.
>
>[ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
>
>Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Chris Down <[email protected]>
>---
> mm/page_counter.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
>diff --git a/mm/page_counter.c b/mm/page_counter.c
>index c6860f51b6c6..7d83641eb86b 100644
>--- a/mm/page_counter.c
>+++ b/mm/page_counter.c
>@@ -52,9 +52,13 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
> 	long new;
> 
> 	new = atomic_long_sub_return(nr_pages, &counter->usage);
>-	propagate_protected_usage(counter, new);
> 	/* More uncharges than charges? */
>-	WARN_ON_ONCE(new < 0);
>+	if (WARN_ONCE(new < 0, "page_counter underflow: %ld nr_pages=%lu\n",
>+		      new, nr_pages)) {
>+		new = 0;
>+		atomic_long_set(&counter->usage, new);
>+	}
>+	propagate_protected_usage(counter, new);
> }
> 
> /**
>--
>2.31.1
>
>
On Thu, Apr 8, 2021 at 7:31 AM Johannes Weiner <[email protected]> wrote:
>
> When the unsigned page_counter underflows, even just by a few pages, a
> cgroup will not be able to run anything afterwards and will trigger the OOM
> killer in a loop.
>
> Underflows shouldn't happen, but when they do in practice, we may just
> be off by a small amount that doesn't interfere with the normal
> operation - consequences don't need to be that dire.
>
> Reset the page_counter to 0 upon underflow. We'll issue a warning that
> the accounting will be off and then try to keep limping along.
>
> [ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
>
> Signed-off-by: Johannes Weiner <[email protected]>
Reviewed-by: Shakeel Butt <[email protected]>