Adding the number of swap pages to the byte limit of a memory control
group makes no sense. Convert the pages to bytes before adding them.

The only user of this code is the OOM killer, and the way it is used
means that the error results in a higher OOM badness value. Since the
cgroup limit is the same for all tasks in the cgroup, the error should
have no practical impact at the moment.

But let's not wait for future or changing users to trip over it.
Signed-off-by: Johannes Weiner <[email protected]>
---
mm/memcontrol.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1552,8 +1552,9 @@ u64 mem_cgroup_get_limit(struct mem_cgro
 	u64 limit;
 	u64 memsw;
 
-	limit = res_counter_read_u64(&memcg->res, RES_LIMIT) +
-		total_swap_pages;
+	limit = res_counter_read_u64(&memcg->res, RES_LIMIT);
+	limit += total_swap_pages << PAGE_SHIFT;
+
 	memsw = res_counter_read_u64(&memcg->memsw, RES_LIMIT);
 	/*
 	 * If memsw is finite and limits the amount of swap space available
Johannes Weiner <[email protected]> writes:
> [...]
Thanks for the fix.
> Signed-off-by: Johannes Weiner <[email protected]>
Reviewed-by: Greg Thelen <[email protected]>
On Tue, 9 Nov 2010, Greg Thelen wrote:
> [...]
Nice catch, but it's done in the opposite way: the oom killer doesn't use
byte limits but page limits.  So this needs to be:

	(res_counter_read_u64(&memcg->res, RES_LIMIT) >> PAGE_SHIFT) +
		total_swap_pages;
On Tue, 9 Nov 2010 12:05:21 +0100
Johannes Weiner <[email protected]> wrote:
> [...]
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>