Date: Tue, 28 Mar 2017 16:52:46 +0200
From: Michal Hocko
To: Johannes Weiner
Cc: Andrew Morton, "Kirill A. Shutemov", Vladimir Davydov,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: rmap: fix huge file mmap accounting in the memcg stats
Message-ID: <20170328145245.GL18241@dhcp22.suse.cz>
In-Reply-To: <20170322005111.3156-1-hannes@cmpxchg.org>
References: <20170322005111.3156-1-hannes@cmpxchg.org>

On Tue 21-03-17 20:51:11, Johannes Weiner wrote:
> Huge pages are accounted as single units in the memcg's "file_mapped"
> counter. Account the correct number of base pages, like we do in the
> corresponding node counter.
>
> Signed-off-by: Johannes Weiner

With the CC: stable added:

Acked-by: Michal Hocko

Thanks!

> ---
>  include/linux/memcontrol.h | 6 ++++++
>  mm/rmap.c                  | 4 ++--
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index baa274150210..c5ebb32fef49 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -741,6 +741,12 @@ static inline bool mem_cgroup_oom_synchronize(bool wait)
>  	return false;
>  }
>
> +static inline void mem_cgroup_update_page_stat(struct page *page,
> +					       enum mem_cgroup_stat_index idx,
> +					       int nr)
> +{
> +}
> +
>  static inline void mem_cgroup_inc_page_stat(struct page *page,
>  					    enum mem_cgroup_stat_index idx)
>  {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1d82057144ba..f514cdd84482 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1154,7 +1154,7 @@ void page_add_file_rmap(struct page *page, bool compound)
>  		goto out;
>  	}
>  	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
> -	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
> +	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, nr);
>  out:
>  	unlock_page_memcg(page);
>  }
> @@ -1194,7 +1194,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
>  	 * pte lock(a spinlock) is held, which implies preemption disabled.
>  	 */
>  	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
> -	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
> +	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, -nr);
>
>  	if (unlikely(PageMlocked(page)))
>  		clear_page_mlock(page);
> --
> 2.12.0

-- 
Michal Hocko
SUSE Labs
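
For readers outside the memcg code, below is a minimal userspace sketch (not kernel code) of the accounting skew the patch fixes. It assumes 2MB transparent huge pages, so HPAGE_PMD_NR is 512 as on x86-64; the plain integer counters stand in for the per-node NR_FILE_MAPPED counter and the memcg "file_mapped" statistic, and the two functions model the pre-fix and post-fix behaviour of page_add_file_rmap() when a compound file page is mapped.

/*
 * Userspace illustration only: before the fix, mapping one huge file page
 * bumped the node counter by HPAGE_PMD_NR base pages but the memcg counter
 * by just 1; the fix passes the same nr to both counters.
 */
#include <stdio.h>

#define HPAGE_PMD_NR 512	/* base pages per 2MB huge page on x86-64 */

static long node_file_mapped;	/* stands in for NR_FILE_MAPPED (node) */
static long memcg_file_mapped;	/* stands in for MEM_CGROUP_STAT_FILE_MAPPED */

/* Old behaviour: mem_cgroup_inc_page_stat() adds 1 regardless of page size. */
static void map_huge_file_page_old(void)
{
	node_file_mapped += HPAGE_PMD_NR;
	memcg_file_mapped += 1;
}

/* Fixed behaviour: mem_cgroup_update_page_stat(..., nr) adds nr base pages. */
static void map_huge_file_page_fixed(void)
{
	node_file_mapped += HPAGE_PMD_NR;
	memcg_file_mapped += HPAGE_PMD_NR;
}

int main(void)
{
	map_huge_file_page_old();
	printf("old:   node=%ld memcg=%ld (memcg undercounts by %ld pages)\n",
	       node_file_mapped, memcg_file_mapped,
	       node_file_mapped - memcg_file_mapped);

	node_file_mapped = memcg_file_mapped = 0;
	map_huge_file_page_fixed();
	printf("fixed: node=%ld memcg=%ld\n", node_file_mapped, memcg_file_mapped);
	return 0;
}

So before the fix each mapped 2MB huge file page left the memcg's "file_mapped" statistic 511 base pages below the node-level NR_FILE_MAPPED count; accounting nr base pages in mem_cgroup_update_page_stat() keeps the two counters consistent, and the symmetric -nr on unmap keeps them from drifting on teardown.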