From: Johannes Weiner
To: Andrew Morton
Cc: "Kirill A. Shutemov", Michal Hocko, Vladimir Davydov,
    linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH] mm: rmap: fix huge file mmap accounting in the memcg stats
Date: Tue, 21 Mar 2017 20:51:11 -0400
Message-Id: <20170322005111.3156-1-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.12.0

Huge pages are accounted as single units in the memcg's "file_mapped"
counter.  Account the correct number of base pages, like we do in the
corresponding node counter.

Signed-off-by: Johannes Weiner
---
 include/linux/memcontrol.h | 6 ++++++
 mm/rmap.c                  | 4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index baa274150210..c5ebb32fef49 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -741,6 +741,12 @@ static inline bool mem_cgroup_oom_synchronize(bool wait)
 	return false;
 }
 
+static inline void mem_cgroup_update_page_stat(struct page *page,
+					       enum mem_cgroup_stat_index idx,
+					       int nr)
+{
+}
+
 static inline void mem_cgroup_inc_page_stat(struct page *page,
 					    enum mem_cgroup_stat_index idx)
 {
diff --git a/mm/rmap.c b/mm/rmap.c
index 1d82057144ba..f514cdd84482 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1154,7 +1154,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		goto out;
 	}
 	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
-	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, nr);
 out:
 	unlock_page_memcg(page);
 }
@@ -1194,7 +1194,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
 	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
-	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+	mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, -nr);
 
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
-- 
2.12.0
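
[Editorial illustration, not part of the patch: a minimal userspace sketch
of the accounting skew being fixed. HPAGE_PMD_NR, the two counters, and
the map_huge_page_*() helpers below are stand-ins modeled on the kernel
paths above, assuming 2MB huge pages built from 4kB base pages. Before
the fix, mapping one huge file page moved the node counter by 512 base
pages but the memcg counter by only 1.]

/* Models the stat skew fixed by this patch. Plain C, not kernel code;
 * HPAGE_PMD_NR = 512 mirrors a 2MB PMD-sized huge page on x86-64. */
#include <stdio.h>

#define HPAGE_PMD_NR 512	/* base pages per PMD-sized huge page */

static long node_file_mapped;	/* models NR_FILE_MAPPED (per-node) */
static long memcg_file_mapped;	/* models MEM_CGROUP_STAT_FILE_MAPPED */

/* Before the patch: the memcg counter moved by one unit per huge page. */
static void map_huge_page_old(void)
{
	node_file_mapped += HPAGE_PMD_NR;	/* __mod_node_page_state(..., nr) */
	memcg_file_mapped += 1;			/* mem_cgroup_inc_page_stat() */
}

/* After the patch: both counters move by the number of base pages. */
static void map_huge_page_new(void)
{
	node_file_mapped += HPAGE_PMD_NR;
	memcg_file_mapped += HPAGE_PMD_NR;	/* mem_cgroup_update_page_stat(..., nr) */
}

int main(void)
{
	map_huge_page_old();
	printf("old: node=%ld memcg=%ld (skewed)\n",
	       node_file_mapped, memcg_file_mapped);

	node_file_mapped = memcg_file_mapped = 0;
	map_huge_page_new();
	printf("new: node=%ld memcg=%ld (consistent)\n",
	       node_file_mapped, memcg_file_mapped);
	return 0;
}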