Date: Sat, 3 Jun 2017 20:50:02 +0300
From: Vladimir Davydov
To: Johannes Weiner
Cc: Josef Bacik, Michal Hocko, Andrew Morton, Rik van Riel,
    linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 5/6] mm: memcontrol: per-lruvec stats infrastructure
Message-ID: <20170603175002.GE15130@esperanza>
References: <20170530181724.27197-1-hannes@cmpxchg.org> <20170530181724.27197-6-hannes@cmpxchg.org>
In-Reply-To: <20170530181724.27197-6-hannes@cmpxchg.org>

On Tue, May 30, 2017 at 02:17:23PM -0400, Johannes Weiner wrote:
> lruvecs are at the intersection of the NUMA node and memcg, which is
> the scope for most paging activity.
> 
> Introduce a convenient accounting infrastructure that maintains
> statistics per node, per memcg, and the lruvec itself.
> 
> Then convert over accounting sites for statistics that are already
> tracked in both nodes and memcgs and can be easily switched.
> 
> Signed-off-by: Johannes Weiner
> ---
>  include/linux/memcontrol.h | 238 +++++++++++++++++++++++++++++++++++++++------
>  include/linux/vmstat.h     |   1 -
>  mm/memcontrol.c            |   6 ++
>  mm/page-writeback.c        |  15 +--
>  mm/rmap.c                  |   8 +-
>  mm/workingset.c            |   9 +-
>  6 files changed, 225 insertions(+), 52 deletions(-)
> ...
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9c68a40c83e3..e37908606c0f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4122,6 +4122,12 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
>  	if (!pn)
>  		return 1;
>  
> +	pn->lruvec_stat = alloc_percpu(struct lruvec_stat);
> +	if (!pn->lruvec_stat) {
> +		kfree(pn);
> +		return 1;
> +	}
> +
>  	lruvec_init(&pn->lruvec);
>  	pn->usage_in_excess = 0;
>  	pn->on_tree = false;

I don't see the matching free_percpu() anywhere. Did you forget to patch
free_mem_cgroup_per_node_info()? (A rough sketch of what I had in mind is
at the end of this mail.)

Other than that, and with the follow-up fix applied, this patch looks good
to me.

Acked-by: Vladimir Davydov
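
P.S. Just a sketch of the cleanup I mean, assuming free_mem_cgroup_per_node_info()
otherwise only looks up the per-node struct and kfree()s it; the lruvec_stat
field name is taken from this patch:

static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
{
	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];

	/*
	 * Release the per-cpu lruvec stats allocated in
	 * alloc_mem_cgroup_per_node_info() before freeing the
	 * per-node struct itself.
	 */
	free_percpu(pn->lruvec_stat);
	kfree(pn);
}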