Date: Fri, 25 Jan 2019 08:25:58 +0100
From: Michal Hocko
To: Chris Down
Cc: Andrew Morton, Johannes Weiner, Tejun Heo, Roman Gushchin, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: Re: [PATCH 1/2] mm: Create mem_cgroup_from_seq
Message-ID: <20190125072558.GB3560@dhcp22.suse.cz>
In-Reply-To: <20190124194050.GA31341@chrisdown.name>
On Thu 24-01-19 14:40:50, Chris Down wrote:
> This is the start of a series of patches similar to my earlier
> DEFINE_MEMCG_MAX_OR_VAL work, but with less Macro Magic(tm).
> 
> There are a bunch of places we go from seq_file to mem_cgroup, which
> currently requires manually getting the css, then getting the mem_cgroup
> from the css. It's in enough places now that having mem_cgroup_from_seq
> makes sense (and also makes the next patch a bit nicer).
> 
> Signed-off-by: Chris Down
> Cc: Andrew Morton
> Cc: Johannes Weiner
> Cc: Tejun Heo
> Cc: Roman Gushchin
> Cc: linux-kernel@vger.kernel.org
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: kernel-team@fb.com

Acked-by: Michal Hocko

> ---
>  include/linux/memcontrol.h | 10 ++++++++++
>  mm/memcontrol.c            | 24 ++++++++++++------------
>  mm/slab_common.c           |  6 +++---
>  3 files changed, 25 insertions(+), 15 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b0eb29ea0d9c..1f3d880b7ca1 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -429,6 +429,11 @@ static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
>  }
>  struct mem_cgroup *mem_cgroup_from_id(unsigned short id);
>  
> +static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
> +{
> +	return mem_cgroup_from_css(seq_css(m));
> +}
> +
>  static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
>  {
>  	struct mem_cgroup_per_node *mz;
> @@ -937,6 +942,11 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
>  	return NULL;
>  }
>  
> +static inline struct mem_cgroup *mem_cgroup_from_seq(struct seq_file *m)
> +{
> +	return NULL;
> +}
> +
>  static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
>  {
>  	return NULL;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 18f4aefbe0bf..98aad31f5226 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3359,7 +3359,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
>  	const struct numa_stat *stat;
>  	int nid;
>  	unsigned long nr;
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
>  		nr = mem_cgroup_nr_lru_pages(memcg, stat->lru_mask);
> @@ -3410,7 +3410,7 @@ static const char *const memcg1_event_names[] = {
>  
>  static int memcg_stat_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long memory, memsw;
>  	struct mem_cgroup *mi;
>  	unsigned int i;
> @@ -3842,7 +3842,7 @@ static void mem_cgroup_oom_unregister_event(struct mem_cgroup *memcg,
>  
>  static int mem_cgroup_oom_control_read(struct seq_file *sf, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(sf);
>  
>  	seq_printf(sf, "oom_kill_disable %d\n", memcg->oom_kill_disable);
>  	seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom);
> @@ -5385,7 +5385,7 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
>  
>  static int memory_min_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long min = READ_ONCE(memcg->memory.min);
>  
>  	if (min == PAGE_COUNTER_MAX)
> @@ -5415,7 +5415,7 @@ static ssize_t memory_min_write(struct kernfs_open_file *of,
>  
>  static int memory_low_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long low = READ_ONCE(memcg->memory.low);
>  
>  	if (low == PAGE_COUNTER_MAX)
> @@ -5445,7 +5445,7 @@ static ssize_t memory_low_write(struct kernfs_open_file *of,
>  
>  static int memory_high_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long high = READ_ONCE(memcg->high);
>  
>  	if (high == PAGE_COUNTER_MAX)
> @@ -5482,7 +5482,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
>  
>  static int memory_max_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long max = READ_ONCE(memcg->memory.max);
>  
>  	if (max == PAGE_COUNTER_MAX)
> @@ -5544,7 +5544,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
>  
>  static int memory_events_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	seq_printf(m, "low %lu\n",
>  		   atomic_long_read(&memcg->memory_events[MEMCG_LOW]));
> @@ -5562,7 +5562,7 @@ static int memory_events_show(struct seq_file *m, void *v)
>  
>  static int memory_stat_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	struct accumulated_stats acc;
>  	int i;
>  
> @@ -5639,7 +5639,7 @@ static int memory_stat_show(struct seq_file *m, void *v)
>  
>  static int memory_oom_group_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	seq_printf(m, "%d\n", memcg->oom_group);
>  
> @@ -6622,7 +6622,7 @@ static u64 swap_current_read(struct cgroup_subsys_state *css,
>  
>  static int swap_max_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  	unsigned long max = READ_ONCE(memcg->swap.max);
>  
>  	if (max == PAGE_COUNTER_MAX)
> @@ -6652,7 +6652,7 @@ static ssize_t swap_max_write(struct kernfs_open_file *of,
>  
>  static int swap_events_show(struct seq_file *m, void *v)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	seq_printf(m, "max %lu\n",
>  		   atomic_long_read(&memcg->memory_events[MEMCG_SWAP_MAX]));
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 81732d05e74a..3dfdbe49ce34 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1424,7 +1424,7 @@ void dump_unreclaimable_slab(void)
>  #if defined(CONFIG_MEMCG)
>  void *memcg_slab_start(struct seq_file *m, loff_t *pos)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	mutex_lock(&slab_mutex);
>  	return seq_list_start(&memcg->kmem_caches, *pos);
> @@ -1432,7 +1432,7 @@ void *memcg_slab_start(struct seq_file *m, loff_t *pos)
>  
>  void *memcg_slab_next(struct seq_file *m, void *p, loff_t *pos)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	return seq_list_next(p, &memcg->kmem_caches, pos);
>  }
> @@ -1446,7 +1446,7 @@ int memcg_slab_show(struct seq_file *m, void *p)
>  {
>  	struct kmem_cache *s = list_entry(p, struct kmem_cache,
>  					  memcg_params.kmem_caches_node);
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
>  
>  	if (p == memcg->kmem_caches.next)
>  		print_slabinfo_header(m);
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs
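
For illustration only, a minimal sketch of the before/after call pattern the patch describes in a cgroup seq_file handler. The function example_foo_show() and its "foo" interface file are hypothetical and are not part of this patch; mem_cgroup_from_css(), seq_css(), and the new mem_cgroup_from_seq() are the real helpers shown in the diff above.

#include <linux/cgroup.h>
#include <linux/memcontrol.h>
#include <linux/seq_file.h>

/* Hypothetical .seq_show handler, for illustration only. */
static int example_foo_show(struct seq_file *m, void *v)
{
	/* Old pattern: fetch the css by hand, then convert it to a memcg. */
	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));

	/* New pattern: the helper folds both steps into one call. */
	memcg = mem_cgroup_from_seq(m);

	seq_printf(m, "oom_group %d\n", memcg->oom_group);
	return 0;
}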