Date: Mon, 14 Dec 2015 20:14:55 +0300
From: Vladimir Davydov
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko
Subject: Re: [PATCH 4/4] mm: memcontrol: clean up alloc, online, offline, free functions
Message-ID: <20151214171455.GF28521@esperanza>
In-Reply-To: <1449863653-6546-4-git-send-email-hannes@cmpxchg.org>

On Fri, Dec 11, 2015 at 02:54:13PM -0500, Johannes Weiner wrote:
...
> -static int
> -mem_cgroup_css_online(struct cgroup_subsys_state *css)
> +static struct cgroup_subsys_state * __ref
> +mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
>  {
> -	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> -	struct mem_cgroup *parent = mem_cgroup_from_css(css->parent);
> -	int ret;
> -
> -	if (css->id > MEM_CGROUP_ID_MAX)
> -		return -ENOSPC;
> +	struct mem_cgroup *parent = mem_cgroup_from_css(parent_css);
> +	struct mem_cgroup *memcg;
> +	long error = -ENOMEM;
> 
> -	if (!parent)
> -		return 0;
> +	memcg = mem_cgroup_alloc();
> +	if (!memcg)
> +		return ERR_PTR(error);
> 
>  	mutex_lock(&memcg_create_mutex);

It is pointless to take memcg_create_mutex in ->css_alloc: it won't
prevent use_hierarchy from being set on the parent after a new child
has been allocated, but before it is added to the parent's list of
children (see create_css()). Taking the mutex in ->css_online is what
renders this race impossible. That is, your cleanup breaks the
use_hierarchy consistency check.

Can we drop the use_hierarchy consistency check altogether and allow
children of a cgroup with use_hierarchy=1 to have use_hierarchy=0?
Yeah, that might result in some strangeness if cgroups are created in
parallel with use_hierarchy being flipped, but is that a valid use
case? I surmise one just sets use_hierarchy for a cgroup once and for
all before starting to create sub-cgroups. A condensed sketch of the
race window follows.
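To illustrate what I mean, here is a heavily condensed sketch of
cgroup core's create_css() ordering, not the verbatim code; I'm
assuming the usual flow where mem_cgroup_hierarchy_write() checks
memcg_has_children() under memcg_create_mutex:

	/* Condensed sketch of create_css(); not the verbatim kernel code. */
	static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss)
	{
		struct cgroup_subsys_state *css;

		/* With your patch, memcg_create_mutex is taken and dropped here. */
		css = ss->css_alloc(cgroup_css(cgroup_parent(cgrp), ss));

		/*
		 * Race window: the new child is not yet linked into the
		 * parent's children list, so mem_cgroup_hierarchy_write()
		 * sees memcg_has_children() == false, takes
		 * memcg_create_mutex itself, and may flip
		 * parent->use_hierarchy. The value snapshotted under the
		 * mutex in ->css_alloc() is then stale.
		 */

		init_and_link_css(css, ss, cgrp);	/* child becomes visible here */

		return online_css(css);			/* calls ->css_online() */
	}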
> -
> -	memcg->use_hierarchy = parent->use_hierarchy;
> -	memcg->oom_kill_disable = parent->oom_kill_disable;
> -	memcg->swappiness = mem_cgroup_swappiness(parent);
> -
> -	if (parent->use_hierarchy) {
> +	memcg->high = PAGE_COUNTER_MAX;
> +	memcg->soft_limit = PAGE_COUNTER_MAX;
> +	if (parent)
> +		memcg->swappiness = mem_cgroup_swappiness(parent);
> +	if (parent && parent->use_hierarchy) {
> +		memcg->use_hierarchy = true;
> +		memcg->oom_kill_disable = parent->oom_kill_disable;

oom_kill_disable used to be propagated to the child cgroup regardless
of the use_hierarchy configuration. I don't see any reason to change
this.

>  		page_counter_init(&memcg->memory, &parent->memory);
> -		memcg->high = PAGE_COUNTER_MAX;
> -		memcg->soft_limit = PAGE_COUNTER_MAX;
>  		page_counter_init(&memcg->memsw, &parent->memsw);
>  		page_counter_init(&memcg->kmem, &parent->kmem);
>  		page_counter_init(&memcg->tcpmem, &parent->tcpmem);
> -
> -		/*
> -		 * No need to take a reference to the parent because cgroup
> -		 * core guarantees its existence.
> -		 */
>  	} else {
>  		page_counter_init(&memcg->memory, NULL);
> -		memcg->high = PAGE_COUNTER_MAX;
> -		memcg->soft_limit = PAGE_COUNTER_MAX;
>  		page_counter_init(&memcg->memsw, NULL);
>  		page_counter_init(&memcg->kmem, NULL);
>  		page_counter_init(&memcg->tcpmem, NULL);
> @@ -4296,19 +4211,30 @@ mem_cgroup_css_online(struct cgroup_subsys_state *css)
>  	}
>  	mutex_unlock(&memcg_create_mutex);
> 
> -	ret = memcg_propagate_kmem(memcg);
> -	if (ret)
> -		return ret;
> +	/* The following stuff does not apply to the root */
> +	if (!parent) {
> +		root_mem_cgroup = memcg;
> +		return &memcg->css;
> +	}
> +
> +	error = memcg_propagate_kmem(parent, memcg);

I don't think ->css_alloc is the right place for this function: if
create_css() fails after ->css_alloc and before ->css_online, it will
call ->css_free, which won't clean up kmem properly (see the sketch in
the P.S. below).

> +	if (error)
> +		goto fail;
> 
>  	if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket)
>  		static_branch_inc(&memcg_sockets_enabled_key);

Frankly, I don't get why this should live here either. It has nothing
to do with memcg allocation and looks more like preparation for
online.

> 
> -	/*
> -	 * Make sure the memcg is initialized: mem_cgroup_iter()
> -	 * orders reading memcg->initialized against its callers
> -	 * reading the memcg members.
> -	 */
> -	smp_store_release(&memcg->initialized, 1);
> +	return &memcg->css;
> +fail:
> +	mem_cgroup_free(memcg);
> +	return NULL;
> +}
> +
> +static int
> +mem_cgroup_css_online(struct cgroup_subsys_state *css)
> +{
> +	if (css->id > MEM_CGROUP_ID_MAX)
> +		return -ENOSPC;
> 
>  	return 0;
>  }
> @@ -4330,10 +4256,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
>  	}
>  	spin_unlock(&memcg->event_list_lock);
> 
> -	vmpressure_cleanup(&memcg->vmpressure);
> -
>  	memcg_offline_kmem(memcg);
> -
>  	wb_memcg_offline(memcg);
>  }
> 
> @@ -4347,9 +4270,11 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
>  	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_active)
>  		static_branch_dec(&memcg_sockets_enabled_key);
> 
> +	vmpressure_cleanup(&memcg->vmpressure);

vmpressure->work can be scheduled even after offline, so ->css_free is
definitely the right place for vmpressure_cleanup. Looks like you've
just fixed a potential use-after-free bug here.

Thanks,
Vladimir

> +	cancel_work_sync(&memcg->high_work);
> +	mem_cgroup_remove_from_trees(memcg);
>  	memcg_free_kmem(memcg);
> -
> -	__mem_cgroup_free(memcg);
> +	mem_cgroup_free(memcg);
>  }
> 
>  /**
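P.S. To spell out the memcg_propagate_kmem() concern above, a
condensed sketch of create_css()'s failure path, again not the
verbatim code (in reality ->css_free is invoked via an RCU callback,
and some_later_setup() is a placeholder for whatever step fails after
->css_alloc, e.g. idr or percpu ref setup):

	/* Condensed sketch of create_css()'s error handling; not verbatim. */
	css = ss->css_alloc(parent_css);	/* memcg_propagate_kmem() has run */
	if (IS_ERR(css))
		return PTR_ERR(css);

	err = some_later_setup(css);		/* suppose this step fails */
	if (err)
		goto err_free_css;

	return online_css(css);			/* not reached on the error path */

	err_free_css:
		ss->css_free(css);
		/*
		 * ->css_offline() never ran, so memcg_offline_kmem() never ran
		 * either; whatever memcg_propagate_kmem() set up for the child
		 * is not torn down by ->css_free alone.
		 */
		return err;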