Date: Mon, 14 Aug 2017 15:42:54 -0700 (PDT)
From: David Rientjes
To: Roman Gushchin
cc: linux-mm@kvack.org, Michal Hocko, Vladimir Davydov, Johannes Weiner,
    Tetsuo Handa, Tejun Heo, kernel-team@fb.com, cgroups@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [v5 2/4] mm, oom: cgroup-aware OOM killer
In-Reply-To: <20170814183213.12319-3-guro@fb.com>
References: <20170814183213.12319-1-guro@fb.com> <20170814183213.12319-3-guro@fb.com>

On Mon, 14 Aug 2017, Roman Gushchin wrote:

> diff --git a/include/linux/oom.h b/include/linux/oom.h
> index 8a266e2be5a6..b7ec3bd441be 100644
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -39,6 +39,7 @@ struct oom_control {
>  	unsigned long totalpages;
>  	struct task_struct *chosen;
>  	unsigned long chosen_points;
> +	struct mem_cgroup *chosen_memcg;
>  };
>  
>  extern struct mutex oom_lock;
> @@ -79,6 +80,8 @@ extern void oom_killer_enable(void);
>  
>  extern struct task_struct *find_lock_task_mm(struct task_struct *p);
>  
> +extern int oom_evaluate_task(struct task_struct *task, void *arg);
> +
>  /* sysctls */
>  extern int sysctl_oom_dump_tasks;
>  extern int sysctl_oom_kill_allocating_task;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index df6f63ee95d6..0b81dc55c6ac 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2639,6 +2639,181 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
>  	return ret;
>  }
>  
> +static long memcg_oom_badness(struct mem_cgroup *memcg,
> +			      const nodemask_t *nodemask)
> +{
> +	long points = 0;
> +	int nid;
> +
> +	for_each_node_state(nid, N_MEMORY) {
> +		if (nodemask && !node_isset(nid, *nodemask))
> +			continue;
> +
> +		points += mem_cgroup_node_nr_lru_pages(memcg, nid,
> +				LRU_ALL_ANON | BIT(LRU_UNEVICTABLE));
> +	}
> +
> +	points += memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) /
> +			(PAGE_SIZE / 1024);
> +	points += memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE);
> +	points += memcg_page_state(memcg, MEMCG_SOCK);
> +	points += memcg_page_state(memcg, MEMCG_SWAP);
> +
> +	return points;
> +}

I'm indifferent to the memcg evaluation criteria used to determine which
memcg should be selected over others with the same priority; others may
feel differently.

> +
> +static long oom_evaluate_memcg(struct mem_cgroup *memcg,
> +			       const nodemask_t *nodemask)
> +{
> +	struct css_task_iter it;
> +	struct task_struct *task;
> +	int elegible = 0;
> +
> +	css_task_iter_start(&memcg->css, 0, &it);
> +	while ((task = css_task_iter_next(&it))) {
> +		/*
> +		 * If there are no tasks, or all tasks have oom_score_adj set
> +		 * to OOM_SCORE_ADJ_MIN and oom_kill_all_tasks is not set,
> +		 * don't select this memory cgroup.
> +		 */
> +		if (!elegible &&
> +		    (memcg->oom_kill_all_tasks ||
> +		     task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN))
> +			elegible = 1;

I'm curious about the decision made in this conditional and about how
oom_kill_memcg_member() ignores task->signal->oom_score_adj.  It means
that memory.oom_kill_all_tasks overrides /proc/pid/oom_score_adj even
when the task would otherwise be oom disabled.

This is undocumented in the changelog, and I'm questioning whether it's
the right decision.  Doesn't it make sense to kill all tasks that are
not oom disabled, and allow the user to still protect certain processes
through their /proc/pid/oom_score_adj setting?  Otherwise, there's no
way to provide that protection short of a sibling memcg with its own
reservation of memory.

I'm thinking of a process that governs jobs inside the memcg: if there
is an oom kill, it wants to do logging and any necessary cleanup before
exiting itself.  That seems like a powerful combination when coupled
with oom notification.  (A rough sketch of what I mean is at the end of
this mail.)

Also, s/elegible/eligible/

Otherwise, looks good!
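
To make that concrete, here's a rough sketch of the alternative, purely
illustrative and untested against this series; I'm guessing at the exact
prototype of oom_kill_memcg_member() and eliding however the kill is
actually issued in your patch:

static int oom_kill_memcg_member(struct task_struct *task, void *unused)
{
	/*
	 * Honor the user's /proc/pid/oom_score_adj even when
	 * memory.oom_kill_all_tasks is set: an oom-disabled task
	 * (a job manager, for example) survives the memcg-wide kill
	 * so it can log the event and do its own cleanup before
	 * exiting on its own.
	 */
	if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)
		return 0;

	/* ... issue the kill exactly as the patch does today ... */
	return 0;
}

The selection side in oom_evaluate_memcg() could then drop the
oom_kill_all_tasks special case, so a memcg whose tasks are all oom
disabled stays unselectable:

	if (!eligible &&
	    task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN)
		eligible = 1;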