From: Vladimir Davydov <vdavydov@virtuozzo.com>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko
Subject: [PATCH] mm: memcontrol: zap task_struct->memcg_oom_{gfp_mask,order}
Date: Fri, 11 Mar 2016 13:12:47 +0300
Message-ID: <1457691167-22756-1-git-send-email-vdavydov@virtuozzo.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: linux-kernel@vger.kernel.org

These fields are used for dumping info about the allocation that triggered the OOM. For a cgroup this information doesn't make much sense, because the memcg OOM killer is always invoked from the page fault handler. It isn't worth the space these fields occupy in task_struct.
Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
---
 include/linux/sched.h |  2 --
 mm/memcontrol.c       | 14 +++++---------
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ba8d8355c93a..626f5da5c43e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1839,8 +1839,6 @@ struct task_struct {
 #endif
 #ifdef CONFIG_MEMCG
 	struct mem_cgroup *memcg_in_oom;
-	gfp_t memcg_oom_gfp_mask;
-	int memcg_oom_order;
 
 	/* number of pages to reclaim on returning to userland */
 	unsigned int memcg_nr_pages_over_high;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 36db05fa8acb..a217b1374c32 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1232,14 +1232,13 @@ static unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg)
 	return limit;
 }
 
-static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
-				     int order)
+static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg)
 {
 	struct oom_control oc = {
 		.zonelist = NULL,
 		.nodemask = NULL,
-		.gfp_mask = gfp_mask,
-		.order = order,
+		.gfp_mask = GFP_KERNEL,
+		.order = 0,
 	};
 	struct mem_cgroup *iter;
 	unsigned long chosen_points = 0;
@@ -1605,8 +1604,6 @@ static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
 	 */
 	css_get(&memcg->css);
 	current->memcg_in_oom = memcg;
-	current->memcg_oom_gfp_mask = mask;
-	current->memcg_oom_order = order;
 }
 
 /**
@@ -1656,8 +1653,7 @@ bool mem_cgroup_oom_synchronize(bool handle)
 	if (locked && !memcg->oom_kill_disable) {
 		mem_cgroup_unmark_under_oom(memcg);
 		finish_wait(&memcg_oom_waitq, &owait.wait);
-		mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
-					 current->memcg_oom_order);
+		mem_cgroup_out_of_memory(memcg);
 	} else {
 		schedule();
 		mem_cgroup_unmark_under_oom(memcg);
@@ -5063,7 +5059,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
 		}
 
 		mem_cgroup_events(memcg, MEMCG_OOM, 1);
-		if (!mem_cgroup_out_of_memory(memcg, GFP_KERNEL, 0))
+		if (!mem_cgroup_out_of_memory(memcg))
 			break;
	}
-- 
2.1.4