From: ufo19890607@gmail.com
To: akpm@linux-foundation.org, mhocko@suse.com, rientjes@google.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	penguin-kernel@I-love.SAKURA.ne.jp, guro@fb.com, yang.s@alibaba-inc.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, yuzhoujian
Subject: [PATCH v7 2/2] Refactor part of the oom report in dump_header
Date: Sat, 2 Jun 2018 19:58:52 +0800
Message-Id: <1527940734-35161-2-git-send-email-ufo19890607@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1527940734-35161-1-git-send-email-ufo19890607@gmail.com>
References: <1527940734-35161-1-git-send-email-ufo19890607@gmail.com>

From: yuzhoujian

dump_header() does not print the memcg's name when a system-wide oom happens, so
users cannot locate the container that holds the task killed by the oom killer.
I follow the advice of David Rientjes and Michal Hocko and refactor part of the
oom report in a backwards compatible way. After this patch, users can get the
memcg's path from the oom report and check the relevant container much more
quickly.

Below is the relevant part of the oom report in dmesg:
...
[  142.158316] panic cpuset=/ mems_allowed=0-1
[  142.158983] CPU: 15 PID: 8682 Comm: panic Not tainted 4.17.0-rc6+ #13
[  142.159659] Hardware name: Inspur SA5212M4/YZMB-00370-107, BIOS 4.1.10 11/14/2016
[  142.160342] Call Trace:
[  142.161037]  dump_stack+0x78/0xb3
[  142.161734]  dump_header+0x7d/0x334
[  142.162433]  oom_kill_process+0x228/0x490
[  142.163126]  ? oom_badness+0x2a/0x130
[  142.163821]  out_of_memory+0xf0/0x280
[  142.164532]  __alloc_pages_slowpath+0x711/0xa07
[  142.165241]  __alloc_pages_nodemask+0x23f/0x260
[  142.165947]  alloc_pages_vma+0x73/0x180
[  142.166665]  do_anonymous_page+0xed/0x4e0
[  142.167388]  __handle_mm_fault+0xbd2/0xe00
[  142.168114]  handle_mm_fault+0x116/0x250
[  142.168841]  __do_page_fault+0x233/0x4d0
[  142.169567]  do_page_fault+0x32/0x130
[  142.170303]  ? page_fault+0x8/0x30
[  142.171036]  page_fault+0x1e/0x30
[  142.171764] RIP: 0033:0x7f403000a860
[  142.172517] RSP: 002b:00007ffc9f745c28 EFLAGS: 00010206
[  142.173268] RAX: 00007f3f6fd7d000 RBX: 0000000000000000 RCX: 00007f3f7f5cd000
[  142.174040] RDX: 00007f3fafd7d000 RSI: 0000000000000000 RDI: 00007f3f6fd7d000
[  142.174806] RBP: 00007ffc9f745c50 R08: ffffffffffffffff R09: 0000000000000000
[  142.175623] R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000400490
[  142.176542] R13: 00007ffc9f745d30 R14: 0000000000000000 R15: 0000000000000000
[  142.177709] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),origin_memcg=(null),kill_memcg=/test/test1/test2,task=panic,pid= 8622,uid= 0
...

Changes since v6:
- divide the patch v5 into two parts. One part adds an array of const char
  and puts enum oom_constraint into memcontrol.h (a rough sketch of their
  shape follows this changelog); the other refactors the output in
  dump_header.
- limit the memory usage of the static char array by using NAME_MAX in
  mem_cgroup_print_oom_context.
- eliminate the spurious spaces in the oom output and fix the misspelling
  "constrain".

Changes since v5:
- add an array of const char for each constraint.
- replace all of the pr_cont calls with a single pr_info line.
- put enum oom_constraint into memcontrol.c for printing the oom constraint.

Changes since v4:
- rename the helper to mem_cgroup_print_oom_context.
- rename mem_cgroup_print_oom_info to mem_cgroup_print_oom_meminfo.
- add the constraint info to dump_header.

Changes since v3:
- rename the helper to mem_cgroup_print_oom_memcg_name.
- hold the rcu read lock in the helper.
- remove the printing of the memcg's name from mem_cgroup_print_oom_info.

Changes since v2:
- add the mem_cgroup_print_memcg_name helper to print the name of the memcg
  that contains the task to be killed by the oom killer.

Changes since v1:
- replace adding mem_cgroup_print_oom_info with printing only the memcg's name.
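For reviewers' reference: the enum oom_constraint values and the
oom_constraint_text[] table used by the new pr_info() call come from patch 1/2
of this series, which is not included in this mail. The snippet below is only a
rough sketch of their shape under that assumption, not the exact hunk from that
patch:

/* Assumed shape; the authoritative definition is added by patch 1/2. */
enum oom_constraint {
	CONSTRAINT_NONE,
	CONSTRAINT_CPUSET,
	CONSTRAINT_MEMORY_POLICY,
	CONSTRAINT_MEMCG,
};

/* Maps each constraint to the string printed in the oom-kill line. */
static const char * const oom_constraint_text[] = {
	[CONSTRAINT_NONE]          = "CONSTRAINT_NONE",
	[CONSTRAINT_CPUSET]        = "CONSTRAINT_CPUSET",
	[CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY",
	[CONSTRAINT_MEMCG]         = "CONSTRAINT_MEMCG",
};

Only the names matter for reading the diff below; see patch 1/2 for the exact
definitions.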
Signed-off-by: yuzhoujian
---
 include/linux/memcontrol.h | 15 ++++++++++---
 mm/memcontrol.c            | 55 ++++++++++++++++++++++++++++++++--------------
 mm/oom_kill.c              |  5 +++--
 3 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 57311b6c4d67..1c7d5da1c827 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -478,8 +478,11 @@ void mem_cgroup_handle_over_high(void);
 
 unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg);
 
-void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
-				struct task_struct *p);
+void mem_cgroup_print_oom_context(struct mem_cgroup *memcg,
+				struct task_struct *p, enum oom_constraint constraint,
+				nodemask_t *nodemask);
+
+void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg);
 
 static inline void mem_cgroup_oom_enable(void)
 {
@@ -873,7 +876,13 @@ static inline unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg)
 }
 
 static inline void
-mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
+mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p,
+			enum oom_constraint constraint, nodemask_t *nodemask)
+{
+}
+
+static inline void
+mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2bd3df3d101a..fd1172938c8e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1118,33 +1118,54 @@ static const char *const memcg1_stat_names[] = {
 };
 
 #define K(x) ((x) << (PAGE_SHIFT-10))
-/**
- * mem_cgroup_print_oom_info: Print OOM information relevant to memory controller.
- * @memcg: The memory cgroup that went over limit
+/*
+ * mem_cgroup_print_oom_context: Print OOM context information relevant to
+ * memory controller, which includes allocation constraint, nodemask, origin
+ * memcg that has reached its limit, kill memcg that contains the killed
+ * process, killed process's command, pid and uid.
+ * @memcg: The origin memory cgroup that went over limit
  * @p: Task that is going to be killed
+ * @constraint: The allocation constraint
+ * @nodemask: The allocation nodemask
  *
  * NOTE: @memcg and @p's mem_cgroup can be different when hierarchy is
  * enabled
  */
-void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
+void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p,
+		enum oom_constraint constraint, nodemask_t *nodemask)
 {
-	struct mem_cgroup *iter;
-	unsigned int i;
+	static char origin_memcg_name[NAME_MAX], kill_memcg_name[NAME_MAX];
+	struct cgroup *origin_cgrp, *kill_cgrp;
 
 	rcu_read_lock();
-
-	if (p) {
-		pr_info("Task in ");
-		pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id));
-		pr_cont(" killed as a result of limit of ");
-	} else {
-		pr_info("Memory limit reached of cgroup ");
+	if (memcg) {
+		origin_cgrp = memcg->css.cgroup;
+		cgroup_path(origin_cgrp, origin_memcg_name, NAME_MAX);
 	}
-
-	pr_cont_cgroup_path(memcg->css.cgroup);
-	pr_cont("\n");
-
+	kill_cgrp = task_cgroup(p, memory_cgrp_id);
+	cgroup_path(kill_cgrp, kill_memcg_name, NAME_MAX);
+
+	if (p)
+		pr_info("oom-kill:constraint=%s,nodemask=%*pbl,origin_memcg=%s,kill_memcg=%s,task=%s,pid=%5d,uid=%5d\n",
+			oom_constraint_text[constraint], nodemask_pr_args(nodemask),
+			strlen(origin_memcg_name) ? origin_memcg_name : "(null)",
+			kill_memcg_name, p->comm, p->pid,
+			from_kuid(&init_user_ns, task_uid(p)));
+	else
+		pr_info("oom-kill:constraint=%s,nodemask=%*pbl,origin_memcg=%s,kill_memcg=%s\n",
+			oom_constraint_text[constraint], nodemask_pr_args(nodemask),
+			strlen(origin_memcg_name) ? origin_memcg_name : "(null)", kill_memcg_name);
 	rcu_read_unlock();
+}
+
+/**
+ * mem_cgroup_print_oom_meminfo: Print OOM memory information relevant to memory controller.
+ * @memcg: The memory cgroup that went over limit
+ */
+void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup *iter;
+	unsigned int i;
 
 	pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n",
 		K((u64)page_counter_read(&memcg->memory)),
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index c806cd656af6..af0efab8a9e5 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -16,7 +16,6 @@
  * for newbie kernel hackers. It features several pointers to major
  * kernel subsystems and hints as to where to find out what things do.
  */
-
 #include <linux/oom.h>
 #include <linux/mm.h>
 #include <linux/err.h>
@@ -414,6 +413,7 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 
 static void dump_header(struct oom_control *oc, struct task_struct *p)
 {
+	enum oom_constraint constraint = constrained_alloc(oc);
 	pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), nodemask=%*pbl, order=%d, oom_score_adj=%hd\n",
 		current->comm, oc->gfp_mask, &oc->gfp_mask,
 		nodemask_pr_args(oc->nodemask), oc->order,
@@ -423,8 +423,9 @@ static void dump_header(struct oom_control *oc, struct task_struct *p)
 
 	cpuset_print_current_mems_allowed();
 	dump_stack();
+	mem_cgroup_print_oom_context(oc->memcg, p, constraint, oc->nodemask);
 	if (is_memcg_oom(oc))
-		mem_cgroup_print_oom_info(oc->memcg, p);
+		mem_cgroup_print_oom_meminfo(oc->memcg);
 	else {
 		show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask);
 		if (is_dump_unreclaim_slabs())
-- 
2.14.1
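
Not part of the patch itself: because the report is now a single key=value line,
a userspace log collector can extract individual fields with ordinary string
parsing. Below is a minimal, illustrative C sketch; the sample line is copied
from the dmesg output above, and the field name and 256-byte buffer size are
assumptions for the example only.

/* Illustrative userspace sketch only -- not part of this series. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Sample line taken from the dmesg output shown earlier. */
	const char *line =
		"oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),"
		"origin_memcg=(null),kill_memcg=/test/test1/test2,"
		"task=panic,pid= 8622,uid= 0";
	char kill_memcg[256] = "";
	const char *p = strstr(line, "kill_memcg=");

	/* Extract everything after "kill_memcg=" up to the next comma. */
	if (p && sscanf(p, "kill_memcg=%255[^,]", kill_memcg) == 1)
		printf("killed task's memcg: %s\n", kill_memcg);
	return 0;
}

The same approach works for the other key=value fields, which is the main
benefit of emitting the report as one line instead of several pr_cont() pieces.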