Date: Thu, 17 May 2018 09:11:40 +0200
From: Michal Hocko
To: ufo19890607
Cc: akpm@linux-foundation.org, rientjes@google.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	penguin-kernel@I-love.SAKURA.ne.jp, guro@fb.com,
	yang.s@alibaba-inc.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, yuzhoujian
Subject: Re: [PATCH] Add the memcg print oom info for system oom
Message-ID: <20180517071140.GQ12670@dhcp22.suse.cz>
References: <1526540428-12178-1-git-send-email-ufo19890607@gmail.com>
In-Reply-To: <1526540428-12178-1-git-send-email-ufo19890607@gmail.com>

On Thu 17-05-18 08:00:28, ufo19890607 wrote:
> From: yuzhoujian
> 
> dump_header() does not print the memcg's name when a system-wide OOM
> happens. Some users want to locate the container whose task has been
> killed by the OOM killer, so add a call to mem_cgroup_print_oom_info()
> when a system OOM event happens.

The oom report is quite heavy today. Do we really need the full memcg
oom report here? Wouldn't it be sufficient to print the memcg the task
belongs to?

> Signed-off-by: yuzhoujian
> ---
>  mm/oom_kill.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 8ba6cb88cf58..244416c9834a 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -433,6 +433,7 @@ static void dump_header(struct oom_control *oc, struct task_struct *p)
>  	if (is_memcg_oom(oc))
>  		mem_cgroup_print_oom_info(oc->memcg, p);
>  	else {
> +		mem_cgroup_print_oom_info(mem_cgroup_from_task(p), p);
>  		show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask);
>  		if (is_dump_unreclaim_slabs())
>  			dump_unreclaimable_slab();
> -- 
> 2.14.1

-- 
Michal Hocko
SUSE Labs
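
For illustration, a minimal sketch of the lighter-weight alternative
Michal suggests: print only the cgroup path of the victim task's memcg
instead of emitting the full memcg OOM report. The helper name
dump_oom_memcg_name() is hypothetical and not part of the patch;
pr_cont_cgroup_path(), task_cgroup() and memory_cgrp_id are existing
kernel interfaces, and the snippet assumes the includes already present
in mm/oom_kill.c.

/*
 * Hypothetical sketch, not part of the patch above: report only the
 * memcg path the OOM victim belongs to, which is roughly what
 * "print the memcg the task belongs to" could look like when called
 * from dump_header() for a global (non-memcg) OOM.
 */
static void dump_oom_memcg_name(struct task_struct *p)
{
	/* the task's cgroup must be looked up under rcu_read_lock() */
	rcu_read_lock();
	pr_info("oom: task %s (pid %d) belongs to memcg ",
		p->comm, task_pid_nr(p));
	pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id));
	pr_cont("\n");
	rcu_read_unlock();
}

The RCU locking also matters for the patch as posted:
mem_cgroup_from_task() likewise dereferences the task's css and needs
RCU protection, which the one-line addition in dump_header() does not
visibly provide.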