Date: Thu, 1 Dec 2022 09:49:27 +0100
From: Michal Hocko
To: 程垲涛 Chengkaitao Cheng
Cc: Tao pilgrim, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org,
    corbet@lwn.net, roman.gushchin@linux.dev, shakeelb@google.com,
    akpm@linux-foundation.org, songmuchun@bytedance.com, cgel.zte@gmail.com,
    ran.xiaokai@zte.com.cn, viro@zeniv.linux.org.uk, zhengqi.arch@bytedance.com,
    ebiederm@xmission.com, Liam.Howlett@oracle.com, chengzhihao1@huawei.com,
    haolee.swjtu@gmail.com, yuzhao@google.com, willy@infradead.org,
    vasily.averin@linux.dev, vbabka@suse.cz, surenb@google.com,
    sfr@canb.auug.org.au, mcgrof@kernel.org, sujiaxun@uniontech.com,
    feng.tang@intel.com, cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Bagas Sanjaya, linux-mm@kvack.org, Greg Kroah-Hartman
Subject: Re: [PATCH] mm: memcontrol: protect the memory in cgroup from being oom killed

On Thu 01-12-22 04:52:27, 程垲涛 Chengkaitao Cheng wrote:
> At 2022-12-01 00:27:54, "Michal Hocko" wrote:
> >On Wed 30-11-22 15:46:19, 程垲涛 Chengkaitao Cheng wrote:
> >> On 2022-11-30 21:15:06, "Michal Hocko" wrote:
> >> > On Wed 30-11-22 15:01:58, chengkaitao wrote:
> >> > > From: chengkaitao
> >> > >
> >> > > We created a new interface, memory.oom.protect, for memory. If there is
> >> > > an OOM kill under the parent memory cgroup, and the memory usage of a
> >> > > child cgroup is within its effective oom.protect boundary, the cgroup's
> >> > > tasks won't be OOM killed unless there are no unprotected tasks in other
> >> > > children cgroups. It draws on the logic of memory.min/low in the
> >> > > inheritance relationship.
> >> >
> >> > Could you be more specific about usecases?
> >
> >This is a very important question to answer.
>
> Usecase 1: users say that they want to protect an important process
> with high memory consumption from being killed by the OOM killer in case
> of docker container failure, so as to retain more critical on-site
> information or a self-recovery mechanism. They suggest setting the
> score_adj of this process to -1000, but I don't agree with that, because
> the docker container is not more important than the other docker
> containers on the same physical machine. If score_adj of the process
> is set to -1000, the probability of OOM in other container processes
> will increase.
>
> Usecase 2: There are many business processes and agent processes
> mixed together on a physical machine, and they need to be classified
> and protected. However, some agents are the parents of business
> processes, and some business processes are the parents of agent
> processes, so it is troublesome to set different score_adj for them.
> Business processes and agents cannot determine which level their
> score_adj should be at. If we create another agent to set all processes'
> score_adj, we have to cycle through all the processes on the physical
> machine regularly, which looks stupid.

I do agree that oom_score_adj is far from an ideal tool for these
usecases. But I also agree with Roman that these could be addressed by
an oom killer implementation in userspace which can have much better
tailored policies. OOM protection limits would require tuning and also
regular revisions (e.g. memory consumption by any workload might change
with different kernel versions) to provide what you are looking for.
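For reference, the approach described in usecase 1 boils down to a single
procfs write. A minimal standalone sketch (plain userspace C, not part of
the patch) is below; note that OOM_SCORE_ADJ_MIN (-1000) is machine-wide,
which is exactly the spill-over concern raised above, and lowering the
value requires CAP_SYS_RESOURCE.

/*
 * Minimal sketch (not from the patch): the usecase-1 approach of
 * exempting one process from the OOM killer via oom_score_adj.
 * -1000 (OOM_SCORE_ADJ_MIN) makes the task skipped no matter which
 * container triggers the OOM. Lowering the value needs CAP_SYS_RESOURCE.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/self/oom_score_adj", "w");

	if (!f) {
		perror("open /proc/self/oom_score_adj");
		return 1;
	}
	fprintf(f, "-1000\n");
	/* the write is flushed on fclose(); EOF here means it failed */
	return fclose(f) ? 1 : 0;
}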
> >> > How do you tune oom.protect
> >> > wrt to other tunables? How does this interact with the oom_score_adj
> >> > tuning (e.g. a first hand oom victim with the score_adj 1000 sitting
> >> > in an oom protected memcg)?
> >>
> >> We prefer users to use score_adj and oom.protect independently. Score_adj
> >> is a parameter applicable to the host, and oom.protect is a parameter
> >> applicable to a cgroup. When the physical machine's memory size is
> >> particularly large, the score_adj granularity is also very large.
> >> However, oom.protect can achieve more fine-grained adjustment.
> >
> >Let me clarify a bit. I am not trying to defend oom_score_adj. It has
> >its well known limitations and it is essentially unusable for many
> >situations other than - hide or auto-select potential oom victim.
> >
> >> When the score_adj of the processes is the same, I list the following
> >> case for explanation:
> >>
> >>             root
> >>              |
> >>           cgroup A
> >>           /      \
> >>    cgroup B      cgroup C
> >>   (task m,n)    (task x,y)
> >>
> >> score_adj(all tasks) = 0;
> >> oom.protect(cgroup A) = 0;
> >> oom.protect(cgroup B) = 0;
> >> oom.protect(cgroup C) = 3G;
> >
> >How can you enforce protection at C level without any protection at A
> >level?
>
> The basic idea of this scheme is that all processes in the same cgroup are
> equally important. If some processes need extra protection, a new cgroup
> needs to be created for unified settings. I don't think it is necessary to
> implement protection inside cgroup C, because task x and task y are equally
> important. Only the four processes (task m, n, x and y) in cgroup A have
> primary and secondary differences.
>
> > This would easily allow arbitrary cgroup to hide from the oom
> > killer and spill over to other cgroups.
>
> I don't think this will happen, because eoom.protect only works on the
> parent cgroup. If "oom.protect(parent cgroup) = 0", from the perspective
> of the grandparent cgroup, task x and y will not be specially protected.

Just to confirm I am on the same page. This means that there won't be
any protection in case of the global oom in the above example. So
effectively the same semantic as the low/min protection.

> >> usage(task m) = 1G
> >> usage(task n) = 2G
> >> usage(task x) = 1G
> >> usage(task y) = 2G
> >>
> >> oom killer order of cgroup A: n > m > y > x
> >> oom killer order of host: y = n > x = m
> >>
> >> If cgroup A is a directory maintained by users, users can use oom.protect
> >> to protect relatively important tasks x and y.
> >>
> >> However, when score_adj and oom.protect are used at the same time, we
> >> will also consider the impact of both, as expressed in the following
> >> formula, but I have to admit that it is an unstable result.
> >>
> >> score = task_usage + score_adj * totalpage - eoom.protect * task_usage / local_memcg_usage
> >
> >I hope I am not misreading but this has some rather unexpected
> >properties. First off, bigger memory consumers in a protected memcg are
> >protected more.
>
> Since a cgroup needs to reasonably distribute the protection quota to all
> processes in the cgroup, I think that processes consuming more memory
> should get more quota. It is fair to processes consuming less memory too:
> even if processes consuming more memory get more quota, their oom_score
> is still higher than that of the processes consuming less memory. When
> the oom killer appears in the local cgroup, the kill order remains
> unchanged.

Why cannot you simply discount the protection from all processes
equally? I do not follow why the task_usage has to play any role in
that.
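To make the formula concrete, the following is a rough standalone sketch
(not code from the patch) that applies it to the example hierarchy,
assuming score_adj = 0 everywhere and the effective protection clamped to
min(oom.protect, usage) as stated below. When cgroup A hits OOM, C's 3G
protection drives x and y down to a score of 0, so they sort below m and
n; for a global OOM, oom.protect(A) = 0 leaves nothing protected and the
order follows raw usage, y = n > x = m.

/*
 * Rough illustration only: the proposed score formula applied to the
 * example hierarchy above (score_adj == 0, sizes in GiB). Assumes the
 * effective protection is clamped to the cgroup's local usage.
 */
#include <stdio.h>

static double score(double usage, double protect, double cg_usage)
{
	double eprotect = protect < cg_usage ? protect : cg_usage; /* min() */

	return usage - eprotect * usage / cg_usage;
}

int main(void)
{
	/* OOM inside cgroup A: C's 3G protection applies, B has none */
	printf("A-level:    m=%.1f n=%.1f x=%.1f y=%.1f\n",
	       score(1, 0, 3), score(2, 0, 3),	/* tasks in B */
	       score(1, 3, 3), score(2, 3, 3));	/* tasks in C */

	/* Global OOM: oom.protect(A) == 0, so nothing is protected */
	printf("root-level: m=%.1f n=%.1f x=%.1f y=%.1f\n",
	       score(1, 0, 3), score(2, 0, 3),
	       score(1, 0, 3), score(2, 0, 3));
	return 0;
}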
> >Also I would expect the protection discount would
> >be capped by the actual usage otherwise excessive protection
> >configuration could skew the results considerably.
>
> In the calculation, we will select the minimum value of memcg_usage and
> oom.protect.
>
> >> > I haven't really read through the whole patch but this struck me odd.
> >>
> >> > > @@ -552,8 +552,19 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
> >> > >  	unsigned long totalpages = totalram_pages() + total_swap_pages;
> >> > >  	unsigned long points = 0;
> >> > >  	long badness;
> >> > > +#ifdef CONFIG_MEMCG
> >> > > +	struct mem_cgroup *memcg;
> >> > >
> >> > > -	badness = oom_badness(task, totalpages);
> >> > > +	rcu_read_lock();
> >> > > +	memcg = mem_cgroup_from_task(task);
> >> > > +	if (memcg && !css_tryget(&memcg->css))
> >> > > +		memcg = NULL;
> >> > > +	rcu_read_unlock();
> >> > > +
> >> > > +	update_parent_oom_protection(root_mem_cgroup, memcg);
> >> > > +	css_put(&memcg->css);
> >> > > +#endif
> >> > > +	badness = oom_badness(task, totalpages, MEMCG_OOM_PROTECT);
> >> >
> >> > the badness means different things depending on which memcg hierarchy
> >> > subtree you look at. Scaling based on the global oom could get really
> >> > misleading.
> >>
> >> I also took it into consideration. I planned to change "/proc/pid/oom_score"
> >> to a writable node. When different cgroup paths are written to it, different
> >> values will be output. The default output is for the root cgroup. Do you
> >> think this idea is feasible?
> >
> >I do not follow. Care to elaborate?
>
> Take two examples,
> cmd: cat /proc/pid/oom_score
> output: scaling based on the global oom
>
> cmd: echo "/cgroupA/cgroupB" > /proc/pid/oom_score
> output: scaling based on the cgroupB oom
> (If the task is not in cgroupB's hierarchy subtree, output: invalid parameter)

This is a terrible interface. First of all it assumes a state for the
file without any way to guarantee atomicity. How do you deal with two
different callers accessing the file?
-- 
Michal Hocko
SUSE Labs