From: Suren Baghdasaryan
Date: Tue, 23 Aug 2022 09:21:16 -0700
Subject: Re: [RFC PATCH] memcg: use root_mem_cgroup when css is inherited
To: Michal Hocko
Cc: Zhaoyang Huang, Tejun Heo, Shakeel Butt, "zhaoyang.huang",
    Johannes Weiner, Linux MM, LKML, Cgroups, Ke Wang, Zefan Li,
    Roman Gushchin, Muchun Song
References: <1660908562-17409-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 23, 2022 at 4:51 AM Michal Hocko wrote:
>
> On Tue 23-08-22 17:20:59, Zhaoyang Huang wrote:
> > On Tue, Aug 23, 2022 at 4:33 PM Michal Hocko wrote:
> > >
> > > On Tue 23-08-22 14:03:04, Zhaoyang Huang wrote:
> > > > On Tue, Aug 23, 2022 at 1:21 PM Michal Hocko wrote:
> > > > >
> > > > > On Tue 23-08-22 10:31:57, Zhaoyang Huang wrote:
> > > [...]
> > > > > > I would like to quote the comments from google side for more details
> > > > > > which can also be observed from different vendors.
> > > > > > "Also be advised that when you enable memcg v2 you will be using
> > > > > > per-app memcg configuration which implies noticeable overhead because
> > > > > > every app will have its own group. For example pagefault path will
> > > > > > regress by about 15%. And obviously there will be some memory overhead
> > > > > > as well. That's the reason we don't enable them in Android by
> > > > > > default."
> > > > >
> > > > > This should be reported and investigated. Because per-application memcg
> > > > > vs. memcg in general shouldn't make much of a difference from the
> > > > > performance side. I can see a potential performance impact for no-memcg
> > > > > vs. memcg case but even then 15% is quite a lot.
> > > > Less efficiency in memory reclaim caused by multiple LRUs should be one
> > > > of the reasons, which has been proved by comparing per-app memcg on/off.
> > > > Besides, theoretically the workingset could also be broken, as each LRU
> > > > is too short to compose a workingset.
> > >
> > > Do you have any data to back these claims? Is this something that could
> > > be handled on the configuration level? E.g. by applying low limit
> > > protection to keep the workingset in the memory?
> > I don't think so. IMO, workingset detection works when there are pages
> > evicted from the LRU which then refault, providing a refault distance
> > for those pages. Applying memcg protection would keep the LRU from being
> > evicted at all, which makes the mechanism fail.
>
> It is really hard to help you out without any actual data. The idea was
> though to use the low limit protection to adaptively configure
> respective memcgs to reduce refaults.
> You already have data about refaults ready, so increasing the limit for
> often-refaulting memcgs would reduce the thrashing.

Sorry for joining late. A couple of years ago I tested root-memcg vs
per-app memcg configurations on an Android phone. Here is a snapshot from
my findings:

Problem
=======
We see a tangible increase in major faults and workingset refaults when
transitioning from a root-only memory cgroup to per-application cgroups
on Android.

Test results
============
Results while running a memory-demanding workload:

                         root memcg   per-app memcg      delta
workingset_refault          1771228         3874281   +118.73%
workingset_nodereclaim         4543           13928   +206.58%
pgpgin                     13319208        20618944    +54.81%
pgpgout                     1739552         3080664     +77.1%
pgpgoutclean                2616571         4805755    +83.67%
pswpin                       359211         3918716   +990.92%
pswpout                     1082238         5697463   +426.45%
pgfree                     28978393        32531010    +12.26%
pgactivate                  2586562         8731113   +237.56%
pgdeactivate                3811074        11670051   +206.21%
pgfault                    38692510        46096963    +19.14%
pgmajfault                   441288         4100020    +829.1%
pgrefill                    4590451        12768165   +178.15%

Results while running an application cycle test (20 apps, 20 cycles):

                         root memcg   per-app memcg      delta
workingset_refault         10634691        11429223     +7.47%
workingset_nodereclaim        37477           59033    +57.52%
pgpgin                     70662840        69569516     -1.55%
pgpgout                     2605968         2695596     +3.44%
pgpgoutclean               13514955        14980610    +10.84%
pswpin                      1489851         3780868   +153.77%
pswpout                     4125547         8050819    +95.15%
pgfree                     99823083       105104637     +5.29%
pgactivate                  7685275        11647913    +51.56%
pgdeactivate               14193660        21459784    +51.19%
pgfault                    89173166       100598528    +12.81%
pgmajfault                  1856172         4227190   +127.74%
pgrefill                   16643554        23203927    +39.42%

Tests were conducted on an Android phone with 4GB RAM. A similar
regression was reported a couple of years ago here:
https://www.spinics.net/lists/linux-mm/msg121665.html

I plan on checking the difference again on newer kernels (likely 5.15)
after LPC this September.

> > [...]
> > > A.cgroup.controllers = memory
> > > A.cgroup.subtree_control = memory
> > >
> > > A/B.cgroup.controllers = memory
> > > A/B.cgroup.subtree_control = memory
> > > A/B/B1.cgroup.controllers = memory
> > >
> > > A/C.cgroup.controllers = memory
> > > A/C.cgroup.subtree_control = ""
> > > A/C/C1.cgroup.controllers = ""
> > Yes for the above hierarchy and configuration.
> >
> > > Is your concern that C1 is charged to A/C or that you cannot actually make
> > > A/C.cgroup.controllers = "" because you want to maintain memory in A?
> > > Because that would be breaking the internal node constraint rule AFAICS.
> > No. I just want to keep memory on B.
>
> That would require A to be without controllers which is not possible due
> to the hierarchical constraint.
>
> > > Or maybe you just really want a different hierarchy where
> > > A == root_cgroup and want the memory accounted in B
> > > (root/B.cgroup.controllers = memory) but not in C (root/C.cgroup.controllers = "")?
> > Yes.
> >
> > > That would mean that C memory would be maintained on the global (root
> > > memcg) LRUs, which is the only internal node allowed to have
> > > resources because it is special.
> > Exactly. I would like to have all groups like C, whose parent's
> > subtree_control does not include memory, charge memory to root. Under
> > this implementation, memory under an enabled group will be protected by
> > min/low, while the other groups' memory shares the same LRU so that the
> > workingset mechanism takes effect.
>
> One way to achieve that would be shaping the hierarchy the following way
>
>              root
>             /    \
>    no_memcg[1]   memcg[2]
>     ||||||||      |||||
>   app_cgroups   app_cgroups
>
> with
> no_memcg.subtree_control = ""
> memcg.subtree_control = memory
>
> no?
>
> You haven't really described why you need a per-application freezer
> cgroup, but I suspect you want to selectively freeze applications. Is
> there any obstacle to having a dedicated frozen cgroup and migrating
> tasks to be frozen there?
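The split hierarchy quoted above can be expressed as plain cgroup2
filesystem operations. A minimal sketch only: the no_memcg/memcg names
come from the diagram, app1/app2 are hypothetical app groups, it assumes
the memory controller is enabled in the root's subtree_control, and
CGROUP_ROOT defaults to a scratch directory so the layout can be tried
without root (point it at /sys/fs/cgroup, as root, to apply it for real):

```shell
# Sketch of the proposed layout: controllers enabled only under memcg/.
# CGROUP_ROOT defaults to a scratch directory for safe experimentation;
# on a real system it would be the cgroup2 mount, /sys/fs/cgroup.
CGROUP_ROOT=${CGROUP_ROOT:-$(mktemp -d)}

mkdir -p "$CGROUP_ROOT/no_memcg" "$CGROUP_ROOT/memcg"

# Enable the memory controller only below memcg/: apps placed under
# no_memcg/ stay charged to the root memcg and share its single LRU,
# while apps under memcg/ get per-app groups with memory.min/low available.
echo "+memory" > "$CGROUP_ROOT/memcg/cgroup.subtree_control"

# Hypothetical per-app groups on each side of the split.
mkdir -p "$CGROUP_ROOT/memcg/app1" "$CGROUP_ROOT/no_memcg/app2"
```

On a real mount the no-internal-process constraint also applies: tasks
would have to live in the leaf app groups, not in memcg/ itself.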
We intend for Android to gradually migrate to v2 cgroups for all
controllers, and given that v2 has to use a unified hierarchy, a
per-application hierarchy provides the highest flexibility. That way we
can control every aspect of every app without affecting others. Of
course that comes with its overhead.
Thanks,
Suren.

> --
> Michal Hocko
> SUSE Labs
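Michal's suggestion earlier in the thread (adaptively raise low limit
protection for memcgs that keep refaulting) could be driven by a small
polling helper. A hedged sketch only: bump_low, the 16 MiB step, and the
state-file argument are hypothetical names; memory.low is assumed to hold
a number rather than "max"; and kernels v5.9+ split the counter into
workingset_refault_anon/workingset_refault_file, so the field name below
matches older kernels:

```shell
# Hypothetical helper: raise a cgroup's memory.low by a fixed step whenever
# its workingset_refault counter grew since the previous invocation.
#   $1 - cgroup directory (e.g. /sys/fs/cgroup/memcg/app1)
#   $2 - state file remembering the last counter value (outside cgroupfs,
#        since arbitrary files cannot be created inside a cgroup directory)
bump_low() {
    cg=$1
    state=$2
    step=$((16 * 1024 * 1024))  # grow protection 16 MiB at a time (arbitrary)

    prev=$(cat "$state" 2>/dev/null || echo 0)
    # Pre-5.9 counter name; newer kernels expose _anon/_file variants.
    cur=$(awk '$1 == "workingset_refault" { print $2 }' "$cg/memory.stat")

    if [ "$cur" -gt "$prev" ]; then
        low=$(cat "$cg/memory.low")        # assumed numeric, not "max"
        echo $((low + step)) > "$cg/memory.low"
    fi
    echo "$cur" > "$state"
}
```

Run periodically per app group; a real implementation would also decay
memory.low again once refaults stop, so protection does not grow without
bound.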