From: Mina Almasry
Date: Tue, 6 Dec 2022 17:55:58 -0800
Subject: Re: [PATCH v3] [mm-unstable] mm: Fix memcg reclaim on memory tiered systems
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Roman Gushchin, Shakeel Butt,
	Muchun Song, Huang Ying, Yang Shi, Yosry Ahmed, weixugc@google.com,
	fvdl@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20221206023406.3182800-1-almasrymina@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Dec 6, 2022 at 11:55 AM Michal Hocko wrote:
>
> On Tue 06-12-22 08:06:51, Mina Almasry wrote:
> > On Tue, Dec 6, 2022 at 4:20 AM Michal Hocko wrote:
> > >
> > > On Mon
> > > 05-12-22 18:34:05, Mina Almasry wrote:
> > > > commit 3f1509c57b1b ("Revert "mm/vmscan: never demote for memcg
> > > > reclaim"") enabled demotion in memcg reclaim, which is the right
> > > > thing to do; however, it introduced a regression in the behavior
> > > > of try_to_free_mem_cgroup_pages().
> > > >
> > > > The callers of try_to_free_mem_cgroup_pages() expect it to attempt
> > > > to reclaim - not demote - nr_pages from the cgroup, i.e. the memory
> > > > usage of the cgroup should be reduced by nr_pages. The callers also
> > > > expect try_to_free_mem_cgroup_pages() to return the number of pages
> > > > reclaimed, not demoted.
> > > >
> > > > However, try_to_free_mem_cgroup_pages() unconditionally counts
> > > > demoted pages as reclaimed pages. So in practice it will often
> > > > demote nr_pages and return the number of demoted pages to the
> > > > caller. Demoted pages don't lower the memcg usage, so
> > > > try_to_free_mem_cgroup_pages() is not actually doing what the
> > > > callers want it to do.
> > > >
> > > > Various things work suboptimally on memory tiered systems, or don't
> > > > work at all, due to this:
> > > >
> > > > - memory.high enforcement likely doesn't work (it just demotes
> > > >   nr_pages instead of lowering the memcg usage by nr_pages).
> > > > - try_charge_memcg() will keep retrying the charge while
> > > >   try_to_free_mem_cgroup_pages() is just demoting pages and not
> > > >   actually making any room for the charge.
> > >
> > > This has been brought up during the review:
> > > https://lore.kernel.org/all/YoYTEDD+c4GT0xYY@dhcp22.suse.cz/
> >
> > Ah, I did indeed miss this. Thanks for the pointer. However, I don't
> > understand this bit from your email (sorry, I'm probably missing
> > something):
> >
> > "I suspect this is rather unlikely situation, though. The last tier
> > (without any fallback) should have some memory to reclaim most of
> > the time."
> >
> > Reading the code in try_charge_memcg(), I don't see the last retry of
> > try_to_free_mem_cgroup_pages() do anything special. My concern here is
> > that try_charge_memcg() calls try_to_free_mem_cgroup_pages()
> > MAX_RECLAIM_RETRIES times. Each of those calls may demote pages and
> > report back that it was able to 'reclaim' memory, while the charge
> > keeps failing because the memcg reclaim didn't actually make room for
> > the charge. What happens in this case? My understanding is that the
> > memcg oom-killer gets wrongly invoked.
>
> The memcg reclaim shrinks from all zones in the allowed zonelist, in
> general from all nodes. So unless the lower tier is outside of this
> zonelist, there is a zone to reclaim from which cannot demote.
> Correct?
>

Ah, thanks for pointing this out. I did indeed miss that memcg reclaim
tries to apply pressure equally to all the nodes. With some additional
testing I'm able to see what you said: there should be no premature oom
kill invocation, because generally the memcg reclaim will find some
pages to reclaim from the lower tier nodes.

I do find that the first call to try_to_free_mem_cgroup_pages()
sometimes mostly demotes and doesn't do much reclaim. I haven't been
able to fully track down the cause of that, but I suspect the first
call in my test finds most of the cgroup's memory on top tier nodes.
However, we do retry a number of times before invoking oom, and in my
testing the subsequent calls find plenty of memory to reclaim in the
lower tier nodes. I'll update the commit message in the next version.
> > > >
> > > > To fix these issues I propose that shrink_folio_list() only count
> > > > pages demoted from inside sc->nodemask to outside sc->nodemask as
> > > > 'reclaimed'.
> > >
> > > Could you expand on why the node mask matters? From the charge point
> > > of view it should be completely uninteresting, as the charge remains.
> > >
> > > I suspect we really need to change the reclaim metrics for memcg
> > > reclaim. In the memory balancing reclaim we can indeed consider
> > > demotions as reclaim, because the memory is freed in the end, but for
> > > memcg reclaim we really should be counting discharges instead. No
> > > demotion/migration will free up charges.
> >
> > I think what you're describing is exactly what this patch aims to do.
> > I'm proposing an interface change to shrink_folio_list() such that it
> > only counts demoted pages as reclaimed iff sc->nodemask is provided by
> > the caller and the demotion moved pages from inside sc->nodemask to
> > outside sc->nodemask. In this case:
> >
> > 1. Memory balancing reclaim would pass sc->nodemask=nid to
> >    shrink_folio_list(), indicating that it should count pages demoted
> >    from sc->nodemask as reclaimed.
> >
> > 2. memcg reclaim would pass sc->nodemask=NULL to shrink_folio_list(),
> >    indicating that it is looking for reclaim across all nodes and no
> >    demoted pages should count as reclaimed.
> >
> > Sorry if the commit message was not clear. I can try to make it
> > clearer in the next version, but it's already very long.
>
> Either I am missing something or I simply do not understand why you are
> hooked on the nodemask so much. Why can't we have a simple rule that
> only global reclaim considers demotions as nr_reclaimed?
>

Thanks. I think this approach would work for most callers. My issue here
is properly supporting the recently added nodes= arg[1] to
memory.reclaim. If the user specifies all nodes or provides no arg, I'd
like to treat it as memcg reclaim, which doesn't count demotions.
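To make the rule I have in mind concrete, here is a toy C sketch; the
names and types are illustrative stand-ins for the kernel's nodemask
API, not the actual kernel code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for the kernel's nodemask_t: one bit per node. */
typedef unsigned long nodemask_toy_t;

bool node_in_mask(int nid, nodemask_toy_t mask)
{
	return (mask >> nid) & 1UL;
}

/*
 * Would a demotion from src_nid to dst_nid count toward nr_reclaimed?
 * Proposed rule: only when the caller supplied a nodemask and the page
 * left that nodemask. memcg reclaim passes no nodemask, so demotions
 * never count for it.
 */
bool demotion_counts_as_reclaim(const nodemask_toy_t *sc_nodemask,
				int src_nid, int dst_nid)
{
	if (!sc_nodemask)
		return false;
	return node_in_mask(src_nid, *sc_nodemask) &&
	       !node_in_mask(dst_nid, *sc_nodemask);
}
```

So proactive reclaim on the top tier nodes gets demotions counted
(pages leave the mask), while memcg-wide reclaim does not.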
If the user provides the top tier nodes, I would like to count demotions,
since this interface is the way to trigger proactive demotion from top
tier nodes.

I guess I can check which args the user is passing and decide whether or
not to count demotions. But right now the user can specify any
combination of nodes: some of them top tier, some lower tier, some in
the middle. I can return -EINVAL for that, but that seems like a shame.
I thought a generic way to address this was what I'm doing here, i.e.
counting pages demoted from the nodemask as reclaimed. Is that not
acceptable? Is -EINVAL preferred here?

[1] https://lore.kernel.org/linux-mm/87tu2a1v3y.fsf@yhuang6-desk2.ccr.corp.intel.com/

> --
> Michal Hocko
> SUSE Labs
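P.S. Regarding the try_charge_memcg() retry behavior discussed above, a
toy userspace model (everything here is made up for illustration; it is
not kernel code) shows both halves of it: a single reclaim call can
report progress purely via demotions without lowering usage, yet across
the retry budget the top tier drains and later calls reclaim from the
lower tier, so the charge eventually succeeds instead of prematurely
invoking the oom-killer:

```c
#include <stdbool.h>

#define MAX_RECLAIM_RETRIES 16	/* same value the kernel uses */

/* Toy model of a memcg on a two-tier system; not kernel code. */
struct toy_memcg {
	long usage;	/* charged pages */
	long limit;	/* memory.max */
	long top_tier;	/* pages resident on top tier nodes */
	long low_tier;	/* pages resident on lower tier nodes */
};

/*
 * Pretend reclaim: demote from the top tier first, then reclaim from
 * the lower tier. In the buggy variant (count_demotions == true),
 * demotions are reported as reclaimed pages even though they do not
 * uncharge anything.
 */
long toy_reclaim(struct toy_memcg *m, long nr_pages, bool count_demotions)
{
	long demoted = nr_pages < m->top_tier ? nr_pages : m->top_tier;
	long want, freed;

	m->top_tier -= demoted;
	m->low_tier += demoted;		/* usage unchanged by demotion */

	want = nr_pages - demoted;
	freed = want < m->low_tier ? want : m->low_tier;
	m->low_tier -= freed;
	m->usage -= freed;		/* only real reclaim uncharges */

	return count_demotions ? demoted + freed : freed;
}

/* Simplified charge loop in the spirit of try_charge_memcg(). */
bool toy_charge(struct toy_memcg *m, long nr_pages, bool count_demotions)
{
	int retry;

	for (retry = 0; retry < MAX_RECLAIM_RETRIES; retry++) {
		if (m->usage + nr_pages <= m->limit) {
			m->usage += nr_pages;
			m->top_tier += nr_pages;
			return true;
		}
		toy_reclaim(m, nr_pages, count_demotions);
	}
	return false;	/* next step would be the memcg oom-killer */
}
```

On a cgroup whose memory all starts on the top tier, the first
toy_reclaim() call reports nr_pages "reclaimed" while usage stays flat,
which matches what I saw in testing; toy_charge() still succeeds within
the retry budget once the top tier is drained.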