Date: Wed, 6 Apr 2022 10:11:06 +1000
From: Dave Chinner
To: Roman Gushchin
Cc: Yang Shi, Hillf Danton, MM, Matthew Wilcox, Mel Gorman,
	Stephen Brennan, Yu Zhao, David Hildenbrand, LKML
Subject: Re: [RFC] mm/vmscan: add periodic slab shrinker
Message-ID: <20220406001106.GA1609613@dread.disaster.area>
References:
 <20220402072103.5140-1-hdanton@sina.com>
 <20220403005618.5263-1-hdanton@sina.com>
 <20220404010948.GV1609613@dread.disaster.area>
 <20220405051710.GW1609613@dread.disaster.area>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 05, 2022 at 02:31:02PM -0700, Roman Gushchin wrote:
> On Tue, Apr 05, 2022 at 01:58:59PM -0700, Yang Shi wrote:
> > On Tue, Apr 5, 2022 at 9:36 AM Roman Gushchin wrote:
> > > On Tue, Apr 05, 2022 at 03:17:10PM +1000, Dave Chinner wrote:
> > > > On Mon, Apr 04, 2022 at 12:08:25PM -0700, Roman Gushchin wrote:
> > > > > On Mon, Apr 04, 2022 at 11:09:48AM +1000, Dave Chinner wrote:
> > IMHO the number of really freed pages should be returned (I do
> > understand it is not that easy for now), and returning 0 should
> > be fine.
>
> It's doable, there is already a mechanism in place which hooks into
> the slub/slab/slob release path and stops the slab reclaim as a
> whole if enough memory was freed.

The reclaim state that accounts for slab pages freed really needs to
be first-class shrinker state that is aggregated at the
do_shrink_slab() level and passed back to the vmscan code.
The shrinker infrastructure itself should be aware of the progress
each shrinker is making - not just objects reclaimed but also pages
reclaimed - so it can make better decisions about how much work
should be done by each shrinker. e.g. lots of objects in cache, lots
of objects reclaimed, but no pages reclaimed is indicative of a
fragmented slab cache. If this keeps happening, we should be trying
to apply extra pressure to this specific cache, because the only
method we have for getting a fragmented cache to return some memory
is to reclaim lots more objects from it.

> > The current logic (returning the number of objects) may feed back
> > something over-optimistic. I, at least, experienced once or twice
> > that a significant amount of slab caches were shrunk, but actually
> > 0 pages were freed. TBH the new slab controller may make it worse
> > since the page may be pinned by the objects from other memcgs.
>
> Of course, the more dense the placement of objects is, the harder it
> is to get the physical pages back. But usually it pays off by having
> a dramatically lower total number of slab pages.

Unless you have tens of millions of objects in the cache. The dentry
cache is a prime example of this "lots of tiny cached objects" case,
where we have tens of objects per slab page and so can suffer badly
from internal fragmentation....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com