Date: Tue, 5 Apr 2022 22:18:01 +0100
From: Matthew Wilcox
To: Stephen Brennan
Cc: Dave Chinner, Hillf Danton, Roman Gushchin, MM, Mel Gorman,
	Yu Zhao, David Hildenbrand, LKML
Subject: Re: [RFC] mm/vmscan: add periodic slab shrinker
References: <20220402072103.5140-1-hdanton@sina.com>
	<20220403005618.5263-1-hdanton@sina.com>
	<20220404010948.GV1609613@dread.disaster.area>
	<87ilrn5ttl.fsf@stepbren-lnx.us.oracle.com>
In-Reply-To: <87ilrn5ttl.fsf@stepbren-lnx.us.oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 05, 2022 at 10:22:14AM -0700, Stephen Brennan wrote:
> I can't speak for every slab cache, but I've been coming to the same
> conclusion myself regarding the dentry cache. I think that the rate of
> stepping through the LRU should be tied to the rate of allocations.
> Truly in-use objects shouldn't be harmed by this, as they should get
> referenced and rotated to the beginning of the LRU. But the one-offs
> which are bloating the cache will be found and removed.

I agree with all this.

> I've implemented a version of this patch which just takes one step
> through the LRU on each d_alloc. It's quite interesting to watch it
> work. You can create 5 million negative dentries in directory /A via
> stat(), and then create 5 million negative dentries in directory /B.
> The total dentry slab size reaches 5 million but never goes past it,
> since the old negative dentries from /A aren't really in use, and they
> get pruned at the same rate as negative dentries from /B get created.
> On the other hand, if you *continue* to stat() on the dentries of /A
> while you create negative dentries in /B, then the cache grows to 10
> million, since the /A dentries are actually in use.
>
> Maybe a solution could involve some generic list_lru machinery that can
> let you do these sorts of incremental scans? Maybe batching them up so
> instead of doing one every allocation, you do them every 100 or 1000?
> It would still be up to the individual user to put this to good use in
> the object allocation path.

I feel like we need to allow the list to both shrink and grow, depending
on how useful the entries in it are. So, one counter per LRU,
incremented on every add. When that counter gets to 100, reset it to 0
and scan 110 entries. Maybe 0 of them can be reclaimed; maybe 110 of
them can be. But the list can shrink over time instead of being a
"one in, one out" scenario.

Clearly 110 is a magic number, but intuitively, attempting to shrink by
10% feels reasonable. We need to strike a balance between shrinking
quickly enough and giving the cache time to figure out which entries are
actually useful.