Date: Sat, 14 Jul 2018 09:17:17 +1000
From: Dave Chinner <david@fromorbit.com>
To: James Bottomley
Cc: Linus Torvalds, Matthew Wilcox, Waiman Long, Michal Hocko, Al Viro,
    Jonathan Corbet, "Luis R. Rodriguez", Kees Cook,
    Linux Kernel Mailing List, linux-fsdevel, linux-mm,
    "open list:DOCUMENTATION", Jan Kara, Paul McKenney, Andrew Morton,
    Ingo Molnar, Miklos Szeredi, Larry Woodman, "Wangkai (Kevin,C)"
Subject: Re: [PATCH v6 0/7] fs/dcache: Track & limit # of negative dentries
Message-ID: <20180713231717.GX2234@dastard>
In-Reply-To: <1531496812.3361.9.camel@HansenPartnership.com>
References: <18c5cbfe-403b-bb2b-1d11-19d324ec6234@redhat.com>
 <1531336913.3260.18.camel@HansenPartnership.com>
 <4d49a270-23c9-529f-f544-65508b6b53cc@redhat.com>
 <1531411494.18255.6.camel@HansenPartnership.com>
 <20180712164932.GA3475@bombadil.infradead.org>
 <1531416080.18255.8.camel@HansenPartnership.com>
 <1531425435.18255.17.camel@HansenPartnership.com>
 <20180713003614.GW2234@dastard>
 <1531496812.3361.9.camel@HansenPartnership.com>
Rodriguez" , Kees Cook , Linux Kernel Mailing List , linux-fsdevel , linux-mm , "open list:DOCUMENTATION" , Jan Kara , Paul McKenney , Andrew Morton , Ingo Molnar , Miklos Szeredi , Larry Woodman , "Wangkai (Kevin,C)" Subject: Re: [PATCH v6 0/7] fs/dcache: Track & limit # of negative dentries Message-ID: <20180713231717.GX2234@dastard> References: <18c5cbfe-403b-bb2b-1d11-19d324ec6234@redhat.com> <1531336913.3260.18.camel@HansenPartnership.com> <4d49a270-23c9-529f-f544-65508b6b53cc@redhat.com> <1531411494.18255.6.camel@HansenPartnership.com> <20180712164932.GA3475@bombadil.infradead.org> <1531416080.18255.8.camel@HansenPartnership.com> <1531425435.18255.17.camel@HansenPartnership.com> <20180713003614.GW2234@dastard> <1531496812.3361.9.camel@HansenPartnership.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <1531496812.3361.9.camel@HansenPartnership.com> User-Agent: Mutt/1.5.21 (2010-09-15) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jul 13, 2018 at 08:46:52AM -0700, James Bottomley wrote: > On Fri, 2018-07-13 at 10:36 +1000, Dave Chinner wrote: > > On Thu, Jul 12, 2018 at 12:57:15PM -0700, James Bottomley wrote: > > > What surprises me most about this behaviour is the steadiness of > > > the page cache ... I would have thought we'd have shrunk it > > > somewhat given the intense call on the dcache. > > > > Oh, good, the page cache vs superblock shrinker balancing still > > protects the working set of each cache the way it's supposed to > > under heavy single cache pressure. :) > > Well, yes, but my expectation is most of the page cache is clean, so > easily reclaimable. I suppose part of my surprise is that I expected > us to reclaim the clean caches first before we started pushing out the > dirty stuff and reclaiming it. I'm not saying it's a bad thing, just > saying I didn't expect us to make such good decisions under the > parameters of this test. The clean caches are still turned over by the workload, but it is very slow and only enough to eject old objects that have fallen out of the working set. We've got a lot better at keeping the working set in memory in adverse conditions over the past few years... > > Keep in mind that the amount of work slab cache shrinkers perform is > > directly proportional to the amount of page cache reclaim that is > > performed and the size of the slab cache being reclaimed.??IOWs, > > under a "single cache pressure" workload we should be directing > > reclaim work to the huge cache creating the pressure and do very > > little reclaim from other caches.... > > That definitely seems to happen. The thing I was most surprised about > is the steady pushing of anonymous objects to swap. I agree the dentry > cache doesn't seem to be growing hugely after the initial jump, so it > seems to be the largest source of reclaim. Which means swap behaviour has changed since I last looked at reclaim balance several years ago. These sorts of dentry/inode loads never used to push the system to swap. Not saying it's a bad thing, just that it is different. :) > > [ What follows from here is conjecture, but is based on what I've > > seen in the past 10+ years on systems with large numbers of negative > > dentries and fragmented dentry/inode caches. ] > > OK, so I fully agree with the concern about pathological object vs page > freeing problems (I referred to it previously). 
> > [ What follows from here is conjecture, but is based on what I've
> > seen in the past 10+ years on systems with large numbers of negative
> > dentries and fragmented dentry/inode caches. ]
>
> OK, so I fully agree with the concern about pathological object vs page
> freeing problems (I referred to it previously). However, I did think
> the compaction work that's been ongoing in mm was supposed to help
> here?

Compaction doesn't touch slab caches. We can't move active dentries and
other slab objects around in memory because they have external objects
with active references that point directly to them. Getting exclusive
access to active objects and all the things that point to them from
reclaim so we can move them is an intractable problem - it has sunk
slab cache defragmentation every time it has been attempted in the past
15 years....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
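
To make the "external references" problem concrete, here is a
hypothetical, userspace-only sketch; the structures are invented for
illustration and are not the kernel's dentry/inode definitions:

    /*
     * Why live slab objects are hard to move: other long-lived structures
     * hold raw pointers to the object, so copying it to a new location
     * leaves every one of those pointers referring to the old address.
     * Unlike page migration, there is no single mapping layer that
     * reclaim can lock and rewrite to find all the referents.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct fake_dentry {
            char name[32];
            struct fake_dentry *parent;     /* child -> parent pointer */
    };

    struct fake_inode {
            struct fake_dentry *alias;      /* inode -> dentry pointer */
    };

    int main(void)
    {
            struct fake_dentry *d = malloc(sizeof(*d));
            strcpy(d->name, "victim");

            /* two external structures holding direct references */
            struct fake_inode inode = { .alias = d };
            struct fake_dentry child = { .name = "child", .parent = d };

            /* "defragment" by copying the object into a new slab page */
            struct fake_dentry *moved = malloc(sizeof(*moved));
            memcpy(moved, d, sizeof(*moved));

            /* the old copy would now be freed, but nothing that points
             * at it has been updated */
            printf("object moved from %p to %p\n", (void *)d, (void *)moved);
            printf("inode.alias  still %p\n", (void *)inode.alias);
            printf("child.parent still %p\n", (void *)child.parent);

            free(d);
            free(moved);
            return 0;
    }

Updating inode.alias and child.parent safely would require finding every
such reference and gaining exclusive access to all of them at once,
which is the intractable part.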