Date: Sat, 22 Oct 2016 10:00:07 +1100
From: Dave Chinner
To: Shaohua Li
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Kernel-team@fb.com, viro@zeniv.linux.org.uk
Subject: Re: [RFC] put more pressure on proc/sysfs slab shrink
Message-ID: <20161021230007.GV23194@dastard>

On Fri, Oct 21, 2016 at 01:35:14PM -0700, Shaohua Li wrote:
> In our systems, the proc/sysfs inode/dentry caches sometimes use more
> than 1GB of memory even when memory pressure is high. Since proc/sysfs
> are in-memory filesystems, rebuilding the cache is fast. There is no
> point in proc/sysfs and on-disk filesystems getting equal pressure
> during slab shrink.
>
> One idea is to discard the proc/sysfs inode/dentry cache right after
> the proc/sysfs file is closed. But discarding makes the next open of a
> proc/sysfs file slower: 20x slower in my test when multiple
> applications are accessing proc files. This patch doesn't go that far.
> Instead, it just puts more pressure on shrinking the proc/sysfs slabs.
>
> Signed-off-by: Shaohua Li
> ---
>  fs/kernfs/mount.c | 2 ++
>  fs/proc/inode.c   | 2 ++
>  2 files changed, 4 insertions(+)
>
> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> index d5b149a..5b4e747 100644
> --- a/fs/kernfs/mount.c
> +++ b/fs/kernfs/mount.c
> @@ -161,6 +161,8 @@ static int kernfs_fill_super(struct super_block *sb, unsigned long magic)
>  	sb->s_xattr = kernfs_xattr_handlers;
>  	sb->s_time_gran = 1;
>
> +	sb->s_shrink.seeks = 1;
> +	sb->s_shrink.batch = 0;

This sort of change needs a comment explaining why the values are being
overridden. Otherwise the next person who comes along to modify the
shrinker code won't have a clue why these magic numbers exist.

Also, I don't think s_shrink.batch = 0 does what you think it does. The
superblock's default batch size of 1024 is more efficient than setting
sb->s_shrink.batch = 0, because a batch size of zero makes the shrinker
fall back to SHRINK_BATCH:

#define SHRINK_BATCH 128

i.e. it does less work per batch and so has more overhead....

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com
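
For reference, a minimal sketch of the defaults and the fallback
described above, paraphrased from the 4.8-era fs/super.c and
mm/vmscan.c; exact context and placement may differ between kernel
versions:

	/* fs/super.c, alloc_super(): the superblock shrinker defaults
	 * that the patch overrides. DEFAULT_SEEKS is 2. */
	s->s_shrink.seeks = DEFAULT_SEEKS;
	s->s_shrink.batch = 1024;	/* large batches, low per-call overhead */

	/* mm/vmscan.c, do_shrink_slab(): a lower "seeks" value means more
	 * pressure; seeks = 1 asks for roughly twice the scanning of
	 * DEFAULT_SEEKS. */
	delta = (4 * nr_scanned) / shrinker->seeks;
	delta *= freeable;

	/* mm/vmscan.c, do_shrink_slab(): a batch size of 0 is not
	 * "unlimited"; it selects the small generic default instead. */
	long batch_size = shrinker->batch ? shrinker->batch
					  : SHRINK_BATCH;	/* 128 */

In other words, sb->s_shrink.batch = 0 swaps the superblock's tuned
1024-object batch for the generic 128-object one, so the shrinker
callbacks are invoked roughly eight times as often to scan the same
number of objects.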