Date: Tue, 9 Oct 2018 15:15:56 -0700
From: Andrew Morton
To: Johannes Weiner
Cc: Rik van Riel, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
        kernel-team@fb.com
Subject: Re: [PATCH 4/4] mm: zero-seek shrinkers
Message-Id: <20181009151556.5b0a3c9ae270b7551b3d12e6@linux-foundation.org>
In-Reply-To: <20181009184732.762-5-hannes@cmpxchg.org>
References: <20181009184732.762-1-hannes@cmpxchg.org>
        <20181009184732.762-5-hannes@cmpxchg.org>

On Tue, 9 Oct 2018 14:47:33 -0400 Johannes Weiner wrote:

> The page cache and most shrinkable slab caches hold data that has been
> read from disk, but there are some caches that only cache CPU work,
> such as the dentry and inode caches of procfs and sysfs, as well as
> the subset of radix tree nodes that track non-resident page cache.
>
> Currently, all these are shrunk at the same rate: using DEFAULT_SEEKS
> for the shrinker's seeks setting tells the reclaim algorithm that for
> every two page cache pages scanned it should scan one slab object.
>
> This is a bogus setting. A virtual inode that required no IO to create
> is not twice as valuable as a page cache page; shadow cache entries
> with eviction distances beyond the size of memory aren't either.
>
> In most cases, the behavior in practice is still fine. Such virtual
> caches don't tend to grow and assert themselves aggressively, and
> usually get picked up before they cause problems. But there are
> scenarios where that's not true.
>
> Our database workloads suffer from two of those. For one, their file
> workingset is several times bigger than available memory, which has
> the kernel aggressively create shadow page cache entries for the
> non-resident parts of it. The workingset code does tell the VM that
> most of these are expendable, but the VM ends up balancing them 2:1 to
> cache pages as per the seeks setting. This is a huge waste of memory.
>
> These workloads also deal with tens of thousands of open files and use
> /proc for introspection, which ends up growing the proc_inode_cache to
> absurdly large sizes - again at the cost of valuable cache space,
> which isn't a reasonable trade-off, given that proc inodes can be
> re-created without involving the disk.
>
> This patch implements a "zero-seek" setting for shrinkers that results
> in a target ratio of 0:1 between their objects and IO-backed
> caches. This allows such virtual caches to grow when memory is
> available (they do cache/avoid CPU work after all), but effectively
> disables them as soon as IO-backed objects are under pressure.
>
> It then switches the shrinkers for procfs and sysfs metadata, as well
> as excess page cache shadow nodes, to the new zero-seek setting.

Seems sane, but I'm somewhat worried about unexpected effects on other
workloads.  So I think I'll hold this over for 4.20.  Or shouldn't I?
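
For reference, a minimal userspace sketch of the scan-target arithmetic
the changelog describes (illustrative only, not the patch itself: the
helper name scan_delta and the zero-seek rule of scanning freeable / 2
objects are assumptions made for this example):

#include <stdio.h>

#define DEFAULT_SEEKS	2	/* the stock shrinker cost referenced above */

/*
 * Model of how many objects a shrinker is asked to scan at a given
 * reclaim priority (a lower priority number means more pressure).
 * The seeks != 0 branch mirrors the "one slab object per two page
 * cache pages" behaviour criticized in the changelog; the seeks == 0
 * branch models the proposed zero-seek behaviour of trimming the
 * cache hard as soon as there is any reclaim pressure.
 */
static unsigned long scan_delta(unsigned long freeable, int priority, int seeks)
{
	if (seeks)
		return (freeable >> priority) * 4 / seeks;
	return freeable / 2;	/* assumed zero-seek rule */
}

int main(void)
{
	unsigned long freeable = 1UL << 20;	/* e.g. shadow nodes or proc inodes */
	int prio;

	for (prio = 12; prio >= 0; prio -= 4)
		printf("priority %2d: seeks=2 scans %8lu, seeks=0 scans %8lu\n",
		       prio, scan_delta(freeable, prio, DEFAULT_SEEKS),
		       scan_delta(freeable, prio, 0));
	return 0;
}

With DEFAULT_SEEKS the scan target stays small until pressure is high,
so these virtual caches keep being balanced against IO-backed pages;
with a zero-seek setting they are cut back aggressively as soon as the
shrinker runs at all, which matches the 0:1 target ratio described in
the changelog.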