Date: Tue, 24 Nov 2015 15:02:27 -0800
From: Andrew Morton
To: Vladimir Davydov
Cc: Johannes Weiner, Michal Hocko, Vlastimil Babka, Mel Gorman, Dave Chinner
Subject: Re: [PATCH] vmscan: fix slab vs lru balance
Message-Id: <20151124150227.78c9e39b789f593c5216471e@linux-foundation.org>
In-Reply-To: <1448369241-26593-1-git-send-email-vdavydov@virtuozzo.com>
References: <1448369241-26593-1-git-send-email-vdavydov@virtuozzo.com>

On Tue, 24 Nov 2015 15:47:21 +0300 Vladimir Davydov wrote:

> The comment above shrink_slab() states that the portion of kmem objects
> scanned by it equals the portion of lru pages scanned by shrink_zone,
> divided by shrinker->seeks.
>
> shrinker->seeks is supposed to equal the number of disk seeks required
> to recreate an object. It is usually set to DEFAULT_SEEKS (2), which is
> quite logical, because most kmem objects (e.g. a dentry or an inode)
> require random IO to reread (seek to read and seek back).
>
> One would therefore expect dcache to be scanned half as intensively as
> page cache, which sounds sane, as dentries are generally more costly to
> recreate.
>
> However, the formula actually used to distribute memory pressure
> between slab and lru looks as follows (see do_shrink_slab):
>
>                                  lru_scanned
>   objs_to_scan = objs_total * ----------------- * 4 / shrinker->seeks
>                                lru_reclaimable
>
> That is, dcache, like most other slab caches, is scanned twice as
> aggressively as page cache.
>
> Fix this by dropping the '4' from the equation above.

oh geeze.  Who wrote that crap?

commit c3f4656118a78c1c294e0b4d338ac946265a822b
Author: Andrew Morton
Date:   Mon Dec 29 23:48:44 2003 -0800

    [PATCH] shrink_slab acounts for seeks incorrectly

    wli points out that shrink_slab inverts the sense of shrinker->seeks:
    those caches which require more seeks to reestablish an object are
    shrunk harder.  That's wrong - they should be shrunk less.

    So fix that up, but scaling the result so that the patch is actually
    a no-op at this time, because all caches use DEFAULT_SEEKS (2).

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b859482..f2da3c9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -154,7 +154,7 @@ static int shrink_slab(long scanned, unsigned int gfp_mask)
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
 
-		delta = scanned * shrinker->seeks;
+		delta = 4 * (scanned / shrinker->seeks);
 		delta *= (*shrinker->shrinker)(0, gfp_mask);
 		do_div(delta, pages + 1);
 		shrinker->nr += delta;

What a pathetic changelog.

The current code may be good, it may be bad, but I'm reluctant to change
it without a solid demonstration that the result is overall superior.
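
For concreteness, here is a small standalone model of the scan-pressure
formula quoted above, with and without the factor of 4. This is only a
sketch, not the kernel's do_shrink_slab(); the function names and sample
numbers are hypothetical, and the "+ 1" in the divisor merely mirrors the
do_div(delta, pages + 1) guard visible in the quoted hunk. With
DEFAULT_SEEKS (2), the current formula scans slab at twice the lru rate,
while dropping the 4 would scan it at half the lru rate, as the
shrink_slab comment promises.

#include <stdio.h>

#define DEFAULT_SEEKS 2

/* Current behaviour: the factor of 4 is applied. */
static unsigned long long scan_current(unsigned long long objs_total,
				       unsigned long long lru_scanned,
				       unsigned long long lru_reclaimable,
				       int seeks)
{
	/* objs_total * (lru_scanned / lru_reclaimable) * 4 / seeks */
	return objs_total * lru_scanned * 4 / seeks / (lru_reclaimable + 1);
}

/* Proposed behaviour: the factor of 4 is dropped. */
static unsigned long long scan_proposed(unsigned long long objs_total,
					unsigned long long lru_scanned,
					unsigned long long lru_reclaimable,
					int seeks)
{
	return objs_total * lru_scanned / seeks / (lru_reclaimable + 1);
}

int main(void)
{
	/* Hypothetical reclaim pass: 1000 of 10000 lru pages scanned (10%)
	 * against a slab cache of 10000 objects using DEFAULT_SEEKS.
	 */
	unsigned long long objs = 10000, scanned = 1000, reclaimable = 10000;
	unsigned long long cur = scan_current(objs, scanned, reclaimable,
					      DEFAULT_SEEKS);
	unsigned long long prop = scan_proposed(objs, scanned, reclaimable,
						DEFAULT_SEEKS);

	printf("lru scanned:   %.1f%% of reclaimable pages\n",
	       100.0 * scanned / reclaimable);
	printf("slab current:  %llu objs (%.1f%%)\n", cur, 100.0 * cur / objs);
	printf("slab proposed: %llu objs (%.1f%%)\n", prop, 100.0 * prop / objs);
	return 0;
}

Running this prints roughly 20.0% for the current formula and 5.0% for the
proposed one against the 10.0% lru scan rate, which is the two-times-more
vs. two-times-less imbalance the changelog describes.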