Date: Mon, 28 Jan 2019 11:54:24 -0800
From: Andrew Morton
To: Rik van Riel
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel-team@fb.com,
 Johannes Weiner, Chris Mason, Roman Gushchin, Michal Hocko
Subject: Re: [PATCH] mm,slab,vmscan: accumulate gradual pressure on small slabs
Message-Id: <20190128115424.df3f4647023e9e43e75afe67@linux-foundation.org>
In-Reply-To: <20190128143535.7767c397@imladris.surriel.com>
References: <20190128143535.7767c397@imladris.surriel.com>
X-Mailer: Sylpheed 3.6.0 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 28 Jan 2019 14:35:35 -0500 Rik van Riel wrote:

> There are a few issues with the way the number of slab objects to
> scan is calculated in do_shrink_slab.
> First, for zero-seek slabs,
> we could leave the last object around forever. That could result
> in pinning a dying cgroup into memory, instead of reclaiming it.
> The fix for that is trivial.
>
> Secondly, small slabs receive much more pressure, relative to their
> size, than larger slabs, due to "rounding up" the minimum number of
> scanned objects to batch_size.
>
> We can keep the pressure on all slabs equal relative to their size
> by accumulating the scan pressure on small slabs over time, resulting
> in sometimes scanning an object, instead of always scanning several.
>
> This results in lower system CPU use, and a lower major fault rate,
> as actively used entries from smaller caches get reclaimed less
> aggressively, and need to be reloaded/recreated less often.
>
> Fixes: 4b85afbdacd2 ("mm: zero-seek shrinkers")
> Fixes: 172b06c32b94 ("mm: slowly shrink slabs with a relatively small number of objects")
> Cc: Johannes Weiner
> Cc: Chris Mason
> Cc: Roman Gushchin
> Cc: kernel-team@fb.com
> Tested-by: Chris Mason

I added your Signed-off-by:

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -488,18 +488,28 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  		 * them aggressively under memory pressure to keep
>  		 * them from causing refetches in the IO caches.
>  		 */
> -		delta = freeable / 2;
> +		delta = (freeable + 1)/ 2;
>  	}
>
>  	/*
>  	 * Make sure we apply some minimal pressure on default priority
> -	 * even on small cgroups. Stale objects are not only consuming memory
> +	 * even on small cgroups, by accumulating pressure across multiple
> +	 * slab shrinker runs. Stale objects are not only consuming memory
>  	 * by themselves, but can also hold a reference to a dying cgroup,
>  	 * preventing it from being reclaimed. A dying cgroup with all
>  	 * corresponding structures like per-cpu stats and kmem caches
>  	 * can be really big, so it may lead to a significant waste of memory.
>  	 */
> -	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
> +	if (!delta) {
> +		shrinker->small_scan += freeable;
> +
> +		delta = shrinker->small_scan >> priority;
> +		shrinker->small_scan -= delta << priority;
> +
> +		delta *= 4;
> +		do_div(delta, shrinker->seeks);

What prevents shrinker->small_scan from over- or underflowing over time?

> +	}
>
>  	total_scan += delta;
>  	if (total_scan < 0) {

I'll add this:

whitespace fixes, per Roman

--- a/mm/vmscan.c~mmslabvmscan-accumulate-gradual-pressure-on-small-slabs-fix
+++ a/mm/vmscan.c
@@ -488,7 +488,7 @@ static unsigned long do_shrink_slab(stru
 		 * them aggressively under memory pressure to keep
 		 * them from causing refetches in the IO caches.
 		 */
-		delta = (freeable + 1)/ 2;
+		delta = (freeable + 1) / 2;
 	}

@@ -508,7 +508,6 @@ static unsigned long do_shrink_slab(stru

 		delta *= 4;
 		do_div(delta, shrinker->seeks);
-
 	}

 	total_scan += delta;
_