Date: Wed, 6 Jan 2021 15:56:02 -0800
From: Andrew Morton
To: Sudarshan Rajagopalan
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vladimir Davydov, Dave Chinner
Subject: Re: [PATCH] mm: vmscan: support complete shrinker reclaim
Message-Id: <20210106155602.6ce48dfe88ca7b94986b329b@linux-foundation.org>
In-Reply-To: <2d1f1dbb7e018ad02a9e7af36a8c86397a1598a7.1609892546.git.sudaraja@codeaurora.org>
References: <2d1f1dbb7e018ad02a9e7af36a8c86397a1598a7.1609892546.git.sudaraja@codeaurora.org>
(cc's added)

On Tue, 5 Jan 2021 16:43:38 -0800 Sudarshan Rajagopalan wrote:

> Ensure that shrinkers are given the option to completely drop
> their caches even when their caches are smaller than the batch size.
> This change helps improve memory headroom by ensuring that under
> significant memory pressure shrinkers can drop all of their caches.
> This change only attempts to more aggressively call the shrinkers
> during background memory reclaim, in order to avoid hurting the
> performance of direct memory reclaim.
>
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -424,6 +424,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> 	long batch_size = shrinker->batch ? shrinker->batch
> 					  : SHRINK_BATCH;
> 	long scanned = 0, next_deferred;
> +	long min_cache_size = batch_size;
> +
> +	if (current_is_kswapd())
> +		min_cache_size = 0;
>
> 	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> 		nid = 0;
> @@ -503,7 +507,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> 	 * scanning at high prio and therefore should try to reclaim as much as
> 	 * possible.
> 	 */
> -	while (total_scan >= batch_size ||
> +	while (total_scan > min_cache_size ||
> 	       total_scan >= freeable) {
> 		unsigned long ret;
> 		unsigned long nr_to_scan = min(batch_size, total_scan);

I don't really see the need to exclude direct reclaim from this fix.

And if we're leaving unscanned objects behind in this situation, the
current code simply isn't working as intended, and 0b1fb40a3b1 ("mm:
vmscan: shrink all slab objects if tight on memory") either failed to
achieve its objective or was later broken?

Vladimir, could you please take a look?
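
For illustration only (not part of the patch or the kernel tree), here is a
minimal standalone userspace sketch of the loop condition under discussion.
The helper pass_scanned() and the numbers in main() are assumptions made up
for this example; it only shows that a sub-batch total_scan is skipped when
min_cache_size equals batch_size, but is scanned when min_cache_size is 0,
i.e. the kswapd case in the patch:

#include <stdio.h>

#define SHRINK_BATCH 128

/*
 * Rough model of one do_shrink_slab() pass: how many objects would be
 * handed to the shrinker's scan callback, assuming the shrinker frees
 * everything it is asked to scan.  Illustrative only, not kernel code.
 */
static long pass_scanned(long total_scan, long freeable,
			 long batch_size, long min_cache_size)
{
	long scanned = 0;

	while (total_scan > min_cache_size || total_scan >= freeable) {
		long nr_to_scan = total_scan < batch_size ? total_scan
							  : batch_size;

		if (nr_to_scan == 0)	/* nothing left to ask for */
			break;
		scanned += nr_to_scan;
		total_scan -= nr_to_scan;
	}
	return scanned;
}

int main(void)
{
	long freeable = 1000, total_scan = 50;	/* made-up example numbers */

	/* min_cache_size == batch_size: below-batch work is not scanned */
	printf("min_cache_size=batch: scanned %ld of %ld\n",
	       pass_scanned(total_scan, freeable, SHRINK_BATCH, SHRINK_BATCH),
	       total_scan);

	/* min_cache_size == 0 (kswapd in the patch): all 50 are scanned */
	printf("min_cache_size=0:     scanned %ld of %ld\n",
	       pass_scanned(total_scan, freeable, SHRINK_BATCH, 0),
	       total_scan);
	return 0;
}

The first call reports 0 of 50 scanned (in the real function that remainder
is folded into next_deferred for a later call), the second reports 50 of 50.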