Date: Fri, 15 Nov 2019 08:11:50 +1100
From: Dave Chinner <david@fromorbit.com>
To: Brian Foster
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 12/28] shrinker: defer work only to kswapd
Message-ID: <20191114211150.GE4614@dread.disaster.area>
References: <20191031234618.15403-1-david@fromorbit.com>
 <20191031234618.15403-13-david@fromorbit.com>
 <20191104152954.GC10665@bfoster>
In-Reply-To: <20191104152954.GC10665@bfoster>
On Mon, Nov 04, 2019 at 10:29:54AM -0500, Brian Foster wrote:
> On Fri, Nov 01, 2019 at 10:46:02AM +1100, Dave Chinner wrote:
> > @@ -601,10 +605,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  	 * scanning at high prio and therefore should try to reclaim as much as
> >  	 * possible.
> >  	 */
> > -	while (total_scan >= batch_size ||
> > -	       total_scan >= freeable_objects) {
> > +	while (scan_count >= batch_size ||
> > +	       scan_count >= freeable_objects) {
> >  		unsigned long ret;
> > -		unsigned long nr_to_scan = min(batch_size, total_scan);
> > +		unsigned long nr_to_scan = min_t(long, batch_size, scan_count);
> >  
> >  		shrinkctl->nr_to_scan = nr_to_scan;
> >  		shrinkctl->nr_scanned = nr_to_scan;
> > @@ -614,29 +618,29 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  		freed += ret;
> >  
> >  		count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned);
> > -		total_scan -= shrinkctl->nr_scanned;
> > -		scanned += shrinkctl->nr_scanned;
> > +		scan_count -= shrinkctl->nr_scanned;
> > +		scanned_objects += shrinkctl->nr_scanned;
> >  
> >  		cond_resched();
> >  	}
> > -
> >  done:
> > -	if (next_deferred >= scanned)
> > -		next_deferred -= scanned;
> > +	if (deferred_count)
> > +		next_deferred = deferred_count - scanned_objects;
> >  	else
> > -		next_deferred = 0;
> > +		next_deferred = scan_count;
> 
> Hmm.. so if there was no deferred count on this cycle, we set
> next_deferred to whatever is left from scan_count and add that back
> into the shrinker struct below. If there was a pending deferred count
> on this cycle, we subtract what we scanned from that and add that
> value back. But what happens to the remaining scan_count in the latter
> case? Is it lost, or am I missing something?

If deferred_count is not zero, then it is kswapd that is running. It
does the deferred work, and if it doesn't make progress then adding its
scan count to the deferred work doesn't matter. That's because it will
come back with an increased priority in a short while and try to scan
more of the deferred count plus its larger scan count.

IOWs, if we defer kswapd's unused scan count, we effectively increase
the pressure as the priority goes up, potentially making the deferred
count increase out of control. i.e. kswapd can make progress and free
items, but the result is that it increases the deferred scan count
rather than reducing it. This leads to excessive reclaim of the slab
caches, and kswapd can trash the caches long after the memory pressure
has gone away...

> For example, suppose we start this cycle with a large scan_count and
> ->scan_objects() returned SHRINK_STOP before doing much work. In that
> scenario, it looks like whether ->nr_deferred is 0 or not is the only
> thing that determines whether we defer the entire remaining scan_count
> or just what is left from the previous ->nr_deferred. The existing
> code appears to consistently factor in what is left from the current
> scan with the previous deferred count. Hm?

If kswapd doesn't have any deferred work, then it's largely no
different in behaviour to direct reclaim. And if it has no deferred
work, then the shrinker is not getting stopped early in direct reclaim,
so it's unlikely that kswapd is going to get stopped early, either...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com