From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 15/28] mm: back off direct reclaim on excessive shrinker deferral
Date: Fri, 1 Nov 2019 10:46:05 +1100
Message-Id: <20191031234618.15403-16-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>
From: Dave Chinner

When the majority of possible shrinker reclaim work is deferred by
the shrinkers (e.g. due to GFP_NOFS context), and more work is
deferred than LRU pages were scanned, back off reclaim if there is a
large amount of IO in progress.

This tends to occur with inode-cache-heavy workloads that generate
little page cache or application memory pressure, on filesystems
like XFS. Such workloads involve lots of IO, so if we are seeing
device congestion it is indicative of memory reclaim running up
against an IO throughput limitation. In this situation we need to
throttle direct reclaim, as we need to wait for kswapd to get some
of the deferred work done.

However, if there is no device congestion, then the system is
keeping up with both the workload and memory reclaim, and there is
no need to throttle. Hence we should only back off scanning for a
bit when we see this condition and block device congestion is
present.

Signed-off-by: Dave Chinner
---
 include/linux/swap.h |  2 ++
 mm/vmscan.c          | 30 +++++++++++++++++++++++++++++-
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 72b855fe20b0..da0913e14bb9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -131,6 +131,8 @@ union swap_header {
  */
 struct reclaim_state {
 	unsigned long	reclaimed_pages;	/* pages freed by shrinkers */
+	unsigned long	scanned_objects;	/* quantity of work done */
+	unsigned long	deferred_objects;	/* work that wasn't done */
 };
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 967e3d3c7748..13c11e10c9c5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		deferred_count = min(deferred_count, freeable_objects * 2);
 
 	}
+	if (current->reclaim_state)
+		current->reclaim_state->scanned_objects += scanned_objects;
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
@@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
 	 * defer the work to a context that can scan the cache.
 	 */
-	if (shrinkctl->defer_work)
+	if (shrinkctl->defer_work) {
+		if (current->reclaim_state)
+			current->reclaim_state->deferred_objects += scan_count;
 		goto done;
+	}
 
 	/*
 	 * Normally, we should not scan less than batch_size objects in one
@@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 		if (reclaim_state) {
 			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
+
+			/*
+			 * If we are deferring more work than we are actually
+			 * doing in the shrinkers, and we are scanning more
+			 * objects than we are pages, then we have a large amount
+			 * of slab caches we are deferring work to kswapd for.
+			 * We better back off here for a while, otherwise
+			 * we risk priority windup, swap storms and OOM kills
+			 * once we empty the page lists but still can't make
+			 * progress on the shrinker memory.
+			 *
+			 * kswapd won't ever defer work as it's run under a
+			 * GFP_KERNEL context and can always do work.
+			 */
+			if ((reclaim_state->deferred_objects >
+					sc->nr_scanned - nr_scanned) &&
+			    (reclaim_state->deferred_objects >
+					reclaim_state->scanned_objects)) {
+				wait_iff_congested(BLK_RW_ASYNC, HZ/50);
+			}
+
 			reclaim_state->reclaimed_pages = 0;
+			reclaim_state->deferred_objects = 0;
+			reclaim_state->scanned_objects = 0;
 		}
 
 		/* Record the subtree's reclaim efficiency */
-- 
2.24.0.rc0
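
For reference, the back-off condition the shrink_node() hunk adds can
be modelled in isolation. The standalone C sketch below is hypothetical:
struct reclaim_sample and should_back_off() are invented names for
illustration only. In the kernel the counters live in struct
reclaim_state, and the actual throttle is wait_iff_congested(BLK_RW_ASYNC,
HZ/50), which sleeps only if the backing device is congested.

/*
 * Hypothetical userspace model of the direct-reclaim back-off
 * heuristic in the patch above. Names and types are simplified;
 * the congestion-aware sleep is omitted.
 */
#include <stdbool.h>
#include <stdio.h>

struct reclaim_sample {
	unsigned long deferred_objects;	/* shrinker work pushed to kswapd */
	unsigned long scanned_objects;	/* shrinker work actually done */
	unsigned long pages_scanned;	/* LRU pages scanned this pass */
};

/*
 * Throttle only when both hold:
 *  1. more shrinker work was deferred than LRU pages were scanned, and
 *  2. more shrinker work was deferred than was actually performed.
 * Direct reclaim then waits briefly (and only if the device is
 * congested) so kswapd can chew through the deferred work.
 */
static bool should_back_off(const struct reclaim_sample *rs)
{
	return rs->deferred_objects > rs->pages_scanned &&
	       rs->deferred_objects > rs->scanned_objects;
}

int main(void)
{
	/* GFP_NOFS-heavy pass: almost all shrinker work was deferred. */
	struct reclaim_sample noisy = {
		.deferred_objects = 100000,
		.scanned_objects  = 2000,
		.pages_scanned    = 500,
	};
	/* Healthy pass: shrinkers kept up with the scan target. */
	struct reclaim_sample healthy = {
		.deferred_objects = 100,
		.scanned_objects  = 90000,
		.pages_scanned    = 40000,
	};

	printf("noisy:   back off = %d\n", should_back_off(&noisy));   /* 1 */
	printf("healthy: back off = %d\n", should_back_off(&healthy)); /* 0 */
	return 0;
}

Note that both comparisons are against deferred_objects: a pass that
defers little work never throttles, no matter how much was scanned,
which matches the commit message's "no congestion, no throttle" intent.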