From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Hocko, Trevor Cordes,
 Minchan Kim, Hillf Danton, Mel Gorman, Johannes Weiner, Andrew Morton,
 Linus Torvalds
Subject: [PATCH 4.10 051/167] mm, vmscan: consider eligible zones in get_scan_count
Date: Fri, 10 Mar 2017 10:08:14 +0100
Message-Id: <20170310084000.177494462@linuxfoundation.org>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20170310083956.767605269@linuxfoundation.org>
References: <20170310083956.767605269@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Michal Hocko

commit 71ab6cfe88dcf9f6e6a65eb85cf2bda20a257682 upstream.

get_scan_count() considers the whole node LRU size when

 - doing SCAN_FILE due to many page cache inactive pages
 - calculating the number of pages to scan

In both cases this can lead to unexpected behavior, especially on 32-bit
systems, where lowmem memory pressure is very common.

A large highmem zone can easily distort the SCAN_FILE heuristic: there
might be only a few file pages from the eligible zones on the node LRU,
yet we would still enforce file LRU scanning, which can lead to thrashing
while we could still scan anonymous pages instead.

The latter use of lruvec_lru_size can be problematic as well, especially
when there are not many pages from the eligible zones. We would have to
skip over many pages to find anything to reclaim, but shrink_node_memcg
would only reduce the remaining number to scan by at most
SWAP_CLUSTER_MAX. Therefore we can end up going over a large LRU many
times without actually having a chance to reclaim much, if anything at
all. The closer a lowmem zone is to running out of memory, the worse the
problem becomes.

Fix this by filtering out all the ineligible zones when calculating the
LRU size for both paths, considering only zones up to sc->reclaim_idx.

The patch would need to be tweaked a bit to apply to 4.10 and older, but
I will do that as soon as it hits the Linus tree in the next merge
window.

Link: http://lkml.kernel.org/r/20170117103702.28542-3-mhocko@kernel.org
Fixes: b2e18757f2c9 ("mm, vmscan: begin reclaiming pages on a per-node basis")
Signed-off-by: Michal Hocko
Tested-by: Trevor Cordes
Acked-by: Minchan Kim
Acked-by: Hillf Danton
Acked-by: Mel Gorman
Acked-by: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/vmscan.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2205,7 +2205,7 @@ static void get_scan_count(struct lruvec
 	 * system is under heavy pressure.
 	 */
 	if (!inactive_list_is_low(lruvec, true, sc) &&
-	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES) >> sc->priority) {
+	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2272,7 +2272,7 @@ out:
 			unsigned long size;
 			unsigned long scan;
 
-			size = lruvec_lru_size(lruvec, lru, MAX_NR_ZONES);
+			size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
 			scan = size >> sc->priority;
 
 			if (!scan && pass && force_scan)
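
[Editorial illustration, not part of the patch.] To see why a node-wide
LRU size misleads the scan target, here is a minimal user-space sketch of
the eligible-zone filtering idea. The zone layout, the page counts, and
the helper name lru_size_eligible are invented for illustration; only the
size >> priority scan computation mirrors what get_scan_count() does.

	#include <stdio.h>

	#define MAX_NR_ZONES 4          /* e.g. DMA, NORMAL, HIGHMEM, MOVABLE */

	/* Hypothetical per-zone page counts on one LRU list of a node. */
	static unsigned long lru_zone_size[MAX_NR_ZONES] = {
		100,      /* ZONE_DMA */
		2000,     /* ZONE_NORMAL */
		500000,   /* ZONE_HIGHMEM: huge, but ineligible for lowmem requests */
		0,
	};

	/*
	 * Sum only pages in zones with index <= zone_idx, conceptually what
	 * the patched lruvec_lru_size() does when it honors sc->reclaim_idx
	 * instead of always counting up to MAX_NR_ZONES.
	 */
	static unsigned long lru_size_eligible(int zone_idx)
	{
		unsigned long size = 0;

		for (int zid = 0; zid <= zone_idx; zid++)
			size += lru_zone_size[zid];
		return size;
	}

	int main(void)
	{
		int reclaim_idx = 1;    /* lowmem request: HIGHMEM is ineligible */
		int priority = 12;      /* DEF_PRIORITY */

		unsigned long whole_node = lru_size_eligible(MAX_NR_ZONES - 1);
		unsigned long eligible = lru_size_eligible(reclaim_idx);

		/* Before the patch: target inflated by the huge highmem zone. */
		printf("whole-node size %lu -> scan %lu\n",
		       whole_node, whole_node >> priority);
		/* After the patch: target reflects what is actually reclaimable. */
		printf("eligible size   %lu -> scan %lu\n",
		       eligible, eligible >> priority);
		return 0;
	}

With these made-up numbers, the node-wide size (502100 pages) yields a
scan target of 122 pages per round over an LRU that holds almost nothing
eligible, which is exactly the wasted-scanning pattern the changelog
describes, while the eligible size (2100 pages) shifts down to 0 and lets
get_scan_count() fall through to its force_scan handling instead.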