From: Sasha Levin
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Roman Gushchin, Johannes Weiner, Michal Hocko, Tejun Heo, Rik van Riel, Konstantin Khlebnikov, Matthew Wilcox, Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH AUTOSEL 4.14 03/30] mm: don't miss the last page because of round-off error
Date: Sun, 4 Nov 2018 08:52:58 -0500
Message-Id: <20181104135325.88524-3-sashal@kernel.org>
In-Reply-To: <20181104135325.88524-1-sashal@kernel.org>
References: <20181104135325.88524-1-sashal@kernel.org>
List-ID: <linux-kernel.vger.kernel.org>

From: Roman Gushchin

[ Upstream commit 68600f623d69da428c6163275f97ca126e1a8ec5 ]

I've noticed that dying memory cgroups are often pinned in memory by a
single pagecache page. Even under moderate memory pressure they
sometimes stayed in such a state for a long time. That looked strange.

My investigation showed that the problem is caused by applying the LRU
pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator, if the initial
scan size is 1, the result is always 0. This means the last page is not
scanned and has no chance to be reclaimed.

Fix this by rounding up the result of the division. In practice this
change significantly improves the speed of dying cgroup reclaim.
[guro@fb.com: prevent double calculation of DIV64_U64_ROUND_UP() arguments]
Link: http://lkml.kernel.org/r/20180829213311.GA13501@castle
Link: http://lkml.kernel.org/r/20180827162621.30187-3-guro@fb.com
Signed-off-by: Roman Gushchin
Reviewed-by: Andrew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Tejun Heo
Cc: Rik van Riel
Cc: Konstantin Khlebnikov
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/math64.h | 3 +++
 mm/vmscan.c            | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 082de345b73c..3a7a14062668 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -254,4 +254,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
+
 #endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e1931e..9734e62654fa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2367,9 +2367,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON:
-- 
2.17.1