Subject: [RFC][PATCH 10/13] mm/vmscan: add helper for querying ability to age anonymous pages
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com, osalvador@suse.de
From: Dave Hansen
Date: Mon, 25 Jan 2021 16:34:31 -0800
References: <20210126003411.2AC51464@viggo.jf.intel.com>
In-Reply-To:
<20210126003411.2AC51464@viggo.jf.intel.com>
Message-Id: <20210126003431.19BDC239@viggo.jf.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen

Anonymous pages are kept on their own LRU(s).  These lists could
theoretically always be scanned and maintained.  But, without swap,
there is currently nothing the kernel can *do* with the results of a
scanned, sorted LRU for anonymous pages.

A '!total_swap_pages' test currently serves as a valid check as to
whether anonymous LRUs should be maintained.  However, another method
will be added shortly: page demotion.

Abstract out the 'total_swap_pages' checks into a helper, give it a
logically significant name, and check for the possibility of page
demotion.

Signed-off-by: Dave Hansen
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---

 b/mm/vmscan.c |   28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff -puN mm/vmscan.c~mm-vmscan-anon-can-be-aged mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-anon-can-be-aged	2021-01-25 16:23:17.044866690 -0800
+++ b/mm/vmscan.c	2021-01-25 16:23:17.053866690 -0800
@@ -2508,6 +2508,26 @@ out:
 	}
 }
 
+/*
+ * Anonymous LRU management is a waste if there is
+ * ultimately no way to reclaim the memory.
+ */
+bool anon_should_be_aged(struct lruvec *lruvec)
+{
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+
+	/* Aging the anon LRU is valuable if swap is present: */
+	if (total_swap_pages > 0)
+		return true;
+
+	/* Also valuable if anon pages can be demoted: */
+	if (next_demotion_node(pgdat->node_id) >= 0)
+		return true;
+
+	/* No way to reclaim anon pages.  Should not age anon LRUs: */
+	return false;
+}
+
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	unsigned long nr[NR_LRU_LISTS];
@@ -2617,7 +2637,8 @@ static void shrink_lruvec(struct lruvec
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+	if (anon_should_be_aged(lruvec) &&
+	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }
@@ -3446,10 +3467,11 @@ static void age_active_anon(struct pglis
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	if (!total_swap_pages)
+	lruvec = mem_cgroup_lruvec(NULL, pgdat);
+
+	if (!anon_should_be_aged(lruvec))
 		return;
 
-	lruvec = mem_cgroup_lruvec(NULL, pgdat);
 	if (!inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		return;
_