From: zhengjun.xing@linux.intel.com
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: ying.huang@intel.com, tim.c.chen@linux.intel.com, zhengjun.xing@linux.intel.com
Subject: [RFC] mm/vmscan.c: avoid possible long latency caused by too_many_isolated()
Date: Fri, 16 Apr 2021 02:35:36 +0000
Message-Id: <20210416023536.168632-1-zhengjun.xing@linux.intel.com>
From: Zhengjun Xing

On a system with very few file pages, it is easy to reproduce
"nr_isolated_file > nr_inactive_file", in which case too_many_isolated()
returns true and shrink_inactive_list() enters "msleep(100)", causing
long latency. The test case to reproduce it is very simple: allocate
many huge pages (close to the DRAM size), then free them, and repeat
the same operation many times. The issue reproduces in about 3 out of
10 runs. In the test, sc->gfp_mask is 0x342cca ("__GFP_IO" and
"__GFP_FS" are masked), so it is easier to enter "inactive >>= 3", and
then "isolated > inactive" easily becomes true.

So I propose setting a threshold on the total number of file pages, to
exempt systems with very few file pages from this check and bypass the
100ms sleep. It is hard to pick a perfect threshold, so "256" here is
just an example; more input on the value is needed.

Signed-off-by: Zhengjun Xing
---
 mm/vmscan.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 562e87cbd7a1..a1926463455c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -168,6 +168,7 @@ struct scan_control {
  * From 0 .. 200. Higher means more swappy.
  */
 int vm_swappiness = 60;
+int lru_list_threshold = SWAP_CLUSTER_MAX << 3;
 
 static void set_task_reclaim_state(struct task_struct *task,
 				   struct reclaim_state *rs)
@@ -1785,7 +1786,7 @@ int isolate_lru_page(struct page *page)
 static int too_many_isolated(struct pglist_data *pgdat, int file,
 		struct scan_control *sc)
 {
-	unsigned long inactive, isolated;
+	unsigned long inactive, isolated, active, nr_lru_pages;
 
 	if (current_is_kswapd())
 		return 0;
@@ -1796,11 +1797,13 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	if (file) {
 		inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
 		isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
+		active = node_page_state(pgdat, NR_ACTIVE_FILE);
 	} else {
 		inactive = node_page_state(pgdat, NR_INACTIVE_ANON);
 		isolated = node_page_state(pgdat, NR_ISOLATED_ANON);
+		active = node_page_state(pgdat, NR_ACTIVE_ANON);
 	}
-
+	nr_lru_pages = inactive + active;
 	/*
 	 * GFP_NOIO/GFP_NOFS callers are allowed to isolate more pages, so they
 	 * won't get blocked by normal direct-reclaimers, forming a circular
@@ -1809,6 +1812,10 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
 		inactive >>= 3;
 
+	if (isolated > inactive)
+		if (nr_lru_pages < lru_list_threshold)
+			return 0;
+
 	return isolated > inactive;
 }
-- 
2.17.1