From: "Huang, Ying"
To: Yosry Ahmed
Cc: Minchan Kim, Chris Li, Michal Hocko, Liu Shixin, Yu Zhao,
    Andrew Morton, Sachin Sant, Johannes Weiner, Kefeng Wang,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v10] mm: vmscan: try to reclaim swapcache pages if no swap space
In-Reply-To: (Yosry Ahmed's message of "Mon, 27 Nov 2023 19:27:36 -0800")
References: <87msv58068.fsf@yhuang6-desk2.ccr.corp.intel.com>
    <87h6l77wl5.fsf@yhuang6-desk2.ccr.corp.intel.com>
    <87bkbf7gz6.fsf@yhuang6-desk2.ccr.corp.intel.com>
    <87msuy5zuv.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 28 Nov 2023 12:03:49 +0800
Message-ID: <87fs0q5xsq.fsf@yhuang6-desk2.ccr.corp.intel.com>
Yosry Ahmed writes:

> On Mon, Nov 27, 2023 at 7:21 PM Huang, Ying wrote:
>>
>> Yosry Ahmed writes:
>>
>> > On Mon, Nov 27, 2023 at 1:32 PM Minchan Kim wrote:
>> >>
>> >> On Mon, Nov 27, 2023 at 12:22:59AM -0800, Chris Li wrote:
>> >> > On Mon, Nov 27, 2023 at 12:14 AM Huang, Ying wrote:
>> >> > > > I agree with Ying that anonymous pages typically have
>> >> > > > different page access patterns than file pages, so we might
>> >> > > > want to treat them differently to reclaim them effectively.
>> >> > > > One random idea:
>> >> > > > How about we put the anonymous pages in the swap cache on a
>> >> > > > different LRU than the rest of the anonymous pages? Then
>> >> > > > shrinking against those pages in the swap cache would be more
>> >> > > > effective. Instead of having [anon, file] LRUs, we would have
>> >> > > > [anon not in swap cache, anon in swap cache, file] LRUs.
>> >> > >
>> >> > > I don't think that it is necessary. The patch is only for a
>> >> > > special use case, where the swap device is used up while some
>> >> > > pages are still in the swap cache. The patch will hurt
>> >> > > performance, but it is used to avoid OOM only, not to improve
>> >> > > performance. Per my understanding, we will not use up swap
>> >> > > device space in most cases. This may be true for ZRAM, but
>> >> > > will we keep pages in the swap cache for long when we use ZRAM?
>> >> >
>> >> > I asked the question regarding how many pages can be freed by
>> >> > this patch in this email thread as well, but haven't got an
>> >> > answer from the author yet. That is one important aspect in
>> >> > evaluating how valuable that patch is.
>> >>
>> >> Exactly. Since the swap cache has a different lifetime than the
>> >> page cache, its pages are usually dropped when they are unmapped
>> >> (unless they are shared with others, but anon is usually
>> >> exclusive/private), so I wonder how much memory we can save.
>> >
>> > I think the point of this patch is not saving memory, but rather
>> > avoiding an OOM condition that will happen if we have no swap
>> > space left but some pages left in the swap cache. Of course, the
>> > OOM avoidance will come at the cost of extra work in reclaim to
>> > swap those pages out.
>> >
>> > The only case where I think this might be harmful is if there are
>> > plenty of pages to reclaim on the file LRU, and instead we opt to
>> > chase down the few swap cache pages. So perhaps we can add a check
>> > to only set sc->swapcache_only if the number of pages in the swap
>> > cache is more than the number of pages on the file LRU, or
>> > similar? Just make sure we don't chase the swapcache pages down if
>> > there's plenty to scan on the file LRU?
>>
>> The swap cache pages can be divided into 3 groups.
>>
>> - group 1: pages that have been written out and are at the tail of
>>   the inactive LRU, but have not been reclaimed yet.
>>
>> - group 2: pages that have been written out, but failed to be
>>   reclaimed (e.g., were accessed before reclaiming).
>>
>> - group 3: pages that have been swapped in, but were kept in the
>>   swap cache. These pages may be on the active LRU.
>>
>> The main target of the original patch should be group 1, and those
>> pages may be cheaper to reclaim than file pages.
>>
>> Group 2 is hard to reclaim if swap_count() isn't 0.
>>
>> Group 3 can be reclaimed in theory, but the overhead may be high,
>> and we may need to reclaim the swap entries instead of the pages if
>> the pages are hot. But we can start to reclaim the swap entries
>> before the swap space runs out.
>>
>> So, if we can count group 1, we may use that as an indicator to scan
>> anon pages. And we may add code to reclaim group 3 earlier.
>>
> My point was not that reclaiming the pages in the swap cache is more
> expensive than reclaiming the pages in the file LRU.
> In a lot of cases, as you point out, the pages in the swap cache can
> just be dropped, so they may be as cheap as or cheaper to reclaim
> than the pages in the file LRU.
>
> My point was that scanning the anon LRU when swap space is exhausted
> to get to the pages in the swap cache may be much more expensive,
> because there may be a lot of pages on the anon LRU that are not in
> the swap cache, and hence are not reclaimable, unlike pages in the
> file LRU, which should mostly be reclaimable.
>
> So what I am saying is that maybe we should not expend the effort of
> scanning the anon LRU in the swapcache_only case unless there aren't
> a lot of pages to reclaim on the file LRU (relatively). For example,
> if we have 100 pages in the swap cache out of 10000 pages on the
> anon LRU, and there are 10000 pages on the file LRU, it's probably
> not worth scanning the anon LRU.

Group 1 pages are at the tail of the anon inactive LRU, so the scan
overhead for them is low too. For example, if the number of group 1
pages is 100, we just need to scan 100 pages to reclaim them. We can
choose to stop scanning when the number of non-group-1 pages scanned
reaches some threshold.

--
Best Regards,
Huang, Ying