From: "Huang, Ying"
To: Yosry Ahmed
Cc: Minchan Kim, Chris Li, Michal Hocko, Liu Shixin, Yu Zhao, Andrew Morton, Sachin Sant, Johannes Weiner, Kefeng Wang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v10] mm: vmscan: try to reclaim swapcache pages if no swap space
In-Reply-To: (Yosry Ahmed's message of "Mon, 27 Nov 2023 13:56:26 -0800")
References: <87msv58068.fsf@yhuang6-desk2.ccr.corp.intel.com> <87h6l77wl5.fsf@yhuang6-desk2.ccr.corp.intel.com> <87bkbf7gz6.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 28 Nov 2023 11:19:20 +0800
Message-ID: <87msuy5zuv.fsf@yhuang6-desk2.ccr.corp.intel.com>
Yosry Ahmed writes:

> On Mon, Nov 27, 2023 at 1:32 PM Minchan Kim wrote:
>>
>> On Mon, Nov 27, 2023 at 12:22:59AM -0800, Chris Li wrote:
>> > On Mon, Nov 27, 2023 at 12:14 AM Huang, Ying wrote:
>> > > > I agree with Ying that anonymous pages typically have different page
>> > > > access patterns than file pages, so we might want to treat them
>> > > > differently to reclaim them effectively.
>> > > > One random idea:
>> > > > How about we put the anonymous pages in the swap cache on a different LRU
>> > > > than the rest of the anonymous pages? Then shrinking against those
>> > > > pages in the swap cache would be more effective. Instead of having
>> > > > [anon, file] LRUs, we would have [anon not in swap cache, anon in swap
>> > > > cache, file] LRUs.
>> > >
>> > > I don't think that is necessary. The patch is only for a special use
>> > > case, where the swap device is used up while some pages are in the swap
>> > > cache. The patch will hurt performance, but it is meant to avoid OOM
>> > > only, not to improve performance. Per my understanding, we will not use
>> > > up swap device space in most cases. That may not hold for ZRAM, but will
>> > > we keep pages in the swap cache for long when we use ZRAM?
>> >
>> > I asked in this email thread as well how many pages can be freed by this
>> > patch, but haven't gotten an answer from the author yet. That is one
>> > important aspect for evaluating how valuable the patch is.
>>
>> Exactly. Since the swap cache has a different lifetime from the page cache,
>> its entries are usually dropped when pages are unmapped (unless they are
>> shared with others, but anon pages are usually exclusive and private), so I
>> wonder how much memory we can save.
>
> I think the point of this patch is not saving memory, but rather
> avoiding an OOM condition that will happen if we have no swap space
> left but some pages left in the swap cache.
> Of course, the OOM
> avoidance will come at the cost of extra work in reclaim to swap those
> pages out.
>
> The only case where I think this might be harmful is if there are plenty
> of pages to reclaim on the file LRU, and instead we opt to chase down
> the few swap cache pages. So perhaps we can add a check to only set
> sc->swapcache_only if the number of pages in the swap cache is greater
> than the number of pages on the file LRU, or similar? Just to make sure
> we don't chase the swap cache pages down if there's plenty to scan on
> the file LRU?

The swap cache pages can be divided into three groups:

- group 1: pages that have been written out and are at the tail of the inactive LRU, but have not been reclaimed yet

- group 2: pages that have been written out but failed to be reclaimed (e.g., they were accessed before reclaiming)

- group 3: pages that have been swapped in but were kept in the swap cache; these pages may be on the active LRU

The main target of the original patch should be group 1, and those pages may be cheaper to reclaim than file pages. Group 2 pages are hard to reclaim if their swap_count() isn't 0. Group 3 pages can be reclaimed in theory, but the overhead may be high, and we may need to reclaim the swap entries instead of the pages if the pages are hot. But we can start reclaiming the swap entries before the swap space runs out.

So, if we can count group 1, we may use that as an indicator for scanning anon pages. And we may add code to reclaim group 3 earlier.

>> > Regarding running out of swap space: that is a good point. In server
>> > workloads we don't typically run out of swap device space anyway.

Thinking about this again: in a server workload, if we set a swap usage limit for a memcg, we may run out of that limit. Is it common for a server workload to run out of the swap usage limit of its memcg?

>> > Android uses ZRAM, so the story might be different. Adding Minchan here.
>>
>> Swap is usually almost full in Android since it compacts (i.e., swaps out)
>> background apps aggressively.
If my understanding is correct, because ZRAM has SWP_SYNCHRONOUS_IO set, anonymous pages are only put in the swap cache temporarily during swap-out. So the swap cache pages remaining on the anon LRU should not be a problem for ZRAM.

--
Best Regards,
Huang, Ying