Date: Mon, 27 Nov 2023 13:00:55 -0800
From: Andrew Morton
To: Nhat Pham
Cc: hannes@cmpxchg.org, cerasuolodomenico@gmail.com, yosryahmed@google.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
 muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
 kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: Re: [PATCH v6 6/6] zswap: shrinks zswap pool based on memory pressure
Message-Id: <20231127130055.30c455906d912e09dcb7e79b@linux-foundation.org>
In-Reply-To: <20231127193703.1980089-7-nphamcs@gmail.com>
References: <20231127193703.1980089-1-nphamcs@gmail.com>
 <20231127193703.1980089-7-nphamcs@gmail.com>

On Mon, 27 Nov 2023 11:37:03 -0800 Nhat Pham wrote:

> Currently, we only shrink the zswap pool when the user-defined limit is
> hit.
> This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed ahead
> of time, as this depends on the workload (specifically, on factors such
> as memory access patterns and compressibility of the memory pages).
> 
> This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> is initiated when there is memory pressure. The shrinker does not
> have any parameter that must be tuned by the user, and can be opted in
> or out on a per-memcg basis.
> 
> Furthermore, to make it more robust for many workloads and prevent
> overshrinking (i.e evicting warm pages that might be refaulted into
> memory), we build in the following heuristics:
> 
> * Estimate the number of warm pages residing in zswap, and attempt to
>   protect this region of the zswap LRU.
> * Scale the number of freeable objects by an estimate of the memory
>   saving factor. The better zswap compresses the data, the fewer pages
>   we will evict to swap (as we will otherwise incur IO for relatively
>   small memory saving).
> * During reclaim, if the shrinker encounters a page that is also being
>   brought into memory, the shrinker will cautiously terminate its
>   shrinking action, as this is a sign that it is touching the warmer
>   region of the zswap LRU.
> 
> As a proof of concept, we ran the following synthetic benchmark:
> build the linux kernel in a memory-limited cgroup, and allocate some
> cold data in tmpfs to see if the shrinker could write them out and
> improved the overall performance. Depending on the amount of cold data
> generated, we observe from 14% to 35% reduction in kernel CPU time used
> in the kernel builds.
> 
> ...
> 
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -22,6 +22,7 @@
>  #include <linux/mm_types.h>
>  #include <linux/page-flags.h>
>  #include <linux/local_lock.h>
> +#include <linux/zswap.h>
>  #include <asm/page.h>
>  
>  /* Free memory management - zoned buddy allocator. */
> @@ -641,6 +642,7 @@ struct lruvec {
>  #ifdef CONFIG_MEMCG
>  	struct pglist_data *pgdat;
>  #endif
> +	struct zswap_lruvec_state zswap_lruvec_state;

Normally we'd put this in #ifdef CONFIG_ZSWAP.

> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -5,20 +5,40 @@
>  #include <linux/types.h>
>  #include <linux/mm_types.h>
>  
> +struct lruvec;
> +
>  extern u64 zswap_pool_total_size;
>  extern atomic_t zswap_stored_pages;
>  
>  #ifdef CONFIG_ZSWAP
>  
> +struct zswap_lruvec_state {
> +	/*
> +	 * Number of pages in zswap that should be protected from the shrinker.
> +	 * This number is an estimate of the following counts:
> +	 *
> +	 * a) Recent page faults.
> +	 * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> +	 *    as well as recent zswap LRU rotations.
> +	 *
> +	 * These pages are likely to be warm, and might incur IO if the are written
> +	 * to swap.
> +	 */
> +	atomic_long_t nr_zswap_protected;
> +};
> +
>  bool zswap_store(struct folio *folio);
>  bool zswap_load(struct folio *folio);
>  void zswap_invalidate(int type, pgoff_t offset);
>  void zswap_swapon(int type);
>  void zswap_swapoff(int type);
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
> -
> +void zswap_lruvec_state_init(struct lruvec *lruvec);
> +void zswap_lruvec_swapin(struct page *page);
>  #else
>  
> +struct zswap_lruvec_state {};

But instead you made it an empty struct in this case. That's a bit funky,
but I guess OK. It does send a careful reader of struct lruvec over to
look at the zswap_lruvec_state definition to understand what's going on.
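For reference, a tiny standalone sketch of the trade-off between the two
layouts (this is not the kernel code: CONFIG_ZSWAP is faked with a plain
macro and the other struct members are stand-ins, purely for illustration):

/*
 * Compile with or without -DCONFIG_ZSWAP (gcc; empty structs have size 0
 * as a GNU C extension, which is what the kernel relies on here).
 */
#include <stdio.h>

#ifdef CONFIG_ZSWAP
struct zswap_lruvec_state {
	long nr_zswap_protected;	/* atomic_long_t in the real patch */
};
#else
struct zswap_lruvec_state {};		/* empty: contributes no bytes */
#endif

struct lruvec {
	void *pgdat;			/* stand-in for the real members */
	struct zswap_lruvec_state zswap_lruvec_state;
};

int main(void)
{
	/*
	 * With the empty struct, sizeof(struct lruvec) is unchanged when
	 * zswap is configured out -- same effect as wrapping the member in
	 * #ifdef CONFIG_ZSWAP, except that every reader of struct lruvec
	 * sees the field unconditionally and has to chase the type to learn
	 * that it may be empty.
	 */
	printf("sizeof(struct lruvec) = %zu\n", sizeof(struct lruvec));
	return 0;
}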
>  static inline bool zswap_store(struct folio *folio)
>  {
>  	return false;
> @@ -33,7 +53,8 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
>  static inline void zswap_swapon(int type) {}
>  static inline void zswap_swapoff(int type) {}
>  static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
> -
> +static inline void zswap_lruvec_init(struct lruvec *lruvec) {}
> +static inline void zswap_lruvec_swapin(struct page *page) {}

Needed this build fix:

--- a/include/linux/zswap.h~zswap-shrinks-zswap-pool-based-on-memory-pressure-fix
+++ a/include/linux/zswap.h
@@ -54,6 +54,7 @@ static inline void zswap_swapon(int type
 static inline void zswap_swapoff(int type) {}
 static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
 static inline void zswap_lruvec_init(struct lruvec *lruvec) {}
+static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
 static inline void zswap_lruvec_swapin(struct page *page) {}
 #endif
_
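To spell out why the extra stub is needed, here is a minimal sketch that
mimics the shape of the header rather than reproducing it; the caller at
the bottom is hypothetical, standing in for whatever mm code initializes
lruvecs:

struct lruvec;	/* forward declaration, as in the patch */

#ifdef CONFIG_ZSWAP
void zswap_lruvec_state_init(struct lruvec *lruvec);	/* real one in mm/zswap.c */
#else
static inline void zswap_lruvec_init(struct lruvec *lruvec) {}
/*
 * Without the next line, the caller below has no definition to resolve
 * against when CONFIG_ZSWAP is not set, because the stub above is spelled
 * differently from the declaration in the CONFIG_ZSWAP branch.
 */
static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
#endif

/* Hypothetical caller, for illustration only: */
static inline void example_init_lruvec(struct lruvec *lruvec)
{
	zswap_lruvec_state_init(lruvec);	/* no-op when zswap is configured out */
}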