Date: Wed, 6 Dec 2023 14:43:48 +0800
Subject: Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
From: Chengming Zhou
To: Yosry Ahmed
Cc: Nhat Pham, akpm@linux-foundation.org, hannes@cmpxchg.org, cerasuolodomenico@gmail.com, sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, shuah@kernel.org
References: <20231130194023.4102148-1-nphamcs@gmail.com> <20231130194023.4102148-7-nphamcs@gmail.com>

On 2023/12/6 13:59, Yosry Ahmed wrote:
> [..]
>>> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
>>>  	return entry;
>>>  }
>>>  
>>> +/*********************************
>>> +* shrinker functions
>>> +**********************************/
>>> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
>>> +				       spinlock_t *lock, void *arg);
>>> +
>>> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>>> +					 struct shrink_control *sc)
>>> +{
>>> +	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>>> +	unsigned long shrink_ret, nr_protected, lru_size;
>>> +	struct zswap_pool *pool = shrinker->private_data;
>>> +	bool encountered_page_in_swapcache = false;
>>> +
>>> +	nr_protected =
>>> +		atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
>>> +	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
>>> +
>>> +	/*
>>> +	 * Abort if the shrinker is disabled or if we are shrinking into the
>>> +	 * protected region.
>>> +	 *
>>> +	 * This short-circuiting is necessary because if we have many
>>> +	 * concurrent reclaimers getting the freeable zswap object counts at the
>>> +	 * same time (before any of them made reasonable progress), the total
>>> +	 * number of reclaimed objects might be more than the number of unprotected
>>> +	 * objects (i.e. the reclaimers will reclaim into the protected area of the
>>> +	 * zswap LRU).
>>> +	 */
>>> +	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
>>> +		sc->nr_scanned = 0;
>>> +		return SHRINK_STOP;
>>> +	}
>>> +
>>> +	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
>>> +					  &encountered_page_in_swapcache);
>>> +
>>> +	if (encountered_page_in_swapcache)
>>> +		return SHRINK_STOP;
>>> +
>>> +	return shrink_ret ? shrink_ret : SHRINK_STOP;
>>> +}
>>> +
>>> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>> +					  struct shrink_control *sc)
>>> +{
>>> +	struct zswap_pool *pool = shrinker->private_data;
>>> +	struct mem_cgroup *memcg = sc->memcg;
>>> +	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>>> +	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
>>> +
>>> +#ifdef CONFIG_MEMCG_KMEM
>>> +	cgroup_rstat_flush(memcg->css.cgroup);
>>> +	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
>>> +	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
>>> +#else
>>> +	/* use pool stats instead of memcg stats */
>>> +	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
>>> +	nr_stored = atomic_read(&pool->nr_stored);
>>> +#endif
>>> +
>>> +	if (!zswap_shrinker_enabled || !nr_stored)
>>
>> When I tested with this series, with !zswap_shrinker_enabled in the default
>> case, I found the performance much worse than without this patch.
>>
>> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>>
>> The reason seems to be the cgroup_rstat_flush() above, which caused heavy
>> rstat lock contention on the zswap_store() path. When I moved the
>> "zswap_shrinker_enabled" check above the cgroup_rstat_flush(), the
>> performance became much better.
>>
>> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>
> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
> use mem_cgroup_flush_stats() here like other places, unless accuracy is
> crucial, which I doubt given that reclaim uses
> mem_cgroup_flush_stats().

Yes. After changing to use mem_cgroup_flush_stats() here, the performance
became much better.

> mem_cgroup_flush_stats() has some thresholding to make sure we don't
> do flushes unnecessarily, and I have a pending series in mm-unstable
> that makes that thresholding per-memcg. Keep in mind that adding a
> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,

My test branch is linux-next 20231205, and it's all good after changing
to use mem_cgroup_flush_stats(memcg).

> because the series there adds a memcg argument to
> mem_cgroup_flush_stats(). That should be easily amendable, though; I can
> post a fixlet for my series to add the memcg argument there on top of
> users if needed.

That would be great. Thanks!
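
For reference, the reordering discussed above could be sketched as a diff
against the zswap_shrinker_count() hunk quoted earlier. This is only an
illustrative sketch, not the actual fixlet: the early `return 0` (meaning
"nothing freeable") follows the usual shrinker ->count_objects convention,
and `mem_cgroup_flush_stats(memcg)` only takes a memcg argument with the
pending mm-unstable series (on trees without it, the call would be
`mem_cgroup_flush_stats()` with no argument):

```diff
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
 	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
 
+	/*
+	 * Check the knob before touching rstat at all, so a disabled
+	 * shrinker never takes the flush path (and its lock).
+	 */
+	if (!zswap_shrinker_enabled)
+		return 0;
+
 #ifdef CONFIG_MEMCG_KMEM
-	cgroup_rstat_flush(memcg->css.cgroup);
+	mem_cgroup_flush_stats(memcg);
 	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
 	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
```

With the check hoisted like this, the later
`if (!zswap_shrinker_enabled || !nr_stored)` test would only need the
`!nr_stored` half.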