Date: Fri, 26 May 2023 16:05:13 -0700
From: Chris Li
To: Domenico Cerasuolo
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, sjenning@redhat.com,
	ddstreet@ieee.org, vitaly.wool@konsulko.com, yosryahmed@google.com,
	hannes@cmpxchg.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: zswap: shrink until can accept
Message-ID:
References: <20230524065051.6328-1-cerasuolodomenico@gmail.com>
In-Reply-To: <20230524065051.6328-1-cerasuolodomenico@gmail.com>

On Wed, May 24, 2023 at 08:50:51AM +0200, Domenico Cerasuolo wrote:
> This update addresses an issue with the zswap reclaim mechanism, which
> hinders the efficient offloading of cold pages to disk, thereby
> compromising the preservation of the LRU order and consequently
> diminishing, if not inverting, its performance benefits.
>
> The functioning of the zswap shrink worker was found to be inadequate,
> as shown by a basic benchmark test.
> For the test, a kernel build was
> utilized as a reference, with its memory confined to 1G via a cgroup
> and a 5G swap file provided. The results are presented below; these
> are averages of three runs without the use of zswap:
>
> real 46m26s
> user 35m4s
> sys 7m37s
>
> With zswap (zbud) enabled and max_pool_percent set to 1 (in a 32G
> system), the results changed to:
>
> real 56m4s
> user 35m13s
> sys 8m43s
>
> written_back_pages: 18
> reject_reclaim_fail: 0
> pool_limit_hit: 1478
>
> Besides the evident regression, one thing to notice from this data is
> the extremely low number of written_back_pages and pool_limit_hit.
>
> The pool_limit_hit counter, which is increased in zswap_frontswap_store
> when zswap is completely full, doesn't account for a particular
> scenario: once zswap hits its limit, zswap_pool_reached_full is set to
> true; with this flag on, zswap_frontswap_store rejects pages if zswap
> is still above the acceptance threshold. Once we include the rejections
> due to zswap_pool_reached_full && !zswap_can_accept(), the number goes
> from 1478 to a significant 21578266.
>
> Zswap is stuck in an undesirable state where it rejects pages because
> it's above the acceptance threshold, yet fails to attempt memory
> reclamation. This happens because the shrink work is only queued when
> zswap_frontswap_store detects that it's full, and the work itself only
> reclaims one page per run.
>
> This state results in hot pages getting written directly to disk,
> while cold ones remain in memory, waiting only to be invalidated. The
> LRU order is completely broken and zswap ends up being just an
> overhead without providing any benefits.
>
> This commit applies two changes: a) the shrink worker is set to reclaim
> pages until the acceptance threshold is met and b) the task is also
> enqueued when zswap is not full but still above the threshold.
>
> Testing this suggested update showed much better numbers:
>
> real 36m37s
> user 35m8s
> sys 9m32s
>
> written_back_pages: 10459423
> reject_reclaim_fail: 12896
> pool_limit_hit: 75653
>
> Fixes: 45190f01dd40 ("mm/zswap.c: add allocation hysteresis if pool limit is hit")
> Signed-off-by: Domenico Cerasuolo
> ---
>  mm/zswap.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 59da2a415fbb..2ee0775d8213 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -587,9 +587,13 @@ static void shrink_worker(struct work_struct *w)
>  {
>  	struct zswap_pool *pool = container_of(w, typeof(*pool),
>  						shrink_work);
> +	int ret;

Very minor nitpick: you can move the declaration inside the do
statement where it gets used.

>
> -	if (zpool_shrink(pool->zpool, 1, NULL))
> -		zswap_reject_reclaim_fail++;
> +	do {
> +		ret = zpool_shrink(pool->zpool, 1, NULL);
> +		if (ret)
> +			zswap_reject_reclaim_fail++;
> +	} while (!zswap_can_accept() && ret != -EINVAL);

As others have pointed out, this while loop can be problematic. Have
you found out what the common reason for the reclaim failures was?

Inside the shrink function there is a while loop that would be the
place to implement try-harder conditions. For example, if all the
pages in the LRU have already been tried once, there is no reason to
keep calling the shrink function. The outer loop doesn't have that
kind of visibility.
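Just to sketch the direction I mean (untested, and MAX_RECLAIM_RETRIES
is a made-up name here for whatever bound you pick; the cond_resched()
is also my addition), the worker could bound the number of consecutive
failures instead of looping purely on the threshold:

static void shrink_worker(struct work_struct *w)
{
	struct zswap_pool *pool = container_of(w, typeof(*pool),
						shrink_work);
	int failures = 0;

	do {
		/* ret is only needed inside the loop body now */
		int ret = zpool_shrink(pool->zpool, 1, NULL);

		if (ret) {
			zswap_reject_reclaim_fail++;
			/* nothing left on the LRU to try, give up */
			if (ret == -EINVAL)
				break;
			/* made-up bound: stop spinning on repeated
			 * failures rather than retrying forever */
			if (++failures == MAX_RECLAIM_RETRIES)
				break;
		}
		cond_resched();
	} while (!zswap_can_accept());

	zswap_pool_put(pool);
}

That way the worker still makes progress toward the acceptance
threshold but cannot livelock when reclaim keeps failing for the same
reason.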
Chris

> 	zswap_pool_put(pool);
> }
>
> @@ -1188,7 +1192,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
>  	if (zswap_pool_reached_full) {
>  		if (!zswap_can_accept()) {
>  			ret = -ENOMEM;
> -			goto reject;
> +			goto shrink;
>  		} else
>  			zswap_pool_reached_full = false;
>  	}
> --
> 2.34.1
>