Date: Tue, 30 May 2023 14:15:47 -0700
From: Andrew Morton
To: Yosry Ahmed
Cc: Konrad Rzeszutek Wilk, Seth Jennings, Dan Streetman, Vitaly Wool,
    Johannes Weiner, Nhat Pham, Domenico Cerasuolo, Yu Zhao,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: zswap: support exclusive loads
Message-Id: <20230530141547.609c4a434470c3fbf7570ff8@linux-foundation.org>
In-Reply-To: <20230530210251.493194-1-yosryahmed@google.com>
References: <20230530210251.493194-1-yosryahmed@google.com>

On Tue, 30 May 2023 21:02:51 +0000 Yosry Ahmed wrote:

> Commit 71024cb4a0bf ("frontswap: remove frontswap_tmem_exclusive_gets")
> removed support for exclusive loads from frontswap as it was not used.
>
> Bring back exclusive loads support to frontswap by adding an
> exclusive_loads argument to frontswap_ops. Add support for exclusive
> loads to zswap behind CONFIG_ZSWAP_EXCLUSIVE_LOADS.

Why is this Kconfigurable?  Why not just enable the feature for all
builds?
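For context, the frontswap side of exclusive loads could look roughly
like the sketch below.  It assumes a boolean exclusive_loads field in
struct frontswap_ops and reuses the existing __frontswap_test() /
__frontswap_clear() helpers; the patch itself may wire this up
differently.

	/* mm/frontswap.c -- sketch only, not the patch under review */
	int __frontswap_load(struct page *page)
	{
		swp_entry_t entry = { .val = page_private(page), };
		int type = swp_type(entry);
		struct swap_info_struct *sis = swap_info[type];
		pgoff_t offset = swp_offset(entry);
		int ret = -1;

		if (!__frontswap_test(sis, offset))
			return -1;

		ret = frontswap_ops->load(type, offset, page);
		if (ret == 0) {
			inc_frontswap_loads();
			if (frontswap_ops->exclusive_loads) {
				/*
				 * The backend dropped its copy: mark the
				 * page dirty so it is written back if it
				 * is reclaimed again, and clear the
				 * frontswap map bit for this offset.
				 */
				SetPageDirty(page);
				__frontswap_clear(sis, offset);
			}
		}
		return ret;
	}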
> Refactor zswap entry invalidation in zswap_frontswap_invalidate_page()
> into zswap_invalidate_entry() to reuse it in zswap_frontswap_load().
>
> With exclusive loads, we avoid having two copies of the same page in
> memory (compressed & uncompressed) after faulting it in from zswap. On
> the other hand, if the page is to be reclaimed again without being
> dirtied, it will be re-compressed. Compression is not usually slow, and
> a page that was just faulted in is less likely to be reclaimed again
> soon.
>
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -46,6 +46,19 @@ config ZSWAP_DEFAULT_ON
>  	  The selection made here can be overridden by using the kernel
>  	  command line 'zswap.enabled=' option.
>  
> +config ZSWAP_EXCLUSIVE_LOADS
> +	bool "Invalidate zswap entries when pages are loaded"
> +	depends on ZSWAP
> +	help
> +	  If selected, when a page is loaded from zswap, the zswap entry is
> +	  invalidated at once, as opposed to leaving it in zswap until the
> +	  swap entry is freed.
> +
> +	  This avoids having two copies of the same page in memory
> +	  (compressed and uncompressed) after faulting in a page from zswap.
> +	  The cost is that if the page was never dirtied and needs to be
> +	  swapped out again, it will be re-compressed.

So it's a speed-vs-space tradeoff?  I'm not sure how users are to decide
whether they want this.  Did we help them as much as possible?
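For reference, the zswap_invalidate_entry() refactor described in the
changelog boils down to roughly the following.  The helper names
(zswap_rb_erase(), zswap_entry_put(), zswap_entry_find_get(), the tree
lock) are the existing zswap ones; the IS_ENABLED() check is only an
assumed way of consuming the new Kconfig symbol, not necessarily what
the patch does.

	/* mm/zswap.c -- sketch of the refactored helper */
	static void zswap_invalidate_entry(struct zswap_tree *tree,
					   struct zswap_entry *entry)
	{
		/* remove from rbtree */
		zswap_rb_erase(&tree->rbtree, entry);
		/* drop the initial reference taken at entry creation */
		zswap_entry_put(tree, entry);
	}

	/*
	 * End of zswap_frontswap_load(), under tree->lock (sketch).  The
	 * load path still drops the reference it took earlier via
	 * zswap_entry_find_get(); with exclusive loads the entry is also
	 * unlinked and loses its initial reference, so it is freed once
	 * the last reference goes away.
	 */
	spin_lock(&tree->lock);
	if (IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS))
		zswap_invalidate_entry(tree, entry);
	zswap_entry_put(tree, entry);
	spin_unlock(&tree->lock);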