Date: Tue, 30 May 2023 18:06:38 -0700
From: Chris Li
To: Johannes Weiner
Cc: Domenico Cerasuolo, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
	yosryahmed@google.com, kernel-team@fb.com
Subject: Re: [PATCH] mm: zswap: shrink until can accept
References: <20230524065051.6328-1-cerasuolodomenico@gmail.com>
	<20230530041341.GB84971@cmpxchg.org>
	<20230530155519.GB97194@cmpxchg.org>
	<20230530185451.GA101722@cmpxchg.org>
In-Reply-To: <20230530185451.GA101722@cmpxchg.org>

On Tue, May 30, 2023 at 02:54:51PM -0400, Johannes Weiner wrote:
> > Maybe ENOMEM is a bad example. How about if the swap device
> > just went bad and can't complete new IO writes?
>
> This is actually outside the scope of zswap, and handled by the
> swapcache (end_swap_bio_write).
>
> Once the IO is submitted, zswap will ax its copy and leave the rest to
> the swapcache. It behaves the same way as if zswap had never been
> involved to begin with when the swap out fails on IO errors.
>
> From a zswap perspective, there are no persistent errors in moving a
> zswap entry back into the swapcache. Not just currently, but generally.

Again, you are right that this zswap writeback is async, so the
writeback error is NOT going to propagate to the shrink function.

All three of the current pool backends I looked at (zbud, z3fold,
zsmalloc) already retry internally 8 times, so adding more retries does
not fundamentally change the existing behavior.

I looked at all the possible error codes generated inside the reclaim
code; the only notable ones are ENOMEM and the errors from a concurrent
swap invalidation or a racing swapin fault.

BTW, zswap reclaim itself consumes memory, so looping on ENOMEM might
cause more OOM. But that can exist in the current code as well.
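To make it concrete, the loop shape the patch gives shrink_worker() is
roughly the following (a paraphrased sketch, not the exact hunk; the
MAX_RECLAIM_RETRIES name and value are stand-ins for whatever the final
version uses):

	#define MAX_RECLAIM_RETRIES 16	/* illustrative value */

	static void shrink_worker(struct work_struct *w)
	{
		struct zswap_pool *pool = container_of(w, typeof(*pool),
						       shrink_work);
		int failures = 0;

		do {
			/* write back one entry from the pool LRU */
			if (zpool_shrink(pool->zpool, 1, NULL)) {
				zswap_reject_reclaim_fail++;
				/* safety cap against looping forever */
				if (++failures == MAX_RECLAIM_RETRIES)
					break;
			}
			cond_resched();
		} while (!zswap_can_accept());
		zswap_pool_put(pool);
	}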
> > > Aside from -ENOMEM, writeback_entry will fail on concurrent swap
> > > invalidation or a racing swapin fault. In both cases we should
> > > absolutely keep trying other entries until the goal is met.
> >
> > How about a narrower fix recognizing those error cases and making
> > the inner loop continue in those errors?
>
> Right, I just don't really see the value proposition of this
> complication, and I see some downsides (see below). No single entry
> error should ever cause us to stop the wider reclaim loop.

That is, until the current LRU list has been gone through once. I
expect repeating the same list to yield fewer reclaimed pages.

> > > > > extreme case where it's the only page left on the list, I again don't
> > > > > see how retrying a few times will make the situation worse.
> > > > >
> > > > > In practice, IMO there is little upside in trying to be more
> > > > > discerning about the error codes. Simple seems better here.
> > > >
> > > > Just trying to think about what should be the precise loop termination
> > > > condition here.
> > > >
> > > > I still feel blindly trying a few times is a very imprecise condition.
> > >
> > > The precise termination condition is when can_accept() returns true
> > > again. The safety cap is only added as a precaution to avoid infinite
> > > loops if something goes wrong or unexpected, now or in the future.
> >
> > In my mind, that statement already suggests can_accept() is not
> > *precise*, considering the need to avoid infinite loops.
> > e.g. Do we know what the optimal cap value is and why that value
> > is optimal?
>
> Oh but it is precise. That's the goal we want to accomplish.

I understand it is the goal; the precise condition I am talking about
is the loop termination condition, and can_accept() is not the only
part of it. Anyway, let's move on.

> The cap is just so that in case something unexpectedly goes wrong (a
> bug), we fail gracefully and don't lock up the machine. The same
> reason we prefer WARN_ONs over BUG_ONs if we can, to avoid
> crashes. That's really all there is to it, and it strikes me as a
> reasonable and robust design choice. It's fine to limp along or be
> suboptimal after such a bug happens; the bar is avoiding an infinite
> loop, nothing else.
>
> Your suggestion of whitelisting certain errors is more complicated,
> but also less robust: in case an entry error does by some accident
> become persistent for the whole LRU, we're locking up the host. We'd
> rather catch a bug like this by seeing spikes in the reclaim failure
> rate than by losing production machines.
>
> > Putting the definition of precise aside, I do see the unconditional
> > retry can have unwanted effects.
>
> I hope I could address this above. But if not, please share your
> concerns.

Thanks for the discussion; I am less concerned about the retry now.
Retrying on EAGAIN might be the simplest way to proceed.
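Something along these lines is what I have in mind (a hypothetical
variant of the sketch above, assuming the reclaim path can report the
race-condition failures as -EAGAIN and everything else as a hard
error):

		int ret, failures = 0;

		do {
			ret = zpool_shrink(pool->zpool, 1, NULL);
			if (ret) {
				zswap_reject_reclaim_fail++;
				/* only a racing swapin or invalidation
				 * is worth retrying; give up otherwise */
				if (ret != -EAGAIN)
					break;
				/* transient, but still cap the retries */
				if (++failures == MAX_RECLAIM_RETRIES)
					break;
			}
			cond_resched();
		} while (!zswap_can_accept());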
Outside the scope of this patch, I am still surprised to see such a
high number of retries caused by race conditions. There are already 8
inner-loop retries, so the actual number of pages retried is 8 times
that. If there is a reproducer script, I would like to reproduce this
locally to understand it better. I wish there were a way to reduce the
retries.

Another idea: we could start shrinking as soon as the pool max is
reached, to get back down to the accept threshold earlier.

Chris