Date: Sat, 16 Dec 2023 06:16:26 +0000
From: Matthew Wilcox
To: Christoph Hellwig
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jan Kara, David Howells
Subject: Re: [PATCH 04/11] writeback: Simplify the loops in write_cache_pages()
Message-ID:
References: <20231214132544.376574-1-hch@lst.de> <20231214132544.376574-5-hch@lst.de>
In-Reply-To: <20231214132544.376574-5-hch@lst.de>

On Thu, Dec 14, 2023 at 02:25:37PM +0100, Christoph Hellwig wrote:
> From: "Matthew Wilcox (Oracle)"
> 
> Collapse the two nested loops into one. This is needed as a step
> towards turning this into an iterator.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> Signed-off-by: Christoph Hellwig
> ---
>  mm/page-writeback.c | 98 ++++++++++++++++++++++-----------------------
>  1 file changed, 49 insertions(+), 49 deletions(-)
> 
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 5a3df8665ff4f9..2087d16115710e 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2460,6 +2460,7 @@ int write_cache_pages(struct address_space *mapping,
>  		void *data)
>  {
>  	int error;
> +	int i = 0;
>  
>  	if (wbc->range_cyclic) {
>  		wbc->index = mapping->writeback_index; /* prev offset */
> @@ -2477,67 +2478,66 @@ int write_cache_pages(struct address_space *mapping,
>  	folio_batch_init(&wbc->fbatch);
>  	wbc->err = 0;
>  
> -	while (wbc->index <= wbc->end) {
> -		int i;
> -
> -		writeback_get_batch(mapping, wbc);
> +	for (;;) {
> +		struct folio *folio;
> +		unsigned long nr;
>  
> +		if (i == wbc->fbatch.nr) {
> +			writeback_get_batch(mapping, wbc);
> +			i = 0;
> +		}
>  		if (wbc->fbatch.nr == 0)
>  			break;
>  
> -		for (i = 0; i < wbc->fbatch.nr; i++) {
> -			struct folio *folio = wbc->fbatch.folios[i];
> -			unsigned long nr;
> +		folio = wbc->fbatch.folios[i++];
>  
> -			wbc->done_index = folio->index;
> +		wbc->done_index = folio->index;
>  
> -			folio_lock(folio);
> -			if (!should_writeback_folio(mapping, wbc, folio)) {
> -				folio_unlock(folio);
> -				continue;
> -			}
> +		folio_lock(folio);
> +		if (!should_writeback_folio(mapping, wbc, folio)) {
> +			folio_unlock(folio);
> +			continue;
> +		}
>  
> -			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
> -
> -			error = writepage(folio, wbc, data);
> -			nr = folio_nr_pages(folio);
> -			if (unlikely(error)) {
> -				/*
> -				 * Handle errors according to the type of
> -				 * writeback. There's no need to continue for
> -				 * background writeback. Just push done_index
> -				 * past this page so media errors won't choke
> -				 * writeout for the entire file. For integrity
> -				 * writeback, we must process the entire dirty
> -				 * set regardless of errors because the fs may
> -				 * still have state to clear for each page. In
> -				 * that case we continue processing and return
> -				 * the first error.
> -				 */
> -				if (error == AOP_WRITEPAGE_ACTIVATE) {
> -					folio_unlock(folio);
> -					error = 0;
> -				} else if (wbc->sync_mode != WB_SYNC_ALL) {
> -					wbc->err = error;
> -					wbc->done_index = folio->index + nr;
> -					return writeback_finish(mapping,
> -							wbc, true);
> -				}
> -				if (!wbc->err)
> -					wbc->err = error;
> -			}
> +		trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
>  
> +		error = writepage(folio, wbc, data);
> +		nr = folio_nr_pages(folio);
> +		if (unlikely(error)) {
>  			/*
> -			 * We stop writing back only if we are not doing
> -			 * integrity sync. In case of integrity sync we have to
> -			 * keep going until we have written all the pages
> -			 * we tagged for writeback prior to entering this loop.
> +			 * Handle errors according to the type of
> +			 * writeback. There's no need to continue for
> +			 * background writeback. Just push done_index
> +			 * past this page so media errors won't choke
> +			 * writeout for the entire file. For integrity
> +			 * writeback, we must process the entire dirty
> +			 * set regardless of errors because the fs may
> +			 * still have state to clear for each page. In
> +			 * that case we continue processing and return
> +			 * the first error.
>  			 */
> -			wbc->nr_to_write -= nr;
> -			if (wbc->nr_to_write <= 0 &&
> -			    wbc->sync_mode == WB_SYNC_NONE)
> +			if (error == AOP_WRITEPAGE_ACTIVATE) {
> +				folio_unlock(folio);
> +				error = 0;
> +			} else if (wbc->sync_mode != WB_SYNC_ALL) {
> +				wbc->err = error;
> +				wbc->done_index = folio->index + nr;
>  				return writeback_finish(mapping, wbc, true);
> +			}
> +			if (!wbc->err)
> +				wbc->err = error;
>  		}
> +
> +		/*
> +		 * We stop writing back only if we are not doing
> +		 * integrity sync. In case of integrity sync we have to
> +		 * keep going until we have written all the pages
> +		 * we tagged for writeback prior to entering this loop.
> +		 */
> +		wbc->nr_to_write -= nr;
> +		if (wbc->nr_to_write <= 0 &&
> +		    wbc->sync_mode == WB_SYNC_NONE)
> +			return writeback_finish(mapping, wbc, true);
>  	}
>  
>  	return writeback_finish(mapping, wbc, false);
> -- 
> 2.39.2
> 
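As a reading aid for anyone following the series, here is a minimal, self-contained userspace sketch of the single-loop-over-batches shape the quoted commit message describes. The names (struct item_batch, fill_batch(), process()) are invented for illustration only and are not the kernel's folio_batch API; the point is that the batch index survives across iterations, so refilling the batch becomes just another step inside one flat loop.

#include <stddef.h>
#include <stdio.h>

#define BATCH_SIZE 16

struct item_batch {
	size_t nr;
	int items[BATCH_SIZE];
};

/* Pretend data source: hand out "remaining" items, at most BATCH_SIZE at a time. */
static void fill_batch(struct item_batch *batch, int *remaining)
{
	batch->nr = 0;
	while (batch->nr < BATCH_SIZE && *remaining > 0)
		batch->items[batch->nr++] = (*remaining)--;
}

static void process(int item)
{
	printf("processing %d\n", item);
}

int main(void)
{
	struct item_batch batch = { .nr = 0 };
	int remaining = 40;	/* arbitrary amount of work to consume */
	size_t i = 0;		/* index into the current batch, kept across iterations */

	for (;;) {
		int item;

		/* Only refill when the current batch is exhausted. */
		if (i == batch.nr) {
			fill_batch(&batch, &remaining);
			i = 0;
		}
		/* An empty refill means the data source is drained. */
		if (batch.nr == 0)
			break;

		item = batch.items[i++];
		process(item);
	}
	return 0;
}

With the loop flattened like this, the body reads as "produce the next item, then act on it", which is what makes it straightforward to pull the "next item" half out into an iterator, as the commit message says this change is a step towards.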