Date: Fri, 1 Jul 2022 12:03:23 -0400
From: Brian Foster
To: "Darrick J. Wong"
Cc: Dave Chinner, Matthew Wilcox, linux-xfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Christoph Hellwig, linux-mm@kvack.org
Subject: Re: Multi-page folio issues in 5.19-rc4 (was [PATCH v3 25/25] xfs: Support large folios)
References: <20220628073120.GI227878@dread.disaster.area>
	<20220628221757.GJ227878@dread.disaster.area>

On Wed, Jun 29, 2022 at 01:22:06PM -0700, Darrick J. Wong wrote:
> On Wed, Jun 29, 2022 at 08:57:30AM -0400, Brian Foster wrote:
> > On Tue, Jun 28, 2022 at 04:21:55PM -0700, Darrick J. Wong wrote:
> > > On Wed, Jun 29, 2022 at 08:17:57AM +1000, Dave Chinner wrote:
> > > > On Tue, Jun 28, 2022 at 02:18:24PM +0100, Matthew Wilcox wrote:
> > > > > On Tue, Jun 28, 2022 at 12:31:55PM +0100, Matthew Wilcox wrote:
> > > > > > On Tue, Jun 28, 2022 at 12:27:40PM +0100, Matthew Wilcox wrote:
> > > > > > > On Tue, Jun 28, 2022 at 05:31:20PM +1000, Dave Chinner wrote:
> > > > > > > > So using this technique, I've discovered that there's a dirty page
> > > > > > > > accounting leak that eventually results in fsx hanging in
> > > > > > > > balance_dirty_pages().
> > > > > > >
> > > > > > > Alas, I think this is only an accounting error, and not related to
> > > > > > > the problem(s) that Darrick & Zorro are seeing. I think what you're
> > > > > > > seeing is dirty pages being dropped at truncation without the
> > > > > > > appropriate accounting. ie this should be the fix:
> > > > > >
> > > > > > Argh, try one that actually compiles.
> > > > >
> > > > > ... that one's going to underflow the accounting. Maybe I shouldn't
> > > > > be writing code at 6am?
> > > > >
> > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > > index f7248002dad9..4eec6ee83e44 100644
> > > > > --- a/mm/huge_memory.c
> > > > > +++ b/mm/huge_memory.c
> > > > > @@ -18,6 +18,7 @@
> > > > >  #include
> > > > >  #include
> > > > >  #include
> > > > > +#include
> > > > >  #include
> > > > >  #include
> > > > >  #include
> > > > > @@ -2439,11 +2440,15 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > > >          __split_huge_page_tail(head, i, lruvec, list);
> > > > >          /* Some pages can be beyond EOF: drop them from page cache */
> > > > >          if (head[i].index >= end) {
> > > > > -            ClearPageDirty(head + i);
> > > > > -            __delete_from_page_cache(head + i, NULL);
> > > > > +            struct folio *tail = page_folio(head + i);
> > > > > +
> > > > >              if (shmem_mapping(head->mapping))
> > > > >                  shmem_uncharge(head->mapping->host, 1);
> > > > > -            put_page(head + i);
> > > > > +            else if (folio_test_clear_dirty(tail))
> > > > > +                folio_account_cleaned(tail,
> > > > > +                    inode_to_wb(folio->mapping->host));
> > > > > +            __filemap_remove_folio(tail, NULL);
> > > > > +            folio_put(tail);
> > > > >          } else if (!PageAnon(page)) {
> > > > >              __xa_store(&head->mapping->i_pages, head[i].index,
> > > > >                      head + i, 0);
> > > > >
> > > >
> > > > Yup, that fixes the leak.
> > > >
> > > > Tested-by: Dave Chinner
> > >
> > > Four hours of generic/522 running is long enough to conclude that this
> > > is likely the fix for my problem and migrate long soak testing to my
> > > main g/522 rig and:
> > >
> > > Tested-by: Darrick J. Wong
> > >
> >
> > Just based on Willy's earlier comment.. what I would probably be a
> > little careful/curious about here is whether the accounting fix leads to
> > an indirect behavior change that does impact reproducibility of the
> > corruption problem. For example, does artificially escalated dirty page
> > tracking lead to increased reclaim/writeback activity than might
> > otherwise occur, and thus contend with the fs workload? Clearly it has
> > some impact based on Dave's balance_dirty_pages() problem reproducer,
> > but I don't know if it extends beyond that off the top of my head. That
> > might make some sense if the workload is fsx, since that doesn't
> > typically stress cache/memory usage the way a large fsstress workload or
> > something might.
> >
> > So for example, interesting questions might be... Do your corruption
> > events happen to correspond with dirty page accounting crossing some
> > threshold based on available memory in your test environment? Does
> > reducing available memory affect reproducibility? Etc.
>
> Yeah, I wonder that too now. I managed to trace generic/522 a couple of
> times before willy's patch dropped. From what I could tell, a large
> folio X would get page P assigned to the fsx file's page cache to cover
> range R, dirtied, and written to disk. At some point later, we'd
> reflink into part of the file range adjacent to P, but not P itself.
> I /think/ that should have caused the whole folio to get invalidated?
>
> Then some more things happened (none of which dirtied R, according to
> fsx) and then suddenly writeback would trigger on some page (don't know
> which) that would write to the disk blocks backing R. I'm fairly sure
> that's where the incorrect disk contents came from.
>
> Next, we'd reflink part of the file range including R into a different
> part of the file (call it R2). fsx would read R2, bringing a new page
> into cache, and it wouldn't match the fsxgood buffer, leading to fsx
> aborting.
>
> After a umount/mount cycle, reading R and R2 would both reveal the
> incorrect contents that had caused fsx to abort.
>
FWIW, I hadn't been able to reproduce this in my default environment up
to this point. With the memory leak issue now identified, I was
eventually able to reproduce it by reducing dirty_bytes to something the
system would be more likely to hit sooner (i.e. 16-32MB), but I also see
stalling behavior and whatnot due to the leak, which requires backing
off from the specified dirty limit every so often.

If I apply the accounting patch to avoid the leak and set
dirty_background_bytes to something notably aggressive (1kB), the test
survived 100 iterations or so before I stopped it. If I then set
dirty_bytes to something similarly aggressive (1MB), I hit the failure
on the next iteration (assuming it's the same problem). It's spinning
again at ~25 or so iterations without a failure so far, so I'd have to
wait and see how reliable the reproducer really is. Though if it
doesn't reoccur soonish, perhaps I'll try reducing dirty_bytes a bit
more...

My suspicion based on these characteristics is that the blocking limit
triggers more aggressive reclaim/invalidation and thus helps detect the
problem sooner. If reflink is involved purely as a cache invalidation
step (i.e. so a subsequent read will hit the disk and detect a cache
inconsistency), then it might be interesting to see whether the problem
can still be reproduced without reflink operations enabled, but instead
with some combination of the -f/-X fsx flags to perform more flush
invals and on-disk data checks.

Brian

> Unfortunately the second ftrace attempt ate some trace data, so I was
> unable to figure out if the same thing happened again.
>
> At this point I really need to get on reviewing patches for 5.20, so
> I'll try to keep poking at this (examining the trace data requires a lot
> of concentration which isn't really possible while sawzall construction
> is going on at home) but at worst I can ask Linus to merge a patch for
> 5.19 final that makes setting mapping_set_large_folio a
> Kconfig/CONFIG_XFS_DEBUG option.
>
> --D
>
> >
> > Brian
> >
> > > --D
> > >
> > > > Cheers,
> > > >
> > > > Dave.
> > > > --
> > > > Dave Chinner
> > > > david@fromorbit.com
> > > >
> >
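The dirty_bytes / dirty_background_bytes values Brian describes above
are normally poked through /proc/sys/vm (or sysctl). For reference,
here is a minimal sketch of that setup step, not part of the original
thread: the 1kB/1MB figures are the ones quoted in the mail, while the
helper name and error handling are purely illustrative.

    /*
     * Sketch only: roughly equivalent to
     *   sysctl -w vm.dirty_background_bytes=1024 vm.dirty_bytes=1048576
     * Needs root; run before kicking off the fsx/fstests iterations.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static void set_vm_knob(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        fclose(f);
    }

    int main(void)
    {
        /* background threshold first, then the hard (blocking) limit */
        set_vm_knob("/proc/sys/vm/dirty_background_bytes", "1024");
        set_vm_knob("/proc/sys/vm/dirty_bytes", "1048576");
        return 0;
    }

Note that writing dirty_bytes takes the dirty_ratio counterpart out of
effect (and likewise for the background pair), so restoring the ratio
defaults afterwards is left to the test harness.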