Date: Tue, 24 Nov 2020 20:15:52 +0000
From: Matthew Wilcox
To: Linus Torvalds
Cc: Hugh Dickins, Jan Kara, syzbot, Andreas Dilger, Ext4 Developers List,
    Linux Kernel Mailing List, syzkaller-bugs, Theodore Ts'o, Linux-MM,
    Oleg Nesterov, Andrew Morton, "Kirill A. Shutemov", Nicholas Piggin,
    Alex Shi, Qian Cai, Christoph Hellwig, "Darrick J. Wong",
    William Kucharski, Jens Axboe, linux-fsdevel, linux-xfs
Subject: Re: kernel BUG at fs/ext4/inode.c:LINE!
Message-ID: <20201124201552.GE4327@casper.infradead.org>
References: <000000000000d3a33205add2f7b2@google.com>
 <20200828100755.GG7072@quack2.suse.cz>
 <20200831100340.GA26519@quack2.suse.cz>
 <20201124121912.GZ4327@casper.infradead.org>
 <20201124183351.GD4327@casper.infradead.org>

On Tue, Nov 24, 2020 at 11:00:42AM -0800, Linus Torvalds wrote:
> On Tue, Nov 24, 2020 at 10:33 AM Matthew Wilcox wrote:
> >
> > We could fix this by turning that 'if' into a 'while' in
> > write_cache_pages().
>
> That might be the simplest patch indeed.
>
> At the same time, I do worry about other cases like this: while
> spurious wakeup events are normal and happen in other places, this is
> a bit different.
>
> This is literally a wakeup that leaks from a previous use of a page,
> and makes us think that something could have happened to the new use.
>
> The unlock_page() case presumably never hits that, because even if we
> have some unlock without a page ref (which I don't think can happen,
> but whatever..), the exclusive nature of "lock_page()" means that no
> locker can care - once you get the lock, you own the page.
>
> The writeback code is special in that the writeback bit isn't some
> kind of exclusive bit, but this code kind of expected it to be that.
>
> So I'd _like_ to have something like
>
>         WARN_ON_ONCE(!page_count(page));
>
> in the wake_up_page_bit() function, to catch things that wake up a
> page that has already been released and might be reused.
>
> And that would require the "get_page()" to be done when we set the
> writeback bit and queue the page up for IO (so that then
> end_page_writeback() would clear the bit, do the wakeup, and then drop
> the ref).
>
> Hugh's second patch isn't pretty - I think the "get_page()" is
> conceptually in the wrong place - but it "works" in that it keeps that
> "implicit page reference" being kept by the PG_writeback bit, and then
> it takes an explicit page reference before it clears the bit.
>
> So while I don't love the whole "PG_writeback is an implicit reference
> to the page" model, Hugh's patch at least makes that model much more
> straightforward: we really either have that PG_writeback, _or_ we have
> a real ref to the page, and we never have that odd "we could actually
> lose the page" situation.
>
> So I think I prefer Hugh's two-liner over your one-liner suggestion.
>
> But your one-liner is technically not just smaller, it obviously also
> avoids the whole mucking with the atomic page ref.
>
> I don't _think_ that the extra get/put overhead could possibly really
> matter: doing the writeback is going to be a lot more expensive
> anyway. And an atomic access to a 'struct page' sounds expensive, but
> that cacheline is already likely dirty in the L1 cache because we've
> touched page->flags and done other things to it.
>
> So I'd personally be inclined to go with Hugh's patch. Comments?

My only objection to Hugh's patch is that it may cause us to fail to
split pages when we can currently split them.  That is, we do:

        wait_on_page_writeback();
        if (page_has_private(page))
                do_invalidatepage(page, offset, length);
        split_huge_page();

(at least we do in my THP patchset; not sure if there's any of that in
the kernel today), and the extra reference held for a few nanoseconds
after calling wake_up_page() will cause us to fail to split the page.
It probably doesn't matter; there has to be a fallback path anyway.

Now I'm looking at that codepath, and the race that Hugh uncovered now
looks like a real bug.  Consider this sequence:

page allocated, added to page cache, dirtied, writeback started

--- thread A ---
end_page_writeback()
  test_clear_page_writeback()
--- ctx switch to thread B ---
alloc page, add to page cache, dirty page, start page writeback,
truncate_inode_pages_range()
  wait_on_page_writeback()
--- ctx switch to thread A ---
wake_up_page()
--- ctx switch to thread B ---
free page
alloc page
write new data to page

... now the DMA actually starts to do page writeback, and it's writing
the new data.

So my s/if/while/ suggestion is wrong and we need to do something to
prevent spurious wakeups.  Unless we bury the spurious wakeup logic
inside wait_on_page_writeback() ...
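
Burying it inside wait_on_page_writeback() would mean re-checking
PageWriteback() after every wakeup, so that a wakeup leaking from the
page's previous life can't fool a waiter on the new one.  A minimal
sketch of that idea (just the shape of it, not a tested patch):

        /*
         * Sketch only: keep waiting until PG_writeback is actually
         * clear, so a stale wakeup from a previous use of this
         * struct page is harmless.
         */
        void wait_on_page_writeback(struct page *page)
        {
                while (PageWriteback(page))
                        wait_on_page_bit(page, PG_writeback);
        }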
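
For reference, the shape of Hugh's two-liner as described above (take
an explicit reference before clearing PG_writeback, drop it after the
wakeup) would be roughly the following -- a sketch based on the
description in this thread, not the actual diff:

        void end_page_writeback(struct page *page)
        {
                ...
                /*
                 * Sketch: pin the page so it cannot be freed and
                 * reused between clearing PG_writeback and delivering
                 * the wakeup that belongs to this use of the page.
                 */
                get_page(page);
                test_clear_page_writeback(page);

                smp_mb__after_atomic();
                wake_up_page(page, PG_writeback);
                put_page(page);
        }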