Date: Fri, 11 Nov 2022 07:52:38 -0800
From: Jeremi Piotrowski
To: Jan Kara
Cc: Thilo Fromm, Ye Bin, jack@suse.com, tytso@mit.edu,
 linux-ext4@vger.kernel.org, regressions@lists.linux.dev
Subject: Re: [syzbot] possible deadlock in jbd2_journal_lock_updates
Message-ID: <20221111155238.GA32201@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
References: <20221014132543.i3aiyx4ent4qwy4i@quack3>
 <20221024104628.ozxjtdrotysq2haj@quack3>
 <643d007e-1041-4b3d-ed5e-ae47804f279d@linux.microsoft.com>
 <20221026101854.k6qgunxexhxthw64@quack3>
 <20221110125758.GA6919@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
 <20221110152637.g64p4hycnd7bfnnr@quack3>
 <20221110192701.GA29083@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
 <20221111142424.vwt4khbtfzd5foiy@quack3>
 <20221111151029.GA27244@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
In-Reply-To: <20221111151029.GA27244@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
X-Mailing-List: linux-ext4@vger.kernel.org

On Fri, Nov 11, 2022 at 07:10:29AM -0800, Jeremi Piotrowski wrote:
> On Fri, Nov 11, 2022 at 03:24:24PM +0100, Jan Kara wrote:
> > On Thu 10-11-22 11:27:01, Jeremi Piotrowski wrote:
> > > On Thu, Nov 10, 2022 at 04:26:37PM +0100, Jan Kara wrote:
> > > > On Thu 10-11-22 04:57:58, Jeremi Piotrowski wrote:
> > > > > On Wed, Oct 26, 2022 at 12:18:54PM +0200, Jan Kara wrote:
> > > > > > On Mon 24-10-22 18:32:51, Thilo Fromm wrote:
> > > > > > > Hello Honza,
> > > > > > >
> > > > > > > > Yeah, I was pondering about this for some time but still I have no clue who
> > > > > > > > could be holding the buffer lock (which blocks the task holding the
> > > > > > > > transaction
> > > > > > > > open) or how this could relate to the commit you have
> > > > > > > > identified. I have two things to try:
> > > > > > > >
> > > > > > > > 1) Can you please check whether the deadlock reproduces also with the 6.0
> > > > > > > > kernel? The thing is that the xattr handling code in ext4 has some
> > > > > > > > additional changes there, commit 307af6c8793 ("mbcache: automatically delete
> > > > > > > > entries from cache on freeing") in particular.
> > > > > > >
> > > > > > > This would be complex; we currently do not integrate 6.0 with Flatcar and
> > > > > > > would need to spend quite some effort ingesting it first (mostly, making sure
> > > > > > > the new kernel does not break something unrelated). Flatcar is an
> > > > > > > image-based distro, so kernel updates imply full distro updates.
> > > > > >
> > > > > > OK, understood.
> > > > > >
> > > > > > > > 2) I have created a debug patch (against a 5.15.x stable kernel). Can you
> > > > > > > > please reproduce the failure with it and post the output of "echo w >
> > > > > > > > /proc/sysrq-trigger" and also the output the debug patch will put into the
> > > > > > > > kernel log? It will dump information about the buffer lock owner if we
> > > > > > > > cannot get the lock for more than 32 seconds.
> > > > > > >
> > > > > > > This would be more straightforward - I can reach out to one of our users
> > > > > > > suffering from the issue; they can reliably reproduce it and don't shy away
> > > > > > > from patching their kernel. Where can I find the patch?
> > > > > >
> > > > > > Ha, my bad. I forgot to attach it. Here it is.
> > > > >
> > > > > Unfortunately this patch produced no output, but I have been able to repro so I
> > > > > understand why: except for the hung tasks, we have 1+ tasks busy-looping through
> > > > > the following code in ext4_xattr_block_set():
> > > > >
> > > > > inserted:
> > > > > 	if (!IS_LAST_ENTRY(s->first)) {
> > > > > 		new_bh = ext4_xattr_block_cache_find(inode, header(s->base),
> > > > > 						     &ce);
> > > > > 		if (new_bh) {
> > > > > 			/* We found an identical block in the cache. */
> > > > > 			if (new_bh == bs->bh)
> > > > > 				ea_bdebug(new_bh, "keeping");
> > > > > 			else {
> > > > > 				u32 ref;
> > > > >
> > > > > 				WARN_ON_ONCE(dquot_initialize_needed(inode));
> > > > >
> > > > > 				/* The old block is released after updating
> > > > > 				   the inode. */
> > > > > 				error = dquot_alloc_block(inode,
> > > > > 						EXT4_C2B(EXT4_SB(sb), 1));
> > > > > 				if (error)
> > > > > 					goto cleanup;
> > > > > 				BUFFER_TRACE(new_bh, "get_write_access");
> > > > > 				error = ext4_journal_get_write_access(
> > > > > 						handle, sb, new_bh,
> > > > > 						EXT4_JTR_NONE);
> > > > > 				if (error)
> > > > > 					goto cleanup_dquot;
> > > > > 				lock_buffer(new_bh);
> > > > > 				/*
> > > > > 				 * We have to be careful about races with
> > > > > 				 * adding references to xattr block. Once we
> > > > > 				 * hold buffer lock xattr block's state is
> > > > > 				 * stable so we can check the additional
> > > > > 				 * reference fits.
> > > > > 				 */
> > > > > 				ref = le32_to_cpu(BHDR(new_bh)->h_refcount) + 1;
> > > > > 				if (ref > EXT4_XATTR_REFCOUNT_MAX) {
> > > > > 					/*
> > > > > 					 * Undo everything and check mbcache
> > > > > 					 * again.
> > > > > 					 */
> > > > > 					unlock_buffer(new_bh);
> > > > > 					dquot_free_block(inode,
> > > > > 							 EXT4_C2B(EXT4_SB(sb),
> > > > > 								  1));
> > > > > 					brelse(new_bh);
> > > > > 					mb_cache_entry_put(ea_block_cache, ce);
> > > > > 					ce = NULL;
> > > > > 					new_bh = NULL;
> > > > > 					goto inserted;
> > > > > 				}
> > > > >
> > > > > The tasks keep taking the 'goto inserted' branch, and never finish.
> > > > > I've been able to repro with kernel v6.0.7 as well.
> > > >
> > > > Interesting! That makes it much clearer (and also makes my debug patch
> > > > unnecessary). So clearly the e_reusable variable in the mb_cache_entry got
> > > > out of sync with the number of references really in the xattr block - in
> > > > particular the block likely has h_refcount >= EXT4_XATTR_REFCOUNT_MAX but
> > > > e_reusable is set to true. Now I can see how e_reusable can stay at false due
> > > > to a race when the refcount is actually smaller, but I don't see how it could
> > > > stay at true when the refcount is big enough - that part seems to be locked
> > > > properly. If you can reproduce reasonably easily, can you try reproducing
> > > > with the attached patch? Thanks!
> > >
> > > Sure, with that patch I'm getting the following output: reusable is false on
> > > most items until we hit something with reusable true, and then that loops
> > > indefinitely:
> >
> > Thanks. So that is what I've suspected. I'm still not 100% clear on how
> > this inconsistency can happen, although I have a suspicion - does the attached
> > patch fix the problem for you?
> >
> > Also, is it possible to share the reproducer, or does it need some special
> > infrastructure?
> >
> > 								Honza
>
> I'll test the patch and report back.
>
> Attached you'll find the reproducer; for me it reproduces within a few minutes.
> It brings up a k8s node and then runs 3 instances of the application, which
> creates a lot of small files in a loop. The OS we run it on has selinux enabled
> in permissive mode, so that might play a role.

I can still reproduce it with the patch.
> > --
> > Jan Kara
> > SUSE Labs, CR

> > >From 6132433e400ff7be348fe04fdf8ee67eb105ec21 Mon Sep 17 00:00:00 2001
> > From: Jan Kara
> > Date: Thu, 10 Nov 2022 16:22:06 +0100
> > Subject: [PATCH] ext4: Lock xattr buffer before inserting cache entry
> >
> > ---
> >  fs/ext4/xattr.c | 9 ++++++---
> >  1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
> > index 36d6ba7190b6..02e265bb94e2 100644
> > --- a/fs/ext4/xattr.c
> > +++ b/fs/ext4/xattr.c
> > @@ -2970,15 +2970,18 @@ ext4_xattr_block_cache_insert(struct mb_cache *ea_block_cache,
> >  				struct buffer_head *bh)
> >  {
> >  	struct ext4_xattr_header *header = BHDR(bh);
> > -	__u32 hash = le32_to_cpu(header->h_hash);
> > -	int reusable = le32_to_cpu(header->h_refcount) <
> > -		       EXT4_XATTR_REFCOUNT_MAX;
> > +	__u32 hash;
> > +	int reusable;
> >  	int error;
> >
> >  	if (!ea_block_cache)
> >  		return;
> > +	lock_buffer(bh);
> > +	hash = le32_to_cpu(header->h_hash);
> > +	reusable = le32_to_cpu(header->h_refcount) < EXT4_XATTR_REFCOUNT_MAX;
> >  	error = mb_cache_entry_create(ea_block_cache, GFP_NOFS, hash,
> >  				      bh->b_blocknr, reusable);
> > +	unlock_buffer(bh);
> >  	if (error) {
> >  		if (error == -EBUSY)
> >  			ea_bdebug(bh, "already in cache");
> > --
> > 2.35.3
> >