Date: Wed, 27 Mar 2019 18:59:48 +0000
From: Al Viro
To: Jan Kara
Cc: Mark Fasheh, Dave Chinner, Linus Torvalds, syzbot,
    Alexei Starovoitov, Daniel Borkmann, linux-fsdevel,
    Linux List Kernel Mailing, syzkaller-bugs, Jaegeuk Kim, Joel Becker
Subject: Re: KASAN: use-after-free Read in path_lookupat
Message-ID: <20190327185948.GC2217@ZenIV.linux.org.uk>
References: <0000000000006946d2057bbd0eef@google.com>
 <20190325045744.GK2217@ZenIV.linux.org.uk>
 <20190325194332.GO2217@ZenIV.linux.org.uk>
 <20190325224823.GF26298@dastard>
 <20190325230211.GR2217@ZenIV.linux.org.uk>
 <20190326041509.GZ2217@ZenIV.linux.org.uk>
 <20190327165831.GB6742@quack2.suse.cz>
In-Reply-To: <20190327165831.GB6742@quack2.suse.cz>
List-ID: linux-kernel@vger.kernel.org
On Wed, Mar 27, 2019 at 05:58:31PM +0100, Jan Kara wrote:
> On Tue 26-03-19 04:15:10, Al Viro wrote:
> > On Mon, Mar 25, 2019 at 08:18:25PM -0700, Mark Fasheh wrote:
> >
> > > Hey Al,
> > >
> > > It's been a while since I've looked at that bit of code, but it looks
> > > like Ocfs2 is syncing the inode to disk and disposing of its memory
> > > representation (which would include the cluster locks held) so that
> > > other nodes get a chance to delete the potentially orphaned inode.
> > > In Ocfs2 we won't delete an inode if it exists in another node's cache.
> >
> > Wait a sec - what's the reason for forcing that write_inode_now(); why
> > doesn't the normal mechanism work?  I'm afraid I still don't get it -
> > we do wait for writeback in evict_inode(), or the local filesystems
> > wouldn't work.
>
> I'm just guessing here, but they don't want an inode cached once its last
> dentry goes away (it makes cluster-wide synchronization easier for them,
> and they do play tricks with cluster locks on dentries).

Sure, but that's as simple as "return 1 from ->drop_inode()".

> There is some info in
> 513e2dae9422 "ocfs2: flush inode data to disk and free inode when i_count
> becomes zero" which adds this ocfs2_drop_inode() implementation.  So when
> the last inode reference is dropped, they want to flush any dirty data to
> disk and evict the inode.  But AFAICT they should be fine with flushing
> the inode from their ->evict_inode method.  I_FREEING just stops the
> flusher thread from touching the inode, but explicit writeback through
> write_inode_now(inode, 1) should go through just fine.

Umm...  Why is that write_inode_now() needed in either place?  I agree
that moving it to ->evict_inode() ought to be safe, but what makes it
necessary in the first place?  Put it another way, what dirties the
data and/or metadata without marking it dirty?
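
For concreteness, a minimal sketch of the shape Jan is describing might
look like the following.  This is not the actual ocfs2 code - the
example_* names are placeholders - just an illustration of "return 1
from ->drop_inode()" plus doing the synchronous flush from
->evict_inode() instead:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>

static int example_drop_inode(struct inode *inode)
{
	/*
	 * Returning 1 tells iput_final() to evict the inode immediately
	 * instead of leaving it in the inode cache (same effect as
	 * pointing this op at generic_delete_inode).
	 */
	return 1;
}

static void example_evict_inode(struct inode *inode)
{
	/*
	 * Flush dirty data and metadata synchronously.  Explicit
	 * writeback goes through even though I_FREEING is already set
	 * here; I_FREEING only stops the background flusher thread.
	 */
	write_inode_now(inode, 1);

	truncate_inode_pages_final(&inode->i_data);
	/* ... filesystem-specific teardown (e.g. cluster locks) ... */
	clear_inode(inode);
}

static const struct super_operations example_sops = {
	.drop_inode	= example_drop_inode,
	.evict_inode	= example_evict_inode,
};

(Whether the write_inode_now() call is needed at all, in either place,
is of course exactly the open question above.)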