Date: Wed, 27 Mar 2019 17:58:31 +0100
From: Jan Kara
To: Al Viro
Cc: Mark Fasheh, Dave Chinner, Linus Torvalds, syzbot, Alexei Starovoitov,
    Daniel Borkmann, linux-fsdevel, Linux List Kernel Mailing, syzkaller-bugs,
    Jan Kara, Jaegeuk Kim, Joel Becker
Subject: Re: KASAN: use-after-free Read in path_lookupat
Message-ID: <20190327165831.GB6742@quack2.suse.cz>
References: <0000000000006946d2057bbd0eef@google.com>
 <20190325045744.GK2217@ZenIV.linux.org.uk>
 <20190325194332.GO2217@ZenIV.linux.org.uk>
 <20190325224823.GF26298@dastard>
 <20190325230211.GR2217@ZenIV.linux.org.uk>
 <20190326041509.GZ2217@ZenIV.linux.org.uk>
In-Reply-To: <20190326041509.GZ2217@ZenIV.linux.org.uk>

On Tue 26-03-19 04:15:10, Al Viro wrote:
> On Mon, Mar 25, 2019 at 08:18:25PM -0700, Mark Fasheh wrote:
>
> > Hey Al,
> >
> > It's been a while since I've looked at that bit of code, but it looks like
> > Ocfs2 is syncing the inode to disk and disposing of its memory
> > representation (which would include the cluster locks held) so that other
> > nodes get a chance to delete the potentially orphaned inode. In Ocfs2 we
> > won't delete an inode if it exists in another node's cache.
>
> Wait a sec - what's the reason for forcing that write_inode_now(); why
> doesn't the normal mechanism work? I'm afraid I still don't get it -
> we do wait for writeback in evict_inode(), or the local filesystems
> wouldn't work.

I'm just guessing here, but they don't want an inode cached once its last
dentry goes away (it makes cluster-wide synchronization easier for them, and
they do play tricks with cluster locks on dentries). There is some info in
commit 513e2dae9422 "ocfs2: flush inode data to disk and free inode when
i_count becomes zero", which adds this ocfs2_drop_inode() implementation.
So when the last inode reference is dropped, they want to flush any dirty
data to disk and evict the inode. But AFAICT they should be fine with
flushing the inode from their ->evict_inode method. I_FREEING just stops
the flusher thread from touching the inode, but explicit writeback through
write_inode_now(inode, 1) should go through just fine.

								Honza
-- 
Jan Kara
SUSE Labs, CR
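
[Editor's illustrative sketch, not part of the original mail and not ocfs2's
actual code: it shows the arrangement suggested above, i.e. ->drop_inode only
asking for eviction and the synchronous flush living in ->evict_inode, where
I_FREEING is already set but write_inode_now(inode, 1) still goes through.
The example_* names are hypothetical; write_inode_now(),
truncate_inode_pages_final(), clear_inode() and is_bad_inode() are the real
VFS helpers.]

	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/writeback.h>

	/* Always evict on the last iput(); don't keep the inode cached so
	 * other cluster nodes get a chance to reclaim or delete it. */
	static int example_drop_inode(struct inode *inode)
	{
		return 1;
	}

	static void example_evict_inode(struct inode *inode)
	{
		/* I_FREEING is set by now, which keeps the flusher thread
		 * away from this inode, but an explicit synchronous
		 * writeback still writes dirty pages and the inode out. */
		if (!is_bad_inode(inode))
			write_inode_now(inode, 1);

		truncate_inode_pages_final(&inode->i_data);

		/* Filesystem-specific teardown (e.g. dropping cluster locks,
		 * freeing the on-disk inode for unlinked files) would go
		 * here. */
		clear_inode(inode);
	}

	static const struct super_operations example_super_ops = {
		.drop_inode	= example_drop_inode,
		.evict_inode	= example_evict_inode,
		/* ... other ops ... */
	};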