Date: Mon, 25 Mar 2019 23:02:11 +0000
From: Al Viro
To: Dave Chinner
Cc: Linus Torvalds, syzbot, Alexei Starovoitov, Daniel Borkmann,
	linux-fsdevel, Linux List Kernel Mailing, syzkaller-bugs,
	Jan Kara, Jaegeuk Kim, Joel Becker, Mark Fasheh
Subject: Re: KASAN: use-after-free Read in path_lookupat
Message-ID: <20190325230211.GR2217@ZenIV.linux.org.uk>
References: <0000000000006946d2057bbd0eef@google.com>
 <20190325045744.GK2217@ZenIV.linux.org.uk>
 <20190325194332.GO2217@ZenIV.linux.org.uk>
 <20190325224823.GF26298@dastard>
In-Reply-To: <20190325224823.GF26298@dastard>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Mar 26, 2019 at 09:48:23AM +1100, Dave Chinner wrote:
> And when it comes to VFS inode reclaim, XFS does not implement
> ->evict_inode because there is nothing at
> the VFS level to do.
> And ->destroy_inode ends up doing cleanup work (e.g. freeing on-disk
> inodes) which is non-trivial, blocking work, but then still requires
> the struct xfs_inode to be written back to disk before it can be
> freed. So it just gets marked "reclaimable" and background reclaim
> then takes care of it from there so we avoid synchronous IO in inode
> reclaim...
>
> This works because we don't track dirty inode metadata in the VFS
> writeback code (it's tracked with much more precision in the XFS log
> infrastructure) and we don't write back inodes from the VFS
> infrastructure, either. It's all done based on internal state
> outside the VFS.
>
> And, because of this, the VFS cannot assume that it can free
> the struct inode after calling ->destroy_inode or even use
> call_rcu() to run a filesystem destructor because the filesystem
> may need to do work that needs to block and that's not allowed in an
> RCU callback...

In Linus' patch that's what you get with non-NULL ->destroy_inode +
NULL ->destroy_inode_rcu, so XFS won't be screwed by that.

That said, yes, XFS adds another fun twist there (AFAICS, it's the
only in-tree filesystem that pulls that off).

I would really like some comments from f2fs and ocfs2 folks, as well
as Jan - he's had much more recent contact with writeback code than
I have... Could somebody explain what's going on in f2fs and ocfs2
->drop_inode()? It _should_ be just a predicate; looks like both are
playing very odd games to work around writeback problems and I wonder
if there's a cleaner solution for that. I can try and dig through
maillist(s) archives, but I would really appreciate it if somebody
could give a braindump on the issues dealt with in there...