Date: Sat, 6 Jul 2019 09:31:57 +1000
From: Dave Chinner
To: Boaz Harrosh
Cc: Jan Kara, Amir Goldstein, Linus Torvalds, Kent Overstreet,
    Dave Chinner, Darrick J. Wong, Christoph Hellwig, Matthew Wilcox,
    Linux List Kernel Mailing, linux-xfs, linux-fsdevel, Josef Bacik,
    Alexander Viro, Andrew Morton
Subject: Re: pagecache locking
Message-ID: <20190705233157.GD7689@dread.disaster.area>
References: <20190613183625.GA28171@kmo-pixel>
 <20190613235524.GK14363@dread.disaster.area>
 <20190617224714.GR14363@dread.disaster.area>
 <20190619103838.GB32409@quack2.suse.cz>
 <20190619223756.GC26375@dread.disaster.area>
 <3f394239-f532-23eb-9ff1-465f7d1f3cb4@gmail.com>
In-Reply-To: <3f394239-f532-23eb-9ff1-465f7d1f3cb4@gmail.com>

On Wed, Jul 03, 2019 at 03:04:45AM +0300, Boaz Harrosh wrote:
> On 20/06/2019 01:37, Dave Chinner wrote:
> <>
> >
> > I'd prefer it doesn't get lifted to the VFS because I'm planning on
> > getting rid of it in XFS with range locks. i.e. the XFS_MMAPLOCK is
> > likely to go away in the near term because a range lock can be
> > taken on either side of the mmap_sem in the page fault path.
> >
> <>
> Sir Dave
>
> Sorry if this was answered before; I am very curious. In the zufs
> project I have an equivalent rw_MMAPLOCK that I _read_lock on page
> faults. (Reads and writes all take read-locks ...)
> The only reason I have it is because of lockdep, actually.
>
> Specifically for those xfstests that mmap a buffer and then direct_IO
> in/out of that buffer from/to another file in the same FS, or to/from
> the same file. (For lockdep it's the same case.)

Which can deadlock if the same inode rwsem is taken on both sides of
the mmap_sem, as lockdep tells you...

> I would be perfectly happy to recursively _read_lock, both at the top
> of the DIO path and underneath in the page fault -- I'm _read_locking
> after all. But lockdep is hard to convince, so I stole the xfs idea of
> having an rw_MMAPLOCK, and I grab yet another _write_lock at
> truncate/punch/clone time, when all mapping traversal needs to stop
> for the destructive change to take place. (Allocations are done
> another way and are race-safe with traversal.)
>
> How do you intend to address this problem with range locks? i.e.
> recursively taking the same "lock"? Because if not for the recursion
> and lockdep, I would not need the extra lock object per inode.

As long as the IO ranges to the same file *don't overlap*, it should
be perfectly safe to take separate range locks (in read or write
mode) on either side of the mmap_sem, as non-overlapping range locks
can be nested and will not self-deadlock.

The "recursive lock problem" still arises with DIO and page faults
inside gup, but it only occurs when the user buffer range overlaps
the DIO range to the same file. IOWs, the application is trying to
do something that has an undefined result and is likely to result in
data corruption. So, in that case, I plan to have the gup page faults
fail and the DIO return -EDEADLOCK to userspace....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
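
A minimal userspace sketch of the non-overlapping range lock behaviour
described above (this is not the XFS or zufs code; the struct range_lock
type and the range_lock()/range_unlock() helpers are invented here purely
for illustration): a request over [start, end) blocks only while it
overlaps a range that is already held, so one thread can nest a second
lock over a disjoint range of the same file without self-deadlocking --
which is what makes taking a range lock on each side of mmap_sem safe
when the DIO range and the faulted user buffer do not overlap.

/*
 * Illustrative userspace sketch only -- not the proposed kernel API.
 * A lock covers [start, end); it blocks only while it overlaps a
 * currently held range, so disjoint ranges nest without deadlock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct range {
	unsigned long start, end;	/* covers [start, end) */
	struct range *next;
};

struct range_lock {
	pthread_mutex_t lock;
	pthread_cond_t wait;
	struct range *held;		/* ranges currently locked */
};

static bool overlaps(unsigned long s1, unsigned long e1,
		     unsigned long s2, unsigned long e2)
{
	return s1 < e2 && s2 < e1;
}

static void range_lock(struct range_lock *rl, unsigned long start,
		       unsigned long end)
{
	struct range *r = malloc(sizeof(*r));
	bool blocked;

	r->start = start;
	r->end = end;

	pthread_mutex_lock(&rl->lock);
	do {
		blocked = false;
		for (struct range *h = rl->held; h; h = h->next) {
			if (overlaps(start, end, h->start, h->end)) {
				/* Overlap: wait for an unlock, then rescan. */
				blocked = true;
				pthread_cond_wait(&rl->wait, &rl->lock);
				break;
			}
		}
	} while (blocked);
	r->next = rl->held;
	rl->held = r;
	pthread_mutex_unlock(&rl->lock);
}

static void range_unlock(struct range_lock *rl, unsigned long start,
			 unsigned long end)
{
	pthread_mutex_lock(&rl->lock);
	for (struct range **p = &rl->held; *p; p = &(*p)->next) {
		if ((*p)->start == start && (*p)->end == end) {
			struct range *r = *p;
			*p = r->next;
			free(r);
			break;
		}
	}
	pthread_cond_broadcast(&rl->wait);
	pthread_mutex_unlock(&rl->lock);
}

int main(void)
{
	struct range_lock rl = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.wait = PTHREAD_COND_INITIALIZER,
		.held = NULL,
	};

	/* The DIO side locks its file range... */
	range_lock(&rl, 0, 4096);
	/* ...and the page fault locks a disjoint range of the same file:
	 * no overlap, so the nested acquisition does not block. */
	range_lock(&rl, 8192, 12288);

	range_unlock(&rl, 8192, 12288);
	range_unlock(&rl, 0, 4096);
	printf("nested non-overlapping range locks: no deadlock\n");
	return 0;
}

Build with "cc -pthread". If the second range_lock() in main() were
changed to overlap the first, the call would block on a range the same
thread already holds -- the userspace analogue of the overlapping
DIO/page-fault case, which the message above says would instead be made
to fail the gup page fault and return -EDEADLOCK from the DIO.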