Received: by 2002:a25:ad19:0:0:0:0:0 with SMTP id y25csp6770664ybi; Mon, 8 Jul 2019 08:23:16 -0700 (PDT) X-Google-Smtp-Source: APXvYqxKV5F7UkQetbdW8CHT1nhnQF5gwCy7KI+ZB9+Gvsgi25NTNRY9Kmkxg8VPTabDBwDYma8a X-Received: by 2002:a63:231c:: with SMTP id j28mr24213018pgj.430.1562599396157; Mon, 08 Jul 2019 08:23:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1562599396; cv=none; d=google.com; s=arc-20160816; b=YH130mo+3tYiOa2tB+8VUZfGwTH2i8r0z+N1P547L+6plj9x/L6GO/7lMiEOV8NNyy lvydRhgHeXIiVSAxshnO5JlxDY4FAvBN6psQ99BkDFOgK2EQ0MuhS7IPomIXWxySqee+ TOlqjTWS+jbK3kSD1amZQi9ewIaXMhN6qlrdwKolUfPECjpr1vdwLwp2QBSMzXKTXyia yQU0jM+Xq43JuPfiaQbj3ulZQnLyq5vYXkKWWG+heyKbsQX1/+21Kjcs6rv41cXfvam8 Ojy7kpPy9DRKxoWHTN1BzLp6/J/mOfl3fI9I9cYwtlrIUI+M33Xqde78+zA9GgyWbf4+ iJvA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:user-agent:in-reply-to :content-disposition:mime-version:references:message-id:subject:cc :to:from:date; bh=NT2xpIRQsrvHsuIgg/RhtweXtjwmVNAgRxlZzkeTsig=; b=lb1G0TzGmqXsPf43zhkXmTRCrqLgAhrmsXSoI3DS9jk8RGmDZm1QPOOdXZ/aaKWI/O I9QNJWO1eSacE8bvc21jTnEX3w6pewbnezdb7htBLiVYzQF9i4MqvWrXZYHHR32yVfbZ Fx0u22GF0mJoherLKKFxdmxPNnrmB2y+8Vfjjno+RB408yPTACz/5cnSyJvXYUg1y5OF JpCS0RxyrrY2PpDTinJMC74stJ9gIEw8uOpn77ezVczj8Jbm8cXZcp1NXRBB1kGvmkM6 +X0ziAivGFoYXPpXtahho0VL/zcPN9IbUSfHT0UnbmAr3xAHZF0BRR1q48znlVsY69KK HkVg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id t17si17754440pgu.68.2019.07.08.08.23.01; Mon, 08 Jul 2019 08:23:16 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731324AbfGHNbT (ORCPT + 99 others); Mon, 8 Jul 2019 09:31:19 -0400 Received: from mx2.suse.de ([195.135.220.15]:33506 "EHLO mx1.suse.de" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728124AbfGHNbT (ORCPT ); Mon, 8 Jul 2019 09:31:19 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx1.suse.de (Postfix) with ESMTP id 04C9FAFF5; Mon, 8 Jul 2019 13:31:16 +0000 (UTC) Received: by quack2.suse.cz (Postfix, from userid 1000) id 39BF51E159B; Mon, 8 Jul 2019 15:31:14 +0200 (CEST) Date: Mon, 8 Jul 2019 15:31:14 +0200 From: Jan Kara To: Dave Chinner Cc: Boaz Harrosh , Jan Kara , Amir Goldstein , Linus Torvalds , Kent Overstreet , Dave Chinner , "Darrick J . Wong" , Christoph Hellwig , Matthew Wilcox , Linux List Kernel Mailing , linux-xfs , linux-fsdevel , Josef Bacik , Alexander Viro , Andrew Morton Subject: Re: pagecache locking Message-ID: <20190708133114.GC20507@quack2.suse.cz> References: <20190613235524.GK14363@dread.disaster.area> <20190617224714.GR14363@dread.disaster.area> <20190619103838.GB32409@quack2.suse.cz> <20190619223756.GC26375@dread.disaster.area> <3f394239-f532-23eb-9ff1-465f7d1f3cb4@gmail.com> <20190705233157.GD7689@dread.disaster.area> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190705233157.GD7689@dread.disaster.area> User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat 06-07-19 09:31:57, Dave Chinner wrote: > On Wed, Jul 03, 2019 at 03:04:45AM +0300, Boaz Harrosh wrote: > > On 20/06/2019 01:37, Dave Chinner wrote: > > <> > > > > > > I'd prefer it doesn't get lifted to the VFS because I'm planning on > > > getting rid of it in XFS with range locks. i.e. the XFS_MMAPLOCK is > > > likely to go away in the near term because a range lock can be > > > taken on either side of the mmap_sem in the page fault path. > > > > > <> > > Sir Dave > > > > Sorry if this was answered before. I am please very curious. In the zufs > > project I have an equivalent rw_MMAPLOCK that I _read_lock on page_faults. > > (Read & writes all take read-locks ...) > > The only reason I have it is because of lockdep actually. > > > > Specifically for those xfstests that mmap a buffer then direct_IO in/out > > of that buffer from/to another file in the same FS or the same file. > > (For lockdep its the same case). > > Which can deadlock if the same inode rwsem is taken on both sides of > the mmap_sem, as lockdep tells you... > > > I would be perfectly happy to recursively _read_lock both from the top > > of the page_fault at the DIO path, and under in the page_fault. I'm > > _read_locking after all. But lockdep is hard to convince. So I stole the > > xfs idea of having an rw_MMAPLOCK. And grab yet another _write_lock at > > truncate/punch/clone time when all mapping traversal needs to stop for > > the destructive change to take place. (Allocations are done another way > > and are race safe with traversal) > > > > How do you intend to address this problem with range-locks? ie recursively > > taking the same "lock"? because if not for the recursive-ity and lockdep I would > > not need the extra lock-object per inode. > > As long as the IO ranges to the same file *don't overlap*, it should > be perfectly safe to take separate range locks (in read or write > mode) on either side of the mmap_sem as non-overlapping range locks > can be nested and will not self-deadlock. I'd be really careful with nesting range locks. You can have nasty situations like: Thread 1 Thread 2 read_lock(0,1000) write_lock(500,1500) -> blocks due to read lock read_lock(1001,1500) -> blocks due to write lock (or you have to break fairness and then deal with starvation issues). So once you allow nesting of range locks, you have to very carefully define what is and what is not allowed. That's why in my range lock implementation ages back I've decided to treat range lock as a rwsem for deadlock verification purposes. Honza -- Jan Kara SUSE Labs, CR