Date: Thu, 8 Nov 2018 16:30:30 -0500
From: "J. Bruce Fields"
To: NeilBrown
Cc: Jeff Layton, Alexander Viro, Martin Wilck, linux-fsdevel@vger.kernel.org,
	Frank Filz, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 10/12] fs/locks: create a tree of dependent requests.
Message-ID: <20181108213030.GF6090@fieldses.org>
In-Reply-To: <154138144796.31651.14201944346371750178.stgit@noble>
References: <154138128401.31651.1381177427603557514.stgit@noble>
 <154138144796.31651.14201944346371750178.stgit@noble>

On Mon, Nov 05, 2018 at 12:30:48PM +1100, NeilBrown wrote:
> When we find an existing lock which conflicts with a request,
> and the request wants to wait, we currently add the request
> to a list. When the lock is removed, the whole list is woken.
> This can cause the thundering-herd problem.
> To reduce the problem, we make use of the (new) fact that
> a pending request can itself have a list of blocked requests.
> When we find a conflict, we look through the existing blocked requests.
> If any one of them blocks the new request, the new request is attached
> below that request; otherwise it is added to the list of blocked
> requests, which are now known to be mutually non-conflicting.
> 
> This way, when the lock is released, only a set of non-conflicting
> locks will be woken; the rest can stay asleep.
> If the lock request cannot be granted and the request needs to be
> requeued, all the other requests it blocks will then be woken.

So, to make sure I understand: the tree of blocking locks only ever has
three levels (the active lock, the locks blocking on it, and their
children)?
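As an illustration of the shape in question, here is a minimal userspace
model of the insertion walk. This is a sketch under assumed
simplifications, not the kernel code: struct req, conflicts(), and the
byte ranges stand in for struct file_lock, the conflict callbacks, and
the kernel list handling.

/* Sketch only: a tiny model of the blocked-request tree.  struct req
 * stands in for struct file_lock; conflicts() plays the role of the
 * conflict callback passed to __locks_insert_block(). */
#include <stdio.h>

struct req {
	const char *name;
	int start, end;       /* byte range the request covers */
	struct req *children; /* requests blocked directly on this one */
	struct req *next;     /* siblings on the parent's blocked list */
};

/* Byte-range requests conflict when their ranges overlap. */
static int conflicts(const struct req *a, const struct req *b)
{
	return a->start <= b->end && b->start <= a->end;
}

/* Same walk as __locks_insert_block() in the patch: while some existing
 * waiter already blocks the new waiter, descend and rescan at that
 * level; otherwise attach at the current level. */
static void insert_block(struct req *blocker, struct req *waiter)
{
	struct req *fl;
new_blocker:
	for (fl = blocker->children; fl; fl = fl->next)
		if (conflicts(fl, waiter)) {
			blocker = fl;
			goto new_blocker;
		}
	waiter->next = blocker->children;
	blocker->children = waiter;
}

/* On release, only the direct children are woken; their subtrees stay
 * asleep until the woken request is granted or requeued. */
static void wake_top_level(const struct req *blocker)
{
	const struct req *fl;

	for (fl = blocker->children; fl; fl = fl->next)
		printf("waking %s [%d-%d]\n", fl->name, fl->start, fl->end);
}

int main(void)
{
	struct req held = { "held", 0, 9, NULL, NULL };
	struct req a = { "A", 0, 4, NULL, NULL }; /* conflicts with held only   */
	struct req b = { "B", 5, 9, NULL, NULL }; /* conflicts with held, not A */
	struct req c = { "C", 3, 6, NULL, NULL }; /* conflicts with A and B too */

	insert_block(&held, &a);
	insert_block(&held, &b);
	insert_block(&held, &c); /* descends: lands below B, not at top level */

	wake_top_level(&held);   /* wakes A and B; C stays asleep */
	return 0;
}

Here C lands under B, so dropping the held lock wakes only A and B, and
C is woken later, once B's request is granted or requeued. (By the same
walk, a request that conflicted with B and then C would attach below C,
which at least suggests the tree can grow past three levels.)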
--b.

> 
> Reported-and-tested-by: Martin Wilck
> Signed-off-by: NeilBrown
> ---
>  fs/locks.c |   29 +++++++++++++++++++++++------
>  1 file changed, 23 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 802d5853acd5..1b0eac6b2918 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -715,11 +715,25 @@ static void locks_delete_block(struct file_lock *waiter)
>   * fl_blocked list itself is protected by the blocked_lock_lock, but by ensuring
>   * that the flc_lock is also held on insertions we can avoid taking the
>   * blocked_lock_lock in some cases when we see that the fl_blocked list is empty.
> + *
> + * Rather than just adding to the list, we check for conflicts with any existing
> + * waiters, and add beneath any waiter that blocks the new waiter.
> + * Thus wakeups don't happen until needed.
>   */
>  static void __locks_insert_block(struct file_lock *blocker,
> -				 struct file_lock *waiter)
> +				 struct file_lock *waiter,
> +				 bool conflict(struct file_lock *,
> +					       struct file_lock *))
>  {
> +	struct file_lock *fl;
>  	BUG_ON(!list_empty(&waiter->fl_block));
> +
> +new_blocker:
> +	list_for_each_entry(fl, &blocker->fl_blocked, fl_block)
> +		if (conflict(fl, waiter)) {
> +			blocker = fl;
> +			goto new_blocker;
> +		}
>  	waiter->fl_blocker = blocker;
>  	list_add_tail(&waiter->fl_block, &blocker->fl_blocked);
>  	if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
> @@ -734,10 +748,12 @@ static void __locks_insert_block(struct file_lock *blocker,
> 
>  /* Must be called with flc_lock held.
>   */
>  static void locks_insert_block(struct file_lock *blocker,
> -			       struct file_lock *waiter)
> +			       struct file_lock *waiter,
> +			       bool conflict(struct file_lock *,
> +					     struct file_lock *))
>  {
>  	spin_lock(&blocked_lock_lock);
> -	__locks_insert_block(blocker, waiter);
> +	__locks_insert_block(blocker, waiter, conflict);
>  	spin_unlock(&blocked_lock_lock);
>  }
> 
> @@ -996,7 +1012,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
>  		if (!(request->fl_flags & FL_SLEEP))
>  			goto out;
>  		error = FILE_LOCK_DEFERRED;
> -		locks_insert_block(fl, request);
> +		locks_insert_block(fl, request, flock_locks_conflict);
>  		goto out;
>  	}
>  	if (request->fl_flags & FL_ACCESS)
> @@ -1071,7 +1087,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
>  			spin_lock(&blocked_lock_lock);
>  			if (likely(!posix_locks_deadlock(request, fl))) {
>  				error = FILE_LOCK_DEFERRED;
> -				__locks_insert_block(fl, request);
> +				__locks_insert_block(fl, request,
> +						     posix_locks_conflict);
>  			}
>  			spin_unlock(&blocked_lock_lock);
>  			goto out;
> @@ -1542,7 +1559,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
>  			break_time -= jiffies;
>  			if (break_time == 0)
>  				break_time++;
> -		locks_insert_block(fl, new_fl);
> +		locks_insert_block(fl, new_fl, leases_conflict);
>  		trace_break_lease_block(inode, new_fl);
>  		spin_unlock(&ctx->flc_lock);
>  		percpu_up_read_preempt_enable(&file_rwsem);
> 
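
For reference, the thundering herd the commit message describes can be
provoked from userspace with plain POSIX locks. A minimal sketch follows;
the path /tmp/locktest, the process count, and the sleep are arbitrary
choices, error handling is omitted, and the tree built by the patch lives
entirely in the kernel, so it is not directly observable here.

/* Sketch: NPROC processes all block on the same one-byte write lock.
 * With the old code, releasing the lock wakes every waiter and all but
 * one go back to sleep; with the tree, mutually conflicting waiters
 * stack below one another so each release wakes a single process. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 4

int main(void)
{
	int fd = open("/tmp/locktest", O_RDWR | O_CREAT, 0600);
	struct flock fl = {
		.l_type = F_WRLCK, .l_whence = SEEK_SET,
		.l_start = 0, .l_len = 1,
	};

	fcntl(fd, F_SETLK, &fl);        /* parent takes the lock */

	for (int i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			struct flock c = fl;

			fcntl(fd, F_SETLKW, &c); /* blocks on the held lock */
			printf("child %d got the lock\n", i);
			c.l_type = F_UNLCK;
			fcntl(fd, F_SETLK, &c);  /* hand the lock on */
			exit(0);
		}
	}

	sleep(1);                       /* let all the children queue up */
	fl.l_type = F_UNLCK;
	fcntl(fd, F_SETLK, &fl);        /* release: waiters run one at a time */

	while (wait(NULL) > 0)
		;
	return 0;
}

With the patch applied, each unlock should wake exactly one of these
children instead of all of them; the end result is the same either way,
and the difference shows up only as scheduler load.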