Date: Thu, 9 Aug 2018 10:13:41 -0400
From: "J. Bruce Fields"
To: NeilBrown
Cc: Jeff Layton, Alexander Viro, Martin Wilck, linux-fsdevel@vger.kernel.org, Frank Filz, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/5] fs/locks: create a tree of dependent requests.
Message-ID: <20180809141341.GI23873@fieldses.org>
References: <153378012255.1220.6754153662007899557.stgit@noble> <153378028121.1220.4418653283078446336.stgit@noble>
In-Reply-To: <153378028121.1220.4418653283078446336.stgit@noble>
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 09, 2018 at 12:04:41PM +1000, NeilBrown wrote:
> When we find an existing lock which conflicts with a request,
> and the request wants to wait, we currently add the request
> to a list.  When the lock is removed, the whole list is woken.
> This can cause the thundering-herd problem.
> To reduce the problem, we make use of the (new) fact that
> a pending request can itself have a list of blocked requests.
> When we find a conflict, we look through the existing blocked requests.
> If any one of them blocks the new request, the new request is attached
> below that request.
> This way, when the lock is released, only a set of non-conflicting
> locks will be woken.  The rest of the herd can stay asleep.

That's not true any more--some of the locks you wake may conflict with
each other.  Is that right?  Which is fine (the possibility of
thundering herds in weird overlapping-range cases probably isn't a big
deal).  I just want to make sure I understand....

I think you could simplify the code a lot by maintaining the tree so
that it always satisfies the condition that waiters are always strictly
"weaker" than their descendants, so that finding a conflict with a
waiter is always enough to know that the descendants also conflict.

So, when you put a waiter to sleep, you don't add it below a child
unless it's "stronger" than the child.

You give up the property that siblings don't conflict, but again that
just means thundering herds in weird cases, which is OK.

--b.
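To make sure we're talking about the same invariant, here's a toy sketch of the insertion rule I have in mind (all names here are hypothetical, byte-range locks only, no locking or wakeups modeled): descend below an existing child only when the new waiter is "stronger", i.e. anything that conflicts with the child must also conflict with the waiter.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model only -- not the fs/locks.c types. */
struct toy_lock {
	long start, end;		/* byte range, inclusive */
	bool exclusive;			/* write lock if true */
	struct toy_lock *children;	/* requests blocked on this one */
	struct toy_lock *next_sibling;
};

static bool toy_conflict(const struct toy_lock *a, const struct toy_lock *b)
{
	if (a->end < b->start || b->end < a->start)
		return false;		/* disjoint ranges never conflict */
	return a->exclusive || b->exclusive;
}

/* "a stronger than b": any lock conflicting with b also conflicts with a.
 * A sufficient condition: a covers b's range and is at least as exclusive. */
static bool toy_stronger(const struct toy_lock *a, const struct toy_lock *b)
{
	return a->start <= b->start && a->end >= b->end &&
	       (a->exclusive || !b->exclusive);
}

/* Insert waiter into blocker's tree, keeping every node weaker than all
 * of its descendants: descend into a child only when the waiter is
 * stronger than that child, otherwise add it as a sibling. */
static void toy_insert(struct toy_lock *blocker, struct toy_lock *waiter)
{
	struct toy_lock *c;
descend:
	for (c = blocker->children; c; c = c->next_sibling)
		if (toy_stronger(waiter, c)) {
			blocker = c;
			goto descend;
		}
	waiter->next_sibling = blocker->children;
	blocker->children = waiter;
}
```

With that invariant, a conflict against a subtree root is enough to leave the whole subtree asleep; siblings may still conflict with each other, which is the thundering-herd-in-weird-cases trade-off above.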
>
> Reported-and-tested-by: Martin Wilck
> Signed-off-by: NeilBrown
> ---
>  fs/locks.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 63 insertions(+), 6 deletions(-)
>
> diff --git a/fs/locks.c b/fs/locks.c
> index fc64016d01ee..17843feb6f5b 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -738,6 +738,39 @@ static void locks_delete_block(struct file_lock *waiter)
>  	spin_unlock(&blocked_lock_lock);
>  }
>  
> +static void wake_non_conflicts(struct file_lock *waiter, struct file_lock *blocker,
> +			       enum conflict conflict(struct file_lock *,
> +						      struct file_lock *))
> +{
> +	struct file_lock *parent = waiter;
> +	struct file_lock *fl;
> +	struct file_lock *t;
> +
> +	fl = list_entry(&parent->fl_blocked, struct file_lock, fl_block);
> +restart:
> +	list_for_each_entry_safe_continue(fl, t, &parent->fl_blocked, fl_block) {
> +		switch (conflict(fl, blocker)) {
> +		default:
> +		case FL_NO_CONFLICT:
> +			__locks_wake_one(fl);
> +			break;
> +		case FL_CONFLICT:
> +			/* Need to check children */
> +			parent = fl;
> +			fl = list_entry(&parent->fl_blocked, struct file_lock, fl_block);
> +			goto restart;
> +		case FL_TRANSITIVE_CONFLICT:
> +			/* all children must also conflict, no need to check */
> +			continue;
> +		}
> +	}
> +	if (parent != waiter) {
> +		parent = parent->fl_blocker;
> +		fl = parent;
> +		goto restart;
> +	}
> +}
> +
>  /* Insert waiter into blocker's block list.
>   * We use a circular list so that processes can be easily woken up in
>   * the order they blocked. The documentation doesn't require this but
> @@ -747,11 +780,32 @@ static void locks_delete_block(struct file_lock *waiter)
>   * fl_blocked list itself is protected by the blocked_lock_lock, but by ensuring
>   * that the flc_lock is also held on insertions we can avoid taking the
>   * blocked_lock_lock in some cases when we see that the fl_blocked list is empty.
> + *
> + * Rather than just adding to the list, we check for conflicts with any existing
> + * waiter, and add to that waiter instead.
> + * Thus wakeups don't happen until needed.
>   */
>  static void __locks_insert_block(struct file_lock *blocker,
> -				 struct file_lock *waiter)
> +				 struct file_lock *waiter,
> +				 enum conflict conflict(struct file_lock *,
> +							struct file_lock *))
>  {
> +	struct file_lock *fl;
>  	BUG_ON(!list_empty(&waiter->fl_block));
> +
> +	/* Any request in waiter->fl_blocked is known to conflict with
> +	 * waiter, but it might not conflict with blocker.
> +	 * If it doesn't, it needs to be woken now so it can find
> +	 * somewhere else to wait, or possibly it can be granted.
> +	 */
> +	if (conflict(waiter, blocker) != FL_TRANSITIVE_CONFLICT)
> +		wake_non_conflicts(waiter, blocker, conflict);
> +new_blocker:
> +	list_for_each_entry(fl, &blocker->fl_blocked, fl_block)
> +		if (conflict(fl, waiter)) {
> +			blocker = fl;
> +			goto new_blocker;
> +		}
>  	waiter->fl_blocker = blocker;
>  	list_add_tail(&waiter->fl_block, &blocker->fl_blocked);
>  	if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
> @@ -760,10 +814,12 @@ static void __locks_insert_block(struct file_lock *blocker,
>  
>  /* Must be called with flc_lock held.
>   */
>  static void locks_insert_block(struct file_lock *blocker,
> -			       struct file_lock *waiter)
> +			       struct file_lock *waiter,
> +			       enum conflict conflict(struct file_lock *,
> +						      struct file_lock *))
>  {
>  	spin_lock(&blocked_lock_lock);
> -	__locks_insert_block(blocker, waiter);
> +	__locks_insert_block(blocker, waiter, conflict);
>  	spin_unlock(&blocked_lock_lock);
>  }
>  
> @@ -1033,7 +1089,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
>  		if (!(request->fl_flags & FL_SLEEP))
>  			goto out;
>  		error = FILE_LOCK_DEFERRED;
> -		locks_insert_block(fl, request);
> +		locks_insert_block(fl, request, flock_locks_conflict);
>  		goto out;
>  	}
>  	if (request->fl_flags & FL_ACCESS)
> @@ -1107,7 +1163,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
>  			spin_lock(&blocked_lock_lock);
>  			if (likely(!posix_locks_deadlock(request, fl))) {
>  				error = FILE_LOCK_DEFERRED;
> -				__locks_insert_block(fl, request);
> +				__locks_insert_block(fl, request,
> +						     posix_locks_conflict);
>  			}
>  			spin_unlock(&blocked_lock_lock);
>  			goto out;
> @@ -1581,7 +1638,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
>  		break_time -= jiffies;
>  		if (break_time == 0)
>  			break_time++;
> -		locks_insert_block(fl, new_fl);
> +		locks_insert_block(fl, new_fl, leases_conflict);
>  		trace_break_lease_block(inode, new_fl);
>  		spin_unlock(&ctx->flc_lock);
>  		percpu_up_read_preempt_enable(&file_rwsem);
> 
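For what it's worth, the wakeup side of the tree can be pictured with the same kind of toy model (again hypothetical names, nothing kernel-specific): when the blocker goes away, only its immediate children are woken, and each child carries its own subtree of still-blocked requests along with it, instead of the old flat list where every transitively blocked request was woken at once.

```c
#include <stddef.h>

/* Toy model of tree-shaped wakeup -- not the fs/locks.c implementation. */
struct toy_waiter {
	int woken;
	struct toy_waiter *children;	/* requests blocked on this waiter */
	struct toy_waiter *next_sibling;
};

/* Wake only the blocker's direct children; their own subtrees stay
 * asleep until the corresponding child is granted and later released. */
static int toy_wake_children(struct toy_waiter *blocker)
{
	int woken = 0;
	struct toy_waiter *c;

	for (c = blocker->children; c; c = c->next_sibling) {
		c->woken = 1;
		woken++;
	}
	blocker->children = NULL;	/* they now contend for the lock */
	return woken;
}
```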