From: "J. Bruce Fields"
To: Jeff Layton
Cc: NeilBrown, Alexander Viro, Martin Wilck, linux-fsdevel@vger.kernel.org,
        Frank Filz, linux-kernel@vger.kernel.org
Date: Sat, 11 Aug 2018 08:21:50 -0400
Subject: Re: [PATCH 0/5 - V2] locks: avoid thundering-herd wake-ups
Message-ID: <20180811122150.GA15848@fieldses.org>
References: <153378012255.1220.6754153662007899557.stgit@noble>
 <20180809173245.GM23873@fieldses.org>
 <87lg9frxyc.fsf@notabene.neil.brown.name>
 <20180810002922.GA3915@fieldses.org>

On Sat, Aug 11, 2018 at 07:51:13AM -0400, Jeff Layton wrote:
> On Thu, 2018-08-09 at 20:29 -0400, J. Bruce Fields wrote:
> > On Fri, Aug 10, 2018 at 08:12:43AM +1000, NeilBrown wrote:
> > > On Thu, Aug 09 2018, J. Bruce Fields wrote:
> > > 
> > > > I think there's also a problem with multiple tasks sharing the same
> > > > lock owner.
> > > > 
> > > > So, all locks are exclusive locks for the same range.  We have four
> > > > tasks.  Tasks 1 and 4 share the same owner; the others' owners are
> > > > distinct.
> > > > 
> > > > 	- Task 1 gets a lock.
> > > > 	- Task 2 gets a conflicting lock.
> > > > 	- Task 3 gets another conflicting lock.  So now the tree is
> > > > 	  3->2->1.
> > > > 	- Task 1's lock is released.
> > > > 	- Before task 2 is scheduled, task 4 acquires a new lock.
> > > > 	- Task 2 waits on task 4's lock, so we now have
> > > > 	  3->2->4.
> > > > 
> > > > Task 3 shouldn't be waiting--the lock it's requesting has the same owner
> > > > as the lock task 4 holds--but we fail to wake up task 3.
> > > 
> > > So task 1 and task 4 are threads in the one process - OK.
> > > Tasks 2 and 3 are threads in two other processes.
> > > 
> > > So 2 and 3 conflict with either 1 or 4 equally - why should task 3 be
> > > woken?
> > > 
> > > I suspect you got the numbers a bit mixed up,
> > 
> > Whoops.
> > 
> > > but in any case, the "conflict()" function that is passed around takes
> > > ownership into account when assessing if one lock conflicts with
> > > another.
> > 
> > Right, I know, but let me try again:
> > 
> > All locks are exclusive locks for the same range.  Only tasks 3 and 4
> > share the same owner.
> > 
> > 	- Task 1 gets a lock.
> > 	- Task 2 requests a conflicting lock, so we have 2->1.
> > 	- Task 3 requests a conflicting lock, so we have 3->2->1.
> > 	- Task 1 unlocks.  We wake up task 2, but it isn't scheduled yet.
> > 	- Task 4 gets a new lock.
> > 	- Task 2 runs, discovers the conflict, and waits.  Now we have:
> > 	  3->2->4.
> > 
> > There is no conflict between the lock 3 requested and the lock 4 holds,
> > but 3 is not woken up.
> > 
> > This is another version of the first problem: there's information we
> > need (the owners of the waiting locks in the tree) that we can't
> > determine just from looking at the root of the tree.
> > 
> > I'm not sure what to do about that.
> 
> Is this still a problem in the v2 set?
> 
> wake_non_conflicts walks the whole tree of requests that were blocked on
> it,

Not in the FL_TRANSITIVE_CONFLICT case, which is the case here.

--b.

> so after task 2 discovers the conflict, it should wake any of its
> children that don't conflict. So in that last step, task 3 would be
> awoken before task 2 goes back to sleep.
> 
> -- 
> Jeff Layton
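
A side note for readers following the thread: the ownership rule Neil
refers to can be sketched in a few lines of userspace C. This is an
illustrative model only (the struct and helper names below are invented,
not the fs/locks.c API), but it captures the rule that locks with the
same owner never conflict, while overlapping locks with different owners
conflict if either is exclusive.

/* Illustrative model only; names are invented, not from fs/locks.c. */
#include <stdbool.h>

struct lock_req {
	void *owner;      /* e.g. the owning process for POSIX locks */
	long  start, end; /* byte range covered by the lock */
	bool  exclusive;  /* write lock? */
};

static bool ranges_overlap(const struct lock_req *a, const struct lock_req *b)
{
	return a->start <= b->end && b->start <= a->end;
}

/*
 * The "conflict()" rule: same-owner locks never conflict; otherwise two
 * overlapping locks conflict if either is exclusive.
 */
static bool conflict(const struct lock_req *a, const struct lock_req *b)
{
	if (a->owner == b->owner)
		return false;
	if (!ranges_overlap(a, b))
		return false;
	return a->exclusive || b->exclusive;
}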
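
The missed wakeup in Bruce's scenario can be modelled the same way. The
sketch below again uses invented names rather than the patch set's
actual struct file_lock machinery; it assumes the transitive-conflict
shortcut stops descending the wait chain at the first conflicting
waiter, so task 3 is never tested against task 4 even though they share
an owner.

#include <stdbool.h>
#include <stdio.h>

struct waiter {
	int id;                 /* task number from the example */
	int owner;              /* tasks 3 and 4 share owner 1 */
	struct waiter *blocked; /* request queued behind this one */
	bool sleeping;
};

/* Exclusive locks on one range: conflict unless the owner matches. */
static bool conflict(const struct waiter *a, const struct waiter *b)
{
	return a->owner != b->owner;
}

/*
 * The transitive-conflict shortcut: wake waiters until one conflicts
 * with the holder, then assume everything queued behind it conflicts
 * too and leave the rest asleep.
 */
static void wake_non_conflicts(struct waiter *holder, struct waiter *w)
{
	for (; w; w = w->blocked) {
		if (conflict(holder, w))
			return; /* task 3 is behind here and never checked */
		w->sleeping = false;
	}
}

int main(void)
{
	struct waiter t3 = { .id = 3, .owner = 1, .blocked = NULL, .sleeping = true };
	struct waiter t2 = { .id = 2, .owner = 2, .blocked = &t3,  .sleeping = true };
	struct waiter t4 = { .id = 4, .owner = 1, .blocked = NULL, .sleeping = false };

	/* After task 1 unlocks and task 4 locks, the chain is 3->2->4. */
	wake_non_conflicts(&t4, &t2);

	printf("task 3 sleeping: %s\n", t3.sleeping ? "yes (missed wakeup)" : "no");
	return 0;
}

Compiled and run, this prints "task 3 sleeping: yes (missed wakeup)": an
owner-aware walk would have to continue past task 2 and test task 3
against task 4 directly, which is exactly the information-at-the-root
problem described above.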