Date: Wed, 29 Mar 2017 11:44:13 +0200
From: Peter Zijlstra
To: Davidlohr Bueso
Cc: mingo@kernel.org, akpm@linux-foundation.org, jack@suse.cz, kirill.shutemov@linux.intel.com, mhocko@suse.com, mgorman@techsingularity.net, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/5] locking: Introduce range reader/writer lock
Message-ID: <20170329094413.ind7iaondjhl3tzh@hirez.programming.kicks-ass.net>
In-Reply-To: <1488863010-13028-2-git-send-email-dave@stgolabs.net>

On Mon, Mar 06, 2017 at 09:03:26PM -0800, Davidlohr Bueso wrote:
> +static __always_inline int
> +__range_read_lock_common(struct range_rwlock_tree *tree,
> +			 struct range_rwlock *lock, long state)
> +{
> +	struct interval_tree_node *node;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tree->lock, flags);
> +	lock->reader = true;
> +
> +	if (!__range_intersects_intree(tree, lock))
> +		goto insert;
> +
> +	node = interval_tree_iter_first(&tree->root, lock->node.start,
> +					lock->node.last);
> +	while (node) {

	for (node = interval_tree_iter_first(); node;
	     node = interval_tree_iter_next()) {

Or some interval_tree_for_each helper?
> +		struct range_rwlock *blocked_lock;
> +		blocked_lock = range_entry(node, struct range_rwlock, node);
> +
> +		if (!blocked_lock->reader)
> +			lock->blocking_ranges++;

Can this @blocked range now go EINTR and remove itself from
wait_for_ranges()? If so, who will decrement our blocking_ranges?

> +		node = interval_tree_iter_next(node, lock->node.start,
> +					       lock->node.last);
> +	}
> +insert:
> +	__range_tree_insert(tree, lock);
> +
> +	lock->task = current;
> +	spin_unlock_irqrestore(&tree->lock, flags);
> +
> +	return wait_for_ranges(tree, lock, state);
> +}