Date: Fri, 5 Jun 2015 23:11:11 +0100
From: Al Viro
To: Oleg Nesterov
Cc: Linus Torvalds, Davidlohr Bueso, Peter Zijlstra, Paul McKenney,
    Tejun Heo, Ingo Molnar, Linux Kernel Mailing List, der.herr@hofr.at
Subject: Re: [RFC][PATCH 0/5] Optimize percpu-rwsem
Message-ID: <20150605221111.GY7232@ZenIV.linux.org.uk>
References: <20150526114356.609107918@infradead.org>
 <1432665731.8196.3.camel@stgolabs.net>
 <20150605014558.GS7232@ZenIV.linux.org.uk>
 <20150605210857.GA24905@redhat.com>
In-Reply-To: <20150605210857.GA24905@redhat.com>

On Fri, Jun 05, 2015 at 11:08:57PM +0200, Oleg Nesterov wrote:
> On 06/05, Al Viro wrote:
> >
> > FWIW, I hadn't really looked into stop_machine uses, but the fs/locks.c
> > one is really not all that great - there we have a large trashcan of a
> > list (every file_lock on the system) and the only use of that list is
> > /proc/locks output generation. Sure, additions take this CPU's spinlock.
> > And removals take pretty much a random one - losing the timeslice and
> > regaining it on a different CPU is quite likely with the uses there.
> >
> > Why do we need a global lock there, anyway? Why not hold only one for
> > the chain currently being traversed? Sure, we'll need to get and drop
> > them in ->next() that way; so what?
>
> And note that fs/seq_file.c:seq_hlist_next_percpu() has no other users.
>
> And given that locks_delete_global_locks() takes the random lock anyway,
> perhaps the hashed lists/locking makes no sense, I dunno.

It's not about making life easier for /proc/locks; it's about not screwing
those who add/remove file_lock... And no, that "random lock" isn't held
when modifying the (per-cpu) lists - it protects the list hanging off each
element of the global list, and /proc/locks scans those lists, so rather
than taking/dropping it in each ->show(), it's taken once in ->start()...
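[Editor's illustration: the sketch below is a simplified, hypothetical
rendering of the locking pattern described above, not the actual fs/locks.c
code. All names (pcpu_lock_list, my_lock, blocked_list_lock, insert_global,
delete_global, locks_start) are invented for illustration. It shows additions
going on the current CPU's list under that CPU's spinlock, removals taking
the spinlock of whichever CPU the entry was added on, and a separate global
lock protecting the waiter list hanging off each entry; the seq_file output
takes that global lock once in ->start() and drops it in ->stop() rather
than taking it in every ->show().]

/*
 * Hypothetical sketch only -- NOT the actual fs/locks.c code.
 */
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/seq_file.h>
#include <linux/init.h>

struct pcpu_lock_list {
        spinlock_t              lock;
        struct hlist_head       head;
};

static DEFINE_PER_CPU(struct pcpu_lock_list, lock_list);
static DEFINE_SPINLOCK(blocked_list_lock);      /* global; guards per-entry waiter lists */

struct my_lock {
        struct hlist_node       link;           /* on one CPU's lock_list */
        int                     link_cpu;       /* which CPU's list we went on */
        struct list_head        blocked;        /* waiters; under blocked_list_lock */
};

static int __init lock_list_init(void)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                struct pcpu_lock_list *l = per_cpu_ptr(&lock_list, cpu);

                spin_lock_init(&l->lock);
                INIT_HLIST_HEAD(&l->head);
        }
        return 0;
}
core_initcall(lock_list_init);

/* addition: this CPU's list, this CPU's spinlock */
static void insert_global(struct my_lock *fl)
{
        int cpu = get_cpu();
        struct pcpu_lock_list *l = per_cpu_ptr(&lock_list, cpu);

        spin_lock(&l->lock);
        fl->link_cpu = cpu;
        hlist_add_head(&fl->link, &l->head);
        spin_unlock(&l->lock);
        put_cpu();
}

/* removal: the list the entry went on, i.e. possibly another CPU's lock */
static void delete_global(struct my_lock *fl)
{
        struct pcpu_lock_list *l = per_cpu_ptr(&lock_list, fl->link_cpu);

        spin_lock(&l->lock);
        hlist_del(&fl->link);
        spin_unlock(&l->lock);
}

/*
 * /proc-style output: the global lock is taken once for the whole scan in
 * ->start() and dropped in ->stop(); ->next()/->show() (elided here) walk
 * the per-cpu lists and each entry's ->blocked list under it.
 */
static void *locks_start(struct seq_file *m, loff_t *pos)
{
        spin_lock(&blocked_list_lock);
        return NULL;                    /* per-cpu iteration elided */
}

static void locks_stop(struct seq_file *m, void *v)
{
        spin_unlock(&blocked_list_lock);
}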