Date: Sat, 6 Jun 2015 01:36:22 +0200
From: Oleg Nesterov
To: Al Viro
Cc: Linus Torvalds, Davidlohr Bueso, Peter Zijlstra, Paul McKenney,
    Tejun Heo, Ingo Molnar, Linux Kernel Mailing List, der.herr@hofr.at
Subject: Re: [RFC][PATCH 0/5] Optimize percpu-rwsem
Message-ID: <20150605233622.GA31034@redhat.com>
References: <20150526114356.609107918@infradead.org>
 <1432665731.8196.3.camel@stgolabs.net>
 <20150605014558.GS7232@ZenIV.linux.org.uk>
 <20150605210857.GA24905@redhat.com>
 <20150605221111.GY7232@ZenIV.linux.org.uk>
In-Reply-To: <20150605221111.GY7232@ZenIV.linux.org.uk>

On 06/05, Al Viro wrote:
>
> On Fri, Jun 05, 2015 at 11:08:57PM +0200, Oleg Nesterov wrote:
> > On 06/05, Al Viro wrote:
> > >
> > > FWIW, I hadn't really looked into the stop_machine uses, but the
> > > fs/locks.c one is really not all that great - there we have a large
> > > trashcan of a list (every file_lock on the system), and the only use
> > > of that list is /proc/locks output generation. Sure, additions take
> > > this CPU's spinlock. And removals take pretty much a random one -
> > > losing the timeslice and regaining it on a different CPU is quite
> > > likely with the uses there.
> > >
> > > Why do we need a global lock there, anyway? Why not hold only the
> > > one for the chain currently being traversed? Sure, we'll need to
> > > get and drop them in ->next() that way; so what?
> >
> > And note that fs/seq_file.c:seq_hlist_next_percpu() has no other users.
> >
> > And given that locks_delete_global_locks() takes the random lock
> > anyway, perhaps the hashed lists/locking makes no sense, I dunno.
>
> It's not about making life easier for /proc/locks; it's about not
> screwing those who add/remove a file_lock...

I meant that seq_hlist_next_percpu() could be "static" in fs/locks.c.

> And no, that "random lock" isn't held when modifying the (per-cpu)
> lists - it protects the list hanging off each element of the global
> list, and /proc/locks scans those lists, so rather than taking/dropping
> it in each ->show(), it is taken once in ->start()...

Sure, I understand. I meant that (perhaps) something like this could
work instead:

	struct {
		spinlock_t	  lock;
		struct hlist_head head;	/* hlist, to match hlist_add_head() */
	} file_lock_list[];

	static void locks_insert_global_locks(struct file_lock *fl)
	{
		int idx = fl->idx = hash(fl);

		spin_lock(&file_lock_list[idx].lock);
		hlist_add_head(&fl->fl_link, &file_lock_list[idx].head);
		spin_unlock(&file_lock_list[idx].lock);
	}

A seq_hlist_next_percpu()-like helper could then scan file_lock_list[]
and unlock/lock ->lock whenever it moves on to the next index.

But please forget it, this is really minor. My point is just that
file_lock_list is not actually "per-cpu", precisely because every
locks_delete_global_locks() needs lg_local_lock_cpu(fl->fl_link_cpu),
as you pointed out.

Oleg.
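
For concreteness, a fuller sketch of the bucketed scheme above. The
names FL_HASH_BITS, fl_hash(), fl_link_idx and fl_global_next() are
hypothetical, invented for this illustration, not the actual fs/locks.c
code; only fl_link and the spinlock/hlist primitives match the real
kernel:

	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/hash.h>

	#define FL_HASH_BITS	4			/* illustrative size */
	#define FL_NR_BUCKETS	(1 << FL_HASH_BITS)

	struct file_lock {			/* minimal stand-in for the real struct */
		struct hlist_node fl_link;	/* global-list linkage, as in fs/locks.c */
		int		  fl_link_idx;	/* hypothetical: replaces fl_link_cpu */
	};

	static struct {
		spinlock_t	  lock;		/* spin_lock_init() these at boot */
		struct hlist_head head;
	} file_lock_list[FL_NR_BUCKETS];

	static int fl_hash(struct file_lock *fl)
	{
		return hash_ptr(fl, FL_HASH_BITS);	/* any stable hash would do */
	}

	static void locks_insert_global_locks(struct file_lock *fl)
	{
		int idx = fl->fl_link_idx = fl_hash(fl);

		spin_lock(&file_lock_list[idx].lock);
		hlist_add_head(&fl->fl_link, &file_lock_list[idx].head);
		spin_unlock(&file_lock_list[idx].lock);
	}

	static void locks_delete_global_locks(struct file_lock *fl)
	{
		/* the stored index makes removal independent of the current CPU */
		int idx = fl->fl_link_idx;

		spin_lock(&file_lock_list[idx].lock);
		hlist_del_init(&fl->fl_link);
		spin_unlock(&file_lock_list[idx].lock);
	}

	/*
	 * ->next() step for a /proc/locks-style iterator: advance within
	 * the current bucket; once it is exhausted, drop its lock and take
	 * the next non-empty bucket's lock.  Exactly one bucket lock is
	 * held while an entry is being shown; none is held after NULL is
	 * returned.
	 */
	static struct hlist_node *fl_global_next(struct hlist_node *node, int *idx)
	{
		if (node->next)
			return node->next;

		spin_unlock(&file_lock_list[*idx].lock);
		while (++*idx < FL_NR_BUCKETS) {
			spin_lock(&file_lock_list[*idx].lock);
			if (!hlist_empty(&file_lock_list[*idx].head))
				return file_lock_list[*idx].head.first;
			spin_unlock(&file_lock_list[*idx].lock);
		}
		return NULL;
	}

The stored bucket index is what removes the per-cpu asymmetry discussed
above: locks_delete_global_locks() no longer cares which CPU the task
happens to run on, while /proc/locks never holds more than one bucket
lock at a time.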