Date: Mon, 4 May 2015 13:38:01 -0700
From: "Paul E. McKenney"
To: Rik van Riel
Cc: Paolo Bonzini, Ingo Molnar, Andy Lutomirski, linux-kernel@vger.kernel.org,
 X86 ML, williams@redhat.com, Andrew Lutomirski, fweisbec@redhat.com,
 Peter Zijlstra, Heiko Carstens, Thomas Gleixner, Ingo Molnar, Linus Torvalds
Subject: Re: question about RCU dynticks_nesting
Message-ID: <20150504203801.GG5381@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20150501184025.GA2114@gmail.com> <5543CFE5.1030509@redhat.com>
 <20150502052733.GA9983@gmail.com> <55473B47.6080600@redhat.com>
 <55479749.7070608@redhat.com> <20150504183906.GS5381@linux.vnet.ibm.com>
 <5547CAED.9010201@redhat.com> <20150504200232.GB5381@linux.vnet.ibm.com>
 <5547D2FE.9010806@redhat.com>
In-Reply-To: <5547D2FE.9010806@redhat.com>

On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:
> > On Mon, May 04, 2015 at 03:39:25PM -0400, Rik van Riel wrote:
> >> On 05/04/2015 02:39 PM, Paul E. McKenney wrote:
> >>> On Mon, May 04, 2015 at 11:59:05AM -0400, Rik van Riel wrote:
> >>
> >>>> In fact, would we be able to simply use tsk->rcu_read_lock_nesting
> >>>> as an indicator of whether or not we should bother waiting on that
> >>>> task or CPU when doing synchronize_rcu?
> >>>
> >>> Depends on exactly what you are asking.  If you are asking if I could add
> >>> a few more checks to preemptible RCU and speed up grace-period detection
> >>> in a number of cases, the answer is very likely "yes".  This is on my
> >>> list, but not particularly high priority.  If you are asking whether
> >>> CPU 0 could access ->rcu_read_lock_nesting of some task running on
> >>> some other CPU, in theory, the answer is "yes", but in practice that
> >>> would require putting full memory barriers in both rcu_read_lock()
> >>> and rcu_read_unlock(), so the real answer is "no".
> >>>
> >>> Or am I missing your point?
> >>
> >> The main question is "how can we greatly reduce the overhead
> >> of nohz_full, by simplifying the RCU extended quiescent state
> >> code called in the syscall fast path, and maybe piggyback on
> >> that to do time accounting for remote CPUs?"
> >>
> >> Your memory barrier answer above makes it clear we will still
> >> want to do the RCU stuff at syscall entry & exit time, at least
> >> on x86, where we already have automatic and implicit memory
> >> barriers.
> >
> > We do need to keep in mind that x86's automatic and implicit memory
> > barriers do not order prior stores against later loads.
> >
> > Hmmm...  But didn't earlier performance measurements show that the bulk of
> > the overhead was the delta-time computations rather than RCU accounting?
>
> The bulk of the overhead was disabling and re-enabling
> irqs around the calls to rcu_user_exit and rcu_user_enter :)

Really???  OK...  How about software irq masking?  (I know, that is
probably a bit of a scary change as well.)
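A rough sketch of the user-entry hook being measured, paraphrased in
kernel C rather than quoted from the actual source: the wrapper name
user_enter_sketch() is invented and the header list is approximate,
while local_irq_save()/local_irq_restore(), vtime_user_enter(),
rcu_user_enter(), and current are the real kernel symbols the thread
refers to.

	#include <linux/irqflags.h>	/* local_irq_save()/local_irq_restore() */
	#include <linux/vtime.h>	/* vtime_user_enter() */
	#include <linux/rcupdate.h>	/* rcu_user_enter() */
	#include <linux/sched.h>	/* current */

	/* Called on the syscall/exception return path into user space. */
	static void user_enter_sketch(void)
	{
		unsigned long flags;

		local_irq_save(flags);		/* reportedly the dominant cost */
		vtime_user_enter(current);	/* fine-grained time accounting */
		rcu_user_enter();		/* enter RCU extended quiescent state */
		local_irq_restore(flags);	/* ...and re-enable on the way back out */
	}

The mirror-image hook on kernel entry calls rcu_user_exit() and
vtime_user_exit() inside the same kind of irq-disabled region, which is
why the irq bracketing shows up twice per syscall.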
> Of the remaining time, about 2/3 seems to be the vtime
> stuff, and the other 1/3 the rcu code.

OK, worth some thought, then.

> I suspect it makes sense to optimize both, though the
> vtime code may be the easiest :)

Making a crude version that does jiffies (or whatever) instead of
fine-grained computations might give good bang for the buck.  ;-)

							Thanx, Paul
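To make the jiffies idea concrete, a hypothetical sketch of coarse,
tick-granularity vtime accounting might look like the following.  The
->utime_jiffies_snap field and the _coarse helpers are invented for
illustration, the header list is approximate, and account_user_time()
and jiffies_to_cputime() are assumed to have their 2015-era
(cputime_t-based, three-argument) signatures.

	#include <linux/jiffies.h>	/* jiffies */
	#include <linux/cputime.h>	/* jiffies_to_cputime() */
	#include <linux/sched.h>	/* struct task_struct */
	#include <linux/kernel_stat.h>	/* account_user_time() */

	/* Snapshot jiffies when the task enters user mode. */
	static inline void vtime_user_enter_coarse(struct task_struct *tsk)
	{
		tsk->utime_jiffies_snap = jiffies;	/* hypothetical field */
	}

	/* On return to kernel mode, account whole ticks spent in user mode. */
	static inline void vtime_user_exit_coarse(struct task_struct *tsk)
	{
		unsigned long delta = jiffies - tsk->utime_jiffies_snap;

		if (delta)	/* no fine-grained clock reads on the fast path */
			account_user_time(tsk, jiffies_to_cputime(delta),
					  jiffies_to_cputime(delta));
	}

The trade-off in this sketch: the syscall fast path only touches one
per-task word and the global jiffies counter, at the cost of per-task
user/system time becoming tick-accurate rather than nanosecond-accurate.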