Message-ID: <5547DC3C.1000504@redhat.com>
Date: Mon, 04 May 2015 16:53:16 -0400
From: Rik van Riel
To: paulmck@linux.vnet.ibm.com
Cc: Paolo Bonzini, Ingo Molnar, Andy Lutomirski, linux-kernel@vger.kernel.org, X86 ML, williams@redhat.com, Andrew Lutomirski, fweisbec@redhat.com, Peter Zijlstra, Heiko Carstens, Thomas Gleixner, Linus Torvalds
Subject: Re: question about RCU dynticks_nesting
In-Reply-To: <20150504203801.GG5381@linux.vnet.ibm.com>

On 05/04/2015 04:38 PM, Paul E. McKenney wrote:
> On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
>> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:
>>> Hmmm... But didn't earlier performance measurements show that the
>>> bulk of the overhead was the delta-time computations rather than
>>> RCU accounting?
>>
>> The bulk of the overhead was disabling and re-enabling
>> irqs around the calls to rcu_user_exit and rcu_user_enter :)
>
> Really??? OK... How about software irq masking? (I know, that is
> probably a bit of a scary change as well.)
>
>> Of the remaining time, about 2/3 seems to be the vtime
>> stuff, and the other 1/3 the rcu code.
>
> OK, worth some thought, then.
>
>> I suspect it makes sense to optimize both, though the
>> vtime code may be the easiest :)
>
> Making a crude version that does jiffies (or whatever) instead of
> fine-grained computations might give good bang for the buck. ;-)

Ingo's idea is to simply have CPU 0 check the current task on all the
other CPUs, see whether that task is running in system mode, user
mode, guest mode, irq mode, etc., and update that task's vtime
accordingly.

I suspect the runqueue lock is probably enough to do that, and
between the RCU state and PF_VCPU we probably have enough information
to tell what mode the task is running in, with just remote memory
reads.

I looked at implementing the vtime bits (and am pretty sure I know
how to do those now), and then spent some hours looking at the RCU
bits, to see whether we could simplify both things at once,
especially considering that the current RCU context-tracking bits
need to be called with irqs disabled.
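As a rough sketch of what that remote sampling could look like, here
is a self-contained user-space toy model, for illustration only: the
toy_rq structure, the cur_mode field, and every helper name in it are
made-up stand-ins for the runqueue lock, the context-tracking/RCU
state, and PF_VCPU, not actual kernel interfaces.

/*
 * Toy model of remote vtime sampling: one housekeeping pass
 * (standing in for CPU 0) periodically classifies what every other
 * CPU's current task is doing and charges the elapsed time to the
 * matching bucket. In a real kernel, cur_mode would instead be
 * derived from the remote task's context-tracking/RCU state and
 * PF_VCPU, via remote memory reads.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

enum task_mode { MODE_USER, MODE_SYSTEM, MODE_GUEST, MODE_IRQ, NR_MODES };

struct toy_rq {
	pthread_mutex_t lock;		/* stands in for the runqueue lock */
	enum task_mode cur_mode;	/* what the remote task is doing */
	uint64_t vtime[NR_MODES];	/* accumulated ns per mode */
	uint64_t last_sample_ns;	/* when we last charged time */
};

#define NR_TOY_CPUS 4
static struct toy_rq toy_rqs[NR_TOY_CPUS];

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* The housekeeping pass: charge elapsed time to each remote task. */
static void sample_all_cpus(void)
{
	for (int cpu = 1; cpu < NR_TOY_CPUS; cpu++) {
		struct toy_rq *rq = &toy_rqs[cpu];
		uint64_t now = now_ns();

		pthread_mutex_lock(&rq->lock);
		rq->vtime[rq->cur_mode] += now - rq->last_sample_ns;
		rq->last_sample_ns = now;
		pthread_mutex_unlock(&rq->lock);
	}
}

int main(void)
{
	uint64_t start = now_ns();

	for (int cpu = 0; cpu < NR_TOY_CPUS; cpu++) {
		pthread_mutex_init(&toy_rqs[cpu].lock, NULL);
		toy_rqs[cpu].cur_mode = MODE_USER;
		toy_rqs[cpu].last_sample_ns = start;
	}

	/* Pretend the task on CPU 2 entered guest mode. */
	toy_rqs[2].cur_mode = MODE_GUEST;

	for (int i = 0; i < 5; i++) {
		usleep(10000);		/* ~10ms between sampling passes */
		sample_all_cpus();
	}

	for (int cpu = 1; cpu < NR_TOY_CPUS; cpu++)
		printf("cpu%d: user=%llu ns, guest=%llu ns\n", cpu,
		       (unsigned long long)toy_rqs[cpu].vtime[MODE_USER],
		       (unsigned long long)toy_rqs[cpu].vtime[MODE_GUEST]);

	return 0;
}

The coarse granularity is the point: as with the jiffies suggestion
above, the sampling pass needs only one lock and a few remote reads
per CPU, instead of fine-grained delta-time bookkeeping on every
context transition with irqs disabled.

--
All rights reversed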