Date: Thu, 10 Nov 2016 12:04:23 +0000
From: Mark Rutland
To: Peter Zijlstra
Cc: kan.liang@intel.com, mingo@redhat.com, linux-kernel@vger.kernel.org,
	acme@kernel.org, tglx@linutronix.de,
	alexander.shishkin@linux.intel.com, vince@deater.net,
	eranian@google.com, andi@firstfloor.org
Subject: Re: [PATCH] perf/core: introduce context per CPU event list
Message-ID: <20161110120423.GC4418@leverpostej>
References: <1478718286-12824-1-git-send-email-kan.liang@intel.com>
	<20161110083355.GO3568@worktop.programming.kicks-ass.net>
	<20161110110516.GA4418@leverpostej>
	<20161110113704.GU3142@twins.programming.kicks-ass.net>
In-Reply-To: <20161110113704.GU3142@twins.programming.kicks-ass.net>

On Thu, Nov 10, 2016 at 12:37:05PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 10, 2016 at 11:05:17AM +0000, Mark Rutland wrote:
> > On Thu, Nov 10, 2016 at 09:33:55AM +0100, Peter Zijlstra wrote:
> > > Yes this is a problem, but no this cannot be done. We can't have
> > > per-cpu storage per task. That rapidly explodes.
> > >
> > > Mark is looking at replacing this stuff with an rb-tree for
> > > big-little, that would also allow improving this I think.
> >
> > Unfortunately I've not had the chance to look at that since
> > returning from Plumbers. Also, I was leaning towards the
> > alternative approach we discussed, with a perf_event_task_contexts
> > container, as that also solved some other issues with the way we
> > use perf_event_context::pmu in big.LITTLE systems.
> >
> > Looking at the way perf_iterate_ctx is used, it seems that we're
> > just trying to iterate over the active events for a context (i.e.
> > those programmed into the HW at this point in time). Though perhaps
> > I'm missing some subtlety, since we check
> > event->state < PERF_EVENT_STATE_INACTIVE.
> >
> > We have a similar issue with perf_event_task_tick() needing to know
> > the relevant contexts, and for that we have the active_ctx_list.
> > Can't we do something similar and add an active_events_list to
> > perf_event_context?
>
> So the problem is finding which events are active when.

Sure. If we only care about PERF_EVENT_STATE_ACTIVE, then I think we
can fairly easily maintain a perf_event_context::active_event_list at
event_sched_{in,out}() time (or somewhere close to that).

If we need PERF_EVENT_STATE_INACTIVE events, then that doesn't work,
since we can give up early and not schedule some eligible events.

> If we stick all events in an RB-tree sorted on: {pmu,cpu,runtime} we
> can, fairly easily, find the relevant subtree and limit the
> iteration. Esp. if we use a threaded tree.

That would cater for big.LITTLE, certainly, but I'm not sure I follow
how that helps us find active events -- we'd still have to iterate
through the whole PMU subtree to find which events are active, no?

Thanks,
Mark.
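
P.S. To make the active_event_list idea concrete, a minimal sketch is
below (untested; the active_event_entry/active_event_list names are
ones I've invented, and the event_sched_{in,out}() hooks are
abbreviated to just the relevant lines):

	/* in struct perf_event: node for the per-context active list */
	struct list_head	active_event_entry;

	/* in struct perf_event_context, next to active_ctx_list */
	struct list_head	active_event_list;

	/* at the end of event_sched_in(), once the event is ACTIVE */
	list_add_tail(&event->active_event_entry,
		      &ctx->active_event_list);

	/* in event_sched_out(), as the event ceases to be ACTIVE */
	list_del_init(&event->active_event_entry);

With that in place, perf_iterate_ctx() callers that only care about
events currently programmed into the HW could walk
ctx->active_event_list rather than ctx->event_list.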
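
P.P.S. For the RB-tree, I'm reading the {pmu,cpu,runtime} sort key as
something like the below, taking "runtime" to mean
total_time_running (again untested, and perf_event_sort_less() is a
name I've made up, purely to check I've understood the proposal):

	/*
	 * Order events by {pmu, cpu, runtime}, so that all events for
	 * a given PMU, and within that a given CPU, form a contiguous
	 * subtree that sched-in can walk without touching the rest.
	 */
	static bool perf_event_sort_less(struct perf_event *a,
					 struct perf_event *b)
	{
		if (a->pmu != b->pmu)
			return (unsigned long)a->pmu <
			       (unsigned long)b->pmu;

		if (a->cpu != b->cpu)
			return a->cpu < b->cpu;

		return a->total_time_running < b->total_time_running;
	}

If so, finding the subtree for a given pmu/cpu is cheap, sure; my
question above is that "active" isn't part of that key, so within the
subtree we'd still visit inactive events.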