Subject: Re: [PATCH v6 1/3] perf/core: use rb trees for pinned/flexible groups
From: Alexey Budankov
Organization: Intel Corp.
To: Alexander Shishkin, Peter Zijlstra
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Andi Kleen, Kan Liang,
    Dmitri Prokhorov, Valery Cherepennikov, Mark Rutland,
    Stephane Eranian, David Carrillo-Cisneros, linux-kernel
Date: Wed, 30 Aug 2017 11:30:26 +0300

On 29.08.2017 16:51, Alexander Shishkin wrote:
> Alexey Budankov writes:
>
>> Now I figured that not all indexed events are always located under
>> the root with the same cpu; it depends on the order of insertion,
>> e.g. with insertion order 01,02,03,14,15,16 we get this:
>>
>>        02
>>       /  \
>>     01    14
>>          /  \
>>        03    15
>>                \
>>                 16
>>
>> and it is unclear how to iterate the cpu==0 part of the tree in this case.
>
> Using this example, rb_next() should take you through the nodes in this
> order (assuming you start with 01): 01, 02, 03, 14, etc. So you iterate
> while event->cpu == cpu using rb_next() and you should be fine.

Well, indeed we get the leftmost leaf (03) from rb_next() for the case above.

>> Iterating a cpu-specific subtree like this:
>>
>> #define for_each_group_event(event, group, cpu, pmu, field)	 \
>>	for (event = rb_entry_safe(group_first(group, cpu, pmu), \
>>				   typeof(*event), field);	 \
>>	     event && event->cpu == cpu && event->pmu == pmu;	 \
>>	     event = rb_entry_safe(rb_next(&event->field),	 \
>>				   typeof(*event), field))
>
> Afaict, this assumes that you are also ordering on event->pmu, which
> should be reflected in your _less function. And also assuming that
> group_first() is doing the right thing. Can we see the code?

I didn't do ordering by pmu for this patch set. Moreover, I implemented
groups_first() like this:

static struct perf_event *
perf_event_groups_first(struct perf_event_groups *groups, int cpu)
{
	struct perf_event *node_event = NULL;
	struct rb_node *node = NULL;

	node = groups->tree.rb_node;

	while (node) {
		node_event = container_of(node, struct perf_event, group_node);

		if (cpu < node_event->cpu) {
			node = node->rb_left;
		} else if (cpu > node_event->cpu) {
			node = node->rb_right;
		} else {
			node = node->rb_left;
		}
	}

	return node_event;
}

and it doesn't work as expected for the case above with cpu == 1: after
matching node 14 it keeps descending left into the cpu == 0 subtree and
returns the last node visited (03) rather than the match.
I corrected the code above to this:

static struct perf_event *
perf_event_groups_first(struct perf_event_groups *groups, int cpu)
{
	struct perf_event *node_event = NULL, *match = NULL;
	struct rb_node *node = NULL;

	node = groups->tree.rb_node;

	while (node) {
		node_event = container_of(node, struct perf_event, group_node);

		if (cpu < node_event->cpu) {
			node = node->rb_left;
		} else if (cpu > node_event->cpu) {
			node = node->rb_right;
		} else {
			match = node_event;
			node = node->rb_left;
		}
	}

	return match;
}

but now I am struggling with silent oopses, which I guess are not related
to multiplexing at all.

Please look at v8 meanwhile. It addresses your comments for v7.

> Regards,
> --
> Alex

Thanks,
Alexey