Subject: Re: [PATCH v7 1/2] perf/core: use rb trees for pinned/flexible groups
From: Alexey Budankov
Organization: Intel Corp.
To: Alexander Shishkin, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo
Cc: Andi Kleen, Kan Liang, Dmitri Prokhorov, Valery Cherepennikov, Mark Rutland, Stephane Eranian, David Carrillo-Cisneros, linux-kernel
Date: Wed, 23 Aug 2017 20:23:22 +0300
Message-ID: <05d50ff4-b672-03e8-0154-b479cebfd80e@linux.intel.com>
In-Reply-To: <87wp5uzg6u.fsf@ashishki-desk.ger.corp.intel.com>
References: <96c7776f-1f17-a39e-23e9-658596216d6b@linux.intel.com> <9bd70585-c5d9-6615-ca68-77ad021ee942@linux.intel.com> <87wp5uzg6u.fsf@ashishki-desk.ger.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 23.08.2017 14:17, Alexander Shishkin wrote:
> Alexey Budankov writes:
>
>> @@ -3091,61 +3231,55 @@ static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx,
>>  }
>>
>>  static void
>> -ctx_pinned_sched_in(struct perf_event_context *ctx,
>> -		    struct perf_cpu_context *cpuctx)
>> +ctx_pinned_sched_in(struct perf_event *event,
>> +		    struct perf_cpu_context *cpuctx,
>> +		    struct perf_event_context *ctx)
>
> If you're doing this, you also need to rename the function, because it
> now schedules in one event and not a context.
> But better just keep it as is.
>
>>  {
>> -	struct perf_event *event;
>> -
>> -	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {
>
> Why not put your new iterator here in place of the old one, instead of
> moving things around? Because what follows is hard to read and is also
> completely unnecessary:
>
>> -		if (event->state <= PERF_EVENT_STATE_OFF)
>> -			continue;
>> -		if (!event_filter_match(event))
>> -			continue;
>> +	if (event->state <= PERF_EVENT_STATE_OFF)
>> +		return;
>> +	if (!event_filter_match(event))
>> +		return;
>
> like this,
>
>> -		/* may need to reset tstamp_enabled */
>> -		if (is_cgroup_event(event))
>> -			perf_cgroup_mark_enabled(event, ctx);
>> +	/* may need to reset tstamp_enabled */
>> +	if (is_cgroup_event(event))
>> +		perf_cgroup_mark_enabled(event, ctx);
>
> or this
>
>> -		if (group_can_go_on(event, cpuctx, 1))
>> -			group_sched_in(event, cpuctx, ctx);
>> +	if (group_can_go_on(event, cpuctx, 1))
>> +		group_sched_in(event, cpuctx, ctx);
>
> etc, etc.
>
>> @@ -3156,7 +3290,8 @@ ctx_sched_in(struct perf_event_context *ctx,
>>  		 struct task_struct *task)
>>  {
>>  	int is_active = ctx->is_active;
>> -	u64 now;
>
> Why?

This shortens the variable's scope and lifetime: it is now declared,
defined, and initialized closer to its point of use.

>
>> +	struct perf_event *event;
>> +	struct rb_node *node;
>>
>>  	lockdep_assert_held(&ctx->lock);
>>
>> @@ -3175,7 +3310,7 @@ ctx_sched_in(struct perf_event_context *ctx,
>>
>>  	if (is_active & EVENT_TIME) {
>>  		/* start ctx time */
>> -		now = perf_clock();
>> +		u64 now = perf_clock();
>
> Why?
>
>>  		ctx->timestamp = now;
>>  		perf_cgroup_set_timestamp(task, ctx);
>>  	}
>> @@ -3185,11 +3320,19 @@ ctx_sched_in(struct perf_event_context *ctx,
>>  	 * in order to give them the best chance of going on.
>>  	 */
>>  	if (is_active & EVENT_PINNED)
>> -		ctx_pinned_sched_in(ctx, cpuctx);
>> +		perf_event_groups_for_each(event, node,
>> +				&ctx->pinned_groups, group_node,
>> +				group_list, group_entry)
>> +			ctx_pinned_sched_in(event, cpuctx, ctx);
>
> So this perf_event_groups_for_each() can just move into
> ctx_*_sched_in(), can't it?

Yes. Makes sense. Addressed it in v8. Thanks!

>
> Regards,
> --
> Alex
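For the archive, the shape agreed on above (iterator moved into the callee, so
callers stay unchanged) could look roughly like the sketch below. This is only
an illustration reconstructed from the quoted v7 hunks, not the actual v8
patch; the perf_event_groups_for_each() argument list is assumed from the
quote, and the remaining pinned-group error handling is elided:

```c
/* Sketch only: reconstructed from the quoted v7 hunks, not the real v8 code. */
static void
ctx_pinned_sched_in(struct perf_event_context *ctx,
		    struct perf_cpu_context *cpuctx)
{
	struct perf_event *event;
	struct rb_node *node;

	/*
	 * Iterate the rb-tree of pinned groups here, inside the callee,
	 * keeping the original function signature and its callers intact.
	 */
	perf_event_groups_for_each(event, node, &ctx->pinned_groups,
				   group_node, group_list, group_entry) {
		if (event->state <= PERF_EVENT_STATE_OFF)
			continue;
		if (!event_filter_match(event))
			continue;

		/* may need to reset tstamp_enabled */
		if (is_cgroup_event(event))
			perf_cgroup_mark_enabled(event, ctx);

		if (group_can_go_on(event, cpuctx, 1))
			group_sched_in(event, cpuctx, ctx);

		/* ... pinned-group error handling elided ... */
	}
}
```

With this shape the per-event body keeps its `continue` statements, avoiding
the `continue`-to-`return` churn the review objected to.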