Date: Thu, 31 Mar 2016 18:21:41 +0200
From: Peter Zijlstra
To: Arnaldo Carvalho de Melo
Cc: kan.liang@intel.com, ak@linux.intel.com, eranian@google.com,
	vincent.weaver@maine.edu, tglx@linutronix.de, mingo@kernel.org,
	acme@redhat.com, jolsa@redhat.com, alexander.shishkin@linux.intel.com,
	ying.huang@linux.intel.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 1/1] perf/core: don't find side-band event from all pmus
Message-ID: <20160331162141.GF11035@twins.programming.kicks-ass.net>
References: <1458757477-3781-1-git-send-email-kan.liang@intel.com>
	<20160329120609.GG3408@twins.programming.kicks-ass.net>
	<20160331144439.GB27708@kernel.org>
	<20160331145621.GM3430@twins.programming.kicks-ass.net>
In-Reply-To: <20160331145621.GM3430@twins.programming.kicks-ass.net>

On Thu, Mar 31, 2016 at 04:56:21PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 31, 2016 at 11:44:39AM -0300, Arnaldo Carvalho de Melo wrote:
> > It probably will cope, but can't we just emit one single record?
>
> I'll try and figure something out...

less clever but probably good enough..

---
Subject: perf/core: don't find side-band event from all pmus
From: Kan Liang
Date: Wed, 23 Mar 2016 11:24:37 -0700

The perf_event_aux() function goes through all pmus and all events in
whatever contexts to find the side-band events to output, which is
unnecessary and expensive.

For example, the brk test case in lkp triggers many mmap operations
while perf with cycles:pp is also running on the system. As a result,
perf_event_aux() is invoked many times, and each invocation searches
all pmus and all events. If uncore support is enabled (even when the
uncore events are not actually used), dozens of uncore pmus are added
to the pmus list, which significantly decreases brk_test's
ops_per_sec: in our tests, ops_per_sec drops from 2647573 without the
uncore patch to 1768444 with it, a 33.2% reduction.

To get at the per-cpu side-band events, this patch sorts the side-band
events into four categories, tracked by four per-cpu lists; only the
categories selected by the mask are searched. To get at the per-task
side-band events, each task context of the current task is searched,
because we don't want to update more global state on context switch.

Cc: vincent.weaver@maine.edu
Cc: mingo@kernel.org
Cc: acme@redhat.com
Cc: jolsa@redhat.com
Cc: ak@linux.intel.com
Cc: tglx@linutronix.de
Cc: eranian@google.com
Cc: alexander.shishkin@linux.intel.com
Reported-by: Huang, Ying
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
Signed-off-by: Peter Zijlstra (Intel)
Link: http://lkml.kernel.org/r/1458757477-3781-1-git-send-email-kan.liang@intel.com
---
The V1 patch is "perf/core: find auxiliary events in running pmus list",
https://lkml.org/lkml/2016/2/24/961. This V2 patch changes almost
everything compared with V1 and is mainly based on Peter's suggestion.
I didn't rename perf_event_aux to perf_event_sb, though, because there
are already many "aux" things in the code, e.g. the AUX area in the
ring buffer, and I'm not sure we need to change every "aux" to "sb".
We can do the rename later in a separate patch.
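Before the diff, a rough user-space model of the idea (illustrative
only; just the channel names mirror the patch): events are filed into
per-channel lists once at creation time, so delivery walks exactly the
lists named in the mask instead of every pmu context.

#include <stdio.h>

/* Mirrors enum event_sb_channel from the patch below. */
enum sb_channel { sb_mmap, sb_comm, sb_task, sb_switch, sb_nr };

struct sb_event {
	const char	*name;
	struct sb_event	*next[sb_nr];	/* stand-in for sb_list[sb_nr] */
};

static struct sb_event *sb_lists[sb_nr];	/* one list per channel */

/* Creation time: hook the event into each channel it subscribed to. */
static void sb_attach(struct sb_event *e, enum sb_channel sb)
{
	e->next[sb] = sb_lists[sb];
	sb_lists[sb] = e;
}

/* Delivery time: walk only the channels selected by the mask. */
static void sb_deliver(unsigned int sb_mask, const char *record)
{
	for (int sb = 0; sb < sb_nr; sb++) {
		if (!(sb_mask & (1U << sb)))
			continue;
		for (struct sb_event *e = sb_lists[sb]; e; e = e->next[sb])
			printf("%s -> %s\n", record, e->name);
	}
}

int main(void)
{
	static struct sb_event rec = { .name = "perf-record" };

	sb_attach(&rec, sb_mmap);	/* e.g. attr.mmap2 is set */
	sb_attach(&rec, sb_comm);	/* e.g. attr.comm is set */

	/* An mmap record walks only the sb_mmap list. */
	sb_deliver(1U << sb_mmap, "PERF_RECORD_MMAP2");
	return 0;
}
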
 include/linux/perf_event.h |   26 ++++++++
 kernel/events/core.c       |  134 +++++++++++++++++++++++++++++++++++++++------
 2 files changed, 143 insertions(+), 17 deletions(-)

--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -437,6 +437,31 @@ struct swevent_hlist {
 struct perf_cgroup;
 struct ring_buffer;
 
+struct pmu_event_list {
+	raw_spinlock_t		lock;
+	struct list_head	list;
+};
+
+/*
+ * {mmap,mmap_data,mmap2} -> mmap
+ * {comm,comm_exec} -> comm
+ * task
+ * context_switch
+ */
+enum event_sb_channel {
+	sb_mmap = 0,
+	sb_comm,
+	sb_task,
+	sb_switch,
+
+	sb_nr,
+};
+
+#define IS_SB_MMAP(attr)	\
+	(attr.mmap || attr.mmap_data || attr.mmap2)
+#define IS_SB_COMM(attr)	\
+	(attr.comm || attr.comm_exec)
+
 /**
  * struct perf_event - performance event kernel representation:
  */
@@ -589,6 +614,7 @@ struct perf_event {
 	int				cgrp_defer_enabled;
 #endif
 
+	struct list_head		sb_list[sb_nr];
 #endif /* CONFIG_PERF_EVENTS */
 };
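A single event can thus sit on up to four per-cpu lists at once. As an
illustration of how the attr bits select channels (a hypothetical
helper for exposition only; the patch open-codes these tests in
account_pmu_sb_event()/unaccount_pmu_sb_event() below):

static unsigned int event_sb_channels(struct perf_event_attr *attr)
{
	unsigned int mask = 0;

	if (attr->mmap || attr->mmap_data || attr->mmap2)	/* IS_SB_MMAP */
		mask |= 1 << sb_mmap;
	if (attr->comm || attr->comm_exec)			/* IS_SB_COMM */
		mask |= 1 << sb_comm;
	if (attr->task)
		mask |= 1 << sb_task;
	if (attr->context_switch)
		mask |= 1 << sb_switch;

	return mask;
}
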
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -333,6 +333,7 @@ static atomic_t perf_sched_count;
 
 static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
 static DEFINE_PER_CPU(int, perf_sched_cb_usages);
+static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events[sb_nr]);
 
 static atomic_t nr_mmap_events __read_mostly;
 static atomic_t nr_comm_events __read_mostly;
@@ -3598,6 +3599,37 @@ static void free_event_rcu(struct rcu_he
 static void ring_buffer_attach(struct perf_event *event,
 			       struct ring_buffer *rb);
 
+static void detach_sb_event(struct perf_event *event, enum event_sb_channel sb)
+{
+	struct pmu_event_list *pel = per_cpu_ptr(&pmu_sb_events[sb], event->cpu);
+
+	raw_spin_lock(&pel->lock);
+	list_del_rcu(&event->sb_list[sb]);
+	raw_spin_unlock(&pel->lock);
+}
+
+static void unaccount_pmu_sb_event(struct perf_event *event)
+{
+	if (event->parent)
+		return;
+
+	if (event->attach_state & PERF_ATTACH_TASK)
+		return;
+
+	if (IS_SB_MMAP(event->attr))
+		detach_sb_event(event, sb_mmap);
+
+	if (IS_SB_COMM(event->attr))
+		detach_sb_event(event, sb_comm);
+
+	if (event->attr.task)
+		detach_sb_event(event, sb_task);
+
+	if (event->attr.context_switch)
+		detach_sb_event(event, sb_switch);
+
+}
+
 static void unaccount_event_cpu(struct perf_event *event, int cpu)
 {
 	if (event->parent)
@@ -3661,6 +3693,8 @@ static void unaccount_event(struct perf_
 	}
 
 	unaccount_event_cpu(event, event->cpu);
+
+	unaccount_pmu_sb_event(event);
 }
 
 static void perf_sched_delayed(struct work_struct *work)
@@ -5785,13 +5819,41 @@ perf_event_aux_task_ctx(perf_event_aux_o
 	rcu_read_unlock();
 }
 
+static void perf_event_sb_iterate(enum event_sb_channel sb,
+				  perf_event_aux_output_cb output,
+				  void *data)
+{
+	struct pmu_event_list *pel = this_cpu_ptr(&pmu_sb_events[sb]);
+	struct perf_event *event;
+
+	list_for_each_entry_rcu(event, &pel->list, sb_list[sb]) {
+		if (event->state < PERF_EVENT_STATE_INACTIVE)
+			continue;
+		if (!event_filter_match(event))
+			continue;
+		output(event, data);
+	}
+}
+
+static void perf_event_sb_mask(unsigned int sb_mask,
+			       perf_event_aux_output_cb output,
+			       void *data)
+{
+	int sb;
+
+	for (sb = 0; sb < sb_nr; sb++) {
+		if (!(sb_mask & (1 << sb)))
+			continue;
+		perf_event_sb_iterate(sb, output, data);
+	}
+}
+
 static void
 perf_event_aux(perf_event_aux_output_cb output, void *data,
-	       struct perf_event_context *task_ctx)
+	       struct perf_event_context *task_ctx,
+	       unsigned int sb_mask)
 {
-	struct perf_cpu_context *cpuctx;
 	struct perf_event_context *ctx;
-	struct pmu *pmu;
 	int ctxn;
 
 	/*
@@ -5806,20 +5868,15 @@ perf_event_aux(perf_event_aux_output_cb
 	}
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(pmu, &pmus, entry) {
-		cpuctx = get_cpu_ptr(pmu->pmu_cpu_context);
-		if (cpuctx->unique_pmu != pmu)
-			goto next;
-		perf_event_aux_ctx(&cpuctx->ctx, output, data);
-		ctxn = pmu->task_ctx_nr;
-		if (ctxn < 0)
-			goto next;
+	preempt_disable();
+	perf_event_sb_mask(sb_mask, output, data);
+
+	for_each_task_context_nr(ctxn) {
 		ctx = rcu_dereference(current->perf_event_ctxp[ctxn]);
 		if (ctx)
 			perf_event_aux_ctx(ctx, output, data);
-next:
-		put_cpu_ptr(pmu->pmu_cpu_context);
 	}
+	preempt_enable();
 	rcu_read_unlock();
 }
 
@@ -5991,7 +6048,8 @@ static void perf_event_task(struct task_
 
 	perf_event_aux(perf_event_task_output,
 		       &task_event,
-		       task_ctx);
+		       task_ctx,
+		       (1 << sb_task) | (1 << sb_mmap) | (1 << sb_comm));
 }
 
 void perf_event_fork(struct task_struct *task)
@@ -6070,7 +6128,8 @@ static void perf_event_comm_event(struct
 
 	perf_event_aux(perf_event_comm_output,
 		       comm_event,
-		       NULL);
+		       NULL,
+		       1 << sb_comm);
 }
 
 void perf_event_comm(struct task_struct *task, bool exec)
@@ -6301,7 +6360,8 @@ static void perf_event_mmap_event(struct
 
 	perf_event_aux(perf_event_mmap_output,
 		       mmap_event,
-		       NULL);
+		       NULL,
+		       1 << sb_mmap);
 
 	kfree(buf);
 }
@@ -6489,7 +6549,8 @@ static void perf_event_switch(struct tas
 
 	perf_event_aux(perf_event_switch_output,
 		       &switch_event,
-		       NULL);
+		       NULL,
+		       1 << sb_switch);
 }
 
 /*
@@ -7986,6 +8047,37 @@ static struct pmu *perf_init_event(struc
 	return pmu;
 }
 
+static void attach_sb_event(struct perf_event *event, enum event_sb_channel sb)
+{
+	struct pmu_event_list *pel = per_cpu_ptr(&pmu_sb_events[sb], event->cpu);
+
+	raw_spin_lock(&pel->lock);
+	list_add_rcu(&event->sb_list[sb], &pel->list);
+	raw_spin_unlock(&pel->lock);
+}
+
+static void account_pmu_sb_event(struct perf_event *event)
+{
+	if (event->parent)
+		return;
+
+	if (event->attach_state & PERF_ATTACH_TASK)
+		return;
+
+	if (IS_SB_MMAP(event->attr))
+		attach_sb_event(event, sb_mmap);
+
+	if (IS_SB_COMM(event->attr))
+		attach_sb_event(event, sb_comm);
+
+	if (event->attr.task)
+		attach_sb_event(event, sb_task);
+
+	if (event->attr.context_switch)
+		attach_sb_event(event, sb_switch);
+
+}
+
 static void account_event_cpu(struct perf_event *event, int cpu)
 {
 	if (event->parent)
@@ -8066,6 +8158,8 @@ static void account_event(struct perf_ev
 enabled:
 
 	account_event_cpu(event, event->cpu);
+
+	account_pmu_sb_event(event);
 }
 
 /*
@@ -8083,6 +8177,7 @@ perf_event_alloc(struct perf_event_attr
 	struct perf_event *event;
 	struct hw_perf_event *hwc;
 	long err = -EINVAL;
+	int i;
 
 	if ((unsigned)cpu >= nr_cpu_ids) {
 		if (!task || cpu != -1)
@@ -8110,6 +8205,8 @@ perf_event_alloc(struct perf_event_attr
 	INIT_LIST_HEAD(&event->active_entry);
 	INIT_HLIST_NODE(&event->hlist_entry);
 
+	for (i = 0; i < sb_nr; i++)
+		INIT_LIST_HEAD(&event->sb_list[i]);
 
 	init_waitqueue_head(&event->waitq);
 	init_irq_work(&event->pending, perf_pending_event);
@@ -9511,11 +9608,14 @@ static void __init perf_event_init_all_c
 {
 	struct swevent_htable *swhash;
 	int cpu;
+	int i;
 
 	for_each_possible_cpu(cpu) {
 		swhash = &per_cpu(swevent_htable, cpu);
 		mutex_init(&swhash->hlist_mutex);
 		INIT_LIST_HEAD(&per_cpu(active_ctx_list, cpu));
+		for (i = 0; i < sb_nr; i++)
+			INIT_LIST_HEAD(&per_cpu(pmu_sb_events[i].list, cpu));
 	}
 }
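
For completeness, a user-space view of what a side-band consumer is (a
demo sketch, not part of the patch): a PERF_COUNT_SW_DUMMY event with
the mmap/comm/task attr bits set produces no samples and only collects
side-band records; with this patch it is reached through the per-cpu
channel lists rather than by scanning every pmu.

/* Build: gcc -o sb-demo sb-demo.c */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_DUMMY;	/* no samples, side-band only */
	attr.mmap = 1;		/* -> sb_mmap list */
	attr.mmap2 = 1;		/* -> sb_mmap list */
	attr.comm = 1;		/* -> sb_comm list */
	attr.task = 1;		/* -> sb_task list */

	/* Per-cpu event (cpu 0, all tasks); needs CAP_SYS_ADMIN or a
	 * permissive perf_event_paranoid setting. */
	int fd = perf_event_open(&attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* Map the ring buffer (1 header page + 2^n data pages; assumes
	 * 4K pages); side-band records are written here. */
	void *rb = mmap(NULL, (1 + 8) * 4096, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);
	if (rb == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	printf("collecting side-band records on cpu 0...\n");
	sleep(1);	/* records accumulate while other tasks mmap/fork/exec */

	munmap(rb, (1 + 8) * 4096);
	close(fd);
	return 0;
}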