From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Peter Zijlstra, Chris Metcalf, Thomas Gleixner,
    Luiz Capitulino, Christoph Lameter, Ingo Molnar, Viresh Kumar,
    Rik van Riel
Subject: [PATCH 3/7] perf: Migrate perf to use new tick dependency mask model
Date: Fri, 13 Nov 2015 15:22:05 +0100
Message-Id: <1447424529-13671-4-git-send-email-fweisbec@gmail.com>
X-Mailer: git-send-email 2.5.3
In-Reply-To: <1447424529-13671-1-git-send-email-fweisbec@gmail.com>
References: <1447424529-13671-1-git-send-email-fweisbec@gmail.com>

Instead of providing asynchronous checks for the nohz subsystem to verify
the perf event tick dependency, migrate perf to the new dependency mask.

Perf needs the tick for two situations:

1) Freq events. We could set the tick dependency when those are installed
   on a CPU context. But setting a global dependency on top of the global
   freq events accounting is much easier. If people want that to be
   optimized, we can still refine it later at the per-CPU tick dependency
   level. This patch doesn't change the current behaviour anyway.

2) Throttled events: this is a per-cpu dependency.

Cc: Christoph Lameter
Cc: Chris Metcalf
Cc: Ingo Molnar
Cc: Luiz Capitulino
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Thomas Gleixner
Cc: Viresh Kumar
Signed-off-by: Frederic Weisbecker
---
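For review convenience, a condensed sketch of how the two dependencies pair
up with the tick_nohz_*_dep*() helpers this series introduces. This only
restates the hunks below (all identifiers come from the diff), it is not
additional code:

	/* 1) Freq events: global dependency, refcounted via nr_freq_events */
	if (atomic_inc_return(&nr_freq_events) == 1)	/* first freq event accounted */
		tick_nohz_set_dep(TICK_PERF_EVENTS_BIT);
	...
	if (atomic_dec_and_test(&nr_freq_events))	/* last freq event unaccounted */
		tick_nohz_clear_dep(TICK_PERF_EVENTS_BIT);

	/* 2) Throttled events: per-cpu dependency */
	__this_cpu_inc(perf_throttled_count);		/* __perf_event_overflow() throttles */
	tick_nohz_set_dep_cpu(TICK_PERF_EVENTS_BIT, smp_processor_id());
	...
	throttled = __this_cpu_xchg(perf_throttled_count, 0);	/* next perf_event_task_tick() */
	tick_nohz_clear_dep_cpu(TICK_PERF_EVENTS_BIT, smp_processor_id());

The global set/clear pair keeps the tick alive on every full dynticks CPU as
long as at least one freq event exists, whereas the *_cpu pair only pins the
tick on the CPU that got throttled, until the next tick unthrottles it.
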
 include/linux/perf_event.h |  6 ------
 kernel/events/core.c       | 22 +++++++---------------
 kernel/time/tick-sched.c   |  6 ------
 3 files changed, 7 insertions(+), 27 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 092a0e8..63a377f 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1023,12 +1023,6 @@ static inline void perf_event_task_tick(void) { }
 static inline int perf_event_release_kernel(struct perf_event *event) { return 0; }
 #endif
 
-#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_NO_HZ_FULL)
-extern bool perf_event_can_stop_tick(void);
-#else
-static inline bool perf_event_can_stop_tick(void) { return true; }
-#endif
-
 #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
 extern void perf_restore_debug_store(void);
 #else
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b11756f..1ee70ba 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3099,17 +3099,6 @@ done:
 	return rotate;
 }
 
-#ifdef CONFIG_NO_HZ_FULL
-bool perf_event_can_stop_tick(void)
-{
-	if (atomic_read(&nr_freq_events) ||
-	    __this_cpu_read(perf_throttled_count))
-		return false;
-	else
-		return true;
-}
-#endif
-
 void perf_event_task_tick(void)
 {
 	struct list_head *head = this_cpu_ptr(&active_ctx_list);
@@ -3120,6 +3109,7 @@ void perf_event_task_tick(void)
 
 	__this_cpu_inc(perf_throttled_seq);
 	throttled = __this_cpu_xchg(perf_throttled_count, 0);
+	tick_nohz_clear_dep_cpu(TICK_PERF_EVENTS_BIT, smp_processor_id());
 
 	list_for_each_entry_safe(ctx, tmp, head, active_ctx_list)
 		perf_adjust_freq_unthr_context(ctx, throttled);
@@ -3540,8 +3530,10 @@ static void unaccount_event(struct perf_event *event)
 		atomic_dec(&nr_comm_events);
 	if (event->attr.task)
 		atomic_dec(&nr_task_events);
-	if (event->attr.freq)
-		atomic_dec(&nr_freq_events);
+	if (event->attr.freq) {
+		if (atomic_dec_and_test(&nr_freq_events))
+			tick_nohz_clear_dep(TICK_PERF_EVENTS_BIT);
+	}
 	if (event->attr.context_switch) {
 		static_key_slow_dec_deferred(&perf_sched_events);
 		atomic_dec(&nr_switch_events);
@@ -6307,9 +6299,9 @@ static int __perf_event_overflow(struct perf_event *event,
 		if (unlikely(throttle
 			     && hwc->interrupts >= max_samples_per_tick)) {
 			__this_cpu_inc(perf_throttled_count);
+			tick_nohz_set_dep_cpu(TICK_PERF_EVENTS_BIT, smp_processor_id());
 			hwc->interrupts = MAX_INTERRUPTS;
 			perf_log_throttle(event, 0);
-			tick_nohz_full_kick();
 			ret = 1;
 		}
 	}
@@ -7695,7 +7687,7 @@ static void account_event(struct perf_event *event)
 		atomic_inc(&nr_task_events);
 	if (event->attr.freq) {
 		if (atomic_inc_return(&nr_freq_events) == 1)
-			tick_nohz_full_kick_all();
+			tick_nohz_set_dep(TICK_PERF_EVENTS_BIT);
 	}
 	if (event->attr.context_switch) {
 		atomic_inc(&nr_switch_events);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index b9ea21d..9bb19c8 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -22,7 +22,6 @@
 #include <linux/module.h>
 #include <linux/irq_work.h>
 #include <linux/posix-timers.h>
-#include <linux/perf_event.h>
 #include <linux/context_tracking.h>
 
 #include <asm/irq_regs.h>
@@ -213,11 +212,6 @@ static bool can_stop_full_tick(struct tick_sched *ts)
 		return false;
 	}
 
-	if (!perf_event_can_stop_tick()) {
-		trace_tick_stop(0, "perf events running\n");
-		return false;
-	}
-
 #ifdef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
 	/*
 	 * sched_clock_tick() needs us?
-- 
2.5.3