Date: Mon, 10 Aug 2015 17:29:22 +0200
From: Frederic Weisbecker
To: Peter Zijlstra
Cc: Juri Lelli, LKML, Thomas Gleixner, Preeti U Murthy, Christoph Lameter, Ingo Molnar, Viresh Kumar, Rik van Riel
Subject: Re: [PATCH 07/10] sched: Migrate sched to use new tick dependency mask model
Message-ID: <20150810152920.GB31251@lerouge>
In-Reply-To: <20150810151151.GE18673@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.23 (2014-03-12)

On Mon, Aug 10, 2015 at 05:11:51PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 10, 2015 at 04:28:47PM +0200, Peter Zijlstra wrote:
> > On Mon, Aug 10, 2015 at 04:16:58PM +0200, Frederic Weisbecker wrote:
> > >
> > > I considered many times relying on hrtick btw but everyone seem to say it has a lot
> > > of overhead, especially due to clock reprogramming on schedule() calls.
> >
> > Yeah, I have some vague ideas of how to take out much of that overhead
> > (tglx will launch frozen sharks at me I suspect), but we cannot get
> > around the overhead of actually having to program the hardware and that
> > is still a significant amount on many machines.
> >
> > Supposedly machines with TSC deadline are better, but I've not tried
> > to benchmark that.
>
> Basically something along these lines.. which avoids a whole bunch of
> hrtimer stuff.
>
> But without fast hardware its all still pointless.
>
> diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
> index 76dd4f0da5ca..c279950cb8c3 100644
> --- a/include/linux/hrtimer.h
> +++ b/include/linux/hrtimer.h
> @@ -200,6 +200,7 @@ struct hrtimer_cpu_base {
>  	unsigned int			nr_retries;
>  	unsigned int			nr_hangs;
>  	unsigned int			max_hang_time;
> +	ktime_t				expires_sched;
>  #endif
>  	struct hrtimer_clock_base	clock_base[HRTIMER_MAX_CLOCK_BASES];
>  } ____cacheline_aligned;
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 5c7ae4b641c4..be9c0a555eaa 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -68,6 +68,7 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
>  {
>  	.lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
>  	.seq = SEQCNT_ZERO(hrtimer_bases.seq),
> +	.expires_sched = { .tv64 = KTIME_MAX, },
>  	.clock_base =
>  	{
>  		{
> @@ -460,7 +461,7 @@ static inline void hrtimer_update_next_timer(struct hrtimer_cpu_base *cpu_base,
>  static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base)
>  {
>  	struct hrtimer_clock_base *base = cpu_base->clock_base;
> -	ktime_t expires, expires_next = { .tv64 = KTIME_MAX };
> +	ktime_t expires, expires_next = cpu_base->expires_sched;
>  	unsigned int active = cpu_base->active_bases;
>  
>  	hrtimer_update_next_timer(cpu_base, NULL);
> @@ -1289,6 +1290,33 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
>  
>  #ifdef CONFIG_HIGH_RES_TIMERS
>  
> +void sched_hrtick_set(u64 ns)
> +{
> +	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
> +	ktime_t expires = ktime_add_ns(ktime_get(), ns);
> +
> +	raw_spin_lock(&cpu_base->lock);
> +	cpu_base->expires_sched = expires;
> +
> +	if (expires.tv64 < cpu_base->expires_next.tv64)
> +		hrtimer_force_reprogram(cpu_base, 0);
> +
> +	raw_spin_unlock(&cpu_base->lock);
> +}
> +
> +void sched_hrtick_cancel(void)
> +{
> +	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
> +
> +	raw_spin_lock(&cpu_base->lock);
> +	/*
> +	 * If the current event was this sched event, eat the superfluous
> +	 * interrupt rather than touch the hardware again.
> +	 */
> +	cpu_base->expires_sched.tv64 = KTIME_MAX;
> +	raw_spin_unlock(&cpu_base->lock);
> +}

Well, there might be a cleaner way to do this without tying it to the scheduler
tick. It could be some sort of hrtimer_cancel_soft() which, more generally,
cancels a timer without cancelling the pending interrupt itself. We might still
want to keep track of that lost interrupt though, in case a later clock
reprogramming happens to match it, with a field like cpu_base->expires_interrupt.
I thought about expires_soft and expires_hard but I think that terminology is
already used :-)

That said, that feature at least wouldn't fit nohz full, which really wants to
avoid spurious interrupts.