From: Vincent Guittot
Date: Thu, 31 Mar 2016 11:50:40 +0200
Subject: Re: [PATCH 1/2] sched/fair: move cpufreq hook to update_cfs_rq_load_avg()
To: Peter Zijlstra
Cc: Steve Muckle, Dietmar Eggemann, Ingo Molnar, linux-kernel, linux-pm@vger.kernel.org, "Rafael J. Wysocki", Morten Rasmussen, Juri Lelli, Patrick Bellasi, Michael Turquette

On 31 March 2016 at 11:34, Peter Zijlstra wrote:
> On Thu, Mar 31, 2016 at 11:27:22AM +0200, Vincent Guittot wrote:
>> On 30 March 2016 at 21:35, Peter Zijlstra wrote:
>> > On Mon, Mar 28, 2016 at 12:38:26PM -0700, Steve Muckle wrote:
>> >> Without covering all the paths where CFS utilization changes it's
>> >> possible to have to wait up to a tick to act on some changes, since the
>> >> tick is the only guaranteed regularly-occurring instance of the hook.
>> >> That's an unacceptable amount of latency IMO...
>> >
>> > Note that even with your patches that might still be the case.
>> > Remote wakeups might not happen on the destination CPU at all, so it
>> > might not be until the next tick (which always happens locally) that
>> > we'll 'observe' the utilization change brought by the wakeups.
>> >
>> > We could force all remote wakeups to IPI the destination CPU, but
>> > that comes at a significant performance cost.
>>
>> Isn't a reschedule IPI already sent in this case?
>
> In what case? Assuming you mean a remote wakeup, no. Only if that
> wakeup results in a preemption, which isn't a given.

Yes, I was speaking about a remote wakeup. In ttwu_queue_remote() there
is a call to smp_send_reschedule(). Is there another way to add a remote
task to the wake list?

> And we really don't want to carry the 'has util increased' information
> all the way down to where we make that decision.

Yes, I agree.