Subject: Re: [PATCH 1/2] sched/fair: move cpufreq hook to update_cfs_rq_load_avg()
From: Steve Muckle
To: Peter Zijlstra
Cc: Dietmar Eggemann, Ingo Molnar, linux-kernel@vger.kernel.org,
 linux-pm@vger.kernel.org, "Rafael J. Wysocki", Vincent Guittot,
 Morten Rasmussen, Juri Lelli, Patrick Bellasi, Michael Turquette
Date: Wed, 30 Mar 2016 18:42:20 -0700
Message-ID: <56FC807C.80204@linaro.org>
In-Reply-To: <20160330193544.GD407@worktop>
References: <1458606068-7476-1-git-send-email-smuckle@linaro.org>
 <56F91D56.4020007@arm.com> <56F95D10.4070400@linaro.org>
 <56F97856.4040804@arm.com> <56F98832.3030207@linaro.org>
 <20160330193544.GD407@worktop>

On 03/30/2016 12:35 PM, Peter Zijlstra wrote:
> On Mon, Mar 28, 2016 at 12:38:26PM -0700, Steve Muckle wrote:
>> Without covering all the paths where CFS utilization changes it's
>> possible to have to wait up to a tick to act on some changes, since the
>> tick is the only guaranteed regularly-occurring instance of the hook.
>> That's an unacceptable amount of latency IMO...
>
> Note that even with your patches that might still be the case. Remote
> wakeups might not happen on the destination CPU at all, so it might not
> be until the next tick (which always happens locally) that we'll
> 'observe' the utilization change brought with the wakeups.
>
> We could force all the remote wakeups to IPI the destination CPU, but
> that comes at a significant performance cost.

What about only IPI'ing the destination when the utilization change is
known to require a higher CPU frequency?
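
Roughly the check I have in mind, as a standalone userspace sketch only
(not actual kernel code; the helper names, the ~25% headroom factor and
the util-to-frequency mapping are assumptions modeled on a
schedutil-style policy):

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Toy model of the proposal: map the destination CPU's new
	 * utilization to a required frequency (with ~25% headroom) and
	 * only IPI the destination if that frequency is above what the
	 * CPU is already running at. Names and values are illustrative.
	 */
	static unsigned long required_freq_khz(unsigned long util,
					       unsigned long max_capacity,
					       unsigned long max_freq_khz)
	{
		/* freq = 1.25 * max_freq * util / max_capacity */
		return (max_freq_khz + (max_freq_khz >> 2)) * util /
		       max_capacity;
	}

	static bool should_ipi_dest(unsigned long new_util,
				    unsigned long max_capacity,
				    unsigned long cur_freq_khz,
				    unsigned long max_freq_khz)
	{
		return required_freq_khz(new_util, max_capacity,
					 max_freq_khz) > cur_freq_khz;
	}

	int main(void)
	{
		/* dest CPU at 1.2 GHz of a 2.0 GHz max, capacity 0..1024 */
		unsigned long cur = 1200000, max = 2000000, cap = 1024;

		printf("util 300 -> IPI? %d\n",
		       should_ipi_dest(300, cap, cur, max));
		printf("util 700 -> IPI? %d\n",
		       should_ipi_dest(700, cap, cur, max));
		return 0;
	}

In the real wakeup path the waking CPU would of course need a cheap way
to read (or approximate) the destination's current frequency, which is
its own problem, but it would avoid paying the IPI cost for wakeups that
can't raise the frequency anyway.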