Subject: Re: [PATCH] sched/fair: remove redundant call to cpufreq_update_util
To: Vincent Guittot <vincent.guittot@linaro.org>, rjw@rjwysocki.net,
 viresh.kumar@linaro.org, mingo@redhat.com, peterz@infradead.org,
 juri.lelli@redhat.com, rostedt@goodmis.org, bsegall@google.com,
 mgorman@suse.de, linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
References: <1579083620-24943-1-git-send-email-vincent.guittot@linaro.org>
From: Dietmar Eggemann
Date: Wed, 15 Jan 2020 21:20:47 +0100
In-Reply-To: <1579083620-24943-1-git-send-email-vincent.guittot@linaro.org>
On 15/01/2020 11:20, Vincent Guittot wrote:
> Since commit bef69dd87828 ("sched/cpufreq: Move the cfs_rq_util_change()
> call to cpufreq_update_util()"), update_load_avg() has become the central
> point for calling cpufreq (not including the update of blocked load). This
> change helps to further reduce the number of calls to cpufreq_update_util()
> and to remove the last redundant ones. With update_load_avg(), we are now
> sure that cpufreq_update_util() will be called after every task attachment
> to a cfs_rq, and especially after this event has been propagated down to
> the util_avg of the root cfs_rq, which is the level used by cpufreq
> governors like schedutil to set the frequency of a CPU.
>
> The SCHED_CPUFREQ_MIGRATION flag forces an early call to cpufreq when a
> migration happens in a cgroup while the util_avg of the root cfs_rq is not
> yet updated; this call is then duplicated by the one that happens
> immediately after, when the migration event reaches the root cfs_rq. The
> dedicated SCHED_CPUFREQ_MIGRATION flag is therefore useless and can be
> removed. The interface of attach_entity_load_avg() can also be simplified
> accordingly.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

LGTM. Doesn't this also allow us to get rid of the 'int flags' parameter of
cfs_rq_util_change()?

8<---

 kernel/sched/fair.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 328d59e8afba..f82f4fde0cd3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3110,7 +3110,7 @@ static inline void update_cfs_group(struct sched_entity *se)
 }
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
-static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
+static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
 {
 	struct rq *rq = rq_of(cfs_rq);
 
@@ -3129,7 +3129,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
 		 *
 		 * See cpu_util().
 		 */
-		cpufreq_update_util(rq, flags);
+		cpufreq_update_util(rq, 0);
 	}
 }
 
@@ -3556,7 +3556,7 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 	add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);
 
-	cfs_rq_util_change(cfs_rq, 0);
+	cfs_rq_util_change(cfs_rq);
 
 	trace_pelt_cfs_tp(cfs_rq);
 }
@@ -3577,7 +3577,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 	add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
 
-	cfs_rq_util_change(cfs_rq, 0);
+	cfs_rq_util_change(cfs_rq);
 
 	trace_pelt_cfs_tp(cfs_rq);
 }
@@ -3618,7 +3618,7 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 		update_tg_load_avg(cfs_rq, 0);
 	} else if (decayed) {
-		cfs_rq_util_change(cfs_rq, 0);
+		cfs_rq_util_change(cfs_rq);
 
 		if (flags & UPDATE_TG)
 			update_tg_load_avg(cfs_rq, 0);
@@ -3851,7 +3851,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
 static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int not_used1)
 {
-	cfs_rq_util_change(cfs_rq, 0);
+	cfs_rq_util_change(cfs_rq);
 }
 
 static inline void remove_entity_load_avg(struct sched_entity *se) {}
-- 
2.17.1