Date: Fri, 5 Feb 2021 14:44:24 +0530
From: Viresh Kumar
To: Ionela Voinescu
Cc: Rafael Wysocki, Catalin Marinas, Will Deacon, Vincent Guittot,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-pm@vger.kernel.org, Sudeep Holla, Greg Kroah-Hartman
Subject: Re: [PATCH V3 1/2] topology: Allow multiple entities to provide sched_freq_tick() callback
Message-ID: <20210205091424.3od3tme3f7mh7ebp@vireshk-i7>
References: <20210203114521.GA6380@arm.com>
In-Reply-To: <20210203114521.GA6380@arm.com>
User-Agent: NeoMutt/20180716-391-311a52
On 03-02-21, 11:45, Ionela Voinescu wrote:
> Therefore, I think system level invariance management (checks and
> call to rebuild_sched_domains_energy()) also needs to move from arm64
> code to arch_topology code.

Here is the 3rd patch of this series then :)

From: Viresh Kumar
Date: Fri, 5 Feb 2021 13:31:53 +0530
Subject: [PATCH] drivers: arch_topology: rebuild sched domains on invariance change

We already do this for arm64; move it to arch_topology.c, as we manage
all sched_freq_tick sources there now.

Reported-by: Ionela Voinescu
Signed-off-by: Viresh Kumar
---
 arch/arm64/kernel/topology.c | 16 ----------------
 drivers/base/arch_topology.c | 22 ++++++++++++++++++++++
 2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 1e47dfd465f8..47fca7376c93 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -240,7 +240,6 @@ static struct scale_freq_data amu_sfd = {
 
 static void amu_fie_setup(const struct cpumask *cpus)
 {
-	bool invariant;
 	int cpu;
 
 	/* We are already set since the last insmod of cpufreq driver */
@@ -257,25 +256,10 @@ static void amu_fie_setup(const struct cpumask *cpus)
 
 	cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
 
-	invariant = topology_scale_freq_invariant();
-
-	/* We aren't fully invariant yet */
-	if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
-		return;
-
 	topology_set_scale_freq_source(&amu_sfd, amu_fie_cpus);
 
 	pr_debug("CPUs[%*pbl]: counters will be used for FIE.",
 		 cpumask_pr_args(cpus));
-
-	/*
-	 * Task scheduler behavior depends on frequency invariance support,
-	 * either cpufreq or counter driven. If the support status changes as
-	 * a result of counter initialisation and use, retrigger the build of
-	 * scheduling domains to ensure the information is propagated properly.
-	 */
-	if (!invariant)
-		rebuild_sched_domains_energy();
 }
 
 static int init_amu_fie_callback(struct notifier_block *nb, unsigned long val,
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 20b511949cd8..3631877f4440 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -23,6 +23,7 @@
 
 static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
 static struct cpumask scale_freq_counters_mask;
+static bool scale_freq_invariant;
 
 static bool supports_scale_freq_counters(const struct cpumask *cpus)
 {
@@ -35,6 +36,23 @@ bool topology_scale_freq_invariant(void)
 	       supports_scale_freq_counters(cpu_online_mask);
 }
 
+static void update_scale_freq_invariant(bool status)
+{
+	if (scale_freq_invariant == status)
+		return;
+
+	/*
+	 * Task scheduler behavior depends on frequency invariance support,
+	 * either cpufreq or counter driven. If the support status changes as
+	 * a result of counter initialisation and use, retrigger the build of
+	 * scheduling domains to ensure the information is propagated properly.
+	 */
+	if (topology_scale_freq_invariant() == status) {
+		scale_freq_invariant = status;
+		rebuild_sched_domains_energy();
+	}
+}
+
 void topology_set_scale_freq_source(struct scale_freq_data *data,
 				    const struct cpumask *cpus)
 {
@@ -50,6 +68,8 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
 			cpumask_set_cpu(cpu, &scale_freq_counters_mask);
 		}
 	}
+
+	update_scale_freq_invariant(true);
 }
 EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
 
@@ -67,6 +87,8 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
 			cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
 		}
 	}
+
+	update_scale_freq_invariant(false);
 }
 EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
-- 
2.25.0.rc1.19.g042ed3e048af

-- 
viresh
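
[Not part of the patch: a minimal user-space sketch of the invariance
tracking added above, for anyone reviewing the logic in isolation.
system_is_invariant() and rebuild() are made-up stand-ins for
topology_scale_freq_invariant() and rebuild_sched_domains_energy();
the point is only that a rebuild fires when, and only when, the
system-wide invariance status actually flips.]

/* Hypothetical stand-alone sketch, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

static bool cpufreq_invariant;        /* cpufreq-driven invariance       */
static bool counters_cover_all_cpus;  /* counter-driven invariance       */
static bool scale_freq_invariant;     /* last status we acted upon       */

static bool system_is_invariant(void)
{
	/* Either source is enough, mirroring topology_scale_freq_invariant(). */
	return cpufreq_invariant || counters_cover_all_cpus;
}

static void rebuild(void)
{
	printf("rebuild sched domains (invariant=%d)\n", scale_freq_invariant);
}

/* Mirror of the patch's update_scale_freq_invariant(): act only on a real change. */
static void update_scale_freq_invariant(bool status)
{
	if (scale_freq_invariant == status)
		return;

	if (system_is_invariant() == status) {
		scale_freq_invariant = status;
		rebuild();
	}
}

int main(void)
{
	counters_cover_all_cpus = true;
	update_scale_freq_invariant(true);   /* flips on: rebuild once      */
	update_scale_freq_invariant(true);   /* no change: nothing to do    */

	counters_cover_all_cpus = false;
	update_scale_freq_invariant(false);  /* flips off: rebuild again    */
	return 0;
}

Registering a second tick source while the system is already invariant
leaves scale_freq_invariant untouched, so no redundant rebuild is
triggered; only a genuine on/off transition reaches rebuild().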