From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, peterz@infradead.org, rjw@rjwysocki.net,
    viresh.kumar@linaro.org, vincent.guittot@linaro.org, qperret@google.com,
    dietmar.eggemann@arm.com, vincent.donnefort@arm.com, lukasz.luba@arm.com,
    Beata.Michalska@arm.com, mingo@redhat.com, juri.lelli@redhat.com,
    rostedt@goodmis.org, segall@google.com, mgorman@suse.de, bristot@redhat.com,
    thara.gopinath@linaro.org, amit.kachhap@gmail.com, amitk@kernel.org,
    rui.zhang@intel.com, daniel.lezcano@linaro.org
Subject: [PATCH v3 1/3] thermal: cpufreq_cooling: Update also offline CPUs per-cpu thermal_pressure
Date: Thu, 10 Jun 2021 16:03:22 +0100
Message-Id: <20210610150324.22919-2-lukasz.luba@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210610150324.22919-1-lukasz.luba@arm.com>
References: <20210610150324.22919-1-lukasz.luba@arm.com>

The thermal pressure signal gives the scheduler information about reduced
CPU capacity due to thermal capping. It is based on a value stored in a
per-cpu 'thermal_pressure' variable. Online CPUs get the new value written
there, while offline CPUs do not. Unfortunately, when such a CPU comes back
online, the value read from its per-cpu variable might be wrong (stale
data). This can affect scheduler decisions, since the scheduler then sees a
CPU capacity different from what is actually available.

Fix it by making sure that all CPUs, online and offline, get the proper
value written to their per-cpu variable when the thermal framework sets a
capping.

Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/thermal/cpufreq_cooling.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index eeb4e4b76c0b..43b1ae8a7789 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
 	if (ret >= 0) {
 		cpufreq_cdev->cpufreq_state = state;
-		cpus = cpufreq_cdev->policy->cpus;
+		cpus = cpufreq_cdev->policy->related_cpus;
 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
 		capacity = frequency * max_capacity;
 		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
--
2.17.1
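
For reference, below is a minimal C sketch of the per-cpu thermal pressure
writer that cpufreq_set_cur_state() feeds. It is modelled on the
topology_set_thermal_pressure() helper in drivers/base/arch_topology.c
around this kernel version; treat the exact names and layout as an
approximation rather than a verbatim quote from the tree. It shows why the
cpumask passed from the cooling device matters: only CPUs present in the
mask get the new value, so a mask limited to online CPUs (policy->cpus)
leaves offline CPUs holding stale data, while policy->related_cpus covers
every CPU of the policy.

#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Per-cpu copy of the current thermal pressure, read back by the scheduler. */
static DEFINE_PER_CPU(unsigned long, thermal_pressure);

/*
 * Write the new thermal pressure for every CPU in @cpus. CPUs not in the
 * mask keep whatever value they had before, which is the stale-data
 * problem described above for offline CPUs.
 */
void topology_set_thermal_pressure(const struct cpumask *cpus,
				   unsigned long th_pressure)
{
	int cpu;

	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
}

The scheduler reads this value back per CPU (through
arch_scale_thermal_pressure()), so a CPU that was offline during an update
and later comes back reports a capacity that no longer matches the current
capping level; widening the mask to policy->related_cpus keeps the per-cpu
copy consistent for all CPUs the policy spans.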