From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	lukasz.luba@arm.com, sudeep.holla@arm.com, will@kernel.org,
	catalin.marinas@arm.com, linux@armlinux.org.uk,
	gregkh@linuxfoundation.org, rafael@kernel.org,
	viresh.kumar@linaro.org, amitk@kernel.org,
	daniel.lezcano@linaro.org, amit.kachhap@gmail.com,
	thara.gopinath@linaro.org, bjorn.andersson@linaro.org,
	agross@kernel.org
Subject: [PATCH v2 3/5] cpufreq: qcom-cpufreq-hw: Update offline CPUs per-cpu thermal pressure
Date: Fri, 15 Oct 2021 15:45:48 +0100
Message-Id: <20211015144550.23719-4-lukasz.luba@arm.com>
In-Reply-To: <20211015144550.23719-1-lukasz.luba@arm.com>
References: <20211015144550.23719-1-lukasz.luba@arm.com>

The thermal pressure signal gives the scheduler information about reduced
CPU capacity due to thermal throttling. It is based on a value stored in a
per-cpu 'thermal_pressure' variable. Online CPUs will get the new value
there, while offline CPUs won't. Unfortunately, when a CPU comes back
online, the value read from the per-cpu variable might be wrong (stale
data). This might affect scheduler decisions, since the scheduler would
see the CPU capacity differently from what is actually available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable when throttling is applied or removed.

Fixes: 275157b367f479 ("cpufreq: qcom-cpufreq-hw: Add dcvs interrupt support")
Reviewed-by: Thara Gopinath
Signed-off-by: Lukasz Luba
---
 drivers/cpufreq/qcom-cpufreq-hw.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index a2be0df7e174..0138b2ec406d 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -304,7 +304,8 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
 	if (capacity > max_capacity)
 		capacity = max_capacity;
 
-	arch_set_thermal_pressure(policy->cpus, max_capacity - capacity);
+	arch_set_thermal_pressure(policy->related_cpus,
+				  max_capacity - capacity);
 
 	/*
 	 * In the unlikely case policy is unregistered do not enable
-- 
2.17.1