From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	steev@kali.org, lukasz.luba@arm.com, sudeep.holla@arm.com,
	will@kernel.org, catalin.marinas@arm.com, linux@armlinux.org.uk,
	gregkh@linuxfoundation.org, rafael@kernel.org, viresh.kumar@linaro.org,
	amitk@kernel.org, daniel.lezcano@linaro.org, amit.kachhap@gmail.com,
	thara.gopinath@linaro.org, bjorn.andersson@linaro.org, agross@kernel.org
Subject: [PATCH v4 3/5] cpufreq: qcom-cpufreq-hw: Update offline CPUs per-cpu thermal pressure
Date: Tue, 9 Nov 2021 19:57:12 +0000
Message-Id: <20211109195714.7750-4-lukasz.luba@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211109195714.7750-1-lukasz.luba@arm.com>
References:
<20211109195714.7750-1-lukasz.luba@arm.com>

The thermal pressure signal gives the scheduler information about CPU
capacity that has been reduced for thermal reasons. It is based on a value
stored in a per-cpu 'thermal_pressure' variable. Online CPUs get the new
value written there, while offline CPUs do not. Unfortunately, when such a
CPU comes back online, the value read from its per-cpu variable may be
wrong (stale data). This can affect scheduler decisions, since the
scheduler then sees a CPU capacity different from what is actually
available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable whenever throttling is applied or removed.

Fixes: 275157b367f479 ("cpufreq: qcom-cpufreq-hw: Add dcvs interrupt support")
Reviewed-by: Thara Gopinath <thara.gopinath@linaro.org>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/cpufreq/qcom-cpufreq-hw.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index a2be0df7e174..0138b2ec406d 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -304,7 +304,8 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
 	if (capacity > max_capacity)
 		capacity = max_capacity;
 
-	arch_set_thermal_pressure(policy->cpus, max_capacity - capacity);
+	arch_set_thermal_pressure(policy->related_cpus,
+				  max_capacity - capacity);
 
 	/*
 	 * In the unlikely case policy is unregistered do not enable
-- 
2.17.1