From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, peterz@infradead.org, rjw@rjwysocki.net,
	viresh.kumar@linaro.org, vincent.guittot@linaro.org, qperret@google.com,
	dietmar.eggemann@arm.com, vincent.donnefort@arm.com, lukasz.luba@arm.com,
	Beata.Michalska@arm.com, mingo@redhat.com, juri.lelli@redhat.com,
	rostedt@goodmis.org, segall@google.com, mgorman@suse.de, bristot@redhat.com,
	thara.gopinath@linaro.org, amit.kachhap@gmail.com, amitk@kernel.org,
	rui.zhang@intel.com, daniel.lezcano@linaro.org
Subject: [PATCH 1/3] thermal: cpufreq_cooling: Also update offline CPUs' per-cpu thermal_pressure
Date: Mon, 14 Jun 2021 20:10:30 +0100
Message-Id: <20210614191030.22241-1-lukasz.luba@arm.com>
In-Reply-To: <20210614185815.15136-1-lukasz.luba@arm.com>
References: <20210614185815.15136-1-lukasz.luba@arm.com>

The thermal pressure signal gives information to the scheduler about
reduced CPU capacity due to thermal throttling. It is based on a value
stored in a per-cpu 'thermal_pressure' variable. Online CPUs get the
new value there, while offline CPUs do not. Unfortunately, when a CPU
comes back online, the value read from its per-cpu variable might be
wrong (stale data). This can affect scheduler decisions, since the
scheduler then sees the CPU capacity differently from what is actually
available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable when the thermal framework sets a
capping.

Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 drivers/thermal/cpufreq_cooling.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index eeb4e4b76c0b..43b1ae8a7789 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
 	if (ret >= 0) {
 		cpufreq_cdev->cpufreq_state = state;
-		cpus = cpufreq_cdev->policy->cpus;
+		cpus = cpufreq_cdev->policy->related_cpus;
 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
 		capacity = frequency * max_capacity;
 		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
-- 
2.17.1
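
A note on why the one-line change is sufficient: policy->cpus holds only
the currently online CPUs of the cpufreq policy, while
policy->related_cpus holds all CPUs the policy spans, online and
offline. The capping is fanned out to every CPU in whichever mask is
handed to the arch helper, so any CPU missing from that mask keeps
whatever value was last written for it. A simplified sketch of that
update path (paraphrased from the arch_topology code of this kernel
era, not verbatim) illustrates the stale-data problem:

	/*
	 * Simplified sketch, not verbatim kernel code: the capping value is
	 * written into the per-cpu variable of every CPU present in the mask.
	 * A CPU that is offline, and therefore absent from policy->cpus,
	 * keeps its old value and reports stale thermal pressure when it
	 * comes back online - unless the mask passed in is
	 * policy->related_cpus.
	 */
	DEFINE_PER_CPU(unsigned long, thermal_pressure);

	void arch_set_thermal_pressure(const struct cpumask *cpus,
				       unsigned long th_pressure)
	{
		int cpu;

		for_each_cpu(cpu, cpus)
			WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
	}

The scheduler later consumes this per-cpu value (via
arch_scale_thermal_pressure()) when estimating how much capacity a CPU
can actually deliver, which is why stale data skews task placement.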