From: Vincent Guittot
Date: Mon, 10 Oct 2022 11:15:07 +0200
Subject: Re: [PATCH 2/2] cpufreq: Update CPU capacity reduction in store_scaling_max_freq()
To: Lukasz Luba
Cc: Viresh Kumar, linux-kernel@vger.kernel.org, rafael@kernel.org, linux-pm@vger.kernel.org, Dietmar.Eggemann@arm.com, peterz@infradead.org
In-Reply-To: <3f9a4123-171b-5fa7-f506-341355f71483@arm.com>
References: <20220930094821.31665-1-lukasz.luba@arm.com> <20220930094821.31665-2-lukasz.luba@arm.com> <20221010053902.5rofnpzvyynumw3e@vireshk-i7> <3f9a4123-171b-5fa7-f506-341355f71483@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Mon, 10 Oct 2022 at 11:02, Lukasz Luba wrote:
>
> On 10/10/22 06:39, Viresh Kumar wrote:
> > It would be good to always CC the scheduler maintainers for such a
> > patch.
>
> Agree, I'll do that.
>
> > On 30-09-22, 10:48, Lukasz Luba wrote:
> >> When the new max frequency value is stored, the task scheduler must
> >> know about it. The scheduler uses the CPUs' capacity information in
> >> task placement. Use the existing mechanism which informs the
> >> scheduler about reduced CPU capacity due to thermal capping.
> >>
> >> Signed-off-by: Lukasz Luba
> >> ---
> >>  drivers/cpufreq/cpufreq.c | 18 +++++++++++++++++-
> >>  1 file changed, 17 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> >> index 1f8b93f42c76..205d9ea9c023 100644
> >> --- a/drivers/cpufreq/cpufreq.c
> >> +++ b/drivers/cpufreq/cpufreq.c
> >> @@ -27,6 +27,7 @@
> >>  #include
> >>  #include
> >>  #include
> >> +#include
> >>  #include
> >>  #include
> >>  #include
> >> @@ -718,6 +719,8 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
> >>  static ssize_t store_scaling_max_freq
> >>  (struct cpufreq_policy *policy, const char *buf, size_t count)
> >>  {
> >> +	unsigned int frequency;
> >> +	struct cpumask *cpus;
> >>  	unsigned long val;
> >>  	int ret;
> >>
> >> @@ -726,7 +729,20 @@ static ssize_t store_scaling_max_freq
> >>  		return -EINVAL;
> >>
> >>  	ret = freq_qos_update_request(policy->max_freq_req, val);
> >> -	return ret >= 0 ? count : ret;
> >> +	if (ret >= 0) {
> >> +		/*
> >> +		 * Make sure that the task scheduler sees these CPUs'
> >> +		 * capacity reduction. Use the thermal pressure mechanism
> >> +		 * to propagate this information to the scheduler.
> >> +		 */
> >> +		cpus = policy->related_cpus;
> >
> > No need for this; just use related_cpus directly.
> >
> >> +		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
> >> +		arch_update_thermal_pressure(cpus, frequency);
> >
> > I wonder if using the thermal-pressure API here is the right thing
> > to do. It is a change coming from the user, which may or may not be
> > thermal-related.
>
> Yes, I thought the same. The thermal-pressure name might not be the
> best fit for this use case. I have been thinking about the thermal
> pressure mechanism for a while, since there are other use cases, such
> as PowerCap DTPM, which also reduce CPU capacity because of a power
> policy set from user space, and we don't notify the scheduler about
> them. There might also be an issue with virtual guest OSes and how
> those kernels 'see' the capacity of CPUs. We might try to use this
> 'thermal pressure' in the guest kernel to report the available CPU
> capacity (just a proposal, not even an RFC, since we are missing
> requirements, but the issues were discussed at LPC 2022 in the
> ChromeOS + Android-guest session).

User space setting scaling_max_freq is a long-scale event, and it
should be considered a new running environment rather than a transient
event. I would suggest updating the EM and the capacity_orig of the
system in this case, similarly to how we rebuild sched_domains on a
CPU hotplug. The scaling_max_freq interface should not be used for any
kind of dynamic scaling.
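
For concreteness, folding Viresh's simplification into the hunk quoted
above would leave the tail of store_scaling_max_freq() looking roughly
like this (a sketch derived from the patch under discussion, not a
tested change):

	ret = freq_qos_update_request(policy->max_freq_req, val);
	if (ret >= 0) {
		/*
		 * Resolve the user's value to a real table frequency
		 * and report the reduced capacity of all CPUs in the
		 * policy via the thermal-pressure mechanism.
		 */
		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
		arch_update_thermal_pressure(policy->related_cpus, frequency);
	}
	return ret >= 0 ? count : ret;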
>
> Android middleware has 'powerhints' (IIRC for ~4-5 versions now),
> but the capacity seen by the task scheduler is not aware of those
> reductions.
>
> IMO the thermal-pressure mechanism is good, but the naming might
> need to be a bit more 'generic' to cover those two users.
>
> Some proposals for a better name:
> 1. Performance capping
> 2. Capacity capping
> 3. Performance reduction
>
> What do you think about changing the name so that it covers those
> two users: PowerCap DTPM and this user-space cpufreq?
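
Whichever name is chosen, the scheduler-facing arithmetic would
presumably stay what arch_update_thermal_pressure() does today. A
simplified sketch, modelled on topology_update_thermal_pressure() in
drivers/base/arch_topology.c (the real code additionally converts
capped_freq and max_freq to a common MHz scale via a per-CPU
freq_factor and clamps boost frequencies):

	int cpu = cpumask_first(cpus);
	unsigned long max_capacity = arch_scale_cpu_capacity(cpu);
	unsigned long capacity, pressure;

	/* Capacity is assumed to scale linearly with the capped frequency. */
	capacity = mult_frac(max_capacity, capped_freq, max_freq);

	/* Publish the lost part as "thermal" pressure, per CPU. */
	pressure = max_capacity - capacity;
	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), pressure);

The scheduler then folds this per-CPU pressure into the capacity it
uses for task placement, which is why a rename would change only the
label, not the mechanics.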