From: Vincent Guittot
Date: Mon, 10 Oct 2022 14:21:11 +0200
Subject: Re: [PATCH 2/2] cpufreq: Update CPU capacity reduction in store_scaling_max_freq()
In-Reply-To: <077811ea-4b63-870e-d15a-602411c4fdbf@arm.com>
To: Lukasz Luba
Cc: Viresh Kumar, linux-kernel@vger.kernel.org, rafael@kernel.org, linux-pm@vger.kernel.org,
    Dietmar.Eggemann@arm.com, peterz@infradead.org, daniel.lezcano@linaro.org

On Mon, 10 Oct 2022 at 12:49, Lukasz Luba wrote:
>
> +CC Daniel
>
> On 10/10/22 11:22, Vincent Guittot wrote:
> > On Mon, 10 Oct 2022 at 12:12, Lukasz Luba wrote:
> >>
> >> On 10/10/22 10:32, Vincent Guittot wrote:
> >>> On Mon, 10 Oct 2022 at 11:30, Lukasz Luba wrote:
> >>>>
> >>>> On 10/10/22 10:15, Vincent Guittot wrote:
> >>>>> On Mon, 10 Oct 2022 at 11:02, Lukasz Luba wrote:
> >>>>>>
> >>>>>> On 10/10/22 06:39, Viresh Kumar wrote:
> >>>>>>> It would be good to always CC the scheduler maintainers for such a patch.
> >>>>>>
> >>>>>> Agreed, I'll do that.
> >>>>>>
> >>>>>>> On 30-09-22, 10:48, Lukasz Luba wrote:
> >>>>>>>> When the new max frequency value is stored, the task scheduler must
> >>>>>>>> know about it. The scheduler uses the CPUs' capacity information in
> >>>>>>>> task placement. Use the existing mechanism which provides information
> >>>>>>>> about reduced CPU capacity to the scheduler due to thermal capping.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Lukasz Luba
> >>>>>>>> ---
> >>>>>>>>  drivers/cpufreq/cpufreq.c | 18 +++++++++++++++++-
> >>>>>>>>  1 file changed, 17 insertions(+), 1 deletion(-)
> >>>>>>>>
> >>>>>>>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> >>>>>>>> index 1f8b93f42c76..205d9ea9c023 100644
> >>>>>>>> --- a/drivers/cpufreq/cpufreq.c
> >>>>>>>> +++ b/drivers/cpufreq/cpufreq.c
> >>>>>>>> @@ -27,6 +27,7 @@
> >>>>>>>>  #include
> >>>>>>>>  #include
> >>>>>>>>  #include
> >>>>>>>> +#include
> >>>>>>>>  #include
> >>>>>>>>  #include
> >>>>>>>>  #include
> >>>>>>>> @@ -718,6 +719,8 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
> >>>>>>>>  static ssize_t store_scaling_max_freq
> >>>>>>>>  (struct cpufreq_policy *policy, const char *buf, size_t count)
> >>>>>>>>  {
> >>>>>>>> +	unsigned int frequency;
> >>>>>>>> +	struct cpumask *cpus;
> >>>>>>>>  	unsigned long val;
> >>>>>>>>  	int ret;
> >>>>>>>>
> >>>>>>>> @@ -726,7 +729,20 @@ static ssize_t store_scaling_max_freq
> >>>>>>>>  		return -EINVAL;
> >>>>>>>>
> >>>>>>>>  	ret = freq_qos_update_request(policy->max_freq_req, val);
> >>>>>>>> -	return ret >= 0 ? count : ret;
> >>>>>>>> +	if (ret >= 0) {
> >>>>>>>> +		/*
> >>>>>>>> +		 * Make sure that the task scheduler sees these CPUs'
> >>>>>>>> +		 * capacity reduction. Use the thermal pressure mechanism
> >>>>>>>> +		 * to propagate this information to the scheduler.
> >>>>>>>> +		 */
> >>>>>>>> +		cpus = policy->related_cpus;
> >>>>>>>
> >>>>>>> No need for this, just use related_cpus directly.
> >>>>>>>
> >>>>>>>> +		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
> >>>>>>>> +		arch_update_thermal_pressure(cpus, frequency);
> >>>>>>>
> >>>>>>> I wonder if using the thermal-pressure API here is the right thing to
> >>>>>>> do. It is a change coming from the user, which may or may not be
> >>>>>>> thermal-related.
> >>>>>>
> >>>>>> Yes, I thought the same. Thermal pressure might not be the best
> >>>>>> name for covering this use case. I have been thinking about this
> >>>>>> thermal pressure mechanism for a while, since there are other
> >>>>>> use cases, like PowerCap DTPM, which also reduce CPU capacity
> >>>>>> because of a power policy coming from user space. We don't notify
> >>>>>> the scheduler about it. There might also be an issue with a virtual
> >>>>>> guest OS and how that kernel 'sees' the capacity of its CPUs.
> >>>>>> We might try to use this 'thermal pressure' in the guest kernel
> >>>>>> to notify about the available CPU capacity (just a proposal, not
> >>>>>> even an RFC, since we are missing requirements, but the issues
> >>>>>> were discussed at LPC 2022 for the ChromeOS + Android guest case).
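(Background for the exchange above: arch_update_thermal_pressure() turns a
capped frequency into a capacity delta which is stored per CPU and which
fair.c later subtracts. A simplified, hedged sketch of that conversion,
loosely modelled on topology_update_thermal_pressure() in
drivers/base/arch_topology.c; the function and per-CPU variable names here
are illustrative, and details such as the kHz-to-MHz scaling of the stored
max frequency are omitted:)

#include <linux/arch_topology.h>
#include <linux/cpumask.h>
#include <linux/math.h>
#include <linux/percpu.h>

/* Illustrative stand-in for the per-CPU thermal pressure variable. */
static DEFINE_PER_CPU(unsigned long, capped_pressure);

static void sketch_update_pressure(const struct cpumask *cpus,
				   unsigned long capped_freq,
				   unsigned long max_freq)
{
	int cpu = cpumask_first(cpus);
	unsigned long max_cap = arch_scale_cpu_capacity(cpu);
	unsigned long cap, pressure;

	/* Scale the CPU capacity linearly with the frequency cap. */
	if (capped_freq >= max_freq)
		cap = max_cap;
	else
		cap = mult_frac(max_cap, capped_freq, max_freq);

	/* Publish the lost capacity; PELT averages it, fair.c subtracts it. */
	pressure = max_cap - cap;
	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(capped_pressure, cpu), pressure);
}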
> >>>>>
> >>>>> The user space setting of scaling_max_freq is a long-scale event and
> >>>>> it should be considered as a new running environment rather than a
> >>>>> transient event. I would suggest updating the EM and the capacity_orig
> >>>>> of the system in this case. Similarly, we rebuild the sched domains on
> >>>>> CPU hotplug. The scaling_max_freq interface should not be used to do
> >>>>> any kind of dynamic scaling.
> >>>>
> >>>> I tend to agree, but the EM capacity would only be used in part of the
> >>>> EAS code. The whole fair.c view of capacity_of() (RT + DL + irq +
> >>>> thermal_pressure) would still be wrong in other parts, e.g.
> >>>> select_idle_sibling() and load balance.
> >>>>
> >>>> When we get this power hint we might already be in the overutilized
> >>>> state, so EAS is disabled. IMO other mechanisms in the task scheduler
> >>>> should also be aware of that capacity reduction.
> >>>
> >>> That's why I also mentioned capacity_orig.
> >>
> >> Well, I think this is a bit more complex. The thermal framework governor
> >> removes the performance states from the top of the ascending frequency
> >> table and keeps that in the statistics in sysfs. It also updates the
> >> thermal pressure signal. If we then rebuilt the capacity of the CPUs and
> >> made capacity_orig smaller, capacity_of would still carry the capacity
> >> reduction from the thermal framework. We would end up with too small a
> >> CPU capacity due to this subtraction in capacity_of.
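(To make the double-accounting concern above concrete: a hedged sketch of
how fair.c derives the available capacity, loosely following
scale_rt_capacity(); the irq scaling step and the function name are
simplifications. If a user-space cap were folded into capacity_orig while
the thermal pressure signal still carried the same cap, the reduction would
be subtracted twice:)

#include "sched.h"	/* kernel/sched/ internal header; sketch only */

static unsigned long sketch_capacity_of(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long cap = arch_scale_cpu_capacity(cpu);	/* capacity_orig */
	unsigned long used;

	used  = READ_ONCE(rq->avg_rt.util_avg);	/* RT pressure */
	used += READ_ONCE(rq->avg_dl.util_avg);	/* DL pressure */
	used += thermal_load_avg(rq);	/* PELT-averaged thermal pressure */

	/*
	 * E.g. cap = 1024 and a user cap at 50%: thermal_load_avg() ramps
	 * towards ~512. If capacity_orig were also rebuilt down to 512, the
	 * result here would decay towards ~0 instead of ~512; the same cap
	 * would be counted twice.
	 */
	return used >= cap ? 1 : cap - used;
}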
> >
> > That's why the user space interface should not be used to do dynamic
> > scaling. I still think that the user space interface is not the right
> > interface.
> >
> >> Ideally, I would like to see a mechanism which is aware of the reason
> >> for the performance reduction:
> >> 1. thermal capping
> >> 2. power capping (from DTPM)
> >> 3. max freq reduction by user space
> >
> > Yes for thermal and power capping, but no for user space.
> >
> >> That common place would work out and maintain the context for the
> >> requested capacity reduction.
> >>
> >> BTW, those Android user space max freq requests are not that long,
> >> mostly due to camera capturing (you can see a few in this file,
> >> e.g. [1]).
> >
> > Why are they doing this?
> > This doesn't seem to be the correct interface to use. It looks like
> > some power budgeting, and they should use the right interface for that.
>
> Yes, I agree. I have sent an explanation of this in my replies to Peter's
> emails. Daniel is trying to give them a better interface, DTPM, but it
> would suffer the same capacity-reduction issue for these short events.

The comments in this thread are only about using the userspace interface
scaling_max_freq to dynamically scale the max freq and then trying to report
these changes through thermal_pressure, which is the purpose of this patch.
As said at LPC, I'm fine with renaming thermal_pressure to something more
generic, but that is not the purpose of this patch. This patch is about
connecting userspace scaling_max_freq to thermal_pressure, and that is not
the right thing to do.

> We have had a few discussions about it, and Daniel has also presented
> these issues at a few LPCs.
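(For contrast with the patch quoted above, a purely illustrative sketch of
the "new running environment" direction Vincent suggests earlier in the
thread: fold the user cap into capacity_orig once and rebuild, as after
hotplug, instead of routing it through thermal pressure. This is not code
from this thread; the helper choice, the need to scale from a saved original
capacity, and the locking are all open questions:)

#include <linux/arch_topology.h>
#include <linux/cpufreq.h>
#include <linux/cpuset.h>
#include <linux/math.h>

static void sketch_apply_user_cap(struct cpufreq_policy *policy,
				  unsigned int capped_freq)
{
	unsigned int cpu;

	for_each_cpu(cpu, policy->related_cpus) {
		/*
		 * Illustrative only: a real version would have to scale from
		 * the *original* capacity, or repeated caps would compound.
		 */
		unsigned long cap = arch_scale_cpu_capacity(cpu);

		topology_set_cpu_scale(cpu, mult_frac(cap, capped_freq,
						policy->cpuinfo.max_freq));
	}

	/* Let the scheduler see the new "environment", as after hotplug. */
	rebuild_sched_domains();
}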