Message-ID: <077811ea-4b63-870e-d15a-602411c4fdbf@arm.com>
Date: Mon, 10 Oct 2022 11:49:52 +0100
Subject: Re: [PATCH 2/2] cpufreq: Update CPU capacity reduction in store_scaling_max_freq()
From: Lukasz Luba
To: Vincent Guittot
Cc: Viresh Kumar, linux-kernel@vger.kernel.org, rafael@kernel.org, linux-pm@vger.kernel.org, Dietmar.Eggemann@arm.com, peterz@infradead.org, daniel.lezcano@linaro.org
References: <20220930094821.31665-1-lukasz.luba@arm.com> <20220930094821.31665-2-lukasz.luba@arm.com> <20221010053902.5rofnpzvyynumw3e@vireshk-i7> <3f9a4123-171b-5fa7-f506-341355f71483@arm.com> <8a7968c2-dbf7-5316-ef36-6d45143e0605@arm.com> <9611971c-d8dd-7877-6f50-c5afbf38b171@arm.com>
In-Reply-To:

+CC Daniel

On 10/10/22 11:22, Vincent Guittot wrote:
> On Mon, 10 Oct 2022 at 12:12, Lukasz Luba wrote:
>>
>> On 10/10/22 10:32, Vincent Guittot wrote:
>>> On Mon, 10 Oct 2022 at 11:30, Lukasz Luba wrote:
>>>>
>>>> On 10/10/22 10:15, Vincent Guittot wrote:
>>>>> On Mon, 10 Oct 2022 at 11:02, Lukasz Luba wrote:
>>>>>>
>>>>>> On 10/10/22 06:39, Viresh Kumar wrote:
>>>>>>> Would be good to always CC the scheduler maintainers for such a patch.
>>>>>>
>>>>>> Agree, I'll do that.
>>>>>>
>>>>>>> On 30-09-22, 10:48, Lukasz Luba wrote:
>>>>>>>> When the new max frequency value is stored, the task scheduler must
>>>>>>>> know about it. The scheduler uses the CPUs' capacity information in
>>>>>>>> task placement.
>>>>>>>> Use the existing mechanism which provides information about reduced
>>>>>>>> CPU capacity to the scheduler due to thermal capping.
>>>>>>>>
>>>>>>>> Signed-off-by: Lukasz Luba
>>>>>>>> ---
>>>>>>>>  drivers/cpufreq/cpufreq.c | 18 +++++++++++++++++-
>>>>>>>>  1 file changed, 17 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>>>>>>>> index 1f8b93f42c76..205d9ea9c023 100644
>>>>>>>> --- a/drivers/cpufreq/cpufreq.c
>>>>>>>> +++ b/drivers/cpufreq/cpufreq.c
>>>>>>>> @@ -27,6 +27,7 @@
>>>>>>>>  #include
>>>>>>>>  #include
>>>>>>>>  #include
>>>>>>>> +#include
>>>>>>>>  #include
>>>>>>>>  #include
>>>>>>>>  #include
>>>>>>>> @@ -718,6 +719,8 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
>>>>>>>>  static ssize_t store_scaling_max_freq
>>>>>>>>  (struct cpufreq_policy *policy, const char *buf, size_t count)
>>>>>>>>  {
>>>>>>>> +	unsigned int frequency;
>>>>>>>> +	struct cpumask *cpus;
>>>>>>>>  	unsigned long val;
>>>>>>>>  	int ret;
>>>>>>>>
>>>>>>>> @@ -726,7 +729,20 @@ static ssize_t store_scaling_max_freq
>>>>>>>>  		return -EINVAL;
>>>>>>>>
>>>>>>>>  	ret = freq_qos_update_request(policy->max_freq_req, val);
>>>>>>>> -	return ret >= 0 ? count : ret;
>>>>>>>> +	if (ret >= 0) {
>>>>>>>> +		/*
>>>>>>>> +		 * Make sure that the task scheduler sees these CPUs'
>>>>>>>> +		 * capacity reduction. Use the thermal pressure mechanism
>>>>>>>> +		 * to propagate this information to the scheduler.
>>>>>>>> +		 */
>>>>>>>> +		cpus = policy->related_cpus;
>>>>>>>
>>>>>>> No need of this, just use related_cpus directly.
>>>>>>>
>>>>>>>> +		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
>>>>>>>> +		arch_update_thermal_pressure(cpus, frequency);
>>>>>>>
>>>>>>> I wonder if using the thermal-pressure API here is the right thing to
>>>>>>> do. It is a change coming from user space, which may or may not be
>>>>>>> thermal-related.
>>>>>>
>>>>>> Yes, I thought the same. The thermal-pressure name might not be the
>>>>>> best fit for this use case. I have been thinking about this thermal
>>>>>> pressure mechanism for a while, since there are other use cases, like
>>>>>> PowerCap DTPM, which also reduce CPU capacity because of a power
>>>>>> policy from user space. We don't notify the scheduler about it. There
>>>>>> might also be an issue with a virtual guest OS and how that kernel
>>>>>> 'sees' the capacity of its CPUs. We might try to use this
>>>>>> 'thermal-pressure' in the guest kernel to notify about the available
>>>>>> CPU capacity (just a proposal, not even an RFC, since we are missing
>>>>>> requirements, but the issues were discussed at LPC 2022 on
>>>>>> ChromeOS+Android_guest).
>>>>>
>>>>> The user space setting of scaling_max_freq is a long time-scale event
>>>>> and it should be considered as a new running environment instead of a
>>>>> transient event. I would suggest updating the EM and the capacity_orig
>>>>> of the system in this case. Similarly, we rebuild the sched_domains on
>>>>> CPU hotplug. The scaling_max_freq interface should not be used to do
>>>>> any kind of dynamic scaling.
>>>>
>>>> I tend to agree, but the EM capacity would only be used in part of the
>>>> EAS code. The whole fair.c view of capacity_of() (RT + DL + irq +
>>>> thermal_pressure) would still be wrong in other parts, e.g.
>>>> select_idle_sibling() and load balance.
>>>>
>>>> When we get this power hint we might already be in the overutilized
>>>> state, so EAS is disabled. IMO other mechanisms in the task scheduler
>>>> should also be aware of that capacity reduction.
>>>
>>> That's why I also mentioned the capacity_orig.
>>
>> Well, I think this is a bit more complex. The thermal framework
>> governor removes the top performance states from the ascending
>> frequency table and keeps that in the sysfs statistics. It also
>> updates the thermal pressure signal. When we rebuild the capacity of
>> the CPUs and make capacity_orig smaller, capacity_of() would still
>> carry the capacity already reduced by the thermal framework. We would
>> end up with too small a CPU capacity due to this double subtraction
>> in capacity_of().
>
> That's why the user space interface should not be used to do dynamic scaling.
> I still think that the user space interface is not the right one here.
>
>> Ideally, I would like to see a mechanism which is aware of the reason
>> for the performance reduction:
>> 1. thermal capping
>> 2. power capping (from DTPM)
>> 3. max freq reduction by user space
>
> Yes for thermal and power capping, but no for user space.
>
>> That common place would work out and maintain the context for the
>> requested capacity reduction.
>>
>> BTW, those Android user space max freq requests are not that long,
>> mostly due to camera capturing (you can see a few in this file,
>> e.g. [1]).
>
> Why are they doing this?
> This doesn't seem to be the correct interface to use. It seems to be
> doing some power budgeting, and they should use the right interface
> for that.

Yes, I agree. I have sent an explanation of this in my replies to
Peter's emails. Daniel is trying to give them a better interface, DTPM,
but it would suffer from the same capacity-reduction issue for this
short period. We have had a few discussions about it, and Daniel has
also presented those issues at a few LPCs.
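
[Editor's note] To make the arithmetic behind the double-subtraction
concern concrete, here is a minimal, self-contained C sketch (plain
user-space C, not kernel code; the helper names, the example
frequencies and the 1024-based capacity scale are illustrative
assumptions, not the real arch_update_thermal_pressure() or
capacity_of() implementations). It shows how a user-space frequency cap
maps to a "pressure" value, and what happens if capacity_orig is shrunk
for the same cap as well:

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Capacity lost when the CPU is capped at capped_freq out of max_freq. */
static unsigned long pressure_from_cap(unsigned long capacity_orig,
                                       unsigned long capped_freq,
                                       unsigned long max_freq)
{
        unsigned long capped_capacity = capacity_orig * capped_freq / max_freq;

        return capacity_orig - capped_capacity;
}

/*
 * Capacity left for CFS tasks once the pressure is subtracted
 * (RT/DL/irq contributions are ignored in this sketch).
 */
static unsigned long capacity_for_cfs(unsigned long capacity_orig,
                                      unsigned long pressure)
{
        return capacity_orig - pressure;
}

int main(void)
{
        unsigned long capacity_orig = SCHED_CAPACITY_SCALE; /* a big CPU */
        unsigned long max_freq = 2800000;  /* kHz, assumed hardware max */
        unsigned long user_cap = 1400000;  /* kHz, assumed scaling_max_freq */

        unsigned long pressure = pressure_from_cap(capacity_orig, user_cap,
                                                   max_freq);

        /* Cap reported only as pressure: 1024 - 512 = 512. */
        printf("pressure=%lu capacity_seen=%lu\n",
               pressure, capacity_for_cfs(capacity_orig, pressure));

        /*
         * If capacity_orig were also shrunk for the same cap, the
         * reduction would be applied twice: 512 - 512 = 0.
         */
        unsigned long shrunk_orig = capacity_orig * user_cap / max_freq;

        printf("double_counted_capacity=%lu\n",
               capacity_for_cfs(shrunk_orig, pressure));

        return 0;
}

The zero in the second printout is the double counting described above:
the same frequency cap accounted for both in a reduced capacity_orig
and in the pressure signal.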