Subject: Re: [PATCH V4 3/3] thermal: cpufreq_cooling: Reuse sched_cpu_util() for SMP platforms
To: Viresh Kumar
Cc: Ingo Molnar, Peter Zijlstra, Vincent Guittot, Amit Daniel Kachhap, Daniel Lezcano, Javi Merino, Zhang Rui, Amit Kucheria, linux-kernel@vger.kernel.org, Quentin Perret, Lukasz Luba, linux-pm@vger.kernel.org
References: <95991789-0308-76a9-735b-01ef620031b9@arm.com> <20201207121704.hpyw3ij3wvb5s7os@vireshk-i7>
From: Dietmar Eggemann
Date: Tue, 8 Dec 2020 13:31:26 +0100
In-Reply-To: <20201207121704.hpyw3ij3wvb5s7os@vireshk-i7>

On 07/12/2020 13:17, Viresh Kumar wrote:
> On 03-12-20, 12:54, Dietmar Eggemann wrote:
>> On 24/11/2020 07:26, Viresh Kumar wrote:

[...]

>> When I ran schbench (-t 16 -r 5) on hikey960 I get multiple (~50)
>> instances of ~80ms task activity phase and then ~20ms idle phase on all
>> CPUs.
>>
>> So I assume that scenario 1 is at the beginning (but you mentioned the
>> tasks were sleeping?)
>
> I am not able to find the exact values I used, but I did something
> like this to create a scenario where the old computations shall find
> the CPU as idle in the last IPA window:
>
> - schbench -m 2 -t 4 -s 25000 -c 20000 -r 60
>
> - sampling rate of IPA to 10 ms
>
> With this IPA wakes up many times and finds the CPUs to have been idle
> in the last IPA window (i.e. 10ms).

Ah, this makes sense. So with this there are only 8 worker threads w/
20ms runtime and 75ms period (30ms message thread time (-C) and 25ms
latency (-c)). So much more idle time between two invocations of the
worker/message threads and more IPA sampling.

[...]

>>> Old: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=800 load={{0x64,0x64,0x64,0x64,0x64,0x64,0x64,0x64}} dynamic_power=5280
>>> New: thermal_power_cpu_get_power: cpus=00000000,000000ff freq=1200000 total_load=708 load={{0x4d,0x5c,0x5c,0x5b,0x5c,0x5c,0x51,0x5b}} dynamic_power=4672
>>>
>>> As can be seen, the idle time based load is 100% for all the CPUs as it
>>> took only the last window into account, but in reality the CPUs aren't
>>> that loaded as shown by the utilization numbers.
>>
>> Is this an IPA sampling at the end of the ~20ms idle phase?
>
> This is during the phase where the CPUs were fully busy for the last
> period.

OK.
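
For anyone following along, the utilization-based load in the new path
boils down to something like the sketch below. This is only an
illustration based on the sched_cpu_util(cpu, max) helper introduced
earlier in this series, not a verbatim hunk from the patch; the function
and parameter names are assumed from the existing cpufreq_cooling code:

	/*
	 * Per-CPU load for IPA, derived from scheduler utilization
	 * instead of idle time: scale the utilization against the CPU's
	 * capacity into a 0..100 load value.
	 */
	static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev,
			    int cpu, int cpu_idx)
	{
		unsigned long max = arch_scale_cpu_capacity(cpu);
		unsigned long util = sched_cpu_util(cpu, max);

		return (util * 100) / max;
	}

That is consistent with the trace above: the new per-CPU loads
0x4d..0x5c correspond to 77..92 (util/capacity of roughly 0.77..0.92)
and sum to total_load=708, whereas the idle-time based path reports
0x64 (100) per CPU and total_load=800 because none of the CPUs had any
idle time in the last 10ms IPA window. In both traces dynamic_power
scales linearly with total_load at this frequency
(5280/800 ~= 4672/708 ~= 6.6).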