Date: Tue, 5 Nov 2019 21:04:21 +0000
From: Ionela Voinescu
To: Thara Gopinath
Cc: mingo@redhat.com, peterz@infradead.org, vincent.guittot@linaro.org,
    rui.zhang@intel.com, edubezval@gmail.com, qperret@google.com,
    linux-kernel@vger.kernel.org, amit.kachhap@gmail.com,
    javi.merino@kernel.org, daniel.lezcano@linaro.org
Subject: Re: [Patch v4 0/6] Introduce Thermal Pressure
Message-ID: <20191105210301.GA23045@e108754-lin>
References: <1571776465-29763-1-git-send-email-thara.gopinath@linaro.org>
 <20191031094420.GA19197@e108754-lin>
 <5DBB0EB0.9050106@linaro.org>
In-Reply-To: <5DBB0EB0.9050106@linaro.org>

Hi Thara,

On Thursday 31 Oct 2019 at 12:41:20 (-0400), Thara Gopinath wrote:
[...]
> >> Regarding testing, basic build, boot and sanity testing have been
> >> performed on a db845c platform with a Debian file system.
> >> Further, dhrystone and hackbench tests have been
> >> run with the thermal pressure algorithm. During testing, due to
> >> constraints of the step-wise governor in dealing with big.LITTLE
> >> systems, the trip point 0 temperature was made asymmetric between
> >> the cpus in the little cluster and the big cluster; the idea being
> >> that the big cores will heat up and the cpu cooling device will
> >> throttle the frequency of the big cores faster, thereby limiting
> >> the maximum available capacity, and the scheduler will spread out
> >> tasks to the little cores as well.
> >>
> >
> > Can you please share the changes you've made to sdm845.dtsi and a
> > kernel base on top of which to apply your patches? I would like to
> > reproduce your results and run more tests, and it would be good if
> > our setups were as close as possible.
>
> Hi Ionela,
>
> Thank you for the review.
>
> So I tested this on a 5.4-rc1 kernel. The dtsi changes are to reduce
> the thermal trip points for the big CPUs to 60000 or 70000 from the
> default 90000. I did this for two reasons:
> 1. I could never get the db845 to heat up sufficiently for my test
> cases with the default trip.
> 2. I was using the default step-wise governor for thermal. I did not
> want little and big to start throttling by the same % because then
> the task placement ratio would remain the same between little and
> big cores.
>

Some early testing on this showed that when setting the trip point to
60000 for the big CPUs and the big cluster, and running hackbench
(1 group, 30000 loops), the cooling state of the big cluster ends up
always being set to the maximum (the lowest OPP), which results in
(almost) continuous capacity inversion. For 70000 the average cooling
state of the bigs is around 20, so a few more OPPs are left available
on the bigs more of the time, but the capacity of the bigs is probably
still mostly lower than the capacity of the little CPUs during this
test as well. I think that explains the difference in the results you
obtained below. This is good, as it shows that thermal pressure is
useful, but it shouldn't show much difference between the different
decay periods, as can also be observed in your results below.
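(To illustrate what I mean by capacity inversion, here is a rough
sketch; the helper names are made up for illustration, not the
interfaces from the patch set. Once the cooling device caps the big
CPUs' frequency, their effective capacity scales down with the capped
frequency, and at the deepest cooling state it can drop below the
littles' full capacity:)

#include <stdbool.h>

/* Capacity of a CPU whose max frequency is capped by cpufreq cooling. */
static unsigned long capped_capacity(unsigned long max_capacity,
				     unsigned long capped_freq_khz,
				     unsigned long max_freq_khz)
{
	return max_capacity * capped_freq_khz / max_freq_khz;
}

/*
 * Capacity inversion: a throttled big CPU ends up with less effective
 * capacity than an unthrottled little CPU, so the bigs look like the
 * "small" cores to the scheduler for as long as the cap is in place.
 */
static bool capacity_inverted(unsigned long big_max_cap,
			      unsigned long big_capped_freq_khz,
			      unsigned long big_max_freq_khz,
			      unsigned long little_max_cap)
{
	return capped_capacity(big_max_cap, big_capped_freq_khz,
			       big_max_freq_khz) < little_max_cap;
}

For example, if the littles' full capacity is around half of the bigs',
capping the bigs below roughly half of their maximum frequency is
already enough to invert the capacities.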
That being said, I did not obtain such significant results on my side,
but I'll try again with the kernel you've pointed me to offline.

Thanks,
Ionela.

> >
> >> Test Results
> >>
> >> Hackbench: 1 group, 30000 loops, 10 runs
> >>                                                Result       SD
> >>                                                (Secs)   (% of mean)
> >> No Thermal Pressure                            14.03       2.69%
> >> Thermal Pressure PELT Algo. Decay : 32 ms      13.29       0.56%
> >> Thermal Pressure PELT Algo. Decay : 64 ms      12.57       1.56%
> >> Thermal Pressure PELT Algo. Decay : 128 ms     12.71       1.04%
> >> Thermal Pressure PELT Algo. Decay : 256 ms     12.29       1.42%
> >> Thermal Pressure PELT Algo. Decay : 512 ms     12.42       1.15%
> >>
> >> Dhrystone Run Time : 20 threads, 3000 MLOOPS
> >>                                                Result       SD
> >>                                                (Secs)   (% of mean)
> >> No Thermal Pressure                             9.452       4.49%
> >> Thermal Pressure PELT Algo. Decay : 32 ms       8.793       5.30%
> >> Thermal Pressure PELT Algo. Decay : 64 ms       8.981       5.29%
> >> Thermal Pressure PELT Algo. Decay : 128 ms      8.647       6.62%
> >> Thermal Pressure PELT Algo. Decay : 256 ms      8.774       6.45%
> >> Thermal Pressure PELT Algo. Decay : 512 ms      8.603       5.41%
> >>
> >
> > Do you happen to know by how much the CPUs were capped during these
> > experiments?
>
> I don't have any captured results here. I know that the big cores
> were capped and at times there was capacity inversion.
>
> Also, I will fix the nit comments above.
>
> >
> > Thanks,
> > Ionela.
> >
>
> --
> Warm Regards
> Thara
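P.S. For anyone following the decay-period comparison above: the
thermal pressure signal in this series is aged PELT-style, so the
"decay period" is effectively the half-life of the pressure left
behind after throttling ends. A rough userspace model follows; the
helper names are made up, and the real PELT accounting works in fixed
point over 1 ms periods rather than with pow():

#include <math.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024.0

/* Pressure remaining @ms milliseconds after the cap is lifted. */
static double decayed_pressure(double initial_pressure, double ms,
			       double half_life_ms)
{
	return initial_pressure * pow(0.5, ms / half_life_ms);
}

int main(void)
{
	/* Assume the bigs were capped to ~60% of their max capacity. */
	double initial = SCHED_CAPACITY_SCALE * (1.0 - 0.6);
	unsigned int half_life[] = { 32, 64, 128, 256, 512 };

	for (unsigned int i = 0; i < 5; i++)
		printf("decay period %3u ms: pressure after 100 ms = %5.1f\n",
		       half_life[i],
		       decayed_pressure(initial, 100.0, half_life[i]));
	return 0;
}

A longer decay period keeps capacity "stolen" for longer after the cap
is lifted, which should bias task placement toward the littles for a
bit longer; that the results differ so little between 32 ms and 512 ms
is consistent with the caps being active almost continuously.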