References: <1867445.PYKUYFuaPT@kreacher> <007101d77117$b3b837a0$1b28a6e0$@telus.net>
In-Reply-To: <007101d77117$b3b837a0$1b28a6e0$@telus.net>
From: "Rafael J. Wysocki"
Date: Mon, 5 Jul 2021 15:24:43 +0200
Subject: Re: [PATCH v1 0/5] cpuidle: teo: Rework the idle state selection logic
To: Doug Smythies
Cc: "Rafael J. Wysocki", LKML, Linux PM

On Sun, Jul 4, 2021 at 11:01 PM Doug Smythies wrote:
>
> Hi Rafael,
>
> On 2021.06.02 11:14 Rafael J. Wysocki wrote:
>
> > Hi All,
> >
> > This series of patches addresses some theoretical shortcomings in the
> > TEO (Timer Events Oriented) cpuidle governor by reworking its idle
> > state selection logic to some extent.
> >
> > Patches [1-2/5] are introductory cleanups and the substantial changes are
> > made in patches [3-4/5] (please refer to the changelogs of these two
> > patches for details).  The last patch only deals with documentation.
> >
> > Even though this work is mostly based on theoretical considerations, it
> > shows a measurable reduction of the number of cases in which the shallowest
> > idle state is selected while it would be more beneficial to select a deeper
> > one, or the deepest idle state is selected while it would be more beneficial
> > to select a shallower one, which should be a noticeable improvement.
>
> Do you have any test results to share, or test methods that I can try?
> I have done a few tests and generally don't notice much difference,
> perhaps an increase in the idle state 2 "below" (was too shallow) numbers.
> I am searching for some results that would offset the following:
>
> The difficulty I am having with this patch set is the additional overhead,
> which becomes significant at the extremes, where idle state 0 is dominant.
> Throughout the history of teo, I have used multiple one-core pipe-tests
> for this particular test.  Some results:
>
> CPU: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz
> HWP: disabled
> CPU frequency scaling driver: intel_pstate, active, powersave
> Pipe-tests run forever, printing the average loop time over the
> last 2.5 million loops; 1021 of those averages are then averaged again
> (2.5525e9 loops in total).
> The power and idle data are sampled for 100 minutes.
>
> Note 1: other tests were also done, including with passive/schedutil,
> but they are not relevant for this test because the CPU frequency
> stays pinned at maximum.
>
> Note 2: I normally use a TCC offset for thermal throttling, but I disabled
> it for these tests because the temperature needed to go higher than my
> normal throttling point.
>
> Idle configuration 1: As a COMETLAKE processor, with 4 idle states.
> Kernel 5.13-rc4.
>
> Before patch set average:
> 2.8014 uSec/loop
> 113.9 watts
> Idle state 0 residency: 9.450%
> Idle state 0 entries per minute: 256,812,896.6
>
> After patch set average:
> 2.8264 uSec/loop, 0.89% slower
> 114.0 watts
> Idle state 0 residency: 8.677%
> Idle state 0 entries per minute: 254,560,049.9
>
> Menu governor:
> 2.8051 uSec/loop, 0.13% slower
> 113.9 watts
> Idle state 0 residency: 8.437%
> Idle state 0 entries per minute: 256,436,417.2
>
> O.K., perhaps not so bad, but also not many idle states.
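As an aside, a minimal sketch of the kind of one-core pipe-test loop described above could look like the code below. This is illustrative only and is not Doug's actual test program; the loop count, the timing method, and pinning both processes to one CPU with taskset are assumptions made for the example.

/*
 * Minimal pipe-test sketch (illustrative, not the actual test program
 * used for the results in this thread).  Two processes ping-pong one
 * byte through a pair of pipes; every round trip puts each task
 * briefly to sleep, so the cpuidle governor is exercised at very
 * short idle durations (mostly idle state 0).  Run pinned to one
 * CPU, e.g.:  taskset -c 3 ./pipe-test
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define LOOPS 2500000UL		/* loops per reported average (assumed) */

int main(void)
{
	int ab[2], ba[2];	/* parent->child and child->parent pipes */
	char byte = 0;

	if (pipe(ab) || pipe(ba)) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {	/* child: echo every byte straight back */
		for (;;) {
			if (read(ab[0], &byte, 1) != 1 ||
			    write(ba[1], &byte, 1) != 1)
				_exit(0);
		}
	}

	for (;;) {		/* parent: time LOOPS round trips, repeat */
		struct timespec t0, t1;
		unsigned long i;
		double us;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (i = 0; i < LOOPS; i++) {
			if (write(ab[1], &byte, 1) != 1 ||
			    read(ba[0], &byte, 1) != 1)
				return 1;
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);

		us = (t1.tv_sec - t0.tv_sec) * 1e6 +
		     (t1.tv_nsec - t0.tv_nsec) / 1e3;
		printf("%.4f uSec/loop\n", us / LOOPS);
	}
}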
>
> Idle configuration 2: As a SKYLAKE processor, with 9 idle states,
> i.e. with this change to intel_idle_ids[] in drivers/idle/intel_idle.c:
> ...
>         X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &idle_cpu_skx),
> +       X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &idle_cpu_skl),
>
> Purpose: To demonstrate increasing overhead as a function of the number
> of idle states.
> Kernel 5.13.
>
> Before patch set average:
> 2.8394 uSec/loop
> 114.2 watts
> Idle state 0 residency: 7.212%
> Idle state 0 entries per minute: 253,391,954.3
>
> After patch set average:
> 2.9103 uSec/loop, 2.5% slower
> 114.4 watts, 0.18% more
> Idle state 0 residency: 6.152%, 14.7% less
> Idle state 0 entries per minute: 244,024,752.1
>
> Menu governor:
> 2.8141 uSec/loop, 0.89% faster
> 113.9 watts, 0.26% less
> Idle state 0 residency: 7.167%, 0.6% less
> Idle state 0 entries per minute: 255,650,610.7
>
> Another potentially interesting test was the ebizzy test:
> records per second, averaged over many runs, varying the number of
> threads and the intervals:
>
> passive, schedutil:
> Before: 6771.977
> After: 5502.643, -18.7%
> Menu: 10728.89, +58.4%
>
> active, powersave:
> Before: 8361.82
> After: 8463.31, +1.2%
> Menu: 8225.58, -1.6%
>
> I think this has more to do with the CPU frequency scaling governors
> than with this patch set, so, with the performance governor:
>
> Before: 12137.33
> After: 12083.26, -0.4%
> Menu: 11983.73, -1.3%
>
> These and other test results are available here
> (encoded to prevent a barrage of bots):
>
> double u double u double u dot smythies dot com
> /~doug/linux/idle/teo-2021-06/
>
> ... a day later ...
>
> I might have an answer to my own question.
> By switching to cross-core pipe-tests, and only loading down one
> CPU per core, I was able to get a lot more activity in the other idle
> states.  The test runs for 100 minutes, and the results change with
> time, but I'll leave that investigation for another day (there is no
> throttling):
>
> 1st 50 tests:
> Before: 3.888 uSec/loop
> After: 3.764 uSec/loop
> Menu: 3.464 uSec/loop
>
> Tests 50 to 100:
> Before: 4.329 uSec/loop
> After: 3.919 uSec/loop
> Menu: 3.514 uSec/loop
>
> Tests 200 to 250:
> Before: 5.089 uSec/loop
> After: 4.364 uSec/loop
> Menu: 4.619 uSec/loop
>
> Tests 280 to 330:
> Before: 5.142 uSec/loop
> After: 4.464 uSec/loop
> Menu: 4.619 uSec/loop
>
> Notice that the "after" case (with this patch set applied) eventually does
> better than the menu governor, and its processor package power always
> remains lower than the menu governor's.

That's good news, thanks!

> The results can be viewed graphically at the above link, but the
> most dramatic results are:
>
> Idle state 3 "above" % goes from 70% to 5%.
> Idle state 2 "below" % goes from 13% to less than 1%.

This also looks promising.

Thank you for all of the results!
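The "above" and "below" percentages quoted at the end refer to the per-state cpuidle statistics exposed in sysfs under /sys/devices/system/cpu/cpuN/cpuidle/stateM/ ("above" counts selections that turned out to be too deep for the observed idle duration, "below" counts selections that turned out to be too shallow). A rough sketch of reading those raw counters for one CPU is shown below; how the figures in this thread were actually sampled (per-minute deltas over the 100-minute runs) is an assumption and is not reproduced here.

/*
 * Rough sketch: dump the raw cpuidle "usage" / "above" / "below"
 * counters for cpu0.  Turning them into percentages over a test
 * interval, as done for the results above, would require sampling
 * deltas at the start and end of the run, which is not shown here.
 */
#include <stdio.h>

#define SYSFS "/sys/devices/system/cpu/cpu0/cpuidle/state%d/"

static unsigned long long read_counter(const char *fmt, int state)
{
	char path[128];
	unsigned long long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), fmt, state);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	int state;

	for (state = 0; state < 16; state++) {
		unsigned long long usage, above, below;

		usage = read_counter(SYSFS "usage", state);
		if (!usage)
			continue;	/* state missing or never entered */

		above = read_counter(SYSFS "above", state);	/* was too deep */
		below = read_counter(SYSFS "below", state);	/* was too shallow */

		printf("state%d: usage %llu  above %.2f%%  below %.2f%%\n",
		       state, usage,
		       100.0 * above / usage, 100.0 * below / usage);
	}
	return 0;
}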