Subject: Re: [RFC PATCH v3 2/5] cpuidle: Add Cpufreq Active Stats calls tracking idle entry/exit
From: Artem Bityutskiy
To: Lukasz Luba, linux-kernel@vger.kernel.org
Cc: dietmar.eggemann@arm.com, viresh.kumar@linaro.org, rafael@kernel.org, daniel.lezcano@linaro.org, amitk@kernel.org, rui.zhang@intel.com, amit.kachhap@gmail.com, linux-pm@vger.kernel.org
Date: Tue, 26 Apr 2022 15:05:31 +0300
Message-ID: <97e7e3f5110702fab727b4df7d53511aef5c60b1.camel@gmail.com>
In-Reply-To: <20220406220809.22555-3-lukasz.luba@arm.com>
References: <20220406220809.22555-1-lukasz.luba@arm.com> <20220406220809.22555-3-lukasz.luba@arm.com>

Hi Lukasz,

On Wed, 2022-04-06 at 23:08 +0100, Lukasz Luba wrote:
> @@ -231,6 +232,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct
> cpuidle_driver *drv,
>         trace_cpu_idle(index, dev->cpu);
>         time_start = ns_to_ktime(local_clock());
>
> +       cpufreq_active_stats_cpu_idle_enter(time_start);
> +
>         stop_critical_timings();
>         if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
>                 rcu_idle_enter();
> @@ -243,6 +246,8 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct
> cpuidle_driver *drv,
>         time_end = ns_to_ktime(local_clock());
>         trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);
>
> +       cpufreq_active_stats_cpu_idle_exit(time_end);
> +

At this point the interrupts are still disabled; they only get re-enabled
later. So the more code you add here and the longer it takes to execute, the
longer you delay the interrupts. In other words, by adding more code here you
are effectively increasing IRQ latency from idle.

How much? I do not know; it depends on how much code you need to execute. But
the amount of code in functions like this tends to grow over time, so the risk
is that we will keep making 'cpufreq_active_stats_cpu_idle_exit()' heavier and
(maybe unintentionally) keep increasing idle interrupt latency.

This is not ideal. We use the 'wult' tool (https://github.com/intel/wult) to
measure C-state latency and interrupt latency on Intel platforms, and for fast
C-states like Intel C1 we can see that even the current code between C-state
exit and interrupts being re-enabled adds measurable overhead. So I am worried
about adding more stuff here.

Please consider collecting the stats after interrupts are re-enabled (see the
sketch at the end of this mail). You may lose some "precision" because of
that, but it is probably still better than adding to idle interrupt latency.

>         /* The cpu is no longer idle or about to enter idle. */
>         sched_idle_set_state(NULL);
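
Something along these lines, perhaps (just a rough, untested sketch to
illustrate the idea: it assumes the stats update can safely run with
interrupts enabled and reuses the 'time_end' timestamp taken earlier, so only
the bookkeeping moves; I have not checked whether the coupled-states path
needs the same treatment):

        time_end = ns_to_ktime(local_clock());
        trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);

        /* The cpu is no longer idle or about to enter idle. */
        sched_idle_set_state(NULL);

        /* ... rest of the exit path unchanged ... */

        if (!cpuidle_state_is_coupled(drv, dev, index))
                local_irq_enable();

        /*
         * Interrupts are back on here, so the stats update no longer adds
         * to IRQ latency from idle. The timestamp itself was still taken
         * right after the C-state exit, so accuracy should not suffer much.
         */
        cpufreq_active_stats_cpu_idle_exit(time_end);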