From: Jin Yao
To: acme@kernel.org, jolsa@kernel.org, peterz@infradead.org, mingo@redhat.com, alexander.shishkin@linux.intel.com
Cc: Linux-kernel@vger.kernel.org, ak@linux.intel.com, kan.liang@intel.com, yao.jin@intel.com, Jin Yao
Subject: [PATCH v6 15/26] perf stat: Filter out unmatched aggregation for hybrid event
Date: Tue, 27 Apr 2021 15:01:28 +0800
Message-Id: <20210427070139.25256-16-yao.jin@linux.intel.com>
In-Reply-To: <20210427070139.25256-1-yao.jin@linux.intel.com>
References: <20210427070139.25256-1-yao.jin@linux.intel.com>

perf stat supports several aggregation modes, such as --per-core and
--per-socket. A hybrid event, however, may be available on only a subset
of CPUs. So for --per-core we need to filter out the unavailable cores,
for --per-socket the unavailable sockets, and so on.

Before:

  # perf stat --per-core -e cpu_core/cycles/ -a -- sleep 1

   Performance counter stats for 'system wide':

  S0-D0-C0           2         479,530      cpu_core/cycles/
  S0-D0-C4           2         175,007      cpu_core/cycles/
  S0-D0-C8           2         166,240      cpu_core/cycles/
  S0-D0-C12          2         704,673      cpu_core/cycles/
  S0-D0-C16          2         865,835      cpu_core/cycles/
  S0-D0-C20          2       2,958,461      cpu_core/cycles/
  S0-D0-C24          2         163,988      cpu_core/cycles/
  S0-D0-C28          2         164,729      cpu_core/cycles/
  S0-D0-C32          0                      cpu_core/cycles/
  S0-D0-C33          0                      cpu_core/cycles/
  S0-D0-C34          0                      cpu_core/cycles/
  S0-D0-C35          0                      cpu_core/cycles/
  S0-D0-C36          0                      cpu_core/cycles/
  S0-D0-C37          0                      cpu_core/cycles/
  S0-D0-C38          0                      cpu_core/cycles/
  S0-D0-C39          0                      cpu_core/cycles/

        1.003597211 seconds time elapsed

After:

  # perf stat --per-core -e cpu_core/cycles/ -a -- sleep 1

   Performance counter stats for 'system wide':

  S0-D0-C0           2         210,428      cpu_core/cycles/
  S0-D0-C4           2         444,830      cpu_core/cycles/
  S0-D0-C8           2         435,241      cpu_core/cycles/
  S0-D0-C12          2         423,976      cpu_core/cycles/
  S0-D0-C16          2         859,350      cpu_core/cycles/
  S0-D0-C20          2       1,559,589      cpu_core/cycles/
  S0-D0-C24          2         163,924      cpu_core/cycles/
  S0-D0-C28          2         376,610      cpu_core/cycles/

        1.003621290 seconds time elapsed

Signed-off-by: Jin Yao
Co-developed-by: Jiri Olsa
---
v6:
 - No change.

v5:
 - Use Jiri's code, which is much simpler than the original.

v4:
 - No change.

 tools/perf/util/stat-display.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 0679129ad05e..a76fff5e7d83 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -667,6 +667,9 @@ static void print_counter_aggrdata(struct perf_stat_config *config,
 	if (!collect_data(config, counter, aggr_cb, &ad))
 		return;
 
+	if (perf_pmu__has_hybrid() && ad.ena == 0)
+		return;
+
 	nr = ad.nr;
 	ena = ad.ena;
 	run = ad.run;
-- 
2.17.1
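
For readers who want to see the effect of the new check in isolation, below
is a minimal standalone C sketch, not perf code: the struct and names
(core_agg, print_core, the sample values) are made up for illustration. In
the actual patch the per-aggregation totals come from collect_data() as
ad.nr/ad.ena/ad.run, and the hybrid test is perf_pmu__has_hybrid().

  /* sketch.c - illustrate skipping aggregations with zero enabled time */
  #include <stdio.h>
  #include <stdbool.h>

  struct core_agg {
  	const char *id;          /* e.g. "S0-D0-C32" */
  	int nr;                  /* CPUs aggregated into this line */
  	unsigned long long ena;  /* total enabled time */
  	unsigned long long run;  /* total running time */
  	unsigned long long count;
  };

  static void print_core(const struct core_agg *agg, bool hybrid)
  {
  	/*
  	 * For a hybrid event the counter exists only on one CPU type, so an
  	 * aggregation with zero enabled time means "no such counter on this
  	 * core": skip the line instead of printing an empty row.
  	 */
  	if (hybrid && agg->ena == 0)
  		return;

  	printf("%-12s %2d %15llu      cpu_core/cycles/\n",
  	       agg->id, agg->nr, agg->count);
  }

  int main(void)
  {
  	const struct core_agg cores[] = {
  		{ "S0-D0-C0",  2, 1003, 1003, 479530 },
  		{ "S0-D0-C4",  2, 1003, 1003, 175007 },
  		{ "S0-D0-C32", 0,    0,    0,      0 }, /* counter never enabled here */
  	};
  	size_t i;

  	for (i = 0; i < sizeof(cores) / sizeof(cores[0]); i++)
  		print_core(&cores[i], true);

  	return 0;
  }

Keying the filter off ad.ena == 0 works because an aggregation whose
counters were never enabled is exactly the "event not available on these
CPUs" case shown in the Before output above.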