From: Jin Yao <yao.jin@linux.intel.com>
To: acme@kernel.org, jolsa@kernel.org, peterz@infradead.org,
    mingo@redhat.com, alexander.shishkin@linux.intel.com
Cc: Linux-kernel@vger.kernel.org, ak@linux.intel.com, kan.liang@intel.com,
    yao.jin@intel.com, Jin Yao <yao.jin@linux.intel.com>
Subject: [PATCH v3 3/4] perf stat: Copy counts from prev_raw_counts to evsel->counts
Date: Thu, 7 May 2020 14:58:21 +0800
Message-Id: <20200507065822.8255-4-yao.jin@linux.intel.com>
In-Reply-To: <20200507065822.8255-1-yao.jin@linux.intel.com>
References: <20200507065822.8255-1-yao.jin@linux.intel.com>

It would be useful to support overall statistics for perf-stat interval
mode, i.e. to report a summary at the end of the "perf-stat -I" output.
But since perf-stat supports many aggregation modes, such as
--per-thread, --per-socket, -M and so on, we need a solution that
doesn't add much complexity.

The idea is to use 'evsel->prev_raw_counts', which is updated in each
interval and always holds the latest counts. Before reporting the
summary, we copy the counts from evsel->prev_raw_counts to
evsel->counts, and then simply follow the normal non-interval
processing.

For AGGR_GLOBAL, this patch makes evsel__compute_deltas also save the
counts to the [cpu0, thread0] position. With that in place, after
copying the counts from evsel->prev_raw_counts to evsel->counts,
process_counter_maps in perf_stat_process_counter works as-is and needs
no modification.

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
---
 tools/perf/util/evsel.c |  1 +
 tools/perf/util/stat.c  | 24 ++++++++++++++++++++++++
 tools/perf/util/stat.h  |  1 +
 3 files changed, 26 insertions(+)

diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index f3e60c45d59a..898a697c7cdd 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1288,6 +1288,7 @@ void evsel__compute_deltas(struct evsel *evsel, int cpu, int thread,
 	if (cpu == -1) {
 		tmp = evsel->prev_raw_counts->aggr;
 		evsel->prev_raw_counts->aggr = *count;
+		*perf_counts(evsel->prev_raw_counts, 0, 0) = *count;
 	} else {
 		tmp = *perf_counts(evsel->prev_raw_counts, cpu, thread);
 		*perf_counts(evsel->prev_raw_counts, cpu, thread) = *count;
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 89e541564ed5..ede113805ecd 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -230,6 +230,30 @@ void perf_evlist__reset_prev_raw_counts(struct evlist *evlist)
 		perf_evsel__reset_prev_raw_counts(evsel);
 }
 
+static void perf_evsel__copy_prev_raw_counts(struct evsel *evsel)
+{
+	int ncpus = evsel__nr_cpus(evsel);
+	int nthreads = perf_thread_map__nr(evsel->core.threads);
+
+	for (int thread = 0; thread < nthreads; thread++) {
+		for (int cpu = 0; cpu < ncpus; cpu++) {
+			*perf_counts(evsel->counts, cpu, thread) =
+				*perf_counts(evsel->prev_raw_counts, cpu,
+					     thread);
+		}
+	}
+
+	evsel->counts->aggr = evsel->prev_raw_counts->aggr;
+}
+
+void perf_evlist__copy_prev_raw_counts(struct evlist *evlist)
+{
+	struct evsel *evsel;
+
+	evlist__for_each_entry(evlist, evsel)
+		perf_evsel__copy_prev_raw_counts(evsel);
+}
+
 static void zero_per_pkg(struct evsel *counter)
 {
 	if (counter->per_pkg_mask)
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index b4fdfaa7f2c0..62cf72c71869 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -198,6 +198,7 @@ int perf_evlist__alloc_stats(struct evlist *evlist, bool alloc_raw);
 void perf_evlist__free_stats(struct evlist *evlist);
 void perf_evlist__reset_stats(struct evlist *evlist);
 void perf_evlist__reset_prev_raw_counts(struct evlist *evlist);
+void perf_evlist__copy_prev_raw_counts(struct evlist *evlist);
 
 int perf_stat_process_counter(struct perf_stat_config *config,
 			      struct evsel *counter);
-- 
2.17.1
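
As a rough sketch for reviewers (not part of this patch; the caller name
below is invented purely for illustration), a follow-up change in
builtin-stat.c could use the new helper along these lines to emit the
summary once the last interval has been printed:

	/* Hypothetical caller, assumed to live in builtin-stat.c. */
	static void print_interval_summary(struct perf_stat_config *config,
					   struct evlist *evlist)
	{
		struct evsel *counter;

		/* Make evsel->counts hold the totals saved in prev_raw_counts. */
		perf_evlist__copy_prev_raw_counts(evlist);

		/* From here on, the normal non-interval processing applies. */
		evlist__for_each_entry(evlist, counter)
			perf_stat_process_counter(config, counter);
	}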