Subject: Re: [PATCH v3 3/4] perf stat: Copy counts from prev_raw_counts to evsel->counts
To: Jiri Olsa
Cc: acme@kernel.org, jolsa@kernel.org, peterz@infradead.org, mingo@redhat.com,
    alexander.shishkin@linux.intel.com, Linux-kernel@vger.kernel.org,
    ak@linux.intel.com, kan.liang@intel.com, yao.jin@intel.com
References: <20200507065822.8255-1-yao.jin@linux.intel.com>
 <20200507065822.8255-4-yao.jin@linux.intel.com>
 <20200507151950.GG2804092@krava>
From: "Jin, Yao"
Date: Fri, 8 May 2020 11:34:16 +0800
In-Reply-To: <20200507151950.GG2804092@krava>

Hi Jiri,

On 5/7/2020 11:19 PM, Jiri Olsa wrote:
> On Thu, May 07, 2020 at 02:58:21PM +0800, Jin Yao wrote:
>> It would be useful to support the overall statistics for perf-stat
>> interval mode. For example, report the summary at the end of
>> "perf-stat -I" output.
>>
>> But since perf-stat supports many aggregation modes, such as
>> --per-thread, --per-socket, -M, and so on, we need a solution that
>> doesn't bring much complexity.
>>
>> The idea is to use 'evsel->prev_raw_counts', which is updated in
>> each interval and holds the latest counts. Before reporting the
>> summary, we copy the counts from evsel->prev_raw_counts to
>> evsel->counts, and then just follow the non-interval processing.
>
> I did not realize we already store the count in prev_raw_counts ;-)
> nice catch!
>

Thanks! :)

>>
>> In evsel__compute_deltas, this patch saves the counts to the position
>> of [cpu0, thread0] for AGGR_GLOBAL. After copying the counts from
>> evsel->prev_raw_counts to evsel->counts, we then don't need to modify
>> process_counter_maps in perf_stat_process_counter for it to work
>> correctly.
>
> I don't understand why you need to store it in here.. what's the catch
> in process_counter_maps?
>

Sorry, I didn't explain that clearly.

As you know, the idea is to copy evsel->prev_raw_counts to evsel->counts
before reporting the summary.

But for AGGR_GLOBAL (cpu == -1 in perf_evsel__compute_deltas), only the
aggr value of evsel->prev_raw_counts is updated:

        if (cpu == -1) {
                tmp = evsel->prev_raw_counts->aggr;
                evsel->prev_raw_counts->aggr = *count;
        } else {
                tmp = *perf_counts(evsel->prev_raw_counts, cpu, thread);
                *perf_counts(evsel->prev_raw_counts, cpu, thread) = *count;
        }

So after copying evsel->prev_raw_counts to evsel->counts, the per-cpu and
per-thread values perf_counts(evsel->counts, cpu, thread) are all 0.

When we then go through process_counter_maps again, count->val is 0 in
process_counter_values:

        case AGGR_GLOBAL:
                aggr->val += count->val;
                aggr->ena += count->ena;
                aggr->run += count->run;

and aggr->val ends up being 0.

So this patch uses a trick: it also saves the previous aggr value to the
[cpu0, thread0] slot, so that the aggr->val calculation above works
correctly (two rough sketches of the idea follow below).

Thanks
Jin Yao

> thanks,
> jirka
>
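
To make the "copy before the summary" idea above concrete, here is a
minimal sketch of such a copy. It is only an illustration, not the actual
patch: the helper name copy_prev_raw_counts() and the ncpus/nthreads
parameters are made up, and it assumes the perf-internal struct evsel,
struct perf_counts and perf_counts() helpers already shown above.

        /*
         * Illustrative sketch only (not the real patch): mirror the latest
         * interval counts saved in evsel->prev_raw_counts back into
         * evsel->counts so the normal, non-interval reporting path can be
         * reused for the summary.
         */
        static void copy_prev_raw_counts(struct evsel *evsel, int ncpus,
                                         int nthreads)
        {
                int cpu, thread;

                for (thread = 0; thread < nthreads; thread++) {
                        for (cpu = 0; cpu < ncpus; cpu++) {
                                *perf_counts(evsel->counts, cpu, thread) =
                                        *perf_counts(evsel->prev_raw_counts,
                                                     cpu, thread);
                        }
                }

                /* The aggregated value lives in its own field. */
                evsel->counts->aggr = evsel->prev_raw_counts->aggr;
        }

Run once per evsel before the final print, something like this could then
let the summary go through the existing perf_stat_process_counter() and
printing code just as a non-interval run does.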
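
The [cpu0, thread0] trick itself could look roughly like the following.
Again, this is only a sketch of the idea described in the mail, not the
exact hunk from the patch: the cpu == -1 branch of
perf_evsel__compute_deltas() additionally mirrors the aggregated value
into the [0, 0] slot, so that after the copy the AGGR_GLOBAL case in
process_counter_values() sees a non-zero count->val.

        if (cpu == -1) {
                tmp = evsel->prev_raw_counts->aggr;
                evsel->prev_raw_counts->aggr = *count;
                /*
                 * Sketch of the trick: also keep the aggregated value in
                 * the [cpu0, thread0] slot, so that copying
                 * prev_raw_counts into evsel->counts leaves a non-zero
                 * value for AGGR_GLOBAL to sum up later.
                 */
                *perf_counts(evsel->prev_raw_counts, 0, 0) = *count;
        } else {
                tmp = *perf_counts(evsel->prev_raw_counts, cpu, thread);
                *perf_counts(evsel->prev_raw_counts, cpu, thread) = *count;
        }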