Date: Wed, 23 Nov 2016 23:51:51 +0100
From: Jiri Olsa
To: kan.liang@intel.com
Cc: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
    linux-kernel@vger.kernel.org, alexander.shishkin@linux.intel.com,
    tglx@linutronix.de, namhyung@kernel.org, jolsa@kernel.org,
    adrian.hunter@intel.com, wangnan0@huawei.com, mark.rutland@arm.com,
    andi@firstfloor.org
Subject: Re: [PATCH 06/14] perf tools: show NMI overhead
Message-ID: <20161123225151.GC15978@krava>
References: <1479894292-16277-1-git-send-email-kan.liang@intel.com>
 <1479894292-16277-7-git-send-email-kan.liang@intel.com>
In-Reply-To: <1479894292-16277-7-git-send-email-kan.liang@intel.com>

On Wed, Nov 23, 2016 at 04:44:44AM -0500, kan.liang@intel.com wrote:
> From: Kan Liang
>
> Calculate the total NMI overhead on each CPU, and display them in perf
> report so the output looks like this:
>
> ---
> # Elapsed time: 1720167944 ns
> # Overhead:
> #   CPU 6
> #     NMI#: 27  time: 111379 ns
> #     Multiplexing#: 0  time: 0 ns
> #     SB#: 57  time: 90045 ns
> #
> # Samples: 26  of event 'cycles:u'
> # Event count (approx.): 1677531
> #
> # Overhead  Command  Shared Object  Symbol
> # ........  .......  .............  .......................
> #
>     24.20%  ls       ls             [.] _init
>     17.18%  ls       libc-2.24.so   [.] __strcoll_l
>     11.85%  ls       ld-2.24.so     [.] _dl_relocate_object
> ---

A few things:

- I wonder if we want to put this overhead output separately from the
  main perf output; this scales badly with bigger CPU counts

- we might want to call it something else, because we already use
  'overhead' for the event count %

- how about TUI output? ;-) I don't think it's necessary, but currently
  'perf report --show-overhead' does not show anything if TUI is the
  default output, unless you use the --stdio option (see the sketch
  after the sign-off)

thanks,
jirka
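
A minimal sketch of the invocation from the last point above, assuming the
patch series under review is applied ('--show-overhead' is the option this
series adds, not an upstream perf flag) and assuming the sample data was
recorded along these lines:

  # record a short-lived command with the event from the example output
  $ perf record -e cycles:u -- ls

  # the per-CPU overhead block is only printed with --stdio;
  # the default TUI output currently does not show it
  $ perf report --show-overhead --stdio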