From: "Liang, Kan"
To: Peter Zijlstra
CC: mingo@redhat.com, acme@kernel.org, linux-kernel@vger.kernel.org,
	alexander.shishkin@linux.intel.com, tglx@linutronix.de,
	namhyung@kernel.org, jolsa@kernel.org, "Hunter, Adrian",
	wangnan0@huawei.com, mark.rutland@arm.com, andi@firstfloor.org
Subject: RE: [PATCH V2 03/13] perf/x86: output sampling overhead
Date: Wed, 7 Dec 2016 19:03:31 +0000
Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07750CAA6ED@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <20161206182647.GC3107@twins.programming.kicks-ass.net>
> On Tue, Dec 06, 2016 at 03:47:40PM +0000, Liang, Kan wrote:
> > > It doesn't record anything, it generates the output. And it doesn't
> > > explain why that needs to be in pmu::del(); in general that's a
> > > horrible thing to do.
> >
> > Yes, it only generates/logs the output. Sorry for the confusing wording.
> >
> > The NMI overhead is PMU-specific overhead, so the NMI overhead output
> > should be generated in PMU code.
>
> True, but you're also accounting in a per-cpu bucket, which means it
> includes all events. At which point the per-event overhead thing doesn't
> really make sense.
>
> It also means that previous sessions influence the numbers of our current
> session; there's no explicit reset of the numbers.
>
> > I assume that pmu::del is the last pmu function called when perf
> > finishes. Is it a good place for logging?
>
> No, it's horrible. Sure, we'll call pmu::del on events, but yuck.
>
> You really only want _one_ invocation when you stop using the event, and
> we don't really have a good place for that. But instead of creating one,
> you do horrible things.
>
> Now, I realize there's a bit of a catch-22 in that the moment we know the
> event is going away, it's already gone from userspace. So we cannot dump
> data from there in general..
>
> However, if we have output redirection we can, but that would make
> things depend on that, and it cannot be used for the last event whose
> buffer we're using.
>
> Another option would be to introduce PERF_EVENT_IOC_STAT or something
> like that, and have the tool call that when it's 'done'.
>

OK. I think I will implement a new ioctl, PERF_EVENT_IOC_STAT. The tool
will call IOC_STAT when it 'starts' and when it is 'done'. I will also
introduce two new ioc flags, PERF_IOC_FLAG_STAT_START and
PERF_IOC_FLAG_STAT_DONE. On 'start', the kernel will reset the numbers.
On 'done', the kernel will generate all the output.

The overhead numbers come from different CPUs. To distinguish them, we
have to add a cpu field to the overhead entry; we cannot trust sample_id.

struct perf_overhead_entry {
	__u32	cpu;
	__u32	nr;
	__u64	time;
};

I will also add a void (*overhead_stat) callback in struct pmu to do the
PMU-specific reset and output generation.

In V2, the three overheads are stored in separate per-event/per-cpu
contexts. For V3, I will store all the overheads in the pmu's cpuctx, so
the numbers will be the overhead for the PMU, not the global system. That
should be clearer and more useful.

How does it sound?

Thanks,
Kan