From: "Liang, Kan" <kan.liang@intel.com>
To: Jiri Olsa, Arnaldo Carvalho de Melo
Cc: peterz@infradead.org, mingo@redhat.com, linux-kernel@vger.kernel.org,
    jolsa@kernel.org, namhyung@kernel.org, "Hunter, Adrian",
    "Odzioba, Lukasz", ak@linux.intel.com
Subject: RE: [PATCH RFC V2 00/10] perf top optimization
Date: Tue, 19 Sep 2017 12:39:47 +0000
Message-ID: <37D7C6CF3E00A74B8858931C1DB2F077537C3FFB@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <20170919081901.GA4231@krava>
References: <1505096603-215017-1-git-send-email-kan.liang@intel.com>
 <20170918085708.GC17203@krava>
 <20170918130100.GF14469@kernel.org>
 <20170919081901.GA4231@krava>

> On Mon, Sep 18, 2017 at 10:01:00AM -0300, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Sep 18, 2017 at 10:57:08AM +0200, Jiri Olsa escreveu:
> > > On Sun, Sep 10, 2017 at 07:23:13PM -0700, kan.liang@intel.com wrote:
> > > > From: Kan Liang
> > > >
> > > > The patch series intends to fix the severe performance issue on
> > > > Knights Landing/Mill when monitoring a heavily loaded system.
> > > > perf top takes a few minutes to show the result, which is
> > > > unacceptable. With the patch series applied, the latency is
> > > > reduced to several seconds.
> > > >
> > > > machine__synthesize_threads and perf_top__mmap_read cost most of
> > > > the perf top time (> 99%).
> > >
> > > looks like this patchset adds locking into code paths used by other
> > > single threaded tools, and that might be bad for them, as noted by
> > > Andi in here:
> > >
> > >   https://marc.info/?l=linux-kernel&m=149031672928989&w=2
> > >
> > > he proposed a solution, and it was changed & posted by Arnaldo in
> > > here:
> > >
> > >   https://marc.info/?l=linux-kernel&m=149132267410294&w=2
> > >
> > > but it looks like it never got merged
> > >
> > > could you please add this or similar code before you add the locking
> > > code/overhead in?
> >
> > I'm rehashing that patch and adding it on top of what is in my
> > perf/core branch, will push soon; for now you can take a look at
> > tmp.perf/core.
>
> checked the code.. one nit, could we have single threaded by default?
> only one command is multithreaded atm; it could call
> perf_set_multithreaded instead of all the current related commands
> calling perf_set_singlethreaded

I agree with single threaded as the default setting, and I also think we
need both functions, perf_set_multithreaded and perf_set_singlethreaded.
A perf tool will probably be partly single threaded and partly
multithreaded. E.g., in the perf top optimization, only the event
synthesis code is multithreaded, so we have to set multithreaded first,
then switch back to single threaded (see the sketch at the end of this
mail).

Thanks,
Kan

>
> other than that it looks ok
>
> thanks,
> jirka
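
For illustration, a minimal sketch (in C) of the toggle pattern described
above. The perf_set_multithreaded()/perf_set_singlethreaded() helpers are
the ones from Arnaldo's branch; run_synthesis() and the surrounding
function are hypothetical stand-ins for perf top's real
machine__synthesize_threads() call path:

	#include "util/util.h"	/* perf_set_{single,multi}threaded() */

	/* Hypothetical entry point standing in for perf top's setup code. */
	static int run_top_synthesis(void)
	{
		int err;

		/*
		 * Nothing to do at startup: with single threaded as the
		 * default, the shared code paths take no locks, so the
		 * other (single threaded) tools pay no extra overhead.
		 */

		perf_set_multithreaded();	/* enable locking ... */
		err = run_synthesis();		/* ... for the parallel phase */
		perf_set_singlethreaded();	/* back to the lock-free default */

		return err;
	}

This way only the one command that actually spawns threads opts in to the
locking, which matches Jiri's point above.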