From: "Liang, Kan"
To: Jiri Olsa
Cc: acme@kernel.org, peterz@infradead.org, mingo@redhat.com, linux-kernel@vger.kernel.org, wangnan0@huawei.com, jolsa@kernel.org, namhyung@kernel.org, ak@linux.intel.com, yao.jin@linux.intel.com
Subject: RE: [PATCH V4 02/15] perf mmap: introduce perf_mmap__read_init()
Date: Tue, 16 Jan 2018 18:12:41 +0000
Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07753800FF1@SHSMSX103.ccr.corp.intel.com>
References: <1516047651-164336-1-git-send-email-kan.liang@intel.com> <1516047651-164336-3-git-send-email-kan.liang@intel.com> <20180116131237.GG26643@krava>
In-Reply-To: <20180116131237.GG26643@krava>
> On Mon, Jan 15, 2018 at 12:20:38PM -0800, kan.liang@intel.com wrote:
> > From: Kan Liang
> >
> > perf record has specific code to calculate the ring-buffer position
> > for both overwrite and non-overwrite mode. Currently only perf record
> > supports both modes; perf top will support both modes later.
> > It is useful to make this specific code generic.
> >
> > Introduce a new interface, perf_mmap__read_init(), to find the
> > ring-buffer position. perf_mmap__read_init() is factored out from
> > perf_mmap__push(), with slight differences:
> > - Add a check for map->refcnt
> > - Add new return value logic, EAGAIN and EINVAL.
>
> not helpful.. I asked to separate those changes,
> so we can clearly see what the refcnt check is for
> and what's behind that EAGAIN return
>
> please add separate:
> 1) patch that adds perf_mmap__read_init into perf_mmap__push
>    with no functional change
> 2) patch that adds and explains the refcnt check
> 3) patch that adds and explains the EAGAIN return
>

Oh I see. I misunderstood before. I will change it in V5.
Thanks,
Kan

> thanks,
> jirka
>
> > Signed-off-by: Kan Liang
> > ---
> >  tools/perf/util/mmap.c | 43 +++++++++++++++++++++++++++++++++++++++++++
> >  tools/perf/util/mmap.h |  2 ++
> >  2 files changed, 45 insertions(+)
> >
> > diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> > index 05076e6..414089f 100644
> > --- a/tools/perf/util/mmap.c
> > +++ b/tools/perf/util/mmap.c
> > @@ -267,6 +267,49 @@ static int overwrite_rb_find_range(void *buf, int mask, u64 head, u64 *start, u6
> >  	return -1;
> >  }
> >
> > +/*
> > + * Report the start and end of the available data in ringbuffer
> > + */
> > +int perf_mmap__read_init(struct perf_mmap *map, bool overwrite,
> > +			 u64 *startp, u64 *endp)
> > +{
> > +	unsigned char *data = map->base + page_size;
> > +	u64 head = perf_mmap__read_head(map);
> > +	u64 old = map->prev;
> > +	unsigned long size;
> > +
> > +	/*
> > +	 * Check if event was unmapped due to a POLLHUP/POLLERR.
> > +	 */
> > +	if (!refcount_read(&map->refcnt))
> > +		return -EINVAL;
> > +
> > +	*startp = overwrite ? head : old;
> > +	*endp = overwrite ? old : head;
> > +
> > +	if (*startp == *endp)
> > +		return -EAGAIN;
> > +
> > +	size = *endp - *startp;
> > +	if (size > (unsigned long)(map->mask) + 1) {
> > +		if (!overwrite) {
> > +			WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
> > +
> > +			map->prev = head;
> > +			perf_mmap__consume(map, overwrite);
> > +			return -EAGAIN;
> > +		}
> > +
> > +		/*
> > +		 * Backward ring buffer is full. We still have a chance to read
> > +		 * most of data from it.
> > +		 */
> > +		if (overwrite_rb_find_range(data, map->mask, head, startp, endp))
> > +			return -EINVAL;
> > +	}
> > +	return 0;
> > +}
> > +
> >  int perf_mmap__push(struct perf_mmap *md, bool overwrite,
> >  		    void *to, int push(void *to, void *buf, size_t size))
> >  {
> > diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
> > index e43d7b5..0633308 100644
> > --- a/tools/perf/util/mmap.h
> > +++ b/tools/perf/util/mmap.h
> > @@ -94,4 +94,6 @@ int perf_mmap__push(struct perf_mmap *md, bool backward,
> >
> >  size_t perf_mmap__mmap_len(struct perf_mmap *map);
> >
> > +int perf_mmap__read_init(struct perf_mmap *map, bool overwrite,
> > +			 u64 *startp, u64 *endp);
> >  #endif /*__PERF_MMAP_H */
> > --
> > 2.5.5
> >