Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752770AbdLCCEN (ORCPT );
	Sat, 2 Dec 2017 21:04:13 -0500
Received: from szxga04-in.huawei.com ([45.249.212.190]:2196 "EHLO huawei.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1752643AbdLCCDH (ORCPT );
	Sat, 2 Dec 2017 21:03:07 -0500
From: Wang Nan
To: , , , ,
CC: Wang Nan
Subject: [PATCH v2 7/8] perf tools: Don't discard prev in backward mode
Date: Sun, 3 Dec 2017 02:00:43 +0000
Message-ID: <20171203020044.81680-8-wangnan0@huawei.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20171203020044.81680-1-wangnan0@huawei.com>
References: <20171203020044.81680-1-wangnan0@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [10.107.193.248]
X-CFilter-Loop: Reflected
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2413
Lines: 81

Perf record can switch its output file. The new output should only contain
data collected after the switch. However, in overwrite (backward) mode the
new output still contains data from the old output, which also brings extra
overhead.

At the end of mmap_read, the position up to which the ring buffer has been
processed is saved in md->prev. The next mmap_read should end at md->prev
if it has not been overwritten, which avoids processing duplicate data.
However, md->prev is currently discarded, so the next mmap_read has to
process the whole valid ring buffer, which probably includes data that was
already processed.

Avoid calling backward_rb_find_range() when md->prev is still available.

Signed-off-by: Wang Nan
Tested-by: Kan Liang
---
 tools/perf/util/mmap.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 3f262e7..5f8cb15 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -267,18 +267,6 @@ static int backward_rb_find_range(void *buf, int mask, u64 head, u64 *start, u64
 	return -1;
 }
 
-static int rb_find_range(void *data, int mask, u64 head, u64 old,
-			 u64 *start, u64 *end, bool backward)
-{
-	if (!backward) {
-		*start = old;
-		*end = head;
-		return 0;
-	}
-
-	return backward_rb_find_range(data, mask, head, start, end);
-}
-
 int perf_mmap__push(struct perf_mmap *md, bool backward,
 		    void *to, int push(void *to, void *buf, size_t size))
 {
@@ -290,19 +278,28 @@ int perf_mmap__push(struct perf_mmap *md, bool backward,
 	void *buf;
 	int rc = 0;
 
-	if (rb_find_range(data, md->mask, head, old, &start, &end, backward))
-		return -1;
+	start = backward ? head : old;
+	end = backward ? old : head;
 
 	if (start == end)
 		return 0;
 
 	size = end - start;
 	if (size > (unsigned long)(md->mask) + 1) {
-		WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
+		if (!backward) {
+			WARN_ONCE(1, "failed to keep up with mmap data. (warn only once)\n");
 
-		md->prev = head;
-		perf_mmap__consume(md, backward);
-		return 0;
+			md->prev = head;
+			perf_mmap__consume(md, backward);
+			return 0;
+		}
+
+		/*
+		 * Backward ring buffer is full. We still have a chance to read
+		 * most of data from it.
+		 */
+		if (backward_rb_find_range(data, md->mask, head, &start, &end))
+			return -1;
 	}
 
 	if ((start & md->mask) + size != (end & md->mask)) {
-- 
2.10.1
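
[Editor's note: the standalone sketch below only illustrates the start/end
selection the patch introduces; it is not kernel code, and the names
ring_pos and read_range are made up for the example. The real
perf_mmap__push() works on wrapping u64 positions and falls back to
backward_rb_find_range() when the saved position has been overwritten;
that part is omitted here.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the few perf_mmap fields used here. */
struct ring_pos {
	uint64_t head;	/* position the kernel has written up to */
	uint64_t prev;	/* position where the previous read stopped */
};

/*
 * Mirror the patched logic: in forward mode new data lives in
 * [prev, head); in backward mode the kernel writes toward lower
 * positions, so new data lives in [head, prev).
 */
static void read_range(uint64_t head, uint64_t prev, bool backward,
		       uint64_t *start, uint64_t *end)
{
	*start = backward ? head : prev;
	*end   = backward ? prev : head;
}

int main(void)
{
	uint64_t start, end;

	/* forward ring: kernel advanced from 1024 to 4096 */
	read_range(4096, 1024, false, &start, &end);
	printf("forward : [%llu, %llu)\n",
	       (unsigned long long)start, (unsigned long long)end);

	/* backward ring: kernel wrote downward from 4096 to 1024 */
	read_range(1024, 4096, true, &start, &end);
	printf("backward: [%llu, %llu)\n",
	       (unsigned long long)start, (unsigned long long)end);

	/* After a successful read, prev is updated so the next read
	 * starts where this one ended, instead of rescanning the whole
	 * valid buffer. */
	return 0;
}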