Subject: Re: [PATCH 4/4] perf core: Add backward attribute to perf event
To: Alexei Starovoitov, Arnaldo Carvalho de Melo, Peter Zijlstra
CC: Brendan Gregg, He Kuang, Jiri Olsa, Masami Hiramatsu, Namhyung Kim, Zefan Li
From: "Wangnan (F)"
Message-ID: <56F9E1F4.2050807@huawei.com>
Date: Tue, 29 Mar 2016 10:01:24 +0800
In-Reply-To: <1459147292-239310-5-git-send-email-wangnan0@huawei.com>

On 2016/3/28 14:41, Wang Nan wrote:

[SNIP]

> To prevent this problem, we need to find a way to ensure the ring
> buffer is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT)
> is suggested because its overhead is lower than
> ioctl(PERF_EVENT_IOC_ENABLE).

Add a comment: by carefully verifying each event against the 'head'
pointer, a reader can avoid pausing the ring buffer. For example:

    /* A union of all possible events */
    union perf_event event;

    p = head = perf_mmap__read_head();
    while (true) {
            /* copy the header of the next event */
            fetch(&event.header, p, sizeof(event.header));

            /* read the 'head' pointer */
            head = perf_mmap__read_head();

            /* check for overwrite: is the header good? */
            if (!verify(sizeof(event.header), p, head))
                    break;

            /* copy the whole event */
            fetch(&event, p, event.header.size);

            /* read the 'head' pointer again */
            head = perf_mmap__read_head();

            /* is the whole event good? */
            if (!verify(event.header.size, p, head))
                    break;

            p += event.header.size;
    }

However, the overhead of this scheme is high because:

 a) In-place decoding is unsafe, so a copy-verify-decode sequence is
    required for every event.

 b) Fetching the 'head' pointer requires additional synchronization
    (a memory barrier) on every iteration.
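
For reference, here is a minimal userspace sketch of what the helpers
used above might look like. fetch(), verify() and
perf_mmap__read_head() are placeholder names from the example, not
kernel or libperf API; the sketch assumes a forward-writing
overwritable ring of rb_size bytes (a power of two) mapped at rb_base,
a control page at mp, and monotonically increasing u64 offsets as in
the perf mmap ABI (the write_backward case is symmetric):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <linux/perf_event.h>

    static struct perf_event_mmap_page *mp; /* mmap'ed control page   */
    static char *rb_base;                   /* start of the data area */
    static uint64_t rb_size;                /* size of the data area  */

    /*
     * Point (b): 'head' must be read with acquire semantics so that
     * the event bytes published before the head update are visible
     * to this CPU.
     */
    static uint64_t perf_mmap__read_head(void)
    {
            return __atomic_load_n(&mp->data_head, __ATOMIC_ACQUIRE);
    }

    /*
     * Point (a): copy the bytes out before decoding them; decoding
     * in place could observe a half-overwritten record.  Handles
     * the wrap at the end of the buffer.
     */
    static void fetch(void *dst, uint64_t off, size_t len)
    {
            uint64_t pos = off & (rb_size - 1);

            if (pos + len <= rb_size) {
                    memcpy(dst, rb_base + pos, len);
            } else {
                    size_t part = rb_size - pos;

                    memcpy(dst, rb_base + pos, part);
                    memcpy((char *)dst + part, rb_base, len - part);
            }
    }

    /*
     * A copy of [off, off + len) is trustworthy only if the writer
     * has not lapped it: with a monotonic head, the slot holding
     * byte 'off' is rewritten once head advances more than rb_size
     * past it, and that first byte is always overwritten first.
     */
    static bool verify(size_t len, uint64_t off, uint64_t head)
    {
            (void)len; /* checking 'off' covers the whole record */
            return head - off <= rb_size;
    }

The two acquire loads per event, plus the double memcpy(), are exactly
where the overhead of a) and b) comes from.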
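
By contrast, the paused reading suggested in the quoted text keeps
in-place decoding safe at the cost of losing events while the buffer
is paused. Assuming the PERF_EVENT_IOC_PAUSE_OUTPUT ioctl from this
series, with 'fd' the perf event fd and read_events() a hypothetical
consumer, the reader side is roughly:

    #include <sys/ioctl.h>
    #include <linux/perf_event.h>

    void read_events(void); /* placeholder consumer */

    static int read_paused(int fd)
    {
            /* stop the kernel from overwriting while we read */
            if (ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1))
                    return -1;

            read_events(); /* in-place decoding is safe here */

            /* let the kernel resume writing into the buffer */
            return ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 0);
    }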