From: Alexei Starovoitov
Date: Fri, 19 Jun 2015 00:06:17 -0700
To: Daniel Wagner
CC: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v0] bpf: BPF based latency tracing
Message-ID: <5583BF69.3090204@plumgrid.com>
In-Reply-To: <5583B1A9.60503@bmw-carit.de>
References: <1434627604-9624-1-git-send-email-daniel.wagner@bmw-carit.de> <5582FA7A.4050505@plumgrid.com> <5583B1A9.60503@bmw-carit.de>

On 6/18/15 11:07 PM, Daniel Wagner wrote:
>> I'm only a bit suspicious of kprobes, since we have:
>> NOKPROBE_SYMBOL(preempt_count_sub)
>> but trace_preempt_on() called by preempt_count_sub()
>> doesn't have this mark...
>
> The original commit indicates that anything called from
> preempt_disable() should also be marked as NOKPROBE_SYMBOL:
>
> commit 43627582799db317e966ecb0002c2c3c9805ec0f
> Author:    Srinivasa Ds        Sun Feb 24 00:24:04 2008
> Committer: Linus Torvalds     Sun Feb 24 02:13:24 2008
> Original File: kernel/sched.c
>
>     kprobes: refuse kprobe insertion on add/sub_preempt_counter()
> ...
> Obviously, this would render this patch useless.

Well, I've tracked it to that commit as well, but I couldn't find any
discussion about kprobe crashes that led to that patch. kprobe has its
own mechanism to prevent recursion.
>>> +SEC("kprobe/trace_preempt_off")
>
> BTW, is there a reason why not supporting built-in
> tracepoints/events? It looks like it is only an artificial
> limitation of bpf_helpers.

The original bpf+tracing patch attached programs to both tracepoints
and kprobes, but there was a concern that it promotes tracepoint
arguments to a stable ABI, since tracepoints in general are considered
stable by most maintainers. So we decided to go with bpf+kprobe for
now: kprobes are unstable, so no one can complain that scripts
suddenly break because a probed function disappears or its arguments
change. Since then we've discussed attaching to the trace marker,
debug tracepoints and other things. So hopefully soon it will be ready.

>>> +int bpf_prog1(struct pt_regs *ctx)
>>> +{
>>> +	int cpu = bpf_get_smp_processor_id();
>>> +	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
>>> +
>>> +	if (ts)
>>> +		*ts = bpf_ktime_get_ns();
>>
>> btw, I'm planning to add native per-cpu maps which will
>> speed up things more and reduce measurement overhead.
>
> Funny I was about to suggest something like this :)
>
>> I think you can retarget this patch to net-next and send
>> it to netdev. It's not too late for this merge window.
>
> I'll rebase it to net-next.

Great :)