Date: Thu, 18 Jun 2015 10:06:02 -0700
From: Alexei Starovoitov
To: Daniel Wagner
CC: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v0] bpf: BPF based latency tracing
Message-ID: <5582FA7A.4050505@plumgrid.com>
In-Reply-To: <1434627604-9624-1-git-send-email-daniel.wagner@bmw-carit.de>

On 6/18/15 4:40 AM, Daniel Wagner wrote:
> BPF offers another way to generate latency histograms. We attach
> kprobes at trace_preempt_off and trace_preempt_on and calculate the
> time between seeing the off and on transitions.
>
> The first array is used to store the start time stamp. The key is the
> CPU id. The second array stores the log2(time diff). We need to use
> static allocation here (arrays, not hash tables): the kprobes hooking
> into trace_preempt_on|off should not call into any dynamic memory
> allocation or free path, since we need to avoid getting called
> recursively. Besides that, it reduces jitter in the measurement.
>
> CPU 0
>       latency        : count     distribution
>        1 -> 1        : 0        |                                        |
>        2 -> 3        : 0        |                                        |
>        4 -> 7        : 0        |                                        |
>        8 -> 15       : 0        |                                        |
>       16 -> 31       : 0        |                                        |
>       32 -> 63       : 0        |                                        |
>       64 -> 127      : 0        |                                        |
>      128 -> 255      : 0        |                                        |
>      256 -> 511      : 0        |                                        |
>      512 -> 1023     : 0        |                                        |
>     1024 -> 2047     : 0        |                                        |
>     2048 -> 4095     : 166723   |*************************************** |
>     4096 -> 8191     : 19870    |***                                     |
>     8192 -> 16383    : 6324     |                                        |
>    16384 -> 32767    : 1098     |                                        |

Nice, useful sample indeed! The numbers are non-JITed, right? JIT should
reduce the measurement cost 2-3x, but the preempt_on/off latency will
probably stay in the 2k range.

> I am not sure it is really worth spending more time getting the hash
> table working for the trace_preempt_[on|off] kprobes. There are so
> many things that could go wrong, so going with a static version seems
> like the right choice to me.

Agreed. For this use case arrays are the better choice anyway.
But I'll keep working on getting hash tables working even under these
extreme conditions. BPF should always be rock solid.
I'm only a bit suspicious of kprobes, since we have:
NOKPROBE_SYMBOL(preempt_count_sub)
but trace_preempt_on(), called by preempt_count_sub(), doesn't have
this mark...

> +SEC("kprobe/trace_preempt_off")
> +int bpf_prog1(struct pt_regs *ctx)
> +{
> +	int cpu = bpf_get_smp_processor_id();
> +	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
> +
> +	if (ts)
> +		*ts = bpf_ktime_get_ns();

By the way, I'm planning to add native per-cpu maps, which will speed
things up further and reduce measurement overhead.

I think you can retarget this patch to net-next and send it to netdev.
It's not too late for this merge window.
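
[Editorial note: for readers of the archive, below is a minimal sketch of the
trace_preempt_on counterpart described in the quoted commit message: read the
per-CPU start timestamp written by the trace_preempt_off probe above, compute
log2 of the delta, and bump the matching bucket in a second array map. Only
my_map appears in the quoted patch; the other names (my_lat, MAX_CPU,
MAX_ENTRIES, log2/log2l, bpf_prog2) are illustrative assumptions, not taken
from the patch itself.]

    /* Sketch only: 2015-era samples/bpf style, names other than my_map
     * are assumptions.
     */
    #include <linux/version.h>
    #include <linux/ptrace.h>
    #include <uapi/linux/bpf.h>
    #include "bpf_helpers.h"

    #define MAX_CPU		128
    #define MAX_ENTRIES		20	/* log2 buckets per CPU */

    /* per-CPU start timestamp, written by the trace_preempt_off probe */
    struct bpf_map_def SEC("maps") my_map = {
    	.type = BPF_MAP_TYPE_ARRAY,
    	.key_size = sizeof(int),
    	.value_size = sizeof(u64),
    	.max_entries = MAX_CPU,
    };

    /* per-CPU latency histogram, one counter per log2 bucket */
    struct bpf_map_def SEC("maps") my_lat = {
    	.type = BPF_MAP_TYPE_ARRAY,
    	.key_size = sizeof(int),
    	.value_size = sizeof(u64),
    	.max_entries = MAX_CPU * MAX_ENTRIES,
    };

    /* loop-free log2 (the verifier rejects loops), bit-twiddling style */
    static unsigned int log2(unsigned int v)
    {
    	unsigned int r, shift;

    	r = (v > 0xFFFF) << 4; v >>= r;
    	shift = (v > 0xFF) << 3; v >>= shift; r |= shift;
    	shift = (v > 0xF) << 2; v >>= shift; r |= shift;
    	shift = (v > 0x3) << 1; v >>= shift; r |= shift;
    	r |= (v >> 1);
    	return r;
    }

    static unsigned int log2l(unsigned long v)
    {
    	unsigned int hi = v >> 32;

    	return hi ? log2(hi) + 32 : log2(v);
    }

    SEC("kprobe/trace_preempt_on")
    int bpf_prog2(struct pt_regs *ctx)
    {
    	int cpu = bpf_get_smp_processor_id();
    	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
    	u64 delta, *cnt;
    	unsigned int bucket;
    	int key;

    	/* skip if trace_preempt_off has not fired yet on this CPU */
    	if (!ts || *ts == 0)
    		return 0;

    	/* nanoseconds spent with preemption disabled on this CPU */
    	delta = bpf_ktime_get_ns() - *ts;

    	bucket = log2l(delta);
    	if (bucket >= MAX_ENTRIES)
    		bucket = MAX_ENTRIES - 1;	/* clamp outliers */
    	key = cpu * MAX_ENTRIES + bucket;

    	cnt = bpf_map_lookup_elem(&my_lat, &key);
    	if (cnt)
    		__sync_fetch_and_add(cnt, 1);	/* compiles to BPF_XADD */

    	return 0;
    }

    char _license[] SEC("license") = "GPL";
    u32 _version SEC("version") = LINUX_VERSION_CODE;

[User space would then walk my_lat per CPU and print the banner shown in the
quoted histogram; the unrolled log2 and the static array maps keep the probe
free of loops and of dynamic allocation, as the commit message requires.]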