2011-03-14 22:54:09

by tip-bot for Slava Pestov

Subject: [PATCH] ftrace: add a new 'tail drops' counter for overflow events

The existing 'overrun' counter is incremented when the ring
buffer wraps around, with overwrite on (the default). We wanted
a way to count events lost when the buffer fills up with
overwrite off, too. I decided to add a new counter instead
of retrofitting the existing one because it is conceptually
a different statistic, and also because of how the code is
structured.
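
With overwrite off, the new counter shows up in the per-CPU stats
file alongside the existing counters; output would look roughly like
this (the numbers are made up for illustration):

  # cat /sys/kernel/debug/tracing/per_cpu/cpu0/stats
  entries: 1430
  overrun: 0
  commit overrun: 0
  tail drops: 278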

Signed-off-by: Slava Pestov <[email protected]>
---
include/linux/ring_buffer.h | 1 +
kernel/trace/ring_buffer.c | 43 ++++++++++++++++++++++++++++++++++++-------
kernel/trace/trace.c | 3 +++
3 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 8d3a248..155c665 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -157,6 +157,7 @@ unsigned long ring_buffer_overruns(struct ring_buffer *buffer);
 unsigned long ring_buffer_entries_cpu(struct ring_buffer *buffer, int cpu);
 unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu);
 unsigned long ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu);
+unsigned long ring_buffer_tail_drops_cpu(struct ring_buffer *buffer, int cpu);
 
 u64 ring_buffer_time_stamp(struct ring_buffer *buffer, int cpu);
 void ring_buffer_normalize_time_stamp(struct ring_buffer *buffer,
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index bd1c35a..2b9eae7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -489,9 +489,10 @@ struct ring_buffer_per_cpu {
 	struct buffer_page		*reader_page;
 	unsigned long			lost_events;
 	unsigned long			last_overrun;
-	local_t				commit_overrun;
-	local_t				overrun;
 	local_t				entries;
+	local_t				overrun;
+	local_t				commit_overrun;
+	local_t				tail_drops;
 	local_t				committing;
 	local_t				commits;
 	unsigned long			read;
@@ -1931,8 +1932,10 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer,
 			 * If we are not in overwrite mode,
 			 * this is easy, just stop here.
 			 */
-			if (!(buffer->flags & RB_FL_OVERWRITE))
-				goto out_reset;
+			if (!(buffer->flags & RB_FL_OVERWRITE)) {
+				local_inc(&cpu_buffer->tail_drops);
+				goto out_reset;
+			}
 
 			ret = rb_handle_head_page(cpu_buffer,
 						  tail_page,
@@ -2647,7 +2650,8 @@ unsigned long ring_buffer_entries_cpu(struct ring_buffer *buffer, int cpu)
 EXPORT_SYMBOL_GPL(ring_buffer_entries_cpu);
 
 /**
- * ring_buffer_overrun_cpu - get the number of overruns in a cpu_buffer
+ * ring_buffer_overrun_cpu - get the number of overruns caused by the ring
+ * buffer wrapping around (only if RB_FL_OVERWRITE is on).
  * @buffer: The ring buffer
  * @cpu: The per CPU buffer to get the number of overruns from
  */
@@ -2667,7 +2671,9 @@ unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu)
 EXPORT_SYMBOL_GPL(ring_buffer_overrun_cpu);
 
 /**
- * ring_buffer_commit_overrun_cpu - get the number of overruns caused by commits
+ * ring_buffer_commit_overrun_cpu - get the number of overruns caused by
+ * commits failing due to the buffer wrapping around while there are uncommitted
+ * events, such as during an interrupt storm.
  * @buffer: The ring buffer
  * @cpu: The per CPU buffer to get the number of overruns from
  */
@@ -2688,6 +2694,28 @@ ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu)
 EXPORT_SYMBOL_GPL(ring_buffer_commit_overrun_cpu);
 
 /**
+ * ring_buffer_tail_drops_cpu - get the number of tail drops caused by
+ * the ring buffer filling up (only if RB_FL_OVERWRITE is off).
+ * @buffer: The ring buffer
+ * @cpu: The per CPU buffer to get the number of tail drops from
+ */
+unsigned long
+ring_buffer_tail_drops_cpu(struct ring_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long ret;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return 0;
+
+	cpu_buffer = buffer->buffers[cpu];
+	ret = local_read(&cpu_buffer->tail_drops);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(ring_buffer_tail_drops_cpu);
+
+/**
  * ring_buffer_entries - get the number of entries in a buffer
  * @buffer: The ring buffer
  *
@@ -3494,8 +3522,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	local_set(&cpu_buffer->reader_page->page->commit, 0);
 	cpu_buffer->reader_page->read = 0;
 
-	local_set(&cpu_buffer->commit_overrun, 0);
 	local_set(&cpu_buffer->overrun, 0);
+	local_set(&cpu_buffer->commit_overrun, 0);
+	local_set(&cpu_buffer->tail_drops, 0);
 	local_set(&cpu_buffer->entries, 0);
 	local_set(&cpu_buffer->committing, 0);
 	local_set(&cpu_buffer->commits, 0);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dc53ecb..3677c8a 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3916,6 +3916,9 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
 	cnt = ring_buffer_commit_overrun_cpu(tr->buffer, cpu);
 	trace_seq_printf(s, "commit overrun: %ld\n", cnt);
 
+	cnt = ring_buffer_tail_drops_cpu(tr->buffer, cpu);
+	trace_seq_printf(s, "tail drops: %ld\n", cnt);
+
 	count = simple_read_from_buffer(ubuf, count, ppos, s->buffer, s->len);
 
 	kfree(s);
--
1.7.3.1


2011-03-15 02:39:57

by Steven Rostedt

Subject: Re: [PATCH] ftrace: add a new 'tail drops' counter for overflow events

On Mon, 2011-03-14 at 15:53 -0700, Slava Pestov wrote:
> The existing 'overrun' counter is incremented when the ring
> buffer wraps around, with overwrite on (the default). We wanted
> a way to count events lost when the buffer fills up with
> overwrite off, too. I decided to add a new counter instead
> of retrofitting the existing one because it is conceptually
> a different statistic, and also because of how the code is
> structured.

So this is when we are in producer/consumer mode and the ring buffer
fills up and events are dropped.

For this we could just add a new ring buffer event type. We could
repurpose the RINGBUF_TYPE_TIME_STAMP slot and call it
RINGBUF_TYPE_LOST_EVENTS instead; I never implemented TIME_STAMP as I
never found a need to ;)

We currently have a TIME_EXTEND event that is still relative to the
last event but has a total of 59 bits for time. At nanosecond
resolution, 2^59 ns is roughly 5.8e8 seconds, so TIME_EXTEND can
handle events about 18 years apart; events that far apart would mean
the buffer went unread for 18 years.

The LOST_EVENTS could store the number of events lost when it starts
reading again. This way raw readers will know that events were lost and
how many.
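
Roughly, as a sketch (the RINGBUF_TYPE_LOST_EVENTS name and the
rb_mark_lost_events() helper are hypothetical here, not existing
code):

	/*
	 * Hypothetical: reuse the unimplemented TIME_STAMP slot as a
	 * "lost events" marker.  array[0] carries the number of events
	 * dropped while the buffer was full.
	 */
	enum ring_buffer_type {
		RINGBUF_TYPE_DATA_TYPE_LEN_MAX = 28,
		RINGBUF_TYPE_PADDING,
		RINGBUF_TYPE_TIME_EXTEND,
		RINGBUF_TYPE_LOST_EVENTS,	/* takes TIME_STAMP's slot */
	};

	/* assumes the event was reserved with room for one u32 */
	static void rb_mark_lost_events(struct ring_buffer_event *event,
					u32 lost)
	{
		event->type_len = RINGBUF_TYPE_LOST_EVENTS;
		event->time_delta = 0;	/* no time passes for a marker */
		event->array[0] = lost;	/* how many events went missing */
	}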

-- Steve


>
> Signed-off-by: Slava Pestov <[email protected]>

2011-03-15 04:13:55

by David Sharp

Subject: Re: [PATCH] ftrace: add a new 'tail drops' counter for overflow events

On Mon, Mar 14, 2011 at 7:39 PM, Steven Rostedt <[email protected]> wrote:
> On Mon, 2011-03-14 at 15:53 -0700, Slava Pestov wrote:
>> The existing 'overrun' counter is incremented when the ring
>> buffer wraps around, with overwrite on (the default). We wanted
>> a way to count events lost when the buffer fills up with
>> overwrite off, too. I decided to add a new counter instead
>> of retrofitting the existing one because it is conceptually
>> a different statistic, and also because of how the code is
>> structured.
>
> So this is when we are in producer/consumer mode and the ring buffer
> fills up and events are dropped.
>
> For this we could just add a new ring buffer event type. We could
> repurpose the RINGBUF_TYPE_TIME_STAMP slot and call it
> RINGBUF_TYPE_LOST_EVENTS instead; I never implemented TIME_STAMP as I
> never found a need to ;)
>
> We currently have a TIME_EXTEND event that is still relative to the
> last event but has a total of 59 bits for time. At nanosecond
> resolution, 2^59 ns is roughly 5.8e8 seconds, so TIME_EXTEND can
> handle events about 18 years apart; events that far apart would mean
> the buffer went unread for 18 years.
>
> The LOST_EVENTS could store the number of events lost when it starts
> reading again. This way raw readers will know that events were lost and
> how many.

s/reading/writing? I.e., the next time enough space is available, it
would first write a LOST_EVENTS event before returning space for the
next event?

Regardless of whether you want to put this info in the trace itself
(I agree it would be very useful to know where events were dropped),
I think it's useful for a user to be able to quickly see the total
number of events lost without grepping the trace.


>
> -- Steve
>
>
>>
>> Signed-off-by: Slava Pestov <[email protected]>