Date: Fri, 15 Dec 2023 10:26:33 -0500
From: Steven Rostedt
To: LKML, Linux trace kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers
Subject: [PATCH] tracing: Add disable-filter-buf option
Message-ID: <20231215102633.7a24cb77@rorschach.local.home>

From: "Steven Rostedt (Google)"

Normally, when the filter is enabled, a temporary buffer is created to
copy the event data into it to perform the filtering logic. If the
filter passes and the event should be recorded, then the event is
copied from the temporary buffer into the ring buffer. If the event is
to be discarded then it is simply dropped. If another event comes in
via an interrupt, it will not use the temporary buffer as it is busy,
and will write directly into the ring buffer.

The disable-filter-buf option will disable the temporary buffer and
always write into the ring buffer. This avoids the copy when the event
is to be recorded, but it also adds a bit more overhead on the discard.
And if another event were to interrupt an event that is to be
discarded, that event cannot be removed from the ring buffer; it is
instead converted to padding that will not be read by the reader.
Padding still takes up space in the ring buffer.

This option can be beneficial if most events are recorded and not
discarded, or simply for debugging the discard functionality of the
ring buffer.

Also fix some whitespace (cleaned up automatically while editing this
file in vscode).
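To make the trade-off concrete, here is a toy userspace model of the
two paths. This is an illustration only, none of it is kernel code:
the structures are made up, and the use_temp_buf flag stands in for
"the per-CPU temporary buffer is available and filter buffering is
enabled".

/* filter-paths.c - toy model of the two filtering paths above.
 * Build: gcc -Wall -o filter-paths filter-paths.c
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum slot_state { SLOT_EVENT, SLOT_PADDING };

struct slot {
	enum slot_state	state;
	char		data[64];
};

static struct slot ring[16];	/* toy stand-in for the ring buffer */
static int head;

/* toy stand-in for the event filter predicate */
static bool filter_match(const char *data)
{
	return strncmp(data, "keep", 4) == 0;
}

static void write_event(const char *data, bool use_temp_buf)
{
	static char temp_buf[64];

	if (use_temp_buf) {
		/* Default path: stage the event in the temporary buffer,
		 * run the filter there, and copy into the ring buffer only
		 * if it passes. A discarded event never touches the ring
		 * buffer at all. */
		strcpy(temp_buf, data);
		if (!filter_match(temp_buf))
			return;
		strcpy(ring[head].data, temp_buf);	/* the extra copy */
		ring[head++].state = SLOT_EVENT;
		return;
	}

	/* Direct path (temp buffer busy, or disable-filter-buf set):
	 * write first, filter after. The real code can sometimes still
	 * remove the event; this toy always converts a rejected event
	 * to padding, which readers skip but which still takes space. */
	strcpy(ring[head].data, data);
	ring[head++].state = filter_match(data) ? SLOT_EVENT : SLOT_PADDING;
}

int main(void)
{
	write_event("keep: sched event", true);	/* staged, then copied */
	write_event("drop: noise", true);	/* dropped, ring untouched */
	write_event("drop: irq event", false);	/* written, then padded */

	for (int i = 0; i < head; i++)
		printf("slot %d: %s\n", i,
		       ring[i].state == SLOT_PADDING ? "[padding]" : ring[i].data);
	return 0;
}

The option added by this patch simply forces the second branch even
when the temporary buffer would be available.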
Signed-off-by: Steven Rostedt (Google)
---
 Documentation/trace/ftrace.rst | 23 +++++++++++++++++++++
 kernel/trace/trace.c           | 37 ++++++++++++++++++++--------------
 kernel/trace/trace.h           |  1 +
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 23572f6697c0..7fe96da34962 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -1239,6 +1239,29 @@ Here are the available options:
 	When the free_buffer is closed, tracing will stop
 	(tracing_on set to 0).
 
+  disable-filter-buf
+	Normally, when the filter is enabled, a temporary buffer is
+	created to copy the event data into it to perform the
+	filtering logic. If the filter passes and the event should
+	be recorded, then the event is copied from the temporary
+	buffer into the ring buffer. If the event is to be discarded
+	then it is simply dropped. If another event comes in via
+	an interrupt, it will not use the temporary buffer as it is
+	busy and will write directly into the ring buffer.
+
+	This option will disable the temporary buffer and always
+	write into the ring buffer. This will avoid the copy when
+	the event is to be recorded, but also adds a bit more
+	overhead on the discard, and if another event were to interrupt
+	the event that is to be discarded, then the event will not
+	be removed from the ring buffer but instead converted to
+	padding that will not be read by the reader. Padding will
+	still take up space on the ring buffer.
+
+	This option can be beneficial if most events are recorded and
+	not discarded, or simply for debugging the discard functionality
+	of the ring buffer.
+
   irq-info
 	Shows the interrupt, preempt count, need resched data.
 	When disabled, the trace looks like::
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 199df497db07..41b674b1b809 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5398,6 +5398,8 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
 	return 0;
 }
 
+static int __tracing_set_filter_buffering(struct trace_array *tr, bool set);
+
 int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
 {
 	int *map;
@@ -5451,6 +5453,9 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
 	if (mask == TRACE_ITER_FUNC_FORK)
 		ftrace_pid_follow_fork(tr, enabled);
 
+	if (mask == TRACE_ITER_DISABLE_FILTER_BUF)
+		__tracing_set_filter_buffering(tr, enabled);
+
 	if (mask == TRACE_ITER_OVERWRITE) {
 		ring_buffer_change_overwrite(tr->array_buffer.buffer, enabled);
 #ifdef CONFIG_TRACER_MAX_TRACE
@@ -6464,7 +6469,7 @@ static void tracing_set_nop(struct trace_array *tr)
 {
 	if (tr->current_trace == &nop_trace)
 		return;
-	
+
 	tr->current_trace->enabled--;
 
 	if (tr->current_trace->reset)
@@ -7534,27 +7539,29 @@ u64 tracing_event_time_stamp(struct trace_buffer *buffer, struct ring_buffer_event *rbe)
 	return ring_buffer_event_time_stamp(buffer, rbe);
 }
 
-/*
- * Set or disable using the per CPU trace_buffer_event when possible.
- */
-int tracing_set_filter_buffering(struct trace_array *tr, bool set)
+static int __tracing_set_filter_buffering(struct trace_array *tr, bool set)
 {
-	int ret = 0;
-
-	mutex_lock(&trace_types_lock);
-
 	if (set && tr->no_filter_buffering_ref++)
-		goto out;
+		return 0;
 
 	if (!set) {
-		if (WARN_ON_ONCE(!tr->no_filter_buffering_ref)) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
+			return -EINVAL;
 
 		--tr->no_filter_buffering_ref;
 	}
- out:
+	return 0;
+}
+
+/*
+ * Set or disable using the per CPU trace_buffer_event when possible.
+ */
+int tracing_set_filter_buffering(struct trace_array *tr, bool set)
+{
+	int ret;
+
+	mutex_lock(&trace_types_lock);
+	ret = __tracing_set_filter_buffering(tr, set);
 	mutex_unlock(&trace_types_lock);
 
 	return ret;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 0489e72c8169..a9529943cee2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1250,6 +1250,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
 		C(EVENT_FORK,		"event-fork"),		\
 		C(PAUSE_ON_TRACE,	"pause-on-trace"),	\
 		C(HASH_PTR,		"hash-ptr"),	/* Print hashed pointer */ \
+		C(DISABLE_FILTER_BUF,	"disable-filter-buf"),	\
 		FUNCTION_FLAGS					\
 		FGRAPH_FLAGS					\
 		STACK_FLAGS					\
-- 
2.42.0
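For reference, once applied, the new option can be toggled like any
other trace option. A hypothetical session (assuming tracefs is
mounted at the usual /sys/kernel/tracing; the file name comes from the
flag string added to trace.h above):

  # cd /sys/kernel/tracing
  # echo 1 > options/disable-filter-buf    # write directly, pad on discard
  # echo 0 > options/disable-filter-buf    # back to temp-buffer filtering
  # echo disable-filter-buf > trace_options    # equivalent via trace_options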