Date: Thu, 21 Dec 2023 12:51:24 -0500
From: Steven Rostedt
To: Vincent Donnefort
Cc: mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v9 1/2] ring-buffer: Introducing ring-buffer mapping functions
Message-ID: <20231221125124.49ee8201@gandalf.local.home>
In-Reply-To: <20231221173523.3015715-2-vdonnefort@google.com>
References: <20231221173523.3015715-1-vdonnefort@google.com>
 <20231221173523.3015715-2-vdonnefort@google.com>

On Thu, 21 Dec 2023 17:35:22 +0000
Vincent Donnefort wrote:

> @@ -739,6 +747,22 @@ static __always_inline bool full_hit(struct trace_buffer *buffer, int cpu, int f
>  	return (dirty * 100) > (full * nr_pages);
>  }
>  
> +static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
> +{
> +	if (unlikely(READ_ONCE(cpu_buffer->mapped))) {
> +		/* Ensure the meta_page is ready */
> +		smp_rmb();
> +		WRITE_ONCE(cpu_buffer->meta_page->entries,
> +			   local_read(&cpu_buffer->entries));
> +		WRITE_ONCE(cpu_buffer->meta_page->overrun,
> +			   local_read(&cpu_buffer->overrun));
> +		WRITE_ONCE(cpu_buffer->meta_page->subbufs_touched,
> +			   local_read(&cpu_buffer->pages_touched));
> +		WRITE_ONCE(cpu_buffer->meta_page->subbufs_lost,
> +			   local_read(&cpu_buffer->pages_lost));
> +	}
> +}
> +
>  /*
>   * rb_wake_up_waiters - wake up tasks waiting for ring buffer input
>   *
> @@ -749,6 +773,18 @@ static void rb_wake_up_waiters(struct irq_work *work)
>  {
>  	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
>  
> +	if (rbwork->is_cpu_buffer) {
> +		struct ring_buffer_per_cpu *cpu_buffer;
> +
> +		cpu_buffer = container_of(rbwork, struct ring_buffer_per_cpu,
> +					  irq_work);
> +		/*
> +		 * If the waiter is a cpu_buffer, this might be due to a
> +		 * userspace mapping. Let's update the meta-page.
> +		 */
> +		rb_update_meta_page(cpu_buffer);
> +	}
> +
>  	wake_up_all(&rbwork->waiters);
>  	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
>  		rbwork->wakeup_full = false;

I think this code would be cleaner if we did:

static void rb_update_meta_page(struct rb_irq_work *rbwork)
{
	struct ring_buffer_per_cpu *cpu_buffer;

	if (!rbwork->is_cpu_buffer)
		return;

	/*
	 * If the waiter is a cpu_buffer, this might be due to a
	 * userspace mapping. Let's update the meta-page.
	 */
	cpu_buffer = container_of(rbwork, struct ring_buffer_per_cpu,
				  irq_work);

	if (unlikely(READ_ONCE(cpu_buffer->mapped))) { // I don't think we need the "unlikely"
		/* Ensure the meta_page is ready */
		smp_rmb();
		WRITE_ONCE(cpu_buffer->meta_page->entries,
			   local_read(&cpu_buffer->entries));
		WRITE_ONCE(cpu_buffer->meta_page->overrun,
			   local_read(&cpu_buffer->overrun));
		WRITE_ONCE(cpu_buffer->meta_page->subbufs_touched,
			   local_read(&cpu_buffer->pages_touched));
		WRITE_ONCE(cpu_buffer->meta_page->subbufs_lost,
			   local_read(&cpu_buffer->pages_lost));
	}
}

/*
 * rb_wake_up_waiters - wake up tasks waiting for ring buffer input
 *
 * Schedules a delayed work to wake up any task that is blocked on the
 * ring buffer waiters queue.
 */
static void rb_wake_up_waiters(struct irq_work *work)
{
	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);

	rb_update_meta_page(rbwork);

	wake_up_all(&rbwork->waiters);
	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
		rbwork->wakeup_full = false;
		rbwork->full_waiters_pending = false;
		wake_up_all(&rbwork->full_waiters);
	}
}

-- 
Steve
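
A note for readers following the thread: the suggested cleanup relies on
container_of() recovering the enclosing ring_buffer_per_cpu from a pointer
to its embedded rb_irq_work member. The sketch below is a minimal,
self-contained userspace illustration of that pattern, not the actual
kernel code; the container_of() definition mirrors the kernel's, and the
toy_* names are hypothetical stand-ins for the real ring-buffer types.

#include <stdio.h>
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-ins for struct rb_irq_work and
 * struct ring_buffer_per_cpu. */
struct toy_irq_work {
	int is_cpu_buffer;
};

struct toy_cpu_buffer {
	int cpu;
	struct toy_irq_work irq_work;	/* embedded member */
};

/* Same shape as the suggested rb_update_meta_page(): take the
 * embedded work pointer, bail out early, then recover the container. */
static void toy_update_meta_page(struct toy_irq_work *rbwork)
{
	struct toy_cpu_buffer *cpu_buffer;

	if (!rbwork->is_cpu_buffer)
		return;

	cpu_buffer = container_of(rbwork, struct toy_cpu_buffer, irq_work);
	printf("updating meta-page for cpu %d\n", cpu_buffer->cpu);
}

int main(void)
{
	struct toy_cpu_buffer buf = {
		.cpu = 3,
		.irq_work = { .is_cpu_buffer = 1 },
	};

	/* The caller, like rb_wake_up_waiters(), only holds the
	 * embedded irq_work pointer. */
	toy_update_meta_page(&buf.irq_work);
	return 0;
}

The design point is the same as in the suggested rewrite: because the
helper performs the is_cpu_buffer check and the container_of() itself,
the caller only needs the embedded pointer and its body stays flat.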