2024-04-06 09:32:45

by Andrea Righi

Subject: [PATCH v4 0/4] libbpf: API to partially consume items from ringbuffer

Introduce ring__consume_n() and ring_buffer__consume_n() API to
partially consume items from one (or more) ringbuffer(s).

This can be useful, for example, to consume just a single item, or to
copy multiple items into a limited user-space buffer from within the
ringbuffer callback.

Practical example (where this API can be used):
https://github.com/sched-ext/scx/blob/b7c06b9ed9f72cad83c31e39e9c4e2cfd8683a55/rust/scx_rustland_core/src/bpf.rs#L217

See also:
https://lore.kernel.org/lkml/[email protected]/T/#u

v4:
- add a selftest to test the new API
- open a new 1.5.0 cycle

v3:
- rename ring__consume_max() -> ring__consume_n() and
ring_buffer__consume_max() -> ring_buffer__consume_n()
- add new API to a new 1.5.0 cycle
- fix minor nits / address review comments

v2:
- introduce a new API instead of changing the callback's retcode
behavior

Andrea Righi (4):
libbpf: Start v1.5 development cycle
libbpf: ringbuf: allow to consume up to a certain amount of items
libbpf: Add ring__consume_n / ring_buffer__consume_n
selftests/bpf: Add tests for ring__consume_n and ring_buffer__consume_n

tools/lib/bpf/libbpf.h | 12 +++++
tools/lib/bpf/libbpf.map | 6 +++
tools/lib/bpf/libbpf_version.h | 2 +-
tools/lib/bpf/ringbuf.c | 59 ++++++++++++++++++++----
tools/testing/selftests/bpf/prog_tests/ringbuf.c | 8 ++++
5 files changed, 76 insertions(+), 11 deletions(-)


2024-04-06 09:32:46

by Andrea Righi

Subject: [PATCH v4 2/4] libbpf: ringbuf: allow to consume up to a certain amount of items

In some cases, instead of always consuming all items from ring buffers
in a greedy way, we may want to consume only up to a certain number of
items, for example when we need to copy items from the BPF ring buffer
to a limited user buffer.

This change allows setting an upper limit on the number of items
consumed from one or more ring buffers.

Signed-off-by: Andrea Righi <[email protected]>
---
tools/lib/bpf/ringbuf.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index aacb64278a01..2c4031168413 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -231,7 +231,7 @@ static inline int roundup_len(__u32 len)
return (len + 7) / 8 * 8;
}

-static int64_t ringbuf_process_ring(struct ring *r)
+static int64_t ringbuf_process_ring(struct ring *r, size_t n)
{
int *len_ptr, len, err;
/* 64-bit to avoid overflow in case of extreme application behavior */
@@ -268,6 +268,9 @@ static int64_t ringbuf_process_ring(struct ring *r)
}

smp_store_release(r->consumer_pos, cons_pos);
+
+ if (cnt >= n)
+ goto done;
}
} while (got_new_data);
done:
@@ -287,13 +290,15 @@ int ring_buffer__consume(struct ring_buffer *rb)
for (i = 0; i < rb->ring_cnt; i++) {
struct ring *ring = rb->rings[i];

- err = ringbuf_process_ring(ring);
+ err = ringbuf_process_ring(ring, INT_MAX);
if (err < 0)
return libbpf_err(err);
res += err;
+ if (res > INT_MAX) {
+ res = INT_MAX;
+ break;
+ }
}
- if (res > INT_MAX)
- return INT_MAX;
return res;
}

@@ -314,13 +319,15 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
__u32 ring_id = rb->events[i].data.fd;
struct ring *ring = rb->rings[ring_id];

- err = ringbuf_process_ring(ring);
+ err = ringbuf_process_ring(ring, INT_MAX);
if (err < 0)
return libbpf_err(err);
res += err;
+ if (res > INT_MAX) {
+ res = INT_MAX;
+ break;
+ }
}
- if (res > INT_MAX)
- return INT_MAX;
return res;
}

@@ -375,7 +382,7 @@ int ring__consume(struct ring *r)
{
int64_t res;

- res = ringbuf_process_ring(r);
+ res = ringbuf_process_ring(r, INT_MAX);
if (res < 0)
return libbpf_err(res);

--
2.43.0


2024-04-06 09:32:49

by Andrea Righi

Subject: [PATCH v4 3/4] libbpf: Add ring__consume_n / ring_buffer__consume_n

Introduce a new API to consume items from a ring buffer, limited to a
specified amount, and return to the caller the actual number of items
consumed.

Link: https://lore.kernel.org/lkml/[email protected]/T
Signed-off-by: Andrea Righi <[email protected]>
---
tools/lib/bpf/libbpf.h | 12 ++++++++++++
tools/lib/bpf/libbpf.map | 3 +++
tools/lib/bpf/ringbuf.c | 38 +++++++++++++++++++++++++++++++++++---
3 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index f88ab50c0229..4f775a6dcaa0 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1293,6 +1293,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
ring_buffer_sample_fn sample_cb, void *ctx);
LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
+LIBBPF_API int ring_buffer__consume_n(struct ring_buffer *rb, size_t n);
LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);

/**
@@ -1367,6 +1368,17 @@ LIBBPF_API int ring__map_fd(const struct ring *r);
*/
LIBBPF_API int ring__consume(struct ring *r);

+/**
+ * @brief **ring__consume_n()** consumes up to a requested amount of items from
+ * a ringbuffer without event polling.
+ *
+ * @param r A ringbuffer object.
+ * @param n Maximum amount of items to consume.
+ * @return The number of items consumed, or a negative number if any of the
+ * callbacks return an error.
+ */
+LIBBPF_API int ring__consume_n(struct ring *r, size_t n);
+
struct user_ring_buffer_opts {
size_t sz; /* size of this struct, for forward/backward compatibility */
};
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 5dd81a7b96b5..23d82bba021a 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -418,4 +418,7 @@ LIBBPF_1.4.0 {
} LIBBPF_1.3.0;

LIBBPF_1.5.0 {
+ global:
+ ring__consume_n;
+ ring_buffer__consume_n;
} LIBBPF_1.4.0;
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index 2c4031168413..19cd34883011 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -277,6 +277,33 @@ static int64_t ringbuf_process_ring(struct ring *r, size_t n)
return cnt;
}

+/* Consume available ring buffer(s) data without event polling, up to n
+ * records.
+ *
+ * Returns number of records consumed across all registered ring buffers (or
+ * n, whichever is less), or negative number if any of the callbacks return
+ * error.
+ */
+int ring_buffer__consume_n(struct ring_buffer *rb, size_t n)
+{
+ int64_t err, res = 0;
+ int i;
+
+ for (i = 0; i < rb->ring_cnt; i++) {
+ struct ring *ring = rb->rings[i];
+
+ err = ringbuf_process_ring(ring, n);
+ if (err < 0)
+ return libbpf_err(err);
+ res += err;
+ n -= err;
+
+ if (n == 0)
+ break;
+ }
+ return res;
+}
+
/* Consume available ring buffer(s) data without event polling.
* Returns number of records consumed across all registered ring buffers (or
* INT_MAX, whichever is less), or negative number if any of the callbacks
@@ -378,17 +405,22 @@ int ring__map_fd(const struct ring *r)
return r->map_fd;
}

-int ring__consume(struct ring *r)
+int ring__consume_n(struct ring *r, size_t n)
{
- int64_t res;
+ int res;

- res = ringbuf_process_ring(r, INT_MAX);
+ res = ringbuf_process_ring(r, n);
if (res < 0)
return libbpf_err(res);

return res > INT_MAX ? INT_MAX : res;
}

+int ring__consume(struct ring *r)
+{
+ return ring__consume_n(r, INT_MAX);
+}
+
static void user_ringbuf_unmap_ring(struct user_ring_buffer *rb)
{
if (rb->consumer_pos) {
--
2.43.0


2024-04-06 17:46:30

by Andrii Nakryiko

Subject: Re: [PATCH v4 2/4] libbpf: ringbuf: allow to consume up to a certain amount of items

On Sat, Apr 6, 2024 at 2:20 AM Andrea Righi <andrea.righi@canonical.com> wrote:
>
> In some cases, instead of always consuming all items from ring buffers
> in a greedy way, we may want to consume only up to a certain number of
> items, for example when we need to copy items from the BPF ring buffer
> to a limited user buffer.
>
> This change allows setting an upper limit on the number of items
> consumed from one or more ring buffers.
>
> Signed-off-by: Andrea Righi <[email protected]>
> ---
> tools/lib/bpf/ringbuf.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
> index aacb64278a01..2c4031168413 100644
> --- a/tools/lib/bpf/ringbuf.c
> +++ b/tools/lib/bpf/ringbuf.c
> @@ -231,7 +231,7 @@ static inline int roundup_len(__u32 len)
> return (len + 7) / 8 * 8;
> }
>
> -static int64_t ringbuf_process_ring(struct ring *r)
> +static int64_t ringbuf_process_ring(struct ring *r, size_t n)
> {
> int *len_ptr, len, err;
> /* 64-bit to avoid overflow in case of extreme application behavior */
> @@ -268,6 +268,9 @@ static int64_t ringbuf_process_ring(struct ring *r)
> }
>
> smp_store_release(r->consumer_pos, cons_pos);
> +
> + if (cnt >= n)
> + goto done;
> }
> } while (got_new_data);
> done:
> @@ -287,13 +290,15 @@ int ring_buffer__consume(struct ring_buffer *rb)
> for (i = 0; i < rb->ring_cnt; i++) {
> struct ring *ring = rb->rings[i];
>
> - err = ringbuf_process_ring(ring);
> + err = ringbuf_process_ring(ring, INT_MAX);
> if (err < 0)
> return libbpf_err(err);
> res += err;
> + if (res > INT_MAX) {
> + res = INT_MAX;
> + break;
> + }
> }
> - if (res > INT_MAX)
> - return INT_MAX;

The idea here was to avoid returning an overflown int, not really to
stop at INT_MAX samples, so I kept this part intact (res is int64_t, so
no overflow can happen inside the loop).


> return res;
> }
>
> @@ -314,13 +319,15 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
> __u32 ring_id = rb->events[i].data.fd;
> struct ring *ring = rb->rings[ring_id];
>
> - err = ringbuf_process_ring(ring);
> + err = ringbuf_process_ring(ring, INT_MAX);
> if (err < 0)
> return libbpf_err(err);
> res += err;
> + if (res > INT_MAX) {
> + res = INT_MAX;
> + break;
> + }
> }
> - if (res > INT_MAX)
> - return INT_MAX;
> return res;
> }
>
> @@ -375,7 +382,7 @@ int ring__consume(struct ring *r)
> {
> int64_t res;
>
> - res = ringbuf_process_ring(r);
> + res = ringbuf_process_ring(r, INT_MAX);
> if (res < 0)
> return libbpf_err(res);
>
> --
> 2.43.0
>

2024-04-06 17:50:38

by patchwork-bot+netdevbpf

Subject: Re: [PATCH v4 0/4] libbpf: API to partially consume items from ringbuffer

Hello:

This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <[email protected]>:

On Sat, 6 Apr 2024 11:15:40 +0200 you wrote:
> Introduce ring__consume_n() and ring_buffer__consume_n() API to
> partially consume items from one (or more) ringbuffer(s).
>
> This can be useful, for example, to consume just a single item, or to
> copy multiple items into a limited user-space buffer from within the
> ringbuffer callback.
>
> [...]

Here is the summary with links:
- [v4,1/4] libbpf: Start v1.5 development cycle
https://git.kernel.org/bpf/bpf-next/c/5bd2ed658231
- [v4,2/4] libbpf: ringbuf: allow to consume up to a certain amount of items
https://git.kernel.org/bpf/bpf-next/c/13e8125a2276
- [v4,3/4] libbpf: Add ring__consume_n / ring_buffer__consume_n
https://git.kernel.org/bpf/bpf-next/c/4d22ea94ea33
- [v4,4/4] selftests/bpf: Add tests for ring__consume_n and ring_buffer__consume_n
(no matching commit)

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html