2020-01-15 18:45:27

by Brian Vazquez

Subject: [PATCH v5 bpf-next 0/9] add bpf batch ops to process more than 1 elem

This patch series introduces batch ops that can be added to bpf maps to
lookup/lookup_and_delete/update/delete more than one element at a time;
this is especially useful when syscall overhead is a problem, and in the
case of hashmaps it provides a reliable way of traversing them.

The implementation includes a generic approach that could potentially be
used by any bpf map and adds it to arraymap; it also includes a specific
implementation for hashmaps, which are traversed using buckets instead
of keys.

The bpf syscall subcommands introduced are:

BPF_MAP_LOOKUP_BATCH
BPF_MAP_LOOKUP_AND_DELETE_BATCH
BPF_MAP_UPDATE_BATCH
BPF_MAP_DELETE_BATCH

The UAPI attribute is:

	struct { /* struct used by BPF_MAP_*_BATCH commands */
		__aligned_u64	in_batch;	/* start batch,
						 * NULL to start from beginning
						 */
		__aligned_u64	out_batch;	/* output: next start batch */
		__aligned_u64	keys;
		__aligned_u64	values;
		__u32		count;		/* input/output:
						 * input: # of key/value
						 * elements
						 * output: # of filled elements
						 */
		__u32		map_fd;
		__u64		elem_flags;
		__u64		flags;
	} batch;
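
For illustration only (this sketch is not part of the series' diffs, and it
assumes kernel headers that already carry the new batch attribute), one raw
bpf(2) BPF_MAP_LOOKUP_BATCH call fills those fields roughly as follows; the
batch cursor layout is opaque and map-specific:

	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/bpf.h>

	/* Fetch up to *count elements, starting at the opaque cursor
	 * *in_batch (pass NULL on the first call); the kernel writes the
	 * next cursor to *out_batch and the number of elements actually
	 * filled back into attr.batch.count.
	 */
	static int lookup_batch_raw(int map_fd, void *in_batch, void *out_batch,
				    void *keys, void *values, __u32 *count)
	{
		union bpf_attr attr;
		int err;

		memset(&attr, 0, sizeof(attr));
		attr.batch.map_fd = map_fd;
		attr.batch.in_batch = (__u64)(unsigned long)in_batch;
		attr.batch.out_batch = (__u64)(unsigned long)out_batch;
		attr.batch.keys = (__u64)(unsigned long)keys;
		attr.batch.values = (__u64)(unsigned long)values;
		attr.batch.count = *count;

		err = syscall(__NR_bpf, BPF_MAP_LOOKUP_BATCH, &attr, sizeof(attr));
		*count = attr.batch.count;
		return err;
	}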


in_batch and out_batch are only used for lookup and lookup_and_delete since
those are the only two operations that attempt to traverse the map.
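
At the libbpf level (using the wrappers added later in this series), a full
traversal that drains a map could look roughly like the minimal sketch below;
it assumes an already populated int->int BPF_MAP_TYPE_HASH behind map_fd with
at most MAX_ENTRIES elements:

	#include <errno.h>
	#include <bpf/bpf.h>
	#include <bpf/libbpf.h>

	#define MAX_ENTRIES 128

	/* Drain the whole map: in_batch == NULL starts from the beginning,
	 * out_batch returns the cursor for the next call, and the kernel
	 * signals the end of the traversal with -ENOENT.
	 */
	static int drain_map(int map_fd)
	{
		int keys[MAX_ENTRIES], values[MAX_ENTRIES];
		__u32 batch, count, total = 0;
		DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
			.elem_flags = 0,
			.flags = 0,
		);
		int err;

		while (total < MAX_ENTRIES) {
			count = MAX_ENTRIES - total;
			err = bpf_map_lookup_and_delete_batch(map_fd,
							      total ? &batch : NULL,
							      &batch, keys + total,
							      values + total,
							      &count, &opts);
			if (err && errno != ENOENT)
				return -errno;
			total += count;
			if (err) /* ENOENT: traversal finished */
				break;
		}
		return total;
	}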

update/delete batch ops should provide the keys/values that the user wants
to modify.
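
Again purely as an illustrative sketch under the same assumptions as above,
a batched update followed by a batched delete of the same keys:

	/* keys/values are caller-provided arrays naming exactly the n
	 * elements to modify; no cursor is involved for update/delete.
	 */
	static int set_then_clear(int map_fd, int *keys, int *values, __u32 n)
	{
		DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
			.elem_flags = 0,
			.flags = 0,
		);
		__u32 count = n;
		int err;

		err = bpf_map_update_batch(map_fd, keys, values, &count, &opts);
		if (err)
			return -errno;

		count = n;
		err = bpf_map_delete_batch(map_fd, keys, &count, &opts);
		return err ? -errno : 0;
	}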

Here are the previous discussions on the batch processing:
- https://lore.kernel.org/bpf/[email protected]/
- https://lore.kernel.org/bpf/[email protected]/
- https://lore.kernel.org/bpf/[email protected]/

Changelog since v4:
- Remove unnecessary checks from libbpf API (Andrii Nakryiko)
- Move DECLARE_LIBBPF_OPTS with all var declarations (Andrii Nakryiko)
- Change bucket internal buffer size to 5 entries (Yonghong Song)
- Fix some minor bugs in hashtab batch ops implementation (Yonghong Song)

Changelog since v3:
- Do not use copy_to_user inside atomic region (Yonghong Song)
- Use _opts approach on libbpf APIs (Andrii Nakryiko)
- Drop generic_map_lookup_and_delete_batch support
- Free malloc-ed memory in tests (Yonghong Song)
- Reverse christmas tree (Yonghong Song)
- Add acked labels

Changelog since v2:
- Add generic batch support for lpm_trie and test it (Yonghong Song)
- Use define MAP_LOOKUP_RETRIES for retries (John Fastabend)
- Return errors directly and remove labels (Yonghong Song)
- Insert new API functions into libbpf alphabetically (Yonghong Song)
- Change hlist_nulls_for_each_entry_rcu to
hlist_nulls_for_each_entry_safe in htab batch ops (Yonghong Song)

Changelog since v1:
- Fix SOB ordering and remove Co-authored-by tag (Alexei Starovoitov)

Changelog since RFC:
- Change batch to in_batch and out_batch to support more flexible opaque
values to iterate the bpf maps.
- Remove update/delete specific batch ops for htab and use the generic
implementations instead.

Brian Vazquez (5):
bpf: add bpf_map_{value_size,update_value,map_copy_value} functions
bpf: add generic support for lookup batch op
bpf: add generic support for update and delete batch ops
bpf: add lookup and update batch ops to arraymap
selftests/bpf: add batch ops testing to array bpf map

Yonghong Song (4):
bpf: add batch ops to all htab bpf map
tools/bpf: sync uapi header bpf.h
libbpf: add libbpf support to batch ops
selftests/bpf: add batch ops testing for htab and htab_percpu map

include/linux/bpf.h | 18 +
include/uapi/linux/bpf.h | 21 +
kernel/bpf/arraymap.c | 2 +
kernel/bpf/hashtab.c | 264 +++++++++
kernel/bpf/syscall.c | 554 ++++++++++++++----
tools/include/uapi/linux/bpf.h | 21 +
tools/lib/bpf/bpf.c | 58 ++
tools/lib/bpf/bpf.h | 22 +
tools/lib/bpf/libbpf.map | 4 +
.../bpf/map_tests/array_map_batch_ops.c | 129 ++++
.../bpf/map_tests/htab_map_batch_ops.c | 283 +++++++++
11 files changed, 1248 insertions(+), 128 deletions(-)
create mode 100644 tools/testing/selftests/bpf/map_tests/array_map_batch_ops.c
create mode 100644 tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c

--
2.25.0.rc1.283.g88dfdc4193-goog


2020-01-15 18:45:29

by Brian Vazquez

Subject: [PATCH v5 bpf-next 6/9] tools/bpf: sync uapi header bpf.h

From: Yonghong Song <[email protected]>

sync uapi header include/uapi/linux/bpf.h to
tools/include/uapi/linux/bpf.h

Signed-off-by: Yonghong Song <[email protected]>
---
tools/include/uapi/linux/bpf.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 52966e758fe59..9536729a03d57 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -107,6 +107,10 @@ enum bpf_cmd {
 	BPF_MAP_LOOKUP_AND_DELETE_ELEM,
 	BPF_MAP_FREEZE,
 	BPF_BTF_GET_NEXT_ID,
+	BPF_MAP_LOOKUP_BATCH,
+	BPF_MAP_LOOKUP_AND_DELETE_BATCH,
+	BPF_MAP_UPDATE_BATCH,
+	BPF_MAP_DELETE_BATCH,
 };
 
 enum bpf_map_type {
@@ -420,6 +424,23 @@ union bpf_attr {
 		__u64		flags;
 	};
 
+	struct { /* struct used by BPF_MAP_*_BATCH commands */
+		__aligned_u64	in_batch;	/* start batch,
+						 * NULL to start from beginning
+						 */
+		__aligned_u64	out_batch;	/* output: next start batch */
+		__aligned_u64	keys;
+		__aligned_u64	values;
+		__u32		count;		/* input/output:
+						 * input: # of key/value
+						 * elements
+						 * output: # of filled elements
+						 */
+		__u32		map_fd;
+		__u64		elem_flags;
+		__u64		flags;
+	} batch;
+
 	struct { /* anonymous struct used by BPF_PROG_LOAD command */
 		__u32		prog_type;	/* one of enum bpf_prog_type */
 		__u32		insn_cnt;
--
2.25.0.rc1.283.g88dfdc4193-goog

2020-01-15 18:46:25

by Brian Vazquez

Subject: [PATCH v5 bpf-next 8/9] selftests/bpf: add batch ops testing for htab and htab_percpu map

From: Yonghong Song <[email protected]>

Tested bpf_map_lookup_batch(), bpf_map_lookup_and_delete_batch(),
bpf_map_update_batch(), and bpf_map_delete_batch() functionality.
$ ./test_maps
...
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
...

Signed-off-by: Yonghong Song <[email protected]>
Signed-off-by: Brian Vazquez <[email protected]>
---
.../bpf/map_tests/htab_map_batch_ops.c | 283 ++++++++++++++++++
1 file changed, 283 insertions(+)
create mode 100644 tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c

diff --git a/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
new file mode 100644
index 0000000000000..976bf415fbdd9
--- /dev/null
+++ b/tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <stdio.h>
+#include <errno.h>
+#include <string.h>
+
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+#include <bpf_util.h>
+#include <test_maps.h>
+
+static void map_batch_update(int map_fd, __u32 max_entries, int *keys,
+ void *values, bool is_pcpu)
+{
+ typedef BPF_DECLARE_PERCPU(int, value);
+ value *v = NULL;
+ int i, j, err;
+ DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
+ .elem_flags = 0,
+ .flags = 0,
+ );
+
+ if (is_pcpu)
+ v = (value *)values;
+
+ for (i = 0; i < max_entries; i++) {
+ keys[i] = i + 1;
+ if (is_pcpu)
+ for (j = 0; j < bpf_num_possible_cpus(); j++)
+ bpf_percpu(v[i], j) = i + 2 + j;
+ else
+ ((int *)values)[i] = i + 2;
+ }
+
+ err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &opts);
+ CHECK(err, "bpf_map_update_batch()", "error:%s\n", strerror(errno));
+}
+
+static void map_batch_verify(int *visited, __u32 max_entries,
+ int *keys, void *values, bool is_pcpu)
+{
+ typedef BPF_DECLARE_PERCPU(int, value);
+ value *v = NULL;
+ int i, j;
+
+ if (is_pcpu)
+ v = (value *)values;
+
+ memset(visited, 0, max_entries * sizeof(*visited));
+ for (i = 0; i < max_entries; i++) {
+
+ if (is_pcpu) {
+ for (j = 0; j < bpf_num_possible_cpus(); j++) {
+ CHECK(keys[i] + 1 + j != bpf_percpu(v[i], j),
+ "key/value checking",
+ "error: i %d j %d key %d value %d\n",
+ i, j, keys[i], bpf_percpu(v[i], j));
+ }
+ } else {
+ CHECK(keys[i] + 1 != ((int *)values)[i],
+ "key/value checking",
+ "error: i %d key %d value %d\n", i, keys[i],
+ ((int *)values)[i]);
+ }
+
+ visited[i] = 1;
+
+ }
+ for (i = 0; i < max_entries; i++) {
+ CHECK(visited[i] != 1, "visited checking",
+ "error: keys array at index %d missing\n", i);
+ }
+}
+
+void __test_map_lookup_and_delete_batch(bool is_pcpu)
+{
+ __u32 batch, count, total, total_success;
+ typedef BPF_DECLARE_PERCPU(int, value);
+ int map_fd, *keys, *visited, key;
+ const __u32 max_entries = 10;
+ value pcpu_values[max_entries];
+ int err, step, value_size;
+ bool nospace_err;
+ void *values;
+ struct bpf_create_map_attr xattr = {
+ .name = "hash_map",
+ .map_type = is_pcpu ? BPF_MAP_TYPE_PERCPU_HASH :
+ BPF_MAP_TYPE_HASH,
+ .key_size = sizeof(int),
+ .value_size = sizeof(int),
+ };
+ DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts,
+ .elem_flags = 0,
+ .flags = 0,
+ );
+
+ xattr.max_entries = max_entries;
+ map_fd = bpf_create_map_xattr(&xattr);
+ CHECK(map_fd == -1,
+ "bpf_create_map_xattr()", "error:%s\n", strerror(errno));
+
+ value_size = is_pcpu ? sizeof(value) : sizeof(int);
+ keys = malloc(max_entries * sizeof(int));
+ if (is_pcpu)
+ values = pcpu_values;
+ else
+ values = malloc(max_entries * sizeof(int));
+ visited = malloc(max_entries * sizeof(int));
+ CHECK(!keys || !values || !visited, "malloc()",
+ "error:%s\n", strerror(errno));
+
+ /* test 1: lookup/delete an empty hash table, -ENOENT */
+ count = max_entries;
+ err = bpf_map_lookup_and_delete_batch(map_fd, NULL, &batch, keys,
+ values, &count, &opts);
+ CHECK((err && errno != ENOENT), "empty map",
+ "error: %s\n", strerror(errno));
+
+ /* populate elements to the map */
+ map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
+
+ /* test 2: lookup/delete with count = 0, success */
+ count = 0;
+ err = bpf_map_lookup_and_delete_batch(map_fd, NULL, &batch, keys,
+ values, &count, &opts);
+ CHECK(err, "count = 0", "error: %s\n", strerror(errno));
+
+ /* test 3: lookup/delete with count = max_entries, success */
+ memset(keys, 0, max_entries * sizeof(*keys));
+ memset(values, 0, max_entries * value_size);
+ count = max_entries;
+ err = bpf_map_lookup_and_delete_batch(map_fd, NULL, &batch, keys,
+ values, &count, &opts);
+ CHECK((err && errno != ENOENT), "count = max_entries",
+ "error: %s\n", strerror(errno));
+ CHECK(count != max_entries, "count = max_entries",
+ "count = %u, max_entries = %u\n", count, max_entries);
+ map_batch_verify(visited, max_entries, keys, values, is_pcpu);
+
+ /* bpf_map_get_next_key() should return -ENOENT for an empty map. */
+ err = bpf_map_get_next_key(map_fd, NULL, &key);
+ CHECK(!err, "bpf_map_get_next_key()", "error: %s\n", strerror(errno));
+
+ /* test 4: lookup/delete in a loop with various steps. */
+ total_success = 0;
+ for (step = 1; step < max_entries; step++) {
+ map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
+ memset(keys, 0, max_entries * sizeof(*keys));
+ memset(values, 0, max_entries * value_size);
+ total = 0;
+ /* iteratively lookup/delete 'step' elements at a
+ * time
+ */
+ count = step;
+ nospace_err = false;
+ while (true) {
+ err = bpf_map_lookup_batch(map_fd,
+ total ? &batch : NULL,
+ &batch, keys + total,
+ values +
+ total * value_size,
+ &count, &opts);
+ /* It is possible that we are failing due to the buffer size
+ * not being big enough. In such cases, let us just exit and
+ * go with larger steps. Note that a buffer size of
+ * max_entries should always work.
+ */
+ if (err && errno == ENOSPC) {
+ nospace_err = true;
+ break;
+ }
+
+ CHECK((err && errno != ENOENT), "lookup with steps",
+ "error: %s\n", strerror(errno));
+
+ total += count;
+ if (err)
+ break;
+
+ }
+ if (nospace_err == true)
+ continue;
+
+ CHECK(total != max_entries, "lookup with steps",
+ "total = %u, max_entries = %u\n", total, max_entries);
+ map_batch_verify(visited, max_entries, keys, values, is_pcpu);
+
+ total = 0;
+ count = step;
+ while (total < max_entries) {
+ if (max_entries - total < step)
+ count = max_entries - total;
+ err = bpf_map_delete_batch(map_fd,
+ keys + total,
+ &count, &opts);
+ CHECK((err && errno != ENOENT), "delete batch",
+ "error: %s\n", strerror(errno));
+ total += count;
+ if (err)
+ break;
+ }
+ CHECK(total != max_entries, "delete with steps",
+ "total = %u, max_entries = %u\n", total, max_entries);
+
+ /* check map is empty, errno == ENOENT */
+ err = bpf_map_get_next_key(map_fd, NULL, &key);
+ CHECK(!err || errno != ENOENT, "bpf_map_get_next_key()",
+ "error: %s\n", strerror(errno));
+
+ /* iteratively lookup/delete 'step' elements at a
+ * time
+ */
+ map_batch_update(map_fd, max_entries, keys, values, is_pcpu);
+ memset(keys, 0, max_entries * sizeof(*keys));
+ memset(values, 0, max_entries * value_size);
+ total = 0;
+ count = step;
+ nospace_err = false;
+ while (true) {
+ err = bpf_map_lookup_and_delete_batch(map_fd,
+ total ? &batch : NULL,
+ &batch, keys + total,
+ values +
+ total * value_size,
+ &count, &opts);
+ /* It is possible that we are failing due to the buffer size
+ * not being big enough. In such cases, let us just exit and
+ * go with larger steps. Note that a buffer size of
+ * max_entries should always work.
+ */
+ if (err && errno == ENOSPC) {
+ nospace_err = true;
+ break;
+ }
+
+ CHECK((err && errno != ENOENT), "lookup with steps",
+ "error: %s\n", strerror(errno));
+
+ total += count;
+ if (err)
+ break;
+ }
+
+ if (nospace_err == true)
+ continue;
+
+ CHECK(total != max_entries, "lookup/delete with steps",
+ "total = %u, max_entries = %u\n", total, max_entries);
+
+ map_batch_verify(visited, max_entries, keys, values, is_pcpu);
+ err = bpf_map_get_next_key(map_fd, NULL, &key);
+ CHECK(!err, "bpf_map_get_next_key()", "error: %s\n",
+ strerror(errno));
+
+ total_success++;
+ }
+
+ CHECK(total_success == 0, "check total_success",
+ "unexpected failure\n");
+ free(keys);
+ free(visited);
+ if (!is_pcpu)
+ free(values);
+}
+
+void htab_map_batch_ops(void)
+{
+ __test_map_lookup_and_delete_batch(false);
+ printf("test_%s:PASS\n", __func__);
+}
+
+void htab_percpu_map_batch_ops(void)
+{
+ __test_map_lookup_and_delete_batch(true);
+ printf("test_%s:PASS\n", __func__);
+}
+
+void test_htab_map_batch_ops(void)
+{
+ htab_map_batch_ops();
+ htab_percpu_map_batch_ops();
+}
--
2.25.0.rc1.283.g88dfdc4193-goog

2020-01-15 20:15:41

by Yonghong Song

Subject: Re: [PATCH v5 bpf-next 0/9] add bpf batch ops to process more than 1 elem



On 1/15/20 10:42 AM, Brian Vazquez wrote:
> This patch series introduces batch ops that can be added to bpf maps to
> lookup/lookup_and_delete/update/delete more than one element at a time;
> this is especially useful when syscall overhead is a problem, and in the
> case of hashmaps it provides a reliable way of traversing them.
>
> [...]

Thanks for the work! LGTM. Ack for the whole series.

Acked-by: Yonghong Song <[email protected]>

2020-01-15 23:29:49

by Alexei Starovoitov

Subject: Re: [PATCH v5 bpf-next 0/9] add bpf batch ops to process more than 1 elem

On Wed, Jan 15, 2020 at 12:13 PM Yonghong Song <[email protected]> wrote:
>
>
>
> On 1/15/20 10:42 AM, Brian Vazquez wrote:
> > This patch series introduces batch ops that can be added to bpf maps to
> > lookup/lookup_and_delete/update/delete more than one element at a time;
> > this is especially useful when syscall overhead is a problem, and in the
> > case of hashmaps it provides a reliable way of traversing them.
> >
> > [...]
>
> Thanks for the work! LGTM. Ack for the whole series.
>
> Acked-by: Yonghong Song <[email protected]>

Applied. Thanks!