2022-06-28 20:36:52

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

This RFC is meant to give the whole picture. It will most likely be
split into several series, maybe even across merge cycles. See the
"table of contents" below.

The series adds the ability to pass various frame
details/parameters provided by most NICs and used by the kernel
stack (in skbs), not essential, but highly wanted, such as:

* checksum value, status (Rx) or command (Tx);
* hash value and type/level (Rx);
* queue number (Rx);
* timestamps;
* and so on.

As the XDP structures used to represent frames are as small as
possible and must stay that way, this is done using the already
existing concept of metadata, i.e. some space right in front of a
frame where BPF programs can put arbitrary data.
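
In a BPF program, finding the generic meta then boils down to
something like this (a sketch using bpf_access_mem_end() and the
magic check from the selftests added later in this series):

	const struct xdp_meta_generic *md;

	md = bpf_access_mem_end(ctx->data_meta, ctx->data,
				sizeof(*md), sizeof(*md));
	if (!md)
		return XDP_PASS;	/* no room for generic meta */

	if (md->magic_id != bpf_cpu_to_le16(XDP_META_GENERIC_MAGIC))
		return XDP_PASS;	/* meta present, but not generic */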

Now a NIC driver, or even a SmartNIC itself, can put those params
there in a well-defined format. The format is fixed, but can be one
of several different types represented by structures whose
definitions are available to the kernel, BPF programs and userland.
It is fixed due to being almost a UAPI, and the exact format can be
determined by reading the last 10 bytes of the metadata. They
contain a 2-byte magic ID, to not confuse it with non-compatible
meta, and an 8-byte combined BTF ID + type ID: the ID of the BTF
where the structure is defined and the ID of that definition inside
that BTF.
Users can obtain BTF IDs by structure type using the helpers
available in the kernel, in BPF (written by CO-RE/the verifier) and
in userland (libbpf -> kernel call), and then rely on those IDs
when reading the data to make sure they support it and know what to
do with it.
Why separate magic and ID? The idea is that different formats must
always contain the basic/"generic" structure embedded at the end.
This way purely generic consumers (like cpumap) can still benefit,
while some "extra" data is provided to those who support it.
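
Schematically, the meta area in front of a frame looks like this (a
sketch; the authoritative definition comes later in the series, in
"bpf, xdp: declare generic XDP metadata structure"):

	 <-- grows to the left            last 10 bytes -->
	+---------------------+-----------------+----------+
	| extra/custom fields |  generic fields | magic+ID |
	+---------------------+-----------------+----------+
	^ xdp->data_meta                  frame, xdp->data ^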

This feature is enabled when attaching/replacing an XDP program on
an interface, with two new parameters: that combined BTF + type ID
and a metadata threshold.
The threshold specifies the minimum frame size from which a driver
(or NIC) should start composing metadata. It is introduced instead
of a simple false/true flag because it's often not worth spending
cycles to fetch all that data for such small frames: say, it can
even be faster to just calculate checksums for them on the CPU
rather than touch a non-coherent DMA zone. A simple XDP_DROP case
loses 15 Mpps on 64-byte frames with metadata enabled; the
threshold helps mitigate that.
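
On the driver side, the threshold boils down to one cheap size
check in the Rx hotpath; a sketch with made-up field/function
names:

	/* hypothetical Rx hotpath: compose meta only when it pays off */
	if (rx_ring->meta_thresh && size >= rx_ring->meta_thresh)
		build_rx_meta(xdp, rx_desc);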

The RFC can be divided into 8 parts:

01-04: BTF ID hacking: here Larysa provides BPF programs with not
       only the type ID, but also the ID of the containing BTF,
       using the previously unused upper 32 bits.
05-10: this provides the in-kernel mechanisms for taking the ID and
       threshold from userspace and passing them to the drivers.
11-18: provides the libbpf API for specifying those params from
       userspace, plus a small selftest to verify that both the
       kernel and the userspace parts work.
19-29: here the actual structure is defined, then the in-kernel
       helpers, and finally the first consumer: the function used
       to convert an &xdp_frame to an &sk_buff will now try to
       parse the metadata. The affected users are cpumap and veth.
30-36: here I try to benefit from the metadata in cpumap even more
       by switching it to GRO. Now that we have checksums from the
       NIC available... but even with no meta it gives some fair
       improvements.
37-43: enables building generic metadata on the Generic/skb path.
       Since skbs already have all those fields, it's not a problem
       to do this here, and it lets interfaces which don't support
       meta yet benefit from it as well.
44-47: the ice driver part, including enabling prog hot-swapping.
48-52: adds a complex selftest to verify that everything works. It
       can be used as a sample as well, showing how to work with
       metadata in BPF programs and how to configure it from
       userspace.

Please refer to the actual commit messages, where more precise
implementation details are explained.
Nearly 20 of the 52 patches are various cleanups and prerequisites,
as usual.

Perf figures were taken on cpumap redirect from the ice interface
(driver-side XDP), redirecting the traffic within the same NUMA
node.

Frame size /
thread num      64/42  128/20  256/8  512/4  1024/2  1532/1

meta off        30022   31350  21993  12144    6374    3610
meta on         33059   28502  21503  12146    6380    3610
GRO meta off    30020   31822  21970  12145    6384    3610
GRO meta on     34736   28848  21566  12144    6381    3610

Yes, redirecting between NUMA nodes plays awfully with the metadata
composed by the driver:

meta off        21449   18078  16897  11820    6383    3610
meta on         16956   19004  14337   8228    5683    2822
GRO meta off    22539   19129  16304  11659    6381    3592
GRO meta on     17047   20366  15435   8878    5600    2753

Questions still open:

* the actual generic structure: it must contain all the fields that
  are used often and provided by the majority of NICs. It can
  always be expanded later (note that the structure grows to the
  left), but the less often the UAPI is modified, the better (less
  compat pain);
* the ability to specify the exact fields for the driver to fill,
  e.g. via a flags bitmap passed from userspace. In theory it can
  be more optimal to not spend cycles on data we don't need, but at
  the same time it increases the complexity of the whole concept
  (e.g. it would be more problematic to unify drivers' routines for
  collecting data from descriptors into metadata and into skbs);
* there was an idea to let userspace specify the desired cacheline
  offset, so that [the wanted fields of] the metadata and the
  packet headers would lie in the same cacheline. It can't be
  implemented in Generic/skb XDP, and ice has some troubles with it
  too;
* AF_XDP/XSk perf numbers and other scenarios in general are still
  lacking: is the current implementation optimal for them?
* the metadata threshold and everything else present in this
  implementation.

The RFC is also available on my open GitHub[0].

Merry and long review and discussion, enjoy!

[0] https://github.com/alobakin/linux/tree/xdp_hints

Alexander Lobakin (46):
libbpf: add function to get the pair BTF ID + type ID for a given type
net, xdp: decouple XDP code from the core networking code
bpf: pass a pointer to union bpf_attr to bpf_link_ops::update_prog()
net, xdp: remove redundant arguments from dev_xdp_{at,de}tach_link()
net, xdp: factor out XDP install arguments to a separate structure
net, xdp: add ability to specify BTF ID for XDP metadata
net, xdp: add ability to specify frame size threshold for XDP metadata
libbpf: factor out __bpf_set_link_xdp_fd_replace() args into a struct
libbpf: add ability to set the BTF/type ID on setting XDP prog
libbpf: add ability to set the meta threshold on setting XDP prog
libbpf: pass &bpf_link_create_opts directly to
bpf_program__attach_fd()
libbpf: add bpf_program__attach_xdp_opts()
selftests/bpf: expand xdp_link to check that setting meta opts works
samples/bpf: pass a struct to sample_install_xdp()
samples/bpf: add ability to specify metadata threshold
stddef: make __struct_group() UAPI C++-friendly
net, xdp: move XDP metadata helpers into new xdp_meta.h
net, xdp: allow metadata > 32
net, skbuff: add ability to skip skb metadata comparison
net, skbuff: constify the @skb argument of skb_hwtstamps()
net, xdp: add basic generic metadata accessors
bpf, btf: add a pair of functions to work with the BTF ID + type ID
pair
net, xdp: add &sk_buff <-> &xdp_meta_generic converters
net, xdp: prefetch data a bit when building an skb from an &xdp_frame
net, xdp: try to fill skb fields when converting from an &xdp_frame
net, gro: decouple GRO from the NAPI layer
net, gro: expose some GRO API to use outside of NAPI
bpf, cpumap: switch to GRO from netif_receive_skb_list()
bpf, cpumap: add option to set a timeout for deferred flush
samples/bpf: add 'timeout' option to xdp_redirect_cpu
net, skbuff: introduce napi_skb_cache_get_bulk()
bpf, cpumap: switch to napi_skb_cache_get_bulk()
rcupdate: fix access helpers for incomplete struct pointers on GCC <
10
net, xdp: remove unused xdp_attachment_info::flags
net, xdp: make &xdp_attachment_info a bit more useful in drivers
net, xdp: add an RCU version of xdp_attachment_setup()
net, xdp: replace net_device::xdp_prog pointer with
&xdp_attachment_info
net, xdp: shortcut skb->dev in bpf_prog_run_generic_xdp()
net, xdp: build XDP generic metadata on Generic (skb) XDP path
net, ice: allow XDP prog hot-swapping
net, ice: consolidate all skb fields processing
net, ice: use an onstack &xdp_meta_generic_rx to store HW frame info
net, ice: build XDP generic metadata
libbpf: compress Endianness ops with a macro
selftests/bpf: fix using test_xdp_meta BPF prog via skeleton infra
selftests/bpf: add XDP Generic Hints selftest

Larysa Zaremba (5):
libbpf: factor out BTF loading from load_module_btfs()
libbpf: try to load vmlinux BTF from the kernel first
libbpf: patch module BTF ID into BPF insns
libbpf: add LE <--> CPU conversion helpers
libbpf: introduce a couple memory access helpers

Michal Swiatkowski (1):
bpf, xdp: declare generic XDP metadata structure

MAINTAINERS | 5 +-
drivers/net/ethernet/brocade/bna/bnad.c | 1 +
drivers/net/ethernet/cortina/gemini.c | 1 +
drivers/net/ethernet/intel/ice/ice.h | 16 +-
drivers/net/ethernet/intel/ice/ice_lib.c | 4 +-
drivers/net/ethernet/intel/ice/ice_main.c | 79 +-
drivers/net/ethernet/intel/ice/ice_ptp.c | 19 +-
drivers/net/ethernet/intel/ice/ice_ptp.h | 17 +-
drivers/net/ethernet/intel/ice/ice_txrx.c | 51 +-
drivers/net/ethernet/intel/ice/ice_txrx.h | 3 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 154 +--
drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 88 +-
drivers/net/ethernet/intel/ice/ice_xsk.c | 26 +-
.../ethernet/mellanox/mlx5/core/en/xsk/rx.c | 1 +
drivers/net/ethernet/netronome/nfp/nfd3/xsk.c | 1 +
drivers/net/tun.c | 2 +-
include/linux/bpf.h | 3 +-
include/linux/btf.h | 13 +
include/linux/filter.h | 2 +
include/linux/netdevice.h | 41 +-
include/linux/rcupdate.h | 37 +-
include/linux/skbuff.h | 35 +-
include/net/gro.h | 53 +-
include/net/xdp.h | 34 +-
include/net/xdp_meta.h | 398 ++++++++
include/uapi/linux/bpf.h | 194 ++++
include/uapi/linux/if_link.h | 2 +
include/uapi/linux/stddef.h | 12 +-
kernel/bpf/bpf_iter.c | 1 +
kernel/bpf/btf.c | 133 ++-
kernel/bpf/cgroup.c | 4 +-
kernel/bpf/cpumap.c | 80 +-
kernel/bpf/net_namespace.c | 1 +
kernel/bpf/syscall.c | 4 +-
net/bpf/Makefile | 5 +-
net/{core/xdp.c => bpf/core.c} | 214 +++-
net/bpf/dev.c | 871 +++++++++++++++++
net/bpf/prog_ops.c | 912 ++++++++++++++++++
net/bpf/test_run.c | 2 +-
net/core/Makefile | 2 +-
net/core/dev.c | 869 +----------------
net/core/dev.h | 4 -
net/core/filter.c | 883 +----------------
net/core/gro.c | 120 ++-
net/core/rtnetlink.c | 24 +-
net/core/skbuff.c | 44 +
net/packet/af_packet.c | 8 +-
net/xdp/xsk.c | 2 +-
samples/bpf/xdp_redirect_cpu_user.c | 44 +-
samples/bpf/xdp_redirect_map_multi_user.c | 26 +-
samples/bpf/xdp_redirect_map_user.c | 22 +-
samples/bpf/xdp_redirect_user.c | 21 +-
samples/bpf/xdp_router_ipv4_user.c | 20 +-
samples/bpf/xdp_sample_user.c | 38 +-
samples/bpf/xdp_sample_user.h | 11 +-
tools/include/uapi/linux/bpf.h | 194 ++++
tools/include/uapi/linux/if_link.h | 2 +
tools/include/uapi/linux/stddef.h | 50 +
tools/lib/bpf/bpf.c | 22 +
tools/lib/bpf/bpf.h | 22 +-
tools/lib/bpf/bpf_core_read.h | 3 +-
tools/lib/bpf/bpf_endian.h | 56 +-
tools/lib/bpf/bpf_helpers.h | 64 ++
tools/lib/bpf/btf.c | 142 ++-
tools/lib/bpf/libbpf.c | 201 +++-
tools/lib/bpf/libbpf.h | 30 +-
tools/lib/bpf/libbpf.map | 2 +
tools/lib/bpf/libbpf_internal.h | 7 +-
tools/lib/bpf/netlink.c | 81 +-
tools/lib/bpf/relo_core.c | 8 +-
tools/lib/bpf/relo_core.h | 1 +
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/prog_tests/xdp_link.c | 30 +-
.../selftests/bpf/progs/test_xdp_meta.c | 40 +-
tools/testing/selftests/bpf/test_xdp_meta.c | 294 ++++++
tools/testing/selftests/bpf/test_xdp_meta.sh | 59 +-
77 files changed, 4758 insertions(+), 2212 deletions(-)
create mode 100644 include/net/xdp_meta.h
rename net/{core/xdp.c => bpf/core.c} (73%)
create mode 100644 net/bpf/dev.c
create mode 100644 net/bpf/prog_ops.c
create mode 100644 tools/include/uapi/linux/stddef.h
create mode 100644 tools/testing/selftests/bpf/test_xdp_meta.c

--
2.36.1


2022-06-28 20:39:16

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 16/52] selftests/bpf: expand xdp_link to check that setting meta opts works

Add a check to xdp_link to ensure that the values of btf_id and
meta_thresh obtained via bpf_obj_get_info_by_fd() are the same as
were passed via bpf_link_update(), plus that the kernel fails to
set btf_id to 0 while meta_thresh != 0.
Also, use the new bpf_program__attach_xdp_opts() instead of the
non-optsed version and set an initial metadata threshold value to
check that the kernel is able to process this request.
When the threshold is set via the Netlink interface, it is not
stored anywhere in the kernel core, hence no test for it.

Signed-off-by: Alexander Lobakin <[email protected]>
---
.../selftests/bpf/prog_tests/xdp_link.c | 30 +++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_link.c b/tools/testing/selftests/bpf/prog_tests/xdp_link.c
index 3e9d5c5521f0..0723278c448f 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_link.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_link.c
@@ -10,6 +10,7 @@ void serial_test_xdp_link(void)
{
struct test_xdp_link *skel1 = NULL, *skel2 = NULL;
__u32 id1, id2, id0 = 0, prog_fd1, prog_fd2;
+ LIBBPF_OPTS(bpf_link_update_opts, lu_opts);
LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
struct bpf_link_info link_info;
struct bpf_prog_info prog_info;
@@ -103,8 +104,16 @@ void serial_test_xdp_link(void)
bpf_link__destroy(skel1->links.xdp_handler);
skel1->links.xdp_handler = NULL;

+ opts.old_prog_fd = 0;
+ opts.meta_thresh = 128;
+
+ err = libbpf_get_type_btf_id("struct xdp_meta_generic", &opts.btf_id);
+ if (!ASSERT_OK(err, "libbpf_get_type_btf_id"))
+ goto cleanup;
+
/* new link attach should succeed */
- link = bpf_program__attach_xdp(skel2->progs.xdp_handler, IFINDEX_LO);
+ link = bpf_program__attach_xdp_opts(skel2->progs.xdp_handler,
+ IFINDEX_LO, &opts);
if (!ASSERT_OK_PTR(link, "link_attach"))
goto cleanup;
skel2->links.xdp_handler = link;
@@ -113,11 +122,25 @@ void serial_test_xdp_link(void)
if (!ASSERT_OK(err, "id2_check_err") || !ASSERT_EQ(id0, id2, "id2_check_val"))
goto cleanup;

+ lu_opts.xdp.new_meta_thresh = 256;
+ lu_opts.xdp.new_btf_id = opts.btf_id;
+
/* updating program under active BPF link works as expected */
- err = bpf_link__update_program(link, skel1->progs.xdp_handler);
+ err = bpf_link_update(bpf_link__fd(link),
+ bpf_program__fd(skel1->progs.xdp_handler),
+ &lu_opts);
if (!ASSERT_OK(err, "link_upd"))
goto cleanup;

+ lu_opts.xdp.new_btf_id = 0;
+
+ /* BTF ID can't be 0 when meta_thresh != 0, and vice versa */
+ err = bpf_link_update(bpf_link__fd(link),
+ bpf_program__fd(skel1->progs.xdp_handler),
+ &lu_opts);
+ if (!ASSERT_ERR(err, "link_upd_fail"))
+ goto cleanup;
+
memset(&link_info, 0, sizeof(link_info));
err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &link_info, &link_info_len);
if (!ASSERT_OK(err, "link_info"))
@@ -126,6 +149,9 @@ void serial_test_xdp_link(void)
ASSERT_EQ(link_info.type, BPF_LINK_TYPE_XDP, "link_type");
ASSERT_EQ(link_info.prog_id, id1, "link_prog_id");
ASSERT_EQ(link_info.xdp.ifindex, IFINDEX_LO, "link_ifindex");
+ ASSERT_EQ(link_info.xdp.btf_id, opts.btf_id, "btf_id");
+ ASSERT_EQ(link_info.xdp.meta_thresh, lu_opts.xdp.new_meta_thresh,
+ "meta_thresh");

/* updating program under active BPF link with different type fails */
err = bpf_link__update_program(link, skel1->progs.tc_handler);
--
2.36.1

2022-06-28 20:40:35

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 02/52] libbpf: try to load vmlinux BTF from the kernel first

From: Larysa Zaremba <[email protected]>

Try to acquire the vmlinux BTF the same way it's done for module
BTFs: use btf_load_next_with_info() and resort to the filesystem
lookup only if it fails.
Also, adjust the debug messages in btf__load_vmlinux_btf() to
reflect that it actually tries to load the vmlinux BTF.
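
For callers, nothing changes; only the lookup order does (a
sketch):

	struct btf *vmlinux_btf = btf__load_vmlinux_btf();
	long err = libbpf_get_error(vmlinux_btf);

	if (err)	/* both the kernel and the filesystem failed */
		return err;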

Signed-off-by: Larysa Zaremba <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/lib/bpf/btf.c | 32 ++++++++++++++++++++++++++++++--
1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 7e4dbf71fd52..8ecd50923fab 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -4927,6 +4927,25 @@ struct btf *btf_load_next_with_info(__u32 start_id, struct bpf_btf_info *info,
}
}

+static struct btf *btf_load_vmlinux_from_kernel(void)
+{
+ char name[BTF_NAME_BUF_LEN] = { };
+ struct bpf_btf_info info;
+ struct btf *btf;
+
+ memset(&info, 0, sizeof(info));
+ info.name = ptr_to_u64(name);
+ info.name_len = sizeof(name);
+
+ btf = btf_load_next_with_info(0, &info, NULL, true);
+ if (!libbpf_get_error(btf)) {
+ close(btf->fd);
+ btf__set_fd(btf, -1);
+ }
+
+ return btf;
+}
+
/*
* Probe few well-known locations for vmlinux kernel image and try to load BTF
* data out of it to use for target BTF.
@@ -4953,6 +4972,15 @@ struct btf *btf__load_vmlinux_btf(void)
struct btf *btf;
int i, err;

+ btf = btf_load_vmlinux_from_kernel();
+ err = libbpf_get_error(btf);
+ pr_debug("loading vmlinux BTF from kernel: %d\n", err);
+ if (!err)
+ return btf;
+
+ pr_info("failed to load vmlinux BTF from kernel: %d, will look through filesystem\n",
+ err);
+
uname(&buf);

for (i = 0; i < ARRAY_SIZE(locations); i++) {
@@ -4966,14 +4994,14 @@ struct btf *btf__load_vmlinux_btf(void)
else
btf = btf__parse_elf(path, NULL);
err = libbpf_get_error(btf);
- pr_debug("loading kernel BTF '%s': %d\n", path, err);
+ pr_debug("loading vmlinux BTF '%s': %d\n", path, err);
if (err)
continue;

return btf;
}

- pr_warn("failed to find valid kernel BTF\n");
+ pr_warn("failed to find valid vmlinux BTF\n");
return libbpf_err_ptr(-ESRCH);
}

--
2.36.1

2022-06-28 20:41:08

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 46/52] net, ice: use an onstack &xdp_meta_generic_rx to store HW frame info

To be able to pass HW-provided frame metadata, such as the hash,
checksum status etc., to BPF and XSK programs, unify the container
used to store it, regardless of whether an XDP program is present
or which verdict it returned. Use an intermediate onstack
&xdp_meta_generic_rx before filling the skb fields and switch the
descriptor parsing functions to take it instead of an &sk_buff.
This works the same way as an &xdp_buff is filled before an skb is
formed. If metadata generation is enabled, the actual space in
front of a frame will be used in the upcoming changes.
Using &xdp_meta_generic_rx instead of the full-blown
&xdp_meta_generic reduces the text size by 32 bytes per function.
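
In both the regular and the zero-copy Rx paths, the per-frame flow
then becomes (exactly as the diff below does it):

	struct xdp_meta_generic_rx md;

	ice_xdp_build_meta(&md, rx_desc, rx_ring, 0);
	__xdp_populate_skb_meta_generic(skb, &md);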

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/ice/ice_ptp.c | 19 ++--
drivers/net/ethernet/intel/ice/ice_ptp.h | 17 ++-
drivers/net/ethernet/intel/ice/ice_txrx.c | 4 +-
drivers/net/ethernet/intel/ice/ice_txrx.h | 1 +
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 105 ++++++++++--------
drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 12 +-
drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +-
7 files changed, 91 insertions(+), 71 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index ef9344ef0d8e..d4d955152682 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -1795,24 +1795,22 @@ int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr)

/**
* ice_ptp_rx_hwtstamp - Check for an Rx timestamp
- * @rx_ring: Ring to get the VSI info
* @rx_desc: Receive descriptor
- * @skb: Particular skb to send timestamp with
+ * @rx_ring: Ring to get the VSI info
+ * @md: Metadata to set timestamp in
*
* The driver receives a notification in the receive descriptor with timestamp.
* The timestamp is in ns, so we must convert the result first.
*/
-void
-ice_ptp_rx_hwtstamp(struct ice_rx_ring *rx_ring,
- union ice_32b_rx_flex_desc *rx_desc, struct sk_buff *skb)
+void ice_ptp_rx_hwtstamp(struct xdp_meta_generic *md,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ const struct ice_rx_ring *rx_ring)
{
u32 ts_high;
u64 ts_ns;

- /* Populate timesync data into skb */
+ /* Populate timesync data into md */
if (rx_desc->wb.time_stamp_low & ICE_PTP_TS_VALID) {
- struct skb_shared_hwtstamps *hwtstamps;
-
/* Use ice_ptp_extend_32b_ts directly, using the ring-specific
* cached PHC value, rather than accessing the PF. This also
* allows us to simply pass the upper 32bits of nanoseconds
@@ -1822,9 +1820,8 @@ ice_ptp_rx_hwtstamp(struct ice_rx_ring *rx_ring,
ts_high = le32_to_cpu(rx_desc->wb.flex_ts.ts_high);
ts_ns = ice_ptp_extend_32b_ts(rx_ring->cached_phctime, ts_high);

- hwtstamps = skb_hwtstamps(skb);
- memset(hwtstamps, 0, sizeof(*hwtstamps));
- hwtstamps->hwtstamp = ns_to_ktime(ts_ns);
+ xdp_meta_rx_tstamp_present_set(md, 1);
+ xdp_meta_rx_tstamp_set(md, ts_ns);
}
}

diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
index 10e396abf130..488b6bb01605 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
@@ -228,8 +228,12 @@ struct ice_ptp {
#define N_EXT_TS_E810_NO_SMA 2
#define ETH_GLTSYN_ENA(_i) (0x03000348 + ((_i) * 4))

-#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
struct ice_pf;
+struct ice_rx_ring;
+struct xdp_meta_generic;
+union ice_32b_rx_flex_desc;
+
+#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr);
int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr);
void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena);
@@ -238,9 +242,9 @@ int ice_get_ptp_clock_index(struct ice_pf *pf);
s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
void ice_ptp_process_ts(struct ice_pf *pf);

-void
-ice_ptp_rx_hwtstamp(struct ice_rx_ring *rx_ring,
- union ice_32b_rx_flex_desc *rx_desc, struct sk_buff *skb);
+void ice_ptp_rx_hwtstamp(struct xdp_meta_generic *md,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ const struct ice_rx_ring *rx_ring);
void ice_ptp_reset(struct ice_pf *pf);
void ice_ptp_prepare_for_reset(struct ice_pf *pf);
void ice_ptp_init(struct ice_pf *pf);
@@ -271,8 +275,9 @@ ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)

static inline void ice_ptp_process_ts(struct ice_pf *pf) { }
static inline void
-ice_ptp_rx_hwtstamp(struct ice_rx_ring *rx_ring,
- union ice_32b_rx_flex_desc *rx_desc, struct sk_buff *skb) { }
+ice_ptp_rx_hwtstamp(struct xdp_meta_generic *md,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ const struct ice_rx_ring *rx_ring) { }
static inline void ice_ptp_reset(struct ice_pf *pf) { }
static inline void ice_ptp_prepare_for_reset(struct ice_pf *pf) { }
static inline void ice_ptp_init(struct ice_pf *pf) { }
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index ffea5138a7e8..c679f7c30bdc 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1123,6 +1123,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
/* start the loop to process Rx packets bounded by 'budget' */
while (likely(total_rx_pkts < (unsigned int)budget)) {
union ice_32b_rx_flex_desc *rx_desc;
+ struct xdp_meta_generic_rx md;
struct ice_rx_buf *rx_buf;
unsigned char *hard_start;
unsigned int size;
@@ -1239,7 +1240,8 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
/* probably a little skewed due to removing CRC */
total_rx_bytes += skb->len;

- ice_process_skb_fields(rx_ring, rx_desc, skb);
+ ice_xdp_build_meta(&md, rx_desc, rx_ring, 0);
+ __xdp_populate_skb_meta_generic(skb, &md);

ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb);
ice_receive_skb(rx_ring, skb);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 1fc31ab0bf33..a814709deb50 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -4,6 +4,7 @@
#ifndef _ICE_TXRX_H_
#define _ICE_TXRX_H_

+#include <net/xdp_meta.h>
#include "ice_type.h"

#define ICE_DFLT_IRQ_WORK 256
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index 92c001baa2cc..7550e2ed8936 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -43,36 +43,37 @@ void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val)
* @decoded: the decoded ptype value from the descriptor
*
* Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by
- * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of
- * Rx desc.
+ * xdp_meta_rx_hash_type_set() based on PTYPE as parsed by HW Rx pipeline and
+ * is part of Rx desc.
*/
-static enum pkt_hash_types
+static u32
ice_ptype_to_htype(struct ice_rx_ptype_decoded decoded)
{
if (!decoded.known)
- return PKT_HASH_TYPE_NONE;
+ return XDP_META_RX_HASH_NONE;
if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4)
- return PKT_HASH_TYPE_L4;
+ return XDP_META_RX_HASH_L4;
if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3)
- return PKT_HASH_TYPE_L3;
+ return XDP_META_RX_HASH_L3;
if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_L2)
- return PKT_HASH_TYPE_L2;
+ return XDP_META_RX_HASH_L2;

- return PKT_HASH_TYPE_NONE;
+ return XDP_META_RX_HASH_NONE;
}

/**
- * ice_rx_hash - set the hash value in the skb
+ * ice_rx_hash - set the hash value in the metadata
+ * @md: pointer to current metadata
* @rx_ring: descriptor ring
* @rx_desc: specific descriptor
- * @skb: pointer to current skb
* @decoded: the decoded ptype value from the descriptor
*/
-static void
-ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
- struct sk_buff *skb, struct ice_rx_ptype_decoded decoded)
+static void ice_rx_hash(struct xdp_meta_generic *md,
+ const struct ice_rx_ring *rx_ring,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ struct ice_rx_ptype_decoded decoded)
{
- struct ice_32b_rx_flex_desc_nic *nic_mdid;
+ const struct ice_32b_rx_flex_desc_nic *nic_mdid;
u32 hash;

if (!(rx_ring->netdev->features & NETIF_F_RXHASH))
@@ -81,24 +82,24 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC)
return;

- nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
+ nic_mdid = (typeof(nic_mdid))rx_desc;
hash = le32_to_cpu(nic_mdid->rss_hash);
- skb_set_hash(skb, hash, ice_ptype_to_htype(decoded));
+
+ xdp_meta_rx_hash_type_set(md, ice_ptype_to_htype(decoded));
+ xdp_meta_rx_hash_set(md, hash);
}

/**
- * ice_rx_csum - Indicate in skb if checksum is good
+ * ice_rx_csum - Indicate in metadata if checksum is good
+ * @md: metadata currently being filled
* @ring: the ring we care about
- * @skb: skb currently being received and modified
* @rx_desc: the receive descriptor
* @decoded: the decoded packet type parsed by hardware
- *
- * skb->protocol must be set before this function is called
*/
-static void
-ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
- union ice_32b_rx_flex_desc *rx_desc,
- struct ice_rx_ptype_decoded decoded)
+static void ice_rx_csum(struct xdp_meta_generic *md,
+ const struct ice_rx_ring *ring,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ struct ice_rx_ptype_decoded decoded)
{
u16 rx_status0, rx_status1;
bool ipv4, ipv6;
@@ -106,10 +107,6 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
rx_status0 = le16_to_cpu(rx_desc->wb.status_error0);
rx_status1 = le16_to_cpu(rx_desc->wb.status_error1);

- /* Start with CHECKSUM_NONE and by default csum_level = 0 */
- skb->ip_summed = CHECKSUM_NONE;
- skb_checksum_none_assert(skb);
-
/* check if Rx checksum is enabled */
if (!(ring->netdev->features & NETIF_F_RXCSUM))
return;
@@ -149,14 +146,14 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
* we are indicating we validated the inner checksum.
*/
if (decoded.tunnel_type >= ICE_RX_PTYPE_TUNNEL_IP_GRENAT)
- skb->csum_level = 1;
+ xdp_meta_rx_csum_level_set(md, 1);

/* Only report checksum unnecessary for TCP, UDP, or SCTP */
switch (decoded.inner_prot) {
case ICE_RX_PTYPE_INNER_PROT_TCP:
case ICE_RX_PTYPE_INNER_PROT_UDP:
case ICE_RX_PTYPE_INNER_PROT_SCTP:
- skb->ip_summed = CHECKSUM_UNNECESSARY;
+ xdp_meta_rx_csum_status_set(md, XDP_META_RX_CSUM_OK);
break;
default:
break;
@@ -167,7 +164,13 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
ring->vsi->back->hw_csum_rx_error++;
}

-static void ice_rx_vlan(struct sk_buff *skb,
+#define xdp_meta_rx_vlan_from_feat(feat) ({ \
+ ((feat) & NETIF_F_HW_VLAN_CTAG_RX) ? XDP_META_RX_CVID : \
+ ((feat) & NETIF_F_HW_VLAN_STAG_RX) ? XDP_META_RX_SVID : \
+ XDP_META_RX_VLAN_NONE; \
+})
+
+static void ice_rx_vlan(struct xdp_meta_generic *md,
const struct ice_rx_ring *rx_ring,
const union ice_32b_rx_flex_desc *rx_desc)
{
@@ -181,42 +184,48 @@ static void ice_rx_vlan(struct sk_buff *skb,
if (!non_zero_vlan)
return;

- if ((features & NETIF_F_HW_VLAN_CTAG_RX))
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
- else if ((features & NETIF_F_HW_VLAN_STAG_RX))
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021AD), vlan_tag);
+ xdp_meta_rx_vlan_type_set(md, xdp_meta_rx_vlan_from_feat(features));
+ xdp_meta_rx_vid_set(md, vlan_tag);
}

/**
- * ice_process_skb_fields - Populate skb header fields from Rx descriptor
- * @rx_ring: Rx descriptor ring packet is being transacted on
+ * __ice_xdp_build_meta - Populate XDP generic metadata fields from Rx desc
+ * @rx_md: pointer to the metadata structure to be populated
* @rx_desc: pointer to the EOP Rx descriptor
- * @skb: pointer to current skb being populated
+ * @rx_ring: Rx descriptor ring packet is being transacted on
+ * @full_id: full ID (BTF ID + type ID) to fill in
*
* This function checks the ring, descriptor, and packet information in
* order to populate the hash, checksum, VLAN, protocol, and
- * other fields within the skb.
+ * other fields within the metadata.
*/
-void
-ice_process_skb_fields(struct ice_rx_ring *rx_ring,
- union ice_32b_rx_flex_desc *rx_desc,
- struct sk_buff *skb)
+void __ice_xdp_build_meta(struct xdp_meta_generic_rx *rx_md,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ const struct ice_rx_ring *rx_ring,
+ __le64 full_id)
{
+ struct xdp_meta_generic *md = to_gen_md(rx_md);
struct ice_rx_ptype_decoded decoded;
u16 ptype;

- skb_record_rx_queue(skb, rx_ring->q_index);
+ xdp_meta_init(&md->id, full_id);
+ md->rx_hash = 0;
+ md->rx_csum = 0;
+ md->rx_flags = 0;
+
+ xdp_meta_rx_qid_present_set(md, 1);
+ xdp_meta_rx_qid_set(md, rx_ring->q_index);

ptype = le16_to_cpu(rx_desc->wb.ptype_flex_flags0) &
ICE_RX_FLEX_DESC_PTYPE_M;
decoded = ice_decode_rx_desc_ptype(ptype);

- ice_rx_hash(rx_ring, rx_desc, skb, decoded);
- ice_rx_csum(rx_ring, skb, rx_desc, decoded);
- ice_rx_vlan(skb, rx_ring, rx_desc);
+ ice_rx_hash(md, rx_ring, rx_desc, decoded);
+ ice_rx_csum(md, rx_ring, rx_desc, decoded);
+ ice_rx_vlan(md, rx_ring, rx_desc);

if (rx_ring->ptp_rx)
- ice_ptp_rx_hwtstamp(rx_ring, rx_desc, skb);
+ ice_ptp_rx_hwtstamp(md, rx_desc, rx_ring);
}

/**
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
index 45dc5ef79e28..b51e58b8e83d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
@@ -92,9 +92,13 @@ void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res);
int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring);
int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring);
void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val);
-void
-ice_process_skb_fields(struct ice_rx_ring *rx_ring,
- union ice_32b_rx_flex_desc *rx_desc,
- struct sk_buff *skb);
+
+void __ice_xdp_build_meta(struct xdp_meta_generic_rx *rx_md,
+ const union ice_32b_rx_flex_desc *rx_desc,
+ const struct ice_rx_ring *rx_ring,
+ __le64 full_id);
+
+#define ice_xdp_build_meta(md, ...) \
+ __ice_xdp_build_meta(to_rx_md(md), ##__VA_ARGS__)

#endif /* !_ICE_TXRX_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 0a66128964e7..eade918723eb 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -603,6 +603,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
while (likely(total_rx_packets < (unsigned int)budget)) {
union ice_32b_rx_flex_desc *rx_desc;
unsigned int size, xdp_res = 0;
+ struct xdp_meta_generic_rx md;
struct xdp_buff *xdp;
struct sk_buff *skb;
u16 stat_err_bits;
@@ -673,7 +674,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
total_rx_bytes += skb->len;
total_rx_packets++;

- ice_process_skb_fields(rx_ring, rx_desc, skb);
+ ice_xdp_build_meta(&md, rx_desc, rx_ring, 0);
+ __xdp_populate_skb_meta_generic(skb, &md);
ice_receive_skb(rx_ring, skb);
}

--
2.36.1

2022-06-28 20:41:21

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 50/52] libbpf: introduce a couple memory access helpers

From: Larysa Zaremba <[email protected]>

In BPF programs, it is a common thing to declare that we're going
to do a memory access via a snippet like:

	if (data + ETH_HLEN > data_end)
		// bail out

Offsets can be variable:

	if (VLAN_HLEN * vlan_count > SOME_ARBITRARY_MAX_OFFSET ||
	    ctx->data + VLAN_HLEN * vlan_count > data_end)
		//

Or even calculated from the end:

	if (ctx->data_end - ctx->data - ETH_FCS_LEN > SOME_ARB_MAX_OFF ||
	    ctx->data_end - ETH_FCS_LEN < ctx->data)
		//

As a bonus, LLVM sometimes has a hard time compiling sane C code in
a way that passes the in-kernel verifier.
Add two new functions which sanitize memory accesses and return
pointers to the requested ranges: one taking an offset from the
start of the segment and one from the end (useful for metadata and
various integrity-check headers). They are written in Asm, so the
offset can be variable and the code will still pass the verifier.
There are checks for the maximum offset (backed by the original
verifier value), going out of bounds etc., so the pointer they
return is ready to use (if it's non-%NULL).
So now all that is needed is:

	iphdr = bpf_access_mem(ctx->data, ctx->data_end, ETH_HLEN,
			       sizeof(*iphdr));
	if (!iphdr)
		// bail out

or

	some_meta_struct = bpf_access_mem_end(ctx->data_meta, ctx->data,
					      sizeof(*some_meta_struct),
					      sizeof(*some_meta_struct));
	if (!some_meta_struct)
		//
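
The offset doesn't have to be a compile-time constant, which is the
whole point; e.g. (an illustrative snippet):

	__u64 off = ETH_HLEN + vlan_count * VLAN_HLEN;

	l3hdr = bpf_access_mem(ctx->data, ctx->data_end, off,
			       sizeof(*l3hdr));
	if (!l3hdr)
		// bail out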

The Asm code was happily stolen from the Cilium project repo[0] and
then reworked.

[0] https://github.com/cilium/cilium/blob/master/bpf/include/bpf/ctx/xdp.h#L43

Suggested-by: Daniel Borkmann <[email protected]> # original helper
Suggested-by: Toke Høiland-Jørgensen <[email protected]>
Signed-off-by: Larysa Zaremba <[email protected]>
Co-developed-by: Alexander Lobakin <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/lib/bpf/bpf_helpers.h | 64 +++++++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)

diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index fb04eaf367f1..cd16e3c9cd85 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -285,4 +285,68 @@ enum libbpf_tristate {
/* Helper macro to print out debug messages */
#define bpf_printk(fmt, args...) ___bpf_pick_printk(args)(fmt, ##args)

+/* Max offset as per kernel verifier */
+#define MAX_PACKET_OFF 0xffff
+
+/**
+ * bpf_access_mem - sanitize memory access to a range
+ * @mem: start of the memory segment
+ * @mem_end: end of the memory segment
+ * @off: offset from the start of the memory segment
+ * @len: length of the range to give access to
+ *
+ * Verifies that the memory operations we want to perform are sane and within
+ * bounds and gives pointer to the requested range. The checks are done in Asm,
+ * so that it is safe to pass variable offset (verifier might reject such code
+ * written in plain C).
+ * The intended way of using it is as follows:
+ *
+ * iphdr = bpf_access_mem(ctx->data, ctx->data_end, ETH_HLEN, sizeof(*iphdr));
+ *
+ * Returns pointer to the beginning of the range or %NULL.
+ */
+static __always_inline void *
+bpf_access_mem(__u64 mem, __u64 mem_end, __u64 off, const __u64 len)
+{
+ void *ret;
+
+ asm volatile("r1 = %[start]\n\t"
+ "r2 = %[end]\n\t"
+ "r3 = %[offmax] - %[len]\n\t"
+ "if %[off] > r3 goto +5\n\t"
+ "r1 += %[off]\n\t"
+ "%[ret] = r1\n\t"
+ "r1 += %[len]\n\t"
+ "if r1 > r2 goto +1\n\t"
+ "goto +1\n\t"
+ "%[ret] = %[null]\n\t"
+ : [ret]"=r"(ret)
+ : [start]"r"(mem), [end]"r"(mem_end), [off]"r"(off),
+ [len]"ri"(len), [offmax]"i"(MAX_PACKET_OFF),
+ [null]"i"(NULL)
+ : "r1", "r2", "r3");
+
+ return ret;
+}
+
+/**
+ * bpf_access_mem_end - sanitize memory access to a range at the end of segment
+ * @mem: start of the memory segment
+ * @mem_end: end of the memory segment
+ * @offend: offset from the end of the memory segment
+ * @len: length of the range to give access to
+ *
+ * Version of bpf_access_mem() which performs all needed calculations to
+ * access a memory segment from the end. E.g., to access FCS (if provided):
+ *
+ * cp = bpf_access_mem_end(ctx->data, ctx->data_end, ETH_FCS_LEN, ETH_FCS_LEN);
+ *
+ * Returns pointer to the beginning of the range or %NULL.
+ */
+static __always_inline void *
+bpf_access_mem_end(__u64 mem, __u64 mem_end, __u64 offend, const __u64 len)
+{
+ return bpf_access_mem(mem, mem_end, mem_end - mem - offend, len);
+}
+
#endif
--
2.36.1

2022-06-28 20:42:24

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 34/52] samples/bpf: add 'timeout' option to xdp_redirect_cpu

Add the ability to specify a deferred flush timeout (in usec, not
nsec!) when setting up a cpumap in the xdp_redirect_cpu sample.
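
E.g., to flush the cpumap queues at most 50 usec after the first
deferred frame (an illustrative invocation, other options as
before):

	# ./xdp_redirect_cpu -d eth0 -c 2 -q 2048 -t 50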

Signed-off-by: Alexander Lobakin <[email protected]>
---
samples/bpf/xdp_redirect_cpu_user.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
index ca457c34eb0f..d184c3fcab53 100644
--- a/samples/bpf/xdp_redirect_cpu_user.c
+++ b/samples/bpf/xdp_redirect_cpu_user.c
@@ -34,6 +34,8 @@ static const char *__doc__ =
#include "xdp_sample_user.h"
#include "xdp_redirect_cpu.skel.h"

+#define NSEC_PER_USEC 1000UL
+
static int map_fd;
static int avail_fd;
static int count_fd;
@@ -61,6 +63,7 @@ static const struct option long_options[] = {
{ "redirect-device", required_argument, NULL, 'r' },
{ "redirect-map", required_argument, NULL, 'm' },
{ "meta-thresh", optional_argument, NULL, 'M' },
+ { "timeout", required_argument, NULL, 't'},
{}
};

@@ -128,9 +131,10 @@ static int create_cpu_entry(__u32 cpu, struct bpf_cpumap_val *value,
}
}

- printf("%s CPU: %u as idx: %u qsize: %d cpumap_prog_fd: %d (cpus_count: %u)\n",
+ printf("%s CPU: %u as idx: %u qsize: %d timeout: %llu cpumap_prog_fd: %d (cpus_count: %u)\n",
new ? "Add new" : "Replace", cpu, avail_idx,
- value->qsize, value->bpf_prog.fd, curr_cpus_count);
+ value->qsize, value->timeout, value->bpf_prog.fd,
+ curr_cpus_count);

return 0;
}
@@ -346,6 +350,7 @@ int main(int argc, char **argv)
* tuned-adm profile network-latency
*/
qsize = 2048;
+ value.timeout = 0; /* Defaults to 0 to mimic the previous behaviour. */

skel = xdp_redirect_cpu__open();
if (!skel) {
@@ -383,7 +388,7 @@ int main(int argc, char **argv)
}

prog = skel->progs.xdp_prognum5_lb_hash_ip_pairs;
- while ((opt = getopt_long(argc, argv, "d:si:Sxp:f:e:r:m:c:q:FMvh",
+ while ((opt = getopt_long(argc, argv, "d:si:Sxp:f:e:r:m:c:q:FMt:vh",
long_options, &longindex)) != -1) {
switch (opt) {
case 'd':
@@ -466,6 +471,10 @@ int main(int argc, char **argv)
opts.meta_thresh = optarg ? strtoul(optarg, NULL, 0) :
1;
break;
+ case 't':
+ value.timeout = strtoull(optarg, NULL, 0) *
+ NSEC_PER_USEC;
+ break;
case 'h':
error = false;
default:
--
2.36.1

2022-06-28 20:47:14

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 52/52] selftests/bpf: add XDP Generic Hints selftest

Add a new BPF selftest which checks that XDP Generic metadata works
correctly on the generic/skb XDP path. That path is always
available on any interface, so the test must always succeed.
It uses a special BPF program which works as follows:

* try to access the metadata memory via bpf_access_mem_end();
* check the frame size. For sizes < 128 bytes, drop packets with
  metadata present, so that we can verify that setting the
  threshold works;
* for sizes >= 128, drop packets with no meta. Otherwise, check
  that it has the correct magic and that the BTF ID matches the one
  written by the verifier;
* finally, pass packets with fully correct generic meta up the
  stack.

The test itself does the following:

1) attaches that XDP prog to veth interfaces with a threshold of 1,
   i.e. enables metadata generation for every packet;
2) ensures that the prog drops frames shorter than 128 bytes as
   intended (see above);
3) raises the threshold to 128 bytes (tests updating the parameters
   without replacing the prog);
4) ensures that now no drops occur and that the meta for frames >=
   128 bytes is valid.

As it involves multiple invocations of the userspace prog, it pins
the BPF link to make it freerunning. `ip netns exec` creates a new
mount namespace (including sysfs) on each execution, so the script
now creates a temporary persistent BPF FS mountpoint in the tests
directory, making the pinned progs/links accessible across the
launches.

Co-developed-by: Larysa Zaremba <[email protected]>
Signed-off-by: Larysa Zaremba <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/progs/test_xdp_meta.c | 36 +++
tools/testing/selftests/bpf/test_xdp_meta.c | 294 ++++++++++++++++++
tools/testing/selftests/bpf/test_xdp_meta.sh | 51 +++
5 files changed, 385 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/test_xdp_meta.c

diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index ca2f47f45670..7d4de9d9002c 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -44,3 +44,4 @@ test_cpp
xdpxceiver
xdp_redirect_multi
xdp_synproxy
+/test_xdp_meta
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 4fbd88a8ed9e..aca8867deb8c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -82,7 +82,7 @@ TEST_PROGS_EXTENDED := with_addr.sh \
TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \
test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \
- xdpxceiver xdp_redirect_multi xdp_synproxy
+ xdpxceiver xdp_redirect_multi xdp_synproxy test_xdp_meta

TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read

@@ -589,6 +589,8 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@

+$(OUTPUT)/test_xdp_meta: | $(OUTPUT)/test_xdp_meta.skel.h
+
EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
prog_tests/tests.h map_tests/tests.h verifier/tests.h \
feature bpftool \
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index fe2d71ae0e71..0b05d1c3979b 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -2,6 +2,8 @@
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>

+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

#define __round_mask(x, y) ((__typeof__(x))((y) - 1))
@@ -50,4 +52,38 @@ int ing_xdp(struct xdp_md *ctx)
return XDP_PASS;
}

+#define TEST_META_THRESH 128
+
+SEC("xdp")
+int ing_hints(struct xdp_md *ctx)
+{
+ const struct xdp_meta_generic *md;
+ __le64 genid;
+
+ md = bpf_access_mem_end(ctx->data_meta, ctx->data, sizeof(*md),
+ sizeof(*md));
+
+ /* Selftest enables metadata starting from 128 byte frame size, fail it
+ * if we receive a shorter frame with metadata
+ */
+ if (ctx->data_end - ctx->data < TEST_META_THRESH)
+ return md ? XDP_DROP : XDP_PASS;
+
+ if (!md)
+ return XDP_DROP;
+
+ if (md->magic_id != bpf_cpu_to_le16(XDP_META_GENERIC_MAGIC))
+ return XDP_DROP;
+
+ genid = bpf_cpu_to_le64(bpf_core_type_id_kernel(typeof(*md)));
+ if (md->full_id != genid)
+ return XDP_DROP;
+
+ /* Tx flags must be zeroed */
+ if (md->tx_flags)
+ return XDP_DROP;
+
+ return XDP_PASS;
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_xdp_meta.c b/tools/testing/selftests/bpf/test_xdp_meta.c
new file mode 100644
index 000000000000..e5c147d19190
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_xdp_meta.c
@@ -0,0 +1,294 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2022, Intel Corporation. */
+
+#define _GNU_SOURCE /* asprintf() */
+
+#include <bpf/bpf.h>
+#include <getopt.h>
+#include <net/if.h>
+#include <uapi/linux/if_link.h>
+
+#include "test_xdp_meta.skel.h"
+
+struct test_meta_op_opts {
+ struct test_xdp_meta *skel;
+ const char *cmd;
+ char *path;
+ __u32 ifindex;
+ __u32 flags;
+ __u64 btf_id;
+ __u32 meta_thresh;
+};
+
+struct test_meta_opt_desc {
+ const char *arg;
+ const char *help;
+};
+
+#define OPT(n, a, s) { \
+ .name = #n, \
+ .has_arg = (a), \
+ .val = #s[0], \
+}
+
+#define DESC(a, h) { \
+ .arg = (a), \
+ .help = (h), \
+}
+
+static const struct option test_meta_opts[] = {
+ OPT(dev, required_argument, d),
+ OPT(fs, required_argument, f),
+ OPT(help, no_argument, h),
+ OPT(meta-thresh, optional_argument, M),
+ OPT(mode, required_argument, m),
+ { /* Sentinel */ },
+};
+
+static const struct test_meta_opt_desc test_meta_descs[] = {
+ DESC("= < IFNAME | IFINDEX >", "target interface name or index"),
+ DESC("= < MOUNTPOINT >", "BPF FS mountpoint"),
+ DESC(NULL, "display this text and exit"),
+ DESC("= [ THRESH ]", "enable Generic metadata generation (frame size)"),
+ DESC("= < skb | drv | hw >", "force particular XDP mode"),
+};
+
+static void test_meta_usage(char *argv[], bool err)
+{
+ FILE *out = err ? stderr : stdout;
+ __u32 i = 0;
+
+ fprintf(out,
+ "Usage:\n\t%s COMMAND < -d | --dev= > < IFNAME | IFINDEX > [ OPTIONS ]\n\n",
+ argv[0]);
+ fprintf(out, "OPTIONS:\n");
+
+ for (const struct option *opt = test_meta_opts; opt->name; opt++) {
+ fprintf(out, "\t-%c, --%s", opt->val, opt->name);
+ fprintf(out, "%s\t", test_meta_descs[i].arg ? : "\t\t");
+ fprintf(out, "%s\n", test_meta_descs[i++].help);
+ }
+}
+
+static int test_meta_link_attach(const struct test_meta_op_opts *opts)
+{
+ LIBBPF_OPTS(bpf_xdp_attach_opts, la_opts,
+ .flags = opts->flags,
+ .btf_id = opts->btf_id,
+ .meta_thresh = opts->meta_thresh);
+ struct bpf_link *link;
+ int ret;
+
+ link = bpf_program__attach_xdp_opts(opts->skel->progs.ing_hints,
+ opts->ifindex, &la_opts);
+ ret = libbpf_get_error(link);
+ if (ret) {
+ fprintf(stderr, "Failed to attach XDP program: %s (%d)\n",
+ strerror(-ret), ret);
+ return ret;
+ }
+
+ opts->skel->links.ing_hints = link;
+
+ ret = bpf_link__pin(link, opts->path);
+ if (ret)
+ fprintf(stderr, "Failed to pin XDP link at %s: %s (%d)\n",
+ opts->path, strerror(-ret), ret);
+
+ bpf_link__disconnect(link);
+
+ return ret;
+}
+
+static int test_meta_link_update(const struct test_meta_op_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_update_opts, lu_opts,
+ .xdp.new_btf_id = opts->btf_id,
+ .xdp.new_meta_thresh = opts->meta_thresh);
+ struct bpf_link *link;
+ int ret;
+
+ link = bpf_link__open(opts->path);
+ ret = libbpf_get_error(link);
+ if (ret) {
+ fprintf(stderr, "Failed to open XDP link at %s: %s (%d)\n",
+ opts->path, strerror(-ret), ret);
+ return ret;
+ }
+
+ opts->skel->links.ing_hints = link;
+
+ ret = bpf_link_update(bpf_link__fd(link),
+ bpf_program__fd(opts->skel->progs.ing_hints),
+ &lu_opts);
+ if (ret)
+ fprintf(stderr, "Failed to update XDP link: %s (%d)\n",
+ strerror(-ret), ret);
+
+ return ret;
+}
+
+static int test_meta_link_detach(const struct test_meta_op_opts *opts)
+{
+ struct bpf_link *link;
+ int ret;
+
+ link = bpf_link__open(opts->path);
+ ret = libbpf_get_error(link);
+ if (ret) {
+ fprintf(stderr, "Failed to open XDP link at %s: %s (%d)\n",
+ opts->path, strerror(-ret), ret);
+ return ret;
+ }
+
+ opts->skel->links.ing_hints = link;
+
+ ret = bpf_link__unpin(link);
+ if (ret) {
+ fprintf(stderr, "Failed to unpin XDP link: %s (%d)\n",
+ strerror(-ret), ret);
+ return ret;
+ }
+
+ ret = bpf_link__detach(link);
+ if (ret)
+ fprintf(stderr, "Failed to detach XDP link: %s (%d)\n",
+ strerror(-ret), ret);
+
+ return ret;
+}
+
+static int test_meta_parse_args(struct test_meta_op_opts *opts, int argc,
+ char *argv[])
+{
+ int opt, longidx, ret;
+
+ while (1) {
+ opt = getopt_long(argc, argv, "d:f:hM::m:", test_meta_opts,
+ &longidx);
+ if (opt < 0)
+ break;
+
+ switch (opt) {
+ case 'd':
+ opts->ifindex = if_nametoindex(optarg);
+ if (!opts->ifindex)
+ opts->ifindex = strtoul(optarg, NULL, 0);
+
+ break;
+ case 'f':
+ opts->path = optarg;
+ break;
+ case 'h':
+ test_meta_usage(argv, false);
+ return 0;
+ case 'M':
+ ret = libbpf_get_type_btf_id("struct xdp_meta_generic",
+ &opts->btf_id);
+ if (ret) {
+ fprintf(stderr,
+ "Failed to get BTF ID: %s (%d)\n",
+ strerror(-ret), ret);
+ return ret;
+ }
+
+ /* Allow both `-M64` and `-M 64` */
+ if (!optarg && optind < argc && argv[optind] &&
+ *argv[optind] >= '0' && *argv[optind] <= '9')
+ optarg = argv[optind];
+
+ opts->meta_thresh = strtoul(optarg ? : "1", NULL, 0);
+ break;
+ case 'm':
+ if (!strcmp(optarg, "skb"))
+ opts->flags = XDP_FLAGS_SKB_MODE;
+ else if (!strcmp(optarg, "drv"))
+ opts->flags = XDP_FLAGS_DRV_MODE;
+ else if (!strcmp(optarg, "hw"))
+ opts->flags = XDP_FLAGS_HW_MODE;
+
+ if (opts->flags)
+ break;
+
+ /* fallthrough */
+ default:
+ test_meta_usage(argv, true);
+ return -EINVAL;
+ }
+ }
+
+ if (optind >= argc || !argv[optind]) {
+ fprintf(stderr, "Command is required\n");
+ test_meta_usage(argv, true);
+
+ return -EINVAL;
+ }
+
+ opts->cmd = argv[optind];
+
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ struct test_meta_op_opts opts = { };
+ int ret;
+
+ libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
+
+ if (argc < 3) {
+ test_meta_usage(argv, true);
+ return -EINVAL;
+ }
+
+ ret = test_meta_parse_args(&opts, argc, argv);
+ if (ret)
+ return ret;
+
+ if (!opts.ifindex) {
+ fprintf(stderr, "Invalid or missing device argument\n");
+ test_meta_usage(argv, true);
+
+ return -EINVAL;
+ }
+
+ opts.skel = test_xdp_meta__open_and_load();
+ ret = libbpf_get_error(opts.skel);
+ if (ret) {
+ fprintf(stderr, "Failed to load test_xdp_meta skeleton: %s (%d)\n",
+ strerror(-ret), ret);
+ return ret;
+ }
+
+ ret = asprintf(&opts.path, "%s/xdp/%s-%u", opts.path ? : "/sys/fs/bpf",
+ opts.skel->skeleton->name, opts.ifindex);
+ ret = ret < 0 ? -errno : 0;
+ if (ret) {
+ fprintf(stderr, "Failed to allocate path string: %s (%d)\n",
+ strerror(-ret), ret);
+ goto meta_destroy;
+ }
+
+ if (!strcmp(opts.cmd, "attach")) {
+ ret = test_meta_link_attach(&opts);
+ } else if (!strcmp(opts.cmd, "update")) {
+ ret = test_meta_link_update(&opts);
+ } else if (!strcmp(opts.cmd, "detach")) {
+ ret = test_meta_link_detach(&opts);
+ } else {
+ fprintf(stderr, "Invalid command '%s'\n", opts.cmd);
+ test_meta_usage(argv, true);
+
+ ret = -EINVAL;
+ }
+
+ if (ret)
+ fprintf(stderr, "Failed to execute command '%s': %s (%d)\n",
+ opts.cmd, strerror(-ret), ret);
+
+ free(opts.path);
+meta_destroy:
+ test_xdp_meta__destroy(opts.skel);
+
+ return ret;
+}
diff --git a/tools/testing/selftests/bpf/test_xdp_meta.sh b/tools/testing/selftests/bpf/test_xdp_meta.sh
index 7232714e89b3..79c2ccb68dda 100755
--- a/tools/testing/selftests/bpf/test_xdp_meta.sh
+++ b/tools/testing/selftests/bpf/test_xdp_meta.sh
@@ -5,6 +5,11 @@ readonly KSFT_SKIP=4
readonly NS1="ns1-$(mktemp -u XXXXXX)"
readonly NS2="ns2-$(mktemp -u XXXXXX)"

+# We need a persistent BPF FS mountpoint. `ip netns exec` prepares a different
+# temporary one on each invocation
+readonly FS="$(mktemp -d XXXXXX)"
+mount -t bpf bpffs ${FS}
+
cleanup()
{
if [ "$?" = "0" ]; then
@@ -14,9 +19,16 @@ cleanup()
fi

set +e
+
+ ip netns exec ${NS1} ./test_xdp_meta detach -d veth1 -f ${FS} -m skb 2> /dev/null
+ ip netns exec ${NS2} ./test_xdp_meta detach -d veth2 -f ${FS} -m skb 2> /dev/null
+
ip link del veth1 2> /dev/null
ip netns del ${NS1} 2> /dev/null
ip netns del ${NS2} 2> /dev/null
+
+ umount ${FS}
+ rm -fr ${FS}
}

ip link set dev lo xdp off 2>/dev/null > /dev/null
@@ -54,4 +66,43 @@ ip netns exec ${NS2} ip link set dev veth2 up
ip netns exec ${NS1} ping -c 1 10.1.1.22
ip netns exec ${NS2} ping -c 1 10.1.1.11

+#
+# Generic metadata part
+#
+
+# Cleanup
+ip netns exec ${NS1} ip link set dev veth1 xdp off
+ip netns exec ${NS2} ip link set dev veth2 xdp off
+
+ip netns exec ${NS1} tc filter del dev veth1 ingress
+ip netns exec ${NS2} tc filter del dev veth2 ingress
+
+# Enable metadata generation for every frame
+ip netns exec ${NS1} ./test_xdp_meta attach -d veth1 -f ${FS} -m skb -M
+ip netns exec ${NS2} ./test_xdp_meta attach -d veth2 -f ${FS} -m skb -M
+
+# Those two must fail: XDP prog drops packets < 128 bytes with metadata
+set +e
+
+ip netns exec ${NS1} ping -c 1 10.1.1.22 -W 0.2
+if [ "$?" = "0" ]; then
+ exit 1
+fi
+ip netns exec ${NS2} ping -c 1 10.1.1.11 -W 0.2
+if [ "$?" = "0" ]; then
+ exit 1
+fi
+
+set -e
+
+# Enable metadata only for frames >= 128 bytes
+ip netns exec ${NS1} ./test_xdp_meta update -d veth1 -f ${FS} -m skb -M 128
+ip netns exec ${NS2} ./test_xdp_meta update -d veth2 -f ${FS} -m skb -M 128
+
+# Must succeed
+ip netns exec ${NS1} ping -c 1 10.1.1.22
+ip netns exec ${NS2} ping -c 1 10.1.1.11
+ip netns exec ${NS1} ping -c 1 10.1.1.22 -s 128
+ip netns exec ${NS2} ping -c 1 10.1.1.11 -s 128
+
exit 0
--
2.36.1

2022-06-28 20:48:50

by Alexander Lobakin

Subject: [PATCH RFC bpf-next 04/52] libbpf: patch module BTF ID into BPF insns

From: Larysa Zaremba <[email protected]>

Return both the type ID and the BTF ID from
bpf_core_type_id_kernel(). Earlier, only the type ID was returned,
despite the fact that LLVM has enabled the 64-bit return type for
this instruction [1].
This was done as a preparation for the patch [2], which also
strongly served as an inspiration for this implementation.
[1] https://reviews.llvm.org/D91489
[2] https://lore.kernel.org/all/[email protected]
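
A BPF program can then split the pair back if needed (a sketch):

	__u64 full_id = bpf_core_type_id_kernel(struct xdp_meta_generic);
	__u32 btf_obj_id = full_id >> 32;	/* which BTF (vmlinux/module) */
	__u32 type_id = (__u32)full_id;		/* type ID inside that BTF */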

Signed-off-by: Larysa Zaremba <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/lib/bpf/bpf_core_read.h | 3 ++-
tools/lib/bpf/relo_core.c | 8 +++++++-
tools/lib/bpf/relo_core.h | 1 +
3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index fd48b1ff59ca..2b7d675b2dd0 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -167,7 +167,8 @@ enum bpf_enum_value_kind {
* Convenience macro to get BTF type ID of a target kernel's type that matches
* specified local type.
* Returns:
- * - valid 32-bit unsigned type ID in kernel BTF;
+ * - valid 64-bit unsigned integer: the upper 32 bits are the BTF ID
+ * and the lower 32 bits are the type ID within that BTF;
* - 0, if no matching type was found in a target kernel BTF.
*/
#define bpf_core_type_id_kernel(type) \
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index e070123332cd..020f0f81374c 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -884,6 +884,7 @@ static int bpf_core_calc_relo(const char *prog_name,
res->fail_memsz_adjust = false;
res->orig_sz = res->new_sz = 0;
res->orig_type_id = res->new_type_id = 0;
+ res->btf_obj_id = 0;

if (core_relo_is_field_based(relo->kind)) {
err = bpf_core_calc_field_relo(prog_name, relo, local_spec,
@@ -934,6 +935,8 @@ static int bpf_core_calc_relo(const char *prog_name,
} else if (core_relo_is_type_based(relo->kind)) {
err = bpf_core_calc_type_relo(relo, local_spec, &res->orig_val, &res->validate);
err = err ?: bpf_core_calc_type_relo(relo, targ_spec, &res->new_val, NULL);
+ if (!err && relo->kind == BPF_CORE_TYPE_ID_TARGET)
+ res->btf_obj_id = btf_obj_id(targ_spec->btf);
} else if (core_relo_is_enumval_based(relo->kind)) {
err = bpf_core_calc_enumval_relo(relo, local_spec, &res->orig_val);
err = err ?: bpf_core_calc_enumval_relo(relo, targ_spec, &res->new_val);
@@ -1125,7 +1128,10 @@ int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
}

insn[0].imm = new_val;
- insn[1].imm = new_val >> 32;
+ /* For type IDs, upper 32 bits are used for BTF ID */
+ insn[1].imm = relo->kind == BPF_CORE_TYPE_ID_TARGET ?
+ res->btf_obj_id :
+ (new_val >> 32);
pr_debug("prog '%s': relo #%d: patched insn #%d (LDIMM64) imm64 %llu -> %llu\n",
prog_name, relo_idx, insn_idx,
(unsigned long long)imm, (unsigned long long)new_val);
diff --git a/tools/lib/bpf/relo_core.h b/tools/lib/bpf/relo_core.h
index 3fd3842d4230..f026ea36140e 100644
--- a/tools/lib/bpf/relo_core.h
+++ b/tools/lib/bpf/relo_core.h
@@ -66,6 +66,7 @@ struct bpf_core_relo_res {
__u32 orig_type_id;
__u32 new_sz;
__u32 new_type_id;
+ __u32 btf_obj_id;
};

int __bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
--
2.36.1

2022-06-28 20:50:47

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH RFC bpf-next 13/52] libbpf: add ability to set the meta threshold on setting XDP prog

Allow setting the metadata threshold from the userspace when
attaching or updating an XDP program. Covered functions:
* bpf_link_create() - via &bpf_link_create_ops;
* bpf_link_update() - via &bpf_link_update_ops;
* bpf_xdp_attach() - via &bpf_xdp_attach_ops;
* bpf_set_link_xdp_fd_opts() - via &bpf_xdp_set_link_opts;

No support for bpf_get_link_xdp_info()/&xdp_link_info as we store
additional data in the kernel in BPF link mode only.
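
For illustration, setting both new params when attaching could look
like this (the values and surrounding code are made up; btf_id is
assumed to have been obtained beforehand):

    LIBBPF_OPTS(bpf_xdp_attach_opts, opts,
                .btf_id = btf_id,
                .meta_thresh = 128);
    int err;

    /* ask the driver to compose metadata for frames >= 128 bytes */
    err = bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, &opts);
    if (err)
        return err;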

Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/lib/bpf/bpf.c | 3 +++
tools/lib/bpf/bpf.h | 8 +++++++-
tools/lib/bpf/libbpf.h | 4 ++--
tools/lib/bpf/netlink.c | 10 ++++++++++
4 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 6036dc75cc7b..e7c713a418f6 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -807,6 +807,9 @@ int bpf_link_create(int prog_fd, int target_fd,
break;
case BPF_XDP:
attr.link_create.xdp.btf_id = OPTS_GET(opts, xdp.btf_id, 0);
+ attr.link_create.xdp.meta_thresh = OPTS_GET(opts,
+ xdp.meta_thresh,
+ 0);
if (!OPTS_ZEROED(opts, xdp))
return libbpf_err(-EINVAL);
break;
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 4e17995fdaff..c0f54f24d675 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -385,6 +385,8 @@ struct bpf_link_create_opts {
struct {
/* target metadata BTF + type ID */
__aligned_u64 btf_id;
+ /* frame size to start composing XDP metadata from */
+ __u32 meta_thresh;
} xdp;
};
size_t :0;
@@ -408,11 +410,15 @@ struct bpf_link_update_opts {
struct {
/* new target metadata BTF + type ID */
__aligned_u64 new_btf_id;
+ /* new frame size to start composing XDP
+ * metadata from
+ */
+ __u32 new_meta_thresh;
} xdp;
};
size_t :0;
};
-#define bpf_link_update_opts__last_field xdp.new_btf_id
+#define bpf_link_update_opts__last_field xdp.new_meta_thresh

LIBBPF_API int bpf_link_update(int link_fd, int new_prog_fd,
const struct bpf_link_update_opts *opts);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 4f77128ba770..99ac94f148fc 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1193,7 +1193,7 @@ struct xdp_link_info {
struct bpf_xdp_set_link_opts {
size_t sz;
int old_fd;
- __u32 :32;
+ __u32 meta_thresh;
__u64 btf_id;
size_t :0;
};
@@ -1213,7 +1213,7 @@ LIBBPF_API int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
struct bpf_xdp_attach_opts {
size_t sz;
int old_prog_fd;
- __u32 :32;
+ __u32 meta_thresh;
__u64 btf_id;
size_t :0;
};
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index 104a809d5fb2..ac2a87243ecd 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -236,6 +236,7 @@ struct __bpf_set_link_xdp_fd_opts {
int old_fd;
__u32 flags;
__u64 btf_id;
+ __u32 meta_thresh;
};

static int
@@ -276,6 +277,13 @@ __bpf_set_link_xdp_fd_replace(const struct __bpf_set_link_xdp_fd_opts *opts)
if (ret < 0)
return ret;
}
+ if (opts->meta_thresh) {
+ ret = nlattr_add(&req, IFLA_XDP_META_THRESH,
+ &opts->meta_thresh,
+ sizeof(opts->meta_thresh));
+ if (ret < 0)
+ return ret;
+ }
nlattr_end_nested(&req, nla);

return libbpf_netlink_send_recv(&req, NULL, NULL, NULL);
@@ -300,6 +308,7 @@ int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags, const struct bpf_xdp_a
sl_opts.old_fd = -1;

sl_opts.btf_id = OPTS_GET(opts, btf_id, 0);
+ sl_opts.meta_thresh = OPTS_GET(opts, meta_thresh, 0);

err = __bpf_set_link_xdp_fd_replace(&sl_opts);
return libbpf_err(err);
@@ -330,6 +339,7 @@ int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags,
}

sl_opts.btf_id = OPTS_GET(opts, btf_id, 0);
+ sl_opts.meta_thresh = OPTS_GET(opts, meta_thresh, 0);

ret = __bpf_set_link_xdp_fd_replace(&sl_opts);
return libbpf_err(ret);
--
2.36.1

2022-06-28 20:53:42

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH RFC bpf-next 11/52] libbpf: factor out __bpf_set_link_xdp_fd_replace() args into a struct

Its argument list already consists of 4 entries, and there are more
to be added. It's convenient to add new opts as they are already
being passed around in structs, but in the end the mentioned function
takes all the opts one by one.
Place them into a local struct which satisfies every current call
site, so adding a new opt will now be a matter of adding a new field
and a corresponding nlattr_add() call.

Signed-off-by: Alexander Lobakin <[email protected]>
---
tools/lib/bpf/netlink.c | 60 ++++++++++++++++++++++++++++-------------
1 file changed, 42 insertions(+), 18 deletions(-)

diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index cbc8967d5402..3a25178d0d12 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -230,8 +230,15 @@ static int libbpf_netlink_send_recv(struct libbpf_nla_req *req,
return ret;
}

-static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd,
- __u32 flags)
+struct __bpf_set_link_xdp_fd_opts {
+ int ifindex;
+ int fd;
+ int old_fd;
+ __u32 flags;
+};
+
+static int
+__bpf_set_link_xdp_fd_replace(const struct __bpf_set_link_xdp_fd_opts *opts)
{
struct nlattr *nla;
int ret;
@@ -242,22 +249,23 @@ static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd,
req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
req.nh.nlmsg_type = RTM_SETLINK;
req.ifinfo.ifi_family = AF_UNSPEC;
- req.ifinfo.ifi_index = ifindex;
+ req.ifinfo.ifi_index = opts->ifindex;

nla = nlattr_begin_nested(&req, IFLA_XDP);
if (!nla)
return -EMSGSIZE;
- ret = nlattr_add(&req, IFLA_XDP_FD, &fd, sizeof(fd));
+ ret = nlattr_add(&req, IFLA_XDP_FD, &opts->fd, sizeof(opts->fd));
if (ret < 0)
return ret;
- if (flags) {
- ret = nlattr_add(&req, IFLA_XDP_FLAGS, &flags, sizeof(flags));
+ if (opts->flags) {
+ ret = nlattr_add(&req, IFLA_XDP_FLAGS, &opts->flags,
+ sizeof(opts->flags));
if (ret < 0)
return ret;
}
- if (flags & XDP_FLAGS_REPLACE) {
- ret = nlattr_add(&req, IFLA_XDP_EXPECTED_FD, &old_fd,
- sizeof(old_fd));
+ if (opts->flags & XDP_FLAGS_REPLACE) {
+ ret = nlattr_add(&req, IFLA_XDP_EXPECTED_FD, &opts->old_fd,
+ sizeof(opts->old_fd));
if (ret < 0)
return ret;
}
@@ -268,18 +276,23 @@ static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd,

int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags, const struct bpf_xdp_attach_opts *opts)
{
- int old_prog_fd, err;
+ struct __bpf_set_link_xdp_fd_opts sl_opts = {
+ .ifindex = ifindex,
+ .flags = flags,
+ .fd = prog_fd,
+ };
+ int err;

if (!OPTS_VALID(opts, bpf_xdp_attach_opts))
return libbpf_err(-EINVAL);

- old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
- if (old_prog_fd)
+ sl_opts.old_fd = OPTS_GET(opts, old_prog_fd, 0);
+ if (sl_opts.old_fd)
-		flags |= XDP_FLAGS_REPLACE;
+		sl_opts.flags |= XDP_FLAGS_REPLACE;
else
- old_prog_fd = -1;
+ sl_opts.old_fd = -1;

- err = __bpf_set_link_xdp_fd_replace(ifindex, prog_fd, old_prog_fd, flags);
+ err = __bpf_set_link_xdp_fd_replace(&sl_opts);
return libbpf_err(err);
}

@@ -291,25 +304,36 @@ int bpf_xdp_detach(int ifindex, __u32 flags, const struct bpf_xdp_attach_opts *o
int bpf_set_link_xdp_fd_opts(int ifindex, int fd, __u32 flags,
const struct bpf_xdp_set_link_opts *opts)
{
- int old_fd = -1, ret;
+ struct __bpf_set_link_xdp_fd_opts sl_opts = {
+ .ifindex = ifindex,
+ .flags = flags,
+ .old_fd = -1,
+ .fd = fd,
+ };
+ int ret;

if (!OPTS_VALID(opts, bpf_xdp_set_link_opts))
return libbpf_err(-EINVAL);

if (OPTS_HAS(opts, old_fd)) {
- old_fd = OPTS_GET(opts, old_fd, -1);
+ sl_opts.old_fd = OPTS_GET(opts, old_fd, -1);
-		flags |= XDP_FLAGS_REPLACE;
+		sl_opts.flags |= XDP_FLAGS_REPLACE;
}

- ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, old_fd, flags);
+ ret = __bpf_set_link_xdp_fd_replace(&sl_opts);
return libbpf_err(ret);
}

int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
{
+ struct __bpf_set_link_xdp_fd_opts sl_opts = {
+ .ifindex = ifindex,
+ .flags = flags,
+ .fd = fd,
+ };
int ret;

- ret = __bpf_set_link_xdp_fd_replace(ifindex, fd, 0, flags);
+ ret = __bpf_set_link_xdp_fd_replace(&sl_opts);
return libbpf_err(ret);
}

--
2.36.1

2022-06-28 20:54:22

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH RFC bpf-next 38/52] net, xdp: remove unused xdp_attachment_info::flags

Since %XDP_QUERY_PROG was removed, the ::flags field is no longer
used: it is written by xdp_attachment_setup(), but never read.
Remove it.

Signed-off-by: Alexander Lobakin <[email protected]>
---
include/net/xdp.h | 1 -
net/bpf/core.c | 1 -
2 files changed, 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 1663d0b3a05a..d1fd809655be 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -382,7 +382,6 @@ struct xdp_attachment_info {
struct bpf_prog *prog;
u64 btf_id;
u32 meta_thresh;
- u32 flags;
};

struct netdev_bpf;
diff --git a/net/bpf/core.c b/net/bpf/core.c
index d2d01b8e6441..65f25019493d 100644
--- a/net/bpf/core.c
+++ b/net/bpf/core.c
@@ -554,7 +554,6 @@ void xdp_attachment_setup(struct xdp_attachment_info *info,
info->prog = bpf->prog;
info->btf_id = bpf->btf_id;
info->meta_thresh = bpf->meta_thresh;
- info->flags = bpf->flags;
}
EXPORT_SYMBOL_GPL(xdp_attachment_setup);

--
2.36.1

2022-06-28 20:55:23

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH RFC bpf-next 36/52] bpf, cpumap: switch to napi_skb_cache_get_bulk()

Now that cpumap uses GRO, which returns unused skb heads to the NAPI
cache, use napi_skb_cache_get_bulk() to try to reuse cached entries
and lower the pressure on the MM layer.
In the case when all 8 skbs from one cpumap batch go into one GRO
skb (so the remaining 7 go back into the cache), there will now be
only 1 skb to allocate per cycle instead of 8. If some other work
happens in between the cycles, even all 8 might get decached each
cycle.
This makes the BH-off period per batch slightly longer: previously,
skb allocation happened in process context.

Signed-off-by: Alexander Lobakin <[email protected]>
---
kernel/bpf/cpumap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 145f49de0931..1bb3ae570e6c 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -365,7 +365,6 @@ static int cpu_map_kthread_run(void *data)
while (!kthread_should_stop() || !__ptr_ring_empty(rcpu->queue)) {
struct xdp_cpumap_stats stats = {}; /* zero stats */
unsigned int kmem_alloc_drops = 0, sched = 0;
- gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
int i, n, m, nframes, xdp_n;
void *frames[CPUMAP_BATCH];
void *skbs[CPUMAP_BATCH];
@@ -416,8 +415,10 @@ static int cpu_map_kthread_run(void *data)

/* Support running another XDP prog on this CPU */
nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list);
+ local_bh_disable();
+
if (nframes) {
- m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs);
+ m = napi_skb_cache_get_bulk(skbs, nframes);
if (unlikely(m == 0)) {
for (i = 0; i < nframes; i++)
skbs[i] = NULL; /* effect: xdp_return_frame */
@@ -425,7 +426,6 @@ static int cpu_map_kthread_run(void *data)
}
}

- local_bh_disable();
for (i = 0; i < nframes; i++) {
struct xdp_frame *xdpf = frames[i];
struct sk_buff *skb = skbs[i];
--
2.36.1

2022-06-29 06:57:45

by John Fastabend

[permalink] [raw]
Subject: RE: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

Alexander Lobakin wrote:
> This RFC is to give the whole picture. It will most likely be split
> onto several series, maybe even merge cycles. See the "table of
> contents" below.

Even for an RFC it's a bit much. Probably improve the summary
message here as well; I'm still not clear on the overall
architecture, so I'm not sure I want to dig into the patches.

>
> The series adds ability to pass different frame
> details/parameters/parameters used by most of NICs and the kernel
> stack (in skbs), not essential, but highly wanted, such as:
>
> * checksum value, status (Rx) or command (Tx);
> * hash value and type/level (Rx);
> * queue number (Rx);
> * timestamps;
> * and so on.
>
> As XDP structures used to represent frames are as small as possible
> and must stay like that, it is done by using the already existing
> concept of metadata, i.e. some space right before a frame where BPF
> programs can put arbitrary data.

OK, so you stick attributes in the metadata. You can do this without
touching anything but your driver today. Why not push a patch to
ice to start doing this? People could start using it today, and you
could put it behind some feature flag.

I get everyone wants some grand theory around this, but again, one
patch would do it and your customers could start using it. Show
a benchmark with a 20% speedup or whatever with a small XDP prog
update and you win.

>
> Now, a NIC driver, or even a SmartNIC itself, can put those params
> there in a well-defined format. The format is fixed, but can be of
> several different types represented by structures, which definitions
> are available to the kernel, BPF programs and the userland.

I don't think in general the format needs to be fixed.

> It is fixed due to it being almost a UAPI, and the exact format can
> be determined by reading the last 10 bytes of metadata. They contain
> a 2-byte magic ID to not confuse it with a non-compatible meta and
> a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> structure is defined and the ID of that definition inside that BTF.
> Users can obtain BTF IDs by structure types using helpers available
> in the kernel, BPF (written by the CO-RE/verifier) and the userland
> (libbpf -> kernel call) and then rely on those ID when reading data
> to make sure whether they support it and what to do with it.
> Why separate magic and ID? The idea is to make different formats
> always contain the basic/"generic" structure embedded at the end.
> This way we can still benefit in purely generic consumers (like
> cpumap) while providing some "extra" data to those who support it.

I don't follow this. If you have a struct in your driver, name it
something obvious, like ice_xdp_metadata. If I understand things
correctly, just dump the BTF for the driver, extract the
struct, and done: you can use CO-RE reads. For the 'fixed' case
this looks easy. And I don't think you even need a patch for this.
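
Something like this is all I'd expect on the BPF side; the struct
name and fields below are made up, the driver would export the real
ones via its BTF:

    /* mirrors the (hypothetical) driver struct, relocated by CO-RE */
    struct ice_xdp_metadata {
            __u32 rx_hash;
            __u16 csum_status;
    } __attribute__((preserve_access_index));

    SEC("xdp")
    int read_meta(struct xdp_md *ctx)
    {
            void *data = (void *)(long)ctx->data;
            struct ice_xdp_metadata *md = (void *)(long)ctx->data_meta;

            /* the driver didn't fill the meta for this frame */
            if ((void *)(md + 1) > data)
                    return XDP_PASS;

            bpf_printk("hash 0x%x", md->rx_hash);
            return XDP_PASS;
    }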

>
> The enablement of this feature is controlled on attaching/replacing
> XDP program on an interface with two new parameters: that combined
> BTF+type ID and metadata threshold.
> The threshold specifies the minimum frame size which a driver (or
> NIC) should start composing metadata from. It is introduced instead
> of just false/true flag due to that often it's not worth it to spend
> cycles to fetch all that data for such small frames: let's say, it
> can be even faster to just calculate checksums for them on CPU
> rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> mitigate that.

I would put this in the bonus category. Can you do the simple thing
above without these extra bits and then add them later? Just
pick some overly conservative threshold to start with.

>
> The RFC can be divided into 8 parts:

I'm missing something: why not do the simplest bit of work and
get this running in ice with a few smallish driver updates,
so we can all see it? No need for so many patches.

>
> 01-04: BTF ID hacking: here Larysa provides BPF programs with not
> only type ID, but the ID of the BTF as well by using the
> unused upper 32 bits.
> 05-10: this provides in-kernel mechanisms for taking ID and
> threshold from the userspace and passing it to the drivers.
> 11-18: provides libbpf API to be able to specify those params from
> the userspace, plus some small selftest to verify that both
> the kernel and the userspace parts work.
> 19-29: here the actual structure is defined, then the in-kernel
> helpers and finally here comes the first consumer: function
> used to convert &xdp_frame to &sk_buff now will be trying
> to parse metadata. The affected users are cpumap and veth.
> 30-36: here I try to benefit from the metadata in cpumap even more
> by switching it to GRO. Now that we have checksums from NIC
> available... but even with no meta it gives some fair
> improvements.
> 37-43: enabling building generic metadata on Generic/skb path. Since
> skbs already have all those fields, it's not a problem to do
> this in here, plus allows to benefit from it on interfaces
> not supporting meta yet.
> 44-47: ice driver part, including enabling prog hot-swap;
> 48-52: adds a complex selftest to verify everything works. Can be
> used as a sample as well, showing how to work with metadata
> in BPF programs and how to configure it from the userspace.
>
> Please refer to the actual commit messages where some precise
> implementation details might be explained.
> Nearly 20 of 52 are various cleanups and prereqs, as usually.
>
> Perf figures were taken on cpumap redirect from the ice interface
> (driver-side XDP), redirecting the traffic within the same node.
>
> Frame size / 64/42 128/20 256/8 512/4 1024/2 1532/1
> thread num

You'll have to remind me: what's the production use case for
cpu_map on a modern NIC or even a SmartNIC? Why are you not
just using hardware queues and redirecting to the right
queues in hardware to start with?

Also, my understanding is that if you do XDP_PASS up the stack,
the skb is built with all the normal good stuff from the hw
descriptor. Sorry, going to need some extra context here
to understand.

Could you do a benchmark for AF_XDP? I thought this was
the troublesome use case, where the user-space ring loses
the hardware info, e.g. timestamps and checksum values.

>
> meta off 30022 31350 21993 12144 6374 3610
> meta on 33059 28502 21503 12146 6380 3610
> GRO meta off 30020 31822 21970 12145 6384 3610
> GRO meta on 34736 28848 21566 12144 6381 3610
>
> Yes, redirect between the nodes plays awfully with the metadata
> composed by the driver:

Many production use cases use XDP exactly for this. If it
slows this basic use case down, it's going to be very hard
to use in many environments. Likely it won't be used.

>
> meta off 21449 18078 16897 11820 6383 3610
> meta on 16956 19004 14337 8228 5683 2822
> GRO meta off 22539 19129 16304 11659 6381 3592
> GRO meta on 17047 20366 15435 8878 5600 2753

Do you have hardware that can write the data into the
metadata region so you don't do it in software? Seems
like it should be doable without much trouble and would
make this more viable.

>
> Questions still open:
>
> * the actual generic structure: it must have all the fields used
> oftenly and by the majority of NICs. It can always be expanded
> later on (note that the structure grows to the left), but the
> less often UAPI is modified, the better (less compat pain);

I don't believe a generic structure is needed.

> * ability to specify the exact fields to fill by the driver, e.g.
> flags bitmap passed from the userspace. In theory it can be more
> optimal to not spend cycles on data we don't need, but at the
> same time increases the complexity of the whole concept (e.g. it
> will be more problematic to unify drivers' routines for collecting
> data from descriptors to metadata and to skbs);
> * there was an idea to be able to specify from the userspace the
> desired cacheline offset, so that [the wanted fields of] metadata
> and the packet headers would lay in the same CL. Can't be
> implemented in Generic/skb XDP and ice has some troubles with it
> too;
> * lacks AF_XDP/XSk perf numbers and different other scenarios in
> general, is the current implementation optimal for them?

AF_XDP is the primary use case from my understanding.

> * metadata threshold and everything else present in this
> implementation.

I really think you're asking questions that are two or three
jumps away. Why not do the simplest bit first and kick
the driver into this mode with an on/off switch? But
I don't understand this cpumap use case, so maybe explain
that first.

And sorry, I didn't even look at your 50+ patches. Figured let's
get agreement on the goal first.

.John

2022-06-29 14:04:44

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

John Fastabend <[email protected]> writes:

> Alexander Lobakin wrote:
>> This RFC is to give the whole picture. It will most likely be split
>> onto several series, maybe even merge cycles. See the "table of
>> contents" below.
>
> Even for RFC its a bit much. Probably improve the summary
> message here as well I'm still not clear on the overall
> architecture so not sure I want to dig into patches.

+1 on this, and piggybacking on your comment to chime in on the general
architecture.

>> Now, a NIC driver, or even a SmartNIC itself, can put those params
>> there in a well-defined format. The format is fixed, but can be of
>> several different types represented by structures, which definitions
>> are available to the kernel, BPF programs and the userland.
>
> I don't think in general the format needs to be fixed.

No, that's the whole point of BTF: it's not supposed to be UAPI, we'll
use CO-RE to enable dynamic formats...

[...]

>> It is fixed due to it being almost a UAPI, and the exact format can
>> be determined by reading the last 10 bytes of metadata. They contain
>> a 2-byte magic ID to not confuse it with a non-compatible meta and
>> a 8-byte combined BTF ID + type ID: the ID of the BTF where this
>> structure is defined and the ID of that definition inside that BTF.
>> Users can obtain BTF IDs by structure types using helpers available
>> in the kernel, BPF (written by the CO-RE/verifier) and the userland
>> (libbpf -> kernel call) and then rely on those ID when reading data
>> to make sure whether they support it and what to do with it.
>> Why separate magic and ID? The idea is to make different formats
>> always contain the basic/"generic" structure embedded at the end.
>> This way we can still benefit in purely generic consumers (like
>> cpumap) while providing some "extra" data to those who support it.
>
> I don't follow this. If you have a struct in your driver name it
> something obvious, ice_xdp_metadata. If I understand things
> correctly just dump the BTF for the driver, extract the
> struct and done you can use CO-RE reads. For the 'fixed' case
> this looks easy. And I don't think you even need a patch for this.

...however, as we've discussed previously, we do need a bit of
infrastructure around this. In particular, we need to embed
the BTF ID into the metadata itself so BPF can do runtime disambiguation
between different formats (and add the right CO-RE primitives to make
this easy). This is for two reasons:

- The metadata might be different per-packet (e.g., PTP packets with
timestamps interleaved with bulk data without them)

- With redirects we may end up processing packets from different devices
in a single XDP program (in devmap or cpumap, or on a veth) so we need
to be able to disambiguate at runtime.

So I think the part of the design that puts the BTF ID into the end of
the metadata struct is sound; however, the actual format doesn't have to
be fixed, we can use CO-RE to pick out the bits that a given BPF program
needs; we just need a convention for how drivers report which format(s)
they support. Which we should also agree on (and add core infrastructure
around) so each driver doesn't go around inventing their own
conventions.
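
(For illustration, the runtime check could look roughly like this on
the BPF side; the tail layout and all the names here are assumptions,
not a settled interface:)

    struct meta_tail {
            __u64 full_id; /* BTF obj ID << 32 | type ID */
            __u16 magic;
    } __attribute__((packed));

    SEC("xdp")
    int meta_consumer(struct xdp_md *ctx)
    {
            void *meta = (void *)(long)ctx->data_meta;
            void *data = (void *)(long)ctx->data;
            /* the tail sits in the last bytes right before the frame */
            struct meta_tail *tail = data - sizeof(*tail);

            if ((void *)tail < meta)
                    return XDP_PASS; /* no metadata at all */

            if (tail->magic != 0xeda6 || /* made-up magic value */
                tail->full_id !=
                bpf_core_type_id_kernel(struct xdp_meta_generic))
                    return XDP_PASS; /* unknown format, fall back */

            /* known format: safe to read its fields via CO-RE */
            return XDP_PASS;
    }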

>> The enablement of this feature is controlled on attaching/replacing
>> XDP program on an interface with two new parameters: that combined
>> BTF+type ID and metadata threshold.
>> The threshold specifies the minimum frame size which a driver (or
>> NIC) should start composing metadata from. It is introduced instead
>> of just false/true flag due to that often it's not worth it to spend
>> cycles to fetch all that data for such small frames: let's say, it
>> can be even faster to just calculate checksums for them on CPU
>> rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
>> 15 Mpps on 64 byte frames with enabled metadata, threshold can help
>> mitigate that.
>
> I would put this in the bonus category. Can you do the simple thing
> above without these extra bits and then add them later. Just
> pick some overly conservative threshold to start with.

Yeah, I'd agree this kind of configuration is something that can be
added later, and also it's sort of orthogonal to the consumption of the
metadata itself.

Also, tying this configuration into the loading of an XDP program is a
terrible interface: these are hardware configuration options, let's just
put them into ethtool or 'ip link' like any other piece of device
configuration.

>> The RFC can be divided into 8 parts:
>
> I'm missing something why not do the simplest bit of work and
> get this running in ice with a few smallish driver updates
> so we can all see it. No need for so many patches.

Agreed. This incremental approach is basically what Jesper's
simultaneous series makes a start on, AFAICT? Would be nice if y'all
could converge the efforts :)

[...]

> I really think your asking questions that are two or three
> jumps away. Why not do the simplest bit first and kick
> the driver with an on/off switch into this mode. But
> I don't understand this cpumap use case so maybe explain
> that first.
>
> And sorry didn't even look at your 50+ patches. Figure lets
> get agreement on the goal first.

+1 on both of these :)

-Toke

2022-06-29 18:06:50

by Zvi Effron

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

On Tue, Jun 28, 2022 at 11:15 PM John Fastabend
<[email protected]> wrote:
>
> Alexander Lobakin wrote:
> > This RFC is to give the whole picture. It will most likely be split
> > onto several series, maybe even merge cycles. See the "table of
> > contents" below.
>
> Even for RFC its a bit much. Probably improve the summary
> message here as well I'm still not clear on the overall
> architecture so not sure I want to dig into patches.
>
> >
> > The series adds ability to pass different frame
> > details/parameters/parameters used by most of NICs and the kernel
> > stack (in skbs), not essential, but highly wanted, such as:
> >
> > * checksum value, status (Rx) or command (Tx);
> > * hash value and type/level (Rx);
> > * queue number (Rx);
> > * timestamps;
> > * and so on.
> >
> > As XDP structures used to represent frames are as small as possible
> > and must stay like that, it is done by using the already existing
> > concept of metadata, i.e. some space right before a frame where BPF
> > programs can put arbitrary data.
>
> OK so you stick attributes in the metadata. You can do this without
> touching anything but your driver today. Why not push a patch to
> ice to start doing this? People could start using it today and put
> it in some feature flag.
>
> I get everyone wants some grand theory around this but again one
> patch would do it and your customers could start using it. Show
> a benchmark with 20% speedup or whatever with small XDP prog
> update and you win.
>
> >
> > Now, a NIC driver, or even a SmartNIC itself, can put those params
> > there in a well-defined format. The format is fixed, but can be of
> > several different types represented by structures, which definitions
> > are available to the kernel, BPF programs and the userland.
>
> I don't think in general the format needs to be fixed.
>
> > It is fixed due to it being almost a UAPI, and the exact format can
> > be determined by reading the last 10 bytes of metadata. They contain
> > a 2-byte magic ID to not confuse it with a non-compatible meta and
> > a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> > structure is defined and the ID of that definition inside that BTF.
> > Users can obtain BTF IDs by structure types using helpers available
> > in the kernel, BPF (written by the CO-RE/verifier) and the userland
> > (libbpf -> kernel call) and then rely on those ID when reading data
> > to make sure whether they support it and what to do with it.
> > Why separate magic and ID? The idea is to make different formats
> > always contain the basic/"generic" structure embedded at the end.
> > This way we can still benefit in purely generic consumers (like
> > cpumap) while providing some "extra" data to those who support it.
>
> I don't follow this. If you have a struct in your driver name it
> something obvious, ice_xdp_metadata. If I understand things
> correctly just dump the BTF for the driver, extract the
> struct and done you can use CO-RE reads. For the 'fixed' case
> this looks easy. And I don't think you even need a patch for this.
>
> >
> > The enablement of this feature is controlled on attaching/replacing
> > XDP program on an interface with two new parameters: that combined
> > BTF+type ID and metadata threshold.
> > The threshold specifies the minimum frame size which a driver (or
> > NIC) should start composing metadata from. It is introduced instead
> > of just false/true flag due to that often it's not worth it to spend
> > cycles to fetch all that data for such small frames: let's say, it
> > can be even faster to just calculate checksums for them on CPU
> > rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> > 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> > mitigate that.
>
> I would put this in the bonus category. Can you do the simple thing
> above without these extra bits and then add them later. Just
> pick some overly conservative threshold to start with.
>
> >
> > The RFC can be divided into 8 parts:
>
> I'm missing something why not do the simplest bit of work and
> get this running in ice with a few smallish driver updates
> so we can all see it. No need for so many patches.
>
> >
> > 01-04: BTF ID hacking: here Larysa provides BPF programs with not
> > only type ID, but the ID of the BTF as well by using the
> > unused upper 32 bits.
> > 05-10: this provides in-kernel mechanisms for taking ID and
> > threshold from the userspace and passing it to the drivers.
> > 11-18: provides libbpf API to be able to specify those params from
> > the userspace, plus some small selftest to verify that both
> > the kernel and the userspace parts work.
> > 19-29: here the actual structure is defined, then the in-kernel
> > helpers and finally here comes the first consumer: function
> > used to convert &xdp_frame to &sk_buff now will be trying
> > to parse metadata. The affected users are cpumap and veth.
> > 30-36: here I try to benefit from the metadata in cpumap even more
> > by switching it to GRO. Now that we have checksums from NIC
> > available... but even with no meta it gives some fair
> > improvements.
> > 37-43: enabling building generic metadata on Generic/skb path. Since
> > skbs already have all those fields, it's not a problem to do
> > this in here, plus allows to benefit from it on interfaces
> > not supporting meta yet.
> > 44-47: ice driver part, including enabling prog hot-swap;
> > 48-52: adds a complex selftest to verify everything works. Can be
> > used as a sample as well, showing how to work with metadata
> > in BPF programs and how to configure it from the userspace.
> >
> > Please refer to the actual commit messages where some precise
> > implementation details might be explained.
> > Nearly 20 of 52 are various cleanups and prereqs, as usually.
> >
> > Perf figures were taken on cpumap redirect from the ice interface
> > (driver-side XDP), redirecting the traffic within the same node.
> >
> > Frame size / 64/42 128/20 256/8 512/4 1024/2 1532/1
> > thread num
>
> You'll have to remind me whats the production use case for
> cpu_map on a modern nic or even smart nic? Why are you not
> just using a hardware queues and redirecting to the right
> queues in hardware to start with?
>
> Also my understanding is if you do XDP_PASS up the stack
> the skb is built with all the normal good stuff from hw
> descriptor. Sorry going to need some extra context here
> to understand.
>
> Could you do a benchmark for AF_XDP I thought this was
> the troublesome use case where the user space ring lost
> the hardware info e.g. timestamps and checksum values.
>
> >
> > meta off 30022 31350 21993 12144 6374 3610
> > meta on 33059 28502 21503 12146 6380 3610
> > GRO meta off 30020 31822 21970 12145 6384 3610
> > GRO meta on 34736 28848 21566 12144 6381 3610
> >
> > Yes, redirect between the nodes plays awfully with the metadata
> > composed by the driver:
>
> Many production use case use XDP exactly for this. If it
> slows this basic use case down its going to be very hard
> to use in many environments. Likely it wont be used.
>
> >
> > meta off 21449 18078 16897 11820 6383 3610
> > meta on 16956 19004 14337 8228 5683 2822
> > GRO meta off 22539 19129 16304 11659 6381 3592
> > GRO meta on 17047 20366 15435 8878 5600 2753
>
> Do you have hardware that can write the data into the
> metadata region so you don't do it in software? Seems
> like it should be doable without much trouble and would
> make this more viable.
>
> >
> > Questions still open:
> >
> > * the actual generic structure: it must have all the fields used
> > oftenly and by the majority of NICs. It can always be expanded
> > later on (note that the structure grows to the left), but the
> > less often UAPI is modified, the better (less compat pain);
>
> I don't believe a generic structure is needed.
>
> > * ability to specify the exact fields to fill by the driver, e.g.
> > flags bitmap passed from the userspace. In theory it can be more
> > optimal to not spend cycles on data we don't need, but at the
> > same time increases the complexity of the whole concept (e.g. it
> > will be more problematic to unify drivers' routines for collecting
> > data from descriptors to metadata and to skbs);
> > * there was an idea to be able to specify from the userspace the
> > desired cacheline offset, so that [the wanted fields of] metadata
> > and the packet headers would lay in the same CL. Can't be
> > implemented in Generic/skb XDP and ice has some troubles with it
> > too;
> > * lacks AF_XDP/XSk perf numbers and different other scenarios in
> > general, is the current implementation optimal for them?
>
> AF_XDP is the primary use case from my understanding.
>

AF_XDP is a use case, and might be the primary, but we work with pure XDP and
have been waiting for the ability to take advantage of the hardware checksums
for years. It would be a very large performance boost for us (in theory) as
we're currently having to verify the checksums ourselves in software, and
recompute them on modifications (since we can't use hardware TX checksums).

Also, if I understand correctly, if the functionality is available to pure XDP,
AF_XDP could benefit from it by having the XDP program that redirects to AF_XDP
copy it into metadata where AF_XDP can find it, thanks to the user-defined
contract between the XDP program and the userspace program? (Not as efficient,
obviously, and duplicative, but it would work, I think.)
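
(For illustration, the XDP side of such a contract could look roughly
like the sketch below; struct my_hints, the map and the hints source
are all made up for the example:)

    struct {
            __uint(type, BPF_MAP_TYPE_XSKMAP);
            __uint(max_entries, 64);
            __type(key, __u32);
            __type(value, __u32);
    } xsks_map SEC(".maps");

    struct my_hints {
            __u32 rx_hash;
            __u16 csum_ok;
    };

    SEC("xdp")
    int to_xsk(struct xdp_md *ctx)
    {
            struct my_hints *meta;

            /* reserve metadata space right before the frame */
            if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
                    return XDP_PASS;

            meta = (void *)(long)ctx->data_meta;
            if ((void *)(meta + 1) > (void *)(long)ctx->data)
                    return XDP_PASS;

            meta->rx_hash = 0; /* fill from wherever the hints live */
            meta->csum_ok = 1;

            /* userspace then reads my_hints right before each frame */
            return bpf_redirect_map(&xsks_map, ctx->rx_queue_index,
                                    XDP_PASS);
    }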

> > * metadata threshold and everything else present in this
> > implementation.
>
> I really think your asking questions that are two or three
> jumps away. Why not do the simplest bit first and kick
> the driver with an on/off switch into this mode. But
> I don't understand this cpumap use case so maybe explain
> that first.
>
> And sorry didn't even look at your 50+ patches. Figure lets
> get agreement on the goal first.
>
> .John

2022-06-30 07:52:30

by Magnus Karlsson

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

On Wed, Jun 29, 2022 at 7:57 PM Zvi Effron <[email protected]> wrote:
>
> On Tue, Jun 28, 2022 at 11:15 PM John Fastabend
> <[email protected]> wrote:
> >
> > Alexander Lobakin wrote:
> > > This RFC is to give the whole picture. It will most likely be split
> > > onto several series, maybe even merge cycles. See the "table of
> > > contents" below.
> >
> > Even for RFC its a bit much. Probably improve the summary
> > message here as well I'm still not clear on the overall
> > architecture so not sure I want to dig into patches.
> >
> > >
> > > The series adds ability to pass different frame
> > > details/parameters/parameters used by most of NICs and the kernel
> > > stack (in skbs), not essential, but highly wanted, such as:
> > >
> > > * checksum value, status (Rx) or command (Tx);
> > > * hash value and type/level (Rx);
> > > * queue number (Rx);
> > > * timestamps;
> > > * and so on.
> > >
> > > As XDP structures used to represent frames are as small as possible
> > > and must stay like that, it is done by using the already existing
> > > concept of metadata, i.e. some space right before a frame where BPF
> > > programs can put arbitrary data.
> >
> > OK so you stick attributes in the metadata. You can do this without
> > touching anything but your driver today. Why not push a patch to
> > ice to start doing this? People could start using it today and put
> > it in some feature flag.
> >
> > I get everyone wants some grand theory around this but again one
> > patch would do it and your customers could start using it. Show
> > a benchmark with 20% speedup or whatever with small XDP prog
> > update and you win.
> >
> > >
> > > Now, a NIC driver, or even a SmartNIC itself, can put those params
> > > there in a well-defined format. The format is fixed, but can be of
> > > several different types represented by structures, which definitions
> > > are available to the kernel, BPF programs and the userland.
> >
> > I don't think in general the format needs to be fixed.
> >
> > > It is fixed due to it being almost a UAPI, and the exact format can
> > > be determined by reading the last 10 bytes of metadata. They contain
> > > a 2-byte magic ID to not confuse it with a non-compatible meta and
> > > a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> > > structure is defined and the ID of that definition inside that BTF.
> > > Users can obtain BTF IDs by structure types using helpers available
> > > in the kernel, BPF (written by the CO-RE/verifier) and the userland
> > > (libbpf -> kernel call) and then rely on those ID when reading data
> > > to make sure whether they support it and what to do with it.
> > > Why separate magic and ID? The idea is to make different formats
> > > always contain the basic/"generic" structure embedded at the end.
> > > This way we can still benefit in purely generic consumers (like
> > > cpumap) while providing some "extra" data to those who support it.
> >
> > I don't follow this. If you have a struct in your driver name it
> > something obvious, ice_xdp_metadata. If I understand things
> > correctly just dump the BTF for the driver, extract the
> > struct and done you can use CO-RE reads. For the 'fixed' case
> > this looks easy. And I don't think you even need a patch for this.
> >
> > >
> > > The enablement of this feature is controlled on attaching/replacing
> > > XDP program on an interface with two new parameters: that combined
> > > BTF+type ID and metadata threshold.
> > > The threshold specifies the minimum frame size which a driver (or
> > > NIC) should start composing metadata from. It is introduced instead
> > > of just false/true flag due to that often it's not worth it to spend
> > > cycles to fetch all that data for such small frames: let's say, it
> > > can be even faster to just calculate checksums for them on CPU
> > > rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> > > 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> > > mitigate that.
> >
> > I would put this in the bonus category. Can you do the simple thing
> > above without these extra bits and then add them later. Just
> > pick some overly conservative threshold to start with.
> >
> > >
> > > The RFC can be divided into 8 parts:
> >
> > I'm missing something why not do the simplest bit of work and
> > get this running in ice with a few smallish driver updates
> > so we can all see it. No need for so many patches.
> >
> > >
> > > 01-04: BTF ID hacking: here Larysa provides BPF programs with not
> > > only type ID, but the ID of the BTF as well by using the
> > > unused upper 32 bits.
> > > 05-10: this provides in-kernel mechanisms for taking ID and
> > > threshold from the userspace and passing it to the drivers.
> > > 11-18: provides libbpf API to be able to specify those params from
> > > the userspace, plus some small selftest to verify that both
> > > the kernel and the userspace parts work.
> > > 19-29: here the actual structure is defined, then the in-kernel
> > > helpers and finally here comes the first consumer: function
> > > used to convert &xdp_frame to &sk_buff now will be trying
> > > to parse metadata. The affected users are cpumap and veth.
> > > 30-36: here I try to benefit from the metadata in cpumap even more
> > > by switching it to GRO. Now that we have checksums from NIC
> > > available... but even with no meta it gives some fair
> > > improvements.
> > > 37-43: enabling building generic metadata on Generic/skb path. Since
> > > skbs already have all those fields, it's not a problem to do
> > > this in here, plus allows to benefit from it on interfaces
> > > not supporting meta yet.
> > > 44-47: ice driver part, including enabling prog hot-swap;
> > > 48-52: adds a complex selftest to verify everything works. Can be
> > > used as a sample as well, showing how to work with metadata
> > > in BPF programs and how to configure it from the userspace.
> > >
> > > Please refer to the actual commit messages where some precise
> > > implementation details might be explained.
> > > Nearly 20 of 52 are various cleanups and prereqs, as usually.
> > >
> > > Perf figures were taken on cpumap redirect from the ice interface
> > > (driver-side XDP), redirecting the traffic within the same node.
> > >
> > > Frame size / 64/42 128/20 256/8 512/4 1024/2 1532/1
> > > thread num
> >
> > You'll have to remind me whats the production use case for
> > cpu_map on a modern nic or even smart nic? Why are you not
> > just using a hardware queues and redirecting to the right
> > queues in hardware to start with?
> >
> > Also my understanding is if you do XDP_PASS up the stack
> > the skb is built with all the normal good stuff from hw
> > descriptor. Sorry going to need some extra context here
> > to understand.
> >
> > Could you do a benchmark for AF_XDP I thought this was
> > the troublesome use case where the user space ring lost
> > the hardware info e.g. timestamps and checksum values.
> >
> > >
> > > meta off 30022 31350 21993 12144 6374 3610
> > > meta on 33059 28502 21503 12146 6380 3610
> > > GRO meta off 30020 31822 21970 12145 6384 3610
> > > GRO meta on 34736 28848 21566 12144 6381 3610
> > >
> > > Yes, redirect between the nodes plays awfully with the metadata
> > > composed by the driver:
> >
> > Many production use case use XDP exactly for this. If it
> > slows this basic use case down its going to be very hard
> > to use in many environments. Likely it wont be used.
> >
> > >
> > > meta off 21449 18078 16897 11820 6383 3610
> > > meta on 16956 19004 14337 8228 5683 2822
> > > GRO meta off 22539 19129 16304 11659 6381 3592
> > > GRO meta on 17047 20366 15435 8878 5600 2753
> >
> > Do you have hardware that can write the data into the
> > metadata region so you don't do it in software? Seems
> > like it should be doable without much trouble and would
> > make this more viable.
> >
> > >
> > > Questions still open:
> > >
> > > * the actual generic structure: it must have all the fields used
> > > oftenly and by the majority of NICs. It can always be expanded
> > > later on (note that the structure grows to the left), but the
> > > less often UAPI is modified, the better (less compat pain);
> >
> > I don't believe a generic structure is needed.
> >
> > > * ability to specify the exact fields to fill by the driver, e.g.
> > > flags bitmap passed from the userspace. In theory it can be more
> > > optimal to not spend cycles on data we don't need, but at the
> > > same time increases the complexity of the whole concept (e.g. it
> > > will be more problematic to unify drivers' routines for collecting
> > > data from descriptors to metadata and to skbs);
> > > * there was an idea to be able to specify from the userspace the
> > > desired cacheline offset, so that [the wanted fields of] metadata
> > > and the packet headers would lay in the same CL. Can't be
> > > implemented in Generic/skb XDP and ice has some troubles with it
> > > too;
> > > * lacks AF_XDP/XSk perf numbers and different other scenarios in
> > > general, is the current implementation optimal for them?
> >
> > AF_XDP is the primary use case from my understanding.
> >
>
> AF_XDP is a use case, and might be the primary, but we work with pure XDP and
> have been waiting for the ability to take advantage of the hardware checksums
> for years. It would be a very large performance boost for us (in theory) as
> we're currently having to verify the checksums ourselves in software, and
> recompute them on modifications (since we can't use hardware TX checksums).

Yes, there are plenty of people out there that want that boost both
for XDP and AF_XDP, me included. So this work is important.

> Also, if I understand correctly, if the functionality is available to pure XDP,
> AF_XDP could benefit from it by having the XDP program that redirects to AF_XDP
> copy it into metadata where AF_XDP can find it because of the user defined
> contract between the XDP program and the userspace program? (Not as efficient,
> obviously, and duplicative, but would work, I think.)

Correct, AF_XDP can benefit from this directly. The metadata is
already put before the packet, so in zero-copy mode it will be
available to user-space at no extra cost. In copy mode though, it has
to be copied out together with the packet data. But that code is
already there since you can use the metadata section today for
communicating information between XDP and user-space via AF_XDP.
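
For illustration, in zero-copy mode a consumer can reach it with
plain pointer arithmetic over the umem; the names below are assumed,
this is not code from the series:

    const struct xdp_desc *desc = xsk_ring_cons__rx_desc(&rx_ring, idx);
    __u8 *pkt = xsk_umem__get_data(umem_area, desc->addr);
    struct my_hints *meta = (void *)(pkt - sizeof(*meta));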


> > > * metadata threshold and everything else present in this
> > > implementation.
> >
> > I really think your asking questions that are two or three
> > jumps away. Why not do the simplest bit first and kick
> > the driver with an on/off switch into this mode. But
> > I don't understand this cpumap use case so maybe explain
> > that first.
> >
> > And sorry didn't even look at your 50+ patches. Figure lets
> > get agreement on the goal first.
> >
> > .John

2022-07-04 15:36:50

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

From: John Fastabend <[email protected]>
Date: Tue, 28 Jun 2022 23:15:12 -0700

> Alexander Lobakin wrote:
> > This RFC is to give the whole picture. It will most likely be split
> > onto several series, maybe even merge cycles. See the "table of
> > contents" below.
>
> Even for RFC its a bit much. Probably improve the summary
> message here as well I'm still not clear on the overall
> architecture so not sure I want to dig into patches.

I wanted to give an overview of the whole code that I have for now.
We have several series of 30+, even 60+ patches next to this one,
and I'm not sure I've seen a single "TL;DR" there. The same thing
happened with Matt's folio series: it was 165+ patches; ok, he
didn't send it as is to LKML, but his words were "this is the first
part, please take a look at the whole concept here[link]".
I provided a "table of contents" so that anyone can go and review
only the stuff they're interested in, not touching the rest.
My concern is: what if I submit the first part, then the second,
and then at the third one someone says they don't like the idea and
NACKs it? What then, if it all was supposed to work together as one?

>
> >
> > The series adds ability to pass different frame
> > details/parameters/parameters used by most of NICs and the kernel
> > stack (in skbs), not essential, but highly wanted, such as:
> >
> > * checksum value, status (Rx) or command (Tx);
> > * hash value and type/level (Rx);
> > * queue number (Rx);
> > * timestamps;
> > * and so on.
> >
> > As XDP structures used to represent frames are as small as possible
> > and must stay like that, it is done by using the already existing
> > concept of metadata, i.e. some space right before a frame where BPF
> > programs can put arbitrary data.
>
> OK so you stick attributes in the metadata. You can do this without
> touching anything but your driver today. Why not push a patch to
> ice to start doing this? People could start using it today and put
> it in some feature flag.

We have to configure it somehow, don't we? Do a feature flag for
ice? Why do anything "generic" then, if I could just go with one
driver and send it through our Intel mailing list?

>
> I get everyone wants some grand theory around this but again one
> patch would do it and your customers could start using it. Show
> a benchmark with 20% speedup or whatever with small XDP prog
> update and you win.

One patch would mean hardwiring/hardcoding everything behind one
button; what would be the point, when there have already been
several such examples?
This is an RFC because the whole thing needs to be discussed, not
because I have some drafts and want to show them. It's finished
and polished production-quality code which any vendor or customer
could start using without re-hardcoding it for their own
driver/needs/etc.
I'm not following this "TL;DR" stuff: one can just apply the series
and see how it works for their needs (and then get back and report)
even without feeling like reviewing it.

>
> >
> > Now, a NIC driver, or even a SmartNIC itself, can put those params
> > there in a well-defined format. The format is fixed, but can be of
> > several different types represented by structures, which definitions
> > are available to the kernel, BPF programs and the userland.
>
> I don't think in general the format needs to be fixed.

We discussed it previously as well, not only in regards to this
stuff, but in general. For BPF programs, for sure we can CO-RE
everything, but we also have: a) in-kernel users not hardcoded to
a particular vendor/driver which just want to have generic fields
in one format for every driver; b) AF_XDP/XSk programs which you
can't CO-RE. There was a proposal from Alexei to patch LLVM to be
able to apply CO-RE for AF_XDP (I mean, for ARM64/x86_64/etc.
binaries) as well, but it's a whole different story with way more
caveats.

>
> > It is fixed due to it being almost a UAPI, and the exact format can
> > be determined by reading the last 10 bytes of metadata. They contain
> > a 2-byte magic ID to not confuse it with a non-compatible meta and
> > a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> > structure is defined and the ID of that definition inside that BTF.
> > Users can obtain BTF IDs by structure types using helpers available
> > in the kernel, BPF (written by the CO-RE/verifier) and the userland
> > (libbpf -> kernel call) and then rely on those ID when reading data
> > to make sure whether they support it and what to do with it.
> > Why separate magic and ID? The idea is to make different formats
> > always contain the basic/"generic" structure embedded at the end.
> > This way we can still benefit in purely generic consumers (like
> > cpumap) while providing some "extra" data to those who support it.
>
> I don't follow this. If you have a struct in your driver name it
> something obvious, ice_xdp_metadata. If I understand things
> correctly just dump the BTF for the driver, extract the
> struct and done you can use CO-RE reads. For the 'fixed' case
> this looks easy. And I don't think you even need a patch for this.
>
> >
> > The enablement of this feature is controlled on attaching/replacing
> > XDP program on an interface with two new parameters: that combined
> > BTF+type ID and metadata threshold.
> > The threshold specifies the minimum frame size which a driver (or
> > NIC) should start composing metadata from. It is introduced instead
> > of just false/true flag due to that often it's not worth it to spend
> > cycles to fetch all that data for such small frames: let's say, it
> > can be even faster to just calculate checksums for them on CPU
> > rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> > 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> > mitigate that.
>
> I would put this in the bonus category. Can you do the simple thing
> above without these extra bits and then add them later. Just
> pick some overly conservative threshold to start with.

It's as simple as adding an on/off button; there's no reason to
leave it for later. Or is there?

>
> >
> > The RFC can be divided into 8 parts:
>
> I'm missing something why not do the simplest bit of work and
> get this running in ice with a few smallish driver updates
> so we can all see it. No need for so many patches.

Ok, I should've written this down in the cover letter: it's not a
draft or some hardcoded proof-of-concept...

>
> >
> > 01-04: BTF ID hacking: here Larysa provides BPF programs with not
> > only type ID, but the ID of the BTF as well by using the
> > unused upper 32 bits.
> > 05-10: this provides in-kernel mechanisms for taking ID and
> > threshold from the userspace and passing it to the drivers.
> > 11-18: provides libbpf API to be able to specify those params from
> > the userspace, plus some small selftest to verify that both
> > the kernel and the userspace parts work.
> > 19-29: here the actual structure is defined, then the in-kernel
> > helpers and finally here comes the first consumer: function
> > used to convert &xdp_frame to &sk_buff now will be trying
> > to parse metadata. The affected users are cpumap and veth.
> > 30-36: here I try to benefit from the metadata in cpumap even more
> > by switching it to GRO. Now that we have checksums from NIC
> > available... but even with no meta it gives some fair
> > improvements.
> > 37-43: enabling building generic metadata on Generic/skb path. Since
> > skbs already have all those fields, it's not a problem to do
> > this in here, plus allows to benefit from it on interfaces
> > not supporting meta yet.
> > 44-47: ice driver part, including enabling prog hot-swap;
> > 48-52: adds a complex selftest to verify everything works. Can be
> > used as a sample as well, showing how to work with metadata
> > in BPF programs and how to configure it from the userspace.
> >
> > Please refer to the actual commit messages where some precise
> > implementation details might be explained.
> > Nearly 20 of 52 are various cleanups and prereqs, as usually.
> >
> > Perf figures were taken on cpumap redirect from the ice interface
> > (driver-side XDP), redirecting the traffic within the same node.
> >
> > Frame size / 64/42 128/20 256/8 512/4 1024/2 1532/1
> > thread num
>
> You'll have to remind me whats the production use case for
> cpu_map on a modern nic or even smart nic? Why are you not
> just using a hardware queues and redirecting to the right
> queues in hardware to start with?

Load balancing: you can distribute packets not only by flow, but
however you wish, since you have full access to the frames. Also,
with RSS/RFS you serve interrupts and push a frame through the
networking stack on the same CPU; with cpumap you can do the former
on one CPU and the latter on another, which is obviously faster.
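
For illustration, a toy distribution rule can be as simple as the
below (the map size and the rule itself are arbitrary here):

    struct {
            __uint(type, BPF_MAP_TYPE_CPUMAP);
            __uint(max_entries, 8);
            __type(key, __u32);
            __type(value, __u32); /* per-CPU queue size */
    } cpu_map SEC(".maps");

    SEC("xdp")
    int balance(struct xdp_md *ctx)
    {
            /* any custom rule works, not just the NIC's RSS hash */
            __u32 cpu = ctx->rx_queue_index % 8;

            return bpf_redirect_map(&cpu_map, cpu, 0);
    }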

>
> Also my understanding is if you do XDP_PASS up the stack
> the skb is built with all the normal good stuff from hw
> descriptor. Sorry going to need some extra context here
> to understand.

Correct, and this series makes cpumap on par with (probably even
better than) just %XDP_PASS.

>
> Could you do a benchmark for AF_XDP I thought this was
> the troublesome use case where the user space ring lost
> the hardware info e.g. timestamps and checksum values.

Ok, sure, a bit later. I wasn't focusing on AF_XDP, but it's among
the closest plans.

>
> >
> > meta off 30022 31350 21993 12144 6374 3610
> > meta on 33059 28502 21503 12146 6380 3610
> > GRO meta off 30020 31822 21970 12145 6384 3610
> > GRO meta on 34736 28848 21566 12144 6381 3610
> >
> > Yes, redirect between the nodes plays awfully with the metadata
> > composed by the driver:
>
> Many production use case use XDP exactly for this. If it
> slows this basic use case down its going to be very hard
> to use in many environments. Likely it wont be used.

Redirecting between NUMA nodes is not a good idea in general, as
you will be working with remote memory on each redirect. I'm not
sure it's widely used.
And yes, SmartNICs don't have that problem if they're capable of
composing arbitrary meta themselves.

>
> >
> > meta off       21449   18078   16897   11820   6383    3610
> > meta on        16956   19004   14337   8228    5683    2822
> > GRO meta off   22539   19129   16304   11659   6381    3592
> > GRO meta on    17047   20366   15435   8878    5600    2753
>
> Do you have hardware that can write the data into the
> metadata region so you don't do it in software? Seems
> like it should be doable without much trouble and would
> make this more viable.

For now I personally don't, but: a) some people do; b) I will in
some time. IIRC, we were targeting both SmartNICs and "current
generation" NICs, not giving favour to either of them.

>
> >
> > Questions still open:
> >
> > * the actual generic structure: it must have all the fields used
> > oftenly and by the majority of NICs. It can always be expanded
> > later on (note that the structure grows to the left), but the
> > less often UAPI is modified, the better (less compat pain);
>
> I don't believe a generic structure is needed.

Please see above.

>
> > * ability to specify the exact fields to fill by the driver, e.g.
> > flags bitmap passed from the userspace. In theory it can be more
> > optimal to not spend cycles on data we don't need, but at the
> > same time increases the complexity of the whole concept (e.g. it
> > will be more problematic to unify drivers' routines for collecting
> > data from descriptors to metadata and to skbs);
> > * there was an idea to be able to specify from the userspace the
> > desired cacheline offset, so that [the wanted fields of] metadata
> > and the packet headers would lay in the same CL. Can't be
> > implemented in Generic/skb XDP and ice has some troubles with it
> > too;
> > * lacks AF_XDP/XSk perf numbers and different other scenarios in
> > general, is the current implementation optimal for them?
>
> AF_XDP is the primary use case from my understanding.

Not really, but it is one of them.

>
> > * metadata threshold and everything else present in this
> > implementation.
>
> I really think your asking questions that are two or three
> jumps away. Why not do the simplest bit first and kick
> the driver with an on/off switch into this mode. But
> I don't understand this cpumap use case so maybe explain
> that first.
>
> And sorry didn't even look at your 50+ patches. Figure lets
> get agreement on the goal first.

"TL;DR" will kill open source once ._. As I said, you could just
pick whatever you want to look at, I never said "you, go and
review cpumap GRO stuff" to anyone.

>
> .John

Thanks,
Olek

2022-07-04 16:17:04

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

From: Toke Høiland-Jørgensen <[email protected]>
Date: Wed, 29 Jun 2022 15:43:05 +0200

> John Fastabend <[email protected]> writes:
>
> > Alexander Lobakin wrote:
> >> This RFC is to give the whole picture. It will most likely be split
> >> onto several series, maybe even merge cycles. See the "table of
> >> contents" below.
> >
> > Even for RFC its a bit much. Probably improve the summary
> > message here as well I'm still not clear on the overall
> > architecture so not sure I want to dig into patches.
>
> +1 on this, and piggybacking on your comment to chime in on the general
> architecture.
>
> >> Now, a NIC driver, or even a SmartNIC itself, can put those params
> >> there in a well-defined format. The format is fixed, but can be of
> >> several different types represented by structures, which definitions
> >> are available to the kernel, BPF programs and the userland.
> >
> > I don't think in general the format needs to be fixed.
>
> No, that's the whole point of BTF: it's not supposed to be UAPI, we'll
> use CO-RE to enable dynamic formats...
>
> [...]
>
> >> It is fixed due to it being almost a UAPI, and the exact format can
> >> be determined by reading the last 10 bytes of metadata. They contain
> >> a 2-byte magic ID to not confuse it with a non-compatible meta and
> >> a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> >> structure is defined and the ID of that definition inside that BTF.
> >> Users can obtain BTF IDs by structure types using helpers available
> >> in the kernel, BPF (written by the CO-RE/verifier) and the userland
> >> (libbpf -> kernel call) and then rely on those ID when reading data
> >> to make sure whether they support it and what to do with it.
> >> Why separate magic and ID? The idea is to make different formats
> >> always contain the basic/"generic" structure embedded at the end.
> >> This way we can still benefit in purely generic consumers (like
> >> cpumap) while providing some "extra" data to those who support it.
> >
> > I don't follow this. If you have a struct in your driver name it
> > something obvious, ice_xdp_metadata. If I understand things
> > correctly just dump the BTF for the driver, extract the
> > struct and done you can use CO-RE reads. For the 'fixed' case
> > this looks easy. And I don't think you even need a patch for this.
>
> ...however as we've discussed previously, we do need a bit of
> infrastructure around this. In particular, we need to embed the embed
> the BTF ID into the metadata itself so BPF can do runtime disambiguation
> between different formats (and add the right CO-RE primitives to make
> this easy). This is for two reasons:
>
> - The metadata might be different per-packet (e.g., PTP packets with
> timestamps interleaved with bulk data without them)
>
> - With redirects we may end up processing packets from different devices
> in a single XDP program (in devmap or cpumap, or on a veth) so we need
> to be able to disambiguate at runtime.
>
> So I think the part of the design that puts the BTF ID into the end of
> the metadata struct is sound; however, the actual format doesn't have to
> be fixed, we can use CO-RE to pick out the bits that a given BPF program
> needs; we just need a convention for how drivers report which format(s)
> they support. Which we should also agree on (and add core infrastructure
> around) so each driver doesn't go around inventing their own
> conventions.
>
> >> The enablement of this feature is controlled on attaching/replacing
> >> XDP program on an interface with two new parameters: that combined
> >> BTF+type ID and metadata threshold.
> >> The threshold specifies the minimum frame size which a driver (or
> >> NIC) should start composing metadata from. It is introduced instead
> >> of just false/true flag due to that often it's not worth it to spend
> >> cycles to fetch all that data for such small frames: let's say, it
> >> can be even faster to just calculate checksums for them on CPU
> >> rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> >> 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> >> mitigate that.
> >
> > I would put this in the bonus category. Can you do the simple thing
> > above without these extra bits and then add them later. Just
> > pick some overly conservative threshold to start with.
>
> Yeah, I'd agree this kind of configuration is something that can be
> added later, and also it's sort of orthogonal to the consumption of the
> metadata itself.
>
> Also, tying this configuration into the loading of an XDP program is a
> terrible interface: these are hardware configuration options, let's just
> put them into ethtool or 'ip link' like any other piece of device
> configuration.

I don't believe it fits there, especially Ethtool. Ethtool is for
hardware configuration, while XDP/AF_XDP is 95% software stuff (apart
from the offload bits, which are purely NFP's for now).
I see it this way:

1) you pick a program you want to attach;
2) usually programs are written for specific needs and use cases;
3) so most likely that program will be tied to the metadata/driver/etc.
   in some way;
4) so you want to enable Hints of a particular format primarily for
   that program and use case, same with the threshold and everything
   else (see the sketch below).
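
Roughly how I picture it from the userspace side (just a sketch: the
.btf_id / .meta_thresh opts are what this series adds in patches
11-18, and the ID-lookup helper name is invented):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/if_link.h>

static int enable_hints(int ifindex, int prog_fd)
{
	/* illustrative name for the "struct type -> combined BTF ID"
	 * lookup this series adds
	 */
	__u64 btf_id = libbpf_xdp_get_metadata_btf_id("xdp_meta_generic");

	LIBBPF_OPTS(bpf_xdp_attach_opts, opts,
		    .btf_id = btf_id,		/* new in this series */
		    .meta_thresh = 128);	/* new in this series */

	return bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, &opts);
}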

Please explain how you see it; I may well be wrong, of course.

>
> >> The RFC can be divided into 8 parts:
> >
> > I'm missing something why not do the simplest bit of work and
> > get this running in ice with a few smallish driver updates
> > so we can all see it. No need for so many patches.
>
> Agreed. This incremental approach is basically what Jesper's
> simultaneous series makes a start on, AFAICT? Would be nice if y'all
> could converge the efforts :)

I don't know why at some point Jesper decided to go his own way, as
he was certainly using our tree as a base for some time; I don't know
what happened then. Regarding these two particular submissions, I
didn't see Jesper's RFC when sending mine, only afterwards when I
went to read some stuff.

>
> [...]
>
> > I really think your asking questions that are two or three
> > jumps away. Why not do the simplest bit first and kick
> > the driver with an on/off switch into this mode. But
> > I don't understand this cpumap use case so maybe explain
> > that first.
> >
> > And sorry didn't even look at your 50+ patches. Figure lets
> > get agreement on the goal first.
>
> +1 on both of these :)

I just thought most of the parts had already been discussed
previously, and the reason I marked it "RFC" was that there are lots
of changes and not everyone may agree with them... Like: "here's an
overview of what was discussed and decided previously, let's review
it to see if there is any questionable/debatable stuff and agree on
those 3 questions, then I'll split it according to my taste or to how
the maintainers see it and apply it slow'n'steady".

>
> -Toke

Thanks,
Olek

2022-07-04 17:15:43

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

Alexander Lobakin <[email protected]> writes:

> From: Toke Høiland-Jørgensen <[email protected]>
> Date: Wed, 29 Jun 2022 15:43:05 +0200
>
>> John Fastabend <[email protected]> writes:
>>
>> > Alexander Lobakin wrote:
>> >> This RFC is to give the whole picture. It will most likely be split
>> >> onto several series, maybe even merge cycles. See the "table of
>> >> contents" below.
>> >
>> > Even for RFC its a bit much. Probably improve the summary
>> > message here as well I'm still not clear on the overall
>> > architecture so not sure I want to dig into patches.
>>
>> +1 on this, and piggybacking on your comment to chime in on the general
>> architecture.
>>
>> >> Now, a NIC driver, or even a SmartNIC itself, can put those params
>> >> there in a well-defined format. The format is fixed, but can be of
>> >> several different types represented by structures, which definitions
>> >> are available to the kernel, BPF programs and the userland.
>> >
>> > I don't think in general the format needs to be fixed.
>>
>> No, that's the whole point of BTF: it's not supposed to be UAPI, we'll
>> use CO-RE to enable dynamic formats...
>>
>> [...]
>>
>> >> It is fixed due to it being almost a UAPI, and the exact format can
>> >> be determined by reading the last 10 bytes of metadata. They contain
>> >> a 2-byte magic ID to not confuse it with a non-compatible meta and
>> >> a 8-byte combined BTF ID + type ID: the ID of the BTF where this
>> >> structure is defined and the ID of that definition inside that BTF.
>> >> Users can obtain BTF IDs by structure types using helpers available
>> >> in the kernel, BPF (written by the CO-RE/verifier) and the userland
>> >> (libbpf -> kernel call) and then rely on those ID when reading data
>> >> to make sure whether they support it and what to do with it.
>> >> Why separate magic and ID? The idea is to make different formats
>> >> always contain the basic/"generic" structure embedded at the end.
>> >> This way we can still benefit in purely generic consumers (like
>> >> cpumap) while providing some "extra" data to those who support it.
>> >
>> > I don't follow this. If you have a struct in your driver name it
>> > something obvious, ice_xdp_metadata. If I understand things
>> > correctly just dump the BTF for the driver, extract the
>> > struct and done you can use CO-RE reads. For the 'fixed' case
>> > this looks easy. And I don't think you even need a patch for this.
>>
>> ...however as we've discussed previously, we do need a bit of
>> infrastructure around this. In particular, we need to embed the embed
>> the BTF ID into the metadata itself so BPF can do runtime disambiguation
>> between different formats (and add the right CO-RE primitives to make
>> this easy). This is for two reasons:
>>
>> - The metadata might be different per-packet (e.g., PTP packets with
>> timestamps interleaved with bulk data without them)
>>
>> - With redirects we may end up processing packets from different devices
>> in a single XDP program (in devmap or cpumap, or on a veth) so we need
>> to be able to disambiguate at runtime.
>>
>> So I think the part of the design that puts the BTF ID into the end of
>> the metadata struct is sound; however, the actual format doesn't have to
>> be fixed, we can use CO-RE to pick out the bits that a given BPF program
>> needs; we just need a convention for how drivers report which format(s)
>> they support. Which we should also agree on (and add core infrastructure
>> around) so each driver doesn't go around inventing their own
>> conventions.
>>
>> >> The enablement of this feature is controlled on attaching/replacing
>> >> XDP program on an interface with two new parameters: that combined
>> >> BTF+type ID and metadata threshold.
>> >> The threshold specifies the minimum frame size which a driver (or
>> >> NIC) should start composing metadata from. It is introduced instead
>> >> of just false/true flag due to that often it's not worth it to spend
>> >> cycles to fetch all that data for such small frames: let's say, it
>> >> can be even faster to just calculate checksums for them on CPU
>> >> rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
>> >> 15 Mpps on 64 byte frames with enabled metadata, threshold can help
>> >> mitigate that.
>> >
>> > I would put this in the bonus category. Can you do the simple thing
>> > above without these extra bits and then add them later. Just
>> > pick some overly conservative threshold to start with.
>>
>> Yeah, I'd agree this kind of configuration is something that can be
>> added later, and also it's sort of orthogonal to the consumption of the
>> metadata itself.
>>
>> Also, tying this configuration into the loading of an XDP program is a
>> terrible interface: these are hardware configuration options, let's just
>> put them into ethtool or 'ip link' like any other piece of device
>> configuration.
>
> I don't believe it fits there, especially Ethtool. Ethtool is for
> hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
> offload bits which is purely NFP's for now).

But XDP-hints is about consuming hardware features. When you're
configuring which metadata items you want, you're saying "please provide
me with these (hardware) features". So ethtool is an excellent place to
do that :)

> I follow that way:
>
> 1) you pick a program you want to attach;
> 2) usually they are written for special needs and usecases;
> 3) so most likely that program will be tied with metadata/driver/etc
> in some way;
> 4) so you want to enable Hints of a particular format primarily for
> this program and usecase, same with threshold and everything
> else.
>
> Pls explain how you see it, I might be wrong for sure.

As above: XDP hints is about giving XDP programs (and AF_XDP consumers)
access to metadata that is not currently available. Tying the lifetime
of that hardware configuration (i.e., which information to provide) to
the lifetime of an XDP program is not a good interface: for one thing,
how will it handle multiple programs? What about when XDP is not used at
all but you still want to configure the same features?

In addition, in every other case where we do dynamic data access (with
CO-RE) the BPF program is a consumer that modifies itself to access the
data provided by the kernel. I get that this is harder to achieve for
AF_XDP, but then let's solve that instead of making a totally
inconsistent interface for XDP.

I'm as excited as you about the prospect of having totally programmable
hardware where you can just specify any arbitrary metadata format and
it'll provide that for you. But that is an orthogonal feature: let's
start with creating a dynamic interface for consuming the (static)
hardware features we already have, and then later we can have a separate
interface for configuring more dynamic hardware features. XDP-hints is
about adding this consumption feature in a way that's sufficiently
dynamic that we can do the other (programmable hardware) thing on top
later...

-Toke

2022-07-04 17:15:43

by Jesper Dangaard Brouer

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata


On 04/07/2022 17.44, Alexander Lobakin wrote:
>> Agreed. This incremental approach is basically what Jesper's
>> simultaneous series makes a start on, AFAICT? Would be nice if y'all
> > could converge the efforts :)
> I don't know why at some point Jesper decided to go on his own as he
> for sure was using our tree as a base for some time, dunno what
> happened then. Regarding these two particular submissions, I didn't
> see Jesper's RFC when sending mine, only after when I went to read
> some stuff.
>

Well, I did write to you (off-list) that the git tree didn't compile,
so I had a hard time getting it into a working state. We had a
ping-pong of fixes, but it still didn't build, and you basically told
me to switch to LLVM to compile your kernel tree; I was not interested
in doing that.

I have looked at the code in your GitHub tree, and decided that it
was an over-engineered approach IMHO. Also, simply being 52 commits
deep without having posted anything incrementally upstream was a
non-starter for me, as this isn't the way to work upstream.

To get the ball rolling, I have implemented the base XDP-hints support
here[1] with only 9 patches (including support for two drivers).

IMHO we need to start out small and not intermix these huge refactoring
patches. E.g. I'm not convinced renaming net/{core/xdp.c => bpf/core.c}
is an improvement.

-Jesper

[1]
https://lore.kernel.org/bpf/165643378969.449467.13237011812569188299.stgit@firesoul/

2022-07-05 15:02:04

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

From: Jesper Dangaard Brouer <[email protected]>
Date: Mon, 4 Jul 2022 19:13:53 +0200

> On 04/07/2022 17.44, Alexander Lobakin wrote:
> >> Agreed. This incremental approach is basically what Jesper's
> >> simultaneous series makes a start on, AFAICT? Would be nice if y'all
> >> could converge the efforts :)
> > I don't know why at some point Jesper decided to go on his own as he
> > for sure was using our tree as a base for some time, dunno what
> > happened then. Regarding these two particular submissions, I didn't
> > see Jesper's RFC when sending mine, only after when I went to read
> > some stuff.
> >
>
> Well, I have written to you (offlist) that the git tree didn't compile,
> so I had a hard time getting it into a working state. We had a
> ping-pong of stuff to fix, but it wasn't and you basically told me to
> switch to using LLVM to compile your kernel tree, I was not interested
> in doing that.

Yes and no: I only told you that I had missed those build issues
because I use LLVM as my primary compiler, but I didn't suggest you
switch to it. Then I fixed all of the issues within a couple of days
and wrote you an email on the 3rd of June saying that everything
works now =\

>
> I have looked at the code in your GitHub tree, and decided that it was
> an over-engineered approach IMHO. Also simply being 52 commits deep
> without having posted this incrementally upstream were also a
> non-starter for me, as this isn't the way-to-work upstream.

Ingo recently announced that he has a series of 2300+ patches to try
to fix include hell, and now he's preparing to submit them in
batches/series. Look at this RFC as an announcement: "Hey folks, I
have a bunch of stuff and will be submitting it soon, but I'm posting
the whole changeset here so you can take a look or give it a try
before the actual posting starts".
All of this is mentioned in the cover letter as well. What is the
problem? Ok, next time I can skip the announcement and just start
posting series, if this one caused such misunderstandings.

Anyway, I will post a demo version of the series in a couple of
weeks, containing only the parts required to get Hints working on
ice, if folks prefer to look at the pencil draft instead of the
final painting (I never thought I'd have to do that in the kernel
dev community :D).

>
> To get the ball rolling, I have implemented the base XDP-hints support
> here[1] with only 9 patches (including support for two drivers).
>
> IMHO we need to start out small and not intermix these huge refactoring
> patches. E.g. I'm not convinced renaming net/{core/xdp.c => bpf/core.c}
> is an improvement.

Those cleanup patches can easily be put into a standalone series as
a prerequisite; I even mentioned that in the cover letter.
File names are a matter for discussion; my intention there was mainly
to move the XDP stuff out of the overburdened net/core/dev.c.

>
> -Jesper
>
> [1]
> https://lore.kernel.org/bpf/165643378969.449467.13237011812569188299.stgit@firesoul/

Thanks,
Olek

2022-07-05 15:48:55

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

From: Toke Høiland-Jørgensen <[email protected]>
Date: Mon, 04 Jul 2022 19:14:04 +0200

> Alexander Lobakin <[email protected]> writes:
>
> > From: Toke Høiland-Jørgensen <[email protected]>
> > Date: Wed, 29 Jun 2022 15:43:05 +0200
> >
> >> John Fastabend <[email protected]> writes:
> >>
> >> > Alexander Lobakin wrote:
> >> >> This RFC is to give the whole picture. It will most likely be split
> >> >> onto several series, maybe even merge cycles. See the "table of
> >> >> contents" below.
> >> >
> >> > Even for RFC its a bit much. Probably improve the summary
> >> > message here as well I'm still not clear on the overall
> >> > architecture so not sure I want to dig into patches.
> >>
> >> +1 on this, and piggybacking on your comment to chime in on the general
> >> architecture.
> >>
> >> >> Now, a NIC driver, or even a SmartNIC itself, can put those params
> >> >> there in a well-defined format. The format is fixed, but can be of
> >> >> several different types represented by structures, which definitions
> >> >> are available to the kernel, BPF programs and the userland.
> >> >
> >> > I don't think in general the format needs to be fixed.
> >>
> >> No, that's the whole point of BTF: it's not supposed to be UAPI, we'll
> >> use CO-RE to enable dynamic formats...
> >>
> >> [...]
> >>
> >> >> It is fixed due to it being almost a UAPI, and the exact format can
> >> >> be determined by reading the last 10 bytes of metadata. They contain
> >> >> a 2-byte magic ID to not confuse it with a non-compatible meta and
> >> >> a 8-byte combined BTF ID + type ID: the ID of the BTF where this
> >> >> structure is defined and the ID of that definition inside that BTF.
> >> >> Users can obtain BTF IDs by structure types using helpers available
> >> >> in the kernel, BPF (written by the CO-RE/verifier) and the userland
> >> >> (libbpf -> kernel call) and then rely on those ID when reading data
> >> >> to make sure whether they support it and what to do with it.
> >> >> Why separate magic and ID? The idea is to make different formats
> >> >> always contain the basic/"generic" structure embedded at the end.
> >> >> This way we can still benefit in purely generic consumers (like
> >> >> cpumap) while providing some "extra" data to those who support it.
> >> >
> >> > I don't follow this. If you have a struct in your driver name it
> >> > something obvious, ice_xdp_metadata. If I understand things
> >> > correctly just dump the BTF for the driver, extract the
> >> > struct and done you can use CO-RE reads. For the 'fixed' case
> >> > this looks easy. And I don't think you even need a patch for this.
> >>
> >> ...however as we've discussed previously, we do need a bit of
> >> infrastructure around this. In particular, we need to embed the embed
> >> the BTF ID into the metadata itself so BPF can do runtime disambiguation
> >> between different formats (and add the right CO-RE primitives to make
> >> this easy). This is for two reasons:
> >>
> >> - The metadata might be different per-packet (e.g., PTP packets with
> >> timestamps interleaved with bulk data without them)
> >>
> >> - With redirects we may end up processing packets from different devices
> >> in a single XDP program (in devmap or cpumap, or on a veth) so we need
> >> to be able to disambiguate at runtime.
> >>
> >> So I think the part of the design that puts the BTF ID into the end of
> >> the metadata struct is sound; however, the actual format doesn't have to
> >> be fixed, we can use CO-RE to pick out the bits that a given BPF program
> >> needs; we just need a convention for how drivers report which format(s)
> >> they support. Which we should also agree on (and add core infrastructure
> >> around) so each driver doesn't go around inventing their own
> >> conventions.
> >>
> >> >> The enablement of this feature is controlled on attaching/replacing
> >> >> XDP program on an interface with two new parameters: that combined
> >> >> BTF+type ID and metadata threshold.
> >> >> The threshold specifies the minimum frame size which a driver (or
> >> >> NIC) should start composing metadata from. It is introduced instead
> >> >> of just false/true flag due to that often it's not worth it to spend
> >> >> cycles to fetch all that data for such small frames: let's say, it
> >> >> can be even faster to just calculate checksums for them on CPU
> >> >> rather than touch non-coherent DMA zone. Simple XDP_DROP case loses
> >> >> 15 Mpps on 64 byte frames with enabled metadata, threshold can help
> >> >> mitigate that.
> >> >
> >> > I would put this in the bonus category. Can you do the simple thing
> >> > above without these extra bits and then add them later. Just
> >> > pick some overly conservative threshold to start with.
> >>
> >> Yeah, I'd agree this kind of configuration is something that can be
> >> added later, and also it's sort of orthogonal to the consumption of the
> >> metadata itself.
> >>
> >> Also, tying this configuration into the loading of an XDP program is a
> >> terrible interface: these are hardware configuration options, let's just
> >> put them into ethtool or 'ip link' like any other piece of device
> >> configuration.
> >
> > I don't believe it fits there, especially Ethtool. Ethtool is for
> > hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
> > offload bits which is purely NFP's for now).
>
> But XDP-hints is about consuming hardware features. When you're
> configuring which metadata items you want, you're saying "please provide
> me with these (hardware) features". So ethtool is an excellent place to
> do that :)

With Ethtool you configure the hardware, e.g. it won't strip VLAN
tags if you disable rx-cvlan-stripping. When configuring metadata,
you only tell the driver what you want to see there, don't you?

>
> > I follow that way:
> >
> > 1) you pick a program you want to attach;
> > 2) usually they are written for special needs and usecases;
> > 3) so most likely that program will be tied with metadata/driver/etc
> > in some way;
> > 4) so you want to enable Hints of a particular format primarily for
> > this program and usecase, same with threshold and everything
> > else.
> >
> > Pls explain how you see it, I might be wrong for sure.
>
> As above: XDP hints is about giving XDP programs (and AF_XDP consumers)
> access to metadata that is not currently available. Tying the lifetime
> of that hardware configuration (i.e., which information to provide) to
> the lifetime of an XDP program is not a good interface: for one thing,
> how will it handle multiple programs? What about when XDP is not used at

Multiple progs is something I didn't cover, but will do later (as
you all keep telling me, "let's start with something simple" :)).
Aaand multiple XDP progs (I'm not talking about attaching progs in
different modes) is not a kernel feature, rather a libxdp feature,
so I believe it should be handled there later...

> all but you still want to configure the same features?

What's the point of configuring metadata when there are no progs
attached? To configure it once and not on every prog attach? I'm
not saying I don't like it, just want to clarify.
Maybe I need opinions from some more people, just to get an overview
of how most folks see it and would like to configure it. 'Cause I
heard from at least one consumer that the libbpf API is a perfect
place for Hints for him :)

>
> In addition, in every other case where we do dynamic data access (with
> CO-RE) the BPF program is a consumer that modifies itself to access the
> data provided by the kernel. I get that this is harder to achieve for
> AF_XDP, but then let's solve that instead of making a totally
> inconsistent interface for XDP.

I also see CO-RE as the more fitting and convenient way to use them,
but I didn't manage to solve two things:

1) AF_XDP programs: what to do with them? Prepare patches for LLVM
   so it can do CO-RE at AF_XDP program load? Or just hardcode them
   for particular use cases and NICs? What about "general-purpose"
   programs?
   And if we hardcode, what's the point of doing Generic Hints at
   all? Then all that's needed is to make the driver build some meta
   in front of the frames via an on-off switch, and that's it? Why
   have a BTF ID in the meta at all if consumers will access the meta
   in a hardcoded way (via CO-RE or literally hardcoded, doesn't
   matter)?
2) In-kernel metadata consumers: should they also do CO-RE?
   Otherwise, with no generic metadata structure, they won't be able
   to benefit from Hints. But I guess we still need to provide the
   kernel with meta? Or not?

>
> I'm as excited as you about the prospect of having totally programmable

But I mostly care about the current generation with no programmable
Hints...

> hardware where you can just specify any arbitrary metadata format and
> it'll provide that for you. But that is an orthogonal feature: let's
> start with creating a dynamic interface for consuming the (static)
> hardware features we already have, and then later we can have a separate
> interface for configuring more dynamic hardware features. XDP-hints is
> about adding this consumption feature in a way that's sufficiently
> dynamic that we can do the other (programmable hardware) thing on top
> later...
>
> -Toke

Thanks,
Olek

2022-07-05 18:55:45

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

Alexander Lobakin <[email protected]> writes:

[... snipping a bit of context here ...]

>> >> Yeah, I'd agree this kind of configuration is something that can be
>> >> added later, and also it's sort of orthogonal to the consumption of the
>> >> metadata itself.
>> >>
>> >> Also, tying this configuration into the loading of an XDP program is a
>> >> terrible interface: these are hardware configuration options, let's just
>> >> put them into ethtool or 'ip link' like any other piece of device
>> >> configuration.
>> >
>> > I don't believe it fits there, especially Ethtool. Ethtool is for
>> > hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
>> > offload bits which is purely NFP's for now).
>>
>> But XDP-hints is about consuming hardware features. When you're
>> configuring which metadata items you want, you're saying "please provide
>> me with these (hardware) features". So ethtool is an excellent place to
>> do that :)
>
> With Ethtool you configure the hardware, e.g. it won't strip VLAN
> tags if you disable rx-cvlan-stripping. With configuring metadata
> you only tell what you want to see there, don't you?

Ah, I think we may be getting closer to identifying the disconnect
between our way of thinking about this!

In my mind, there's no separate "configuration of the metadata" step.
You simply tell the hardware what features you want (say, "enable
timestamps and VLAN offload"), and the driver will then provide the
information related to these features in the metadata area
unconditionally. All XDP hints is about, then, is a way for the driver
to inform the rest of the system how that information is actually laid
out in the metadata area.

Having a separate configuration knob to tell the driver "please lay out
these particular bits of metadata this way" seems like a totally
unnecessary (and quite complicated) feature to have when we can just let
the driver decide and use CO-RE to consume it?
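
Something like this on the consumer side, say (a sketch; the struct
name and fields are invented stand-ins for whatever the driver's BTF
actually describes):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* stand-in for a driver-defined layout; with preserve_access_index,
 * CO-RE relocates the field offsets against the driver BTF at load
 * time
 */
struct ice_xdp_meta {
	__u32 rss_hash;
	__u16 vlan_tci;
} __attribute__((preserve_access_index));

SEC("xdp")
int consume_hints(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	struct ice_xdp_meta *meta = (void *)(long)ctx->data_meta;

	/* the verifier wants a bounds check on the meta area; a real
	 * prog would use the CO-RE-relocated size here
	 */
	if ((void *)(meta + 1) > data)
		return XDP_PASS;

	bpf_printk("rx hash %u", meta->rss_hash);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";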

>> > I follow that way:
>> >
>> > 1) you pick a program you want to attach;
>> > 2) usually they are written for special needs and usecases;
>> > 3) so most likely that program will be tied with metadata/driver/etc
>> > in some way;
>> > 4) so you want to enable Hints of a particular format primarily for
>> > this program and usecase, same with threshold and everything
>> > else.
>> >
>> > Pls explain how you see it, I might be wrong for sure.
>>
>> As above: XDP hints is about giving XDP programs (and AF_XDP consumers)
>> access to metadata that is not currently available. Tying the lifetime
>> of that hardware configuration (i.e., which information to provide) to
>> the lifetime of an XDP program is not a good interface: for one thing,
>> how will it handle multiple programs? What about when XDP is not used at
>
> Multiple progs is stuff I didn't cover, but will do later (as you
> all say to me, "let's start with something simple" :)). Aaaand
> multiple XDP progs (I'm not talking about attaching progs in
> differeng modes) is not a kernel feature, rather a libpf feature,
> so I believe it should be handled there later...

Right, but even if we don't *implement* it straight away we still need
to take it into consideration in the design. And expecting libxdp to
arbitrate between different XDP programs' metadata formats sounds like a
royal PITA :)

>> all but you still want to configure the same features?
>
> What's the point of configuring metadata when there are no progs
> attached? To configure it once and not on every prog attach? I'm
> not saying I don't like it, just want to clarify.

See above: you turn on the features because you want the stack to
consume them.

> Maybe I need opinions from some more people, just to have an
> overview of how most of folks see it and would like to configure
> it. 'Cause I heard from at least one of the consumers that
> libpf API is a perfect place for Hints to him :)

Well, as a program author who wants to consume hints, you'd use
lib{bpf,xdp} APIs to do so (probably in the form of suitable CO-RE
macros)...

>> In addition, in every other case where we do dynamic data access (with
>> CO-RE) the BPF program is a consumer that modifies itself to access the
>> data provided by the kernel. I get that this is harder to achieve for
>> AF_XDP, but then let's solve that instead of making a totally
>> inconsistent interface for XDP.
>
> I also see CO-RE more fitting and convenient way to use them, but
> didn't manage to solve two things:
>
> 1) AF_XDP programs, so what to do with them? Prepare patches for
> LLVM to make it able to do CO-RE on AF_XDP program load? Or
> just hardcode them for particular usecases and NICs? What about
> "general-purpose" programs?

You provide a library to read the fields. Jesper actually already
implemented this, did you look at his code?

https://github.com/xdp-project/bpf-examples/tree/master/AF_XDP-interaction

It basically builds a lookup table at load-time using BTF information
from the kernel, keyed on BTF ID and field name, resolving them into
offsets. It's not quite the zero-overhead of CO-RE, but it's fairly
close and can be improved upon (CO-RE for userspace being one way of
doing that).
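
Conceptually it's something like this (very rough sketch with
invented names; see the link above for the real thing):

#include <stdint.h>
#include <string.h>

struct hints_fmt {
	uint64_t id;		/* combined BTF ID + type ID */
	uint32_t hash_off;	/* field offset resolved from BTF */
};

/* filled once at startup by walking the kernel/module BTF */
static struct hints_fmt fmts[8];

static uint32_t rx_hash(const uint8_t *pkt)
{
	uint64_t id;
	size_t i;

	/* the combined ID sits in the last 8 bytes of the meta area,
	 * right in front of the packet data
	 */
	memcpy(&id, pkt - sizeof(id), sizeof(id));

	for (i = 0; i < 8; i++)
		if (fmts[i].id == id)
			return *(const uint32_t *)(pkt - fmts[i].hash_off);

	return 0;
}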

> And if hardcode, what's the point then to do Generic Hints at
> all? Then all it needs is making driver building some meta in
> front of frames via on-off button and that's it? Why BTF ID in
> the meta then if consumers will access meta hardcoded (via CO-RE
> or literally hardcoded, doesn't matter)?

You're quite right, we could probably implement all the access to
existing (fixed) metadata without using any BTF at all - just define
a common struct and some flags to designate which fields are set
(sketched below). In my mind, there are a couple of reasons for going
the BTF route instead:

- We can leverage CO-RE to get close to optimal efficiency in field
access.

and, more importantly:

- It's infinitely extensible. With the infrastructure in place to make
it really easy to consume metadata described by BTF, we lower the bar
for future innovation in hardware offloads. Both for just adding new
fixed-function stuff to hardware, but especially for fully
programmable hardware.
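
(For reference, the non-BTF alternative above would be something like
this sketch; the names and fields are made up:)

#include <linux/types.h>

#define XDP_META_HAS_RX_HASH	(1U << 0)
#define XDP_META_HAS_RX_CSUM	(1U << 1)
#define XDP_META_HAS_TSTAMP	(1U << 2)

struct xdp_meta_fixed {
	__u64 tstamp;
	__u32 rss_hash;
	__u16 csum_status;
	__u32 fields_set;	/* XDP_META_HAS_* */
};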

> 2) In-kernel metadata consumers? Also do CO-RE? Otherwise, with no
> generic metadata structure they won't be able to benefit from
> Hints. But I guess we still need to provide kernel with meta?
> Or no?

In the short term, I think the "generic structure" approach is fine for
leveraging this in the stack. Both your and Jesper's series include
this, and I think that's totally fine. Longer term, if it turns out to
be useful to have something more dynamic for the stack consumption as
well, we could extend it to be CO-RE based as well (most likely by
having the stack load a "translator" BPF program or something along
those lines).

>> I'm as excited as you about the prospect of having totally programmable
>
> But I mostly care about current generation with no programmable
> Hints...

Well, see above; we should be able to support both :)

-Toke

2022-07-05 19:53:11

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

On 7/5/22 4:38 PM, Alexander Lobakin wrote:
> From: Jesper Dangaard Brouer <[email protected]>
> Date: Mon, 4 Jul 2022 19:13:53 +0200
[...]
>> I have looked at the code in your GitHub tree, and decided that it was
>> an over-engineered approach IMHO. Also simply being 52 commits deep
>> without having posted this incrementally upstream were also a
>> non-starter for me, as this isn't the way-to-work upstream.
>
> So Ingo announced recently that he has a series of 2300+ patches
> to try to fix include hell. Now he's preparing to submit them by
> batches/series. Look at this RFC as at an announce. "Hey folks,
> I have a bunch of stuff and will be submitting it soon, but I'm
> posting the whole changeset here, so you could take a look or
> give it a try before it's actually started being posted".
> All this is mentioned in the cover letter as well. What is the
> problem? Ok, next time I can not do any announces and just start
> posting series if it made such misunderstandings.

I would suggest you please calm down first. No offense, but the above
example with the 2300+ patches is not a great one: there is no way
any mortal would be able to review them, not even thinking about the
cycles spent on rebasing, merge conflict resolution, or the bugs they
may contain. Anyway, that aside..

Your series essentially starts out with ...

The series adds ability to pass different frame
details/parameters/parameters used by most of NICs and the kernel
stack (in skbs), not essential, but highly wanted, such as:

* checksum value, status (Rx) or command (Tx);
* hash value and type/level (Rx);
* queue number (Rx);
* timestamps;
* and so on.

... so my initial question would be whether, in this context, any
research / analysis has been done on how this can speed up /real
world/ production applications such as the Katran L4LB [0], for
example? What speedup did you observe by utilizing the fields from
the metadata?

Thanks,
Daniel

[0] https://github.com/facebookincubator/katran

2022-07-06 14:37:27

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

From: Toke Høiland-Jørgensen <[email protected]>
Date: Tue, 05 Jul 2022 20:51:14 +0200

> Alexander Lobakin <[email protected]> writes:
>
> [... snipping a bit of context here ...]
>
> >> >> Yeah, I'd agree this kind of configuration is something that can be
> >> >> added later, and also it's sort of orthogonal to the consumption of the
> >> >> metadata itself.
> >> >>
> >> >> Also, tying this configuration into the loading of an XDP program is a
> >> >> terrible interface: these are hardware configuration options, let's just
> >> >> put them into ethtool or 'ip link' like any other piece of device
> >> >> configuration.
> >> >
> >> > I don't believe it fits there, especially Ethtool. Ethtool is for
> >> > hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
> >> > offload bits which is purely NFP's for now).
> >>
> >> But XDP-hints is about consuming hardware features. When you're
> >> configuring which metadata items you want, you're saying "please provide
> >> me with these (hardware) features". So ethtool is an excellent place to
> >> do that :)
> >
> > With Ethtool you configure the hardware, e.g. it won't strip VLAN
> > tags if you disable rx-cvlan-stripping. With configuring metadata
> > you only tell what you want to see there, don't you?
>
> Ah, I think we may be getting closer to identifying the disconnect
> between our way of thinking about this!
>
> In my mind, there's no separate "configuration of the metadata" step.
> You simply tell the hardware what features you want (say, "enable
> timestamps and VLAN offload"), and the driver will then provide the
> information related to these features in the metadata area
> unconditionally. All XDP hints is about, then, is a way for the driver
> to inform the rest of the system how that information is actually laid
> out in the metadata area.
>
> Having a separate configuration knob to tell the driver "please lay out
> these particular bits of metadata this way" seems like a totally
> unnecessary (and quite complicated) feature to have when we can just let
> the driver decide and use CO-RE to consume it?

Magnus (he's currently on vacation) told me it would be useful for
AF_XDP to be able to enable/disable particular metadata fields, at
least from a perf perspective. Say, fetching just one "checksum ok"
bit in the driver is faster than walking through all the descriptor
words and driver logic (there are several hundred LoC in ice which
just parse descriptor data and build an skb or metadata from it).
But if we simply enabled/disabled the corresponding features through
Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
what if I want to have only the RSS hash in the metadata (and don't
want to spend cycles on parsing the rest), but at the same time
still want the skb path to have the checksum status, so it doesn't
die doing checksum calculation on the CPU? Something like the
sketch below.
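
I.e. something like a per-consumer field mask (purely hypothetical,
none of this is in the series yet; all names are made up):

#include <stdint.h>

#define META_RX_HASH	(1U << 0)
#define META_RX_CSUM	(1U << 1)

struct meta_request {
	uint32_t xdp_fields;	/* composed into XDP/AF_XDP metadata */
	uint32_t skb_fields;	/* consumed on the skb path */
};

/* XDP meta gets only the hash, skbs keep the full checksum info */
static const struct meta_request req = {
	.xdp_fields = META_RX_HASH,
	.skb_fields = META_RX_HASH | META_RX_CSUM,
};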

>
> >> > I follow that way:
> >> >
> >> > 1) you pick a program you want to attach;
> >> > 2) usually they are written for special needs and usecases;
> >> > 3) so most likely that program will be tied with metadata/driver/etc
> >> > in some way;
> >> > 4) so you want to enable Hints of a particular format primarily for
> >> > this program and usecase, same with threshold and everything
> >> > else.
> >> >
> >> > Pls explain how you see it, I might be wrong for sure.
> >>
> >> As above: XDP hints is about giving XDP programs (and AF_XDP consumers)
> >> access to metadata that is not currently available. Tying the lifetime
> >> of that hardware configuration (i.e., which information to provide) to
> >> the lifetime of an XDP program is not a good interface: for one thing,
> >> how will it handle multiple programs? What about when XDP is not used at
> >
> > Multiple progs is stuff I didn't cover, but will do later (as you
> > all say to me, "let's start with something simple" :)). Aaaand
> > multiple XDP progs (I'm not talking about attaching progs in
> > differeng modes) is not a kernel feature, rather a libpf feature,
> > so I believe it should be handled there later...
>
> Right, but even if we don't *implement* it straight away we still need
> to take it into consideration in the design. And expecting libxdp to
> arbitrate between different XDP programs' metadata formats sounds like a
> royal PITA :)
>
> >> all but you still want to configure the same features?
> >
> > What's the point of configuring metadata when there are no progs
> > attached? To configure it once and not on every prog attach? I'm
> > not saying I don't like it, just want to clarify.
>
> See above: you turn on the features because you want the stack to
> consume them.
>
> > Maybe I need opinions from some more people, just to have an
> > overview of how most of folks see it and would like to configure
> > it. 'Cause I heard from at least one of the consumers that
> > libpf API is a perfect place for Hints to him :)
>
> Well, as a program author who wants to consume hints, you'd use
> lib{bpf,xdp} APIs to do so (probably in the form of suitable CO-RE
> macros)...
>
> >> In addition, in every other case where we do dynamic data access (with
> >> CO-RE) the BPF program is a consumer that modifies itself to access the
> >> data provided by the kernel. I get that this is harder to achieve for
> >> AF_XDP, but then let's solve that instead of making a totally
> >> inconsistent interface for XDP.
> >
> > I also see CO-RE more fitting and convenient way to use them, but
> > didn't manage to solve two things:
> >
> > 1) AF_XDP programs, so what to do with them? Prepare patches for
> > LLVM to make it able to do CO-RE on AF_XDP program load? Or
> > just hardcode them for particular usecases and NICs? What about
> > "general-purpose" programs?
>
> You provide a library to read the fields. Jesper actually already
> implemented this, did you look at his code?
>
> https://github.com/xdp-project/bpf-examples/tree/master/AF_XDP-interaction
>
> It basically builds a lookup table at load-time using BTF information
> from the kernel, keyed on BTF ID and field name, resolving them into
> offsets. It's not quite the zero-overhead of CO-RE, but it's fairly
> close and can be improved upon (CO-RE for userspace being one way of
> doing that).

Aaaah, sorry, I completely missed that. I thought of something
similar as well, but then figured "variable field offsets would
annihilate optimizations and performance", and our XSk team is very
concerned about performance hits when using Hints.

>
> > And if hardcode, what's the point then to do Generic Hints at
> > all? Then all it needs is making driver building some meta in
> > front of frames via on-off button and that's it? Why BTF ID in
> > the meta then if consumers will access meta hardcoded (via CO-RE
> > or literally hardcoded, doesn't matter)?
>
> You're quite right, we could probably implement all the access to
> existing (fixed) metadata without using any BTF at all - just define a
> common struct and some flags to designate which fields are set. In my
> mind, there are a couple of reasons for going the BTF route instead:
>
> - We can leverage CO-RE to get close to optimal efficiency in field
> access.
>
> and, more importantly:
>
> - It's infinitely extensible. With the infrastructure in place to make
> it really easy to consume metadata described by BTF, we lower the bar
> for future innovation in hardware offloads. Both for just adding new
> fixed-function stuff to hardware, but especially for fully
> programmable hardware.

Agreed :) That libxdp lookup translator cleared up a lot of things
in my mind.

>
> > 2) In-kernel metadata consumers? Also do CO-RE? Otherwise, with no
> > generic metadata structure they won't be able to benefit from
> > Hints. But I guess we still need to provide kernel with meta?
> > Or no?
>
> In the short term, I think the "generic structure" approach is fine for
> leveraging this in the stack. Both your and Jesper's series include
> this, and I think that's totally fine. Longer term, if it turns out to
> be useful to have something more dynamic for the stack consumption as
> well, we could extend it to be CO-RE based as well (most likely by
> having the stack load a "translator" BPF program or something along
> those lines).

Oh, that translator prog sounds nice BTW!

>
> >> I'm as excited as you about the prospect of having totally programmable
> >
> > But I mostly care about current generation with no programmable
> > Hints...
>
> Well, see above; we should be able to support both :)
>
> -Toke

Thanks,
Olek

2022-07-06 23:26:47

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

Alexander Lobakin <[email protected]> writes:

> From: Toke Høiland-Jørgensen <[email protected]>
> Date: Tue, 05 Jul 2022 20:51:14 +0200
>
>> Alexander Lobakin <[email protected]> writes:
>>
>> [... snipping a bit of context here ...]
>>
>> >> >> Yeah, I'd agree this kind of configuration is something that can be
>> >> >> added later, and also it's sort of orthogonal to the consumption of the
>> >> >> metadata itself.
>> >> >>
>> >> >> Also, tying this configuration into the loading of an XDP program is a
>> >> >> terrible interface: these are hardware configuration options, let's just
>> >> >> put them into ethtool or 'ip link' like any other piece of device
>> >> >> configuration.
>> >> >
>> >> > I don't believe it fits there, especially Ethtool. Ethtool is for
>> >> > hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
>> >> > offload bits which is purely NFP's for now).
>> >>
>> >> But XDP-hints is about consuming hardware features. When you're
>> >> configuring which metadata items you want, you're saying "please provide
>> >> me with these (hardware) features". So ethtool is an excellent place to
>> >> do that :)
>> >
>> > With Ethtool you configure the hardware, e.g. it won't strip VLAN
>> > tags if you disable rx-cvlan-stripping. With configuring metadata
>> > you only tell what you want to see there, don't you?
>>
>> Ah, I think we may be getting closer to identifying the disconnect
>> between our way of thinking about this!
>>
>> In my mind, there's no separate "configuration of the metadata" step.
>> You simply tell the hardware what features you want (say, "enable
>> timestamps and VLAN offload"), and the driver will then provide the
>> information related to these features in the metadata area
>> unconditionally. All XDP hints is about, then, is a way for the driver
>> to inform the rest of the system how that information is actually laid
>> out in the metadata area.
>>
>> Having a separate configuration knob to tell the driver "please lay out
>> these particular bits of metadata this way" seems like a totally
>> unnecessary (and quite complicated) feature to have when we can just let
>> the driver decide and use CO-RE to consume it?
>
> Magnus (he's currently on vacation) told me it would be useful for
> AF_XDP to enable/disable particular metadata, at least from perf
> perspective. Let's say, just fetching of one "checksum ok" bit in
> the driver is faster than walking through all the descriptor words
> and driver logics (i.e. there's several hundred locs in ice which
> just parse descriptor data and build an skb or metadata from it).
> But if we would just enable/disable corresponding features through
> Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
> what if I want to have only RSS hash in the metadata (and don't
> want to spend cycles on parsing the rest), but at the same time
> still want skb path to have checksum status to not die at CPU
> checksum calculation?

Hmm, so this feels a little like a driver-specific optimisation? I.e.,
my guess is that not all drivers have a measurable overhead for pulling
out the metadata. Also, once the XDP metadata bits are in place, we can
move in the direction of building SKBs from the same source, so I'm not
sure it's a good idea to assume that the XDP metadata is separate from
what the stack consumes...

In any case, if such an optimisation does turn out to be useful, we can
add it later (backed by rigorous benchmarks, of course), so I think we
can still start with the simple case and iterate from there?

>> >> > I follow that way:
>> >> >
>> >> > 1) you pick a program you want to attach;
>> >> > 2) usually they are written for special needs and usecases;
>> >> > 3) so most likely that program will be tied with metadata/driver/etc
>> >> > in some way;
>> >> > 4) so you want to enable Hints of a particular format primarily for
>> >> > this program and usecase, same with threshold and everything
>> >> > else.
>> >> >
>> >> > Pls explain how you see it, I might be wrong for sure.
>> >>
>> >> As above: XDP hints is about giving XDP programs (and AF_XDP consumers)
>> >> access to metadata that is not currently available. Tying the lifetime
>> >> of that hardware configuration (i.e., which information to provide) to
>> >> the lifetime of an XDP program is not a good interface: for one thing,
>> >> how will it handle multiple programs? What about when XDP is not used at
>> >
>> > Multiple progs is stuff I didn't cover, but will do later (as you
>> > all say to me, "let's start with something simple" :)). Aaaand
>> > multiple XDP progs (I'm not talking about attaching progs in
>> > differeng modes) is not a kernel feature, rather a libpf feature,
>> > so I believe it should be handled there later...
>>
>> Right, but even if we don't *implement* it straight away we still need
>> to take it into consideration in the design. And expecting libxdp to
>> arbitrate between different XDP programs' metadata formats sounds like a
>> royal PITA :)
>>
>> >> all but you still want to configure the same features?
>> >
>> > What's the point of configuring metadata when there are no progs
>> > attached? To configure it once and not on every prog attach? I'm
>> > not saying I don't like it, just want to clarify.
>>
>> See above: you turn on the features because you want the stack to
>> consume them.
>>
>> > Maybe I need opinions from some more people, just to have an
>> > overview of how most of folks see it and would like to configure
>> > it. 'Cause I heard from at least one of the consumers that
>> > libpf API is a perfect place for Hints to him :)
>>
>> Well, as a program author who wants to consume hints, you'd use
>> lib{bpf,xdp} APIs to do so (probably in the form of suitable CO-RE
>> macros)...
>>
>> >> In addition, in every other case where we do dynamic data access (with
>> >> CO-RE) the BPF program is a consumer that modifies itself to access the
>> >> data provided by the kernel. I get that this is harder to achieve for
>> >> AF_XDP, but then let's solve that instead of making a totally
>> >> inconsistent interface for XDP.
>> >
>> > I also see CO-RE more fitting and convenient way to use them, but
>> > didn't manage to solve two things:
>> >
>> > 1) AF_XDP programs, so what to do with them? Prepare patches for
>> > LLVM to make it able to do CO-RE on AF_XDP program load? Or
>> > just hardcode them for particular usecases and NICs? What about
>> > "general-purpose" programs?
>>
>> You provide a library to read the fields. Jesper actually already
>> implemented this, did you look at his code?
>>
>> https://github.com/xdp-project/bpf-examples/tree/master/AF_XDP-interaction
>>
>> It basically builds a lookup table at load-time using BTF information
>> from the kernel, keyed on BTF ID and field name, resolving them into
>> offsets. It's not quite the zero-overhead of CO-RE, but it's fairly
>> close and can be improved upon (CO-RE for userspace being one way of
>> doing that).
>
> Aaaah, sorry, I completely missed that. I thought of something
> similar as well, but then thought "variable field offsets, that
> would annihilate optimization and performance", and our Xsk team
> is super concerned about performance hits when using Hints.
>
>>
>> > And if hardcode, what's the point then to do Generic Hints at
>> > all? Then all it needs is making driver building some meta in
>> > front of frames via on-off button and that's it? Why BTF ID in
>> > the meta then if consumers will access meta hardcoded (via CO-RE
>> > or literally hardcoded, doesn't matter)?
>>
>> You're quite right, we could probably implement all the access to
>> existing (fixed) metadata without using any BTF at all - just define a
>> common struct and some flags to designate which fields are set. In my
>> mind, there are a couple of reasons for going the BTF route instead:
>>
>> - We can leverage CO-RE to get close to optimal efficiency in field
>> access.
>>
>> and, more importantly:
>>
>> - It's infinitely extensible. With the infrastructure in place to make
>> it really easy to consume metadata described by BTF, we lower the bar
>> for future innovation in hardware offloads. Both for just adding new
>> fixed-function stuff to hardware, but especially for fully
>> programmable hardware.
>
> Agree :) That libxdp lookup translator fixed lots of stuff in my
> mind.

Great! Looks like we're slowly converging towards a shared
understanding, then! :)

>> > 2) In-kernel metadata consumers? Also do CO-RE? Otherwise, with no
>> > generic metadata structure they won't be able to benefit from
>> > Hints. But I guess we still need to provide kernel with meta?
>> > Or no?
>>
>> In the short term, I think the "generic structure" approach is fine for
>> leveraging this in the stack. Both your and Jesper's series include
>> this, and I think that's totally fine. Longer term, if it turns out to
>> be useful to have something more dynamic for the stack consumption as
>> well, we could extend it to be CO-RE based as well (most likely by
>> having the stack load a "translator" BPF program or something along
>> those lines).
>
> Oh, that translator prog sounds nice BTW!

Yeah, it's only a rough idea Jesper and I discussed at some point, but I
think it could have potential (see also point above re: making XDP hints
*the* source of metadata for the whole stack; wouldn't it be nice if
drivers didn't have to deal with the intricacies of assembling SKBs?).

-Toke

2022-07-07 11:52:29

by Jesper Dangaard Brouer

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata



On 07/07/2022 01.22, Toke Høiland-Jørgensen wrote:
> Alexander Lobakin <[email protected]> writes:
>
>> From: Toke Høiland-Jørgensen <[email protected]>
>> Date: Tue, 05 Jul 2022 20:51:14 +0200
>>
>>> Alexander Lobakin <[email protected]> writes:
>>>
>>> [... snipping a bit of context here ...]
>>>
>>>>>>> Yeah, I'd agree this kind of configuration is something that can be
>>>>>>> added later, and also it's sort of orthogonal to the consumption of the
>>>>>>> metadata itself.
>>>>>>>
>>>>>>> Also, tying this configuration into the loading of an XDP program is a
>>>>>>> terrible interface: these are hardware configuration options, let's just
>>>>>>> put them into ethtool or 'ip link' like any other piece of device
>>>>>>> configuration.
>>>>>>
>>>>>> I don't believe it fits there, especially Ethtool. Ethtool is for
>>>>>> hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
>>>>>> offload bits which is purely NFP's for now).
>>>>>
>>>>> But XDP-hints is about consuming hardware features. When you're
>>>>> configuring which metadata items you want, you're saying "please provide
>>>>> me with these (hardware) features". So ethtool is an excellent place to
>>>>> do that :)
>>>>
>>>> With Ethtool you configure the hardware, e.g. it won't strip VLAN
>>>> tags if you disable rx-cvlan-stripping. With configuring metadata
>>>> you only tell what you want to see there, don't you?
>>>
>>> Ah, I think we may be getting closer to identifying the disconnect
>>> between our way of thinking about this!
>>>
>>> In my mind, there's no separate "configuration of the metadata" step.
>>> You simply tell the hardware what features you want (say, "enable
>>> timestamps and VLAN offload"), and the driver will then provide the
>>> information related to these features in the metadata area
>>> unconditionally. All XDP hints is about, then, is a way for the driver
>>> to inform the rest of the system how that information is actually laid
>>> out in the metadata area.
>>>
>>> Having a separate configuration knob to tell the driver "please lay out
>>> these particular bits of metadata this way" seems like a totally
>>> unnecessary (and quite complicated) feature to have when we can just let
>>> the driver decide and use CO-RE to consume it?
>>
>> Magnus (he's currently on vacation) told me it would be useful for
>> AF_XDP to enable/disable particular metadata, at least from perf
>> perspective.

I have recently talked to Magnus (in person at Kernel Recipes), where I
tried to convey my opinion, which is: at least for existing hardware
hints, we need to respect the existing Linux kernel config interfaces,
and not invent yet-another-way to configure these.
(At least for now) the structs defined in C code by the kernel modules
are the source of truth, and we consume these layouts via the BTF
information provided by the kernel for our XDP-hints.
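
To make that concrete, a purely hypothetical example of such a
kernel-defined layout (names and fields are illustrative, not the
actual structs from either patchset):

    /* Defined once in kernel C code; consumers discover this layout
     * through the BTF the kernel exports, never via a hand-maintained
     * userspace copy.
     */
    struct xdp_hints_example {
            __u64 rx_timestamp;
            __u32 rss_hash;
            __u16 rx_queue;
            __u16 csum_status;
    };

Any change to the struct automatically shows up in the exported BTF, so
consumers re-resolve offsets instead of silently breaking.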


>> Let's say, just fetching of one "checksum ok" bit in
>> the driver is faster than walking through all the descriptor words
>> and driver logics (i.e. there's several hundred locs in ice which
>> just parse descriptor data and build an skb or metadata from it).
>> But if we would just enable/disable corresponding features through
>> Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
>> what if I want to have only RSS hash in the metadata (and don't
>> want to spend cycles on parsing the rest), but at the same time
>> still want skb path to have checksum status to not die at CPU
>> checksum calculation?
>
> Hmm, so this feels a little like a driver-specific optimisation? I.e.,
> my guess is that not all drivers have a measurable overhead for pulling
> out the metadata. Also, once the XDP metadata bits are in place, we can
> move in the direction of building SKBs from the same source, so I'm not
> sure it's a good idea to assume that the XDP metadata is separate from
> what the stack consumes...

I agree.

> In any case, if such an optimisation does turn out to be useful, we can
> add it later (backed by rigorous benchmarks, of course), so I think we
> can still start with the simple case and iterate from there?

For every element in the generic hints data-structure, we already have
per-element enable/disable facilities, as they are already controlled
by ethtool (except the timestamping, which can be enabled via a sockopt).
I don't see a benefit in creating another layer (of if-statements) that
would also be required to get the HW hint written to the XDP-hints
metadata area.
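
As a sketch of that argument (illustrative, not actual driver code):
the netdev feature bits that `ethtool -K` already flips can gate the
hint writes directly, with no additional per-hint XDP knob (the hints
fields and the rx_desc helper below are hypothetical):

    if (netdev->features & NETIF_F_RXHASH)
            hints->rss_hash = le32_to_cpu(rx_desc->rss_hash);
    if (netdev->features & NETIF_F_RXCSUM)
            hints->csum_status = rx_desc_csum_bits(rx_desc);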



>>> [... snipping repeated context ...]
>>
>> Oh, that translator prog sounds nice BTW!
>
> Yeah, it's only a rough idea Jesper and I discussed at some point, but I
> think it could have potential (see also point above re: making XDP hints
> *the* source of metadata for the whole stack; wouldn't it be nice if
> drivers didn't have to deal with the intricacies of assembling SKBs?).

Yes, this is the longer term goal, but we should take this in steps.
(Thus, my patchset[0] focuses on the existing xdp_hints_common).

Eventually (pipe-dream?), I would like to add a new BPF-hook that runs
in the step converting an xdp_frame to an SKB (today handled in the
function __xdp_build_skb_from_frame). This "translator" BPF program
should be tied/loaded per net_device, which makes it easier to consume
the driver-specific/dynamic XDP-hints layouts, and the BPF code can be
smaller as it only needs to CO-RE-handle the xdp-hints structs known
for this driver. A default BPF prog should be provided and maintained
by the driver maintainers, but can be replaced by end-users.

--Jesper

[0]
https://lore.kernel.org/bpf/165643378969.449467.13237011812569188299.stgit@firesoul/

2022-07-12 11:13:44

by Magnus Karlsson

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

On Thu, Jul 7, 2022 at 1:25 AM Toke Høiland-Jørgensen <[email protected]> wrote:
>
> Alexander Lobakin <[email protected]> writes:
>
> > From: Toke Høiland-Jørgensen <[email protected]>
> > Date: Tue, 05 Jul 2022 20:51:14 +0200
> >
> >> Alexander Lobakin <[email protected]> writes:
> >>
> >> [... snipping a bit of context here ...]
> >>
> >> >> >> Yeah, I'd agree this kind of configuration is something that can be
> >> >> >> added later, and also it's sort of orthogonal to the consumption of the
> >> >> >> metadata itself.
> >> >> >>
> >> >> >> Also, tying this configuration into the loading of an XDP program is a
> >> >> >> terrible interface: these are hardware configuration options, let's just
> >> >> >> put them into ethtool or 'ip link' like any other piece of device
> >> >> >> configuration.
> >> >> >
> >> >> > I don't believe it fits there, especially Ethtool. Ethtool is for
> >> >> > hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
> >> >> > offload bits which is purely NFP's for now).
> >> >>
> >> >> But XDP-hints is about consuming hardware features. When you're
> >> >> configuring which metadata items you want, you're saying "please provide
> >> >> me with these (hardware) features". So ethtool is an excellent place to
> >> >> do that :)
> >> >
> >> > With Ethtool you configure the hardware, e.g. it won't strip VLAN
> >> > tags if you disable rx-cvlan-stripping. With configuring metadata
> >> > you only tell what you want to see there, don't you?
> >>
> >> Ah, I think we may be getting closer to identifying the disconnect
> >> between our way of thinking about this!
> >>
> >> In my mind, there's no separate "configuration of the metadata" step.
> >> You simply tell the hardware what features you want (say, "enable
> >> timestamps and VLAN offload"), and the driver will then provide the
> >> information related to these features in the metadata area
> >> unconditionally. All XDP hints is about, then, is a way for the driver
> >> to inform the rest of the system how that information is actually laid
> >> out in the metadata area.
> >>
> >> Having a separate configuration knob to tell the driver "please lay out
> >> these particular bits of metadata this way" seems like a totally
> >> unnecessary (and quite complicated) feature to have when we can just let
> >> the driver decide and use CO-RE to consume it?
> >
> > Magnus (he's currently on vacation) told me it would be useful for
> > AF_XDP to enable/disable particular metadata, at least from perf
> > perspective. Let's say, just fetching of one "checksum ok" bit in
> > the driver is faster than walking through all the descriptor words
> > and driver logics (i.e. there's several hundred locs in ice which
> > just parse descriptor data and build an skb or metadata from it).
> > But if we would just enable/disable corresponding features through
> > Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
> > what if I want to have only RSS hash in the metadata (and don't
> > want to spend cycles on parsing the rest), but at the same time
> > still want skb path to have checksum status to not die at CPU
> > checksum calculation?
>
> Hmm, so this feels a little like a driver-specific optimisation? I.e.,
> my guess is that not all drivers have a measurable overhead for pulling
> out the metadata. Also, once the XDP metadata bits are in place, we can
> move in the direction of building SKBs from the same source, so I'm not
> sure it's a good idea to assume that the XDP metadata is separate from
> what the stack consumes...
>
> In any case, if such an optimisation does turn out to be useful, we can
> add it later (backed by rigorous benchmarks, of course), so I think we
> can still start with the simple case and iterate from there?

Just to check if my intuition was correct or not I ran some benchmarks
around this. I ported Jesper's patch set to the zero-copy driver of
i40e, which was really simple thanks to Jesper's refactoring. One line
of code added to the data path of the zc driver and making
i40e_process_xdp_hints() a global function so it can be reached from
the zc driver. I also moved the prefetch Jesper added to after the
check if xdp_hints are available since it really degrades performance
in the xdp_hints off case.
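
The change described amounts to the following (the ring flag is a
hypothetical stand-in for however the driver tracks that hints are
enabled):

    /* Only prefetch the metadata cache line once we know hints are
     * actually in use, so the hints-off path pays (almost) nothing,
     * instead of an unconditional prefetch(xdp->data_meta) up front.
     */
    if (rx_ring->flags & I40E_RXR_FLAGS_XDP_HINTS)
            prefetch(xdp->data_meta);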

First number is the throughput change with hints on, and the second
number is with hints off. All are compared to the performance without
Jesper's patch set applied. The application is xdpsock -r (which used
to be part of the samples/bpf directory).

Copy mode with all hints: -21% / -2%
Zero-copy mode with all hints: -29% / -9%

Copy mode rx timestamp only (the rest removed with an #if 0): -11%
Zero-copy mode rx timestamp only: -20%

So, if you only want rx timestamp, but can only enable every hint or
nothing, then you get a 10% performance degradation with copy mode and
9% with zero-copy mode compared to if you were able just to enable rx
timestamp alone. With these rough numbers (a real implementation would
not have an #if 0) I would say it matters, but that does not mean we
should not start simple and just have a big switch to start with. But
as we add hints (to the same btfid), this will just get worse.

Here are some other numbers I got, in case someone is interested. They
are XDP numbers from xdp_rxq_info in samples/bpf.

hints on / hints off
XDP_DROP: -18% / -1.5%
XDP_TX: -10% / -2.5%

> [... snipping the rest of the quoted context ...]

2022-07-12 14:17:30

by Jesper Dangaard Brouer

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata



On 12/07/2022 12.33, Magnus Karlsson wrote:
> On Thu, Jul 7, 2022 at 1:25 AM Toke Høiland-Jørgensen <[email protected]> wrote:
>>
>> Alexander Lobakin <[email protected]> writes:
>>
>>> From: Toke Høiland-Jørgensen <[email protected]>
>>> Date: Tue, 05 Jul 2022 20:51:14 +0200
>>>
>>>> Alexander Lobakin <[email protected]> writes:
>>>>
>>>> [... snipping a bit of context here ...]
>>>>
>>>>>>>> Yeah, I'd agree this kind of configuration is something that can be
>>>>>>>> added later, and also it's sort of orthogonal to the consumption of the
>>>>>>>> metadata itself.
>>>>>>>>
>>>>>>>> Also, tying this configuration into the loading of an XDP program is a
>>>>>>>> terrible interface: these are hardware configuration options, let's just
>>>>>>>> put them into ethtool or 'ip link' like any other piece of device
>>>>>>>> configuration.
>>>>>>>
>>>>>>> I don't believe it fits there, especially Ethtool. Ethtool is for
>>>>>>> hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
>>>>>>> offload bits which is purely NFP's for now).
>>>>>>
>>>>>> But XDP-hints is about consuming hardware features. When you're
>>>>>> configuring which metadata items you want, you're saying "please provide
>>>>>> me with these (hardware) features". So ethtool is an excellent place to
>>>>>> do that :)
>>>>>
>>>>> With Ethtool you configure the hardware, e.g. it won't strip VLAN
>>>>> tags if you disable rx-cvlan-stripping. With configuring metadata
>>>>> you only tell what you want to see there, don't you?
>>>>
>>>> Ah, I think we may be getting closer to identifying the disconnect
>>>> between our way of thinking about this!
>>>>
>>>> In my mind, there's no separate "configuration of the metadata" step.
>>>> You simply tell the hardware what features you want (say, "enable
>>>> timestamps and VLAN offload"), and the driver will then provide the
>>>> information related to these features in the metadata area
>>>> unconditionally. All XDP hints is about, then, is a way for the driver
>>>> to inform the rest of the system how that information is actually laid
>>>> out in the metadata area.
>>>>
>>>> Having a separate configuration knob to tell the driver "please lay out
>>>> these particular bits of metadata this way" seems like a totally
>>>> unnecessary (and quite complicated) feature to have when we can just let
>>>> the driver decide and use CO-RE to consume it?
>>>
>>> Magnus (he's currently on vacation) told me it would be useful for
>>> AF_XDP to enable/disable particular metadata, at least from perf
>>> perspective. Let's say, just fetching of one "checksum ok" bit in
>>> the driver is faster than walking through all the descriptor words
>>> and driver logics (i.e. there's several hundred locs in ice which
>>> just parse descriptor data and build an skb or metadata from it).
>>> But if we would just enable/disable corresponding features through
>>> Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
>>> what if I want to have only RSS hash in the metadata (and don't
>>> want to spend cycles on parsing the rest), but at the same time
>>> still want skb path to have checksum status to not die at CPU
>>> checksum calculation?
>>
>> Hmm, so this feels a little like a driver-specific optimisation? I.e.,
>> my guess is that not all drivers have a measurable overhead for pulling
>> out the metadata. Also, once the XDP metadata bits are in place, we can
>> move in the direction of building SKBs from the same source, so I'm not
>> sure it's a good idea to assume that the XDP metadata is separate from
>> what the stack consumes...
>>
>> In any case, if such an optimisation does turn out to be useful, we can
>> add it later (backed by rigorous benchmarks, of course), so I think we
>> can still start with the simple case and iterate from there?
>
> Just to check if my intuition was correct or not I ran some benchmarks
> around this. I ported Jesper's patch set to the zero-copy driver of
> i40e, which was really simple thanks to Jesper's refactoring. One line
> of code added to the data path of the zc driver and making
> i40e_process_xdp_hints() a global function so it can be reached from
> the zc driver.

Happy to hear it was simple to extend this to AF_XDP in the driver.
Code-design-wise I'm trying to keep it simple for drivers to add this.
I have a co-worker who has already extended ixgbe.

> I also moved the prefetch Jesper added to after the
> check if xdp_hints are available since it really degrades performance
> in the xdp_hints off case.

Good to know.

> First number is the throughput change with hints on, and the second
> number is with hints off. All are compared to the performance without
> Jesper's patch set applied. The application is xdpsock -r (which used
> to be part of the samples/bpf directory).

For reviewers to relate to these numbers, we need to understand/explain
the extreme numbers we are dealing with. On my system with i40e and
xdpsock --rx-drop, I can AF_XDP-drop packets at a rate of 33,633,761 pps.

This corresponds to a processing time per packet of 29.7 ns (nanosec)
- Calc: (1/33633761)*10^9

> Copy mode with all hints: -21% / -2%

The -21% for enabling all hints does sound like an excessive overhead,
but time-wise this is an overhead of 6.2 ns.

The real question: is this 6.2 ns overhead, which gives us e.g.
RX-checksumming, lower than the gain we can obtain from avoiding doing
RX-checksumming in software?
- A: My previous experiments[1] conclude that for 1500-byte frames we
can save 54 ns (or increase performance by 8% for the normal netstack).


I was going for zero overhead when disabling xdp-hints, which is almost
true, as the -2% corresponds time-wise to an overhead of only 0.59 ns.

[1]
https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp_frame01_checksum.org#measurements-compare-results--conclusion


> Zero-copy mode with all hints: -29% / -9%

I'm unsure why the percentages increase here, perhaps because zero-copy
is faster and thus the overhead becomes a larger percentage?


> Copy mode rx timestamp only (the rest removed with an #if 0): -11%
> Zero-copy mode rx timestamp only: -20%
>
> So, if you only want rx timestamp, but can only enable every hint or
> nothing, then you get a 10% performance degradation with copy mode and
> 9% with zero-copy mode compared to if you were able just to enable rx
> timestamp alone. With these rough numbers (a real implementation would
> not have an #if 0) I would say it matters, but that does not mean we
> should not start simple and just have a big switch to start with. But
> as we add hints (to the same btfid), this will just get worse.

IMHO we *do* already have individual enable/disable facilities for
hints via ethtool.
Have you tried to use the individual ethtool switches, e.g.:

ethtool -K i40e2 rx-checksumming off

The i40e code uses bitfields for extracting the descriptor, which causes
code that isn't optimal or fully optimized by the compiler. On my setup
I gained 4.2% (or 1.24 ns) by doing this.
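
For reference, the kind of explicit extraction that avoids the bitfield
codegen issue looks roughly like this (the descriptor field and macros
exist in i40e today; the surrounding code is a sketch):

    /* One 64-bit load, then cheap masks and shifts, rather than a
     * series of bitfield extractions the compiler cannot always
     * combine into a single load.
     */
    u64 qword1 = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
    u32 status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >>
                 I40E_RXD_QW1_STATUS_SHIFT;
    u32 size   = (qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
                 I40E_RXD_QW1_LENGTH_PBUF_SHIFT;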


> Here are some other numbers I got, in case someone is interested. They
> are XDP numbers from xdp_rxq_info in samples/bpf.
>
> hints on / hints off
> XDP_DROP: -18% / -1.5%

My xdp_rxq_info (no-touch XDP_DROP) nanosec numbers are:

hints on / hints off
XDP_DROP: 35.97ns / 29.80ns (diff 6.17 ns)

Maybe interesting: if I touch the data (via option --read), the overhead
is reduced to 4.84 ns.

--Jesper

> [... snipping the rest of the quoted context ...]

2022-07-15 11:19:09

by Magnus Karlsson

[permalink] [raw]
Subject: Re: [xdp-hints] Re: [PATCH RFC bpf-next 00/52] bpf, xdp: introduce and use Generic Hints/metadata

On Tue, Jul 12, 2022 at 4:15 PM Jesper Dangaard Brouer
<[email protected]> wrote:
>
>
>
> On 12/07/2022 12.33, Magnus Karlsson wrote:
> > On Thu, Jul 7, 2022 at 1:25 AM Toke Høiland-Jørgensen <[email protected]> wrote:
> >>
> >> Alexander Lobakin <[email protected]> writes:
> >>
> >>> From: Toke Høiland-Jørgensen <[email protected]>
> >>> Date: Tue, 05 Jul 2022 20:51:14 +0200
> >>>
> >>>> Alexander Lobakin <[email protected]> writes:
> >>>>
> >>>> [... snipping a bit of context here ...]
> >>>>
> >>>>>>>> Yeah, I'd agree this kind of configuration is something that can be
> >>>>>>>> added later, and also it's sort of orthogonal to the consumption of the
> >>>>>>>> metadata itself.
> >>>>>>>>
> >>>>>>>> Also, tying this configuration into the loading of an XDP program is a
> >>>>>>>> terrible interface: these are hardware configuration options, let's just
> >>>>>>>> put them into ethtool or 'ip link' like any other piece of device
> >>>>>>>> configuration.
> >>>>>>>
> >>>>>>> I don't believe it fits there, especially Ethtool. Ethtool is for
> >>>>>>> hardware configuration, XDP/AF_XDP is 95% software stuff (apart from
> >>>>>>> offload bits which is purely NFP's for now).
> >>>>>>
> >>>>>> But XDP-hints is about consuming hardware features. When you're
> >>>>>> configuring which metadata items you want, you're saying "please provide
> >>>>>> me with these (hardware) features". So ethtool is an excellent place to
> >>>>>> do that :)
> >>>>>
> >>>>> With Ethtool you configure the hardware, e.g. it won't strip VLAN
> >>>>> tags if you disable rx-cvlan-stripping. With configuring metadata
> >>>>> you only tell what you want to see there, don't you?
> >>>>
> >>>> Ah, I think we may be getting closer to identifying the disconnect
> >>>> between our way of thinking about this!
> >>>>
> >>>> In my mind, there's no separate "configuration of the metadata" step.
> >>>> You simply tell the hardware what features you want (say, "enable
> >>>> timestamps and VLAN offload"), and the driver will then provide the
> >>>> information related to these features in the metadata area
> >>>> unconditionally. All XDP hints is about, then, is a way for the driver
> >>>> to inform the rest of the system how that information is actually laid
> >>>> out in the metadata area.
> >>>>
> >>>> Having a separate configuration knob to tell the driver "please lay out
> >>>> these particular bits of metadata this way" seems like a totally
> >>>> unnecessary (and quite complicated) feature to have when we can just let
> >>>> the driver decide and use CO-RE to consume it?
> >>>
> >>> Magnus (he's currently on vacation) told me it would be useful for
> >>> AF_XDP to enable/disable particular metadata, at least from perf
> >>> perspective. Let's say, just fetching of one "checksum ok" bit in
> >>> the driver is faster than walking through all the descriptor words
> >>> and driver logics (i.e. there's several hundred locs in ice which
> >>> just parse descriptor data and build an skb or metadata from it).
> >>> But if we would just enable/disable corresponding features through
> >>> Ethtool, that would hurt XDP_PASS. Maybe it's a bad example, but
> >>> what if I want to have only RSS hash in the metadata (and don't
> >>> want to spend cycles on parsing the rest), but at the same time
> >>> still want skb path to have checksum status to not die at CPU
> >>> checksum calculation?
> >>
> >> Hmm, so this feels a little like a driver-specific optimisation? I.e.,
> >> my guess is that not all drivers have a measurable overhead for pulling
> >> out the metadata. Also, once the XDP metadata bits are in place, we can
> >> move in the direction of building SKBs from the same source, so I'm not
> >> sure it's a good idea to assume that the XDP metadata is separate from
> >> what the stack consumes...
> >>
> >> In any case, if such an optimisation does turn out to be useful, we can
> >> add it later (backed by rigorous benchmarks, of course), so I think we
> >> can still start with the simple case and iterate from there?
> >
> > Just to check if my intuition was correct or not I ran some benchmarks
> > around this. I ported Jesper's patch set to the zero-copy driver of
> > i40e, which was really simple thanks to Jesper's refactoring. One line
> > of code added to the data path of the zc driver and making
> > i40e_process_xdp_hints() a global function so it can be reached from
> > the zc driver.
>
> Happy to hear it was simple to extend this to AF_XDP in the driver.
> Code-design-wise I'm trying to keep it simple for drivers to add this.
> I have a co-worker who has already extended ixgbe.
>
> > I also moved the prefetch Jesper added to after the
> > check if xdp_hints are available since it really degrades performance
> > in the xdp_hints off case.
>
> Good to know.
>
> > First number is the throughput change with hints on, and the second
> > number is with hints off. All are compared to the performance without
> > Jesper's patch set applied. The application is xdpsock -r (which used
> > to be part of the samples/bpf directory).
>
> For reviewers to relate to these numbers, we need to understand/explain
> the extreme numbers we are dealing with. On my system with i40e and
> xdpsock --rx-drop, I can AF_XDP-drop packets at a rate of 33,633,761 pps.
>
> This corresponds to a processing time per packet of 29.7 ns (nanosec)
> - Calc: (1/33633761)*10^9
>
> > Copy mode with all hints: -21% / -2%

On my system, the overhead is 66 cycles/packet or 31 ns/packet (2.1
GHz CPU with TurboBoost disabled). Copy mode drops packets at a rate
of only 8.5 Mpps, or 118 ns/packet, on my system. The rate you quote
must be for zero-copy, as I see something similar there if I enable
TurboBoost on my system.

> The -21% for enabling all hints does sound like an excessive overhead,
> but time-wise this is an overhead of 6.2 ns.
>
> The real question: is this 6.2 ns overhead, which gives us e.g.
> RX-checksumming, lower than the gain we can obtain from avoiding doing
> RX-checksumming in software?
> - A: My previous experiments[1] conclude that for 1500-byte frames we
> can save 54 ns (or increase performance by 8% for the normal netstack).

With copy mode, using Rx-checksumming alone pays off for packets bigger
than roughly 500 bytes. This is a very rough estimate since I cannot
mix your numbers with mine, but there is a substantial window where it
pays off for sure. For ZC, this window is even larger; see below.

>
> I was going for zero overhead when disabling xdp-hints, which is almost
> true, as the -2% corresponds time-wise to an overhead of only 0.59 ns.
>
> [1]
> https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp_frame01_checksum.org#measurements-compare-results--conclusion
>
>
> > Zero-copy mode with all hints: -29% / -9%
>
> I'm unsure why the percentages increase here, perhaps because zero-copy
> is faster and thus the overhead becomes a larger percentage?

For zero-copy, the overhead is 31 cycles/packet or 15 ns/packet on my
system. I would have expected the cycles/packet overhead for copy-mode
and zero-copy mode to be about the same since they use the same hints
code, but it is roughly half for zero-copy. Have not examined why. The
packet processing time without your patches on my system is 36
ns/packet or 27.65 Mpps for zero-copy.

>
> > Copy mode rx timestamp only (the rest removed with an #if 0): -11%
> > Zero-copy mode rx timestamp only: -20%
> >
> > So, if you only want rx timestamp, but can only enable every hint or
> > nothing, then you get a 10% performance degradation with copy mode and
> > 9% with zero-copy mode compared to if you were able just to enable rx
> > timestamp alone. With these rough numbers (a real implementation would
> > not have an #if 0) I would say it matters, but that does not mean we
> > should not start simple and just have a big switch to start with. But
> > as we add hints (to the same btfid), this will just get worse.
>
> IMHO we *do* already have individual enable/disable facilities for
> hints via ethtool.
> Have you tried to use the individual ethtool switches, e.g.:
>
> ethtool -K i40e2 rx-checksumming off
>
> The i40e code uses bitfields for extracting the descriptor, which causes
> code that isn't optimal or fully optimized by the compiler. On my setup
> I gained 4.2% (or 1.24 ns) by doing this.

Forgot about that one. Will replace the bitfields and rerun the
experiments to get the overhead down.

>
> > Here are some other numbers I got, in case someone is interested. They
> > are XDP numbers from xdp_rxq_info in samples/bpf.
> >
> > hints on / hints off
> > XDP_DROP: -18% / -1.5%
>
> My xdp_rxq_info (no-touch XDP_DROP) nanosec numbers are:
>
> hints on / hints off
> XDP_DROP: 35.97ns / 29.80ns (diff 6.17 ns)
>
> Maybe interesting: if I touch the data (via option --read), the overhead
> is reduced to 4.84 ns.

Good point. We should always touch the data. Will include that in the
next set of experiments.

> --Jesper
>
> [... snipping the rest of the quoted context ...]