2023-06-26 23:05:23

by Daniel Xu

Subject: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

=== Context ===

In the context of a middlebox, fragmented packets are tricky to handle.
The full 5-tuple of a packet is often only available in the first
fragment which makes enforcing consistent policy difficult. There are
really only two stateless options, neither of which are very nice:

1. Enforce policy on first fragment and accept all subsequent fragments.
This works but may let in certain attacks or allow data exfiltration.

2. Enforce policy on first fragment and drop all subsequent fragments.
This does not really work because some protocols may rely on
fragmentation. For example, DNS may rely on oversized UDP packets for
large responses.

So stateful tracking is the only sane option. RFC 8900 [0] calls this
out as well in section 6.3:

Middleboxes [...] should process IP fragments in a manner that is
consistent with [RFC0791] and [RFC8200]. In many cases, middleboxes
must maintain state in order to achieve this goal.

=== BPF related bits ===

Policy has traditionally been enforced from XDP/TC hooks. Both hooks
run before kernel reassembly facilities. However, with the new
BPF_PROG_TYPE_NETFILTER, we can rather easily hook into existing
netfilter reassembly infra.

The basic idea is we bump a refcnt on the netfilter defrag module and
then run the bpf prog after the defrag module runs. This allows bpf
progs to transparently see full, reassembled packets. The nice thing
about this is that progs don't have to carry around logic to detect
fragments.
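
For illustration, attaching a program that sees reassembled packets would
look roughly like this from userspace. This is only a sketch: it assumes
the bpf_program__attach_netfilter() helper from patch 1 and the
BPF_F_NETFILTER_IP_DEFRAG link flag from patch 4, and the program name and
priority are made up:

        LIBBPF_OPTS(bpf_netfilter_opts, opts,
                .pf = NFPROTO_IPV4,
                .hooknum = NF_INET_PRE_ROUTING,
                .priority = 100,                    /* run after defrag */
                .flags = BPF_F_NETFILTER_IP_DEFRAG, /* hold defrag module */
        );
        struct bpf_link *link;

        link = bpf_program__attach_netfilter(skel->progs.my_prog, &opts);
        if (!link)
                return -errno;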

=== Patchset details ===

There was an earlier attempt at providing defrag via kfuncs [1]. The
feedback was that we could end up doing too much stuff in prog execution
context (like sending ICMP error replies). However, I think there is
still some outstanding discussion w.r.t. performance when it comes to
netfilter vs the previous approach. I'll schedule some time during
office hours for this.

Patches 1 & 2 are stolen from Florian. Hopefully he doesn't mind. There
were some outstanding comments on the v2 [2] but it doesn't look like a
v3 was ever submitted. I've addressed the comments and put them in this
patchset because I needed them.

Finally, the new selftest seems to be a little flaky. I'm not quite
sure why the server will fail to `recvfrom()` occasionally. I'm fairly
sure it's a timing related issue with creating veths. I'll keep
debugging but I didn't want that to hold up discussion on this patchset.


[0]: https://datatracker.ietf.org/doc/html/rfc8900
[1]: https://lore.kernel.org/bpf/[email protected]/
[2]: https://lore.kernel.org/bpf/[email protected]/

Daniel Xu (7):
tools: libbpf: add netfilter link attach helper
selftests/bpf: Add bpf_program__attach_netfilter helper test
netfilter: defrag: Add glue hooks for enabling/disabling defrag
netfilter: bpf: Support BPF_F_NETFILTER_IP_DEFRAG in netfilter link
bpf: selftests: Support not connecting client socket
bpf: selftests: Support custom type and proto for client sockets
bpf: selftests: Add defrag selftests

include/linux/netfilter.h | 12 +
include/uapi/linux/bpf.h | 5 +
net/ipv4/netfilter/nf_defrag_ipv4.c | 8 +
net/ipv6/netfilter/nf_defrag_ipv6_hooks.c | 10 +
net/netfilter/core.c | 6 +
net/netfilter/nf_bpf_link.c | 108 ++++++-
tools/include/uapi/linux/bpf.h | 5 +
tools/lib/bpf/bpf.c | 8 +
tools/lib/bpf/bpf.h | 6 +
tools/lib/bpf/libbpf.c | 47 +++
tools/lib/bpf/libbpf.h | 15 +
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/generate_udp_fragments.py | 90 ++++++
.../selftests/bpf/ip_check_defrag_frags.h | 57 ++++
tools/testing/selftests/bpf/network_helpers.c | 26 +-
tools/testing/selftests/bpf/network_helpers.h | 3 +
.../bpf/prog_tests/ip_check_defrag.c | 282 ++++++++++++++++++
.../bpf/prog_tests/netfilter_basic.c | 78 +++++
.../selftests/bpf/progs/ip_check_defrag.c | 104 +++++++
.../bpf/progs/test_netfilter_link_attach.c | 14 +
21 files changed, 868 insertions(+), 21 deletions(-)
create mode 100755 tools/testing/selftests/bpf/generate_udp_fragments.py
create mode 100644 tools/testing/selftests/bpf/ip_check_defrag_frags.h
create mode 100644 tools/testing/selftests/bpf/prog_tests/ip_check_defrag.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/netfilter_basic.c
create mode 100644 tools/testing/selftests/bpf/progs/ip_check_defrag.c
create mode 100644 tools/testing/selftests/bpf/progs/test_netfilter_link_attach.c

--
2.40.1



2023-06-26 23:10:09

by Daniel Xu

Subject: [PATCH bpf-next 2/7] selftests/bpf: Add bpf_program__attach_netfilter helper test

Call bpf_program__attach_netfilter() with different
protocol/hook/priority combinations.

Test fails if supposedly-illegal attachments work
(e.g., bogus protocol family, illegal priority and so on)
or if a should-work attachment fails.

Co-developed-by: Florian Westphal <[email protected]>
Signed-off-by: Florian Westphal <[email protected]>
Signed-off-by: Daniel Xu <[email protected]>
---
.../bpf/prog_tests/netfilter_basic.c | 78 +++++++++++++++++++
.../bpf/progs/test_netfilter_link_attach.c | 14 ++++
2 files changed, 92 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/netfilter_basic.c
create mode 100644 tools/testing/selftests/bpf/progs/test_netfilter_link_attach.c

diff --git a/tools/testing/selftests/bpf/prog_tests/netfilter_basic.c b/tools/testing/selftests/bpf/prog_tests/netfilter_basic.c
new file mode 100644
index 000000000000..357353fee19d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/netfilter_basic.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <netinet/in.h>
+#include <linux/netfilter.h>
+
+#include "test_progs.h"
+#include "test_netfilter_link_attach.skel.h"
+
+struct nf_hook_options {
+ __u32 pf;
+ __u32 hooknum;
+ __s32 priority;
+ __u32 flags;
+
+ bool expect_success;
+};
+
+struct nf_hook_options nf_hook_attach_tests[] = {
+ { },
+ { .pf = NFPROTO_NUMPROTO, },
+ { .pf = NFPROTO_IPV4, .hooknum = 42, },
+ { .pf = NFPROTO_IPV4, .priority = INT_MIN },
+ { .pf = NFPROTO_IPV4, .priority = INT_MAX },
+ { .pf = NFPROTO_IPV4, .flags = UINT_MAX },
+
+ { .pf = NFPROTO_INET, .priority = 1, },
+
+ { .pf = NFPROTO_IPV4, .priority = -10000, .expect_success = true },
+ { .pf = NFPROTO_IPV6, .priority = 10001, .expect_success = true },
+};
+
+void test_netfilter_basic(void)
+{
+ struct test_netfilter_link_attach *skel;
+ LIBBPF_OPTS(bpf_netfilter_opts, opts);
+ struct bpf_program *prog;
+ int i;
+
+ skel = test_netfilter_link_attach__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_netfilter_link_attach__open_and_load"))
+ goto out;
+
+ prog = skel->progs.nf_link_attach_test;
+
+ for (i = 0; i < ARRAY_SIZE(nf_hook_attach_tests); i++) {
+ struct bpf_link *link;
+
+#define X(opts, m, i) opts.m = nf_hook_attach_tests[(i)].m
+ X(opts, pf, i);
+ X(opts, hooknum, i);
+ X(opts, priority, i);
+ X(opts, flags, i);
+#undef X
+ link = bpf_program__attach_netfilter(prog, &opts);
+ if (nf_hook_attach_tests[i].expect_success) {
+ struct bpf_link *link2;
+
+ if (!ASSERT_OK_PTR(link, "program attach successful"))
+ continue;
+
+ link2 = bpf_program__attach_netfilter(prog, &opts);
+ ASSERT_ERR_PTR(link2, "attach program with same pf/hook/priority");
+
+ if (!ASSERT_OK(bpf_link__destroy(link), "link destroy"))
+ break;
+
+ link2 = bpf_program__attach_netfilter(prog, &opts);
+ if (!ASSERT_OK_PTR(link2, "program reattach successful"))
+ continue;
+ if (!ASSERT_OK(bpf_link__destroy(link2), "link destroy"))
+ break;
+ } else {
+ ASSERT_ERR_PTR(link, "program load failure");
+ }
+ }
+out:
+ test_netfilter_link_attach__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_netfilter_link_attach.c b/tools/testing/selftests/bpf/progs/test_netfilter_link_attach.c
new file mode 100644
index 000000000000..03a475160abe
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_netfilter_link_attach.c
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+#define NF_ACCEPT 1
+
+SEC("netfilter")
+int nf_link_attach_test(struct bpf_nf_ctx *ctx)
+{
+ return NF_ACCEPT;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.40.1


2023-06-26 23:10:40

by Daniel Xu

Subject: [PATCH bpf-next 1/7] tools: libbpf: add netfilter link attach helper

Add new api function: bpf_program__attach_netfilter.

It takes a bpf program (netfilter type) and a pointer to an option struct
that contains the desired attachment (protocol family, priority, hook
location, ...).

It returns a pointer to a 'bpf_link' structure or NULL on error.

Next patch adds new netfilter_basic test that uses this function to
attach a program to a few pf/hook/priority combinations.

Co-developed-by: Florian Westphal <[email protected]>
Signed-off-by: Florian Westphal <[email protected]>
Suggested-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Daniel Xu <[email protected]>
---
tools/lib/bpf/bpf.c | 8 +++++++
tools/lib/bpf/bpf.h | 6 +++++
tools/lib/bpf/libbpf.c | 47 ++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 15 +++++++++++++
tools/lib/bpf/libbpf.map | 1 +
5 files changed, 77 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index ed86b37d8024..3b0da19715e1 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -741,6 +741,14 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, tracing))
return libbpf_err(-EINVAL);
break;
+ case BPF_NETFILTER:
+ attr.link_create.netfilter.pf = OPTS_GET(opts, netfilter.pf, 0);
+ attr.link_create.netfilter.hooknum = OPTS_GET(opts, netfilter.hooknum, 0);
+ attr.link_create.netfilter.priority = OPTS_GET(opts, netfilter.priority, 0);
+ attr.link_create.netfilter.flags = OPTS_GET(opts, netfilter.flags, 0);
+ if (!OPTS_ZEROED(opts, netfilter))
+ return libbpf_err(-EINVAL);
+ break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 9aa0ee473754..c676295ab9bf 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -349,6 +349,12 @@ struct bpf_link_create_opts {
struct {
__u64 cookie;
} tracing;
+ struct {
+ __u32 pf;
+ __u32 hooknum;
+ __s32 priority;
+ __u32 flags;
+ } netfilter;
};
size_t :0;
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 214f828ece6b..a8b9d5abb55f 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -11811,6 +11811,53 @@ static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_l
return libbpf_get_error(*link);
}

+struct bpf_link *bpf_program__attach_netfilter(const struct bpf_program *prog,
+ const struct bpf_netfilter_opts *opts)
+{
+ DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
+ struct bpf_link *link;
+ int prog_fd, link_fd;
+
+ if (!OPTS_VALID(opts, bpf_netfilter_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ link_create_opts.netfilter.pf = OPTS_GET(opts, pf, 0);
+ link_create_opts.netfilter.hooknum = OPTS_GET(opts, hooknum, 0);
+ link_create_opts.netfilter.priority = OPTS_GET(opts, priority, 0);
+ link_create_opts.netfilter.flags = OPTS_GET(opts, flags, 0);
+
+ prog_fd = bpf_program__fd(prog);
+ if (prog_fd < 0) {
+ pr_warn("prog '%s': can't attach before loaded\n", prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
+ link = calloc(1, sizeof(*link));
+ if (!link)
+ return libbpf_err_ptr(-ENOMEM);
+ link->detach = &bpf_link__detach_fd;
+
+ link_fd = bpf_link_create(prog_fd, 0, BPF_NETFILTER, &link_create_opts);
+
+ link->fd = ensure_good_fd(link_fd);
+
+ if (link->fd < 0) {
+ char errmsg[STRERR_BUFSIZE];
+
+ link_fd = -errno;
+ free(link);
+ pr_warn("prog '%s': failed to attach to pf:%d,hooknum:%d:prio:%d: %s\n",
+ prog->name,
+ OPTS_GET(opts, pf, 0),
+ OPTS_GET(opts, hooknum, 0),
+ OPTS_GET(opts, priority, 0),
+ libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg)));
+ return libbpf_err_ptr(link_fd);
+ }
+
+ return link;
+}
+
struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
{
struct bpf_link *link = NULL;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 754da73c643b..10642ad69d76 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -718,6 +718,21 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_freplace(const struct bpf_program *prog,
int target_fd, const char *attach_func_name);

+struct bpf_netfilter_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+
+ __u32 pf;
+ __u32 hooknum;
+ __s32 priority;
+ __u32 flags;
+};
+#define bpf_netfilter_opts__last_field flags
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_netfilter(const struct bpf_program *prog,
+ const struct bpf_netfilter_opts *opts);
+
struct bpf_map;

LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 7521a2fb7626..d9ec4407befa 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -395,4 +395,5 @@ LIBBPF_1.2.0 {
LIBBPF_1.3.0 {
global:
bpf_obj_pin_opts;
+ bpf_program__attach_netfilter;
} LIBBPF_1.2.0;
--
2.40.1


2023-06-26 23:12:19

by Daniel Xu

Subject: [PATCH bpf-next 3/7] netfilter: defrag: Add glue hooks for enabling/disabling defrag

We want to be able to enable/disable IP packet defrag from core
bpf/netfilter code. In other words, core code needs to call into code
that may be built as a module.

To help avoid symbol resolution errors, use glue hooks that the modules
will register callbacks with during module init.

Signed-off-by: Daniel Xu <[email protected]>
---
include/linux/netfilter.h | 12 ++++++++++++
net/ipv4/netfilter/nf_defrag_ipv4.c | 8 ++++++++
net/ipv6/netfilter/nf_defrag_ipv6_hooks.c | 10 ++++++++++
net/netfilter/core.c | 6 ++++++
4 files changed, 36 insertions(+)

diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
index 0762444e3767..1d68499de03e 100644
--- a/include/linux/netfilter.h
+++ b/include/linux/netfilter.h
@@ -481,6 +481,18 @@ struct nfnl_ct_hook {
};
extern const struct nfnl_ct_hook __rcu *nfnl_ct_hook;

+struct nf_defrag_v4_hook {
+ int (*enable)(struct net *net);
+ void (*disable)(struct net *net);
+};
+extern const struct nf_defrag_v4_hook __rcu *nf_defrag_v4_hook;
+
+struct nf_defrag_v6_hook {
+ int (*enable)(struct net *net);
+ void (*disable)(struct net *net);
+};
+extern const struct nf_defrag_v6_hook __rcu *nf_defrag_v6_hook;
+
/**
* nf_skb_duplicated - TEE target has sent a packet
*
diff --git a/net/ipv4/netfilter/nf_defrag_ipv4.c b/net/ipv4/netfilter/nf_defrag_ipv4.c
index e61ea428ea18..436e629b0969 100644
--- a/net/ipv4/netfilter/nf_defrag_ipv4.c
+++ b/net/ipv4/netfilter/nf_defrag_ipv4.c
@@ -7,6 +7,7 @@
#include <linux/ip.h>
#include <linux/netfilter.h>
#include <linux/module.h>
+#include <linux/rcupdate.h>
#include <linux/skbuff.h>
#include <net/netns/generic.h>
#include <net/route.h>
@@ -113,17 +114,24 @@ static void __net_exit defrag4_net_exit(struct net *net)
}
}

+static struct nf_defrag_v4_hook defrag_hook = {
+ .enable = nf_defrag_ipv4_enable,
+ .disable = nf_defrag_ipv4_disable,
+};
+
static struct pernet_operations defrag4_net_ops = {
.exit = defrag4_net_exit,
};

static int __init nf_defrag_init(void)
{
+ rcu_assign_pointer(nf_defrag_v4_hook, &defrag_hook);
return register_pernet_subsys(&defrag4_net_ops);
}

static void __exit nf_defrag_fini(void)
{
+ rcu_assign_pointer(nf_defrag_v4_hook, NULL);
unregister_pernet_subsys(&defrag4_net_ops);
}

diff --git a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
index cb4eb1d2c620..205fb692f524 100644
--- a/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
+++ b/net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
@@ -10,6 +10,7 @@
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/icmp.h>
+#include <linux/rcupdate.h>
#include <linux/sysctl.h>
#include <net/ipv6_frag.h>

@@ -96,6 +97,11 @@ static void __net_exit defrag6_net_exit(struct net *net)
}
}

+static struct nf_defrag_v6_hook defrag_hook = {
+ .enable = nf_defrag_ipv6_enable,
+ .disable = nf_defrag_ipv6_disable,
+};
+
static struct pernet_operations defrag6_net_ops = {
.exit = defrag6_net_exit,
};
@@ -114,6 +120,9 @@ static int __init nf_defrag_init(void)
pr_err("nf_defrag_ipv6: can't register pernet ops\n");
goto cleanup_frag6;
}
+
+ rcu_assign_pointer(nf_defrag_v6_hook, &defrag_hook);
+
return ret;

cleanup_frag6:
@@ -124,6 +133,7 @@ static int __init nf_defrag_init(void)

static void __exit nf_defrag_fini(void)
{
+ rcu_assign_pointer(nf_defrag_v6_hook, NULL);
unregister_pernet_subsys(&defrag6_net_ops);
nf_ct_frag6_cleanup();
}
diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index 5f76ae86a656..34845155bb85 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -680,6 +680,12 @@ EXPORT_SYMBOL_GPL(nfnl_ct_hook);
const struct nf_ct_hook __rcu *nf_ct_hook __read_mostly;
EXPORT_SYMBOL_GPL(nf_ct_hook);

+const struct nf_defrag_v4_hook __rcu *nf_defrag_v4_hook __read_mostly;
+EXPORT_SYMBOL_GPL(nf_defrag_v4_hook);
+
+const struct nf_defrag_v6_hook __rcu *nf_defrag_v6_hook __read_mostly;
+EXPORT_SYMBOL_GPL(nf_defrag_v6_hook);
+
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
u8 nf_ctnetlink_has_listener;
EXPORT_SYMBOL_GPL(nf_ctnetlink_has_listener);
--
2.40.1


2023-06-26 23:12:25

by Daniel Xu

Subject: [PATCH bpf-next 6/7] bpf: selftests: Support custom type and proto for client sockets

Extend connect_to_fd_opts() to take optional type and protocol
parameters for the client socket. These parameters are useful when
opening a raw socket to send IP fragments.

Signed-off-by: Daniel Xu <[email protected]>
---
tools/testing/selftests/bpf/network_helpers.c | 21 +++++++++++++------
tools/testing/selftests/bpf/network_helpers.h | 2 ++
2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
index d5c78c08903b..910d5d0470e6 100644
--- a/tools/testing/selftests/bpf/network_helpers.c
+++ b/tools/testing/selftests/bpf/network_helpers.c
@@ -270,14 +270,23 @@ int connect_to_fd_opts(int server_fd, const struct network_helper_opts *opts)
opts = &default_opts;

optlen = sizeof(type);
- if (getsockopt(server_fd, SOL_SOCKET, SO_TYPE, &type, &optlen)) {
- log_err("getsockopt(SOL_TYPE)");
- return -1;
+
+ if (opts->type) {
+ type = opts->type;
+ } else {
+ if (getsockopt(server_fd, SOL_SOCKET, SO_TYPE, &type, &optlen)) {
+ log_err("getsockopt(SOL_TYPE)");
+ return -1;
+ }
}

- if (getsockopt(server_fd, SOL_SOCKET, SO_PROTOCOL, &protocol, &optlen)) {
- log_err("getsockopt(SOL_PROTOCOL)");
- return -1;
+ if (opts->proto) {
+ protocol = opts->proto;
+ } else {
+ if (getsockopt(server_fd, SOL_SOCKET, SO_PROTOCOL, &protocol, &optlen)) {
+ log_err("getsockopt(SOL_PROTOCOL)");
+ return -1;
+ }
}

addrlen = sizeof(addr);
diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
index 87894dc984dd..5eccc67d1a99 100644
--- a/tools/testing/selftests/bpf/network_helpers.h
+++ b/tools/testing/selftests/bpf/network_helpers.h
@@ -22,6 +22,8 @@ struct network_helper_opts {
int timeout_ms;
bool must_fail;
bool noconnect;
+ int type;
+ int proto;
};

/* ipv4 test vector */
--
2.40.1


2023-06-27 00:18:17

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next 1/7] tools: libbpf: add netfilter link attach helper

On Mon, Jun 26, 2023 at 4:02 PM Daniel Xu <[email protected]> wrote:
>
> Add new api function: bpf_program__attach_netfilter.
>
> It takes a bpf program (netfilter type) and a pointer to an option struct
> that contains the desired attachment (protocol family, priority, hook
> location, ...).
>
> It returns a pointer to a 'bpf_link' structure or NULL on error.
>
> Next patch adds new netfilter_basic test that uses this function to
> attach a program to a few pf/hook/priority combinations.
>
> Co-developed-by: Florian Westphal <[email protected]>
> Signed-off-by: Florian Westphal <[email protected]>
> Suggested-by: Andrii Nakryiko <[email protected]>
> Signed-off-by: Daniel Xu <[email protected]>
> ---
> tools/lib/bpf/bpf.c | 8 +++++++
> tools/lib/bpf/bpf.h | 6 +++++
> tools/lib/bpf/libbpf.c | 47 ++++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 15 +++++++++++++
> tools/lib/bpf/libbpf.map | 1 +
> 5 files changed, 77 insertions(+)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index ed86b37d8024..3b0da19715e1 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -741,6 +741,14 @@ int bpf_link_create(int prog_fd, int target_fd,
> if (!OPTS_ZEROED(opts, tracing))
> return libbpf_err(-EINVAL);
> break;
> + case BPF_NETFILTER:
> + attr.link_create.netfilter.pf = OPTS_GET(opts, netfilter.pf, 0);
> + attr.link_create.netfilter.hooknum = OPTS_GET(opts, netfilter.hooknum, 0);
> + attr.link_create.netfilter.priority = OPTS_GET(opts, netfilter.priority, 0);
> + attr.link_create.netfilter.flags = OPTS_GET(opts, netfilter.flags, 0);
> + if (!OPTS_ZEROED(opts, netfilter))
> + return libbpf_err(-EINVAL);
> + break;
> default:
> if (!OPTS_ZEROED(opts, flags))
> return libbpf_err(-EINVAL);
> diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> index 9aa0ee473754..c676295ab9bf 100644
> --- a/tools/lib/bpf/bpf.h
> +++ b/tools/lib/bpf/bpf.h
> @@ -349,6 +349,12 @@ struct bpf_link_create_opts {
> struct {
> __u64 cookie;
> } tracing;
> + struct {
> + __u32 pf;
> + __u32 hooknum;
> + __s32 priority;
> + __u32 flags;
> + } netfilter;
> };
> size_t :0;
> };
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 214f828ece6b..a8b9d5abb55f 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -11811,6 +11811,53 @@ static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_l
> return libbpf_get_error(*link);
> }
>
> +struct bpf_link *bpf_program__attach_netfilter(const struct bpf_program *prog,
> + const struct bpf_netfilter_opts *opts)
> +{
> + DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);

nit: let's use shorter LIBBPF_OPTS() macro

> + struct bpf_link *link;
> + int prog_fd, link_fd;
> +
> + if (!OPTS_VALID(opts, bpf_netfilter_opts))
> + return libbpf_err_ptr(-EINVAL);
> +
> + link_create_opts.netfilter.pf = OPTS_GET(opts, pf, 0);
> + link_create_opts.netfilter.hooknum = OPTS_GET(opts, hooknum, 0);
> + link_create_opts.netfilter.priority = OPTS_GET(opts, priority, 0);
> + link_create_opts.netfilter.flags = OPTS_GET(opts, flags, 0);
> +
> + prog_fd = bpf_program__fd(prog);
> + if (prog_fd < 0) {
> + pr_warn("prog '%s': can't attach before loaded\n", prog->name);
> + return libbpf_err_ptr(-EINVAL);
> + }
> +
> + link = calloc(1, sizeof(*link));
> + if (!link)
> + return libbpf_err_ptr(-ENOMEM);
> + link->detach = &bpf_link__detach_fd;
> +
> + link_fd = bpf_link_create(prog_fd, 0, BPF_NETFILTER, &link_create_opts);
> +
> + link->fd = ensure_good_fd(link_fd);

bpf_link_create() does ensure_good_fd() already, no need to do it
here, just assign result directly


> +
> + if (link->fd < 0) {
> + char errmsg[STRERR_BUFSIZE];
> +
> + link_fd = -errno;
> + free(link);
> + pr_warn("prog '%s': failed to attach to pf:%d,hooknum:%d:prio:%d: %s\n",

comma before prio? but also how necessary is to emit all these? what
if we add another argument to opts, would we add them here as well?

I'd just go with just "failed to attach netfilter" and keep it simple

> + prog->name,
> + OPTS_GET(opts, pf, 0),
> + OPTS_GET(opts, hooknum, 0),
> + OPTS_GET(opts, priority, 0),
> + libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg)));
> + return libbpf_err_ptr(link_fd);
> + }
> +
> + return link;
> +}
> +
> struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
> {
> struct bpf_link *link = NULL;
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 754da73c643b..10642ad69d76 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -718,6 +718,21 @@ LIBBPF_API struct bpf_link *
> bpf_program__attach_freplace(const struct bpf_program *prog,
> int target_fd, const char *attach_func_name);
>
> +struct bpf_netfilter_opts {
> + /* size of this struct, for forward/backward compatibility */
> + size_t sz;
> +
> + __u32 pf;
> + __u32 hooknum;
> + __s32 priority;
> + __u32 flags;
> +};
> +#define bpf_netfilter_opts__last_field flags
> +
> +LIBBPF_API struct bpf_link *
> +bpf_program__attach_netfilter(const struct bpf_program *prog,
> + const struct bpf_netfilter_opts *opts);
> +
> struct bpf_map;
>
> LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 7521a2fb7626..d9ec4407befa 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -395,4 +395,5 @@ LIBBPF_1.2.0 {
> LIBBPF_1.3.0 {
> global:
> bpf_obj_pin_opts;
> + bpf_program__attach_netfilter;
> } LIBBPF_1.2.0;
> --
> 2.40.1
>

2023-06-27 11:28:44

by Florian Westphal

Subject: Re: [PATCH bpf-next 3/7] netfilter: defrag: Add glue hooks for enabling/disabling defrag

Daniel Xu <[email protected]> wrote:
> diff --git a/net/ipv4/netfilter/nf_defrag_ipv4.c b/net/ipv4/netfilter/nf_defrag_ipv4.c
> index e61ea428ea18..436e629b0969 100644
> --- a/net/ipv4/netfilter/nf_defrag_ipv4.c
> +++ b/net/ipv4/netfilter/nf_defrag_ipv4.c
> @@ -7,6 +7,7 @@
> #include <linux/ip.h>
> #include <linux/netfilter.h>
> #include <linux/module.h>
> +#include <linux/rcupdate.h>
> #include <linux/skbuff.h>
> #include <net/netns/generic.h>
> #include <net/route.h>
> @@ -113,17 +114,24 @@ static void __net_exit defrag4_net_exit(struct net *net)
> }
> }
>
> +static struct nf_defrag_v4_hook defrag_hook = {
> + .enable = nf_defrag_ipv4_enable,
> + .disable = nf_defrag_ipv4_disable,
> +};

Nit: static const, same for v6.

> static struct pernet_operations defrag4_net_ops = {
> .exit = defrag4_net_exit,
> };
>
> static int __init nf_defrag_init(void)
> {
> + rcu_assign_pointer(nf_defrag_v4_hook, &defrag_hook);
> return register_pernet_subsys(&defrag4_net_ops);

register_pernet failure results in nf_defrag_v4_hook pointing to
garbage.
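
Something like this (untested sketch) would avoid that, by only publishing
the hook once registration has succeeded:

        static int __init nf_defrag_init(void)
        {
                int err;

                err = register_pernet_subsys(&defrag4_net_ops);
                if (err)
                        return err;

                rcu_assign_pointer(nf_defrag_v4_hook, &defrag_hook);
                return 0;
        }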

2023-06-27 11:32:14

by Florian Westphal

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Daniel Xu <[email protected]> wrote:
> Patches 1 & 2 are stolen from Florian. Hopefully he doesn't mind. There
> were some outstanding comments on the v2 [2] but it doesn't look like a
> v3 was ever submitted. I've addressed the comments and put them in this
> patchset because I needed them.

I did not submit a v3 because I had to wait for the bpf -> bpf-next
merge to get "bpf: netfilter: Add BPF_NETFILTER bpf_attach_type".

Now that has been done, I will do a v3 shortly.

2023-06-27 14:36:00

by Daniel Xu

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Hi Florian,

On Tue, Jun 27, 2023 at 12:48:20PM +0200, Florian Westphal wrote:
> Daniel Xu <[email protected]> wrote:
> > Patches 1 & 2 are stolen from Florian. Hopefully he doesn't mind. There
> > were some outstanding comments on the v2 [2] but it doesn't look like a
> > v3 was ever submitted. I've addressed the comments and put them in this
> > patchset because I needed them.
>
> I did not submit a v3 because I had to wait for the bpf -> bpf-next
> merge to get "bpf: netfilter: Add BPF_NETFILTER bpf_attach_type".
>
> Now that has been done so I will do v3 shortly.

Ack. Will wait for your patches to go in before sending my v2.

Thanks,
Daniel

2023-06-27 14:37:24

by Toke Høiland-Jørgensen

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

> The basic idea is we bump a refcnt on the netfilter defrag module and
> then run the bpf prog after the defrag module runs. This allows bpf
> progs to transparently see full, reassembled packets. The nice thing
> about this is that progs don't have to carry around logic to detect
> fragments.

One high-level comment after glancing through the series: Instead of
allocating a flag specifically for the defrag module, why not support
loading (and holding) arbitrary netfilter modules in the UAPI? If we
need to allocate a new flag every time someone wants to use a netfilter
module along with BPF we'll run out of flags pretty quickly :)

-Toke


2023-06-27 15:39:17

by Daniel Xu

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Hi Toke,

Thanks for taking a look at the patchset.

On Tue, Jun 27, 2023 at 04:25:13PM +0200, Toke Høiland-Jørgensen wrote:
> > The basic idea is we bump a refcnt on the netfilter defrag module and
> > then run the bpf prog after the defrag module runs. This allows bpf
> > progs to transparently see full, reassembled packets. The nice thing
> > about this is that progs don't have to carry around logic to detect
> > fragments.
>
> One high-level comment after glancing through the series: Instead of
> allocating a flag specifically for the defrag module, why not support
> loading (and holding) arbitrary netfilter modules in the UAPI? If we
> need to allocate a new flag every time someone wants to use a netfilter
> module along with BPF we'll run out of flags pretty quickly :)

I don't have enough context on netfilter in general to say if it'd be
generically useful -- perhaps Florian can comment on that.

However, I'm not sure such a mechanism removes the need for a flag. The
netfilter defrag modules still need to be called into to bump the refcnt.

The module could export some kfuncs to inc/dec the refcnt, but it'd be
rather odd for prog code to think about the lifetime of the attachment
(as inc/dec for _each_ prog execution seems wasteful and slow). AFAIK
all the other resource acquire/release APIs are for a single prog
execution.

So a flag for link attach feels the most natural to me. We could always
add a flag2 field or something right?

[...]

Thanks,
Daniel

2023-06-27 15:58:00

by Florian Westphal

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Toke Høiland-Jørgensen <[email protected]> wrote:
> > The basic idea is we bump a refcnt on the netfilter defrag module and
> > then run the bpf prog after the defrag module runs. This allows bpf
> > progs to transparently see full, reassembled packets. The nice thing
> > about this is that progs don't have to carry around logic to detect
> > fragments.
>
> One high-level comment after glancing through the series: Instead of
> allocating a flag specifically for the defrag module, why not support
> loading (and holding) arbitrary netfilter modules in the UAPI?

How would that work/look like?

defrag (and conntrack) need special handling because loading these
modules has no effect on the datapath.

Traditionally, yes, loading was enough, but now with netns being
ubiquitous we don't want these to get enabled unless needed.

Ignoring bpf, this happens when user adds nftables/iptables rules
that check for conntrack state, use some form of NAT or use e.g. tproxy.

For bpf a flag during link attachment seemed like the best way
to go.

At the moment I only see two flags for this, namely
"need defrag" and "need conntrack".

For conntrack, we MIGHT be able to not need a flag but
maybe verifier could "guess" based on kfuncs used.

But for defrag, I don't think it's good to add a dummy do-nothing
kfunc just for expressing the dependency on bpf prog side.

2023-06-29 12:27:36

by Toke Høiland-Jørgensen

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Florian Westphal <[email protected]> writes:

> Toke Høiland-Jørgensen <[email protected]> wrote:
>> > The basic idea is we bump a refcnt on the netfilter defrag module and
>> > then run the bpf prog after the defrag module runs. This allows bpf
>> > progs to transparently see full, reassembled packets. The nice thing
>> > about this is that progs don't have to carry around logic to detect
>> > fragments.
>>
>> One high-level comment after glancing through the series: Instead of
>> allocating a flag specifically for the defrag module, why not support
>> loading (and holding) arbitrary netfilter modules in the UAPI?
>
> How would that work/look like?
>
> defrag (and conntrack) need special handling because loading these
> modules has no effect on the datapath.
>
> Traditionally, yes, loading was enough, but now with netns being
> ubiquitous we don't want these to get enabled unless needed.
>
> Ignoring bpf, this happens when user adds nftables/iptables rules
> that check for conntrack state, use some form of NAT or use e.g. tproxy.
>
> For bpf a flag during link attachment seemed like the best way
> to go.

Right, I wasn't disputing that having a flag to load a module was a good
idea. On the contrary, I was thinking we'd need many more of these
if/when BPF wants to take advantage of more netfilter code. Say, if a
BPF module wants to call into TPROXY, that module would also need to be
loaded and kept around, no?

I was thinking something along the lines of just having a field
'netfilter_modules[]' where userspace could put an arbitrary number of
module names into, and we'd load all of them and put a ref into the
bpf_link. In principle, we could just have that be a string array of
module names, but that's probably a bit cumbersome (and, well, building
a generic module loader interface into the bpf_link API is not
desirable either). But maybe with an explicit ENUM?
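
Purely as a sketch of the idea (none of these names exist today, so take
them as placeholders):

        enum bpf_netfilter_module {
                BPF_NF_MODULE_DEFRAG_IPV4,
                BPF_NF_MODULE_DEFRAG_IPV6,
                BPF_NF_MODULE_CONNTRACK,
        };

        /* hypothetical extension of the link_create netfilter attrs */
        __aligned_u64   modules;     /* array of enum bpf_netfilter_module */
        __u32           modules_cnt;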

> At the moment I only see two flags for this, namely
> "need defrag" and "need conntrack".
>
> For conntrack, we MIGHT be able to not need a flag but
> maybe verifier could "guess" based on kfuncs used.

If the verifier can just identify the modules from the kfuncs and do the
whole thing automatically, that would of course be even better from an
ease-of-use PoV. Not sure what that would take, though? I seem to recall
having discussions around these lines before that fell down on various
points.

> But for defrag, I don't think its good to add a dummy do-nothing
> kfunc just for expressing the dependency on bpf prog side.

Agreed.

-Toke


2023-06-29 13:25:22

by Florian Westphal

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Toke Høiland-Jørgensen <[email protected]> wrote:
> Florian Westphal <[email protected]> writes:
> > For bpf a flag during link attachment seemed like the best way
> > to go.
>
> Right, I wasn't disputing that having a flag to load a module was a good
> idea. On the contrary, I was thinking we'd need many more of these
> if/when BPF wants to take advantage of more netfilter code. Say, if a
> BPF module wants to call into TPROXY, that module would also need to be
> loaded and kept around, no?

That seems to be a different topic that has nothing to do with
either bpf_link or netfilter?

If the program calls into say, TPROXY, then I'd expect that this needs
to be handled via kfuncs, no? Or if I misunderstand, what do you mean
by "call into TPROXY"?

And if so, thats already handled at bpf_prog load time, not
at link creation time, or do I miss something here?

AFAIU, if prog uses such kfuncs, verifier will grab needed module ref
and if module isn't loaded the kfuncs won't be found and program load
fails.

> I was thinking something along the lines of just having a field
> 'netfilter_modules[]' where userspace could put an arbitrary number of
> module names into, and we'd load all of them and put a ref into the
> bpf_link.

Why? I fail to understand the connection between bpf_link, netfilter
and modules. What makes netfilter so special that we need such a
module array, and what does that have to do with bpf_link interface?

> In principle, we could just have that be a string array of
> module names, but that's probably a bit cumbersome (and, well, building
> a generic module loader interface into the bpf_link API is not
> desirable either). But maybe with an explicit ENUM?

What functionality does that provide? I can't think of a single module
where this functionality is needed.

Either we're talking about future kfuncs, in which case, as far as I
understand how kfuncs work, this is handled at bpf_prog load time, not
when the bpf_link is created.

Or we are talking about implicit dependencies, where program doesn't
call function X but needs functionality handled earlier in the pipeline?

The only two instances I know where this is the case for netfilter
are defrag + conntrack.

> > For conntrack, we MIGHT be able to not need a flag but
> > maybe verifier could "guess" based on kfuncs used.
>
> If the verifier can just identify the modules from the kfuncs and do the
> whole thing automatically, that would of course be even better from an
> ease-of-use PoV. Not sure what that would take, though? I seem to recall
> having discussions around these lines before that fell down on various
> points.

AFAICS the conntrack kfuncs are wired to nf_conntrack already, so I
would expect that the module has to be loaded already for the verifier
to accept the program.

Those kfuncs are not yet exposed to NETFILTER program types.
Once they are, all that would be needed is for the netfilter bpf_link
to be able to detect that the prog is calling into those kfuncs, and
then make the needed register/unregister calls to enable the conntrack
hooks.

Whether that's better than using an explicit "please turn on conntrack for
me", I don't know. Perhaps future bpf programs could access skb->_nfct
directly without kfuncs so I'd say the flag is a better approach
from an uapi point of view.

2023-06-29 14:59:36

by Toke Høiland-Jørgensen

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Florian Westphal <[email protected]> writes:

> Toke Høiland-Jørgensen <[email protected]> wrote:
>> Florian Westphal <[email protected]> writes:
>> > For bpf a flag during link attachment seemed like the best way
>> > to go.
>>
>> Right, I wasn't disputing that having a flag to load a module was a good
>> idea. On the contrary, I was thinking we'd need many more of these
>> if/when BPF wants to take advantage of more netfilter code. Say, if a
>> BPF module wants to call into TPROXY, that module would also need to be
>> loaded and kept around, no?
>
> That seems to be a different topic that has nothing to do with
> either bpf_link or netfilter?
>
> If the program calls into say, TPROXY, then I'd expect that this needs
> to be handled via kfuncs, no? Or if I misunderstand, what do you mean
> by "call into TPROXY"?
>
> And if so, thats already handled at bpf_prog load time, not
> at link creation time, or do I miss something here?
>
> AFAIU, if prog uses such kfuncs, verifier will grab needed module ref
> and if module isn't loaded the kfuncs won't be found and program load
> fails.

...

> Or we are talking about implicit dependencies, where program doesn't
> call function X but needs functionality handled earlier in the pipeline?
>
> The only two instances I know where this is the case for netfilter
> is defrag + conntrack.

Well, I was kinda mixing the two cases above, sorry about that. The
"kfuncs locking the module" was not present in my mind when starting to
talk about that bit...

As for the original question, that's answered by your point above: If
those two modules are the only ones that are likely to need this, then a
flag for each is fine by me - that was the key piece I was missing (I'm
not a netfilter expert, as you well know).

Thanks for clarifying, and apologies for the muddled thinking! :)

-Toke


2023-06-29 15:33:39

by Florian Westphal

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

Toke Høiland-Jørgensen <[email protected]> wrote:
> Florian Westphal <[email protected]> writes:
> As for the original question, that's answered by your point above: If
> those two modules are the only ones that are likely to need this, then a
> flag for each is fine by me - that was the key piece I was missing (I'm
> not a netfilter expert, as you well know).

No problem, I was worried I was missing an important piece of kfunc
plumbing :-)

You do raise a good point though. With kfuncs, module is pinned.
So, should a "please turn on defrag for this bpf_link" pin
the defrag modules too?

For plain netfilter we don't do that, i.e. you can just do
"rmmod nf_defrag_ipv4". But I suspect that for the new bpf-link
defrag we probably should grab a reference to prevent unwanted
functionality breakage of the bpf prog.
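
Rough sketch of what I mean (assumes the hook struct from patch 3 grows an
owner field; not something this series does yet):

        struct nf_defrag_v4_hook {
                struct module *owner;
                int (*enable)(struct net *net);
                void (*disable)(struct net *net);
        };

        /* in the netfilter bpf_link, when BPF_F_NETFILTER_IP_DEFRAG is set */
        rcu_read_lock();
        v4_hook = rcu_dereference(nf_defrag_v4_hook);
        if (v4_hook && !try_module_get(v4_hook->owner))
                v4_hook = NULL;
        rcu_read_unlock();

        if (!v4_hook)
                return -EOPNOTSUPP;

        err = v4_hook->enable(net);     /* and module_put() on link release */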

2023-06-29 18:21:37

by Daniel Xu

Subject: Re: [PATCH bpf-next 0/7] Support defragmenting IPv(4|6) packets in BPF

On Thu, Jun 29, 2023 at 04:53:15PM +0200, Florian Westphal wrote:
> Toke Høiland-Jørgensen <[email protected]> wrote:
> > Florian Westphal <[email protected]> writes:
> > As for the original question, that's answered by your point above: If
> > those two modules are the only ones that are likely to need this, then a
> > flag for each is fine by me - that was the key piece I was missing (I'm
> > not a netfilter expert, as you well know).
>
> No problem, I was worried I was missing an important piece of kfunc
> plumbing :-)
>
> You do raise a good point though. With kfuncs, module is pinned.
> So, should a "please turn on defrag for this bpf_link" pin
> the defrag modules too?
>
> For plain netfilter we don't do that, i.e. you can just do
> "rmmod nf_defrag_ipv4". But I suspect that for the new bpf-link
> defrag we probably should grab a reference to prevent unwanted
> functionality breakage of the bpf prog.

Ack. Will add to v3.

Thanks,
Daniel