2021-05-21 20:26:04

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

The SO_REUSEPORT option allows sockets to listen on the same port and to
accept connections evenly. However, there is a defect in the current
implementation [1]. When a SYN packet is received, the connection is tied
to a listening socket. Accordingly, when the listener is closed, in-flight
requests during the three-way handshake and child sockets in the accept
queue are dropped even if other listeners on the same port could accept
such connections.
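
As a minimal illustration (not from this series), two SO_REUSEPORT
listeners sharing a port can be created as below; with the current code,
closing either fd aborts its queued children even though the other
listener could accept them:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Create one TCP listener with SO_REUSEPORT on the given port
     * (error handling elided for brevity).
     */
    static int make_listener(unsigned short port)
    {
            struct sockaddr_in addr;
            int one = 1;
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            bind(fd, (struct sockaddr *)&addr, sizeof(addr));
            listen(fd, 128);
            return fd;
    }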

This situation can happen when various server management tools restart
server processes such as nginx. For instance, when we change the nginx
configuration and restart it, it spins up new workers that respect the new
configuration and closes all listeners on the old workers, so the
in-flight ACKs of the 3WHS are answered with RSTs.

To avoid such a situation, users have to understand in depth how the
kernel handles SYN packets and implement connection draining by eBPF [2]
in one of the following two ways (a sketch of the first approach follows
the lists):

1. Stop routing SYN packets to the listener by eBPF.
2. Wait for all timers to expire to complete requests.
3. Accept connections until EAGAIN, then close the listener.

or

1. Start counting SYN packets and accept syscalls using the eBPF map.
2. Stop routing SYN packets.
3. Accept connections up to the count, then close the listener.
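
A sketch of step 1 of the first approach (an assumption of how [2]'s idea
could look, not the exact program: a hypothetical
BPF_MAP_TYPE_REUSEPORT_SOCKARRAY "new_listeners" is populated by the
restarted workers, and every SYN is steered there, away from the old
listeners):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } new_listeners SEC(".maps");

    SEC("sk_reuseport")
    int steer_to_new_workers(struct sk_reuseport_md *md)
    {
            __u32 key = 0;

            /* Step 1: route all new SYNs to the new worker's listener
             * so the old listener's queue stops growing and can be
             * drained.
             */
            bpf_sk_select_reuseport(md, &new_listeners, &key, 0);
            return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";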

Either way, we cannot close a listener immediately. Ideally, however, the
application should not have to drain the not-yet-accepted sockets because
the 3WHS and tying a connection to a listener are purely kernel behaviour.
The root cause lies within the kernel, so the issue should be addressed in
kernel space and should not be visible to user space. This patchset fixes
it so that users need not care about the kernel implementation or
connection draining. With this patchset, the kernel redistributes requests
and connections from a listener to the others in the same reuseport group
at/after close() or shutdown() syscalls.

Although some software does connection draining, migration still has
merits. For security reasons, such as replacing TLS certificates, we may
want to apply new settings as soon as possible and/or may not be able to
wait for connection draining. The sockets in the accept queue have not
started application sessions yet, so if we do not drain such sockets, they
can be handled by the newer listeners and can have a longer lifetime. It
is difficult to drain all connections in every case, but we can reduce the
number of aborted connections by migration. In that sense, migration is
always better than draining.

Moreover, auto-migration simplifies user space logic and also works well
in cases where we cannot modify and rebuild a server program to implement
the workaround.

Note that the source and destination listeners MUST have the same settings
at the socket API level; otherwise, applications may face inconsistency
and errors. In such a case, we have to use an eBPF program to select a
specific listener or to cancel migration, as sketched below.
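
A minimal policy sketch for that case (the sk_reuseport/migrate section
and the migrating_sk field are introduced by the later patches in this
series):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sk_reuseport/migrate")
    int cancel_migration(struct sk_reuseport_md *md)
    {
            /* migrating_sk is non-NULL only on the migration path, so
             * SK_DROP here cancels migration; the connection is
             * aborted as before, while normal SYN lookups stay
             * unaffected.
             */
            if (md->migrating_sk)
                    return SK_DROP;

            return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";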

Special thanks to Martin KaFai Lau for bouncing ideas and exchanging code
snippets along the way.


Link:
[1] The SO_REUSEPORT socket option
https://lwn.net/Articles/542629/

[2] Re: [PATCH 1/1] net: Add SO_REUSEPORT_LISTEN_OFF socket option as drain mode
https://lore.kernel.org/netdev/[email protected]/


Changelog:
v7:
* Prevent attaching/detaching a bpf prog via shutdown()ed socket
* Fix typo in commit messages
* Split selftest into subtests

v6:
https://lore.kernel.org/bpf/[email protected]/
* Change description in ip-sysctl.rst
* Test IPPROTO_TCP before reading tfo_listener
* Move reqsk_clone() to inet_connection_sock.c and rename to
inet_reqsk_clone()
* Pass req->rsk_listener to inet_csk_reqsk_queue_drop() and
reqsk_queue_removed() in the migration path of receiving ACK
* s/ARG_PTR_TO_SOCKET/PTR_TO_SOCKET/ in sk_reuseport_is_valid_access()
* In selftest, use atomic ops to increment global vars, drop ACK by XDP,
enable force fastopen, use "skel->bss" instead of "skel->data"

v5:
https://lore.kernel.org/bpf/[email protected]/
* Move initialization of sk_node from 6th to 5th patch
* Initialize sk_refcnt in reqsk_clone()
* Modify some definitions in reqsk_timer_handler()
* Validate in which path/state migration happens in selftest

v4:
https://lore.kernel.org/bpf/[email protected]/
* Make some functions and variables 'static' in selftest
* Remove 'scalability' from the cover letter

v3:
https://lore.kernel.org/bpf/[email protected]/
* Add sysctl back for reuseport_grow()
* Add helper functions to manage socks[]
* Separate migration related logic into functions: reuseport_resurrect(),
reuseport_stop_listen_sock(), reuseport_migrate_sock()
* Clone request_sock to be migrated
* Migrate request one by one
* Pass child socket to eBPF prog

v2:
https://lore.kernel.org/netdev/[email protected]/
* Do not save closed sockets in socks[]
* Revert 607904c357c61adf20b8fd18af765e501d61a385
* Extract inet_csk_reqsk_queue_migrate() into a single patch
* Change the spin_lock order to avoid lockdep warning
* Add static to __reuseport_select_sock
* Use refcount_inc_not_zero() in reuseport_select_migrated_sock()
* Set the default attach type in bpf_prog_load_check_attach()
* Define new proto of BPF_FUNC_get_socket_cookie
* Fix test to be compiled successfully
* Update commit messages

v1:
https://lore.kernel.org/netdev/[email protected]/
* Remove the sysctl option
* Enable migration if an eBPF program is not attached
* Add expected_attach_type to check if eBPF program can migrate sockets
* Add a field to tell migration type to eBPF program
* Support BPF_FUNC_get_socket_cookie to get the cookie of sk
* Allocate an empty skb if skb is NULL
* Pass req_to_sk(req)->sk_hash because listener's hash is zero
* Update commit messages and cover letter

RFC:
https://lore.kernel.org/netdev/[email protected]/


Kuniyuki Iwashima (11):
net: Introduce net.ipv4.tcp_migrate_req.
tcp: Add num_closed_socks to struct sock_reuseport.
tcp: Keep TCP_CLOSE sockets in the reuseport group.
tcp: Add reuseport_migrate_sock() to select a new listener.
tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.
tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.
tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.
bpf: Support BPF_FUNC_get_socket_cookie() for
BPF_PROG_TYPE_SK_REUSEPORT.
bpf: Support socket migration by eBPF.
libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT.
bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.

Documentation/networking/ip-sysctl.rst | 25 +
include/linux/bpf.h | 1 +
include/linux/filter.h | 2 +
include/net/netns/ipv4.h | 1 +
include/net/sock_reuseport.h | 9 +-
include/uapi/linux/bpf.h | 16 +
kernel/bpf/syscall.c | 13 +
net/core/filter.c | 23 +-
net/core/sock_reuseport.c | 362 ++++++++++--
net/ipv4/inet_connection_sock.c | 190 +++++-
net/ipv4/inet_hashtables.c | 2 +-
net/ipv4/sysctl_net_ipv4.c | 9 +
net/ipv4/tcp_ipv4.c | 20 +-
net/ipv4/tcp_minisocks.c | 4 +-
net/ipv6/tcp_ipv6.c | 14 +-
tools/include/uapi/linux/bpf.h | 16 +
tools/lib/bpf/libbpf.c | 5 +-
tools/testing/selftests/bpf/network_helpers.c | 2 +-
tools/testing/selftests/bpf/network_helpers.h | 1 +
.../bpf/prog_tests/migrate_reuseport.c | 555 ++++++++++++++++++
.../bpf/progs/test_migrate_reuseport.c | 135 +++++
21 files changed, 1336 insertions(+), 69 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
create mode 100644 tools/testing/selftests/bpf/progs/test_migrate_reuseport.c

--
2.30.2


2021-05-21 20:26:11

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group.

When we close a listening socket, to migrate its connections to another
listener in the same reuseport group, we have to handle two kinds of child
sockets: those that the listening socket has a reference to, and those
that it does not.

The former are the TCP_ESTABLISHED/TCP_SYN_RECV sockets, which sit in the
accept queue of their listening socket, so we can pop them out and push
them into another listener's queue at close() or shutdown() syscalls. The
latter, the TCP_NEW_SYN_RECV sockets, are still in the middle of the
three-way handshake and not in the accept queue; thus, we cannot access
such sockets at close() or shutdown() syscalls. Accordingly, we have to
migrate these immature sockets after their listening socket has been
closed.

Currently, if their listening socket has been closed, TCP_NEW_SYN_RECV
sockets are freed at receiving the final ACK or retransmitting SYN+ACKs.
At that time, if we could select a new listener from the same reuseport
group, no connection would be aborted. However, we cannot do that because
reuseport_detach_sock() sets sk_reuseport_cb to NULL and forbids access to
the reuseport group from closed sockets.

This patch allows TCP_CLOSE sockets to remain in the reuseport group and
to access it while any child socket references them. The point is that
reuseport_detach_sock() used to be called twice, from inet_unhash() and
sk_destruct(). This patch replaces the first reuseport_detach_sock() with
reuseport_stop_listen_sock(), which checks if the reuseport group is
capable of migration. If capable, it decrements num_socks, moves the
socket backwards in socks[], and increments num_closed_socks. When all
connections are migrated, sk_destruct() calls reuseport_detach_sock() to
remove the socket from socks[], decrement num_closed_socks, and set
sk_reuseport_cb to NULL.

By this change, closed or shutdown()ed sockets can keep sk_reuseport_cb.
Consequently, calling listen() after shutdown() can cause EADDRINUSE or
EBUSY in inet_csk_bind_conflict() or reuseport_add_sock(), which expect
such sockets not to have the reuseport group. Therefore, this patch also
loosens such validation rules so that a socket can listen again if it has
a reuseport group with num_closed_socks greater than 0, as in the sketch
below.
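
A userspace sketch of the sequence this relaxation keeps working (error
handling reduced to return codes):

    #include <sys/socket.h>

    /* shutdown() now leaves sk_reuseport_cb in place so that the
     * children can still be migrated; a later listen() on the same
     * socket must therefore not fail with EADDRINUSE/EBUSY.
     */
    static int restart_listener(int fd)
    {
            if (shutdown(fd, SHUT_RDWR))
                    return -1;

            return listen(fd, 128);
    }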

When such sockets listen again, we handle them in reuseport_resurrect().
If there is an existing reuseport group (the reuseport_add_sock() path),
we move the socket from the old group to the new one and free the old one
if necessary. If there is no existing group (the reuseport_alloc() path),
we allocate a new reuseport group, detach sk from the old one, and free it
if necessary, so as not to break the current shutdown behaviour:

- we cannot carry over the eBPF prog of shutdown()ed sockets
- we cannot attach/detach an eBPF prog to/from listening sockets via
shutdown()ed sockets

Note that when the number of sockets gets over U16_MAX, we try to detach a
closed socket randomly to make room for the new listening socket in
reuseport_grow().

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: Martin KaFai Lau <[email protected]>
---
include/net/sock_reuseport.h | 1 +
net/core/sock_reuseport.c | 184 ++++++++++++++++++++++++++++++--
net/ipv4/inet_connection_sock.c | 12 ++-
net/ipv4/inet_hashtables.c | 2 +-
4 files changed, 188 insertions(+), 11 deletions(-)

diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
index 0e558ca7afbf..1333d0cddfbc 100644
--- a/include/net/sock_reuseport.h
+++ b/include/net/sock_reuseport.h
@@ -32,6 +32,7 @@ extern int reuseport_alloc(struct sock *sk, bool bind_inany);
extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
bool bind_inany);
extern void reuseport_detach_sock(struct sock *sk);
+void reuseport_stop_listen_sock(struct sock *sk);
extern struct sock *reuseport_select_sock(struct sock *sk,
u32 hash,
struct sk_buff *skb,
diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index 079bd1aca0e7..ea0e900d3e97 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -17,6 +17,8 @@
DEFINE_SPINLOCK(reuseport_lock);

static DEFINE_IDA(reuseport_ida);
+static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
+ struct sock_reuseport *reuse, bool bind_inany);

static int reuseport_sock_index(struct sock *sk,
struct sock_reuseport *reuse,
@@ -61,6 +63,29 @@ static bool __reuseport_detach_sock(struct sock *sk,
return true;
}

+static void __reuseport_add_closed_sock(struct sock *sk,
+ struct sock_reuseport *reuse)
+{
+ reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk;
+ /* paired with READ_ONCE() in inet_csk_bind_conflict() */
+ WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1);
+}
+
+static bool __reuseport_detach_closed_sock(struct sock *sk,
+ struct sock_reuseport *reuse)
+{
+ int i = reuseport_sock_index(sk, reuse, true);
+
+ if (i == -1)
+ return false;
+
+ reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
+ /* paired with READ_ONCE() in inet_csk_bind_conflict() */
+ WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1);
+
+ return true;
+}
+
static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
{
unsigned int size = sizeof(struct sock_reuseport) +
@@ -92,6 +117,14 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
lockdep_is_held(&reuseport_lock));
if (reuse) {
+ if (reuse->num_closed_socks) {
+ /* sk was shutdown()ed before */
+ int err = reuseport_resurrect(sk, reuse, NULL, bind_inany);
+
+ spin_unlock_bh(&reuseport_lock);
+ return err;
+ }
+
/* Only set reuse->bind_inany if the bind_inany is true.
* Otherwise, it will overwrite the reuse->bind_inany
* which was set by the bind/hash path.
@@ -132,8 +165,23 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
u32 more_socks_size, i;

more_socks_size = reuse->max_socks * 2U;
- if (more_socks_size > U16_MAX)
+ if (more_socks_size > U16_MAX) {
+ if (reuse->num_closed_socks) {
+ /* Make room by removing a closed sk.
+ * The child has already been migrated.
+ * Only reqsk left at this point.
+ */
+ struct sock *sk;
+
+ sk = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
+ RCU_INIT_POINTER(sk->sk_reuseport_cb, NULL);
+ __reuseport_detach_closed_sock(sk, reuse);
+
+ return reuse;
+ }
+
return NULL;
+ }

more_reuse = __reuseport_alloc(more_socks_size);
if (!more_reuse)
@@ -199,7 +247,15 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
reuse = rcu_dereference_protected(sk2->sk_reuseport_cb,
lockdep_is_held(&reuseport_lock));
old_reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
- lockdep_is_held(&reuseport_lock));
+ lockdep_is_held(&reuseport_lock));
+ if (old_reuse && old_reuse->num_closed_socks) {
+ /* sk was shutdown()ed before */
+ int err = reuseport_resurrect(sk, old_reuse, reuse, reuse->bind_inany);
+
+ spin_unlock_bh(&reuseport_lock);
+ return err;
+ }
+
if (old_reuse && old_reuse->num_socks != 1) {
spin_unlock_bh(&reuseport_lock);
return -EBUSY;
@@ -224,6 +280,65 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
}
EXPORT_SYMBOL(reuseport_add_sock);

+static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
+ struct sock_reuseport *reuse, bool bind_inany)
+{
+ if (old_reuse == reuse) {
+ /* If sk was in the same reuseport group, just pop sk out of
+ * the closed section and push sk into the listening section.
+ */
+ __reuseport_detach_closed_sock(sk, old_reuse);
+ __reuseport_add_sock(sk, old_reuse);
+ return 0;
+ }
+
+ if (!reuse) {
+ /* In bind()/listen() path, we cannot carry over the eBPF prog
+ * for the shutdown()ed socket. In setsockopt() path, we should
+ * not change the eBPF prog of listening sockets by attaching a
+ * prog to the shutdown()ed socket. Thus, we will allocate a new
+ * reuseport group and detach sk from the old group.
+ */
+ int id;
+
+ reuse = __reuseport_alloc(INIT_SOCKS);
+ if (!reuse)
+ return -ENOMEM;
+
+ id = ida_alloc(&reuseport_ida, GFP_ATOMIC);
+ if (id < 0) {
+ kfree(reuse);
+ return id;
+ }
+
+ reuse->reuseport_id = id;
+ reuse->bind_inany = bind_inany;
+ } else {
+ /* Move sk from the old group to the new one if
+ * - all the other listeners in the old group were close()d or
+ * shutdown()ed, and then sk2 has listen()ed on the same port
+ * OR
+ * - sk listen()ed without bind() (or with autobind), was
+ * shutdown()ed, and then listen()s on another port which
+ * sk2 listen()s on.
+ */
+ if (reuse->num_socks + reuse->num_closed_socks == reuse->max_socks) {
+ reuse = reuseport_grow(reuse);
+ if (!reuse)
+ return -ENOMEM;
+ }
+ }
+
+ __reuseport_detach_closed_sock(sk, old_reuse);
+ __reuseport_add_sock(sk, reuse);
+ rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
+
+ if (old_reuse->num_socks + old_reuse->num_closed_socks == 0)
+ call_rcu(&old_reuse->rcu, reuseport_free_rcu);
+
+ return 0;
+}
+
void reuseport_detach_sock(struct sock *sk)
{
struct sock_reuseport *reuse;
@@ -232,6 +347,10 @@ void reuseport_detach_sock(struct sock *sk)
reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
lockdep_is_held(&reuseport_lock));

+ /* reuseport_grow() has detached a closed sk */
+ if (!reuse)
+ goto out;
+
/* Notify the bpf side. The sk may be added to a sockarray
* map. If so, sockarray logic will remove it from the map.
*
@@ -243,15 +362,49 @@ void reuseport_detach_sock(struct sock *sk)
bpf_sk_reuseport_detach(sk);

rcu_assign_pointer(sk->sk_reuseport_cb, NULL);
- __reuseport_detach_sock(sk, reuse);
+
+ if (!__reuseport_detach_closed_sock(sk, reuse))
+ __reuseport_detach_sock(sk, reuse);

if (reuse->num_socks + reuse->num_closed_socks == 0)
call_rcu(&reuse->rcu, reuseport_free_rcu);

+out:
spin_unlock_bh(&reuseport_lock);
}
EXPORT_SYMBOL(reuseport_detach_sock);

+void reuseport_stop_listen_sock(struct sock *sk)
+{
+ if (sk->sk_protocol == IPPROTO_TCP) {
+ struct sock_reuseport *reuse;
+
+ spin_lock_bh(&reuseport_lock);
+
+ reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
+ lockdep_is_held(&reuseport_lock));
+
+ if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req) {
+ /* Migration capable, move sk from the listening section
+ * to the closed section.
+ */
+ bpf_sk_reuseport_detach(sk);
+
+ __reuseport_detach_sock(sk, reuse);
+ __reuseport_add_closed_sock(sk, reuse);
+
+ spin_unlock_bh(&reuseport_lock);
+ return;
+ }
+
+ spin_unlock_bh(&reuseport_lock);
+ }
+
+ /* Not capable to do migration, detach immediately */
+ reuseport_detach_sock(sk);
+}
+EXPORT_SYMBOL(reuseport_stop_listen_sock);
+
static struct sock *run_bpf_filter(struct sock_reuseport *reuse, u16 socks,
struct bpf_prog *prog, struct sk_buff *skb,
int hdr_len)
@@ -351,9 +504,13 @@ int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog)
struct sock_reuseport *reuse;
struct bpf_prog *old_prog;

- if (sk_unhashed(sk) && sk->sk_reuseport) {
- int err = reuseport_alloc(sk, false);
+ if (sk_unhashed(sk)) {
+ int err;
+
+ if (!sk->sk_reuseport)
+ return -EINVAL;

+ err = reuseport_alloc(sk, false);
if (err)
return err;
} else if (!rcu_access_pointer(sk->sk_reuseport_cb)) {
@@ -379,13 +536,24 @@ int reuseport_detach_prog(struct sock *sk)
struct sock_reuseport *reuse;
struct bpf_prog *old_prog;

- if (!rcu_access_pointer(sk->sk_reuseport_cb))
- return sk->sk_reuseport ? -ENOENT : -EINVAL;
-
old_prog = NULL;
spin_lock_bh(&reuseport_lock);
reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
lockdep_is_held(&reuseport_lock));
+
+ /* reuse must be checked after acquiring the reuseport_lock
+ * because reuseport_grow() can detach a closed sk.
+ */
+ if (!reuse) {
+ spin_unlock_bh(&reuseport_lock);
+ return sk->sk_reuseport ? -ENOENT : -EINVAL;
+ }
+
+ if (sk_unhashed(sk) && reuse->num_closed_socks) {
+ spin_unlock_bh(&reuseport_lock);
+ return -ENOENT;
+ }
+
old_prog = rcu_replace_pointer(reuse->prog, old_prog,
lockdep_is_held(&reuseport_lock));
spin_unlock_bh(&reuseport_lock);
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index fd472eae4f5c..fa806e9167ec 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -135,10 +135,18 @@ static int inet_csk_bind_conflict(const struct sock *sk,
bool relax, bool reuseport_ok)
{
struct sock *sk2;
+ bool reuseport_cb_ok;
bool reuse = sk->sk_reuse;
bool reuseport = !!sk->sk_reuseport;
+ struct sock_reuseport *reuseport_cb;
kuid_t uid = sock_i_uid((struct sock *)sk);

+ rcu_read_lock();
+ reuseport_cb = rcu_dereference(sk->sk_reuseport_cb);
+ /* paired with WRITE_ONCE() in __reuseport_(add|detach)_closed_sock */
+ reuseport_cb_ok = !reuseport_cb || READ_ONCE(reuseport_cb->num_closed_socks);
+ rcu_read_unlock();
+
/*
* Unlike other sk lookup places we do not check
* for sk_net here, since _all_ the socks listed
@@ -156,14 +164,14 @@ static int inet_csk_bind_conflict(const struct sock *sk,
if ((!relax ||
(!reuseport_ok &&
reuseport && sk2->sk_reuseport &&
- !rcu_access_pointer(sk->sk_reuseport_cb) &&
+ reuseport_cb_ok &&
(sk2->sk_state == TCP_TIME_WAIT ||
uid_eq(uid, sock_i_uid(sk2))))) &&
inet_rcv_saddr_equal(sk, sk2, true))
break;
} else if (!reuseport_ok ||
!reuseport || !sk2->sk_reuseport ||
- rcu_access_pointer(sk->sk_reuseport_cb) ||
+ !reuseport_cb_ok ||
(sk2->sk_state != TCP_TIME_WAIT &&
!uid_eq(uid, sock_i_uid(sk2)))) {
if (inet_rcv_saddr_equal(sk, sk2, true))
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index c96866a53a66..80aeaf9e6e16 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -697,7 +697,7 @@ void inet_unhash(struct sock *sk)
goto unlock;

if (rcu_access_pointer(sk->sk_reuseport_cb))
- reuseport_detach_sock(sk);
+ reuseport_stop_listen_sock(sk);
if (ilb) {
inet_unhash2(hashinfo, sk);
ilb->count--;
--
2.30.2

2021-05-21 20:26:20

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 05/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

When we call close() or shutdown() for a listening socket, each child
socket in its accept queue is freed at inet_csk_listen_stop(). If we can
get a new listener by reuseport_migrate_sock() and clone the request by
inet_reqsk_clone(), we try to add it into the new listener's accept queue
by inet_csk_reqsk_queue_add(). If that fails, we have to call
__reqsk_free() to call sock_put() for its listener and free the cloned
request.

After putting the full socket into ehash, tcp_v[46]_syn_recv_sock() sets
ireq_opt/pktopts in struct inet_request_sock to NULL, but ipv6_opt can be
non-NULL. So, we have to set the old request's ipv6_opt to NULL to avoid a
double free.
Note that we do not update req->rsk_listener and instead clone the req to
migrate because another path may reference the original request. If we
protected it by RCU, we would need to add rcu_read_lock() in many places.

Link: https://lore.kernel.org/netdev/[email protected]/
Suggested-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
net/ipv4/inet_connection_sock.c | 71 ++++++++++++++++++++++++++++++++-
1 file changed, 70 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index fa806e9167ec..07e97b2f3635 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -695,6 +695,53 @@ int inet_rtx_syn_ack(const struct sock *parent, struct request_sock *req)
}
EXPORT_SYMBOL(inet_rtx_syn_ack);

+static struct request_sock *inet_reqsk_clone(struct request_sock *req,
+ struct sock *sk)
+{
+ struct sock *req_sk, *nreq_sk;
+ struct request_sock *nreq;
+
+ nreq = kmem_cache_alloc(req->rsk_ops->slab, GFP_ATOMIC | __GFP_NOWARN);
+ if (!nreq) {
+ /* paired with refcount_inc_not_zero() in reuseport_migrate_sock() */
+ sock_put(sk);
+ return NULL;
+ }
+
+ req_sk = req_to_sk(req);
+ nreq_sk = req_to_sk(nreq);
+
+ memcpy(nreq_sk, req_sk,
+ offsetof(struct sock, sk_dontcopy_begin));
+ memcpy(&nreq_sk->sk_dontcopy_end, &req_sk->sk_dontcopy_end,
+ req->rsk_ops->obj_size - offsetof(struct sock, sk_dontcopy_end));
+
+ sk_node_init(&nreq_sk->sk_node);
+ nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping;
+#ifdef CONFIG_XPS
+ nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping;
+#endif
+ nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;
+ refcount_set(&nreq_sk->sk_refcnt, 0);
+
+ nreq->rsk_listener = sk;
+
+ /* We need not acquire fastopenq->lock
+ * because the child socket is locked in inet_csk_listen_stop().
+ */
+ if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(nreq)->tfo_listener)
+ rcu_assign_pointer(tcp_sk(nreq->sk)->fastopen_rsk, nreq);
+
+ return nreq;
+}
+
+static void reqsk_migrate_reset(struct request_sock *req)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+ inet_rsk(req)->ipv6_opt = NULL;
+#endif
+}
+
/* return true if req was found in the ehash table */
static bool reqsk_queue_unlink(struct request_sock *req)
{
@@ -1036,14 +1083,36 @@ void inet_csk_listen_stop(struct sock *sk)
* of the variants now. --ANK
*/
while ((req = reqsk_queue_remove(queue, sk)) != NULL) {
- struct sock *child = req->sk;
+ struct sock *child = req->sk, *nsk;
+ struct request_sock *nreq;

local_bh_disable();
bh_lock_sock(child);
WARN_ON(sock_owned_by_user(child));
sock_hold(child);

+ nsk = reuseport_migrate_sock(sk, child, NULL);
+ if (nsk) {
+ nreq = inet_reqsk_clone(req, nsk);
+ if (nreq) {
+ refcount_set(&nreq->rsk_refcnt, 1);
+
+ if (inet_csk_reqsk_queue_add(nsk, nreq, child)) {
+ reqsk_migrate_reset(req);
+ } else {
+ reqsk_migrate_reset(nreq);
+ __reqsk_free(nreq);
+ }
+
+ /* inet_csk_reqsk_queue_add() has already
+ * called inet_child_forget() on failure case.
+ */
+ goto skip_child_forget;
+ }
+ }
+
inet_child_forget(sk, req, child);
+skip_child_forget:
reqsk_put(req);
bh_unlock_sock(child);
local_bh_enable();
--
2.30.2

2021-05-21 20:26:35

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 10/11] libbpf: Set expected_attach_type for BPF_PROG_TYPE_SK_REUSEPORT.

This commit introduces a new section (sk_reuseport/migrate) and sets the
expected_attach_type for each of the two sections of the
BPF_PROG_TYPE_SK_REUSEPORT program type.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
tools/lib/bpf/libbpf.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index dc4d5fe6d9d2..ab020cb73447 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9057,7 +9057,10 @@ static struct bpf_link *attach_iter(const struct bpf_sec_def *sec,

static const struct bpf_sec_def section_defs[] = {
BPF_PROG_SEC("socket", BPF_PROG_TYPE_SOCKET_FILTER),
- BPF_PROG_SEC("sk_reuseport", BPF_PROG_TYPE_SK_REUSEPORT),
+ BPF_EAPROG_SEC("sk_reuseport/migrate", BPF_PROG_TYPE_SK_REUSEPORT,
+ BPF_SK_REUSEPORT_SELECT_OR_MIGRATE),
+ BPF_EAPROG_SEC("sk_reuseport", BPF_PROG_TYPE_SK_REUSEPORT,
+ BPF_SK_REUSEPORT_SELECT),
SEC_DEF("kprobe/", KPROBE,
.attach_fn = attach_kprobe),
BPF_PROG_SEC("uprobe/", BPF_PROG_TYPE_KPROBE),
--
2.30.2

2021-05-21 20:26:55

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 01/11] net: Introduce net.ipv4.tcp_migrate_req.

This commit adds a new sysctl option: net.ipv4.tcp_migrate_req. If this
option is enabled or an eBPF program is attached, we will be able to
migrate child sockets from a listener to another in the same reuseport
group after close() or shutdown() syscalls.
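
A minimal usage sketch (assuming the standard procfs mapping of the
sysctl name, i.e. the equivalent of `sysctl -w
net.ipv4.tcp_migrate_req=1`):

    #include <stdio.h>

    int main(void)
    {
            /* Enable request migration for the current netns. */
            FILE *f = fopen("/proc/sys/net/ipv4/tcp_migrate_req", "w");

            if (!f)
                    return 1;

            fputs("1\n", f);
            fclose(f);
            return 0;
    }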

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Reviewed-by: Benjamin Herrenschmidt <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
Documentation/networking/ip-sysctl.rst | 25 +++++++++++++++++++++++++
include/net/netns/ipv4.h | 1 +
net/ipv4/sysctl_net_ipv4.c | 9 +++++++++
3 files changed, 35 insertions(+)

diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
index a5c250044500..b0436d3a4f11 100644
--- a/Documentation/networking/ip-sysctl.rst
+++ b/Documentation/networking/ip-sysctl.rst
@@ -761,6 +761,31 @@ tcp_syncookies - INTEGER
network connections you can set this knob to 2 to enable
unconditionally generation of syncookies.

+tcp_migrate_req - BOOLEAN
+ The incoming connection is tied to a specific listening socket when
+ the initial SYN packet is received during the three-way handshake.
+ When a listener is closed, in-flight request sockets during the
+ handshake and established sockets in the accept queue are aborted.
+
+ If the listener has SO_REUSEPORT enabled, other listeners on the
+ same port should have been able to accept such connections. This
+ option makes it possible to migrate such child sockets to another
+ listener after close() or shutdown().
+
+ The BPF_SK_REUSEPORT_SELECT_OR_MIGRATE type of eBPF program should
+ usually be used to define the policy to pick an alive listener.
+ Otherwise, the kernel will randomly pick an alive listener only if
+ this option is enabled.
+
+ Note that migration between listeners with different settings may
+ crash applications. Let's say migration happens from listener A to
+ B, and only B has TCP_SAVE_SYN enabled. B cannot read SYN data from
+ the requests migrated from A. To avoid such a situation, cancel
+ migration by returning SK_DROP in the type of eBPF program, or
+ disable this option.
+
+ Default: 0
+
tcp_fastopen - INTEGER
Enable TCP Fast Open (RFC7413) to send and accept data in the opening
SYN packet.
diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
index 746c80cd4257..b8620519eace 100644
--- a/include/net/netns/ipv4.h
+++ b/include/net/netns/ipv4.h
@@ -126,6 +126,7 @@ struct netns_ipv4 {
u8 sysctl_tcp_syn_retries;
u8 sysctl_tcp_synack_retries;
u8 sysctl_tcp_syncookies;
+ u8 sysctl_tcp_migrate_req;
int sysctl_tcp_reordering;
u8 sysctl_tcp_retries1;
u8 sysctl_tcp_retries2;
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 4fa77f182dcb..6f1e64d49232 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -960,6 +960,15 @@ static struct ctl_table ipv4_net_table[] = {
.proc_handler = proc_dou8vec_minmax,
},
#endif
+ {
+ .procname = "tcp_migrate_req",
+ .data = &init_net.ipv4.sysctl_tcp_migrate_req,
+ .maxlen = sizeof(u8),
+ .mode = 0644,
+ .proc_handler = proc_dou8vec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE
+ },
{
.procname = "tcp_reordering",
.data = &init_net.ipv4.sysctl_tcp_reordering,
--
2.30.2

2021-05-21 20:26:57

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 07/11] tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.

This patch also changes the code to call reuseport_migrate_sock() and
inet_reqsk_clone(), but unlike the other cases, we do not call
inet_reqsk_clone() right after reuseport_migrate_sock().

Currently, in the receive path for TCP_NEW_SYN_RECV sockets, the listener
has three kinds of refcnt:

(A) for listener itself
(B) carried by request_sock
(C) sock_hold() in tcp_v[46]_rcv()

While processing the req, (A) may disappear by close(listener). Also, (B)
can disappear by accept(listener) once we put the req into the accept
queue. So, we have to hold another refcnt (C) for the listener to prevent
use-after-free.

For socket migration, we call reuseport_migrate_sock() to select a listener
with (A) and to increment the new listener's refcnt in tcp_v[46]_rcv().
This refcnt corresponds to (C) and is cleaned up later in tcp_v[46]_rcv().
Thus we have to take another refcnt (B) for the newly cloned request_sock.

In inet_csk_complete_hashdance(), we hold the count (B), clone the req,
and try to put the new req into the accept queue. By migrating req after
winning the "own_req" race, we can avoid the following worst-case
situation:

CPU 1 looks up req1
CPU 2 looks up req1, unhashes it, then CPU 1 loses the race
CPU 3 looks up req2, unhashes it, then CPU 2 loses the race
...

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
net/ipv4/inet_connection_sock.c | 34 ++++++++++++++++++++++++++++++---
net/ipv4/tcp_ipv4.c | 20 +++++++++++++------
net/ipv4/tcp_minisocks.c | 4 ++--
net/ipv6/tcp_ipv6.c | 14 +++++++++++---
4 files changed, 58 insertions(+), 14 deletions(-)

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index c1f068464363..b795198f919a 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1113,12 +1113,40 @@ struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
struct request_sock *req, bool own_req)
{
if (own_req) {
- inet_csk_reqsk_queue_drop(sk, req);
- reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
- if (inet_csk_reqsk_queue_add(sk, req, child))
+ inet_csk_reqsk_queue_drop(req->rsk_listener, req);
+ reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
+
+ if (sk != req->rsk_listener) {
+ /* another listening sk has been selected,
+ * migrate the req to it.
+ */
+ struct request_sock *nreq;
+
+ /* hold a refcnt for the nreq->rsk_listener
+ * which is assigned in inet_reqsk_clone()
+ */
+ sock_hold(sk);
+ nreq = inet_reqsk_clone(req, sk);
+ if (!nreq) {
+ inet_child_forget(sk, req, child);
+ goto child_put;
+ }
+
+ refcount_set(&nreq->rsk_refcnt, 1);
+ if (inet_csk_reqsk_queue_add(sk, nreq, child)) {
+ reqsk_migrate_reset(req);
+ reqsk_put(req);
+ return child;
+ }
+
+ reqsk_migrate_reset(nreq);
+ __reqsk_free(nreq);
+ } else if (inet_csk_reqsk_queue_add(sk, req, child)) {
return child;
+ }
}
/* Too bad, another child took ownership of the request, undo. */
+child_put:
bh_unlock_sock(child);
sock_put(child);
return NULL;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 4f5b68a90be9..6cb8e269f1ab 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -2002,13 +2002,21 @@ int tcp_v4_rcv(struct sk_buff *skb)
goto csum_error;
}
if (unlikely(sk->sk_state != TCP_LISTEN)) {
- inet_csk_reqsk_queue_drop_and_put(sk, req);
- goto lookup;
+ nsk = reuseport_migrate_sock(sk, req_to_sk(req), skb);
+ if (!nsk) {
+ inet_csk_reqsk_queue_drop_and_put(sk, req);
+ goto lookup;
+ }
+ sk = nsk;
+ /* reuseport_migrate_sock() has already held one sk_refcnt
+ * before returning.
+ */
+ } else {
+ /* We own a reference on the listener, increase it again
+ * as we might lose it too soon.
+ */
+ sock_hold(sk);
}
- /* We own a reference on the listener, increase it again
- * as we might lose it too soon.
- */
- sock_hold(sk);
refcounted = true;
nsk = NULL;
if (!tcp_filter(sk, skb)) {
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 7513ba45553d..f258a4c0da71 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -775,8 +775,8 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
goto listen_overflow;

if (own_req && rsk_drop_req(req)) {
- reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
- inet_csk_reqsk_queue_drop_and_put(sk, req);
+ reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
+ inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
return child;
}

diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 4435fa342e7a..4d71464094b3 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1664,10 +1664,18 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
goto csum_error;
}
if (unlikely(sk->sk_state != TCP_LISTEN)) {
- inet_csk_reqsk_queue_drop_and_put(sk, req);
- goto lookup;
+ nsk = reuseport_migrate_sock(sk, req_to_sk(req), skb);
+ if (!nsk) {
+ inet_csk_reqsk_queue_drop_and_put(sk, req);
+ goto lookup;
+ }
+ sk = nsk;
+ /* reuseport_migrate_sock() has already held one sk_refcnt
+ * before returning.
+ */
+ } else {
+ sock_hold(sk);
}
- sock_hold(sk);
refcounted = true;
nsk = NULL;
if (!tcp_filter(sk, skb)) {
--
2.30.2

2021-05-21 20:27:04

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 09/11] bpf: Support socket migration by eBPF.

This patch introduces a new bpf_attach_type for BPF_PROG_TYPE_SK_REUSEPORT
to check if the attached eBPF program is capable of migrating sockets. When
the eBPF program is attached, we run it for socket migration if the
expected_attach_type is BPF_SK_REUSEPORT_SELECT_OR_MIGRATE or
net.ipv4.tcp_migrate_req is enabled.

Currently, the expected_attach_type is not enforced for the
BPF_PROG_TYPE_SK_REUSEPORT type of program. Thus, this commit follows the
earlier idea in the commit aac3fc320d94 ("bpf: Post-hooks for sys_bind") to
fix up the zero expected_attach_type in bpf_prog_load_fixup_attach_type().

Moreover, this patch adds a new field (migrating_sk) to sk_reuseport_md
to select a new listener based on the child socket. migrating_sk varies
depending on whether we are migrating a request in the accept queue or one
still in the 3WHS:

- accept_queue : sock (ESTABLISHED/SYN_RECV)
- 3WHS : request_sock (NEW_SYN_RECV)

In the eBPF program, we can select a new listener by
BPF_FUNC_sk_select_reuseport(). Also, we can cancel migration by returning
SK_DROP. This feature is useful when listeners have different settings at
the socket API level or when we want to free resources as soon as
possible. The possible outcomes are listed below, followed by a policy
sketch:

- SK_PASS with selected_sk: select it as the new listener
- SK_PASS with selected_sk NULL: fall back to the random selection
- SK_DROP: cancel the migration
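
A minimal policy sketch combining the three outcomes (the
BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map "targets" and how it is populated
are assumptions):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } targets SEC(".maps");

    SEC("sk_reuseport/migrate")
    int select_or_migrate(struct sk_reuseport_md *md)
    {
            __u32 key = 0;

            if (!md->migrating_sk)
                    return SK_PASS; /* normal SYN: default selection */

            /* Try an explicit target; if the lookup fails, selected_sk
             * stays NULL and SK_PASS falls back to the random
             * selection.
             */
            bpf_sk_select_reuseport(md, &targets, &key, 0);
            return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";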

There is one noteworthy point. We select a listening socket in three
places, but we do not have a struct skb at the time of closing a listener
or retransmitting a SYN+ACK. On the other hand, some helper functions do
not expect skb to be NULL (e.g. skb_header_pointer() in
BPF_FUNC_skb_load_bytes(), skb_tail_pointer() in
BPF_FUNC_skb_load_bytes_relative()). So we allocate an empty skb
temporarily before running the eBPF program.

Link: https://lore.kernel.org/netdev/[email protected]/
Link: https://lore.kernel.org/netdev/[email protected]/
Link: https://lore.kernel.org/netdev/[email protected]/
Suggested-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
include/linux/bpf.h | 1 +
include/linux/filter.h | 2 ++
include/uapi/linux/bpf.h | 15 +++++++++++++++
kernel/bpf/syscall.c | 13 +++++++++++++
net/core/filter.c | 13 ++++++++++++-
net/core/sock_reuseport.c | 34 ++++++++++++++++++++++++++++++----
tools/include/uapi/linux/bpf.h | 15 +++++++++++++++
7 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9dc44ba97584..5aa62785c078 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2026,6 +2026,7 @@ struct sk_reuseport_kern {
struct sk_buff *skb;
struct sock *sk;
struct sock *selected_sk;
+ struct sock *migrating_sk;
void *data_end;
u32 hash;
u32 reuseport_id;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 9a09547bc7ba..226f76c0b974 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -995,11 +995,13 @@ void bpf_warn_invalid_xdp_action(u32 act);
#ifdef CONFIG_INET
struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
struct bpf_prog *prog, struct sk_buff *skb,
+ struct sock *migrating_sk,
u32 hash);
#else
static inline struct sock *
bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
struct bpf_prog *prog, struct sk_buff *skb,
+ struct sock *migrating_sk,
u32 hash)
{
return NULL;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2488a62482bb..f2972836fecd 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -981,6 +981,8 @@ enum bpf_attach_type {
BPF_SK_LOOKUP,
BPF_XDP,
BPF_SK_SKB_VERDICT,
+ BPF_SK_REUSEPORT_SELECT,
+ BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
__MAX_BPF_ATTACH_TYPE
};

@@ -5393,7 +5395,20 @@ struct sk_reuseport_md {
__u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
__u32 bind_inany; /* Is sock bound to an INANY address? */
__u32 hash; /* A hash of the packet 4 tuples */
+ /* When reuse->migrating_sk is NULL, it is selecting a sk for the
+ * new incoming connection request (e.g. selecting a listen sk for
+ * the received SYN in the TCP case). reuse->sk is one of the sk
+ * in the reuseport group. The bpf prog can use reuse->sk to learn
+ * the local listening ip/port without looking into the skb.
+ *
+ * When reuse->migrating_sk is not NULL, reuse->sk is closed and
+ * reuse->migrating_sk is the socket that needs to be migrated
+ * to another listening socket. migrating_sk could be a fullsock
+ * sk that is fully established or a reqsk that is in-the-middle
+ * of 3-way handshake.
+ */
__bpf_md_ptr(struct bpf_sock *, sk);
+ __bpf_md_ptr(struct bpf_sock *, migrating_sk);
};

#define BPF_TAG_SIZE 8
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 1d1cd80a6e67..caf41567e4d1 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1946,6 +1946,11 @@ static void bpf_prog_load_fixup_attach_type(union bpf_attr *attr)
attr->expected_attach_type =
BPF_CGROUP_INET_SOCK_CREATE;
break;
+ case BPF_PROG_TYPE_SK_REUSEPORT:
+ if (!attr->expected_attach_type)
+ attr->expected_attach_type =
+ BPF_SK_REUSEPORT_SELECT;
+ break;
}
}

@@ -2029,6 +2034,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
if (expected_attach_type == BPF_SK_LOOKUP)
return 0;
return -EINVAL;
+ case BPF_PROG_TYPE_SK_REUSEPORT:
+ switch (expected_attach_type) {
+ case BPF_SK_REUSEPORT_SELECT:
+ case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE:
+ return 0;
+ default:
+ return -EINVAL;
+ }
case BPF_PROG_TYPE_SYSCALL:
case BPF_PROG_TYPE_EXT:
if (expected_attach_type)
diff --git a/net/core/filter.c b/net/core/filter.c
index b7818c707f60..89b5c5eefd5f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -10011,11 +10011,13 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
static void bpf_init_reuseport_kern(struct sk_reuseport_kern *reuse_kern,
struct sock_reuseport *reuse,
struct sock *sk, struct sk_buff *skb,
+ struct sock *migrating_sk,
u32 hash)
{
reuse_kern->skb = skb;
reuse_kern->sk = sk;
reuse_kern->selected_sk = NULL;
+ reuse_kern->migrating_sk = migrating_sk;
reuse_kern->data_end = skb->data + skb_headlen(skb);
reuse_kern->hash = hash;
reuse_kern->reuseport_id = reuse->reuseport_id;
@@ -10024,12 +10026,13 @@ static void bpf_init_reuseport_kern(struct sk_reuseport_kern *reuse_kern,

struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
struct bpf_prog *prog, struct sk_buff *skb,
+ struct sock *migrating_sk,
u32 hash)
{
struct sk_reuseport_kern reuse_kern;
enum sk_action action;

- bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, hash);
+ bpf_init_reuseport_kern(&reuse_kern, reuse, sk, skb, migrating_sk, hash);
action = BPF_PROG_RUN(prog, &reuse_kern);

if (action == SK_PASS)
@@ -10174,6 +10177,10 @@ sk_reuseport_is_valid_access(int off, int size,
info->reg_type = PTR_TO_SOCKET;
return size == sizeof(__u64);

+ case offsetof(struct sk_reuseport_md, migrating_sk):
+ info->reg_type = PTR_TO_SOCK_COMMON_OR_NULL;
+ return size == sizeof(__u64);
+
/* Fields that allow narrowing */
case bpf_ctx_range(struct sk_reuseport_md, eth_protocol):
if (size < sizeof_field(struct sk_buff, protocol))
@@ -10250,6 +10257,10 @@ static u32 sk_reuseport_convert_ctx_access(enum bpf_access_type type,
case offsetof(struct sk_reuseport_md, sk):
SK_REUSEPORT_LOAD_FIELD(sk);
break;
+
+ case offsetof(struct sk_reuseport_md, migrating_sk):
+ SK_REUSEPORT_LOAD_FIELD(migrating_sk);
+ break;
}

return insn - insn_buf;
diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index 193a3281ddda..8773368eaa78 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -378,13 +378,17 @@ void reuseport_stop_listen_sock(struct sock *sk)
{
if (sk->sk_protocol == IPPROTO_TCP) {
struct sock_reuseport *reuse;
+ struct bpf_prog *prog;

spin_lock_bh(&reuseport_lock);

reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
lockdep_is_held(&reuseport_lock));
+ prog = rcu_dereference_protected(reuse->prog,
+ lockdep_is_held(&reuseport_lock));

- if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req) {
+ if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req ||
+ (prog && prog->expected_attach_type == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE)) {
/* Migration capable, move sk from the listening section
* to the closed section.
*/
@@ -489,7 +493,7 @@ struct sock *reuseport_select_sock(struct sock *sk,
goto select_by_hash;

if (prog->type == BPF_PROG_TYPE_SK_REUSEPORT)
- sk2 = bpf_run_sk_reuseport(reuse, sk, prog, skb, hash);
+ sk2 = bpf_run_sk_reuseport(reuse, sk, prog, skb, NULL, hash);
else
sk2 = run_bpf_filter(reuse, socks, prog, skb, hdr_len);

@@ -520,6 +524,8 @@ struct sock *reuseport_migrate_sock(struct sock *sk,
{
struct sock_reuseport *reuse;
struct sock *nsk = NULL;
+ bool allocated = false;
+ struct bpf_prog *prog;
u16 socks;
u32 hash;

@@ -537,10 +543,30 @@ struct sock *reuseport_migrate_sock(struct sock *sk,
smp_rmb();

hash = migrating_sk->sk_hash;
- if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req)
+ prog = rcu_dereference(reuse->prog);
+ if (!prog || prog->expected_attach_type != BPF_SK_REUSEPORT_SELECT_OR_MIGRATE) {
+ if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req)
+ goto select_by_hash;
+ goto out;
+ }
+
+ if (!skb) {
+ skb = alloc_skb(0, GFP_ATOMIC);
+ if (!skb)
+ goto out;
+ allocated = true;
+ }
+
+ nsk = bpf_run_sk_reuseport(reuse, sk, prog, skb, migrating_sk, hash);
+
+ if (allocated)
+ kfree_skb(skb);
+
+select_by_hash:
+ if (!nsk)
nsk = reuseport_select_sock_by_hash(reuse, hash, socks);

- if (nsk && unlikely(!refcount_inc_not_zero(&nsk->sk_refcnt)))
+ if (IS_ERR_OR_NULL(nsk) || unlikely(!refcount_inc_not_zero(&nsk->sk_refcnt)))
nsk = NULL;

out:
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2488a62482bb..f2972836fecd 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -981,6 +981,8 @@ enum bpf_attach_type {
BPF_SK_LOOKUP,
BPF_XDP,
BPF_SK_SKB_VERDICT,
+ BPF_SK_REUSEPORT_SELECT,
+ BPF_SK_REUSEPORT_SELECT_OR_MIGRATE,
__MAX_BPF_ATTACH_TYPE
};

@@ -5393,7 +5395,20 @@ struct sk_reuseport_md {
__u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
__u32 bind_inany; /* Is sock bound to an INANY address? */
__u32 hash; /* A hash of the packet 4 tuples */
+ /* When reuse->migrating_sk is NULL, it is selecting a sk for the
+ * new incoming connection request (e.g. selecting a listen sk for
+ * the received SYN in the TCP case). reuse->sk is one of the sk
+ * in the reuseport group. The bpf prog can use reuse->sk to learn
+ * the local listening ip/port without looking into the skb.
+ *
+ * When reuse->migrating_sk is not NULL, reuse->sk is closed and
+ * reuse->migrating_sk is the socket that needs to be migrated
+ * to another listening socket. migrating_sk could be a fullsock
+ * sk that is fully established or a reqsk that is in-the-middle
+ * of 3-way handshake.
+ */
__bpf_md_ptr(struct bpf_sock *, sk);
+ __bpf_md_ptr(struct bpf_sock *, migrating_sk);
};

#define BPF_TAG_SIZE 8
--
2.30.2

2021-05-21 20:27:23

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 04/11] tcp: Add reuseport_migrate_sock() to select a new listener.

reuseport_migrate_sock() does the same check done in
reuseport_stop_listen_sock(). If the reuseport group is capable of
migration, reuseport_migrate_sock() selects a new listener by the child
socket hash and increments the listener's sk_refcnt beforehand. Thus, if
we fail in the migration, we have to decrement it later, as in the sketch
below.
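
A caller-side sketch of that refcount contract (do_migration() is a
hypothetical stand-in for the real migration steps):

    static void migrate_example(struct sock *sk, struct request_sock *req)
    {
            struct sock *nsk;

            nsk = reuseport_migrate_sock(sk, req_to_sk(req), NULL);
            if (!nsk)
                    return;                 /* no capable listener */

            if (!do_migration(nsk, req))    /* hypothetical failure path */
                    sock_put(nsk);          /* drop the refcnt taken above */
    }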

We will support migration by eBPF in the later commits.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Signed-off-by: Martin KaFai Lau <[email protected]>
---
include/net/sock_reuseport.h | 3 ++
net/core/sock_reuseport.c | 78 +++++++++++++++++++++++++++++-------
2 files changed, 67 insertions(+), 14 deletions(-)

diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
index 1333d0cddfbc..473b0b0fa4ab 100644
--- a/include/net/sock_reuseport.h
+++ b/include/net/sock_reuseport.h
@@ -37,6 +37,9 @@ extern struct sock *reuseport_select_sock(struct sock *sk,
u32 hash,
struct sk_buff *skb,
int hdr_len);
+struct sock *reuseport_migrate_sock(struct sock *sk,
+ struct sock *migrating_sk,
+ struct sk_buff *skb);
extern int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog);
extern int reuseport_detach_prog(struct sock *sk);

diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index ea0e900d3e97..193a3281ddda 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -44,7 +44,7 @@ static void __reuseport_add_sock(struct sock *sk,
struct sock_reuseport *reuse)
{
reuse->socks[reuse->num_socks] = sk;
- /* paired with smp_rmb() in reuseport_select_sock() */
+ /* paired with smp_rmb() in reuseport_(select|migrate)_sock() */
smp_wmb();
reuse->num_socks++;
}
@@ -435,6 +435,23 @@ static struct sock *run_bpf_filter(struct sock_reuseport *reuse, u16 socks,
return reuse->socks[index];
}

+static struct sock *reuseport_select_sock_by_hash(struct sock_reuseport *reuse,
+ u32 hash, u16 num_socks)
+{
+ int i, j;
+
+ i = j = reciprocal_scale(hash, num_socks);
+ while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
+ i++;
+ if (i >= num_socks)
+ i = 0;
+ if (i == j)
+ return NULL;
+ }
+
+ return reuse->socks[i];
+}
+
/**
* reuseport_select_sock - Select a socket from an SO_REUSEPORT group.
* @sk: First socket in the group.
@@ -478,19 +495,8 @@ struct sock *reuseport_select_sock(struct sock *sk,

select_by_hash:
/* no bpf or invalid bpf result: fall back to hash usage */
- if (!sk2) {
- int i, j;
-
- i = j = reciprocal_scale(hash, socks);
- while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
- i++;
- if (i >= socks)
- i = 0;
- if (i == j)
- goto out;
- }
- sk2 = reuse->socks[i];
- }
+ if (!sk2)
+ sk2 = reuseport_select_sock_by_hash(reuse, hash, socks);
}

out:
@@ -499,6 +505,50 @@ struct sock *reuseport_select_sock(struct sock *sk,
}
EXPORT_SYMBOL(reuseport_select_sock);

+/**
+ * reuseport_migrate_sock - Select a socket from an SO_REUSEPORT group.
+ * @sk: close()ed or shutdown()ed socket in the group.
+ * @migrating_sk: ESTABLISHED/SYN_RECV full socket in the accept queue or
+ * NEW_SYN_RECV request socket during 3WHS.
+ * @skb: skb to run through BPF filter.
+ * Returns a socket (with sk_refcnt +1) that should accept the child socket
+ * (or NULL on error).
+ */
+struct sock *reuseport_migrate_sock(struct sock *sk,
+ struct sock *migrating_sk,
+ struct sk_buff *skb)
+{
+ struct sock_reuseport *reuse;
+ struct sock *nsk = NULL;
+ u16 socks;
+ u32 hash;
+
+ rcu_read_lock();
+
+ reuse = rcu_dereference(sk->sk_reuseport_cb);
+ if (!reuse)
+ goto out;
+
+ socks = READ_ONCE(reuse->num_socks);
+ if (unlikely(!socks))
+ goto out;
+
+ /* paired with smp_wmb() in __reuseport_add_sock() */
+ smp_rmb();
+
+ hash = migrating_sk->sk_hash;
+ if (sock_net(sk)->ipv4.sysctl_tcp_migrate_req)
+ nsk = reuseport_select_sock_by_hash(reuse, hash, socks);
+
+ if (nsk && unlikely(!refcount_inc_not_zero(&nsk->sk_refcnt)))
+ nsk = NULL;
+
+out:
+ rcu_read_unlock();
+ return nsk;
+}
+EXPORT_SYMBOL(reuseport_migrate_sock);
+
int reuseport_attach_prog(struct sock *sk, struct bpf_prog *prog)
{
struct sock_reuseport *reuse;
--
2.30.2

2021-05-21 20:27:30

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 06/11] tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.

As with the preceding patch, this patch changes reqsk_timer_handler() to
call reuseport_migrate_sock() and inet_reqsk_clone() to migrate in-flight
requests at retransmitting SYN+ACKs. If we can select a new listener and
clone the request, we resume setting the SYN+ACK timer for the new req. If
we can set the timer, we call inet_ehash_insert() to unhash the old req and
put the new req into ehash.

The noteworthy point here is that by unhashing the old req, another CPU
processing it may lose the "own_req" race in tcp_v[46]_syn_recv_sock() and
drop the final ACK packet. However, the new timer will recover this
situation.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
net/ipv4/inet_connection_sock.c | 75 ++++++++++++++++++++++++++++++---
1 file changed, 68 insertions(+), 7 deletions(-)

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 07e97b2f3635..c1f068464363 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -735,10 +735,20 @@ static struct request_sock *inet_reqsk_clone(struct request_sock *req,
return nreq;
}

+static void reqsk_queue_migrated(struct request_sock_queue *queue,
+ const struct request_sock *req)
+{
+ if (req->num_timeout == 0)
+ atomic_inc(&queue->young);
+ atomic_inc(&queue->qlen);
+}
+
static void reqsk_migrate_reset(struct request_sock *req)
{
+ req->saved_syn = NULL;
+ inet_rsk(req)->ireq_opt = NULL;
#if IS_ENABLED(CONFIG_IPV6)
- inet_rsk(req)->ipv6_opt = NULL;
+ inet_rsk(req)->pktopts = NULL;
#endif
}

@@ -782,15 +792,39 @@ EXPORT_SYMBOL(inet_csk_reqsk_queue_drop_and_put);
static void reqsk_timer_handler(struct timer_list *t)
{
struct request_sock *req = from_timer(req, t, rsk_timer);
+ struct request_sock *nreq = NULL, *oreq = req;
struct sock *sk_listener = req->rsk_listener;
- struct net *net = sock_net(sk_listener);
- struct inet_connection_sock *icsk = inet_csk(sk_listener);
- struct request_sock_queue *queue = &icsk->icsk_accept_queue;
+ struct inet_connection_sock *icsk;
+ struct request_sock_queue *queue;
+ struct net *net;
int max_syn_ack_retries, qlen, expire = 0, resend = 0;

- if (inet_sk_state_load(sk_listener) != TCP_LISTEN)
- goto drop;
+ if (inet_sk_state_load(sk_listener) != TCP_LISTEN) {
+ struct sock *nsk;
+
+ nsk = reuseport_migrate_sock(sk_listener, req_to_sk(req), NULL);
+ if (!nsk)
+ goto drop;
+
+ nreq = inet_reqsk_clone(req, nsk);
+ if (!nreq)
+ goto drop;
+
+ /* The new timer for the cloned req can decrease the 2
+ * by calling inet_csk_reqsk_queue_drop_and_put(), so
+ * hold another count to prevent use-after-free and
+ * call reqsk_put() just before return.
+ */
+ refcount_set(&nreq->rsk_refcnt, 2 + 1);
+ timer_setup(&nreq->rsk_timer, reqsk_timer_handler, TIMER_PINNED);
+ reqsk_queue_migrated(&inet_csk(nsk)->icsk_accept_queue, req);
+
+ req = nreq;
+ sk_listener = nsk;
+ }

+ icsk = inet_csk(sk_listener);
+ net = sock_net(sk_listener);
max_syn_ack_retries = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_synack_retries;
/* Normally all the openreqs are young and become mature
* (i.e. converted to established socket) for first timeout.
@@ -809,6 +843,7 @@ static void reqsk_timer_handler(struct timer_list *t)
* embrions; and abort old ones without pity, if old
* ones are about to clog our table.
*/
+ queue = &icsk->icsk_accept_queue;
qlen = reqsk_queue_len(queue);
if ((qlen << 1) > max(8U, READ_ONCE(sk_listener->sk_max_ack_backlog))) {
int young = reqsk_queue_len_young(queue) << 1;
@@ -833,10 +868,36 @@ static void reqsk_timer_handler(struct timer_list *t)
atomic_dec(&queue->young);
timeo = min(TCP_TIMEOUT_INIT << req->num_timeout, TCP_RTO_MAX);
mod_timer(&req->rsk_timer, jiffies + timeo);
+
+ if (!nreq)
+ return;
+
+ if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) {
+ /* delete timer */
+ inet_csk_reqsk_queue_drop(sk_listener, nreq);
+ goto drop;
+ }
+
+ reqsk_migrate_reset(oreq);
+ reqsk_queue_removed(&inet_csk(oreq->rsk_listener)->icsk_accept_queue, oreq);
+ reqsk_put(oreq);
+
+ reqsk_put(nreq);
return;
}
+
drop:
- inet_csk_reqsk_queue_drop_and_put(sk_listener, req);
+ /* Even if we can clone the req, we may need not retransmit any more
+ * SYN+ACKs (nreq->num_timeout > max_syn_ack_retries, etc), or another
+ * CPU may win the "own_req" race so that inet_ehash_insert() fails.
+ */
+ if (nreq) {
+ reqsk_migrate_reset(nreq);
+ reqsk_queue_removed(queue, nreq);
+ __reqsk_free(nreq);
+ }
+
+ inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
}

static void reqsk_queue_hash_req(struct request_sock *req,
--
2.30.2

2021-05-21 20:27:37

by Iwashima, Kuniyuki

Subject: [PATCH v7 bpf-next 08/11] bpf: Support BPF_FUNC_get_socket_cookie() for BPF_PROG_TYPE_SK_REUSEPORT.

We will call sock_reuseport.prog for socket migration in the next commit,
so the eBPF program has to know which listener is closing to select a new
listener.

We can currently get a unique ID for each listener in userspace by
calling bpf_map_lookup_elem() for the BPF_MAP_TYPE_REUSEPORT_SOCKARRAY
map.

This patch makes the pointer of sk available in sk_reuseport_md so that
we can get the ID by BPF_FUNC_get_socket_cookie() in the eBPF program, as
in the sketch below.
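
A minimal sketch reading the closing listener's cookie inside the program
(the sk_reuseport/migrate section comes from a later patch in this
series; bpf_printk() is for illustration only):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("sk_reuseport/migrate")
    int show_closing_listener(struct sk_reuseport_md *md)
    {
            /* On the migration path, md->sk is the closed listener;
             * its cookie matches what userspace reads from the
             * BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map.
             */
            __u64 cookie = bpf_get_socket_cookie(md->sk);

            bpf_printk("closing listener cookie: %llu", cookie);
            return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";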

Link: https://lore.kernel.org/netdev/[email protected]/
Suggested-by: Martin KaFai Lau <[email protected]>
Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
include/uapi/linux/bpf.h | 1 +
net/core/filter.c | 10 ++++++++++
tools/include/uapi/linux/bpf.h | 1 +
3 files changed, 12 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 418b9b813d65..2488a62482bb 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5393,6 +5393,7 @@ struct sk_reuseport_md {
__u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
__u32 bind_inany; /* Is sock bound to an INANY address? */
__u32 hash; /* A hash of the packet 4 tuples */
+ __bpf_md_ptr(struct bpf_sock *, sk);
};

#define BPF_TAG_SIZE 8
diff --git a/net/core/filter.c b/net/core/filter.c
index 582ac196fd94..b7818c707f60 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -10139,6 +10139,8 @@ sk_reuseport_func_proto(enum bpf_func_id func_id,
return &sk_reuseport_load_bytes_proto;
case BPF_FUNC_skb_load_bytes_relative:
return &sk_reuseport_load_bytes_relative_proto;
+ case BPF_FUNC_get_socket_cookie:
+ return &bpf_get_socket_ptr_cookie_proto;
default:
return bpf_base_func_proto(func_id);
}
@@ -10168,6 +10170,10 @@ sk_reuseport_is_valid_access(int off, int size,
case offsetof(struct sk_reuseport_md, hash):
return size == size_default;

+ case offsetof(struct sk_reuseport_md, sk):
+ info->reg_type = PTR_TO_SOCKET;
+ return size == sizeof(__u64);
+
/* Fields that allow narrowing */
case bpf_ctx_range(struct sk_reuseport_md, eth_protocol):
if (size < sizeof_field(struct sk_buff, protocol))
@@ -10240,6 +10246,10 @@ static u32 sk_reuseport_convert_ctx_access(enum bpf_access_type type,
case offsetof(struct sk_reuseport_md, bind_inany):
SK_REUSEPORT_LOAD_FIELD(bind_inany);
break;
+
+ case offsetof(struct sk_reuseport_md, sk):
+ SK_REUSEPORT_LOAD_FIELD(sk);
+ break;
}

return insn - insn_buf;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 418b9b813d65..2488a62482bb 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5393,6 +5393,7 @@ struct sk_reuseport_md {
__u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */
__u32 bind_inany; /* Is sock bound to an INANY address? */
__u32 hash; /* A hash of the packet 4 tuples */
+ __bpf_md_ptr(struct bpf_sock *, sk);
};

#define BPF_TAG_SIZE 8
--
2.30.2

2021-05-21 20:41:26

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: [PATCH v7 bpf-next 02/11] tcp: Add num_closed_socks to struct sock_reuseport.

As noted in the following commit, a closed listener has to hold the
reference to the reuseport group for socket migration. This patch adds a
field (num_closed_socks) to struct sock_reuseport to manage closed sockets
within the same reuseport group. Moreover, this and the following commits
introduce some helper functions to split socks[] into two sections and keep
TCP_LISTEN and TCP_CLOSE sockets in each section. Like a double-ended
queue, we will place TCP_LISTEN sockets from the front and TCP_CLOSE
sockets from the end.

TCP_LISTEN----------> <-------TCP_CLOSE
+---+---+ --- +---+ --- +---+ --- +---+
| 0 | 1 | ... | i | ... | j | ... | k |
+---+---+ --- +---+ --- +---+ --- +---+

i = num_socks - 1
j = max_socks - num_closed_socks
k = max_socks - 1
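
In other words, the two sections can be walked as in the sketch below
(visit() is a hypothetical callback; the reuseport_sock_index() helper
added below implements exactly these bounds):

	int i;

	/* TCP_LISTEN section: socks[0] .. socks[num_socks - 1] */
	for (i = 0; i < reuse->num_socks; i++)
		visit(reuse->socks[i]);

	/* TCP_CLOSE section:
	 * socks[max_socks - num_closed_socks] .. socks[max_socks - 1]
	 */
	for (i = reuse->max_socks - reuse->num_closed_socks;
	     i < reuse->max_socks; i++)
		visit(reuse->socks[i]);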

This patch also extends reuseport_add_sock() and reuseport_grow() to
support num_closed_socks.

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
include/net/sock_reuseport.h | 5 ++-
net/core/sock_reuseport.c | 76 +++++++++++++++++++++++++++---------
2 files changed, 60 insertions(+), 21 deletions(-)

diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
index 505f1e18e9bf..0e558ca7afbf 100644
--- a/include/net/sock_reuseport.h
+++ b/include/net/sock_reuseport.h
@@ -13,8 +13,9 @@ extern spinlock_t reuseport_lock;
struct sock_reuseport {
struct rcu_head rcu;

- u16 max_socks; /* length of socks */
- u16 num_socks; /* elements in socks */
+ u16 max_socks; /* length of socks */
+ u16 num_socks; /* elements in socks */
+ u16 num_closed_socks; /* closed elements in socks */
/* The last synq overflow event timestamp of this
* reuse->socks[] group.
*/
diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index b065f0a103ed..079bd1aca0e7 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -18,6 +18,49 @@ DEFINE_SPINLOCK(reuseport_lock);

static DEFINE_IDA(reuseport_ida);

+static int reuseport_sock_index(struct sock *sk,
+ struct sock_reuseport *reuse,
+ bool closed)
+{
+ int left, right;
+
+ if (!closed) {
+ left = 0;
+ right = reuse->num_socks;
+ } else {
+ left = reuse->max_socks - reuse->num_closed_socks;
+ right = reuse->max_socks;
+ }
+
+ for (; left < right; left++)
+ if (reuse->socks[left] == sk)
+ return left;
+ return -1;
+}
+
+static void __reuseport_add_sock(struct sock *sk,
+ struct sock_reuseport *reuse)
+{
+ reuse->socks[reuse->num_socks] = sk;
+ /* paired with smp_rmb() in reuseport_select_sock() */
+ smp_wmb();
+ reuse->num_socks++;
+}
+
+static bool __reuseport_detach_sock(struct sock *sk,
+ struct sock_reuseport *reuse)
+{
+ int i = reuseport_sock_index(sk, reuse, false);
+
+ if (i == -1)
+ return false;
+
+ reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
+ reuse->num_socks--;
+
+ return true;
+}
+
static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
{
unsigned int size = sizeof(struct sock_reuseport) +
@@ -72,9 +115,8 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
}

reuse->reuseport_id = id;
- reuse->socks[0] = sk;
- reuse->num_socks = 1;
reuse->bind_inany = bind_inany;
+ __reuseport_add_sock(sk, reuse);
rcu_assign_pointer(sk->sk_reuseport_cb, reuse);

out:
@@ -98,6 +140,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
return NULL;

more_reuse->num_socks = reuse->num_socks;
+ more_reuse->num_closed_socks = reuse->num_closed_socks;
more_reuse->prog = reuse->prog;
more_reuse->reuseport_id = reuse->reuseport_id;
more_reuse->bind_inany = reuse->bind_inany;
@@ -105,9 +148,13 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)

memcpy(more_reuse->socks, reuse->socks,
reuse->num_socks * sizeof(struct sock *));
+ memcpy(more_reuse->socks +
+ (more_reuse->max_socks - more_reuse->num_closed_socks),
+ reuse->socks + reuse->num_socks,
+ reuse->num_closed_socks * sizeof(struct sock *));
more_reuse->synq_overflow_ts = READ_ONCE(reuse->synq_overflow_ts);

- for (i = 0; i < reuse->num_socks; ++i)
+ for (i = 0; i < reuse->max_socks; ++i)
rcu_assign_pointer(reuse->socks[i]->sk_reuseport_cb,
more_reuse);

@@ -158,7 +205,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
return -EBUSY;
}

- if (reuse->num_socks == reuse->max_socks) {
+ if (reuse->num_socks + reuse->num_closed_socks == reuse->max_socks) {
reuse = reuseport_grow(reuse);
if (!reuse) {
spin_unlock_bh(&reuseport_lock);
@@ -166,10 +213,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
}
}

- reuse->socks[reuse->num_socks] = sk;
- /* paired with smp_rmb() in reuseport_select_sock() */
- smp_wmb();
- reuse->num_socks++;
+ __reuseport_add_sock(sk, reuse);
rcu_assign_pointer(sk->sk_reuseport_cb, reuse);

spin_unlock_bh(&reuseport_lock);
@@ -183,7 +227,6 @@ EXPORT_SYMBOL(reuseport_add_sock);
void reuseport_detach_sock(struct sock *sk)
{
struct sock_reuseport *reuse;
- int i;

spin_lock_bh(&reuseport_lock);
reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
@@ -200,16 +243,11 @@ void reuseport_detach_sock(struct sock *sk)
bpf_sk_reuseport_detach(sk);

rcu_assign_pointer(sk->sk_reuseport_cb, NULL);
+ __reuseport_detach_sock(sk, reuse);
+
+ if (reuse->num_socks + reuse->num_closed_socks == 0)
+ call_rcu(&reuse->rcu, reuseport_free_rcu);

- for (i = 0; i < reuse->num_socks; i++) {
- if (reuse->socks[i] == sk) {
- reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
- reuse->num_socks--;
- if (reuse->num_socks == 0)
- call_rcu(&reuse->rcu, reuseport_free_rcu);
- break;
- }
- }
spin_unlock_bh(&reuseport_lock);
}
EXPORT_SYMBOL(reuseport_detach_sock);
@@ -274,7 +312,7 @@ struct sock *reuseport_select_sock(struct sock *sk,
prog = rcu_dereference(reuse->prog);
socks = READ_ONCE(reuse->num_socks);
if (likely(socks)) {
- /* paired with smp_wmb() in reuseport_add_sock() */
+ /* paired with smp_wmb() in __reuseport_add_sock() */
smp_rmb();

if (!prog || !skb)
--
2.30.2

2021-05-21 20:41:36

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: [PATCH v7 bpf-next 11/11] bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.

This patch adds a test for BPF_SK_REUSEPORT_SELECT_OR_MIGRATE and
removes 'static' from settimeo() in network_helpers.c.
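
For reference, the new test is expected to run through the usual BPF
selftests runner, along these lines (assuming a built selftests/bpf
tree; the subtest names come from the test__start_subtest() calls):

	cd tools/testing/selftests/bpf
	make
	./test_progs -t migrate_reuseport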

Signed-off-by: Kuniyuki Iwashima <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
---
tools/testing/selftests/bpf/network_helpers.c | 2 +-
tools/testing/selftests/bpf/network_helpers.h | 1 +
.../bpf/prog_tests/migrate_reuseport.c | 555 ++++++++++++++++++
.../bpf/progs/test_migrate_reuseport.c | 135 +++++
4 files changed, 692 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
create mode 100644 tools/testing/selftests/bpf/progs/test_migrate_reuseport.c

diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
index 12ee40284da0..2060bc122c53 100644
--- a/tools/testing/selftests/bpf/network_helpers.c
+++ b/tools/testing/selftests/bpf/network_helpers.c
@@ -40,7 +40,7 @@ struct ipv6_packet pkt_v6 = {
.tcp.doff = 5,
};

-static int settimeo(int fd, int timeout_ms)
+int settimeo(int fd, int timeout_ms)
{
struct timeval timeout = { .tv_sec = 3 };

diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
index 7205f8afdba1..5e0d51c07b63 100644
--- a/tools/testing/selftests/bpf/network_helpers.h
+++ b/tools/testing/selftests/bpf/network_helpers.h
@@ -33,6 +33,7 @@ struct ipv6_packet {
} __packed;
extern struct ipv6_packet pkt_v6;

+int settimeo(int fd, int timeout_ms);
int start_server(int family, int type, const char *addr, __u16 port,
int timeout_ms);
int connect_to_fd(int server_fd, int timeout_ms);
diff --git a/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
new file mode 100644
index 000000000000..0fa3f750567d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
@@ -0,0 +1,555 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Check if we can migrate child sockets.
+ *
+ * 1. call listen() for 4 server sockets.
+ * 2. call connect() for 25 client sockets.
+ * 3. call listen() for 1 server socket. (migration target)
+ * 4. update a map to migrate all child sockets
+ * to the last server socket (migrate_map[cookie] = 4)
+ * 5. call shutdown() for first 4 server sockets
+ * and migrate the requests in the accept queue
+ * to the last server socket.
+ * 6. call listen() for the second server socket.
+ * 7. call shutdown() for the last server
+ * and migrate the requests in the accept queue
+ * to the second server socket.
+ * 8. call listen() for the last server.
+ * 9. call shutdown() for the second server
+ * and migrate the requests in the accept queue
+ * to the last server socket.
+ * 10. call accept() for the last server socket.
+ *
+ * Author: Kuniyuki Iwashima <[email protected]>
+ */
+
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+#include "test_progs.h"
+#include "test_migrate_reuseport.skel.h"
+#include "network_helpers.h"
+
+#define IFINDEX_LO 1
+
+#define NR_SERVERS 5
+#define NR_CLIENTS (NR_SERVERS * 5)
+#define MIGRATED_TO (NR_SERVERS - 1)
+
+/* fastopenq->max_qlen and sk->sk_max_ack_backlog */
+#define QLEN (NR_CLIENTS * 5)
+
+#define MSG "Hello World\0"
+#define MSGLEN 12
+
+static struct migrate_reuseport_test_case {
+ const char *name;
+ __s64 servers[NR_SERVERS];
+ __s64 clients[NR_CLIENTS];
+ struct sockaddr_storage addr;
+ socklen_t addrlen;
+ int family;
+ int state;
+ bool drop_ack;
+ bool expire_synack_timer;
+ bool fastopen;
+ struct bpf_link *link;
+} test_cases[] = {
+ {
+ .name = "IPv4 TCP_ESTABLISHED inet_csk_listen_stop",
+ .family = AF_INET,
+ .state = BPF_TCP_ESTABLISHED,
+ .drop_ack = false,
+ .expire_synack_timer = false,
+ .fastopen = false,
+ },
+ {
+ .name = "IPv4 TCP_SYN_RECV inet_csk_listen_stop",
+ .family = AF_INET,
+ .state = BPF_TCP_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = false,
+ .fastopen = true,
+ },
+ {
+ .name = "IPv4 TCP_NEW_SYN_RECV reqsk_timer_handler",
+ .family = AF_INET,
+ .state = BPF_TCP_NEW_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = true,
+ .fastopen = false,
+ },
+ {
+ .name = "IPv4 TCP_NEW_SYN_RECV inet_csk_complete_hashdance",
+ .family = AF_INET,
+ .state = BPF_TCP_NEW_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = false,
+ .fastopen = false,
+ },
+ {
+ .name = "IPv6 TCP_ESTABLISHED inet_csk_listen_stop",
+ .family = AF_INET6,
+ .state = BPF_TCP_ESTABLISHED,
+ .drop_ack = false,
+ .expire_synack_timer = false,
+ .fastopen = false,
+ },
+ {
+ .name = "IPv6 TCP_SYN_RECV inet_csk_listen_stop",
+ .family = AF_INET6,
+ .state = BPF_TCP_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = false,
+ .fastopen = true,
+ },
+ {
+ .name = "IPv6 TCP_NEW_SYN_RECV reqsk_timer_handler",
+ .family = AF_INET6,
+ .state = BPF_TCP_NEW_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = true,
+ .fastopen = false,
+ },
+ {
+ .name = "IPv6 TCP_NEW_SYN_RECV inet_csk_complete_hashdance",
+ .family = AF_INET6,
+ .state = BPF_TCP_NEW_SYN_RECV,
+ .drop_ack = true,
+ .expire_synack_timer = false,
+ .fastopen = false,
+ }
+};
+
+static void init_fds(__s64 fds[], int len)
+{
+ int i;
+
+ for (i = 0; i < len; i++)
+ fds[i] = -1;
+}
+
+static void close_fds(__s64 fds[], int len)
+{
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (fds[i] != -1) {
+ close(fds[i]);
+ fds[i] = -1;
+ }
+ }
+}
+
+static int setup_fastopen(char *buf, int size, int *saved_len, bool restore)
+{
+ int err = 0, fd, len;
+
+ fd = open("/proc/sys/net/ipv4/tcp_fastopen", O_RDWR);
+ if (!ASSERT_NEQ(fd, -1, "open"))
+ return -1;
+
+ if (restore) {
+ len = write(fd, buf, *saved_len);
+ if (!ASSERT_EQ(len, *saved_len, "write - restore"))
+ err = -1;
+ } else {
+ *saved_len = read(fd, buf, size);
+ if (!ASSERT_GE(*saved_len, 1, "read")) {
+ err = -1;
+ goto close;
+ }
+
+ err = lseek(fd, 0, SEEK_SET);
+ if (!ASSERT_OK(err, "lseek"))
+ goto close;
+
+ /* 519 = (TFO_CLIENT_ENABLE | TFO_SERVER_ENABLE |
+ * TFO_CLIENT_NO_COOKIE | TFO_SERVER_COOKIE_NOT_REQD)
+ * = 0x1 | 0x2 | 0x4 | 0x200
+ */
+ len = write(fd, "519", 3);
+ if (!ASSERT_EQ(len, 3, "write - setup"))
+ err = -1;
+ }
+
+close:
+ close(fd);
+
+ return err;
+}
+
+static int drop_ack(struct migrate_reuseport_test_case *test_case,
+ struct test_migrate_reuseport *skel)
+{
+ if (test_case->family == AF_INET)
+ skel->bss->server_port = ((struct sockaddr_in *)
+ &test_case->addr)->sin_port;
+ else
+ skel->bss->server_port = ((struct sockaddr_in6 *)
+ &test_case->addr)->sin6_port;
+
+ test_case->link = bpf_program__attach_xdp(skel->progs.drop_ack,
+ IFINDEX_LO);
+ if (!ASSERT_OK_PTR(test_case->link, "bpf_program__attach_xdp"))
+ return -1;
+
+ return 0;
+}
+
+static int pass_ack(struct migrate_reuseport_test_case *test_case)
+{
+ int err;
+
+ err = bpf_link__detach(test_case->link);
+ if (!ASSERT_OK(err, "bpf_link__detach"))
+ return -1;
+
+ test_case->link = NULL;
+
+ return 0;
+}
+
+static int start_servers(struct migrate_reuseport_test_case *test_case,
+ struct test_migrate_reuseport *skel)
+{
+ int i, err, prog_fd, reuseport = 1, qlen = QLEN;
+
+ prog_fd = bpf_program__fd(skel->progs.migrate_reuseport);
+
+ make_sockaddr(test_case->family,
+ test_case->family == AF_INET ? "127.0.0.1" : "::1", 0,
+ &test_case->addr, &test_case->addrlen);
+
+ for (i = 0; i < NR_SERVERS; i++) {
+ test_case->servers[i] = socket(test_case->family, SOCK_STREAM,
+ IPPROTO_TCP);
+ if (!ASSERT_NEQ(test_case->servers[i], -1, "socket"))
+ return -1;
+
+ err = setsockopt(test_case->servers[i], SOL_SOCKET,
+ SO_REUSEPORT, &reuseport, sizeof(reuseport));
+ if (!ASSERT_OK(err, "setsockopt - SO_REUSEPORT"))
+ return -1;
+
+ err = bind(test_case->servers[i],
+ (struct sockaddr *)&test_case->addr,
+ test_case->addrlen);
+ if (!ASSERT_OK(err, "bind"))
+ return -1;
+
+ if (i == 0) {
+ err = setsockopt(test_case->servers[i], SOL_SOCKET,
+ SO_ATTACH_REUSEPORT_EBPF,
+ &prog_fd, sizeof(prog_fd));
+ if (!ASSERT_OK(err,
+ "setsockopt - SO_ATTACH_REUSEPORT_EBPF"))
+ return -1;
+
+ err = getsockname(test_case->servers[i],
+ (struct sockaddr *)&test_case->addr,
+ &test_case->addrlen);
+ if (!ASSERT_OK(err, "getsockname"))
+ return -1;
+ }
+
+ if (test_case->fastopen) {
+ err = setsockopt(test_case->servers[i],
+ SOL_TCP, TCP_FASTOPEN,
+ &qlen, sizeof(qlen));
+ if (!ASSERT_OK(err, "setsockopt - TCP_FASTOPEN"))
+ return -1;
+ }
+
+ /* All requests will be tied to the first four listeners */
+ if (i != MIGRATED_TO) {
+ err = listen(test_case->servers[i], qlen);
+ if (!ASSERT_OK(err, "listen"))
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int start_clients(struct migrate_reuseport_test_case *test_case)
+{
+ char buf[MSGLEN] = MSG;
+ int i, err;
+
+ for (i = 0; i < NR_CLIENTS; i++) {
+ test_case->clients[i] = socket(test_case->family, SOCK_STREAM,
+ IPPROTO_TCP);
+ if (!ASSERT_NEQ(test_case->clients[i], -1, "socket"))
+ return -1;
+
+ /* The attached XDP program drops only the final ACK, so
+ * clients will transition to TCP_ESTABLISHED immediately.
+ */
+ err = settimeo(test_case->clients[i], 100);
+ if (!ASSERT_OK(err, "settimeo"))
+ return -1;
+
+ if (test_case->fastopen) {
+ int fastopen = 1;
+
+ err = setsockopt(test_case->clients[i], IPPROTO_TCP,
+ TCP_FASTOPEN_CONNECT, &fastopen,
+ sizeof(fastopen));
+ if (!ASSERT_OK(err,
+ "setsockopt - TCP_FASTOPEN_CONNECT"))
+ return -1;
+ }
+
+ err = connect(test_case->clients[i],
+ (struct sockaddr *)&test_case->addr,
+ test_case->addrlen);
+ if (!ASSERT_OK(err, "connect"))
+ return -1;
+
+ err = write(test_case->clients[i], buf, MSGLEN);
+ if (!ASSERT_EQ(err, MSGLEN, "write"))
+ return -1;
+ }
+
+ return 0;
+}
+
+static int update_maps(struct migrate_reuseport_test_case *test_case,
+ struct test_migrate_reuseport *skel)
+{
+ int i, err, migrated_to = MIGRATED_TO;
+ int reuseport_map_fd, migrate_map_fd;
+ __u64 value;
+
+ reuseport_map_fd = bpf_map__fd(skel->maps.reuseport_map);
+ migrate_map_fd = bpf_map__fd(skel->maps.migrate_map);
+
+ for (i = 0; i < NR_SERVERS; i++) {
+ value = (__u64)test_case->servers[i];
+ err = bpf_map_update_elem(reuseport_map_fd, &i, &value,
+ BPF_NOEXIST);
+ if (!ASSERT_OK(err, "bpf_map_update_elem - reuseport_map"))
+ return -1;
+
+ err = bpf_map_lookup_elem(reuseport_map_fd, &i, &value);
+ if (!ASSERT_OK(err, "bpf_map_lookup_elem - reuseport_map"))
+ return -1;
+
+ err = bpf_map_update_elem(migrate_map_fd, &value, &migrated_to,
+ BPF_NOEXIST);
+ if (!ASSERT_OK(err, "bpf_map_update_elem - migrate_map"))
+ return -1;
+ }
+
+ return 0;
+}
+
+static int migrate_dance(struct migrate_reuseport_test_case *test_case)
+{
+ int i, err;
+
+ /* Migrate TCP_ESTABLISHED and TCP_SYN_RECV requests
+ * to the last listener based on eBPF.
+ */
+ for (i = 0; i < MIGRATED_TO; i++) {
+ err = shutdown(test_case->servers[i], SHUT_RDWR);
+ if (!ASSERT_OK(err, "shutdown"))
+ return -1;
+ }
+
+ /* No dance for TCP_NEW_SYN_RECV to migrate based on eBPF */
+ if (test_case->state == BPF_TCP_NEW_SYN_RECV)
+ return 0;
+
+ /* Note that we use the second listener instead of the
+ * first one here.
+ *
+ * The first listener is bind()ed with port 0, and
+ * SOCK_BINDPORT_LOCK is not set in sk_userlocks, so
+ * calling listen() again will bind() the first listener
+ * on a new ephemeral port and detach it from the existing
+ * reuseport group. (See: __inet_bind(), tcp_set_state())
+ *
+ * OTOH, the second one is bind()ed with a specific port,
+ * and SOCK_BINDPORT_LOCK is set. Thus, re-listen() will
+ * resurrect the listener on the existing reuseport group.
+ */
+ err = listen(test_case->servers[1], QLEN);
+ if (!ASSERT_OK(err, "listen"))
+ return -1;
+
+ /* Migrate from the last listener to the second one.
+ *
+ * All listeners were detached out of the reuseport_map,
+ * so migration will be done by kernel random pick from here.
+ */
+ err = shutdown(test_case->servers[MIGRATED_TO], SHUT_RDWR);
+ if (!ASSERT_OK(err, "shutdown"))
+ return -1;
+
+ /* Back to the existing reuseport group */
+ err = listen(test_case->servers[MIGRATED_TO], QLEN);
+ if (!ASSERT_OK(err, "listen"))
+ return -1;
+
+ /* Migrate back to the last one from the second one */
+ err = shutdown(test_case->servers[1], SHUT_RDWR);
+ if (!ASSERT_OK(err, "shutdown"))
+ return -1;
+
+ return 0;
+}
+
+static void count_requests(struct migrate_reuseport_test_case *test_case,
+ struct test_migrate_reuseport *skel)
+{
+ struct sockaddr_storage addr;
+ socklen_t len = sizeof(addr);
+ int err, cnt = 0, client;
+ char buf[MSGLEN];
+
+ err = settimeo(test_case->servers[MIGRATED_TO], 4000);
+ if (!ASSERT_OK(err, "settimeo"))
+ goto out;
+
+ for (; cnt < NR_CLIENTS; cnt++) {
+ client = accept(test_case->servers[MIGRATED_TO],
+ (struct sockaddr *)&addr, &len);
+ if (!ASSERT_NEQ(client, -1, "accept"))
+ goto out;
+
+ memset(buf, 0, MSGLEN);
+ read(client, &buf, MSGLEN);
+ close(client);
+
+ if (!ASSERT_STREQ(buf, MSG, "read"))
+ goto out;
+ }
+
+out:
+ ASSERT_EQ(cnt, NR_CLIENTS, "count in userspace");
+
+ switch (test_case->state) {
+ case BPF_TCP_ESTABLISHED:
+ cnt = skel->bss->migrated_at_close;
+ break;
+ case BPF_TCP_SYN_RECV:
+ cnt = skel->bss->migrated_at_close_fastopen;
+ break;
+ case BPF_TCP_NEW_SYN_RECV:
+ if (test_case->expire_synack_timer)
+ cnt = skel->bss->migrated_at_send_synack;
+ else
+ cnt = skel->bss->migrated_at_recv_ack;
+ break;
+ default:
+ cnt = 0;
+ }
+
+ ASSERT_EQ(cnt, NR_CLIENTS, "count in BPF prog");
+}
+
+static void run_test(struct migrate_reuseport_test_case *test_case,
+ struct test_migrate_reuseport *skel)
+{
+ int err, saved_len;
+ char buf[16];
+
+ skel->bss->migrated_at_close = 0;
+ skel->bss->migrated_at_close_fastopen = 0;
+ skel->bss->migrated_at_send_synack = 0;
+ skel->bss->migrated_at_recv_ack = 0;
+
+ init_fds(test_case->servers, NR_SERVERS);
+ init_fds(test_case->clients, NR_CLIENTS);
+
+ if (test_case->fastopen) {
+ memset(buf, 0, sizeof(buf));
+
+ err = setup_fastopen(buf, sizeof(buf), &saved_len, false);
+ if (!ASSERT_OK(err, "setup_fastopen - setup"))
+ return;
+ }
+
+ err = start_servers(test_case, skel);
+ if (!ASSERT_OK(err, "start_servers"))
+ goto close_servers;
+
+ if (test_case->drop_ack) {
+ /* Drop the final ACK of the 3-way handshake and stick the
+ * in-flight requests on TCP_SYN_RECV or TCP_NEW_SYN_RECV.
+ */
+ err = drop_ack(test_case, skel);
+ if (!ASSERT_OK(err, "drop_ack"))
+ goto close_servers;
+ }
+
+ /* Tie requests to the first four listeners */
+ err = start_clients(test_case);
+ if (!ASSERT_OK(err, "start_clients"))
+ goto close_clients;
+
+ err = listen(test_case->servers[MIGRATED_TO], QLEN);
+ if (!ASSERT_OK(err, "listen"))
+ goto close_clients;
+
+ err = update_maps(test_case, skel);
+ if (!ASSERT_OK(err, "fill_maps"))
+ goto close_clients;
+
+ /* Migrate the requests in the accept queue only.
+ * TCP_NEW_SYN_RECV requests are not migrated at this point.
+ */
+ err = migrate_dance(test_case);
+ if (!ASSERT_OK(err, "migrate_dance"))
+ goto close_clients;
+
+ if (test_case->expire_synack_timer) {
+ /* Wait for SYN+ACK timers to expire so that
+ * reqsk_timer_handler() migrates TCP_NEW_SYN_RECV requests.
+ */
+ sleep(1);
+ }
+
+ if (test_case->link) {
+ /* Resume 3WHS and migrate TCP_NEW_SYN_RECV requests */
+ err = pass_ack(test_case);
+ if (!ASSERT_OK(err, "pass_ack"))
+ goto close_clients;
+ }
+
+ count_requests(test_case, skel);
+
+close_clients:
+ close_fds(test_case->clients, NR_CLIENTS);
+
+ if (test_case->link) {
+ err = pass_ack(test_case);
+ ASSERT_OK(err, "pass_ack - clean up");
+ }
+
+close_servers:
+ close_fds(test_case->servers, NR_SERVERS);
+
+ if (test_case->fastopen) {
+ err = setup_fastopen(buf, sizeof(buf), &saved_len, true);
+ ASSERT_OK(err, "setup_fastopen - restore");
+ }
+}
+
+void test_migrate_reuseport(void)
+{
+ struct test_migrate_reuseport *skel;
+ int i;
+
+ skel = test_migrate_reuseport__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
+ test__start_subtest(test_cases[i].name);
+ run_test(&test_cases[i], skel);
+ }
+
+ test_migrate_reuseport__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_migrate_reuseport.c b/tools/testing/selftests/bpf/progs/test_migrate_reuseport.c
new file mode 100644
index 000000000000..27df571abf5b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_migrate_reuseport.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Check if we can migrate child sockets.
+ *
+ * 1. If reuse_md->migrating_sk is NULL (SYN packet),
+ * return SK_PASS without selecting a listener.
+ * 2. If reuse_md->migrating_sk is not NULL (socket migration),
+ * select a listener (reuseport_map[migrate_map[cookie]])
+ *
+ * Author: Kuniyuki Iwashima <[email protected]>
+ */
+
+#include <stddef.h>
+#include <string.h>
+#include <linux/bpf.h>
+#include <linux/if_ether.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/tcp.h>
+#include <linux/in.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_helpers.h>
+
+struct {
+ __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
+ __uint(max_entries, 256);
+ __type(key, int);
+ __type(value, __u64);
+} reuseport_map SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, 256);
+ __type(key, __u64);
+ __type(value, int);
+} migrate_map SEC(".maps");
+
+int migrated_at_close = 0;
+int migrated_at_close_fastopen = 0;
+int migrated_at_send_synack = 0;
+int migrated_at_recv_ack = 0;
+__be16 server_port;
+
+SEC("xdp")
+int drop_ack(struct xdp_md *xdp)
+{
+ void *data_end = (void *)(long)xdp->data_end;
+ void *data = (void *)(long)xdp->data;
+ struct ethhdr *eth = data;
+ struct tcphdr *tcp = NULL;
+
+ if (eth + 1 > data_end)
+ goto pass;
+
+ switch (bpf_ntohs(eth->h_proto)) {
+ case ETH_P_IP: {
+ struct iphdr *ip = (struct iphdr *)(eth + 1);
+
+ if (ip + 1 > data_end)
+ goto pass;
+
+ if (ip->protocol != IPPROTO_TCP)
+ goto pass;
+
+ tcp = (struct tcphdr *)((void *)ip + ip->ihl * 4);
+ break;
+ }
+ case ETH_P_IPV6: {
+ struct ipv6hdr *ipv6 = (struct ipv6hdr *)(eth + 1);
+
+ if (ipv6 + 1 > data_end)
+ goto pass;
+
+ if (ipv6->nexthdr != IPPROTO_TCP)
+ goto pass;
+
+ tcp = (struct tcphdr *)(ipv6 + 1);
+ break;
+ }
+ default:
+ goto pass;
+ }
+
+ if (tcp + 1 > data_end)
+ goto pass;
+
+ if (tcp->dest != server_port)
+ goto pass;
+
+ if (!tcp->syn && tcp->ack)
+ return XDP_DROP;
+
+pass:
+ return XDP_PASS;
+}
+
+SEC("sk_reuseport/migrate")
+int migrate_reuseport(struct sk_reuseport_md *reuse_md)
+{
+ int *key, flags = 0, state, err;
+ __u64 cookie;
+
+ if (!reuse_md->migrating_sk)
+ return SK_PASS;
+
+ state = reuse_md->migrating_sk->state;
+ cookie = bpf_get_socket_cookie(reuse_md->sk);
+
+ key = bpf_map_lookup_elem(&migrate_map, &cookie);
+ if (!key)
+ return SK_DROP;
+
+ err = bpf_sk_select_reuseport(reuse_md, &reuseport_map, key, flags);
+ if (err)
+ return SK_PASS;
+
+ switch (state) {
+ case BPF_TCP_ESTABLISHED:
+ __sync_fetch_and_add(&migrated_at_close, 1);
+ break;
+ case BPF_TCP_SYN_RECV:
+ __sync_fetch_and_add(&migrated_at_close_fastopen, 1);
+ break;
+ case BPF_TCP_NEW_SYN_RECV:
+ if (!reuse_md->len)
+ __sync_fetch_and_add(&migrated_at_send_synack, 1);
+ else
+ __sync_fetch_and_add(&migrated_at_recv_ack, 1);
+ break;
+ }
+
+ return SK_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.30.2

2021-05-26 06:49:14

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> [...]

This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung please take
a look again.

Thanks,
Daniel

2021-06-08 03:15:29

by Alexei Starovoitov

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <[email protected]> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > [...]
>
> This series needs review/ACKs from TCP maintainers. Eric/Neal/Yuchung please take
> a look again.

Eric,

I've looked through bpf and tcp changes and they don't look scary at all.
I think the feature is useful and a bit of extra complexity is worth it.
So please review tcp bits to make sure we didn't miss anything.

Thanks!

2021-06-08 17:54:26

by Yuchung Cheng

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <[email protected]> wrote:
>
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > [...]
> >
> > Note that the source and destination listeners MUST have the same settings
> > at the socket API level; otherwise, applications may face inconsistency and
> > cause errors. In such a case, we have to use the eBPF program to select a
> > specific listener or to cancel migration.
This looks to be a useful feature. What happens when migrating a
passively fast-opened socket that sits in the old listener's queue but
has not yet been accepted (TFO is both a mini-socket and a full-socket)?
It gets tricky when the old and new listeners have different TFO keys.



2021-06-08 23:07:02

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

From: Yuchung Cheng <[email protected]>
Date: Tue, 8 Jun 2021 10:48:06 -0700
> On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <[email protected]> wrote:
> >
> > On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > > [...]
> > >
> > > Note that the source and destination listeners MUST have the same settings
> > > at the socket API level; otherwise, applications may face inconsistency and
> > > cause errors. In such a case, we have to use the eBPF program to select a
> > > specific listener or to cancel migration.
> This looks to be a useful feature. What happens when migrating a
> passively fast-opened socket that sits in the old listener's queue but
> has not yet been accepted (TFO is both a mini-socket and a full-socket)?
> It gets tricky when the old and new listeners have different TFO keys.

The tricky situation can happen without this patch set. We can change
the listener's TFO key when TCP_SYN_RECV sockets are still in the accept
queue. The change is already handled properly, so it does not crash
applications.

In the normal 3WHS case, a full-socket is created after 3WHS. In the TFO
case, a full-socket is created after validating the TFO cookie in the
initial SYN packet.

After that, the connection is basically handled via the full-socket, except
for the accept() syscall. So in both cases, the mini-socket is popped out of
the old listener's queue, cloned, and put into the new listener's queue. Then
we can accept() its full-socket via the cloned mini-socket.

2021-06-09 12:03:44

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

From: Yuchung Cheng <[email protected]>
Date: Tue, 8 Jun 2021 16:47:37 -0700
> On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima <[email protected]> wrote:
> >
> > From: Yuchung Cheng <[email protected]>
> > Date: Tue, 8 Jun 2021 10:48:06 -0700
> > > On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <[email protected]> wrote:
> > > >
> > > > On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > > > > [...]
> > > > >
> > > > > Note that the source and destination listeners MUST have the same settings
> > > > > at the socket API level; otherwise, applications may face inconsistency and
> > > > > cause errors. In such a case, we have to use the eBPF program to select a
> > > > > specific listener or to cancel migration.
> > > This looks to be a useful feature. What happens when migrating a
> > > passively fast-opened socket that sits in the old listener's queue but
> > > has not yet been accepted (TFO is both a mini-socket and a full-socket)?
> > > It gets tricky when the old and new listeners have different TFO keys.
> >
> > The tricky situation can happen without this patch set. We can change
> > the listener's TFO key when TCP_SYN_RECV sockets are still in the accept
> > queue. The change is already handled properly, so it does not crash
> > applications.
> >
> > In the normal 3WHS case, a full-socket is created after 3WHS. In the TFO
> > case, a full-socket is created after validating the TFO cookie in the
> > initial SYN packet.
> >
> > After that, the connection is basically handled via the full-socket, except
> > for the accept() syscall. So in both cases, the mini-socket is popped out of
> > the old listener's queue, cloned, and put into the new listener's queue. Then
> > we can accept() its full-socket via the cloned mini-socket.
>
> Thanks, that makes sense. Eric is the expert on this part and can review
> the correctness. My only suggestion is to add some stats tracking the
> mini-sockets that fail to migrate for a variety of reasons (i.e., the
> code locations where the requests need to be dropped). This can be
> useful to evaluate the effectiveness of this new feature.

That's a nice idea.
I'll implement it as a follow-up patch or in the next spin.

For now, I would like to wait for Eric's review.

Thank you.

2021-06-09 16:57:46

by Yuchung Cheng

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.

On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima <[email protected]> wrote:
>
> From: Yuchung Cheng <[email protected]>
> Date: Tue, 8 Jun 2021 10:48:06 -0700
> > On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann <[email protected]> wrote:
> > >
> > > On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > > > [...]
> > > >
> > > > Note that the source and destination listeners MUST have the same settings
> > > > at the socket API level; otherwise, applications may face inconsistency and
> > > > cause errors. In such a case, we have to use the eBPF program to select a
> > > > specific listener or to cancel migration.
> > This looks to be a useful feature. What happens when migrating a
> > passively fast-opened socket that sits in the old listener's queue but
> > has not yet been accepted (TFO is both a mini-socket and a full-socket)?
> > It gets tricky when the old and new listeners have different TFO keys.
>
> The tricky situation can happen without this patch set. We can change
> the listener's TFO key when TCP_SYN_RECV sockets are still in the accept
> queue. The change is already handled properly, so it does not crash
> applications.
>
> In the normal 3WHS case, a full-socket is created after 3WHS. In the TFO
> case, a full-socket is created after validating the TFO cookie in the
> initial SYN packet.
>
> After that, the connection is basically handled via the full-socket, except
> for the accept() syscall. So in both cases, the mini-socket is popped out of
> the old listener's queue, cloned, and put into the new listener's queue. Then
> we can accept() its full-socket via the cloned mini-socket.

Thanks, that makes sense. Eric is the expert on this part and can review
the correctness. My only suggestion is to add some stats tracking the
mini-sockets that fail to migrate for a variety of reasons (i.e., the
code locations where the requests need to be dropped). This can be
useful to evaluate the effectiveness of this new feature.

2021-06-09 18:12:29

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.



On 6/9/21 2:34 AM, Kuniyuki Iwashima wrote:


>
> For now, I would like to wait for Eric's review.
>

I have been busy these days; I will review your patches by tomorrow.

2021-06-10 17:26:51

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 01/11] net: Introduce net.ipv4.tcp_migrate_req.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> This commit adds a new sysctl option: net.ipv4.tcp_migrate_req. If this
> option is enabled or eBPF program is attached, we will be able to migrate
> child sockets from a listener to another in the same reuseport group after
> close() or shutdown() syscalls.
>
> Signed-off-by: Kuniyuki Iwashima <[email protected]>
> Reviewed-by: Benjamin Herrenschmidt <[email protected]>
> Acked-by: Martin KaFai Lau <[email protected]>
> ---
> Documentation/networking/ip-sysctl.rst | 25 +++++++++++++++++++++++++
> include/net/netns/ipv4.h | 1 +
> net/ipv4/sysctl_net_ipv4.c | 9 +++++++++
> 3 files changed, 35 insertions(+)

Reviewed-by: Eric Dumazet <[email protected]>

2021-06-10 17:41:40

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 02/11] tcp: Add num_closed_socks to struct sock_reuseport.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> [...]
>
> diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> index 505f1e18e9bf..0e558ca7afbf 100644
> --- a/include/net/sock_reuseport.h
> +++ b/include/net/sock_reuseport.h
> @@ -13,8 +13,9 @@ extern spinlock_t reuseport_lock;
> struct sock_reuseport {
> struct rcu_head rcu;
>
> - u16 max_socks; /* length of socks */
> - u16 num_socks; /* elements in socks */
> + u16 max_socks; /* length of socks */
> + u16 num_socks; /* elements in socks */
> + u16 num_closed_socks; /* closed elements in socks */
> /* The last synq overflow event timestamp of this
> * reuse->socks[] group.
> */
> diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> index b065f0a103ed..079bd1aca0e7 100644
> --- a/net/core/sock_reuseport.c
> +++ b/net/core/sock_reuseport.c
> @@ -18,6 +18,49 @@ DEFINE_SPINLOCK(reuseport_lock);
>
> static DEFINE_IDA(reuseport_ida);
>
> +static int reuseport_sock_index(struct sock *sk,
> + struct sock_reuseport *reuse,
> + bool closed)


const struct sock_reuseport *reuse


> +{
> + int left, right;
> +
> + if (!closed) {
> + left = 0;
> + right = reuse->num_socks;
> + } else {
> + left = reuse->max_socks - reuse->num_closed_socks;
> + right = reuse->max_socks;
> + }



> +
> + for (; left < right; left++)
> + if (reuse->socks[left] == sk)
> + return left;


Is this even possible (to return -1) ?

> + return -1;
> +}
> +
> +static void __reuseport_add_sock(struct sock *sk,
> + struct sock_reuseport *reuse)
> +{
> + reuse->socks[reuse->num_socks] = sk;
> + /* paired with smp_rmb() in reuseport_select_sock() */
> + smp_wmb();
> + reuse->num_socks++;
> +}
> +
> +static bool __reuseport_detach_sock(struct sock *sk,
> + struct sock_reuseport *reuse)
> +{
> + int i = reuseport_sock_index(sk, reuse, false);
> +
> + if (i == -1)
> + return false;
> +
> + reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
> + reuse->num_socks--;
> +
> + return true;
> +}
> +
> static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
> {
> unsigned int size = sizeof(struct sock_reuseport) +
> @@ -72,9 +115,8 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> }
>
> reuse->reuseport_id = id;
> - reuse->socks[0] = sk;
> - reuse->num_socks = 1;
> reuse->bind_inany = bind_inany;
> + __reuseport_add_sock(sk, reuse);

Not sure why you changed this part; is it really the case that no smp_wmb() is needed at this point?

> rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
>
> out:
> @@ -98,6 +140,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
> return NULL;
>
> more_reuse->num_socks = reuse->num_socks;
> + more_reuse->num_closed_socks = reuse->num_closed_socks;
> more_reuse->prog = reuse->prog;
> more_reuse->reuseport_id = reuse->reuseport_id;
> more_reuse->bind_inany = reuse->bind_inany;
> @@ -105,9 +148,13 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
>
> memcpy(more_reuse->socks, reuse->socks,
> reuse->num_socks * sizeof(struct sock *));
> + memcpy(more_reuse->socks +
> + (more_reuse->max_socks - more_reuse->num_closed_socks),
> + reuse->socks + reuse->num_socks,

The second memcpy() is to copy the closed sockets,
they should start at reuse->socks + (reuse->max_socks - reuse->num_closed_socks) ?


> + reuse->num_closed_socks * sizeof(struct sock *));
> more_reuse->synq_overflow_ts = READ_ONCE(reuse->synq_overflow_ts);
>
> - for (i = 0; i < reuse->num_socks; ++i)
> + for (i = 0; i < reuse->max_socks; ++i)
> rcu_assign_pointer(reuse->socks[i]->sk_reuseport_cb,
> more_reuse);
>
> @@ -158,7 +205,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
> return -EBUSY;
> }
>
> - if (reuse->num_socks == reuse->max_socks) {
> + if (reuse->num_socks + reuse->num_closed_socks == reuse->max_socks) {
> reuse = reuseport_grow(reuse);
> if (!reuse) {
> spin_unlock_bh(&reuseport_lock);
> @@ -166,10 +213,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
> }
> }
>
> - reuse->socks[reuse->num_socks] = sk;
> - /* paired with smp_rmb() in reuseport_select_sock() */
> - smp_wmb();
> - reuse->num_socks++;
> + __reuseport_add_sock(sk, reuse);
> rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
>
> spin_unlock_bh(&reuseport_lock);
> @@ -183,7 +227,6 @@ EXPORT_SYMBOL(reuseport_add_sock);
> void reuseport_detach_sock(struct sock *sk)
> {
> struct sock_reuseport *reuse;
> - int i;
>
> spin_lock_bh(&reuseport_lock);
> reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> @@ -200,16 +243,11 @@ void reuseport_detach_sock(struct sock *sk)
> bpf_sk_reuseport_detach(sk);
>
> rcu_assign_pointer(sk->sk_reuseport_cb, NULL);
> + __reuseport_detach_sock(sk, reuse);
> +
> + if (reuse->num_socks + reuse->num_closed_socks == 0)
> + call_rcu(&reuse->rcu, reuseport_free_rcu);
>
> - for (i = 0; i < reuse->num_socks; i++) {
> - if (reuse->socks[i] == sk) {
> - reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
> - reuse->num_socks--;
> - if (reuse->num_socks == 0)
> - call_rcu(&reuse->rcu, reuseport_free_rcu);
> - break;
> - }
> - }
> spin_unlock_bh(&reuseport_lock);
> }
> EXPORT_SYMBOL(reuseport_detach_sock);
> @@ -274,7 +312,7 @@ struct sock *reuseport_select_sock(struct sock *sk,
> prog = rcu_dereference(reuse->prog);
> socks = READ_ONCE(reuse->num_socks);
> if (likely(socks)) {
> - /* paired with smp_wmb() in reuseport_add_sock() */
> + /* paired with smp_wmb() in __reuseport_add_sock() */
> smp_rmb();
>
> if (!prog || !skb)
>

2021-06-10 18:04:07

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> When we close a listening socket, to migrate its connections to another
> listener in the same reuseport group, we have to handle two kinds of child
> sockets. One is the kind that the listening socket has a reference to, and
> the other is not.
>
> The former is the TCP_ESTABLISHED/TCP_SYN_RECV sockets, and they are in the
> accept queue of their listening socket. So we can pop them out and push
> them into another listener's queue at close() or shutdown() syscalls. On
> the other hand, the latter, the TCP_NEW_SYN_RECV socket, is still in the
> three-way handshake and not in the accept queue. Thus, we cannot access
> such sockets at close() or shutdown() syscalls. Accordingly, we have to
> migrate immature sockets after their listening socket has been closed.
>
> Currently, if their listening socket has been closed, TCP_NEW_SYN_RECV
> sockets are freed upon receiving the final ACK or retransmitting SYN+ACKs. At
> that time, if we could select a new listener from the same reuseport group,
> no connection would be aborted. However, we cannot do that because
> reuseport_detach_sock() sets NULL to sk_reuseport_cb and forbids access to
> the reuseport group from closed sockets.
>
> This patch allows TCP_CLOSE sockets to remain in the reuseport group and
> access it while any child socket references them. The point is that
> reuseport_detach_sock() was called twice from inet_unhash() and
> sk_destruct(). This patch replaces the first reuseport_detach_sock() with
> reuseport_stop_listen_sock(), which checks if the reuseport group is
> capable of migration. If capable, it decrements num_socks, moves the socket
> backwards in socks[] and increments num_closed_socks. When all connections
> are migrated, sk_destruct() calls reuseport_detach_sock() to remove the
> socket from socks[], decrement num_closed_socks, and set NULL to
> sk_reuseport_cb.
>
> By this change, closed or shutdowned sockets can keep sk_reuseport_cb.
> Consequently, calling listen() after shutdown() can cause EADDRINUSE or
> EBUSY in inet_csk_bind_conflict() or reuseport_add_sock() which expects
> such sockets not to have the reuseport group. Therefore, this patch also
> loosens such validation rules so that a socket can listen again if it has a
> reuseport group with num_closed_socks more than 0.
>
> When such sockets listen again, we handle them in reuseport_resurrect(). If
> there is an existing reuseport group (reuseport_add_sock() path), we move
> the socket from the old group to the new one and free the old one if
> necessary. If there is no existing group (reuseport_alloc() path), we
> allocate a new reuseport group, detach sk from the old one, and free it if
> necessary, not to break the current shutdown behaviour:
>
> - we cannot carry over the eBPF prog of shutdowned sockets
> - we cannot attach/detach an eBPF prog to/from listening sockets via
> shutdowned sockets
>
> Note that when the number of sockets gets over U16_MAX, we try to detach a
> closed socket randomly to make room for the new listening socket in
> reuseport_grow().
>
> Signed-off-by: Kuniyuki Iwashima <[email protected]>
> Signed-off-by: Martin KaFai Lau <[email protected]>
> ---
> include/net/sock_reuseport.h | 1 +
> net/core/sock_reuseport.c | 184 ++++++++++++++++++++++++++++++--
> net/ipv4/inet_connection_sock.c | 12 ++-
> net/ipv4/inet_hashtables.c | 2 +-
> 4 files changed, 188 insertions(+), 11 deletions(-)
>
> diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> index 0e558ca7afbf..1333d0cddfbc 100644
> --- a/include/net/sock_reuseport.h
> +++ b/include/net/sock_reuseport.h
> @@ -32,6 +32,7 @@ extern int reuseport_alloc(struct sock *sk, bool bind_inany);
> extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
> bool bind_inany);
> extern void reuseport_detach_sock(struct sock *sk);
> +void reuseport_stop_listen_sock(struct sock *sk);
> extern struct sock *reuseport_select_sock(struct sock *sk,
> u32 hash,
> struct sk_buff *skb,
> diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> index 079bd1aca0e7..ea0e900d3e97 100644
> --- a/net/core/sock_reuseport.c
> +++ b/net/core/sock_reuseport.c
> @@ -17,6 +17,8 @@
> DEFINE_SPINLOCK(reuseport_lock);
>
> static DEFINE_IDA(reuseport_ida);
> +static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
> + struct sock_reuseport *reuse, bool bind_inany);
>
> static int reuseport_sock_index(struct sock *sk,
> struct sock_reuseport *reuse,
> @@ -61,6 +63,29 @@ static bool __reuseport_detach_sock(struct sock *sk,
> return true;
> }
>
> +static void __reuseport_add_closed_sock(struct sock *sk,
> + struct sock_reuseport *reuse)
> +{
> + reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk;
> + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
> + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1);
> +}
> +
> +static bool __reuseport_detach_closed_sock(struct sock *sk,
> + struct sock_reuseport *reuse)
> +{
> + int i = reuseport_sock_index(sk, reuse, true);
> +
> + if (i == -1)
> + return false;
> +
> + reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
> + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
> + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1);
> +
> + return true;
> +}
> +
> static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
> {
> unsigned int size = sizeof(struct sock_reuseport) +
> @@ -92,6 +117,14 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> lockdep_is_held(&reuseport_lock));
> if (reuse) {
> + if (reuse->num_closed_socks) {
> + /* sk was shutdown()ed before */
> + int err = reuseport_resurrect(sk, reuse, NULL, bind_inany);
> +
> + spin_unlock_bh(&reuseport_lock);
> + return err;

It seems coding style in this function would rather do
ret = reuseport_resurrect(sk, reuse, NULL, bind_inany);
goto out;

Overall, the changes in this commit are a bit scary.

2021-06-10 18:14:30

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 04/11] tcp: Add reuseport_migrate_sock() to select a new listener.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> reuseport_migrate_sock() does the same check done in
> reuseport_stop_listen_sock(). If the reuseport group is capable of
> migration, reuseport_migrate_sock() selects a new listener by the child
> socket hash and increments the listener's sk_refcnt beforehand. Thus, if we
> fail in the migration, we have to decrement it later.
>
> We will support migration by eBPF in the later commits.
>
> Signed-off-by: Kuniyuki Iwashima <[email protected]>
> Signed-off-by: Martin KaFai Lau <[email protected]>
> ---
> include/net/sock_reuseport.h | 3 ++
> net/core/sock_reuseport.c | 78 +++++++++++++++++++++++++++++-------
> 2 files changed, 67 insertions(+), 14 deletions(-)

Reviewed-by: Eric Dumazet <[email protected]>

2021-06-10 18:25:05

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 05/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> When we call close() or shutdown() for listening sockets, each child socket
> in the accept queue is freed at inet_csk_listen_stop(). If we can get a
> new listener by reuseport_migrate_sock() and clone the request by
> inet_reqsk_clone(), we try to add it into the new listener's accept queue
> by inet_csk_reqsk_queue_add(). If it fails, we have to call __reqsk_free()
> to call sock_put() for its listener and free the cloned request.
>
> After putting the full socket into ehash, tcp_v[46]_syn_recv_sock() sets
> NULL to ireq_opt/pktopts in struct inet_request_sock, but ipv6_opt can be
> non-NULL. So, we have to set NULL to ipv6_opt of the old request to avoid
> double free.
>
> Note that we do not update req->rsk_listener and instead clone the req to
> migrate because another path may reference the original request. If we
> protected it by RCU, we would need to add rcu_read_lock() in many places.
>
> Link: https://lore.kernel.org/netdev/[email protected]/
> Suggested-by: Martin KaFai Lau <[email protected]>
> Signed-off-by: Kuniyuki Iwashima <[email protected]>
> Acked-by: Martin KaFai Lau <[email protected]>
> ---
> net/ipv4/inet_connection_sock.c | 71 ++++++++++++++++++++++++++++++++-
> 1 file changed, 70 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index fa806e9167ec..07e97b2f3635 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -695,6 +695,53 @@ int inet_rtx_syn_ack(const struct sock *parent, struct request_sock *req)
> }
> EXPORT_SYMBOL(inet_rtx_syn_ack);
>
> +static struct request_sock *inet_reqsk_clone(struct request_sock *req,
> + struct sock *sk)
> +{
> + struct sock *req_sk, *nreq_sk;
> + struct request_sock *nreq;
> +
> + nreq = kmem_cache_alloc(req->rsk_ops->slab, GFP_ATOMIC | __GFP_NOWARN);
> + if (!nreq) {
> + /* paired with refcount_inc_not_zero() in reuseport_migrate_sock() */
> + sock_put(sk);
> + return NULL;
> + }
> +
> + req_sk = req_to_sk(req);
> + nreq_sk = req_to_sk(nreq);
> +
> + memcpy(nreq_sk, req_sk,
> + offsetof(struct sock, sk_dontcopy_begin));
> + memcpy(&nreq_sk->sk_dontcopy_end, &req_sk->sk_dontcopy_end,
> + req->rsk_ops->obj_size - offsetof(struct sock, sk_dontcopy_end));
> +
> + sk_node_init(&nreq_sk->sk_node);
> + nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping;
> +#ifdef CONFIG_XPS
> + nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping;
> +#endif
> + nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;
> + refcount_set(&nreq_sk->sk_refcnt, 0);

Not sure why you clear sk_refcnt here (it is set to 1 later)

> +
> + nreq->rsk_listener = sk;
> +
> + /* We need not acquire fastopenq->lock
> + * because the child socket is locked in inet_csk_listen_stop().
> + */
> + if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(nreq)->tfo_listener)
> + rcu_assign_pointer(tcp_sk(nreq->sk)->fastopen_rsk, nreq);
> +
> + return nreq;
> +}

Ouch, this is going to be hard to maintain...




> +
> +static void reqsk_migrate_reset(struct request_sock *req)
> +{
> +#if IS_ENABLED(CONFIG_IPV6)
> + inet_rsk(req)->ipv6_opt = NULL;
> +#endif
> +}
> +
> /* return true if req was found in the ehash table */
> static bool reqsk_queue_unlink(struct request_sock *req)
> {
> @@ -1036,14 +1083,36 @@ void inet_csk_listen_stop(struct sock *sk)
> * of the variants now. --ANK
> */
> while ((req = reqsk_queue_remove(queue, sk)) != NULL) {
> - struct sock *child = req->sk;
> + struct sock *child = req->sk, *nsk;
> + struct request_sock *nreq;
>
> local_bh_disable();
> bh_lock_sock(child);
> WARN_ON(sock_owned_by_user(child));
> sock_hold(child);
>
> + nsk = reuseport_migrate_sock(sk, child, NULL);
> + if (nsk) {
> + nreq = inet_reqsk_clone(req, nsk);
> + if (nreq) {
> + refcount_set(&nreq->rsk_refcnt, 1);
> +
> + if (inet_csk_reqsk_queue_add(nsk, nreq, child)) {
> + reqsk_migrate_reset(req);
> + } else {
> + reqsk_migrate_reset(nreq);
> + __reqsk_free(nreq);
> + }
> +
> + /* inet_csk_reqsk_queue_add() has already
> + * called inet_child_forget() on failure case.
> + */
> + goto skip_child_forget;
> + }
> + }
> +
> inet_child_forget(sk, req, child);
> +skip_child_forget:
> reqsk_put(req);
> bh_unlock_sock(child);
> local_bh_enable();
>

2021-06-10 20:24:23

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 06/11] tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.



On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> As with the preceding patch, this patch changes reqsk_timer_handler() to
> call reuseport_migrate_sock() and inet_reqsk_clone() to migrate in-flight
> requests at retransmitting SYN+ACKs. If we can select a new listener and
> clone the request, we resume setting the SYN+ACK timer for the new req. If
> we can set the timer, we call inet_ehash_insert() to unhash the old req and
> put the new req into ehash.
>

...

> static void reqsk_migrate_reset(struct request_sock *req)
> {
> + req->saved_syn = NULL;
> + inet_rsk(req)->ireq_opt = NULL;
> #if IS_ENABLED(CONFIG_IPV6)
> - inet_rsk(req)->ipv6_opt = NULL;
> + inet_rsk(req)->pktopts = NULL;
> #endif
> }

This is fragile.

Maybe instead :

#if IS_ENABLED(CONFIG_IPV6)
inet_rsk(req)->ipv6_opt = NULL;
inet_rsk(req)->pktopts = NULL;
#else
inet_rsk(req)->ireq_opt = NULL;
#endif

2021-06-10 20:40:02

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 07/11] tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.



On 5/21/21 8:21 PM, Kuniyuki Iwashima wrote:
> This patch also changes the code to call reuseport_migrate_sock() and
> inet_reqsk_clone(), but unlike the other cases, we do not call
> inet_reqsk_clone() right after reuseport_migrate_sock().
>
> Currently, in the receive path for a TCP_NEW_SYN_RECV socket, its listener
> has three kinds of refcnt:
>
> (A) for listener itself
> (B) carried by request_sock
> (C) sock_hold() in tcp_v[46]_rcv()
>
> While processing the req, (A) may disappear by close(listener). Also, (B)
> can disappear by accept(listener) once we put the req into the accept
> queue. So, we have to hold another refcnt (C) for the listener to prevent
> use-after-free.
>
> For socket migration, we call reuseport_migrate_sock() to select a listener
> with (A) and to increment the new listener's refcnt in tcp_v[46]_rcv().
> This refcnt corresponds to (C) and is cleaned up later in tcp_v[46]_rcv().
> Thus we have to take another refcnt (B) for the newly cloned request_sock.
>
> In inet_csk_complete_hashdance(), we hold the count (B), clone the req, and
> try to put the new req into the accept queue. By migrating req after
> winning the "own_req" race, we can avoid the following worst-case situation:
>
> CPU 1 looks up req1
> CPU 2 looks up req1, unhashes it, then CPU 1 loses the race
> CPU 3 looks up req2, unhashes it, then CPU 2 loses the race
> ...
>
> Signed-off-by: Kuniyuki Iwashima <[email protected]>
> Acked-by: Martin KaFai Lau <[email protected]>
> ---
> net/ipv4/inet_connection_sock.c | 34 ++++++++++++++++++++++++++++++---
> net/ipv4/tcp_ipv4.c | 20 +++++++++++++------
> net/ipv4/tcp_minisocks.c | 4 ++--
> net/ipv6/tcp_ipv6.c | 14 +++++++++++---
> 4 files changed, 58 insertions(+), 14 deletions(-)
>
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index c1f068464363..b795198f919a 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -1113,12 +1113,40 @@ struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
> struct request_sock *req, bool own_req)
> {
> if (own_req) {
> - inet_csk_reqsk_queue_drop(sk, req);
> - reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
> - if (inet_csk_reqsk_queue_add(sk, req, child))
> + inet_csk_reqsk_queue_drop(req->rsk_listener, req);
> + reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
> +
> + if (sk != req->rsk_listener) {
> + /* another listening sk has been selected,
> + * migrate the req to it.
> + */
> + struct request_sock *nreq;
> +
> + /* hold a refcnt for the nreq->rsk_listener
> + * which is assigned in inet_reqsk_clone()
> + */
> + sock_hold(sk);
> + nreq = inet_reqsk_clone(req, sk);
> + if (!nreq) {
> + inet_child_forget(sk, req, child);

Don't you need a sock_put(sk) here ?

> + goto child_put;
> + }
> +
> + refcount_set(&nreq->rsk_refcnt, 1);
> + if (inet_csk_reqsk_queue_add(sk, nreq, child)) {
> + reqsk_migrate_reset(req);
> + reqsk_put(req);
> + return child;
> + }
> +
> + reqsk_migrate_reset(nreq);
> + __reqsk_free(nreq);
> + } else if (inet_csk_reqsk_queue_add(sk, req, child)) {
> return child;
> + }
>

2021-06-10 22:33:42

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 01/11] net: Introduce net.ipv4.tcp_migrate_req.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 19:24:14 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > This commit adds a new sysctl option: net.ipv4.tcp_migrate_req. If this
> > option is enabled or an eBPF program is attached, we will be able to migrate
> > child sockets from one listener to another in the same reuseport group after
> > close() or shutdown() syscalls.
> >
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Reviewed-by: Benjamin Herrenschmidt <[email protected]>
> > Acked-by: Martin KaFai Lau <[email protected]>
> > ---
> > Documentation/networking/ip-sysctl.rst | 25 +++++++++++++++++++++++++
> > include/net/netns/ipv4.h | 1 +
> > net/ipv4/sysctl_net_ipv4.c | 9 +++++++++
> > 3 files changed, 35 insertions(+)
>
> Reviewed-by: Eric Dumazet <[email protected]>

Thank you!

2021-06-10 22:38:37

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 02/11] tcp: Add num_closed_socks to struct sock_reuseport.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 19:38:45 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > As noted in the following commit, a closed listener has to hold the
> > reference to the reuseport group for socket migration. This patch adds a
> > field (num_closed_socks) to struct sock_reuseport to manage closed sockets
> > within the same reuseport group. Moreover, this and the following commits
> > introduce some helper functions to split socks[] into two sections and keep
> > TCP_LISTEN and TCP_CLOSE sockets in each section. Like a double-ended
> > queue, we will place TCP_LISTEN sockets from the front and TCP_CLOSE
> > sockets from the end.
> >
> > TCP_LISTEN----------> <-------TCP_CLOSE
> > +---+---+ --- +---+ --- +---+ --- +---+
> > | 0 | 1 | ... | i | ... | j | ... | k |
> > +---+---+ --- +---+ --- +---+ --- +---+
> >
> > i = num_socks - 1
> > j = max_socks - num_closed_socks
> > k = max_socks - 1
> >
> > This patch also extends reuseport_add_sock() and reuseport_grow() to
> > support num_closed_socks.
> >
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Acked-by: Martin KaFai Lau <[email protected]>
> > ---
> > include/net/sock_reuseport.h | 5 ++-
> > net/core/sock_reuseport.c | 76 +++++++++++++++++++++++++++---------
> > 2 files changed, 60 insertions(+), 21 deletions(-)
> >
> > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > index 505f1e18e9bf..0e558ca7afbf 100644
> > --- a/include/net/sock_reuseport.h
> > +++ b/include/net/sock_reuseport.h
> > @@ -13,8 +13,9 @@ extern spinlock_t reuseport_lock;
> > struct sock_reuseport {
> > struct rcu_head rcu;
> >
> > - u16 max_socks; /* length of socks */
> > - u16 num_socks; /* elements in socks */
> > + u16 max_socks; /* length of socks */
> > + u16 num_socks; /* elements in socks */
> > + u16 num_closed_socks; /* closed elements in socks */
> > /* The last synq overflow event timestamp of this
> > * reuse->socks[] group.
> > */
> > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > index b065f0a103ed..079bd1aca0e7 100644
> > --- a/net/core/sock_reuseport.c
> > +++ b/net/core/sock_reuseport.c
> > @@ -18,6 +18,49 @@ DEFINE_SPINLOCK(reuseport_lock);
> >
> > static DEFINE_IDA(reuseport_ida);
> >
> > +static int reuseport_sock_index(struct sock *sk,
> > + struct sock_reuseport *reuse,
> > + bool closed)
>
>
> const struct sock_reuseport *reuse

I will add const to reuse.
Should sk be const as well?
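
For reuse, that would be (untested):

===
static int reuseport_sock_index(struct sock *sk,
				const struct sock_reuseport *reuse,
				bool closed)
===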


>
>
> > +{
> > + int left, right;
> > +
> > + if (!closed) {
> > + left = 0;
> > + right = reuse->num_socks;
> > + } else {
> > + left = reuse->max_socks - reuse->num_closed_socks;
> > + right = reuse->max_socks;
> > + }
>
>
>
> > +
> > + for (; left < right; left++)
> > + if (reuse->socks[left] == sk)
> > + return left;
>
>
> Is this even possible (to return -1) ?
>
> > + return -1;

Yes.
In the next patch, reuseport_detach_sock() tries to detach a sock from the
closed section first. So, if tcp_migrate_req is disabled, then -1 is
returned immediately because left == right.

===
if (!__reuseport_detach_closed_sock(sk, reuse))
__reuseport_detach_sock(sk, reuse);
===


> > +}
> > +
> > +static void __reuseport_add_sock(struct sock *sk,
> > + struct sock_reuseport *reuse)
> > +{
> > + reuse->socks[reuse->num_socks] = sk;
> > + /* paired with smp_rmb() in reuseport_select_sock() */
> > + smp_wmb();
> > + reuse->num_socks++;
> > +}
> > +
> > +static bool __reuseport_detach_sock(struct sock *sk,
> > + struct sock_reuseport *reuse)
> > +{
> > + int i = reuseport_sock_index(sk, reuse, false);
> > +
> > + if (i == -1)
> > + return false;
> > +
> > + reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
> > + reuse->num_socks--;
> > +
> > + return true;
> > +}
> > +
> > static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
> > {
> > unsigned int size = sizeof(struct sock_reuseport) +
> > @@ -72,9 +115,8 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> > }
> >
> > reuse->reuseport_id = id;
> > - reuse->socks[0] = sk;
> > - reuse->num_socks = 1;
> > reuse->bind_inany = bind_inany;
> > + __reuseport_add_sock(sk, reuse);
>
> Not sure why you changed this part; is it really the case that no smp_wmb() is needed at this point?

I just reused the helper function, but you are right, smp_wmb() is not
needed at this point. I will keep the original code here as is.


>
> > rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
> >
> > out:
> > @@ -98,6 +140,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
> > return NULL;
> >
> > more_reuse->num_socks = reuse->num_socks;
> > + more_reuse->num_closed_socks = reuse->num_closed_socks;
> > more_reuse->prog = reuse->prog;
> > more_reuse->reuseport_id = reuse->reuseport_id;
> > more_reuse->bind_inany = reuse->bind_inany;
> > @@ -105,9 +148,13 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
> >
> > memcpy(more_reuse->socks, reuse->socks,
> > reuse->num_socks * sizeof(struct sock *));
> > + memcpy(more_reuse->socks +
> > + (more_reuse->max_socks - more_reuse->num_closed_socks),
> > + reuse->socks + reuse->num_socks,
>
> The second memcpy() is to copy the closed sockets,
> they should start at reuse->socks + (reuse->max_socks - reuse->num_closed_socks) ?

num_socks == (reuse->max_socks - reuse->num_closed_socks) holds here, but I
agree the latter is less error-prone, so I will fix it.
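
Something like this (untested):

===
	memcpy(more_reuse->socks +
	       (more_reuse->max_socks - more_reuse->num_closed_socks),
	       reuse->socks + (reuse->max_socks - reuse->num_closed_socks),
	       reuse->num_closed_socks * sizeof(struct sock *));
===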

Thank you.


>
>
> > + reuse->num_closed_socks * sizeof(struct sock *));
> > more_reuse->synq_overflow_ts = READ_ONCE(reuse->synq_overflow_ts);
> >
> > - for (i = 0; i < reuse->num_socks; ++i)
> > + for (i = 0; i < reuse->max_socks; ++i)
> > rcu_assign_pointer(reuse->socks[i]->sk_reuseport_cb,
> > more_reuse);
> >
> > @@ -158,7 +205,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
> > return -EBUSY;
> > }
> >
> > - if (reuse->num_socks == reuse->max_socks) {
> > + if (reuse->num_socks + reuse->num_closed_socks == reuse->max_socks) {
> > reuse = reuseport_grow(reuse);
> > if (!reuse) {
> > spin_unlock_bh(&reuseport_lock);
> > @@ -166,10 +213,7 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
> > }
> > }
> >
> > - reuse->socks[reuse->num_socks] = sk;
> > - /* paired with smp_rmb() in reuseport_select_sock() */
> > - smp_wmb();
> > - reuse->num_socks++;
> > + __reuseport_add_sock(sk, reuse);
> > rcu_assign_pointer(sk->sk_reuseport_cb, reuse);
> >
> > spin_unlock_bh(&reuseport_lock);
> > @@ -183,7 +227,6 @@ EXPORT_SYMBOL(reuseport_add_sock);
> > void reuseport_detach_sock(struct sock *sk)
> > {
> > struct sock_reuseport *reuse;
> > - int i;
> >
> > spin_lock_bh(&reuseport_lock);
> > reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> > @@ -200,16 +243,11 @@ void reuseport_detach_sock(struct sock *sk)
> > bpf_sk_reuseport_detach(sk);
> >
> > rcu_assign_pointer(sk->sk_reuseport_cb, NULL);
> > + __reuseport_detach_sock(sk, reuse);
> > +
> > + if (reuse->num_socks + reuse->num_closed_socks == 0)
> > + call_rcu(&reuse->rcu, reuseport_free_rcu);
> >
> > - for (i = 0; i < reuse->num_socks; i++) {
> > - if (reuse->socks[i] == sk) {
> > - reuse->socks[i] = reuse->socks[reuse->num_socks - 1];
> > - reuse->num_socks--;
> > - if (reuse->num_socks == 0)
> > - call_rcu(&reuse->rcu, reuseport_free_rcu);
> > - break;
> > - }
> > - }
> > spin_unlock_bh(&reuseport_lock);
> > }
> > EXPORT_SYMBOL(reuseport_detach_sock);
> > @@ -274,7 +312,7 @@ struct sock *reuseport_select_sock(struct sock *sk,
> > prog = rcu_dereference(reuse->prog);
> > socks = READ_ONCE(reuse->num_socks);
> > if (likely(socks)) {
> > - /* paired with smp_wmb() in reuseport_add_sock() */
> > + /* paired with smp_wmb() in __reuseport_add_sock() */
> > smp_rmb();
> >
> > if (!prog || !skb)
> >

2021-06-10 22:39:56

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 03/11] tcp: Keep TCP_CLOSE sockets in the reuseport group.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 19:59:15 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > When we close a listening socket, to migrate its connections to another
> > listener in the same reuseport group, we have to handle two kinds of child
> > sockets. One is that a listening socket has a reference to, and the other
> > is not.
> >
> > The former is the TCP_ESTABLISHED/TCP_SYN_RECV sockets, and they are in the
> > accept queue of their listening socket. So we can pop them out and push
> > them into another listener's queue at close() or shutdown() syscalls. On
> > the other hand, the latter, the TCP_NEW_SYN_RECV socket, is still in the
> > three-way handshake and not in the accept queue. Thus, we cannot access
> > such sockets at close() or shutdown() syscalls. Accordingly, we have to
> > migrate immature sockets after their listening socket has been closed.
> >
> > Currently, if their listening socket has been closed, TCP_NEW_SYN_RECV
> > sockets are freed upon receiving the final ACK or retransmitting SYN+ACKs. At
> > that time, if we could select a new listener from the same reuseport group,
> > no connection would be aborted. However, we cannot do that because
> > reuseport_detach_sock() sets NULL to sk_reuseport_cb and forbids access to
> > the reuseport group from closed sockets.
> >
> > This patch allows TCP_CLOSE sockets to remain in the reuseport group and
> > access it while any child socket references them. The point is that
> > reuseport_detach_sock() was called twice from inet_unhash() and
> > sk_destruct(). This patch replaces the first reuseport_detach_sock() with
> > reuseport_stop_listen_sock(), which checks if the reuseport group is
> > capable of migration. If capable, it decrements num_socks, moves the socket
> > backwards in socks[] and increments num_closed_socks. When all connections
> > are migrated, sk_destruct() calls reuseport_detach_sock() to remove the
> > socket from socks[], decrement num_closed_socks, and set NULL to
> > sk_reuseport_cb.
> >
> > By this change, closed or shutdowned sockets can keep sk_reuseport_cb.
> > Consequently, calling listen() after shutdown() can cause EADDRINUSE or
> > EBUSY in inet_csk_bind_conflict() or reuseport_add_sock() which expects
> > such sockets not to have the reuseport group. Therefore, this patch also
> > loosens such validation rules so that a socket can listen again if it has a
> > reuseport group with num_closed_socks more than 0.
> >
> > When such sockets listen again, we handle them in reuseport_resurrect(). If
> > there is an existing reuseport group (reuseport_add_sock() path), we move
> > the socket from the old group to the new one and free the old one if
> > necessary. If there is no existing group (reuseport_alloc() path), we
> > allocate a new reuseport group, detach sk from the old one, and free it if
> > necessary, not to break the current shutdown behaviour:
> >
> > - we cannot carry over the eBPF prog of shutdowned sockets
> > - we cannot attach/detach an eBPF prog to/from listening sockets via
> > shutdowned sockets
> >
> > Note that when the number of sockets gets over U16_MAX, we try to detach a
> > closed socket randomly to make room for the new listening socket in
> > reuseport_grow().
> >
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Signed-off-by: Martin KaFai Lau <[email protected]>
> > ---
> > include/net/sock_reuseport.h | 1 +
> > net/core/sock_reuseport.c | 184 ++++++++++++++++++++++++++++++--
> > net/ipv4/inet_connection_sock.c | 12 ++-
> > net/ipv4/inet_hashtables.c | 2 +-
> > 4 files changed, 188 insertions(+), 11 deletions(-)
> >
> > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > index 0e558ca7afbf..1333d0cddfbc 100644
> > --- a/include/net/sock_reuseport.h
> > +++ b/include/net/sock_reuseport.h
> > @@ -32,6 +32,7 @@ extern int reuseport_alloc(struct sock *sk, bool bind_inany);
> > extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
> > bool bind_inany);
> > extern void reuseport_detach_sock(struct sock *sk);
> > +void reuseport_stop_listen_sock(struct sock *sk);
> > extern struct sock *reuseport_select_sock(struct sock *sk,
> > u32 hash,
> > struct sk_buff *skb,
> > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > index 079bd1aca0e7..ea0e900d3e97 100644
> > --- a/net/core/sock_reuseport.c
> > +++ b/net/core/sock_reuseport.c
> > @@ -17,6 +17,8 @@
> > DEFINE_SPINLOCK(reuseport_lock);
> >
> > static DEFINE_IDA(reuseport_ida);
> > +static int reuseport_resurrect(struct sock *sk, struct sock_reuseport *old_reuse,
> > + struct sock_reuseport *reuse, bool bind_inany);
> >
> > static int reuseport_sock_index(struct sock *sk,
> > struct sock_reuseport *reuse,
> > @@ -61,6 +63,29 @@ static bool __reuseport_detach_sock(struct sock *sk,
> > return true;
> > }
> >
> > +static void __reuseport_add_closed_sock(struct sock *sk,
> > + struct sock_reuseport *reuse)
> > +{
> > + reuse->socks[reuse->max_socks - reuse->num_closed_socks - 1] = sk;
> > + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
> > + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks + 1);
> > +}
> > +
> > +static bool __reuseport_detach_closed_sock(struct sock *sk,
> > + struct sock_reuseport *reuse)
> > +{
> > + int i = reuseport_sock_index(sk, reuse, true);
> > +
> > + if (i == -1)
> > + return false;
> > +
> > + reuse->socks[i] = reuse->socks[reuse->max_socks - reuse->num_closed_socks];
> > + /* paired with READ_ONCE() in inet_csk_bind_conflict() */
> > + WRITE_ONCE(reuse->num_closed_socks, reuse->num_closed_socks - 1);
> > +
> > + return true;
> > +}
> > +
> > static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
> > {
> > unsigned int size = sizeof(struct sock_reuseport) +
> > @@ -92,6 +117,14 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
> > reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
> > lockdep_is_held(&reuseport_lock));
> > if (reuse) {
> > + if (reuse->num_closed_socks) {
> > + /* sk was shutdown()ed before */
> > + int err = reuseport_resurrect(sk, reuse, NULL, bind_inany);
> > +
> > + spin_unlock_bh(&reuseport_lock);
> > + return err;
>
> It seems coding style in this function would rather do
> ret = reuseport_resurrect(sk, reuse, NULL, bind_inany);
> goto out;
>
> Overall, the changes in this commit are a bit scary.

I will change the style to use ret and goto.
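
Roughly (untested):

===
	int ret = 0;
	...
	if (reuse) {
		if (reuse->num_closed_socks) {
			/* sk was shutdown()ed before */
			ret = reuseport_resurrect(sk, reuse, NULL, bind_inany);
			goto out;
		}
		...
	}
	...
out:
	spin_unlock_bh(&reuseport_lock);
	return ret;
===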

Thank you.

2021-06-10 22:43:35

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 04/11] tcp: Add reuseport_migrate_sock() to select a new listener.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 20:09:32 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > reuseport_migrate_sock() does the same check done in
> > reuseport_stop_listen_sock(). If the reuseport group is capable of
> > migration, reuseport_migrate_sock() selects a new listener by the child
> > socket hash and increments the listener's sk_refcnt beforehand. Thus, if we
> > fail in the migration, we have to decrement it later.
> >
> > We will support migration by eBPF in the later commits.
> >
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Signed-off-by: Martin KaFai Lau <[email protected]>
> > ---
> > include/net/sock_reuseport.h | 3 ++
> > net/core/sock_reuseport.c | 78 +++++++++++++++++++++++++++++-------
> > 2 files changed, 67 insertions(+), 14 deletions(-)
>
> Reviewed-by: Eric Dumazet <[email protected]>

Thank you again!

2021-06-10 22:51:09

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 05/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 20:20:11 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > When we call close() or shutdown() for listening sockets, each child socket
> > in the accept queue is freed at inet_csk_listen_stop(). If we can get a
> > new listener by reuseport_migrate_sock() and clone the request by
> > inet_reqsk_clone(), we try to add it into the new listener's accept queue
> > by inet_csk_reqsk_queue_add(). If it fails, we have to call __reqsk_free()
> > to call sock_put() for its listener and free the cloned request.
> >
> > After putting the full socket into ehash, tcp_v[46]_syn_recv_sock() sets
> > NULL to ireq_opt/pktopts in struct inet_request_sock, but ipv6_opt can be
> > non-NULL. So, we have to set NULL to ipv6_opt of the old request to avoid
> > double free.
> >
> > Note that we do not update req->rsk_listener and instead clone the req to
> > migrate because another path may reference the original request. If we
> > protected it by RCU, we would need to add rcu_read_lock() in many places.
> >
> > Link: https://lore.kernel.org/netdev/[email protected]/
> > Suggested-by: Martin KaFai Lau <[email protected]>
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Acked-by: Martin KaFai Lau <[email protected]>
> > ---
> > net/ipv4/inet_connection_sock.c | 71 ++++++++++++++++++++++++++++++++-
> > 1 file changed, 70 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > index fa806e9167ec..07e97b2f3635 100644
> > --- a/net/ipv4/inet_connection_sock.c
> > +++ b/net/ipv4/inet_connection_sock.c
> > @@ -695,6 +695,53 @@ int inet_rtx_syn_ack(const struct sock *parent, struct request_sock *req)
> > }
> > EXPORT_SYMBOL(inet_rtx_syn_ack);
> >
> > +static struct request_sock *inet_reqsk_clone(struct request_sock *req,
> > + struct sock *sk)
> > +{
> > + struct sock *req_sk, *nreq_sk;
> > + struct request_sock *nreq;
> > +
> > + nreq = kmem_cache_alloc(req->rsk_ops->slab, GFP_ATOMIC | __GFP_NOWARN);
> > + if (!nreq) {
> > + /* paired with refcount_inc_not_zero() in reuseport_migrate_sock() */
> > + sock_put(sk);
> > + return NULL;
> > + }
> > +
> > + req_sk = req_to_sk(req);
> > + nreq_sk = req_to_sk(nreq);
> > +
> > + memcpy(nreq_sk, req_sk,
> > + offsetof(struct sock, sk_dontcopy_begin));
> > + memcpy(&nreq_sk->sk_dontcopy_end, &req_sk->sk_dontcopy_end,
> > + req->rsk_ops->obj_size - offsetof(struct sock, sk_dontcopy_end));
> > +
> > + sk_node_init(&nreq_sk->sk_node);
> > + nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping;
> > +#ifdef CONFIG_XPS
> > + nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping;
> > +#endif
> > + nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;
> > + refcount_set(&nreq_sk->sk_refcnt, 0);
>
> Not sure why you clear sk_refcnt here (it is set to 1 later)

I thought it was safer, but I'm fine with removing the line.


>
> > +
> > + nreq->rsk_listener = sk;
> > +
> > + /* We need not acquire fastopenq->lock
> > + * because the child socket is locked in inet_csk_listen_stop().
> > + */
> > + if (sk->sk_protocol == IPPROTO_TCP && tcp_rsk(nreq)->tfo_listener)
> > + rcu_assign_pointer(tcp_sk(nreq->sk)->fastopen_rsk, nreq);
> > +
> > + return nreq;
> > +}
>
> Ouch, this is going to be hard to maintain...

How could I make it less hard to maintain?


>
>
>
>
> > +
> > +static void reqsk_migrate_reset(struct request_sock *req)
> > +{
> > +#if IS_ENABLED(CONFIG_IPV6)
> > + inet_rsk(req)->ipv6_opt = NULL;
> > +#endif
> > +}
> > +
> > /* return true if req was found in the ehash table */
> > static bool reqsk_queue_unlink(struct request_sock *req)
> > {
> > @@ -1036,14 +1083,36 @@ void inet_csk_listen_stop(struct sock *sk)
> > * of the variants now. --ANK
> > */
> > while ((req = reqsk_queue_remove(queue, sk)) != NULL) {
> > - struct sock *child = req->sk;
> > + struct sock *child = req->sk, *nsk;
> > + struct request_sock *nreq;
> >
> > local_bh_disable();
> > bh_lock_sock(child);
> > WARN_ON(sock_owned_by_user(child));
> > sock_hold(child);
> >
> > + nsk = reuseport_migrate_sock(sk, child, NULL);
> > + if (nsk) {
> > + nreq = inet_reqsk_clone(req, nsk);
> > + if (nreq) {
> > + refcount_set(&nreq->rsk_refcnt, 1);
> > +
> > + if (inet_csk_reqsk_queue_add(nsk, nreq, child)) {
> > + reqsk_migrate_reset(req);
> > + } else {
> > + reqsk_migrate_reset(nreq);
> > + __reqsk_free(nreq);
> > + }
> > +
> > + /* inet_csk_reqsk_queue_add() has already
> > + * called inet_child_forget() on failure case.
> > + */
> > + goto skip_child_forget;
> > + }
> > + }
> > +
> > inet_child_forget(sk, req, child);
> > +skip_child_forget:
> > reqsk_put(req);
> > bh_unlock_sock(child);
> > local_bh_enable();
> >

2021-06-10 22:56:14

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 06/11] tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 22:21:00 +0200
> On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > As with the preceding patch, this patch changes reqsk_timer_handler() to
> > call reuseport_migrate_sock() and inet_reqsk_clone() to migrate in-flight
> > requests at retransmitting SYN+ACKs. If we can select a new listener and
> > clone the request, we resume setting the SYN+ACK timer for the new req. If
> > we can set the timer, we call inet_ehash_insert() to unhash the old req and
> > put the new req into ehash.
> >
>
> ...
>
> > static void reqsk_migrate_reset(struct request_sock *req)
> > {
> > + req->saved_syn = NULL;
> > + inet_rsk(req)->ireq_opt = NULL;
> > #if IS_ENABLED(CONFIG_IPV6)
> > - inet_rsk(req)->ipv6_opt = NULL;
> > + inet_rsk(req)->pktopts = NULL;
> > #endif
> > }
>
> This is fragile.
>
> Maybe instead :
>
> #if IS_ENABLED(CONFIG_IPV6)
> inet_rsk(req)->ipv6_opt = NULL;
> inet_rsk(req)->pktopts = NULL;
> #else
> inet_rsk(req)->ireq_opt = NULL;
> #endif

I will fix this, thank you.
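
With your suggestion applied, reqsk_migrate_reset() would become roughly
(untested):

===
static void reqsk_migrate_reset(struct request_sock *req)
{
	req->saved_syn = NULL;
#if IS_ENABLED(CONFIG_IPV6)
	inet_rsk(req)->ipv6_opt = NULL;
	inet_rsk(req)->pktopts = NULL;
#else
	inet_rsk(req)->ireq_opt = NULL;
#endif
}
===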

Also I will send a follow-up patch later to fix the same style in
inet_reqsk_alloc().

2021-06-10 23:00:32

by Iwashima, Kuniyuki

[permalink] [raw]
Subject: Re: [PATCH v7 bpf-next 07/11] tcp: Migrate TCP_NEW_SYN_RECV requests at receiving the final ACK.

From: Eric Dumazet <[email protected]>
Date: Thu, 10 Jun 2021 22:36:27 +0200
> On 5/21/21 8:21 PM, Kuniyuki Iwashima wrote:
> > This patch also changes the code to call reuseport_migrate_sock() and
> > inet_reqsk_clone(), but unlike the other cases, we do not call
> > inet_reqsk_clone() right after reuseport_migrate_sock().
> >
> > Currently, in the receive path for a TCP_NEW_SYN_RECV socket, its listener
> > has three kinds of refcnt:
> >
> > (A) for listener itself
> > (B) carried by request_sock
> > (C) sock_hold() in tcp_v[46]_rcv()
> >
> > While processing the req, (A) may disappear by close(listener). Also, (B)
> > can disappear by accept(listener) once we put the req into the accept
> > queue. So, we have to hold another refcnt (C) for the listener to prevent
> > use-after-free.
> >
> > For socket migration, we call reuseport_migrate_sock() to select a listener
> > with (A) and to increment the new listener's refcnt in tcp_v[46]_rcv().
> > This refcnt corresponds to (C) and is cleaned up later in tcp_v[46]_rcv().
> > Thus we have to take another refcnt (B) for the newly cloned request_sock.
> >
> > In inet_csk_complete_hashdance(), we hold the count (B), clone the req, and
> > try to put the new req into the accept queue. By migrating req after
> > winning the "own_req" race, we can avoid the following worst-case situation:
> >
> > CPU 1 looks up req1
> > CPU 2 looks up req1, unhashes it, then CPU 1 loses the race
> > CPU 3 looks up req2, unhashes it, then CPU 2 loses the race
> > ...
> >
> > Signed-off-by: Kuniyuki Iwashima <[email protected]>
> > Acked-by: Martin KaFai Lau <[email protected]>
> > ---
> > net/ipv4/inet_connection_sock.c | 34 ++++++++++++++++++++++++++++++---
> > net/ipv4/tcp_ipv4.c | 20 +++++++++++++------
> > net/ipv4/tcp_minisocks.c | 4 ++--
> > net/ipv6/tcp_ipv6.c | 14 +++++++++++---
> > 4 files changed, 58 insertions(+), 14 deletions(-)
> >
> > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > index c1f068464363..b795198f919a 100644
> > --- a/net/ipv4/inet_connection_sock.c
> > +++ b/net/ipv4/inet_connection_sock.c
> > @@ -1113,12 +1113,40 @@ struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
> > struct request_sock *req, bool own_req)
> > {
> > if (own_req) {
> > - inet_csk_reqsk_queue_drop(sk, req);
> > - reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
> > - if (inet_csk_reqsk_queue_add(sk, req, child))
> > + inet_csk_reqsk_queue_drop(req->rsk_listener, req);
> > + reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
> > +
> > + if (sk != req->rsk_listener) {
> > + /* another listening sk has been selected,
> > + * migrate the req to it.
> > + */
> > + struct request_sock *nreq;
> > +
> > + /* hold a refcnt for the nreq->rsk_listener
> > + * which is assigned in inet_reqsk_clone()
> > + */
> > + sock_hold(sk);
> > + nreq = inet_reqsk_clone(req, sk);
> > + if (!nreq) {
> > + inet_child_forget(sk, req, child);
>
> Don't you need a sock_put(sk) here ?

Yes, but it is already handled there:
if nreq == NULL, inet_reqsk_clone() itself calls sock_put() for the listener.


>
> > + goto child_put;
> > + }
> > +
> > + refcount_set(&nreq->rsk_refcnt, 1);
> > + if (inet_csk_reqsk_queue_add(sk, nreq, child)) {
> > + reqsk_migrate_reset(req);
> > + reqsk_put(req);
> > + return child;
> > + }
> > +
> > + reqsk_migrate_reset(nreq);
> > + __reqsk_free(nreq);
> > + } else if (inet_csk_reqsk_queue_add(sk, req, child)) {
> > return child;
> > + }
> >