Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.
From: Yuchung Cheng
Date: Tue, 8 Jun 2021 16:47:37 -0700
To: Kuniyuki Iwashima
Cc: Andrii Nakryiko, Alexei Starovoitov, Benjamin Herrenschmidt,
    bpf@vger.kernel.org, Daniel Borkmann, David Miller, Eric Dumazet,
    Martin Lau, Jakub Kicinski, Kuniyuki Iwashima, LKML, Neal Cardwell,
    netdev
In-Reply-To: <20210608230357.39528-1-kuniyu@amazon.co.jp>
References: <20210608230357.39528-1-kuniyu@amazon.co.jp>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 8, 2021 at 4:04 PM Kuniyuki Iwashima wrote:
>
> From: Yuchung Cheng
> Date: Tue, 8 Jun 2021 10:48:06 -0700
> > On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann wrote:
> > >
> > > On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > > > The SO_REUSEPORT option allows sockets to listen on the same port and to
> > > > accept connections evenly. However, there is a defect in the current
> > > > implementation [1]. When a SYN packet is received, the connection is tied
> > > > to a listening socket. Accordingly, when the listener is closed, in-flight
> > > > requests during the three-way handshake and child sockets in the accept
> > > > queue are dropped even if other listeners on the same port could accept
> > > > such connections.
> > > >
> > > > This situation can happen when various server management tools restart
> > > > server processes (such as nginx). For instance, when we change the nginx
> > > > configuration and restart it, it spins up new workers that respect the new
> > > > configuration and closes all listeners on the old workers, so the
> > > > in-flight ACK of the 3WHS is answered with a RST.
> > > >
> > > > To avoid such a situation, users have to know in detail how the kernel
> > > > handles SYN packets and implement connection draining with eBPF [2]:
> > > >
> > > > 1. Stop routing SYN packets to the listener by eBPF.
> > > > 2. Wait for all timers to expire to complete requests.
> > > > 3. Accept connections until EAGAIN, then close the listener.
> > > >
> > > > or
> > > >
> > > > 1. Start counting SYN packets and accept() syscalls using an eBPF map.
> > > > 2. Stop routing SYN packets.
> > > > 3. Accept connections up to the count, then close the listener.
> > > >
> > > > Either way, we cannot close a listener immediately. Ideally, however, the
> > > > application should not need to drain the not-yet-accepted sockets, because
> > > > the 3WHS and tying a connection to a listener are purely kernel behaviour.
> > > > The root cause is within the kernel, so the issue should be addressed in
> > > > kernel space and should not be visible to user space. This patchset fixes
> > > > it so that users need not take care of the kernel implementation and
> > > > connection draining.
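[For concreteness, the "accept until EAGAIN, then close" step in the first
recipe above boils down to a small non-blocking drain loop in user space,
along the lines of the untested sketch below. The eBPF piece that stops
steering SYNs to the listener is omitted, and handle_connection() is a
made-up stand-in for the application's own dispatch.]

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stand-in for handing an accepted connection to a worker;
 * here it just closes the connection.
 */
static void handle_connection(int cfd)
{
	close(cfd);
}

/* Drain a listener that no longer receives new SYNs, then close it.
 * Assumes lfd is a SO_REUSEPORT listener in non-blocking mode.
 */
static void drain_and_close(int lfd)
{
	for (;;) {
		int cfd = accept(lfd, NULL, NULL);

		if (cfd >= 0) {
			handle_connection(cfd);
			continue;
		}
		if (errno == EINTR)
			continue;
		/* EAGAIN/EWOULDBLOCK: the accept queue is empty;
		 * any other error also ends the drain.
		 */
		break;
	}
	close(lfd);
}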
> > > > With this patchset, the kernel redistributes requests and
> > > > connections from a listener to the other listeners in the same
> > > > reuseport group at/after the close or shutdown syscall.
> > > >
> > > > Although some software does connection draining, there are still merits
> > > > in migration. For some security reasons, such as replacing TLS
> > > > certificates, we may want to apply new settings as soon as possible
> > > > and/or we may not be able to wait for connection draining. The sockets
> > > > in the accept queue have not started application sessions yet. So, if
> > > > we do not drain such sockets, they can be handled by the newer listeners
> > > > and could have a longer lifetime. It is difficult to drain all
> > > > connections in every case, but we can decrease the number of aborted
> > > > connections through migration. In that sense, migration is always
> > > > better than draining.
> > > >
> > > > Moreover, auto-migration simplifies the user space logic and also works
> > > > well in cases where we cannot modify and rebuild a server program to
> > > > implement the workaround.
> > > >
> > > > Note that the source and destination listeners MUST have the same
> > > > settings at the socket API level; otherwise, applications may face
> > > > inconsistencies and errors. In such a case, we have to use an eBPF
> > > > program to select a specific listener or to cancel the migration.
> >
> > This looks to be a useful feature. What happens when migrating a
> > passively fast-opened socket that is still in the old listener but has
> > not yet been accepted (TFO is both a mini-socket and a full-socket)?
> > It gets tricky when the old and new listeners have different TFO keys.
>
> The tricky situation can happen without this patch set. We can change
> the listener's TFO key while TCP_SYN_RECV sockets are still in the accept
> queue. The change is already handled properly, so it does not crash
> applications.
>
> In the normal 3WHS case, a full-socket is created after the 3WHS. In the
> TFO case, a full-socket is created after validating the TFO cookie in the
> initial SYN packet.
>
> After that, the connection is basically handled via the full-socket,
> except for the accept() syscall. So in both cases, the mini-socket is
> popped out of the old listener's queue, cloned, and put into the new
> listener's queue. Then we can accept() its full-socket via the cloned
> mini-socket.

Thanks, that makes sense. Eric is the expert on this part to review the
correctness. My only suggestion is to add some stats tracking the
mini-sockets that fail to migrate for the various reasons (i.e. the code
locations where the requests need to be dropped). This would be useful
for evaluating the effectiveness of this new feature.
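For reference, the "use an eBPF program to select a specific listener or to
cancel the migration" note in the cover letter maps to an sk_reuseport
program roughly like the untested sketch below. It assumes the migrating_sk
field and the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE attach point land as
proposed in this series (the "sk_reuseport/migrate" section name follows my
reading of the selftests); the map name and index are made up.

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} reuseport_map SEC(".maps");

SEC("sk_reuseport/migrate")
int select_or_migrate(struct sk_reuseport_md *reuse_md)
{
	__u32 key = 0;	/* slot of the preferred target listener */

	/* migrating_sk should be non-NULL only when this program runs for
	 * a request/child socket being migrated from a closed listener;
	 * it is NULL on the ordinary incoming-SYN path.
	 */
	if (reuse_md->migrating_sk) {
		/* Pick an explicit target, or return SK_DROP to cancel
		 * the migration so the request is dropped as today.
		 */
		if (bpf_sk_select_reuseport(reuse_md, &reuseport_map, &key, 0))
			return SK_DROP;
		return SK_PASS;
	}

	/* Incoming SYN: make no explicit choice and keep the kernel's
	 * default hash-based selection.
	 */
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";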
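On the TFO-key point, the keys can diverge simply because each worker
installs its own key on its own listener, e.g. a restarted worker
generating a fresh random key. A minimal sketch of such a per-listener
setup, assuming the TCP_FASTOPEN_KEY socket option (available since
roughly 4.13, if I remember correctly); the helper name and queue sizes
are made up for illustration:

#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Create one SO_REUSEPORT listener with its own 16-byte TFO server key.
 * Old and new workers calling this with different keys is exactly the
 * "different TFO key" situation discussed above.
 */
static int tfo_listener(uint16_t port, const unsigned char key[16])
{
	int one = 1, qlen = 128;
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(port),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
	setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
	setsockopt(fd, IPPROTO_TCP, TCP_FASTOPEN_KEY, key, 16);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 128) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}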