Subject: Re: [PATCH v7 bpf-next 00/11] Socket migration for SO_REUSEPORT.
From: Kuniyuki Iwashima
Date: Wed, 9 Jun 2021 08:03:57 +0900
Message-ID: <20210608230357.39528-1-kuniyu@amazon.co.jp>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Yuchung Cheng
Date: Tue, 8 Jun 2021 10:48:06 -0700

> On Tue, May 25, 2021 at 11:42 PM Daniel Borkmann wrote:
> >
> > On 5/21/21 8:20 PM, Kuniyuki Iwashima wrote:
> > > The SO_REUSEPORT option allows sockets to listen on the same port
> > > and to accept connections evenly. However, there is a defect in the
> > > current implementation [1]. When a SYN packet is received, the
> > > connection is tied to a listening socket. Accordingly, when the
> > > listener is closed, in-flight requests during the three-way
> > > handshake and child sockets in the accept queue are dropped even if
> > > other listeners on the same port could accept such connections.
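(For illustration: a minimal sketch of such a reuseport listener group,
assuming a Linux host; error handling omitted and all names are
illustrative.)

#include <stdint.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Each worker process creates its own listener on the same port; the
 * kernel spreads incoming connections across the reuseport group. */
static int reuseport_listener(uint16_t port)
{
        int one = 1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_port = htons(port),
                .sin_addr.s_addr = htonl(INADDR_ANY),
        };

        /* SO_REUSEPORT lets every worker bind the same address/port. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 128);
        return fd;
}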
> > > This situation can happen when various server management tools
> > > restart server processes (such as nginx). For instance, when we
> > > change nginx's configuration and restart it, it spins up new
> > > workers that respect the new configuration and closes all listeners
> > > on the old workers, so the in-flight ACK of the 3WHS is answered
> > > with a RST.
> > >
> > > To avoid such a situation, users have to know in detail how the
> > > kernel handles SYN packets and implement connection draining with
> > > eBPF [2]:
> > >
> > >   1. Stop routing SYN packets to the listener by eBPF.
> > >   2. Wait for all timers to expire to complete requests.
> > >   3. Accept connections until EAGAIN, then close the listener.
> > >
> > > or
> > >
> > >   1. Start counting SYN packets and accept() syscalls using an
> > >      eBPF map.
> > >   2. Stop routing SYN packets.
> > >   3. Accept connections up to the count, then close the listener.
> > >
> > > Either way, we cannot close a listener immediately. Ideally,
> > > however, the application should not have to drain the
> > > not-yet-accepted sockets, because the 3WHS and tying a connection
> > > to a listener are purely kernel behaviour. The root cause is within
> > > the kernel, so the issue should be addressed in kernel space and
> > > should not be visible to user space. This patchset fixes it so that
> > > users need not care about the kernel implementation or connection
> > > draining. With this patchset, the kernel redistributes requests and
> > > connections from a listener to the others in the same reuseport
> > > group at/after the close() or shutdown() syscall.
> > >
> > > Although some software does connection draining, there are still
> > > merits in migration. For security reasons, such as replacing TLS
> > > certificates, we may want to apply new settings as soon as
> > > possible, and/or we may not be able to wait for connection
> > > draining. The sockets in the accept queue have not started
> > > application sessions yet. So, if we do not drain such sockets, they
> > > can be handled by the newer listeners and could have a longer
> > > lifetime. It is difficult to drain all connections in every case,
> > > but we can decrease such aborted connections by migration. In that
> > > sense, migration is always better than draining.
> > >
> > > Moreover, auto-migration simplifies user-space logic and also works
> > > well in a case where we cannot modify and rebuild a server program
> > > to implement the workaround.
> > >
> > > Note that the source and destination listeners MUST have the same
> > > settings at the socket API level; otherwise, applications may face
> > > inconsistency and errors. In such a case, we have to use an eBPF
> > > program to select a specific listener or to cancel migration.
>
> This looks to be a useful feature. What happens when migrating a
> passively fast-opened socket that is still in the old listener's queue
> and has not yet been accepted (TFO is both a mini-socket and a
> full-socket)? It gets tricky when the old and new listeners have
> different TFO keys.

The tricky situation can happen without this patch set. We can change
the listener's TFO key while TCP_SYN_RECV sockets are still in the
accept queue. The change is already handled properly, so it does not
crash applications.
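For reference, rotating the TFO key on a live listener is just a
setsockopt() on the listening socket. A minimal sketch, assuming Linux
4.15+ where TCP_FASTOPEN_KEY is available (the key bytes and queue
length are placeholders):

#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void rotate_tfo_key(int listen_fd)
{
        int qlen = 128;            /* max pending TFO requests */
        unsigned char key[16];     /* 16-byte primary key */

        /* Enable TFO on the listener. */
        setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN,
                   &qlen, sizeof(qlen));

        /* Install a new primary key. TCP_SYN_RECV sockets already in
         * the accept queue keep working across the change. */
        memset(key, 0xab, sizeof(key));
        setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN_KEY,
                   key, sizeof(key));
}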
In the normal 3WHS case, a full-socket is created after the 3WHS
completes. In the TFO case, a full-socket is created after validating
the TFO cookie in the initial SYN packet. After that, the connection is
basically handled via the full-socket, except for the accept() syscall.
So in both cases, the mini-socket is popped out of the old listener's
queue, cloned, and put into the new listener's queue. Then we can
accept() its full-socket via the cloned mini-socket.
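Regarding the note in the cover letter above about using an eBPF
program to select a specific listener or to cancel migration: with this
series, the sk_reuseport program also runs during migration, with
reuse_md->migrating_sk set. A rough sketch, modeled on the selftests in
the series (the map and section names are illustrative):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
} reuseport_map SEC(".maps");

SEC("sk_reuseport/migrate")
int select_or_migrate(struct sk_reuseport_md *reuse_md)
{
        __u32 zero = 0;

        /* NULL on a normal SYN lookup; non-NULL when the kernel is
         * migrating a request or an accepted child socket. */
        if (!reuse_md->migrating_sk)
                return SK_PASS;  /* keep the default selection */

        /* Redirect to the listener stored in the map; returning
         * SK_DROP here would cancel the migration instead. */
        if (bpf_sk_select_reuseport(reuse_md, &reuseport_map, &zero, 0))
                return SK_DROP;

        return SK_PASS;
}

char _license[] SEC("license") = "GPL";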