From: Kuniyuki Iwashima
To: "David S. Miller", Jakub Kicinski, Eric Dumazet, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau
CC: Benjamin Herrenschmidt, Kuniyuki Iwashima
Subject: [PATCH v5 bpf-next 06/11] tcp: Migrate TCP_NEW_SYN_RECV requests at retransmitting SYN+ACKs.
Date: Mon, 10 May 2021 12:44:28 +0900
Message-ID: <20210510034433.52818-7-kuniyu@amazon.co.jp>
In-Reply-To: <20210510034433.52818-1-kuniyu@amazon.co.jp>
References: <20210510034433.52818-1-kuniyu@amazon.co.jp>
X-Mailing-List: linux-kernel@vger.kernel.org

As with the preceding patch, this patch changes reqsk_timer_handler() to call reuseport_migrate_sock() and reqsk_clone() to migrate in-flight requests when retransmitting SYN+ACKs.

If we can select a new listener and clone the request, we resume setting the SYN+ACK timer for the new req. If we can set the timer, we call inet_ehash_insert() to unhash the old req and put the new req into ehash.

The noteworthy point here is that, by unhashing the old req, another CPU processing it may lose the "own_req" race in tcp_v[46]_syn_recv_sock() and drop the final ACK packet. However, the new timer will recover from this situation.
Signed-off-by: Kuniyuki Iwashima
---
 net/ipv4/inet_connection_sock.c | 75 ++++++++++++++++++++++++++++++---
 1 file changed, 68 insertions(+), 7 deletions(-)

diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 851992405826..e690d1cff36e 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -695,10 +695,20 @@ int inet_rtx_syn_ack(const struct sock *parent, struct request_sock *req)
 }
 EXPORT_SYMBOL(inet_rtx_syn_ack);
 
+static void reqsk_queue_migrated(struct request_sock_queue *queue,
+				 const struct request_sock *req)
+{
+	if (req->num_timeout == 0)
+		atomic_inc(&queue->young);
+	atomic_inc(&queue->qlen);
+}
+
 static void reqsk_migrate_reset(struct request_sock *req)
 {
+	req->saved_syn = NULL;
+	inet_rsk(req)->ireq_opt = NULL;
 #if IS_ENABLED(CONFIG_IPV6)
-	inet_rsk(req)->ipv6_opt = NULL;
+	inet_rsk(req)->pktopts = NULL;
 #endif
 }
 
@@ -742,15 +752,39 @@ EXPORT_SYMBOL(inet_csk_reqsk_queue_drop_and_put);
 static void reqsk_timer_handler(struct timer_list *t)
 {
 	struct request_sock *req = from_timer(req, t, rsk_timer);
+	struct request_sock *nreq = NULL, *oreq = req;
 	struct sock *sk_listener = req->rsk_listener;
-	struct net *net = sock_net(sk_listener);
-	struct inet_connection_sock *icsk = inet_csk(sk_listener);
-	struct request_sock_queue *queue = &icsk->icsk_accept_queue;
+	struct inet_connection_sock *icsk;
+	struct request_sock_queue *queue;
+	struct net *net;
 	int max_syn_ack_retries, qlen, expire = 0, resend = 0;
 
-	if (inet_sk_state_load(sk_listener) != TCP_LISTEN)
-		goto drop;
+	if (inet_sk_state_load(sk_listener) != TCP_LISTEN) {
+		struct sock *nsk;
+
+		nsk = reuseport_migrate_sock(sk_listener, req_to_sk(req), NULL);
+		if (!nsk)
+			goto drop;
+
+		nreq = reqsk_clone(req, nsk);
+		if (!nreq)
+			goto drop;
+
+		/* The new timer for the cloned req can decrease the 2
+		 * by calling inet_csk_reqsk_queue_drop_and_put(), so
+		 * hold another count to prevent use-after-free and
+		 * call reqsk_put() just before return.
+		 */
+		refcount_set(&nreq->rsk_refcnt, 2 + 1);
+		timer_setup(&nreq->rsk_timer, reqsk_timer_handler, TIMER_PINNED);
+		reqsk_queue_migrated(&inet_csk(nsk)->icsk_accept_queue, req);
+
+		req = nreq;
+		sk_listener = nsk;
+	}
+
+	icsk = inet_csk(sk_listener);
+	net = sock_net(sk_listener);
 	max_syn_ack_retries = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_synack_retries;
 	/* Normally all the openreqs are young and become mature
 	 * (i.e. converted to established socket) for first timeout.
@@ -769,6 +803,7 @@ static void reqsk_timer_handler(struct timer_list *t)
 	 * embrions; and abort old ones without pity, if old
 	 * ones are about to clog our table.
 	 */
+	queue = &icsk->icsk_accept_queue;
 	qlen = reqsk_queue_len(queue);
 	if ((qlen << 1) > max(8U, READ_ONCE(sk_listener->sk_max_ack_backlog))) {
 		int young = reqsk_queue_len_young(queue) << 1;
@@ -793,10 +828,36 @@ static void reqsk_timer_handler(struct timer_list *t)
 			atomic_dec(&queue->young);
 		timeo = min(TCP_TIMEOUT_INIT << req->num_timeout, TCP_RTO_MAX);
 		mod_timer(&req->rsk_timer, jiffies + timeo);
+
+		if (!nreq)
+			return;
+
+		if (!inet_ehash_insert(req_to_sk(nreq), req_to_sk(oreq), NULL)) {
+			/* delete timer */
+			inet_csk_reqsk_queue_drop(sk_listener, nreq);
+			goto drop;
+		}
+
+		reqsk_migrate_reset(oreq);
+		reqsk_queue_removed(&inet_csk(oreq->rsk_listener)->icsk_accept_queue, oreq);
+		reqsk_put(oreq);
+
+		reqsk_put(nreq);
+
 		return;
 	}
+
 drop:
-	inet_csk_reqsk_queue_drop_and_put(sk_listener, req);
+	/* Even if we can clone the req, we may need not retransmit any more
+	 * SYN+ACKs (nreq->num_timeout > max_syn_ack_retries, etc), or another
+	 * CPU may win the "own_req" race so that inet_ehash_insert() fails.
+	 */
+	if (nreq) {
+		reqsk_migrate_reset(nreq);
+		reqsk_queue_removed(queue, nreq);
+		__reqsk_free(nreq);
+	}
+
+	inet_csk_reqsk_queue_drop_and_put(oreq->rsk_listener, oreq);
 }
 
 static void reqsk_queue_hash_req(struct request_sock *req,
-- 
2.30.2