From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Yuchung Cheng, mingkun bian, Neal Cardwell, Eric Dumazet,
	"David S. Miller", Sasha Levin, netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.13 071/189] net: tcp better handling of reordering then loss cases
Date: Tue, 6 Jul 2021 07:12:11 -0400
Message-Id: <20210706111409.2058071-71-sashal@kernel.org>
In-Reply-To: <20210706111409.2058071-1-sashal@kernel.org>
References: <20210706111409.2058071-1-sashal@kernel.org>

From: Yuchung Cheng

[ Upstream commit a29cb6914681a55667436a9eb7a42e28da8cf387 ]

This patch aims to improve the situation when reordering and loss are
occurring in the same flight of packets. Previously the reordering would
first induce a spurious recovery, then the subsequent ACK may undo the
cwnd (based on the timestamps, e.g.). However the current loss recovery
does not proceed to invoke RACK to install a reordering timer. If some
packets are also lost, this may lead to a long RTO-based recovery. An
example is https://groups.google.com/g/bbr-dev/c/OFHADvJbTEI

The solution is, after reverting the recovery, to always invoke RACK to
either arm the RACK timer to fast retransmit after the reordering
window, or restart the recovery if new loss is identified. Hence it is
possible the sender may go from Recovery to Disorder/Open to Recovery
again in one ACK.

Reported-by: mingkun bian
Signed-off-by: Yuchung Cheng
Signed-off-by: Neal Cardwell
Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 net/ipv4/tcp_input.c | 45 ++++++++++++++++++++++++++-------------------
 1 file changed, 26 insertions(+), 19 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 4cf4dd532d1c..bc266514ce58 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2816,8 +2816,17 @@ static void tcp_process_loss(struct sock *sk, int flag, int num_dupack,
 	*rexmit = REXMIT_LOST;
 }
 
+static bool tcp_force_fast_retransmit(struct sock *sk)
+{
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	return after(tcp_highest_sack_seq(tp),
+		     tp->snd_una + tp->reordering * tp->mss_cache);
+}
+
 /* Undo during fast recovery after partial ACK. */
-static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
+static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una,
+				 bool *do_lost)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 
@@ -2842,7 +2851,9 @@ static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
 		tcp_undo_cwnd_reduction(sk, true);
 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
 		tcp_try_keep_open(sk);
-		return true;
+	} else {
+		/* Partial ACK arrived. Force fast retransmit. */
+		*do_lost = tcp_force_fast_retransmit(sk);
 	}
 	return false;
 }
@@ -2866,14 +2877,6 @@ static void tcp_identify_packet_loss(struct sock *sk, int *ack_flag)
 	}
 }
 
-static bool tcp_force_fast_retransmit(struct sock *sk)
-{
-	struct tcp_sock *tp = tcp_sk(sk);
-
-	return after(tcp_highest_sack_seq(tp),
-		     tp->snd_una + tp->reordering * tp->mss_cache);
-}
-
 /* Process an event, which can update packets-in-flight not trivially.
  * Main goal of this function is to calculate new estimate for left_out,
  * taking into account both packets sitting in receiver's buffer and
@@ -2943,17 +2946,21 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
 		if (!(flag & FLAG_SND_UNA_ADVANCED)) {
 			if (tcp_is_reno(tp))
 				tcp_add_reno_sack(sk, num_dupack, ece_ack);
-		} else {
-			if (tcp_try_undo_partial(sk, prior_snd_una))
-				return;
-			/* Partial ACK arrived. Force fast retransmit. */
-			do_lost = tcp_force_fast_retransmit(sk);
-		}
-		if (tcp_try_undo_dsack(sk)) {
-			tcp_try_keep_open(sk);
+		} else if (tcp_try_undo_partial(sk, prior_snd_una, &do_lost))
 			return;
-		}
+
+		if (tcp_try_undo_dsack(sk))
+			tcp_try_keep_open(sk);
+
 		tcp_identify_packet_loss(sk, ack_flag);
+		if (icsk->icsk_ca_state != TCP_CA_Recovery) {
+			if (!tcp_time_to_recover(sk, flag))
+				return;
+			/* Undo reverts the recovery state. If loss is evident,
+			 * starts a new recovery (e.g. reordering then loss);
+			 */
+			tcp_enter_recovery(sk, ece_ack);
+		}
 		break;
 	case TCP_CA_Loss:
 		tcp_process_loss(sk, flag, num_dupack, rexmit);
-- 
2.30.2
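
[Editor's note, not part of the patch] For readers following the control-flow
change without the surrounding kernel context, here is a tiny standalone C
sketch of the idea. It is NOT kernel code: the struct, state names and helper
functions are invented stand-ins; only the ordering of the steps mirrors the
patch, namely that after a cwnd undo the ACK processing no longer returns
early but runs RACK-style loss detection and may re-enter recovery at once.

/* reorder_undo_sketch.c -- simplified model, not the kernel implementation. */
#include <stdbool.h>
#include <stdio.h>

enum ca_state { CA_OPEN, CA_DISORDER, CA_RECOVERY };

struct conn_model {
	enum ca_state state;
	bool undo_possible;   /* e.g. timestamps prove the recovery was spurious */
	bool rack_sees_loss;  /* RACK-style detection finds newly lost packets */
};

/* Stand-in for tcp_identify_packet_loss(): ask RACK whether anything is lost. */
static bool rack_detect_loss(const struct conn_model *c)
{
	return c->rack_sees_loss;
}

static void fastretrans_alert_model(struct conn_model *c)
{
	if (c->state == CA_RECOVERY && c->undo_possible) {
		/* Undo reverts the spurious recovery: Recovery -> Disorder/Open. */
		c->state = CA_DISORDER;
		printf("spurious recovery undone\n");
	}

	/* New behaviour: do not return here.  Keep going so RACK can either
	 * arm its reordering timer or declare fresh losses right away.
	 */
	if (c->state != CA_RECOVERY && rack_detect_loss(c)) {
		c->state = CA_RECOVERY;
		printf("new loss detected: re-entering fast recovery in the same ACK\n");
	}
}

int main(void)
{
	/* Reordering and loss in the same flight of packets. */
	struct conn_model c = { CA_RECOVERY, true, true };

	fastretrans_alert_model(&c);
	return c.state == CA_RECOVERY ? 0 : 1;
}

With both flags set, the model goes Recovery -> Disorder -> Recovery in a
single call, which is the sequence the commit message describes. The real
patch achieves this by dropping the early return after
tcp_try_undo_partial()/tcp_try_undo_dsack() and re-checking
tcp_time_to_recover() when the state is no longer TCP_CA_Recovery.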