From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: Eric Dumazet
Miller" , Alexey Kuznetsov , Hideaki YOSHIFUJI , netdev , LKML , Soheil Hassas Yeganeh , Neal Cardwell , Yuchung Cheng , Van Jacobson , Jerry Chu Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth Date: Fri, 16 Feb 2018 18:37:08 +0100 Message-ID: <18081951.d6t0IUddpn@natalenko.name> In-Reply-To: References: <1697118.nv5eASg0nx@natalenko.name> <2189487.nPhU5NAnbi@natalenko.name> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="iso-8859-1" ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=natalenko.name; s=arc-20170712; t=1518802628; h=from:subject:date:message-id:to:cc:mime-version:content-type:content-transfer-encoding:in-reply-to:references; bh=zFmYJBLkO+4pFmb5WBdZ7qo2nG/yjNLwJRAhGwHNEX0=; b=badOEPxUcMzo046ImKT6JYT4AK8ey576HdFk8k6TrJI/HscG6CTbuiQ2FuFtGPy2tOI8Um v2ML7yKOyjb3wnOTDGiPat8Vyjp86Au6G0oSSCY99gAmaFUmnDZ1DFsASE8dsQAsZwYfo/ NbB3m1/bwFiUpMl519WBllHpb/FnYeM= ARC-Seal: i=1; s=arc-20170712; d=natalenko.name; t=1518802629; a=rsa-sha256; cv=none; b=eCscNvstiUt4UwlTXC56SEFQDvyv5VqRD70dgtOlxlfBqta1oej6z/wzknpb+84ObMSA2/DI2yUT/QIVAPkkFW6YLOXwUgC8kmkykwlbeiYPHetPXUHclA2tZ4VKDXSu6B8Nq1sqW4z0SE6bcKDo5bhHN/+uRVughEwLye/veXA= ARC-Authentication-Results: i=1; auth=pass smtp.auth=oleksandr@natalenko.name smtp.mailfrom=oleksandr@natalenko.name Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Hi. On p=E1tek 16. =FAnora 2018 17:25:58 CET Eric Dumazet wrote: > The way TCP pacing works, it defaults to internal pacing using a hint > stored in the socket. >=20 > If you change the qdisc while flow is alive, result could be unexpected. I don't change a qdisc while flow is alive. Either the VM is completely=20 restarted, or iperf3 is restarted on both sides. > (TCP socket remembers that one FQ was supposed to handle the pacing) >=20 > What results do you have if you use standard pfifo_fast ? Almost the same as with fq_codel (see my previous email with numbers). > I am asking because TCP pacing relies on High resolution timers, and > that might be weak on your VM. Also, I've switched to measuring things on a real HW only (also see previou= s=20 email with numbers). Thanks. Regards, Oleksandr