Date: Wed, 22 May 2019 18:00:58 -0700 (PDT)
Message-Id: <20190522.180058.887469871482412864.davem@davemloft.net>
To: sunilmut@microsoft.com
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    sashal@kernel.org, mikelley@microsoft.com, netdev@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next] hv_sock: perf: loop in send() to maximize bandwidth
From: David Miller
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sunil Muthuswamy
Date: Wed, 22 May 2019 23:10:44 +0000

> Currently, the hv_sock send() iterates over the buffer once, puts data
> into the VMBUS channel, and returns. It does not take advantage of the
> case where a simultaneous reader is draining data from the channel. In
> that case, send() can maximize bandwidth (and consequently minimize CPU
> cycles) by iterating until the channel is found to be full.
>
> Perf data:
> Total data transfer: 10 GB/iteration
> Single-threaded reader/writer, Linux hvsocket writer with Windows
> hvsocket reader
> Packet size: 64 KB
> CPU sys time was captured using the 'time' command for the writer to
> send 10 GB of data.
> 'Send Buffer Loop' is with the patch applied.
> The values below are over 10 iterations.
 ...
> Observations:
> 1. The average throughput does not change much with this patch in this
>    scenario, most likely because the throughput bottleneck is elsewhere.
> 2. The average system (kernel) CPU time drops by more than 10% with this
>    change, for the same amount of data transferred.
>
> Signed-off-by: Sunil Muthuswamy

Applied.
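The pattern the patch describes can be illustrated with a minimal user-space sketch. This is not the actual hv_sock/VMBUS code; the `struct channel` ring-buffer stand-in and the `channel_put`/`send_loop` helpers are hypothetical, chosen only to show the difference between a single put-and-return and looping until the channel reports itself full:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the VMBUS channel: a fixed-capacity buffer
 * that a concurrent reader may drain between writer iterations. */
struct channel {
	char   buf[256 * 1024];   /* channel capacity */
	size_t used;              /* bytes currently queued */
};

/* Bytes the channel can accept right now. */
static size_t channel_writable(const struct channel *ch)
{
	return sizeof(ch->buf) - ch->used;
}

/* Enqueue up to len bytes; returns the number actually queued. */
static size_t channel_put(struct channel *ch, const char *data, size_t len)
{
	size_t n = channel_writable(ch);

	if (len < n)
		n = len;
	memcpy(ch->buf + ch->used, data, n);
	ch->used += n;
	return n;
}

/* The idea from the patch: instead of one put-and-return, keep
 * iterating until the buffer is consumed or the channel is found to be
 * full.  If a concurrent reader drains the channel between iterations,
 * later iterations make further progress within the same send() call,
 * avoiding extra syscall round-trips for the same amount of data. */
static size_t send_loop(struct channel *ch, const char *data, size_t len)
{
	size_t sent = 0;

	while (sent < len) {
		size_t n = channel_put(ch, data + sent, len - sent);

		if (n == 0)
			break;	/* channel full: stop; caller retries later */
		sent += n;
	}
	return sent;
}
```

This matches the observation in the commit message: the loop does not by itself raise throughput (the bottleneck may be elsewhere), but it reduces the number of send() invocations needed to move the same data, which is where the kernel CPU-time saving comes from.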