Date: Mon, 20 Jul 2020 07:45:50 -0400
From: "Michael S. Tsirkin"
To: Eugenio Pérez
Cc: Jason Wang, Konrad Rzeszutek Wilk, linux-kernel@vger.kernel.org,
    kvm list, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org
Subject: Re: [PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
Message-ID: <20200720074545-mutt-send-email-mst@kernel.org>
References: <0a83aa03-8e3c-1271-82f5-4c07931edea3@redhat.com>
 <20200709133438-mutt-send-email-mst@kernel.org>
 <7dec8cc2-152c-83f4-aa45-8ef9c6aca56d@redhat.com>
 <20200710015615-mutt-send-email-mst@kernel.org>
 <20200720051410-mutt-send-email-mst@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 20, 2020 at 01:16:47PM +0200, Eugenio Pérez wrote:
> 
> On Mon, Jul 20, 2020 at 11:27 AM Michael S. Tsirkin wrote:
> > On Thu, Jul 16, 2020 at 07:16:27PM +0200, Eugenio Perez Martin wrote:
> > > On Fri, Jul 10, 2020 at 7:58 AM Michael S. Tsirkin wrote:
> > > > On Fri, Jul 10, 2020 at 07:39:26AM +0200, Eugenio Perez Martin wrote:
> > > > > > > How about playing with the batch size? Make it a mod parameter instead
> > > > > > > of the hard coded 64, and measure for all values 1 to 64 ...
> > > > > > >
> > > > > > Right, according to the test result, 64 seems to be too aggressive in
> > > > > > the case of TX.
> > > > > >
> > > > >
> > > > > Got it, thanks both!
> > > >
> > > > In particular I wonder whether with batch size 1
> > > > we get same performance as without batching
> > > > (would indicate 64 is too aggressive)
> > > > or not (would indicate one of the code changes
> > > > affects performance in an unexpected way).
> > > >
> > > > --
> > > > MST
> > >
> > > Hi!
> > >
> > > Varying batch_size as drivers/vhost/net.c:VHOST_NET_BATCH,
> >
> > sorry this is not what I meant.
> >
> > I mean something like this:
> >
> >
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 0b509be8d7b1..b94680e5721d 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -1279,6 +1279,10 @@ static void handle_rx_net(struct vhost_work *work)
> >  	handle_rx(net);
> >  }
> >
> > +static int batch_num = 0;
> > +module_param(batch_num, int, 0644);
> > +MODULE_PARM_DESC(batch_num, "Number of batched descriptors. (offset from 64)");
> > +
> >  static int vhost_net_open(struct inode *inode, struct file *f)
> >  {
> >  	struct vhost_net *n;
> > @@ -1333,7 +1337,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
> >  		vhost_net_buf_init(&n->vqs[i].rxq);
> >  	}
> >  	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX,
> > -		       UIO_MAXIOV + VHOST_NET_BATCH,
> > +		       UIO_MAXIOV + VHOST_NET_BATCH + batch_num,
> >  		       VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT, true,
> >  		       NULL);
> >
> >
> > then you can try tweaking batching and playing with the mod parameter
> > without recompiling.
> >
> >
> > VHOST_NET_BATCH affects lots of other things.
>
> Ok, got it. Since they were aligned from the start, I thought it was a good
> idea to keep them in sync.
>
> > > and testing
> > > the pps as the previous mail says. This means that we have either only
> > > vhost_net batching (in base testing, like before applying this patch)
> > > or both batching sizes the same.
> > >
> > > I've checked that the vhost process (and pktgen) also goes to 100% cpu.
> > >
> > > For tx: Batching always decreases performance, in all cases. Not
> > > sure why bufapi made things better the last time.
> > >
> > > Batching makes improvements until 64 bufs; I see increments of pps, but
> > > only about 1%.
> > >
> > > For rx: Batching always improves performance. It seems that if we
> > > batch little, bufapi decreases performance, but beyond 64, bufapi is
> > > much better. The bufapi version keeps improving until I set a batching
> > > of 1024.
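To spell out the offset semantics of the sketched diff (batch_num is added on top of VHOST_NET_BATCH rather than used as an absolute batch size), here is a quick back-of-the-envelope in Python; UIO_MAXIOV = 1024 is the kernel constant, VHOST_NET_BATCH = 64 is the hard-coded default mentioned in this thread:

```python
# Effective iov/descriptor budget passed to vhost_dev_init() in the sketch
# above: UIO_MAXIOV + VHOST_NET_BATCH + batch_num. batch_num is an offset
# from 64, so negative values shrink the batch and 0 keeps today's behavior.
UIO_MAXIOV = 1024       # kernel constant (include/uapi/linux/uio.h)
VHOST_NET_BATCH = 64    # hard-coded default in drivers/vhost/net.c

def effective_budget(batch_num=0):
    return UIO_MAXIOV + VHOST_NET_BATCH + batch_num

print(effective_budget(0))    # 1088, identical to the unpatched code
print(effective_budget(-63))  # 1025, i.e. batching effectively 1 descriptor
```

With the 0644 permissions in module_param(), the value would also show up under /sys/module/vhost_net/parameters/, so it could be tweaked between runs without reloading the module.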
> > > So I guess it is super good to have a bunch of buffers to
> > > receive.
> > >
> > > Since with this test I cannot disable event_idx or things like that,
> > > what would be the next step for testing?
> > >
> > > Thanks!
> > >
> > > --
> > > Results:
> > > # Buf size: 1,16,32,64,128,256,512
> > >
> > > # Tx
> > > # ===
> > > # Base
> > > 2293304.308,3396057.769,3540860.615,3636056.077,3332950.846,3694276.154,3689820
> > > # Batch
> > > 2286723.857,3307191.643,3400346.571,3452527.786,3460766.857,3431042.5,3440722.286
> > > # Batch + Bufapi
> > > 2257970.769,3151268.385,3260150.538,3379383.846,3424028.846,3433384.308,3385635.231,3406554.538
> > >
> > > # Rx
> > > # ==
> > > # pktgen results (pps)
> > > 1223275,1668868,1728794,1769261,1808574,1837252,1846436
> > > 1456924,1797901,1831234,1868746,1877508,1931598,1936402
> > > 1368923,1719716,1794373,1865170,1884803,1916021,1975160
> > >
> > > # Testpmd pps results
> > > 1222698.143,1670604,1731040.6,1769218,1811206,1839308.75,1848478.75
> > > 1450140.5,1799985.75,1834089.75,1871290,1880005.5,1934147.25,1939034
> > > 1370621,1721858,1796287.75,1866618.5,1885466.5,1918670.75,1976173.5,1988760.75,1978316
> > >
> > > pktgen was run again for rx with 1024 and 2048 buf size, giving
> > > 1988760.75 and 1978316 pps. Testpmd goes the same way.
> >
> > I don't really understand what this data means.
> > Which number of descs is batched for each run?
>
> Sorry, I should have explained better. I will expand here, but feel free to
> skip it since we are going to discard the data anyway, or to propose a
> better way to present it.
>
> It is a CSV with the values I've obtained, in pps, from pktgen and testpmd,
> so it is easy to plot them.
>
> Maybe it is easier as tables, if mail readers/gmail do not misalign them.
>
> > > # Tx
> > > # ===
>
> Base: With the previous code, not integrating any patch. testpmd is in
> txonly mode, and the tap interface is XDP_DROP everything.
> We vary VHOST_NET_BATCH (1, 16, 32, ...).
> As Jason put it in a previous mail:
>
> TX: testpmd(txonly) -> virtio-user -> vhost_net -> XDP_DROP on TAP
>
>           1 |          16 |          32 |          64 |         128 |         256 |         512 |
> 2293304.308 | 3396057.769 | 3540860.615 | 3636056.077 | 3332950.846 | 3694276.154 |     3689820 |
>
> If we add the batching part of the series, but not the bufapi:
>
>           1 |          16 |          32 |          64 |         128 |         256 |         512 |
> 2286723.857 | 3307191.643 | 3400346.571 | 3452527.786 | 3460766.857 |   3431042.5 | 3440722.286 |
>
> And if we add the bufapi part, i.e., all of the series:
>
>           1 |          16 |          32 |          64 |         128 |         256 |         512 |        1024
> 2257970.769 | 3151268.385 | 3260150.538 | 3379383.846 | 3424028.846 | 3433384.308 | 3385635.231 | 3406554.538
>
> For easier treatment, all in the same table:
>
>           1 |          16 |          32 |          64 |         128 |         256 |         512 |        1024
> ------------+-------------+-------------+-------------+-------------+-------------+-------------+------------
> 2293304.308 | 3396057.769 | 3540860.615 | 3636056.077 | 3332950.846 | 3694276.154 |     3689820 |
> 2286723.857 | 3307191.643 | 3400346.571 | 3452527.786 | 3460766.857 |   3431042.5 | 3440722.286 |
> 2257970.769 | 3151268.385 | 3260150.538 | 3379383.846 | 3424028.846 | 3433384.308 | 3385635.231 | 3406554.538
>
> > > # Rx
> > > # ==
>
> The rx tests are done with pktgen injecting packets into the tap interface,
> and testpmd in rxonly forward mode.
> Again, each column is a different value of VHOST_NET_BATCH, and each row is
> base, +batching, and +buf_api:
>
> > > # pktgen results (pps)
>
> (Didn't record extreme cases like >512 bufs batching)
>
>       1 |      16 |      32 |      64 |     128 |     256 |     512
> --------+---------+---------+---------+---------+---------+--------
> 1223275 | 1668868 | 1728794 | 1769261 | 1808574 | 1837252 | 1846436
> 1456924 | 1797901 | 1831234 | 1868746 | 1877508 | 1931598 | 1936402
> 1368923 | 1719716 | 1794373 | 1865170 | 1884803 | 1916021 | 1975160
>
> > > # Testpmd pps results
>
>           1 |         16 |         32 |         64 |        128 |        256 |        512 |       1024 |    2048
> ------------+------------+------------+------------+------------+------------+------------+------------+---------
> 1222698.143 |    1670604 |  1731040.6 |    1769218 |    1811206 | 1839308.75 | 1848478.75 |            |
>   1450140.5 | 1799985.75 | 1834089.75 |    1871290 |  1880005.5 | 1934147.25 |    1939034 |            |
>     1370621 |    1721858 | 1796287.75 |  1866618.5 |  1885466.5 | 1918670.75 |  1976173.5 | 1988760.75 | 1978316
>
> The last extreme cases (>512 bufs batched) were recorded just for the
> bufapi case.
>
> Does that make sense now?
>
> Thanks!

yes, thanks!
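For what it's worth, the Tx rows above are straightforward to compare programmatically; a minimal sketch in plain Python (the numbers are copied verbatim from the tables in this mail, nothing vhost-specific):

```python
# Percent change of the "+batching" Tx run versus the base run, per
# VHOST_NET_BATCH value. Values copied from the Tx tables above (pps).
sizes = [1, 16, 32, 64, 128, 256, 512]
base  = [2293304.308, 3396057.769, 3540860.615, 3636056.077,
         3332950.846, 3694276.154, 3689820.0]
batch = [2286723.857, 3307191.643, 3400346.571, 3452527.786,
         3460766.857, 3431042.5, 3440722.286]

for size, b, p in zip(sizes, base, batch):
    delta = 100.0 * (p - b) / b  # negative means batching was slower
    print(f"batch size {size:4d}: {delta:+6.2f}%")
```

This makes the shape of the Tx result easy to see at a glance: batching is a wash or a small loss at most sizes (about -5% at 64), with 128 the only size where it comes out ahead.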