Subject: Re: [PATCH net-next 7/8] vhost_net: try batch dequing from skb array
From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin"
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 23 Mar 2017 13:34:36 +0800
In-Reply-To: <20170322155111-mutt-send-email-mst@kernel.org>
References: <1490069087-4783-1-git-send-email-jasowang@redhat.com> <1490069087-4783-8-git-send-email-jasowang@redhat.com> <20170322155111-mutt-send-email-mst@kernel.org>

On 2017-03-22 22:16, Michael S. Tsirkin wrote:
> On Tue, Mar 21, 2017 at 12:04:46PM +0800, Jason Wang wrote:
>> We used to dequeue one skb during recvmsg() from skb_array. This could
>> be inefficient because of poor cache utilization and the spinlock
>> touched for each packet.
>> This patch batches them instead: it calls the batch dequeuing helpers
>> explicitly on the exported skb array and passes each skb back through
>> msg_control so the underlying socket can finish the userspace copy.
>>
>> Tests were done by XDP1:
>> - small buffer:
>>   Before: 1.88Mpps
>>   After : 2.25Mpps (+19.6%)
>> - mergeable buffer:
>>   Before: 1.83Mpps
>>   After : 2.10Mpps (+14.7%)
>>
>> Signed-off-by: Jason Wang
>> ---
>>  drivers/vhost/net.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 60 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index 9b51989..53f09f2 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -28,6 +28,8 @@
>>  #include
>>  #include
>>  #include
>> +#include
>> +#include
>>
>>  #include
>>
>> @@ -85,6 +87,7 @@ struct vhost_net_ubuf_ref {
>>  	struct vhost_virtqueue *vq;
>>  };
>>
>> +#define VHOST_RX_BATCH 64
>>  struct vhost_net_virtqueue {
>>  	struct vhost_virtqueue vq;
>>  	size_t vhost_hlen;
>> @@ -99,6 +102,10 @@ struct vhost_net_virtqueue {
>>  	/* Reference counting for outstanding ubufs.
>>  	 * Protected by vq mutex. Writers must also take device mutex. */
>>  	struct vhost_net_ubuf_ref *ubufs;
>> +	struct skb_array *rx_array;
>> +	void *rxq[VHOST_RX_BATCH];
>> +	int rt;
>> +	int rh;
>>  };
>>
>>  struct vhost_net {
>> @@ -201,6 +208,8 @@ static void vhost_net_vq_reset(struct vhost_net *n)
>>  		n->vqs[i].ubufs = NULL;
>>  		n->vqs[i].vhost_hlen = 0;
>>  		n->vqs[i].sock_hlen = 0;
>> +		n->vqs[i].rt = 0;
>> +		n->vqs[i].rh = 0;
>>  	}
>>
>>  }
>> @@ -503,13 +512,30 @@ static void handle_tx(struct vhost_net *net)
>>  	mutex_unlock(&vq->mutex);
>>  }
>>
>> -static int peek_head_len(struct sock *sk)
>> +static int peek_head_len_batched(struct vhost_net_virtqueue *rvq)
> Pls rename to say what it actually does: fetch skbs

Ok.
>
>> +{
>> +	if (rvq->rh != rvq->rt)
>> +		goto out;
>> +
>> +	rvq->rh = rvq->rt = 0;
>> +	rvq->rt = skb_array_consume_batched_bh(rvq->rx_array, rvq->rxq,
>> +					       VHOST_RX_BATCH);
> A comment explaining why it is -bh would be helpful.

Ok.

Thanks