From: Jason Wang
To: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Jason Wang, Hannes Frederic Sowa, "Michael S. Tsirkin",
	Neil Horman, Jeremy Eder, Marko Myllynen, Maxime Coquelin
Subject: [PATCH net-next] virtio-net: enable multiqueue by default
Date: Fri, 25 Nov 2016 12:37:26 +0800
Message-Id: <1480048646-17536-1-git-send-email-jasowang@redhat.com>

We use a single queue even if multiqueue is enabled, and let the admin
enable it through ethtool later. This was done to avoid a possible
regression (small packet TCP stream transmission), but it looks like
overkill since:

- single queue users can disable multiqueue when launching qemu
- it brings extra trouble for management, since an extra admin tool is
  needed in the guest to enable multiqueue
- multiqueue performs much better than a single queue in most cases

So this patch enables multiqueue by default: if #queues is less than or
equal to #vcpus, enable as many queue pairs as there are queues; if
#queues is greater than #vcpus, enable #vcpus queue pairs.

Cc: Hannes Frederic Sowa
Cc: Michael S. Tsirkin
Cc: Neil Horman
Cc: Jeremy Eder
Cc: Marko Myllynen
Cc: Maxime Coquelin
Signed-off-by: Jason Wang
---
 drivers/net/virtio_net.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d4ac7a6..a21d93a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1886,8 +1886,11 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (vi->any_header_sg)
 		dev->needed_headroom = vi->hdr_len;
 
-	/* Use single tx/rx queue pair as default */
-	vi->curr_queue_pairs = 1;
+	/* Enable multiqueue by default */
+	if (num_online_cpus() >= max_queue_pairs)
+		vi->curr_queue_pairs = max_queue_pairs;
+	else
+		vi->curr_queue_pairs = num_online_cpus();
 	vi->max_queue_pairs = max_queue_pairs;
 
 	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
@@ -1918,6 +1921,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 		goto free_unregister_netdev;
 	}
 
+	virtnet_set_affinity(vi);
+
 	/* Assume link up if device can't report link status,
 	   otherwise get link status from config. */
 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
-- 
2.7.4
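
Note: under the old default, the extra queue pairs had to be brought up
from inside the guest via ethtool's channel interface, e.g.
"ethtool -L eth0 combined 4" (interface name and count here are only
examples); this patch removes that manual step for the common case.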
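
For illustration only (not part of the patch): the if/else added in the
first hunk is just a clamp of the default at the number of online CPUs,
so it could equivalently be written with min_t() as:

	/* Default to one queue pair per online CPU, capped at the
	 * number of queue pairs the device advertises; equivalent
	 * to the if/else in the hunk above.
	 */
	vi->curr_queue_pairs = min_t(u16, max_queue_pairs,
				     num_online_cpus());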