Date: Wed, 4 May 2011 23:53:13 +0300
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Rusty Russell, Carsten Otte, Christian Borntraeger, linux390@de.ibm.com,
	Martin Schwidefsky, Heiko Carstens, Shirley Ma, lguest@lists.ozlabs.org,
	virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
	linux-s390@vger.kernel.org, kvm@vger.kernel.org, Krishna Kumar,
	Tom Lendacky, steved@us.ibm.com, habanero@linux.vnet.ibm.com
Subject: [PATCH 18/18] virtio_net: limit xmit polling
Message-ID: <5964e2f3d6aac5cd48f467848eed6570517470ef.1304541919.git.mst@redhat.com>

The current code can introduce a lot of latency variation if there are
many pending buffers at the time we attempt to transmit a new one,
since it frees them all before queueing. This is bad for real-time
applications and can't be good for TCP either.

Instead, free up just enough to both clean up all buffers eventually
and to be able to xmit the next packet.

Signed-off-by: Michael S. Tsirkin
---
 drivers/net/virtio_net.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f33c92b..9982bd7 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -509,17 +509,23 @@ again:
 	return received;
 }
 
-static void free_old_xmit_skbs(struct virtnet_info *vi)
+static bool free_old_xmit_skbs(struct virtnet_info *vi, int capacity)
 {
 	struct sk_buff *skb;
 	unsigned int len;
+	bool c;
+	/* Beyond making room, free up to 2 skbs per one sent, so that
+	 * we'll get all of the memory back if they are used fast enough. */
+	int n = 2;
 
-	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
+	while ((!(c = virtqueue_get_capacity(vi->svq) >= capacity) || n-- > 0) &&
+	       (skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
 		pr_debug("Sent skb %p\n", skb);
 		vi->dev->stats.tx_bytes += skb->len;
 		vi->dev->stats.tx_packets++;
 		dev_kfree_skb_any(skb);
 	}
+	return c;
 }
 
 static int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb)
@@ -574,8 +580,8 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int capacity;
 
-	/* Free up any pending old buffers before queueing new ones. */
-	free_old_xmit_skbs(vi);
+	/* Free enough pending old buffers to enable queueing new ones. */
+	free_old_xmit_skbs(vi, 2+MAX_SKB_FRAGS);
 
 	/* Try to transmit */
 	capacity = xmit_skb(vi, skb);
@@ -609,9 +615,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		netif_stop_queue(dev);
 		if (unlikely(!virtqueue_enable_cb_delayed(vi->svq))) {
 			/* More just got used, free them then recheck. */
-			free_old_xmit_skbs(vi);
-			capacity = virtqueue_get_capacity(vi->svq);
-			if (capacity >= 2+MAX_SKB_FRAGS) {
+			if (likely(free_old_xmit_skbs(vi, 2+MAX_SKB_FRAGS))) {
 				netif_start_queue(dev);
 				virtqueue_disable_cb(vi->svq);
 			}
-- 
1.7.5.53.gc233e
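
For intuition, here is a minimal standalone model of the policy in the
patch above. It is plain userspace C with no virtio calls, and every
name in it is illustrative, not kernel API: it only demonstrates that
reclaiming at most 2 completed buffers per transmitted packet keeps
per-transmit work (and thus latency) bounded while still draining any
backlog, since each transmit adds 1 pending buffer and removes up to 2.

#include <stdio.h>

int main(void)
{
	int pending = 64;	/* completed buffers not yet reclaimed */
	int xmit;

	for (xmit = 1; pending > 0; xmit++) {
		/* one new packet goes out... */
		pending += 1;
		/* ...and at most 2 old completions are reclaimed */
		pending -= pending < 2 ? pending : 2;
		printf("xmit %2d: pending = %d\n", xmit, pending);
	}
	printf("backlog drained after %d transmits\n", xmit - 1);
	return 0;
}

Each transmit is a net -1 on the backlog, so N stale completions drain
after N transmits even though no single transmit frees more than 2
buffers; the in-kernel version additionally keeps freeing past that
budget whenever the ring still lacks room for the next packet.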