Date: Mon, 27 Mar 2017 20:50:26 -0700 (PDT)
From: David Miller <davem@davemloft.net>
To: jonas.jensen@gmail.com
Cc: netdev@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] net: moxa: fix TX overrun memory leak
In-Reply-To: <1490617879-14014-1-git-send-email-jonas.jensen@gmail.com>
References: <1490617879-14014-1-git-send-email-jonas.jensen@gmail.com>

From: Jonas Jensen <jonas.jensen@gmail.com>
Date: Mon, 27 Mar 2017 14:31:19 +0200

> @@ -25,6 +25,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include "moxart_ether.h"
>
> @@ -297,6 +298,7 @@ static void moxart_tx_finished(struct net_device *ndev)
>  		tx_tail = TX_NEXT(tx_tail);
>  	}
>  	priv->tx_tail = tx_tail;
> +	netif_wake_queue(ndev);
>  }
>
>  static irqreturn_t moxart_mac_interrupt(int irq, void *dev_id)

Doing the wakeup unconditionally is very wasteful; you only need to do it once enough space has been made available.
Therefore the wakeup should look more like:

	if (netif_queue_stopped(ndev) &&
	    moxart_tx_queue_space(ndev) >= MOXART_TX_WAKEUP_THRESHOLD)
		netif_wake_queue(ndev);

Otherwise you're just going to flap back and forth under high load and get almost no packet batching at all, hurting performance.