From: Niklas Cassel <niklas.cassel@axis.com>
To: Giuseppe Cavallaro, Alexandre Torgue
CC: Niklas Cassel, ,
Subject: [PATCH net] net: stmmac: free an skb first when there are no longer any descriptors using it
Date: Tue, 20 Jun 2017 14:32:41 +0200
Message-ID: <20170620123241.5677-1-niklas.cassel@axis.com>
X-Mailer: git-send-email 2.11.0

With the skb pointer stored in the first descriptor, stmmac_tx_clean can
get called at a moment when the IP has only cleared the own bit of the
first descriptor, and will then free the skb, even though there can be
several descriptors whose buffers point into the same skb.

By moving the skb pointer from the first descriptor to the last
descriptor, an skb gets freed only when the IP has cleared the own bit
of all the descriptors that are using that skb.

Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 6a1cb59728fe..3c73227ba04c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2830,7 +2830,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	tx_q->tx_skbuff_dma[first_entry].buf = des;
 	tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
-	tx_q->tx_skbuff[first_entry] = skb;
 
 	first->des0 = cpu_to_le32(des);
 
@@ -2864,6 +2863,14 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	tx_q->tx_skbuff_dma[tx_q->cur_tx].last_segment = true;
 
+	/* Only the last descriptor gets to point to the skb. */
+	tx_q->tx_skbuff[tx_q->cur_tx] = skb;
+
+	/* We've used all descriptors we need for this skb, however,
+	 * advance cur_tx so that it references a fresh descriptor.
+	 * ndo_start_xmit will fill this descriptor the next time it's
+	 * called and stmmac_tx_clean may clean up to this descriptor.
+	 */
 	tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, DMA_TX_SIZE);
 
 	if (unlikely(stmmac_tx_avail(priv, queue) <= (MAX_SKB_FRAGS + 1))) {
@@ -2996,8 +3003,6 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	first = desc;
 
-	tx_q->tx_skbuff[first_entry] = skb;
-
 	enh_desc = priv->plat->enh_desc;
 	/* To program the descriptors according to the size of the frame */
 	if (enh_desc)
@@ -3045,8 +3050,15 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 					 skb->len);
 	}
 
-	entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
+	/* Only the last descriptor gets to point to the skb. */
+	tx_q->tx_skbuff[entry] = skb;
 
+	/* We've used all descriptors we need for this skb, however,
+	 * advance cur_tx so that it references a fresh descriptor.
+	 * ndo_start_xmit will fill this descriptor the next time it's
+	 * called and stmmac_tx_clean may clean up to this descriptor.
+	 */
+	entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
 	tx_q->cur_tx = entry;
 
 	if (netif_msg_pktdata(priv)) {
-- 
2.11.0
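
To illustrate the race described in the commit message in isolation, here is a
minimal, self-contained user-space sketch in plain C (compile with e.g.
gcc -std=c99). It is not the stmmac driver code: struct model_desc, model_xmit()
and model_tx_clean() are invented stand-ins for the DMA descriptor ring,
ndo_start_xmit and stmmac_tx_clean. It only demonstrates the idea of the patch:
keep the skb pointer in the last descriptor of a frame, so a cleanup pass that
only sees the first descriptor returned by the hardware cannot free the skb
early.

/*
 * Simplified user-space model, NOT the actual stmmac driver code.
 * All names here (model_skb, model_desc, model_xmit, model_tx_clean)
 * are made up for illustration.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 8

struct model_skb {
	int id;
};

struct model_desc {
	bool own;              /* set while the DMA engine owns the descriptor */
	struct model_skb *skb; /* non-NULL only on the frame's last descriptor */
};

static struct model_desc ring[RING_SIZE];

/* Queue a frame that spans nr_desc descriptors starting at first. */
static void model_xmit(int first, int nr_desc, struct model_skb *skb)
{
	for (int i = 0; i < nr_desc; i++) {
		int entry = (first + i) % RING_SIZE;

		ring[entry].own = true;
		ring[entry].skb = NULL;
	}
	/* Only the last descriptor gets to point to the skb. */
	ring[(first + nr_desc - 1) % RING_SIZE].skb = skb;
}

/* Release skbs for descriptors the "hardware" has given back. */
static void model_tx_clean(void)
{
	for (int entry = 0; entry < RING_SIZE; entry++) {
		if (ring[entry].own)
			continue; /* still owned by the DMA engine */
		if (ring[entry].skb) {
			printf("freeing skb %d at entry %d\n",
			       ring[entry].skb->id, entry);
			free(ring[entry].skb);
			ring[entry].skb = NULL;
		}
	}
}

int main(void)
{
	struct model_skb *skb = malloc(sizeof(*skb));

	if (!skb)
		return 1;
	skb->id = 1;

	model_xmit(0, 3, skb); /* frame uses descriptors 0..2 */

	ring[0].own = false;   /* hardware has completed only the first one */
	model_tx_clean();      /* skb is NOT freed: entry 2 still holds it */

	ring[1].own = false;
	ring[2].own = false;
	model_tx_clean();      /* now the skb is freed, exactly once */

	return 0;
}

Under the old scheme (skb pointer stored at the first descriptor), the first
model_tx_clean() call in main() would already free the skb while entries 1 and
2, whose buffers belong to the same skb, are still owned by the hardware;
keeping the pointer only at the last entry closes exactly that window.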