From: Esben Haabendal <esben@geanix.com>
To: netdev@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Andrew Lunn, "David S. Miller", Michal Simek, Petr Štetiar
Subject: [PATCH net v2 4/4] net: ll_temac: Handle DMA halt condition caused by buffer underrun
Date: Fri, 21 Feb 2020 07:47:58 +0100
Message-Id: <9d7cb658d37577895b9755a434eacba36a62f580.1582267079.git.esben@geanix.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The SDMA engine used by TEMAC halts operation when it has finished
processing the last buffer descriptor in the buffer ring.
Unfortunately, no interrupt event is generated when this happens, so
we need to set up another mechanism to make sure DMA operation is
restarted when enough buffers have been added to the ring.
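For illustration (not part of the patch itself): the restart mechanism
hinges on knowing how many RX buffer descriptors the DMA engine can
still consume before it reaches the tail and halts. The new
ll_temac_recv_buffers_available() helper below computes this with
simple wrap-around arithmetic; the following user-space sketch shows
the same computation, where RX_BD_NUM and the example indices are
placeholders rather than the driver's actual values.

#include <stdio.h>

/* Placeholder ring size; the driver uses its own RX_BD_NUM constant. */
#define RX_BD_NUM 64

/* Descriptors from the current index (ci) up to and including the
 * tail, wrapped around the ring; mirrors the helper in the patch.
 */
static int buffers_available(int ci, int tail)
{
	int available = 1 + tail - ci;

	if (available <= 0)
		available += RX_BD_NUM;
	return available;
}

int main(void)
{
	printf("%d\n", buffers_available(3, 10));	/* no wrap: 8 */
	printf("%d\n", buffers_available(60, 1));	/* wrapped: 6 */
	return 0;
}

Whenever this count drops below coalesce_count_rx, fewer descriptors
remain than the interrupt-coalescing threshold requires, so no
IRQ_COAL event can fire; that is exactly the window in which the
patch schedules the restart work.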
Fixes: 92744989533c ("net: add Xilinx ll_temac device driver")
Signed-off-by: Esben Haabendal <esben@geanix.com>
---
 drivers/net/ethernet/xilinx/ll_temac.h      |  3 ++
 drivers/net/ethernet/xilinx/ll_temac_main.c | 58 +++++++++++++++++++--
 2 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/ll_temac.h b/drivers/net/ethernet/xilinx/ll_temac.h
index 99fe059e5c7f..53fb8141f1a6 100644
--- a/drivers/net/ethernet/xilinx/ll_temac.h
+++ b/drivers/net/ethernet/xilinx/ll_temac.h
@@ -380,6 +380,9 @@ struct temac_local {
 	/* DMA channel control setup */
 	u32 tx_chnl_ctrl;
 	u32 rx_chnl_ctrl;
+	u8 coalesce_count_rx;
+
+	struct delayed_work restart_work;
 };
 
 /* Wrappers for temac_ior()/temac_iow() function pointers above */
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
index 255207f2fd27..9461acec6f70 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
@@ -51,6 +51,7 @@
 #include <linux/ip.h>
 #include <linux/slab.h>
 #include <linux/interrupt.h>
+#include <linux/workqueue.h>
 #include <linux/dma-mapping.h>
 #include <linux/processor.h>
 #include <linux/platform_data/xilinx-ll-temac.h>
@@ -866,8 +867,11 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	skb_dma_addr = dma_map_single(ndev->dev.parent, skb->data,
 				      skb_headlen(skb), DMA_TO_DEVICE);
 	cur_p->len = cpu_to_be32(skb_headlen(skb));
-	if (WARN_ON_ONCE(dma_mapping_error(ndev->dev.parent, skb_dma_addr)))
-		return NETDEV_TX_BUSY;
+	if (WARN_ON_ONCE(dma_mapping_error(ndev->dev.parent, skb_dma_addr))) {
+		dev_kfree_skb_any(skb);
+		ndev->stats.tx_dropped++;
+		return NETDEV_TX_OK;
+	}
 	cur_p->phys = cpu_to_be32(skb_dma_addr);
 	ptr_to_txbd((void *)skb, cur_p);
 
@@ -897,7 +901,9 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 			dma_unmap_single(ndev->dev.parent,
 					 be32_to_cpu(cur_p->phys),
 					 skb_headlen(skb), DMA_TO_DEVICE);
-			return NETDEV_TX_BUSY;
+			dev_kfree_skb_any(skb);
+			ndev->stats.tx_dropped++;
+			return NETDEV_TX_OK;
 		}
 		cur_p->phys = cpu_to_be32(skb_dma_addr);
 		cur_p->len = cpu_to_be32(skb_frag_size(frag));
@@ -920,6 +926,17 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	return NETDEV_TX_OK;
 }
 
+static int ll_temac_recv_buffers_available(struct temac_local *lp)
+{
+	int available;
+
+	if (!lp->rx_skb[lp->rx_bd_ci])
+		return 0;
+	available = 1 + lp->rx_bd_tail - lp->rx_bd_ci;
+	if (available <= 0)
+		available += RX_BD_NUM;
+	return available;
+}
 
 static void ll_temac_recv(struct net_device *ndev)
 {
@@ -990,6 +1007,18 @@ static void ll_temac_recv(struct net_device *ndev)
 			lp->rx_bd_ci = 0;
 	} while (rx_bd != lp->rx_bd_tail);
 
+	/* DMA operations will halt when the last buffer descriptor is
+	 * processed (ie. the one pointed to by RX_TAILDESC_PTR).
+	 * When that happens, no more interrupt events will be
+	 * generated.  No IRQ_COAL or IRQ_DLY, and not even an
+	 * IRQ_ERR.  To avoid stalling, we schedule a delayed work
+	 * when there is a potential risk of that happening.  The work
+	 * will call this function, and thus re-schedule itself until
+	 * enough buffers are available again.
+	 */
+	if (ll_temac_recv_buffers_available(lp) < lp->coalesce_count_rx)
+		schedule_delayed_work(&lp->restart_work, HZ / 1000);
+
 	/* Allocate new buffers for those buffer descriptors that were
 	 * passed to network stack.  Note that GFP_ATOMIC allocations
 	 * can fail (e.g. when a larger burst of GFP_ATOMIC
@@ -1045,6 +1074,18 @@ static void ll_temac_recv(struct net_device *ndev)
 	spin_unlock_irqrestore(&lp->rx_lock, flags);
 }
 
+/* Function scheduled to ensure a restart in case of DMA halt
+ * condition caused by running out of buffer descriptors.
+ */
+static void ll_temac_restart_work_func(struct work_struct *work)
+{
+	struct temac_local *lp = container_of(work, struct temac_local,
+					      restart_work.work);
+	struct net_device *ndev = lp->ndev;
+
+	ll_temac_recv(ndev);
+}
+
 static irqreturn_t ll_temac_tx_irq(int irq, void *_ndev)
 {
 	struct net_device *ndev = _ndev;
@@ -1137,6 +1178,8 @@ static int temac_stop(struct net_device *ndev)
 
 	dev_dbg(&ndev->dev, "temac_close()\n");
 
+	cancel_delayed_work_sync(&lp->restart_work);
+
 	free_irq(lp->tx_irq, ndev);
 	free_irq(lp->rx_irq, ndev);
 
@@ -1258,6 +1301,7 @@ static int temac_probe(struct platform_device *pdev)
 	lp->dev = &pdev->dev;
 	lp->options = XTE_OPTION_DEFAULTS;
 	spin_lock_init(&lp->rx_lock);
+	INIT_DELAYED_WORK(&lp->restart_work, ll_temac_restart_work_func);
 
 	/* Setup mutex for synchronization of indirect register access */
 	if (pdata) {
@@ -1364,6 +1408,7 @@ static int temac_probe(struct platform_device *pdev)
 		 */
 		lp->tx_chnl_ctrl = 0x10220000;
 		lp->rx_chnl_ctrl = 0xff070000;
+		lp->coalesce_count_rx = 0x07;
 
 		/* Finished with the DMA node; drop the reference */
 		of_node_put(dma_np);
@@ -1395,11 +1440,14 @@ static int temac_probe(struct platform_device *pdev)
 				(pdata->tx_irq_count << 16);
 		else
 			lp->tx_chnl_ctrl = 0x10220000;
-		if (pdata->rx_irq_timeout || pdata->rx_irq_count)
+		if (pdata->rx_irq_timeout || pdata->rx_irq_count) {
 			lp->rx_chnl_ctrl = (pdata->rx_irq_timeout << 24) |
 				(pdata->rx_irq_count << 16);
-		else
+			lp->coalesce_count_rx = pdata->rx_irq_count;
+		} else {
 			lp->rx_chnl_ctrl = 0xff070000;
+			lp->coalesce_count_rx = 0x07;
+		}
 	}
 
 	/* Error handle returned DMA RX and TX interrupts */
--
2.25.0
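As background for the restart_work handling above: the patch uses the
standard kernel delayed-work pattern of INIT_DELAYED_WORK() at probe
time, schedule_delayed_work() when the stall risk is detected, and
cancel_delayed_work_sync() on shutdown. A minimal, self-contained
module-style sketch of that pattern follows; the demo_* names are
hypothetical and not part of the driver.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct delayed_work demo_work;

static void demo_work_func(struct work_struct *work)
{
	/* In ll_temac the work calls ll_temac_recv(), which re-schedules
	 * the work until enough RX descriptors are available again.
	 */
	pr_info("demo: delayed work ran\n");
}

static int __init demo_init(void)
{
	INIT_DELAYED_WORK(&demo_work, demo_work_func);
	/* HZ / 1000 is integer division: 0 for HZ < 1000, 1 for HZ = 1000,
	 * so the work runs after at most one jiffy, i.e. essentially as
	 * soon as the system workqueue gets to it.
	 */
	schedule_delayed_work(&demo_work, HZ / 1000);
	return 0;
}

static void __exit demo_exit(void)
{
	/* As in temac_stop(): ensure the work is neither pending nor
	 * running before the surrounding object goes away.
	 */
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");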