From: Moritz Fischer <mdf@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: devicetree@vger.kernel.org, netdev@vger.kernel.org, robh+dt@kernel.org,
    mark.rutland@arm.com, andrew@lunn.ch, f.fainelli@gmail.com,
    davem@davemloft.net, Moritz Fischer <mdf@kernel.org>
Subject: [PATCH v3 2/2] net: ethernet: nixge: Add support for National Instruments XGE netdev
Date: Fri, 16 Feb 2018 09:00:33 -0800
Message-Id: <20180216170033.3834-2-mdf@kernel.org>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180216170033.3834-1-mdf@kernel.org>
References: <20180216170033.3834-1-mdf@kernel.org>

Add support for the National Instruments XGE 1/10G network device.

The MAC address is read from the EEPROM on the board via NVMEM.

Signed-off-by: Moritz Fischer <mdf@kernel.org>
---
Changes from v2:
- Implement recv side NAPI
- Improved error handling
- Implemented C45 writes
- Added ethtool callbacks & blink functionality
- Improved nixge_ctrl_poll_timeout() macro
- Removed dev_dbg() for mdio accesses
- Added businfo to ethtool drvinfo

Changes from v1:
- Added dependency on ARCH_ZYNQ (Kbuild)
- Removed unused variables
- Use of_phy_connect as suggested
- Removed masking of (un)supported modes
- Added #define for some constants
- Removed empty pm functions
- Reworked mac_address handling
- Made nixge_mdio_*() static (sparse)
- Removed driver version
- Addressed timeout loop
- Addressed return values on timeout
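For reviewers, a hypothetical device tree node wired up the way
nixge_probe() expects its properties (compatible string, named "rx"/"tx"
interrupts, phy-mode/phy-handle, and an NVMEM cell named "address" for
the MAC address). The unit address, interrupt numbers and phandle names
below are made up for illustration; see patch 1/2 for the actual binding:

    ethernet@40000000 {
            compatible = "ni,xge-enet-2.00";
            reg = <0x40000000 0x6000>;

            nvmem-cells = <&eth1_addr>;
            nvmem-cell-names = "address";

            interrupt-parent = <&intc>;
            interrupts = <0 29 4>, <0 30 4>;
            interrupt-names = "rx", "tx";

            phy-mode = "xgmii";
            phy-handle = <&ethernet_phy1>;

            ethernet_phy1: ethernet-phy@4 {
                    compatible = "ethernet-phy-ieee802.3-c45";
                    reg = <4>;
            };
    };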
---
 drivers/net/ethernet/Kconfig     |    1 +
 drivers/net/ethernet/Makefile    |    1 +
 drivers/net/ethernet/ni/Kconfig  |   27 +
 drivers/net/ethernet/ni/Makefile |    1 +
 drivers/net/ethernet/ni/nixge.c  | 1352 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 1382 insertions(+)
 create mode 100644 drivers/net/ethernet/ni/Kconfig
 create mode 100644 drivers/net/ethernet/ni/Makefile
 create mode 100644 drivers/net/ethernet/ni/nixge.c

diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
index b6cf4b6962f5..908218561fdd 100644
--- a/drivers/net/ethernet/Kconfig
+++ b/drivers/net/ethernet/Kconfig
@@ -129,6 +129,7 @@ config FEALNX

 source "drivers/net/ethernet/natsemi/Kconfig"
 source "drivers/net/ethernet/netronome/Kconfig"
+source "drivers/net/ethernet/ni/Kconfig"
 source "drivers/net/ethernet/8390/Kconfig"

 config NET_NETX
diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile
index 3cdf01e96e0b..d732e9522b76 100644
--- a/drivers/net/ethernet/Makefile
+++ b/drivers/net/ethernet/Makefile
@@ -61,6 +61,7 @@ obj-$(CONFIG_NET_VENDOR_MYRI) += myricom/
 obj-$(CONFIG_FEALNX) += fealnx.o
 obj-$(CONFIG_NET_VENDOR_NATSEMI) += natsemi/
 obj-$(CONFIG_NET_VENDOR_NETRONOME) += netronome/
+obj-$(CONFIG_NET_VENDOR_NI) += ni/
 obj-$(CONFIG_NET_NETX) += netx-eth.o
 obj-$(CONFIG_NET_VENDOR_NUVOTON) += nuvoton/
 obj-$(CONFIG_NET_VENDOR_NVIDIA) += nvidia/
diff --git a/drivers/net/ethernet/ni/Kconfig b/drivers/net/ethernet/ni/Kconfig
new file mode 100644
index 000000000000..cd30f7de16de
--- /dev/null
+++ b/drivers/net/ethernet/ni/Kconfig
@@ -0,0 +1,27 @@
+#
+# National Instruments network device configuration
+#
+
+config NET_VENDOR_NI
+	bool "National Instruments Devices"
+	default y
+	---help---
+	  If you have a network (Ethernet) device belonging to this class, say Y.
+
+	  Note that the answer to this question doesn't directly affect the
+	  kernel: saying N will just cause the configurator to skip all
+	  the questions about National Instruments devices.
+	  If you say Y, you will be asked for your specific device in the
+	  following questions.
+
+if NET_VENDOR_NI
+
+config NI_XGE_MANAGEMENT_ENET
+	tristate "National Instruments XGE management enet support"
+	depends on ARCH_ZYNQ
+	select PHYLIB
+	---help---
+	  Simple LAN device for debug or management purposes. Can
+	  support either 10G or 1G PHYs via SFP+ ports.
+
+endif
diff --git a/drivers/net/ethernet/ni/Makefile b/drivers/net/ethernet/ni/Makefile
new file mode 100644
index 000000000000..99c664651c51
--- /dev/null
+++ b/drivers/net/ethernet/ni/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_NI_XGE_MANAGEMENT_ENET) += nixge.o
diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
new file mode 100644
index 000000000000..9b255c23d7cd
--- /dev/null
+++ b/drivers/net/ethernet/ni/nixge.c
@@ -0,0 +1,1352 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2016-2017, National Instruments Corp.
+ *
+ * Author: Moritz Fischer <mdf@kernel.org>
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/of_address.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/of_platform.h>
+#include <linux/of_irq.h>
+#include <linux/skbuff.h>
+#include <linux/phy.h>
+#include <linux/mii.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/ethtool.h>
+#include <linux/iopoll.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/platform_device.h>
+
+#define TX_BD_NUM		64
+#define RX_BD_NUM		128
+
+/* Axi DMA Register definitions */
+
+#define XAXIDMA_TX_CR_OFFSET	0x00000000 /* Channel control */
+#define XAXIDMA_TX_SR_OFFSET	0x00000004 /* Status */
+#define XAXIDMA_TX_CDESC_OFFSET	0x00000008 /* Current descriptor pointer */
+#define XAXIDMA_TX_TDESC_OFFSET	0x00000010 /* Tail descriptor pointer */
+
+#define XAXIDMA_RX_CR_OFFSET	0x00000030 /* Channel control */
+#define XAXIDMA_RX_SR_OFFSET	0x00000034 /* Status */
+#define XAXIDMA_RX_CDESC_OFFSET	0x00000038 /* Current descriptor pointer */
+#define XAXIDMA_RX_TDESC_OFFSET	0x00000040 /* Tail descriptor pointer */
+
+#define XAXIDMA_CR_RUNSTOP_MASK	0x00000001 /* Start/stop DMA channel */
+#define XAXIDMA_CR_RESET_MASK	0x00000004 /* Reset DMA engine */
+
+#define XAXIDMA_BD_NDESC_OFFSET		0x00 /* Next descriptor pointer */
+#define XAXIDMA_BD_BUFA_OFFSET		0x08 /* Buffer address */
+#define XAXIDMA_BD_CTRL_LEN_OFFSET	0x18 /* Control/buffer length */
+#define XAXIDMA_BD_STS_OFFSET		0x1C /* Status */
+#define XAXIDMA_BD_USR0_OFFSET		0x20 /* User IP specific word0 */
+#define XAXIDMA_BD_USR1_OFFSET		0x24 /* User IP specific word1 */
+#define XAXIDMA_BD_USR2_OFFSET		0x28 /* User IP specific word2 */
+#define XAXIDMA_BD_USR3_OFFSET		0x2C /* User IP specific word3 */
+#define XAXIDMA_BD_USR4_OFFSET		0x30 /* User IP specific word4 */
+#define XAXIDMA_BD_ID_OFFSET		0x34 /* Sw ID */
+#define XAXIDMA_BD_HAS_STSCNTRL_OFFSET	0x38 /* Whether has stscntrl strm */
+#define XAXIDMA_BD_HAS_DRE_OFFSET	0x3C /* Whether has DRE */
+
+#define XAXIDMA_BD_HAS_DRE_SHIFT	8 /* Whether has DRE shift */
+#define XAXIDMA_BD_HAS_DRE_MASK		0xF00 /* Whether has DRE mask */
+#define XAXIDMA_BD_WORDLEN_MASK		0xFF /* Word length mask */
+
+#define XAXIDMA_BD_CTRL_LENGTH_MASK	0x007FFFFF /* Requested len */
+#define XAXIDMA_BD_CTRL_TXSOF_MASK	0x08000000 /* First tx packet */
+#define XAXIDMA_BD_CTRL_TXEOF_MASK	0x04000000 /* Last tx packet */
+#define XAXIDMA_BD_CTRL_ALL_MASK	0x0C000000 /* All control bits */
+
+#define XAXIDMA_DELAY_MASK		0xFF000000 /* Delay timeout counter */
+#define XAXIDMA_COALESCE_MASK		0x00FF0000 /* Coalesce counter */
+
+#define XAXIDMA_DELAY_SHIFT		24
+#define XAXIDMA_COALESCE_SHIFT		16
+
+#define XAXIDMA_IRQ_IOC_MASK		0x00001000 /* Completion intr */
+#define XAXIDMA_IRQ_DELAY_MASK		0x00002000 /* Delay interrupt */
+#define XAXIDMA_IRQ_ERROR_MASK		0x00004000 /* Error interrupt */
+#define XAXIDMA_IRQ_ALL_MASK		0x00007000 /* All interrupts */
+
+/* Default TX/RX Threshold and waitbound values for SGDMA mode */
+#define XAXIDMA_DFT_TX_THRESHOLD	24
+#define XAXIDMA_DFT_TX_WAITBOUND	254
+#define XAXIDMA_DFT_RX_THRESHOLD	24
+#define XAXIDMA_DFT_RX_WAITBOUND	254
+
+#define XAXIDMA_BD_STS_ACTUAL_LEN_MASK	0x007FFFFF /* Actual len */
+#define XAXIDMA_BD_STS_COMPLETE_MASK	0x80000000 /* Completed */
+#define XAXIDMA_BD_STS_DEC_ERR_MASK	0x40000000 /* Decode error */
+#define XAXIDMA_BD_STS_SLV_ERR_MASK	0x20000000 /* Slave error */
+#define XAXIDMA_BD_STS_INT_ERR_MASK	0x10000000 /* Internal err */
+#define XAXIDMA_BD_STS_ALL_ERR_MASK	0x70000000 /* All errors */
+#define XAXIDMA_BD_STS_RXSOF_MASK	0x08000000 /* First rx pkt */
+#define XAXIDMA_BD_STS_RXEOF_MASK	0x04000000 /* Last rx pkt */
+#define XAXIDMA_BD_STS_ALL_MASK		0xFC000000 /* All status bits */
+
+#define XAXIDMA_BD_MINIMUM_ALIGNMENT	0x40
+
+#define NIXGE_REG_CTRL_OFFSET	0x4000
+#define NIXGE_REG_INFO		0x00
+#define NIXGE_REG_MAC_CTL	0x04
+#define NIXGE_REG_PHY_CTL	0x08
+#define NIXGE_REG_LED_CTL	0x0c
+#define NIXGE_REG_MDIO_DATA	0x10
+#define NIXGE_REG_MDIO_ADDR	0x14
+#define NIXGE_REG_MDIO_OP	0x18
+#define NIXGE_REG_MDIO_CTRL	0x1c
+
+#define NIXGE_ID_LED_CTL_EN	BIT(0)
+#define NIXGE_ID_LED_CTL_VAL	BIT(1)
+
+#define NIXGE_MDIO_CLAUSE45	BIT(12)
+#define NIXGE_MDIO_CLAUSE22	0
+#define NIXGE_MDIO_OP(n)	(((n) & 0x3) << 10)
+#define NIXGE_MDIO_OP_ADDRESS	0
+#define NIXGE_MDIO_OP_WRITE	BIT(0)
+#define NIXGE_MDIO_OP_READ	(BIT(1) | BIT(0))
+#define MDIO_C22_WRITE		BIT(0)
+#define MDIO_C22_READ		BIT(1)
+#define MDIO_READ_POST		2
+#define NIXGE_MDIO_ADDR(n)	(((n) & 0x1f) << 5)
+#define NIXGE_MDIO_MMD(n)	(((n) & 0x1f) << 0)
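+
+/* Clause 45 accesses are two-phased: an ADDRESS operation first latches
+ * the register address through NIXGE_REG_MDIO_ADDR, then a separate
+ * READ/WRITE operation transfers the data. Writing 1 to
+ * NIXGE_REG_MDIO_CTRL kicks off an operation; the register reads back
+ * as 0 once it completes (see nixge_mdio_read()/nixge_mdio_write()).
+ */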
+
+#define NIXGE_MAX_PHY_ADDR	32
+
+#define NIXGE_REG_MAC_LSB	0x1000
+#define NIXGE_REG_MAC_MSB	0x1004
+
+/* Packet size info */
+#define NIXGE_HDR_SIZE		14 /* Size of Ethernet header */
+#define NIXGE_TRL_SIZE		4 /* Size of Ethernet trailer (FCS) */
+#define NIXGE_MTU		1500 /* Max MTU of an Ethernet frame */
+#define NIXGE_JUMBO_MTU		9000 /* Max MTU of a jumbo Ethernet frame */
+
+#define NIXGE_MAX_FRAME_SIZE	(NIXGE_MTU + NIXGE_HDR_SIZE + NIXGE_TRL_SIZE)
+#define NIXGE_MAX_JUMBO_FRAME_SIZE \
+	(NIXGE_JUMBO_MTU + NIXGE_HDR_SIZE + NIXGE_TRL_SIZE)
+
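+/* In-memory layout of a scatter-gather buffer descriptor. It mirrors the
+ * hardware view (cf. the XAXIDMA_BD_*_OFFSET defines above): 16 words of
+ * 32 bit each, i.e. 0x40 bytes, matching XAXIDMA_BD_MINIMUM_ALIGNMENT.
+ */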
+struct nixge_hw_dma_bd {
+	u32 next;	/* Physical address of next buffer descriptor */
+	u32 reserved1;
+	u32 phys;
+	u32 reserved2;
+	u32 reserved3;
+	u32 reserved4;
+	u32 cntrl;
+	u32 status;
+	u32 app0;
+	u32 app1;	/* TX start << 16 | insert */
+	u32 app2;
+	u32 app3;
+	u32 app4;
+	u32 sw_id_offset;
+	u32 reserved5;
+	u32 reserved6;
+};
+
+struct nixge_tx_skb {
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+	size_t size;
+	bool mapped_as_page;
+};
+
+struct nixge_priv {
+	struct net_device *ndev;
+	struct napi_struct napi;
+	struct device *dev;
+
+	/* Connection to PHY device */
+	struct device_node *phy_node;
+	phy_interface_t phy_mode;
+
+	int link;
+	unsigned int speed;
+	unsigned int duplex;
+
+	/* MDIO bus data */
+	struct mii_bus *mii_bus;	/* MII bus reference */
+
+	/* IO registers, dma functions and IRQs */
+	void __iomem *ctrl_regs;
+	void __iomem *dma_regs;
+
+	struct tasklet_struct dma_err_tasklet;
+
+	int tx_irq;
+	int rx_irq;
+	u32 last_link;
+
+	/* Buffer descriptors */
+	struct nixge_hw_dma_bd *tx_bd_v;
+	struct nixge_tx_skb *tx_skb;
+	dma_addr_t tx_bd_p;
+
+	struct nixge_hw_dma_bd *rx_bd_v;
+	dma_addr_t rx_bd_p;
+	u32 tx_bd_ci;
+	u32 tx_bd_tail;
+	u32 rx_bd_ci;
+
+	u32 max_frm_size;
+
+	u32 coalesce_count_rx;
+	u32 coalesce_count_tx;
+};
+
+static void nixge_dma_write_reg(struct nixge_priv *priv, off_t offset, u32 val)
+{
+	writel(val, priv->dma_regs + offset);
+}
+
+static u32 nixge_dma_read_reg(const struct nixge_priv *priv, off_t offset)
+{
+	return readl(priv->dma_regs + offset);
+}
+
+static void nixge_ctrl_write_reg(struct nixge_priv *priv, off_t offset, u32 val)
+{
+	writel(val, priv->ctrl_regs + offset);
+}
+
+static u32 nixge_ctrl_read_reg(struct nixge_priv *priv, off_t offset)
+{
+	return readl(priv->ctrl_regs + offset);
+}
+
+#define nixge_ctrl_poll_timeout(priv, addr, val, cond, sleep_us, timeout_us) \
+	readl_poll_timeout((priv)->ctrl_regs + (addr), (val), (cond), \
+			   (sleep_us), (timeout_us))
+
+#define nixge_dma_poll_timeout(priv, addr, val, cond, sleep_us, timeout_us) \
+	readl_poll_timeout((priv)->dma_regs + (addr), (val), (cond), \
+			   (sleep_us), (timeout_us))
+
+static void nixge_hw_dma_bd_release(struct net_device *ndev)
+{
+	int i;
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	for (i = 0; i < RX_BD_NUM; i++) {
+		dma_unmap_single(ndev->dev.parent, priv->rx_bd_v[i].phys,
+				 NIXGE_MAX_JUMBO_FRAME_SIZE, DMA_FROM_DEVICE);
+		dev_kfree_skb((struct sk_buff *)
+			      (priv->rx_bd_v[i].sw_id_offset));
+	}
+
+	if (priv->rx_bd_v)
+		dma_free_coherent(ndev->dev.parent,
+				  sizeof(*priv->rx_bd_v) * RX_BD_NUM,
+				  priv->rx_bd_v,
+				  priv->rx_bd_p);
+
+	if (priv->tx_skb)
+		devm_kfree(ndev->dev.parent, priv->tx_skb);
+
+	if (priv->tx_bd_v)
+		dma_free_coherent(ndev->dev.parent,
+				  sizeof(*priv->tx_bd_v) * TX_BD_NUM,
+				  priv->tx_bd_v,
+				  priv->tx_bd_p);
+}
+
+static int nixge_hw_dma_bd_init(struct net_device *ndev)
+{
+	u32 cr;
+	int i;
+	struct sk_buff *skb;
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	/* Reset the indexes which are used for accessing the BDs */
+	priv->tx_bd_ci = 0;
+	priv->tx_bd_tail = 0;
+	priv->rx_bd_ci = 0;
+
+	/* Allocate the Tx and Rx buffer descriptors. */
+	priv->tx_bd_v = dma_zalloc_coherent(ndev->dev.parent,
+					    sizeof(*priv->tx_bd_v) * TX_BD_NUM,
+					    &priv->tx_bd_p, GFP_KERNEL);
+	if (!priv->tx_bd_v)
+		goto out;
+
+	priv->tx_skb = devm_kzalloc(ndev->dev.parent,
+				    sizeof(*priv->tx_skb) * TX_BD_NUM,
+				    GFP_KERNEL);
+	if (!priv->tx_skb)
+		goto out;
+
+	priv->rx_bd_v = dma_zalloc_coherent(ndev->dev.parent,
+					    sizeof(*priv->rx_bd_v) * RX_BD_NUM,
+					    &priv->rx_bd_p, GFP_KERNEL);
+	if (!priv->rx_bd_v)
+		goto out;
+
+	for (i = 0; i < TX_BD_NUM; i++) {
+		priv->tx_bd_v[i].next = priv->tx_bd_p +
+					sizeof(*priv->tx_bd_v) *
+					((i + 1) % TX_BD_NUM);
+	}
+
+	for (i = 0; i < RX_BD_NUM; i++) {
+		priv->rx_bd_v[i].next = priv->rx_bd_p +
+					sizeof(*priv->rx_bd_v) *
+					((i + 1) % RX_BD_NUM);
+
+		skb = netdev_alloc_skb_ip_align(ndev,
+						NIXGE_MAX_JUMBO_FRAME_SIZE);
+		if (!skb)
+			goto out;
+
+		priv->rx_bd_v[i].sw_id_offset = (u32)skb;
+		priv->rx_bd_v[i].phys =
+			dma_map_single(ndev->dev.parent,
+				       skb->data,
+				       NIXGE_MAX_JUMBO_FRAME_SIZE,
+				       DMA_FROM_DEVICE);
+		priv->rx_bd_v[i].cntrl = NIXGE_MAX_JUMBO_FRAME_SIZE;
+	}
+
+	/* Start updating the Rx channel control register */
+	cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+	/* Update the interrupt coalesce count */
+	cr = ((cr & ~XAXIDMA_COALESCE_MASK) |
+	      ((priv->coalesce_count_rx) << XAXIDMA_COALESCE_SHIFT));
+	/* Update the delay timer count */
+	cr = ((cr & ~XAXIDMA_DELAY_MASK) |
+	      (XAXIDMA_DFT_RX_WAITBOUND << XAXIDMA_DELAY_SHIFT));
+	/* Enable coalesce, delay timer and error interrupts */
+	cr |= XAXIDMA_IRQ_ALL_MASK;
+	/* Write to the Rx channel control register */
+	nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
+
+	/* Start updating the Tx channel control register */
+	cr = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+	/* Update the interrupt coalesce count */
+	cr = ((cr & ~XAXIDMA_COALESCE_MASK) |
+	      ((priv->coalesce_count_tx) << XAXIDMA_COALESCE_SHIFT));
+	/* Update the delay timer count */
+	cr = ((cr & ~XAXIDMA_DELAY_MASK) |
+	      (XAXIDMA_DFT_TX_WAITBOUND << XAXIDMA_DELAY_SHIFT));
+	/* Enable coalesce, delay timer and error interrupts */
+	cr |= XAXIDMA_IRQ_ALL_MASK;
+	/* Write to the Tx channel control register */
+	nixge_dma_write_reg(priv, XAXIDMA_TX_CR_OFFSET, cr);
+
+	/* Populate the tail pointer and bring the Rx Axi DMA engine out of
+	 * halted state. This will make the Rx side ready for reception.
+	 */
+	nixge_dma_write_reg(priv, XAXIDMA_RX_CDESC_OFFSET, priv->rx_bd_p);
+	cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+	nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET,
+			    cr | XAXIDMA_CR_RUNSTOP_MASK);
+	nixge_dma_write_reg(priv, XAXIDMA_RX_TDESC_OFFSET, priv->rx_bd_p +
+			    (sizeof(*priv->rx_bd_v) * (RX_BD_NUM - 1)));
+
+	/* Write to the RS (Run-stop) bit in the Tx channel control register.
+	 * Tx channel is now ready to run. But only after we write to the
+	 * tail pointer register will the Tx channel start transmitting.
+	 */
+	nixge_dma_write_reg(priv, XAXIDMA_TX_CDESC_OFFSET, priv->tx_bd_p);
+	cr = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+	nixge_dma_write_reg(priv, XAXIDMA_TX_CR_OFFSET,
+			    cr | XAXIDMA_CR_RUNSTOP_MASK);
+
+	return 0;
+out:
+	nixge_hw_dma_bd_release(ndev);
+	return -ENOMEM;
+}
+
+static void __nixge_device_reset(struct nixge_priv *priv, off_t offset)
+{
+	u32 status;
+	int err;
+
+	/* Reset Axi DMA. This would reset NIXGE Ethernet core as well.
+	 * The reset process of Axi DMA takes a while to complete as all
+	 * pending commands/transfers will be flushed or completed during
+	 * this reset process.
+	 */
+	nixge_dma_write_reg(priv, offset, XAXIDMA_CR_RESET_MASK);
+	err = nixge_dma_poll_timeout(priv, offset, status,
+				     !(status & XAXIDMA_CR_RESET_MASK), 10,
+				     1000);
+	if (err)
+		netdev_err(priv->ndev, "%s: DMA reset timeout!\n", __func__);
+}
+
+static void nixge_device_reset(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	__nixge_device_reset(priv, XAXIDMA_TX_CR_OFFSET);
+	__nixge_device_reset(priv, XAXIDMA_RX_CR_OFFSET);
+
+	priv->max_frm_size = NIXGE_MAX_JUMBO_FRAME_SIZE;
+
+	if (ndev->mtu > NIXGE_MTU && ndev->mtu <= NIXGE_JUMBO_MTU)
+		priv->max_frm_size = ndev->mtu + NIXGE_HDR_SIZE +
+				     NIXGE_TRL_SIZE;
+
+	if (nixge_hw_dma_bd_init(ndev))
+		netdev_err(ndev, "%s: descriptor allocation failed\n",
+			   __func__);
+
+	netif_trans_update(ndev);
+}
+
+static void nixge_handle_link_change(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	struct phy_device *phydev = ndev->phydev;
+
+	if (phydev->link != priv->link || phydev->speed != priv->speed ||
+	    phydev->duplex != priv->duplex) {
+		priv->link = phydev->link;
+		priv->speed = phydev->speed;
+		priv->duplex = phydev->duplex;
+		phy_print_status(phydev);
+	}
+}
+
+static void nixge_tx_skb_unmap(struct nixge_priv *priv,
+			       struct nixge_tx_skb *tx_skb)
+{
+	if (tx_skb->mapping) {
+		if (tx_skb->mapped_as_page)
+			dma_unmap_page(priv->ndev->dev.parent, tx_skb->mapping,
+				       tx_skb->size, DMA_TO_DEVICE);
+		else
+			dma_unmap_single(priv->ndev->dev.parent,
+					 tx_skb->mapping,
+					 tx_skb->size, DMA_TO_DEVICE);
+		tx_skb->mapping = 0;
+	}
+
+	if (tx_skb->skb) {
+		dev_kfree_skb_any(tx_skb->skb);
+		tx_skb->skb = NULL;
+	}
+}
+
+static void nixge_start_xmit_done(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	struct nixge_hw_dma_bd *cur_p;
+	struct nixge_tx_skb *tx_skb;
+	unsigned int status = 0;
+	u32 packets = 0;
+	u32 size = 0;
+
+	cur_p = &priv->tx_bd_v[priv->tx_bd_ci];
+	tx_skb = &priv->tx_skb[priv->tx_bd_ci];
+
+	status = cur_p->status;
+
+	while (status & XAXIDMA_BD_STS_COMPLETE_MASK) {
+		nixge_tx_skb_unmap(priv, tx_skb);
+		cur_p->status = 0;
+
+		size += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+		packets++;
+
+		++priv->tx_bd_ci;
+		priv->tx_bd_ci %= TX_BD_NUM;
+		cur_p = &priv->tx_bd_v[priv->tx_bd_ci];
+		tx_skb = &priv->tx_skb[priv->tx_bd_ci];
+		status = cur_p->status;
+	}
+
+	ndev->stats.tx_packets += packets;
+	ndev->stats.tx_bytes += size;
+
+	if (packets)
+		netif_wake_queue(ndev);
+}
+
+static int nixge_check_tx_bd_space(struct nixge_priv *priv,
+				   int num_frag)
+{
+	struct nixge_hw_dma_bd *cur_p;
+
+	cur_p = &priv->tx_bd_v[(priv->tx_bd_tail + num_frag) % TX_BD_NUM];
+	if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK)
+		return NETDEV_TX_BUSY;
+	return 0;
+}
+
+static int nixge_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	struct nixge_hw_dma_bd *cur_p;
+	struct nixge_tx_skb *tx_skb;
+	dma_addr_t tail_p;
+	skb_frag_t *frag;
+	u32 num_frag;
+	u32 ii;
+
+	num_frag = skb_shinfo(skb)->nr_frags;
+	cur_p = &priv->tx_bd_v[priv->tx_bd_tail];
+	tx_skb = &priv->tx_skb[priv->tx_bd_tail];
+
+	if (nixge_check_tx_bd_space(priv, num_frag)) {
+		if (!netif_queue_stopped(ndev))
+			netif_stop_queue(ndev);
+		return NETDEV_TX_OK;
+	}
+
+	cur_p->phys = dma_map_single(ndev->dev.parent, skb->data,
+				     skb_headlen(skb), DMA_TO_DEVICE);
+	if (dma_mapping_error(ndev->dev.parent, cur_p->phys))
+		goto drop;
+
+	cur_p->cntrl = skb_headlen(skb) | XAXIDMA_BD_CTRL_TXSOF_MASK;
+
+	tx_skb->skb = NULL;
+	tx_skb->mapping = cur_p->phys;
+	tx_skb->size = skb_headlen(skb);
+	tx_skb->mapped_as_page = false;
+
+	for (ii = 0; ii < num_frag; ii++) {
+		++priv->tx_bd_tail;
+		priv->tx_bd_tail %= TX_BD_NUM;
+		cur_p = &priv->tx_bd_v[priv->tx_bd_tail];
+		tx_skb = &priv->tx_skb[priv->tx_bd_tail];
+		frag = &skb_shinfo(skb)->frags[ii];
+
+		cur_p->phys = skb_frag_dma_map(ndev->dev.parent, frag, 0,
+					       skb_frag_size(frag),
+					       DMA_TO_DEVICE);
+		if (dma_mapping_error(ndev->dev.parent, cur_p->phys))
+			goto frag_err;
+
+		cur_p->cntrl = skb_frag_size(frag);
+
+		tx_skb->skb = NULL;
+		tx_skb->mapping = cur_p->phys;
+		tx_skb->size = skb_frag_size(frag);
+		tx_skb->mapped_as_page = true;
+	}
+
+	/* last buffer of the frame */
+	tx_skb->skb = skb;
+
+	cur_p->cntrl |= XAXIDMA_BD_CTRL_TXEOF_MASK;
+	cur_p->app4 = (unsigned long)skb;
+
+	tail_p = priv->tx_bd_p + sizeof(*priv->tx_bd_v) * priv->tx_bd_tail;
+	/* Start the transfer */
+	nixge_dma_write_reg(priv, XAXIDMA_TX_TDESC_OFFSET, tail_p);
+	++priv->tx_bd_tail;
+	priv->tx_bd_tail %= TX_BD_NUM;
+
+	return NETDEV_TX_OK;
+frag_err:
+	for (; ii > 0; ii--) {
+		if (priv->tx_bd_tail)
+			priv->tx_bd_tail--;
+		else
+			priv->tx_bd_tail = TX_BD_NUM - 1;
+
+		tx_skb = &priv->tx_skb[priv->tx_bd_tail];
+		nixge_tx_skb_unmap(priv, tx_skb);
+
+		cur_p = &priv->tx_bd_v[priv->tx_bd_tail];
+		cur_p->status = 0;
+	}
+	dma_unmap_single(priv->ndev->dev.parent,
+			 tx_skb->mapping,
+			 tx_skb->size, DMA_TO_DEVICE);
+drop:
+	ndev->stats.tx_dropped++;
+	return NETDEV_TX_OK;
+}
+
+static int nixge_recv(struct net_device *ndev, int budget)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	struct sk_buff *skb, *new_skb;
+	struct nixge_hw_dma_bd *cur_p;
+	dma_addr_t tail_p = 0;
+	u32 packets = 0;
+	u32 length = 0;
+	u32 size = 0;
+
+	cur_p = &priv->rx_bd_v[priv->rx_bd_ci];
+
+	while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK &&
+		budget > packets)) {
+		tail_p = priv->rx_bd_p + sizeof(*priv->rx_bd_v) *
+			 priv->rx_bd_ci;
+
+		skb = (struct sk_buff *)(cur_p->sw_id_offset);
+
+		length = cur_p->status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK;
+		if (length > NIXGE_MAX_JUMBO_FRAME_SIZE)
+			length = NIXGE_MAX_JUMBO_FRAME_SIZE;
+
+		dma_unmap_single(ndev->dev.parent, cur_p->phys,
+				 NIXGE_MAX_JUMBO_FRAME_SIZE,
+				 DMA_FROM_DEVICE);
+
+		skb_put(skb, length);
+
+		skb->protocol = eth_type_trans(skb, ndev);
+		skb_checksum_none_assert(skb);
+
+		/* For now mark them as CHECKSUM_NONE since
+		 * we don't have offload capabilities
+		 */
+		skb->ip_summed = CHECKSUM_NONE;
+
+		napi_gro_receive(&priv->napi, skb);
+
+		size += length;
+		packets++;
+
+		new_skb = netdev_alloc_skb_ip_align(ndev,
+						    NIXGE_MAX_JUMBO_FRAME_SIZE);
+		if (!new_skb)
+			return packets;
+
+		cur_p->phys = dma_map_single(ndev->dev.parent, new_skb->data,
+					     NIXGE_MAX_JUMBO_FRAME_SIZE,
+					     DMA_FROM_DEVICE);
+		if (dma_mapping_error(ndev->dev.parent, cur_p->phys)) {
+			/* FIXME: bail out and clean up */
+			netdev_err(ndev, "Failed to map ...\n");
+		}
+		cur_p->cntrl = NIXGE_MAX_JUMBO_FRAME_SIZE;
+		cur_p->status = 0;
+		cur_p->sw_id_offset = (u32)new_skb;
+
+		++priv->rx_bd_ci;
+		priv->rx_bd_ci %= RX_BD_NUM;
+		cur_p = &priv->rx_bd_v[priv->rx_bd_ci];
+	}
+
+	ndev->stats.rx_packets += packets;
+	ndev->stats.rx_bytes += size;
+
+	if (tail_p)
+		nixge_dma_write_reg(priv, XAXIDMA_RX_TDESC_OFFSET, tail_p);
+
+	return packets;
+}
+
+static int nixge_poll(struct napi_struct *napi, int budget)
+{
+	struct nixge_priv *priv = container_of(napi, struct nixge_priv, napi);
+	int work_done;
+	u32 status, cr;
+
+	work_done = nixge_recv(priv->ndev, budget);
+	if (work_done < budget) {
+		napi_complete_done(napi, work_done);
+		status = nixge_dma_read_reg(priv, XAXIDMA_RX_SR_OFFSET);
+
+		if (status & (XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK)) {
+			/* If there's more, reschedule, but clear */
+			nixge_dma_write_reg(priv, XAXIDMA_RX_SR_OFFSET, status);
+			napi_reschedule(napi);
+		} else {
+			/* if not, turn on RX IRQs again ... */
+			cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+			cr |= (XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+			nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
+		}
+	}
+
+	return work_done;
+}
+
+static irqreturn_t nixge_tx_irq(int irq, void *_ndev)
+{
+	struct nixge_priv *priv = netdev_priv(_ndev);
+	struct net_device *ndev = _ndev;
+	unsigned int status;
+	u32 cr;
+
+	status = nixge_dma_read_reg(priv, XAXIDMA_TX_SR_OFFSET);
+	if (status & (XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK)) {
+		nixge_dma_write_reg(priv, XAXIDMA_TX_SR_OFFSET, status);
+		nixge_start_xmit_done(priv->ndev);
+		goto out;
+	}
+	if (!(status & XAXIDMA_IRQ_ALL_MASK)) {
+		netdev_err(ndev, "No interrupts asserted in Tx path\n");
+		return IRQ_NONE;
+	}
+	if (status & XAXIDMA_IRQ_ERROR_MASK) {
+		netdev_err(ndev, "DMA Tx error 0x%x\n", status);
+		netdev_err(ndev, "Current BD is at: 0x%x\n",
+			   (priv->tx_bd_v[priv->tx_bd_ci]).phys);
+
+		cr = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+		/* Disable coalesce, delay timer and error interrupts */
+		cr &= (~XAXIDMA_IRQ_ALL_MASK);
+		/* Write to the Tx channel control register */
+		nixge_dma_write_reg(priv, XAXIDMA_TX_CR_OFFSET, cr);
+
+		cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+		/* Disable coalesce, delay timer and error interrupts */
+		cr &= (~XAXIDMA_IRQ_ALL_MASK);
+		/* Write to the Rx channel control register */
+		nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
+
+		tasklet_schedule(&priv->dma_err_tasklet);
+		nixge_dma_write_reg(priv, XAXIDMA_TX_SR_OFFSET, status);
+	}
+out:
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t nixge_rx_irq(int irq, void *_ndev)
+{
+	struct nixge_priv *priv = netdev_priv(_ndev);
+	struct net_device *ndev = _ndev;
+	unsigned int status;
+	u32 cr;
+
+	status = nixge_dma_read_reg(priv, XAXIDMA_RX_SR_OFFSET);
+	if (status & (XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK)) {
+		/* Turn off IRQs, NAPI takes over from here */
+		nixge_dma_write_reg(priv, XAXIDMA_RX_SR_OFFSET, status);
+		cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+		cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
+		nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
+
+		if (napi_schedule_prep(&priv->napi))
+			__napi_schedule(&priv->napi);
+		goto out;
+	}
+	if (!(status & XAXIDMA_IRQ_ALL_MASK)) {
+		netdev_err(ndev, "No interrupts asserted in Rx path\n");
+		return IRQ_NONE;
+	}
+	if (status & XAXIDMA_IRQ_ERROR_MASK) {
+		netdev_err(ndev, "DMA Rx error 0x%x\n", status);
+		netdev_err(ndev, "Current BD is at: 0x%x\n",
+			   (priv->rx_bd_v[priv->rx_bd_ci]).phys);
+
+		cr = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+		/* Disable coalesce, delay timer and error interrupts */
+		cr &= (~XAXIDMA_IRQ_ALL_MASK);
+		/* Finally write to the Tx channel control register */
+		nixge_dma_write_reg(priv, XAXIDMA_TX_CR_OFFSET, cr);
+
+		cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+		/* Disable coalesce, delay timer and error interrupts */
+		cr &= (~XAXIDMA_IRQ_ALL_MASK);
+		/* write to the Rx channel control register */
+		nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
+
+		tasklet_schedule(&priv->dma_err_tasklet);
+		nixge_dma_write_reg(priv, XAXIDMA_RX_SR_OFFSET, status);
+	}
+out:
+	return IRQ_HANDLED;
+}
+
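+/* Error recovery: reset both DMA channels, unmap and reclaim any in-flight
+ * TX skbs, clear the descriptor rings and restart both engines. Runs in
+ * tasklet context, scheduled from the TX/RX IRQ handlers on DMA errors.
+ */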
+static void nixge_dma_err_handler(unsigned long data)
+{
+	struct nixge_priv *lp = (struct nixge_priv *)data;
+	struct nixge_hw_dma_bd *cur_p;
+	struct nixge_tx_skb *tx_skb;
+	u32 cr, i;
+
+	__nixge_device_reset(lp, XAXIDMA_TX_CR_OFFSET);
+	__nixge_device_reset(lp, XAXIDMA_RX_CR_OFFSET);
+
+	for (i = 0; i < TX_BD_NUM; i++) {
+		cur_p = &lp->tx_bd_v[i];
+		tx_skb = &lp->tx_skb[i];
+		nixge_tx_skb_unmap(lp, tx_skb);
+
+		cur_p->phys = 0;
+		cur_p->cntrl = 0;
+		cur_p->status = 0;
+		cur_p->app0 = 0;
+		cur_p->app1 = 0;
+		cur_p->app2 = 0;
+		cur_p->app3 = 0;
+		cur_p->app4 = 0;
+		cur_p->sw_id_offset = 0;
+	}
+
+	for (i = 0; i < RX_BD_NUM; i++) {
+		cur_p = &lp->rx_bd_v[i];
+		cur_p->status = 0;
+		cur_p->app0 = 0;
+		cur_p->app1 = 0;
+		cur_p->app2 = 0;
+		cur_p->app3 = 0;
+		cur_p->app4 = 0;
+	}
+
+	lp->tx_bd_ci = 0;
+	lp->tx_bd_tail = 0;
+	lp->rx_bd_ci = 0;
+
+	/* Start updating the Rx channel control register */
+	cr = nixge_dma_read_reg(lp, XAXIDMA_RX_CR_OFFSET);
+	/* Update the interrupt coalesce count */
+	cr = ((cr & ~XAXIDMA_COALESCE_MASK) |
+	      (XAXIDMA_DFT_RX_THRESHOLD << XAXIDMA_COALESCE_SHIFT));
+	/* Update the delay timer count */
+	cr = ((cr & ~XAXIDMA_DELAY_MASK) |
+	      (XAXIDMA_DFT_RX_WAITBOUND << XAXIDMA_DELAY_SHIFT));
+	/* Enable coalesce, delay timer and error interrupts */
+	cr |= XAXIDMA_IRQ_ALL_MASK;
+	/* Finally write to the Rx channel control register */
+	nixge_dma_write_reg(lp, XAXIDMA_RX_CR_OFFSET, cr);
+
+	/* Start updating the Tx channel control register */
+	cr = nixge_dma_read_reg(lp, XAXIDMA_TX_CR_OFFSET);
+	/* Update the interrupt coalesce count */
+	cr = ((cr & ~XAXIDMA_COALESCE_MASK) |
+	      (XAXIDMA_DFT_TX_THRESHOLD << XAXIDMA_COALESCE_SHIFT));
+	/* Update the delay timer count */
+	cr = ((cr & ~XAXIDMA_DELAY_MASK) |
+	      (XAXIDMA_DFT_TX_WAITBOUND << XAXIDMA_DELAY_SHIFT));
+	/* Enable coalesce, delay timer and error interrupts */
+	cr |= XAXIDMA_IRQ_ALL_MASK;
+	/* Finally write to the Tx channel control register */
+	nixge_dma_write_reg(lp, XAXIDMA_TX_CR_OFFSET, cr);
+
+	/* Populate the tail pointer and bring the Rx Axi DMA engine out of
+	 * halted state. This will make the Rx side ready for reception.
+	 */
+	nixge_dma_write_reg(lp, XAXIDMA_RX_CDESC_OFFSET, lp->rx_bd_p);
+	cr = nixge_dma_read_reg(lp, XAXIDMA_RX_CR_OFFSET);
+	nixge_dma_write_reg(lp, XAXIDMA_RX_CR_OFFSET,
+			    cr | XAXIDMA_CR_RUNSTOP_MASK);
+	nixge_dma_write_reg(lp, XAXIDMA_RX_TDESC_OFFSET, lp->rx_bd_p +
+			    (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
+
+	/* Write to the RS (Run-stop) bit in the Tx channel control register.
+	 * Tx channel is now ready to run. But only after we write to the
+	 * tail pointer register will the Tx channel start transmitting.
+	 */
+	nixge_dma_write_reg(lp, XAXIDMA_TX_CDESC_OFFSET, lp->tx_bd_p);
+	cr = nixge_dma_read_reg(lp, XAXIDMA_TX_CR_OFFSET);
+	nixge_dma_write_reg(lp, XAXIDMA_TX_CR_OFFSET,
+			    cr | XAXIDMA_CR_RUNSTOP_MASK);
+}
+
+static int nixge_open(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	struct phy_device *phy;
+	int ret;
+
+	nixge_device_reset(ndev);
+
+	phy = of_phy_connect(ndev, priv->phy_node,
+			     &nixge_handle_link_change, 0, priv->phy_mode);
+	if (!phy)
+		return -ENODEV;
+
+	phy_start(phy);
+
+	/* Enable tasklets for Axi DMA error handling */
+	tasklet_init(&priv->dma_err_tasklet, nixge_dma_err_handler,
+		     (unsigned long)priv);
+
+	napi_enable(&priv->napi);
+
+	/* Enable interrupts for Axi DMA Tx */
+	ret = request_irq(priv->tx_irq, nixge_tx_irq, 0, ndev->name, ndev);
+	if (ret)
+		goto err_tx_irq;
+	/* Enable interrupts for Axi DMA Rx */
+	ret = request_irq(priv->rx_irq, nixge_rx_irq, 0, ndev->name, ndev);
+	if (ret)
+		goto err_rx_irq;
+
+	netif_start_queue(ndev);
+
+	return 0;
+
+err_rx_irq:
+	free_irq(priv->tx_irq, ndev);
+err_tx_irq:
+	tasklet_kill(&priv->dma_err_tasklet);
+	netdev_err(ndev, "request_irq() failed\n");
+	return ret;
+}
+
+static int nixge_stop(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	u32 cr;
+
+	netif_stop_queue(ndev);
+	napi_disable(&priv->napi);
+
+	if (ndev->phydev)
+		phy_stop(ndev->phydev);
+
+	cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+	nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET,
+			    cr & (~XAXIDMA_CR_RUNSTOP_MASK));
+	cr = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+	nixge_dma_write_reg(priv, XAXIDMA_TX_CR_OFFSET,
+			    cr & (~XAXIDMA_CR_RUNSTOP_MASK));
+
+	tasklet_kill(&priv->dma_err_tasklet);
+
+	free_irq(priv->tx_irq, ndev);
+	free_irq(priv->rx_irq, ndev);
+
+	nixge_hw_dma_bd_release(ndev);
+
+	return 0;
+}
+
+static int nixge_change_mtu(struct net_device *ndev, int new_mtu)
+{
+	if (netif_running(ndev))
+		return -EBUSY;
+
+	if ((new_mtu + NIXGE_HDR_SIZE + NIXGE_TRL_SIZE) >
+	    NIXGE_MAX_JUMBO_FRAME_SIZE)
+		return -EINVAL;
+
+	ndev->mtu = new_mtu;
+
+	return 0;
+}
+
+static s32 __nixge_hw_set_mac_address(struct net_device *ndev)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	nixge_ctrl_write_reg(priv, NIXGE_REG_MAC_LSB,
+			     (ndev->dev_addr[2]) << 24 |
+			     (ndev->dev_addr[3] << 16) |
+			     (ndev->dev_addr[4] << 8) |
+			     (ndev->dev_addr[5] << 0));
+
+	nixge_ctrl_write_reg(priv, NIXGE_REG_MAC_MSB,
+			     (ndev->dev_addr[1] | (ndev->dev_addr[0] << 8)));
+
+	return 0;
+}
+
+static int nixge_net_set_mac_address(struct net_device *ndev, void *p)
+{
+	int err;
+
+	err = eth_mac_addr(ndev, p);
+	if (!err)
+		__nixge_hw_set_mac_address(ndev);
+
+	return err;
+}
+
+static const struct net_device_ops nixge_netdev_ops = {
+	.ndo_open = nixge_open,
+	.ndo_stop = nixge_stop,
+	.ndo_start_xmit = nixge_start_xmit,
+	.ndo_change_mtu	= nixge_change_mtu,
+	.ndo_set_mac_address = nixge_net_set_mac_address,
+	.ndo_validate_addr = eth_validate_addr,
+};
+
+static void nixge_ethtools_get_drvinfo(struct net_device *ndev,
+				       struct ethtool_drvinfo *ed)
+{
+	strlcpy(ed->driver, "nixge", sizeof(ed->driver));
+	strlcpy(ed->bus_info, "platform", sizeof(ed->bus_info));
+}
+
+static int nixge_ethtools_get_coalesce(struct net_device *ndev,
+				       struct ethtool_coalesce *ecoalesce)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	u32 regval = 0;
+
+	regval = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
+	ecoalesce->rx_max_coalesced_frames = (regval & XAXIDMA_COALESCE_MASK)
+					     >> XAXIDMA_COALESCE_SHIFT;
+	regval = nixge_dma_read_reg(priv, XAXIDMA_TX_CR_OFFSET);
+	ecoalesce->tx_max_coalesced_frames = (regval & XAXIDMA_COALESCE_MASK)
+					     >> XAXIDMA_COALESCE_SHIFT;
+	return 0;
+}
+
+static int nixge_ethtools_set_coalesce(struct net_device *ndev,
+				       struct ethtool_coalesce *ecoalesce)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	if (netif_running(ndev)) {
+		netdev_err(ndev,
+			   "Please stop netif before applying configuration\n");
+		return -EBUSY;
+	}
+
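+	/* Only {rx,tx}_max_coalesced_frames are backed by hardware
+	 * counters; reject every other coalesce setting.
+	 */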
+	if (ecoalesce->rx_coalesce_usecs ||
+	    ecoalesce->rx_coalesce_usecs_irq ||
+	    ecoalesce->rx_max_coalesced_frames_irq ||
+	    ecoalesce->tx_coalesce_usecs ||
+	    ecoalesce->tx_coalesce_usecs_irq ||
+	    ecoalesce->tx_max_coalesced_frames_irq ||
+	    ecoalesce->stats_block_coalesce_usecs ||
+	    ecoalesce->use_adaptive_rx_coalesce ||
+	    ecoalesce->use_adaptive_tx_coalesce ||
+	    ecoalesce->pkt_rate_low ||
+	    ecoalesce->rx_coalesce_usecs_low ||
+	    ecoalesce->rx_max_coalesced_frames_low ||
+	    ecoalesce->tx_coalesce_usecs_low ||
+	    ecoalesce->tx_max_coalesced_frames_low ||
+	    ecoalesce->pkt_rate_high ||
+	    ecoalesce->rx_coalesce_usecs_high ||
+	    ecoalesce->rx_max_coalesced_frames_high ||
+	    ecoalesce->tx_coalesce_usecs_high ||
+	    ecoalesce->tx_max_coalesced_frames_high ||
+	    ecoalesce->rate_sample_interval)
+		return -EOPNOTSUPP;
+	if (ecoalesce->rx_max_coalesced_frames)
+		priv->coalesce_count_rx = ecoalesce->rx_max_coalesced_frames;
+	if (ecoalesce->tx_max_coalesced_frames)
+		priv->coalesce_count_tx = ecoalesce->tx_max_coalesced_frames;
+
+	return 0;
+}
+
+static int nixge_ethtools_set_phys_id(struct net_device *ndev,
+				      enum ethtool_phys_id_state state)
+{
+	struct nixge_priv *priv = netdev_priv(ndev);
+	u32 ctrl;
+
+	ctrl = nixge_ctrl_read_reg(priv, NIXGE_REG_LED_CTL);
+	switch (state) {
+	case ETHTOOL_ID_ACTIVE:
+		ctrl |= NIXGE_ID_LED_CTL_EN;
+		/* Enable identification LED override */
+		nixge_ctrl_write_reg(priv, NIXGE_REG_LED_CTL, ctrl);
+		return 2;
+
+	case ETHTOOL_ID_ON:
+		ctrl |= NIXGE_ID_LED_CTL_VAL;
+		nixge_ctrl_write_reg(priv, NIXGE_REG_LED_CTL, ctrl);
+		break;
+
+	case ETHTOOL_ID_OFF:
+		ctrl &= ~NIXGE_ID_LED_CTL_VAL;
+		nixge_ctrl_write_reg(priv, NIXGE_REG_LED_CTL, ctrl);
+		break;
+
+	case ETHTOOL_ID_INACTIVE:
+		/* Restore LED settings */
+		ctrl &= ~NIXGE_ID_LED_CTL_EN;
+		nixge_ctrl_write_reg(priv, NIXGE_REG_LED_CTL, ctrl);
+		break;
+	}
+
+	return 0;
+}
+
+static const struct ethtool_ops nixge_ethtool_ops = {
+	.get_drvinfo	= nixge_ethtools_get_drvinfo,
+	.get_coalesce	= nixge_ethtools_get_coalesce,
+	.set_coalesce	= nixge_ethtools_set_coalesce,
+	.set_phys_id	= nixge_ethtools_set_phys_id,
+	.get_link_ksettings = phy_ethtool_get_link_ksettings,
+	.set_link_ksettings = phy_ethtool_set_link_ksettings,
+	.get_link	= ethtool_op_get_link,
+};
+
+static int nixge_mdio_read(struct mii_bus *bus, int phy_id, int reg)
+{
+	struct nixge_priv *priv = bus->priv;
+	u32 status, tmp;
+	int err;
+	u16 device;
+
+	if (reg & MII_ADDR_C45) {
+		device = (reg >> 16) & 0x1f;
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
+
+		tmp = NIXGE_MDIO_CLAUSE45 |
+		      NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+		err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL,
+					      status, !status, 10, 1000);
+		if (err) {
+			dev_err(priv->dev, "timeout setting address\n");
+			return err;
+		}
+
+		tmp = NIXGE_MDIO_CLAUSE45 | NIXGE_MDIO_OP(NIXGE_MDIO_OP_READ) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+	} else {
+		device = reg & 0x1f;
+
+		tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(MDIO_C22_READ) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+	}
+
+	nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+	nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+	err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL, status,
+				      !status, 10, 1000);
+	if (err) {
+		dev_err(priv->dev, "timeout setting read command\n");
+		return err;
+	}
+
+	status = nixge_ctrl_read_reg(priv, NIXGE_REG_MDIO_DATA);
+
+	return status;
+}
+
+static int nixge_mdio_write(struct mii_bus *bus, int phy_id, int reg, u16 val)
+{
+	struct nixge_priv *priv = bus->priv;
+	u32 status, tmp;
+	u16 device;
+	int err;
+
+	if (reg & MII_ADDR_C45) {
+		device = (reg >> 16) & 0x1f;
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_ADDR, reg & 0xffff);
+
+		tmp = NIXGE_MDIO_CLAUSE45 |
+		      NIXGE_MDIO_OP(NIXGE_MDIO_OP_ADDRESS) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+		err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL,
+					      status, !status, 10, 1000);
+		if (err) {
+			dev_err(priv->dev, "timeout setting address\n");
+			return err;
+		}
+
+		tmp = NIXGE_MDIO_CLAUSE45 |
+		      NIXGE_MDIO_OP(NIXGE_MDIO_OP_WRITE) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+		/* kick off the write, mirroring the other MDIO paths */
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+		err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL,
+					      status, !status, 10, 1000);
+		if (err)
+			dev_err(priv->dev, "timeout setting write command\n");
+	} else {
+		device = reg & 0x1f;
+
+		tmp = NIXGE_MDIO_CLAUSE22 | NIXGE_MDIO_OP(MDIO_C22_WRITE) |
+		      NIXGE_MDIO_ADDR(phy_id) | NIXGE_MDIO_MMD(device);
+
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_DATA, val);
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_OP, tmp);
+		nixge_ctrl_write_reg(priv, NIXGE_REG_MDIO_CTRL, 1);
+
+		err = nixge_ctrl_poll_timeout(priv, NIXGE_REG_MDIO_CTRL,
+					      status, !status, 10, 1000);
+		if (err)
+			dev_err(priv->dev, "timeout setting write command\n");
+	}
+
+	return err;
+}
+
+static int nixge_mdio_setup(struct nixge_priv *priv, struct device_node *np)
+{
+	struct mii_bus *bus;
+	int err;
+
+	bus = mdiobus_alloc();
+	if (!bus)
+		return -ENOMEM;
+
+	snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", dev_name(priv->dev));
+	bus->priv = priv;
+	bus->name = "nixge_mii_bus";
+	bus->read = nixge_mdio_read;
+	bus->write = nixge_mdio_write;
+	bus->parent = priv->dev;
+
+	priv->mii_bus = bus;
+	err = of_mdiobus_register(bus, np);
+	if (err)
+		goto err_register;
+
+	return 0;
+
+err_register:
+	mdiobus_free(bus);
+	return err;
+}
+
+static void *nixge_get_nvmem_address(struct device *dev)
+{
+	struct nvmem_cell *cell;
+	size_t cell_size;
+	char *mac;
+
+	cell = nvmem_cell_get(dev, "address");
+	if (IS_ERR(cell))
+		return NULL;
+
+	mac = nvmem_cell_read(cell, &cell_size);
+	nvmem_cell_put(cell);
+
+	/* don't hand an ERR_PTR to is_valid_ether_addr() */
+	return IS_ERR(mac) ? NULL : mac;
+}
+
+static int nixge_probe(struct platform_device *pdev)
+{
+	struct nixge_priv *priv;
+	struct net_device *ndev;
+	struct resource *dmares;
+	const char *mac_addr;
+	int err;
+
+	ndev = alloc_etherdev(sizeof(*priv));
+	if (!ndev)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, ndev);
+	SET_NETDEV_DEV(ndev, &pdev->dev);
+
+	ndev->features = NETIF_F_SG;
+	ndev->netdev_ops = &nixge_netdev_ops;
+	ndev->ethtool_ops = &nixge_ethtool_ops;
+
+	/* MTU range: 64 - 9000 */
+	ndev->min_mtu = 64;
+	ndev->max_mtu = NIXGE_JUMBO_MTU;
+
+	mac_addr = nixge_get_nvmem_address(&pdev->dev);
+	if (mac_addr && is_valid_ether_addr(mac_addr))
+		ether_addr_copy(ndev->dev_addr, mac_addr);
+	else
+		eth_hw_addr_random(ndev);
+
+	priv = netdev_priv(ndev);
+	priv->ndev = ndev;
+	priv->dev = &pdev->dev;
+
+	netif_napi_add(ndev, &priv->napi, nixge_poll, NAPI_POLL_WEIGHT);
+
+	dmares = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->dma_regs = devm_ioremap_resource(&pdev->dev, dmares);
+	if (IS_ERR(priv->dma_regs)) {
+		netdev_err(ndev, "failed to map dma regs\n");
+		return PTR_ERR(priv->dma_regs);
+	}
+	priv->ctrl_regs = priv->dma_regs + NIXGE_REG_CTRL_OFFSET;
+	__nixge_hw_set_mac_address(ndev);
+
+	priv->tx_irq = platform_get_irq_byname(pdev, "tx");
+	if (priv->tx_irq < 0) {
+		netdev_err(ndev, "could not find 'tx' irq\n");
+		return priv->tx_irq;
+	}
+
+	priv->rx_irq = platform_get_irq_byname(pdev, "rx");
+	if (priv->rx_irq < 0) {
+		netdev_err(ndev, "could not find 'rx' irq\n");
+		return priv->rx_irq;
+	}
+
+	priv->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
+	priv->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+
+	err = nixge_mdio_setup(priv, pdev->dev.of_node);
+	if (err) {
+		netdev_err(ndev, "error registering mdio bus\n");
+		goto free_netdev;
+	}
+
+	priv->phy_mode = of_get_phy_mode(pdev->dev.of_node);
+	if (priv->phy_mode < 0) {
+		netdev_err(ndev, "could not find \"phy-mode\" property\n");
+		err = -EINVAL;
+		goto unregister_mdio;
+	}
+
+	priv->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+	if (!priv->phy_node) {
+		netdev_err(ndev, "could not find \"phy-handle\" property\n");
+		err = -EINVAL;
+		goto unregister_mdio;
+	}
+
+	err = register_netdev(priv->ndev);
+	if (err) {
+		netdev_err(ndev, "register_netdev() error (%i)\n", err);
+		goto unregister_mdio;
+	}
+
+	return 0;
+
+unregister_mdio:
+	mdiobus_unregister(priv->mii_bus);
+	mdiobus_free(priv->mii_bus);
+
+free_netdev:
+	free_netdev(ndev);
+
+	return err;
+}
+
+static int nixge_remove(struct platform_device *pdev)
+{
+	struct net_device *ndev = platform_get_drvdata(pdev);
+	struct nixge_priv *priv = netdev_priv(ndev);
+
+	mdiobus_unregister(priv->mii_bus);
+	mdiobus_free(priv->mii_bus);
+
+	unregister_netdev(ndev);
+
+	free_netdev(ndev);
+
+	return 0;
+}
+
+/* Match table for of_platform binding */
+static const struct of_device_id nixge_dt_ids[] = {
+	{ .compatible = "ni,xge-enet-2.00", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, nixge_dt_ids);
+
+static struct platform_driver nixge_driver = {
+	.probe		= nixge_probe,
+	.remove		= nixge_remove,
+	.driver		= {
+		.name		= "nixge",
+		.of_match_table	= of_match_ptr(nixge_dt_ids),
+	},
+};
+module_platform_driver(nixge_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("National Instruments XGE Management MAC");
+MODULE_AUTHOR("Moritz Fischer <mdf@kernel.org>");
-- 
2.16.1