The axiethernet driver can use the dmaengine framework to communicate
with the Xilinx DMA engine drivers (AXIDMA, MCDMA). The motivation behind
this dmaengine adoption is to reuse the in-kernel Xilinx DMA engine
driver[1] and to remove the redundant DMA programming sequence[2] from
the ethernet driver. This simplifies the ethernet driver and also makes
it generic enough to hook up to any compliant DMA IP, i.e. AXIDMA or
MCDMA, without modification.
The dmaengine framework was extended with metadata API support during
the axidma RFC[3] discussion. However, it still needs further
enhancements to make it well suited for ethernet use cases.
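As a minimal sketch of the metadata flow this relies on (the callback
shape and the APP word layout are assumptions taken from this series,
not dmaengine guarantees), an RX completion callback can read the
DMA-attached APP words roughly like:

  #include <linux/err.h>
  #include <linux/dmaengine.h>

  static void rx_done_cb(void *param, const struct dmaengine_result *result)
  {
          struct dma_async_tx_descriptor *desc = param; /* hypothetical param */
          size_t len, max_len;
          u32 *app;

          /* APP words the DMA attached to the completed descriptor */
          app = dmaengine_desc_get_metadata_ptr(desc, &len, &max_len);
          if (!IS_ERR_OR_NULL(app))
                  pr_info("rx frame length: %u\n", app[4] & 0xFFFF);
  }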
Backward compatibility support:
To support backward compatibility, we are planning to use the approach
below:
1) Treat the "dmas" property as optional for now, to differentiate the
dmaengine-based ethernet driver from the built-in DMA ethernet driver
(a detection sketch follows this list). This property will be made
required later.
2) After some time, introduce a new compatible string to support the
dmaengine method. This new compatible will use different APIs for
init and data transfer.
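A sketch of that detection (it mirrors the approach in patch 2/3; the
helper name is made up for illustration):

  #include <linux/of.h>
  #include <linux/platform_device.h>

  /* Presence of "dmas" selects the dmaengine path, absence the
   * built-in DMA programming path.
   */
  static bool axienet_wants_dmaengine(struct platform_device *pdev)
  {
          return of_find_property(pdev->dev.of_node, "dmas", NULL) != NULL;
  }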
Comments, suggestions, thoughts to implement remaining functional
features are very welcome!
[1]: https://github.com/torvalds/linux/blob/master/drivers/dma/xilinx/xilinx_dma.c
[2]: https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/xilinx/xilinx_axienet_main.c#L238
[3]: http://lkml.iu.edu/hypermail/linux/kernel/1804.0/00367.html
Changes in V4:
1) Updated commit description about TX/RX channel names (1/3).
2) Removed "dt-bindings" and "dmaengine" strings in subject (1/3).
3) Extended dmas and dma-names to support MCDMA channel names (1/3).
4) Renamed has_dmas to use_dmaengine (2/3).
5) Removed AXIENET_USE_DMA (2/3).
6) Removed AXIENET_USE_DMA (3/3).
7) Added dev_err_probe for dma_request_chan error handling (3/3).
8) Added kmem_cache_destroy for the cache created in axienet_setup_dma_chan (3/3).
Changes in V3:
1) Moved RFC to PATCH.
2) Removed ethtool get/set coalesce; will be added later.
3) Added backward compatibility support.
4) Split the dmaengine support patch of V2 into two patches (2/3 and 3/3).
https://lore.kernel.org/all/[email protected]/
Changes in V2:
1) Add ethtool get/set coalesce and DMA reset using DMAengine framework.
2) Add performance numbers.
3) Remove .txt and rename the file to xlnx,axiethernet.yaml.
4) Fix DT check warning ('device_type' does not match any of the
regexes: 'pinctrl-[0-9]+', from schema Documentation/devicetree/
bindings/net/xilinx_axienet.yaml).
Radhey Shyam Pandey (1):
dt-bindings: net: xlnx,axi-ethernet: Introduce DMA support
Sarath Babu Naidu Gaddam (2):
net: axienet: Preparatory changes for dmaengine support
net: axienet: Introduce dmaengine support
.../bindings/net/xlnx,axi-ethernet.yaml | 16 +
drivers/net/ethernet/xilinx/Kconfig | 1 +
drivers/net/ethernet/xilinx/xilinx_axienet.h | 8 +
.../net/ethernet/xilinx/xilinx_axienet_main.c | 616 ++++++++++++++----
4 files changed, 516 insertions(+), 125 deletions(-)
--
2.25.1
Add dmaengine framework support to communicate with the Xilinx DMA
engine driver (AXIDMA).
The Axi Ethernet driver uses separate channels for transmit and receive.
Add support for these channels to handle TX and RX with skb and
appropriate callbacks. Also add the Axi Ethernet core interrupt for
dmaengine framework support.
The dmaengine framework was extended with metadata API support during the
axidma RFC[1] discussion. However, it still needs further enhancements to
make it well suited for ethernet use cases. Ethernet features such as
ethtool set/get of DMA IP properties and ndo_poll_controller (mentioned
in the TODO) require follow-up discussion and dmaengine framework
enhancements.
Introducing dmaengine support adds a dependency on xilinx_dma, since one
of its APIs is used to reset the DMA. The DMA needs to be reset prior
to accessing MDIO.
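A sketch of that reset sequence (this condenses the probe code in this
patch; the function name is illustrative only):

  #include <linux/dmaengine.h>
  #include <linux/dma/xilinx_dma.h>

  static int axienet_dma_reset_via_engine(struct device *dev)
  {
          struct xilinx_vdma_config cfg = { .reset = 1 };
          struct dma_chan *chan;
          int ret;

          chan = dma_request_chan(dev, "tx_chan0");
          if (IS_ERR(chan))
                  return PTR_ERR(chan);

          /* Named for VDMA, but it also resets AXI DMA channels. */
          ret = xilinx_vdma_channel_set_config(chan, &cfg);
          dma_release_channel(chan);
          return ret;
  }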
[1]: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Radhey Shyam Pandey <[email protected]>
Signed-off-by: Sarath Babu Naidu Gaddam <[email protected]>
---
Changes in V4:
1) Removed AXIENET_USE_DMA.
2) Added dev_err_probe for dma_request_chan error handling.
3) Added kmem_cache_destroy for the cache created in axienet_setup_dma_chan.
4) Added XILINX_DMA dependency in the ethernet driver Kconfig file.
5) Moved setup_dma_channel into axienet_init_dmaengine().
6) Removed unlikely from "if (unlikely(ret < 0))".
7) Changed "if (ret == 0)" to "if (!ret)".
8) Renamed DMA_MEM_TO_DEV to DMA_TO_DEVICE in the dma_unmap_sg() calls.
9) Removed the else branch for "lp->use_dmaengine = 1;" in probe.
Changes in V3:
1) New patch for dmaengine framework support.
---
drivers/net/ethernet/xilinx/Kconfig | 1 +
drivers/net/ethernet/xilinx/xilinx_axienet.h | 6 +
.../net/ethernet/xilinx/xilinx_axienet_main.c | 309 +++++++++++++++++-
3 files changed, 314 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/xilinx/Kconfig b/drivers/net/ethernet/xilinx/Kconfig
index 0014729b8865..35d96c633a33 100644
--- a/drivers/net/ethernet/xilinx/Kconfig
+++ b/drivers/net/ethernet/xilinx/Kconfig
@@ -26,6 +26,7 @@ config XILINX_EMACLITE
config XILINX_AXI_EMAC
tristate "Xilinx 10/100/1000 AXI Ethernet support"
depends on HAS_IOMEM
+ depends on XILINX_DMA
select PHYLINK
help
This driver supports the 10/100/1000 Ethernet from Xilinx for the
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 3ead0bac597b..726c14d1470a 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -436,6 +436,9 @@ struct axidma_bd {
* @coalesce_count_tx: Store the irq coalesce on TX side.
* @coalesce_usec_tx: IRQ coalesce delay for TX
* @use_dmaengine: flag to check dmaengine framework usage.
+ * @tx_chan: TX DMA channel.
+ * @rx_chan: RX DMA channel.
+ * @skb_cache: Custom skb slab allocator
*/
struct axienet_local {
struct net_device *ndev;
@@ -501,6 +504,9 @@ struct axienet_local {
u32 coalesce_count_tx;
u32 coalesce_usec_tx;
u8 use_dmaengine;
+ struct dma_chan *tx_chan;
+ struct dma_chan *rx_chan;
+ struct kmem_cache *skb_cache;
};
/**
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 1fa67bb09625..ea7321703155 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -37,6 +37,9 @@
#include <linux/phy.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma/xilinx_dma.h>
#include "xilinx_axienet.h"
@@ -46,6 +49,9 @@
#define TX_BD_NUM_MIN (MAX_SKB_FRAGS + 1)
#define TX_BD_NUM_MAX 4096
#define RX_BD_NUM_MAX 4096
+#define DMA_NUM_APP_WORDS 5
+#define LEN_APP 4
+#define RX_BUF_NUM_DEFAULT 128
/* Must be shorter than length of ethtool_drvinfo.driver field to fit */
#define DRIVER_NAME "xaxienet"
@@ -54,6 +60,16 @@
#define AXIENET_REGS_N 40
+struct axi_skbuff {
+ struct scatterlist sgl[MAX_SKB_FRAGS + 1];
+ struct dma_async_tx_descriptor *desc;
+ dma_addr_t dma_address;
+ struct sk_buff *skb;
+ int sg_len;
+} __packed;
+
+static int axienet_rx_submit_desc(struct net_device *ndev);
+
/* Match table for of_platform binding */
static const struct of_device_id axienet_of_match[] = {
{ .compatible = "xlnx,axi-ethernet-1.00.a", },
@@ -726,6 +742,108 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
return 0;
}
+/**
+ * axienet_dma_tx_cb - DMA engine callback for TX channel.
+ * @data: Pointer to the axi_skbuff structure
+ * @result: error reporting through dmaengine_result.
+ * This function is called by dmaengine driver for TX channel to notify
+ * that the transmit is done.
+ */
+static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
+{
+ struct axi_skbuff *axi_skb = data;
+
+ struct net_device *netdev = axi_skb->skb->dev;
+ struct axienet_local *lp = netdev_priv(netdev);
+
+ u64_stats_update_begin(&lp->tx_stat_sync);
+ u64_stats_add(&lp->tx_bytes, axi_skb->skb->len);
+ u64_stats_add(&lp->tx_packets, 1);
+ u64_stats_update_end(&lp->tx_stat_sync);
+
+ dma_unmap_sg(lp->dev, axi_skb->sgl, axi_skb->sg_len, DMA_TO_DEVICE);
+ dev_kfree_skb_any(axi_skb->skb);
+ kmem_cache_free(lp->skb_cache, axi_skb);
+}
+
+/**
+ * axienet_start_xmit_dmaengine - Starts the transmission.
+ * @skb: sk_buff pointer that contains data to be Txed.
+ * @ndev: Pointer to net_device structure.
+ *
+ * Return: NETDEV_TX_OK, on success
+ * NETDEV_TX_BUSY, on any memory failure or SG error.
+ *
+ * This function is invoked from xmit to initiate transmission. It
+ * sets up the skb, the callback API, the SG list, etc.
+ * Additionally, if checksum offloading is supported,
+ * it populates AXI Stream Control fields with appropriate values.
+ */
+static netdev_tx_t
+axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct dma_async_tx_descriptor *dma_tx_desc = NULL;
+ struct axienet_local *lp = netdev_priv(ndev);
+ u32 app[DMA_NUM_APP_WORDS] = {0};
+ struct axi_skbuff *axi_skb;
+ u32 csum_start_off;
+ u32 csum_index_off;
+ int sg_len;
+ int ret;
+
+ sg_len = skb_shinfo(skb)->nr_frags + 1;
+ axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_ATOMIC); /* xmit context cannot sleep */
+ if (!axi_skb)
+ return NETDEV_TX_BUSY;
+
+ sg_init_table(axi_skb->sgl, sg_len);
+ ret = skb_to_sgvec(skb, axi_skb->sgl, 0, skb->len);
+ if (ret < 0)
+ goto xmit_error_skb_sgvec;
+
+ ret = dma_map_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
+ if (!ret)
+ goto xmit_error_skb_sgvec;
+
+ /* Fill up app fields for checksum */
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
+ /* Tx Full Checksum Offload Enabled */
+ app[0] |= 2;
+ } else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
+ csum_start_off = skb_transport_offset(skb);
+ csum_index_off = csum_start_off + skb->csum_offset;
+ /* Tx Partial Checksum Offload Enabled */
+ app[0] |= 1;
+ app[1] = (csum_start_off << 16) | csum_index_off;
+ }
+ } else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
+ app[0] |= 2; /* Tx Full Checksum Offload Enabled */
+ }
+
+ dma_tx_desc = lp->tx_chan->device->device_prep_slave_sg(lp->tx_chan, axi_skb->sgl,
+ sg_len, DMA_MEM_TO_DEV,
+ DMA_PREP_INTERRUPT, (void *)app);
+
+ if (!dma_tx_desc)
+ goto xmit_error_prep;
+
+ axi_skb->skb = skb;
+ axi_skb->sg_len = sg_len;
+ dma_tx_desc->callback_param = axi_skb;
+ dma_tx_desc->callback_result = axienet_dma_tx_cb;
+ dmaengine_submit(dma_tx_desc);
+ dma_async_issue_pending(lp->tx_chan);
+
+ return NETDEV_TX_OK;
+
+xmit_error_prep:
+ dma_unmap_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
+xmit_error_skb_sgvec:
+ kmem_cache_free(lp->skb_cache, axi_skb);
+ return NETDEV_TX_BUSY;
+}
+
/**
* axienet_tx_poll - Invoked once a transmit is completed by the
* Axi DMA Tx channel.
@@ -910,7 +1028,42 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
if (!lp->use_dmaengine)
return axienet_start_xmit_legacy(skb, ndev);
else
- return NETDEV_TX_BUSY;
+ return axienet_start_xmit_dmaengine(skb, ndev);
+}
+
+/**
+ * axienet_dma_rx_cb - DMA engine callback for RX channel.
+ * @data: Pointer to the axi_skbuff structure
+ * @result: error reporting through dmaengine_result.
+ * This function is called by dmaengine driver for RX channel to notify
+ * that the packet is received.
+ */
+static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+{
+ struct axi_skbuff *axi_skb = data;
+ struct sk_buff *skb = axi_skb->skb;
+ struct net_device *netdev = skb->dev;
+ struct axienet_local *lp = netdev_priv(netdev);
+ size_t meta_len, meta_max_len, rx_len;
+ u32 *app;
+
+ app = dmaengine_desc_get_metadata_ptr(axi_skb->desc, &meta_len, &meta_max_len);
+ dma_unmap_single(lp->dev, axi_skb->dma_address, lp->max_frm_size,
+ DMA_FROM_DEVICE);
+ /* TODO: Derive app word index programmatically */
+ rx_len = (app[LEN_APP] & 0xFFFF);
+ skb_put(skb, rx_len);
+ skb->protocol = eth_type_trans(skb, netdev);
+ skb->ip_summed = CHECKSUM_NONE;
+
+ netif_rx(skb);
+ kmem_cache_free(lp->skb_cache, axi_skb);
+ u64_stats_update_begin(&lp->rx_stat_sync);
+ u64_stats_add(&lp->rx_packets, 1);
+ u64_stats_add(&lp->rx_bytes, rx_len);
+ u64_stats_update_end(&lp->rx_stat_sync);
+ axienet_rx_submit_desc(netdev);
+ dma_async_issue_pending(lp->rx_chan);
}
/**
@@ -1146,6 +1299,108 @@ static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
static void axienet_dma_err_handler(struct work_struct *work);
+/**
+ * axienet_rx_submit_desc - Submit a descriptor with the required data,
+ * like the callback API and skb buffer, to the dmaengine.
+ *
+ * @ndev: net_device pointer
+ *
+ * Return: 0, on success.
+ * non-zero error value on failure
+ */
+static int axienet_rx_submit_desc(struct net_device *ndev)
+{
+ struct dma_async_tx_descriptor *dma_rx_desc = NULL;
+ struct axienet_local *lp = netdev_priv(ndev);
+ struct axi_skbuff *axi_skb;
+ struct sk_buff *skb;
+ dma_addr_t addr;
+ int ret;
+
+ axi_skb = kmem_cache_alloc(lp->skb_cache, GFP_ATOMIC); /* may run from DMA callback */
+
+ if (!axi_skb)
+ return -ENOMEM;
+ skb = netdev_alloc_skb(ndev, lp->max_frm_size);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto rx_bd_init_skb;
+ }
+
+ sg_init_table(axi_skb->sgl, 1);
+ addr = dma_map_single(lp->dev, skb->data, lp->max_frm_size, DMA_FROM_DEVICE);
+ sg_dma_address(axi_skb->sgl) = addr;
+ sg_dma_len(axi_skb->sgl) = lp->max_frm_size;
+ dma_rx_desc = dmaengine_prep_slave_sg(lp->rx_chan, axi_skb->sgl,
+ 1, DMA_DEV_TO_MEM,
+ DMA_PREP_INTERRUPT);
+ if (!dma_rx_desc) {
+ ret = -EINVAL;
+ goto rx_bd_init_prep_sg;
+ }
+
+ axi_skb->skb = skb;
+ axi_skb->dma_address = sg_dma_address(axi_skb->sgl);
+ axi_skb->desc = dma_rx_desc;
+ dma_rx_desc->callback_param = axi_skb;
+ dma_rx_desc->callback_result = axienet_dma_rx_cb;
+ dmaengine_submit(dma_rx_desc);
+
+ return 0;
+
+rx_bd_init_prep_sg:
+ dma_unmap_single(lp->dev, addr, lp->max_frm_size, DMA_FROM_DEVICE);
+ dev_kfree_skb(skb);
+rx_bd_init_skb:
+ kmem_cache_free(lp->skb_cache, axi_skb);
+ return ret;
+}
+
+/**
+ * axienet_init_dmaengine - init the dmaengine code.
+ * @ndev: Pointer to net_device structure
+ *
+ * Return: 0, on success.
+ * non-zero error value on failure
+ *
+ * This is the dmaengine initialization code.
+ */
+static inline int axienet_init_dmaengine(struct net_device *ndev)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+ int i, ret;
+
+ lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
+ if (IS_ERR(lp->tx_chan)) {
+ ret = PTR_ERR(lp->tx_chan);
+ return dev_err_probe(lp->dev, ret, "No Ethernet DMA (TX) channel found\n");
+ }
+
+ lp->rx_chan = dma_request_chan(lp->dev, "rx_chan0");
+ if (IS_ERR(lp->rx_chan)) {
+ ret = PTR_ERR(lp->rx_chan);
+ dev_err_probe(lp->dev, ret, "No Ethernet DMA (RX) channel found\n");
+ goto err_dma_request_rx;
+ }
+ lp->skb_cache = kmem_cache_create("ethernet", sizeof(struct axi_skbuff),
+ 0, 0, NULL);
+ if (!lp->skb_cache) {
+ ret = -ENOMEM;
+ goto err_kmem;
+ }
+ /* TODO: Instead of BD_NUM_DEFAULT use runtime support */
+ for (i = 0; i < RX_BUF_NUM_DEFAULT; i++)
+ axienet_rx_submit_desc(ndev);
+ dma_async_issue_pending(lp->rx_chan);
+
+ return 0;
+err_kmem:
+ dma_release_channel(lp->rx_chan);
+err_dma_request_rx:
+ dma_release_channel(lp->tx_chan);
+ return ret;
+}
+
/**
* axienet_init_legacy_dma - init the dma legacy code.
* @ndev: Pointer to net_device structure
@@ -1237,7 +1492,24 @@ static int axienet_open(struct net_device *ndev)
phylink_start(lp->phylink);
- if (!lp->use_dmaengine) {
+ if (lp->use_dmaengine) {
+ /* Enable interrupts for Axi Ethernet core (if defined) */
+ if (lp->eth_irq > 0) {
+ ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
+ ndev->name, ndev);
+ if (ret)
+ goto error_code;
+ }
+
+ ret = axienet_init_dmaengine(ndev);
+
+ if (ret < 0) {
+ if (lp->eth_irq > 0)
+ free_irq(lp->eth_irq, ndev);
+ goto error_code;
+ }
+
+ } else {
ret = axienet_init_legacy_dma(ndev);
if (ret)
goto error_code;
@@ -1285,6 +1557,14 @@ static int axienet_stop(struct net_device *ndev)
free_irq(lp->tx_irq, ndev);
free_irq(lp->rx_irq, ndev);
axienet_dma_bd_release(ndev);
+ } else {
+ dmaengine_terminate_sync(lp->tx_chan);
+ dmaengine_terminate_sync(lp->rx_chan);
+
+ dma_release_channel(lp->rx_chan);
+ dma_release_channel(lp->tx_chan);
+
+ kmem_cache_destroy(lp->skb_cache);
}
axienet_iow(lp, XAE_IE_OFFSET, 0);
@@ -2134,6 +2414,31 @@ static int axienet_probe(struct platform_device *pdev)
}
netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
+ } else {
+ struct xilinx_vdma_config cfg;
+ struct dma_chan *tx_chan;
+
+ lp->eth_irq = platform_get_irq_optional(pdev, 0);
+ tx_chan = dma_request_chan(lp->dev, "tx_chan0");
+
+ if (IS_ERR(tx_chan)) {
+ ret = PTR_ERR(tx_chan);
+ dev_err_probe(lp->dev, ret, "No Ethernet DMA (TX) channel found\n");
+ goto cleanup_clk;
+ }
+
+ cfg.reset = 1;
+ /* Despite the VDMA name, the API also supports DMA channel reset */
+ ret = xilinx_vdma_channel_set_config(tx_chan, &cfg);
+
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Reset channel failed\n");
+ dma_release_channel(tx_chan);
+ goto cleanup_clk;
+ }
+
+ dma_release_channel(tx_chan);
+ lp->use_dmaengine = 1;
}
/* Check for Ethernet core IRQ (optional) */
--
2.25.1
The axiethernet driver has built-in DMA programming. The aim is to remove
the axiethernet AXI DMA programming after some time and instead use the
dmaengine framework to communicate with the existing Xilinx DMA engine
controller (xilinx_dma) driver.
Keep the AXI DMA programming code under the use_dmaengine check so that
the dmaengine changes can be added later.
Perform minor code reordering to minimize conditional use_dmaengine
checks; there is no functional change.
The driver uses the "dmas" property to identify whether it should use the
dmaengine framework or the axiethernet built-in DMA programming.
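The resulting dispatch looks roughly like this (condensed from the diff
below; the dmaengine path itself only lands in patch 3/3):

  static netdev_tx_t
  axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
  {
          struct axienet_local *lp = netdev_priv(ndev);

          /* dmaengine path is stubbed here and wired up in 3/3 */
          if (lp->use_dmaengine)
                  return NETDEV_TX_BUSY;
          return axienet_start_xmit_legacy(skb, ndev);
  }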
Signed-off-by: Sarath Babu Naidu Gaddam <[email protected]>
---
Changes in V4:
1) Renamed has_dmas to use_dmaengine.
2) Removed AXIENET_USE_DMA.
3) Changed the start_xmit_* function descriptions.
Changes in V3:
1) New patch
---
drivers/net/ethernet/xilinx/xilinx_axienet.h | 2 +
.../net/ethernet/xilinx/xilinx_axienet_main.c | 317 +++++++++++-------
2 files changed, 191 insertions(+), 128 deletions(-)
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 575ff9de8985..3ead0bac597b 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -435,6 +435,7 @@ struct axidma_bd {
* @coalesce_usec_rx: IRQ coalesce delay for RX
* @coalesce_count_tx: Store the irq coalesce on TX side.
* @coalesce_usec_tx: IRQ coalesce delay for TX
+ * @use_dmaengine: flag to check dmaengine framework usage.
*/
struct axienet_local {
struct net_device *ndev;
@@ -499,6 +500,7 @@ struct axienet_local {
u32 coalesce_usec_rx;
u32 coalesce_count_tx;
u32 coalesce_usec_tx;
+ u8 use_dmaengine;
};
/**
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 3e310b55bce2..1fa67bb09625 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -588,10 +588,6 @@ static int axienet_device_reset(struct net_device *ndev)
struct axienet_local *lp = netdev_priv(ndev);
int ret;
- ret = __axienet_device_reset(lp);
- if (ret)
- return ret;
-
lp->max_frm_size = XAE_MAX_VLAN_FRAME_SIZE;
lp->options |= XAE_OPTION_VLAN;
lp->options &= (~XAE_OPTION_JUMBO);
@@ -605,11 +601,17 @@ static int axienet_device_reset(struct net_device *ndev)
lp->options |= XAE_OPTION_JUMBO;
}
- ret = axienet_dma_bd_init(ndev);
- if (ret) {
- netdev_err(ndev, "%s: descriptor allocation failed\n",
- __func__);
- return ret;
+ if (!lp->use_dmaengine) {
+ ret = __axienet_device_reset(lp);
+ if (ret)
+ return ret;
+
+ ret = axienet_dma_bd_init(ndev);
+ if (ret) {
+ netdev_err(ndev, "%s: descriptor allocation failed\n",
+ __func__);
+ return ret;
+ }
}
axienet_status = axienet_ior(lp, XAE_RCW1_OFFSET);
@@ -775,20 +777,20 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
}
/**
- * axienet_start_xmit - Starts the transmission.
+ * axienet_start_xmit_legacy - Starts the transmission.
* @skb: sk_buff pointer that contains data to be Txed.
* @ndev: Pointer to net_device structure.
*
* Return: NETDEV_TX_OK, on success
* NETDEV_TX_BUSY, if any of the descriptors are not free
*
- * This function is invoked from upper layers to initiate transmission. The
+ * This function is invoked from axienet_start_xmit to initiate transmission. The
* function uses the next available free BDs and populates their fields to
* start the transmission. Additionally if checksum offloading is supported,
* it populates AXI Stream Control fields with appropriate values.
*/
static netdev_tx_t
-axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+axienet_start_xmit_legacy(struct sk_buff *skb, struct net_device *ndev)
{
u32 ii;
u32 num_frag;
@@ -890,6 +892,27 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
return NETDEV_TX_OK;
}
+/**
+ * axienet_start_xmit - Invoke the transmission function
+ * @skb: sk_buff pointer that contains data to be Txed.
+ * @ndev: Pointer to net_device structure.
+ *
+ * Return: NETDEV_TX_OK, on success
+ * NETDEV_TX_BUSY, if any of the descriptors are not free
+ *
+ * This function is invoked from upper layers to initiate transmission
+ */
+static netdev_tx_t
+axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ if (!lp->use_dmaengine)
+ return axienet_start_xmit_legacy(skb, ndev);
+ else
+ return NETDEV_TX_BUSY;
+}
+
/**
* axienet_rx_poll - Triggered by RX ISR to complete the BD processing.
* @napi: Pointer to NAPI structure.
@@ -1124,41 +1147,22 @@ static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
static void axienet_dma_err_handler(struct work_struct *work);
/**
- * axienet_open - Driver open routine.
- * @ndev: Pointer to net_device structure
+ * axienet_init_legacy_dma - init the dma legacy code.
+ * @ndev: Pointer to net_device structure
*
* Return: 0, on success.
- * non-zero error value on failure
+ * non-zero error value on failure
+ *
+ * This is the dma initialization code. It also allocates interrupt
+ * service routines, enables the interrupt lines and ISR handling.
*
- * This is the driver open routine. It calls phylink_start to start the
- * PHY device.
- * It also allocates interrupt service routines, enables the interrupt lines
- * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
- * descriptors are initialized.
*/
-static int axienet_open(struct net_device *ndev)
+
+static inline int axienet_init_legacy_dma(struct net_device *ndev)
{
int ret;
struct axienet_local *lp = netdev_priv(ndev);
- dev_dbg(&ndev->dev, "axienet_open()\n");
-
- /* When we do an Axi Ethernet reset, it resets the complete core
- * including the MDIO. MDIO must be disabled before resetting.
- * Hold MDIO bus lock to avoid MDIO accesses during the reset.
- */
- axienet_lock_mii(lp);
- ret = axienet_device_reset(ndev);
- axienet_unlock_mii(lp);
-
- ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
- if (ret) {
- dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
- return ret;
- }
-
- phylink_start(lp->phylink);
-
/* Enable worker thread for Axi DMA error handling */
INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
@@ -1192,13 +1196,62 @@ static int axienet_open(struct net_device *ndev)
err_tx_irq:
napi_disable(&lp->napi_tx);
napi_disable(&lp->napi_rx);
- phylink_stop(lp->phylink);
- phylink_disconnect_phy(lp->phylink);
cancel_work_sync(&lp->dma_err_task);
dev_err(lp->dev, "request_irq() failed\n");
return ret;
}
+/**
+ * axienet_open - Driver open routine.
+ * @ndev: Pointer to net_device structure
+ *
+ * Return: 0, on success.
+ * non-zero error value on failure
+ *
+ * This is the driver open routine. It calls phylink_start to start the
+ * PHY device.
+ * It also allocates interrupt service routines, enables the interrupt lines
+ * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
+ * descriptors are initialized.
+ */
+static int axienet_open(struct net_device *ndev)
+{
+ int ret;
+ struct axienet_local *lp = netdev_priv(ndev);
+
+ dev_dbg(&ndev->dev, "%s\n", __func__);
+
+ /* When we do an Axi Ethernet reset, it resets the complete core
+ * including the MDIO. MDIO must be disabled before resetting.
+ * Hold MDIO bus lock to avoid MDIO accesses during the reset.
+ */
+ axienet_lock_mii(lp);
+ ret = axienet_device_reset(ndev);
+ axienet_unlock_mii(lp);
+
+ ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
+ if (ret) {
+ dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
+ return ret;
+ }
+
+ phylink_start(lp->phylink);
+
+ if (!lp->use_dmaengine) {
+ ret = axienet_init_legacy_dma(ndev);
+ if (ret)
+ goto error_code;
+ }
+
+ return 0;
+
+error_code:
+ phylink_stop(lp->phylink);
+ phylink_disconnect_phy(lp->phylink);
+
+ return ret;
+}
+
/**
* axienet_stop - Driver stop routine.
* @ndev: Pointer to net_device structure
@@ -1215,8 +1268,10 @@ static int axienet_stop(struct net_device *ndev)
dev_dbg(&ndev->dev, "axienet_close()\n");
- napi_disable(&lp->napi_tx);
- napi_disable(&lp->napi_rx);
+ if (!lp->use_dmaengine) {
+ napi_disable(&lp->napi_tx);
+ napi_disable(&lp->napi_rx);
+ }
phylink_stop(lp->phylink);
phylink_disconnect_phy(lp->phylink);
@@ -1224,18 +1279,18 @@ static int axienet_stop(struct net_device *ndev)
axienet_setoptions(ndev, lp->options &
~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
- axienet_dma_stop(lp);
+ if (!lp->use_dmaengine) {
+ axienet_dma_stop(lp);
+ cancel_work_sync(&lp->dma_err_task);
+ free_irq(lp->tx_irq, ndev);
+ free_irq(lp->rx_irq, ndev);
+ axienet_dma_bd_release(ndev);
+ }
axienet_iow(lp, XAE_IE_OFFSET, 0);
- cancel_work_sync(&lp->dma_err_task);
-
if (lp->eth_irq > 0)
free_irq(lp->eth_irq, ndev);
- free_irq(lp->tx_irq, ndev);
- free_irq(lp->rx_irq, ndev);
-
- axienet_dma_bd_release(ndev);
return 0;
}
@@ -1411,14 +1466,16 @@ static void axienet_ethtools_get_regs(struct net_device *ndev,
data[29] = axienet_ior(lp, XAE_FMI_OFFSET);
data[30] = axienet_ior(lp, XAE_AF0_OFFSET);
data[31] = axienet_ior(lp, XAE_AF1_OFFSET);
- data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
- data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
- data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
- data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
- data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
- data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
- data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
- data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+ if (!lp->use_dmaengine) {
+ data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+ data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+ data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
+ data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
+ data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+ data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+ data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
+ data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+ }
}
static void
@@ -1878,9 +1935,6 @@ static int axienet_probe(struct platform_device *pdev)
u64_stats_init(&lp->rx_stat_sync);
u64_stats_init(&lp->tx_stat_sync);
- netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
- netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
-
lp->axi_clk = devm_clk_get_optional(&pdev->dev, "s_axi_lite_clk");
if (!lp->axi_clk) {
/* For backward compatibility, if named AXI clock is not present,
@@ -2006,75 +2060,80 @@ static int axienet_probe(struct platform_device *pdev)
goto cleanup_clk;
}
- /* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
- np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
- if (np) {
- struct resource dmares;
+ if (!of_find_property(pdev->dev.of_node, "dmas", NULL)) {
+ /* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
+ np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
- ret = of_address_to_resource(np, 0, &dmares);
- if (ret) {
- dev_err(&pdev->dev,
- "unable to get DMA resource\n");
+ if (np) {
+ struct resource dmares;
+
+ ret = of_address_to_resource(np, 0, &dmares);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "unable to get DMA resource\n");
+ of_node_put(np);
+ goto cleanup_clk;
+ }
+ lp->dma_regs = devm_ioremap_resource(&pdev->dev,
+ &dmares);
+ lp->rx_irq = irq_of_parse_and_map(np, 1);
+ lp->tx_irq = irq_of_parse_and_map(np, 0);
of_node_put(np);
+ lp->eth_irq = platform_get_irq_optional(pdev, 0);
+ } else {
+ /* Check for these resources directly on the Ethernet node. */
+ lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
+ lp->rx_irq = platform_get_irq(pdev, 1);
+ lp->tx_irq = platform_get_irq(pdev, 0);
+ lp->eth_irq = platform_get_irq_optional(pdev, 2);
+ }
+ if (IS_ERR(lp->dma_regs)) {
+ dev_err(&pdev->dev, "could not map DMA regs\n");
+ ret = PTR_ERR(lp->dma_regs);
+ goto cleanup_clk;
+ }
+ if (lp->rx_irq <= 0 || lp->tx_irq <= 0) {
+ dev_err(&pdev->dev, "could not determine irqs\n");
+ ret = -ENOMEM;
goto cleanup_clk;
}
- lp->dma_regs = devm_ioremap_resource(&pdev->dev,
- &dmares);
- lp->rx_irq = irq_of_parse_and_map(np, 1);
- lp->tx_irq = irq_of_parse_and_map(np, 0);
- of_node_put(np);
- lp->eth_irq = platform_get_irq_optional(pdev, 0);
- } else {
- /* Check for these resources directly on the Ethernet node. */
- lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
- lp->rx_irq = platform_get_irq(pdev, 1);
- lp->tx_irq = platform_get_irq(pdev, 0);
- lp->eth_irq = platform_get_irq_optional(pdev, 2);
- }
- if (IS_ERR(lp->dma_regs)) {
- dev_err(&pdev->dev, "could not map DMA regs\n");
- ret = PTR_ERR(lp->dma_regs);
- goto cleanup_clk;
- }
- if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
- dev_err(&pdev->dev, "could not determine irqs\n");
- ret = -ENOMEM;
- goto cleanup_clk;
- }
- /* Autodetect the need for 64-bit DMA pointers.
- * When the IP is configured for a bus width bigger than 32 bits,
- * writing the MSB registers is mandatory, even if they are all 0.
- * We can detect this case by writing all 1's to one such register
- * and see if that sticks: when the IP is configured for 32 bits
- * only, those registers are RES0.
- * Those MSB registers were introduced in IP v7.1, which we check first.
- */
- if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
- void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
-
- iowrite32(0x0, desc);
- if (ioread32(desc) == 0) { /* sanity check */
- iowrite32(0xffffffff, desc);
- if (ioread32(desc) > 0) {
- lp->features |= XAE_FEATURE_DMA_64BIT;
- addr_width = 64;
- dev_info(&pdev->dev,
- "autodetected 64-bit DMA range\n");
- }
+ /* Autodetect the need for 64-bit DMA pointers.
+ * When the IP is configured for a bus width bigger than 32 bits,
+ * writing the MSB registers is mandatory, even if they are all 0.
+ * We can detect this case by writing all 1's to one such register
+ * and see if that sticks: when the IP is configured for 32 bits
+ * only, those registers are RES0.
+ * Those MSB registers were introduced in IP v7.1, which we check first.
+ */
+ if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
+ void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
+
iowrite32(0x0, desc);
+ if (ioread32(desc) == 0) { /* sanity check */
+ iowrite32(0xffffffff, desc);
+ if (ioread32(desc) > 0) {
+ lp->features |= XAE_FEATURE_DMA_64BIT;
+ addr_width = 64;
+ dev_info(&pdev->dev,
+ "autodetected 64-bit DMA range\n");
+ }
+ iowrite32(0x0, desc);
+ }
+ }
+ if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
+ dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit archecture\n");
+ ret = -EINVAL;
+ goto cleanup_clk;
}
- }
- if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
- dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit archecture\n");
- ret = -EINVAL;
- goto cleanup_clk;
- }
- ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
- if (ret) {
- dev_err(&pdev->dev, "No suitable DMA available\n");
- goto cleanup_clk;
+ ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
+ if (ret) {
+ dev_err(&pdev->dev, "No suitable DMA available\n");
+ goto cleanup_clk;
+ }
+ netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
+ netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
}
/* Check for Ethernet core IRQ (optional) */
@@ -2092,14 +2151,16 @@ static int axienet_probe(struct platform_device *pdev)
}
lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
- lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
- lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
- /* Reset core now that clocks are enabled, prior to accessing MDIO */
- ret = __axienet_device_reset(lp);
- if (ret)
- goto cleanup_clk;
+ if (!lp->use_dmaengine) {
+ lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
+ lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
+ /* Reset core now that clocks are enabled, prior to accessing MDIO */
+ ret = __axienet_device_reset(lp);
+ if (ret)
+ goto cleanup_clk;
+ }
ret = axienet_mdio_setup(lp);
if (ret)
--
2.25.1
From: Radhey Shyam Pandey <[email protected]>
The axiethernet driver will use the dmaengine framework to communicate
with the DMA controller IP instead of the built-in DMA programming
sequence.
To request DMA transmit and receive channels, the axiethernet driver
uses the generic dmas and dma-names properties.
Axiethernet may use AXI DMA or MCDMA. AXI DMA has only two channels,
whereas MCDMA has 16 TX and 16 RX channels. To uniquely identify each
channel, we use a 'chan' suffix. Depending on the use case, the AXI
Ethernet driver can request any combination of multichannel DMA
channels, as in the example below.
Example:
dma-names = tx_chan0, rx_chan0, tx_chan1, rx_chan1;
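On the consumer side, a driver requests such a pair by name. A minimal
sketch, assuming 'dev' is the ethernet device and trimming everything
but the bare error check:

  #include <linux/dmaengine.h>

  struct dma_chan *tx_chan = dma_request_chan(dev, "tx_chan1");
  struct dma_chan *rx_chan = dma_request_chan(dev, "rx_chan1");

  if (IS_ERR(tx_chan) || IS_ERR(rx_chan))
          return -ENODEV; /* or fall back to built-in DMA programming */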
Also, to support backward compatibility, the presence of the "dmas"
property is used to identify whether the driver should use the
dmaengine framework or the legacy built-in DMA programming.
At this point using the dmaengine framework is recommended but
optional. Once the solution is stable, dmas will be made a required
property.
Signed-off-by: Radhey Shyam Pandey <[email protected]>
Signed-off-by: Sarath Babu Naidu Gaddam <[email protected]>
---
Changes in V4:
1) Updated commit description about TX/RX channel names.
2) Removed "dt-bindings" and "dmaengine" strings in subject.
3) Extended dmas and dma-names to support MCDMA channel names.
4) Removed "driver" from commit message.
5) Used pattern/regex for dma-names property.
Changes in V3:
1) Reverted reg and interrupts property to support backward compatibility.
2) Moved dmas and dma-names properties from Required properties.
Changes in V2:
- None.
---
.../bindings/net/xlnx,axi-ethernet.yaml | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
index 1d33d80af11c..ea203504b8d4 100644
--- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
@@ -122,6 +122,20 @@ properties:
and "phy-handle" should point to an external PHY if exists.
maxItems: 1
+ dmas:
+ minItems: 2
+ maxItems: 32
+ description: DMA Channel phandle and DMA request line number
+
+ dma-names:
+ items:
+ pattern: "^[tr]x_chan[0-9]|1[0-5]$"
+ description:
+ Should be "tx_chan0", "tx_chan1" ... "tx_chan15" for DMA Tx channel
+ Should be "rx_chan0", "rx_chan1" ... "rx_chan15" for DMA Rx channel
+ minItems: 2
+ maxItems: 32
+
required:
- compatible
- interrupts
@@ -143,6 +157,8 @@ examples:
clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
phy-mode = "mii";
reg = <0x40c00000 0x40000>,<0x50c00000 0x40000>;
+ dmas = <&xilinx_dma 0>, <&xilinx_dma 1>;
+ dma-names = "tx_chan0", "rx_chan0";
xlnx,rxcsum = <0x2>;
xlnx,rxmem = <0x800>;
xlnx,txcsum = <0x2>;
--
2.25.1
On Fri, Jun 30, 2023 at 11:08:41AM +0530, Sarath Babu Naidu Gaddam wrote:
> The axiethernet driver can use the dmaengine framework to communicate
> with the Xilinx DMA engine drivers (AXIDMA, MCDMA). The motivation behind
> this dmaengine adoption is to reuse the in-kernel Xilinx DMA engine
> driver[1] and to remove the redundant DMA programming sequence[2] from
> the ethernet driver. This simplifies the ethernet driver and also makes
> it generic enough to hook up to any compliant DMA IP, i.e. AXIDMA or
> MCDMA, without modification.
>
> The dmaengine framework was extended with metadata API support during
> the axidma RFC[3] discussion. However, it still needs further
> enhancements to make it well suited for ethernet use cases.
>
> Backward compatibility support:
> To support backward compatibility, we are planning to use the approach
> below:
> 1) Treat the "dmas" property as optional for now, to differentiate the
> dmaengine-based ethernet driver from the built-in DMA ethernet
> driver. This property will be made required later.
> 2) After some time, introduce a new compatible string to support the
> dmaengine method. This new compatible will use different APIs for
> init and data transfer.
>
> Comments, suggestions, thoughts to implement remaining functional
> features are very welcome!
Hi Sarath,
unfortunately this series doesn't apply on net-next.
net-next is currently closed.
So please provide a v5 once it reopens, after 10th July.
On the other hand, RFCs are welcome any time.
See: https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#development
--
pw-bot: changes-requested
On Fri, Jun 30, 2023 at 11:08:42AM +0530, Sarath Babu Naidu Gaddam wrote:
> From: Radhey Shyam Pandey <[email protected]>
>
> The axiethernet driver will use the dmaengine framework to communicate
> with the DMA controller IP instead of the built-in DMA programming
> sequence.
What's dmaengine framework? This is a binding patch about the h/w.
>
> To request DMA transmit and receive channels, the axiethernet driver
> uses the generic dmas and dma-names properties.
>
> Axiethernet may use AXI DMA or MCDMA. AXI DMA has only two channels,
> whereas MCDMA has 16 TX and 16 RX channels. To uniquely identify each
> channel, we use a 'chan' suffix. Depending on the use case, the AXI
> Ethernet driver can request any combination of multichannel DMA
> channels, as in the example below.
The DMA provider is outside the scope of the binding. Instead, describe
how Axiethernet can use 2 or 32 channels.
>
> Example:
> dma-names = tx_chan0, rx_chan0, tx_chan1, rx_chan1;
>
> Also, to support backward compatibility, the presence of the "dmas"
> property is used to identify whether the driver should use the
> dmaengine framework or the legacy built-in DMA programming.
>
> At this point using the dmaengine framework is recommended but
> optional. Once the solution is stable, dmas will be made a required
> property.
>
> Signed-off-by: Radhey Shyam Pandey <[email protected]>
> Signed-off-by: Sarath Babu Naidu Gaddam <[email protected]>
>
> ---
> Changes in V4:
> 1) Updated commit description about TX/RX channel names.
> 2) Removed "dt-bindings" and "dmaengine" strings in subject.
> 3) Extended dmas and dma-names to support MCDMA channel names.
> 4) Removed "driver" from commit message.
> 5) Used pattern/regex for dma-names property.
>
> Changes in V3:
> 1) Reverted reg and interrupts property to support backward compatibility.
> 2) Moved dmas and dma-names properties from Required properties.
>
> Changes in V2:
> - None.
> ---
> .../bindings/net/xlnx,axi-ethernet.yaml | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> index 1d33d80af11c..ea203504b8d4 100644
> --- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> +++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> @@ -122,6 +122,20 @@ properties:
> and "phy-handle" should point to an external PHY if exists.
> maxItems: 1
>
> + dmas:
> + minItems: 2
> + maxItems: 32
> + description: DMA Channel phandle and DMA request line number
Drop this description. That's every 'dmas' property. Instead define what
each entry is.
> +
> + dma-names:
> + items:
> + pattern: "^[tr]x_chan[0-9]|1[0-5]$"
I think you need some parentheses. Does a channel 10 or higher name
validate?
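Something like "^[tr]x_chan([0-9]|1[0-5])$" (with the alternation
grouped so both branches are anchored) is probably what was intended.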
> + description:
> + Should be "tx_chan0", "tx_chan1" ... "tx_chan15" for DMA Tx channel
> + Should be "rx_chan0", "rx_chan1" ... "rx_chan15" for DMA Rx channel
> + minItems: 2
> + maxItems: 32
> +
> required:
> - compatible
> - interrupts
> @@ -143,6 +157,8 @@ examples:
> clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
> phy-mode = "mii";
> reg = <0x40c00000 0x40000>,<0x50c00000 0x40000>;
> + dmas = <&xilinx_dma 0>, <&xilinx_dma 1>;
> + dma-names = "tx_chan0", "rx_chan0";
> xlnx,rxcsum = <0x2>;
> xlnx,rxmem = <0x800>;
> xlnx,txcsum = <0x2>;
> --
> 2.25.1
>