2021-01-06 14:45:16

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 0/7] bcm63xx_enet: major makeover of driver

This patch series aims to improve the bcm63xx_enet driver by integrating
recent networking features such as batched rx processing, BQL, and
build_skb.

The newer enetsw SoCs turn out to be capable of unaligned rx DMA, which
allows adding NET_IP_ALIGN padding; combined with the rest of these
patches, this improves packet processing performance by ~50% on BCM6328.

Older non-enetsw SoCs benefit mainly from rx batching; a performance
improvement of ~30% is observed on BCM6333.

The BCM63xx SoCs are designed for routers. As such, having BQL is
beneficial as well as trivial to add.

v3:
* Simplify the xmit_more patch by not moving code around needlessly.
* Fix indentation in the xmit_more patch.
* Fix indentation in the build_skb patch.
* Split the rx ring cleanup patch out of the build_skb patch and place it
before the build_skb patch for easier review, as suggested by Florian Fainelli.

v2:
* Add xmit_more support and rx loop improvement patches.
* Move BQL netdev_reset_queue() to the bcm_enet_stop()/bcm_enetsw_stop()
functions, as suggested by Florian Fainelli.
* Improve commit messages.

Sieng Piaw Liew (7):
bcm63xx_enet: batch process rx path
bcm63xx_enet: add BQL support
bcm63xx_enet: add xmit_more support
bcm63xx_enet: alloc rx skb with NET_IP_ALIGN
bcm63xx_enet: consolidate rx SKB ring cleanup code
bcm63xx_enet: convert to build_skb
bcm63xx_enet: improve rx loop

drivers/net/ethernet/broadcom/bcm63xx_enet.c | 186 +++++++++----------
drivers/net/ethernet/broadcom/bcm63xx_enet.h | 14 +-
2 files changed, 103 insertions(+), 97 deletions(-)

--
2.17.1


2021-01-06 14:45:18

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 4/7] bcm63xx_enet: alloc rx skb with NET_IP_ALIGN

Use netdev_alloc_skb_ip_align on newer SoCs with an integrated switch
(enetsw) when refilling RX. This increases packet processing performance
by 30% (with netif_receive_skb_list).

Non-enetsw SoCs cannot function with the extra pad, so they continue to
use the regular netdev_alloc_skb.
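
As an aside, and not part of the patch itself: netdev_alloc_skb_ip_align()
is simply the NET_IP_ALIGN-padded variant of netdev_alloc_skb(). A minimal
sketch of what it does internally:

        skb = netdev_alloc_skb(dev, length + NET_IP_ALIGN);
        if (NET_IP_ALIGN && skb)
                /* shift skb->data so the IP header, which follows the
                 * 14-byte Ethernet header, lands on a 4-byte boundary */
                skb_reserve(skb, NET_IP_ALIGN);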

Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
performance.

Before:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277    sender
[  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec         receiver

After (+netif_receive_skb_list):
[  4] 0.00-30.00 sec   155 MBytes  43.3 Mbits/sec  354    sender
[  4] 0.00-30.00 sec   154 MBytes  43.1 Mbits/sec         receiver

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 21744dae30ce..96d56c3e2cc9 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -237,7 +237,10 @@ static int bcm_enet_refill_rx(struct net_device *dev)
desc = &priv->rx_desc_cpu[desc_idx];

if (!priv->rx_skb[desc_idx]) {
- skb = netdev_alloc_skb(dev, priv->rx_skb_size);
+ if (priv->enet_is_sw)
+ skb = netdev_alloc_skb_ip_align(dev, priv->rx_skb_size);
+ else
+ skb = netdev_alloc_skb(dev, priv->rx_skb_size);
if (!skb)
break;
priv->rx_skb[desc_idx] = skb;
--
2.17.1

2021-01-06 14:46:05

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 5/7] bcm63xx_enet: consolidate rx SKB ring cleanup code

The rx SKB ring uses the same cleanup code at several points. Combine
it into a single function to reduce the line count.

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 72 ++++++--------------
1 file changed, 22 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 96d56c3e2cc9..e34b05b10e43 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -860,6 +860,24 @@ static void bcm_enet_adjust_link(struct net_device *dev)
priv->pause_tx ? "tx" : "off");
}

+static void bcm_enet_free_rx_skb_ring(struct device *kdev, struct bcm_enet_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->rx_ring_size; i++) {
+ struct bcm_enet_desc *desc;
+
+ if (!priv->rx_skb[i])
+ continue;
+
+ desc = &priv->rx_desc_cpu[i];
+ dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
+ DMA_FROM_DEVICE);
+ kfree_skb(priv->rx_skb[i]);
+ }
+ kfree(priv->rx_skb);
+}
+
/*
* open callback, allocate dma rings & buffers and start rx operation
*/
@@ -1084,18 +1102,7 @@ static int bcm_enet_open(struct net_device *dev)
return 0;

out:
- for (i = 0; i < priv->rx_ring_size; i++) {
- struct bcm_enet_desc *desc;
-
- if (!priv->rx_skb[i])
- continue;
-
- desc = &priv->rx_desc_cpu[i];
- dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_skb[i]);
- }
- kfree(priv->rx_skb);
+ bcm_enet_free_rx_skb_ring(kdev, priv);

out_free_tx_skb:
kfree(priv->tx_skb);
@@ -1174,7 +1181,6 @@ static int bcm_enet_stop(struct net_device *dev)
{
struct bcm_enet_priv *priv;
struct device *kdev;
- int i;

priv = netdev_priv(dev);
kdev = &priv->pdev->dev;
@@ -1203,20 +1209,9 @@ static int bcm_enet_stop(struct net_device *dev)
bcm_enet_tx_reclaim(dev, 1);

/* free the rx skb ring */
- for (i = 0; i < priv->rx_ring_size; i++) {
- struct bcm_enet_desc *desc;
-
- if (!priv->rx_skb[i])
- continue;
-
- desc = &priv->rx_desc_cpu[i];
- dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_skb[i]);
- }
+ bcm_enet_free_rx_skb_ring(kdev, priv);

/* free remaining allocated memory */
- kfree(priv->rx_skb);
kfree(priv->tx_skb);
dma_free_coherent(kdev, priv->rx_desc_alloc_size,
priv->rx_desc_cpu, priv->rx_desc_dma);
@@ -2303,18 +2298,7 @@ static int bcm_enetsw_open(struct net_device *dev)
return 0;

out:
- for (i = 0; i < priv->rx_ring_size; i++) {
- struct bcm_enet_desc *desc;
-
- if (!priv->rx_skb[i])
- continue;
-
- desc = &priv->rx_desc_cpu[i];
- dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_skb[i]);
- }
- kfree(priv->rx_skb);
+ bcm_enet_free_rx_skb_ring(kdev, priv);

out_free_tx_skb:
kfree(priv->tx_skb);
@@ -2343,7 +2327,6 @@ static int bcm_enetsw_stop(struct net_device *dev)
{
struct bcm_enet_priv *priv;
struct device *kdev;
- int i;

priv = netdev_priv(dev);
kdev = &priv->pdev->dev;
@@ -2366,20 +2349,9 @@ static int bcm_enetsw_stop(struct net_device *dev)
bcm_enet_tx_reclaim(dev, 1);

/* free the rx skb ring */
- for (i = 0; i < priv->rx_ring_size; i++) {
- struct bcm_enet_desc *desc;
-
- if (!priv->rx_skb[i])
- continue;
-
- desc = &priv->rx_desc_cpu[i];
- dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_skb[i]);
- }
+ bcm_enet_free_rx_skb_ring(kdev, priv);

/* free remaining allocated memory */
- kfree(priv->rx_skb);
kfree(priv->tx_skb);
dma_free_coherent(kdev, priv->rx_desc_alloc_size,
priv->rx_desc_cpu, priv->rx_desc_dma);
--
2.17.1

2021-01-06 14:46:26

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 6/7] bcm63xx_enet: convert to build_skb

We can increase the efficiency of the rx path by receiving packets into
plain buffers and building SKBs around them just before passing them into
the network stack. In contrast, preallocating SKBs too early reduces CPU
cache efficiency.

Check whether we're in NAPI context when refilling RX; we almost always
are. In that case, dispatch to napi_alloc_frag directly instead of
netdev_alloc_frag, which does the same allocation but with the overhead
of local_bh_disable/enable.
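
For context, and not part of the patch text: build_skb() does not copy the
packet; it wraps an skb around the given buffer and places the struct
skb_shared_info at the buffer's tail. The frag therefore has to be sized
for headroom, payload and shinfo, which is what the new rx_frag_size
computation in this patch accounts for:

        /* headroom + payload, then room for the shared info struct */
        rx_frag_size = SKB_DATA_ALIGN(rx_buf_offset + rx_buf_size) +
                       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));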

Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
performance. Included netif_receive_skb_list and NET_IP_ALIGN
optimizations.

Before:
[ ID] Interval         Transfer     Bandwidth       Retr
[  4] 0.00-10.00 sec   49.9 MBytes  41.9 Mbits/sec  197    sender
[  4] 0.00-10.00 sec   49.3 MBytes  41.3 Mbits/sec         receiver

After:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   171 MBytes  47.8 Mbits/sec  272    sender
[  4] 0.00-30.00 sec   170 MBytes  47.6 Mbits/sec         receiver

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 111 ++++++++++---------
drivers/net/ethernet/broadcom/bcm63xx_enet.h | 14 ++-
2 files changed, 71 insertions(+), 54 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index e34b05b10e43..c11491429ed2 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -220,7 +220,7 @@ static void bcm_enet_mdio_write_mii(struct net_device *dev, int mii_id,
/*
* refill rx queue
*/
-static int bcm_enet_refill_rx(struct net_device *dev)
+static int bcm_enet_refill_rx(struct net_device *dev, bool napi_mode)
{
struct bcm_enet_priv *priv;

@@ -228,29 +228,29 @@ static int bcm_enet_refill_rx(struct net_device *dev)

while (priv->rx_desc_count < priv->rx_ring_size) {
struct bcm_enet_desc *desc;
- struct sk_buff *skb;
- dma_addr_t p;
int desc_idx;
u32 len_stat;

desc_idx = priv->rx_dirty_desc;
desc = &priv->rx_desc_cpu[desc_idx];

- if (!priv->rx_skb[desc_idx]) {
- if (priv->enet_is_sw)
- skb = netdev_alloc_skb_ip_align(dev, priv->rx_skb_size);
+ if (!priv->rx_buf[desc_idx]) {
+ void *buf;
+
+ if (likely(napi_mode))
+ buf = napi_alloc_frag(priv->rx_frag_size);
else
- skb = netdev_alloc_skb(dev, priv->rx_skb_size);
- if (!skb)
+ buf = netdev_alloc_frag(priv->rx_frag_size);
+ if (unlikely(!buf))
break;
- priv->rx_skb[desc_idx] = skb;
- p = dma_map_single(&priv->pdev->dev, skb->data,
- priv->rx_skb_size,
- DMA_FROM_DEVICE);
- desc->address = p;
+ priv->rx_buf[desc_idx] = buf;
+ desc->address = dma_map_single(&priv->pdev->dev,
+ buf + priv->rx_buf_offset,
+ priv->rx_buf_size,
+ DMA_FROM_DEVICE);
}

- len_stat = priv->rx_skb_size << DMADESC_LENGTH_SHIFT;
+ len_stat = priv->rx_buf_size << DMADESC_LENGTH_SHIFT;
len_stat |= DMADESC_OWNER_MASK;
if (priv->rx_dirty_desc == priv->rx_ring_size - 1) {
len_stat |= (DMADESC_WRAP_MASK >> priv->dma_desc_shift);
@@ -290,7 +290,7 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
struct net_device *dev = priv->net_dev;

spin_lock(&priv->rx_lock);
- bcm_enet_refill_rx(dev);
+ bcm_enet_refill_rx(dev, false);
spin_unlock(&priv->rx_lock);
}

@@ -320,6 +320,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
int desc_idx;
u32 len_stat;
unsigned int len;
+ void *buf;

desc_idx = priv->rx_curr_desc;
desc = &priv->rx_desc_cpu[desc_idx];
@@ -365,16 +366,14 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
}

/* valid packet */
- skb = priv->rx_skb[desc_idx];
+ buf = priv->rx_buf[desc_idx];
len = (len_stat & DMADESC_LENGTH_MASK) >> DMADESC_LENGTH_SHIFT;
/* don't include FCS */
len -= 4;

if (len < copybreak) {
- struct sk_buff *nskb;
-
- nskb = napi_alloc_skb(&priv->napi, len);
- if (!nskb) {
+ skb = napi_alloc_skb(&priv->napi, len);
+ if (unlikely(!skb)) {
/* forget packet, just rearm desc */
dev->stats.rx_dropped++;
continue;
@@ -382,14 +381,21 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)

dma_sync_single_for_cpu(kdev, desc->address,
len, DMA_FROM_DEVICE);
- memcpy(nskb->data, skb->data, len);
+ memcpy(skb->data, buf + priv->rx_buf_offset, len);
dma_sync_single_for_device(kdev, desc->address,
len, DMA_FROM_DEVICE);
- skb = nskb;
} else {
- dma_unmap_single(&priv->pdev->dev, desc->address,
- priv->rx_skb_size, DMA_FROM_DEVICE);
- priv->rx_skb[desc_idx] = NULL;
+ dma_unmap_single(kdev, desc->address,
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+ priv->rx_buf[desc_idx] = NULL;
+
+ skb = build_skb(buf, priv->rx_frag_size);
+ if (unlikely(!skb)) {
+ skb_free_frag(buf);
+ dev->stats.rx_dropped++;
+ continue;
+ }
+ skb_reserve(skb, priv->rx_buf_offset);
}

skb_put(skb, len);
@@ -403,7 +409,7 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
netif_receive_skb_list(&rx_list);

if (processed || !priv->rx_desc_count) {
- bcm_enet_refill_rx(dev);
+ bcm_enet_refill_rx(dev, true);

/* kick rx dma */
enet_dmac_writel(priv, priv->dma_chan_en_mask,
@@ -860,22 +866,22 @@ static void bcm_enet_adjust_link(struct net_device *dev)
priv->pause_tx ? "tx" : "off");
}

-static void bcm_enet_free_rx_skb_ring(struct device *kdev, struct bcm_enet_priv *priv)
+static void bcm_enet_free_rx_buf_ring(struct device *kdev, struct bcm_enet_priv *priv)
{
int i;

for (i = 0; i < priv->rx_ring_size; i++) {
struct bcm_enet_desc *desc;

- if (!priv->rx_skb[i])
+ if (!priv->rx_buf[i])
continue;

desc = &priv->rx_desc_cpu[i];
- dma_unmap_single(kdev, desc->address, priv->rx_skb_size,
+ dma_unmap_single(kdev, desc->address, priv->rx_buf_size,
DMA_FROM_DEVICE);
- kfree_skb(priv->rx_skb[i]);
+ skb_free_frag(priv->rx_buf[i]);
}
- kfree(priv->rx_skb);
+ kfree(priv->rx_buf);
}

/*
@@ -987,10 +993,10 @@ static int bcm_enet_open(struct net_device *dev)
priv->tx_curr_desc = 0;
spin_lock_init(&priv->tx_lock);

- /* init & fill rx ring with skbs */
- priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+ /* init & fill rx ring with buffers */
+ priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
GFP_KERNEL);
- if (!priv->rx_skb) {
+ if (!priv->rx_buf) {
ret = -ENOMEM;
goto out_free_tx_skb;
}
@@ -1007,8 +1013,8 @@ static int bcm_enet_open(struct net_device *dev)
enet_dmac_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
ENETDMAC_BUFALLOC, priv->rx_chan);

- if (bcm_enet_refill_rx(dev)) {
- dev_err(kdev, "cannot allocate rx skb queue\n");
+ if (bcm_enet_refill_rx(dev, false)) {
+ dev_err(kdev, "cannot allocate rx buffer queue\n");
ret = -ENOMEM;
goto out;
}
@@ -1102,7 +1108,7 @@ static int bcm_enet_open(struct net_device *dev)
return 0;

out:
- bcm_enet_free_rx_skb_ring(kdev, priv);
+ bcm_enet_free_rx_buf_ring(kdev, priv);

out_free_tx_skb:
kfree(priv->tx_skb);
@@ -1208,8 +1214,8 @@ static int bcm_enet_stop(struct net_device *dev)
/* force reclaim of all tx buffers */
bcm_enet_tx_reclaim(dev, 1);

- /* free the rx skb ring */
- bcm_enet_free_rx_skb_ring(kdev, priv);
+ /* free the rx buffer ring */
+ bcm_enet_free_rx_buf_ring(kdev, priv);

/* free remaining allocated memory */
kfree(priv->tx_skb);
@@ -1633,9 +1639,12 @@ static int bcm_enet_change_mtu(struct net_device *dev, int new_mtu)
* align rx buffer size to dma burst len, account FCS since
* it's appended
*/
- priv->rx_skb_size = ALIGN(actual_mtu + ETH_FCS_LEN,
+ priv->rx_buf_size = ALIGN(actual_mtu + ETH_FCS_LEN,
priv->dma_maxburst * 4);

+ priv->rx_frag_size = SKB_DATA_ALIGN(priv->rx_buf_offset + priv->rx_buf_size) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
dev->mtu = new_mtu;
return 0;
}
@@ -1720,6 +1729,7 @@ static int bcm_enet_probe(struct platform_device *pdev)

priv->enet_is_sw = false;
priv->dma_maxburst = BCMENET_DMA_MAXBURST;
+ priv->rx_buf_offset = NET_SKB_PAD;

ret = bcm_enet_change_mtu(dev, dev->mtu);
if (ret)
@@ -2137,7 +2147,7 @@ static int bcm_enetsw_open(struct net_device *dev)
priv->tx_skb = kcalloc(priv->tx_ring_size, sizeof(struct sk_buff *),
GFP_KERNEL);
if (!priv->tx_skb) {
- dev_err(kdev, "cannot allocate rx skb queue\n");
+ dev_err(kdev, "cannot allocate tx skb queue\n");
ret = -ENOMEM;
goto out_free_tx_ring;
}
@@ -2147,11 +2157,11 @@ static int bcm_enetsw_open(struct net_device *dev)
priv->tx_curr_desc = 0;
spin_lock_init(&priv->tx_lock);

- /* init & fill rx ring with skbs */
- priv->rx_skb = kcalloc(priv->rx_ring_size, sizeof(struct sk_buff *),
+ /* init & fill rx ring with buffers */
+ priv->rx_buf = kcalloc(priv->rx_ring_size, sizeof(void *),
GFP_KERNEL);
- if (!priv->rx_skb) {
- dev_err(kdev, "cannot allocate rx skb queue\n");
+ if (!priv->rx_buf) {
+ dev_err(kdev, "cannot allocate rx buffer queue\n");
ret = -ENOMEM;
goto out_free_tx_skb;
}
@@ -2198,8 +2208,8 @@ static int bcm_enetsw_open(struct net_device *dev)
enet_dma_writel(priv, ENETDMA_BUFALLOC_FORCE_MASK | 0,
ENETDMA_BUFALLOC_REG(priv->rx_chan));

- if (bcm_enet_refill_rx(dev)) {
- dev_err(kdev, "cannot allocate rx skb queue\n");
+ if (bcm_enet_refill_rx(dev, false)) {
+ dev_err(kdev, "cannot allocate rx buffer queue\n");
ret = -ENOMEM;
goto out;
}
@@ -2298,7 +2308,7 @@ static int bcm_enetsw_open(struct net_device *dev)
return 0;

out:
- bcm_enet_free_rx_skb_ring(kdev, priv);
+ bcm_enet_free_rx_buf_ring(kdev, priv);

out_free_tx_skb:
kfree(priv->tx_skb);
@@ -2348,8 +2358,8 @@ static int bcm_enetsw_stop(struct net_device *dev)
/* force reclaim of all tx buffers */
bcm_enet_tx_reclaim(dev, 1);

- /* free the rx skb ring */
- bcm_enet_free_rx_skb_ring(kdev, priv);
+ /* free the rx buffer ring */
+ bcm_enet_free_rx_buf_ring(kdev, priv);

/* free remaining allocated memory */
kfree(priv->tx_skb);
@@ -2648,6 +2658,7 @@ static int bcm_enetsw_probe(struct platform_device *pdev)
priv->rx_ring_size = BCMENET_DEF_RX_DESC;
priv->tx_ring_size = BCMENET_DEF_TX_DESC;
priv->dma_maxburst = BCMENETSW_DMA_MAXBURST;
+ priv->rx_buf_offset = NET_SKB_PAD + NET_IP_ALIGN;

pd = dev_get_platdata(&pdev->dev);
if (pd) {
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.h b/drivers/net/ethernet/broadcom/bcm63xx_enet.h
index 1d3c917eb830..78f1830fb3cb 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.h
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.h
@@ -230,11 +230,17 @@ struct bcm_enet_priv {
/* next dirty rx descriptor to refill */
int rx_dirty_desc;

- /* size of allocated rx skbs */
- unsigned int rx_skb_size;
+ /* size of allocated rx buffers */
+ unsigned int rx_buf_size;

- /* list of skb given to hw for rx */
- struct sk_buff **rx_skb;
+ /* allocated rx buffer offset */
+ unsigned int rx_buf_offset;
+
+ /* size of allocated rx frag */
+ unsigned int rx_frag_size;
+
+ /* list of buffer given to hw for rx */
+ void **rx_buf;

/* used when rx skb allocation failed, so we defer rx queue
* refill */
--
2.17.1

2021-01-06 14:46:41

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 2/7] bcm63xx_enet: add BQL support

Add Byte Queue Limits support to reduce/remove bufferbloat in
bcm63xx_enet.
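
For reference, these are the three BQL hooks a driver wires up, all of
which appear in the diff below:

        netdev_sent_queue(dev, skb->len);         /* in ndo_start_xmit */
        netdev_completed_queue(dev, pkts, bytes); /* in tx reclaim */
        netdev_reset_queue(dev);                  /* when the tx queue is flushed */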

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index b82b7805c36a..90f8214b4d22 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -417,9 +417,11 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
static int bcm_enet_tx_reclaim(struct net_device *dev, int force)
{
struct bcm_enet_priv *priv;
+ unsigned int bytes;
int released;

priv = netdev_priv(dev);
+ bytes = 0;
released = 0;

while (priv->tx_desc_count < priv->tx_ring_size) {
@@ -456,10 +458,13 @@ static int bcm_enet_tx_reclaim(struct net_device *dev, int force)
if (desc->len_stat & DMADESC_UNDER_MASK)
dev->stats.tx_errors++;

+ bytes += skb->len;
dev_kfree_skb(skb);
released++;
}

+ netdev_completed_queue(dev, released, bytes);
+
if (netif_queue_stopped(dev) && released)
netif_wake_queue(dev);

@@ -626,6 +631,8 @@ bcm_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
desc->len_stat = len_stat;
wmb();

+ netdev_sent_queue(dev, skb->len);
+
/* kick tx dma */
enet_dmac_writel(priv, priv->dma_chan_en_mask,
ENETDMAC_CHANCFG, priv->tx_chan);
@@ -1169,6 +1176,7 @@ static int bcm_enet_stop(struct net_device *dev)
kdev = &priv->pdev->dev;

netif_stop_queue(dev);
+ netdev_reset_queue(dev);
napi_disable(&priv->napi);
if (priv->has_phy)
phy_stop(dev->phydev);
@@ -2338,6 +2346,7 @@ static int bcm_enetsw_stop(struct net_device *dev)

del_timer_sync(&priv->swphy_poll);
netif_stop_queue(dev);
+ netdev_reset_queue(dev);
napi_disable(&priv->napi);
del_timer_sync(&priv->rx_timeout);

--
2.17.1

2021-01-06 14:46:55

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 7/7] bcm63xx_enet: improve rx loop

Use the existing rx processed count to track against the budget, making
the budget decrement operation redundant.

rx_desc_count can now be updated once outside the rx loop, making the
loop a bit smaller.

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index c11491429ed2..fd8767213165 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -339,7 +339,6 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
priv->rx_curr_desc++;
if (priv->rx_curr_desc == priv->rx_ring_size)
priv->rx_curr_desc = 0;
- priv->rx_desc_count--;

/* if the packet does not have start of packet _and_
* end of packet flag set, then just recycle it */
@@ -404,9 +403,10 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
dev->stats.rx_bytes += len;
list_add_tail(&skb->list, &rx_list);

- } while (--budget > 0);
+ } while (processed < budget);

netif_receive_skb_list(&rx_list);
+ priv->rx_desc_count -= processed;

if (processed || !priv->rx_desc_count) {
bcm_enet_refill_rx(dev, true);
--
2.17.1

2021-01-06 14:47:19

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 3/7] bcm63xx_enet: add xmit_more support

Support bulking the hardware TX queue by using netdev_xmit_more().
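
In other words (an annotated copy of the hunk below): ring the tx doorbell
only for the last skb of a batch, but always ring it when the descriptor
ring has just filled up, since the queue will be stopped and no later
xmit call could flush it:

        /* kick tx dma */
        if (!netdev_xmit_more() || !priv->tx_desc_count)
                enet_dmac_writel(priv, priv->dma_chan_en_mask,
                                 ENETDMAC_CHANCFG, priv->tx_chan);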

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 90f8214b4d22..21744dae30ce 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -634,7 +634,8 @@ bcm_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
netdev_sent_queue(dev, skb->len);

/* kick tx dma */
- enet_dmac_writel(priv, priv->dma_chan_en_mask,
+ if (!netdev_xmit_more() || !priv->tx_desc_count)
+ enet_dmac_writel(priv, priv->dma_chan_en_mask,
ENETDMAC_CHANCFG, priv->tx_chan);

/* stop queue if no more desc available */
--
2.17.1

2021-01-06 14:48:47

by Sieng-Piaw Liew

Subject: [PATCH net-next v3 1/7] bcm63xx_enet: batch process rx path

Use netif_receive_skb_list to batch process rx skbs.
Tested on BCM6328 320 MHz using iperf3 -M 512; performance increased
by 12.5%.
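
The shape of the change, in brief:

        struct list_head rx_list;

        INIT_LIST_HEAD(&rx_list);
        ...
        /* in the rx loop: queue the skb instead of delivering it */
        list_add_tail(&skb->list, &rx_list);
        ...
        /* after the loop: hand the whole batch to the stack at once */
        netif_receive_skb_list(&rx_list);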

Before:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277    sender
[  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec         receiver

After:
[ ID] Interval         Transfer    Bandwidth       Retr
[  4] 0.00-30.00 sec   136 MBytes  37.9 Mbits/sec  203    sender
[  4] 0.00-30.00 sec   135 MBytes  37.7 Mbits/sec         receiver

Signed-off-by: Sieng Piaw Liew <[email protected]>
---
drivers/net/ethernet/broadcom/bcm63xx_enet.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 916824cca3fd..b82b7805c36a 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -297,10 +297,12 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
static int bcm_enet_receive_queue(struct net_device *dev, int budget)
{
struct bcm_enet_priv *priv;
+ struct list_head rx_list;
struct device *kdev;
int processed;

priv = netdev_priv(dev);
+ INIT_LIST_HEAD(&rx_list);
kdev = &priv->pdev->dev;
processed = 0;

@@ -391,10 +393,12 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
skb->protocol = eth_type_trans(skb, dev);
dev->stats.rx_packets++;
dev->stats.rx_bytes += len;
- netif_receive_skb(skb);
+ list_add_tail(&skb->list, &rx_list);

} while (--budget > 0);

+ netif_receive_skb_list(&rx_list);
+
if (processed || !priv->rx_desc_count) {
bcm_enet_refill_rx(dev);

--
2.17.1

2021-01-06 17:06:14

by Florian Fainelli

Subject: Re: [PATCH net-next v3 0/7] bcm63xx_enet: major makeover of driver

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> This patch series aims to improve the bcm63xx_enet driver by integrating
> recent networking features such as batched rx processing, BQL, and
> build_skb.
>
> The newer enetsw SoCs turn out to be capable of unaligned rx DMA, which
> allows adding NET_IP_ALIGN padding; combined with the rest of these
> patches, this improves packet processing performance by ~50% on BCM6328.
>
> Older non-enetsw SoCs benefit mainly from rx batching; a performance
> improvement of ~30% is observed on BCM6333.
>
> The BCM63xx SoCs are designed for routers. As such, having BQL is
> beneficial as well as trivial to add.
>
> v3:
> * Simplify the xmit_more patch by not moving code around needlessly.
> * Fix indentation in the xmit_more patch.
> * Fix indentation in the build_skb patch.
> * Split the rx ring cleanup patch out of the build_skb patch and place it
> before the build_skb patch for easier review, as suggested by Florian Fainelli.

Thanks for addressing the feedback. For patches that have not changed,
please carry forward any tag you have been given (Reviewed-by, Acked-by,
etc.) so that we don't have to reply to those patches again.
--
Florian

2021-01-06 17:07:10

by Florian Fainelli

Subject: Re: [PATCH net-next v3 1/7] bcm63xx_enet: batch process rx path

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> Use netif_receive_skb_list to batch process rx skbs.
> Tested on BCM6328 320 MHz using iperf3 -M 512; performance increased
> by 12.5%.
>
> Before:
> [ ID] Interval         Transfer    Bandwidth       Retr
> [  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277    sender
> [  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec         receiver
>
> After:
> [ ID] Interval         Transfer    Bandwidth       Retr
> [  4] 0.00-30.00 sec   136 MBytes  37.9 Mbits/sec  203    sender
> [  4] 0.00-30.00 sec   135 MBytes  37.7 Mbits/sec         receiver
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:07:53

by Florian Fainelli

Subject: Re: [PATCH net-next v3 3/7] bcm63xx_enet: add xmit_more support

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> Support bulking the hardware TX queue by using netdev_xmit_more().
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:08:26

by Florian Fainelli

Subject: Re: [PATCH net-next v3 2/7] bcm63xx_enet: add BQL support

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> Add Byte Queue Limits support to reduce/remove bufferbloat in
> bcm63xx_enet.
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:08:47

by Florian Fainelli

Subject: Re: [PATCH net-next v3 4/7] bcm63xx_enet: alloc rx skb with NET_IP_ALIGN

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> Use netdev_alloc_skb_ip_align on newer SoCs with an integrated switch
> (enetsw) when refilling RX. This increases packet processing performance
> by 30% (with netif_receive_skb_list).
>
> Non-enetsw SoCs cannot function with the extra pad, so they continue to
> use the regular netdev_alloc_skb.
>
> Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
> performance.
>
> Before:
> [ ID] Interval         Transfer    Bandwidth       Retr
> [  4] 0.00-30.00 sec   120 MBytes  33.7 Mbits/sec  277    sender
> [  4] 0.00-30.00 sec   120 MBytes  33.5 Mbits/sec         receiver
>
> After (+netif_receive_skb_list):
> [  4] 0.00-30.00 sec   155 MBytes  43.3 Mbits/sec  354    sender
> [  4] 0.00-30.00 sec   154 MBytes  43.1 Mbits/sec         receiver
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:10:43

by Florian Fainelli

Subject: Re: [PATCH net-next v3 5/7] bcm63xx_enet: consolidate rx SKB ring cleanup code

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> The rx SKB ring uses the same cleanup code at several points. Combine
> it into a single function to reduce the line count.
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:11:05

by Florian Fainelli

Subject: Re: [PATCH net-next v3 6/7] bcm63xx_enet: convert to build_skb

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> We can increase the efficiency of the rx path by receiving packets into
> plain buffers and building SKBs around them just before passing them into
> the network stack. In contrast, preallocating SKBs too early reduces CPU
> cache efficiency.
>
> Check whether we're in NAPI context when refilling RX; we almost always
> are. In that case, dispatch to napi_alloc_frag directly instead of
> netdev_alloc_frag, which does the same allocation but with the overhead
> of local_bh_disable/enable.
>
> Tested on BCM6328 320 MHz and iperf3 -M 512 to measure packet/sec
> performance. Included netif_receive_skb_list and NET_IP_ALIGN
> optimizations.
>
> Before:
> [ ID] Interval         Transfer     Bandwidth       Retr
> [  4] 0.00-10.00 sec   49.9 MBytes  41.9 Mbits/sec  197    sender
> [  4] 0.00-10.00 sec   49.3 MBytes  41.3 Mbits/sec         receiver
>
> After:
> [ ID] Interval         Transfer    Bandwidth       Retr
> [  4] 0.00-30.00 sec   171 MBytes  47.8 Mbits/sec  272    sender
> [  4] 0.00-30.00 sec   170 MBytes  47.6 Mbits/sec         receiver
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-06 17:11:42

by Florian Fainelli

Subject: Re: [PATCH net-next v3 7/7] bcm63xx_enet: improve rx loop

On 1/6/21 6:42 AM, Sieng Piaw Liew wrote:
> Use the existing rx processed count to track against the budget, making
> the budget decrement operation redundant.
>
> rx_desc_count can now be updated once outside the rx loop, making the
> loop a bit smaller.
>
> Signed-off-by: Sieng Piaw Liew <[email protected]>

Acked-by: Florian Fainelli <[email protected]>
--
Florian

2021-01-07 20:45:05

by Jakub Kicinski

Subject: Re: [PATCH net-next v3 0/7] bcm63xx_enet: major makeover of driver

On Wed, 6 Jan 2021 22:42:01 +0800 Sieng Piaw Liew wrote:
> This patch series aims to improve the bcm63xx_enet driver by integrating
> recent networking features such as batched rx processing, BQL, and
> build_skb.
>
> The newer enetsw SoCs turn out to be capable of unaligned rx DMA, which
> allows adding NET_IP_ALIGN padding; combined with the rest of these
> patches, this improves packet processing performance by ~50% on BCM6328.
>
> Older non-enetsw SoCs benefit mainly from rx batching; a performance
> improvement of ~30% is observed on BCM6333.
>
> The BCM63xx SoCs are designed for routers. As such, having BQL is
> beneficial as well as trivial to add.
>
> v3:
> * Simplify the xmit_more patch by not moving code around needlessly.
> * Fix indentation in the xmit_more patch.
> * Fix indentation in the build_skb patch.
> * Split the rx ring cleanup patch out of the build_skb patch and place it
> before the build_skb patch for easier review, as suggested by Florian Fainelli.
>
> v2:
> * Add xmit_more support and rx loop improvement patches.
> * Move BQL netdev_reset_queue() to the bcm_enet_stop()/bcm_enetsw_stop()
> functions, as suggested by Florian Fainelli.
> * Improve commit messages.

Applied, thanks!