2019-07-03 10:38:36

by Jose Abreu

Subject: [PATCH net-next 0/3] net: stmmac: Some performance improvements and a fix

Some performance improvements (01/03 and 03/03) and a fix (02/03), all for -next.

Cc: Joao Pinto <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Giuseppe Cavallaro <[email protected]>
Cc: Alexandre Torgue <[email protected]>
Cc: Maxime Coquelin <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Chen-Yu Tsai <[email protected]>

Jose Abreu (3):
net: stmmac: Implement RX Coalesce Frames setting
net: stmmac: Fix descriptors address being in > 32 bits address space
net: stmmac: Introducing support for Page Pool

drivers/net/ethernet/stmicro/stmmac/Kconfig | 1 +
drivers/net/ethernet/stmicro/stmmac/common.h | 1 +
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 8 +-
.../net/ethernet/stmicro/stmmac/dwmac1000_dma.c | 8 +-
drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c | 8 +-
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c | 8 +-
drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h | 2 +
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c | 10 +-
drivers/net/ethernet/stmicro/stmmac/hwif.h | 4 +-
drivers/net/ethernet/stmicro/stmmac/stmmac.h | 12 +-
.../net/ethernet/stmicro/stmmac/stmmac_ethtool.c | 7 +-
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 210 +++++++--------------
12 files changed, 107 insertions(+), 172 deletions(-)

--
2.7.4


2019-07-03 10:38:37

by Jose Abreu

Subject: [PATCH net-next 2/3] net: stmmac: Fix descriptors address being in > 32 bits address space

Commit a993db88d17d ("net: stmmac: Enable support for > 32 Bits
addressing in XGMAC") introduced support for > 32 bits addressing in
XGMAC, but the conversion of the descriptor addresses to dma_addr_t was
left out.

As some devices assign coherent memory in regions above 32 bits, we
need to set both the lower and upper halves of the descriptor base
addresses when initializing the DMA channels.

Luckily, this was working for me because I was assigning CMA to the
< 4GB address space for performance reasons.
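
As an illustration of the pattern (the helper name is made up for this
example; the XGMAC_DMA_CH_*DESC_HADDR/LADDR macros are the ones added in
this patch, and lower_32_bits()/upper_32_bits() are the standard kernel
helpers):

    static void xgmac_write_rx_desc_base(void __iomem *ioaddr,
                                         dma_addr_t base, u32 chan)
    {
            /* The descriptor base is a 64-bit DMA address: program the
             * high and low words in separate registers.
             */
            writel(upper_32_bits(base),
                   ioaddr + XGMAC_DMA_CH_RxDESC_HADDR(chan));
            writel(lower_32_bits(base),
                   ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
    }

Cores without a high-address register (dwmac100, dwmac1000, dwmac4,
sun8i) keep writing only lower_32_bits() of the address.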

Fixes: a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC")
Signed-off-by: Jose Abreu <[email protected]>
Cc: Joao Pinto <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Giuseppe Cavallaro <[email protected]>
Cc: Alexandre Torgue <[email protected]>
Cc: Maxime Coquelin <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Chen-Yu Tsai <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 8 ++++----
drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c | 8 ++++----
drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c | 8 ++++----
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c | 8 ++++----
drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h | 2 ++
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c | 10 ++++++----
drivers/net/ethernet/stmicro/stmmac/hwif.h | 4 ++--
7 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
index 6d5cba4075eb..2856f3fe5266 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
@@ -289,18 +289,18 @@ static void sun8i_dwmac_dma_init(void __iomem *ioaddr,

static void sun8i_dwmac_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* Write RX descriptors address */
- writel(dma_rx_phy, ioaddr + EMAC_RX_DESC_LIST);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + EMAC_RX_DESC_LIST);
}

static void sun8i_dwmac_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* Write TX descriptors address */
- writel(dma_tx_phy, ioaddr + EMAC_TX_DESC_LIST);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + EMAC_TX_DESC_LIST);
}

/* sun8i_dwmac_dump_regs() - Dump EMAC address space
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
index 1fdedf77678f..2bac49b49f73 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
@@ -112,18 +112,18 @@ static void dwmac1000_dma_init(void __iomem *ioaddr,

static void dwmac1000_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* RX descriptor base address list must be written into DMA CSR3 */
- writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
}

static void dwmac1000_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* TX descriptor base address list must be written into DMA CSR4 */
- writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
}

static u32 dwmac1000_configure_fc(u32 csr6, int rxfifosz)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
index c980cc7360a4..8f0d9bc7cab5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_dma.c
@@ -31,18 +31,18 @@ static void dwmac100_dma_init(void __iomem *ioaddr,

static void dwmac100_dma_init_rx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
/* RX descriptor base addr lists must be written into DMA CSR3 */
- writel(dma_rx_phy, ioaddr + DMA_RCV_BASE_ADDR);
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_RCV_BASE_ADDR);
}

static void dwmac100_dma_init_tx(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
/* TX descriptor base addr lists must be written into DMA CSR4 */
- writel(dma_tx_phy, ioaddr + DMA_TX_BASE_ADDR);
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_TX_BASE_ADDR);
}

/* Store and Forward capability is not used at all.
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
index 0f208e13da9f..6cbcdaea55f6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -70,7 +70,7 @@ static void dwmac4_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)

static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t dma_rx_phy, u32 chan)
{
u32 value;
u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
@@ -79,12 +79,12 @@ static void dwmac4_dma_init_rx_chan(void __iomem *ioaddr,
value = value | (rxpbl << DMA_BUS_MODE_RPBL_SHIFT);
writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan));

- writel(dma_rx_phy, ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
+ writel(lower_32_bits(dma_rx_phy), ioaddr + DMA_CHAN_RX_BASE_ADDR(chan));
}

static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t dma_tx_phy, u32 chan)
{
u32 value;
u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
@@ -97,7 +97,7 @@ static void dwmac4_dma_init_tx_chan(void __iomem *ioaddr,

writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));

- writel(dma_tx_phy, ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
+ writel(lower_32_bits(dma_tx_phy), ioaddr + DMA_CHAN_TX_BASE_ADDR(chan));
}

static void dwmac4_dma_init_channel(void __iomem *ioaddr,
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
index 9a9792527530..7f86dffb264d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2.h
@@ -199,7 +199,9 @@
#define XGMAC_RxPBL GENMASK(21, 16)
#define XGMAC_RxPBL_SHIFT 16
#define XGMAC_RXST BIT(0)
+#define XGMAC_DMA_CH_TxDESC_HADDR(x) (0x00003110 + (0x80 * (x)))
#define XGMAC_DMA_CH_TxDESC_LADDR(x) (0x00003114 + (0x80 * (x)))
+#define XGMAC_DMA_CH_RxDESC_HADDR(x) (0x00003118 + (0x80 * (x)))
#define XGMAC_DMA_CH_RxDESC_LADDR(x) (0x0000311c + (0x80 * (x)))
#define XGMAC_DMA_CH_TxDESC_TAIL_LPTR(x) (0x00003124 + (0x80 * (x)))
#define XGMAC_DMA_CH_RxDESC_TAIL_LPTR(x) (0x0000312c + (0x80 * (x)))
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
index 229c58758cbd..a4f236e3593e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
@@ -44,7 +44,7 @@ static void dwxgmac2_dma_init_chan(void __iomem *ioaddr,

static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan)
+ dma_addr_t phy, u32 chan)
{
u32 rxpbl = dma_cfg->rxpbl ?: dma_cfg->pbl;
u32 value;
@@ -54,12 +54,13 @@ static void dwxgmac2_dma_init_rx_chan(void __iomem *ioaddr,
value |= (rxpbl << XGMAC_RxPBL_SHIFT) & XGMAC_RxPBL;
writel(value, ioaddr + XGMAC_DMA_CH_RX_CONTROL(chan));

- writel(dma_rx_phy, ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
+ writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_HADDR(chan));
+ writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_RxDESC_LADDR(chan));
}

static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan)
+ dma_addr_t phy, u32 chan)
{
u32 txpbl = dma_cfg->txpbl ?: dma_cfg->pbl;
u32 value;
@@ -70,7 +71,8 @@ static void dwxgmac2_dma_init_tx_chan(void __iomem *ioaddr,
value |= XGMAC_OSP;
writel(value, ioaddr + XGMAC_DMA_CH_TX_CONTROL(chan));

- writel(dma_tx_phy, ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
+ writel(upper_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_HADDR(chan));
+ writel(lower_32_bits(phy), ioaddr + XGMAC_DMA_CH_TxDESC_LADDR(chan));
}

static void dwxgmac2_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
index 2acfbc70e3c8..278c0dbec9d9 100644
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
@@ -150,10 +150,10 @@ struct stmmac_dma_ops {
struct stmmac_dma_cfg *dma_cfg, u32 chan);
void (*init_rx_chan)(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_rx_phy, u32 chan);
+ dma_addr_t phy, u32 chan);
void (*init_tx_chan)(void __iomem *ioaddr,
struct stmmac_dma_cfg *dma_cfg,
- u32 dma_tx_phy, u32 chan);
+ dma_addr_t phy, u32 chan);
/* Configure the AXI Bus Mode Register */
void (*axi)(void __iomem *ioaddr, struct stmmac_axi *axi);
/* Dump DMA registers */
--
2.7.4

2019-07-03 10:38:56

by Jose Abreu

Subject: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Mapping and unmapping DMA regions is a significant bottleneck in the
stmmac driver, especially in the RX path.

This commit introduces support for the Page Pool API and uses it in all
RX queues. With this change, we get more stable throughput and some
increase in bandwidth with iperf:
- MAC1000: 950 Mbps
- XGMAC: 9.22 Gbps
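
The per-queue setup boils down to the following (simplified from the
diff below; PP_FLAG_DMA_MAP makes the pool keep every page DMA mapped on
our behalf):

    struct page_pool_params pp_params = { 0 };

    pp_params.flags = PP_FLAG_DMA_MAP;
    pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
    pp_params.nid = dev_to_node(priv->device);
    pp_params.dev = priv->device;
    pp_params.dma_dir = DMA_FROM_DEVICE;

    rx_q->page_pool = page_pool_create(&pp_params);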

Signed-off-by: Jose Abreu <[email protected]>
Cc: Joao Pinto <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Giuseppe Cavallaro <[email protected]>
Cc: Alexandre Torgue <[email protected]>
Cc: Maxime Coquelin <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Chen-Yu Tsai <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/Kconfig | 1 +
drivers/net/ethernet/stmicro/stmmac/stmmac.h | 10 +-
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 ++++++----------------
3 files changed, 63 insertions(+), 144 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 943189dcccb1..2325b40dff6e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -3,6 +3,7 @@ config STMMAC_ETH
tristate "STMicroelectronics Multi-Gigabit Ethernet driver"
depends on HAS_IOMEM && HAS_DMA
select MII
+ select PAGE_POOL
select PHYLINK
select CRC32
imply PTP_1588_CLOCK
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 513f4e2df5f6..5cd966c154f3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -20,6 +20,7 @@
#include <linux/ptp_clock_kernel.h>
#include <linux/net_tstamp.h>
#include <linux/reset.h>
+#include <net/page_pool.h>

struct stmmac_resources {
void __iomem *addr;
@@ -54,14 +55,19 @@ struct stmmac_tx_queue {
u32 mss;
};

+struct stmmac_rx_buffer {
+ struct page *page;
+ dma_addr_t addr;
+};
+
struct stmmac_rx_queue {
u32 rx_count_frames;
u32 queue_index;
+ struct page_pool *page_pool;
+ struct stmmac_rx_buffer *buf_pool;
struct stmmac_priv *priv_data;
struct dma_extended_desc *dma_erx;
struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
- struct sk_buff **rx_skbuff;
- dma_addr_t *rx_skbuff_dma;
unsigned int cur_rx;
unsigned int dirty_rx;
u32 rx_zeroc_thresh;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index c8fe85ef9a7e..9f44e8193208 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
int i, gfp_t flags, u32 queue)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
- struct sk_buff *skb;
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];

- skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
- if (!skb) {
- netdev_err(priv->dev,
- "%s: Rx init fails; skb is NULL\n", __func__);
+ buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+ if (!buf->page)
return -ENOMEM;
- }
- rx_q->rx_skbuff[i] = skb;
- rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
- netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
- dev_kfree_skb_any(skb);
- return -EINVAL;
- }
-
- stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);

+ buf->addr = buf->page->dma_addr;
+ stmmac_set_desc_addr(priv, p, buf->addr);
if (priv->dma_buf_sz == BUF_SIZE_16KiB)
stmmac_init_desc3(priv, p);

@@ -1232,13 +1220,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i)
{
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];

- if (rx_q->rx_skbuff[i]) {
- dma_unmap_single(priv->device, rx_q->rx_skbuff_dma[i],
- priv->dma_buf_sz, DMA_FROM_DEVICE);
- dev_kfree_skb_any(rx_q->rx_skbuff[i]);
- }
- rx_q->rx_skbuff[i] = NULL;
+ page_pool_put_page(rx_q->page_pool, buf->page, false);
+ buf->page = NULL;
}

/**
@@ -1321,10 +1306,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
queue);
if (ret)
goto err_init_rx_buffers;
-
- netif_dbg(priv, probe, priv->dev, "[%p]\t[%p]\t[%x]\n",
- rx_q->rx_skbuff[i], rx_q->rx_skbuff[i]->data,
- (unsigned int)rx_q->rx_skbuff_dma[i]);
}

rx_q->cur_rx = 0;
@@ -1498,8 +1479,9 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
sizeof(struct dma_extended_desc),
rx_q->dma_erx, rx_q->dma_rx_phy);

- kfree(rx_q->rx_skbuff_dma);
- kfree(rx_q->rx_skbuff);
+ kfree(rx_q->buf_pool);
+ if (rx_q->page_pool)
+ page_pool_request_shutdown(rx_q->page_pool);
}
}

@@ -1551,20 +1533,28 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
/* RX queues buffers and DMA */
for (queue = 0; queue < rx_count; queue++) {
struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ struct page_pool_params pp_params = { 0 };

rx_q->queue_index = queue;
rx_q->priv_data = priv;

- rx_q->rx_skbuff_dma = kmalloc_array(DMA_RX_SIZE,
- sizeof(dma_addr_t),
- GFP_KERNEL);
- if (!rx_q->rx_skbuff_dma)
+ pp_params.flags = PP_FLAG_DMA_MAP;
+ pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+ pp_params.nid = dev_to_node(priv->device);
+ pp_params.dev = priv->device;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+
+ rx_q->page_pool = page_pool_create(&pp_params);
+ if (IS_ERR(rx_q->page_pool)) {
+ ret = PTR_ERR(rx_q->page_pool);
+ rx_q->page_pool = NULL;
goto err_dma;
+ }

- rx_q->rx_skbuff = kmalloc_array(DMA_RX_SIZE,
- sizeof(struct sk_buff *),
- GFP_KERNEL);
- if (!rx_q->rx_skbuff)
+ rx_q->buf_pool = kmalloc_array(DMA_RX_SIZE,
+ sizeof(*rx_q->buf_pool),
+ GFP_KERNEL);
+ if (!rx_q->buf_pool)
goto err_dma;

if (priv->extend_desc) {
@@ -3295,9 +3285,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
int dirty = stmmac_rx_dirty(priv, queue);
unsigned int entry = rx_q->dirty_rx;

- int bfsize = priv->dma_buf_sz;
-
while (dirty-- > 0) {
+ struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
struct dma_desc *p;
bool use_rx_wd;

@@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
else
p = rx_q->dma_rx + entry;

- if (likely(!rx_q->rx_skbuff[entry])) {
- struct sk_buff *skb;
-
- skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
- if (unlikely(!skb)) {
- /* so for a while no zero-copy! */
- rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
- if (unlikely(net_ratelimit()))
- dev_err(priv->device,
- "fail to alloc skb entry %d\n",
- entry);
- break;
- }
-
- rx_q->rx_skbuff[entry] = skb;
- rx_q->rx_skbuff_dma[entry] =
- dma_map_single(priv->device, skb->data, bfsize,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(priv->device,
- rx_q->rx_skbuff_dma[entry])) {
- netdev_err(priv->dev, "Rx DMA map failed\n");
- dev_kfree_skb(skb);
+ if (!buf->page) {
+ buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+ if (!buf->page)
break;
- }
-
- stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
- stmmac_refill_desc3(priv, rx_q, p);
-
- if (rx_q->rx_zeroc_thresh > 0)
- rx_q->rx_zeroc_thresh--;
-
- netif_dbg(priv, rx_status, priv->dev,
- "refill entry #%d\n", entry);
}
- dma_wmb();
+
+ buf->addr = buf->page->dma_addr;
+ stmmac_set_desc_addr(priv, p, buf->addr);
+ stmmac_refill_desc3(priv, rx_q, p);

rx_q->rx_count_frames++;
rx_q->rx_count_frames %= priv->rx_coal_frames;
use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;

- stmmac_set_rx_owner(priv, p, use_rx_wd);
-
dma_wmb();
+ stmmac_set_rx_owner(priv, p, use_rx_wd);

entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
}
@@ -3373,9 +3335,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
unsigned int next_entry = rx_q->cur_rx;
int coe = priv->hw->rx_csum;
unsigned int count = 0;
- bool xmac;
-
- xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;

if (netif_msg_rx_status(priv)) {
void *rx_head;
@@ -3389,11 +3348,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
}
while (count < limit) {
+ struct stmmac_rx_buffer *buf;
+ struct dma_desc *np, *p;
int entry, status;
- struct dma_desc *p;
- struct dma_desc *np;

entry = next_entry;
+ buf = &rx_q->buf_pool[entry];

if (priv->extend_desc)
p = (struct dma_desc *)(rx_q->dma_erx + entry);
@@ -3423,20 +3383,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
stmmac_rx_extended_status(priv, &priv->dev->stats,
&priv->xstats, rx_q->dma_erx + entry);
if (unlikely(status == discard_frame)) {
+ page_pool_recycle_direct(rx_q->page_pool, buf->page);
priv->dev->stats.rx_errors++;
- if (priv->hwts_rx_en && !priv->extend_desc) {
- /* DESC2 & DESC3 will be overwritten by device
- * with timestamp value, hence reinitialize
- * them in stmmac_rx_refill() function so that
- * device can reuse it.
- */
- dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
- rx_q->rx_skbuff[entry] = NULL;
- dma_unmap_single(priv->device,
- rx_q->rx_skbuff_dma[entry],
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
- }
+ buf->page = NULL;
} else {
struct sk_buff *skb;
int frame_len;
@@ -3476,58 +3425,18 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
frame_len, status);
}

- /* The zero-copy is always used for all the sizes
- * in case of GMAC4 because it needs
- * to refill the used descriptors, always.
- */
- if (unlikely(!xmac &&
- ((frame_len < priv->rx_copybreak) ||
- stmmac_rx_threshold_count(rx_q)))) {
- skb = netdev_alloc_skb_ip_align(priv->dev,
- frame_len);
- if (unlikely(!skb)) {
- if (net_ratelimit())
- dev_warn(priv->device,
- "packet dropped\n");
- priv->dev->stats.rx_dropped++;
- continue;
- }
-
- dma_sync_single_for_cpu(priv->device,
- rx_q->rx_skbuff_dma
- [entry], frame_len,
- DMA_FROM_DEVICE);
- skb_copy_to_linear_data(skb,
- rx_q->
- rx_skbuff[entry]->data,
- frame_len);
-
- skb_put(skb, frame_len);
- dma_sync_single_for_device(priv->device,
- rx_q->rx_skbuff_dma
- [entry], frame_len,
- DMA_FROM_DEVICE);
- } else {
- skb = rx_q->rx_skbuff[entry];
- if (unlikely(!skb)) {
- if (net_ratelimit())
- netdev_err(priv->dev,
- "%s: Inconsistent Rx chain\n",
- priv->dev->name);
- priv->dev->stats.rx_dropped++;
- continue;
- }
- prefetch(skb->data - NET_IP_ALIGN);
- rx_q->rx_skbuff[entry] = NULL;
- rx_q->rx_zeroc_thresh++;
-
- skb_put(skb, frame_len);
- dma_unmap_single(priv->device,
- rx_q->rx_skbuff_dma[entry],
- priv->dma_buf_sz,
- DMA_FROM_DEVICE);
+ skb = netdev_alloc_skb_ip_align(priv->dev, frame_len);
+ if (unlikely(!skb)) {
+ priv->dev->stats.rx_dropped++;
+ continue;
}

+ dma_sync_single_for_cpu(priv->device, buf->addr,
+ frame_len, DMA_FROM_DEVICE);
+ skb_copy_to_linear_data(skb, page_address(buf->page),
+ frame_len);
+ skb_put(skb, frame_len);
+
if (netif_msg_pktdata(priv)) {
netdev_dbg(priv->dev, "frame received (%dbytes)",
frame_len);
@@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)

napi_gro_receive(&ch->rx_napi, skb);

+ page_pool_recycle_direct(rx_q->page_pool, buf->page);
+ buf->page = NULL;
+
priv->dev->stats.rx_packets++;
priv->dev->stats.rx_bytes += frame_len;
}
--
2.7.4

2019-07-03 10:39:43

by Jose Abreu

Subject: [PATCH net-next 1/3] net: stmmac: Implement RX Coalesce Frames setting

Add support for coalescing in the RX path by specifying the number of
frames which don't need to have the interrupt-on-completion bit set.

This is only available when the RX Watchdog is enabled.
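
The heart of the change is in the RX refill path (simplified from the
diff below): the interrupt-on-completion bit is now only kept on every
Nth descriptor, with N = rx_coal_frames, selectable from user space via
ethtool -C <iface> rx-frames N:

    rx_q->rx_count_frames++;
    rx_q->rx_count_frames %= priv->rx_coal_frames;

    /* Non-zero counter: suppress interrupt-on-completion and rely on
     * the RX Watchdog. Every Nth descriptor (counter wraps to zero)
     * keeps the IOC bit set as a safety net.
     */
    use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;

    stmmac_set_rx_owner(priv, p, use_rx_wd);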

Signed-off-by: Jose Abreu <[email protected]>
Cc: Joao Pinto <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Giuseppe Cavallaro <[email protected]>
Cc: Alexandre Torgue <[email protected]>
Cc: Maxime Coquelin <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Chen-Yu Tsai <[email protected]>
---
drivers/net/ethernet/stmicro/stmmac/common.h | 1 +
drivers/net/ethernet/stmicro/stmmac/stmmac.h | 2 ++
drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c | 7 +++++--
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 18 ++++++++++++------
4 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index 2403a65167b2..dfd47fdfa447 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -252,6 +252,7 @@ struct stmmac_safety_stats {
#define STMMAC_MAX_COAL_TX_TICK 100000
#define STMMAC_TX_MAX_FRAMES 256
#define STMMAC_TX_FRAMES 1
+#define STMMAC_RX_FRAMES 25

/* Packets types */
enum packets_types {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 123898235cb0..513f4e2df5f6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -55,6 +55,7 @@ struct stmmac_tx_queue {
};

struct stmmac_rx_queue {
+ u32 rx_count_frames;
u32 queue_index;
struct stmmac_priv *priv_data;
struct dma_extended_desc *dma_erx;
@@ -110,6 +111,7 @@ struct stmmac_priv {
/* Frequently used values are kept adjacent for cache effect */
u32 tx_coal_frames;
u32 tx_coal_timer;
+ u32 rx_coal_frames;

int tx_coalesce;
int hwts_tx_en;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index cfd93eefb50e..6efb66820d4c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -701,8 +701,10 @@ static int stmmac_get_coalesce(struct net_device *dev,
ec->tx_coalesce_usecs = priv->tx_coal_timer;
ec->tx_max_coalesced_frames = priv->tx_coal_frames;

- if (priv->use_riwt)
+ if (priv->use_riwt) {
+ ec->rx_max_coalesced_frames = priv->rx_coal_frames;
ec->rx_coalesce_usecs = stmmac_riwt2usec(priv->rx_riwt, priv);
+ }

return 0;
}
@@ -715,7 +717,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
unsigned int rx_riwt;

/* Check not supported parameters */
- if ((ec->rx_max_coalesced_frames) || (ec->rx_coalesce_usecs_irq) ||
+ if ((ec->rx_coalesce_usecs_irq) ||
(ec->rx_max_coalesced_frames_irq) || (ec->tx_coalesce_usecs_irq) ||
(ec->use_adaptive_rx_coalesce) || (ec->use_adaptive_tx_coalesce) ||
(ec->pkt_rate_low) || (ec->rx_coalesce_usecs_low) ||
@@ -749,6 +751,7 @@ static int stmmac_set_coalesce(struct net_device *dev,
/* Only copy relevant parameters, ignore all others. */
priv->tx_coal_frames = ec->tx_max_coalesced_frames;
priv->tx_coal_timer = ec->tx_coalesce_usecs;
+ priv->rx_coal_frames = ec->rx_max_coalesced_frames;
priv->rx_riwt = rx_riwt;
stmmac_rx_watchdog(priv, priv->ioaddr, priv->rx_riwt, rx_cnt);

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 3425d4dda03d..c8fe85ef9a7e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2268,20 +2268,21 @@ static void stmmac_tx_timer(struct timer_list *t)
}

/**
- * stmmac_init_tx_coalesce - init tx mitigation options.
+ * stmmac_init_coalesce - init mitigation options.
* @priv: driver private structure
* Description:
- * This inits the transmit coalesce parameters: i.e. timer rate,
+ * This inits the coalesce parameters: i.e. timer rate,
* timer handler and default threshold used for enabling the
* interrupt on completion bit.
*/
-static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+static void stmmac_init_coalesce(struct stmmac_priv *priv)
{
u32 tx_channel_count = priv->plat->tx_queues_to_use;
u32 chan;

priv->tx_coal_frames = STMMAC_TX_FRAMES;
priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+ priv->rx_coal_frames = STMMAC_RX_FRAMES;

for (chan = 0; chan < tx_channel_count; chan++) {
struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
@@ -2651,7 +2652,7 @@ static int stmmac_open(struct net_device *dev)
goto init_error;
}

- stmmac_init_tx_coalesce(priv);
+ stmmac_init_coalesce(priv);

phylink_start(priv->phylink);

@@ -3298,6 +3299,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)

while (dirty-- > 0) {
struct dma_desc *p;
+ bool use_rx_wd;

if (priv->extend_desc)
p = (struct dma_desc *)(rx_q->dma_erx + entry);
@@ -3340,7 +3342,11 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
}
dma_wmb();

- stmmac_set_rx_owner(priv, p, priv->use_riwt);
+ rx_q->rx_count_frames++;
+ rx_q->rx_count_frames %= priv->rx_coal_frames;
+ use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
+
+ stmmac_set_rx_owner(priv, p, use_rx_wd);

dma_wmb();

@@ -4623,7 +4629,7 @@ int stmmac_resume(struct device *dev)
stmmac_clear_descriptors(priv);

stmmac_hw_setup(ndev, false);
- stmmac_init_tx_coalesce(priv);
+ stmmac_init_coalesce(priv);
stmmac_set_rx_mode(ndev);

stmmac_enable_all_queues(priv);
--
2.7.4

2019-07-03 10:42:15

by Jose Abreu

Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

++ Jesper: who is the most active committer of the page pool API (?) ...
Can you please help review this?

From: Jose Abreu <[email protected]>

> Mapping and unmapping DMA regions is a significant bottleneck in the
> stmmac driver, especially in the RX path.
>
> This commit introduces support for the Page Pool API and uses it in all
> RX queues. With this change, we get more stable throughput and some
> increase in bandwidth with iperf:
> - MAC1000: 950 Mbps
> - XGMAC: 9.22 Gbps
>
> Signed-off-by: Jose Abreu <[email protected]>
> Cc: Joao Pinto <[email protected]>
> Cc: David S. Miller <[email protected]>
> Cc: Giuseppe Cavallaro <[email protected]>
> Cc: Alexandre Torgue <[email protected]>
> Cc: Maxime Coquelin <[email protected]>
> Cc: Maxime Ripard <[email protected]>
> Cc: Chen-Yu Tsai <[email protected]>
> ---
> drivers/net/ethernet/stmicro/stmmac/Kconfig | 1 +
> drivers/net/ethernet/stmicro/stmmac/stmmac.h | 10 +-
> drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 ++++++----------------
> 3 files changed, 63 insertions(+), 144 deletions(-)
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
> index 943189dcccb1..2325b40dff6e 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
> +++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
> @@ -3,6 +3,7 @@ config STMMAC_ETH
> tristate "STMicroelectronics Multi-Gigabit Ethernet driver"
> depends on HAS_IOMEM && HAS_DMA
> select MII
> + select PAGE_POOL
> select PHYLINK
> select CRC32
> imply PTP_1588_CLOCK
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> index 513f4e2df5f6..5cd966c154f3 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
> @@ -20,6 +20,7 @@
> #include <linux/ptp_clock_kernel.h>
> #include <linux/net_tstamp.h>
> #include <linux/reset.h>
> +#include <net/page_pool.h>
>
> struct stmmac_resources {
> void __iomem *addr;
> @@ -54,14 +55,19 @@ struct stmmac_tx_queue {
> u32 mss;
> };
>
> +struct stmmac_rx_buffer {
> + struct page *page;
> + dma_addr_t addr;
> +};
> +
> struct stmmac_rx_queue {
> u32 rx_count_frames;
> u32 queue_index;
> + struct page_pool *page_pool;
> + struct stmmac_rx_buffer *buf_pool;
> struct stmmac_priv *priv_data;
> struct dma_extended_desc *dma_erx;
> struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
> - struct sk_buff **rx_skbuff;
> - dma_addr_t *rx_skbuff_dma;
> unsigned int cur_rx;
> unsigned int dirty_rx;
> u32 rx_zeroc_thresh;
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index c8fe85ef9a7e..9f44e8193208 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
> int i, gfp_t flags, u32 queue)
> {
> struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> - struct sk_buff *skb;
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>
> - skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
> - if (!skb) {
> - netdev_err(priv->dev,
> - "%s: Rx init fails; skb is NULL\n", __func__);
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
> return -ENOMEM;
> - }
> - rx_q->rx_skbuff[i] = skb;
> - rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
> - netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
> - dev_kfree_skb_any(skb);
> - return -EINVAL;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
>
> + buf->addr = buf->page->dma_addr;
> + stmmac_set_desc_addr(priv, p, buf->addr);
> if (priv->dma_buf_sz == BUF_SIZE_16KiB)
> stmmac_init_desc3(priv, p);
>
> @@ -1232,13 +1220,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
> static void stmmac_free_rx_buffer(struct stmmac_priv *priv, u32 queue, int i)
> {
> struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>
> - if (rx_q->rx_skbuff[i]) {
> - dma_unmap_single(priv->device, rx_q->rx_skbuff_dma[i],
> - priv->dma_buf_sz, DMA_FROM_DEVICE);
> - dev_kfree_skb_any(rx_q->rx_skbuff[i]);
> - }
> - rx_q->rx_skbuff[i] = NULL;
> + page_pool_put_page(rx_q->page_pool, buf->page, false);
> + buf->page = NULL;
> }
>
> /**
> @@ -1321,10 +1306,6 @@ static int init_dma_rx_desc_rings(struct net_device *dev, gfp_t flags)
> queue);
> if (ret)
> goto err_init_rx_buffers;
> -
> - netif_dbg(priv, probe, priv->dev, "[%p]\t[%p]\t[%x]\n",
> - rx_q->rx_skbuff[i], rx_q->rx_skbuff[i]->data,
> - (unsigned int)rx_q->rx_skbuff_dma[i]);
> }
>
> rx_q->cur_rx = 0;
> @@ -1498,8 +1479,9 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
> sizeof(struct dma_extended_desc),
> rx_q->dma_erx, rx_q->dma_rx_phy);
>
> - kfree(rx_q->rx_skbuff_dma);
> - kfree(rx_q->rx_skbuff);
> + kfree(rx_q->buf_pool);
> + if (rx_q->page_pool)
> + page_pool_request_shutdown(rx_q->page_pool);
> }
> }
>
> @@ -1551,20 +1533,28 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
> /* RX queues buffers and DMA */
> for (queue = 0; queue < rx_count; queue++) {
> struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> + struct page_pool_params pp_params = { 0 };
>
> rx_q->queue_index = queue;
> rx_q->priv_data = priv;
>
> - rx_q->rx_skbuff_dma = kmalloc_array(DMA_RX_SIZE,
> - sizeof(dma_addr_t),
> - GFP_KERNEL);
> - if (!rx_q->rx_skbuff_dma)
> + pp_params.flags = PP_FLAG_DMA_MAP;
> + pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> + pp_params.nid = dev_to_node(priv->device);
> + pp_params.dev = priv->device;
> + pp_params.dma_dir = DMA_FROM_DEVICE;
> +
> + rx_q->page_pool = page_pool_create(&pp_params);
> + if (IS_ERR(rx_q->page_pool)) {
> + ret = PTR_ERR(rx_q->page_pool);
> + rx_q->page_pool = NULL;
> goto err_dma;
> + }
>
> - rx_q->rx_skbuff = kmalloc_array(DMA_RX_SIZE,
> - sizeof(struct sk_buff *),
> - GFP_KERNEL);
> - if (!rx_q->rx_skbuff)
> + rx_q->buf_pool = kmalloc_array(DMA_RX_SIZE,
> + sizeof(*rx_q->buf_pool),
> + GFP_KERNEL);
> + if (!rx_q->buf_pool)
> goto err_dma;
>
> if (priv->extend_desc) {
> @@ -3295,9 +3285,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
> int dirty = stmmac_rx_dirty(priv, queue);
> unsigned int entry = rx_q->dirty_rx;
>
> - int bfsize = priv->dma_buf_sz;
> -
> while (dirty-- > 0) {
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
> struct dma_desc *p;
> bool use_rx_wd;
>
> @@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
> else
> p = rx_q->dma_rx + entry;
>
> - if (likely(!rx_q->rx_skbuff[entry])) {
> - struct sk_buff *skb;
> -
> - skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
> - if (unlikely(!skb)) {
> - /* so for a while no zero-copy! */
> - rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
> - if (unlikely(net_ratelimit()))
> - dev_err(priv->device,
> - "fail to alloc skb entry %d\n",
> - entry);
> - break;
> - }
> -
> - rx_q->rx_skbuff[entry] = skb;
> - rx_q->rx_skbuff_dma[entry] =
> - dma_map_single(priv->device, skb->data, bfsize,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device,
> - rx_q->rx_skbuff_dma[entry])) {
> - netdev_err(priv->dev, "Rx DMA map failed\n");
> - dev_kfree_skb(skb);
> + if (!buf->page) {
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
> break;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
> - stmmac_refill_desc3(priv, rx_q, p);
> -
> - if (rx_q->rx_zeroc_thresh > 0)
> - rx_q->rx_zeroc_thresh--;
> -
> - netif_dbg(priv, rx_status, priv->dev,
> - "refill entry #%d\n", entry);
> }
> - dma_wmb();
> +
> + buf->addr = buf->page->dma_addr;
> + stmmac_set_desc_addr(priv, p, buf->addr);
> + stmmac_refill_desc3(priv, rx_q, p);
>
> rx_q->rx_count_frames++;
> rx_q->rx_count_frames %= priv->rx_coal_frames;
> use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
>
> - stmmac_set_rx_owner(priv, p, use_rx_wd);
> -
> dma_wmb();
> + stmmac_set_rx_owner(priv, p, use_rx_wd);
>
> entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> }
> @@ -3373,9 +3335,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> unsigned int next_entry = rx_q->cur_rx;
> int coe = priv->hw->rx_csum;
> unsigned int count = 0;
> - bool xmac;
> -
> - xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
>
> if (netif_msg_rx_status(priv)) {
> void *rx_head;
> @@ -3389,11 +3348,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
> }
> while (count < limit) {
> + struct stmmac_rx_buffer *buf;
> + struct dma_desc *np, *p;
> int entry, status;
> - struct dma_desc *p;
> - struct dma_desc *np;
>
> entry = next_entry;
> + buf = &rx_q->buf_pool[entry];
>
> if (priv->extend_desc)
> p = (struct dma_desc *)(rx_q->dma_erx + entry);
> @@ -3423,20 +3383,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> stmmac_rx_extended_status(priv, &priv->dev->stats,
> &priv->xstats, rx_q->dma_erx + entry);
> if (unlikely(status == discard_frame)) {
> + page_pool_recycle_direct(rx_q->page_pool, buf->page);
> priv->dev->stats.rx_errors++;
> - if (priv->hwts_rx_en && !priv->extend_desc) {
> - /* DESC2 & DESC3 will be overwritten by device
> - * with timestamp value, hence reinitialize
> - * them in stmmac_rx_refill() function so that
> - * device can reuse it.
> - */
> - dev_kfree_skb_any(rx_q->rx_skbuff[entry]);
> - rx_q->rx_skbuff[entry] = NULL;
> - dma_unmap_single(priv->device,
> - rx_q->rx_skbuff_dma[entry],
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> - }
> + buf->page = NULL;
> } else {
> struct sk_buff *skb;
> int frame_len;
> @@ -3476,58 +3425,18 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> frame_len, status);
> }
>
> - /* The zero-copy is always used for all the sizes
> - * in case of GMAC4 because it needs
> - * to refill the used descriptors, always.
> - */
> - if (unlikely(!xmac &&
> - ((frame_len < priv->rx_copybreak) ||
> - stmmac_rx_threshold_count(rx_q)))) {
> - skb = netdev_alloc_skb_ip_align(priv->dev,
> - frame_len);
> - if (unlikely(!skb)) {
> - if (net_ratelimit())
> - dev_warn(priv->device,
> - "packet dropped\n");
> - priv->dev->stats.rx_dropped++;
> - continue;
> - }
> -
> - dma_sync_single_for_cpu(priv->device,
> - rx_q->rx_skbuff_dma
> - [entry], frame_len,
> - DMA_FROM_DEVICE);
> - skb_copy_to_linear_data(skb,
> - rx_q->
> - rx_skbuff[entry]->data,
> - frame_len);
> -
> - skb_put(skb, frame_len);
> - dma_sync_single_for_device(priv->device,
> - rx_q->rx_skbuff_dma
> - [entry], frame_len,
> - DMA_FROM_DEVICE);
> - } else {
> - skb = rx_q->rx_skbuff[entry];
> - if (unlikely(!skb)) {
> - if (net_ratelimit())
> - netdev_err(priv->dev,
> - "%s: Inconsistent Rx chain\n",
> - priv->dev->name);
> - priv->dev->stats.rx_dropped++;
> - continue;
> - }
> - prefetch(skb->data - NET_IP_ALIGN);
> - rx_q->rx_skbuff[entry] = NULL;
> - rx_q->rx_zeroc_thresh++;
> -
> - skb_put(skb, frame_len);
> - dma_unmap_single(priv->device,
> - rx_q->rx_skbuff_dma[entry],
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> + skb = netdev_alloc_skb_ip_align(priv->dev, frame_len);
> + if (unlikely(!skb)) {
> + priv->dev->stats.rx_dropped++;
> + continue;
> }
>
> + dma_sync_single_for_cpu(priv->device, buf->addr,
> + frame_len, DMA_FROM_DEVICE);
> + skb_copy_to_linear_data(skb, page_address(buf->page),
> + frame_len);
> + skb_put(skb, frame_len);
> +
> if (netif_msg_pktdata(priv)) {
> netdev_dbg(priv->dev, "frame received (%dbytes)",
> frame_len);
> @@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>
> napi_gro_receive(&ch->rx_napi, skb);
>
> + page_pool_recycle_direct(rx_q->page_pool, buf->page);
> + buf->page = NULL;
> +
> priv->dev->stats.rx_packets++;
> priv->dev->stats.rx_bytes += frame_len;
> }
> --
> 2.7.4


2019-07-03 20:43:14

by Jakub Kicinski

Subject: Re: [PATCH net-next 1/3] net: stmmac: Implement RX Coalesce Frames setting

On Wed, 3 Jul 2019 12:37:48 +0200, Jose Abreu wrote:
> Add support for coalescing in the RX path by specifying the number of
> frames which don't need to have the interrupt-on-completion bit set.
>
> This is only available when the RX Watchdog is enabled.
>
> Signed-off-by: Jose Abreu <[email protected]>
> Cc: Joao Pinto <[email protected]>
> Cc: David S. Miller <[email protected]>
> Cc: Giuseppe Cavallaro <[email protected]>
> Cc: Alexandre Torgue <[email protected]>
> Cc: Maxime Coquelin <[email protected]>
> Cc: Maxime Ripard <[email protected]>
> Cc: Chen-Yu Tsai <[email protected]>

Acked-by: Jakub Kicinski <[email protected]>

2019-07-04 09:40:29

by Jesper Dangaard Brouer

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Wed, 3 Jul 2019 12:37:50 +0200
Jose Abreu <[email protected]> wrote:

> @@ -1498,8 +1479,9 @@ static void free_dma_rx_desc_resources(struct stmmac_priv *priv)
> sizeof(struct dma_extended_desc),
> rx_q->dma_erx, rx_q->dma_rx_phy);
>
> - kfree(rx_q->rx_skbuff_dma);
> - kfree(rx_q->rx_skbuff);
> + kfree(rx_q->buf_pool);
> + if (rx_q->page_pool)
> + page_pool_request_shutdown(rx_q->page_pool);
> }
> }
>

The page_pool_request_shutdown() API returns an indication of whether
there are any in-flight frames/pages, so you know when it is safe to
call page_pool_free(), which you are also missing a call to.

This page_pool_request_shutdown() is only intended to be called from the
xdp_rxq_info_unreg() code, which handles this and schedules a work queue
if it needs to wait for in-flight frames/pages.
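
In other words, a driver-local teardown would need to look something
like the sketch below (assumption: the return value reads as "safe to
free", i.e. no pages left in flight; see xdp_rxq_info_unreg() for the
canonical user):

    if (rx_q->page_pool) {
            /* Poll until all in-flight pages have been returned. */
            while (!page_pool_request_shutdown(rx_q->page_pool))
                    msleep(100);
            page_pool_free(rx_q->page_pool);
    }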

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 09:49:20

by Jesper Dangaard Brouer

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Wed, 3 Jul 2019 12:37:50 +0200
Jose Abreu <[email protected]> wrote:

> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -1197,26 +1197,14 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
> int i, gfp_t flags, u32 queue)
> {
> struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
> - struct sk_buff *skb;
> + struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
>
> - skb = __netdev_alloc_skb_ip_align(priv->dev, priv->dma_buf_sz, flags);
> - if (!skb) {
> - netdev_err(priv->dev,
> - "%s: Rx init fails; skb is NULL\n", __func__);
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
> return -ENOMEM;
> - }
> - rx_q->rx_skbuff[i] = skb;
> - rx_q->rx_skbuff_dma[i] = dma_map_single(priv->device, skb->data,
> - priv->dma_buf_sz,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device, rx_q->rx_skbuff_dma[i])) {
> - netdev_err(priv->dev, "%s: DMA mapping error\n", __func__);
> - dev_kfree_skb_any(skb);
> - return -EINVAL;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[i]);
>
> + buf->addr = buf->page->dma_addr;

We/Ilias added a wrapper/helper function for accessing the dma_addr, as
it will help us later in identifying users:

page_pool_get_dma_addr(page)
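
i.e. the assignment in the hunk above would become:

    buf->addr = page_pool_get_dma_addr(buf->page);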

> + stmmac_set_desc_addr(priv, p, buf->addr);
> if (priv->dma_buf_sz == BUF_SIZE_16KiB)
> stmmac_init_desc3(priv, p);
>


--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 10:01:12

by Jesper Dangaard Brouer

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Wed, 3 Jul 2019 12:37:50 +0200
Jose Abreu <[email protected]> wrote:

> @@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>
> napi_gro_receive(&ch->rx_napi, skb);
>
> + page_pool_recycle_direct(rx_q->page_pool, buf->page);

This doesn't look correct.

The page_pool DMA mapping cannot be "kept" when the page travels into the
network stack attached to an SKB. (Ilias and I have a long term plan[1]
to allow this, but you cannot do it ATM).

You will have to call:
page_pool_release_page(rx_q->page_pool, buf->page);

This will do a DMA-unmap, and you will likely lose your performance
gain :-(


> + buf->page = NULL;
> +
> priv->dev->stats.rx_packets++;
> priv->dev->stats.rx_bytes += frame_len;
> }

Also remember that the page_pool requires your driver to do the DMA-sync
operation. I see a dma_sync_single_for_cpu(), but I didn't see a
dma_sync_single_for_device() (well, I noticed one getting removed).
(For some HW Ilias tells me that the dma_sync_single_for_device can be
elided, so maybe this can still be correct for you).


[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool02_SKB_return_callback.org
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 10:14:18

by Jose Abreu

Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jesper Dangaard Brouer <[email protected]>

> The page_pool DMA mapping cannot be "kept" when the page travels into the
> network stack attached to an SKB. (Ilias and I have a long term plan[1]
> to allow this, but you cannot do it ATM).

The reason I recycle the page is this previous call to:

skb_copy_to_linear_data()

So, technically, I'm syncing the page(s) to the CPU and then doing a
memcpy into a previously allocated SKB ... So it's safe to just recycle
the mapping, I think.

It's kind of using bounce buffers, and I do see a performance gain in
this (I think the reason is that my setup uses swiotlb for DMA mapping).

Anyway, I'm open to some suggestions on how to improve this ...

> Also remember that the page_pool requires your driver to do the DMA-sync
> operation. I see a dma_sync_single_for_cpu(), but I didn't see a
> dma_sync_single_for_device() (well, I noticed one getting removed).
> (For some HW Ilias tells me that the dma_sync_single_for_device can be
> elided, so maybe this can still be correct for you).

My HW just needs its descriptors refilled, and those live in a different
coherent region, so I don't see any reason for dma_sync_single_for_device() ...

2019-07-04 10:31:41

by Ilias Apalodimas

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Hi Jesper, Ivan,

> On Wed, 3 Jul 2019 12:37:50 +0200
> Jose Abreu <[email protected]> wrote:
>
> > @@ -3547,6 +3456,9 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> >
> > napi_gro_receive(&ch->rx_napi, skb);
> >
> > + page_pool_recycle_direct(rx_q->page_pool, buf->page);
>
> This doesn't look correct.
>
> The page_pool DMA mapping cannot be "kept" when the page travels into the
> network stack attached to an SKB. (Ilias and I have a long term plan[1]
> to allow this, but you cannot do it ATM).
>
> You will have to call:
> page_pool_release_page(rx_q->page_pool, buf->page);
>
> This will do a DMA-unmap, and you will likely lose your performance
> gain :-(
>
>
> > + buf->page = NULL;
> > +
> > priv->dev->stats.rx_packets++;
> > priv->dev->stats.rx_bytes += frame_len;
> > }
>
> Also remember that the page_pool requires your driver to do the DMA-sync
> operation. I see a dma_sync_single_for_cpu(), but I didn't see a
> dma_sync_single_for_device() (well, I noticed one getting removed).
> (For some HW Ilias tells me that the dma_sync_single_for_device can be
> elided, so maybe this can still be correct for you).
In our case (and in the page_pool API in general) you have to track
buffers when both .ndo_xdp_xmit() and XDP_TX are used.
So the lifetime of a packet might be:

1. The page pool allocs a packet buffer. The API doesn't sync, but I *think*
you don't have to do so explicitly, since the CPU won't touch that buffer
until the NAPI handler kicks in. In the NAPI handler you need to
dma_sync_single_for_cpu() and process the packet.
2a) No XDP is involved, so the packet is unmapped and freed.
2b) .ndo_xdp_xmit() is called, so the buffer needs to be mapped/unmapped.
2c) XDP_TX is called. In that case we re-use an RX buffer, so we need to
dma_sync_single_for_device() (see the sketch below).
Cases 2a and 2b won't cause any issues.
In 2c the buffer will be recycled and fed back to the device with a *correct*
sync (for_device), and all those buffers are allocated as DMA_BIDIRECTIONAL.

So, bottom line, I *think* we can skip the dma_sync_single_for_device() on the
initial allocation *only*. If I am terribly wrong please let me know :)
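
In code, case 2c amounts to something like this sketch, right before the
buffer is handed back to the RX ring (generic names, not from a specific
driver):

    /* Recycled after XDP_TX: make the CPU's writes visible to the
     * device before the NIC may DMA into the buffer again.
     */
    dma_sync_single_for_device(dev, dma_addr, buf_len, DMA_BIDIRECTIONAL);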

Thanks
/Ilias
>
>
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool02_SKB_return_callback.org
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 11:12:54

by Ilias Apalodimas

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, Jul 04, 2019 at 10:13:37AM +0000, Jose Abreu wrote:
> From: Jesper Dangaard Brouer <[email protected]>
>
> > The page_pool DMA mapping cannot be "kept" when the page travels into the
> > network stack attached to an SKB. (Ilias and I have a long term plan[1]
> > to allow this, but you cannot do it ATM).
>
> The reason I recycle the page is this previous call to:
>
> skb_copy_to_linear_data()
>
> So, technically, I'm syncing the page(s) to the CPU and then doing a
> memcpy into a previously allocated SKB ... So it's safe to just recycle
> the mapping, I think.
>
> It's kind of using bounce buffers, and I do see a performance gain in
> this (I think the reason is that my setup uses swiotlb for DMA mapping).

Maybe. Have you tested this with big/small packets?
Can you do a test with 64b/128b and 1024b, for example?
The memcpy might be cheap for the small-sized packets (and cheaper than
the DMA map/unmap).

>
> Anyway, I'm open to some suggestions on how to improve this ...
>
> > Also remember that the page_pool requires your driver to do the DMA-sync
> > operation. I see a dma_sync_single_for_cpu(), but I didn't see a
> > dma_sync_single_for_device() (well, I noticed one getting removed).
> > (For some HW Ilias tells me that the dma_sync_single_for_device can be
> > elided, so maybe this can still be correct for you).
>
> My HW just needs its descriptors refilled, and those live in a different
> coherent region, so I don't see any reason for dma_sync_single_for_device() ...
I am a bit overloaded at the moment. I'll try to have a look at this and
get back to you.

Cheers
/Ilias

2019-07-04 11:56:43

by Jesper Dangaard Brouer

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, 4 Jul 2019 10:13:37 +0000
Jose Abreu <[email protected]> wrote:

> From: Jesper Dangaard Brouer <[email protected]>
>
> > The page_pool DMA mapping cannot be "kept" when the page travels into the
> > network stack attached to an SKB. (Ilias and I have a long term plan[1]
> > to allow this, but you cannot do it ATM).
>
> The reason I recycle the page is this previous call to:
>
> skb_copy_to_linear_data()
>
> So, technically, I'm syncing the page(s) to the CPU and then doing a
> memcpy into a previously allocated SKB ... So it's safe to just recycle
> the mapping, I think.

I didn't notice the skb_copy_to_linear_data(), which will copy the entire
frame, thus leaving the page unused and available for recycle.

Then it looks like you are doing the correct thing. I would appreciate it
if you could add a comment above the call like:

/* Data payload copied into SKB, page ready for recycle */
page_pool_recycle_direct(rx_q->page_pool, buf->page);


> It's kind of using bounce buffers, and I do see a performance gain in
> this (I think the reason is that my setup uses swiotlb for DMA mapping).
>
> Anyway, I'm open to some suggestions on how to improve this ...

I was surprised to see page_pool being used outside the surrounding XDP
APIs (include/net/xdp.h). For your use-case, where you "just" use
page_pool as a driver-local fast recycle-allocator for the RX ring that
keeps pages DMA mapped, it does make a lot of sense. It simplifies the
driver a fair amount:

3 files changed, 63 insertions(+), 144 deletions(-)

Thanks for demonstrating a use-case for page_pool besides XDP, and for
simplifying a driver with this.


> > Also remember that the page_pool requires your driver to do the
> > DMA-sync operation. I see a dma_sync_single_for_cpu(), but I
> > didn't see a dma_sync_single_for_device() (well, I noticed one
> > getting removed). (For some HW Ilias tells me that the
> > dma_sync_single_for_device can be elided, so maybe this can still
> > be correct for you).
>
> My HW just needs its descriptors refilled, and those live in a different
> coherent region, so I don't see any reason for dma_sync_single_for_device() ...

For your use-case, given that you are copying out the data and not
writing into it, I don't think you need to do the sync-for-device
(before giving the device the page again for another RX-ring cycle).

The way I understand the danger: if you write to the DMA memory region
and don't do the DMA-sync for-device, then the HW/coherency-system can
write back the memory later. That creates a race with the DMA device,
if it is receiving a packet and doing a write into the same DMA memory
region. Someone correct me if I misunderstood this...

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 12:06:04

by Ilias Apalodimas

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Hi Jesper,

> On Thu, 4 Jul 2019 10:13:37 +0000
> Jose Abreu <[email protected]> wrote:
> > > The page_pool DMA mapping cannot be "kept" when the page travels into the
> > > network stack attached to an SKB. (Ilias and I have a long term plan[1]
> > > to allow this, but you cannot do it ATM).
> >
> > The reason I recycle the page is this previous call to:
> >
> > skb_copy_to_linear_data()
> >
> > So, technically, I'm syncing the page(s) to the CPU and then doing a
> > memcpy into a previously allocated SKB ... So it's safe to just recycle
> > the mapping, I think.
>
> I didn't notice the skb_copy_to_linear_data(), which will copy the entire
> frame, thus leaving the page unused and available for recycle.

Yeah, this is essentially a 'copybreak' without the byte limitation that
other drivers usually impose (remember mvneta was doing this for all
packets < 256b).

That's why I was concerned about what will happen with > 1000b frames
and what the memory pressure is going to be.
The trade-off here is copying vs mapping/unmapping.
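
For reference, the per-packet RX path in this patch boils down to the
following (simplified from the diff; every frame is copied, whatever its
size):

    skb = netdev_alloc_skb_ip_align(priv->dev, frame_len);
    ...
    dma_sync_single_for_cpu(priv->device, buf->addr, frame_len,
                            DMA_FROM_DEVICE);
    skb_copy_to_linear_data(skb, page_address(buf->page), frame_len);
    skb_put(skb, frame_len);
    ...
    /* The page never left the pool, so no unmap on the fast path. */
    page_pool_recycle_direct(rx_q->page_pool, buf->page);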

>
> Then it looks like you are doing the correct thing. I will appreciate
> if you could add a comment above the call like:
>
> /* Data payload copied into SKB, page ready for recycle */
> page_pool_recycle_direct(rx_q->page_pool, buf->page);
>
>
> > It's kind of using bounce buffers, and I do see a performance gain in
> > this (I think the reason is that my setup uses swiotlb for DMA mapping).
> >
> > Anyway, I'm open to some suggestions on how to improve this ...
>
> I was surprised to see page_pool being used outside the surrounding XDP
> APIs (include/net/xdp.h). For your use-case, where you "just" use
> page_pool as a driver-local fast recycle-allocator for the RX ring that
> keeps pages DMA mapped, it does make a lot of sense. It simplifies the
> driver a fair amount:
>
> 3 files changed, 63 insertions(+), 144 deletions(-)
>
> Thanks for demonstrating a use-case for page_pool besides XDP, and for
> simplifying a driver with this.

Same here, thanks Jose,

>
>
> > > Also remember that the page_pool requires your driver to do the
> > > DMA-sync operation. I see a dma_sync_single_for_cpu(), but I
> > > didn't see a dma_sync_single_for_device() (well, I noticed one
> > > getting removed). (For some HW Ilias tells me that the
> > > dma_sync_single_for_device can be elided, so maybe this can still
> > > be correct for you).
> >
> > My HW just needs its descriptors refilled, and those live in a different
> > coherent region, so I don't see any reason for dma_sync_single_for_device() ...
>
> For your use-case, given that you are copying out the data and not
> writing into it, I don't think you need to do the sync-for-device
> (before giving the device the page again for another RX-ring cycle).
>
> The way I understand the danger: if you write to the DMA memory region
> and don't do the DMA-sync for-device, then the HW/coherency-system can
> write back the memory later. That creates a race with the DMA device,
> if it is receiving a packet and doing a write into the same DMA memory
> region. Someone correct me if I misunderstood this...

Similar understanding here

Cheers
/Ilias

2019-07-04 12:15:29

by Arnd Bergmann

Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, Jul 4, 2019 at 12:31 PM Ilias Apalodimas
<[email protected]> wrote:
> > On Wed, 3 Jul 2019 12:37:50 +0200
> > Jose Abreu <[email protected]> wrote:

> 1. The page pool allocs a packet buffer. The API doesn't sync, but I *think*
> you don't have to do so explicitly, since the CPU won't touch that buffer
> until the NAPI handler kicks in. In the NAPI handler you need to
> dma_sync_single_for_cpu() and process the packet.

> So bvottom line i *think* we can skip the dma_sync_single_for_device() on the
> initial allocation *only*. If am terribly wrong please let me know :)

I think you have to do a sync_single_for_device /somewhere/ before the
buffer is given to the device. On a non-cache-coherent machine with
a write-back cache, there may be dirty cache lines that get written back
after the device DMAs data into it (e.g. from a previous memset
from before the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.
You may also need to invalidate the cache lines in the following
sync_single_for_cpu() to eliminate clean cache lines with stale data
that got there when speculatively reading between the cache-invalidate
and the DMA.
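
Illustratively, the pairing being described, as a hedged sketch (dev, addr
and len are assumed context, not code from the patch):

	/* Before handing the buffer to the device: write back any dirty
	 * lines so a later eviction cannot overwrite the DMA'd data.
	 */
	dma_sync_single_for_device(dev, addr, len, DMA_FROM_DEVICE);

	/* ... device DMAs the frame into the buffer ... */

	/* Before the CPU reads: invalidate lines that may hold stale data
	 * pulled in by speculative reads after the first sync.
	 */
	dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);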

Arnd

2019-07-04 12:50:29

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, Jul 04, 2019 at 02:14:28PM +0200, Arnd Bergmann wrote:
> On Thu, Jul 4, 2019 at 12:31 PM Ilias Apalodimas
> <[email protected]> wrote:
> > > On Wed, 3 Jul 2019 12:37:50 +0200
> > > Jose Abreu <[email protected]> wrote:
>
> > 1. page pool allocs packet. The API doesn't sync, but I *think* you don't
> > have to do it explicitly, since the CPU won't touch that buffer until the
> > NAPI handler kicks in. In the NAPI handler you need to
> > dma_sync_single_for_cpu() and process the packet.
>
> > So bottom line, I *think* we can skip the dma_sync_single_for_device() on
> > the initial allocation *only*. If I am terribly wrong please let me know :)
>
> I think you have to do a sync_single_for_device /somewhere/ before the
> buffer is given to the device. On a non-cache-coherent machine with
> a write-back cache, there may be dirty cache lines that get written back
> after the device DMAs data into it (e.g. from a previous memset
> from before the buffer got freed), so you absolutely need to flush any
> dirty cache lines on it first.
OK, my bad, here I forgot to add "when coherency is there", since the driver
I had in mind runs on such a device (I think this is configurable though, so
I'll add the sync explicitly to make sure we won't break any configurations).

In general you are right, thanks for the explanation!
> You may also need to invalidate the cache lines in the following
> sync_single_for_cpu() to eliminate clean cache lines with stale data
> that got there when speculatively reading between the cache-invalidate
> and the DMA.
>
> Arnd


Thanks!
/Ilias

2019-07-04 13:01:45

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Thank you all for your review comments !

From: Ilias Apalodimas <[email protected]>

> That's why I was concerned about what will happen on > 1000b frames and what
> the memory pressure is going to be.
> The trade-off here is copying vs mapping/unmapping.

Well, the performance numbers I mentioned are for TSO with default MTU
(1500) and using iperf3 with zero-copy. Here follows netperf:

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t TCP_SENDFILE
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 1.2.3.2
(1.2.3.2) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072  16384  16384    10.00      9132.37   6.13     11.79    0.440   0.846

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072  16384  16384    10.01      9041.21   3.20     11.75    0.232   0.852

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS    us/KB

212992   65507   10.00      114455      0      5997.0    12.55   1.371
212992           10.00      114455             5997.0     8.12   0.887

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 64
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS    us/KB

212992      64   10.00     4013480      0       205.4    12.51   39.918
212992           10.00     4013480              205.4     7.99   25.482

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 128
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS    us/KB

212992     128   10.00     3950480      0       404.4    12.50   20.255
212992           10.00     3950442              404.4     7.70   12.485

---
# netperf -c -C -H 1.2.3.2 -T 7,7 -t UDP_STREAM -- -m 1024
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
1.2.3.2 (1.2.3.2) port 0 AF_INET : demo : cpu bind
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS    us/KB

212992    1024   10.00     3466506      0      2838.8    12.50   2.886
212992           10.00     3466506             2838.8     7.39   1.707

2019-07-04 13:08:59

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Hi Jose,

> Thank you all for your review comments !
>
> From: Ilias Apalodimas <[email protected]>
>
> > That's why I was concerned about what will happen on > 1000b frames and
> > what the memory pressure is going to be.
> > The trade-off here is copying vs mapping/unmapping.
>
> Well, the performance numbers I mentioned are for TSO with default MTU
> (1500) and using iperf3 with zero-copy. Here follows netperf:
>

OK, I guess this should be fine. Here's why.
You'll allocate extra memory from the page pool API equal to
the number of descriptors * 1 page.
You also allocate SKBs to copy the data into, and recycle the page pool buffers.
So page_pool won't add any significant memory pressure, since we expect *all*
its buffers to be recycled.
The SKBs are allocated anyway in the current driver, so bottom line you trade
off some memory (the page_pool buffers) + a memcpy per packet, and skip the
DMA map/unmap, which is the bottleneck in your hardware.
I think it's fine.
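
(For scale, assuming a typical 512-entry RX ring and 4 KiB pages, both of
which are assumptions here: that is 512 * 4 KiB = 2 MiB of pooled, recycled
buffer memory per RX queue.)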

Cheers
/Ilias

2019-07-04 14:53:35

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jesper Dangaard Brouer <[email protected]>

> The page_pool_request_shutdown() API return indication if there are any
> in-flight frames/pages, to know when it is safe to call
> page_pool_free(), which you are also missing a call to.
>
> This page_pool_request_shutdown() is only intended to be called from
> xdp_rxq_info_unreg() code, that handles and schedule a work queue if it
> need to wait for in-flight frames/pages.

So you mean I can't call it, or that I should implement the same deferred
work?

Notice that in the stmmac case there will be no in-flight frames/pages
because we free them all before calling this ...

2019-07-04 15:10:12

by Jesper Dangaard Brouer

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, 4 Jul 2019 14:45:59 +0000
Jose Abreu <[email protected]> wrote:

> From: Jesper Dangaard Brouer <[email protected]>
>
> > The page_pool_request_shutdown() API return indication if there are any
> > in-flight frames/pages, to know when it is safe to call
> > page_pool_free(), which you are also missing a call to.
> >
> > This page_pool_request_shutdown() is only intended to be called from
> > xdp_rxq_info_unreg() code, that handles and schedule a work queue if it
> > need to wait for in-flight frames/pages.
>
> So you mean I can't call it, or that I should implement the same deferred
> work?
>
> Notice that in the stmmac case there will be no in-flight frames/pages
> because we free them all before calling this ...

You can just use page_pool_free() (p.s I'm working on reintroducing
page_pool_destroy wrapper). As you say, you will not have in-flight
frames/pages in this driver use-case.

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-04 15:28:35

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jesper Dangaard Brouer <[email protected]>

> You can just use page_pool_free() (p.s I'm working on reintroducing
> page_pool_destroy wrapper). As you say, you will not have in-flight
> frames/pages in this driver use-case.

Well, if I remove the request_shutdown() it will trigger the "API usage
violation" WARN ...

I think this is due to the alloc cache only being freed in request_shutdown(),
or I'm having some leak :D

2019-07-04 15:34:50

by Jesper Dangaard Brouer

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, 4 Jul 2019 15:18:19 +0000
Jose Abreu <[email protected]> wrote:

> From: Jesper Dangaard Brouer <[email protected]>
>
> > You can just use page_pool_free() (p.s I'm working on reintroducing
> > page_pool_destroy wrapper). As you say, you will not have in-flight
> > frames/pages in this driver use-case.
>
> Well, if I remove the request_shutdown() it will trigger the "API usage
> violation" WARN ...
>
> I think this is due to the alloc cache only being freed in request_shutdown(),
> or I'm having some leak :D

Sorry for not being clear. You of course first have to call
page_pool_request_shutdown() and then call page_pool_free().
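
A hedged sketch of that ordering (the surrounding NULL check is an
illustrative assumption, and it relies on all ring pages having been
returned already, so nothing is in flight):

	if (rx_q->page_pool) {
		/* no in-flight pages expected in the stmmac use-case */
		page_pool_request_shutdown(rx_q->page_pool);
		page_pool_free(rx_q->page_pool);
	}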

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2019-07-17 18:59:45

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 03/07/2019 11:37, Jose Abreu wrote:
> Mapping and unmapping the DMA region is a high bottleneck in the stmmac
> driver, especially in the RX path.
>
> This commit introduces support for the Page Pool API and uses it in all RX
> queues. With this change, we get more stable throughput and some increase
> of bandwidth with iperf:
> - MAC1000 - 950 Mbps
> - XGMAC: 9.22 Gbps

I am seeing a boot regression on one of our Tegra boards with both
mainline and -next. Bisecting is pointing to this commit and reverting
this commit on top of mainline fixes the problem. Unfortunately, there
is not much of a backtrace but what I have captured is below.

Please note that this is seen on a system that is using NFS to mount
the rootfs and the crash occurs right around the point the rootfs is
mounted.

Let me know if you have any thoughts.

Cheers
Jon

[ 12.221843] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[ 12.229485] CPU: 5 PID: 1 Comm: init Tainted: G S 5.2.0-11500-g916f562fb28a #18
[ 12.238076] Hardware name: NVIDIA Tegra186 P2771-0000 Development Board (DT)
[ 12.245105] Call trace:
[ 12.247548] dump_backtrace+0x0/0x150
[ 12.251199] show_stack+0x14/0x20
[ 12.254505] dump_stack+0x9c/0xc4
[ 12.257809] panic+0x13c/0x32c
[ 12.260853] complete_and_exit+0x0/0x20
[ 12.264676] do_group_exit+0x34/0x98
[ 12.268241] get_signal+0x104/0x668
[ 12.271718] do_notify_resume+0x2ac/0x380
[ 12.275716] work_pending+0x8/0x10
[ 12.279109] SMP: stopping secondary CPUs
[ 12.283025] Kernel Offset: disabled
[ 12.286502] CPU features: 0x0002,20806000
[ 12.290499] Memory Limit: none
[ 12.293548] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

--
nvpublic

2019-07-18 07:31:12

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/17/2019, 19:58:53 (UTC+00:00)

> I am seeing a boot regression on one of our Tegra boards with both
> mainline and -next. Bisecting is pointing to this commit and reverting
> this commit on top of mainline fixes the problem. Unfortunately, there
> is not much of a backtrace but what I have captured is below.
>
> Please note that this is seen on a system that is using NFS to mount
> the rootfs and the crash occurs right around the point the rootfs is
> mounted.
>
> Let me know if you have any thoughts.
>
> Cheers
> Jon
>
> [ 12.221843] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> [ 12.229485] CPU: 5 PID: 1 Comm: init Tainted: G S 5.2.0-11500-g916f562fb28a #18
> [ 12.238076] Hardware name: NVIDIA Tegra186 P2771-0000 Development Board (DT)
> [ 12.245105] Call trace:
> [ 12.247548] dump_backtrace+0x0/0x150
> [ 12.251199] show_stack+0x14/0x20
> [ 12.254505] dump_stack+0x9c/0xc4
> [ 12.257809] panic+0x13c/0x32c
> [ 12.260853] complete_and_exit+0x0/0x20
> [ 12.264676] do_group_exit+0x34/0x98
> [ 12.268241] get_signal+0x104/0x668
> [ 12.271718] do_notify_resume+0x2ac/0x380
> [ 12.275716] work_pending+0x8/0x10
> [ 12.279109] SMP: stopping secondary CPUs
> [ 12.283025] Kernel Offset: disabled
> [ 12.286502] CPU features: 0x0002,20806000
> [ 12.290499] Memory Limit: none
> [ 12.293548] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
>
> --
> nvpublic

You don't have any more data? Can you activate DMA-API debug and check
if there is any more info output?

---
Thanks,
Jose Miguel Abreu

2019-07-18 07:49:53

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/17/2019, 19:58:53 (UTC+00:00)

> Let me know if you have any thoughts.

Can you try attached patch ?

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-RX-Descriptors-need-to-be-clean-before-se.patch (1.99 kB)
0001-net-stmmac-RX-Descriptors-need-to-be-clean-before-se.patch

2019-07-18 09:18:23

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 18/07/2019 08:48, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
>
>> Let me know if you have any thoughts.
>
> Can you try attached patch ?

Yes, this did not help. I tried enabling the following, but no more output
is seen.

CONFIG_DMA_API_DEBUG=y
CONFIG_DMA_API_DEBUG_SG=y

Have you tried using NFS on a board with this ethernet controller?

Cheers,
Jon

--
nvpublic

2019-07-19 07:52:28

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/18/2019, 10:16:20 (UTC+00:00)

> Have you tried using NFS on a board with this ethernet controller?

I'm having some issues setting up the NFS server in order to replicate
so this may take some time.

Are you able to add some debug in stmmac_init_rx_buffers() to see what's
the buffer address ?
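
For illustration, such a debug print might look like this (the exact spot
inside stmmac_init_rx_buffers() and the ring index 'i' are assumptions):

	pr_info("%s: buf %d: page=%p dma=%pad\n",
		__func__, i, buf->page, &buf->addr);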

---
Thanks,
Jose Miguel Abreu

2019-07-19 08:38:34

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 19/07/2019 08:51, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
>
>> Have you tried using NFS on a board with this ethernet controller?
>
> I'm having some issues setting up the NFS server in order to replicate
> so this may take some time.

If that's the case, we may wish to consider reverting this for now as it
is preventing our board from booting. Appears to revert cleanly on top
of mainline.

> Are you able to add some debug in stmmac_init_rx_buffers() to see what's
> the buffer address ?

If you have a debug patch you would like me to apply and test with I
can. However, it is best you prepare the patch as maybe I will not dump
the appropriate addresses.

Cheers
Jon

--
nvpublic

2019-07-19 08:45:39

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/19/2019, 09:37:49 (UTC+00:00)

>
> On 19/07/2019 08:51, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/18/2019, 10:16:20 (UTC+00:00)
> >
> >> Have you tried using NFS on a board with this ethernet controller?
> >
> > I'm having some issues setting up the NFS server in order to replicate
> > so this may take some time.
>
> If that's the case, we may wish to consider reverting this for now as it
> is preventing our board from booting. Appears to revert cleanly on top
> of mainline.
>
> > Are you able to add some debug in stmmac_init_rx_buffers() to see what's
> > the buffer address ?
>
> If you have a debug patch you would like me to apply and test with I
> can. However, it is best you prepare the patch as maybe I will not dump
> the appropriate addresses.
>
> Cheers
> Jon
>
> --
> nvpublic

Send me full boot log please.

---
Thanks,
Jose Miguel Abreu

2019-07-19 08:50:37

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 19/07/2019 09:44, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/19/2019, 09:37:49 (UTC+00:00)
>
>>
>> On 19/07/2019 08:51, Jose Abreu wrote:
>>> From: Jon Hunter <[email protected]>
>>> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
>>>
>>>> Have you tried using NFS on a board with this ethernet controller?
>>>
>>> I'm having some issues setting up the NFS server in order to replicate
>>> so this may take some time.
>>
>> If that's the case, we may wish to consider reverting this for now as it
>> is preventing our board from booting. Appears to revert cleanly on top
>> of mainline.
>>
>>> Are you able to add some debug in stmmac_init_rx_buffers() to see what's
>>> the buffer address ?
>>
>> If you have a debug patch you would like me to apply and test with I
>> can. However, it is best you prepare the patch as maybe I will not dump
>> the appropriate addresses.
>>
>> Cheers
>> Jon
>>
>> --
>> nvpublic
>
> Send me full boot log please.

Please see: https://paste.debian.net/1092277/

Cheers
Jon

--
nvpublic

2019-07-19 10:43:14

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/19/2019, 09:49:10 (UTC+00:00)

>
> On 19/07/2019 09:44, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/19/2019, 09:37:49 (UTC+00:00)
> >
> >>
> >> On 19/07/2019 08:51, Jose Abreu wrote:
> >>> From: Jon Hunter <[email protected]>
> >>> Date: Jul/18/2019, 10:16:20 (UTC+00:00)
> >>>
> >>>> Have you tried using NFS on a board with this ethernet controller?
> >>>
> >>> I'm having some issues setting up the NFS server in order to replicate
> >>> so this may take some time.
> >>
> >> If that's the case, we may wish to consider reverting this for now as it
> >> is preventing our board from booting. Appears to revert cleanly on top
> >> of mainline.
> >>
> >>> Are you able to add some debug in stmmac_init_rx_buffers() to see what's
> >>> the buffer address ?
> >>
> >> If you have a debug patch you would like me to apply and test with I
> >> can. However, it is best you prepare the patch as maybe I will not dump
> >> the appropriate addresses.
> >>
> >> Cheers
> >> Jon
> >>
> >> --
> >> nvpublic
> >
> > Send me full boot log please.
>
> Please see: https://paste.debian.net/1092277/
>
> Cheers
> Jon
>
> --
> nvpublic

Thanks. Can you add attached patch and check if WARN is triggered ? And
it would be good to know whether this is boot specific crash or just
doesn't work at all, i.e. not using NFS to mount rootfs and instead
manually configure interface and send/receive packets.

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Add-page-sanity-check.patch (1.36 kB)
0001-net-stmmac-Add-page-sanity-check.patch

2019-07-19 12:31:54

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 19/07/2019 11:25, Jose Abreu wrote:

...

> Thanks. Can you add attached patch and check if WARN is triggered ? And
> it would be good to know whether this is boot specific crash or just
> doesn't work at all, i.e. not using NFS to mount rootfs and instead
> manually configure interface and send/receive packets.

With this patch applied I did not see the WARN trigger.

I booted the board without using NFS and then used dhclient to
bring up the network interface and it appears to be working fine. I can
even mount the NFS share fine. So it does appear to be particular to
using NFS to mount the rootfs.

Cheers
Jon

--
nvpublic

2019-07-19 12:32:29

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jose Abreu <[email protected]>
Date: Jul/19/2019, 11:25:41 (UTC+00:00)

> Thanks. Can you add attached patch and check if WARN is triggered ?

BTW, also add the attached one in this mail. The WARN will probably
never get triggered without it.

Can you also print "buf->addr" after the WARN_ON ?

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Use-kcalloc-instead-of-kmalloc_array.patch (2.19 kB)
0001-net-stmmac-Use-kcalloc-instead-of-kmalloc_array.patch

2019-07-19 13:31:17

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/19/2019, 13:30:10 (UTC+00:00)

> I booted the board without using NFS and then used dhclient to
> bring up the network interface and it appears to be working fine. I can
> even mount the NFS share fine. So it does appear to be particular to
> using NFS to mount the rootfs.

Damn. Can you send me your .config ?

---
Thanks,
Jose Miguel Abreu

2019-07-19 15:15:48

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 19/07/2019 13:28, Jose Abreu wrote:
> From: Jose Abreu <[email protected]>
> Date: Jul/19/2019, 11:25:41 (UTC+00:00)
>
>> Thanks. Can you add attached patch and check if WARN is triggered ?
>
> BTW, also add the attached one in this mail. The WARN will probably
> never get triggered without it.
>
> Can you also print "buf->addr" after the WARN_ON ?

I added this patch, but still no warning.

Cheers
Jon

--
nvpublic

2019-07-19 16:26:29

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 19/07/2019 13:32, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
>
>> I booted the board without using NFS and then used dhclient to
>> bring up the network interface and it appears to be working fine. I can
>> even mount the NFS share fine. So it does appear to be particular to
>> using NFS to mount the rootfs.
>
> Damn. Can you send me your .config ?

Yes no problem. Attached.

Cheers
Jon

--
nvpublic


Attachments:
config (194.11 kB)

2019-07-22 07:42:14

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/19/2019, 14:35:52 (UTC+00:00)

>
> On 19/07/2019 13:32, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> >
> >> I booted the board without using NFS and then used dhclient to
> >> bring up the network interface and it appears to be working fine. I can
> >> even mount the NFS share fine. So it does appear to be particular to
> >> using NFS to mount the rootfs.
> >
> > Damn. Can you send me your .config ?
>
> Yes no problem. Attached.

Can you compile your image without modules (i.e. all built-in) and let
me know if the error still happens ?

---
Thanks,
Jose Miguel Abreu

2019-07-22 09:49:21

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 22/07/2019 08:23, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/19/2019, 14:35:52 (UTC+00:00)
>
>>
>> On 19/07/2019 13:32, Jose Abreu wrote:
>>> From: Jon Hunter <[email protected]>
>>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
>>>
>>>> I booted the board without using NFS and then used dhclient to
>>>> bring up the network interface and it appears to be working fine. I can
>>>> even mount the NFS share fine. So it does appear to be particular to
>>>> using NFS to mount the rootfs.
>>>
>>> Damn. Can you send me your .config ?
>>
>> Yes no problem. Attached.
>
> Can you compile your image without modules (i.e. all built-in) and let
> me know if the error still happens ?

I simply removed the /lib/modules directory from the NFS share and
verified that I still see the same issue. So it is not loading the
modules that is a problem.

Jon

--
nvpublic

2019-07-22 09:52:00

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/22/2019, 10:37:18 (UTC+00:00)

>
> On 22/07/2019 08:23, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/19/2019, 14:35:52 (UTC+00:00)
> >
> >>
> >> On 19/07/2019 13:32, Jose Abreu wrote:
> >>> From: Jon Hunter <[email protected]>
> >>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> >>>
> > >>>> I booted the board without using NFS and then used dhclient to
> >>>> bring up the network interface and it appears to be working fine. I can
> >>>> even mount the NFS share fine. So it does appear to be particular to
> >>>> using NFS to mount the rootfs.
> >>>
> >>> Damn. Can you send me your .config ?
> >>
> >> Yes no problem. Attached.
> >
> > Can you compile your image without modules (i.e. all built-in) and let
> > me know if the error still happens ?
>
> I simply removed the /lib/modules directory from the NFS share and
> verified that I still see the same issue. So it is not loading the
> modules that is a problem.

Well, I meant that loading modules can be an issue, but that's not the way
to verify that.

You need to have all modules built-in so as to prove that no module will
try to be loaded.

Anyway, this is probably not the cause, as you wouldn't even be able to
compile the kernel if you needed a symbol from a module with stmmac
built-in. Kconfig would complain about that.

The other cause could be data corruption in the RX path. Are you able to
send me a packet dump by running wireshark either on the transmitter side
(i.e. the NFS server), or using some kind of switch?

---
Thanks,
Jose Miguel Abreu

2019-07-22 09:58:12

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jose Abreu <[email protected]>
Date: Jul/22/2019, 10:47:44 (UTC+00:00)

> From: Jon Hunter <[email protected]>
> Date: Jul/22/2019, 10:37:18 (UTC+00:00)
>
> >
> > On 22/07/2019 08:23, Jose Abreu wrote:
> > > From: Jon Hunter <[email protected]>
> > > Date: Jul/19/2019, 14:35:52 (UTC+00:00)
> > >
> > >>
> > >> On 19/07/2019 13:32, Jose Abreu wrote:
> > >>> From: Jon Hunter <[email protected]>
> > >>> Date: Jul/19/2019, 13:30:10 (UTC+00:00)
> > >>>
> > >>>> I booted the board without using NFS and then used dhclient to
> > >>>> bring up the network interface and it appears to be working fine. I can
> > >>>> even mount the NFS share fine. So it does appear to be particular to
> > >>>> using NFS to mount the rootfs.
> > >>>
> > >>> Damn. Can you send me your .config ?
> > >>
> > >> Yes no problem. Attached.
> > >
> > > Can you compile your image without modules (i.e. all built-in) and let
> > > me know if the error still happens ?
> >
> > I simply removed the /lib/modules directory from the NFS share and
> > verified that I still see the same issue. So it is not loading the
> > modules that is a problem.
>
> Well, I meant that loading modules can be an issue, but that's not the way
> to verify that.
>
> You need to have all modules built-in so as to prove that no module will
> try to be loaded.
>
> Anyway, this is probably not the cause, as you wouldn't even be able to
> compile the kernel if you needed a symbol from a module with stmmac
> built-in. Kconfig would complain about that.
>
> The other cause could be data corruption in the RX path. Are you able to
> send me a packet dump by running wireshark either on the transmitter side
> (i.e. the NFS server), or using some kind of switch?
>
> ---
> Thanks,
> Jose Miguel Abreu

Also, please add attached patch. You'll get a compiler warning, just
disregard it.

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Debug-print.patch (1.48 kB)
0001-net-stmmac-Debug-print.patch

2019-07-22 12:26:42

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
>
> > Let me know if you have any thoughts.
>
> Can you try attached patch ?
>

The log says someone calls panic() right?
Can we try and figure out where that happens during the stmmac init phase?

Thanks
/Ilias

2019-07-22 12:40:41

by Lars Persson

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
<[email protected]> wrote:
>
> On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> >
> > > Let me know if you have any thoughts.
> >
> > Can you try attached patch ?
> >
>
> The log says someone calls panic() right?
> Can we try and figure out where that happens during the stmmac init phase?
>

The reason for the panic is hidden in this one line of the kernel logs:
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

The init process is killed by SIGSEGV (signal 11 = 0xb).

I would suggest you look for data corruption bugs in the RX path. If
the code is fetched from the NFS mount then a corrupt RX buffer can
trigger a crash in userspace.

/Lars

2019-07-22 12:46:41

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Lars Persson <[email protected]>
Date: Jul/22/2019, 12:11:50 (UTC+00:00)

> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
> <[email protected]> wrote:
> >
> > On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
> > > From: Jon Hunter <[email protected]>
> > > Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> > >
> > > > Let me know if you have any thoughts.
> > >
> > > Can you try attached patch ?
> > >
> >
> > The log says someone calls panic() right?
> > Can we try and figure out where that happens during the stmmac init phase?
> >
>
> The reason for the panic is hidden in this one line of the kernel logs:
> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
>
> The init process is killed by SIGSEGV (signal 11 = 0xb).
>
> I would suggest you look for data corruption bugs in the RX path. If
> the code is fetched from the NFS mount then a corrupt RX buffer can
> trigger a crash in userspace.
>
> /Lars


Jon, I'm not familiar with ARM. Are the buffer addresses being allocated
in a coherent region ? Can you try attached patch which adds full memory
barrier before the sync ?
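
For reference, the experiment being proposed amounts to something like this
hedged sketch (the placement in the RX refill path and the buffer length
are assumptions, not the actual attachment):

	mb();	/* order all prior CPU writes before the sync */
	dma_sync_single_for_device(priv->device, buf->addr,
				   priv->dma_buf_sz, DMA_FROM_DEVICE);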

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Add-memory-barrier.patch (1.36 kB)
0001-net-stmmac-Add-memory-barrier.patch

2019-07-22 12:50:44

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 22/07/2019 12:39, Jose Abreu wrote:
> From: Lars Persson <[email protected]>
> Date: Jul/22/2019, 12:11:50 (UTC+00:00)
>
>> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
>> <[email protected]> wrote:
>>>
>>> On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
>>>> From: Jon Hunter <[email protected]>
>>>> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
>>>>
>>>>> Let me know if you have any thoughts.
>>>>
>>>> Can you try attached patch ?
>>>>
>>>
>>> The log says someone calls panic() right?
>>> Can we try and figure out where that happens during the stmmac init phase?
>>>
>>
>> The reason for the panic is hidden in this one line of the kernel logs:
>> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
>>
>> The init process is killed by SIGSEGV (signal 11 = 0xb).
>>
>> I would suggest you look for data corruption bugs in the RX path. If
>> the code is fetched from the NFS mount then a corrupt RX buffer can
>> trigger a crash in userspace.
>>
>> /Lars
>
>
> Jon, I'm not familiar with ARM. Are the buffer addresses being allocated
> in a coherent region ? Can you try attached patch which adds full memory
> barrier before the sync ?

TBH I am not sure about the buffer addresses either. The attached patch
did not help. Same problem persists.

Jon

--
nvpublic

2019-07-22 14:26:52

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 22/07/2019 10:57, Jose Abreu wrote:

...

> Also, please add attached patch. You'll get a compiler warning, just
> disregard it.

Here you are ...

https://paste.ubuntu.com/p/H9Mvv37vN9/

Cheers
Jon

--
nvpublic

2019-07-22 16:56:42

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/22/2019, 13:05:38 (UTC+00:00)

>
> On 22/07/2019 12:39, Jose Abreu wrote:
> > From: Lars Persson <[email protected]>
> > Date: Jul/22/2019, 12:11:50 (UTC+00:00)
> >
> >> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
> >> <[email protected]> wrote:
> >>>
> >>> On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
> >>>> From: Jon Hunter <[email protected]>
> >>>> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> >>>>
> >>>>> Let me know if you have any thoughts.
> >>>>
> >>>> Can you try attached patch ?
> >>>>
> >>>
> >>> The log says someone calls panic() right?
> > >>> Can we try and figure out where that happens during the stmmac init phase?
> >>>
> >>
> >> The reason for the panic is hidden in this one line of the kernel logs:
> >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> >>
> >> The init process is killed by SIGSEGV (signal 11 = 0xb).
> >>
> >> I would suggest you look for data corruption bugs in the RX path. If
> >> the code is fetched from the NFS mount then a corrupt RX buffer can
> >> trigger a crash in userspace.
> >>
> >> /Lars
> >
> >
> > Jon, I'm not familiar with ARM. Are the buffer addresses being allocated
> > in a coherent region ? Can you try attached patch which adds full memory
> > barrier before the sync ?
>
> TBH I am not sure about the buffer addresses either. The attached patch
> did not help. Same problem persists.

OK. I'm just guessing now at this stage but can you disable SMP ?

We have to narrow down whether this is a coherency issue, but you said that
booting without NFS and then mounting the share manually works ... So,
can you share logs with the same debug prints in this condition in order to
compare?

---
Thanks,
Jose Miguel Abreu

2019-07-23 18:00:19

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jose Abreu <[email protected]>
Date: Jul/22/2019, 15:04:49 (UTC+00:00)

> From: Jon Hunter <[email protected]>
> Date: Jul/22/2019, 13:05:38 (UTC+00:00)
>
> >
> > On 22/07/2019 12:39, Jose Abreu wrote:
> > > From: Lars Persson <[email protected]>
> > > Date: Jul/22/2019, 12:11:50 (UTC+00:00)
> > >
> > >> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
> > >> <[email protected]> wrote:
> > >>>
> > >>> On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
> > >>>> From: Jon Hunter <[email protected]>
> > >>>> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
> > >>>>
> > >>>>> Let me know if you have any thoughts.
> > >>>>
> > >>>> Can you try attached patch ?
> > >>>>
> > >>>
> > >>> The log says someone calls panic() right?
> > >>> Can we try and figure out where that happens during the stmmac init phase?
> > >>>
> > >>
> > >> The reason for the panic is hidden in this one line of the kernel logs:
> > >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> > >>
> > >> The init process is killed by SIGSEGV (signal 11 = 0xb).
> > >>
> > >> I would suggest you look for data corruption bugs in the RX path. If
> > >> the code is fetched from the NFS mount then a corrupt RX buffer can
> > >> trigger a crash in userspace.
> > >>
> > >> /Lars
> > >
> > >
> > > Jon, I'm not familiar with ARM. Are the buffer addresses being allocated
> > > in a coherent region ? Can you try attached patch which adds full memory
> > > barrier before the sync ?
> >
> > TBH I am not sure about the buffer addresses either. The attached patch
> > did not help. Same problem persists.
>
> OK. I'm just guessing now at this stage but can you disable SMP ?
>
> We have to narrow down whether this is a coherency issue, but you said that
> booting without NFS and then mounting the share manually works ... So,
> can you share logs with the same debug prints in this condition in order to
> compare?

Jon, I have one ARM-based board and I can't reproduce your issue, but I
noticed that my buffer addresses are being mapped using SWIOTLB. Can you
disable IOMMU support on your setup and let me know if the problem
persists?

---
Thanks,
Jose Miguel Abreu

2019-07-23 20:03:44

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 09:14, Jose Abreu wrote:
> From: Jose Abreu <[email protected]>
> Date: Jul/22/2019, 15:04:49 (UTC+00:00)
>
>> From: Jon Hunter <[email protected]>
>> Date: Jul/22/2019, 13:05:38 (UTC+00:00)
>>
>>>
>>> On 22/07/2019 12:39, Jose Abreu wrote:
>>>> From: Lars Persson <[email protected]>
>>>> Date: Jul/22/2019, 12:11:50 (UTC+00:00)
>>>>
>>>>> On Mon, Jul 22, 2019 at 12:18 PM Ilias Apalodimas
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> On Thu, Jul 18, 2019 at 07:48:04AM +0000, Jose Abreu wrote:
>>>>>>> From: Jon Hunter <[email protected]>
>>>>>>> Date: Jul/17/2019, 19:58:53 (UTC+00:00)
>>>>>>>
>>>>>>>> Let me know if you have any thoughts.
>>>>>>>
>>>>>>> Can you try attached patch ?
>>>>>>>
>>>>>>
>>>>>> The log says someone calls panic() right?
>>>>>> Can we try and figure out where that happens during the stmmac init phase?
>>>>>>
>>>>>
>>>>> The reason for the panic is hidden in this one line of the kernel logs:
>>>>> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
>>>>>
>>>>> The init process is killed by SIGSEGV (signal 11 = 0xb).
>>>>>
>>>>> I would suggest you look for data corruption bugs in the RX path. If
>>>>> the code is fetched from the NFS mount then a corrupt RX buffer can
>>>>> trigger a crash in userspace.
>>>>>
>>>>> /Lars
>>>>
>>>>
>>>> Jon, I'm not familiar with ARM. Are the buffer addresses being allocated
>>>> in a coherent region ? Can you try attached patch which adds full memory
>>>> barrier before the sync ?
>>>
>>> TBH I am not sure about the buffer addresses either. The attached patch
>>> did not help. Same problem persists.
>>
>> OK. I'm just guessing now at this stage but can you disable SMP ?

I tried limiting the number of CPUs to one by setting 'maxcpus=0' on the
kernel command line. However, this did not help.

>> We have to narrow down whether this is a coherency issue, but you said that
>> booting without NFS and then mounting the share manually works ... So,
>> can you share logs with the same debug prints in this condition in order to
>> compare?
>
> Jon, I have one ARM-based board and I can't reproduce your issue, but I
> noticed that my buffer addresses are being mapped using SWIOTLB. Can you
> disable IOMMU support on your setup and let me know if the problem
> persists?

This appears to be a winner and by disabling the SMMU for the ethernet
controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
this worked! So yes appears to be related to the SMMU being enabled. We
had to enable the SMMU for ethernet recently due to commit
954a03be033c7cef80ddc232e7cbdb17df735663.

Cheers
Jon

--
nvpublic

2019-07-23 20:03:46

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/23/2019, 11:01:24 (UTC+00:00)

> This appears to be a winner and by disabling the SMMU for the ethernet
> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> this worked! So yes appears to be related to the SMMU being enabled. We
> had to enable the SMMU for ethernet recently due to commit
> 954a03be033c7cef80ddc232e7cbdb17df735663.

Finally :)

However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":

+ There are few reasons to allow unmatched stream bypass, and
+ even fewer good ones. If saying YES here breaks your board
+ you should work on fixing your board.

So, how can we fix this ? Is your ethernet DT node marked as
"dma-coherent;" ?

---
Thanks,
Jose Miguel Abreu

2019-07-23 20:05:55

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On 23/07/2019 11:07, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>
>> This appears to be a winner and by disabling the SMMU for the ethernet
>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>> this worked! So yes appears to be related to the SMMU being enabled. We
>> had to enable the SMMU for ethernet recently due to commit
>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>
> Finally :)
>
> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>
> + There are few reasons to allow unmatched stream bypass, and
> + even fewer good ones. If saying YES here breaks your board
> + you should work on fixing your board.
>
> So, how can we fix this ? Is your ethernet DT node marked as
> "dma-coherent;" ?

The first thing to try would be booting the failing setup with
"iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
that makes things seem OK, then the problem is likely related to address
translation; if not, then it's probably time to start looking at nasties
like coherency and ordering, although in principle I wouldn't expect the
SMMU to have too much impact there.

Do you know if the SMMU interrupts are working correctly? If not, it's
possible that an incorrect address or mapping direction could lead to
the DMA transaction just being silently terminated without any fault
indication, which generally presents as inexplicable weirdness (I've
certainly seen that on another platform with the mix of an unsupported
interrupt controller and an 'imperfect' ethernet driver).

Just to confirm, has the original patch been tested with
CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?

Robin.

2019-07-23 20:11:11

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/23/2019, 11:38:33 (UTC+00:00)

>
> On 23/07/2019 11:07, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> >
> >> This appears to be a winner and by disabling the SMMU for the ethernet
> >> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> >> this worked! So yes appears to be related to the SMMU being enabled. We
> >> had to enable the SMMU for ethernet recently due to commit
> >> 954a03be033c7cef80ddc232e7cbdb17df735663.
> >
> > Finally :)
> >
> > However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> >
> > + There are few reasons to allow unmatched stream bypass, and
> > + even fewer good ones. If saying YES here breaks your board
> > + you should work on fixing your board.
> >
> > So, how can we fix this ? Is your ethernet DT node marked as
> > "dma-coherent;" ?
>
> TBH I have no idea. I can't say I fully understand your change or how it
> is breaking things for us.
>
> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
> this is optional, but I am not sure how you determine whether or not
> this should be set.

From my understanding it means that your device / IP DMA accesses are
coherent from the CPU's point of view. I think that will be the case if
the GMAC is not behind any kind of IOMMU in the HW arch.

I don't know about this SMMU, but the source does have some special
conditions when the device is dma-coherent.

---
Thanks,
Jose Miguel Abreu

2019-07-23 20:27:06

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 11:07, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>
>> This appears to be a winner and by disabling the SMMU for the ethernet
>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>> this worked! So yes appears to be related to the SMMU being enabled. We
>> had to enable the SMMU for ethernet recently due to commit
>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>
> Finally :)
>
> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>
> + There are few reasons to allow unmatched stream bypass, and
> + even fewer good ones. If saying YES here breaks your board
> + you should work on fixing your board.
>
> So, how can we fix this ? Is your ethernet DT node marked as
> "dma-coherent;" ?

TBH I have no idea. I can't say I fully understand your change or how it
is breaking things for us.

Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
this is optional, but I am not sure how you determine whether or not
this should be set.

Jon

--
nvpublic

2019-07-23 21:51:31

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Robin Murphy <[email protected]>
Date: Jul/23/2019, 11:29:28 (UTC+00:00)

> On 23/07/2019 11:07, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> >
> >> This appears to be a winner and by disabling the SMMU for the ethernet
> >> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> >> this worked! So yes appears to be related to the SMMU being enabled. We
> >> had to enable the SMMU for ethernet recently due to commit
> >> 954a03be033c7cef80ddc232e7cbdb17df735663.
> >
> > Finally :)
> >
> > However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> >
> > + There are few reasons to allow unmatched stream bypass, and
> > + even fewer good ones. If saying YES here breaks your board
> > + you should work on fixing your board.
> >
> > So, how can we fix this ? Is your ethernet DT node marked as
> > "dma-coherent;" ?
>
> The first thing to try would be booting the failing setup with
> "iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
> that makes things seem OK, then the problem is likely related to address
> translation; if not, then it's probably time to start looking at nasties
> like coherency and ordering, although in principle I wouldn't expect the
> SMMU to have too much impact there.
>
> Do you know if the SMMU interrupts are working correctly? If not, it's
> possible that an incorrect address or mapping direction could lead to
> the DMA transaction just being silently terminated without any fault
> indication, which generally presents as inexplicable weirdness (I've
> certainly seen that on another platform with the mix of an unsupported
> interrupt controller and an 'imperfect' ethernet driver).
>
> Just to confirm, has the original patch been tested with
> CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?

Yes, but neither of my setups has an IOMMU: one is x86 + SWIOTLB and
the other is just coherent from the CPU's point of view.

---
Thanks,
Jose Miguel Abreu

2019-07-23 22:40:23

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 11:49, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/23/2019, 11:38:33 (UTC+00:00)
>
>>
>> On 23/07/2019 11:07, Jose Abreu wrote:
>>> From: Jon Hunter <[email protected]>
>>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>>
>>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>>> had to enable the SMMU for ethernet recently due to commit
>>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>>
>>> Finally :)
>>>
>>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>>
>>> + There are few reasons to allow unmatched stream bypass, and
>>> + even fewer good ones. If saying YES here breaks your board
>>> + you should work on fixing your board.
>>>
>>> So, how can we fix this ? Is your ethernet DT node marked as
>>> "dma-coherent;" ?
>>
>> TBH I have no idea. I can't say I fully understand your change or how it
>> is breaking things for us.
>>
>> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
>> this is optional, but I am not sure how you determine whether or not
>> this should be set.
>
> From my understanding it means that your device / IP DMA accesses are coherent from the CPU's point of view. I think that will be the case if the GMAC is not behind any kind of IOMMU in the HW arch.

I understand what coherency is, I just don't know how you tell if this
implementation of the ethernet controller is coherent or not.

Jon

--
nvpublic

2019-07-23 23:06:27

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 11:29, Robin Murphy wrote:
> On 23/07/2019 11:07, Jose Abreu wrote:
>> From: Jon Hunter <[email protected]>
>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>
>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>> had to enable the SMMU for ethernet recently due to commit
>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>
>> Finally :)
>>
>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>
>> +         There are few reasons to allow unmatched stream bypass, and
>> +         even fewer good ones.  If saying YES here breaks your board
>> +         you should work on fixing your board.
>>
>> So, how can we fix this ? Is your ethernet DT node marked as
>> "dma-coherent;" ?
>
> The first thing to try would be booting the failing setup with
> "iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
> that makes things seem OK, then the problem is likely related to address
> translation; if not, then it's probably time to start looking at nasties
> like coherency and ordering, although in principle I wouldn't expect the
> SMMU to have too much impact there.

Setting "iommu.passthrough=1" works for me. However, I am not sure where
to go from here, so any ideas you have would be great.

> Do you know if the SMMU interrupts are working correctly? If not, it's
> possible that an incorrect address or mapping direction could lead to
> the DMA transaction just being silently terminated without any fault
> indication, which generally presents as inexplicable weirdness (I've
> certainly seen that on another platform with the mix of an unsupported
> interrupt controller and an 'imperfect' ethernet driver).

If I simply remove the iommu node for the ethernet controller, then I
see lots of ...

[ 6.296121] arm-smmu 12000000.iommu: Unexpected global fault, this could be serious
[ 6.296125] arm-smmu 12000000.iommu: GFSR 0x00000002, GFSYNR0 0x00000000, GFSYNR1 0x00000014, GFSYNR2 0x00000000

So I assume that this is triggering the SMMU interrupt correctly.

> Just to confirm, has the original patch been tested with
> CONFIG_DMA_API_DEBUG to rule out any high-level mishaps?
Yes, one of the first things we tried, but it did not bear any fruit.

Cheers
Jon

--
nvpublic

2019-07-23 23:15:33

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/23/2019, 12:58:55 (UTC+00:00)

>
> On 23/07/2019 11:49, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/23/2019, 11:38:33 (UTC+00:00)
> >
> >>
> >> On 23/07/2019 11:07, Jose Abreu wrote:
> >>> From: Jon Hunter <[email protected]>
> >>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
> >>>
> >>>> This appears to be a winner and by disabling the SMMU for the ethernet
> >>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
> >>>> this worked! So yes appears to be related to the SMMU being enabled. We
> >>>> had to enable the SMMU for ethernet recently due to commit
> >>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
> >>>
> >>> Finally :)
> >>>
> >>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
> >>>
> >>> + There are few reasons to allow unmatched stream bypass, and
> >>> + even fewer good ones. If saying YES here breaks your board
> >>> + you should work on fixing your board.
> >>>
> >>> So, how can we fix this ? Is your ethernet DT node marked as
> >>> "dma-coherent;" ?
> >>
> >> TBH I have no idea. I can't say I fully understand your change or how it
> >> is breaking things for us.
> >>
> >> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
> >> this is optional, but I am not sure how you determine whether or not
> >> this should be set.
> >
> > From my understanding it means that your device / IP DMA accesses are coherent from the CPU's point of view. I think that will be the case if the GMAC is not behind any kind of IOMMU in the HW arch.
>
> I understand what coherency is, I just don't know how you tell if this
> implementation of the ethernet controller is coherent or not.

Do you have any detailed diagram of your HW? Such as block / IP
connections, address-space wiring, ...

---
Thanks,
Jose Miguel Abreu

2019-07-24 02:17:14

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 13:51, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/23/2019, 12:58:55 (UTC+00:00)
>
>>
>> On 23/07/2019 11:49, Jose Abreu wrote:
>>> From: Jon Hunter <[email protected]>
>>> Date: Jul/23/2019, 11:38:33 (UTC+00:00)
>>>
>>>>
>>>> On 23/07/2019 11:07, Jose Abreu wrote:
>>>>> From: Jon Hunter <[email protected]>
>>>>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>>>>
>>>>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>>>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>>>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>>>>> had to enable the SMMU for ethernet recently due to commit
>>>>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>>>>
>>>>> Finally :)
>>>>>
>>>>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>>>>
>>>>> + There are few reasons to allow unmatched stream bypass, and
>>>>> + even fewer good ones. If saying YES here breaks your board
>>>>> + you should work on fixing your board.
>>>>>
>>>>> So, how can we fix this ? Is your ethernet DT node marked as
>>>>> "dma-coherent;" ?
>>>>
>>>> TBH I have no idea. I can't say I fully understand your change or how it
>>>> is breaking things for us.
>>>>
>>>> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
>>>> this is optional, but I am not sure how you determine whether or not
>>>> this should be set.
>>>
>>> From my understanding it means that your device / IP DMA accesses are coherent from the CPU's point of view. I think that will be the case if the GMAC is not behind any kind of IOMMU in the HW arch.
>>
>> I understand what coherency is, I just don't know how you tell if this
>> implementation of the ethernet controller is coherent or not.
>
> Do you have any detailed diagram of your HW? Such as block / IP
> connections, address-space wiring, ...

Yes, this can be found in the Tegra X2 Technical Reference Manual [0].
Unfortunately, you need to create an account to download it.

Jon

[0] https://developer.nvidia.com/embedded/dlc/parker-series-trm

--
nvpublic

2019-07-24 02:17:38

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On 23/07/2019 13:09, Jon Hunter wrote:
>
> On 23/07/2019 11:29, Robin Murphy wrote:
>> On 23/07/2019 11:07, Jose Abreu wrote:
>>> From: Jon Hunter <[email protected]>
>>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>>
>>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>>> had to enable the SMMU for ethernet recently due to commit
>>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>>
>>> Finally :)
>>>
>>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>>
>>> +         There are few reasons to allow unmatched stream bypass, and
>>> +         even fewer good ones.  If saying YES here breaks your board
>>> +         you should work on fixing your board.
>>>
>>> So, how can we fix this ? Is your ethernet DT node marked as
>>> "dma-coherent;" ?
>>
>> The first thing to try would be booting the failing setup with
>> "iommu.passthrough=1" (or using CONFIG_IOMMU_DEFAULT_PASSTHROUGH) - if
>> that makes things seem OK, then the problem is likely related to address
>> translation; if not, then it's probably time to start looking at nasties
>> like coherency and ordering, although in principle I wouldn't expect the
>> SMMU to have too much impact there.
>
> Setting "iommu.passthrough=1" works for me. However, I am not sure where
> to go from here, so any ideas you have would be great.

OK, so that really implies it's something to do with the addresses. From
a quick skim of the patch, I'm wondering if it's possible for buf->addr
and buf->page->dma_addr to get out-of-sync at any point. The nature of
the IOVA allocator makes it quite likely that a stale DMA address will
have been reused for a new mapping, so putting the wrong address in a
descriptor may well mean the DMA still ends up hitting a valid
translation, but which is now pointing to a different page.
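
A minimal sketch of a check for that theory -- assuming the
stmmac_rx_buffer layout from this patch (buf->page, buf->addr) and that
page_pool keeps the mapping in page->dma_addr -- would be:

	/* e.g. in stmmac_rx_refill(), before the descriptor is written */
	if (buf->page && buf->addr != buf->page->dma_addr)
		netdev_warn(priv->dev, "stale DMA addr %pad vs %pad\n",
			    &buf->addr, &buf->page->dma_addr);

which would at least confirm whether the two ever diverge.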

>> Do you know if the SMMU interrupts are working correctly? If not, it's
>> possible that an incorrect address or mapping direction could lead to
>> the DMA transaction just being silently terminated without any fault
>> indication, which generally presents as inexplicable weirdness (I've
>> certainly seen that on another platform with the mix of an unsupported
>> interrupt controller and an 'imperfect' ethernet driver).
>
> If I simply remove the iommu node for the ethernet controller, then I
> see lots of ...
>
> [ 6.296121] arm-smmu 12000000.iommu: Unexpected global fault, this could be serious
> [ 6.296125] arm-smmu 12000000.iommu: GFSR 0x00000002, GFSYNR0 0x00000000, GFSYNR1 0x00000014, GFSYNR2 0x00000000
>
> So I assume that this is triggering the SMMU interrupt correctly.

According to tegra186.dtsi it appears you're using the MMU-500 combined
interrupt, so if global faults are being delivered then context faults
*should* also, but I'd be inclined to try a quick hack of the relevant
stmmac_desc_ops::set_addr callback to write some bogus unmapped address
just to make sure arm_smmu_context_fault() then screams as expected, and
we're not missing anything else.
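
Something like this throwaway hack should do it -- assuming the DWMAC4
core, where the callback would be dwmac4_set_addr() (adjust for
whichever variant is actually in use):

	static void dwmac4_set_addr(struct dma_desc *p, dma_addr_t addr)
	{
	-	p->des0 = cpu_to_le32(lower_32_bits(addr));
	-	p->des1 = cpu_to_le32(upper_32_bits(addr));
	+	p->des0 = 0;	/* bogus, unmapped IOVA */
	+	p->des1 = 0;
	}

so every programmed buffer address becomes an IOVA with no valid
translation, and a context fault should fire immediately.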

Robin.

2019-07-24 02:26:58

by David Miller

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Tue, 23 Jul 2019 13:09:00 +0100

> Setting "iommu.passthrough=1" works for me. However, I am not sure where
> to go from here, so any ideas you have would be great.

Then definitely we are accessing outside of a valid IOMMU mapping due
to the page pool support changes.

Such a problem should be spotted with swiotlb debugging enabled.
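
The usual knobs for that (a sketch; pick whichever applies):

	CONFIG_DMA_API_DEBUG=y		# DMA layer validates map/unmap/sync
	swiotlb=force			# kernel command line: force bouncing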

2019-07-24 02:33:30

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 23/07/2019 14:19, Robin Murphy wrote:

...

>>> Do you know if the SMMU interrupts are working correctly? If not, it's
>>> possible that an incorrect address or mapping direction could lead to
>>> the DMA transaction just being silently terminated without any fault
>>> indication, which generally presents as inexplicable weirdness (I've
>>> certainly seen that on another platform with the mix of an unsupported
>>> interrupt controller and an 'imperfect' ethernet driver).
>>
>> If I simply remove the iommu node for the ethernet controller, then I
>> see lots of ...
>>
>> [    6.296121] arm-smmu 12000000.iommu: Unexpected global fault, this
>> could be serious
>> [    6.296125] arm-smmu 12000000.iommu:         GFSR 0x00000002,
>> GFSYNR0 0x00000000, GFSYNR1 0x00000014, GFSYNR2 0x00000000
>>
>> So I assume that this is triggering the SMMU interrupt correctly.
>
> According to tegra186.dtsi it appears you're using the MMU-500 combined
> interrupt, so if global faults are being delivered then context faults
> *should* also, but I'd be inclined to try a quick hack of the relevant
> stmmac_desc_ops::set_addr callback to write some bogus unmapped address
> just to make sure arm_smmu_context_fault() then screams as expected, and
> we're not missing anything else.

I hacked the driver and forced the address to zero for a test and
in doing so I see ...

[ 10.440072] arm-smmu 12000000.iommu: Unhandled context fault: fsr=0x402, iova=0x00000000, fsynr=0x1c0011, cbfrsynra=0x14, cb=0

So looks like the interrupts are working AFAICT.

Cheers
Jon

--
nvpublic

2019-07-24 08:55:23

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Hi David,

> From: Jon Hunter <[email protected]>
> Date: Tue, 23 Jul 2019 13:09:00 +0100
>
> > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > to go from here, so any ideas you have would be great.
>
> Then definitely we are accessing outside of a valid IOMMU mapping due
> to the page pool support changes.

Yes. On the netsec driver I did test with and without the SMMU to make
sure I am not breaking anything.
Since we map the whole page in the API, I think some offset in the
driver causes that. In any case I'll have another look at page_pool to
make sure we are not missing anything.

>
> Such a problem should be spotted with swiotlb debugging enabled.

Thanks
/Ilias

2019-07-24 10:11:14

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Jose,
> From: Ilias Apalodimas <[email protected]>
> Date: Jul/24/2019, 09:54:27 (UTC+00:00)
>
> > Hi David,
> >
> > > From: Jon Hunter <[email protected]>
> > > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > >
> > > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > > to go from here, so any ideas you have would be great.
> > >
> > > Then definitely we are accessing outside of a valid IOMMU mapping due
> > > to the page pool support changes.
> >
> > Yes. On the netsec driver I did test with and without the SMMU to make
> > sure I am not breaking anything.
> > Since we map the whole page in the API, I think some offset in the
> > driver causes that. In any case I'll have another look at page_pool to
> > make sure we are not missing anything.
>
> Ilias, can it be due to this:
>
> stmmac_main.c:
> pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
>
> page_pool.c:
> dma = dma_map_page_attrs(pool->p.dev, page, 0,
> (PAGE_SIZE << pool->p.order),
> pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
>
> "order", will be at least 1 and then mapping the page can cause overlap
> ?

Well, the API is calling the map with the correct page, page offset (0)
and size, right? I don't see any overlap here. Aren't we mapping what we
allocate?

Why do you need higher-order pages? Jumbo frames? Can we do a quick test
with the order being 0?

Thanks,
/Ilias

2019-07-24 11:12:33

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 24/07/2019 11:04, Jose Abreu wrote:

...

> Jon, I was able to replicate (at some level) your setup:
>
> # dmesg | grep -i arm-smmu
> [ 1.337322] arm-smmu 70040000.iommu: probing hardware
> configuration...
> [ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
> [ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
> [ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
> [ 1.337354] arm-smmu 70040000.iommu: nested translation
> [ 1.337363] arm-smmu 70040000.iommu: stream matching with 128
> register groups
> [ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0
> stage-2 only)
> [ 1.337383] arm-smmu 70040000.iommu: Supported page sizes:
> 0x61311000
> [ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA ->
> 48-bit IPA
> [ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA ->
> 48-bit PA
>
> # dmesg | grep -i stmmac
> [ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
> [ 1.344233] stmmaceth 70000000.ethernet: no reset control found
> [ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID:
> 0x51
> [ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
> [ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register
> supported
> [ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine
> supported
> [ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion
> supported
> [ 1.348320] stmmaceth 70000000.ethernet: TSO supported
> [ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW
> Watchdog Timer
> [ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
> [ 1.348409] libphy: stmmac: probed
> [ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01]
> driver [Generic PHY]
> [ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported
> 00,00000000,000062ff advertising 00,00000000,000062ff
> [ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features
> support found
> [ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced
> Timestamp supported
> [ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
> [ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for
> phy/gmii link mode
> [ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0
> an=1
> [ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up
> gmii/1Gbps/Full
> [ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
> [ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full
> - flow control rx/tx
>
> The only missing point is the NFS boot that I can't replicate with this
> setup. But I did some sanity checks:
>
> Remote Endpoint:
> # dd if=/dev/urandom of=output.dat bs=128M count=1
> # nc -c 192.168.0.2 1234 < output.dat
> # md5sum output.dat
> fde9e0818281836e4fc0edfede2b8762 output.dat
>
> DUT:
> # nc -l -c -p 1234 > output.dat
> # md5sum output.dat
> fde9e0818281836e4fc0edfede2b8762 output.dat

On my setup, if I do not use NFS to mount the rootfs, but then manually
mount the NFS share after booting, I do not see any problems reading or
writing to files on the share. So I am not sure if it is some sort of
race that is occurring when mounting the NFS share on boot. It is 100%
reproducible when using NFS for the root file-system.

I am using the Jetson TX2 devkit [0] to test this.

Cheers
Jon

[0] https://developer.nvidia.com/embedded/jetson-tx2-developer-kit

--
nvpublic

2019-07-24 11:35:59

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/24/2019, 12:10:47 (UTC+00:00)

>
> On 24/07/2019 11:04, Jose Abreu wrote:
>
> ...
>
> > Jon, I was able to replicate (at some level) your setup:
> >
> > # dmesg | grep -i arm-smmu
> > [ 1.337322] arm-smmu 70040000.iommu: probing hardware
> > configuration...
> > [ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
> > [ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
> > [ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
> > [ 1.337354] arm-smmu 70040000.iommu: nested translation
> > [ 1.337363] arm-smmu 70040000.iommu: stream matching with 128
> > register groups
> > [ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0
> > stage-2 only)
> > [ 1.337383] arm-smmu 70040000.iommu: Supported page sizes:
> > 0x61311000
> > [ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA ->
> > 48-bit IPA
> > [ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA ->
> > 48-bit PA
> >
> > # dmesg | grep -i stmmac
> > [ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
> > [ 1.344233] stmmaceth 70000000.ethernet: no reset control found
> > [ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID:
> > 0x51
> > [ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
> > [ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register
> > supported
> > [ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine
> > supported
> > [ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion
> > supported
> > [ 1.348320] stmmaceth 70000000.ethernet: TSO supported
> > [ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW
> > Watchdog Timer
> > [ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
> > [ 1.348409] libphy: stmmac: probed
> > [ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01]
> > driver [Generic PHY]
> > [ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported
> > 00,00000000,000062ff advertising 00,00000000,000062ff
> > [ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features
> > support found
> > [ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced
> > Timestamp supported
> > [ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
> > [ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for
> > phy/gmii link mode
> > [ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> > mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0
> > an=1
> > [ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up
> > gmii/1Gbps/Full
> > [ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> > mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
> > [ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full
> > - flow control rx/tx
> >
> > The only missing point is the NFS boot that I can't replicate with this
> > setup. But I did some sanity checks:
> >
> > Remote Endpoint:
> > # dd if=/dev/urandom of=output.dat bs=128M count=1
> > # nc -c 192.168.0.2 1234 < output.dat
> > # md5sum output.dat
> > fde9e0818281836e4fc0edfede2b8762 output.dat
> >
> > DUT:
> > # nc -l -c -p 1234 > output.dat
> > # md5sum output.dat
> > fde9e0818281836e4fc0edfede2b8762 output.dat
>
> On my setup, if I do not use NFS to mount the rootfs, but then manually
> mount the NFS share after booting, I do not see any problems reading or
> writing to files on the share. So I am not sure if it is some sort of
> race that is occurring when mounting the NFS share on boot. It is 100%
> reproducible when using NFS for the root file-system.

I don't understand how there can be corruption then, unless the IP AXI
parameters are misconfigured, which can lead to sporadic undefined
behavior.

These prints from your logs:
[ 14.579392] Run /init as init process
/init: line 58: chmod: command not found
[ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
[ 10:22:46 ] Root device found: nfs
[ 10:22:46 ] Ethernet interfaces: eth0
[ 10:22:46 ] IP Address: 10.21.140.41

Where are they coming from ? Do you have any extra init script ?

---
Thanks,
Jose Miguel Abreu

2019-07-24 11:59:14

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 24/07/2019 12:34, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/24/2019, 12:10:47 (UTC+00:00)
>
>>
>> On 24/07/2019 11:04, Jose Abreu wrote:
>>
>> ...
>>
>>> Jon, I was able to replicate (at some level) your setup:
>>>
>>> # dmesg | grep -i arm-smmu
>>> [ 1.337322] arm-smmu 70040000.iommu: probing hardware
>>> configuration...
>>> [ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
>>> [ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
>>> [ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
>>> [ 1.337354] arm-smmu 70040000.iommu: nested translation
>>> [ 1.337363] arm-smmu 70040000.iommu: stream matching with 128
>>> register groups
>>> [ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0
>>> stage-2 only)
>>> [ 1.337383] arm-smmu 70040000.iommu: Supported page sizes:
>>> 0x61311000
>>> [ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA ->
>>> 48-bit IPA
>>> [ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA ->
>>> 48-bit PA
>>>
>>> # dmesg | grep -i stmmac
>>> [ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
>>> [ 1.344233] stmmaceth 70000000.ethernet: no reset control found
>>> [ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID:
>>> 0x51
>>> [ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
>>> [ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register
>>> supported
>>> [ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine
>>> supported
>>> [ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion
>>> supported
>>> [ 1.348320] stmmaceth 70000000.ethernet: TSO supported
>>> [ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW
>>> Watchdog Timer
>>> [ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
>>> [ 1.348409] libphy: stmmac: probed
>>> [ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01]
>>> driver [Generic PHY]
>>> [ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported
>>> 00,00000000,000062ff advertising 00,00000000,000062ff
>>> [ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features
>>> support found
>>> [ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced
>>> Timestamp supported
>>> [ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
>>> [ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for
>>> phy/gmii link mode
>>> [ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
>>> mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0
>>> an=1
>>> [ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up
>>> gmii/1Gbps/Full
>>> [ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
>>> mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
>>> [ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full
>>> - flow control rx/tx
>>>
>>> The only missing point is the NFS boot that I can't replicate with this
>>> setup. But I did some sanity checks:
>>>
>>> Remote Endpoint:
>>> # dd if=/dev/urandom of=output.dat bs=128M count=1
>>> # nc -c 192.168.0.2 1234 < output.dat
>>> # md5sum output.dat
>>> fde9e0818281836e4fc0edfede2b8762 output.dat
>>>
>>> DUT:
>>> # nc -l -c -p 1234 > output.dat
>>> # md5sum output.dat
>>> fde9e0818281836e4fc0edfede2b8762 output.dat
>>
>> On my setup, if I do not use NFS to mount the rootfs, but then manually
>> mount the NFS share after booting, I do not see any problems reading or
>> writing to files on the share. So I am not sure if it is some sort of
>> race that is occurring when mounting the NFS share on boot. It is 100%
>> reproducible when using NFS for the root file-system.
>
> I don't understand how there can be corruption then, unless the IP AXI
> parameters are misconfigured, which can lead to sporadic undefined
> behavior.
>
> These prints from your logs:
> [ 14.579392] Run /init as init process
> /init: line 58: chmod: command not found
> [ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
> [ 10:22:46 ] Root device found: nfs
> [ 10:22:46 ] Ethernet interfaces: eth0
> [ 10:22:46 ] IP Address: 10.21.140.41
>
> Where are they coming from ? Do you have any extra init script ?

By default there is an initial ramdisk that is loaded first and then the
rootfs is mounted over NFS. However, even if I remove this ramdisk and
directly mount the rootfs via NFS without it, the problem persists. So I
don't see any issue with the ramdisk and, what's more, we have been
using this for a long, long time. Nothing has changed here.

Jon

--
nvpublic

2019-07-24 12:37:04

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Ilias Apalodimas <[email protected]>
Date: Jul/24/2019, 09:54:27 (UTC+00:00)

> Hi David,
>
> > From: Jon Hunter <[email protected]>
> > Date: Tue, 23 Jul 2019 13:09:00 +0100
> >
> > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > to go from here, so any ideas you have would be great.
> >
> > Then definitely we are accessing outside of a valid IOMMU mapping due
> > to the page pool support changes.
>
> Yes. On the netsec driver I did test with and without the SMMU to make
> sure I am not breaking anything.
> Since we map the whole page in the API, I think some offset in the
> driver causes that. In any case I'll have another look at page_pool to
> make sure we are not missing anything.

Ilias, can it be due to this:

stmmac_main.c:
pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);

page_pool.c:
dma = dma_map_page_attrs(pool->p.dev, page, 0,
(PAGE_SIZE << pool->p.order),
pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);

"order", will be at least 1 and then mapping the page can cause overlap
?

---
Thanks,
Jose Miguel Abreu

2019-07-24 12:39:14

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Ilias Apalodimas <[email protected]>
Date: Jul/24/2019, 10:53:10 (UTC+00:00)

> Jose,
> > From: Ilias Apalodimas <[email protected]>
> > Date: Jul/24/2019, 09:54:27 (UTC+00:00)
> >
> > > Hi David,
> > >
> > > > From: Jon Hunter <[email protected]>
> > > > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > > >
> > > > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > > > to go from here, so any ideas you have would be great.
> > > >
> > > > Then definitely we are accessing outside of a valid IOMMU mapping due
> > > > to the page pool support changes.
> > >
> > > Yes. On the netsec driver I did test with and without the SMMU to make
> > > sure I am not breaking anything.
> > > Since we map the whole page in the API, I think some offset in the
> > > driver causes that. In any case I'll have another look at page_pool to
> > > make sure we are not missing anything.
> >
> > Ilias, can it be due to this:
> >
> > stmmac_main.c:
> > pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> >
> > page_pool.c:
> > dma = dma_map_page_attrs(pool->p.dev, page, 0,
> > (PAGE_SIZE << pool->p.order),
> > pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> >
> > "order", will be at least 1 and then mapping the page can cause overlap
> > ?
>
> Well, the API is calling the map with the correct page, page offset (0)
> and size, right? I don't see any overlap here. Aren't we mapping what we
> allocate?
>
> Why do you need higher-order pages? Jumbo frames? Can we do a quick test
> with the order being 0?

Yes, it's for Jumbo frames, which can be as large as 16k.

From Jon's logs it can be seen that buffers are 8k but frames are 1500
max, so it is using order = 1.
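
Worked through (assuming 4 KiB pages and the default ~1536-byte buffer
size for a 1500 MTU):

	DIV_ROUND_UP(1536, 4096) = 1	/* used as the page-pool "order" */
	PAGE_SIZE << 1 = 8192		/* so each buffer maps 8 KiB */

i.e. the formula computes something like a page count but is used as an
order (a power of two), so it over-allocates; get_order(dma_buf_sz)
would give 0 here.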

Jon, I was able to replicate (at some level) your setup:

# dmesg | grep -i arm-smmu
[ 1.337322] arm-smmu 70040000.iommu: probing hardware
configuration...
[ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
[ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
[ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
[ 1.337354] arm-smmu 70040000.iommu: nested translation
[ 1.337363] arm-smmu 70040000.iommu: stream matching with 128
register groups
[ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0
stage-2 only)
[ 1.337383] arm-smmu 70040000.iommu: Supported page sizes:
0x61311000
[ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA ->
48-bit IPA
[ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA ->
48-bit PA

# dmesg | grep -i stmmac
[ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
[ 1.344233] stmmaceth 70000000.ethernet: no reset control found
[ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID:
0x51
[ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
[ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register
supported
[ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine
supported
[ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion
supported
[ 1.348320] stmmaceth 70000000.ethernet: TSO supported
[ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW
Watchdog Timer
[ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
[ 1.348409] libphy: stmmac: probed
[ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01]
driver [Generic PHY]
[ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported
00,00000000,000062ff advertising 00,00000000,000062ff
[ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features
support found
[ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced
Timestamp supported
[ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
[ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for
phy/gmii link mode
[ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0
an=1
[ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up
gmii/1Gbps/Full
[ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
[ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full
- flow control rx/tx

The only missing point is the NFS boot that I can't replicate with this
setup. But I did some sanity checks:

Remote Endpoint:
# dd if=/dev/urandom of=output.dat bs=128M count=1
# nc -c 192.168.0.2 1234 < output.dat
# md5sum output.dat
fde9e0818281836e4fc0edfede2b8762 output.dat

DUT:
# nc -l -c -p 1234 > output.dat
# md5sum output.dat
fde9e0818281836e4fc0edfede2b8762 output.dat

---
Thanks,
Jose Miguel Abreu

2019-07-24 12:39:14

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On 23/07/2019 22:39, Jon Hunter wrote:
>
> On 23/07/2019 14:19, Robin Murphy wrote:
>
> ...
>
>>>> Do you know if the SMMU interrupts are working correctly? If not, it's
>>>> possible that an incorrect address or mapping direction could lead to
>>>> the DMA transaction just being silently terminated without any fault
>>>> indication, which generally presents as inexplicable weirdness (I've
>>>> certainly seen that on another platform with the mix of an unsupported
>>>> interrupt controller and an 'imperfect' ethernet driver).
>>>
>>> If I simply remove the iommu node for the ethernet controller, then I
>>> see lots of ...
>>>
>>> [    6.296121] arm-smmu 12000000.iommu: Unexpected global fault, this
>>> could be serious
>>> [    6.296125] arm-smmu 12000000.iommu:         GFSR 0x00000002,
>>> GFSYNR0 0x00000000, GFSYNR1 0x00000014, GFSYNR2 0x00000000
>>>
>>> So I assume that this is triggering the SMMU interrupt correctly.
>>
>> According to tegra186.dtsi it appears you're using the MMU-500 combined
>> interrupt, so if global faults are being delivered then context faults
>> *should* also, but I'd be inclined to try a quick hack of the relevant
>> stmmac_desc_ops::set_addr callback to write some bogus unmapped address
>> just to make sure arm_smmu_context_fault() then screams as expected, and
>> we're not missing anything else.
>
> I hacked the driver and forced the address to zero for a test and
> in doing so I see ...
>
> [ 10.440072] arm-smmu 12000000.iommu: Unhandled context fault: fsr=0x402, iova=0x00000000, fsynr=0x1c0011, cbfrsynra=0x14, cb=0
>
> So looks like the interrupts are working AFAICT.

OK, that's good, thanks for confirming. Unfortunately that now leaves us
with the challenge of figuring out how things are managing to go wrong
*without* ever faulting... :)

I wonder if we can provoke the failure on non-IOMMU platforms with
"swiotlb=force" - I have a few boxes I could potentially test that on,
but sadly forgot my plan to bring one with me this morning.

Robin.

2019-07-25 12:33:22

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/24/2019, 12:58:15 (UTC+00:00)

>
> On 24/07/2019 12:34, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/24/2019, 12:10:47 (UTC+00:00)
> >
> >>
> >> On 24/07/2019 11:04, Jose Abreu wrote:
> >>
> >> ...
> >>
> >>> Jon, I was able to replicate (at some level) your setup:
> >>>
> >>> # dmesg | grep -i arm-smmu
> >>> [ 1.337322] arm-smmu 70040000.iommu: probing hardware
> >>> configuration...
> >>> [ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
> >>> [ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
> >>> [ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
> >>> [ 1.337354] arm-smmu 70040000.iommu: nested translation
> >>> [ 1.337363] arm-smmu 70040000.iommu: stream matching with 128
> >>> register groups
> >>> [ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0
> >>> stage-2 only)
> >>> [ 1.337383] arm-smmu 70040000.iommu: Supported page sizes:
> >>> 0x61311000
> >>> [ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA ->
> >>> 48-bit IPA
> >>> [ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA ->
> >>> 48-bit PA
> >>>
> >>> # dmesg | grep -i stmmac
> >>> [ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
> >>> [ 1.344233] stmmaceth 70000000.ethernet: no reset control found
> >>> [ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID:
> >>> 0x51
> >>> [ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
> >>> [ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register
> >>> supported
> >>> [ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine
> >>> supported
> >>> [ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion
> >>> supported
> >>> [ 1.348320] stmmaceth 70000000.ethernet: TSO supported
> >>> [ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW
> >>> Watchdog Timer
> >>> [ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
> >>> [ 1.348409] libphy: stmmac: probed
> >>> [ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01]
> >>> driver [Generic PHY]
> >>> [ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported
> >>> 00,00000000,000062ff advertising 00,00000000,000062ff
> >>> [ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features
> >>> support found
> >>> [ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced
> >>> Timestamp supported
> >>> [ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
> >>> [ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for
> >>> phy/gmii link mode
> >>> [ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> >>> mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0
> >>> an=1
> >>> [ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up
> >>> gmii/1Gbps/Full
> >>> [ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config:
> >>> mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
> >>> [ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full
> >>> - flow control rx/tx
> >>>
> >>> The only missing point is the NFS boot that I can't replicate with this
> >>> setup. But I did some sanity checks:
> >>>
> >>> Remote Endpoint:
> >>> # dd if=/dev/urandom of=output.dat bs=128M count=1
> >>> # nc -c 192.168.0.2 1234 < output.dat
> >>> # md5sum output.dat
> >>> fde9e0818281836e4fc0edfede2b8762 output.dat
> >>>
> >>> DUT:
> >>> # nc -l -c -p 1234 > output.dat
> >>> # md5sum output.dat
> >>> fde9e0818281836e4fc0edfede2b8762 output.dat
> >>
> >> On my setup, if I do not use NFS to mount the rootfs, but then manually
> >> mount the NFS share after booting, I do not see any problems reading or
> >> writing to files on the share. So I am not sure if it is some sort of
> >> race that is occurring when mounting the NFS share on boot. It is 100%
> >> reproducible when using NFS for the root file-system.
> >
> > I don't understand how there can be corruption then, unless the IP AXI
> > parameters are misconfigured, which can lead to sporadic undefined
> > behavior.
> >
> > These prints from your logs:
> > [ 14.579392] Run /init as init process
> > /init: line 58: chmod: command not found
> > [ 10:22:46 ] L4T-INITRD Build DATE: Mon Jul 22 10:22:46 UTC 2019
> > [ 10:22:46 ] Root device found: nfs
> > [ 10:22:46 ] Ethernet interfaces: eth0
> > [ 10:22:46 ] IP Address: 10.21.140.41
> >
> > Where are they coming from ? Do you have any extra init script ?
>
> By default there is an initial ramdisk that is loaded first and then the
> rootfs is mounted over NFS. However, even if I remove this ramdisk and
> directly mount the rootfs via NFS without it, the problem persists. So I
> don't see any issue with the ramdisk and, what's more, we have been
> using this for a long, long time. Nothing has changed here.

OK. Can you please test what Ilias mentioned ?

Basically you can hard-code the order to 0 in
alloc_dma_rx_desc_resources():
- pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+ pp_params.order = 0;

Unless you use an MTU > PAGE_SIZE.

---
Thanks,
Jose Miguel Abreu

2019-07-25 13:47:58

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

Hi Jon, Jose,
On Thu, Jul 25, 2019 at 10:45:46AM +0100, Jon Hunter wrote:
>
> On 25/07/2019 08:44, Jose Abreu wrote:
>
> ...
>
> > OK. Can you please test what Ilias mentioned ?
> >
> > Basically you can hard-code the order to 0 in
> > alloc_dma_rx_desc_resources():
> > - pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> > + pp_params.order = 0;
> >
> > Unless you use an MTU > PAGE_SIZE.
>
> I made the change but unfortunately the issue persists.

Yeah, TBH I didn't expect this to fix it, since I think the mappings are
fine, but it never hurts to verify.
@Jose: Can we add some debugging prints to the driver?
Ideally the pages the API allocates (on init), the page that the driver
is trying to use before the crash, and the size of the packet (right
from the device descriptor). Maybe this will tell us where the erroneous
access is.

Thanks
/Ilias

2019-07-25 14:10:16

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 03/07/2019 11:37, Jose Abreu wrote:
> Mapping and unmapping the DMA region is a high bottleneck in the stmmac
> driver, especially in the RX path.
>
> This commit introduces support for the Page Pool API and uses it in all
> RX queues. With this change, we get more stable throughput and some
> increase of bandwidth with iperf:
> - MAC1000 - 950 Mbps
> - XGMAC: 9.22 Gbps
>
> Signed-off-by: Jose Abreu <[email protected]>
> Cc: Joao Pinto <[email protected]>
> Cc: David S. Miller <[email protected]>
> Cc: Giuseppe Cavallaro <[email protected]>
> Cc: Alexandre Torgue <[email protected]>
> Cc: Maxime Coquelin <[email protected]>
> Cc: Maxime Ripard <[email protected]>
> Cc: Chen-Yu Tsai <[email protected]>
> ---
> drivers/net/ethernet/stmicro/stmmac/Kconfig | 1 +
> drivers/net/ethernet/stmicro/stmmac/stmmac.h | 10 +-
> drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 ++++++----------------
> 3 files changed, 63 insertions(+), 144 deletions(-)

...

> @@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
> else
> p = rx_q->dma_rx + entry;
>
> - if (likely(!rx_q->rx_skbuff[entry])) {
> - struct sk_buff *skb;
> -
> - skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
> - if (unlikely(!skb)) {
> - /* so for a while no zero-copy! */
> - rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
> - if (unlikely(net_ratelimit()))
> - dev_err(priv->device,
> - "fail to alloc skb entry %d\n",
> - entry);
> - break;
> - }
> -
> - rx_q->rx_skbuff[entry] = skb;
> - rx_q->rx_skbuff_dma[entry] =
> - dma_map_single(priv->device, skb->data, bfsize,
> - DMA_FROM_DEVICE);
> - if (dma_mapping_error(priv->device,
> - rx_q->rx_skbuff_dma[entry])) {
> - netdev_err(priv->dev, "Rx DMA map failed\n");
> - dev_kfree_skb(skb);
> + if (!buf->page) {
> + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> + if (!buf->page)
> break;
> - }
> -
> - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
> - stmmac_refill_desc3(priv, rx_q, p);
> -
> - if (rx_q->rx_zeroc_thresh > 0)
> - rx_q->rx_zeroc_thresh--;
> -
> - netif_dbg(priv, rx_status, priv->dev,
> - "refill entry #%d\n", entry);
> }
> - dma_wmb();
> +
> + buf->addr = buf->page->dma_addr;
> + stmmac_set_desc_addr(priv, p, buf->addr);
> + stmmac_refill_desc3(priv, rx_q, p);
>
> rx_q->rx_count_frames++;
> rx_q->rx_count_frames %= priv->rx_coal_frames;
> use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
>
> - stmmac_set_rx_owner(priv, p, use_rx_wd);
> -
> dma_wmb();
> + stmmac_set_rx_owner(priv, p, use_rx_wd);
>
> entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> }

I was looking at this change in a bit more detail and one thing that
stuck out to me was the above, where the barrier had been moved from
after the stmmac_set_rx_owner() call to before it.

So I moved this back and I no longer saw the crash. However, I then
recalled that I had the patch to enable the debug prints for the buffer
address applied, but after reverting that, the crash occurred again.

In other words, what works for me is moving the above barrier back and
adding the debug print, which appears to suggest that there is some
timing/coherency issue here. Anyway, maybe this is a clue to what is
going on?

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index a7486c2f3221..2f016397231b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3303,8 +3303,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
rx_q->rx_count_frames %= priv->rx_coal_frames;
use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;

- dma_wmb();
stmmac_set_rx_owner(priv, p, use_rx_wd);
+ dma_wmb();

entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
}
@@ -3438,6 +3438,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
dma_sync_single_for_device(priv->device, buf->addr,
frame_len, DMA_FROM_DEVICE);

+ pr_info("%s: paddr=0x%llx, vaddr=0x%llx, len=%d", __func__,
+ buf->addr, page_address(buf->page),
+ frame_len);
+
if (netif_msg_pktdata(priv)) {
netdev_dbg(priv->dev, "frame received (%dbytes)",
frame_len);

Cheers
Jon

--
nvpublic

2019-07-25 14:15:30

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/25/2019, 14:20:07 (UTC+00:00)

>
> On 03/07/2019 11:37, Jose Abreu wrote:
> > Mapping and unmapping the DMA region is a high bottleneck in the stmmac
> > driver, especially in the RX path.
> >
> > This commit introduces support for the Page Pool API and uses it in all
> > RX queues. With this change, we get more stable throughput and some
> > increase of bandwidth with iperf:
> > - MAC1000 - 950 Mbps
> > - XGMAC: 9.22 Gbps
> >
> > Signed-off-by: Jose Abreu <[email protected]>
> > Cc: Joao Pinto <[email protected]>
> > Cc: David S. Miller <[email protected]>
> > Cc: Giuseppe Cavallaro <[email protected]>
> > Cc: Alexandre Torgue <[email protected]>
> > Cc: Maxime Coquelin <[email protected]>
> > Cc: Maxime Ripard <[email protected]>
> > Cc: Chen-Yu Tsai <[email protected]>
> > ---
> > drivers/net/ethernet/stmicro/stmmac/Kconfig | 1 +
> > drivers/net/ethernet/stmicro/stmmac/stmmac.h | 10 +-
> > drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 196 ++++++----------------
> > 3 files changed, 63 insertions(+), 144 deletions(-)
>
> ...
>
> > @@ -3306,49 +3295,22 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
> > else
> > p = rx_q->dma_rx + entry;
> >
> > - if (likely(!rx_q->rx_skbuff[entry])) {
> > - struct sk_buff *skb;
> > -
> > - skb = netdev_alloc_skb_ip_align(priv->dev, bfsize);
> > - if (unlikely(!skb)) {
> > - /* so for a while no zero-copy! */
> > - rx_q->rx_zeroc_thresh = STMMAC_RX_THRESH;
> > - if (unlikely(net_ratelimit()))
> > - dev_err(priv->device,
> > - "fail to alloc skb entry %d\n",
> > - entry);
> > - break;
> > - }
> > -
> > - rx_q->rx_skbuff[entry] = skb;
> > - rx_q->rx_skbuff_dma[entry] =
> > - dma_map_single(priv->device, skb->data, bfsize,
> > - DMA_FROM_DEVICE);
> > - if (dma_mapping_error(priv->device,
> > - rx_q->rx_skbuff_dma[entry])) {
> > - netdev_err(priv->dev, "Rx DMA map failed\n");
> > - dev_kfree_skb(skb);
> > + if (!buf->page) {
> > + buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
> > + if (!buf->page)
> > break;
> > - }
> > -
> > - stmmac_set_desc_addr(priv, p, rx_q->rx_skbuff_dma[entry]);
> > - stmmac_refill_desc3(priv, rx_q, p);
> > -
> > - if (rx_q->rx_zeroc_thresh > 0)
> > - rx_q->rx_zeroc_thresh--;
> > -
> > - netif_dbg(priv, rx_status, priv->dev,
> > - "refill entry #%d\n", entry);
> > }
> > - dma_wmb();
> > +
> > + buf->addr = buf->page->dma_addr;
> > + stmmac_set_desc_addr(priv, p, buf->addr);
> > + stmmac_refill_desc3(priv, rx_q, p);
> >
> > rx_q->rx_count_frames++;
> > rx_q->rx_count_frames %= priv->rx_coal_frames;
> > use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
> >
> > - stmmac_set_rx_owner(priv, p, use_rx_wd);
> > -
> > dma_wmb();
> > + stmmac_set_rx_owner(priv, p, use_rx_wd);
> >
> > entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> > }
>
> I was looking at this change in a bit more detail and one thing that
> stuck out to me was the above, where the barrier had been moved from
> after the stmmac_set_rx_owner() call to before it.
>
> So I moved this back and I no longer saw the crash. However, I then
> recalled that I had the patch to enable the debug prints for the buffer
> address applied, but after reverting that, the crash occurred again.
>
> In other words, what works for me is moving the above barrier back and
> adding the debug print, which appears to suggest that there is some
> timing/coherency issue here. Anyway, maybe this is a clue to what is
> going on?
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index a7486c2f3221..2f016397231b 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -3303,8 +3303,8 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
> rx_q->rx_count_frames %= priv->rx_coal_frames;
> use_rx_wd = priv->use_riwt && rx_q->rx_count_frames;
>
> - dma_wmb();
> stmmac_set_rx_owner(priv, p, use_rx_wd);
> + dma_wmb();
>
> entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
> }
> @@ -3438,6 +3438,10 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> dma_sync_single_for_device(priv->device, buf->addr,
> frame_len, DMA_FROM_DEVICE);
>
> + pr_info("%s: paddr=0x%llx, vaddr=0x%llx, len=%d", __func__,
> + buf->addr, page_address(buf->page),
> + frame_len);
> +
> if (netif_msg_pktdata(priv)) {
> netdev_dbg(priv->dev, "frame received (%dbytes)",
> frame_len);
>
> Cheers
> Jon
>
> --
> nvpublic

Well, I wasn't expecting that :/

Per documentation of barriers I think we should set descriptor fields
and then barrier and finally ownership to HW so that remaining fields
are coherent before owner is set.
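
In code terms the intended pattern in stmmac_rx_refill() would be (a
sketch using the calls already in the patch):

	stmmac_set_desc_addr(priv, p, buf->addr);	/* fill fields */
	stmmac_refill_desc3(priv, rx_q, p);
	dma_wmb();			/* make fields visible before OWN */
	stmmac_set_rx_owner(priv, p, use_rx_wd);	/* hand to HW last */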

Anyway, can you also add a dma_rmb() after the call to
stmmac_rx_status() ?

---
Thanks,
Jose Miguel Abreu

2019-07-25 14:33:46

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 25/07/2019 14:26, Jose Abreu wrote:

...

> Well, I wasn't expecting that :/
>
> Per documentation of barriers I think we should set descriptor fields
> and then barrier and finally ownership to HW so that remaining fields
> are coherent before owner is set.
>
> Anyway, can you also add a dma_rmb() after the call to
> stmmac_rx_status() ?

Yes. I removed the debug print and added the barrier, but that did not help.

Jon

--
nvpublic

2019-07-25 15:13:43

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/25/2019, 15:25:59 (UTC+00:00)

>
> On 25/07/2019 14:26, Jose Abreu wrote:
>
> ...
>
> > Well, I wasn't expecting that :/
> >
> > Per documentation of barriers I think we should set descriptor fields
> > and then barrier and finally ownership to HW so that remaining fields
> > are coherent before owner is set.
> >
> > Anyway, can you also add a dma_rmb() after the call to
> > stmmac_rx_status() ?
>
> Yes. I removed the debug print and added the barrier, but that did not help.

So, I was finally able to set up NFS using your replicated setup and I
can't see the issue :(

The only difference I have from yours is that I'm using TCP in NFS
whilst you (I believe from the logs) use UDP.

You do have flow control active right ? And your HW FIFO size is >= 4k ?

---
Thanks,
Jose Miguel Abreu

2019-07-25 16:27:56

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 25/07/2019 08:44, Jose Abreu wrote:

...

> OK. Can you please test what Ilias mentioned ?
>
> Basically you can hard-code the order to 0 in
> alloc_dma_rx_desc_resources():
> - pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> + pp_params.order = 0;
>
> Unless you use an MTU > PAGE_SIZE.

I made the change but unfortunately the issue persists.

Jon

--
nvpublic

2019-07-26 17:47:22

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 25/07/2019 16:12, Jose Abreu wrote:
> From: Jon Hunter <[email protected]>
> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
>
>>
>> On 25/07/2019 14:26, Jose Abreu wrote:
>>
>> ...
>>
>>> Well, I wasn't expecting that :/
>>>
>>> Per documentation of barriers I think we should set descriptor fields
>>> and then barrier and finally ownership to HW so that remaining fields
>>> are coherent before owner is set.
>>>
>>> Anyway, can you also add a dma_rmb() after the call to
>>> stmmac_rx_status() ?
>>
>> Yes. I removed the debug print and added the barrier, but that did not help.
>
> So, I was finally able to set up NFS using your replicated setup and I
> can't see the issue :(
>
> The only difference I have from yours is that I'm using TCP in NFS
> whilst you (I believe from the logs) use UDP.

So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
'proto=tcp' and this does appear to be more stable, but not 100% stable.
It still appears to fail in the same place about 50% of the time.

> You do have flow control active right ? And your HW FIFO size is >= 4k ?

How can I verify if flow control is active?

The documentation for this device indicates a max transfer size of 16kB
for TX and RX.

Cheers
Jon

--
nvpublic

2019-07-27 16:00:23

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/26/2019, 15:11:00 (UTC+00:00)

>
> On 25/07/2019 16:12, Jose Abreu wrote:
> > From: Jon Hunter <[email protected]>
> > Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> >
> >>
> >> On 25/07/2019 14:26, Jose Abreu wrote:
> >>
> >> ...
> >>
> >>> Well, I wasn't expecting that :/
> >>>
> >>> Per documentation of barriers I think we should set descriptor fields
> >>> and then barrier and finally ownership to HW so that remaining fields
> >>> are coherent before owner is set.
> >>>
> >>> Anyway, can you also add a dma_rmb() after the call to
> >>> stmmac_rx_status() ?
> >>
> >> Yes. I removed the debug print and added the barrier, but that did not help.
> >
> > So, I was finally able to set up NFS using your replicated setup and I
> > can't see the issue :(
> >
> > The only difference I have from yours is that I'm using TCP in NFS
> > whilst you (I believe from the logs) use UDP.
>
> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> It still appears to fail in the same place about 50% of the time.
>
> > You do have flow control active right ? And your HW FIFO size is >= 4k ?
>
> How can I verify if flow control is active?

You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
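
For reference, in the dwmac4 definitions the flow-control enable is
MTL_OP_MODE_EHFC, BIT(7) of that register, so the decode could look like
this (a sketch, assuming queue 0):

	u32 v = readl(priv->ioaddr + 0xd30);	/* MTL_RxQ0_Operation_Mode */
	dev_info(priv->device, "MTL_RxQ0_Op_Mode=0x%08x EHFC=%d\n",
		 v, !!(v & BIT(7)));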

Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?

---
Thanks,
Jose Miguel Abreu

2019-07-29 10:18:03

by Mikko Perttunen

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

My understanding is that Tegra186 does not have DMA coherency, but
Tegra194 does.

Mikko

On 23.7.2019 16.34, Jon Hunter wrote:
>
> On 23/07/2019 13:51, Jose Abreu wrote:
>> From: Jon Hunter <[email protected]>
>> Date: Jul/23/2019, 12:58:55 (UTC+00:00)
>>
>>>
>>> On 23/07/2019 11:49, Jose Abreu wrote:
>>>> From: Jon Hunter <[email protected]>
>>>> Date: Jul/23/2019, 11:38:33 (UTC+00:00)
>>>>
>>>>>
>>>>> On 23/07/2019 11:07, Jose Abreu wrote:
>>>>>> From: Jon Hunter <[email protected]>
>>>>>> Date: Jul/23/2019, 11:01:24 (UTC+00:00)
>>>>>>
>>>>>>> This appears to be a winner and by disabling the SMMU for the ethernet
>>>>>>> controller and reverting commit 954a03be033c7cef80ddc232e7cbdb17df735663
>>>>>>> this worked! So yes appears to be related to the SMMU being enabled. We
>>>>>>> had to enable the SMMU for ethernet recently due to commit
>>>>>>> 954a03be033c7cef80ddc232e7cbdb17df735663.
>>>>>>
>>>>>> Finally :)
>>>>>>
>>>>>> However, from "git show 954a03be033c7cef80ddc232e7cbdb17df735663":
>>>>>>
>>>>>> + There are few reasons to allow unmatched stream bypass, and
>>>>>> + even fewer good ones. If saying YES here breaks your board
>>>>>> + you should work on fixing your board.
>>>>>>
>>>>>> So, how can we fix this ? Is your ethernet DT node marked as
>>>>>> "dma-coherent;" ?
>>>>>
>>>>> TBH I have no idea. I can't say I fully understand your change or how it
>>>>> is breaking things for us.
>>>>>
>>>>> Currently, the Tegra DT binding does not have 'dma-coherent' set. I see
>>>>> this is optional, but I am not sure how you determine whether or not
>>>>> this should be set.
>>>>
>>>> From my understanding it means that your device / IP DMA accesses are coherent from the CPU's point of view. I think that will be the case if the GMAC is not behind any kind of IOMMU in the HW architecture.
>>>
>>> I understand what coherency is, I just don't know how you tell if this
>>> implementation of the ethernet controller is coherent or not.
>>
>> Do you have any detailed diagram of your HW ? Such as blocks / IPs
>> connection, address space wiring , ...
>
> Yes, this can be found in the Tegra X2 Technical Reference Manual [0].
> Unfortunately, you need to create an account to download it.
>
> Jon
>
> [0] https://developer.nvidia.com/embedded/dlc/parker-series-trm
>

2019-07-29 10:57:49

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 29/07/2019 09:16, Jose Abreu wrote:
> From: Jose Abreu <[email protected]>
> Date: Jul/27/2019, 16:56:37 (UTC+00:00)
>
>> From: Jon Hunter <[email protected]>
>> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
>>
>>>
>>> On 25/07/2019 16:12, Jose Abreu wrote:
>>>> From: Jon Hunter <[email protected]>
>>>> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
>>>>
>>>>>
>>>>> On 25/07/2019 14:26, Jose Abreu wrote:
>>>>>
>>>>> ...
>>>>>
>>>>>> Well, I wasn't expecting that :/
>>>>>>
>>>>>> Per documentation of barriers I think we should set descriptor fields
>>>>>> and then barrier and finally ownership to HW so that remaining fields
>>>>>> are coherent before owner is set.
>>>>>>
>>>>>> Anyway, can you also add a dma_rmb() after the call to
>>>>>> stmmac_rx_status() ?
>>>>>
>>>>> Yes. I removed the debug print and added the barrier, but that did not help.
>>>>
>>>> So, I was finally able to set up NFS using your replicated setup and I
>>>> can't see the issue :(
>>>>
>>>> The only difference I have from yours is that I'm using TCP in NFS
>>>> whilst you (I believe from the logs) use UDP.
>>>
>>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
>>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
>>> It still appears to fail in the same place about 50% of the time.
>>>
>>>> You do have flow control active right ? And your HW FIFO size is >= 4k ?
>>>
>>> How can I verify if flow control is active?
>>
>> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).

Where would be the appropriate place to dump this? After probe? Maybe
best if you can share a code snippet of where to dump this.

>> Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?

You can find a boot log here:

https://paste.ubuntu.com/p/qtRqtYKHGF/

> And, please try the attached debug patch.

With this patch it appears to boot fine. So far no issues seen.

Jon

--
nvpublic

2019-07-29 11:53:10

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

On 29/07/2019 12:29, Jose Abreu wrote:
> ++ Catalin, Will (ARM64 Maintainers)
>
> From: Jon Hunter <[email protected]>
> Date: Jul/29/2019, 11:55:18 (UTC+00:00)
>
>>
>> On 29/07/2019 09:16, Jose Abreu wrote:
>>> From: Jose Abreu <[email protected]>
>>> Date: Jul/27/2019, 16:56:37 (UTC+00:00)
>>>
>>>> From: Jon Hunter <[email protected]>
>>>> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
>>>>
>>>>>
>>>>> On 25/07/2019 16:12, Jose Abreu wrote:
>>>>>> From: Jon Hunter <[email protected]>
>>>>>> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
>>>>>>
>>>>>>>
>>>>>>> On 25/07/2019 14:26, Jose Abreu wrote:
>>>>>>>
>>>>>>> ...
>>>>>>>
>>>>>>>> Well, I wasn't expecting that :/
>>>>>>>>
>>>>>>>> Per documentation of barriers I think we should set descriptor fields
>>>>>>>> and then barrier and finally ownership to HW so that remaining fields
>>>>>>>> are coherent before owner is set.
>>>>>>>>
>>>>>>>> Anyway, can you also add a dma_rmb() after the call to
>>>>>>>> stmmac_rx_status() ?
>>>>>>>
>>>>>>> Yes. I removed the debug print and added the barrier, but that did not help.
>>>>>>
>>>>>> So, I was finally able to set up NFS using your replicated setup and I
>>>>>> can't see the issue :(
>>>>>>
>>>>>> The only difference I have from yours is that I'm using TCP in NFS
>>>>>> whilst you (I believe from the logs) use UDP.
>>>>>
>>>>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
>>>>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
>>>>> It still appears to fail in the same place about 50% of the time.
>>>>>
>>>>>> You do have flow control active right ? And your HW FIFO size is >= 4k ?
>>>>>
>>>>> How can I verify if flow control is active?
>>>>
>>>> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
>>
>> Where would be the appropriate place to dump this? After probe? Maybe
>> best if you can share a code snippet of where to dump this.
>>
>>>> Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?
>>
>> You can find a boot log here:
>>
>> https://paste.ubuntu.com/p/qtRqtYKHGF/
>>
>>> And, please try the attached debug patch.
>>
>> With this patch it appears to boot fine. So far no issues seen.
>
> Thank you for testing.
>
> Hi Catalin and Will,
>
> Sorry to add you to such a long thread, but we are seeing a DMA issue
> with the stmmac driver on an ARM64 platform with the IOMMU enabled.
>
> The issue seems to be solved when buffer allocations for DMA-based
> transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
> when the IOMMU is disabled.
>
> Notice that after a transfer is done we do use
> dma_sync_single_for_{cpu,device} and then we reuse *the same* page for
> another transfer.
>
> Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
> on ARM64 platforms with an IOMMU?

In terms of what they do, there should be no difference on arm64 between:

dma_map_page(..., dir);
...
dma_unmap_page(..., dir);

and:

dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
dma_sync_single_for_device(..., dir);
...
dma_sync_single_for_cpu(..., dir);
dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);

provided that the first sync covers the whole buffer and any subsequent
ones cover at least the parts of the buffer which may have changed. Plus
for coherent hardware it's entirely moot either way.
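
To make that concrete: in a hypothetical non-coherent RX path the two
patterns would look as follows (a sketch with placeholder names such as
dev/page/len, not actual stmmac code):

        /* Pattern 1: plain map/unmap; the sync is implied at both ends */
        dma_addr_t dma = dma_map_page(dev, page, 0, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, dma))
                return -ENOMEM;
        /* ... device DMAs the frame into the page ... */
        dma_unmap_page(dev, dma, len, DMA_FROM_DEVICE);

        /* Pattern 2 (alternative): map once without the implied sync,
         * then bracket every use of the buffer with explicit syncs */
        dma_addr_t dma = dma_map_page_attrs(dev, page, 0, len,
                                            DMA_FROM_DEVICE,
                                            DMA_ATTR_SKIP_CPU_SYNC);
        dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
        /* ... device DMAs the frame into the page ... */
        dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
        dma_unmap_page_attrs(dev, dma, len, DMA_FROM_DEVICE,
                             DMA_ATTR_SKIP_CPU_SYNC);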

Given Jon's previous findings, I would lean towards the idea that
performing the extra (redundant) cache maintenance plus barrier in
dma_unmap is mostly just perturbing timing in the same way as the debug
print which also made things seem OK.

Robin.

2019-07-29 12:07:34

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jose Abreu <[email protected]>
Date: Jul/27/2019, 16:56:37 (UTC+00:00)

> From: Jon Hunter <[email protected]>
> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
>
> >
> > On 25/07/2019 16:12, Jose Abreu wrote:
> > > From: Jon Hunter <[email protected]>
> > > Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> > >
> > >>
> > >> On 25/07/2019 14:26, Jose Abreu wrote:
> > >>
> > >> ...
> > >>
> > >>> Well, I wasn't expecting that :/
> > >>>
> > >>> Per documentation of barriers I think we should set descriptor fields
> > >>> and then barrier and finally ownership to HW so that remaining fields
> > >>> are coherent before owner is set.
> > >>>
> > >>> Anyway, can you also add a dma_rmb() after the call to
> > >>> stmmac_rx_status() ?
> > >>
> > >> Yes. I removed the debug print and added the barrier, but that did not help.
> > >
> > > So, I was finally able to set up NFS using your replicated setup and I
> > > can't see the issue :(
> > >
> > > The only difference I have from yours is that I'm using TCP in NFS
> > > whilst you (I believe from the logs), use UDP.
> >
> > So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> > 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> > It still appears to fail in the same place about 50% of the time.
> >
> > > You do have flow control active right ? And your HW FIFO size is >= 4k ?
> >
> > How can I verify if flow control is active?
>
> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
>
> Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?

And, please try attached debug patch.
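
(The idea is to not skip the CPU sync when page_pool maps a freshly
allocated page; roughly the following, though the attachment is the
real change:)

        /* sketch: map the page without DMA_ATTR_SKIP_CPU_SYNC so the
         * mapping itself performs the cache maintenance */
        dma = dma_map_page_attrs(pool->p.dev, page, 0,
                                 (PAGE_SIZE << pool->p.order),
                                 pool->p.dma_dir,
                                 0 /* was DMA_ATTR_SKIP_CPU_SYNC */);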

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-page_pool-Do-not-skip-CPU-sync.patch (1.50 kB)

2019-07-29 14:26:43

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Robin Murphy <[email protected]>
Date: Jul/29/2019, 12:52:02 (UTC+00:00)

> On 29/07/2019 12:29, Jose Abreu wrote:
> > ++ Catalin, Will (ARM64 Maintainers)
> >
> > From: Jon Hunter <[email protected]>
> > Date: Jul/29/2019, 11:55:18 (UTC+00:00)
> >
> >>
> >> On 29/07/2019 09:16, Jose Abreu wrote:
> >>> From: Jose Abreu <[email protected]>
> >>> Date: Jul/27/2019, 16:56:37 (UTC+00:00)
> >>>
> >>>> From: Jon Hunter <[email protected]>
> >>>> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
> >>>>
> >>>>>
> >>>>> On 25/07/2019 16:12, Jose Abreu wrote:
> >>>>>> From: Jon Hunter <[email protected]>
> >>>>>> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> >>>>>>
> >>>>>>>
> >>>>>>> On 25/07/2019 14:26, Jose Abreu wrote:
> >>>>>>>
> >>>>>>> ...
> >>>>>>>
> >>>>>>>> Well, I wasn't expecting that :/
> >>>>>>>>
> >>>>>>>> Per documentation of barriers I think we should set descriptor fields
> >>>>>>>> and then barrier and finally ownership to HW so that remaining fields
> >>>>>>>> are coherent before owner is set.
> >>>>>>>>
> >>>>>>>> Anyway, can you also add a dma_rmb() after the call to
> >>>>>>>> stmmac_rx_status() ?
> >>>>>>>
> > >>>>>>> Yes. I removed the debug print and added the barrier, but that did not help.
> >>>>>>
> > >>>>>> So, I was finally able to set up NFS using your replicated setup and I
> >>>>>> can't see the issue :(
> >>>>>>
> >>>>>> The only difference I have from yours is that I'm using TCP in NFS
> >>>>>> whilst you (I believe from the logs), use UDP.
> >>>>>
> >>>>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> >>>>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> >>>>> It still appears to fail in the same place about 50% of the time.
> >>>>>
> >>>>>> You do have flow control active right ? And your HW FIFO size is >= 4k ?
> >>>>>
> >>>>> How can I verify if flow control is active?
> >>>>
> >>>> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
> >>
> >> Where would be the appropriate place to dump this? After probe? Maybe
> >> best if you can share a code snippet showing where to dump this.
> >>
> >>>> Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?
> >>
> >> You can find a boot log here:
> >>
> >> https://paste.ubuntu.com/p/qtRqtYKHGF/
> >>
> >>> And, please try attached debug patch.
> >>
> >> With this patch it appears to boot fine. So far no issues seen.
> >
> > Thank you for testing.
> >
> > Hi Catalin and Will,
> >
> > Sorry to add you to such a long thread but we are seeing a DMA issue
> > with the stmmac driver on an ARM64 platform with the IOMMU enabled.
> >
> > The issue seems to be solved when the buffers allocated for DMA-based
> > transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
> > when the IOMMU is disabled.
> >
> > Notice that after a transfer is done we do use
> > dma_sync_single_for_{cpu,device} and then we reuse *the same* page for
> > another transfer.
> >
> > Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
> > on ARM64 platforms with an IOMMU ?
>
> In terms of what they do, there should be no difference on arm64 between:
>
> dma_map_page(..., dir);
> ...
> dma_unmap_page(..., dir);
>
> and:
>
> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> dma_sync_single_for_device(..., dir);
> ...
> dma_sync_single_for_cpu(..., dir);
> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
>
> provided that the first sync covers the whole buffer and any subsequent
> ones cover at least the parts of the buffer which may have changed. Plus
> for coherent hardware it's entirely moot either way.

Thanks for confirming. That's indeed what stmmac does when a buffer is
received: it syncs just the packet length for the CPU.
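
I.e. on the RX completion path the driver does, in essence (a simplified
sketch; buf and frame_len only approximate the real code):

        /* the descriptor reports a received frame of frame_len bytes */
        dma_sync_single_for_cpu(priv->device, buf->addr, frame_len,
                                DMA_FROM_DEVICE);
        skb_copy_to_linear_data(skb, page_address(buf->page), frame_len);
        skb_put(skb, frame_len);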

>
> Given Jon's previous findings, I would lean towards the idea that
> performing the extra (redundant) cache maintenance plus barrier in
> dma_unmap is mostly just perturbing timing in the same way as the debug
> print which also made things seem OK.

Mikko said that Tegra186 is not coherent, so we have to explicitly flush
the pipeline, but I don't understand why sync_single() is not doing it ...

Jon, can you please remove *all* debug prints, hacks, etc ... and test
the one attached with a plain -net tree ?

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Flush-all-data-cache-in-RX-path.patch (1.61 kB)

2019-07-29 15:41:13

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

++ Catalin, Will (ARM64 Maintainers)

From: Jon Hunter <[email protected]>
Date: Jul/29/2019, 11:55:18 (UTC+00:00)

>
> On 29/07/2019 09:16, Jose Abreu wrote:
> > From: Jose Abreu <[email protected]>
> > Date: Jul/27/2019, 16:56:37 (UTC+00:00)
> >
> >> From: Jon Hunter <[email protected]>
> >> Date: Jul/26/2019, 15:11:00 (UTC+00:00)
> >>
> >>>
> >>> On 25/07/2019 16:12, Jose Abreu wrote:
> >>>> From: Jon Hunter <[email protected]>
> >>>> Date: Jul/25/2019, 15:25:59 (UTC+00:00)
> >>>>
> >>>>>
> >>>>> On 25/07/2019 14:26, Jose Abreu wrote:
> >>>>>
> >>>>> ...
> >>>>>
> >>>>>> Well, I wasn't expecting that :/
> >>>>>>
> >>>>>> Per documentation of barriers I think we should set descriptor fields
> >>>>>> and then barrier and finally ownership to HW so that remaining fields
> >>>>>> are coherent before owner is set.
> >>>>>>
> >>>>>> Anyway, can you also add a dma_rmb() after the call to
> >>>>>> stmmac_rx_status() ?
> >>>>>
> >>>>> Yes. I removed the debug print and added the barrier, but that did not help.
> >>>>
> >>>> So, I was finally able to set up NFS using your replicated setup and I
> >>>> can't see the issue :(
> >>>>
> >>>> The only difference I have from yours is that I'm using TCP in NFS
> >>>> whilst you (I believe from the logs), use UDP.
> >>>
> >>> So I tried TCP by setting the kernel boot params to 'nfsvers=3' and
> >>> 'proto=tcp' and this does appear to be more stable, but not 100% stable.
> >>> It still appears to fail in the same place about 50% of the time.
> >>>
> >>>> You do have flow control active right ? And your HW FIFO size is >= 4k ?
> >>>
> >>> How can I verify if flow control is active?
> >>
> >> You can check it by dumping register MTL_RxQ_Operation_Mode (0xd30).
>
> Where would be the appropriate place to dump this? After probe? Maybe
> best if you can share a code snippet showing where to dump this.
>
> >> Can you also add IOMMU debug in file "drivers/iommu/iommu.c" ?
>
> You can find a boot log here:
>
> https://paste.ubuntu.com/p/qtRqtYKHGF/
>
> > And, please try attached debug patch.
>
> With this patch it appears to boot fine. So far no issues seen.

Thank you for testing.

Hi Catalin and Will,

Sorry to add you to such a long thread but we are seeing a DMA issue
with the stmmac driver on an ARM64 platform with the IOMMU enabled.

The issue seems to be solved when the buffers allocated for DMA-based
transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
when the IOMMU is disabled.

Notice that after a transfer is done we do use
dma_sync_single_for_{cpu,device} and then we reuse *the same* page for
another transfer.

Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
on ARM64 platforms with an IOMMU ?

---
Thanks,
Jose Miguel Abreu

2019-07-29 21:34:08

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 29/07/2019 15:08, Jose Abreu wrote:

...

>>> Hi Catalin and Will,
>>>
>>> Sorry to add you to such a long thread but we are seeing a DMA issue
>>> with the stmmac driver on an ARM64 platform with the IOMMU enabled.
>>>
>>> The issue seems to be solved when the buffers allocated for DMA-based
>>> transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
>>> when the IOMMU is disabled.
>>>
>>> Notice that after a transfer is done we do use
>>> dma_sync_single_for_{cpu,device} and then we reuse *the same* page for
>>> another transfer.
>>>
>>> Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
>>> on ARM64 platforms with an IOMMU ?
>>
>> In terms of what they do, there should be no difference on arm64 between:
>>
>> dma_map_page(..., dir);
>> ...
>> dma_unmap_page(..., dir);
>>
>> and:
>>
>> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
>> dma_sync_single_for_device(..., dir);
>> ...
>> dma_sync_single_for_cpu(..., dir);
>> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
>>
>> provided that the first sync covers the whole buffer and any subsequent
>> ones cover at least the parts of the buffer which may have changed. Plus
>> for coherent hardware it's entirely moot either way.
>
> Thanks for confirming. That's indeed what stmmac does when a buffer is
> received: it syncs just the packet length for the CPU.
>
>>
>> Given Jon's previous findings, I would lean towards the idea that
>> performing the extra (redundant) cache maintenance plus barrier in
>> dma_unmap is mostly just perturbing timing in the same way as the debug
>> print which also made things seem OK.
>
> Mikko said that Tegra186 is not coherent, so we have to explicitly flush
> the pipeline, but I don't understand why sync_single() is not doing it ...
>
> Jon, can you please remove *all* debug prints, hacks, etc ... and test
> the one attached with a plain -net tree ?

So far I have just been testing on the mainline kernel branch. The issue
still persists after applying this on mainline. I can test on the -net
tree, but I am not sure that will make a difference.

Cheers
Jon

--
nvpublic

2019-07-30 14:58:52

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/29/2019, 22:33:04 (UTC+00:00)

>
> On 29/07/2019 15:08, Jose Abreu wrote:
>
> ...
>
> >>> Hi Catalin and Will,
> >>>
> >>> Sorry to add you to such a long thread but we are seeing a DMA issue
> >>> with the stmmac driver on an ARM64 platform with the IOMMU enabled.
> >>>
> >>> The issue seems to be solved when the buffers allocated for DMA-based
> >>> transfers are *not* mapped with the DMA_ATTR_SKIP_CPU_SYNC flag *OR*
> >>> when the IOMMU is disabled.
> >>>
> >>> Notice that after a transfer is done we do use
> >>> dma_sync_single_for_{cpu,device} and then we reuse *the same* page for
> >>> another transfer.
> >>>
> >>> Can you please comment on whether DMA_ATTR_SKIP_CPU_SYNC cannot be used
> >>> on ARM64 platforms with an IOMMU ?
> >>
> >> In terms of what they do, there should be no difference on arm64 between:
> >>
> >> dma_map_page(..., dir);
> >> ...
> >> dma_unmap_page(..., dir);
> >>
> >> and:
> >>
> >> dma_map_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> >> dma_sync_single_for_device(..., dir);
> >> ...
> >> dma_sync_single_for_cpu(..., dir);
> >> dma_unmap_page_attrs(..., dir, DMA_ATTR_SKIP_CPU_SYNC);
> >>
> >> provided that the first sync covers the whole buffer and any subsequent
> >> ones cover at least the parts of the buffer which may have changed. Plus
> >> for coherent hardware it's entirely moot either way.
> >
> > Thanks for confirming. That's indeed what stmmac does when a buffer is
> > received: it syncs just the packet length for the CPU.
> >
> >>
> >> Given Jon's previous findings, I would lean towards the idea that
> >> performing the extra (redundant) cache maintenance plus barrier in
> >> dma_unmap is mostly just perturbing timing in the same way as the debug
> >> print which also made things seem OK.
> >
> > Mikko said that Tegra186 is not coherent, so we have to explicitly flush
> > the pipeline, but I don't understand why sync_single() is not doing it ...
> >
> > Jon, can you please remove *all* debug prints, hacks, etc ... and test
> > the one attached with a plain -net tree ?
>
> So far I have just been testing on the mainline kernel branch. The issue
> still persists after applying this on mainline. I can test on the -net
> tree, but I am not sure that will make a difference.
>
> Cheers
> Jon
>
> --
> nvpublic

I looked at the netsec implementation and noticed that we are syncing the
old buffer for the device instead of the new one. netsec syncs the buffer
for the device immediately after allocation, which may be what we have to
do. Maybe the attached patch can make things work for you ?
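
In other words, the refill path should make the *new* page device-visible
right after taking it from the pool, before the descriptor is handed back
to the hardware. Roughly this (a sketch of the idea; buf, rx_q and
buf_size approximate the driver, the attachment is the real change):

        buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
        if (!buf->page)
                return;
        buf->addr = page_pool_get_dma_addr(buf->page);
        /* invalidate any stale CPU-side cache lines over the whole
         * buffer before the device is allowed to DMA into it */
        dma_sync_single_for_device(priv->device, buf->addr, buf_size,
                                   DMA_FROM_DEVICE);
        stmmac_set_desc_addr(priv, p, buf->addr);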

---
Thanks,
Jose Miguel Abreu


Attachments:
0001-net-stmmac-Sync-RX-Buffer-upon-allocation.patch (2.41 kB)

2019-07-30 16:15:21

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool


On 30/07/2019 10:39, Jose Abreu wrote:

...

> I looked at the netsec implementation and noticed that we are syncing the
> old buffer for the device instead of the new one. netsec syncs the buffer
> for the device immediately after allocation, which may be what we have to
> do. Maybe the attached patch can make things work for you ?

Great! This one works. I have booted this several times and I am no
longer seeing any issues. Thanks for figuring this out!

Feel free to add my ...

Tested-by: Jon Hunter <[email protected]>

Cheers
Jon

--
nvpublic

2019-07-30 17:27:45

by Jose Abreu

[permalink] [raw]
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page Pool

From: Jon Hunter <[email protected]>
Date: Jul/30/2019, 14:36:39 (UTC+00:00)

>
> On 30/07/2019 10:39, Jose Abreu wrote:
>
> ...
>
> > I looked at the netsec implementation and noticed that we are syncing the
> > old buffer for the device instead of the new one. netsec syncs the buffer
> > for the device immediately after allocation, which may be what we have to
> > do. Maybe the attached patch can make things work for you ?
>
> Great! This one works. I have booted this several times and I am no
> longer seeing any issues. Thanks for figuring this out!
>
> Feel free to add my ...
>
> Tested-by: Jon Hunter <[email protected]>

This one was hard to find :) Thank you for your patience in testing
this!

---
Thanks,
Jose Miguel Abreu