2023-05-16 16:22:37

by Alexander Lobakin

Subject: [PATCH net-next 00/11] net: intel: start The Great Code Dedup + Page Pool for iavf

Here's a two-shot: introduce Intel Ethernet common library (libie) and
switch iavf to Page Pool. Details in the commit messages; here's the
summary:

It's no secret that there's a ton of code duplication between two or more
Intel Ethernet modules. Before introducing new changes, which would need
to be copied over again, start decoupling the already existing duplicate
functionality into a new module, which will be shared between several
Intel Ethernet drivers.
The first name that came to my mind was "libie" -- "Intel Ethernet
common library". It also sounds like "lovelie" and can be expanded as
"lib Internet Explorer" :P I'm open to any other (but justified) suggestions.
This series is only the beginning. From now on, adding every new feature
or doing any good driver refactoring will remove many more lines than it
adds, for quite some time. There's already a basic roadmap with some
deduplications planned, not to mention that touching any line now prompts
the question: "can I share this?".
The PP conversion for iavf lands within the same series, as the two are
tied closely: libie will support the Page Pool model only, so a driver
can't use much of the lib until it's converted. iavf is only the first
example; the rest will be converted on a per-driver basis. That is when
it will get really interesting. Stay tuned.

...#6 tries to shortcut a bunch of calls and checks, Chris, please let me
know if it violates DMAnything (although I'll be missing my +100 Kpps
then =\ :D).

Alexander Lobakin (11):
net: intel: introduce Intel Ethernet common library
iavf: kill "legacy-rx" for good
iavf: optimize Rx buffer allocation a bunch
iavf: remove page splitting/recycling
iavf: always use a full order-0 page
net: page_pool: avoid calling no-op externals when possible
net: page_pool: add DMA-sync-for-CPU inline helpers
iavf: switch to Page Pool
libie: add common queue stats
libie: add per-queue Page Pool stats
iavf: switch queue stats to libie

MAINTAINERS | 3 +-
drivers/net/ethernet/intel/Kconfig | 12 +-
drivers/net/ethernet/intel/Makefile | 1 +
drivers/net/ethernet/intel/i40e/i40e_common.c | 253 -------
drivers/net/ethernet/intel/i40e/i40e_main.c | 1 +
.../net/ethernet/intel/i40e/i40e_prototype.h | 7 -
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 74 +-
drivers/net/ethernet/intel/i40e/i40e_type.h | 88 ---
drivers/net/ethernet/intel/iavf/iavf.h | 2 +-
drivers/net/ethernet/intel/iavf/iavf_common.c | 253 -------
.../net/ethernet/intel/iavf/iavf_ethtool.c | 227 +-----
drivers/net/ethernet/intel/iavf/iavf_main.c | 45 +-
.../net/ethernet/intel/iavf/iavf_prototype.h | 7 -
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 715 +++++-------------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 185 +----
drivers/net/ethernet/intel/iavf/iavf_type.h | 90 ---
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 16 +-
.../net/ethernet/intel/ice/ice_lan_tx_rx.h | 316 --------
drivers/net/ethernet/intel/ice/ice_main.c | 1 +
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 74 +-
drivers/net/ethernet/intel/libie/Makefile | 7 +
drivers/net/ethernet/intel/libie/internal.h | 23 +
drivers/net/ethernet/intel/libie/rx.c | 158 ++++
drivers/net/ethernet/intel/libie/stats.c | 189 +++++
include/linux/net/intel/libie/rx.h | 170 +++++
include/linux/net/intel/libie/stats.h | 214 ++++++
include/net/page_pool.h | 62 +-
net/core/page_pool.c | 10 +
28 files changed, 1125 insertions(+), 2078 deletions(-)
create mode 100644 drivers/net/ethernet/intel/libie/Makefile
create mode 100644 drivers/net/ethernet/intel/libie/internal.h
create mode 100644 drivers/net/ethernet/intel/libie/rx.c
create mode 100644 drivers/net/ethernet/intel/libie/stats.c
create mode 100644 include/linux/net/intel/libie/rx.h
create mode 100644 include/linux/net/intel/libie/stats.h

---
Directly to net-next, has core code changes.
--
2.40.1



2023-05-16 16:22:43

by Alexander Lobakin

Subject: [PATCH net-next 05/11] iavf: always use a full order-0 page

The current scheme of picking the smallest buffer possible for the
current MTU in order to flip/split pages is not very optimal.
For example, on the default MTU of 1500 it gives only 192 bytes of
headroom, while XDP may require up to 258. It also involves unnecessary
code complication, which is sometimes even hard to follow.
As page split is no more, always allocate order-0 pages. This optimizes
performance a bit and drops some bytes off the object code. Next, always
pick the maximum buffer length available for this %PAGE_SIZE to set it
up in the hardware. This means it now becomes a constant value, which
also has a positive impact on its own.
On x86_64, this means (without XDP):

4096 page
64 head, 320 tail
3712 HW buffer size
3686 max MTU w/o frags
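
To spell out the arithmetic (a sketch assuming typical x86_64 values:
NET_SKB_PAD = 64, NET_IP_ALIGN = 0, a 320-byte skb_shared_info; the
real macros live in the libie header below):

	/* Illustration only; see LIBIE_RX_BUF_LEN & co. further down */
	truesize = 4096;		/* one order-0 page */
	headroom = 64 + 0;		/* NET_SKB_PAD + NET_IP_ALIGN */
	tailroom = 320;			/* SKB_DATA_ALIGN(shared info) */
	buf_len  = ALIGN_DOWN(4096 - 64 - 320, 128);	/* = 3712 */
	max_mtu  = 3712 - (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN);
					/* = 3712 - 26 = 3686 */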

Previously, the maximum MTU w/o splitting a frame into several buffers
was 3046.
The increased buffer size allows us to reach the maximum frame size w/
frags supported by the HW: 16382 bytes (MTU of 16356). Reflect this in
the netdev config as well: relying on the max single buffer size when
calculating the MTU was not correct.
Move around a couple of fields in &iavf_ring after the ::rx_buf_len
removal to reduce holes and improve cache locality.
Instead of providing the Rx definitions exclusively for IAVF, do that in
the libie header, as they can and will be reused by the rest of the
drivers. Non-PP drivers can still use at least some of them and shed a
couple of copied lines.

Function: add/remove: 0/0 grow/shrink: 3/9 up/down: 18/-265 (-247)

+ it even reclaims half a percent of performance, nice.

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/iavf/iavf_main.c | 32 +-----
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 96 +++++++----------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 100 +-----------------
drivers/net/ethernet/intel/iavf/iavf_type.h | 2 -
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 15 +--
include/linux/net/intel/libie/rx.h | 35 ++++++
6 files changed, 85 insertions(+), 195 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 2d00be69fcde..120bb6a09ceb 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */

+#include <linux/net/intel/libie/rx.h>
+
#include "iavf.h"
#include "iavf_prototype.h"
#include "iavf_client.h"
@@ -709,32 +711,10 @@ static void iavf_configure_tx(struct iavf_adapter *adapter)
**/
static void iavf_configure_rx(struct iavf_adapter *adapter)
{
- unsigned int rx_buf_len = IAVF_RXBUFFER_2048;
struct iavf_hw *hw = &adapter->hw;
- int i;
-
- if (PAGE_SIZE < 8192) {
- struct net_device *netdev = adapter->netdev;

- /* For jumbo frames on systems with 4K pages we have to use
- * an order 1 page, so we might as well increase the size
- * of our Rx buffer to make better use of the available space
- */
- rx_buf_len = IAVF_RXBUFFER_3072;
-
- /* We use a 1536 buffer size for configurations with
- * standard Ethernet mtu. On x86 this gives us enough room
- * for shared info and 192 bytes of padding.
- */
- if (!IAVF_2K_TOO_SMALL_WITH_PADDING &&
- (netdev->mtu <= ETH_DATA_LEN))
- rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
- }
-
- for (i = 0; i < adapter->num_active_queues; i++) {
+ for (u32 i = 0; i < adapter->num_active_queues; i++)
adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i);
- adapter->rx_rings[i].rx_buf_len = rx_buf_len;
- }
}

/**
@@ -2577,11 +2557,7 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)

netdev->netdev_ops = &iavf_netdev_ops;
iavf_set_ethtool_ops(netdev);
- netdev->watchdog_timeo = 5 * HZ;
-
- /* MTU range: 68 - 9710 */
- netdev->min_mtu = ETH_MIN_MTU;
- netdev->max_mtu = IAVF_MAX_RXBUFFER - IAVF_PACKET_HDR_PAD;
+ netdev->max_mtu = LIBIE_MAX_MTU;

if (!is_valid_ether_addr(adapter->hw.mac.addr)) {
dev_info(&pdev->dev, "Invalid MAC address %pM, using random\n",
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index a761f3e3d7cc..ec256ec16e68 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -301,7 +301,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
((j / WB_STRIDE) == 0) && (j > 0) &&
!test_bit(__IAVF_VSI_DOWN, vsi->state) &&
(IAVF_DESC_UNUSED(tx_ring) != tx_ring->count))
- tx_ring->arm_wb = true;
+ tx_ring->flags |= IAVF_TXRX_FLAGS_ARM_WB;
}

/* notify netdev of completed buffers */
@@ -714,17 +714,16 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
/* Invalidate cache lines that may have been written to by
* device so that we avoid corrupting memory.
*/
- dma_sync_single_range_for_cpu(rx_ring->dev, dma, IAVF_SKB_PAD,
- rx_ring->rx_buf_len,
+ dma_sync_single_range_for_cpu(rx_ring->dev, dma,
+ LIBIE_SKB_HEADROOM,
+ LIBIE_RX_BUF_LEN,
DMA_FROM_DEVICE);

/* free resources associated with mapping */
- dma_unmap_page_attrs(rx_ring->dev, dma,
- iavf_rx_pg_size(rx_ring),
- DMA_FROM_DEVICE,
- IAVF_RX_DMA_ATTR);
+ dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
+ DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);

- __free_pages(page, iavf_rx_pg_order(rx_ring));
+ __free_page(page);
}

rx_ring->next_to_clean = 0;
@@ -813,31 +812,29 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
/**
* iavf_alloc_mapped_page - allocate and map a new page
* @dev: device used for DMA mapping
- * @order: page order to allocate
* @gfp: GFP mask to allocate page
*
 * Returns a new &page if it was successfully allocated, %NULL otherwise.
**/
-static struct page *iavf_alloc_mapped_page(struct device *dev, u32 order,
- gfp_t gfp)
+static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp)
{
struct page *page;
dma_addr_t dma;

/* alloc new page for storage */
- page = __dev_alloc_pages(gfp, order);
+ page = __dev_alloc_page(gfp);
if (unlikely(!page))
return NULL;

/* map page for use */
- dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order,
- DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
+ dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
+ IAVF_RX_DMA_ATTR);

/* if mapping failed free memory back to system since
* there isn't much point in holding memory we can't use
*/
if (dma_mapping_error(dev, dma)) {
- __free_pages(page, order);
+ __free_page(page);
return NULL;
}

@@ -879,7 +876,6 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
gfp_t gfp)
{
- u32 order = iavf_rx_pg_order(rx_ring);
struct device *dev = rx_ring->dev;
u32 ntu = rx_ring->next_to_use;
union iavf_rx_desc *rx_desc;
@@ -894,7 +890,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
struct page *page;
dma_addr_t dma;

- page = iavf_alloc_mapped_page(dev, order, gfp);
+ page = iavf_alloc_mapped_page(dev, gfp);
if (!page) {
rx_ring->rx_stats.alloc_page_failed++;
break;
@@ -904,14 +900,14 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
dma = page_pool_get_dma_addr(page);

/* sync the buffer for use by the device */
- dma_sync_single_range_for_device(dev, dma, IAVF_SKB_PAD,
- rx_ring->rx_buf_len,
+ dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM,
+ LIBIE_RX_BUF_LEN,
DMA_FROM_DEVICE);

/* Refresh the desc even if buffer_addrs didn't change
* because each write-back erases this info.
*/
- rx_desc->read.pkt_addr = cpu_to_le64(dma + IAVF_SKB_PAD);
+ rx_desc->read.pkt_addr = cpu_to_le64(dma + LIBIE_SKB_HEADROOM);

rx_desc++;
ntu++;
@@ -1082,24 +1078,16 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb)
* @skb: sk_buff to place the data into
* @page: page containing data to add
* @size: packet length from rx_desc
- * @pg_size: Rx buffer page size
*
* This function will add the data contained in page to the skb.
* It will just attach the page as a frag to the skb.
*
* The function will then update the page offset.
**/
-static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size,
- u32 pg_size)
+static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size)
{
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = pg_size / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
-#endif
-
- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, IAVF_SKB_PAD,
- size, truesize);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE);
}

/**
@@ -1113,40 +1101,34 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size,
static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size)
{
dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page),
- IAVF_SKB_PAD, size, DMA_FROM_DEVICE);
+ LIBIE_SKB_HEADROOM, size,
+ DMA_FROM_DEVICE);
}

/**
* iavf_build_skb - Build skb around an existing buffer
 * @page: Rx page with the data
* @size: size of the data
- * @pg_size: size of the Rx page
*
* This function builds an skb around an existing Rx buffer, taking care
* to set up the skb correctly and avoid any memcpy overhead.
*/
-static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size)
+static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
{
- void *va;
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = pg_size / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
- SKB_DATA_ALIGN(IAVF_SKB_PAD + size);
-#endif
struct sk_buff *skb;
+ void *va;

/* prefetch first cache line of first page */
va = page_address(page);
- net_prefetch(va + IAVF_SKB_PAD);
+ net_prefetch(va + LIBIE_SKB_HEADROOM);

/* build an skb around the page buffer */
- skb = napi_build_skb(va, truesize);
+ skb = napi_build_skb(va, LIBIE_RX_TRUESIZE);
if (unlikely(!skb))
return NULL;

/* update pointers within the skb to store the data */
- skb_reserve(skb, IAVF_SKB_PAD);
+ skb_reserve(skb, LIBIE_SKB_HEADROOM);
__skb_put(skb, size);

return skb;
@@ -1156,13 +1138,12 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size)
* iavf_unmap_rx_page - Unmap used page
* @dev: device used for DMA mapping
* @page: page to release
- * @pg_size: Rx page size
*/
-static void iavf_unmap_rx_page(struct device *dev, struct page *page,
- u32 pg_size)
+static void iavf_unmap_rx_page(struct device *dev, struct page *page)
{
- dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), pg_size,
- DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
+ dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page),
+ LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE,
+ IAVF_RX_DMA_ATTR);
page_pool_set_dma_addr(page, 0);
}

@@ -1208,7 +1189,6 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
- u32 pg_size = iavf_rx_pg_size(rx_ring);
struct sk_buff *skb = rx_ring->skb;
struct device *dev = rx_ring->dev;
u32 ntc = rx_ring->next_to_clean;
@@ -1259,23 +1239,23 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
* stripped by the HW.
*/
if (unlikely(!size)) {
- iavf_unmap_rx_page(dev, page, pg_size);
- __free_pages(page, get_order(pg_size));
+ iavf_unmap_rx_page(dev, page);
+ __free_page(page);
goto skip_data;
}

iavf_sync_rx_page(dev, page, size);
- iavf_unmap_rx_page(dev, page, pg_size);
+ iavf_unmap_rx_page(dev, page);

/* retrieve a buffer from the ring */
if (skb)
- iavf_add_rx_frag(skb, page, size, pg_size);
+ iavf_add_rx_frag(skb, page, size);
else
- skb = iavf_build_skb(page, size, pg_size);
+ skb = iavf_build_skb(page, size);

/* exit if we failed to retrieve a buffer */
if (!skb) {
- __free_pages(page, get_order(pg_size));
+ __free_page(page);
rx_ring->rx_stats.alloc_buff_failed++;
break;
}
@@ -1485,8 +1465,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget)
clean_complete = false;
continue;
}
- arm_wb |= ring->arm_wb;
- ring->arm_wb = false;
+ arm_wb |= !!(ring->flags & IAVF_TXRX_FLAGS_ARM_WB);
+ ring->flags &= ~IAVF_TXRX_FLAGS_ARM_WB;
}

/* Handle case where we are called by netpoll with a budget of 0 */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index c09ac580fe84..1421e90c7c4e 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -81,79 +81,11 @@ enum iavf_dyn_idx_t {
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) | \
BIT_ULL(IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP))

-/* Supported Rx Buffer Sizes (a multiple of 128) */
-#define IAVF_RXBUFFER_256 256
-#define IAVF_RXBUFFER_1536 1536 /* 128B aligned standard Ethernet frame */
-#define IAVF_RXBUFFER_2048 2048
-#define IAVF_RXBUFFER_3072 3072 /* Used for large frames w/ padding */
-#define IAVF_MAX_RXBUFFER 9728 /* largest size for single descriptor */
-
-/* NOTE: netdev_alloc_skb reserves up to 64 bytes, NET_IP_ALIGN means we
- * reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
- * this adds up to 512 bytes of extra data meaning the smallest allocation
- * we could have is 1K.
- * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
- * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
- */
-#define IAVF_RX_HDR_SIZE IAVF_RXBUFFER_256
-#define IAVF_PACKET_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
#define iavf_rx_desc iavf_32byte_rx_desc

#define IAVF_RX_DMA_ATTR \
(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

-/* Attempt to maximize the headroom available for incoming frames. We
- * use a 2K buffer for receives and need 1536/1534 to store the data for
- * the frame. This leaves us with 512 bytes of room. From that we need
- * to deduct the space needed for the shared info and the padding needed
- * to IP align the frame.
- *
- * Note: For cache line sizes 256 or larger this value is going to end
- * up negative. In these cases we should fall back to the legacy
- * receive path.
- */
-#if (PAGE_SIZE < 8192)
-#define IAVF_2K_TOO_SMALL_WITH_PADDING \
-((NET_SKB_PAD + IAVF_RXBUFFER_1536) > SKB_WITH_OVERHEAD(IAVF_RXBUFFER_2048))
-
-static inline int iavf_compute_pad(int rx_buf_len)
-{
- int page_size, pad_size;
-
- page_size = ALIGN(rx_buf_len, PAGE_SIZE / 2);
- pad_size = SKB_WITH_OVERHEAD(page_size) - rx_buf_len;
-
- return pad_size;
-}
-
-static inline int iavf_skb_pad(void)
-{
- int rx_buf_len;
-
- /* If a 2K buffer cannot handle a standard Ethernet frame then
- * optimize padding for a 3K buffer instead of a 1.5K buffer.
- *
- * For a 3K buffer we need to add enough padding to allow for
- * tailroom due to NET_IP_ALIGN possibly shifting us out of
- * cache-line alignment.
- */
- if (IAVF_2K_TOO_SMALL_WITH_PADDING)
- rx_buf_len = IAVF_RXBUFFER_3072 + SKB_DATA_ALIGN(NET_IP_ALIGN);
- else
- rx_buf_len = IAVF_RXBUFFER_1536;
-
- /* if needed make room for NET_IP_ALIGN */
- rx_buf_len -= NET_IP_ALIGN;
-
- return iavf_compute_pad(rx_buf_len);
-}
-
-#define IAVF_SKB_PAD iavf_skb_pad()
-#else
-#define IAVF_2K_TOO_SMALL_WITH_PADDING false
-#define IAVF_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)
-#endif
-
/**
* iavf_test_staterr - tests bits in Rx descriptor status and error fields
* @rx_desc: pointer to receive descriptor (in le64 format)
@@ -293,12 +225,6 @@ struct iavf_rx_queue_stats {
u64 alloc_buff_failed;
};

-enum iavf_ring_state_t {
- __IAVF_TX_FDIR_INIT_DONE,
- __IAVF_TX_XPS_INIT_DONE,
- __IAVF_RING_STATE_NBITS /* must be last */
-};
-
/* some useful defines for virtchannel interface, which
* is the only remaining user of header split
*/
@@ -320,10 +246,9 @@ struct iavf_ring {
struct iavf_tx_buffer *tx_bi;
struct page **rx_pages;
};
- DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS);
+ u8 __iomem *tail;
u16 queue_index; /* Queue number of ring */
u8 dcb_tc; /* Traffic class of ring */
- u8 __iomem *tail;

/* high bit set means dynamic, use accessors routines to read/write.
* hardware only supports 2us resolution for the ITR registers.
@@ -332,24 +257,16 @@ struct iavf_ring {
*/
u16 itr_setting;

- u16 count; /* Number of descriptors */
u16 reg_idx; /* HW register index of the ring */
- u16 rx_buf_len;
+ u16 count; /* Number of descriptors */

/* used in interrupt processing */
u16 next_to_use;
u16 next_to_clean;

- u8 atr_sample_rate;
- u8 atr_count;
-
- bool ring_active; /* is ring online or not */
- bool arm_wb; /* do something to arm write back */
- u8 packet_stride;
-
u16 flags;
#define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0)
-/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */
+#define IAVF_TXRX_FLAGS_ARM_WB BIT(1)
/* BIT(2) is free */
#define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3)
#define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4)
@@ -401,17 +318,6 @@ struct iavf_ring_container {
#define iavf_for_each_ring(pos, head) \
for (pos = (head).ring; pos != NULL; pos = pos->next)

-static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)
-{
-#if (PAGE_SIZE < 8192)
- if (ring->rx_buf_len > (PAGE_SIZE / 2))
- return 1;
-#endif
- return 0;
-}
-
-#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring))
-
void iavf_alloc_rx_pages(struct iavf_ring *rxr);
netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void iavf_clean_tx_ring(struct iavf_ring *tx_ring);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index 3030ba330326..bb90d8f3ad7e 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -10,8 +10,6 @@
#include "iavf_adminq.h"
#include "iavf_devids.h"

-#define IAVF_RXQ_CTX_DBUFF_SHIFT 7
-
/* IAVF_MASK is a macro used on 32 bit registers */
#define IAVF_MASK(mask, shift) ((u32)(mask) << (shift))

diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index ece5cb218b54..786b9c180c97 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */

+#include <linux/net/intel/libie/rx.h>
+
#include "iavf.h"
#include "iavf_prototype.h"
#include "iavf_client.h"
@@ -269,13 +271,12 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter)
void iavf_configure_queues(struct iavf_adapter *adapter)
{
struct virtchnl_vsi_queue_config_info *vqci;
- int i, max_frame = adapter->vf_res->max_mtu;
+ u32 i, max_frame = adapter->vf_res->max_mtu;
int pairs = adapter->num_active_queues;
struct virtchnl_queue_pair_info *vqpi;
size_t len;

- if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)
- max_frame = IAVF_MAX_RXBUFFER;
+ max_frame = min_not_zero(max_frame, LIBIE_MAX_RX_FRM_LEN);

if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -289,10 +290,6 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
if (!vqci)
return;

- /* Limit maximum frame size when jumbo frames is not enabled */
- if (adapter->netdev->mtu <= ETH_DATA_LEN)
- max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
-
vqci->vsi_id = adapter->vsi_res->vsi_id;
vqci->num_queue_pairs = pairs;
vqpi = vqci->qpair;
@@ -309,9 +306,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
vqpi->rxq.ring_len = adapter->rx_rings[i].count;
vqpi->rxq.dma_ring_addr = adapter->rx_rings[i].dma;
vqpi->rxq.max_pkt_size = max_frame;
- vqpi->rxq.databuffer_size =
- ALIGN(adapter->rx_rings[i].rx_buf_len,
- BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT));
+ vqpi->rxq.databuffer_size = LIBIE_RX_BUF_LEN;
vqpi++;
}

diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 58bd0f35d025..3e8d0d5206e1 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -4,6 +4,7 @@
#ifndef __LIBIE_RX_H
#define __LIBIE_RX_H

+#include <linux/if_vlan.h>
#include <linux/netdevice.h>

/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
@@ -125,4 +126,38 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
skb_set_hash(skb, hash, parsed.payload_layer);
}

+/* Rx MTU/buffer/truesize helpers. Mostly pure software-side; HW-defined values
+ * are valid for all Intel HW.
+ */
+
+/* Space reserved in front of each frame */
+#define LIBIE_SKB_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN)
+/* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */
+#define LIBIE_RX_LL_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN)
+
+/* Truesize: total space wasted on each frame. Always use order-0 pages */
+#define LIBIE_RX_PAGE_ORDER 0
+#define LIBIE_RX_TRUESIZE (PAGE_SIZE << LIBIE_RX_PAGE_ORDER)
+/* Rx buffer size config is a multiple of 128 */
+#define LIBIE_RX_BUF_LEN_ALIGN 128
+/* HW-writeable space in one buffer: truesize - headroom/tailroom,
+ * HW-aligned
+ */
+#define __LIBIE_RX_BUF_LEN \
+ ALIGN_DOWN(SKB_MAX_ORDER(LIBIE_SKB_HEADROOM, LIBIE_RX_PAGE_ORDER), \
+ LIBIE_RX_BUF_LEN_ALIGN)
+/* The largest size for a single descriptor as per HW */
+#define LIBIE_MAX_RX_BUF_LEN 9728U
+/* "True" HW-writeable space: minimum from SW and HW values */
+#define LIBIE_RX_BUF_LEN min_t(u32, __LIBIE_RX_BUF_LEN, \
+ LIBIE_MAX_RX_BUF_LEN)
+
+/* The maximum frame size as per HW (S/G) */
+#define __LIBIE_MAX_RX_FRM_LEN 16382U
+/* ATST, HW can chain up to 5 Rx descriptors */
+#define LIBIE_MAX_RX_FRM_LEN min_t(u32, __LIBIE_MAX_RX_FRM_LEN, \
+ LIBIE_RX_BUF_LEN * 5)
+/* Maximum frame size minus LL overhead */
+#define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
+
#endif /* __LIBIE_RX_H */
--
2.40.1


2023-05-16 16:23:12

by Alexander Lobakin

Subject: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

Each driver is responsible for syncing buffers written by the HW for the
CPU before accessing them. Almost every PP-enabled driver uses the same
pattern, which can be shorthanded into a static inline to make driver
code a little bit more compact.
Introduce a few such functions. The first one takes the actual size
of the data written by the HW and is the main one to be used on Rx. The
second does the same, but only if the PP performs DMA synchronizations
at all. The last one picks max_len from the PP params and is designed
for more extreme cases when the size is unknown, but the buffer still
needs to be synced.
Also constify pointer arguments of page_pool_get_dma_dir() and
page_pool_get_dma_addr() to give a bit more room for optimization,
as both of them are read-only.
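
A minimal usage sketch from a hypothetical driver Rx path (the function
name here is illustrative, not from this series):

static void example_rx_sync(const struct page_pool *pool,
			    const struct page *page, u32 size)
{
	/* @size is what the HW reported in the Rx descriptor;
	 * pool->p.offset (set at pool creation) is added internally.
	 * This is a no-op at runtime when the pool does no DMA syncs.
	 */
	page_pool_dma_maybe_sync_for_cpu(pool, page, size);
}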

Signed-off-by: Alexander Lobakin <[email protected]>
---
include/net/page_pool.h | 59 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 55 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8435013de06e..f740c50b661f 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -32,7 +32,7 @@

#include <linux/mm.h> /* Needed by ptr_ring */
#include <linux/ptr_ring.h>
-#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>

#define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
* map/unmap
@@ -237,8 +237,8 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
/* get the stored dma direction. A driver might decide to treat this locally and
* avoid the extra cache line from page_pool to determine the direction
*/
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
{
return pool->p.dma_dir;
}
@@ -363,7 +363,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \
(sizeof(dma_addr_t) > sizeof(unsigned long))

-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
{
dma_addr_t ret = page->dma_addr;

@@ -380,6 +380,57 @@ static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
page->dma_addr_upper = upper_32_bits(addr);
}

+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+ const struct page *page,
+ u32 dma_sync_size)
+{
+ dma_sync_single_range_for_cpu(pool->p.dev,
+ page_pool_get_dma_addr(page),
+ pool->p.offset, dma_sync_size,
+ page_pool_get_dma_dir(pool));
+}
+
+/**
+ * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
+ */
+static inline void
+page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
+ const struct page *page, u32 dma_sync_size)
+{
+ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
+}
+
+/**
+ * page_pool_dma_sync_full_for_cpu - sync full Rx page for CPU
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ *
+ * Performs sync for the entire length exposed to hardware. Can be used on
+ * DMA errors or before freeing the page, when it's unknown whether the HW
+ * touched the buffer.
+ */
+static inline void
+page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,
+ const struct page *page)
+{
+ page_pool_dma_sync_for_cpu(pool, page, pool->p.max_len);
+}
+
static inline bool is_page_pool_compiled_in(void)
{
#ifdef CONFIG_PAGE_POOL
--
2.40.1


2023-05-16 16:23:25

by Alexander Lobakin

Subject: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

It turned out that page_pool_put{,_full}_page() can burn quite a bunch of
cycles even on DMA-coherent platforms (like x86) with no active IOMMU or
swiotlb, just for the call ladder.
Indeed, it's

page_pool_put_page()
page_pool_put_defragged_page() <- external
__page_pool_put_page()
page_pool_dma_sync_for_device() <- non-inline
dma_sync_single_range_for_device()
dma_sync_single_for_device() <- external
dma_direct_sync_single_for_device()
dev_is_dma_coherent() <- exit

For the inline functions, there are no guarantees the compiler won't
uninline them (they're clearly not one-liners, and sometimes compilers
uninline even 2 + 2). The first external call is necessary, but the
remaining 2+ are done for nothing each time, plus a bunch of checks here
and there.
Since Page Pool mappings are long-term and, for one "device + addr" pair,
dma_need_sync() will always return the same value (basically, whether it
belongs to an swiotlb pool), addresses can be tested once right after
they're obtained, and the result can be reused until the page is unmapped.
Define a new PP flag meaning "do DMA syncs for device, but only when
needed", and turn it on by default when the driver asks to sync pages.
When a page is mapped, check whether it needs syncs, and if so, switch
that "sync when needed" back to "always do syncs" globally for the whole
pool (better safe than sorry). As long as a pool has no pages requiring
DMA syncs, this cuts off a good chunk of calls and checks.
On my x86_64, this gives from 2% to 5% performance benefit with no
negative impact for cases when IOMMU is on and the shortcut can't be
used.
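
From the driver's point of view nothing changes; a sketch of a typical
pool setup which transparently benefits from the shortcut (illustrative
values only):

	struct page_pool_params pp = {
		/* the driver still requests syncs as before;
		 * page_pool_init() downgrades DMA_SYNC_DEV to the new
		 * "maybe sync" flag and restores it on the first mapped
		 * page for which dma_need_sync() returns true
		 */
		.flags	 = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.max_len = PAGE_SIZE,
		/* ... */
	};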

Signed-off-by: Alexander Lobakin <[email protected]>
---
include/net/page_pool.h | 3 +++
net/core/page_pool.c | 10 ++++++++++
2 files changed, 13 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index c8ec2f34722b..8435013de06e 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -46,6 +46,9 @@
* device driver responsibility
*/
#define PP_FLAG_PAGE_FRAG BIT(2) /* for page frag feature */
+#define PP_FLAG_DMA_MAYBE_SYNC BIT(3) /* Internal, should not be used in the
+ * drivers
+ */
#define PP_FLAG_ALL (PP_FLAG_DMA_MAP |\
PP_FLAG_DMA_SYNC_DEV |\
PP_FLAG_PAGE_FRAG)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e212e9d7edcb..57f323dee6c4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -175,6 +175,10 @@ static int page_pool_init(struct page_pool *pool,
/* pool->p.offset has to be set according to the address
* offset used by the DMA engine to start copying rx data
*/
+
+ /* Try to avoid calling no-op syncs */
+ pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
+ pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
}

if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
@@ -323,6 +327,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)

page_pool_set_dma_addr(page, dma);

+ if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
+ dma_need_sync(pool->p.dev, dma)) {
+ pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
+ pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
+ }
+
if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
page_pool_dma_sync_for_device(pool, page, pool->p.max_len);

--
2.40.1


2023-05-16 16:24:07

by Alexander Lobakin

Subject: [PATCH net-next 08/11] iavf: switch to Page Pool

Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
no custom recycling logic and one whole page per frame, it can easily
be switched to using the Page Pool API instead.
Introduce libie_rx_page_pool_create(), a wrapper for creating a PP with
the default libie settings applicable to all Intel hardware, and replace
the alloc/free calls with the corresponding PP functions, including the
newly added sync-for-CPU helpers. Use skb_mark_for_recycle() to bring
back the recycling and restore the initial performance.

Among the important object code changes, it's worth mentioning that
__iavf_alloc_rx_pages() is now inlined due to its greatly reduced size.
The resulting driver is on par with the pre-series code and 1-2% slower
than the "optimized" version right before the recycling removal.
But the number of LoCs and object code bytes removed is much more
important here, not to mention that there's still vast
space for optimization and improvements.
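
Condensed, the new Rx hot path becomes roughly the following (a sketch
distilled from the diff below, error handling omitted):

	/* refill: pages come premapped and presynced from the pool */
	page = page_pool_alloc_pages(pool, gfp);
	dma = page_pool_get_dma_addr(page);
	rx_desc->read.pkt_addr = cpu_to_le64(dma + LIBIE_SKB_HEADROOM);

	/* clean: sync for CPU only when required, then build the skb */
	page_pool_dma_maybe_sync_for_cpu(pool, page, size);
	skb = napi_build_skb(page_address(page), LIBIE_RX_TRUESIZE);
	skb_mark_for_recycle(skb);	/* recycle back into the PP */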

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/Kconfig | 1 +
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 125 +++++---------------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 8 +-
drivers/net/ethernet/intel/libie/rx.c | 28 +++++
include/linux/net/intel/libie/rx.h | 3 +
5 files changed, 68 insertions(+), 97 deletions(-)

diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index cec4a938fbd0..a368afc42b8d 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -86,6 +86,7 @@ config E1000E_HWTS

config LIBIE
tristate
+ select PAGE_POOL
help
libie (Intel Ethernet library) is a common library containing
routines shared by several Intel Ethernet drivers.
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index ec256ec16e68..2c09f9cf81f9 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -690,8 +690,6 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
**/
void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
{
- u16 i;
-
/* ring already cleared, nothing to do */
if (!rx_ring->rx_pages)
return;
@@ -702,28 +700,17 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
}

/* Free all the Rx ring sk_buffs */
- for (i = 0; i < rx_ring->count; i++) {
+ for (u32 i = 0; i < rx_ring->count; i++) {
struct page *page = rx_ring->rx_pages[i];
- dma_addr_t dma;

if (!page)
continue;

- dma = page_pool_get_dma_addr(page);
-
/* Invalidate cache lines that may have been written to by
* device so that we avoid corrupting memory.
*/
- dma_sync_single_range_for_cpu(rx_ring->dev, dma,
- LIBIE_SKB_HEADROOM,
- LIBIE_RX_BUF_LEN,
- DMA_FROM_DEVICE);
-
- /* free resources associated with mapping */
- dma_unmap_page_attrs(rx_ring->dev, dma, LIBIE_RX_TRUESIZE,
- DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
-
- __free_page(page);
+ page_pool_dma_sync_full_for_cpu(rx_ring->pool, page);
+ page_pool_put_full_page(rx_ring->pool, page, false);
}

rx_ring->next_to_clean = 0;
@@ -738,10 +725,15 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
**/
void iavf_free_rx_resources(struct iavf_ring *rx_ring)
{
+ struct device *dev = rx_ring->pool->p.dev;
+
iavf_clean_rx_ring(rx_ring);
kfree(rx_ring->rx_pages);
rx_ring->rx_pages = NULL;

+ page_pool_destroy(rx_ring->pool);
+ rx_ring->dev = dev;
+
if (rx_ring->desc) {
dma_free_coherent(rx_ring->dev, rx_ring->size,
rx_ring->desc, rx_ring->dma);
@@ -758,13 +750,15 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
{
struct device *dev = rx_ring->dev;
+ struct page_pool *pool;
+ int ret = -ENOMEM;

/* warn if we are about to overwrite the pointer */
WARN_ON(rx_ring->rx_pages);
rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages),
GFP_KERNEL);
if (!rx_ring->rx_pages)
- return -ENOMEM;
+ return ret;

u64_stats_init(&rx_ring->syncp);

@@ -780,15 +774,27 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
goto err;
}

+ pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi,
+ rx_ring->count);
+ if (IS_ERR(pool)) {
+ ret = PTR_ERR(pool);
+ goto err_free_dma;
+ }
+
+ rx_ring->pool = pool;
+
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;

return 0;
+
+err_free_dma:
+ dma_free_coherent(dev, rx_ring->size, rx_ring->desc, rx_ring->dma);
err:
kfree(rx_ring->rx_pages);
rx_ring->rx_pages = NULL;

- return -ENOMEM;
+ return ret;
}

/**
@@ -809,40 +815,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
writel(val, rx_ring->tail);
}

-/**
- * iavf_alloc_mapped_page - allocate and map a new page
- * @dev: device used for DMA mapping
- * @gfp: GFP mask to allocate page
- *
- * Returns a new &page if it was successfully allocated, %NULL otherwise.
- **/
-static struct page *iavf_alloc_mapped_page(struct device *dev, gfp_t gfp)
-{
- struct page *page;
- dma_addr_t dma;
-
- /* alloc new page for storage */
- page = __dev_alloc_page(gfp);
- if (unlikely(!page))
- return NULL;
-
- /* map page for use */
- dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
- IAVF_RX_DMA_ATTR);
-
- /* if mapping failed free memory back to system since
- * there isn't much point in holding memory we can't use
- */
- if (dma_mapping_error(dev, dma)) {
- __free_page(page);
- return NULL;
- }
-
- page_pool_set_dma_addr(page, dma);
-
- return page;
-}
-
/**
* iavf_receive_skb - Send a completed packet up the stack
* @rx_ring: rx ring in play
@@ -876,7 +848,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
gfp_t gfp)
{
- struct device *dev = rx_ring->dev;
+ struct page_pool *pool = rx_ring->pool;
u32 ntu = rx_ring->next_to_use;
union iavf_rx_desc *rx_desc;

@@ -890,7 +862,7 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
struct page *page;
dma_addr_t dma;

- page = iavf_alloc_mapped_page(dev, gfp);
+ page = page_pool_alloc_pages(pool, gfp);
if (!page) {
rx_ring->rx_stats.alloc_page_failed++;
break;
@@ -899,11 +871,6 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
rx_ring->rx_pages[ntu] = page;
dma = page_pool_get_dma_addr(page);

- /* sync the buffer for use by the device */
- dma_sync_single_range_for_device(dev, dma, LIBIE_SKB_HEADROOM,
- LIBIE_RX_BUF_LEN,
- DMA_FROM_DEVICE);
-
/* Refresh the desc even if buffer_addrs didn't change
* because each write-back erases this info.
*/
@@ -1090,21 +1057,6 @@ static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size)
LIBIE_SKB_HEADROOM, size, LIBIE_RX_TRUESIZE);
}

-/**
- * iavf_sync_rx_page - Synchronize received data for use
- * @dev: device used for DMA mapping
- * @page: Rx page containing the data
- * @size: size of the received data
- *
- * This function will synchronize the Rx buffer for use by the CPU.
- */
-static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size)
-{
- dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page),
- LIBIE_SKB_HEADROOM, size,
- DMA_FROM_DEVICE);
-}
-
/**
* iavf_build_skb - Build skb around an existing buffer
 * @page: Rx page with the data
@@ -1127,6 +1079,8 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
if (unlikely(!skb))
return NULL;

+ skb_mark_for_recycle(skb);
+
/* update pointers within the skb to store the data */
skb_reserve(skb, LIBIE_SKB_HEADROOM);
__skb_put(skb, size);
@@ -1134,19 +1088,6 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)
return skb;
}

-/**
- * iavf_unmap_rx_page - Unmap used page
- * @dev: device used for DMA mapping
- * @page: page to release
- */
-static void iavf_unmap_rx_page(struct device *dev, struct page *page)
-{
- dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page),
- LIBIE_RX_TRUESIZE, DMA_FROM_DEVICE,
- IAVF_RX_DMA_ATTR);
- page_pool_set_dma_addr(page, 0);
-}
-
/**
* iavf_is_non_eop - process handling of non-EOP buffers
* @rx_ring: Rx ring being processed
@@ -1189,8 +1130,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
+ struct page_pool *pool = rx_ring->pool;
struct sk_buff *skb = rx_ring->skb;
- struct device *dev = rx_ring->dev;
u32 ntc = rx_ring->next_to_clean;
u32 ring_size = rx_ring->count;
u32 cleaned_count = 0;
@@ -1239,13 +1180,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
* stripped by the HW.
*/
if (unlikely(!size)) {
- iavf_unmap_rx_page(dev, page);
- __free_page(page);
+ page_pool_recycle_direct(pool, page);
goto skip_data;
}

- iavf_sync_rx_page(dev, page, size);
- iavf_unmap_rx_page(dev, page);
+ page_pool_dma_maybe_sync_for_cpu(pool, page, size);

/* retrieve a buffer from the ring */
if (skb)
@@ -1255,7 +1194,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)

/* exit if we failed to retrieve a buffer */
if (!skb) {
- __free_page(page);
+ page_pool_put_page(pool, page, size, true);
rx_ring->rx_stats.alloc_buff_failed++;
break;
}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 1421e90c7c4e..8fbe549ce6a5 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -83,9 +83,6 @@ enum iavf_dyn_idx_t {

#define iavf_rx_desc iavf_32byte_rx_desc

-#define IAVF_RX_DMA_ATTR \
- (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
-
/**
* iavf_test_staterr - tests bits in Rx descriptor status and error fields
* @rx_desc: pointer to receive descriptor (in le64 format)
@@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
struct iavf_ring {
struct iavf_ring *next; /* pointer to next ring in q_vector */
void *desc; /* Descriptor ring memory */
- struct device *dev; /* Used for DMA mapping */
+ union {
+ struct page_pool *pool; /* Used for Rx page management */
+ struct device *dev; /* Used for DMA mapping on Tx */
+ };
struct net_device *netdev; /* netdev ring maps to */
union {
struct iavf_tx_buffer *tx_bi;
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index f503476d8eef..d68eab76593c 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -105,6 +105,34 @@ const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = {
};
EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);

+/* Page Pool */
+
+/**
+ * libie_rx_page_pool_create - create a PP with the default libie settings
+ * @napi: &napi_struct covering this PP (no usage outside its poll loops)
+ * @size: size of the PP, usually simply Rx queue len
+ *
+ * Returns a &page_pool on success, an ERR_PTR(-errno) on failure.
+ */
+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+ u32 size)
+{
+ const struct page_pool_params pp = {
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+ .order = LIBIE_RX_PAGE_ORDER,
+ .pool_size = size,
+ .nid = NUMA_NO_NODE,
+ .dev = napi->dev->dev.parent,
+ .napi = napi,
+ .dma_dir = DMA_FROM_DEVICE,
+ .max_len = LIBIE_RX_BUF_LEN,
+ .offset = LIBIE_SKB_HEADROOM,
+ };
+
+ return page_pool_create(&pp);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
+
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Intel(R) Ethernet common library");
MODULE_LICENSE("GPL");
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 3e8d0d5206e1..f6ba3b19b7e2 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -160,4 +160,7 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
/* Maximum frame size minus LL overhead */
#define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)

+struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
+ u32 size);
+
#endif /* __LIBIE_RX_H */
--
2.40.1


2023-05-16 16:30:03

by Alexander Lobakin

Subject: [PATCH net-next 09/11] libie: add common queue stats

Next stop, per-queue private stats. They differ only subtly from driver
to driver, and the differences can easily be resolved.
Define common structures, inline helpers and Ethtool helpers to collect,
update and export the statistics. Use u64_stats_t right from the start,
as well as the corresponding helpers, to ensure tear-free operations.
For the NAPI parts of both Rx and Tx, also define small onstack
containers to update the stats in polling loops and then sync the actual
containers once a loop ends.
The drivers will be switched to use this API later, on a per-driver
basis, along with the conversion to PP.
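
A sketch of the intended per-driver usage (the ring layout here is
hypothetical, not part of this patch):

	struct libie_rq_onstack_stats onstack = { };

	/* NAPI poll: accumulate on stack... */
	while (packets_left) {
		onstack.packets++;
		onstack.bytes += size;
	}
	/* ...and sync into the queue container once per poll */
	libie_rq_napi_stats_add(&ring->rq_stats, &onstack);

	/* exception/slow path: one-off tear-free increments */
	libie_stats_inc_one(&ring->rq_stats, alloc_page_fail);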

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/libie/Makefile | 1 +
drivers/net/ethernet/intel/libie/stats.c | 119 ++++++++++++++
include/linux/net/intel/libie/stats.h | 179 ++++++++++++++++++++++
3 files changed, 299 insertions(+)
create mode 100644 drivers/net/ethernet/intel/libie/stats.c
create mode 100644 include/linux/net/intel/libie/stats.h

diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile
index 95e81d09b474..76f32253481b 100644
--- a/drivers/net/ethernet/intel/libie/Makefile
+++ b/drivers/net/ethernet/intel/libie/Makefile
@@ -4,3 +4,4 @@
obj-$(CONFIG_LIBIE) += libie.o

libie-objs += rx.o
+libie-objs += stats.o
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
new file mode 100644
index 000000000000..61456842a362
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation. */
+
+#include <linux/ethtool.h>
+#include <linux/net/intel/libie/stats.h>
+
+/* Rx per-queue stats */
+
+static const char * const libie_rq_stats_str[] = {
+#define act(s) __stringify(s),
+ DECLARE_LIBIE_RQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_RQ_STATS_NUM ARRAY_SIZE(libie_rq_stats_str)
+
+/**
+ * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
+ *
+ * Returns the number of per-queue Rx stats supported by the library.
+ */
+u32 libie_rq_stats_get_sset_count(void)
+{
+ return LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_rq_stats_get_strings - get the name strings of Ethtool RQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: RQ number to print in the prefix
+ */
+void libie_rq_stats_get_strings(u8 **data, u32 qid)
+{
+ for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+ ethtool_sprintf(data, "rq%u_%s", qid, libie_rq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
+
+/**
+ * libie_rq_stats_get_data - get the RQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: RQ stats container from the queue
+ */
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+{
+ u64 sarr[LIBIE_RQ_STATS_NUM];
+ u32 start;
+
+ do {
+ start = u64_stats_fetch_begin(&stats->syncp);
+
+ for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+ sarr[i] = u64_stats_read(&stats->raw[i]);
+ } while (u64_stats_fetch_retry(&stats->syncp, start));
+
+ for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+ (*data)[i] += sarr[i];
+
+ *data += LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_data, LIBIE);
+
+/* Tx per-queue stats */
+
+static const char * const libie_sq_stats_str[] = {
+#define act(s) __stringify(s),
+ DECLARE_LIBIE_SQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_SQ_STATS_NUM ARRAY_SIZE(libie_sq_stats_str)
+
+/**
+ * libie_sq_stats_get_sset_count - get the number of Ethtool SQ stats provided
+ *
+ * Returns the number of per-queue Tx stats supported by the library.
+ */
+u32 libie_sq_stats_get_sset_count(void)
+{
+ return LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_sq_stats_get_strings - get the name strings of Ethtool SQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: SQ number to print in the prefix
+ */
+void libie_sq_stats_get_strings(u8 **data, u32 qid)
+{
+ for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+ ethtool_sprintf(data, "sq%u_%s", qid, libie_sq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_strings, LIBIE);
+
+/**
+ * libie_sq_stats_get_data - get the SQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: SQ stats container from the queue
+ */
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats)
+{
+ u64 sarr[LIBIE_SQ_STATS_NUM];
+ u32 start;
+
+ do {
+ start = u64_stats_fetch_begin(&stats->syncp);
+
+ for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+ sarr[i] = u64_stats_read(&stats->raw[i]);
+ } while (u64_stats_fetch_retry(&stats->syncp, start));
+
+ for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+ (*data)[i] += sarr[i];
+
+ *data += LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_data, LIBIE);
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
new file mode 100644
index 000000000000..dbbc98bbd3a7
--- /dev/null
+++ b/include/linux/net/intel/libie/stats.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef __LIBIE_STATS_H
+#define __LIBIE_STATS_H
+
+#include <linux/u64_stats_sync.h>
+
+/* Common */
+
+/* Use 32-byte alignment to reduce false sharing */
+#define __libie_stats_aligned __aligned(4 * sizeof(u64_stats_t))
+
+/**
+ * libie_stats_add - update one structure counter from a local struct
+ * @qs: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @ss: local/onstack stats structure
+ * @f: name of the field to update
+ *
+ * If a local/onstack stats structure is used to collect statistics during
+ * hotpath loops, this macro can be used to shorthand updates, given that
+ * the fields have the same name.
+ * Must be guarded with u64_stats_update_{begin,end}().
+ */
+#define libie_stats_add(qs, ss, f) \
+ u64_stats_add(&(qs)->f, (ss)->f)
+
+/**
+ * __libie_stats_inc_one - safely increment one stats structure counter
+ * @s: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @f: name of the field to increment
+ * @n: name of the temporary variable, result of __UNIQUE_ID()
+ *
+ * To be used on exception or slow paths -- allocation fails, queue stops etc.
+ */
+#define __libie_stats_inc_one(s, f, n) ({ \
+ typeof(*(s)) *n = (s); \
+ \
+ u64_stats_update_begin(&n->syncp); \
+ u64_stats_inc(&n->f); \
+ u64_stats_update_end(&n->syncp); \
+})
+#define libie_stats_inc_one(s, f) \
+ __libie_stats_inc_one(s, f, __UNIQUE_ID(qs_))
+
+/* Rx per-queue stats:
+ * packets: packets received on this queue
+ * bytes: bytes received on this queue
+ * fragments: number of processed descriptors carrying only a fragment
+ * alloc_page_fail: number of Rx page allocation fails
+ * build_skb_fail: number of build_skb() fails
+ */
+
+#define DECLARE_LIBIE_RQ_NAPI_STATS(act) \
+ act(packets) \
+ act(bytes) \
+ act(fragments)
+
+#define DECLARE_LIBIE_RQ_FAIL_STATS(act) \
+ act(alloc_page_fail) \
+ act(build_skb_fail)
+
+#define DECLARE_LIBIE_RQ_STATS(act) \
+ DECLARE_LIBIE_RQ_NAPI_STATS(act) \
+ DECLARE_LIBIE_RQ_FAIL_STATS(act)
+
+struct libie_rq_stats {
+ struct u64_stats_sync syncp;
+
+ union {
+ struct {
+#define act(s) u64_stats_t s;
+ DECLARE_LIBIE_RQ_NAPI_STATS(act);
+ DECLARE_LIBIE_RQ_FAIL_STATS(act);
+#undef act
+ };
+ DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+ };
+} __libie_stats_aligned;
+
+/* Rx stats being modified frequently during the NAPI polling, to sync them
+ * with the queue stats once after the loop is finished.
+ */
+struct libie_rq_onstack_stats {
+ union {
+ struct {
+#define act(s) u32 s;
+ DECLARE_LIBIE_RQ_NAPI_STATS(act);
+#undef act
+ };
+ DECLARE_FLEX_ARRAY(u32, raw);
+ };
+};
+
+/**
+ * libie_rq_napi_stats_add - add onstack Rx stats to the queue container
+ * @qs: Rx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_rq_napi_stats_add(struct libie_rq_stats *qs,
+ const struct libie_rq_onstack_stats *ss)
+{
+ u64_stats_update_begin(&qs->syncp);
+ libie_stats_add(qs, ss, packets);
+ libie_stats_add(qs, ss, bytes);
+ libie_stats_add(qs, ss, fragments);
+ u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_rq_stats_get_sset_count(void);
+void libie_rq_stats_get_strings(u8 **data, u32 qid);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+
+/* Tx per-queue stats:
+ * packets: packets sent from this queue
+ * bytes: bytes sent from this queue
+ * busy: number of xmit failures due to the ring being full
+ * stops: number of times the ring was stopped by the driver
+ * restarts: number of times it was restarted after being stopped
+ * linearized: number of skbs linearized due to HW limits
+ */
+
+#define DECLARE_LIBIE_SQ_NAPI_STATS(act) \
+ act(packets) \
+ act(bytes)
+
+#define DECLARE_LIBIE_SQ_XMIT_STATS(act) \
+ act(busy) \
+ act(stops) \
+ act(restarts) \
+ act(linearized)
+
+#define DECLARE_LIBIE_SQ_STATS(act) \
+ DECLARE_LIBIE_SQ_NAPI_STATS(act) \
+ DECLARE_LIBIE_SQ_XMIT_STATS(act)
+
+struct libie_sq_stats {
+ struct u64_stats_sync syncp;
+
+ union {
+ struct {
+#define act(s) u64_stats_t s;
+ DECLARE_LIBIE_SQ_STATS(act);
+#undef act
+ };
+ DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+ };
+} __libie_stats_aligned;
+
+struct libie_sq_onstack_stats {
+#define act(s) u32 s;
+ DECLARE_LIBIE_SQ_NAPI_STATS(act);
+#undef act
+};
+
+/**
+ * libie_sq_napi_stats_add - add onstack Tx stats to the queue container
+ * @qs: Tx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_sq_napi_stats_add(struct libie_sq_stats *qs,
+ const struct libie_sq_onstack_stats *ss)
+{
+ if (unlikely(!ss->packets))
+ return;
+
+ u64_stats_update_begin(&qs->syncp);
+ libie_stats_add(qs, ss, packets);
+ libie_stats_add(qs, ss, bytes);
+ u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_sq_stats_get_sset_count(void);
+void libie_sq_stats_get_strings(u8 **data, u32 qid);
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats);
+
+#endif /* __LIBIE_STATS_H */
--
2.40.1


2023-05-16 16:30:48

by Alexander Lobakin

Subject: [PATCH net-next 10/11] libie: add per-queue Page Pool stats

Expand the libie generic per-queue stats with the generic Page Pool
stats provided by the API itself, when CONFIG_PAGE_POOL_STATS is
enabled. When it's not, there will be no such fields in the stats
structure, so no space is wasted.
They are also a bit special in terms of how they are obtained. A
&page_pool accumulates statistics until it's destroyed, which happens
on ifdown. So, in order to not lose any statistics, get the stats and
store them in the queue container before destroying a pool. This
container survives ifups/downs, so it basically stores the statistics
accumulated since the very first pool was allocated on this queue. When
the stats need to be exported, first take the numbers from this
container and then add the "live" numbers -- the ones the currently
active pool returns. The resulting values will always represent the
actual device-lifetime* stats.
There's a cast from &page_pool_stats to `u64 *` in a couple of
functions, but it's guarded with stats asserts to make sure it's safe
to do. FWIW, it saves a lot of object code.
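
A sketch of the resulting ifdown path in a driver (assuming the stats
container lives in a structure which survives ifups/downs):

	/* fold the dying pool's counters into the persistent container,
	 * so that the next ifup continues from the same totals
	 */
	libie_rx_page_pool_destroy(ring->pool, &ring->rq_stats);
	ring->pool = NULL;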

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/libie/internal.h | 23 +++++++
drivers/net/ethernet/intel/libie/rx.c | 20 ++++++
drivers/net/ethernet/intel/libie/stats.c | 72 ++++++++++++++++++++-
include/linux/net/intel/libie/rx.h | 4 ++
include/linux/net/intel/libie/stats.h | 39 ++++++++++-
5 files changed, 155 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/intel/libie/internal.h

diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
new file mode 100644
index 000000000000..083398dc37c6
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/internal.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* libie internal declarations not to be used in drivers.
+ *
+ * Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_INTERNAL_H
+#define __LIBIE_INTERNAL_H
+
+struct libie_rq_stats;
+struct page_pool;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+ struct page_pool *pool);
+#else
+static inline void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+ struct page_pool *pool)
+{
+}
+#endif
+
+#endif /* __LIBIE_INTERNAL_H */
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index d68eab76593c..128f134aca6d 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -3,6 +3,8 @@

#include <linux/net/intel/libie/rx.h>

+#include "internal.h"
+
/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
* bitfield struct.
*/
@@ -133,6 +135,24 @@ struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
}
EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);

+/**
+ * libie_rx_page_pool_destroy - destroy a &page_pool created by libie
+ * @pool: pool to destroy
+ * @stats: RQ stats from the ring (or %NULL to skip updating PP stats)
+ *
+ * As the stats usually have the same lifetime as the device, while a PP is
+ * created/destroyed on ifup/ifdown, the PP stats need to be added to the
+ * driver stats container in order to not lose what was accumulated during
+ * the last ifup. Then the PP gets destroyed.
+ */
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+ struct libie_rq_stats *stats)
+{
+ libie_rq_stats_sync_pp(stats, pool);
+ page_pool_destroy(pool);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_destroy, LIBIE);
+
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Intel(R) Ethernet common library");
MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
index 61456842a362..95bbb38c39e3 100644
--- a/drivers/net/ethernet/intel/libie/stats.c
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -4,6 +4,8 @@
#include <linux/ethtool.h>
#include <linux/net/intel/libie/stats.h>

+#include "internal.h"
+
/* Rx per-queue stats */

static const char * const libie_rq_stats_str[] = {
@@ -14,6 +16,70 @@ static const char * const libie_rq_stats_str[] = {

#define LIBIE_RQ_STATS_NUM ARRAY_SIZE(libie_rq_stats_str)

+#ifdef CONFIG_PAGE_POOL_STATS
+/**
+ * libie_rq_stats_get_pp - get the current stats from a &page_pool
+ * @sarr: local array to add stats to
+ * @pool: pool to get the stats from
+ *
+ * Adds the current "live" stats from an online PP to the stats read from
+ * the RQ container, so that the actual totals will be returned.
+ */
+static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+ struct page_pool_stats *pps;
+ /* Used only to calculate pos below */
+ struct libie_rq_stats tmp;
+ u32 pos;
+
+ /* Validate that the libie PP stats array can be cast <-> the PP struct */
+ static_assert(sizeof(tmp.pp) == sizeof(*pps));
+
+ if (!pool)
+ return;
+
+ /* Position of the first Page Pool stats field */
+ pos = (u64_stats_t *)&tmp.pp - tmp.raw;
+ pps = (typeof(pps))&sarr[pos];
+
+ page_pool_get_stats(pool, pps);
+}
+
+/**
+ * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
+ * @stats: stats structure to update
+ * @pool: pool to read the stats
+ *
+ * Called by libie_rx_page_pool_destroy() to save the stats before destroying
+ * the pool.
+ */
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+ struct page_pool *pool)
+{
+ u64_stats_t *qarr = (u64_stats_t *)&stats->pp;
+ struct page_pool_stats pps = { };
+ u64 *sarr = (u64 *)&pps;
+
+ if (!stats)
+ return;
+
+ page_pool_get_stats(pool, &pps);
+
+ u64_stats_update_begin(&stats->syncp);
+
+ for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
+ u64_stats_add(&qarr[i], sarr[i]);
+
+ u64_stats_update_end(&stats->syncp);
+}
+#else
+static inline void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+}
+
+/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
+#endif
+
/**
* libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
*
@@ -41,8 +107,10 @@ EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
* libie_rq_stats_get_data - get the RQ stats in Ethtool format
* @data: reference to the cursor pointing to the output array
* @stats: RQ stats container from the queue
+ * @pool: &page_pool from the queue (%NULL to ignore PP "live" stats)
*/
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+ struct page_pool *pool)
{
u64 sarr[LIBIE_RQ_STATS_NUM];
u32 start;
@@ -54,6 +122,8 @@ void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
sarr[i] = u64_stats_read(&stats->raw[i]);
} while (u64_stats_fetch_retry(&stats->syncp, start));

+ libie_rq_stats_get_pp(sarr, pool);
+
for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
(*data)[i] += sarr[i];

diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index f6ba3b19b7e2..474ffd689001 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -160,7 +160,11 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
/* Maximum frame size minus LL overhead */
#define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)

+struct libie_rq_stats;
+
struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
u32 size);
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+ struct libie_rq_stats *stats);

#endif /* __LIBIE_RX_H */
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
index dbbc98bbd3a7..23ca0079a905 100644
--- a/include/linux/net/intel/libie/stats.h
+++ b/include/linux/net/intel/libie/stats.h
@@ -49,6 +49,17 @@
* fragments: number of processed descriptors carrying only a fragment
* alloc_page_fail: number of Rx page allocation fails
* build_skb_fail: number of build_skb() fails
+ * pp_alloc_fast: pages taken from the cache or ring
+ * pp_alloc_slow: actual page allocations
+ * pp_alloc_slow_ho: non-order-0 page allocations
+ * pp_alloc_empty: number of times the pool was empty
+ * pp_alloc_refill: number of cache refills
+ * pp_alloc_waive: NUMA node mismatches during recycling
+ * pp_recycle_cached: direct recyclings into the cache
+ * pp_recycle_cache_full: number of times the cache was full
+ * pp_recycle_ring: recyclings into the ring
+ * pp_recycle_ring_full: number of times the ring was full
+ * pp_recycle_released_ref: pages released due to elevated refcnt
*/

#define DECLARE_LIBIE_RQ_NAPI_STATS(act) \
@@ -60,9 +71,29 @@
act(alloc_page_fail) \
act(build_skb_fail)

+#ifdef CONFIG_PAGE_POOL_STATS
+#define DECLARE_LIBIE_RQ_PP_STATS(act) \
+ act(pp_alloc_fast) \
+ act(pp_alloc_slow) \
+ act(pp_alloc_slow_ho) \
+ act(pp_alloc_empty) \
+ act(pp_alloc_refill) \
+ act(pp_alloc_waive) \
+ act(pp_recycle_cached) \
+ act(pp_recycle_cache_full) \
+ act(pp_recycle_ring) \
+ act(pp_recycle_ring_full) \
+ act(pp_recycle_released_ref)
+#else
+#define DECLARE_LIBIE_RQ_PP_STATS(act)
+#endif
+
#define DECLARE_LIBIE_RQ_STATS(act) \
DECLARE_LIBIE_RQ_NAPI_STATS(act) \
- DECLARE_LIBIE_RQ_FAIL_STATS(act)
+ DECLARE_LIBIE_RQ_FAIL_STATS(act) \
+ DECLARE_LIBIE_RQ_PP_STATS(act)
+
+struct page_pool;

struct libie_rq_stats {
struct u64_stats_sync syncp;
@@ -72,6 +103,9 @@ struct libie_rq_stats {
#define act(s) u64_stats_t s;
DECLARE_LIBIE_RQ_NAPI_STATS(act);
DECLARE_LIBIE_RQ_FAIL_STATS(act);
+ struct_group(pp,
+ DECLARE_LIBIE_RQ_PP_STATS(act);
+ );
#undef act
};
DECLARE_FLEX_ARRAY(u64_stats_t, raw);
@@ -110,7 +144,8 @@ libie_rq_napi_stats_add(struct libie_rq_stats *qs,

u32 libie_rq_stats_get_sset_count(void);
void libie_rq_stats_get_strings(u8 **data, u32 qid);
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+ struct page_pool *pool);

/* Tx per-queue stats:
* packets: packets sent from this queue
--
2.40.1


2023-05-16 16:31:02

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH net-next 02/11] iavf: kill "legacy-rx" for good

Ever since build_skb() became stable, the old way of allocating an skb
to store the headers separately, which are then copied manually, has
been slower, less flexible and thus obsolete:

* it put higher pressure on the MM since it actually allocates new
pages, which then get split and refcount-biased (NAPI page cache);
* it implies memcpy() of packet headers (40+ bytes per frame);
* the actual header length is calculated via eth_get_headlen(), which
invokes Flow Dissector and thus wastes a bunch of CPU cycles;
* XDP makes it even weirder, since it has required headroom for a long
time now and, since multi-buffer (mbuf) landed, tailroom as well. Take
a look at the ice driver, which is built around workarounds to make
XDP work with it.

Even on some quite low-end hardware (not a common case for 100G NICs),
it performed worse.
The only advantage "legacy-rx" had is that it didn't require any
reserved headroom or tailroom. But iavf never benefited from this, as
it always splits pages into two halves of 2k, while that saving would
only matter when striding. And again, XDP effectively removes that
sole pro.

There's a train of features to land in IAVF soon: Page Pool, XDP, XSk,
multi-buffer etc. Each new one would require adding more and more Danse
Macabre for absolutely no reason, besides making the hotpath less and
less efficient.
Remove the "feature" with all the related code. This includes at least
one very hot branch (typically hit on each new frame), which was either
always-true or always-false for at least a complete NAPI bulk of 64
frames, the whole private flags cruft and so on. Some stats:

Function: add/remove: 0/2 grow/shrink: 0/7 up/down: 0/-774 (-774)
RO Data: add/remove: 0/1 grow/shrink: 0/0 up/down: 0/-40 (-40)
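
For reference, what remains is the build_skb()-only model. A simplified
sketch (not the exact driver code; @va points at a buffer with
IAVF_SKB_PAD headroom reserved in front of the frame data):

        skb = napi_build_skb(va, truesize);     /* wrap the existing buffer */
        if (unlikely(!skb))
                return NULL;

        skb_reserve(skb, IAVF_SKB_PAD);         /* skip the headroom */
        __skb_put(skb, size);   /* no eth_get_headlen(), no memcpy() */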

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/iavf/iavf.h | 2 +-
.../net/ethernet/intel/iavf/iavf_ethtool.c | 140 ------------------
drivers/net/ethernet/intel/iavf/iavf_main.c | 10 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 84 +----------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 18 +--
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 3 +-
6 files changed, 8 insertions(+), 249 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index 9abaff1f2aff..a780e7aa1c2f 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -298,7 +298,7 @@ struct iavf_adapter {
#define IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS BIT(12)
#define IAVF_FLAG_PROMISC_ON BIT(13)
#define IAVF_FLAG_ALLMULTI_ON BIT(14)
-#define IAVF_FLAG_LEGACY_RX BIT(15)
+/* BIT(15) is free, was IAVF_FLAG_LEGACY_RX */
#define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16)
#define IAVF_FLAG_QUEUES_DISABLED BIT(17)
#define IAVF_FLAG_SETUP_NETDEV_FEATURES BIT(18)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index 6f171d1d85b7..de3050c02b6f 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -239,29 +239,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = {

#define IAVF_QUEUE_STATS_LEN ARRAY_SIZE(iavf_gstrings_queue_stats)

-/* For now we have one and only one private flag and it is only defined
- * when we have support for the SKIP_CPU_SYNC DMA attribute. Instead
- * of leaving all this code sitting around empty we will strip it unless
- * our one private flag is actually available.
- */
-struct iavf_priv_flags {
- char flag_string[ETH_GSTRING_LEN];
- u32 flag;
- bool read_only;
-};
-
-#define IAVF_PRIV_FLAG(_name, _flag, _read_only) { \
- .flag_string = _name, \
- .flag = _flag, \
- .read_only = _read_only, \
-}
-
-static const struct iavf_priv_flags iavf_gstrings_priv_flags[] = {
- IAVF_PRIV_FLAG("legacy-rx", IAVF_FLAG_LEGACY_RX, 0),
-};
-
-#define IAVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(iavf_gstrings_priv_flags)
-
/**
* iavf_get_link_ksettings - Get Link Speed and Duplex settings
* @netdev: network interface device structure
@@ -341,8 +318,6 @@ static int iavf_get_sset_count(struct net_device *netdev, int sset)
return IAVF_STATS_LEN +
(IAVF_QUEUE_STATS_LEN * 2 *
netdev->real_num_tx_queues);
- else if (sset == ETH_SS_PRIV_FLAGS)
- return IAVF_PRIV_FLAGS_STR_LEN;
else
return -EINVAL;
}
@@ -384,24 +359,6 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
rcu_read_unlock();
}

-/**
- * iavf_get_priv_flag_strings - Get private flag strings
- * @netdev: network interface device structure
- * @data: buffer for string data
- *
- * Builds the private flags string table
- **/
-static void iavf_get_priv_flag_strings(struct net_device *netdev, u8 *data)
-{
- unsigned int i;
-
- for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
- snprintf(data, ETH_GSTRING_LEN, "%s",
- iavf_gstrings_priv_flags[i].flag_string);
- data += ETH_GSTRING_LEN;
- }
-}
-
/**
* iavf_get_stat_strings - Get stat strings
* @netdev: network interface device structure
@@ -440,105 +397,11 @@ static void iavf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
case ETH_SS_STATS:
iavf_get_stat_strings(netdev, data);
break;
- case ETH_SS_PRIV_FLAGS:
- iavf_get_priv_flag_strings(netdev, data);
- break;
default:
break;
}
}

-/**
- * iavf_get_priv_flags - report device private flags
- * @netdev: network interface device structure
- *
- * The get string set count and the string set should be matched for each
- * flag returned. Add new strings for each flag to the iavf_gstrings_priv_flags
- * array.
- *
- * Returns a u32 bitmap of flags.
- **/
-static u32 iavf_get_priv_flags(struct net_device *netdev)
-{
- struct iavf_adapter *adapter = netdev_priv(netdev);
- u32 i, ret_flags = 0;
-
- for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
- const struct iavf_priv_flags *priv_flags;
-
- priv_flags = &iavf_gstrings_priv_flags[i];
-
- if (priv_flags->flag & adapter->flags)
- ret_flags |= BIT(i);
- }
-
- return ret_flags;
-}
-
-/**
- * iavf_set_priv_flags - set private flags
- * @netdev: network interface device structure
- * @flags: bit flags to be set
- **/
-static int iavf_set_priv_flags(struct net_device *netdev, u32 flags)
-{
- struct iavf_adapter *adapter = netdev_priv(netdev);
- u32 orig_flags, new_flags, changed_flags;
- u32 i;
-
- orig_flags = READ_ONCE(adapter->flags);
- new_flags = orig_flags;
-
- for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
- const struct iavf_priv_flags *priv_flags;
-
- priv_flags = &iavf_gstrings_priv_flags[i];
-
- if (flags & BIT(i))
- new_flags |= priv_flags->flag;
- else
- new_flags &= ~(priv_flags->flag);
-
- if (priv_flags->read_only &&
- ((orig_flags ^ new_flags) & ~BIT(i)))
- return -EOPNOTSUPP;
- }
-
- /* Before we finalize any flag changes, any checks which we need to
- * perform to determine if the new flags will be supported should go
- * here...
- */
-
- /* Compare and exchange the new flags into place. If we failed, that
- * is if cmpxchg returns anything but the old value, this means
- * something else must have modified the flags variable since we
- * copied it. We'll just punt with an error and log something in the
- * message buffer.
- */
- if (cmpxchg(&adapter->flags, orig_flags, new_flags) != orig_flags) {
- dev_warn(&adapter->pdev->dev,
- "Unable to update adapter->flags as it was modified by another thread...\n");
- return -EAGAIN;
- }
-
- changed_flags = orig_flags ^ new_flags;
-
- /* Process any additional changes needed as a result of flag changes.
- * The changed_flags value reflects the list of bits that were changed
- * in the code above.
- */
-
- /* issue a reset to force legacy-rx change to take effect */
- if (changed_flags & IAVF_FLAG_LEGACY_RX) {
- if (netif_running(netdev)) {
- adapter->flags |= IAVF_FLAG_RESET_NEEDED;
- queue_work(adapter->wq, &adapter->reset_task);
- }
- }
-
- return 0;
-}
-
/**
* iavf_get_msglevel - Get debug message level
* @netdev: network interface device structure
@@ -584,7 +447,6 @@ static void iavf_get_drvinfo(struct net_device *netdev,
strscpy(drvinfo->driver, iavf_driver_name, 32);
strscpy(drvinfo->fw_version, "N/A", 4);
strscpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
- drvinfo->n_priv_flags = IAVF_PRIV_FLAGS_STR_LEN;
}

/**
@@ -1969,8 +1831,6 @@ static const struct ethtool_ops iavf_ethtool_ops = {
.get_strings = iavf_get_strings,
.get_ethtool_stats = iavf_get_ethtool_stats,
.get_sset_count = iavf_get_sset_count,
- .get_priv_flags = iavf_get_priv_flags,
- .set_priv_flags = iavf_set_priv_flags,
.get_msglevel = iavf_get_msglevel,
.set_msglevel = iavf_set_msglevel,
.get_coalesce = iavf_get_coalesce,
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index c17e909d3ff0..a5a6c9861a93 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -713,9 +713,7 @@ static void iavf_configure_rx(struct iavf_adapter *adapter)
struct iavf_hw *hw = &adapter->hw;
int i;

- /* Legacy Rx will always default to a 2048 buffer size. */
-#if (PAGE_SIZE < 8192)
- if (!(adapter->flags & IAVF_FLAG_LEGACY_RX)) {
+ if (PAGE_SIZE < 8192) {
struct net_device *netdev = adapter->netdev;

/* For jumbo frames on systems with 4K pages we have to use
@@ -732,16 +730,10 @@ static void iavf_configure_rx(struct iavf_adapter *adapter)
(netdev->mtu <= ETH_DATA_LEN))
rx_buf_len = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;
}
-#endif

for (i = 0; i < adapter->num_active_queues; i++) {
adapter->rx_rings[i].tail = hw->hw_addr + IAVF_QRX_TAIL1(i);
adapter->rx_rings[i].rx_buf_len = rx_buf_len;
-
- if (adapter->flags & IAVF_FLAG_LEGACY_RX)
- clear_ring_build_skb_enabled(&adapter->rx_rings[i]);
- else
- set_ring_build_skb_enabled(&adapter->rx_rings[i]);
}
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index a83b96e9b6fc..a7121dc5c32b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -824,17 +824,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
writel(val, rx_ring->tail);
}

-/**
- * iavf_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static inline unsigned int iavf_rx_offset(struct iavf_ring *rx_ring)
-{
- return ring_uses_build_skb(rx_ring) ? IAVF_SKB_PAD : 0;
-}
-
/**
* iavf_alloc_mapped_page - recycle or make a new page
* @rx_ring: ring to use
@@ -879,7 +868,7 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,

bi->dma = dma;
bi->page = page;
- bi->page_offset = iavf_rx_offset(rx_ring);
+ bi->page_offset = IAVF_SKB_PAD;

/* initialize pagecnt_bias to 1 representing we fully own page */
bi->pagecnt_bias = 1;
@@ -1220,7 +1209,7 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
#if (PAGE_SIZE < 8192)
unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
#else
- unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring));
+ unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
#endif

if (!size)
@@ -1268,71 +1257,6 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
return rx_buffer;
}

-/**
- * iavf_construct_skb - Allocate skb and populate it
- * @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: rx buffer to pull data from
- * @size: size of buffer to add to skb
- *
- * This function allocates an skb. It then populates it with the page
- * data from the current receive descriptor, taking care to set up the
- * skb correctly.
- */
-static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *rx_buffer,
- unsigned int size)
-{
- void *va;
-#if (PAGE_SIZE < 8192)
- unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
-#else
- unsigned int truesize = SKB_DATA_ALIGN(size);
-#endif
- unsigned int headlen;
- struct sk_buff *skb;
-
- if (!rx_buffer)
- return NULL;
- /* prefetch first cache line of first page */
- va = page_address(rx_buffer->page) + rx_buffer->page_offset;
- net_prefetch(va);
-
- /* allocate a skb to store the frags */
- skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
- IAVF_RX_HDR_SIZE,
- GFP_ATOMIC | __GFP_NOWARN);
- if (unlikely(!skb))
- return NULL;
-
- /* Determine available headroom for copy */
- headlen = size;
- if (headlen > IAVF_RX_HDR_SIZE)
- headlen = eth_get_headlen(skb->dev, va, IAVF_RX_HDR_SIZE);
-
- /* align pull length to size of long to optimize memcpy performance */
- memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));
-
- /* update all of the pointers */
- size -= headlen;
- if (size) {
- skb_add_rx_frag(skb, 0, rx_buffer->page,
- rx_buffer->page_offset + headlen,
- size, truesize);
-
- /* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
- } else {
- /* buffer is unused, reset bias back to rx_buffer */
- rx_buffer->pagecnt_bias++;
- }
-
- return skb;
-}
-
/**
* iavf_build_skb - Build skb around an existing buffer
* @rx_ring: Rx descriptor ring to transact packets on
@@ -1505,10 +1429,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* retrieve a buffer from the ring */
if (skb)
iavf_add_rx_frag(rx_ring, rx_buffer, skb, size);
- else if (ring_uses_build_skb(rx_ring))
- skb = iavf_build_skb(rx_ring, rx_buffer, size);
else
- skb = iavf_construct_skb(rx_ring, rx_buffer, size);
+ skb = iavf_build_skb(rx_ring, rx_buffer, size);

/* exit if we failed to retrieve a buffer */
if (!skb) {
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 2624bf6d009e..234e189c1987 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -362,7 +362,8 @@ struct iavf_ring {

u16 flags;
#define IAVF_TXR_FLAGS_WB_ON_ITR BIT(0)
-#define IAVF_RXR_FLAGS_BUILD_SKB_ENABLED BIT(1)
+/* BIT(1) is free, was IAVF_RXR_FLAGS_BUILD_SKB_ENABLED */
+/* BIT(2) is free */
#define IAVF_TXRX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(3)
#define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4)
#define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5)
@@ -393,21 +394,6 @@ struct iavf_ring {
*/
} ____cacheline_internodealigned_in_smp;

-static inline bool ring_uses_build_skb(struct iavf_ring *ring)
-{
- return !!(ring->flags & IAVF_RXR_FLAGS_BUILD_SKB_ENABLED);
-}
-
-static inline void set_ring_build_skb_enabled(struct iavf_ring *ring)
-{
- ring->flags |= IAVF_RXR_FLAGS_BUILD_SKB_ENABLED;
-}
-
-static inline void clear_ring_build_skb_enabled(struct iavf_ring *ring)
-{
- ring->flags &= ~IAVF_RXR_FLAGS_BUILD_SKB_ENABLED;
-}
-
#define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002
#define IAVF_ITR_ADAPTIVE_MIN_USECS 0x0002
#define IAVF_ITR_ADAPTIVE_MAX_USECS 0x007e
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index 9afbbdac3590..ece5cb218b54 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -290,8 +290,7 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
return;

/* Limit maximum frame size when jumbo frames is not enabled */
- if (!(adapter->flags & IAVF_FLAG_LEGACY_RX) &&
- (adapter->netdev->mtu <= ETH_DATA_LEN))
+ if (adapter->netdev->mtu <= ETH_DATA_LEN)
max_frame = IAVF_RXBUFFER_1536 - NET_IP_ALIGN;

vqci->vsi_id = adapter->vsi_res->vsi_id;
--
2.40.1


2023-05-16 16:31:13

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH net-next 04/11] iavf: remove page splitting/recycling

As an intermediate step, remove all page splitting/recycling code. Just
always allocate a new page and don't touch its refcount, so that it gets
freed by the core stack later.
The change allows us to greatly simplify certain parts of the code:

Function: add/remove: 2/3 grow/shrink: 0/5 up/down: 543/-963 (-420)

&iavf_rx_buffer can now even retire in favor of just storing an array of
pages used for Rx. Their DMA addresses can be stored in page::dma_addr
-- use Page Pool's functions for that.
No surprise that perf loses up to 30% here, but that regression will go
away once PP lands.
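
A condensed sketch of the new allocation flow (error handling omitted,
see the diff for the real code):

        page = __dev_alloc_pages(gfp, order);
        dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order,
                                 DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
        page_pool_set_dma_addr(page, dma);      /* stash it in page::dma_addr */

        /* ... later, in the completion path ... */
        dma = page_pool_get_dma_addr(page);     /* no &iavf_rx_buffer needed */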

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 279 ++++++--------------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 17 +-
3 files changed, 87 insertions(+), 211 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index ade32aa1ed78..2d00be69fcde 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1236,7 +1236,7 @@ static void iavf_configure(struct iavf_adapter *adapter)
for (i = 0; i < adapter->num_active_queues; i++) {
struct iavf_ring *ring = &adapter->rx_rings[i];

- iavf_alloc_rx_buffers(ring);
+ iavf_alloc_rx_pages(ring);
}
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index fd08ce67380e..a761f3e3d7cc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -690,11 +690,10 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)
**/
void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
{
- unsigned long bi_size;
u16 i;

/* ring already cleared, nothing to do */
- if (!rx_ring->rx_bi)
+ if (!rx_ring->rx_pages)
return;

if (rx_ring->skb) {
@@ -704,38 +703,30 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)

/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
- struct iavf_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+ struct page *page = rx_ring->rx_pages[i];
+ dma_addr_t dma;

- if (!rx_bi->page)
+ if (!page)
continue;

+ dma = page_pool_get_dma_addr(page);
+
/* Invalidate cache lines that may have been written to by
* device so that we avoid corrupting memory.
*/
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_bi->dma,
- rx_bi->page_offset,
+ dma_sync_single_range_for_cpu(rx_ring->dev, dma, IAVF_SKB_PAD,
rx_ring->rx_buf_len,
DMA_FROM_DEVICE);

/* free resources associated with mapping */
- dma_unmap_page_attrs(rx_ring->dev, rx_bi->dma,
+ dma_unmap_page_attrs(rx_ring->dev, dma,
iavf_rx_pg_size(rx_ring),
DMA_FROM_DEVICE,
IAVF_RX_DMA_ATTR);

- __page_frag_cache_drain(rx_bi->page, rx_bi->pagecnt_bias);
-
- rx_bi->page = NULL;
- rx_bi->page_offset = 0;
+ __free_pages(page, iavf_rx_pg_order(rx_ring));
}

- bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count;
- memset(rx_ring->rx_bi, 0, bi_size);
-
- /* Zero out the descriptor ring */
- memset(rx_ring->desc, 0, rx_ring->size);
-
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
}
@@ -749,8 +740,8 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
void iavf_free_rx_resources(struct iavf_ring *rx_ring)
{
iavf_clean_rx_ring(rx_ring);
- kfree(rx_ring->rx_bi);
- rx_ring->rx_bi = NULL;
+ kfree(rx_ring->rx_pages);
+ rx_ring->rx_pages = NULL;

if (rx_ring->desc) {
dma_free_coherent(rx_ring->dev, rx_ring->size,
@@ -768,14 +759,13 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
{
struct device *dev = rx_ring->dev;
- int bi_size;

/* warn if we are about to overwrite the pointer */
- WARN_ON(rx_ring->rx_bi);
- bi_size = sizeof(struct iavf_rx_buffer) * rx_ring->count;
- rx_ring->rx_bi = kzalloc(bi_size, GFP_KERNEL);
- if (!rx_ring->rx_bi)
- goto err;
+ WARN_ON(rx_ring->rx_pages);
+ rx_ring->rx_pages = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_pages),
+ GFP_KERNEL);
+ if (!rx_ring->rx_pages)
+ return -ENOMEM;

u64_stats_init(&rx_ring->syncp);

@@ -796,8 +786,9 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)

return 0;
err:
- kfree(rx_ring->rx_bi);
- rx_ring->rx_bi = NULL;
+ kfree(rx_ring->rx_pages);
+ rx_ring->rx_pages = NULL;
+
return -ENOMEM;
}

@@ -820,36 +811,23 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
}

/**
- * iavf_alloc_mapped_page - recycle or make a new page
- * @rx_ring: ring to use
- * @bi: rx_buffer struct to modify
+ * iavf_alloc_mapped_page - allocate and map a new page
* @dev: device used for DMA mapping
* @order: page order to allocate
* @gfp: GFP mask to allocate page
*
- * Returns true if the page was successfully allocated or
- * reused.
+ * Returns a new &page if it was successfully allocated, %NULL otherwise.
**/
-static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *bi,
- struct device *dev, u32 order,
- gfp_t gfp)
+static struct page *iavf_alloc_mapped_page(struct device *dev, u32 order,
+ gfp_t gfp)
{
- struct page *page = bi->page;
+ struct page *page;
dma_addr_t dma;

- /* since we are recycling buffers we should seldom need to alloc */
- if (likely(page)) {
- rx_ring->rx_stats.page_reuse_count++;
- return true;
- }
-
/* alloc new page for storage */
page = __dev_alloc_pages(gfp, order);
- if (unlikely(!page)) {
- rx_ring->rx_stats.alloc_page_failed++;
- return false;
- }
+ if (unlikely(!page))
+ return NULL;

/* map page for use */
dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order,
@@ -860,18 +838,12 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
*/
if (dma_mapping_error(dev, dma)) {
__free_pages(page, order);
- rx_ring->rx_stats.alloc_page_failed++;
- return false;
+ return NULL;
}

- bi->dma = dma;
- bi->page = page;
- bi->page_offset = IAVF_SKB_PAD;
-
- /* initialize pagecnt_bias to 1 representing we fully own page */
- bi->pagecnt_bias = 1;
+ page_pool_set_dma_addr(page, dma);

- return true;
+ return page;
}

/**
@@ -896,7 +868,7 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
}

/**
- * __iavf_alloc_rx_buffers - Replace used receive buffers
+ * __iavf_alloc_rx_pages - Replace used receive pages
* @rx_ring: ring to place buffers on
* @to_refill: number of buffers to replace
* @gfp: GFP mask to allocate pages
@@ -904,42 +876,47 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
* Returns 0 if all allocations were successful or the number of buffers left
* to refill in case of an allocation failure.
**/
-static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill,
- gfp_t gfp)
+static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
+ gfp_t gfp)
{
u32 order = iavf_rx_pg_order(rx_ring);
struct device *dev = rx_ring->dev;
u32 ntu = rx_ring->next_to_use;
union iavf_rx_desc *rx_desc;
- struct iavf_rx_buffer *bi;

/* do nothing if no valid netdev defined */
if (unlikely(!rx_ring->netdev || !to_refill))
return 0;

rx_desc = IAVF_RX_DESC(rx_ring, ntu);
- bi = &rx_ring->rx_bi[ntu];

do {
- if (!iavf_alloc_mapped_page(rx_ring, bi, dev, order, gfp))
+ struct page *page;
+ dma_addr_t dma;
+
+ page = iavf_alloc_mapped_page(dev, order, gfp);
+ if (!page) {
+ rx_ring->rx_stats.alloc_page_failed++;
break;
+ }
+
+ rx_ring->rx_pages[ntu] = page;
+ dma = page_pool_get_dma_addr(page);

/* sync the buffer for use by the device */
- dma_sync_single_range_for_device(dev, bi->dma, bi->page_offset,
+ dma_sync_single_range_for_device(dev, dma, IAVF_SKB_PAD,
rx_ring->rx_buf_len,
DMA_FROM_DEVICE);

/* Refresh the desc even if buffer_addrs didn't change
* because each write-back erases this info.
*/
- rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
+ rx_desc->read.pkt_addr = cpu_to_le64(dma + IAVF_SKB_PAD);

rx_desc++;
- bi++;
ntu++;
if (unlikely(ntu == rx_ring->count)) {
rx_desc = IAVF_RX_DESC(rx_ring, 0);
- bi = rx_ring->rx_bi;
ntu = 0;
}

@@ -953,9 +930,9 @@ static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill,
return to_refill;
}

-void iavf_alloc_rx_buffers(struct iavf_ring *rxr)
+void iavf_alloc_rx_pages(struct iavf_ring *rxr)
{
- __iavf_alloc_rx_buffers(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL);
+ __iavf_alloc_rx_pages(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL);
}

/**
@@ -1100,80 +1077,20 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb)
return false;
}

-/**
- * iavf_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
- * @rx_buffer: buffer containing the page
- *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page. We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack. We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet. If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size). This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer. Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
-static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer)
-{
- unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;
- struct page *page = rx_buffer->page;
-
- /* Is any reuse possible? */
- if (!dev_page_is_reusable(page))
- return false;
-
-#if (PAGE_SIZE < 8192)
- /* if we are only owner of page we can reuse it */
- if (unlikely((page_count(page) - pagecnt_bias) > 1))
- return false;
-#else
-#define IAVF_LAST_OFFSET \
- (SKB_WITH_OVERHEAD(PAGE_SIZE) - IAVF_RXBUFFER_2048)
- if (rx_buffer->page_offset > IAVF_LAST_OFFSET)
- return false;
-#endif
-
- /* If we have drained the page fragment pool we need to update
- * the pagecnt_bias and page count so that we fully restock the
- * number of references the driver holds.
- */
- if (unlikely(!pagecnt_bias)) {
- page_ref_add(page, USHRT_MAX);
- rx_buffer->pagecnt_bias = USHRT_MAX;
- }
-
- return true;
-}
-
/**
* iavf_add_rx_frag - Add contents of Rx buffer to sk_buff
* @skb: sk_buff to place the data into
- * @rx_buffer: buffer containing page to add
+ * @page: page containing data to add
* @size: packet length from rx_desc
* @pg_size: Rx buffer page size
*
- * This function will add the data contained in rx_buffer->page to the skb.
+ * This function will add the data contained in page to the skb.
* It will just attach the page as a frag to the skb.
*
* The function will then update the page offset.
**/
-static void iavf_add_rx_frag(struct sk_buff *skb,
- struct iavf_rx_buffer *rx_buffer,
- u32 size, u32 pg_size)
+static void iavf_add_rx_frag(struct sk_buff *skb, struct page *page, u32 size,
+ u32 pg_size)
{
#if (PAGE_SIZE < 8192)
unsigned int truesize = pg_size / 2;
@@ -1181,46 +1098,34 @@ static void iavf_add_rx_frag(struct sk_buff *skb,
unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
#endif

- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
- rx_buffer->page_offset, size, truesize);
-
- /* page is being used so we must update the page offset */
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
-
- /* We have pulled a buffer for use, so decrement pagecnt_bias */
- rx_buffer->pagecnt_bias--;
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, IAVF_SKB_PAD,
+ size, truesize);
}

/**
- * iavf_sync_rx_buffer - Synchronize received data for use
+ * iavf_sync_rx_page - Synchronize received data for use
* @dev: device used for DMA mapping
- * @buf: Rx buffer containing the data
+ * @page: Rx page containing the data
* @size: size of the received data
*
* This function will synchronize the Rx buffer for use by the CPU.
*/
-static void iavf_sync_rx_buffer(struct device *dev, struct iavf_rx_buffer *buf,
- u32 size)
+static void iavf_sync_rx_page(struct device *dev, struct page *page, u32 size)
{
- dma_sync_single_range_for_cpu(dev, buf->dma, buf->page_offset, size,
- DMA_FROM_DEVICE);
+ dma_sync_single_range_for_cpu(dev, page_pool_get_dma_addr(page),
+ IAVF_SKB_PAD, size, DMA_FROM_DEVICE);
}

/**
* iavf_build_skb - Build skb around an existing buffer
- * @rx_buffer: Rx buffer with the data
+ * @page: Rx page with the data
* @size: size of the data
* @pg_size: size of the Rx page
*
* This function builds an skb around an existing Rx buffer, taking care
* to set up the skb correctly and avoid any memcpy overhead.
*/
-static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer,
- u32 size, u32 pg_size)
+static struct sk_buff *iavf_build_skb(struct page *page, u32 size, u32 pg_size)
{
void *va;
#if (PAGE_SIZE < 8192)
@@ -1232,11 +1137,11 @@ static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer,
struct sk_buff *skb;

/* prefetch first cache line of first page */
- va = page_address(rx_buffer->page) + rx_buffer->page_offset;
- net_prefetch(va);
+ va = page_address(page);
+ net_prefetch(va + IAVF_SKB_PAD);

/* build an skb around the page buffer */
- skb = napi_build_skb(va - IAVF_SKB_PAD, truesize);
+ skb = napi_build_skb(va, truesize);
if (unlikely(!skb))
return NULL;

@@ -1244,42 +1149,21 @@ static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer,
skb_reserve(skb, IAVF_SKB_PAD);
__skb_put(skb, size);

- /* buffer is used by skb, update page_offset */
-#if (PAGE_SIZE < 8192)
- rx_buffer->page_offset ^= truesize;
-#else
- rx_buffer->page_offset += truesize;
-#endif
-
- rx_buffer->pagecnt_bias--;
-
return skb;
}

/**
- * iavf_put_rx_buffer - Recycle or free used buffer
- * @rx_ring: rx descriptor ring to transact packets on
+ * iavf_unmap_rx_page - Unmap used page
* @dev: device used for DMA mapping
- * @rx_buffer: Rx buffer to handle
+ * @page: page to unmap
* @pg_size: Rx page size
- *
- * Either recycle the buffer if possible or unmap and free the page.
*/
-static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, struct device *dev,
- struct iavf_rx_buffer *rx_buffer, u32 pg_size)
+static void iavf_unmap_rx_page(struct device *dev, struct page *page,
+ u32 pg_size)
{
- if (iavf_can_reuse_rx_page(rx_buffer)) {
- rx_ring->rx_stats.page_reuse_count++;
- return;
- }
-
- /* we are not reusing the buffer so unmap it */
- dma_unmap_page_attrs(dev, rx_buffer->dma, pg_size,
+ dma_unmap_page_attrs(dev, page_pool_get_dma_addr(page), pg_size,
DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
- __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias);
-
- /* clear contents of buffer_info */
- rx_buffer->page = NULL;
+ page_pool_set_dma_addr(page, 0);
}

/**
@@ -1332,8 +1216,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
u32 cleaned_count = 0;

while (likely(cleaned_count < budget)) {
- struct iavf_rx_buffer *rx_buffer;
union iavf_rx_desc *rx_desc;
+ struct page *page;
unsigned int size;
u16 vlan_tag = 0;
u8 rx_ptype;
@@ -1341,8 +1225,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)

/* return some buffers to hardware, one at a time is too slow */
if (to_refill >= IAVF_RX_BUFFER_WRITE)
- to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill,
- gfp);
+ to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill,
+ gfp);

rx_desc = IAVF_RX_DESC(rx_ring, ntc);

@@ -1366,32 +1250,37 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;

iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
- rx_buffer = &rx_ring->rx_bi[ntc];
+
+ page = rx_ring->rx_pages[ntc];
+ rx_ring->rx_pages[ntc] = NULL;

/* Very rare, but possible case. The most common reason:
* the last fragment contained FCS only, which was then
* stripped by the HW.
*/
- if (unlikely(!size))
+ if (unlikely(!size)) {
+ iavf_unmap_rx_page(dev, page, pg_size);
+ __free_pages(page, get_order(pg_size));
goto skip_data;
+ }

- iavf_sync_rx_buffer(dev, rx_buffer, size);
+ iavf_sync_rx_page(dev, page, size);
+ iavf_unmap_rx_page(dev, page, pg_size);

/* retrieve a buffer from the ring */
if (skb)
- iavf_add_rx_frag(skb, rx_buffer, size, pg_size);
+ iavf_add_rx_frag(skb, page, size, pg_size);
else
- skb = iavf_build_skb(rx_buffer, size, pg_size);
+ skb = iavf_build_skb(page, size, pg_size);

/* exit if we failed to retrieve a buffer */
if (!skb) {
+ __free_pages(page, get_order(pg_size));
rx_ring->rx_stats.alloc_buff_failed++;
break;
}

skip_data:
- iavf_put_rx_buffer(rx_ring, dev, rx_buffer, pg_size);
-
cleaned_count++;
to_refill++;
if (unlikely(++ntc == ring_size))
@@ -1448,7 +1337,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
rx_ring->skb = skb;

if (to_refill >= IAVF_RX_BUFFER_WRITE) {
- to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, gfp);
+ to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill, gfp);
/* guarantee a trip back through this routine if there was
* a failure
*/
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 9c6661a6edf2..c09ac580fe84 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -272,17 +272,6 @@ struct iavf_tx_buffer {
u32 tx_flags;
};

-struct iavf_rx_buffer {
- dma_addr_t dma;
- struct page *page;
-#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
- __u32 page_offset;
-#else
- __u16 page_offset;
-#endif
- __u16 pagecnt_bias;
-};
-
struct iavf_queue_stats {
u64 packets;
u64 bytes;
@@ -302,8 +291,6 @@ struct iavf_rx_queue_stats {
u64 non_eop_descs;
u64 alloc_page_failed;
u64 alloc_buff_failed;
- u64 page_reuse_count;
- u64 realloc_count;
};

enum iavf_ring_state_t {
@@ -331,7 +318,7 @@ struct iavf_ring {
struct net_device *netdev; /* netdev ring maps to */
union {
struct iavf_tx_buffer *tx_bi;
- struct iavf_rx_buffer *rx_bi;
+ struct page **rx_pages;
};
DECLARE_BITMAP(state, __IAVF_RING_STATE_NBITS);
u16 queue_index; /* Queue number of ring */
@@ -425,7 +412,7 @@ static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)

#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring))

-void iavf_alloc_rx_buffers(struct iavf_ring *rxr);
+void iavf_alloc_rx_pages(struct iavf_ring *rxr);
netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void iavf_clean_tx_ring(struct iavf_ring *tx_ring);
void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
--
2.40.1


2023-05-16 16:31:23

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH net-next 01/11] net: intel: introduce Intel Ethernet common library

Not a secret there's a ton of code duplication between two and more Intel
ethernet modules.
Before introducing new changes, which would need to be copied over again,
start decoupling the already existing duplicate functionality into a new
module, which will be shared between several Intel Ethernet drivers.
Add the lookup table which converts an 8/10-bit hardware packet type
into a parsed bitfield structure for easy checking of packet format
parameters, such as payload level, IP version, etc. This is currently
used by i40e, ice and iavf, and it's all the same in all three drivers.
The only difference introduced in this implementation is that instead
of defining a 256-element (or 1024 in case of ice) array, an unlikely()
condition is added to limit the input to 154 (the current maximum
non-reserved packet type). There's no reason to waste 600 (or even
3600) bytes only to not hurt very unlikely exception packets.
The hash computation function now takes the payload level directly as a
pkt_hash_type. There are a couple of cases where non-IP ptypes are
marked as L3 payload, and in the previous versions their hash level
would be 2, not 3. But skb_set_hash() only sees the difference between
L4 and non-L4, thus this won't change anything at all.
The module is behind a hidden Kconfig symbol, which the drivers select
when needed. The exports are behind the 'LIBIE' namespace to limit the
scope of the functions.
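
The lookup idea in a simplified form (treat the names below as
approximations, the real ones live in the new rx.h):

        /* reserved/unknown ptypes hit the all-zero "unknown" entry */
        if (unlikely(ptype >= LIBIE_RX_PTYPE_NUM))
                ptype = 0;

        parsed = libie_rx_ptype_lkup[ptype];

        /* the payload level maps 1:1 to an skb hash type (L2/L3/L4) */
        libie_skb_set_hash(skb, hash, parsed);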

Signed-off-by: Alexander Lobakin <[email protected]>
---
MAINTAINERS | 3 +-
drivers/net/ethernet/intel/Kconfig | 11 +-
drivers/net/ethernet/intel/Makefile | 1 +
drivers/net/ethernet/intel/i40e/i40e_common.c | 253 --------------
drivers/net/ethernet/intel/i40e/i40e_main.c | 1 +
.../net/ethernet/intel/i40e/i40e_prototype.h | 7 -
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 74 +---
drivers/net/ethernet/intel/i40e/i40e_type.h | 88 -----
drivers/net/ethernet/intel/iavf/iavf_common.c | 253 --------------
drivers/net/ethernet/intel/iavf/iavf_main.c | 1 +
.../net/ethernet/intel/iavf/iavf_prototype.h | 7 -
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 70 +---
drivers/net/ethernet/intel/iavf/iavf_type.h | 88 -----
.../net/ethernet/intel/ice/ice_lan_tx_rx.h | 316 ------------------
drivers/net/ethernet/intel/ice/ice_main.c | 1 +
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 74 +---
drivers/net/ethernet/intel/libie/Makefile | 6 +
drivers/net/ethernet/intel/libie/rx.c | 110 ++++++
include/linux/net/intel/libie/rx.h | 128 +++++++
19 files changed, 312 insertions(+), 1180 deletions(-)
create mode 100644 drivers/net/ethernet/intel/libie/Makefile
create mode 100644 drivers/net/ethernet/intel/libie/rx.c
create mode 100644 include/linux/net/intel/libie/rx.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e2fd64c2ebdc..96c9c9d37ce9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10339,7 +10339,8 @@ F: Documentation/networking/device_drivers/ethernet/intel/
F: drivers/net/ethernet/intel/
F: drivers/net/ethernet/intel/*/
F: include/linux/avf/virtchnl.h
-F: include/linux/net/intel/iidc.h
+F: include/linux/net/intel/
+F: include/linux/net/intel/*/

INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
M: Mustafa Ismail <[email protected]>
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 9bc0a9519899..cec4a938fbd0 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -84,6 +84,12 @@ config E1000E_HWTS
devices. The cross-timestamp is available through the PTP clock
driver precise cross-timestamp ioctl (PTP_SYS_OFFSET_PRECISE).

+config LIBIE
+ tristate
+ help
+ libie (Intel Ethernet library) is a common library containing
+ routines shared by several Intel Ethernet drivers.
+
config IGB
tristate "Intel(R) 82575/82576 PCI-Express Gigabit Ethernet support"
depends on PCI
@@ -225,6 +231,7 @@ config I40E
depends on PTP_1588_CLOCK_OPTIONAL
depends on PCI
select AUXILIARY_BUS
+ select LIBIE
help
This driver supports Intel(R) Ethernet Controller XL710 Family of
devices. For more information on how to identify your adapter, go
@@ -254,8 +261,9 @@ config IAVF
tristate
config I40EVF
tristate "Intel(R) Ethernet Adaptive Virtual Function support"
- select IAVF
depends on PCI_MSI
+ select IAVF
+ select LIBIE
help
This driver supports virtual functions for Intel XL710,
X710, X722, XXV710, and all devices advertising support for
@@ -282,6 +290,7 @@ config ICE
depends on GNSS || GNSS = n
select AUXILIARY_BUS
select DIMLIB
+ select LIBIE
select NET_DEVLINK
select PLDMFW
help
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index d80d04132073..ce622b4d825d 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
+obj-$(CONFIG_LIBIE) += libie/
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index ed88e38d488b..25bb858268fc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -383,259 +383,6 @@ int i40e_aq_set_rss_key(struct i40e_hw *hw,
return i40e_aq_get_set_rss_key(hw, vsi_id, key, true);
}

-/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the
- * hardware to a bit-field that can be used by SW to more easily determine the
- * packet type.
- *
- * Macros are used to shorten the table lines and make this table human
- * readable.
- *
- * We store the PTYPE in the top byte of the bit field - this is just so that
- * we can check that the table doesn't have a row missing, as the index into
- * the table should be the PTYPE.
- *
- * Typical work flow:
- *
- * IF NOT i40e_ptype_lookup[ptype].known
- * THEN
- * Packet is unknown
- * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP
- * Use the rest of the fields to look at the tunnels, inner protocols, etc
- * ELSE
- * Use the enum i40e_rx_l2_ptype to decode the packet type
- * ENDIF
- */
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
- [PTYPE] = { \
- 1, \
- I40E_RX_PTYPE_OUTER_##OUTER_IP, \
- I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \
- I40E_RX_PTYPE_##OUTER_FRAG, \
- I40E_RX_PTYPE_TUNNEL_##T, \
- I40E_RX_PTYPE_TUNNEL_END_##TE, \
- I40E_RX_PTYPE_##TEF, \
- I40E_RX_PTYPE_INNER_PROT_##I, \
- I40E_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define I40E_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define I40E_RX_PTYPE_NOF I40E_RX_PTYPE_NOT_FRAG
-#define I40E_RX_PTYPE_FRG I40E_RX_PTYPE_FRAG
-#define I40E_RX_PTYPE_INNER_PROT_TS I40E_RX_PTYPE_INNER_PROT_TIMESYNC
-
-/* Lookup table mapping in the 8-bit HW PTYPE to the bit field for decoding */
-struct i40e_rx_ptype_decoded i40e_ptype_lookup[BIT(8)] = {
- /* L2 Packet types */
- I40E_PTT_UNUSED_ENTRY(0),
- I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2),
- I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- I40E_PTT_UNUSED_ENTRY(4),
- I40E_PTT_UNUSED_ENTRY(5),
- I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- I40E_PTT_UNUSED_ENTRY(8),
- I40E_PTT_UNUSED_ENTRY(9),
- I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- I40E_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-
- /* Non Tunneled IPv4 */
- I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(25),
- I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
- I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
- I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv4 */
- I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(32),
- I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv6 */
- I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- I40E_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(39),
- I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT */
- I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> IPv4 */
- I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(47),
- I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> IPv6 */
- I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(54),
- I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC */
- I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> MAC --> IPv4 */
- I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(62),
- I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT -> MAC --> IPv6 */
- I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- I40E_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(69),
- I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC/VLAN */
- I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
- I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(77),
- I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
- I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(84),
- I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* Non Tunneled IPv6 */
- I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(91),
- I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
- I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
- I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv4 */
- I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(98),
- I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv6 */
- I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(105),
- I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT */
- I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> IPv4 */
- I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(113),
- I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> IPv6 */
- I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(120),
- I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC */
- I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv4 */
- I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(128),
- I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv6 */
- I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(135),
- I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN */
- I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
- I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(143),
- I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
- I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- I40E_PTT_UNUSED_ENTRY(150),
- I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* unused entries */
- [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-};
-
/**
* i40e_init_shared_code - Initialize the shared code
* @hw: pointer to hardware structure
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index b847bd105b16..f3bca6e9bd18 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -97,6 +97,7 @@ MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all), Debug mask (0x8XXXXXXX

MODULE_AUTHOR("Intel Corporation, <[email protected]>");
MODULE_DESCRIPTION("Intel(R) Ethernet Connection XL710 Network Driver");
+MODULE_IMPORT_NS(LIBIE);
MODULE_LICENSE("GPL v2");

static struct workqueue_struct *i40e_wq;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index fe845987d99a..5287d0ef32d5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -380,13 +380,6 @@ void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);

int i40e_set_mac_type(struct i40e_hw *hw);

-extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
-
-static inline struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
-{
- return i40e_ptype_lookup[ptype];
-}
-
/**
* i40e_virtchnl_link_speed - Convert AdminQ link_speed to virtchnl definition
* @link_speed: the speed to convert
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8b8bf4880faa..c4b6cdbf3611 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */

-#include <linux/prefetch.h>
#include <linux/bpf_trace.h>
+#include <linux/net/intel/libie/rx.h>
+#include <linux/prefetch.h>
#include <net/mpls.h>
#include <net/xdp.h>
#include "i40e.h"
@@ -1758,40 +1759,32 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
struct sk_buff *skb,
union i40e_rx_desc *rx_desc)
{
- struct i40e_rx_ptype_decoded decoded;
+ struct libie_rx_ptype_parsed parsed;
u32 rx_error, rx_status;
bool ipv4, ipv6;
u8 ptype;
u64 qword;

+ skb->ip_summed = CHECKSUM_NONE;
+
qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT;
+
+ parsed = libie_parse_rx_ptype(ptype);
+ if (!libie_has_rx_checksum(vsi->netdev, parsed))
+ return;
+
rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
I40E_RXD_QW1_ERROR_SHIFT;
rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
I40E_RXD_QW1_STATUS_SHIFT;
- decoded = decode_rx_desc_ptype(ptype);
-
- skb->ip_summed = CHECKSUM_NONE;
-
- skb_checksum_none_assert(skb);
-
- /* Rx csum enabled and ip headers found? */
- if (!(vsi->netdev->features & NETIF_F_RXCSUM))
- return;

/* did the hardware decode the packet and checksum? */
if (!(rx_status & BIT(I40E_RX_DESC_STATUS_L3L4P_SHIFT)))
return;

- /* both known and outer_ip must be set for the below code to work */
- if (!(decoded.known && decoded.outer_ip))
- return;
-
- ipv4 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV4);
- ipv6 = (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == I40E_RX_PTYPE_OUTER_IPV6);
+ ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4;
+ ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6;

if (ipv4 &&
(rx_error & (BIT(I40E_RX_DESC_ERROR_IPE_SHIFT) |
@@ -1819,49 +1812,16 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
* we need to bump the checksum level by 1 to reflect the fact that
* we are indicating we validated the inner checksum.
*/
- if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT)
+ if (parsed.tunnel_type >= LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT)
skb->csum_level = 1;

- /* Only report checksum unnecessary for TCP, UDP, or SCTP */
- switch (decoded.inner_prot) {
- case I40E_RX_PTYPE_INNER_PROT_TCP:
- case I40E_RX_PTYPE_INNER_PROT_UDP:
- case I40E_RX_PTYPE_INNER_PROT_SCTP:
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- fallthrough;
- default:
- break;
- }
-
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
return;

checksum_fail:
vsi->back->hw_csum_rx_error++;
}

-/**
- * i40e_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns a hash type to be used by skb_set_hash
- **/
-static inline int i40e_ptype_to_htype(u8 ptype)
-{
- struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
-
- if (!decoded.known)
- return PKT_HASH_TYPE_NONE;
-
- if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP &&
- decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4)
- return PKT_HASH_TYPE_L4;
- else if (decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP &&
- decoded.payload_layer == I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3)
- return PKT_HASH_TYPE_L3;
- else
- return PKT_HASH_TYPE_L2;
-}
-
/**
* i40e_rx_hash - set the hash value in the skb
* @ring: descriptor ring
@@ -1874,17 +1834,19 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
struct sk_buff *skb,
u8 rx_ptype)
{
+ struct libie_rx_ptype_parsed parsed;
u32 hash;
const __le64 rss_mask =
cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);

- if (!(ring->netdev->features & NETIF_F_RXHASH))
+ parsed = libie_parse_rx_ptype(rx_ptype);
+ if (!libie_has_rx_hash(ring->netdev, parsed))
return;

if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss);
- skb_set_hash(skb, hash, i40e_ptype_to_htype(rx_ptype));
+ libie_skb_set_hash(skb, hash, parsed);
}
}

diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 388c3d36d96a..05b8510f99a9 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -773,94 +773,6 @@ enum i40e_rx_desc_error_l3l4e_fcoe_masks {
#define I40E_RXD_QW1_PTYPE_SHIFT 30
#define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT)

-/* Packet type non-ip values */
-enum i40e_rx_l2_ptype {
- I40E_RX_PTYPE_L2_RESERVED = 0,
- I40E_RX_PTYPE_L2_MAC_PAY2 = 1,
- I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
- I40E_RX_PTYPE_L2_FIP_PAY2 = 3,
- I40E_RX_PTYPE_L2_OUI_PAY2 = 4,
- I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
- I40E_RX_PTYPE_L2_LLDP_PAY2 = 6,
- I40E_RX_PTYPE_L2_ECP_PAY2 = 7,
- I40E_RX_PTYPE_L2_EVB_PAY2 = 8,
- I40E_RX_PTYPE_L2_QCN_PAY2 = 9,
- I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10,
- I40E_RX_PTYPE_L2_ARP = 11,
- I40E_RX_PTYPE_L2_FCOE_PAY3 = 12,
- I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13,
- I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14,
- I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15,
- I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16,
- I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17,
- I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18,
- I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19,
- I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20,
- I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21,
- I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58,
- I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87,
- I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124,
- I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153
-};
-
-struct i40e_rx_ptype_decoded {
- u32 known:1;
- u32 outer_ip:1;
- u32 outer_ip_ver:1;
- u32 outer_frag:1;
- u32 tunnel_type:3;
- u32 tunnel_end_prot:2;
- u32 tunnel_end_frag:1;
- u32 inner_prot:4;
- u32 payload_layer:3;
-};
-
-enum i40e_rx_ptype_outer_ip {
- I40E_RX_PTYPE_OUTER_L2 = 0,
- I40E_RX_PTYPE_OUTER_IP = 1
-};
-
-enum i40e_rx_ptype_outer_ip_ver {
- I40E_RX_PTYPE_OUTER_NONE = 0,
- I40E_RX_PTYPE_OUTER_IPV4 = 0,
- I40E_RX_PTYPE_OUTER_IPV6 = 1
-};
-
-enum i40e_rx_ptype_outer_fragmented {
- I40E_RX_PTYPE_NOT_FRAG = 0,
- I40E_RX_PTYPE_FRAG = 1
-};
-
-enum i40e_rx_ptype_tunnel_type {
- I40E_RX_PTYPE_TUNNEL_NONE = 0,
- I40E_RX_PTYPE_TUNNEL_IP_IP = 1,
- I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
- I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
- I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
-};
-
-enum i40e_rx_ptype_tunnel_end_prot {
- I40E_RX_PTYPE_TUNNEL_END_NONE = 0,
- I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1,
- I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2,
-};
-
-enum i40e_rx_ptype_inner_prot {
- I40E_RX_PTYPE_INNER_PROT_NONE = 0,
- I40E_RX_PTYPE_INNER_PROT_UDP = 1,
- I40E_RX_PTYPE_INNER_PROT_TCP = 2,
- I40E_RX_PTYPE_INNER_PROT_SCTP = 3,
- I40E_RX_PTYPE_INNER_PROT_ICMP = 4,
- I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5
-};
-
-enum i40e_rx_ptype_payload_layer {
- I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
- I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
- I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
- I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
-};
-
#define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38
#define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \
I40E_RXD_QW1_LENGTH_PBUF_SHIFT)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
index dd11dbbd5551..ba6c9f154d18 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
@@ -499,259 +499,6 @@ enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id,
return iavf_aq_get_set_rss_key(hw, vsi_id, key, true);
}

-/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the
- * hardware to a bit-field that can be used by SW to more easily determine the
- * packet type.
- *
- * Macros are used to shorten the table lines and make this table human
- * readable.
- *
- * We store the PTYPE in the top byte of the bit field - this is just so that
- * we can check that the table doesn't have a row missing, as the index into
- * the table should be the PTYPE.
- *
- * Typical work flow:
- *
- * IF NOT iavf_ptype_lookup[ptype].known
- * THEN
- * Packet is unknown
- * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP
- * Use the rest of the fields to look at the tunnels, inner protocols, etc
- * ELSE
- * Use the enum iavf_rx_l2_ptype to decode the packet type
- * ENDIF
- */
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
- [PTYPE] = { \
- 1, \
- IAVF_RX_PTYPE_OUTER_##OUTER_IP, \
- IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \
- IAVF_RX_PTYPE_##OUTER_FRAG, \
- IAVF_RX_PTYPE_TUNNEL_##T, \
- IAVF_RX_PTYPE_TUNNEL_END_##TE, \
- IAVF_RX_PTYPE_##TEF, \
- IAVF_RX_PTYPE_INNER_PROT_##I, \
- IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define IAVF_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define IAVF_RX_PTYPE_NOF IAVF_RX_PTYPE_NOT_FRAG
-#define IAVF_RX_PTYPE_FRG IAVF_RX_PTYPE_FRAG
-#define IAVF_RX_PTYPE_INNER_PROT_TS IAVF_RX_PTYPE_INNER_PROT_TIMESYNC
-
-/* Lookup table mapping the 8-bit HW PTYPE to the bit field for decoding */
-struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = {
- /* L2 Packet types */
- IAVF_PTT_UNUSED_ENTRY(0),
- IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2),
- IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- IAVF_PTT_UNUSED_ENTRY(4),
- IAVF_PTT_UNUSED_ENTRY(5),
- IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- IAVF_PTT_UNUSED_ENTRY(8),
- IAVF_PTT_UNUSED_ENTRY(9),
- IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3),
-
- /* Non Tunneled IPv4 */
- IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(25),
- IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
- IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
- IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv4 */
- IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(32),
- IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv6 */
- IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(39),
- IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT */
- IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> IPv4 */
- IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(47),
- IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> IPv6 */
- IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(54),
- IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC */
- IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> MAC --> IPv4 */
- IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(62),
- IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT -> MAC --> IPv6 */
- IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(69),
- IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC/VLAN */
- IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
- IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(77),
- IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
- IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(84),
- IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* Non Tunneled IPv6 */
- IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(91),
- IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
- IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
- IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv4 */
- IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(98),
- IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv6 */
- IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(105),
- IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT */
- IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> IPv4 */
- IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(113),
- IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> IPv6 */
- IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(120),
- IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC */
- IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv4 */
- IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(128),
- IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv6 */
- IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(135),
- IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN */
- IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
- IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- IAVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(143),
- IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
- IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- IAVF_PTT_UNUSED_ENTRY(150),
- IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* unused entries */
- [154 ... 255] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-};
-
/**
* iavf_aq_send_msg_to_pf
* @hw: pointer to the hardware structure
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 2de4baff4c20..c17e909d3ff0 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -46,6 +46,7 @@ MODULE_DEVICE_TABLE(pci, iavf_pci_tbl);
MODULE_ALIAS("i40evf");
MODULE_AUTHOR("Intel Corporation, <[email protected]>");
MODULE_DESCRIPTION("Intel(R) Ethernet Adaptive Virtual Function Network Driver");
+MODULE_IMPORT_NS(LIBIE);
MODULE_LICENSE("GPL v2");

static const struct net_device_ops iavf_netdev_ops;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_prototype.h b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
index edebfbbcffdc..c2e5dbc0a75a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_prototype.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
@@ -51,13 +51,6 @@ enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,

enum iavf_status iavf_set_mac_type(struct iavf_hw *hw);

-extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[];
-
-static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
-{
- return iavf_ptype_lookup[ptype];
-}
-
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index e989feda133c..a83b96e9b6fc 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */

+#include <linux/net/intel/libie/rx.h>
#include <linux/prefetch.h>

#include "iavf.h"
@@ -982,40 +983,32 @@ static inline void iavf_rx_checksum(struct iavf_vsi *vsi,
struct sk_buff *skb,
union iavf_rx_desc *rx_desc)
{
- struct iavf_rx_ptype_decoded decoded;
+ struct libie_rx_ptype_parsed parsed;
u32 rx_error, rx_status;
bool ipv4, ipv6;
u8 ptype;
u64 qword;

+ skb->ip_summed = CHECKSUM_NONE;
+
qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT;
+
+ parsed = libie_parse_rx_ptype(ptype);
+ if (!libie_has_rx_checksum(vsi->netdev, parsed))
+ return;
+
rx_error = (qword & IAVF_RXD_QW1_ERROR_MASK) >>
IAVF_RXD_QW1_ERROR_SHIFT;
rx_status = (qword & IAVF_RXD_QW1_STATUS_MASK) >>
IAVF_RXD_QW1_STATUS_SHIFT;
- decoded = decode_rx_desc_ptype(ptype);
-
- skb->ip_summed = CHECKSUM_NONE;
-
- skb_checksum_none_assert(skb);
-
- /* Rx csum enabled and ip headers found? */
- if (!(vsi->netdev->features & NETIF_F_RXCSUM))
- return;

/* did the hardware decode the packet and checksum? */
if (!(rx_status & BIT(IAVF_RX_DESC_STATUS_L3L4P_SHIFT)))
return;

- /* both known and outer_ip must be set for the below code to work */
- if (!(decoded.known && decoded.outer_ip))
- return;
-
- ipv4 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV4);
- ipv6 = (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == IAVF_RX_PTYPE_OUTER_IPV6);
+ ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4;
+ ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6;

if (ipv4 &&
(rx_error & (BIT(IAVF_RX_DESC_ERROR_IPE_SHIFT) |
@@ -1039,46 +1032,13 @@ static inline void iavf_rx_checksum(struct iavf_vsi *vsi,
if (rx_error & BIT(IAVF_RX_DESC_ERROR_PPRS_SHIFT))
return;

- /* Only report checksum unnecessary for TCP, UDP, or SCTP */
- switch (decoded.inner_prot) {
- case IAVF_RX_PTYPE_INNER_PROT_TCP:
- case IAVF_RX_PTYPE_INNER_PROT_UDP:
- case IAVF_RX_PTYPE_INNER_PROT_SCTP:
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- fallthrough;
- default:
- break;
- }
-
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
return;

checksum_fail:
vsi->back->hw_csum_rx_error++;
}

-/**
- * iavf_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns a hash type to be used by skb_set_hash
- **/
-static inline int iavf_ptype_to_htype(u8 ptype)
-{
- struct iavf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
-
- if (!decoded.known)
- return PKT_HASH_TYPE_NONE;
-
- if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP &&
- decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4)
- return PKT_HASH_TYPE_L4;
- else if (decoded.outer_ip == IAVF_RX_PTYPE_OUTER_IP &&
- decoded.payload_layer == IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3)
- return PKT_HASH_TYPE_L3;
- else
- return PKT_HASH_TYPE_L2;
-}
-
/**
* iavf_rx_hash - set the hash value in the skb
* @ring: descriptor ring
@@ -1091,17 +1051,19 @@ static inline void iavf_rx_hash(struct iavf_ring *ring,
struct sk_buff *skb,
u8 rx_ptype)
{
+ struct libie_rx_ptype_parsed parsed;
u32 hash;
const __le64 rss_mask =
cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH <<
IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT);

- if (!(ring->netdev->features & NETIF_F_RXHASH))
+ parsed = libie_parse_rx_ptype(rx_ptype);
+ if (!libie_has_rx_hash(ring->netdev, parsed))
return;

if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
hash = le32_to_cpu(rx_desc->wb.qword0.hi_dword.rss);
- skb_set_hash(skb, hash, iavf_ptype_to_htype(rx_ptype));
+ libie_skb_set_hash(skb, hash, parsed);
}
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index 9f1f523807c4..3030ba330326 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -339,94 +339,6 @@ enum iavf_rx_desc_error_l3l4e_fcoe_masks {
#define IAVF_RXD_QW1_PTYPE_SHIFT 30
#define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT)

-/* Packet type non-ip values */
-enum iavf_rx_l2_ptype {
- IAVF_RX_PTYPE_L2_RESERVED = 0,
- IAVF_RX_PTYPE_L2_MAC_PAY2 = 1,
- IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
- IAVF_RX_PTYPE_L2_FIP_PAY2 = 3,
- IAVF_RX_PTYPE_L2_OUI_PAY2 = 4,
- IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
- IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6,
- IAVF_RX_PTYPE_L2_ECP_PAY2 = 7,
- IAVF_RX_PTYPE_L2_EVB_PAY2 = 8,
- IAVF_RX_PTYPE_L2_QCN_PAY2 = 9,
- IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10,
- IAVF_RX_PTYPE_L2_ARP = 11,
- IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12,
- IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13,
- IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14,
- IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15,
- IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16,
- IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17,
- IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18,
- IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19,
- IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20,
- IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21,
- IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58,
- IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87,
- IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124,
- IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153
-};
-
-struct iavf_rx_ptype_decoded {
- u32 known:1;
- u32 outer_ip:1;
- u32 outer_ip_ver:1;
- u32 outer_frag:1;
- u32 tunnel_type:3;
- u32 tunnel_end_prot:2;
- u32 tunnel_end_frag:1;
- u32 inner_prot:4;
- u32 payload_layer:3;
-};
-
-enum iavf_rx_ptype_outer_ip {
- IAVF_RX_PTYPE_OUTER_L2 = 0,
- IAVF_RX_PTYPE_OUTER_IP = 1
-};
-
-enum iavf_rx_ptype_outer_ip_ver {
- IAVF_RX_PTYPE_OUTER_NONE = 0,
- IAVF_RX_PTYPE_OUTER_IPV4 = 0,
- IAVF_RX_PTYPE_OUTER_IPV6 = 1
-};
-
-enum iavf_rx_ptype_outer_fragmented {
- IAVF_RX_PTYPE_NOT_FRAG = 0,
- IAVF_RX_PTYPE_FRAG = 1
-};
-
-enum iavf_rx_ptype_tunnel_type {
- IAVF_RX_PTYPE_TUNNEL_NONE = 0,
- IAVF_RX_PTYPE_TUNNEL_IP_IP = 1,
- IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
- IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
- IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
-};
-
-enum iavf_rx_ptype_tunnel_end_prot {
- IAVF_RX_PTYPE_TUNNEL_END_NONE = 0,
- IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1,
- IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2,
-};
-
-enum iavf_rx_ptype_inner_prot {
- IAVF_RX_PTYPE_INNER_PROT_NONE = 0,
- IAVF_RX_PTYPE_INNER_PROT_UDP = 1,
- IAVF_RX_PTYPE_INNER_PROT_TCP = 2,
- IAVF_RX_PTYPE_INNER_PROT_SCTP = 3,
- IAVF_RX_PTYPE_INNER_PROT_ICMP = 4,
- IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5
-};
-
-enum iavf_rx_ptype_payload_layer {
- IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
- IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
- IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
- IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
-};
-
#define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38
#define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \
IAVF_RXD_QW1_LENGTH_PBUF_SHIFT)
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index 89f986a75cc8..611577ebc29d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -160,64 +160,6 @@ struct ice_fltr_desc {
(0x1ULL << ICE_FXD_FLTR_WB_QW1_FAIL_PROF_S)
#define ICE_FXD_FLTR_WB_QW1_FAIL_PROF_YES 0x1ULL

-struct ice_rx_ptype_decoded {
- u32 known:1;
- u32 outer_ip:1;
- u32 outer_ip_ver:2;
- u32 outer_frag:1;
- u32 tunnel_type:3;
- u32 tunnel_end_prot:2;
- u32 tunnel_end_frag:1;
- u32 inner_prot:4;
- u32 payload_layer:3;
-};
-
-enum ice_rx_ptype_outer_ip {
- ICE_RX_PTYPE_OUTER_L2 = 0,
- ICE_RX_PTYPE_OUTER_IP = 1,
-};
-
-enum ice_rx_ptype_outer_ip_ver {
- ICE_RX_PTYPE_OUTER_NONE = 0,
- ICE_RX_PTYPE_OUTER_IPV4 = 1,
- ICE_RX_PTYPE_OUTER_IPV6 = 2,
-};
-
-enum ice_rx_ptype_outer_fragmented {
- ICE_RX_PTYPE_NOT_FRAG = 0,
- ICE_RX_PTYPE_FRAG = 1,
-};
-
-enum ice_rx_ptype_tunnel_type {
- ICE_RX_PTYPE_TUNNEL_NONE = 0,
- ICE_RX_PTYPE_TUNNEL_IP_IP = 1,
- ICE_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
- ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
- ICE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
-};
-
-enum ice_rx_ptype_tunnel_end_prot {
- ICE_RX_PTYPE_TUNNEL_END_NONE = 0,
- ICE_RX_PTYPE_TUNNEL_END_IPV4 = 1,
- ICE_RX_PTYPE_TUNNEL_END_IPV6 = 2,
-};
-
-enum ice_rx_ptype_inner_prot {
- ICE_RX_PTYPE_INNER_PROT_NONE = 0,
- ICE_RX_PTYPE_INNER_PROT_UDP = 1,
- ICE_RX_PTYPE_INNER_PROT_TCP = 2,
- ICE_RX_PTYPE_INNER_PROT_SCTP = 3,
- ICE_RX_PTYPE_INNER_PROT_ICMP = 4,
- ICE_RX_PTYPE_INNER_PROT_TIMESYNC = 5,
-};
-
-enum ice_rx_ptype_payload_layer {
- ICE_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
- ICE_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
- ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
- ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
-};
-
/* Rx Flex Descriptor
* This descriptor is used instead of the legacy version descriptor when
* ice_rlan_ctx.adv_desc is set
@@ -651,262 +593,4 @@ struct ice_tlan_ctx {
u8 int_q_state; /* width not needed - internal - DO NOT WRITE!!! */
};

-/* The ice_ptype_lkup table is used to convert from the 10-bit ptype in the
- * hardware to a bit-field that can be used by SW to more easily determine the
- * packet type.
- *
- * Macros are used to shorten the table lines and make this table human
- * readable.
- *
- * We store the PTYPE in the top byte of the bit field - this is just so that
- * we can check that the table doesn't have a row missing, as the index into
- * the table should be the PTYPE.
- *
- * Typical work flow:
- *
- * IF NOT ice_ptype_lkup[ptype].known
- * THEN
- * Packet is unknown
- * ELSE IF ice_ptype_lkup[ptype].outer_ip == ICE_RX_PTYPE_OUTER_IP
- * Use the rest of the fields to look at the tunnels, inner protocols, etc
- * ELSE
- * Use the enum ice_rx_l2_ptype to decode the packet type
- * ENDIF
- */
-
-/* macro to make the table lines short, use explicit indexing with [PTYPE] */
-#define ICE_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\
- [PTYPE] = { \
- 1, \
- ICE_RX_PTYPE_OUTER_##OUTER_IP, \
- ICE_RX_PTYPE_OUTER_##OUTER_IP_VER, \
- ICE_RX_PTYPE_##OUTER_FRAG, \
- ICE_RX_PTYPE_TUNNEL_##T, \
- ICE_RX_PTYPE_TUNNEL_END_##TE, \
- ICE_RX_PTYPE_##TEF, \
- ICE_RX_PTYPE_INNER_PROT_##I, \
- ICE_RX_PTYPE_PAYLOAD_LAYER_##PL }
-
-#define ICE_PTT_UNUSED_ENTRY(PTYPE) [PTYPE] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-
-/* shorter macros makes the table fit but are terse */
-#define ICE_RX_PTYPE_NOF ICE_RX_PTYPE_NOT_FRAG
-#define ICE_RX_PTYPE_FRG ICE_RX_PTYPE_FRAG
-
-/* Lookup table mapping in the 10-bit HW PTYPE to the bit field for decoding */
-static const struct ice_rx_ptype_decoded ice_ptype_lkup[BIT(10)] = {
- /* L2 Packet types */
- ICE_PTT_UNUSED_ENTRY(0),
- ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
- ICE_PTT_UNUSED_ENTRY(2),
- ICE_PTT_UNUSED_ENTRY(3),
- ICE_PTT_UNUSED_ENTRY(4),
- ICE_PTT_UNUSED_ENTRY(5),
- ICE_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- ICE_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- ICE_PTT_UNUSED_ENTRY(8),
- ICE_PTT_UNUSED_ENTRY(9),
- ICE_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- ICE_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
- ICE_PTT_UNUSED_ENTRY(12),
- ICE_PTT_UNUSED_ENTRY(13),
- ICE_PTT_UNUSED_ENTRY(14),
- ICE_PTT_UNUSED_ENTRY(15),
- ICE_PTT_UNUSED_ENTRY(16),
- ICE_PTT_UNUSED_ENTRY(17),
- ICE_PTT_UNUSED_ENTRY(18),
- ICE_PTT_UNUSED_ENTRY(19),
- ICE_PTT_UNUSED_ENTRY(20),
- ICE_PTT_UNUSED_ENTRY(21),
-
- /* Non Tunneled IPv4 */
- ICE_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3),
- ICE_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3),
- ICE_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(25),
- ICE_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4),
- ICE_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4),
- ICE_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv4 */
- ICE_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- ICE_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- ICE_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(32),
- ICE_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- ICE_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> IPv6 */
- ICE_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- ICE_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- ICE_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(39),
- ICE_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- ICE_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT */
- ICE_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> IPv4 */
- ICE_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- ICE_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- ICE_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(47),
- ICE_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- ICE_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> IPv6 */
- ICE_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- ICE_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- ICE_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(54),
- ICE_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- ICE_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC */
- ICE_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv4 --> GRE/NAT --> MAC --> IPv4 */
- ICE_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- ICE_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- ICE_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(62),
- ICE_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- ICE_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT -> MAC --> IPv6 */
- ICE_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- ICE_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- ICE_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(69),
- ICE_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- ICE_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv4 --> GRE/NAT --> MAC/VLAN */
- ICE_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */
- ICE_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- ICE_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- ICE_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(77),
- ICE_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- ICE_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */
- ICE_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- ICE_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- ICE_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(84),
- ICE_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- ICE_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* Non Tunneled IPv6 */
- ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
- ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
- ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(91),
- ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
- ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
- ICE_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv4 */
- ICE_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
- ICE_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
- ICE_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(98),
- ICE_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
- ICE_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> IPv6 */
- ICE_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
- ICE_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
- ICE_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(105),
- ICE_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
- ICE_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT */
- ICE_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> IPv4 */
- ICE_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
- ICE_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
- ICE_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(113),
- ICE_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
- ICE_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> IPv6 */
- ICE_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
- ICE_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
- ICE_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(120),
- ICE_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
- ICE_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC */
- ICE_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv4 */
- ICE_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
- ICE_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
- ICE_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(128),
- ICE_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
- ICE_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC -> IPv6 */
- ICE_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
- ICE_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
- ICE_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(135),
- ICE_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
- ICE_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN */
- ICE_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
- ICE_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
- ICE_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
- ICE_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(143),
- ICE_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
- ICE_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
- ICE_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
-
- /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
- ICE_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
- ICE_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
- ICE_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
- ICE_PTT_UNUSED_ENTRY(150),
- ICE_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
- ICE_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
- ICE_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
-
- /* unused entries */
- [154 ... 1023] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 }
-};
-
-static inline struct ice_rx_ptype_decoded ice_decode_rx_desc_ptype(u16 ptype)
-{
- return ice_ptype_lkup[ptype];
-}
-
-
#endif /* _ICE_LAN_TX_RX_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index a1f7c8edc22f..f3d9c5ddef33 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -34,6 +34,7 @@ static const char ice_copyright[] = "Copyright (c) 2018, Intel Corporation.";

MODULE_AUTHOR("Intel Corporation, <[email protected]>");
MODULE_DESCRIPTION(DRV_SUMMARY);
+MODULE_IMPORT_NS(LIBIE);
MODULE_LICENSE("GPL v2");
MODULE_FIRMWARE(ICE_DDP_PKG_FILE);

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index c8322fb6f2b3..7543aba4ff9f 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2019, Intel Corporation. */

#include <linux/filter.h>
+#include <linux/net/intel/libie/rx.h>

#include "ice_txrx_lib.h"
#include "ice_eswitch.h"
@@ -38,30 +39,6 @@ void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val)
}
}

-/**
- * ice_ptype_to_htype - get a hash type
- * @ptype: the ptype value from the descriptor
- *
- * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by
- * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of
- * Rx desc.
- */
-static enum pkt_hash_types ice_ptype_to_htype(u16 ptype)
-{
- struct ice_rx_ptype_decoded decoded = ice_decode_rx_desc_ptype(ptype);
-
- if (!decoded.known)
- return PKT_HASH_TYPE_NONE;
- if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY4)
- return PKT_HASH_TYPE_L4;
- if (decoded.payload_layer == ICE_RX_PTYPE_PAYLOAD_LAYER_PAY3)
- return PKT_HASH_TYPE_L3;
- if (decoded.outer_ip == ICE_RX_PTYPE_OUTER_L2)
- return PKT_HASH_TYPE_L2;
-
- return PKT_HASH_TYPE_NONE;
-}
-
/**
* ice_rx_hash - set the hash value in the skb
* @rx_ring: descriptor ring
@@ -74,9 +51,11 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
struct sk_buff *skb, u16 rx_ptype)
{
struct ice_32b_rx_flex_desc_nic *nic_mdid;
+ struct libie_rx_ptype_parsed parsed;
u32 hash;

- if (!(rx_ring->netdev->features & NETIF_F_RXHASH))
+ parsed = libie_parse_rx_ptype(rx_ptype);
+ if (!libie_has_rx_hash(rx_ring->netdev, parsed))
return;

if (rx_desc->wb.rxdid != ICE_RXDID_FLEX_NIC)
@@ -84,7 +63,7 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,

nic_mdid = (struct ice_32b_rx_flex_desc_nic *)rx_desc;
hash = le32_to_cpu(nic_mdid->rss_hash);
- skb_set_hash(skb, hash, ice_ptype_to_htype(rx_ptype));
+ libie_skb_set_hash(skb, hash, parsed);
}

/**
@@ -92,7 +71,7 @@ ice_rx_hash(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
* @ring: the ring we care about
* @skb: skb currently being received and modified
* @rx_desc: the receive descriptor
- * @ptype: the packet type decoded by hardware
+ * @ptype: the packet type parsed by hardware
*
* skb->protocol must be set before this function is called
*/
@@ -100,34 +79,26 @@ static void
ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
union ice_32b_rx_flex_desc *rx_desc, u16 ptype)
{
- struct ice_rx_ptype_decoded decoded;
+ struct libie_rx_ptype_parsed parsed;
u16 rx_status0, rx_status1;
bool ipv4, ipv6;

- rx_status0 = le16_to_cpu(rx_desc->wb.status_error0);
- rx_status1 = le16_to_cpu(rx_desc->wb.status_error1);
-
- decoded = ice_decode_rx_desc_ptype(ptype);
-
/* Start with CHECKSUM_NONE and by default csum_level = 0 */
skb->ip_summed = CHECKSUM_NONE;
- skb_checksum_none_assert(skb);

- /* check if Rx checksum is enabled */
- if (!(ring->netdev->features & NETIF_F_RXCSUM))
+ parsed = libie_parse_rx_ptype(ptype);
+ if (!libie_has_rx_checksum(ring->netdev, parsed))
return;

- /* check if HW has decoded the packet and checksum */
- if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S)))
- return;
+ rx_status0 = le16_to_cpu(rx_desc->wb.status_error0);
+ rx_status1 = le16_to_cpu(rx_desc->wb.status_error1);

- if (!(decoded.known && decoded.outer_ip))
+ /* check if HW has parsed the packet and checksum */
+ if (!(rx_status0 & BIT(ICE_RX_FLEX_DESC_STATUS0_L3L4P_S)))
return;

- ipv4 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV4);
- ipv6 = (decoded.outer_ip == ICE_RX_PTYPE_OUTER_IP) &&
- (decoded.outer_ip_ver == ICE_RX_PTYPE_OUTER_IPV6);
+ ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4;
+ ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6;

if (ipv4 && (rx_status0 & (BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) |
BIT(ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S))))
@@ -151,19 +122,10 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
* we need to bump the checksum level by 1 to reflect the fact that
* we are indicating we validated the inner checksum.
*/
- if (decoded.tunnel_type >= ICE_RX_PTYPE_TUNNEL_IP_GRENAT)
+ if (parsed.tunnel_type >= LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT)
skb->csum_level = 1;

- /* Only report checksum unnecessary for TCP, UDP, or SCTP */
- switch (decoded.inner_prot) {
- case ICE_RX_PTYPE_INNER_PROT_TCP:
- case ICE_RX_PTYPE_INNER_PROT_UDP:
- case ICE_RX_PTYPE_INNER_PROT_SCTP:
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- break;
- default:
- break;
- }
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
return;

checksum_fail:
@@ -175,7 +137,7 @@ ice_rx_csum(struct ice_rx_ring *ring, struct sk_buff *skb,
* @rx_ring: Rx descriptor ring packet is being transacted on
* @rx_desc: pointer to the EOP Rx descriptor
* @skb: pointer to current skb being populated
- * @ptype: the packet type decoded by hardware
+ * @ptype: the packet type parsed by hardware
*
* This function checks the ring, descriptor, and packet information in
* order to populate the hash, checksum, VLAN, protocol, and
diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile
new file mode 100644
index 000000000000..95e81d09b474
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright(c) 2023 Intel Corporation.
+
+obj-$(CONFIG_LIBIE) += libie.o
+
+libie-objs += rx.o
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
new file mode 100644
index 000000000000..f503476d8eef
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -0,0 +1,110 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation. */
+
+#include <linux/net/intel/libie/rx.h>
+
+/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
+ * bitfield struct.
+ */
+
+#define LIBIE_RX_PTYPE(oip, ofrag, tun, tp, tefr, iprot, pl) { \
+ .outer_ip = LIBIE_RX_PTYPE_OUTER_##oip, \
+ .outer_frag = LIBIE_RX_PTYPE_##ofrag, \
+ .tunnel_type = LIBIE_RX_PTYPE_TUNNEL_IP_##tun, \
+ .tunnel_end_prot = LIBIE_RX_PTYPE_TUNNEL_END_##tp, \
+ .tunnel_end_frag = LIBIE_RX_PTYPE_##tefr, \
+ .inner_prot = LIBIE_RX_PTYPE_INNER_##iprot, \
+ .payload_layer = LIBIE_RX_PTYPE_PAYLOAD_##pl, \
+ }
+
+#define LIBIE_RX_PTYPE_UNUSED { }
+
+#define __LIBIE_RX_PTYPE_L2(iprot, pl) \
+ LIBIE_RX_PTYPE(L2, NOT_FRAG, NONE, NONE, NOT_FRAG, iprot, pl)
+#define LIBIE_RX_PTYPE_L2 __LIBIE_RX_PTYPE_L2(NONE, L2)
+#define LIBIE_RX_PTYPE_TS __LIBIE_RX_PTYPE_L2(TIMESYNC, L2)
+#define LIBIE_RX_PTYPE_L3 __LIBIE_RX_PTYPE_L2(NONE, L3)
+
+#define LIBIE_RX_PTYPE_IP_FRAG(oip) \
+ LIBIE_RX_PTYPE(IPV##oip, FRAG, NONE, NONE, NOT_FRAG, NONE, L3)
+#define LIBIE_RX_PTYPE_IP_L3(oip, tun, teprot, tefr) \
+ LIBIE_RX_PTYPE(IPV##oip, NOT_FRAG, tun, teprot, tefr, NONE, L3)
+#define LIBIE_RX_PTYPE_IP_L4(oip, tun, teprot, iprot) \
+ LIBIE_RX_PTYPE(IPV##oip, NOT_FRAG, tun, teprot, NOT_FRAG, iprot, L4)
+
+#define LIBIE_RX_PTYPE_IP_NOF(oip, tun, ver) \
+ LIBIE_RX_PTYPE_IP_L3(oip, tun, ver, NOT_FRAG), \
+ LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, UDP), \
+ LIBIE_RX_PTYPE_UNUSED, \
+ LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, TCP), \
+ LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, SCTP), \
+ LIBIE_RX_PTYPE_IP_L4(oip, tun, ver, ICMP)
+
+/* IPv oip --> tun --> IPv ver */
+#define LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, ver) \
+ LIBIE_RX_PTYPE_IP_L3(oip, tun, ver, FRAG), \
+ LIBIE_RX_PTYPE_IP_NOF(oip, tun, ver)
+
+/* Non Tunneled IPv oip */
+#define LIBIE_RX_PTYPE_IP_RAW(oip) \
+ LIBIE_RX_PTYPE_IP_FRAG(oip), \
+ LIBIE_RX_PTYPE_IP_NOF(oip, NONE, NONE)
+
+/* IPv oip --> tun --> { IPv4, IPv6 } */
+#define LIBIE_RX_PTYPE_IP_TUN(oip, tun) \
+ LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, IPV4), \
+ LIBIE_RX_PTYPE_IP_TUN_VER(oip, tun, IPV6)
+
+/* IPv oip --> GRE/NAT tun --> { x, IPv4, IPv6 } */
+#define LIBIE_RX_PTYPE_IP_GRE(oip, tun) \
+ LIBIE_RX_PTYPE_IP_L3(oip, tun, NONE, NOT_FRAG), \
+ LIBIE_RX_PTYPE_IP_TUN(oip, tun)
+
+/* Non Tunneled IPv oip
+ * IPv oip --> { IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC --> { x, IPv4, IPv6 }
+ * IPv oip --> GRE/NAT --> MAC/VLAN --> { x, IPv4, IPv6 }
+ */
+#define LIBIE_RX_PTYPE_IP(oip) \
+ LIBIE_RX_PTYPE_IP_RAW(oip), \
+ LIBIE_RX_PTYPE_IP_TUN(oip, IP), \
+ LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT), \
+ LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT_MAC), \
+ LIBIE_RX_PTYPE_IP_GRE(oip, GRENAT_MAC_VLAN)
+
+/* Lookup table mapping for O(1) parsing */
+const struct libie_rx_ptype_parsed libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM] = {
+ /* L2 packet types */
+ LIBIE_RX_PTYPE_UNUSED,
+ LIBIE_RX_PTYPE_L2,
+ LIBIE_RX_PTYPE_TS,
+ LIBIE_RX_PTYPE_L2,
+ LIBIE_RX_PTYPE_UNUSED,
+ LIBIE_RX_PTYPE_UNUSED,
+ LIBIE_RX_PTYPE_L2,
+ LIBIE_RX_PTYPE_L2,
+ LIBIE_RX_PTYPE_UNUSED,
+ LIBIE_RX_PTYPE_UNUSED,
+ LIBIE_RX_PTYPE_L2,
+ LIBIE_RX_PTYPE_UNUSED,
+
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+ LIBIE_RX_PTYPE_L3,
+
+ LIBIE_RX_PTYPE_IP(4),
+ LIBIE_RX_PTYPE_IP(6),
+};
+EXPORT_SYMBOL_NS_GPL(libie_rx_ptype_lut, LIBIE);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Intel(R) Ethernet common library");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
new file mode 100644
index 000000000000..58bd0f35d025
--- /dev/null
+++ b/include/linux/net/intel/libie/rx.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef __LIBIE_RX_H
+#define __LIBIE_RX_H
+
+#include <linux/netdevice.h>
+
+/* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
+ * bitfield struct.
+ */
+
+struct libie_rx_ptype_parsed {
+ u16 outer_ip:2;
+ u16 outer_frag:1;
+ u16 tunnel_type:3;
+ u16 tunnel_end_prot:2;
+ u16 tunnel_end_frag:1;
+ u16 inner_prot:3;
+ u16 payload_layer:2;
+};
+
+enum libie_rx_ptype_outer_ip {
+ LIBIE_RX_PTYPE_OUTER_L2 = 0U,
+ LIBIE_RX_PTYPE_OUTER_IPV4,
+ LIBIE_RX_PTYPE_OUTER_IPV6,
+};
+
+enum libie_rx_ptype_outer_fragmented {
+ LIBIE_RX_PTYPE_NOT_FRAG = 0U,
+ LIBIE_RX_PTYPE_FRAG,
+};
+
+enum libie_rx_ptype_tunnel_type {
+ LIBIE_RX_PTYPE_TUNNEL_IP_NONE = 0U,
+ LIBIE_RX_PTYPE_TUNNEL_IP_IP,
+ LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT,
+ LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC,
+ LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN,
+};
+
+enum libie_rx_ptype_tunnel_end_prot {
+ LIBIE_RX_PTYPE_TUNNEL_END_NONE = 0U,
+ LIBIE_RX_PTYPE_TUNNEL_END_IPV4,
+ LIBIE_RX_PTYPE_TUNNEL_END_IPV6,
+};
+
+enum libie_rx_ptype_inner_prot {
+ LIBIE_RX_PTYPE_INNER_NONE = 0U,
+ LIBIE_RX_PTYPE_INNER_UDP,
+ LIBIE_RX_PTYPE_INNER_TCP,
+ LIBIE_RX_PTYPE_INNER_SCTP,
+ LIBIE_RX_PTYPE_INNER_ICMP,
+ LIBIE_RX_PTYPE_INNER_TIMESYNC,
+};
+
+enum libie_rx_ptype_payload_layer {
+ LIBIE_RX_PTYPE_PAYLOAD_NONE = PKT_HASH_TYPE_NONE,
+ LIBIE_RX_PTYPE_PAYLOAD_L2 = PKT_HASH_TYPE_L2,
+ LIBIE_RX_PTYPE_PAYLOAD_L3 = PKT_HASH_TYPE_L3,
+ LIBIE_RX_PTYPE_PAYLOAD_L4 = PKT_HASH_TYPE_L4,
+};
+
+#define LIBIE_RX_PTYPE_NUM 154
+
+extern const struct libie_rx_ptype_parsed
+libie_rx_ptype_lut[LIBIE_RX_PTYPE_NUM];
+
+/**
+ * libie_parse_rx_ptype - convert HW packet type to software bitfield structure
+ * @ptype: 10-bit hardware packet type value from the descriptor
+ *
+ * libie_rx_ptype_lut must be accessed only via this wrapper.
+ *
+ * Returns the parsed bitfield struct corresponding to the provided ptype.
+ */
+static inline struct libie_rx_ptype_parsed libie_parse_rx_ptype(u32 ptype)
+{
+ if (unlikely(ptype >= LIBIE_RX_PTYPE_NUM))
+ ptype = 0;
+
+ return libie_rx_ptype_lut[ptype];
+}
+
+/* libie_has_*() can be used to quickly check whether the HW metadata is
+ * available to avoid further expensive processing such as descriptor reads.
+ * They already check that the corresponding netdev feature is enabled,
+ * thus can be used as drop-in replacements for the feature checks.
+ */
+
+static inline bool libie_has_rx_checksum(const struct net_device *dev,
+ struct libie_rx_ptype_parsed parsed)
+{
+ /* _INNER_{SCTP,TCP,UDP} are possible only when _OUTER_IPV* is set,
+ * it is enough to check only for the L4 type.
+ */
+ switch (parsed.inner_prot) {
+ case LIBIE_RX_PTYPE_INNER_TCP:
+ case LIBIE_RX_PTYPE_INNER_UDP:
+ case LIBIE_RX_PTYPE_INNER_SCTP:
+ return dev->features & NETIF_F_RXCSUM;
+ default:
+ return false;
+ }
+}
+
+static inline bool libie_has_rx_hash(const struct net_device *dev,
+ struct libie_rx_ptype_parsed parsed)
+{
+ if (parsed.payload_layer < LIBIE_RX_PTYPE_PAYLOAD_L2)
+ return false;
+
+ return dev->features & NETIF_F_RXHASH;
+}
+
+/**
+ * libie_skb_set_hash - fill in skb hash value basing on the parsed ptype
+ * @skb: skb to fill the hash in
+ * @hash: 32-bit hash value from the descriptor
+ * @parsed: parsed packet type
+ */
+static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
+ struct libie_rx_ptype_parsed parsed)
+{
+ skb_set_hash(skb, hash, parsed.payload_layer);
+}
+
+#endif /* __LIBIE_RX_H */
--
2.40.1


2023-05-16 16:31:51

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH net-next 11/11] iavf: switch queue stats to libie

iavf is pretty much ready for using the generic libie stats, so drop all
the custom code and just use the generic definitions. The only gap is
that iavf previously lacked a counter of Tx queue stops. It's present in
the other drivers, so add it here as well.
The rest is straightforward. There were two fields in the Tx stats
struct which didn't belong there. The first one was never used, so wipe
it; move the other one to the queue structure. Plus, move a couple of
fields around in &iavf_ring to account for the stats structs' alignment.
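
For the record, bumping one counter now looks as follows (a sketch of
what libie_stats_inc_one() might expand to -- an assumption based on its
usage in this patch, not the verbatim libie definition; the counters are
u64_stats_t protected by the ring's syncp):

#define libie_stats_inc_one(s, f)			\
	do {						\
		u64_stats_update_begin(&(s)->syncp);	\
		u64_stats_inc(&(s)->f);			\
		u64_stats_update_end(&(s)->syncp);	\
	} while (0)

so no open-coded syncp handling is left in the driver.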

Signed-off-by: Alexander Lobakin <[email protected]>
---
.../net/ethernet/intel/iavf/iavf_ethtool.c | 87 ++++------------
drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 98 ++++++++++---------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 47 +++------
4 files changed, 87 insertions(+), 147 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index de3050c02b6f..0dcf50d75f86 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -46,16 +46,6 @@ struct iavf_stats {
.stat_offset = offsetof(_type, _stat) \
}

-/* Helper macro for defining some statistics related to queues */
-#define IAVF_QUEUE_STAT(_name, _stat) \
- IAVF_STAT(struct iavf_ring, _name, _stat)
-
-/* Stats associated with a Tx or Rx ring */
-static const struct iavf_stats iavf_gstrings_queue_stats[] = {
- IAVF_QUEUE_STAT("%s-%u.packets", stats.packets),
- IAVF_QUEUE_STAT("%s-%u.bytes", stats.bytes),
-};
-
/**
* iavf_add_one_ethtool_stat - copy the stat into the supplied buffer
* @data: location to store the stat value
@@ -141,43 +131,6 @@ __iavf_add_ethtool_stats(u64 **data, void *pointer,
#define iavf_add_ethtool_stats(data, pointer, stats) \
__iavf_add_ethtool_stats(data, pointer, stats, ARRAY_SIZE(stats))

-/**
- * iavf_add_queue_stats - copy queue statistics into supplied buffer
- * @data: ethtool stats buffer
- * @ring: the ring to copy
- *
- * Queue statistics must be copied while protected by
- * u64_stats_fetch_begin, so we can't directly use iavf_add_ethtool_stats.
- * Assumes that queue stats are defined in iavf_gstrings_queue_stats. If the
- * ring pointer is null, zero out the queue stat values and update the data
- * pointer. Otherwise safely copy the stats from the ring into the supplied
- * buffer and update the data pointer when finished.
- *
- * This function expects to be called while under rcu_read_lock().
- **/
-static void
-iavf_add_queue_stats(u64 **data, struct iavf_ring *ring)
-{
- const unsigned int size = ARRAY_SIZE(iavf_gstrings_queue_stats);
- const struct iavf_stats *stats = iavf_gstrings_queue_stats;
- unsigned int start;
- unsigned int i;
-
- /* To avoid invalid statistics values, ensure that we keep retrying
- * the copy until we get a consistent value according to
- * u64_stats_fetch_retry. But first, make sure our ring is
- * non-null before attempting to access its syncp.
- */
- do {
- start = !ring ? 0 : u64_stats_fetch_begin(&ring->syncp);
- for (i = 0; i < size; i++)
- iavf_add_one_ethtool_stat(&(*data)[i], ring, &stats[i]);
- } while (ring && u64_stats_fetch_retry(&ring->syncp, start));
-
- /* Once we successfully copy the stats in, update the data pointer */
- *data += size;
-}
-
/**
* __iavf_add_stat_strings - copy stat strings into ethtool buffer
* @p: ethtool supplied buffer
@@ -237,8 +190,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = {

#define IAVF_STATS_LEN ARRAY_SIZE(iavf_gstrings_stats)

-#define IAVF_QUEUE_STATS_LEN ARRAY_SIZE(iavf_gstrings_queue_stats)
-
/**
* iavf_get_link_ksettings - Get Link Speed and Duplex settings
* @netdev: network interface device structure
@@ -308,18 +259,22 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
**/
static int iavf_get_sset_count(struct net_device *netdev, int sset)
{
- /* Report the maximum number queues, even if not every queue is
- * currently configured. Since allocation of queues is in pairs,
- * use netdev->real_num_tx_queues * 2. The real_num_tx_queues is set
- * at device creation and never changes.
- */
+ u32 num;

- if (sset == ETH_SS_STATS)
- return IAVF_STATS_LEN +
- (IAVF_QUEUE_STATS_LEN * 2 *
- netdev->real_num_tx_queues);
- else
+ switch (sset) {
+ case ETH_SS_STATS:
+ /* Per-queue */
+ num = libie_rq_stats_get_sset_count();
+ num += libie_sq_stats_get_sset_count();
+ num *= netdev->real_num_tx_queues;
+
+ /* Global */
+ num += IAVF_STATS_LEN;
+
+ return num;
+ default:
return -EINVAL;
+ }
}

/**
@@ -346,15 +301,15 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
* it to iterate over rings' stats.
*/
for (i = 0; i < adapter->num_active_queues; i++) {
- struct iavf_ring *ring;
+ const struct iavf_ring *ring;

/* Tx rings stats */
- ring = &adapter->tx_rings[i];
- iavf_add_queue_stats(&data, ring);
+ libie_sq_stats_get_data(&data, &adapter->tx_rings[i].sq_stats);

/* Rx rings stats */
ring = &adapter->rx_rings[i];
- iavf_add_queue_stats(&data, ring);
+ libie_rq_stats_get_data(&data, &ring->rq_stats,
+ ring->rx_pages ? ring->pool : NULL);
}
rcu_read_unlock();
}
@@ -376,10 +331,8 @@ static void iavf_get_stat_strings(struct net_device *netdev, u8 *data)
* real_num_tx_queues for both Tx and Rx queues.
*/
for (i = 0; i < netdev->real_num_tx_queues; i++) {
- iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
- "tx", i);
- iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
- "rx", i);
+ libie_sq_stats_get_strings(&data, i);
+ libie_rq_stats_get_strings(&data, i);
}
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 120bb6a09ceb..4a702795ddb3 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1581,6 +1581,7 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter)
tx_ring->itr_setting = IAVF_ITR_TX_DEF;
if (adapter->flags & IAVF_FLAG_WB_ON_ITR_CAPABLE)
tx_ring->flags |= IAVF_TXR_FLAGS_WB_ON_ITR;
+ u64_stats_init(&tx_ring->sq_stats.syncp);

rx_ring = &adapter->rx_rings[i];
rx_ring->queue_index = i;
@@ -1588,6 +1589,7 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter)
rx_ring->dev = &adapter->pdev->dev;
rx_ring->count = adapter->rx_desc_count;
rx_ring->itr_setting = IAVF_ITR_RX_DEF;
+ u64_stats_init(&rx_ring->rq_stats.syncp);
}

adapter->num_active_queues = num_active_queues;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 2c09f9cf81f9..abf4b0645fba 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -158,6 +158,9 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
for (i = 0; i < vsi->back->num_active_queues; i++) {
tx_ring = &vsi->back->tx_rings[i];
if (tx_ring && tx_ring->desc) {
+ const struct libie_sq_stats *st = &tx_ring->sq_stats;
+ u32 start;
+
/* If packet counter has not changed the queue is
* likely stalled, so force an interrupt for this
* queue.
@@ -165,8 +168,13 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
* prev_pkt_ctr would be negative if there was no
* pending work.
*/
- packets = tx_ring->stats.packets & INT_MAX;
- if (tx_ring->tx_stats.prev_pkt_ctr == packets) {
+ do {
+ start = u64_stats_fetch_begin(&st->syncp);
+ packets = u64_stats_read(&st->packets) &
+ INT_MAX;
+ } while (u64_stats_fetch_retry(&st->syncp, start));
+
+ if (tx_ring->prev_pkt_ctr == packets) {
iavf_force_wb(vsi, tx_ring->q_vector);
continue;
}
@@ -175,7 +183,7 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
* to iavf_get_tx_pending()
*/
smp_rmb();
- tx_ring->tx_stats.prev_pkt_ctr =
+ tx_ring->prev_pkt_ctr =
iavf_get_tx_pending(tx_ring, true) ? packets : -1;
}
}
@@ -194,10 +202,10 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
struct iavf_ring *tx_ring, int napi_budget)
{
+ struct libie_sq_onstack_stats stats = { };
int i = tx_ring->next_to_clean;
struct iavf_tx_buffer *tx_buf;
struct iavf_tx_desc *tx_desc;
- unsigned int total_bytes = 0, total_packets = 0;
unsigned int budget = IAVF_DEFAULT_IRQ_WORK;

tx_buf = &tx_ring->tx_bi[i];
@@ -224,8 +232,8 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
tx_buf->next_to_watch = NULL;

/* update the statistics for this packet */
- total_bytes += tx_buf->bytecount;
- total_packets += tx_buf->gso_segs;
+ stats.bytes += tx_buf->bytecount;
+ stats.packets += tx_buf->gso_segs;

/* free the skb */
napi_consume_skb(tx_buf->skb, napi_budget);
@@ -282,12 +290,9 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,

i += tx_ring->count;
tx_ring->next_to_clean = i;
- u64_stats_update_begin(&tx_ring->syncp);
- tx_ring->stats.bytes += total_bytes;
- tx_ring->stats.packets += total_packets;
- u64_stats_update_end(&tx_ring->syncp);
- tx_ring->q_vector->tx.total_bytes += total_bytes;
- tx_ring->q_vector->tx.total_packets += total_packets;
+ libie_sq_napi_stats_add(&tx_ring->sq_stats, &stats);
+ tx_ring->q_vector->tx.total_bytes += stats.bytes;
+ tx_ring->q_vector->tx.total_packets += stats.packets;

if (tx_ring->flags & IAVF_TXR_FLAGS_WB_ON_ITR) {
/* check to see if there are < 4 descriptors
@@ -306,10 +311,10 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,

/* notify netdev of completed buffers */
netdev_tx_completed_queue(txring_txq(tx_ring),
- total_packets, total_bytes);
+ stats.packets, stats.bytes);

#define TX_WAKE_THRESHOLD ((s16)(DESC_NEEDED * 2))
- if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+ if (unlikely(stats.packets && netif_carrier_ok(tx_ring->netdev) &&
(IAVF_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))) {
/* Make sure that anybody stopping the queue after this
* sees the new next_to_clean.
@@ -320,7 +325,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
!test_bit(__IAVF_VSI_DOWN, vsi->state)) {
netif_wake_subqueue(tx_ring->netdev,
tx_ring->queue_index);
- ++tx_ring->tx_stats.restart_queue;
+ libie_stats_inc_one(&tx_ring->sq_stats, restarts);
}
}

@@ -675,7 +680,7 @@ int iavf_setup_tx_descriptors(struct iavf_ring *tx_ring)

tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
- tx_ring->tx_stats.prev_pkt_ctr = -1;
+ tx_ring->prev_pkt_ctr = -1;
return 0;

err:
@@ -731,7 +736,7 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
kfree(rx_ring->rx_pages);
rx_ring->rx_pages = NULL;

- page_pool_destroy(rx_ring->pool);
+ libie_rx_page_pool_destroy(rx_ring->pool, &rx_ring->rq_stats);
rx_ring->dev = dev;

if (rx_ring->desc) {
@@ -760,8 +765,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
if (!rx_ring->rx_pages)
return ret;

- u64_stats_init(&rx_ring->syncp);
-
/* Round up to nearest 4K */
rx_ring->size = rx_ring->count * sizeof(union iavf_32byte_rx_desc);
rx_ring->size = ALIGN(rx_ring->size, 4096);
@@ -863,10 +866,8 @@ static u32 __iavf_alloc_rx_pages(struct iavf_ring *rx_ring, u32 to_refill,
dma_addr_t dma;

page = page_pool_alloc_pages(pool, gfp);
- if (!page) {
- rx_ring->rx_stats.alloc_page_failed++;
+ if (!page)
break;
- }

rx_ring->rx_pages[ntu] = page;
dma = page_pool_get_dma_addr(page);
@@ -1090,25 +1091,23 @@ static struct sk_buff *iavf_build_skb(struct page *page, u32 size)

/**
* iavf_is_non_eop - process handling of non-EOP buffers
- * @rx_ring: Rx ring being processed
* @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
+ * @stats: NAPI poll local stats to update
*
* This function updates next to clean. If the buffer is an EOP buffer
* this function exits returning false, otherwise it will place the
* sk_buff in the next buffer to be chained and return true indicating
* that this is in fact a non-EOP buffer.
**/
-static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
- union iavf_rx_desc *rx_desc,
- struct sk_buff *skb)
+static bool iavf_is_non_eop(union iavf_rx_desc *rx_desc,
+ struct libie_rq_onstack_stats *stats)
{
/* if we are the last buffer then there is nothing else to do */
#define IAVF_RXD_EOF BIT(IAVF_RX_DESC_STATUS_EOF_SHIFT)
if (likely(iavf_test_staterr(rx_desc, IAVF_RXD_EOF)))
return false;

- rx_ring->rx_stats.non_eop_descs++;
+ stats->fragments++;

return true;
}
@@ -1127,8 +1126,8 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
**/
static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
{
- unsigned int total_rx_bytes = 0, total_rx_packets = 0;
const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
+ struct libie_rq_onstack_stats stats = { };
u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
struct page_pool *pool = rx_ring->pool;
struct sk_buff *skb = rx_ring->skb;
@@ -1145,9 +1144,13 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
u64 qword;

/* return some buffers to hardware, one at a time is too slow */
- if (to_refill >= IAVF_RX_BUFFER_WRITE)
+ if (to_refill >= IAVF_RX_BUFFER_WRITE) {
to_refill = __iavf_alloc_rx_pages(rx_ring, to_refill,
gfp);
+ if (unlikely(to_refill))
+ libie_stats_inc_one(&rx_ring->rq_stats,
+ alloc_page_fail);
+ }

rx_desc = IAVF_RX_DESC(rx_ring, ntc);

@@ -1195,7 +1198,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* exit if we failed to retrieve a buffer */
if (!skb) {
page_pool_put_page(pool, page, size, true);
- rx_ring->rx_stats.alloc_buff_failed++;
+ libie_stats_inc_one(&rx_ring->rq_stats,
+ build_skb_fail);
break;
}

@@ -1207,7 +1211,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)

prefetch(IAVF_RX_DESC(rx_ring, ntc));

- if (iavf_is_non_eop(rx_ring, rx_desc, skb))
+ if (iavf_is_non_eop(rx_desc, &stats))
continue;

/* ERR_MASK will only have valid bits if EOP set, and
@@ -1227,7 +1231,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
}

/* probably a little skewed due to removing CRC */
- total_rx_bytes += skb->len;
+ stats.bytes += skb->len;

qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
rx_ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >>
@@ -1249,7 +1253,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
skb = NULL;

/* update budget accounting */
- total_rx_packets++;
+ stats.packets++;
}

rx_ring->next_to_clean = ntc;
@@ -1260,16 +1264,16 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* guarantee a trip back through this routine if there was
* a failure
*/
- if (unlikely(to_refill))
+ if (unlikely(to_refill)) {
+ libie_stats_inc_one(&rx_ring->rq_stats,
+ alloc_page_fail);
cleaned_count = budget;
+ }
}

- u64_stats_update_begin(&rx_ring->syncp);
- rx_ring->stats.packets += total_rx_packets;
- rx_ring->stats.bytes += total_rx_bytes;
- u64_stats_update_end(&rx_ring->syncp);
- rx_ring->q_vector->rx.total_packets += total_rx_packets;
- rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+ libie_rq_napi_stats_add(&rx_ring->rq_stats, &stats);
+ rx_ring->q_vector->rx.total_packets += stats.packets;
+ rx_ring->q_vector->rx.total_bytes += stats.bytes;

return cleaned_count;
}
@@ -1448,10 +1452,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget)
return budget - 1;
}
tx_only:
- if (arm_wb) {
- q_vector->tx.ring[0].tx_stats.tx_force_wb++;
+ if (arm_wb)
iavf_enable_wb_on_itr(vsi, q_vector);
- }
return budget;
}

@@ -1910,6 +1912,7 @@ bool __iavf_chk_linearize(struct sk_buff *skb)
int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
{
netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ libie_stats_inc_one(&tx_ring->sq_stats, stops);
/* Memory barrier before checking head and tail */
smp_mb();

@@ -1919,7 +1922,8 @@ int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)

/* A reprieve! - use start_queue because it doesn't call schedule */
netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
- ++tx_ring->tx_stats.restart_queue;
+ libie_stats_inc_one(&tx_ring->sq_stats, restarts);
+
return 0;
}

@@ -2100,7 +2104,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
return NETDEV_TX_OK;
}
count = iavf_txd_use_count(skb->len);
- tx_ring->tx_stats.tx_linearize++;
+ libie_stats_inc_one(&tx_ring->sq_stats, linearized);
}

/* need: 1 descriptor per page * PAGE_SIZE/IAVF_MAX_DATA_PER_TXD,
@@ -2110,7 +2114,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
* otherwise try next time
*/
if (iavf_maybe_stop_tx(tx_ring, count + 4 + 1)) {
- tx_ring->tx_stats.tx_busy++;
+ libie_stats_inc_one(&tx_ring->sq_stats, busy);
return NETDEV_TX_BUSY;
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 8fbe549ce6a5..64c93d6fa54d 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -4,6 +4,8 @@
#ifndef _IAVF_TXRX_H_
#define _IAVF_TXRX_H_

+#include <linux/net/intel/libie/stats.h>
+
/* Interrupt Throttling and Rate Limiting Goodies */
#define IAVF_DEFAULT_IRQ_WORK 256

@@ -201,27 +203,6 @@ struct iavf_tx_buffer {
u32 tx_flags;
};

-struct iavf_queue_stats {
- u64 packets;
- u64 bytes;
-};
-
-struct iavf_tx_queue_stats {
- u64 restart_queue;
- u64 tx_busy;
- u64 tx_done_old;
- u64 tx_linearize;
- u64 tx_force_wb;
- int prev_pkt_ctr;
- u64 tx_lost_interrupt;
-};
-
-struct iavf_rx_queue_stats {
- u64 non_eop_descs;
- u64 alloc_page_failed;
- u64 alloc_buff_failed;
-};
-
/* some useful defines for virtchannel interface, which
* is the only remaining user of header split
*/
@@ -272,21 +253,9 @@ struct iavf_ring {
#define IAVF_TXR_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(4)
#define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2 BIT(5)

- /* stats structs */
- struct iavf_queue_stats stats;
- struct u64_stats_sync syncp;
- union {
- struct iavf_tx_queue_stats tx_stats;
- struct iavf_rx_queue_stats rx_stats;
- };
-
- unsigned int size; /* length of descriptor ring in bytes */
- dma_addr_t dma; /* physical address of ring */
-
struct iavf_vsi *vsi; /* Backreference to associated VSI */
struct iavf_q_vector *q_vector; /* Backreference to associated vector */

- struct rcu_head rcu; /* to avoid race on free */
struct sk_buff *skb; /* When iavf_clean_rx_ring_irq() must
* return before it sees the EOP for
* the current packet, we save that skb
@@ -295,6 +264,18 @@ struct iavf_ring {
* iavf_clean_rx_ring_irq() is called
* for this ring.
*/
+
+ /* stats structs */
+ union {
+ struct libie_sq_stats sq_stats;
+ struct libie_rq_stats rq_stats;
+ };
+
+ int prev_pkt_ctr; /* For stall detection */
+ unsigned int size; /* length of descriptor ring in bytes */
+ dma_addr_t dma; /* physical address of ring */
+
+ struct rcu_head rcu; /* to avoid race on free */
} ____cacheline_internodealigned_in_smp;

#define IAVF_ITR_ADAPTIVE_MIN_INC 0x0002
--
2.40.1


2023-05-16 16:32:03

by Alexander Lobakin

[permalink] [raw]
Subject: [PATCH net-next 03/11] iavf: optimize Rx buffer allocation a bunch

The Rx hotpath code of IAVF is not well-optimized TBH. Before doing any
further buffer model changes, shake it up a bit. Notably:

1. Cache more variables on the stack.
DMA device, Rx page size, NTC -- these are the most common things
used all throughout the hotpath, often in loops on each iteration.
Instead of fetching (or even calculating, as with the page size) them
from the ring all the time, cache them on the stack at the beginning
of the NAPI polling callback. NTC will be written back at the end,
the rest are used read-only, so no sync needed.
2. Don't move the recycled buffers around the ring.
The idea of handing the page of the just-recycled buffer over to a
different buffer -- in this case, the first one that needs to be
allocated -- and doing that on each new frame, is fundamentally wrong.
It involves a bunch of fetches, branches and then writes (and one Rx
buffer struct is at least 32 bytes) where they're completely unneeded,
but brings no benefit -- the result is the same as if we recycled it
in place, at the same position where it was used. So drop this and let
the main refilling function take care of all the buffers which were
processed and now need to be recycled/refilled.
3. Don't allocate with %GFP_ATOMIC on ifup.
This involved introducing the @gfp parameter to a couple of functions.
Doesn't change anything for Rx -> softirq.
4. 1 budget unit == 1 descriptor, not skb.
There could be an underflow when receiving a lot of fragmented frames.
If each of them consists of 2 frags, we'd have processed 64 descriptors
by the point where we pass the 32nd skb to the stack. But the driver
would count that only as half the budget, which could make NAPI
re-enable interrupts prematurely and create unnecessary CPU load.
5. Shortcut !size case.
It's super rare, but possible -- for example, if the last buffer of
the fragmented frame contained only FCS, which was then stripped by
the HW. Instead of checking for size several times when processing,
quickly reuse the buffer and jump to the skb fields part.
6. Refill the ring after finishing the polling loop.
Previously, the loop wasn't starting a new iteration after the 64th
desc, meaning that we were always leaving 16 buffers non-refilled
until the next NAPI poll. It's better to refill them while they're
still hot, so do that right after exiting the loop as well (see the
loop sketch below).
For a full cycle of 64 descs, there will be 4 refills of 16 descs
from now on.
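
A sketch of the resulting polling loop shape, condensed from the
iavf_clean_rx_irq() diff below (not the verbatim code):

	while (likely(cleaned_count < budget)) {
		/* bulk refill while walking the ring */
		if (to_refill >= IAVF_RX_BUFFER_WRITE)
			to_refill = __iavf_alloc_rx_buffers(rx_ring,
							    to_refill,
							    gfp);

		/* ... process one descriptor ... */

		cleaned_count++;	/* 1 budget unit == 1 desc */
		to_refill++;
	}

	/* refill once more right after the loop, while buffers are hot */
	if (to_refill >= IAVF_RX_BUFFER_WRITE)
		to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill,
						    gfp);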

Function: add/remove: 4/2 grow/shrink: 0/5 up/down: 473/-647 (-174)

+ up to 2% performance.

Signed-off-by: Alexander Lobakin <[email protected]>
---
drivers/net/ethernet/intel/iavf/iavf_main.c | 2 +-
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 259 +++++++++-----------
drivers/net/ethernet/intel/iavf/iavf_txrx.h | 3 +-
3 files changed, 114 insertions(+), 150 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index a5a6c9861a93..ade32aa1ed78 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1236,7 +1236,7 @@ static void iavf_configure(struct iavf_adapter *adapter)
for (i = 0; i < adapter->num_active_queues; i++) {
struct iavf_ring *ring = &adapter->rx_rings[i];

- iavf_alloc_rx_buffers(ring, IAVF_DESC_UNUSED(ring));
+ iavf_alloc_rx_buffers(ring);
}
}

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index a7121dc5c32b..fd08ce67380e 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -736,7 +736,6 @@ void iavf_clean_rx_ring(struct iavf_ring *rx_ring)
/* Zero out the descriptor ring */
memset(rx_ring->desc, 0, rx_ring->size);

- rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
}
@@ -792,7 +791,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
goto err;
}

- rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;

@@ -812,9 +810,6 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
{
rx_ring->next_to_use = val;

- /* update next to alloc since we have filled the ring */
- rx_ring->next_to_alloc = val;
-
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
@@ -828,12 +823,17 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
* iavf_alloc_mapped_page - recycle or make a new page
* @rx_ring: ring to use
* @bi: rx_buffer struct to modify
+ * @dev: device used for DMA mapping
+ * @order: page order to allocate
+ * @gfp: GFP mask to allocate page
*
* Returns true if the page was successfully allocated or
* reused.
**/
static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *bi)
+ struct iavf_rx_buffer *bi,
+ struct device *dev, u32 order,
+ gfp_t gfp)
{
struct page *page = bi->page;
dma_addr_t dma;
@@ -845,23 +845,21 @@ static bool iavf_alloc_mapped_page(struct iavf_ring *rx_ring,
}

/* alloc new page for storage */
- page = dev_alloc_pages(iavf_rx_pg_order(rx_ring));
+ page = __dev_alloc_pages(gfp, order);
if (unlikely(!page)) {
rx_ring->rx_stats.alloc_page_failed++;
return false;
}

/* map page for use */
- dma = dma_map_page_attrs(rx_ring->dev, page, 0,
- iavf_rx_pg_size(rx_ring),
- DMA_FROM_DEVICE,
- IAVF_RX_DMA_ATTR);
+ dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE << order,
+ DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);

/* if mapping failed free memory back to system since
* there isn't much point in holding memory we can't use
*/
- if (dma_mapping_error(rx_ring->dev, dma)) {
- __free_pages(page, iavf_rx_pg_order(rx_ring));
+ if (dma_mapping_error(dev, dma)) {
+ __free_pages(page, order);
rx_ring->rx_stats.alloc_page_failed++;
return false;
}
@@ -898,32 +896,36 @@ static void iavf_receive_skb(struct iavf_ring *rx_ring,
}

/**
- * iavf_alloc_rx_buffers - Replace used receive buffers
+ * __iavf_alloc_rx_buffers - Replace used receive buffers
* @rx_ring: ring to place buffers on
- * @cleaned_count: number of buffers to replace
+ * @to_refill: number of buffers to replace
+ * @gfp: GFP mask to allocate pages
*
- * Returns false if all allocations were successful, true if any fail
+ * Returns 0 if all allocations were successful or the number of buffers left
+ * to refill in case of an allocation failure.
**/
-bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)
+static u32 __iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u32 to_refill,
+ gfp_t gfp)
{
- u16 ntu = rx_ring->next_to_use;
+ u32 order = iavf_rx_pg_order(rx_ring);
+ struct device *dev = rx_ring->dev;
+ u32 ntu = rx_ring->next_to_use;
union iavf_rx_desc *rx_desc;
struct iavf_rx_buffer *bi;

/* do nothing if no valid netdev defined */
- if (!rx_ring->netdev || !cleaned_count)
- return false;
+ if (unlikely(!rx_ring->netdev || !to_refill))
+ return 0;

rx_desc = IAVF_RX_DESC(rx_ring, ntu);
bi = &rx_ring->rx_bi[ntu];

do {
- if (!iavf_alloc_mapped_page(rx_ring, bi))
- goto no_buffers;
+ if (!iavf_alloc_mapped_page(rx_ring, bi, dev, order, gfp))
+ break;

/* sync the buffer for use by the device */
- dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
- bi->page_offset,
+ dma_sync_single_range_for_device(dev, bi->dma, bi->page_offset,
rx_ring->rx_buf_len,
DMA_FROM_DEVICE);

@@ -943,23 +945,17 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)

/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.qword1.status_error_len = 0;
-
- cleaned_count--;
- } while (cleaned_count);
+ } while (--to_refill);

if (rx_ring->next_to_use != ntu)
iavf_release_rx_desc(rx_ring, ntu);

- return false;
-
-no_buffers:
- if (rx_ring->next_to_use != ntu)
- iavf_release_rx_desc(rx_ring, ntu);
+ return to_refill;
+}

- /* make sure to come back via polling to try again after
- * allocation failure
- */
- return true;
+void iavf_alloc_rx_buffers(struct iavf_ring *rxr)
+{
+ __iavf_alloc_rx_buffers(rxr, IAVF_DESC_UNUSED(rxr), GFP_KERNEL);
}

/**
@@ -1104,32 +1100,6 @@ static bool iavf_cleanup_headers(struct iavf_ring *rx_ring, struct sk_buff *skb)
return false;
}

-/**
- * iavf_reuse_rx_page - page flip buffer and store it back on the ring
- * @rx_ring: rx descriptor ring to store buffers on
- * @old_buff: donor buffer to have page reused
- *
- * Synchronizes page for reuse by the adapter
- **/
-static void iavf_reuse_rx_page(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *old_buff)
-{
- struct iavf_rx_buffer *new_buff;
- u16 nta = rx_ring->next_to_alloc;
-
- new_buff = &rx_ring->rx_bi[nta];
-
- /* update, and store next to alloc */
- nta++;
- rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
-
- /* transfer page from old buffer to new buffer */
- new_buff->dma = old_buff->dma;
- new_buff->page = old_buff->page;
- new_buff->page_offset = old_buff->page_offset;
- new_buff->pagecnt_bias = old_buff->pagecnt_bias;
-}
-
/**
* iavf_can_reuse_rx_page - Determine if this page can be reused by
* the adapter for another receive
@@ -1191,30 +1161,26 @@ static bool iavf_can_reuse_rx_page(struct iavf_rx_buffer *rx_buffer)

/**
* iavf_add_rx_frag - Add contents of Rx buffer to sk_buff
- * @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: buffer containing page to add
* @skb: sk_buff to place the data into
+ * @rx_buffer: buffer containing page to add
* @size: packet length from rx_desc
+ * @pg_size: Rx buffer page size
*
* This function will add the data contained in rx_buffer->page to the skb.
* It will just attach the page as a frag to the skb.
*
* The function will then update the page offset.
**/
-static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
+static void iavf_add_rx_frag(struct sk_buff *skb,
struct iavf_rx_buffer *rx_buffer,
- struct sk_buff *skb,
- unsigned int size)
+ u32 size, u32 pg_size)
{
#if (PAGE_SIZE < 8192)
- unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
+ unsigned int truesize = pg_size / 2;
#else
unsigned int truesize = SKB_DATA_ALIGN(size + IAVF_SKB_PAD);
#endif

- if (!size)
- return;
-
skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
rx_buffer->page_offset, size, truesize);

@@ -1224,63 +1190,47 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
#else
rx_buffer->page_offset += truesize;
#endif
+
+ /* We have pulled a buffer for use, so decrement pagecnt_bias */
+ rx_buffer->pagecnt_bias--;
}

/**
- * iavf_get_rx_buffer - Fetch Rx buffer and synchronize data for use
- * @rx_ring: rx descriptor ring to transact packets on
- * @size: size of buffer to add to skb
+ * iavf_sync_rx_buffer - Synchronize received data for use
+ * @dev: device used for DMA mapping
+ * @buf: Rx buffer containing the data
+ * @size: size of the received data
*
- * This function will pull an Rx buffer from the ring and synchronize it
- * for use by the CPU.
+ * This function will synchronize the Rx buffer for use by the CPU.
*/
-static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
- const unsigned int size)
+static void iavf_sync_rx_buffer(struct device *dev, struct iavf_rx_buffer *buf,
+ u32 size)
{
- struct iavf_rx_buffer *rx_buffer;
-
- rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
- prefetchw(rx_buffer->page);
- if (!size)
- return rx_buffer;
-
- /* we are reusing so sync this buffer for CPU use */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_buffer->dma,
- rx_buffer->page_offset,
- size,
+ dma_sync_single_range_for_cpu(dev, buf->dma, buf->page_offset, size,
DMA_FROM_DEVICE);
-
- /* We have pulled a buffer for use, so decrement pagecnt_bias */
- rx_buffer->pagecnt_bias--;
-
- return rx_buffer;
}

/**
* iavf_build_skb - Build skb around an existing buffer
- * @rx_ring: Rx descriptor ring to transact packets on
- * @rx_buffer: Rx buffer to pull data from
- * @size: size of buffer to add to skb
+ * @rx_buffer: Rx buffer with the data
+ * @size: size of the data
+ * @pg_size: size of the Rx page
*
* This function builds an skb around an existing Rx buffer, taking care
* to set up the skb correctly and avoid any memcpy overhead.
*/
-static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *rx_buffer,
- unsigned int size)
+static struct sk_buff *iavf_build_skb(struct iavf_rx_buffer *rx_buffer,
+ u32 size, u32 pg_size)
{
void *va;
#if (PAGE_SIZE < 8192)
- unsigned int truesize = iavf_rx_pg_size(rx_ring) / 2;
+ unsigned int truesize = pg_size / 2;
#else
unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
SKB_DATA_ALIGN(IAVF_SKB_PAD + size);
#endif
struct sk_buff *skb;

- if (!rx_buffer || !size)
- return NULL;
/* prefetch first cache line of first page */
va = page_address(rx_buffer->page) + rx_buffer->page_offset;
net_prefetch(va);
@@ -1301,36 +1251,33 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
rx_buffer->page_offset += truesize;
#endif

+ rx_buffer->pagecnt_bias--;
+
return skb;
}

/**
- * iavf_put_rx_buffer - Clean up used buffer and either recycle or free
+ * iavf_put_rx_buffer - Recycle or free used buffer
* @rx_ring: rx descriptor ring to transact packets on
- * @rx_buffer: rx buffer to pull data from
+ * @dev: device used for DMA mapping
+ * @rx_buffer: Rx buffer to handle
+ * @pg_size: Rx page size
*
- * This function will clean up the contents of the rx_buffer. It will
- * either recycle the buffer or unmap it and free the associated resources.
+ * Either recycle the buffer if possible or unmap and free the page.
*/
-static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
- struct iavf_rx_buffer *rx_buffer)
+static void iavf_put_rx_buffer(struct iavf_ring *rx_ring, struct device *dev,
+ struct iavf_rx_buffer *rx_buffer, u32 pg_size)
{
- if (!rx_buffer)
- return;
-
if (iavf_can_reuse_rx_page(rx_buffer)) {
- /* hand second half of page back to the ring */
- iavf_reuse_rx_page(rx_ring, rx_buffer);
rx_ring->rx_stats.page_reuse_count++;
- } else {
- /* we are not reusing the buffer so unmap it */
- dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
- iavf_rx_pg_size(rx_ring),
- DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
- __page_frag_cache_drain(rx_buffer->page,
- rx_buffer->pagecnt_bias);
+ return;
}

+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page_attrs(dev, rx_buffer->dma, pg_size,
+ DMA_FROM_DEVICE, IAVF_RX_DMA_ATTR);
+ __page_frag_cache_drain(rx_buffer->page, rx_buffer->pagecnt_bias);
+
/* clear contents of buffer_info */
rx_buffer->page = NULL;
}
@@ -1350,14 +1297,6 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
union iavf_rx_desc *rx_desc,
struct sk_buff *skb)
{
- u32 ntc = rx_ring->next_to_clean + 1;
-
- /* fetch, update, and store next to clean */
- ntc = (ntc < rx_ring->count) ? ntc : 0;
- rx_ring->next_to_clean = ntc;
-
- prefetch(IAVF_RX_DESC(rx_ring, ntc));
-
/* if we are the last buffer then there is nothing else to do */
#define IAVF_RXD_EOF BIT(IAVF_RX_DESC_STATUS_EOF_SHIFT)
if (likely(iavf_test_staterr(rx_desc, IAVF_RXD_EOF)))
@@ -1383,11 +1322,16 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
{
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+ const gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;
+ u32 to_refill = IAVF_DESC_UNUSED(rx_ring);
+ u32 pg_size = iavf_rx_pg_size(rx_ring);
struct sk_buff *skb = rx_ring->skb;
- u16 cleaned_count = IAVF_DESC_UNUSED(rx_ring);
- bool failure = false;
+ struct device *dev = rx_ring->dev;
+ u32 ntc = rx_ring->next_to_clean;
+ u32 ring_size = rx_ring->count;
+ u32 cleaned_count = 0;

- while (likely(total_rx_packets < (unsigned int)budget)) {
+ while (likely(cleaned_count < budget)) {
struct iavf_rx_buffer *rx_buffer;
union iavf_rx_desc *rx_desc;
unsigned int size;
@@ -1396,13 +1340,11 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
u64 qword;

/* return some buffers to hardware, one at a time is too slow */
- if (cleaned_count >= IAVF_RX_BUFFER_WRITE) {
- failure = failure ||
- iavf_alloc_rx_buffers(rx_ring, cleaned_count);
- cleaned_count = 0;
- }
+ if (to_refill >= IAVF_RX_BUFFER_WRITE)
+ to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill,
+ gfp);

- rx_desc = IAVF_RX_DESC(rx_ring, rx_ring->next_to_clean);
+ rx_desc = IAVF_RX_DESC(rx_ring, ntc);

/* status_error_len will always be zero for unused descriptors
* because it's cleared in cleanup, and overlaps with hdr_addr
@@ -1424,24 +1366,38 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;

iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
- rx_buffer = iavf_get_rx_buffer(rx_ring, size);
+ rx_buffer = &rx_ring->rx_bi[ntc];
+
+ /* Very rare, but possible case. The most common reason:
+ * the last fragment contained FCS only, which was then
+ * stripped by the HW.
+ */
+ if (unlikely(!size))
+ goto skip_data;
+
+ iavf_sync_rx_buffer(dev, rx_buffer, size);

/* retrieve a buffer from the ring */
if (skb)
- iavf_add_rx_frag(rx_ring, rx_buffer, skb, size);
+ iavf_add_rx_frag(skb, rx_buffer, size, pg_size);
else
- skb = iavf_build_skb(rx_ring, rx_buffer, size);
+ skb = iavf_build_skb(rx_buffer, size, pg_size);

/* exit if we failed to retrieve a buffer */
if (!skb) {
rx_ring->rx_stats.alloc_buff_failed++;
- if (rx_buffer && size)
- rx_buffer->pagecnt_bias++;
break;
}

- iavf_put_rx_buffer(rx_ring, rx_buffer);
+skip_data:
+ iavf_put_rx_buffer(rx_ring, dev, rx_buffer, pg_size);
+
cleaned_count++;
+ to_refill++;
+ if (unlikely(++ntc == ring_size))
+ ntc = 0;
+
+ prefetch(IAVF_RX_DESC(rx_ring, ntc));

if (iavf_is_non_eop(rx_ring, rx_desc, skb))
continue;
@@ -1488,8 +1444,18 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
total_rx_packets++;
}

+ rx_ring->next_to_clean = ntc;
rx_ring->skb = skb;

+ if (to_refill >= IAVF_RX_BUFFER_WRITE) {
+ to_refill = __iavf_alloc_rx_buffers(rx_ring, to_refill, gfp);
+ /* guarantee a trip back through this routine if there was
+ * a failure
+ */
+ if (unlikely(to_refill))
+ cleaned_count = budget;
+ }
+
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->stats.packets += total_rx_packets;
rx_ring->stats.bytes += total_rx_bytes;
@@ -1497,8 +1463,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
rx_ring->q_vector->rx.total_packets += total_rx_packets;
rx_ring->q_vector->rx.total_bytes += total_rx_bytes;

- /* guarantee a trip back through this routine if there was a failure */
- return failure ? budget : (int)total_rx_packets;
+ return cleaned_count;
}

static inline u32 iavf_buildreg_itr(const int type, u16 itr)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 234e189c1987..9c6661a6edf2 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -383,7 +383,6 @@ struct iavf_ring {
struct iavf_q_vector *q_vector; /* Backreference to associated vector */

struct rcu_head rcu; /* to avoid race on free */
- u16 next_to_alloc;
struct sk_buff *skb; /* When iavf_clean_rx_ring_irq() must
* return before it sees the EOP for
* the current packet, we save that skb
@@ -426,7 +425,7 @@ static inline unsigned int iavf_rx_pg_order(struct iavf_ring *ring)

#define iavf_rx_pg_size(_ring) (PAGE_SIZE << iavf_rx_pg_order(_ring))

-bool iavf_alloc_rx_buffers(struct iavf_ring *rxr, u16 cleaned_count);
+void iavf_alloc_rx_buffers(struct iavf_ring *rxr);
netdev_tx_t iavf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void iavf_clean_tx_ring(struct iavf_ring *tx_ring);
void iavf_clean_rx_ring(struct iavf_ring *rx_ring);
--
2.40.1


2023-05-17 08:56:36

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

So while this looks ok, Eric Dumazet had looked into some optimization
for this touching the core code, and promised me to come up with an
even better version a while ago. Eric, what's the state of your
optimizations for no-op DMA syncs?


2023-05-18 04:24:18

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

On Tue, 16 May 2023 18:18:36 +0200 Alexander Lobakin wrote:
> + /* Try to avoid calling no-op syncs */
> + pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
> + pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
> }
>
> if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
> @@ -323,6 +327,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>
> page_pool_set_dma_addr(page, dma);
>
> + if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
> + dma_need_sync(pool->p.dev, dma)) {
> + pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
> + pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
> + }

is it just me or does it feel cleaner to allocate a page at init,
and throw it into the cache, rather than adding a condition to a
fast(ish) path?

2023-05-18 04:24:47

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> Each driver is responsible for syncing buffers written by HW for CPU
> before accessing them. Almost each PP-enabled driver uses the same
> pattern, which could be shorthanded into a static inline to make driver
> code a little bit more compact.
> Introduce a couple such functions. The first one takes the actual size
> of the data written by HW and is the main one to be used on Rx. The
> second does the same, but only if the PP performs DMA synchronizations
> at all. The last one picks max_len from the PP params and is designed
> for more extreme cases when the size is unknown, but the buffer still
> needs to be synced.
> Also constify pointer arguments of page_pool_get_dma_dir() and
> page_pool_get_dma_addr() to give a bit more room for optimization,
> as both of them are read-only.

Very neat.

> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 8435013de06e..f740c50b661f 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -32,7 +32,7 @@
>
> #include <linux/mm.h> /* Needed by ptr_ring */
> #include <linux/ptr_ring.h>
> -#include <linux/dma-direction.h>
> +#include <linux/dma-mapping.h>

highly nit picky - but isn't dma-mapping.h pretty heavy?
And we include page_pool.h in skbuff.h. Not that it matters
today, but maybe one day we'll succeed putting skbuff.h
on a diet -- so perhaps it's better to put "inline helpers
with non-trivial dependencies" into a new header?

> #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
> * map/unmap

> +/**
> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> + * @pool: page_pool which this page belongs to
> + * @page: page to sync
> + * @dma_sync_size: size of the data written to the page
> + *
> + * Can be used as a shorthand to sync Rx pages before accessing them in the
> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> + */
> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> + const struct page *page,
> + u32 dma_sync_size)
> +{
> + dma_sync_single_range_for_cpu(pool->p.dev,
> + page_pool_get_dma_addr(page),
> + pool->p.offset, dma_sync_size,
> + page_pool_get_dma_dir(pool));

Likely a dumb question but why does this exist?
Is there a case where the "maybe" version is not safe?

> +}
> +
> +/**
> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
> + * @pool: page_pool which this page belongs to
> + * @page: page to sync
> + * @dma_sync_size: size of the data written to the page
> + *
> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
> + */
> +static inline void
> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
> + const struct page *page, u32 dma_sync_size)
> +{
> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> + page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
> +}

2023-05-18 04:30:35

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 10/11] libie: add per-queue Page Pool stats

On Tue, 16 May 2023 18:18:40 +0200 Alexander Lobakin wrote:
> +static inline void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
> +{
> +}

s/inline //

2023-05-18 04:32:11

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> +/**
> + * page_pool_dma_sync_for_cpu - sync full Rx page for CPU
...
> +page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,

kdoc name mismatch
--
pw-bot: cr

2023-05-18 05:13:45

by Yunsheng Lin

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

On 2023/5/18 12:08, Jakub Kicinski wrote:
> On Tue, 16 May 2023 18:18:36 +0200 Alexander Lobakin wrote:
>> + /* Try to avoid calling no-op syncs */
>> + pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
>> + pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
>> }
>>
>> if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
>> @@ -323,6 +327,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>>
>> page_pool_set_dma_addr(page, dma);
>>
>> + if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
>> + dma_need_sync(pool->p.dev, dma)) {
>> + pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
>> + pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
>> + }
>
> is it just me or does it feel cleaner to allocate a page at init,
> and throw it into the cache, rather than adding a condition to a
> fast(ish) path?

Is dma_need_sync() not reliable until a dma map is called?
Is there any reason why not just clear PP_FLAG_DMA_SYNC_DEV if
dma_need_sync() is false without introducing the PP_FLAG_DMA_MAYBE_SYNC
flag?

>
> .
>

2023-05-18 07:28:33

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

Hi all,

> On Wed, May 17, 2023 at 09:12:11PM -0700, Jakub Kicinski wrote:
> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
> > Each driver is responsible for syncing buffers written by HW for CPU
> > before accessing them. Almost each PP-enabled driver uses the same
> > pattern, which could be shorthanded into a static inline to make driver
> > code a little bit more compact.
> > Introduce a couple such functions. The first one takes the actual size
> > of the data written by HW and is the main one to be used on Rx. The
> > second does the same, but only if the PP performs DMA synchronizations
> > at all. The last one picks max_len from the PP params and is designed
> > for more extreme cases when the size is unknown, but the buffer still
> > needs to be synced.
> > Also constify pointer arguments of page_pool_get_dma_dir() and
> > page_pool_get_dma_addr() to give a bit more room for optimization,
> > as both of them are read-only.
>
> Very neat.
>
> > diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> > index 8435013de06e..f740c50b661f 100644
> > --- a/include/net/page_pool.h
> > +++ b/include/net/page_pool.h
> > @@ -32,7 +32,7 @@
> >
> > #include <linux/mm.h> /* Needed by ptr_ring */
> > #include <linux/ptr_ring.h>
> > -#include <linux/dma-direction.h>
> > +#include <linux/dma-mapping.h>
>
> highly nit picky - but isn't dma-mapping.h pretty heavy?
> And we include page_pool.h in skbuff.h. Not that it matters
> today, but maybe one day we'll succeed putting skbuff.h
> on a diet -- so perhaps it's better to put "inline helpers
> with non-trivial dependencies" into a new header?
>
> > #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
> > * map/unmap
>
> > +/**
> > + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> > + * @pool: page_pool which this page belongs to
> > + * @page: page to sync
> > + * @dma_sync_size: size of the data written to the page
> > + *
> > + * Can be used as a shorthand to sync Rx pages before accessing them in the
> > + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> > + */
> > +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> > + const struct page *page,
> > + u32 dma_sync_size)
> > +{
> > + dma_sync_single_range_for_cpu(pool->p.dev,
> > + page_pool_get_dma_addr(page),
> > + pool->p.offset, dma_sync_size,
> > + page_pool_get_dma_dir(pool));
>
> Likely a dumb question but why does this exist?
> Is there a case where the "maybe" version is not safe?
>

I got similar concerns here. Syncing for the CPU is currently a
responsibility of the driver. The reason for having an automated DMA sync
is that we know when we allocate buffers for the NIC to consume, so we can
safely sync them accordingly. I am fine having a page pool version for the
CPU sync, but do we really have to check the pp flags for that? IOW, if you
are at the point that you need to sync a buffer for the CPU, *someone*
already mapped it for you. Regardless of who mapped it, the sync is
identical.

> > +}
> > +
> > +/**
> > + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
> > + * @pool: page_pool which this page belongs to
> > + * @page: page to sync
> > + * @dma_sync_size: size of the data written to the page
> > + *
> > + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
> > + */
> > +static inline void
> > +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
> > + const struct page *page, u32 dma_sync_size)
> > +{
> > + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > + page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
> > +}

Thanks
/Ilias

2023-05-18 13:37:01

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

From: Christoph Hellwig <[email protected]>
Date: Wed, 17 May 2023 10:14:58 +0200

> So while this looks ok, Eric Dumazet had looked into some optimization
> for this touching the core code, and promised me to come up with an
> even better version a while ago. Eric, what's the state of your
> optimizations for no-op DMA syncs?
>

Hmm, his last proposals were about avoiding indirect calls when IOMMU is
on, but in fact we often don't need to synchronize anything -- DMA IOMMU
on x86_64 also usually doesn't synchronize anything, but you don't know
that prior to doing an indirect call, and dma_need_sync() won't help.
I was thinking of adding a .dma_need_sync() callback to the DMA ops, which
could also be called only once, on page allocation, like in this patch.
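
Roughly like this (a hypothetical sketch, nothing of the sort exists in
the tree; dev_is_dma_coherent() is only an illustration of what an
implementation might consult):

	/* a new dma_map_ops member:
	 *	bool (*dma_need_sync)(struct device *dev, dma_addr_t dma);
	 * with a trivial DMA-IOMMU implementation like
	 */
	static bool iommu_dma_need_sync(struct device *dev, dma_addr_t dma)
	{
		return !dev_is_dma_coherent(dev);
	}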

Also want to hear how it goes for Eric :)

Thanks,
Olek

2023-05-18 13:37:17

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

From: Yunsheng Lin <[email protected]>
Date: Thu, 18 May 2023 12:54:37 +0800

> On 2023/5/18 12:08, Jakub Kicinski wrote:
>> On Tue, 16 May 2023 18:18:36 +0200 Alexander Lobakin wrote:
>>> + /* Try to avoid calling no-op syncs */
>>> + pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
>>> + pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
>>> }
>>>
>>> if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
>>> @@ -323,6 +327,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>>>
>>> page_pool_set_dma_addr(page, dma);
>>>
>>> + if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
>>> + dma_need_sync(pool->p.dev, dma)) {
>>> + pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
>>> + pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
>>> + }
>>
>> is it just me or does it feel cleaner to allocate a page at init,
>> and throw it into the cache, rather than adding a condition to a
>> fast(ish) path?
>
> Is dma_need_sync() not reliable until a dma map is called?
> Is there any reason why not just clear PP_FLAG_DMA_SYNC_DEV if
> dma_need_sync() is false without introducing the PP_FLAG_DMA_MAYBE_SYNC
> flag?

We can't just clear the flag, because some drivers don't want PP to
synchronize DMA at all. Without a new flag, we can't distinguish between
those two cases. Example:

1) Driver doesn't set DMA_SYNC_DEV
2) We check for dma_need_sync() and it returns true
3) As a result, we set DMA_SYNC_DEV, although the driver does that on
its own
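
Condensed, the logic from the two hunks quoted above is (a sketch, not
the verbatim code):

	/* page_pool_init(): the driver asked PP to sync -> defer the
	 * decision until the first real mapping
	 */
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) {
		pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
		pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
	}

	/* page_pool_dma_map(): the first mapping tells whether syncs
	 * are real on this platform
	 */
	if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
	    dma_need_sync(pool->p.dev, dma)) {
		pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
		pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
	}

A pool whose driver never asked for syncs never gets MAYBE_SYNC either,
so both branches stay off for it.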

Thanks,
Olek

2023-05-18 14:30:39

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 10/11] libie: add per-queue Page Pool stats

From: Jakub Kicinski <[email protected]>
Date: Wed, 17 May 2023 21:19:13 -0700

> On Tue, 16 May 2023 18:18:40 +0200 Alexander Lobakin wrote:
>> +static inline void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
>> +{
>> +}
>
> s/inline //

I was a bit afraid the compiler could decide not to inline it for some
reason ._. But I can drop that with no issues.

Thanks,
Olek

2023-05-18 14:32:02

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

From: Jakub Kicinski <[email protected]>
Date: Wed, 17 May 2023 21:12:11 -0700

> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
>> Each driver is responsible for syncing buffers written by HW for CPU
>> before accessing them. Almost each PP-enabled driver uses the same
>> pattern, which could be shorthanded into a static inline to make driver
>> code a little bit more compact.
>> Introduce a couple such functions. The first one takes the actual size
>> of the data written by HW and is the main one to be used on Rx. The
>> second does the same, but only if the PP performs DMA synchronizations
>> at all. The last one picks max_len from the PP params and is designed
>> for more extreme cases when the size is unknown, but the buffer still
>> needs to be synced.
>> Also constify pointer arguments of page_pool_get_dma_dir() and
>> page_pool_get_dma_addr() to give a bit more room for optimization,
>> as both of them are read-only.
>
> Very neat.
>
>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>> index 8435013de06e..f740c50b661f 100644
>> --- a/include/net/page_pool.h
>> +++ b/include/net/page_pool.h
>> @@ -32,7 +32,7 @@
>>
>> #include <linux/mm.h> /* Needed by ptr_ring */
>> #include <linux/ptr_ring.h>
>> -#include <linux/dma-direction.h>
>> +#include <linux/dma-mapping.h>
>
> highly nit picky - but isn't dma-mapping.h pretty heavy?
> And we include page_pool.h in skbuff.h. Not that it matters
> today, but maybe one day we'll succeed putting skbuff.h
> on a diet -- so perhaps it's better to put "inline helpers
> with non-trivial dependencies" into a new header?

Maybe we could rather stop including page_pool.h in skbuff.h? It's
used there only for 1 external, which could be declared directly in
skbuff.h (see the sketch below). When Matteo was developing PP recycling,
he was storing mem_info in the skb as well, but then it was optimized and
we don't do that anymore.
It's annoying sometimes to see the whole kernel rebuilt each time I edit
page_pool.h :D In fact, only PP-enabled drivers and core code need it.
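
Something like this would be enough there (a sketch, assuming the only
user is still page_pool_return_skb_page() in napi_frag_unref()):

	/* in skbuff.h, instead of #include <net/page_pool.h> */
	bool page_pool_return_skb_page(struct page *page, bool napi_safe);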

>
>> #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
>> * map/unmap
>
>> +/**
>> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
>> + * @pool: page_pool which this page belongs to
>> + * @page: page to sync
>> + * @dma_sync_size: size of the data written to the page
>> + *
>> + * Can be used as a shorthand to sync Rx pages before accessing them in the
>> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
>> + */
>> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
>> + const struct page *page,
>> + u32 dma_sync_size)
>> +{
>> + dma_sync_single_range_for_cpu(pool->p.dev,
>> + page_pool_get_dma_addr(page),
>> + pool->p.offset, dma_sync_size,
>> + page_pool_get_dma_dir(pool));
>
> Likely a dumb question but why does this exist?
> Is there a case where the "maybe" version is not safe?

If the driver doesn't set the DMA_SYNC_DEV flag, then the "maybe" version
will never do anything. But we may want to use these helpers in such
drivers too?
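
E.g. a hypothetical Rx path of such a driver (field names assumed):

	/* the pool was created without DMA_SYNC_DEV, so the "maybe"
	 * version would be a no-op here -- sync unconditionally instead
	 */
	page_pool_dma_sync_for_cpu(rq->pp, page,
				   le16_to_cpu(rx_desc->datalen));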

>
>> +}
>> +
>> +/**
>> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
>> + * @pool: page_pool which this page belongs to
>> + * @page: page to sync
>> + * @dma_sync_size: size of the data written to the page
>> + *
>> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
>> + */
>> +static inline void
>> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
>> + const struct page *page, u32 dma_sync_size)
>> +{
>> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>> + page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
>> +}
Thanks,
Olek

2023-05-18 14:38:44

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

From: Ilias Apalodimas <[email protected]>
Date: Thu, 18 May 2023 10:03:41 +0300

> Hi all,
>
>> On Wed, May 17, 2023 at 09:12:11PM -0700, Jakub Kicinski wrote:
>> On Tue, 16 May 2023 18:18:37 +0200 Alexander Lobakin wrote:
>>> Each driver is responsible for syncing buffers written by HW for CPU
>>> before accessing them. Almost every PP-enabled driver uses the same
>>> pattern, which could be shorthanded into a static inline to make driver
>>> code a little bit more compact.

[...]

>>> + dma_sync_single_range_for_cpu(pool->p.dev,
>>> + page_pool_get_dma_addr(page),
>>> + pool->p.offset, dma_sync_size,
>>> + page_pool_get_dma_dir(pool));
>>
>> Likely a dumb question but why does this exist?
>> Is there a case where the "maybe" version is not safe?
>>
>
> I got similar concerns here. Syncing for the cpu is currently a
> responsibility for the driver. The reason for having an automated DMA sync
> is that we know when we allocate buffers for the NIC to consume so we can
> safely sync them accordingly. I am fine having a page pool version for the
> cpu sync, but do we really have to check the pp flags for that? IOW if you
> are at the point that you need to sync a buffer for the cpu *someone*
> already mapped it for you. Regardless of who mapped it, the sync is
> identical

The flag check in the "maybe" version is the continuation of the
shortcut from 6/11. If the flag is not set even though you asked PP to
do syncs, that means PP enabled the shortcut to avoid going through the
function call ladders for nothing. The ladder is basically the same for
sync-for-CPU as the one described in 6/11 for sync-for-dev.
I could place that check in the driver, but I feel it's better to have
it one level up to reduce boilerplate.
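
For the reference, the ladder in question (a sketch of the generic
dma-mapping path):

	page_pool_dma_sync_for_cpu()
	  -> dma_sync_single_range_for_cpu()
	    -> dma_sync_single_for_cpu()
	      -> dma_direct_sync_single_for_cpu() /* or ops->sync_single_for_cpu() */
	         /* ends up being a no-op on coherent platforms w/o swiotlb */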

>
>>> +}
>>> +
>>> +/**
>>> + * page_pool_dma_maybe_sync_for_cpu - sync Rx page for CPU if needed
>>> + * @pool: page_pool which this page belongs to
>>> + * @page: page to sync
>>> + * @dma_sync_size: size of the data written to the page
>>> + *
>>> + * Performs DMA sync for CPU, but only when required (swiotlb, IOMMU etc.).
>>> + */
>>> +static inline void
>>> +page_pool_dma_maybe_sync_for_cpu(const struct page_pool *pool,
>>> + const struct page *page, u32 dma_sync_size)
>>> +{
>>> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>>> + page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
>>> +}
>
> Thanks
> /Ilias

Thanks,
Olek

2023-05-18 14:38:47

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

From: Jakub Kicinski <[email protected]>
Date: Wed, 17 May 2023 21:08:04 -0700

> On Tue, 16 May 2023 18:18:36 +0200 Alexander Lobakin wrote:
>> + /* Try to avoid calling no-op syncs */
>> + pool->p.flags |= PP_FLAG_DMA_MAYBE_SYNC;
>> + pool->p.flags &= ~PP_FLAG_DMA_SYNC_DEV;
>> }
>>
>> if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
>> @@ -323,6 +327,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>>
>> page_pool_set_dma_addr(page, dma);
>>
>> + if ((pool->p.flags & PP_FLAG_DMA_MAYBE_SYNC) &&
>> + dma_need_sync(pool->p.dev, dma)) {
>> + pool->p.flags |= PP_FLAG_DMA_SYNC_DEV;
>> + pool->p.flags &= ~PP_FLAG_DMA_MAYBE_SYNC;
>> + }
>
> is it just me or does it feel cleaner to allocate a page at init,
> and throw it into the cache, rather than adding a condition to a
> fast(ish) path?

When recycling is on, it's not that fast -- new allocations occur
relatively rarely, and they're allocations anyway; one `if` doesn't
change anything there.

And it seems I didn't get the sentence regarding "allocate and throw"
:s This condition just disables the shortcut if any new page suddenly
requires real DMA syncs (and if it does, then the sync a line below will
take way more time than 1 more `if` anyway).

Thanks,
Olek

2023-05-18 15:06:09

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Thu, 18 May 2023 15:45:33 +0200 Alexander Lobakin wrote:
> >> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> >> index 8435013de06e..f740c50b661f 100644
> >> --- a/include/net/page_pool.h
> >> +++ b/include/net/page_pool.h
> >> @@ -32,7 +32,7 @@
> >>
> >> #include <linux/mm.h> /* Needed by ptr_ring */
> >> #include <linux/ptr_ring.h>
> >> -#include <linux/dma-direction.h>
> >> +#include <linux/dma-mapping.h>
> >
> > highly nit picky - but isn't dma-mapping.h pretty heavy?
> > And we include page_pool.h in skbuff.h. Not that it matters
> > today, but maybe one day we'll succeed putting skbuff.h
> > on a diet -- so perhaps it's better to put "inline helpers
> > with non-trivial dependencies" into a new header?
>
> Maybe we could rather stop including page_pool.h into skbuff.h? It's
> used there only for 1 external, which could be declared directly in
> skbuff.h. When Matteo was developing PP recycling, he was storing
> mem_info in skb as well, but then it was optimized and we don't do that
> anymore.
> It's sometimes annoying to see the whole kernel rebuilt each time I edit
> page_pool.h :D In fact, only PP-enabled drivers and core code need it.

Or maybe we can do both? I think that separating types, defines and
simple wrappers from helpers should be considered good code hygiene.

> >> #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
> >> * map/unmap
> >
> >> +/**
> >> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
> >> + * @pool: page_pool which this page belongs to
> >> + * @page: page to sync
> >> + * @dma_sync_size: size of the data written to the page
> >> + *
> >> + * Can be used as a shorthand to sync Rx pages before accessing them in the
> >> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
> >> + */
> >> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> >> + const struct page *page,
> >> + u32 dma_sync_size)
> >> +{
> >> + dma_sync_single_range_for_cpu(pool->p.dev,
> >> + page_pool_get_dma_addr(page),
> >> + pool->p.offset, dma_sync_size,
> >> + page_pool_get_dma_dir(pool));
> >
> > Likely a dumb question but why does this exist?
> > Is there a case where the "maybe" version is not safe?
>
> If the driver doesn't set the DMA_SYNC_DEV flag, then the "maybe" version
> will never do anything. But we may want to use these helpers in such
> drivers too?

Oh, I see, the polarity of the flag is awkward. Hm.
Maybe just rename things, drop the "maybe_" and prefix the non-checking
version with __ ? We expect drivers to call the version which checks the
flag mostly (AFAIU), so it should have the most obvious name.
Plus perhaps a sentence in the kdoc explaining why __ exists would be
good, if it wasn't obvious to me it may not be obvious to others..
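
Something like this, I mean (a sketch, assumed final names):

	/* checks PP_FLAG_DMA_SYNC_DEV first -- what drivers normally want */
	page_pool_dma_sync_for_cpu(pool, page, len);

	/* always syncs -- only for pools known to skip PP-side syncing */
	__page_pool_dma_sync_for_cpu(pool, page, len);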

2023-05-18 15:34:36

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 06/11] net: page_pool: avoid calling no-op externals when possible

On Thu, 18 May 2023 15:34:44 +0200 Alexander Lobakin wrote:
> And it seems I didn't get the sentence regarding "allocate and throw"
> :s This condition just disables the shortcut if any new page suddenly
> requires real DMA syncs (and if it does, then the sync a line below will
> take way more time than 1 more `if` anyway).

Right, I misread.

2023-05-18 16:09:14

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

From: Jakub Kicinski <[email protected]>
Date: Thu, 18 May 2023 07:56:43 -0700

> On Thu, 18 May 2023 15:45:33 +0200 Alexander Lobakin wrote:
>>>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>>>> index 8435013de06e..f740c50b661f 100644
>>>> --- a/include/net/page_pool.h
>>>> +++ b/include/net/page_pool.h
>>>> @@ -32,7 +32,7 @@
>>>>
>>>> #include <linux/mm.h> /* Needed by ptr_ring */
>>>> #include <linux/ptr_ring.h>
>>>> -#include <linux/dma-direction.h>
>>>> +#include <linux/dma-mapping.h>
>>>
>>> highly nit picky - but isn't dma-mapping.h pretty heavy?
>>> And we include page_pool.h in skbuff.h. Not that it matters
>>> today, but maybe one day we'll succeed putting skbuff.h
>>> on a diet -- so perhaps it's better to put "inline helpers
>>> with non-trivial dependencies" into a new header?
>>
>> Maybe we could rather stop including page_pool.h into skbuff.h? It's
>> used there only for 1 external, which could be declared directly in
>> skbuff.h. When Matteo was developing PP recycling, he was storing
>> mem_info in skb as well, but then it was optimized and we don't do that
>> anymore.
>> It's sometimes annoying to see the whole kernel rebuilt each time I edit
>> page_pool.h :D In fact, only PP-enabled drivers and core code need it.
>
> Or maybe we can do both? I think that separating types, defines and
> simple wrappers from helpers should be considered good code hygiene.

I'll definitely take a look, I also like the idea of minimalistic and
lightweight headers.
page_pool.h and page_pool_drv.h? :D

>
>>>> #define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
>>>> * map/unmap
>>>
>>>> +/**
>>>> + * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
>>>> + * @pool: page_pool which this page belongs to
>>>> + * @page: page to sync
>>>> + * @dma_sync_size: size of the data written to the page
>>>> + *
>>>> + * Can be used as a shorthand to sync Rx pages before accessing them in the
>>>> + * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
>>>> + */
>>>> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
>>>> + const struct page *page,
>>>> + u32 dma_sync_size)
>>>> +{
>>>> + dma_sync_single_range_for_cpu(pool->p.dev,
>>>> + page_pool_get_dma_addr(page),
>>>> + pool->p.offset, dma_sync_size,
>>>> + page_pool_get_dma_dir(pool));
>>>
>>> Likely a dumb question but why does this exist?
>>> Is there a case where the "maybe" version is not safe?
>>
>> If the driver doesn't set the DMA_SYNC_DEV flag, then the "maybe" version
>> will never do anything. But we may want to use these helpers in such
>> drivers too?
>
> Oh, I see, the polarity of the flag is awkward. Hm.
> Maybe just rename things, drop the "maybe_" and prefix the non-checking
> version with __ ? We expect drivers to call the version which checks the
> flag mostly (AFAIU), so it should have the most obvious name.
> Plus perhaps a sentence in the kdoc explaining why __ exists would be
> good, if it wasn't obvious to me it may not be obvious to others..

Ah, +, good point.

Thanks,
Olek

2023-05-18 21:01:16

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Thu, 18 May 2023 17:41:52 +0200 Alexander Lobakin wrote:
> > Or maybe we can do both? I think that separating types, defines and
> > simple wrappers from helpers should be considered good code hygiene.
>
> I'll definitely take a look, I also like the idea of minimalistic and
> lightweight headers.
> page_pool.h and page_pool_drv.h? :D

What I've been doing lately is split like this:

include/net/something.h (simply includes all other headers)
include/net/something/types.h (structs, defines, enums)
include/net/something/functions.h (inlines and function declarations)

If that's reasonable -- we should put the helpers under

include/net/page_pool/functions.h ?
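
E.g. (a sketch of the resulting layout):

	/* include/net/page_pool.h -- what source files include */
	#include <net/page_pool/types.h>
	#include <net/page_pool/functions.h>

so a header like skbuff.h could get away with page_pool/types.h alone
and never see dma-mapping.h.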

2023-05-19 14:13:41

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

From: Jakub Kicinski <[email protected]>
Date: Thu, 18 May 2023 13:36:27 -0700

> On Thu, 18 May 2023 17:41:52 +0200 Alexander Lobakin wrote:
>>> Or maybe we can do both? I think that separating types, defines and
>>> simple wrappers from helpers should be considered good code hygiene.
>>
>> I'll definitely take a look, I also like the idea of minimalistic and
>> lightweight headers.
>> page_pool.h and page_pool_drv.h? :D
>
> What I've been doing lately is split like this:
>
> include/net/something.h (simply includes all other headers)
> include/net/something/types.h (structs, defines, enums)
> include/net/something/functions.h (inlines and function declarations)
>
> If that's reasonable -- we should put the helpers under
>
> include/net/page_pool/functions.h ?

Hmm, all files that need something from page_pool.h usually need both
types and functions. Not sure we'd gain anything here. OTOH splitting
out just the sync-for-CPU inlines avoids including dma-mapping.h, and
currently only IAVF needs them. So my idea is:

- you need something from PP, but not sync-for-CPU -- the more
lightweight page_pool.h is for you;
- you need sync-for-CPU (or maybe something else with heavy deps in the
future) -- just include page_pool_drv.h.

I tried moving something else, but couldn't find anything that would
give any win. <linux/mm.h> and <linux/ptr_ring.h> are needed to define
`struct page_pool`, i.e. even being structured like in your example they
would've gone into pp/types.h =\
`struct ptr_ring` itself doesn't require any MM-related definitions, so
if we split it into ptr_ring/{types,functions}.h, we could probably
avoid a couple of includes :D
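
I.e., hypothetically:

	/* include/linux/ptr_ring/types.h: struct ptr_ring itself,
	 * no mm.h needed
	 */
	/* include/linux/ptr_ring/functions.h: the inlines which do
	 * pull in mm.h & friends
	 */

and then pp/types.h could include only ptr_ring/types.h.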

Thanks,
Olek

2023-05-19 20:55:14

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
> From: Jakub Kicinski <[email protected]>
> Date: Thu, 18 May 2023 13:36:27 -0700
> >> I'll definitely take a look, I also like the idea of minimalistic and
> >> lightweight headers.
> >> page_pool.h and page_pool_drv.h? :D
> >
> > What I've been doing lately is split like this:
> >
> > include/net/something.h (simply includes all other headers)
> > include/net/something/types.h (structs, defines, enums)
> > include/net/something/functions.h (inlines and function declarations)
> >
> > If that's reasonable -- we should put the helpers under
> >
> > include/net/page_pool/functions.h ?
>
> Hmm, all files that need something from page_pool.h usually need both
> types and functions. Not sure we'd gain anything here.

Ack, in the scheme above most places (source files) would include
something.h, the something/types.h is just for other headers.
something/functions.h is basically never included directly.

> OTOH splitting
> out just the sync-for-CPU inlines avoids including dma-mapping.h, and
> currently only IAVF needs them. So my idea is:
>
> - you need something from PP, but not sync-for-CPU -- the more
> lightweight page_pool.h is for you;
> - you need sync-for-CPU (or maybe something else with heavy deps in the
> future) -- just include page_pool_drv.h.

The idea makes sense in isolation, but I'm trying to figure out
a convention which would not require case-by-case discussions.

> I tried moving something else, but couldn't find anything that would
> give any win. <linux/mm.h> and <linux/ptr_ring.h> are needed to define
> `struct page_pool`, i.e. even being structured like in your example they
> would've gone into pp/types.h =\
> `struct ptr_ring` itself doesn't require any MM-related definitions, so
> would we split it into ptr_ring/{types,functions}.h, we could probably
> avoid a couple includes :D

Ack, not saying that we need to split now, it's just about the naming
(everyone's favorite topic).

I think that it's a touch weird to name the header _drv.h and then
include it in the core in multiple places (*cough* xdp_sock_drv.h).
Also if someone needs to add another "heavy" static inline for use by
the core, they will try to put it in page_pool.h rather than _drv.h...

I'd rather split the includes by the basic language-level contents,
first, then by the intended consumer, only if necessary. Language
level sorting requires less thinking :)

But none of this is important; if you don't wanna do it, just keep
the new helpers in page_pool.h (let's not do another _drv.h).

2023-05-22 14:14:46

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

From: Jakub Kicinski <[email protected]>
Date: Fri, 19 May 2023 13:45:45 -0700

> On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
>> From: Jakub Kicinski <[email protected]>
>> Date: Thu, 18 May 2023 13:36:27 -0700

[...]

> Ack, not saying that we need to split now, it's just about the naming
> (everyone's favorite topic).
>
> I think that it's a touch weird to name the header _drv.h and then
> include it in the core in multiple places (*cough* xdp_sock_drv.h).

Hahaha, I also thought of it :>

> Also if someone needs to add another "heavy" static inline for use by
> the core, they will try to put it in page_pool.h rather than _drv.h...
>
> I'd rather split the includes by the basic language-level contents,
> first, then by the intended consumer, only if necessary. Language
> level sorting requires less thinking :)
>
> But none of this is important; if you don't wanna do it, just keep
> the new helpers in page_pool.h (let's not do another _drv.h).

Ack, will just put them there. It doesn't get included by the whole kernel
via skbuff.h anymore (in v2), so new inlines won't hurt.

Thanks,
Olek

2023-05-22 15:13:39

by Paul Menzel

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 10/11] libie: add per-queue Page Pool stats

Dear Alexander,


Thank you for your patch.

Am 16.05.23 um 18:18 schrieb Alexander Lobakin:
> Expand the libie generic per-queue stats with the generic Page Pool
> stats provided by the API itself, when CONFIG_PAGE_POOL is enable.

enable*d*

> When it's not, there'll be no such fields in the stats structure, so
> no space wasted.
> They are also a bit special in terms of how they are obtained. One
> &page_pool accumulates statistics until it's destroyed obviously,
> which happens on ifdown. So, in order to not lose any statistics,
> get the stats and store in the queue container before destroying
> a pool. This container survives ifups/downs, so it basically stores
> the statistics accumulated since the very first pool was allocated
> on this queue. When it's needed to export the stats, first get the
> numbers from this container and then add the "live" numbers -- the
> ones that the current active pool returns. The result values will
> always represent the actual device-lifetime* stats.
> There's a cast from &page_pool_stats to `u64 *` in a couple functions,
> but they are guarded with stats asserts to make sure it's safe to do.
> FWIW it saves a lot of object code.
>
> Signed-off-by: Alexander Lobakin <[email protected]>
> ---
> drivers/net/ethernet/intel/libie/internal.h | 23 +++++++
> drivers/net/ethernet/intel/libie/rx.c | 20 ++++++
> drivers/net/ethernet/intel/libie/stats.c | 72 ++++++++++++++++++++-
> include/linux/net/intel/libie/rx.h | 4 ++
> include/linux/net/intel/libie/stats.h | 39 ++++++++++-
> 5 files changed, 155 insertions(+), 3 deletions(-)
> create mode 100644 drivers/net/ethernet/intel/libie/internal.h
>
> diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
> new file mode 100644
> index 000000000000..083398dc37c6
> --- /dev/null
> +++ b/drivers/net/ethernet/intel/libie/internal.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* libie internal declarations not to be used in drivers.
> + *
> + * Copyright(c) 2023 Intel Corporation.
> + */
> +
> +#ifndef __LIBIE_INTERNAL_H
> +#define __LIBIE_INTERNAL_H
> +
> +struct libie_rq_stats;
> +struct page_pool;
> +
> +#ifdef CONFIG_PAGE_POOL_STATS
> +void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
> + struct page_pool *pool);
> +#else
> +static inline void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
> + struct page_pool *pool)
> +{
> +}
> +#endif
> +
> +#endif /* __LIBIE_INTERNAL_H */
> diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
> index d68eab76593c..128f134aca6d 100644
> --- a/drivers/net/ethernet/intel/libie/rx.c
> +++ b/drivers/net/ethernet/intel/libie/rx.c
> @@ -3,6 +3,8 @@
>
> #include <linux/net/intel/libie/rx.h>
>
> +#include "internal.h"
> +
> /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
> * bitfield struct.
> */
> @@ -133,6 +135,24 @@ struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
> }
> EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
>
> +/**
> + * libie_rx_page_pool_destroy - destroy a &page_pool created by libie
> + * @pool: pool to destroy
> + * @stats: RQ stats from the ring (or %NULL to skip updating PP stats)
> + *
> + * As the stats usually have the same lifetime as the device, but PP is usually
> + * created/destroyed on ifup/ifdown, in order to not lose the stats accumulated
> + * during the last ifup, the PP stats need to be added to the driver stats
> + * container. Then the PP gets destroyed.
> + */
> +void libie_rx_page_pool_destroy(struct page_pool *pool,
> + struct libie_rq_stats *stats)
> +{
> + libie_rq_stats_sync_pp(stats, pool);
> + page_pool_destroy(pool);
> +}
> +EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_destroy, LIBIE);
> +
> MODULE_AUTHOR("Intel Corporation");
> MODULE_DESCRIPTION("Intel(R) Ethernet common library");
> MODULE_LICENSE("GPL");
> diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
> index 61456842a362..95bbb38c39e3 100644
> --- a/drivers/net/ethernet/intel/libie/stats.c
> +++ b/drivers/net/ethernet/intel/libie/stats.c
> @@ -4,6 +4,8 @@
> #include <linux/ethtool.h>
> #include <linux/net/intel/libie/stats.h>
>
> +#include "internal.h"
> +
> /* Rx per-queue stats */
>
> static const char * const libie_rq_stats_str[] = {
> @@ -14,6 +16,70 @@ static const char * const libie_rq_stats_str[] = {
>
> #define LIBIE_RQ_STATS_NUM ARRAY_SIZE(libie_rq_stats_str)
>
> +#ifdef CONFIG_PAGE_POOL_STATS
> +/**
> + * libie_rq_stats_get_pp - get the current stats from a &page_pool
> + * @sarr: local array to add stats to
> + * @pool: pool to get the stats from
> + *
> + * Adds the current "live" stats from an online PP to the stats read from
> + * the RQ container, so that the actual totals will be returned.
> + */
> +static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
> +{
> + struct page_pool_stats *pps;
> + /* Used only to calculate pos below */
> + struct libie_rq_stats tmp;
> + u32 pos;
> +
> + /* Validate the libie PP stats array can be cast <-> PP struct */
> + static_assert(sizeof(tmp.pp) == sizeof(*pps));
> +
> + if (!pool)
> + return;
> +
> + /* Position of the first Page Pool stats field */
> + pos = (u64_stats_t *)&tmp.pp - tmp.raw;
> + pps = (typeof(pps))&sarr[pos];
> +
> + page_pool_get_stats(pool, pps);
> +}
> +
> +/**
> + * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
> + * @stats: stats structure to update
> + * @pool: pool to read the stats
> + *
> + * Called by libie_rx_page_pool_destroy() to save the stats before destroying
> + * the pool.
> + */
> +void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
> + struct page_pool *pool)
> +{
> + u64_stats_t *qarr = (u64_stats_t *)&stats->pp;
> + struct page_pool_stats pps = { };
> + u64 *sarr = (u64 *)&pps;
> +
> + if (!stats)
> + return;
> +
> + page_pool_get_stats(pool, &pps);
> +
> + u64_stats_update_begin(&stats->syncp);
> +
> + for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
> + u64_stats_add(&qarr[i], sarr[i]);
> +
> + u64_stats_update_end(&stats->syncp);
> +}
> +#else
> +static inline void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
> +{
> +}
> +
> +/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
> +#endif
> +
> /**
> * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
> *
> @@ -41,8 +107,10 @@ EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
> * libie_rq_stats_get_data - get the RQ stats in Ethtool format
> * @data: reference to the cursor pointing to the output array
> * @stats: RQ stats container from the queue
> + * @pool: &page_pool from the queue (%NULL to ignore PP "live" stats)
> */
> -void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
> +void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
> + struct page_pool *pool)
> {
> u64 sarr[LIBIE_RQ_STATS_NUM];
> u32 start;
> @@ -54,6 +122,8 @@ void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
> sarr[i] = u64_stats_read(&stats->raw[i]);
> } while (u64_stats_fetch_retry(&stats->syncp, start));
>
> + libie_rq_stats_get_pp(sarr, pool);
> +
> for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
> (*data)[i] += sarr[i];
>
> diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
> index f6ba3b19b7e2..474ffd689001 100644
> --- a/include/linux/net/intel/libie/rx.h
> +++ b/include/linux/net/intel/libie/rx.h
> @@ -160,7 +160,11 @@ static inline void libie_skb_set_hash(struct sk_buff *skb, u32 hash,
> /* Maximum frame size minus LL overhead */
> #define LIBIE_MAX_MTU (LIBIE_MAX_RX_FRM_LEN - LIBIE_RX_LL_LEN)
>
> +struct libie_rq_stats;
> +
> struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
> u32 size);
> +void libie_rx_page_pool_destroy(struct page_pool *pool,
> + struct libie_rq_stats *stats);
>
> #endif /* __LIBIE_RX_H */
> diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
> index dbbc98bbd3a7..23ca0079a905 100644
> --- a/include/linux/net/intel/libie/stats.h
> +++ b/include/linux/net/intel/libie/stats.h
> @@ -49,6 +49,17 @@
> * fragments: number of processed descriptors carrying only a fragment
> * alloc_page_fail: number of Rx page allocation fails
> * build_skb_fail: number of build_skb() fails
> + * pp_alloc_fast: pages taken from the cache or ring
> + * pp_alloc_slow: actual page allocations
> + * pp_alloc_slow_ho: non-order-0 page allocations
> + * pp_alloc_empty: number of times the pool was empty
> + * pp_alloc_refill: number of cache refills
> + * pp_alloc_waive: NUMA node mismatches during recycling
> + * pp_recycle_cached: direct recyclings into the cache
> + * pp_recycle_cache_full: number of times the cache was full
> + * pp_recycle_ring: recyclings into the ring
> + * pp_recycle_ring_full: number of times the ring was full
> + * pp_recycle_released_ref: pages released due to elevated refcnt
> */
>
> #define DECLARE_LIBIE_RQ_NAPI_STATS(act) \
> @@ -60,9 +71,29 @@
> act(alloc_page_fail) \
> act(build_skb_fail)
>
> +#ifdef CONFIG_PAGE_POOL_STATS
> +#define DECLARE_LIBIE_RQ_PP_STATS(act) \
> + act(pp_alloc_fast) \
> + act(pp_alloc_slow) \
> + act(pp_alloc_slow_ho) \
> + act(pp_alloc_empty) \
> + act(pp_alloc_refill) \
> + act(pp_alloc_waive) \
> + act(pp_recycle_cached) \
> + act(pp_recycle_cache_full) \
> + act(pp_recycle_ring) \
> + act(pp_recycle_ring_full) \
> + act(pp_recycle_released_ref)
> +#else
> +#define DECLARE_LIBIE_RQ_PP_STATS(act)
> +#endif
> +
> #define DECLARE_LIBIE_RQ_STATS(act) \
> DECLARE_LIBIE_RQ_NAPI_STATS(act) \
> - DECLARE_LIBIE_RQ_FAIL_STATS(act)
> + DECLARE_LIBIE_RQ_FAIL_STATS(act) \
> + DECLARE_LIBIE_RQ_PP_STATS(act)
> +
> +struct page_pool;
>
> struct libie_rq_stats {
> struct u64_stats_sync syncp;
> @@ -72,6 +103,9 @@ struct libie_rq_stats {
> #define act(s) u64_stats_t s;
> DECLARE_LIBIE_RQ_NAPI_STATS(act);
> DECLARE_LIBIE_RQ_FAIL_STATS(act);
> + struct_group(pp,
> + DECLARE_LIBIE_RQ_PP_STATS(act);
> + );
> #undef act
> };
> DECLARE_FLEX_ARRAY(u64_stats_t, raw);
> @@ -110,7 +144,8 @@ libie_rq_napi_stats_add(struct libie_rq_stats *qs,
>
> u32 libie_rq_stats_get_sset_count(void);
> void libie_rq_stats_get_strings(u8 **data, u32 qid);
> -void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
> +void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
> + struct page_pool *pool);
>
> /* Tx per-queue stats:
> * packets: packets sent from this queue

Reviewed-by: Paul Menzel <[email protected]>


Kind regards,

Paul

2023-05-22 15:44:53

by Magnus Karlsson

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 07/11] net: page_pool: add DMA-sync-for-CPU inline helpers

On Mon, 22 May 2023 at 15:50, Alexander Lobakin
<[email protected]> wrote:
>
> From: Jakub Kicinski <[email protected]>
> Date: Fri, 19 May 2023 13:45:45 -0700
>
> > On Fri, 19 May 2023 15:56:40 +0200 Alexander Lobakin wrote:
> >> From: Jakub Kicinski <[email protected]>
> >> Date: Thu, 18 May 2023 13:36:27 -0700
>
> [...]
>
> > Ack, not saying that we need to split now, it's just about the naming
> > (everyone's favorite topic).
> >
> > I think that it's a touch weird to name the header _drv.h and then
> > include it in the core in multiple places (*cough* xdp_sock_drv.h).
>
> Hahaha, I also thought of it :>

Haha! Point taken. Will clear it up.

> > Also if someone needs to add another "heavy" static inline for use by
> > the core, they will try to put it in page_pool.h rather than _drv.h...
> >
> > I'd rather split the includes by the basic language-level contents,
> > first, then by the intended consumer, only if necessary. Language
> > level sorting requires less thinking :)
> >
> > But none of this is important; if you don't wanna do it, just keep
> > the new helpers in page_pool.h (let's not do another _drv.h).
>
> Ack, will just put them there. It doesn't get included by the whole kernel
> via skbuff.h anymore (in v2), so new inlines won't hurt.
>
> Thanks,
> Olek

2023-05-22 15:54:41

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [Intel-wired-lan] [PATCH net-next 10/11] libie: add per-queue Page Pool stats

From: Paul Menzel <[email protected]>
Date: Mon, 22 May 2023 17:05:28 +0200

> Dear Alexander,
>
>
> Thank you for your patch.
>
> Am 16.05.23 um 18:18 schrieb Alexander Lobakin:
>> Expand the libie generic per-queue stats with the generic Page Pool
>> stats provided by the API itself, when CONFIG_PAGE_POOL is enable.
>
> enable*d*

Oof, nice catch, thanks! I rely on codespell and checkpatch too much,
but from a standalone-word-spelling PoV everything is fine here :s :D

>
>> When it's not, there'll be no such fields in the stats structure, so
>> no space wasted.
>> They are also a bit special in terms of how they are obtained. One
>> &page_pool accumulates statistics until it's destroyed obviously,
>> which happens on ifdown. So, in order to not lose any statistics,
>> get the stats and store in the queue container before destroying
>> a pool. This container survives ifups/downs, so it basically stores
>> the statistics accumulated since the very first pool was allocated
>> on this queue. When it's needed to export the stats, first get the
>> numbers from this container and then add the "live" numbers -- the
>> ones that the current active pool returns. The result values will
>> always represent the actual device-lifetime* stats.
>> There's a cast from &page_pool_stats to `u64 *` in a couple functions,
>> but they are guarded with stats asserts to make sure it's safe to do.
>> FWIW it saves a lot of object code.

[...]

> Reviewed-by: Paul Menzel <[email protected]>
>
>
> Kind regards,
>
> Paul

Thanks,
Olek

2023-05-23 22:55:32

by David Christensen

[permalink] [raw]
Subject: Re: [PATCH net-next 08/11] iavf: switch to Page Pool



On 5/16/23 9:18 AM, Alexander Lobakin wrote:
> Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
> no custom recycling logics and one whole page per frame, it can easily
> be switched to using Page Pool API instead.

Any plans to add page pool fragmentation support (i.e.
PP_FLAG_PAGE_FRAG) in the future to better support architectures with
larger page sizes such as 64KB on ppc64le?

Dave

2023-05-25 11:15:20

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 08/11] iavf: switch to Page Pool

From: David Christensen <[email protected]>
Date: Tue, 23 May 2023 15:42:28 -0700

>
>
> On 5/16/23 9:18 AM, Alexander Lobakin wrote:
>> Now that the IAVF driver simply uses dev_alloc_page() + free_page() with
>> no custom recycling logics and one whole page per frame, it can easily
>> be switched to using Page Pool API instead.
>
> Any plans to add page pool fragmentation support (i.e.
> PP_FLAG_PAGE_FRAG) in the future to better support architectures with
> larger page sizes such as 64KB on ppc64le?

Currently no, we dropped page fragmentation due to the complexity and
restrictions it brings for no benefit on x86_64. But I remember that
pages > 4 KB exist (I have a couple of MIPS boards I play with
sometimes, where the page size is set to 16 KB -- but they still always
use 1 page per frame).
By "better support" you mean reducing memory usage or something else?

>
> Dave

Thanks,
Olek

2023-05-31 20:31:07

by David Christensen

[permalink] [raw]
Subject: Re: [PATCH net-next 08/11] iavf: switch to Page Pool



On 5/25/23 4:08 AM, Alexander Lobakin wrote:
>> Any plans to add page pool fragmentation support (i.e.
>> PP_FLAG_PAGE_FRAG) in the future to better support architectures with
>> larger page sizes such as 64KB on ppc64le?
>
> Currently no, we dropped page fragmentation due to the complexity and
> restrictions it brings for no benefit on x86_64. But I remember that
> pages > 4 KB exist (I have a couple of MIPS boards I play with
> sometimes, where the page size is set to 16 KB -- but they still always
> use 1 page per frame).
> By "better support" you mean reducing memory usage or something else?

Yes, reducing memory waste. Current generation P10 systems default to
quad-port, 10Gb copper i40e NICs. When you combine a large number of
CPUs, and therefore a large number of RX queues, with a 64KB page
allocation per packet, memory usage can balloon very quickly as you add
additional ports.
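
(A rough back-of-the-envelope with assumed ring sizes: 512 descriptors
per ring x 64 queues x 64KB per page is already ~2GB of Rx buffers for
a single port, so ~8GB for one quad-port adapter.)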

Would you be open to patches to address this further down the road as
your refactoring effort gets closer to completion?

Dave

2023-06-02 13:38:36

by Alexander Lobakin

[permalink] [raw]
Subject: Re: [PATCH net-next 08/11] iavf: switch to Page Pool

From: David Christensen <[email protected]>
Date: Wed, 31 May 2023 13:18:59 -0700

>
>
> On 5/25/23 4:08 AM, Alexander Lobakin wrote:
>>> Any plans to add page pool fragmentation support (i.e.
>>> PP_FLAG_PAGE_FRAG) in the future to better support architectures with
>>> larger page sizes such as 64KB on ppc64le?
>>
>> Currently no, we dropped page fragmentation due to the complexity and
>> restrictions it brings for no benefit on x86_64. But I remember that
>> pages > 4 KB exist (I have a couple of MIPS boards I play with
>> sometimes, where the page size is set to 16 KB -- but they still always
>> use 1 page per frame).
>> By "better support" you mean reducing memory usage or something else?
>
> Yes, reducing memory waste. Current generation P10 systems default to
> quad-port, 10Gb copper i40e NICs. When you combine a large number of
> CPUs, and therefore a large number of RX queues, with a 64KB page
> allocation per packet, memory usage can balloon very quickly as you add
> additional ports.

Yeah, I got it. Unfortunately, page split adds a bunch of overhead for
no benefit on 4k systems. There's a small series here on netdev which
tries to combine frag and non-frag allocations in Page Pool, so that
there will be no overhead on 4k systems and no memory waste on 8k+ ones.
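
Roughly, the direction there is a single allocation helper which frags
the page on bigger-page systems and hands out whole pages when
PAGE_SIZE == 4k, something like (a sketch, assumed names):

	page = page_pool_alloc(pool, &offset, &size, gfp);

so drivers wouldn't need to pick one of the two models.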

>
> Would you be open to patches to address this further down the road as
> your refactoring effort gets closer to completion?

Let's see how the abovementioned effort goes. I feel like a generic
solution would be better than trying to handle that per-driver.

>
> Dave

Thanks,
Olek