2023-12-08 00:53:15

by Mina Almasry

Subject: [net-next v1 00/16] Device Memory TCP

Major changes in v1:
--------------------

1. Implemented MVP queue API ndos to remove the userspace-visible
driver reset.

2. Fixed issues in the napi_pp_put_page() devmem frag unref path.

3. Removed RFC tag.

Addressed many smaller comments across all the patches (each patch has an
individual change log).

Full tree including the rest of the GVE driver changes:
https://github.com/mina/linux/commits/tcpdevmem-v1

Cc: Yunsheng Lin <[email protected]>
Cc: Shailend Chand <[email protected]>
Cc: Harshitha Ramamurthy <[email protected]>

Changes in RFC v3:
------------------

1. Pulled in the memory-provider dependency from Jakub's RFC[1] to make the
series reviewable and mergeable.

2. Implemented multi-rx-queue binding which was a todo in v2.

3. Fixed cmsg handling.

The sticking point in RFC v2[2] was the device reset required to refill
the device rx-queues after the dmabuf bind/unbind. The suggested solution,
as I understand it, is a subset of (or similar to) the per-queue management
ops Jakub proposed:

https://lore.kernel.org/netdev/[email protected]/

This is not addressed in this revision, because:

1. This point was discussed at netconf & netdev and there is openness to
using the current approach of requiring a device reset.

2. Implementing individual queue resetting seems to be difficult for my
test bed with GVE. My prototype to test this ran into issues with the
rx-queues not coming back up properly if reset individually. At the
moment I'm unsure if it's a mistake in the POC or a genuine issue in
the virtualization stack behind GVE, which currently doesn't test
individual rx-queue restart.

3. Our use cases are not bothered by requiring a device reset to refill
the buffer queues, and we'd like to support NICs that run into this
limitation when resetting individual queues.

My thought is that drivers that have trouble with per-queue configs can
use the support in this series, while drivers that support new netdev
ops to reset individual queues can automatically reset the queue as
part of the dma-buf bind/unbind.

The same approach with device resets is presented again for consideration
with other sticking points addressed.

Only the rx devmem path is proposed for merge in this revision. For a
snapshot of my entire tree, which includes the GVE POC page pool support &
device memory support, see:

https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3

[1] https://lore.kernel.org/netdev/[email protected]/T/
[2] https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4pSOD1Jw@mail.gmail.com/T/

Cc: Shakeel Butt <[email protected]>
Cc: Jeroen de Borst <[email protected]>
Cc: Praveen Kaligineedi <[email protected]>

Changes in RFC v2:
------------------

The sticking point in RFC v1[1] was the dma-buf pages approach we used to
deliver the device memory to the TCP stack. RFC v2 is a proof-of-concept
that attempts to resolve this by implementing scatterlist support in the
networking stack, such that we can import the dma-buf scatterlist
directly. This is the approach proposed at a high level here[2].

Detailed changes:
1. Replaced dma-buf pages approach with importing scatterlist into the
page pool.
2. Replaced the dma-buf-pages-centric API with a netlink API.
3. Removed the TX path implementation - there is no issue with
implementing the TX path with scatterlist approach, but leaving
out the TX path makes it easier to review.
4. Functionality is tested with this proposal, but I have not conducted
perf testing yet. I'm not sure there are regressions, but I removed
perf claims from the cover letter until they can be re-confirmed.
5. Added Signed-off-by tags for contributors to the implementation.
6. Fixed some bugs with the RX path since RFC v1.

Any feedback is welcome, but the biggest pending questions needing
feedback, IMO, are:

1. Feedback on the scatterlist-based approach in general.
2. Netlink API (Patch 1 & 2).
3. Approach to handle all the drivers that expect to receive pages from
the page pool (Patch 6).

[1] https://lore.kernel.org/netdev/[email protected]/T/
[2] https://lore.kernel.org/netdev/CAHS8izPm6XRS54LdCDZVd0C75tA1zHSu6jLVO8nzTLXCc=H7Nw@mail.gmail.com/

----------------------

* TL;DR:

Device memory TCP (devmem TCP) is a proposal for transferring data to and/or
from device memory efficiently, without bouncing the data to a host memory
buffer.

* Problem:

A large number of data transfers have device memory as the source and/or
destination. Accelerators have drastically increased the volume of such
transfers. Some examples include:
- ML accelerators transferring large amounts of training data from storage into
GPU/TPU memory. In some cases ML training setup time can be as long as 50% of
TPU compute time; improving data transfer throughput & efficiency can help
improve GPU/TPU utilization.

- Distributed training, where ML accelerators, such as GPUs on different hosts,
exchange data among them.

- Distributed raw block storage applications transfer large amounts of data with
remote SSDs; much of this data does not require host processing.

Today, the majority of Device-to-Device data transfers over the network are
implemented as the following low-level operations: Device-to-Host copy,
Host-to-Host network transfer, and Host-to-Device copy.

The implementation is suboptimal, especially for bulk data transfers, and can
put significant strain on system resources, such as host memory bandwidth,
PCIe bandwidth, etc. One important reason behind the current state is the
kernel's lack of semantics to express device-to-network transfers.

* Proposal:

In this patch series we attempt to optimize this use case by implementing
socket APIs that enable the user to:

1. send device memory across the network directly, and
2. receive incoming network packets directly into device memory.

Packet _payloads_ go directly from the NIC to device memory for receive and from
device memory to NIC for transmit.
Packet _headers_ go to/from host memory and are processed by the TCP/IP stack
normally. The NIC _must_ support header split to achieve this.

Advantages:

- Alleviate host memory bandwidth pressure, compared to existing
network-transfer + device-copy semantics.

- Alleviate PCIe BW pressure, by limiting data transfer to the lowest level
of the PCIe tree, compared to the traditional path which sends data through the
root complex.

* Patch overview:

** Part 1: netlink API

Gives the user the ability to bind a dma-buf to an RX queue.

** Part 2: scatterlist support

Currently the standard for device memory sharing is DMABUF, which doesn't
generate struct pages. On the other hand, the networking stack (skbs, drivers,
and the page pool) operates on pages. We have 2 options:

1. Generate struct pages for dmabuf device memory, or,
2. Modify the networking stack to process scatterlist.

Approach #1 was attempted in RFC v1. RFC v2 implements approach #2.

** Part 3: page pool support

We piggyback on the page pool memory providers proposal:
https://github.com/kuba-moo/linux/tree/pp-providers

It allows the page pool to define a memory provider that handles page
allocation and freeing. It helps abstract most of the device memory
TCP changes away from the driver.
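
To make the hook surface concrete, a bare-bones provider would look
something like the sketch below. Only the shape of the ops table comes from
the hooks patch; the mp_sketch_* names and stub bodies are illustrative,
not code from this series (the dmabuf devmem memory provider later in the
series is the real user):

static int mp_sketch_init(struct page_pool *pool)
{
	/* Validate params and take a ref on the backing memory, typically
	 * stashed in pool->mp_priv by whoever created the pool.
	 */
	return 0;
}

static void mp_sketch_destroy(struct page_pool *pool)
{
	/* Drop the refs taken in ->init(). */
}

static struct page *mp_sketch_alloc_pages(struct page_pool *pool, gfp_t gfp)
{
	/* Hand out provider-owned memory instead of buddy pages. */
	return NULL;
}

static bool mp_sketch_release_page(struct page_pool *pool, struct page *page)
{
	/* Return false to keep the "ownership" ref with the provider. */
	return false;
}

static const struct memory_provider_ops mp_sketch_ops = {
	.init		= mp_sketch_init,
	.destroy	= mp_sketch_destroy,
	.alloc_pages	= mp_sketch_alloc_pages,
	.release_page	= mp_sketch_release_page,
};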

** Part 4: support for unreadable skb frags

Page pool iovs are not accessible by the host; we implement changes
throughout the networking stack to correctly handle skbs with unreadable
frags.

** Part 5: recvmsg() APIs

We define APIs for the user to send and receive device memory.
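
For reviewers, a rough sketch of the receive side from userspace follows.
Only SCM_DEVMEM_LINEAR/SCM_DEVMEM_DMABUF, dmabuf_id and SO_DEVMEM_DONTNEED
appear in the hunks quoted in this series; the MSG_SOCK_DEVMEM recvmsg flag
and the frag_offset/frag_size/frag_token field names are assumptions here,
see the devmem.rst documentation patch for the authoritative details:

#include <sys/socket.h>

/* Sketch only. Assumes the patched uapi headers are in scope for
 * struct dmabuf_cmsg, SCM_DEVMEM_DMABUF/SCM_DEVMEM_LINEAR and
 * MSG_SOCK_DEVMEM; field names other than dmabuf_id are assumptions.
 */
static void recv_devmem_once(int fd)
{
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
	char linear[4096];	/* payload that fell back to host memory */
	struct iovec iov = { .iov_base = linear, .iov_len = sizeof(linear) };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = ctrl,
		.msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;

	if (recvmsg(fd, &msg, MSG_SOCK_DEVMEM) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

		if (cm->cmsg_level != SOL_SOCKET)
			continue;

		if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
			/* Payload lives in the dma-buf identified by
			 * dc->dmabuf_id, at dc->frag_offset, dc->frag_size
			 * bytes long. Consume it via the device, then return
			 * dc->frag_token with SO_DEVMEM_DONTNEED (patch 14).
			 */
		} else if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
			/* This chunk landed in the iov above instead. */
		}
	}
}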

Not included with this series is the GVE devmem TCP support, just to
simplify the review. Code is available here if desired:
https://github.com/mina/linux/tree/tcpdevmem

This series is built on top of net-next with Jakub's pp-providers changes
cherry-picked.

* NIC dependencies:

1. (strict) Devmem TCP requires the NIC to support header split, i.e. the
capability to split incoming packets into a header + payload and to put
each into a separate buffer. Devmem TCP works by using device memory
for the packet payload, and host memory for the packet headers.

2. (optional) Devmem TCP works better with flow steering support & RSS support,
i.e. the NIC's ability to steer flows into certain rx queues. This allows the
sysadmin to enable devmem TCP on a subset of the rx queues, and steer
devmem TCP traffic onto these queues and non-devmem TCP traffic elsewhere.

The NIC I have access to with these properties is the GVE with DQO support
running in Google Cloud, but any NIC that supports these features would suffice.
I may be able to help reviewers bring up devmem TCP on their NICs.

* Testing:

The series includes a udmabuf kselftest that shows a simple use case of
devmem TCP and validates the entire data path end to end without
a dependency on a specific dmabuf provider.
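
The dmabuf itself comes from udmabuf, which wraps ordinary host memory in a
dma-buf. Roughly, the setup side looks like the sketch below (stock udmabuf
ioctl usage, not code from this series; error handling abbreviated):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Sketch: back a dma-buf with plain host memory via /dev/udmabuf, the way
 * the ncdevmem selftest avoids depending on a GPU/TPU dmabuf exporter.
 * size must be page-aligned.
 */
static int create_udmabuf(size_t size)
{
	struct udmabuf_create create = { 0 };
	int devfd, memfd, buf = -1;

	devfd = open("/dev/udmabuf", O_RDWR);
	memfd = memfd_create("devmem-test", MFD_ALLOW_SEALING);
	if (devfd < 0 || memfd < 0)
		goto out;

	/* udmabuf requires the memfd to be sealed against shrinking. */
	if (ftruncate(memfd, size) ||
	    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK))
		goto out;

	create.memfd = memfd;
	create.offset = 0;
	create.size = size;
	buf = ioctl(devfd, UDMABUF_CREATE, &create);	/* dma-buf fd */

out:
	if (memfd >= 0)
		close(memfd);
	if (devfd >= 0)
		close(devfd);
	return buf;	/* pass this fd to the netlink bind API */
}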

** Test Setup

Kernel: net-next with this RFC and memory provider API cherry-picked
locally.

Hardware: Google Cloud A3 VMs.

NIC: GVE with header split & RSS & flow steering support.

Jakub Kicinski (2):
net: page_pool: factor out releasing DMA from releasing the page
net: page_pool: create hooks for custom page providers

Mina Almasry (14):
queue_api: define queue api
gve: implement queue api
net: netdev netlink api to bind dma-buf to a net device
netdev: support binding dma-buf to netdevice
netdev: netdevice devmem allocator
memory-provider: dmabuf devmem memory provider
page_pool: device memory support
page_pool: don't release iov on elevated refcount
net: support non paged skb frags
net: add support for skbs with unreadable frags
tcp: RX path for devmem TCP
net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
net: add devmem TCP documentation
selftests: add ncdevmem, netcat for devmem TCP

Documentation/netlink/specs/netdev.yaml | 52 ++
Documentation/networking/devmem.rst | 270 ++++++++++
drivers/net/ethernet/google/gve/gve_adminq.c | 6 +-
drivers/net/ethernet/google/gve/gve_adminq.h | 3 +
drivers/net/ethernet/google/gve/gve_dqo.h | 2 +
drivers/net/ethernet/google/gve/gve_main.c | 286 +++++++++++
drivers/net/ethernet/google/gve/gve_rx_dqo.c | 5 +-
include/linux/netdevice.h | 24 +
include/linux/skbuff.h | 56 ++-
include/linux/socket.h | 1 +
include/net/devmem.h | 109 +++++
include/net/netdev_rx_queue.h | 1 +
include/net/page_pool/helpers.h | 162 +++++-
include/net/page_pool/types.h | 48 ++
include/net/sock.h | 2 +
include/net/tcp.h | 5 +-
include/uapi/asm-generic/socket.h | 6 +
include/uapi/linux/netdev.h | 19 +
include/uapi/linux/uio.h | 14 +
net/core/datagram.c | 6 +
net/core/dev.c | 314 +++++++++++-
net/core/gro.c | 7 +-
net/core/netdev-genl-gen.c | 19 +
net/core/netdev-genl-gen.h | 2 +
net/core/netdev-genl.c | 124 +++++
net/core/page_pool.c | 239 +++++++--
net/core/skbuff.c | 108 +++-
net/core/sock.c | 38 ++
net/ipv4/tcp.c | 196 +++++++-
net/ipv4/tcp_input.c | 13 +-
net/ipv4/tcp_ipv4.c | 8 +
net/ipv4/tcp_output.c | 5 +-
net/packet/af_packet.c | 4 +-
tools/include/uapi/linux/netdev.h | 19 +
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 489 +++++++++++++++++++
37 files changed, 2585 insertions(+), 83 deletions(-)
create mode 100644 Documentation/networking/devmem.rst
create mode 100644 include/net/devmem.h
create mode 100644 tools/testing/selftests/net/ncdevmem.c

--
2.43.0.472.g3155946c3a-goog


2023-12-08 00:53:24

by Mina Almasry

Subject: [net-next v1 02/16] net: page_pool: create hooks for custom page providers

From: Jakub Kicinski <[email protected]>

The page providers which try to reuse the same pages will
need to hold onto the ref, even if the page gets released from
the pool - as in, releasing the page from the pp just transfers
the "ownership" reference from the pp to the provider, and the
provider will wait for other references to be gone before feeding this
page back into the pool.

Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/[email protected]/T/

I take no credit for the idea or implementation; I only added minor
edits to make this workable with device memory TCP, and removed some
hacky test code. This is a critical dependency of device memory TCP
and thus I'm pulling it into this series to make it reviewable and
mergeable.

RFC v3 -> v1
- Removed unused mem_provider. (Yunsheng).
- Replaced memory_provider & mp_priv with netdev_rx_queue (Jakub).

---
include/net/page_pool/types.h | 12 ++++++++++
net/core/page_pool.c | 43 +++++++++++++++++++++++++++++++----
2 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index ac286ea8ce2d..0e9fa79a5ef1 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -51,6 +51,7 @@ struct pp_alloc_cache {
* @dev: device, for DMA pre-mapping purposes
* @netdev: netdev this pool will serve (leave as NULL if none or multiple)
* @napi: NAPI which is the sole consumer of pages, otherwise NULL
+ * @queue: struct netdev_rx_queue this page_pool is being created for.
* @dma_dir: DMA mapping direction
* @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
* @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
@@ -63,6 +64,7 @@ struct page_pool_params {
int nid;
struct device *dev;
struct napi_struct *napi;
+ struct netdev_rx_queue *queue;
enum dma_data_direction dma_dir;
unsigned int max_len;
unsigned int offset;
@@ -125,6 +127,13 @@ struct page_pool_stats {
};
#endif

+struct memory_provider_ops {
+ int (*init)(struct page_pool *pool);
+ void (*destroy)(struct page_pool *pool);
+ struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
+ bool (*release_page)(struct page_pool *pool, struct page *page);
+};
+
struct page_pool {
struct page_pool_params_fast p;

@@ -174,6 +183,9 @@ struct page_pool {
*/
struct ptr_ring ring;

+ void *mp_priv;
+ const struct memory_provider_ops *mp_ops;
+
#ifdef CONFIG_PAGE_POOL_STATS
/* recycle stats are per-cpu to avoid locking */
struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ca1b3b65c9b5..f5c84d2a4510 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -25,6 +25,8 @@

#include "page_pool_priv.h"

+static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+
#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)

@@ -174,6 +176,7 @@ static int page_pool_init(struct page_pool *pool,
const struct page_pool_params *params)
{
unsigned int ring_qsize = 1024; /* Default */
+ int err;

memcpy(&pool->p, &params->fast, sizeof(pool->p));
memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
@@ -234,10 +237,25 @@ static int page_pool_init(struct page_pool *pool,
/* Driver calling page_pool_create() also call page_pool_destroy() */
refcount_set(&pool->user_cnt, 1);

+ if (pool->mp_ops) {
+ err = pool->mp_ops->init(pool);
+ if (err) {
+ pr_warn("%s() mem-provider init failed %d\n",
+ __func__, err);
+ goto free_ptr_ring;
+ }
+
+ static_branch_inc(&page_pool_mem_providers);
+ }
+
if (pool->p.flags & PP_FLAG_DMA_MAP)
get_device(pool->p.dev);

return 0;
+
+free_ptr_ring:
+ ptr_ring_cleanup(&pool->ring, NULL);
+ return err;
}

static void page_pool_uninit(struct page_pool *pool)
@@ -519,7 +537,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
return page;

/* Slow-path: cache empty, do real allocation */
- page = __page_pool_alloc_pages_slow(pool, gfp);
+ if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+ page = pool->mp_ops->alloc_pages(pool, gfp);
+ else
+ page = __page_pool_alloc_pages_slow(pool, gfp);
return page;
}
EXPORT_SYMBOL(page_pool_alloc_pages);
@@ -576,10 +597,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
void page_pool_return_page(struct page_pool *pool, struct page *page)
{
int count;
+ bool put;

- __page_pool_release_page_dma(pool, page);
-
- page_pool_clear_pp_info(page);
+ put = true;
+ if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+ put = pool->mp_ops->release_page(pool, page);
+ else
+ __page_pool_release_page_dma(pool, page);

/* This may be the last page returned, releasing the pool, so
* it is not safe to reference pool afterwards.
@@ -587,7 +611,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
trace_page_pool_state_release(pool, page, count);

- put_page(page);
+ if (put) {
+ page_pool_clear_pp_info(page);
+ put_page(page);
+ }
/* An optimization would be to call __free_pages(page, pool->p.order)
* knowing page is not part of page-cache (thus avoiding a
* __page_cache_release() call).
@@ -857,6 +884,12 @@ static void __page_pool_destroy(struct page_pool *pool)

page_pool_unlist(pool);
page_pool_uninit(pool);
+
+ if (pool->mp_ops) {
+ pool->mp_ops->destroy(pool);
+ static_branch_dec(&page_pool_mem_providers);
+ }
+
kfree(pool);
}

--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:53:31

by Mina Almasry

Subject: [net-next v1 03/16] queue_api: define queue api

This API enables the net stack to reset the queues used for devmem.

Signed-off-by: Mina Almasry <[email protected]>

---
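
For context, the core is expected to drive these ndos as an
alloc-new / stop-old / start-new / free-old sequence. A condensed sketch of
such a caller follows; the real one is netdev_restart_rx_queue() in the
dma-buf binding patch later in this series:

static int queue_restart_sketch(struct net_device *dev, int idx)
{
	const struct net_device_ops *ops = dev->netdev_ops;
	void *new_mem, *old_mem;
	int err;

	new_mem = ops->ndo_queue_mem_alloc(dev, idx);
	if (!new_mem)
		return -ENOMEM;

	err = ops->ndo_queue_stop(dev, idx, &old_mem);
	if (err)
		goto err_free_new;

	err = ops->ndo_queue_start(dev, idx, new_mem);
	if (err) {
		/* Best effort: put the old queue memory back. */
		ops->ndo_queue_start(dev, idx, old_mem);
		goto err_free_new;
	}

	ops->ndo_queue_mem_free(dev, old_mem);
	return 0;

err_free_new:
	ops->ndo_queue_mem_free(dev, new_mem);
	return err;
}
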
include/linux/netdevice.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 1b935ee341b4..316f7dee86ce 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1432,6 +1432,20 @@ struct netdev_net_notifier {
* struct kernel_hwtstamp_config *kernel_config,
* struct netlink_ext_ack *extack);
* Change the hardware timestamping parameters for NIC device.
+ *
+ * void *(*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
+ * Allocate memory for an RX queue. The memory returned in the form of
+ * a void * can be passed to ndo_queue_mem_free() for freeing or to
+ * ndo_queue_start to create an RX queue with this memory.
+ *
+ * void (*ndo_queue_mem_free)(struct net_device *dev, void *);
+ * Free memory from an RX queue.
+ *
+ * int (*ndo_queue_start)(struct net_device *dev, int idx, void *);
+ * Start an RX queue at the specified index.
+ *
+ * int (*ndo_queue_stop)(struct net_device *dev, int idx, void **);
+ * Stop the RX queue at the specified index.
*/
struct net_device_ops {
int (*ndo_init)(struct net_device *dev);
@@ -1673,6 +1687,16 @@ struct net_device_ops {
int (*ndo_hwtstamp_set)(struct net_device *dev,
struct kernel_hwtstamp_config *kernel_config,
struct netlink_ext_ack *extack);
+ void * (*ndo_queue_mem_alloc)(struct net_device *dev,
+ int idx);
+ void (*ndo_queue_mem_free)(struct net_device *dev,
+ void *queue_mem);
+ int (*ndo_queue_start)(struct net_device *dev,
+ int idx,
+ void *queue_mem);
+ int (*ndo_queue_stop)(struct net_device *dev,
+ int idx,
+ void **out_queue_mem);
};

/**
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:53:37

by Mina Almasry

Subject: [net-next v1 04/16] gve: implement queue api

Define a struct that contains all of the memory needed for an RX
queue to function.

Implement the queue-api in GVE using this struct.

Currently the only memory allocated at the time of queue start is
the RX pages in gve_rx_post_buffers_dqo(). That can be moved up to
queue_mem_alloc() time in the future.

For simplicity the queue API is only supported by the diorite queue
out-of-order (DQO) format without queue-page-lists (QPL). Support for
other GVE formats can be added in the future as well.

Signed-off-by: Mina Almasry <[email protected]>

---
drivers/net/ethernet/google/gve/gve_adminq.c | 6 +-
drivers/net/ethernet/google/gve/gve_adminq.h | 3 +
drivers/net/ethernet/google/gve/gve_dqo.h | 2 +
drivers/net/ethernet/google/gve/gve_main.c | 286 +++++++++++++++++++
drivers/net/ethernet/google/gve/gve_rx_dqo.c | 5 +-
5 files changed, 296 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 12fbd723ecc6..e515b7278295 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -348,7 +348,7 @@ static int gve_adminq_parse_err(struct gve_priv *priv, u32 status)
/* Flushes all AQ commands currently queued and waits for them to complete.
* If there are failures, it will return the first error.
*/
-static int gve_adminq_kick_and_wait(struct gve_priv *priv)
+int gve_adminq_kick_and_wait(struct gve_priv *priv)
{
int tail, head;
int i;
@@ -591,7 +591,7 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_que
return gve_adminq_kick_and_wait(priv);
}

-static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
{
struct gve_rx_ring *rx = &priv->rx[queue_index];
union gve_adminq_command cmd;
@@ -691,7 +691,7 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_qu
return gve_adminq_kick_and_wait(priv);
}

-static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
{
union gve_adminq_command cmd;
int err;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 5865ccdccbd0..265beed965dc 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -411,6 +411,7 @@ union gve_adminq_command {

static_assert(sizeof(union gve_adminq_command) == 64);

+int gve_adminq_kick_and_wait(struct gve_priv *priv);
int gve_adminq_alloc(struct device *dev, struct gve_priv *priv);
void gve_adminq_free(struct device *dev, struct gve_priv *priv);
void gve_adminq_release(struct gve_priv *priv);
@@ -424,7 +425,9 @@ int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
+int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index);
int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
+int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_id);
int gve_adminq_register_page_list(struct gve_priv *priv,
struct gve_queue_page_list *qpl);
int gve_adminq_unregister_page_list(struct gve_priv *priv, u32 page_list_id);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index c36b93f0de15..3eed26a0ed7d 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -46,6 +46,8 @@ int gve_clean_tx_done_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
struct napi_struct *napi);
void gve_rx_post_buffers_dqo(struct gve_rx_ring *rx);
void gve_rx_write_doorbell_dqo(const struct gve_priv *priv, int queue_idx);
+void gve_free_page_dqo(struct gve_priv *priv, struct gve_rx_buf_state_dqo *bs,
+ bool free_page);

static inline void
gve_tx_put_doorbell_dqo(const struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 619bf63ec935..5b23d811afd3 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -22,6 +22,7 @@
#include "gve_dqo.h"
#include "gve_adminq.h"
#include "gve_register.h"
+#include "gve_utils.h"

#define GVE_DEFAULT_RX_COPYBREAK (256)

@@ -1702,6 +1703,287 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
}
}

+struct gve_per_rx_queue_mem_dqo {
+ struct gve_rx_buf_state_dqo *buf_states;
+ u32 num_buf_states;
+
+ struct gve_rx_compl_desc_dqo *complq_desc_ring;
+ dma_addr_t complq_bus;
+
+ struct gve_rx_desc_dqo *bufq_desc_ring;
+ dma_addr_t bufq_bus;
+
+ struct gve_queue_resources *q_resources;
+ dma_addr_t q_resources_bus;
+
+ size_t completion_queue_slots;
+ size_t buffer_queue_slots;
+};
+
+static int gve_rx_queue_stop(struct net_device *dev, int idx,
+ void **out_per_q_mem)
+{
+ struct gve_per_rx_queue_mem_dqo *per_q_mem;
+ struct gve_priv *priv = netdev_priv(dev);
+ struct gve_notify_block *block;
+ struct gve_rx_ring *rx;
+ int ntfy_idx;
+ int err;
+
+ rx = &priv->rx[idx];
+ ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+ block = &priv->ntfy_blocks[ntfy_idx];
+
+ if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+ return -EOPNOTSUPP;
+
+ if (!out_per_q_mem)
+ return -EINVAL;
+
+ /* Stopping queue 0 while other queues are running unfortunately
+ * fails silently for GVE at the moment. Disable the queue-api for
+ * queue 0 until this is resolved.
+ */
+ if (idx == 0)
+ return -ERANGE;
+
+ per_q_mem = kvcalloc(1, sizeof(*per_q_mem), GFP_KERNEL);
+ if (!per_q_mem)
+ return -ENOMEM;
+
+ napi_disable(&block->napi);
+ err = gve_adminq_destroy_rx_queue(priv, idx);
+ if (err)
+ goto err_napi_enable;
+
+ err = gve_adminq_kick_and_wait(priv);
+ if (err)
+ goto err_create_rx_queue;
+
+ gve_remove_napi(priv, ntfy_idx);
+
+ per_q_mem->buf_states = rx->dqo.buf_states;
+ per_q_mem->num_buf_states = rx->dqo.num_buf_states;
+
+ per_q_mem->complq_desc_ring = rx->dqo.complq.desc_ring;
+ per_q_mem->complq_bus = rx->dqo.complq.bus;
+
+ per_q_mem->bufq_desc_ring = rx->dqo.bufq.desc_ring;
+ per_q_mem->bufq_bus = rx->dqo.bufq.bus;
+
+ per_q_mem->q_resources = rx->q_resources;
+ per_q_mem->q_resources_bus = rx->q_resources_bus;
+
+ per_q_mem->buffer_queue_slots = rx->dqo.bufq.mask + 1;
+ per_q_mem->completion_queue_slots = rx->dqo.complq.mask + 1;
+
+ *out_per_q_mem = per_q_mem;
+
+ return 0;
+
+err_create_rx_queue:
+ /* There is nothing we can do here if these fail. */
+ gve_adminq_create_rx_queue(priv, idx);
+ gve_adminq_kick_and_wait(priv);
+
+err_napi_enable:
+ napi_enable(&block->napi);
+ kvfree(per_q_mem);
+
+ return err;
+}
+
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+ struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+ struct gve_priv *priv = netdev_priv(dev);
+ struct gve_rx_buf_state_dqo *bs;
+ struct device *hdev;
+ size_t size;
+ int i;
+
+ priv = netdev_priv(dev);
+ gve_q_mem = (struct gve_per_rx_queue_mem_dqo *)per_q_mem;
+ hdev = &priv->pdev->dev;
+
+ if (!gve_q_mem)
+ return;
+
+ if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+ return;
+
+ for (i = 0; i < gve_q_mem->num_buf_states; i++) {
+ bs = &gve_q_mem->buf_states[i];
+ if (bs->page_info.page)
+ gve_free_page_dqo(priv, bs, true);
+ }
+
+ if (gve_q_mem->q_resources)
+ dma_free_coherent(hdev, sizeof(*gve_q_mem->q_resources),
+ gve_q_mem->q_resources,
+ gve_q_mem->q_resources_bus);
+
+ if (gve_q_mem->bufq_desc_ring) {
+ size = sizeof(gve_q_mem->bufq_desc_ring[0]) *
+ gve_q_mem->buffer_queue_slots;
+ dma_free_coherent(hdev, size, gve_q_mem->bufq_desc_ring,
+ gve_q_mem->bufq_bus);
+ }
+
+ if (gve_q_mem->complq_desc_ring) {
+ size = sizeof(gve_q_mem->complq_desc_ring[0]) *
+ gve_q_mem->completion_queue_slots;
+ dma_free_coherent(hdev, size, gve_q_mem->complq_desc_ring,
+ gve_q_mem->complq_bus);
+ }
+
+ kvfree(gve_q_mem->buf_states);
+
+ kvfree(per_q_mem);
+}
+
+static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
+{
+ struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+ struct gve_priv *priv = netdev_priv(dev);
+ struct device *hdev = &priv->pdev->dev;
+ size_t size;
+
+ if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+ return NULL;
+
+ /* See comment in gve_rx_queue_stop() */
+ if (idx == 0)
+ return NULL;
+
+ gve_q_mem = kvcalloc(1, sizeof(*gve_q_mem), GFP_KERNEL);
+ if (!gve_q_mem)
+ goto err;
+
+ gve_q_mem->buffer_queue_slots =
+ priv->options_dqo_rda.rx_buff_ring_entries;
+ gve_q_mem->completion_queue_slots = priv->rx_desc_cnt;
+
+ gve_q_mem->num_buf_states =
+ min_t(s16, S16_MAX, gve_q_mem->buffer_queue_slots * 4);
+
+ gve_q_mem->buf_states = kvcalloc(gve_q_mem->num_buf_states,
+ sizeof(gve_q_mem->buf_states[0]),
+ GFP_KERNEL);
+ if (!gve_q_mem->buf_states)
+ goto err;
+
+ size = sizeof(struct gve_rx_compl_desc_dqo) *
+ gve_q_mem->completion_queue_slots;
+ gve_q_mem->complq_desc_ring = dma_alloc_coherent(hdev, size,
+ &gve_q_mem->complq_bus,
+ GFP_KERNEL);
+ if (!gve_q_mem->complq_desc_ring)
+ goto err;
+
+ size = sizeof(struct gve_rx_desc_dqo) * gve_q_mem->buffer_queue_slots;
+ gve_q_mem->bufq_desc_ring = dma_alloc_coherent(hdev, size,
+ &gve_q_mem->bufq_bus,
+ GFP_KERNEL);
+ if (!gve_q_mem->bufq_desc_ring)
+ goto err;
+
+ gve_q_mem->q_resources = dma_alloc_coherent(hdev,
+ sizeof(*gve_q_mem->q_resources),
+ &gve_q_mem->q_resources_bus,
+ GFP_KERNEL);
+ if (!gve_q_mem->q_resources)
+ goto err;
+
+ return gve_q_mem;
+
+err:
+ gve_rx_queue_mem_free(dev, gve_q_mem);
+ return NULL;
+}
+
+static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
+{
+ struct gve_per_rx_queue_mem_dqo *gve_q_mem;
+ struct gve_priv *priv = netdev_priv(dev);
+ struct gve_rx_ring *rx = &priv->rx[idx];
+ struct gve_notify_block *block;
+ int ntfy_idx;
+ int err;
+ int i;
+
+ if (priv->queue_format != GVE_DQO_RDA_FORMAT)
+ return -EOPNOTSUPP;
+
+ /* See comment in gve_rx_queue_stop() */
+ if (idx == 0)
+ return -ERANGE;
+
+ gve_q_mem = (struct gve_per_rx_queue_mem_dqo *)per_q_mem;
+ ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
+ block = &priv->ntfy_blocks[ntfy_idx];
+
+ netif_dbg(priv, drv, priv->dev, "starting rx ring DQO\n");
+
+ memset(rx, 0, sizeof(*rx));
+ rx->gve = priv;
+ rx->q_num = idx;
+ rx->dqo.bufq.mask = gve_q_mem->buffer_queue_slots - 1;
+ rx->dqo.complq.num_free_slots = gve_q_mem->completion_queue_slots;
+ rx->dqo.complq.mask = gve_q_mem->completion_queue_slots - 1;
+ rx->ctx.skb_head = NULL;
+ rx->ctx.skb_tail = NULL;
+
+ rx->dqo.num_buf_states = gve_q_mem->num_buf_states;
+
+ rx->dqo.buf_states = gve_q_mem->buf_states;
+
+ /* Set up linked list of buffer IDs */
+ for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
+ rx->dqo.buf_states[i].next = i + 1;
+
+ rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
+ rx->dqo.recycled_buf_states.head = -1;
+ rx->dqo.recycled_buf_states.tail = -1;
+ rx->dqo.used_buf_states.head = -1;
+ rx->dqo.used_buf_states.tail = -1;
+
+ rx->dqo.complq.desc_ring = gve_q_mem->complq_desc_ring;
+ rx->dqo.complq.bus = gve_q_mem->complq_bus;
+
+ rx->dqo.bufq.desc_ring = gve_q_mem->bufq_desc_ring;
+ rx->dqo.bufq.bus = gve_q_mem->bufq_bus;
+
+ rx->q_resources = gve_q_mem->q_resources;
+ rx->q_resources_bus = gve_q_mem->q_resources_bus;
+
+ gve_rx_add_to_block(priv, idx);
+
+ err = gve_adminq_create_rx_queue(priv, idx);
+ if (err)
+ return err;
+
+ err = gve_adminq_kick_and_wait(priv);
+ if (err)
+ goto err_destroy_rx_queue;
+
+ /* TODO, pull the memory allocations in this to gve_rx_queue_mem_alloc()
+ */
+ gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+ napi_enable(&block->napi);
+ gve_set_itr_coalesce_usecs_dqo(priv, block, priv->rx_coalesce_usecs);
+
+ return 0;
+
+err_destroy_rx_queue:
+ /* There is nothing we can do if these fail. */
+ gve_adminq_destroy_rx_queue(priv, idx);
+ gve_adminq_kick_and_wait(priv);
+
+ return err;
+}
+
int gve_adjust_queues(struct gve_priv *priv,
struct gve_queue_config new_rx_config,
struct gve_queue_config new_tx_config)
@@ -1900,6 +2182,10 @@ static const struct net_device_ops gve_netdev_ops = {
.ndo_bpf = gve_xdp,
.ndo_xdp_xmit = gve_xdp_xmit,
.ndo_xsk_wakeup = gve_xsk_wakeup,
+ .ndo_queue_mem_alloc = gve_rx_queue_mem_alloc,
+ .ndo_queue_mem_free = gve_rx_queue_mem_free,
+ .ndo_queue_start = gve_rx_queue_start,
+ .ndo_queue_stop = gve_rx_queue_stop,
};

static void gve_handle_status(struct gve_priv *priv, u32 status)
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index f281e42a7ef9..e729f04d3f60 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -21,9 +21,8 @@ static int gve_buf_ref_cnt(struct gve_rx_buf_state_dqo *bs)
return page_count(bs->page_info.page) - bs->page_info.pagecnt_bias;
}

-static void gve_free_page_dqo(struct gve_priv *priv,
- struct gve_rx_buf_state_dqo *bs,
- bool free_page)
+void gve_free_page_dqo(struct gve_priv *priv, struct gve_rx_buf_state_dqo *bs,
+ bool free_page)
{
page_ref_sub(bs->page_info.page, bs->page_info.pagecnt_bias - 1);
if (free_page)
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:53:53

by Mina Almasry

Subject: [net-next v1 10/16] page_pool: don't release iov on elevated refcount

Currently the page_pool behavior is that a page is considered for
recycling only once, the first time __page_pool_put_page() is called on
it.

This works because in practice the net stack only holds 1 reference to
the skb frags. In that case, the page_pool recycling works as expected,
as the skb frags will have 1 reference on the pages from the net stack
when __page_pool_put_page() is called (if the driver is not holding
extra references for recycling), and so the page will be recycled.

However, this is not compatible with devmem TCP. For devmem TCP, the net
stack holds 2 references for each frag, 1 reference is part of the SKB,
and the second reference is for the user holding the frag until they
call SO_DEVMEM_DONTNEED. This causes a bug in the page_pool recycling
where, when the skb is freed, the reference count goes from 2->1, the
page_pool sees a pending reference, releases the page, and so no devmem
iovs get recycled.

To fix this, don't release iovs on elevated refcount.

Signed-off-by: Mina Almasry <[email protected]>
---
net/core/page_pool.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f0148d66371b..dc2a148f5b06 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -731,6 +731,29 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
/* Page found as candidate for recycling */
return page;
}
+
+ if (page_is_page_pool_iov(page)) {
+ /* With devmem TCP and ppiovs, we can't release pages if the
+ * refcount is > 1. This is because the net stack holds
+ * 2 references:
+ * - 1 for the skb, and
+ * - 1 for the user until they call SO_DEVMEM_DONTNEED.
+ * Releasing pages for elevated refcounts completely disables
+ * page_pool recycling. Instead, simply don't release pages and
+ * the next call to napi_pp_put_page() via SO_DEVMEM_DONTNEED
+ * will consider the page again for recycling. As a result,
+ * devmem TCP is incompatible with drivers doing refcnt based
+ * recycling unless those drivers:
+ *
+ * - don't mark skb_mark_for_recycle()
+ * - are sure to release the last reference with
+ * page_pool_put_full_page() to consider the page for
+ * page_pool recycling.
+ */
+ page_pool_page_put_many(page, 1);
+ return NULL;
+ }
+
/* Fallback/non-XDP mode: API user have elevated refcnt.
*
* Many drivers split up the page into fragments, and some
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:01

by Mina Almasry

Subject: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

Add a netdev_dmabuf_binding struct which represents the
dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
rx queues on the netdevice. At bind time, dma_buf_attach()
& dma_buf_map_attachment() are called. The entries in the sg_table from
the mapping are inserted into a genpool to make them ready
for allocation.

The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
holds the dma-buf offset of the base of the chunk and the dma_addr of
the chunk. Both are needed to use allocations that come from this chunk.

We create a new type that represents an allocation from the genpool:
page_pool_iov. We set up the page_pool_iov allocation size in the
genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally
allocated by the page pool and given to the drivers.

The user can unbind the dmabuf from the netdevice by closing the netlink
socket that established the binding. We do this so that the binding is
automatically unbound even if the userspace process crashes.

The binding and unbinding leave an indicator in struct netdev_rx_queue
that the given queue is bound, but the binding doesn't take effect until
the driver actually reconfigures its queues, and re-initializes its page
pool.

The netdev_dmabuf_binding struct is refcounted, and releases its
resources only when all the refs are released.

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v1:

- Introduce devmem.h instead of bloating netdevice.h (Jakub)
- ENOTSUPP -> EOPNOTSUPP (checkpatch.pl I think)
- Remove unneeded rcu protection for binding->list (rtnl protected)
- Removed extraneous err_binding_put: label.
- Removed dma_addr += len (Paolo).
- Don't override err on netdev_bind_dmabuf_to_queue failure.
- Rename devmem -> dmabuf (David).
- Add id to dmabuf binding (David/Stan).
- Fix missing xa_destroy bound_rq_list.
- Use queue api to reset bound RX queues (Jakub).
- Update netlink API for rx-queue type (tx/rx) (Jakub).

RFC v3:
- Support multi rx-queue binding

---
include/net/devmem.h | 96 ++++++++++++
include/net/netdev_rx_queue.h | 1 +
include/net/page_pool/types.h | 27 ++++
net/core/dev.c | 276 ++++++++++++++++++++++++++++++++++
net/core/netdev-genl.c | 122 ++++++++++++++-
5 files changed, 520 insertions(+), 2 deletions(-)
create mode 100644 include/net/devmem.h

diff --git a/include/net/devmem.h b/include/net/devmem.h
new file mode 100644
index 000000000000..29ff125f9815
--- /dev/null
+++ b/include/net/devmem.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Device memory TCP support
+ *
+ * Authors: Mina Almasry <[email protected]>
+ * Willem de Bruijn <[email protected]>
+ * Kaiyuan Zhang <[email protected]>
+ *
+ */
+#ifndef _NET_DEVMEM_H
+#define _NET_DEVMEM_H
+
+struct netdev_dmabuf_binding {
+ struct dma_buf *dmabuf;
+ struct dma_buf_attachment *attachment;
+ struct sg_table *sgt;
+ struct net_device *dev;
+ struct gen_pool *chunk_pool;
+
+ /* The user holds a ref (via the netlink API) for as long as they want
+ * the binding to remain alive. Each page pool using this binding holds
+ * a ref to keep the binding alive. Each allocated page_pool_iov holds a
+ * ref.
+ *
+ * The binding undoes itself and unmaps the underlying dmabuf once all
+ * those refs are dropped and the binding is no longer desired or in
+ * use.
+ */
+ refcount_t ref;
+
+ /* The portid of the user that owns this binding. Used for netlink to
+ * notify us of the user dropping the bind.
+ */
+ u32 owner_nlportid;
+
+ /* The list of bindings currently active. Used for netlink to notify us
+ * of the user dropping the bind.
+ */
+ struct list_head list;
+
+ /* rxq's this binding is active on. */
+ struct xarray bound_rxq_list;
+
+ /* ID of this binding. Globally unique to all bindings currently
+ * active.
+ */
+ u32 id;
+};
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding);
+int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+ struct netdev_dmabuf_binding **out);
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding);
+int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct netdev_dmabuf_binding *binding);
+#else
+static inline void
+__netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding)
+{
+}
+
+static inline int netdev_bind_dmabuf(struct net_device *dev,
+ unsigned int dmabuf_fd,
+ struct netdev_dmabuf_binding **out)
+{
+ return -EOPNOTSUPP;
+}
+static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding)
+{
+}
+
+static inline int
+netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct netdev_dmabuf_binding *binding)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
+static inline void
+netdev_dmabuf_binding_get(struct netdev_dmabuf_binding *binding)
+{
+ refcount_inc(&binding->ref);
+}
+
+static inline void
+netdev_dmabuf_binding_put(struct netdev_dmabuf_binding *binding)
+{
+ if (!refcount_dec_and_test(&binding->ref))
+ return;
+
+ __netdev_dmabuf_binding_free(binding);
+}
+
+#endif /* _NET_DEVMEM_H */
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index aa1716fb0e53..5dc35628633a 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -25,6 +25,7 @@ struct netdev_rx_queue {
* Readers and writers must hold RTNL
*/
struct napi_struct *napi;
+ struct netdev_dmabuf_binding *binding;
} ____cacheline_aligned_in_smp;

/*
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 0e9fa79a5ef1..44faee7a7b02 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -134,6 +134,33 @@ struct memory_provider_ops {
bool (*release_page)(struct page_pool *pool, struct page *page);
};

+/* page_pool_iov support */
+
+/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
+ * entry from the dmabuf is inserted into the genpool as a chunk, and needs
+ * this owner struct to keep track of some metadata necessary to create
+ * allocations from this chunk.
+ */
+struct dmabuf_genpool_chunk_owner {
+ /* Offset into the dma-buf where this chunk starts. */
+ unsigned long base_virtual;
+
+ /* dma_addr of the start of the chunk. */
+ dma_addr_t base_dma_addr;
+
+ /* Array of page_pool_iovs for this chunk. */
+ struct page_pool_iov *ppiovs;
+ size_t num_ppiovs;
+
+ struct netdev_dmabuf_binding *binding;
+};
+
+struct page_pool_iov {
+ struct dmabuf_genpool_chunk_owner *owner;
+
+ refcount_t refcount;
+};
+
struct page_pool {
struct page_pool_params_fast p;

diff --git a/net/core/dev.c b/net/core/dev.c
index 0432b04cf9b0..b8c8be5a912e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -153,6 +153,10 @@
#include <linux/prandom.h>
#include <linux/once_lite.h>
#include <net/netdev_rx_queue.h>
+#include <linux/genalloc.h>
+#include <linux/dma-buf.h>
+#include <net/page_pool/types.h>
+#include <net/devmem.h>

#include "dev.h"
#include "net-sysfs.h"
@@ -2041,6 +2045,278 @@ static int call_netdevice_notifiers_mtu(unsigned long val,
return call_netdevice_notifiers_info(val, &info.info);
}

+/* Device memory support */
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+static void netdev_dmabuf_free_chunk_owner(struct gen_pool *genpool,
+ struct gen_pool_chunk *chunk,
+ void *not_used)
+{
+ struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
+
+ kvfree(owner->ppiovs);
+ kfree(owner);
+}
+
+void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding)
+{
+ size_t size, avail;
+
+ gen_pool_for_each_chunk(binding->chunk_pool,
+ netdev_dmabuf_free_chunk_owner, NULL);
+
+ size = gen_pool_size(binding->chunk_pool);
+ avail = gen_pool_avail(binding->chunk_pool);
+
+ if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
+ size, avail))
+ gen_pool_destroy(binding->chunk_pool);
+
+ dma_buf_unmap_attachment(binding->attachment, binding->sgt,
+ DMA_BIDIRECTIONAL);
+ dma_buf_detach(binding->dmabuf, binding->attachment);
+ dma_buf_put(binding->dmabuf);
+ xa_destroy(&binding->bound_rxq_list);
+ kfree(binding);
+}
+
+static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
+{
+ void *new_mem;
+ void *old_mem;
+ int err;
+
+ if (!dev || !dev->netdev_ops)
+ return -EINVAL;
+
+ if (!dev->netdev_ops->ndo_queue_stop ||
+ !dev->netdev_ops->ndo_queue_mem_free ||
+ !dev->netdev_ops->ndo_queue_mem_alloc ||
+ !dev->netdev_ops->ndo_queue_start)
+ return -EOPNOTSUPP;
+
+ new_mem = dev->netdev_ops->ndo_queue_mem_alloc(dev, rxq_idx);
+ if (!new_mem)
+ return -ENOMEM;
+
+ err = dev->netdev_ops->ndo_queue_stop(dev, rxq_idx, &old_mem);
+ if (err)
+ goto err_free_new_mem;
+
+ err = dev->netdev_ops->ndo_queue_start(dev, rxq_idx, new_mem);
+ if (err)
+ goto err_start_queue;
+
+ dev->netdev_ops->ndo_queue_mem_free(dev, old_mem);
+
+ return 0;
+
+err_start_queue:
+ dev->netdev_ops->ndo_queue_start(dev, rxq_idx, old_mem);
+
+err_free_new_mem:
+ dev->netdev_ops->ndo_queue_mem_free(dev, new_mem);
+
+ return err;
+}
+
+/* Protected by rtnl_lock() */
+static DEFINE_XARRAY_FLAGS(netdev_dmabuf_bindings, XA_FLAGS_ALLOC1);
+
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding)
+{
+ struct netdev_rx_queue *rxq;
+ unsigned long xa_idx;
+ unsigned int rxq_idx;
+
+ if (!binding)
+ return;
+
+ if (binding->list.next)
+ list_del(&binding->list);
+
+ xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
+ if (rxq->binding == binding) {
+ /* We hold the rtnl_lock while binding/unbinding
+ * dma-buf, so we can't race with another thread that
+ * is also modifying this value. However, the driver
+ * may read this config while it's creating its
+ * rx-queues. WRITE_ONCE() here to match the
+ * READ_ONCE() in the driver.
+ */
+ WRITE_ONCE(rxq->binding, NULL);
+
+ rxq_idx = get_netdev_rx_queue_index(rxq);
+
+ netdev_restart_rx_queue(binding->dev, rxq_idx);
+ }
+ }
+
+ xa_erase(&netdev_dmabuf_bindings, binding->id);
+
+ netdev_dmabuf_binding_put(binding);
+}
+
+int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct netdev_dmabuf_binding *binding)
+{
+ struct netdev_rx_queue *rxq;
+ u32 xa_idx;
+ int err;
+
+ rxq = __netif_get_rx_queue(dev, rxq_idx);
+
+ if (rxq->binding)
+ return -EEXIST;
+
+ err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
+ GFP_KERNEL);
+ if (err)
+ return err;
+
+ /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
+ * race with another thread that is also modifying this value. However,
+ * the driver may read this config while it's creating its rx-queues.
+ * WRITE_ONCE() here to match the READ_ONCE() in the driver.
+ */
+ WRITE_ONCE(rxq->binding, binding);
+
+ err = netdev_restart_rx_queue(dev, rxq_idx);
+ if (err)
+ goto err_xa_erase;
+
+ return 0;
+
+err_xa_erase:
+ xa_erase(&binding->bound_rxq_list, xa_idx);
+ WRITE_ONCE(rxq->binding, NULL);
+
+ return err;
+}
+
+int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+ struct netdev_dmabuf_binding **out)
+{
+ struct netdev_dmabuf_binding *binding;
+ static u32 id_alloc_next;
+ struct scatterlist *sg;
+ struct dma_buf *dmabuf;
+ unsigned int sg_idx, i;
+ unsigned long virtual;
+ int err;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR_OR_NULL(dmabuf))
+ return -EBADFD;
+
+ binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
+ dev_to_node(&dev->dev));
+ if (!binding) {
+ err = -ENOMEM;
+ goto err_put_dmabuf;
+ }
+ binding->dev = dev;
+
+ err = xa_alloc_cyclic(&netdev_dmabuf_bindings, &binding->id, binding,
+ xa_limit_32b, &id_alloc_next, GFP_KERNEL);
+ if (err < 0)
+ goto err_free_binding;
+
+ xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC);
+
+ refcount_set(&binding->ref, 1);
+
+ binding->dmabuf = dmabuf;
+
+ binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
+ if (IS_ERR(binding->attachment)) {
+ err = PTR_ERR(binding->attachment);
+ goto err_free_id;
+ }
+
+ binding->sgt = dma_buf_map_attachment(binding->attachment,
+ DMA_BIDIRECTIONAL);
+ if (IS_ERR(binding->sgt)) {
+ err = PTR_ERR(binding->sgt);
+ goto err_detach;
+ }
+
+ /* For simplicity we expect to make PAGE_SIZE allocations, but the
+ * binding can be much more flexible than that. We may be able to
+ * allocate MTU sized chunks here. Leave that for future work...
+ */
+ binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
+ dev_to_node(&dev->dev));
+ if (!binding->chunk_pool) {
+ err = -ENOMEM;
+ goto err_unmap;
+ }
+
+ virtual = 0;
+ for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
+ dma_addr_t dma_addr = sg_dma_address(sg);
+ struct dmabuf_genpool_chunk_owner *owner;
+ size_t len = sg_dma_len(sg);
+ struct page_pool_iov *ppiov;
+
+ owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
+ dev_to_node(&dev->dev));
+ owner->base_virtual = virtual;
+ owner->base_dma_addr = dma_addr;
+ owner->num_ppiovs = len / PAGE_SIZE;
+ owner->binding = binding;
+
+ err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
+ dma_addr, len, dev_to_node(&dev->dev),
+ owner);
+ if (err) {
+ err = -EINVAL;
+ goto err_free_chunks;
+ }
+
+ owner->ppiovs = kvmalloc_array(owner->num_ppiovs,
+ sizeof(*owner->ppiovs),
+ GFP_KERNEL);
+ if (!owner->ppiovs) {
+ err = -ENOMEM;
+ goto err_free_chunks;
+ }
+
+ for (i = 0; i < owner->num_ppiovs; i++) {
+ ppiov = &owner->ppiovs[i];
+ ppiov->owner = owner;
+ refcount_set(&ppiov->refcount, 1);
+ }
+
+ virtual += len;
+ }
+
+ *out = binding;
+
+ return 0;
+
+err_free_chunks:
+ gen_pool_for_each_chunk(binding->chunk_pool,
+ netdev_dmabuf_free_chunk_owner, NULL);
+ gen_pool_destroy(binding->chunk_pool);
+err_unmap:
+ dma_buf_unmap_attachment(binding->attachment, binding->sgt,
+ DMA_BIDIRECTIONAL);
+err_detach:
+ dma_buf_detach(dmabuf, binding->attachment);
+err_free_id:
+ xa_erase(&netdev_dmabuf_bindings, binding->id);
+err_free_binding:
+ kfree(binding);
+err_put_dmabuf:
+ dma_buf_put(dmabuf);
+ return err;
+}
+#endif
+
#ifdef CONFIG_NET_INGRESS
static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 0ed292d87ae0..b3323812d0b0 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -9,6 +9,7 @@
#include <net/xdp_sock.h>
#include <net/netdev_rx_queue.h>
#include <net/busy_poll.h>
+#include <net/devmem.h>

#include "netdev-genl-gen.h"
#include "dev.h"
@@ -469,10 +470,94 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
return skb->len;
}

-/* Stub */
+static LIST_HEAD(netdev_rbinding_list);
+
int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
{
- return 0;
+ struct nlattr *tb[ARRAY_SIZE(netdev_queue_dmabuf_nl_policy)];
+ struct netdev_dmabuf_binding *out_binding;
+ u32 ifindex, dmabuf_fd, rxq_idx;
+ struct net_device *netdev;
+ struct sk_buff *rsp;
+ struct nlattr *attr;
+ int rem, err = 0;
+ void *hdr;
+
+ if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
+ GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) ||
+ GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES))
+ return -EINVAL;
+
+ ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
+ dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]);
+
+ rtnl_lock();
+
+ netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+ if (!netdev) {
+ err = -ENODEV;
+ goto err_unlock;
+ }
+
+ err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding);
+ if (err)
+ goto err_unlock;
+
+ nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
+ genlmsg_len(info->genlhdr), rem) {
+ if (nla_type(attr) != NETDEV_A_BIND_DMABUF_QUEUES)
+ continue;
+
+ err = nla_parse_nested(tb,
+ ARRAY_SIZE(netdev_queue_dmabuf_nl_policy) - 1,
+ attr, netdev_queue_dmabuf_nl_policy,
+ info->extack);
+
+ if (err < 0)
+ goto err_unbind;
+
+ rxq_idx = nla_get_u32(tb[NETDEV_A_QUEUE_DMABUF_IDX]);
+
+ if (rxq_idx >= netdev->num_rx_queues) {
+ err = -ERANGE;
+ goto err_unbind;
+ }
+
+ err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx, out_binding);
+ if (err)
+ goto err_unbind;
+ }
+
+ out_binding->owner_nlportid = info->snd_portid;
+ list_add(&out_binding->list, &netdev_rbinding_list);
+
+ rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!rsp) {
+ err = -ENOMEM;
+ goto err_unbind;
+ }
+
+ hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq,
+ &netdev_nl_family, 0, info->genlhdr->cmd);
+ if (!hdr) {
+ err = -EMSGSIZE;
+ goto err_genlmsg_free;
+ }
+
+ nla_put_u32(rsp, NETDEV_A_BIND_DMABUF_DMABUF_ID, out_binding->id);
+ genlmsg_end(rsp, hdr);
+
+ rtnl_unlock();
+
+ return genlmsg_reply(rsp, info);
+
+err_genlmsg_free:
+ nlmsg_free(rsp);
+err_unbind:
+ netdev_unbind_dmabuf(out_binding);
+err_unlock:
+ rtnl_unlock();
+ return err;
}

static int netdev_genl_netdevice_event(struct notifier_block *nb,
@@ -495,10 +580,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb,
return NOTIFY_OK;
}

+static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state,
+ void *_notify)
+{
+ struct netlink_notify *notify = _notify;
+ struct netdev_dmabuf_binding *rbinding;
+
+ if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
+ return NOTIFY_DONE;
+
+ rtnl_lock();
+
+ list_for_each_entry(rbinding, &netdev_rbinding_list, list) {
+ if (rbinding->owner_nlportid == notify->portid) {
+ netdev_unbind_dmabuf(rbinding);
+ break;
+ }
+ }
+
+ rtnl_unlock();
+
+ return NOTIFY_OK;
+}
+
static struct notifier_block netdev_genl_nb = {
.notifier_call = netdev_genl_netdevice_event,
};

+static struct notifier_block netdev_netlink_notifier = {
+ .notifier_call = netdev_netlink_notify,
+};
+
static int __init netdev_genl_init(void)
{
int err;
@@ -511,8 +623,14 @@ static int __init netdev_genl_init(void)
if (err)
goto err_unreg_ntf;

+ err = netlink_register_notifier(&netdev_netlink_notifier);
+ if (err)
+ goto err_unreg_family;
+
return 0;

+err_unreg_family:
+ genl_unregister_family(&netdev_nl_family);
err_unreg_ntf:
unregister_netdevice_notifier(&netdev_genl_nb);
return err;
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:04

by Mina Almasry

Subject: [net-next v1 07/16] netdev: netdevice devmem allocator

Implement netdev devmem allocator. The allocator takes a given struct
netdev_dmabuf_binding as input and allocates page_pool_iov from that
binding.

The allocation simply delegates to the binding's genpool for the
allocation logic and wraps the returned memory region in a page_pool_iov
struct.

page_pool_iovs are refcounted and are freed back to the binding when the
refcount drops to 0.

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v1:
- Rename devmem -> dmabuf (David).

---
include/net/devmem.h | 13 ++++++++++++
include/net/page_pool/helpers.h | 28 +++++++++++++++++++++++++
net/core/dev.c | 37 ++++++++++++++++++++++++++++++++-
3 files changed, 77 insertions(+), 1 deletion(-)

diff --git a/include/net/devmem.h b/include/net/devmem.h
index 29ff125f9815..29bc337c7743 100644
--- a/include/net/devmem.h
+++ b/include/net/devmem.h
@@ -48,6 +48,9 @@ struct netdev_dmabuf_binding {
};

#ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_dmabuf(struct netdev_dmabuf_binding *binding);
+void netdev_free_dmabuf(struct page_pool_iov *ppiov);
void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding);
int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out);
@@ -55,6 +58,16 @@ void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding);
int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding);
#else
+static inline struct page_pool_iov *
+netdev_alloc_dmabuf(struct netdev_dmabuf_binding *binding)
+{
+ return NULL;
+}
+
+static inline void netdev_free_dmabuf(struct page_pool_iov *ppiov)
+{
+}
+
static inline void
__netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding)
{
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 7dc65774cde5..8bfc2d43efd4 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -79,6 +79,34 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
}
#endif

+/* page_pool_iov support */
+
+static inline struct dmabuf_genpool_chunk_owner *
+page_pool_iov_owner(const struct page_pool_iov *ppiov)
+{
+ return ppiov->owner;
+}
+
+static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov)
+{
+ return ppiov - page_pool_iov_owner(ppiov)->ppiovs;
+}
+
+static inline dma_addr_t
+page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
+{
+ struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov);
+
+ return owner->base_dma_addr +
+ ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT);
+}
+
+static inline struct netdev_dmabuf_binding *
+page_pool_iov_binding(const struct page_pool_iov *ppiov)
+{
+ return page_pool_iov_owner(ppiov)->binding;
+}
+
/**
* page_pool_dev_alloc_pages() - allocate a page.
* @pool: pool from which to allocate
diff --git a/net/core/dev.c b/net/core/dev.c
index b8c8be5a912e..30667e4c3b95 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -155,8 +155,8 @@
#include <net/netdev_rx_queue.h>
#include <linux/genalloc.h>
#include <linux/dma-buf.h>
-#include <net/page_pool/types.h>
#include <net/devmem.h>
+#include <net/page_pool/helpers.h>

#include "dev.h"
#include "net-sysfs.h"
@@ -2120,6 +2120,41 @@ static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
return err;
}

+struct page_pool_iov *netdev_alloc_dmabuf(struct netdev_dmabuf_binding *binding)
+{
+ struct dmabuf_genpool_chunk_owner *owner;
+ struct page_pool_iov *ppiov;
+ unsigned long dma_addr;
+ ssize_t offset;
+ ssize_t index;
+
+ dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
+ (void **)&owner);
+ if (!dma_addr)
+ return NULL;
+
+ offset = dma_addr - owner->base_dma_addr;
+ index = offset / PAGE_SIZE;
+ ppiov = &owner->ppiovs[index];
+
+ netdev_dmabuf_binding_get(binding);
+
+ return ppiov;
+}
+
+void netdev_free_dmabuf(struct page_pool_iov *ppiov)
+{
+ struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
+ unsigned long dma_addr = page_pool_iov_dma_addr(ppiov);
+
+ refcount_set(&ppiov->refcount, 1);
+
+ if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
+ gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);
+
+ netdev_dmabuf_binding_put(binding);
+}
+
/* Protected by rtnl_lock() */
static DEFINE_XARRAY_FLAGS(netdev_dmabuf_bindings, XA_FLAGS_ALLOC1);

--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:05

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 14/16] net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags

Add an interface for the user to notify the kernel that it is done
reading the devmem dmabuf frags returned to it via cmsg. The kernel will
drop its reference on the frags to make them available for reuse.
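
For illustration only (not part of this patch), the expected userspace
usage is a setsockopt() call carrying one or more dmabuf_token entries,
where the token values come from the cmsgs returned on the RX path:

    struct dmabuf_token token;
    int ret;

    token.token_start = frag_token; /* from an SCM_DEVMEM_DMABUF cmsg */
    token.token_count = 1;

    ret = setsockopt(client_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                     &token, sizeof(token));
    if (ret < 0)
            perror("SO_DEVMEM_DONTNEED");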

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

Changes in v1:
- devmemtoken -> dmabuf_token (David).
- Use napi_pp_put_page() for refcounting (Yunsheng).

---
include/uapi/asm-generic/socket.h | 1 +
include/uapi/linux/uio.h | 4 ++++
net/core/sock.c | 38 +++++++++++++++++++++++++++++++
3 files changed, 43 insertions(+)

diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index 25a2f5255f52..1acb77780f10 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -135,6 +135,7 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_DONTNEED 97
#define SO_DEVMEM_LINEAR 98
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 99
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index ad92e37699da..65f33178a601 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -30,6 +30,10 @@ struct dmabuf_cmsg {
__u32 dmabuf_id; /* dmabuf id this frag belongs to. */
};

+struct dmabuf_token {
+ __u32 token_start;
+ __u32 token_count;
+};
/*
* UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
*/
diff --git a/net/core/sock.c b/net/core/sock.c
index fef349dd72fa..521bdc4ff260 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1051,6 +1051,41 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
return 0;
}

+static noinline_for_stack int
+sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen)
+{
+ struct dmabuf_token tokens[128];
+ unsigned int num_tokens, i, j;
+ int ret;
+
+ if (sk->sk_type != SOCK_STREAM || sk->sk_protocol != IPPROTO_TCP)
+ return -EBADF;
+
+ if (optlen % sizeof(struct dmabuf_token) || optlen > sizeof(tokens))
+ return -EINVAL;
+
+ num_tokens = optlen / sizeof(struct dmabuf_token);
+ if (copy_from_sockptr(tokens, optval, optlen))
+ return -EFAULT;
+
+ ret = 0;
+ for (i = 0; i < num_tokens; i++) {
+ for (j = 0; j < tokens[i].token_count; j++) {
+ struct page *page = xa_erase(&sk->sk_user_pages,
+ tokens[i].token_start + j);
+
+ if (page) {
+ if (WARN_ON_ONCE(!napi_pp_put_page(page,
+ false)))
+ page_pool_page_put_many(page, 1);
+ ret++;
+ }
+ }
+ }
+
+ return ret;
+}
+
void sockopt_lock_sock(struct sock *sk)
{
/* When current->bpf_ctx is set, the setsockopt is called from
@@ -1538,6 +1573,9 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
break;
}

+ case SO_DEVMEM_DONTNEED:
+ ret = sock_devmem_dontneed(sk, optval, optlen);
+ break;
default:
ret = -ENOPROTOOPT;
break;
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:06

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 13/16] tcp: RX path for devmem TCP

In tcp_recvmsg_locked(), detect if the skb being received by the user
is a devmem skb. In this case, if the user provided the MSG_SOCK_DEVMEM
flag, pass it to tcp_recvmsg_dmabuf() for custom handling.

tcp_recvmsg_dmabuf() copies any data in the skb header to the linear
buffer, and returns a cmsg to the user indicating the number of bytes
returned in the linear buffer.

tcp_recvmsg_dmabuf() then loops over the inaccessible devmem skb frags,
and returns to the user a dmabuf_cmsg indicating the location of the
data in the dmabuf device memory. dmabuf_cmsg contains this information:

1. the offset into the dmabuf where the payload starts ('frag_offset').
2. the size of the frag ('frag_size').
3. an opaque token ('frag_token') to return to the kernel when the buffer
is to be released.

The pages awaiting freeing are stored in the newly added
sk->sk_user_pages xarray, and a reference is taken on each page passed
to userspace (via __skb_frag_ref()). This reference is dropped once
userspace indicates that it is done reading this page. All pages are
released when the socket is destroyed.
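
For reference, a simplified sketch of the expected userspace receive
path follows (the documentation patch later in this series carries a
fuller example); variable names here are illustrative only:

    char linear[4096];
    char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
    struct iovec iov = { .iov_base = linear, .iov_len = sizeof(linear) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctrl,
                          .msg_controllen = sizeof(ctrl) };
    struct dmabuf_cmsg *dcmsg;
    struct cmsghdr *cm;
    ssize_t ret;

    ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);

    for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
            if (cm->cmsg_level != SOL_SOCKET)
                    continue;

            dcmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);

            if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
                    /* dcmsg->frag_size bytes landed in the linear buffer */
            } else if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
                    /* payload is at dcmsg->frag_offset in the dmabuf
                     * identified by dcmsg->dmabuf_id; return
                     * dcmsg->frag_token via SO_DEVMEM_DONTNEED when done
                     */
            }
    }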

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

Changes in v1:
- Added dmabuf_id to dmabuf_cmsg (David/Stan).
- Devmem -> dmabuf (David).
- Change tcp_recvmsg_dmabuf() check to skb->dmabuf (Paolo).
- Use __skb_frag_ref() & napi_pp_put_page() for refcounting (Yunsheng).

RFC v3:
- Fixed issue with put_cmsg() failing silently.

---
include/linux/socket.h | 1 +
include/net/page_pool/helpers.h | 9 ++
include/net/sock.h | 2 +
include/uapi/asm-generic/socket.h | 5 +
include/uapi/linux/uio.h | 10 ++
net/ipv4/tcp.c | 190 +++++++++++++++++++++++++++++-
net/ipv4/tcp_ipv4.c | 8 ++
7 files changed, 220 insertions(+), 5 deletions(-)

diff --git a/include/linux/socket.h b/include/linux/socket.h
index cfcb7e2c3813..fe2b9e2081bb 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -326,6 +326,7 @@ struct ucred {
* plain text and require encryption
*/

+#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */
#define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */
#define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 2d4e0a2c5620..e7e2e89d3663 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -108,6 +108,15 @@ page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT);
}

+static inline unsigned long
+page_pool_iov_virtual_addr(const struct page_pool_iov *ppiov)
+{
+ struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov);
+
+ return owner->base_virtual +
+ ((unsigned long)page_pool_iov_idx(ppiov) << PAGE_SHIFT);
+}
+
static inline struct netdev_dmabuf_binding *
page_pool_iov_binding(const struct page_pool_iov *ppiov)
{
diff --git a/include/net/sock.h b/include/net/sock.h
index 1d6931caf0c3..01029c855c1b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -353,6 +353,7 @@ struct sk_filter;
* @sk_txtime_unused: unused txtime flags
* @ns_tracker: tracker for netns reference
* @sk_bind2_node: bind node in the bhash2 table
+ * @sk_user_pages: xarray of pages the user is holding a reference on.
*/
struct sock {
/*
@@ -545,6 +546,7 @@ struct sock {
struct rcu_head sk_rcu;
netns_tracker ns_tracker;
struct hlist_node sk_bind2_node;
+ struct xarray sk_user_pages;
};

enum sk_pacing {
diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index 8ce8a39a1e5f..25a2f5255f52 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -135,6 +135,11 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_LINEAR 98
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 99
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)

#if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__))
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 059b1a9147f4..ad92e37699da 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -20,6 +20,16 @@ struct iovec
__kernel_size_t iov_len; /* Must be size_t (1003.1g) */
};

+struct dmabuf_cmsg {
+ __u64 frag_offset; /* offset into the dmabuf where the frag starts.
+ */
+ __u32 frag_size; /* size of the frag. */
+ __u32 frag_token; /* token representing this frag for
+ * DEVMEM_DONTNEED.
+ */
+ __u32 dmabuf_id; /* dmabuf id this frag belongs to. */
+};
+
/*
* UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
*/
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 5a3135e93d3d..088b2b48bee0 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -461,6 +461,7 @@ void tcp_init_sock(struct sock *sk)

set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
sk_sockets_allocated_inc(sk);
+ xa_init_flags(&sk->sk_user_pages, XA_FLAGS_ALLOC1);
}
EXPORT_SYMBOL(tcp_init_sock);

@@ -2303,6 +2304,155 @@ static int tcp_inq_hint(struct sock *sk)
return inq;
}

+/* On error, returns the -errno. On success, returns number of bytes sent to the
+ * user. May not consume all of @remaining_len.
+ */
+static int tcp_recvmsg_dmabuf(const struct sock *sk, const struct sk_buff *skb,
+ unsigned int offset, struct msghdr *msg,
+ int remaining_len)
+{
+ struct dmabuf_cmsg dmabuf_cmsg = { 0 };
+ unsigned int start;
+ int i, copy, n;
+ int sent = 0;
+ int err = 0;
+
+ do {
+ start = skb_headlen(skb);
+
+ if (!skb->dmabuf) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ /* Copy header. */
+ copy = start - offset;
+ if (copy > 0) {
+ copy = min(copy, remaining_len);
+
+ n = copy_to_iter(skb->data + offset, copy,
+ &msg->msg_iter);
+ if (n != copy) {
+ err = -EFAULT;
+ goto out;
+ }
+
+ offset += copy;
+ remaining_len -= copy;
+
+ /* First a dmabuf_cmsg for # bytes copied to user
+ * buffer.
+ */
+ memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
+ dmabuf_cmsg.frag_size = copy;
+ err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
+ sizeof(dmabuf_cmsg), &dmabuf_cmsg);
+ if (err || msg->msg_flags & MSG_CTRUNC) {
+ msg->msg_flags &= ~MSG_CTRUNC;
+ if (!err)
+ err = -ETOOSMALL;
+ goto out;
+ }
+
+ sent += copy;
+
+ if (remaining_len == 0)
+ goto out;
+ }
+
+ /* after that, send information of dmabuf pages through a
+ * sequence of cmsg
+ */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct page_pool_iov *ppiov;
+ u64 frag_offset;
+ u32 user_token;
+ int end;
+
+ /* skb->dmabuf should indicate that ALL the frags in
+ * this skb are dmabuf page_pool_iovs. We're checking
+ * for that flag above, but also check individual frags
+ * here. If the tcp stack is not setting skb->dmabuf
+ * correctly, we still don't want to crash here when
+ * accessing pgmap or priv below.
+ */
+ if (!skb_frag_page_pool_iov(frag)) {
+ net_err_ratelimited("Found non-dmabuf skb with page_pool_iov");
+ err = -ENODEV;
+ goto out;
+ }
+
+ ppiov = skb_frag_page_pool_iov(frag);
+ end = start + skb_frag_size(frag);
+ copy = end - offset;
+
+ if (copy > 0) {
+ copy = min(copy, remaining_len);
+
+ frag_offset = page_pool_iov_virtual_addr(ppiov) +
+ skb_frag_off(frag) + offset -
+ start;
+ dmabuf_cmsg.frag_offset = frag_offset;
+ dmabuf_cmsg.frag_size = copy;
+ err = xa_alloc((struct xarray *)&sk->sk_user_pages,
+ &user_token, frag->bv_page,
+ xa_limit_31b, GFP_KERNEL);
+ if (err)
+ goto out;
+
+ dmabuf_cmsg.frag_token = user_token;
+ dmabuf_cmsg.dmabuf_id = page_pool_iov_binding_id(ppiov);
+
+ offset += copy;
+ remaining_len -= copy;
+
+ err = put_cmsg(msg, SOL_SOCKET,
+ SO_DEVMEM_DMABUF,
+ sizeof(dmabuf_cmsg),
+ &dmabuf_cmsg);
+ if (err || msg->msg_flags & MSG_CTRUNC) {
+ msg->msg_flags &= ~MSG_CTRUNC;
+ xa_erase((struct xarray *)&sk->sk_user_pages,
+ user_token);
+ if (!err)
+ err = -ETOOSMALL;
+ goto out;
+ }
+
+ __skb_frag_ref(frag);
+
+ sent += copy;
+
+ if (remaining_len == 0)
+ goto out;
+ }
+ start = end;
+ }
+
+ if (!remaining_len)
+ goto out;
+
+ /* if remaining_len is not satisfied yet, we need to go to the
+ * next frag in the frag_list to satisfy remaining_len.
+ */
+ skb = skb_shinfo(skb)->frag_list ?: skb->next;
+
+ offset = offset - start;
+ } while (skb);
+
+ if (remaining_len) {
+ err = -EFAULT;
+ goto out;
+ }
+
+out:
+ if (!sent)
+ sent = err;
+
+ return sent;
+}
+
/*
* This routine copies from a sock struct into the user buffer.
*
@@ -2316,6 +2466,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
int *cmsg_flags)
{
struct tcp_sock *tp = tcp_sk(sk);
+ int last_copied_dmabuf = -1; /* uninitialized */
int copied = 0;
u32 peek_seq;
u32 *seq;
@@ -2493,15 +2644,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
}

if (!(flags & MSG_TRUNC)) {
- err = skb_copy_datagram_msg(skb, offset, msg, used);
- if (err) {
- /* Exception. Bailout! */
- if (!copied)
- copied = -EFAULT;
+ if (last_copied_dmabuf != -1 &&
+ last_copied_dmabuf != skb->dmabuf)
break;
+
+ if (!skb->dmabuf) {
+ err = skb_copy_datagram_msg(skb, offset, msg,
+ used);
+ if (err) {
+ /* Exception. Bailout! */
+ if (!copied)
+ copied = -EFAULT;
+ break;
+ }
+ } else {
+ if (!(flags & MSG_SOCK_DEVMEM)) {
+ /* skb->dmabuf skbs can only be received
+ * with the MSG_SOCK_DEVMEM flag.
+ */
+ if (!copied)
+ copied = -EFAULT;
+
+ break;
+ }
+
+ err = tcp_recvmsg_dmabuf(sk, skb, offset, msg,
+ used);
+ if (err <= 0) {
+ if (!copied)
+ copied = -EFAULT;
+
+ break;
+ }
+ used = err;
}
}

+ last_copied_dmabuf = skb->dmabuf;
+
WRITE_ONCE(*seq, *seq + used);
copied += used;
len -= used;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 86cc6d36f818..986398cc2f65 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -2501,6 +2501,14 @@ static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
void tcp_v4_destroy_sock(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
+ struct page *page;
+ unsigned long index;
+
+ xa_for_each(&sk->sk_user_pages, index, page)
+ if (WARN_ON_ONCE(!napi_pp_put_page(page, false)))
+ page_pool_page_put_many(page, 1);
+
+ xa_destroy(&sk->sk_user_pages);

trace_tcp_destroy_sock(sk);

--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:10

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 09/16] page_pool: device memory support

Overload the LSB of struct page* to indicate that it's a page_pool_iov.

Refactor mm calls on struct page* into helpers, and add page_pool_iov
handling to those helpers. Convert callers of these mm APIs to use the
new helpers instead.

In areas where struct page* is dereferenced, add a check for special
handling of page_pool_iov.
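
For illustration, the tagging helpers introduced by the memory-provider
patch in this series implement the LSB overload roughly as below
(simplified here; the real helpers also gate on a static branch and
warn on misuse):

    /* PP_IOV (0x1) is OR'ed into the pointer when it carries a
     * page_pool_iov rather than a real struct page.
     */
    static inline bool page_is_page_pool_iov(const struct page *page)
    {
            return (unsigned long)page & PP_IOV;
    }

    static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
    {
            return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
    }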

Signed-off-by: Mina Almasry <[email protected]>

---

v1:
- Disable fragmentation support for iov properly.
- fix napi_pp_put_page() path (Yunsheng).

---
include/net/page_pool/helpers.h | 78 ++++++++++++++++++++++++++++++++-
net/core/page_pool.c | 67 ++++++++++++++++++++--------
net/core/skbuff.c | 28 +++++++-----
3 files changed, 141 insertions(+), 32 deletions(-)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 00197f14aa87..2d4e0a2c5620 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -154,6 +154,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
return NULL;
}

+static inline int page_pool_page_ref_count(struct page *page)
+{
+ if (page_is_page_pool_iov(page))
+ return page_pool_iov_refcount(page_to_page_pool_iov(page));
+
+ return page_ref_count(page);
+}
+
+static inline void page_pool_page_get_many(struct page *page,
+ unsigned int count)
+{
+ if (page_is_page_pool_iov(page))
+ return page_pool_iov_get_many(page_to_page_pool_iov(page),
+ count);
+
+ return page_ref_add(page, count);
+}
+
+static inline void page_pool_page_put_many(struct page *page,
+ unsigned int count)
+{
+ if (page_is_page_pool_iov(page))
+ return page_pool_iov_put_many(page_to_page_pool_iov(page),
+ count);
+
+ if (count > 1)
+ page_ref_sub(page, count - 1);
+
+ put_page(page);
+}
+
+static inline bool page_pool_page_is_pfmemalloc(struct page *page)
+{
+ if (page_is_page_pool_iov(page))
+ return false;
+
+ return page_is_pfmemalloc(page);
+}
+
+static inline bool page_pool_page_is_pref_nid(struct page *page, int pref_nid)
+{
+ /* Assume page_pool_iov are on the preferred node without actually
+ * checking...
+ *
+ * This check is only used to check for recycling memory in the page
+ * pool's fast paths. Currently the only implementation of page_pool_iov
+ * is dmabuf device memory. It's a deliberate decision by the user to
+ * bind a certain dmabuf to a certain netdev, and the netdev rx queue
+ * would not be able to reallocate memory from another dmabuf that
+ * exists on the preferred node, so, this check doesn't make much sense
+ * in this case. Assume all page_pool_iovs can be recycled for now.
+ */
+ if (page_is_page_pool_iov(page))
+ return true;
+
+ return page_to_nid(page) == pref_nid;
+}
+
/**
* page_pool_dev_alloc_pages() - allocate a page.
* @pool: pool from which to allocate
@@ -304,6 +362,10 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
{
long ret;

+ /* fragmentation support hasn't been added to ppiov yet */
+ if (WARN_ON_ONCE(page_is_page_pool_iov(page)))
+ return 0;
+
/* If nr == pp_frag_count then we have cleared all remaining
* references to the page:
* 1. 'n == 1': no need to actually overwrite it.
@@ -347,7 +409,8 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
static inline bool page_pool_is_last_frag(struct page *page)
{
/* If page_pool_defrag_page() returns 0, we were the last user */
- return page_pool_defrag_page(page, 1) == 0;
+ return page_is_page_pool_iov(page) ||
+ page_pool_defrag_page(page, 1) == 0;
}

/**
@@ -434,7 +497,12 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,
*/
static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
{
- dma_addr_t ret = page->dma_addr;
+ dma_addr_t ret;
+
+ if (page_is_page_pool_iov(page))
+ return page_pool_iov_dma_addr(page_to_page_pool_iov(page));
+
+ ret = page->dma_addr;

if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
ret <<= PAGE_SHIFT;
@@ -444,6 +512,12 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)

static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
{
+ /* page_pool_iovs are mapped and their dma-addr can't be modified. */
+ if (page_is_page_pool_iov(page)) {
+ DEBUG_NET_WARN_ON_ONCE(true);
+ return false;
+ }
+
if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) {
page->dma_addr = addr >> PAGE_SHIFT;

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 423c88564a00..f0148d66371b 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -346,7 +346,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
if (unlikely(!page))
break;

- if (likely(page_to_nid(page) == pref_nid)) {
+ if (likely(page_pool_page_is_pref_nid(page, pref_nid))) {
pool->alloc.cache[pool->alloc.count++] = page;
} else {
/* NUMA mismatch;
@@ -391,7 +391,15 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
struct page *page,
unsigned int dma_sync_size)
{
- dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+ dma_addr_t dma_addr;
+
+ /* page_pool_iov memory provider do not support PP_FLAG_DMA_SYNC_DEV */
+ if (page_is_page_pool_iov(page)) {
+ DEBUG_NET_WARN_ON_ONCE(true);
+ return;
+ }
+
+ dma_addr = page_pool_get_dma_addr(page);

dma_sync_size = min(dma_sync_size, pool->p.max_len);
dma_sync_single_range_for_device(pool->p.dev, dma_addr,
@@ -403,6 +411,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
dma_addr_t dma;

+ if (page_is_page_pool_iov(page)) {
+ /* page_pool_iovs are already mapped */
+ DEBUG_NET_WARN_ON_ONCE(true);
+ return true;
+ }
+
/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
* since dma_addr_t can be either 32 or 64 bits and does not always fit
* into page private data (i.e 32bit cpu with 64bit DMA caps)
@@ -434,22 +448,33 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
static void page_pool_set_pp_info(struct page_pool *pool,
struct page *page)
{
- page->pp = pool;
- page->pp_magic |= PP_SIGNATURE;
-
- /* Ensuring all pages have been split into one fragment initially:
- * page_pool_set_pp_info() is only called once for every page when it
- * is allocated from the page allocator and page_pool_fragment_page()
- * is dirtying the same cache line as the page->pp_magic above, so
- * the overhead is negligible.
- */
- page_pool_fragment_page(page, 1);
+ if (!page_is_page_pool_iov(page)) {
+ page->pp = pool;
+ page->pp_magic |= PP_SIGNATURE;
+
+ /* Ensuring all pages have been split into one fragment
+ * initially:
+ * page_pool_set_pp_info() is only called once for every page
+ * when it is allocated from the page allocator and
+ * page_pool_fragment_page() is dirtying the same cache line as
+ * the page->pp_magic above, so * the overhead is negligible.
+ */
+ page_pool_fragment_page(page, 1);
+ } else {
+ page_to_page_pool_iov(page)->pp = pool;
+ }
+
if (pool->has_init_callback)
pool->slow.init_callback(page, pool->slow.init_arg);
}

static void page_pool_clear_pp_info(struct page *page)
{
+ if (page_is_page_pool_iov(page)) {
+ page_to_page_pool_iov(page)->pp = NULL;
+ return;
+ }
+
page->pp_magic = 0;
page->pp = NULL;
}
@@ -664,7 +689,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
return false;
}

- /* Caller MUST have verified/know (page_ref_count(page) == 1) */
+ /* Caller MUST have verified/know (page_pool_page_ref_count(page) == 1) */
pool->alloc.cache[pool->alloc.count++] = page;
recycle_stat_inc(pool, cached);
return true;
@@ -689,9 +714,10 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
* refcnt == 1 means page_pool owns page, and can recycle it.
*
* page is NOT reusable when allocated when system is under
- * some pressure. (page_is_pfmemalloc)
+ * some pressure. (page_pool_page_is_pfmemalloc)
*/
- if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
+ if (likely(page_pool_page_ref_count(page) == 1 &&
+ !page_pool_page_is_pfmemalloc(page))) {
/* Read barrier done in page_ref_count / READ_ONCE */

if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
@@ -806,7 +832,8 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
if (likely(page_pool_defrag_page(page, drain_count)))
return NULL;

- if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
+ if (page_pool_page_ref_count(page) == 1 &&
+ !page_pool_page_is_pfmemalloc(page)) {
if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
page_pool_dma_sync_for_device(pool, page, -1);

@@ -840,6 +867,10 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
if (WARN_ON(size > max_size))
return NULL;

+ /* page_pool_iov's don't currently support fragmentation */
+ if (WARN_ON_ONCE(pool->mp_ops == &dmabuf_devmem_ops))
+ return NULL;
+
size = ALIGN(size, dma_get_cache_alignment());
*offset = pool->frag_offset;

@@ -882,9 +913,9 @@ static void page_pool_empty_ring(struct page_pool *pool)
/* Empty recycle ring */
while ((page = ptr_ring_consume_bh(&pool->ring))) {
/* Verify the refcnt invariant of cached pages */
- if (!(page_ref_count(page) == 1))
+ if (!(page_pool_page_ref_count(page) == 1))
pr_crit("%s() page_pool refcnt %d violation\n",
- __func__, page_ref_count(page));
+ __func__, page_pool_page_ref_count(page));

page_pool_return_page(pool, page);
}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b157efea5dea..07f802f1adf1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -896,19 +896,23 @@ bool napi_pp_put_page(struct page *page, bool napi_safe)
bool allow_direct = false;
struct page_pool *pp;

- page = compound_head(page);
-
- /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
- * in order to preserve any existing bits, such as bit 0 for the
- * head page of compound page and bit 1 for pfmemalloc page, so
- * mask those bits for freeing side when doing below checking,
- * and page_is_pfmemalloc() is checked in __page_pool_put_page()
- * to avoid recycling the pfmemalloc page.
- */
- if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
- return false;
+ if (!page_is_page_pool_iov(page)) {
+ page = compound_head(page);
+
+ /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+ * in order to preserve any existing bits, such as bit 0 for the
+ * head page of compound page and bit 1 for pfmemalloc page, so
+ * mask those bits for freeing side when doing below checking,
+ * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+ * to avoid recycling the pfmemalloc page.
+ */
+ if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
+ return false;

- pp = page->pp;
+ pp = page->pp;
+ } else {
+ pp = page_to_page_pool_iov(page)->pp;
+ }

/* Allow direct recycle if we have reasons to believe that we are
* in the same context as the consumer would run, so there's
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:11

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 15/16] net: add devmem TCP documentation

Signed-off-by: Mina Almasry <[email protected]>
---
Documentation/networking/devmem.rst | 270 ++++++++++++++++++++++++++++
1 file changed, 270 insertions(+)
create mode 100644 Documentation/networking/devmem.rst

diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
new file mode 100644
index 000000000000..ed0d9c88b708
--- /dev/null
+++ b/Documentation/networking/devmem.rst
@@ -0,0 +1,270 @@
+
+=================
+Device Memory TCP
+=================
+
+
+Intro
+=====
+
+Device memory TCP (devmem TCP) enables receiving data directly into device
+memory (dmabuf). The feature is currently implemented for TCP sockets.
+
+
+Opportunity
+-----------
+
+A large number of data transfers have device memory as the source and/or
+destination. Accelerators have drastically increased the volume of such
+transfers. Some examples include:
+
+- Distributed training, where ML accelerators, such as GPUs on different hosts,
+ exchange data among them.
+
+- Distributed raw block storage applications transfer large amounts of data to
+ and from remote SSDs; much of this data does not require host processing.
+
+Today, the majority of Device-to-Device data transfers over the network are
+implemented as the following low-level operations: Device-to-Host copy,
+Host-to-Host network transfer, and Host-to-Device copy.
+
+This implementation is suboptimal, especially for bulk data transfers, and can
+put significant strain on system resources such as host memory bandwidth and
+PCIe bandwidth.
+
+Devmem TCP optimizes this use case by implementing socket APIs that enable
+the user to receive incoming network packets directly into device memory.
+
+Packet payloads go directly from the NIC to device memory.
+
+Packet headers go to host memory and are processed by the TCP/IP stack
+normally. The NIC must support header split to achieve this.
+
+Advantages:
+
+- Alleviate host memory bandwidth pressure, compared to existing
+ network-transfer + device-copy semantics.
+
+- Alleviate PCIe bandwidth pressure, by limiting data transfer to the lowest
+ level of the PCIe tree, compared to the traditional path which sends data
+ through the root complex.
+
+
+More Info
+---------
+
+ slides, video
+ https://netdevconf.org/0x17/sessions/talk/device-memory-tcp.html
+
+ patchset
+ [RFC PATCH v3 00/12] Device Memory TCP
+ https://lore.kernel.org/lkml/[email protected]/T/
+
+
+Interface
+=========
+
+Example
+-------
+
+tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
+the RX path of this API.
+
+NIC Setup
+---------
+
+Header split, flow steering, & RSS are required features for devmem TCP.
+
+Header split is used to split incoming packets into a header buffer in host
+memory, and a payload buffer in device memory.
+
+Flow steering & RSS are used to ensure that only flows targeting devmem land
+on the RX queues bound to devmem.
+
+Enable header split & flow steering:
+
+::
+
+ # enable header split (assuming priv-flag)
+ ethtool --set-priv-flags eth1 enable-header-split on
+
+ # enable flow steering
+ ethtool -K eth1 ntuple on
+
+Configure RSS to steer all traffic away from the target RX queue (queue 15 in
+this example):
+
+::
+
+ ethtool --set-rxfh-indir eth1 equal 15
+
+
+The user must bind a dmabuf to any number of RX queues on a given NIC using
+the netlink API:
+
+::
+
+ /* Bind dmabuf to NIC RX queue 15 */
+ struct netdev_queue *queues;
+ queues = malloc(sizeof(*queues) * 1);
+
+ queues[0]._present.type = 1;
+ queues[0]._present.idx = 1;
+ queues[0].type = NETDEV_RX_QUEUE_TYPE_RX;
+ queues[0].idx = 15;
+
+ *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+
+ req = netdev_bind_rx_req_alloc();
+ netdev_bind_rx_req_set_ifindex(req, 1 /* ifindex */);
+ netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);
+ __netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+ rsp = netdev_bind_rx(*ys, req);
+
+ dmabuf_id = rsp->dmabuf_id;
+
+
+The netlink API returns a dmabuf_id: a unique ID that refers to the dmabuf
+that has been bound.
+
+Socket Setup
+------------
+
+The socket must be flow steered to the dmabuf-bound RX queue:
+
+::
+
+ ethtool -N eth1 flow-type tcp4 ... queue 15,
+
+
+Receiving data
+--------------
+
+The user application must signal to the kernel that it is capable of receiving
+devmem data by passing the MSG_SOCK_DEVMEM flag to recvmsg:
+
+::
+
+ ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
+
+Applications that do not specify the MSG_SOCK_DEVMEM flag will receive an EFAULT
+on devmem data.
+
+Devmem data is received directly into the dmabuf bound to the NIC in 'NIC
+Setup', and the kernel signals this to the user via the SCM_DEVMEM_* cmsgs:
+
+::
+
+ for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+ if (cm->cmsg_level != SOL_SOCKET ||
+ (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+ cm->cmsg_type != SCM_DEVMEM_LINEAR))
+ continue;
+
+ dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+
+ if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
+ /* Frag landed in dmabuf.
+ *
+ * dmabuf_cmsg->dmabuf_id is the dmabuf the
+ * frag landed on.
+ *
+ * dmabuf_cmsg->frag_offset is the offset into
+ * the dmabuf where the frag starts.
+ *
+ * dmabuf_cmsg->frag_size is the size of the
+ * frag.
+ *
+ * dmabuf_cmsg->frag_token is a token used to
+ * refer to this frag for later freeing.
+ */
+
+ struct dmabuf_token token;
+ token.token_start = dmabuf_cmsg->frag_token;
+ token.token_count = 1;
+ continue;
+ }
+
+ if (cm->cmsg_type == SCM_DEVMEM_LINEAR)
+ /* Frag landed in linear buffer.
+ *
+ * dmabuf_cmsg->frag_size is the size of the
+ * frag.
+ */
+ continue;
+
+ }
+
+Applications may receive 2 cmsgs:
+
+- SCM_DEVMEM_DMABUF: this indicates the fragment landed in the dmabuf indicated
+ by dmabuf_id.
+
+- SCM_DEVMEM_LINEAR: this indicates the fragment landed in the linear buffer.
+ This typically happens when the NIC is unable to split the packet at the
+ header boundary, such that part (or all) of the payload landed in host
+ memory.
+
+Applications may receive no SO_DEVMEM_* cmsgs. That indicates non-devmem,
+regular TCP data that landed on an RX queue not bound to a dmabuf.
+
+
+Freeing frags
+-------------
+
+Frags received via SCM_DEVMEM_DMABUF are pinned by the kernel while the user
+processes the frag. The user must return the frag to the kernel via
+SO_DEVMEM_DONTNEED:
+
+::
+
+ ret = setsockopt(client_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &token,
+ sizeof(token));
+
+The user must ensure the tokens are returned to the kernel in a timely manner.
+Failure to do so will exhaust the limited dmabuf memory that is bound to the RX
+queue and will lead to packet drops.
+
+
+Implementation & Caveats
+========================
+
+Unreadable skbs
+---------------
+
+Devmem payloads are inaccessible to the kernel processing the packets. This
+results in a few quirks for payloads of devmem skbs:
+
+- Loopback is not functional. Loopback relies on copying the payload, which is
+ not possible with devmem skbs.
+
+- Software checksum calculation fails.
+
+- tcpdump and BPF can't access devmem packet payloads.
+
+
+Testing
+=======
+
+More realistic example code can be found in the kernel source under
+tools/testing/selftests/net/ncdevmem.c
+
+ncdevmem is a devmem TCP netcat. It works very similarly to netcat, but
+receives data directly into a udmabuf.
+
+To run ncdevmem, you need to run it as a server on the machine under test, and
+you need to run netcat on a peer to provide the TX data.
+
+ncdevmem has a validation mode as well that expects a repeating pattern of
+incoming data and validates it as such:
+
+::
+
+ # On server:
+ ncdevmem -s <server IP> -c <client IP> -f eth1 -d 3 -n 0000:06:00.0 -l \
+ -p 5201 -v 7
+
+ # On client:
+ yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+ tr \\n \\0 | head -c 5G | nc <server IP> 5201 -p 5201
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:18

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 05/16] net: netdev netlink api to bind dma-buf to a net device

The API takes the dma-buf fd as input, and binds it to the netdevice. The
user can specify the RX queues to bind the dma-buf to.

Suggested-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

Changes in v1:
- Add rx-queue-type to distinguish rx from tx (Jakub)
- Return dma-buf ID from netlink API (David, Stan)

Changes in RFC-v3:
- Support binding multiple rx-queues

---
Documentation/netlink/specs/netdev.yaml | 52 +++++++++++++++++++++++++
include/uapi/linux/netdev.h | 19 +++++++++
net/core/netdev-genl-gen.c | 19 +++++++++
net/core/netdev-genl-gen.h | 2 +
net/core/netdev-genl.c | 6 +++
tools/include/uapi/linux/netdev.h | 19 +++++++++
6 files changed, 117 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index f2c76d103bd8..df6a11d47006 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -260,6 +260,45 @@ attribute-sets:
name: napi-id
doc: ID of the NAPI instance which services this queue.
type: u32
+ -
+ name: queue-dmabuf
+ attributes:
+ -
+ name: type
+ doc: rx or tx queue
+ type: u8
+ enum: queue-type
+ -
+ name: idx
+ doc: queue index
+ type: u32
+
+ -
+ name: bind-dmabuf
+ attributes:
+ -
+ name: ifindex
+ doc: netdev ifindex to bind the dma-buf to.
+ type: u32
+ checks:
+ min: 1
+ -
+ name: queues
+ doc: receive queues to bind the dma-buf to.
+ type: nest
+ nested-attributes: queue-dmabuf
+ multi-attr: true
+ -
+ name: dmabuf-fd
+ doc: dmabuf file descriptor to bind.
+ type: u32
+ -
+ name: dmabuf-id
+ doc: id of the dmabuf binding
+ type: u32
+ checks:
+ min: 1
+

operations:
list:
@@ -382,6 +421,19 @@ operations:
attributes:
- ifindex
reply: *queue-get-op
+ -
+ name: bind-rx
+ doc: Bind dmabuf to netdev
+ attribute-set: bind-dmabuf
+ do:
+ request:
+ attributes:
+ - ifindex
+ - dmabuf-fd
+ - queues
+ reply:
+ attributes:
+ - dmabuf-id
-
name: napi-get
doc: Get information about NAPI instances configured on the system.
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 424c5e28f495..35d201dc4b05 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -129,6 +129,24 @@ enum {
NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};

+enum {
+ NETDEV_A_QUEUE_DMABUF_TYPE = 1,
+ NETDEV_A_QUEUE_DMABUF_IDX,
+
+ __NETDEV_A_QUEUE_DMABUF_MAX,
+ NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1)
+};
+
+enum {
+ NETDEV_A_BIND_DMABUF_IFINDEX = 1,
+ NETDEV_A_BIND_DMABUF_QUEUES,
+ NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ NETDEV_A_BIND_DMABUF_DMABUF_ID,
+
+ __NETDEV_A_BIND_DMABUF_MAX,
+ NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1)
+};
+
enum {
NETDEV_CMD_DEV_GET = 1,
NETDEV_CMD_DEV_ADD_NTF,
@@ -140,6 +158,7 @@ enum {
NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
NETDEV_CMD_PAGE_POOL_STATS_GET,
NETDEV_CMD_QUEUE_GET,
+ NETDEV_CMD_BIND_RX,
NETDEV_CMD_NAPI_GET,

__NETDEV_CMD_MAX,
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index be7f2ebd61b2..3384b1ae3f40 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -27,6 +27,11 @@ const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFIND
[NETDEV_A_PAGE_POOL_IFINDEX] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_page_pool_ifindex_range),
};

+const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1] = {
+ [NETDEV_A_QUEUE_DMABUF_TYPE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [NETDEV_A_QUEUE_DMABUF_IDX] = { .type = NLA_U32, },
+};
+
/* NETDEV_CMD_DEV_GET - do */
static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1] = {
[NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
@@ -58,6 +63,13 @@ static const struct nla_policy netdev_queue_get_dump_nl_policy[NETDEV_A_QUEUE_IF
[NETDEV_A_QUEUE_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
};

+/* NETDEV_CMD_BIND_RX - do */
+static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_BIND_DMABUF_DMABUF_FD + 1] = {
+ [NETDEV_A_BIND_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+ [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .type = NLA_U32, },
+ [NETDEV_A_BIND_DMABUF_QUEUES] = NLA_POLICY_NESTED(netdev_queue_dmabuf_nl_policy),
+};
+
/* NETDEV_CMD_NAPI_GET - do */
static const struct nla_policy netdev_napi_get_do_nl_policy[NETDEV_A_NAPI_ID + 1] = {
[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
@@ -124,6 +136,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
.maxattr = NETDEV_A_QUEUE_IFINDEX,
.flags = GENL_CMD_CAP_DUMP,
},
+ {
+ .cmd = NETDEV_CMD_BIND_RX,
+ .doit = netdev_nl_bind_rx_doit,
+ .policy = netdev_bind_rx_nl_policy,
+ .maxattr = NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ .flags = GENL_CMD_CAP_DO,
+ },
{
.cmd = NETDEV_CMD_NAPI_GET,
.doit = netdev_nl_napi_get_doit,
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index a47f2bcbe4fa..a7ede514eccd 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -13,6 +13,7 @@

/* Common nested types */
extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
+extern const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1];

int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
@@ -26,6 +27,7 @@ int netdev_nl_page_pool_stats_get_dumpit(struct sk_buff *skb,
int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_queue_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index fd98936da3ae..0ed292d87ae0 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -469,6 +469,12 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
return skb->len;
}

+/* Stub */
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ return 0;
+}
+
static int netdev_genl_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 424c5e28f495..35d201dc4b05 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -129,6 +129,24 @@ enum {
NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};

+enum {
+ NETDEV_A_QUEUE_DMABUF_TYPE = 1,
+ NETDEV_A_QUEUE_DMABUF_IDX,
+
+ __NETDEV_A_QUEUE_DMABUF_MAX,
+ NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1)
+};
+
+enum {
+ NETDEV_A_BIND_DMABUF_IFINDEX = 1,
+ NETDEV_A_BIND_DMABUF_QUEUES,
+ NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ NETDEV_A_BIND_DMABUF_DMABUF_ID,
+
+ __NETDEV_A_BIND_DMABUF_MAX,
+ NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1)
+};
+
enum {
NETDEV_CMD_DEV_GET = 1,
NETDEV_CMD_DEV_ADD_NTF,
@@ -140,6 +158,7 @@ enum {
NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
NETDEV_CMD_PAGE_POOL_STATS_GET,
NETDEV_CMD_QUEUE_GET,
+ NETDEV_CMD_BIND_RX,
NETDEV_CMD_NAPI_GET,

__NETDEV_CMD_MAX,
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:18

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

Implement a memory provider that allocates dmabuf devmem page_pool_iovs.

The provider receives a reference to the struct netdev_dmabuf_binding
via the pool->mp_priv pointer. The driver needs to set this pointer for
the provider in the page_pool_params.

The provider obtains a reference on the netdev_dmabuf_binding, which
guarantees the binding and the underlying mapping remain alive until
the provider is destroyed.

Usage of PP_FLAG_DMA_MAP is required for this memory provider so that
the page_pool can provide the driver with the dma-addrs of the devmem.

Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.
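
As a rough sketch (not part of this patch), a driver opting into the
provider would create its page_pool along these lines; the values here
are illustrative, with the rx queue carrying the dmabuf binding set up
via the netlink API earlier in the series:

    struct page_pool_params pp_params = {
            .flags     = PP_FLAG_DMA_MAP,   /* required by this provider */
            .pool_size = ring_size,
            .dev       = &pdev->dev,
            .dma_dir   = DMA_FROM_DEVICE,
            .queue     = rxq,               /* rx queue holding the binding */
    };
    struct page_pool *pool;

    pool = page_pool_create(&pp_params);
    /* page_pool_init() picks up the binding from the queue and selects
     * dmabuf_devmem_ops as the pool's memory provider.
     */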

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v1:
- static_branch check in page_is_page_pool_iov() (Willem & Paolo).
- PP_DEVMEM -> PP_IOV (David).
- Require PP_FLAG_DMA_MAP (Jakub).

---
include/net/page_pool/helpers.h | 47 +++++++++++++++++
include/net/page_pool/types.h | 9 ++++
net/core/page_pool.c | 89 ++++++++++++++++++++++++++++++++-
3 files changed, 144 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 8bfc2d43efd4..00197f14aa87 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -53,6 +53,8 @@
#define _NET_PAGE_POOL_HELPERS_H

#include <net/page_pool/types.h>
+#include <net/net_debug.h>
+#include <net/devmem.h>

#ifdef CONFIG_PAGE_POOL_STATS
/* Deprecated driver-facing API, use netlink instead */
@@ -92,6 +94,11 @@ static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov)
return ppiov - page_pool_iov_owner(ppiov)->ppiovs;
}

+static inline u32 page_pool_iov_binding_id(const struct page_pool_iov *ppiov)
+{
+ return page_pool_iov_owner(ppiov)->binding->id;
+}
+
static inline dma_addr_t
page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
{
@@ -107,6 +114,46 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov)
return page_pool_iov_owner(ppiov)->binding;
}

+static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov)
+{
+ return refcount_read(&ppiov->refcount);
+}
+
+static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
+ unsigned int count)
+{
+ refcount_add(count, &ppiov->refcount);
+}
+
+void __page_pool_iov_free(struct page_pool_iov *ppiov);
+
+static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
+ unsigned int count)
+{
+ if (!refcount_sub_and_test(count, &ppiov->refcount))
+ return;
+
+ __page_pool_iov_free(ppiov);
+}
+
+/* page pool mm helpers */
+
+DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);
+static inline bool page_is_page_pool_iov(const struct page *page)
+{
+ return static_branch_unlikely(&page_pool_mem_providers) &&
+ (unsigned long)page & PP_IOV;
+}
+
+static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
+{
+ if (page_is_page_pool_iov(page))
+ return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
+
+ DEBUG_NET_WARN_ON_ONCE(true);
+ return NULL;
+}
+
/**
* page_pool_dev_alloc_pages() - allocate a page.
* @pool: pool from which to allocate
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 44faee7a7b02..136930a238de 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -134,8 +134,15 @@ struct memory_provider_ops {
bool (*release_page)(struct page_pool *pool, struct page *page);
};

+extern const struct memory_provider_ops dmabuf_devmem_ops;
+
/* page_pool_iov support */

+/* We overload the LSB of the struct page pointer to indicate whether it's
+ * a page or page_pool_iov.
+ */
+#define PP_IOV 0x01UL
+
/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
* entry from the dmabuf is inserted into the genpool as a chunk, and needs
* this owner struct to keep track of some metadata necessary to create
@@ -159,6 +166,8 @@ struct page_pool_iov {
struct dmabuf_genpool_chunk_owner *owner;

refcount_t refcount;
+
+ struct page_pool *pp;
};

struct page_pool {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f5c84d2a4510..423c88564a00 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -12,6 +12,7 @@

#include <net/page_pool/helpers.h>
#include <net/xdp.h>
+#include <net/netdev_rx_queue.h>

#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
@@ -20,12 +21,15 @@
#include <linux/poison.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>
+#include <linux/genalloc.h>
+#include <net/devmem.h>

#include <trace/events/page_pool.h>

#include "page_pool_priv.h"

-static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+EXPORT_SYMBOL(page_pool_mem_providers);

#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)
@@ -175,6 +179,7 @@ static void page_pool_producer_unlock(struct page_pool *pool,
static int page_pool_init(struct page_pool *pool,
const struct page_pool_params *params)
{
+ struct netdev_dmabuf_binding *binding = NULL;
unsigned int ring_qsize = 1024; /* Default */
int err;

@@ -237,6 +242,14 @@ static int page_pool_init(struct page_pool *pool,
/* Driver calling page_pool_create() also call page_pool_destroy() */
refcount_set(&pool->user_cnt, 1);

+ if (pool->p.queue)
+ binding = READ_ONCE(pool->p.queue->binding);
+
+ if (binding) {
+ pool->mp_ops = &dmabuf_devmem_ops;
+ pool->mp_priv = binding;
+ }
+
if (pool->mp_ops) {
err = pool->mp_ops->init(pool);
if (err) {
@@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
}
}
EXPORT_SYMBOL(page_pool_update_nid);
+
+void __page_pool_iov_free(struct page_pool_iov *ppiov)
+{
+ if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
+ return;
+
+ netdev_free_dmabuf(ppiov);
+}
+EXPORT_SYMBOL_GPL(__page_pool_iov_free);
+
+/*** "Dmabuf devmem memory provider" ***/
+
+static int mp_dmabuf_devmem_init(struct page_pool *pool)
+{
+ struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+ if (!binding)
+ return -EINVAL;
+
+ if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+ return -EOPNOTSUPP;
+
+ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ return -EOPNOTSUPP;
+
+ netdev_dmabuf_binding_get(binding);
+ return 0;
+}
+
+static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
+ gfp_t gfp)
+{
+ struct netdev_dmabuf_binding *binding = pool->mp_priv;
+ struct page_pool_iov *ppiov;
+
+ ppiov = netdev_alloc_dmabuf(binding);
+ if (!ppiov)
+ return NULL;
+
+ ppiov->pp = pool;
+ pool->pages_state_hold_cnt++;
+ trace_page_pool_state_hold(pool, (struct page *)ppiov,
+ pool->pages_state_hold_cnt);
+ return (struct page *)((unsigned long)ppiov | PP_IOV);
+}
+
+static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
+{
+ struct netdev_dmabuf_binding *binding = pool->mp_priv;
+
+ netdev_dmabuf_binding_put(binding);
+}
+
+static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
+ struct page *page)
+{
+ struct page_pool_iov *ppiov;
+
+ if (WARN_ON_ONCE(!page_is_page_pool_iov(page)))
+ return false;
+
+ ppiov = page_to_page_pool_iov(page);
+ page_pool_iov_put_many(ppiov, 1);
+ /* We don't want the page pool put_page()ing our page_pool_iovs. */
+ return false;
+}
+
+const struct memory_provider_ops dmabuf_devmem_ops = {
+ .init = mp_dmabuf_devmem_init,
+ .destroy = mp_dmabuf_devmem_destroy,
+ .alloc_pages = mp_dmabuf_devmem_alloc_pages,
+ .release_page = mp_dmabuf_devmem_release_page,
+};
+EXPORT_SYMBOL(dmabuf_devmem_ops);
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:29

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 11/16] net: support non paged skb frags

Make skb_frag_page() return NULL in the case where the frag is not backed
by a page, and fix its relevant callers to handle this case.

Correctly handle skb_frag refcounting in the page_pool_iovs case.
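
The caller-side pattern after this change (as in the hunks below) is
roughly the following; the error handling shown is illustrative and
each call site picks whatever bail-out is appropriate:

    struct page *page = skb_frag_page(frag);

    if (!page) {
            /* No host page backs this frag (it is a page_pool_iov,
             * e.g. dmabuf devmem), so the payload cannot be mapped
             * or touched by the CPU here.
             */
            return -EFAULT;
    }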

Signed-off-by: Mina Almasry <[email protected]>


---

Changes in v1:
- Fix illegal_highdma() (Yunsheng).
- Rework napi_pp_put_page() slightly to reduce code churn (Willem).

---
include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++-------
net/core/dev.c | 3 ++-
net/core/gro.c | 2 +-
net/core/skbuff.c | 3 +++
net/ipv4/tcp.c | 3 +++
5 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..851f448d2181 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,8 @@
#endif
#include <net/net_debug.h>
#include <net/dropreason-core.h>
+#include <net/page_pool/types.h>
+#include <net/page_pool/helpers.h>

/**
* DOC: skb checksums
@@ -3414,15 +3416,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
fragto->bv_offset = fragfrom->bv_offset;
}

+/* Returns true if the skb_frag contains a page_pool_iov. */
+static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag)
+{
+ return page_is_page_pool_iov(frag->bv_page);
+}
+
/**
* skb_frag_page - retrieve the page referred to by a paged fragment
* @frag: the paged fragment
*
- * Returns the &struct page associated with @frag.
+ * Returns the &struct page associated with @frag. Returns NULL if this frag
+ * has no associated page.
*/
static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
- return frag->bv_page;
+ if (!page_is_page_pool_iov(frag->bv_page))
+ return frag->bv_page;
+
+ return NULL;
+}
+
+/**
+ * skb_frag_page_pool_iov - retrieve the page_pool_iov referred to by fragment
+ * @frag: the fragment
+ *
+ * Returns the &struct page_pool_iov associated with @frag. Returns NULL if this
+ * frag has no associated page_pool_iov.
+ */
+static inline struct page_pool_iov *
+skb_frag_page_pool_iov(const skb_frag_t *frag)
+{
+ return page_to_page_pool_iov(frag->bv_page);
}

/**
@@ -3433,7 +3458,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
*/
static inline void __skb_frag_ref(skb_frag_t *frag)
{
- get_page(skb_frag_page(frag));
+ page_pool_page_get_many(frag->bv_page, 1);
}

/**
@@ -3453,13 +3478,13 @@ bool napi_pp_put_page(struct page *page, bool napi_safe);
static inline void
napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
{
- struct page *page = skb_frag_page(frag);
-
#ifdef CONFIG_PAGE_POOL
- if (recycle && napi_pp_put_page(page, napi_safe))
+ if (recycle && napi_pp_put_page(frag->bv_page, napi_safe))
return;
+ page_pool_page_put_many(frag->bv_page, 1);
+#else
+ put_page(skb_frag_page(frag));
#endif
- put_page(page);
}

/**
@@ -3499,6 +3524,9 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f)
*/
static inline void *skb_frag_address(const skb_frag_t *frag)
{
+ if (!skb_frag_page(frag))
+ return NULL;
+
return page_address(skb_frag_page(frag)) + skb_frag_off(frag);
}

diff --git a/net/core/dev.c b/net/core/dev.c
index 30667e4c3b95..1ae9257df441 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3709,8 +3709,9 @@ static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
if (!(dev->features & NETIF_F_HIGHDMA)) {
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct page *page = skb_frag_page(frag);

- if (PageHighMem(skb_frag_page(frag)))
+ if (page && PageHighMem(page))
return 1;
}
}
diff --git a/net/core/gro.c b/net/core/gro.c
index 0759277dc14e..42d7f6755f32 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -376,7 +376,7 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
NAPI_GRO_CB(skb)->frag0 = NULL;
NAPI_GRO_CB(skb)->frag0_len = 0;

- if (!skb_headlen(skb) && pinfo->nr_frags &&
+ if (!skb_headlen(skb) && pinfo->nr_frags && skb_frag_page(frag0) &&
!PageHighMem(skb_frag_page(frag0)) &&
(!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 07f802f1adf1..2ce64f57a0f6 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2999,6 +2999,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe,
for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) {
const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];

+ if (WARN_ON_ONCE(!skb_frag_page(f)))
+ return false;
+
if (__splice_segment(skb_frag_page(f),
skb_frag_off(f), skb_frag_size(f),
offset, len, spd, false, sk, pipe))
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 70a1bafbefba..e22681c4bfac 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2157,6 +2157,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
break;
}
page = skb_frag_page(frags);
+ if (WARN_ON_ONCE(!page))
+ break;
+
prefetchw(page);
pages[pages_to_map++] = page;
length += PAGE_SIZE;
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:54:37

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 12/16] net: add support for skbs with unreadable frags

For device memory TCP, we expect the skb headers to be available in host
memory for access, and we expect the skb frags to be in device memory
and inaccessible to the host. We expect there to be no mixing and
matching of device memory frags (inaccessible) with host memory frags
(accessible) in the same skb.

Add a skb->dmabuf flag which indicates whether the frags in this skb
are device memory frags or not.

__skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs,
and marks the skb as skb->dmabuf accordingly.

Add checks through the network stack to avoid accessing the frags of
devmem skbs and avoid coalescing devmem skbs with non-devmem skbs.

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>


---

Changes in v1:
- Rename devmem -> dmabuf (David).
- Flip skb_frags_not_readable (Jakub).

---
include/linux/skbuff.h | 14 +++++++-
include/net/tcp.h | 5 +--
net/core/datagram.c | 6 ++++
net/core/gro.c | 5 ++-
net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------
net/ipv4/tcp.c | 3 ++
net/ipv4/tcp_input.c | 13 +++++--
net/ipv4/tcp_output.c | 5 ++-
net/packet/af_packet.c | 4 +--
9 files changed, 112 insertions(+), 20 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 851f448d2181..61de32ab04ea 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -817,6 +817,8 @@ typedef unsigned char *sk_buff_data_t;
* @csum_level: indicates the number of consecutive checksums found in
* the packet minus one that have been verified as
* CHECKSUM_UNNECESSARY (max 3)
+ * @dmabuf: indicates that all the fragments in this skb are backed by
+ * dmabuf.
* @dst_pending_confirm: need to confirm neighbour
* @decrypted: Decrypted SKB
* @slow_gro: state present at GRO time, slower prepare step required
@@ -1003,7 +1005,7 @@ struct sk_buff {
#if IS_ENABLED(CONFIG_IP_SCTP)
__u8 csum_not_inet:1;
#endif
-
+ __u8 dmabuf:1;
#if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
__u16 tc_index; /* traffic control index */
#endif
@@ -1778,6 +1780,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb)
__skb_zcopy_downgrade_managed(skb);
}

+/* Return true if frags in this skb are readable by the host. */
+static inline bool skb_frags_readable(const struct sk_buff *skb)
+{
+ return !skb->dmabuf;
+}
+
static inline void skb_mark_not_on_list(struct sk_buff *skb)
{
skb->next = NULL;
@@ -2480,6 +2488,10 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
struct page *page, int off, int size)
{
__skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size);
+ if (page_is_page_pool_iov(page)) {
+ skb->dmabuf = true;
+ return;
+ }

/* Propagate page pfmemalloc to the skb if we can. The problem is
* that not all callers have unique ownership of the page but rely
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 973555cb1d3f..0fbf198bdb55 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1017,7 +1017,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb)

static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
{
- return likely(!TCP_SKB_CB(skb)->eor);
+ return likely(!TCP_SKB_CB(skb)->eor && skb_frags_readable(skb));
}

static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
@@ -1025,7 +1025,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
{
return likely(tcp_skb_can_collapse_to(to) &&
mptcp_skb_can_collapse(to, from) &&
- skb_pure_zcopy_same(to, from));
+ skb_pure_zcopy_same(to, from) &&
+ skb_frags_readable(to) == skb_frags_readable(from));
}

/* Events passed to congestion control interface */
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 103d46fa0eeb..f28472ddbaa4 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -426,6 +426,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
return 0;
}

+ if (!skb_frags_readable(skb))
+ goto short_copy;
+
/* Copy paged appendix. Hmm... why does this look so complicated? */
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
@@ -638,6 +641,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
if (msg && msg->msg_ubuf && msg->sg_from_iter)
return msg->sg_from_iter(sk, skb, from, length);

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
frag = skb_shinfo(skb)->nr_frags;

while (length && iov_iter_count(from)) {
diff --git a/net/core/gro.c b/net/core/gro.c
index 42d7f6755f32..26df48f1b355 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
{
struct skb_shared_info *pinfo = skb_shinfo(skb);

+ if (WARN_ON_ONCE(!skb_frags_readable(skb)))
+ return;
+
BUG_ON(skb->end - skb->tail < grow);

memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
@@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb)
{
int grow = skb_gro_offset(skb) - skb_headlen(skb);

- if (grow > 0)
+ if (grow > 0 && skb_frags_readable(skb))
gro_pull_from_frag0(skb, grow);
}

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2ce64f57a0f6..50b1b7c2ef7b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1235,6 +1235,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
struct page *p;
u8 *vaddr;

+ if (skb_frag_is_page_pool_iov(frag)) {
+ printk("%sskb frag %d: not readable\n", level, i);
+ len -= frag->bv_len;
+ if (!len)
+ break;
+ continue;
+ }
+
skb_frag_foreach_page(frag, skb_frag_off(frag),
skb_frag_size(frag), p, p_off, p_len,
copied) {
@@ -1812,6 +1820,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
if (skb_shared(skb) || skb_unclone(skb, gfp_mask))
return -EINVAL;

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
if (!num_frags)
goto release;

@@ -1982,8 +1993,12 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
{
int headerlen = skb_headroom(skb);
unsigned int size = skb_end_offset(skb) + skb->data_len;
- struct sk_buff *n = __alloc_skb(size, gfp_mask,
- skb_alloc_rx_flag(skb), NUMA_NO_NODE);
+ struct sk_buff *n;
+
+ if (!skb_frags_readable(skb))
+ return NULL;
+
+ n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), NUMA_NO_NODE);

if (!n)
return NULL;
@@ -2309,14 +2324,16 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
int newheadroom, int newtailroom,
gfp_t gfp_mask)
{
- /*
- * Allocate the copy buffer
- */
- struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom,
- gfp_mask, skb_alloc_rx_flag(skb),
- NUMA_NO_NODE);
int oldheadroom = skb_headroom(skb);
int head_copy_len, head_copy_off;
+ struct sk_buff *n;
+
+ if (!skb_frags_readable(skb))
+ return NULL;
+
+ /* Allocate the copy buffer */
+ n = __alloc_skb(newheadroom + skb->len + newtailroom, gfp_mask,
+ skb_alloc_rx_flag(skb), NUMA_NO_NODE);

if (!n)
return NULL;
@@ -2655,6 +2672,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
*/
int i, k, eat = (skb->tail + delta) - skb->end;

+ if (!skb_frags_readable(skb))
+ return NULL;
+
if (eat > 0 || skb_cloned(skb)) {
if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0,
GFP_ATOMIC))
@@ -2808,6 +2828,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len)
to += copy;
}

+ if (!skb_frags_readable(skb))
+ goto fault;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
skb_frag_t *f = &skb_shinfo(skb)->frags[i];
@@ -2996,6 +3019,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe,
/*
* then map the fragments
*/
+ if (!skb_frags_readable(skb))
+ return false;
+
for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) {
const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];

@@ -3219,6 +3245,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len)
from += copy;
}

+ if (!skb_frags_readable(skb))
+ goto fault;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
int end;
@@ -3298,6 +3327,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
pos = copy;
}

+ if (!skb_frags_readable(skb))
+ return 0;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
@@ -3398,6 +3430,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
pos = copy;
}

+ if (!skb_frags_readable(skb))
+ return 0;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;

@@ -3888,7 +3923,9 @@ static inline void skb_split_inside_header(struct sk_buff *skb,
skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i];

skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags;
+ skb1->dmabuf = skb->dmabuf;
skb_shinfo(skb)->nr_frags = 0;
+ skb->dmabuf = 0;
skb1->data_len = skb->data_len;
skb1->len += skb1->data_len;
skb->data_len = 0;
@@ -3902,6 +3939,7 @@ static inline void skb_split_no_header(struct sk_buff *skb,
{
int i, k = 0;
const int nfrags = skb_shinfo(skb)->nr_frags;
+ const int dmabuf = skb->dmabuf;

skb_shinfo(skb)->nr_frags = 0;
skb1->len = skb1->data_len = skb->len - len;
@@ -3935,6 +3973,16 @@ static inline void skb_split_no_header(struct sk_buff *skb,
pos += size;
}
skb_shinfo(skb1)->nr_frags = k;
+
+ if (skb_shinfo(skb)->nr_frags)
+ skb->dmabuf = dmabuf;
+ else
+ skb->dmabuf = 0;
+
+ if (skb_shinfo(skb1)->nr_frags)
+ skb1->dmabuf = dmabuf;
+ else
+ skb1->dmabuf = 0;
}

/**
@@ -4170,6 +4218,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
return block_limit - abs_offset;
}

+ if (!skb_frags_readable(st->cur_skb))
+ return 0;
+
if (st->frag_idx == 0 && !st->frag_data)
st->stepped_offset += skb_headlen(st->cur_skb);

@@ -5784,7 +5835,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
(from->pp_recycle && skb_cloned(from)))
return false;

- if (len <= skb_tailroom(to)) {
+ if (skb_frags_readable(from) != skb_frags_readable(to))
+ return false;
+
+ if (len <= skb_tailroom(to) && skb_frags_readable(from)) {
if (len)
BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
*delta_truesize = 0;
@@ -5959,6 +6013,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len)
if (!pskb_may_pull(skb, write_len))
return -ENOMEM;

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
return 0;

@@ -6613,7 +6670,7 @@ void skb_condense(struct sk_buff *skb)
{
if (skb->data_len) {
if (skb->data_len > skb->end - skb->tail ||
- skb_cloned(skb))
+ skb_cloned(skb) || !skb_frags_readable(skb))
return;

/* Nice, we can free page frag(s) right now */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index e22681c4bfac..5a3135e93d3d 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2140,6 +2140,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
skb = tcp_recv_skb(sk, seq, &offset);
}

+ if (!skb_frags_readable(skb))
+ break;
+
if (TCP_SKB_CB(skb)->has_rxtstamp) {
tcp_update_recv_tstamps(skb, tss);
zc->msg_flags |= TCP_CMSG_TS;
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 0548f0c12155..a47f98187656 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5309,6 +5309,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) {
n = tcp_skb_next(skb, list);

+ if (!skb_frags_readable(skb))
+ goto skip_this;
+
/* No new bits? It is possible on ofo queue. */
if (!before(start, TCP_SKB_CB(skb)->end_seq)) {
skb = tcp_collapse_one(sk, skb, list, root);
@@ -5329,17 +5332,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
break;
}

- if (n && n != tail && mptcp_skb_can_collapse(skb, n) &&
+ if (n && n != tail && skb_frags_readable(n) &&
+ mptcp_skb_can_collapse(skb, n) &&
TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) {
end_of_skbs = false;
break;
}

+skip_this:
/* Decided to skip this, advance start seq. */
start = TCP_SKB_CB(skb)->end_seq;
}
if (end_of_skbs ||
- (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) ||
+ !skb_frags_readable(skb))
return;

__skb_queue_head_init(&tmp);
@@ -5383,7 +5389,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
if (!skb ||
skb == tail ||
!mptcp_skb_can_collapse(nskb, skb) ||
- (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) ||
+ !skb_frags_readable(skb))
goto end;
#ifdef CONFIG_TLS_DEVICE
if (skb->decrypted != nskb->decrypted)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index eb13a55d660c..c8c0a1cbaca5 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2343,7 +2343,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)

if (unlikely(TCP_SKB_CB(skb)->eor) ||
tcp_has_tx_tstamp(skb) ||
- !skb_pure_zcopy_same(skb, next))
+ !skb_pure_zcopy_same(skb, next) ||
+ skb_frags_readable(skb) != skb_frags_readable(next))
return false;

len -= skb->len;
@@ -3227,6 +3228,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb)
return false;
if (skb_cloned(skb))
return false;
+ if (!skb_frags_readable(skb))
+ return false;
/* Some heuristics for collapsing over SACK'd could be invented */
if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED)
return false;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index f92edba4c40f..33988106f237 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2156,7 +2156,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
}
}

- snaplen = skb->len;
+ snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb);

res = run_filter(skb, sk, snaplen);
if (!res)
@@ -2276,7 +2276,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
}
}

- snaplen = skb->len;
+ snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb);

res = run_filter(skb, sk, snaplen);
if (!res)
--
2.43.0.472.g3155946c3a-goog

2023-12-08 00:55:09

by Mina Almasry

[permalink] [raw]
Subject: [net-next v1 16/16] selftests: add ncdevmem, netcat for devmem TCP

ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it
sends and receives data using the devmem TCP APIs. It uses udmabuf as
the dmabuf provider. It is compatible with a regular netcat running on
the peer, or an ncdevmem running on the peer.

In addition to normal netcat support, ncdevmem has a validation mode,
where it sends a specific pattern and validates this pattern on the
receiver side to ensure data integrity.
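
As a rough usage sketch (the flags mirror the usage comment embedded in
the source below; the interface name, ifindex, PCI address and IPs are
placeholders for a particular test setup):

  On the server (receives into the bound udmabuf, validates the pattern):
  ncdevmem -l -f eth1 -d 3 -n 0000:06:00.0 -s 192.168.1.4 -c 192.168.1.2 \
           -p 5201 -v 7

  On the client (plain netcat sending the repeating pattern):
  yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | tr \\n \\0 | \
      head -c 5G | nc 192.168.1.4 5201 -p 5201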

Suggested-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

Changes in v1:
- Many more general cleanups (Willem).
- Removed driver reset (Jakub).
- Removed hardcoded if index (Paolo).

RFC v2:
- General cleanups (Willem).

---
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 489 +++++++++++++++++++++++++
3 files changed, 495 insertions(+)
create mode 100644 tools/testing/selftests/net/ncdevmem.c

diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore
index 2f9d378edec3..b644dbae58b7 100644
--- a/tools/testing/selftests/net/.gitignore
+++ b/tools/testing/selftests/net/.gitignore
@@ -17,6 +17,7 @@ ipv6_flowlabel
ipv6_flowlabel_mgr
log.txt
msg_zerocopy
+ncdevmem
nettest
psock_fanout
psock_snd
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index 14bd68da7466..d7a66563ffe7 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -5,6 +5,10 @@ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
CFLAGS += -I../../../../usr/include/ $(KHDR_INCLUDES)
# Additional include paths needed by kselftest.h
CFLAGS += -I../
+CFLAGS += -I../../../net/ynl/generated/
+CFLAGS += -I../../../net/ynl/lib/
+
+LDLIBS += ../../../net/ynl/lib/ynl.a ../../../net/ynl/generated/protos.a

TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh \
rtnetlink.sh xfrm_policy.sh test_blackhole_dev.sh
@@ -92,6 +96,7 @@ TEST_PROGS += test_vxlan_nolocalbypass.sh
TEST_PROGS += test_bridge_backup_port.sh
TEST_PROGS += fdb_flush.sh
TEST_PROGS += fq_band_pktlimit.sh
+TEST_GEN_FILES += ncdevmem

TEST_FILES := settings

diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c
new file mode 100644
index 000000000000..7fbeee02b9a2
--- /dev/null
+++ b/tools/testing/selftests/net/ncdevmem.c
@@ -0,0 +1,489 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#define __EXPORTED_HEADERS__
+
+#include <linux/uio.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <stdbool.h>
+#include <string.h>
+#include <errno.h>
+#define __iovec_defined
+#include <fcntl.h>
+#include <malloc.h>
+#include <error.h>
+
+#include <arpa/inet.h>
+#include <sys/socket.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+
+#include <linux/memfd.h>
+#include <linux/if.h>
+#include <linux/dma-buf.h>
+#include <linux/udmabuf.h>
+#include <libmnl/libmnl.h>
+#include <linux/types.h>
+#include <linux/netlink.h>
+#include <linux/genetlink.h>
+#include <linux/netdev.h>
+#include <time.h>
+
+#include "netdev-user.h"
+#include <ynl.h>
+
+#define PAGE_SHIFT 12
+#define TEST_PREFIX "ncdevmem"
+#define NUM_PAGES 16000
+
+#ifndef MSG_SOCK_DEVMEM
+#define MSG_SOCK_DEVMEM 0x2000000
+#endif
+
+/*
+ * tcpdevmem netcat. Works similarly to netcat but does device memory TCP
+ * instead of regular TCP. Uses udmabuf to mock a dmabuf provider.
+ *
+ * Usage:
+ *
+ * On server:
+ * ncdevmem -s <server IP> -c <client IP> -f eth1 -d 3 -n 0000:06:00.0 -l \
+ * -p 5201 -v 7
+ *
+ * On client:
+ * yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+ * tr \\n \\0 | \
+ * head -c 5G | \
+ * nc <server IP> 5201 -p 5201
+ *
+ * Note this is compatible with regular netcat. i.e. the sender or receiver can
+ * be replaced with regular netcat to test the RX or TX path in isolation.
+ */
+
+static char *server_ip = "192.168.1.4";
+static char *client_ip = "192.168.1.2";
+static char *port = "5201";
+static size_t do_validation;
+static int queue_num = 15;
+static char *ifname = "eth1";
+static unsigned int ifindex = 3;
+static char *nic_pci_addr = "0000:06:00.0";
+static unsigned int iterations;
+static unsigned int dmabuf_id;
+
+void print_bytes(void *ptr, size_t size)
+{
+ unsigned char *p = ptr;
+ int i;
+
+ for (i = 0; i < size; i++)
+ printf("%02hhX ", p[i]);
+ printf("\n");
+}
+
+void print_nonzero_bytes(void *ptr, size_t size)
+{
+ unsigned char *p = ptr;
+ unsigned int i;
+
+ for (i = 0; i < size; i++)
+ putchar(p[i]);
+ printf("\n");
+}
+
+void validate_buffer(void *line, size_t size)
+{
+ static unsigned char seed = 1;
+ unsigned char *ptr = line;
+ int errors = 0;
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ if (ptr[i] != seed) {
+ fprintf(stderr,
+ "Failed validation: expected=%u, actual=%u, index=%lu\n",
+ seed, ptr[i], i);
+ errors++;
+ if (errors > 20)
+ error(1, 0, "validation failed.");
+ }
+ seed++;
+ if (seed == do_validation)
+ seed = 0;
+ }
+
+ fprintf(stdout, "Validated buffer\n");
+}
+
+static void reset_flow_steering(void)
+{
+ char command[256];
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off",
+ "eth1");
+ system(command);
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on",
+ "eth1");
+ system(command);
+}
+
+static void configure_flow_steering(void)
+{
+ char command[256];
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command),
+ "sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d",
+ ifname, client_ip, server_ip, port, port, queue_num);
+ system(command);
+}
+
+static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
+ struct netdev_queue_dmabuf *queues,
+ unsigned int n_queue_index, struct ynl_sock **ys)
+{
+ struct netdev_bind_rx_req *req = NULL;
+ struct netdev_bind_rx_rsp *rsp = NULL;
+ struct ynl_error yerr;
+
+ *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+ if (!*ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return -1;
+ }
+
+ req = netdev_bind_rx_req_alloc();
+ netdev_bind_rx_req_set_ifindex(req, ifindex);
+ netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);
+ __netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+ rsp = netdev_bind_rx(*ys, req);
+ if (!rsp) {
+ perror("netdev_bind_rx");
+ goto err_close;
+ }
+
+ if (!rsp->_present.dmabuf_id) {
+ perror("dmabuf_id not present");
+ goto err_close;
+ }
+
+ printf("got dmabuf id=%d\n", rsp->dmabuf_id);
+ dmabuf_id = rsp->dmabuf_id;
+
+ netdev_bind_rx_req_free(req);
+ netdev_bind_rx_rsp_free(rsp);
+
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
+ netdev_bind_rx_req_free(req);
+ ynl_sock_destroy(*ys);
+ return -1;
+}
+
+static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size)
+{
+ struct udmabuf_create create;
+ int ret;
+
+ *devfd = open("/dev/udmabuf", O_RDWR);
+ if (*devfd < 0) {
+ error(70, 0,
+ "%s: [skip,no-udmabuf: Unable to access DMA buffer device file]\n",
+ TEST_PREFIX);
+ }
+
+ *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
+ if (*memfd < 0)
+ error(70, 0, "%s: [skip,no-memfd]\n", TEST_PREFIX);
+
+ /* Required for udmabuf */
+ ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK);
+ if (ret < 0)
+ error(73, 0, "%s: [skip,fcntl-add-seals]\n", TEST_PREFIX);
+
+ ret = ftruncate(*memfd, dmabuf_size);
+ if (ret == -1)
+ error(74, 0, "%s: [FAIL,memfd-truncate]\n", TEST_PREFIX);
+
+ memset(&create, 0, sizeof(create));
+
+ create.memfd = *memfd;
+ create.offset = 0;
+ create.size = dmabuf_size;
+ *buf = ioctl(*devfd, UDMABUF_CREATE, &create);
+ if (*buf < 0)
+ error(75, 0, "%s: [FAIL, create udmabuf]\n", TEST_PREFIX);
+}
+
+int do_server(void)
+{
+ char ctrl_data[sizeof(int) * 20000];
+ struct netdev_queue_dmabuf *queues;
+ size_t non_page_aligned_frags = 0;
+ struct sockaddr_in client_addr;
+ struct sockaddr_in server_sin;
+ size_t page_aligned_frags = 0;
+ int devfd, memfd, buf, ret;
+ size_t total_received = 0;
+ socklen_t client_addr_len;
+ bool is_devmem = false;
+ char *buf_mem = NULL;
+ struct ynl_sock *ys;
+ size_t dmabuf_size;
+ char iobuf[819200];
+ char buffer[256];
+ int socket_fd;
+ int client_fd;
+ size_t i = 0;
+ int opt = 1;
+
+ dmabuf_size = getpagesize() * NUM_PAGES;
+
+ create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
+
+ reset_flow_steering();
+ configure_flow_steering();
+
+ sleep(1);
+
+ queues = malloc(sizeof(*queues) * 1);
+
+ queues[0]._present.type = 1;
+ queues[0]._present.idx = 1;
+ queues[0].type = NETDEV_QUEUE_TYPE_RX;
+ queues[0].idx = queue_num;
+ if (bind_rx_queue(ifindex, buf, queues, 1, &ys))
+ error(1, 0, "Failed to bind\n");
+
+ buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+ buf, 0);
+ if (buf_mem == MAP_FAILED)
+ error(1, 0, "mmap()");
+
+ server_sin.sin_family = AF_INET;
+ server_sin.sin_port = htons(atoi(port));
+
+ ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr);
+ if (ret != 1)
+ error(79, 0, "%s: [FAIL, parse server_ip]\n", TEST_PREFIX);
+
+ socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0);
+ if (socket_fd < 0)
+ error(errno, errno, "%s: [FAIL, create socket]\n", TEST_PREFIX);
+
+ ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt,
+ sizeof(opt));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+ ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt,
+ sizeof(opt));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+ printf("binding to address %s:%d\n", server_ip,
+ ntohs(server_sin.sin_port));
+
+ ret = bind(socket_fd, &server_sin, sizeof(server_sin));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, bind]\n", TEST_PREFIX);
+
+ ret = listen(socket_fd, 1);
+ if (ret)
+ error(errno, errno, "%s: [FAIL, listen]\n", TEST_PREFIX);
+
+ client_addr_len = sizeof(client_addr);
+
+ inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer,
+ sizeof(buffer));
+ printf("Waiting or connection on %s:%d\n", buffer,
+ ntohs(server_sin.sin_port));
+ client_fd = accept(socket_fd, &client_addr, &client_addr_len);
+
+ inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer,
+ sizeof(buffer));
+ printf("Got connection from %s:%d\n", buffer,
+ ntohs(client_addr.sin_port));
+
+ while (1) {
+ struct iovec iov = { .iov_base = iobuf,
+ .iov_len = sizeof(iobuf) };
+ struct dmabuf_cmsg *dmabuf_cmsg = NULL;
+ struct dma_buf_sync sync = { 0 };
+ struct cmsghdr *cm = NULL;
+ struct msghdr msg = { 0 };
+ struct dmabuf_token token;
+ ssize_t ret;
+
+ is_devmem = false;
+ printf("\n\n");
+
+ msg.msg_iov = &iov;
+ msg.msg_iovlen = 1;
+ msg.msg_control = ctrl_data;
+ msg.msg_controllen = sizeof(ctrl_data);
+ ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM);
+ printf("recvmsg ret=%ld\n", ret);
+ if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
+ continue;
+ if (ret < 0) {
+ perror("recvmsg");
+ continue;
+ }
+ if (ret == 0) {
+ printf("client exited\n");
+ goto cleanup;
+ }
+
+ i++;
+ for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+ if (cm->cmsg_level != SOL_SOCKET ||
+ (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+ cm->cmsg_type != SCM_DEVMEM_LINEAR)) {
+ fprintf(stdout, "skipping non-devmem cmsg\n");
+ continue;
+ }
+
+ dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+ is_devmem = true;
+
+ if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
+ /* TODO: process data copied from skb's linear
+ * buffer.
+ */
+ fprintf(stdout,
+ "SCM_DEVMEM_LINEAR. dmabuf_cmsg->frag_size=%u\n",
+ dmabuf_cmsg->frag_size);
+
+ continue;
+ }
+
+ token.token_start = dmabuf_cmsg->frag_token;
+ token.token_count = 1;
+
+ total_received += dmabuf_cmsg->frag_size;
+ printf("received frag_page=%llu, in_page_offset=%llu, frag_offset=%llu, frag_size=%u, token=%u, total_received=%lu, dmabuf_id=%u\n",
+ dmabuf_cmsg->frag_offset >> PAGE_SHIFT,
+ dmabuf_cmsg->frag_offset % getpagesize(),
+ dmabuf_cmsg->frag_offset, dmabuf_cmsg->frag_size,
+ dmabuf_cmsg->frag_token, total_received,
+ dmabuf_cmsg->dmabuf_id);
+
+ if (dmabuf_cmsg->dmabuf_id != dmabuf_id)
+ error(1, 0,
+ "received on wrong dmabuf_id: flow steering error\n");
+
+ if (dmabuf_cmsg->frag_size % getpagesize())
+ non_page_aligned_frags++;
+ else
+ page_aligned_frags++;
+
+ sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START;
+ ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+ if (do_validation)
+ validate_buffer(
+ ((unsigned char *)buf_mem) +
+ dmabuf_cmsg->frag_offset,
+ dmabuf_cmsg->frag_size);
+ else
+ print_nonzero_bytes(
+ ((unsigned char *)buf_mem) +
+ dmabuf_cmsg->frag_offset,
+ dmabuf_cmsg->frag_size);
+
+ sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END;
+ ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+ ret = setsockopt(client_fd, SOL_SOCKET,
+ SO_DEVMEM_DONTNEED, &token,
+ sizeof(token));
+ if (ret != 1)
+ error(1, 0,
+ "SO_DEVMEM_DONTNEED not enough tokens");
+ }
+ if (!is_devmem)
+ error(1, 0, "flow steering error\n");
+
+ printf("total_received=%lu\n", total_received);
+ }
+
+ fprintf(stdout, "%s: ok\n", TEST_PREFIX);
+
+ fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n",
+ page_aligned_frags, non_page_aligned_frags);
+
+cleanup:
+
+ munmap(buf_mem, dmabuf_size);
+ close(client_fd);
+ close(socket_fd);
+ close(buf);
+ close(memfd);
+ close(devfd);
+ ynl_sock_destroy(ys);
+
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int is_server = 0, opt;
+
+ while ((opt = getopt(argc, argv, "ls:c:p:v:q:f:n:i:d:")) != -1) {
+ switch (opt) {
+ case 'l':
+ is_server = 1;
+ break;
+ case 's':
+ server_ip = optarg;
+ break;
+ case 'c':
+ client_ip = optarg;
+ break;
+ case 'p':
+ port = optarg;
+ break;
+ case 'v':
+ do_validation = atoll(optarg);
+ break;
+ case 'q':
+ queue_num = atoi(optarg);
+ break;
+ case 'f':
+ ifname = optarg;
+ break;
+ case 'd':
+ ifindex = atoi(optarg);
+ break;
+ case 'n':
+ nic_pci_addr = optarg;
+ break;
+ case 'i':
+ iterations = atoll(optarg);
+ break;
+ case '?':
+ printf("unknown option: %c\n", optopt);
+ break;
+ }
+ }
+
+ for (; optind < argc; optind++)
+ printf("extra arguments: %s\n", argv[optind]);
+
+ if (is_server)
+ return do_server();
+
+ return 0;
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-08 01:47:45

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Thu, Dec 7, 2023 at 4:52 PM Mina Almasry <[email protected]> wrote:
>
> Major changes in v1:
> --------------
>
> 1. Implemented MVP queue API ndos to remove the userspace-visible
> driver reset.
>
> 2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
>
> 3. Removed RFC tag.
>
> Many smaller addressed comments across all the patches (patches have
> individual change log).
>
> Full tree including the rest of the GVE driver changes:
> https://github.com/mina/linux/commits/tcpdevmem-v1
>
> Cc: Yunsheng Lin <[email protected]>
> Cc: Shailend Chand <[email protected]>
> Cc: Harshitha Ramamurthy <[email protected]>
>

Welp, I messed up the subject line. It should say [PATCH net-next...]
across all the patches. This may trip up bots and email filters. If
this is annoying, I'll resend with the fixed subject line after the
24hr cooldown period. Sorry about that.

--
Thanks,
Mina

2023-12-08 15:41:28

by kernel test robot

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

Hi Mina,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url: https://github.com/intel-lab-lkp/linux/commits/Mina-Almasry/net-page_pool-factor-out-releasing-DMA-from-releasing-the-page/20231208-085531
base: net-next/main
patch link: https://lore.kernel.org/r/20231208005250.2910004-7-almasrymina%40google.com
patch subject: [net-next v1 06/16] netdev: support binding dma-buf to netdevice
config: m68k-randconfig-r071-20231208 (https://download.01.org/0day-ci/archive/20231208/[email protected]/config)
compiler: m68k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231208/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

In file included from include/asm-generic/bug.h:22,
from arch/m68k/include/asm/bug.h:32,
from include/linux/bug.h:5,
from include/linux/thread_info.h:13,
from include/asm-generic/preempt.h:5,
from ./arch/m68k/include/generated/asm/preempt.h:1,
from include/linux/preempt.h:79,
from arch/m68k/include/asm/irqflags.h:6,
from include/linux/irqflags.h:17,
from arch/m68k/include/asm/atomic.h:6,
from include/linux/atomic.h:7,
from include/linux/rcupdate.h:25,
from include/linux/rculist.h:11,
from include/linux/pid.h:5,
from include/linux/sched.h:14,
from include/linux/uaccess.h:8,
from net/core/dev.c:71:
net/core/dev.c: In function '__netdev_dmabuf_binding_free':
>> net/core/dev.c:2071:34: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2072 | size, avail))
| ~~~~
| |
| size_t {aka unsigned int}
include/linux/printk.h:427:25: note: in definition of macro 'printk_index_wrap'
427 | _p_func(_fmt, ##__VA_ARGS__); \
| ^~~~
include/linux/printk.h:129:17: note: in expansion of macro 'printk'
129 | printk(fmt, ##__VA_ARGS__); \
| ^~~~~~
include/asm-generic/bug.h:176:9: note: in expansion of macro 'no_printk'
176 | no_printk(format); \
| ^~~~~~~~~
net/core/dev.c:2071:14: note: in expansion of macro 'WARN'
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ^~~~
net/core/dev.c:2071:65: note: format string is defined here
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ~~^
| |
| long unsigned int
| %u
net/core/dev.c:2071:34: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2072 | size, avail))
| ~~~~~
| |
| size_t {aka unsigned int}
include/linux/printk.h:427:25: note: in definition of macro 'printk_index_wrap'
427 | _p_func(_fmt, ##__VA_ARGS__); \
| ^~~~
include/linux/printk.h:129:17: note: in expansion of macro 'printk'
129 | printk(fmt, ##__VA_ARGS__); \
| ^~~~~~
include/asm-generic/bug.h:176:9: note: in expansion of macro 'no_printk'
176 | no_printk(format); \
| ^~~~~~~~~
net/core/dev.c:2071:14: note: in expansion of macro 'WARN'
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ^~~~
net/core/dev.c:2071:76: note: format string is defined here
2071 | if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
| ~~^
| |
| long unsigned int
| %u


vim +2071 net/core/dev.c

2060
2061 void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding)
2062 {
2063 size_t size, avail;
2064
2065 gen_pool_for_each_chunk(binding->chunk_pool,
2066 netdev_dmabuf_free_chunk_owner, NULL);
2067
2068 size = gen_pool_size(binding->chunk_pool);
2069 avail = gen_pool_avail(binding->chunk_pool);
2070
> 2071 if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
2072 size, avail))
2073 gen_pool_destroy(binding->chunk_pool);
2074
2075 dma_buf_unmap_attachment(binding->attachment, binding->sgt,
2076 DMA_BIDIRECTIONAL);
2077 dma_buf_detach(binding->dmabuf, binding->attachment);
2078 dma_buf_put(binding->dmabuf);
2079 xa_destroy(&binding->bound_rxq_list);
2080 kfree(binding);
2081 }
2082

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-12-08 15:42:09

by kernel test robot

[permalink] [raw]
Subject: Re: [net-next v1 13/16] tcp: RX path for devmem TCP

Hi Mina,

kernel test robot noticed the following build errors:

[auto build test ERROR on net-next/main]

url: https://github.com/intel-lab-lkp/linux/commits/Mina-Almasry/net-page_pool-factor-out-releasing-DMA-from-releasing-the-page/20231208-085531
base: net-next/main
patch link: https://lore.kernel.org/r/20231208005250.2910004-14-almasrymina%40google.com
patch subject: [net-next v1 13/16] tcp: RX path for devmem TCP
config: alpha-defconfig (https://download.01.org/0day-ci/archive/20231208/[email protected]/config)
compiler: alpha-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231208/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

net/ipv4/tcp.c: In function 'tcp_recvmsg_dmabuf':
>> net/ipv4/tcp.c:2348:57: error: 'SO_DEVMEM_LINEAR' undeclared (first use in this function)
2348 | err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
| ^~~~~~~~~~~~~~~~
net/ipv4/tcp.c:2348:57: note: each undeclared identifier is reported only once for each function it appears in
>> net/ipv4/tcp.c:2411:48: error: 'SO_DEVMEM_DMABUF' undeclared (first use in this function)
2411 | SO_DEVMEM_DMABUF,
| ^~~~~~~~~~~~~~~~


vim +/SO_DEVMEM_LINEAR +2348 net/ipv4/tcp.c

2306
2307 /* On error, returns the -errno. On success, returns number of bytes sent to the
2308 * user. May not consume all of @remaining_len.
2309 */
2310 static int tcp_recvmsg_dmabuf(const struct sock *sk, const struct sk_buff *skb,
2311 unsigned int offset, struct msghdr *msg,
2312 int remaining_len)
2313 {
2314 struct dmabuf_cmsg dmabuf_cmsg = { 0 };
2315 unsigned int start;
2316 int i, copy, n;
2317 int sent = 0;
2318 int err = 0;
2319
2320 do {
2321 start = skb_headlen(skb);
2322
2323 if (!skb->dmabuf) {
2324 err = -ENODEV;
2325 goto out;
2326 }
2327
2328 /* Copy header. */
2329 copy = start - offset;
2330 if (copy > 0) {
2331 copy = min(copy, remaining_len);
2332
2333 n = copy_to_iter(skb->data + offset, copy,
2334 &msg->msg_iter);
2335 if (n != copy) {
2336 err = -EFAULT;
2337 goto out;
2338 }
2339
2340 offset += copy;
2341 remaining_len -= copy;
2342
2343 /* First a dmabuf_cmsg for # bytes copied to user
2344 * buffer.
2345 */
2346 memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
2347 dmabuf_cmsg.frag_size = copy;
> 2348 err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
2349 sizeof(dmabuf_cmsg), &dmabuf_cmsg);
2350 if (err || msg->msg_flags & MSG_CTRUNC) {
2351 msg->msg_flags &= ~MSG_CTRUNC;
2352 if (!err)
2353 err = -ETOOSMALL;
2354 goto out;
2355 }
2356
2357 sent += copy;
2358
2359 if (remaining_len == 0)
2360 goto out;
2361 }
2362
2363 /* after that, send information of dmabuf pages through a
2364 * sequence of cmsg
2365 */
2366 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
2367 skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
2368 struct page_pool_iov *ppiov;
2369 u64 frag_offset;
2370 u32 user_token;
2371 int end;
2372
2373 /* skb->dmabuf should indicate that ALL the frags in
2374 * this skb are dmabuf page_pool_iovs. We're checking
2375 * for that flag above, but also check individual frags
2376 * here. If the tcp stack is not setting skb->dmabuf
2377 * correctly, we still don't want to crash here when
2378 * accessing pgmap or priv below.
2379 */
2380 if (!skb_frag_page_pool_iov(frag)) {
2381 net_err_ratelimited("Found non-dmabuf skb with page_pool_iov");
2382 err = -ENODEV;
2383 goto out;
2384 }
2385
2386 ppiov = skb_frag_page_pool_iov(frag);
2387 end = start + skb_frag_size(frag);
2388 copy = end - offset;
2389
2390 if (copy > 0) {
2391 copy = min(copy, remaining_len);
2392
2393 frag_offset = page_pool_iov_virtual_addr(ppiov) +
2394 skb_frag_off(frag) + offset -
2395 start;
2396 dmabuf_cmsg.frag_offset = frag_offset;
2397 dmabuf_cmsg.frag_size = copy;
2398 err = xa_alloc((struct xarray *)&sk->sk_user_pages,
2399 &user_token, frag->bv_page,
2400 xa_limit_31b, GFP_KERNEL);
2401 if (err)
2402 goto out;
2403
2404 dmabuf_cmsg.frag_token = user_token;
2405 dmabuf_cmsg.dmabuf_id = page_pool_iov_binding_id(ppiov);
2406
2407 offset += copy;
2408 remaining_len -= copy;
2409
2410 err = put_cmsg(msg, SOL_SOCKET,
> 2411 SO_DEVMEM_DMABUF,
2412 sizeof(dmabuf_cmsg),
2413 &dmabuf_cmsg);
2414 if (err || msg->msg_flags & MSG_CTRUNC) {
2415 msg->msg_flags &= ~MSG_CTRUNC;
2416 xa_erase((struct xarray *)&sk->sk_user_pages,
2417 user_token);
2418 if (!err)
2419 err = -ETOOSMALL;
2420 goto out;
2421 }
2422
2423 __skb_frag_ref(frag);
2424
2425 sent += copy;
2426
2427 if (remaining_len == 0)
2428 goto out;
2429 }
2430 start = end;
2431 }
2432
2433 if (!remaining_len)
2434 goto out;
2435
2436 /* if remaining_len is not satisfied yet, we need to go to the
2437 * next frag in the frag_list to satisfy remaining_len.
2438 */
2439 skb = skb_shinfo(skb)->frag_list ?: skb->next;
2440
2441 offset = offset - start;
2442 } while (skb);
2443
2444 if (remaining_len) {
2445 err = -EFAULT;
2446 goto out;
2447 }
2448
2449 out:
2450 if (!sent)
2451 sent = err;
2452
2453 return sent;
2454 }
2455

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-12-08 16:03:19

by kernel test robot

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

Hi Mina,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url: https://github.com/intel-lab-lkp/linux/commits/Mina-Almasry/net-page_pool-factor-out-releasing-DMA-from-releasing-the-page/20231208-085531
base: net-next/main
patch link: https://lore.kernel.org/r/20231208005250.2910004-7-almasrymina%40google.com
patch subject: [net-next v1 06/16] netdev: support binding dma-buf to netdevice
config: i386-randconfig-141-20231208 (https://download.01.org/0day-ci/archive/20231208/[email protected]/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231208/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> net/core/dev.c:2072:5: warning: format specifies type 'unsigned long' but the argument has type 'size_t' (aka 'unsigned int') [-Wformat]
size, avail))
^~~~
include/asm-generic/bug.h:134:29: note: expanded from macro 'WARN'
__WARN_printf(TAINT_WARN, format); \
^~~~~~
include/asm-generic/bug.h:106:17: note: expanded from macro '__WARN_printf'
__warn_printk(arg); \
^~~
net/core/dev.c:2072:11: warning: format specifies type 'unsigned long' but the argument has type 'size_t' (aka 'unsigned int') [-Wformat]
size, avail))
^~~~~
include/asm-generic/bug.h:134:29: note: expanded from macro 'WARN'
__WARN_printf(TAINT_WARN, format); \
^~~~~~
include/asm-generic/bug.h:106:17: note: expanded from macro '__WARN_printf'
__warn_printk(arg); \
^~~
net/core/dev.c:4356:1: warning: unused function 'sch_handle_ingress' [-Wunused-function]
sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
^
net/core/dev.c:4363:1: warning: unused function 'sch_handle_egress' [-Wunused-function]
sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
^
net/core/dev.c:5573:19: warning: unused function 'nf_ingress' [-Wunused-function]
static inline int nf_ingress(struct sk_buff *skb, struct packet_type **pt_prev,
^
5 warnings generated.


vim +2072 net/core/dev.c

2060
2061 void __netdev_dmabuf_binding_free(struct netdev_dmabuf_binding *binding)
2062 {
2063 size_t size, avail;
2064
2065 gen_pool_for_each_chunk(binding->chunk_pool,
2066 netdev_dmabuf_free_chunk_owner, NULL);
2067
2068 size = gen_pool_size(binding->chunk_pool);
2069 avail = gen_pool_avail(binding->chunk_pool);
2070
2071 if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
> 2072 size, avail))
2073 gen_pool_destroy(binding->chunk_pool);
2074
2075 dma_buf_unmap_attachment(binding->attachment, binding->sgt,
2076 DMA_BIDIRECTIONAL);
2077 dma_buf_detach(binding->dmabuf, binding->attachment);
2078 dma_buf_put(binding->dmabuf);
2079 xa_destroy(&binding->bound_rxq_list);
2080 kfree(binding);
2081 }
2082

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-12-08 16:05:57

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
>
>
> As mentioned before, it seems we need to have the above checking every
> time we need to do some per-page handling in page_pool core, is there
> a plan in your mind how to remove those kind of checking in the future?
>

I see 2 ways to remove the checking, both infeasible:

1. Allocate a wrapper struct that pulls out all the fields the page pool needs:

struct netmem {
/* common fields */
refcount_t refcount;
bool is_pfmemalloc;
int nid;
...
union {
struct dmabuf_genpool_chunk_owner *owner;
struct page * page;
};
};

The page pool can then not care if the underlying memory is iov or
page. However, this introduces significant memory bloat, as this struct
would need to be allocated for every page or ppiov. I imagine that is
not an acceptable cost for the upside of removing a few static_branch'd
if statements that carry no performance cost anyway.

2. Create a unified struct for page and dmabuf memory, which the mm
folks have repeatedly nacked, and I imagine will repeatedly nack in
the future.

So I imagine the special handling of ppiov in some form is critical
and the checking may not be removable.

> Even though a static_branch check is added in page_is_page_pool_iov(), it
> does not make much sense that a core has two different 'struct' for its
> most basic data.
>
> IMHO, the ppiov for dmabuf is forced fitting into page_pool without much
> design consideration at this point.
>
...
>
> For now, the above may work for the the rx part as it seems that you are
> only enabling rx for dmabuf for now.
>
> What is the plan to enable tx for dmabuf? If it is also integrated into
> page_pool? There was an attempt to enable page_pool for tx, Eric seemed to
> have some comment about this:
> https://lkml.kernel.org/netdev/[email protected]/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2
>
> If tx is not integrated into page_pool, do we need to create a new layer for
> the tx dmabuf?
>

I imagine the TX path will reuse page_pool_iov, page_pool_iov_*()
helpers, and page_pool_page_*() helpers, but will not need any core
page_pool changes. This is because the TX path will have to piggyback
on MSG_ZEROCOPY (devmem is not copyable), so no memory allocation from
the page_pool (or otherwise) is needed or possible. RFCv1 had a TX
implementation based on dmabuf pages without page_pool involvement; I
imagine I'll do something similar.
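
For reference, a minimal sketch of the standard MSG_ZEROCOPY flow the TX
path would piggyback on (this is today's MSG_ZEROCOPY uapi, not the
devmem extension itself; includes and error handling trimmed):

	int one = 1;
	struct msghdr msg = {0};
	char control[128];

	setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one));
	send(fd, buf, len, MSG_ZEROCOPY);	/* pages pinned, not copied */

	/* completion notification arrives on the socket error queue */
	msg.msg_control = control;
	msg.msg_controllen = sizeof(control);
	recvmsg(fd, &msg, MSG_ERRQUEUE);
	/* the cmsg payload is a sock_extended_err with
	 * ee_origin == SO_EE_ORIGIN_ZEROCOPY and the range of completed
	 * sends in ee_info..ee_data
	 */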

--
Thanks,
Mina

2023-12-08 17:48:48

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

On 12/7/23 5:52 PM, Mina Almasry wrote:
> +
> +static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
> +{
> + void *new_mem;
> + void *old_mem;
> + int err;
> +
> + if (!dev || !dev->netdev_ops)
> + return -EINVAL;
> +
> + if (!dev->netdev_ops->ndo_queue_stop ||
> + !dev->netdev_ops->ndo_queue_mem_free ||
> + !dev->netdev_ops->ndo_queue_mem_alloc ||
> + !dev->netdev_ops->ndo_queue_start)
> + return -EOPNOTSUPP;
> +
> + new_mem = dev->netdev_ops->ndo_queue_mem_alloc(dev, rxq_idx);
> + if (!new_mem)
> + return -ENOMEM;
> +
> + err = dev->netdev_ops->ndo_queue_stop(dev, rxq_idx, &old_mem);
> + if (err)
> + goto err_free_new_mem;
> +
> + err = dev->netdev_ops->ndo_queue_start(dev, rxq_idx, new_mem);
> + if (err)
> + goto err_start_queue;
> +
> + dev->netdev_ops->ndo_queue_mem_free(dev, old_mem);
> +
> + return 0;
> +
> +err_start_queue:
> + dev->netdev_ops->ndo_queue_start(dev, rxq_idx, old_mem);
> +
> +err_free_new_mem:
> + dev->netdev_ops->ndo_queue_mem_free(dev, new_mem);
> +
> + return err;
> +}
> +
> +/* Protected by rtnl_lock() */
> +static DEFINE_XARRAY_FLAGS(netdev_dmabuf_bindings, XA_FLAGS_ALLOC1);
> +
> +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding)
> +{
> + struct netdev_rx_queue *rxq;
> + unsigned long xa_idx;
> + unsigned int rxq_idx;
> +
> + if (!binding)
> + return;
> +
> + if (binding->list.next)
> + list_del(&binding->list);
> +
> + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> + if (rxq->binding == binding) {
> + /* We hold the rtnl_lock while binding/unbinding
> + * dma-buf, so we can't race with another thread that
> + * is also modifying this value. However, the driver
> + * may read this config while it's creating its
> + * rx-queues. WRITE_ONCE() here to match the
> + * READ_ONCE() in the driver.
> + */
> + WRITE_ONCE(rxq->binding, NULL);
> +
> + rxq_idx = get_netdev_rx_queue_index(rxq);
> +
> + netdev_restart_rx_queue(binding->dev, rxq_idx);

Blindly restarting a queue when a dmabuf is unbound is heavy handed. If
the dmabuf has no outstanding references (i.e., no references in the
RxQ), then no restart is needed.

> + }
> + }
> +
> + xa_erase(&netdev_dmabuf_bindings, binding->id);
> +
> + netdev_dmabuf_binding_put(binding);
> +}
> +
> +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> + struct netdev_dmabuf_binding *binding)
> +{
> + struct netdev_rx_queue *rxq;
> + u32 xa_idx;
> + int err;
> +
> + rxq = __netif_get_rx_queue(dev, rxq_idx);
> +
> + if (rxq->binding)
> + return -EEXIST;
> +
> + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> + GFP_KERNEL);
> + if (err)
> + return err;
> +
> + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> + * race with another thread that is also modifying this value. However,
> + * the driver may read this config while it's creating its * rx-queues.
> + * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> + */
> + WRITE_ONCE(rxq->binding, binding);
> +
> + err = netdev_restart_rx_queue(dev, rxq_idx);

Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
binding to add entries to the page pool for the queue. If the pool was
previously empty, then maybe the queue needs to be "started" in the
sense of creating with h/w or just pushing buffers into the queue and
moving the pidx.


2023-12-08 17:56:20

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 13/16] tcp: RX path for devmem TCP

On 12/7/23 5:52 PM, Mina Almasry wrote:
> In tcp_recvmsg_locked(), detect if the skb being received by the user
> is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM
> flag - pass it to tcp_recvmsg_devmem() for custom handling.
>
> tcp_recvmsg_devmem() copies any data in the skb header to the linear
> buffer, and returns a cmsg to the user indicating the number of bytes
> returned in the linear buffer.
>
> tcp_recvmsg_devmem() then loops over the unaccessible devmem skb frags,
> and returns to the user a cmsg_devmem indicating the location of the
> data in the dmabuf device memory. cmsg_devmem contains this information:
>
> 1. the offset into the dmabuf where the payload starts. 'frag_offset'.
> 2. the size of the frag. 'frag_size'.
> 3. an opaque token 'frag_token' to return to the kernel when the buffer
> is to be released.
>
> The pages awaiting freeing are stored in the newly added
> sk->sk_user_pages, and each page passed to userspace is get_page()'d.
> This reference is dropped once the userspace indicates that it is
> done reading this page. All pages are released when the socket is
> destroyed.
>
> Signed-off-by: Willem de Bruijn <[email protected]>
> Signed-off-by: Kaiyuan Zhang <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
>
> ---
>
> Changes in v1:
> - Added dmabuf_id to dmabuf_cmsg (David/Stan).
> - Devmem -> dmabuf (David).
> - Change tcp_recvmsg_dmabuf() check to skb->dmabuf (Paolo).
> - Use __skb_frag_ref() & napi_pp_put_page() for refcounting (Yunsheng).
>
> RFC v3:
> - Fixed issue with put_cmsg() failing silently.
>

What happens if a retransmitted packet is received or an rx window is
closed and a probe is received where the kernel drops the skb - is the
iov reference(s) in the skb returned to the pool by the stack and ready
for use again?

2023-12-08 17:56:55

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 07/16] netdev: netdevice devmem allocator

On 12/7/23 5:52 PM, Mina Almasry wrote:
> diff --git a/net/core/dev.c b/net/core/dev.c
> index b8c8be5a912e..30667e4c3b95 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -2120,6 +2120,41 @@ static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
> return err;
> }
>
> +struct page_pool_iov *netdev_alloc_dmabuf(struct netdev_dmabuf_binding *binding)
> +{
> + struct dmabuf_genpool_chunk_owner *owner;
> + struct page_pool_iov *ppiov;
> + unsigned long dma_addr;
> + ssize_t offset;
> + ssize_t index;
> +
> + dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,

Any reason not to allow allocation sizes other than PAGE_SIZE? e.g.,
2048 for smaller MTUs or 8192 for larger ones. It can be a property of
page_pool and constant across allocations vs allowing different size for
each allocation.

2023-12-08 17:58:00

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On 12/7/23 5:52 PM, Mina Almasry wrote:
> Major changes in v1:
> --------------
>
> 1. Implemented MVP queue API ndos to remove the userspace-visible
> driver reset.
>
> 2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
>
> 3. Removed RFC tag.
>
> Many smaller addressed comments across all the patches (patches have
> individual change log).
>
> Full tree including the rest of the GVE driver changes:
> https://github.com/mina/linux/commits/tcpdevmem-v1
>

Still a lot of DEVMEM references (e.g., socket API). Any reason not to
move those to DMABUF?

2023-12-08 19:22:40

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

On Fri, Dec 8, 2023 at 9:48 AM David Ahern <[email protected]> wrote:
>
> On 12/7/23 5:52 PM, Mina Almasry wrote:
...
> > +
> > + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> > + if (rxq->binding == binding) {
> > + /* We hold the rtnl_lock while binding/unbinding
> > + * dma-buf, so we can't race with another thread that
> > + * is also modifying this value. However, the driver
> > + * may read this config while it's creating its
> > + * rx-queues. WRITE_ONCE() here to match the
> > + * READ_ONCE() in the driver.
> > + */
> > + WRITE_ONCE(rxq->binding, NULL);
> > +
> > + rxq_idx = get_netdev_rx_queue_index(rxq);
> > +
> > + netdev_restart_rx_queue(binding->dev, rxq_idx);
>
> Blindly restarting a queue when a dmabuf is unbound is heavy handed. If
> the dmabuf has no outstanding references (i.e., no references in the
> RxQ), then no restart is needed.
>

I think I need to stop the queue while binding to a dmabuf for the
sake of concurrency, no? I.e. the softirq thread may be delivering a
packet, and in parallel a separate thread holds rtnl_lock and tries to
bind the dma-buf. At that point the page_pool recreation will race
with the driver doing page_pool_alloc_page(). I don't think I can
insert a lock into the rx fast path to handle this, no?

Also, this sounds like it requires (lots of) more changes. The
page_pool + driver need to report how many pending references there
are (with locking so we don't race with incoming packets), and have
them reported via an ndo so that we can skip restarting the queue.
Implementing the changes isn't a huge issue, but handling the
concurrency may be a genuine blocker. I'm not sure it's worth the upside
of not restarting the single rx queue?

> > + }
> > + }
> > +
> > + xa_erase(&netdev_dmabuf_bindings, binding->id);
> > +
> > + netdev_dmabuf_binding_put(binding);
> > +}
> > +
> > +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > + struct netdev_dmabuf_binding *binding)
> > +{
> > + struct netdev_rx_queue *rxq;
> > + u32 xa_idx;
> > + int err;
> > +
> > + rxq = __netif_get_rx_queue(dev, rxq_idx);
> > +
> > + if (rxq->binding)
> > + return -EEXIST;
> > +
> > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > + GFP_KERNEL);
> > + if (err)
> > + return err;
> > +
> > + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > + * race with another thread that is also modifying this value. However,
> > + * the driver may read this config while it's creating its * rx-queues.
> > + * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > + */
> > + WRITE_ONCE(rxq->binding, binding);
> > +
> > + err = netdev_restart_rx_queue(dev, rxq_idx);
>
> Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
> binding to add entries to the page pool for the queue.

To be honest, I think maybe there's a slight disconnect between how
you think the page_pool works, and my primitive understanding of how
it works. Today, I see a 1:1 mapping between rx-queue and page_pool in
the code. I don't see 1:many or many:1 mappings.

In theory mapping 1 rx-queue to n page_pools is trivial: the driver
can call page_pool_create() multiple times to generate n queues and
decide for incoming packets which one to use.
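
For the 1:1 case it's roughly the following per rx queue (a sketch using
the stock page_pool API only, not the devmem-specific setup from this
series; rxq, pdev and ring_size stand in for the driver's own state):

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= ring_size,	/* sized for this queue's ring */
		.nid		= NUMA_NO_NODE,
		.dev		= &pdev->dev,
		.napi		= &rxq->napi,	/* pool tied to this queue's napi */
		.dma_dir	= DMA_FROM_DEVICE,
	};

	rxq->page_pool = page_pool_create(&pp_params);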

However, mapping n rx-queues to 1 page_pool seems like a can of worms.
I see code in the page_pool that looks to me (and Willem) like it's
safe only because the page_pool is used from the same napi context.
With an n rx-queues : 1 page_pool mapping, that is no longer true, no?
There is a long tail of issues to resolve to be able to map 1 page_pool
to n queues as I understand it, and even if those were resolved I'm not
sure the maintainers are interested in taking the code.

So, per my humble understanding, there is no such thing as "add entries
to the page pool for the (specific) queue"; the page_pool is always
used by exactly 1 queue.

Note that even though this limitation exists, we still support binding
1 dma-buf to multiple queues, because multiple page pools can use the
same netdev_dmabuf_binding. I should add that to the docs.

> If the pool was
> previously empty, then maybe the queue needs to be "started" in the
> sense of creating with h/w or just pushing buffers into the queue and
> moving the pidx.
>
>

I don't think it's enough to add buffers to the page_pool, no? The
existing buffers in the page_pool (host mem) must be purged. I think
maybe the queue needs to be stopped as well so that we don't race with
incoming packets and end up with skbs with devmem and non-devmem frags
(unless you're thinking it becomes a requirement to support that; I
think things are complicated enough as-is and this is a good
simplification). Since we already purge the existing buffers & restart
the queue, it's little effort to bring this in line with Jakub's
queue API, which he also wants to use for per-queue configuration &
ndo_stop/open.

--
Thanks,
Mina

2023-12-08 19:23:31

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 13/16] tcp: RX path for devmem TCP

On Fri, Dec 8, 2023 at 9:55 AM David Ahern <[email protected]> wrote:
>
> On 12/7/23 5:52 PM, Mina Almasry wrote:
> > In tcp_recvmsg_locked(), detect if the skb being received by the user
> > is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM
> > flag - pass it to tcp_recvmsg_devmem() for custom handling.
> >
> > tcp_recvmsg_devmem() copies any data in the skb header to the linear
> > buffer, and returns a cmsg to the user indicating the number of bytes
> > returned in the linear buffer.
> >
> > tcp_recvmsg_devmem() then loops over the unaccessible devmem skb frags,
> > and returns to the user a cmsg_devmem indicating the location of the
> > data in the dmabuf device memory. cmsg_devmem contains this information:
> >
> > 1. the offset into the dmabuf where the payload starts. 'frag_offset'.
> > 2. the size of the frag. 'frag_size'.
> > 3. an opaque token 'frag_token' to return to the kernel when the buffer
> > is to be released.
> >
> > The pages awaiting freeing are stored in the newly added
> > sk->sk_user_pages, and each page passed to userspace is get_page()'d.
> > This reference is dropped once the userspace indicates that it is
> > done reading this page. All pages are released when the socket is
> > destroyed.
> >
> > Signed-off-by: Willem de Bruijn <[email protected]>
> > Signed-off-by: Kaiyuan Zhang <[email protected]>
> > Signed-off-by: Mina Almasry <[email protected]>
> >
> > ---
> >
> > Changes in v1:
> > - Added dmabuf_id to dmabuf_cmsg (David/Stan).
> > - Devmem -> dmabuf (David).
> > - Change tcp_recvmsg_dmabuf() check to skb->dmabuf (Paolo).
> > - Use __skb_frag_ref() & napi_pp_put_page() for refcounting (Yunsheng).
> >
> > RFC v3:
> > - Fixed issue with put_cmsg() failing silently.
> >
>
> What happens if a retransmitted packet is received or an rx window is
> closed and a probe is received where the kernel drops the skb - is the
> iov reference(s) in the skb returned to the pool by the stack and ready
> for use again?

When an skb is dropped, skb_frag_unref() is called on the frags, which
calls napi_pp_put_page(), drops the references, and the iov is
recycled, yes.
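
Conceptually (simplified; the real path is the normal kfree_skb() ->
skb_release_data() teardown, not a literal helper like this):

	static void drop_devmem_skb_frags(struct sk_buff *skb)
	{
		int i;

		/* each unref ends up in napi_pp_put_page() for
		 * page_pool-owned frags, returning the ppiov to its pool
		 */
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
			skb_frag_unref(skb, i);
	}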

--
Thanks,
Mina

2023-12-08 19:28:10

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 07/16] netdev: netdevice devmem allocator

On Fri, Dec 8, 2023 at 9:56 AM David Ahern <[email protected]> wrote:
>
> On 12/7/23 5:52 PM, Mina Almasry wrote:
> > diff --git a/net/core/dev.c b/net/core/dev.c
> > index b8c8be5a912e..30667e4c3b95 100644
> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -2120,6 +2120,41 @@ static int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
> > return err;
> > }
> >
> > +struct page_pool_iov *netdev_alloc_dmabuf(struct netdev_dmabuf_binding *binding)
> > +{
> > + struct dmabuf_genpool_chunk_owner *owner;
> > + struct page_pool_iov *ppiov;
> > + unsigned long dma_addr;
> > + ssize_t offset;
> > + ssize_t index;
> > +
> > + dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
>
> Any reason not to allow allocation sizes other than PAGE_SIZE? e.g.,
> 2048 for smaller MTUs or 8192 for larger ones. It can be a property of
> page_pool and constant across allocations vs allowing different size for
> each allocation.

Only for simplicity. Supporting non-PAGE_SIZE is certainly possible,
but in my estimation it's a huge can of worms worthy of its own
series. I find this series complicated to implement, review, and
support as-is, and if reasonable I would like to punt that to a future
improvement.

At the minimum, I think the needed changes are:

1. The memory provider needs to report to the page pool the alloc size.
2. The page_pool needs to handle non-PAGE_SIZE memory regions.
3. The drivers need to handle non-PAGE_SIZE memory regions. Drivers
today handle fragged pages, but that is different because it's a
PAGE_SIZE region that is fragged. This is a non-PAGE_SIZE region in
the first place.
4. Any PAGE_SIZE assumptions in the entire net stack need to be removed.

At Google we mostly use page-aligned MTUs, so we're likely not that
interested in sub-PAGE_SIZE allocations, but we are interested in
n * PAGE_SIZE allocations, hopefully as a separate follow-up effort.
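
For illustration, item 1 above might look roughly like this (hedged
sketch; 'alloc_size' is a hypothetical provider property, not part of
this series):

	/* Hypothetical: the provider reports a chunk size instead of
	 * the allocator assuming PAGE_SIZE when carving the dmabuf
	 * genpool.
	 */
	dma_addr = gen_pool_alloc_owner(binding->chunk_pool,
					binding->alloc_size /* hypothetical */,
					(void **)&owner);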

--
Thanks,
Mina

2023-12-08 19:31:34

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Fri, Dec 8, 2023 at 9:57 AM David Ahern <[email protected]> wrote:
>
> On 12/7/23 5:52 PM, Mina Almasry wrote:
> > Major changes in v1:
> > --------------
> >
> > 1. Implemented MVP queue API ndos to remove the userspace-visible
> > driver reset.
> >
> > 2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
> >
> > 3. Removed RFC tag.
> >
> > Many smaller addressed comments across all the patches (patches have
> > individual change log).
> >
> > Full tree including the rest of the GVE driver changes:
> > https://github.com/mina/linux/commits/tcpdevmem-v1
> >
>
> Still a lot of DEVMEM references (e.g., socket API). Any reason not to
> move those to DMABUF?
>

In my mind the naming (maybe too silly/complicated, feel free to correct) is:

The feature is devmem TCP because we really care about TCPing into
device memory. So the uapi/feature name retains devmem.

dmabuf is the abstraction for devmem that we use. In theory someone
could come up with a driver that doesn't like dmabuf and uses something
else instead, and the devmem TCP support can be extended to support
that something else. Functions that specifically handle dmabuf and are
not generic to all devmem are named accordingly
(netdev_alloc_dmabuf/netdev_free_dmabuf).

page_pool_iov is a generic type to support generic non-paged memory;
functions that are supposed to handle any generic non-paged memory are
named accordingly (page_pool_iov_get_many).


--
Thanks,
Mina

2023-12-08 20:32:49

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

On Fri, Dec 8, 2023 at 11:22 AM Mina Almasry <[email protected]> wrote:
>
> On Fri, Dec 8, 2023 at 9:48 AM David Ahern <[email protected]> wrote:
> >
> > On 12/7/23 5:52 PM, Mina Almasry wrote:
> ...
> > > +
> > > + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> > > + if (rxq->binding == binding) {
> > > + /* We hold the rtnl_lock while binding/unbinding
> > > + * dma-buf, so we can't race with another thread that
> > > + * is also modifying this value. However, the driver
> > > + * may read this config while it's creating its
> > > + * rx-queues. WRITE_ONCE() here to match the
> > > + * READ_ONCE() in the driver.
> > > + */
> > > + WRITE_ONCE(rxq->binding, NULL);
> > > +
> > > + rxq_idx = get_netdev_rx_queue_index(rxq);
> > > +
> > > + netdev_restart_rx_queue(binding->dev, rxq_idx);
> >
> > Blindly restarting a queue when a dmabuf is heavy handed. If the dmabuf
> > has no outstanding references (ie., no references in the RxQ), then no
> > restart is needed.
> >
>
> I think I need to stop the queue while binding to a dmabuf for the
> sake of concurrency, no? I.e. the softirq thread may be delivering a
> packet, and in parallel a separate thread holds rtnl_lock and tries to
> bind the dma-buf. At that point the page_pool recreation will race
> with the driver doing page_pool_alloc_page(). I don't think I can
> insert a lock to handle this into the rx fast path, no?
>
> Also, this sounds like it requires (lots of) more changes. The
> page_pool + driver need to report how many pending references there
> are (with locking so we don't race with incoming packets), and have
> them reported via an ndo so that we can skip restarting the queue.
> Implementing the changes isn't a huge issue, but handling the
> concurrency may be a genuine blocker. Not sure it's worth the upside
> of not restarting the single rx queue?
>
> > > + }
> > > + }
> > > +
> > > + xa_erase(&netdev_dmabuf_bindings, binding->id);
> > > +
> > > + netdev_dmabuf_binding_put(binding);
> > > +}
> > > +
> > > +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > > + struct netdev_dmabuf_binding *binding)
> > > +{
> > > + struct netdev_rx_queue *rxq;
> > > + u32 xa_idx;
> > > + int err;
> > > +
> > > + rxq = __netif_get_rx_queue(dev, rxq_idx);
> > > +
> > > + if (rxq->binding)
> > > + return -EEXIST;
> > > +
> > > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > > + GFP_KERNEL);
> > > + if (err)
> > > + return err;
> > > +
> > > + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > > + * race with another thread that is also modifying this value. However,
> > > + * the driver may read this config while it's creating its * rx-queues.
> > > + * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > > + */
> > > + WRITE_ONCE(rxq->binding, binding);
> > > +
> > > + err = netdev_restart_rx_queue(dev, rxq_idx);
> >
> > Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
> > binding to add entries to the page pool for the queue.
>
> To be honest, I think maybe there's a slight disconnect between how
> you think the page_pool works, and my primitive understanding of how
> it works. Today, I see a 1:1 mapping between rx-queue and page_pool in
> the code. I don't see 1:many or many:1 mappings.
>
> In theory mapping 1 rx-queue to n page_pools is trivial: the driver
> can call page_pool_create() multiple times to generate n queues and
> decide for incoming packets which one to use.
>
> However, mapping n rx-queues to 1 page_pool seems like a can of worms.
> I see code in the page_pool that looks to me (and Willem) like it's
> safe only because the page_pool is used from the same napi context.
> with a n rx-queueue: 1 page_pool mapping, that is no longer true, no?
> There is a tail end of issues to resolve to be able to map 1 page_pool
> to n queues as I understand and even if resolved I'm not sure the
> maintainers are interested in taking the code.
>
> So, per my humble understanding there is no such thing as "add entries
> to the page pool for the (specific) queue", the page_pool is always
> used by 1 queue.
>
> Note that even though this limitation exists, we still support binding
> 1 dma-buf to multiple queues, because multiple page pools can use the
> same netdev_dmabuf_binding. I should add that to the docs.
>
> > If the pool was
> > previously empty, then maybe the queue needs to be "started" in the
> > sense of creating with h/w or just pushing buffers into the queue and
> > moving the pidx.
> >
> >
>
> I don't think it's enough to add buffers to the page_pool, no? The
> existing buffers in the page_pool (host mem) must be purged. I think
> maybe the queue needs to be stopped as well so that we don't race with
> incoming packets and end up with skbs with devmem and non-devmem frags
> (unless you're thinking it becomes a requirement to support that, I
> think things are complicated as-is and it's a good simplification).
> When we already purge the existing buffers & restart the queue, it's
> little effort to migrate this to become in line with Jakub's queue-api
> that he also wants to use for per-queue configuration & ndo_stop/open.
>

FWIW what I'm referring to with Jakub's queue-api is here:
https://lore.kernel.org/netdev/[email protected]/

I made some simplifications, namely passing the queue idx for the
driver to extract the config from, rather than the 'cfg' param Jakub
outlined, and again passing the queue idx instead of the 'queue info'
(the API currently assumes RX, and can be extended later for TX use
cases).
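
For reference, a rough sketch of that simplified shape (hypothetical
signatures; the real proposal is in the linked thread):

	/* Hypothetical queue-API shape: the driver receives only the rx
	 * queue index and reads any new config (e.g. rxq->binding)
	 * itself.
	 */
	struct netdev_queue_mgmt_ops {
		int	(*ndo_queue_mem_alloc)(struct net_device *dev, int rxq_idx);
		void	(*ndo_queue_mem_free)(struct net_device *dev, int rxq_idx);
		int	(*ndo_queue_start)(struct net_device *dev, int rxq_idx);
		int	(*ndo_queue_stop)(struct net_device *dev, int rxq_idx);
	};
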
--
Thanks,
Mina

2023-12-08 22:56:40

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/8/23 00:52, Mina Almasry wrote:
> Implement a memory provider that allocates dmabuf devmem page_pool_iovs.
>
> The provider receives a reference to the struct netdev_dmabuf_binding
> via the pool->mp_priv pointer. The driver needs to set this pointer for
> the provider in the page_pool_params.
>
> The provider obtains a reference on the netdev_dmabuf_binding which
> guarantees the binding and the underlying mapping remains alive until
> the provider is destroyed.
>
> Usage of PP_FLAG_DMA_MAP is required for this memory provider such that
> the page_pool can provide the driver with the dma-addrs of the devmem.
>
> Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.
>
> Signed-off-by: Willem de Bruijn <[email protected]>
> Signed-off-by: Kaiyuan Zhang <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
>
> ---
>
> v1:
> - static_branch check in page_is_page_pool_iov() (Willem & Paolo).
> - PP_DEVMEM -> PP_IOV (David).
> - Require PP_FLAG_DMA_MAP (Jakub).
>
> ---
> include/net/page_pool/helpers.h | 47 +++++++++++++++++
> include/net/page_pool/types.h | 9 ++++
> net/core/page_pool.c | 89 ++++++++++++++++++++++++++++++++-
> 3 files changed, 144 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 8bfc2d43efd4..00197f14aa87 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -53,6 +53,8 @@
> #define _NET_PAGE_POOL_HELPERS_H
>
> #include <net/page_pool/types.h>
> +#include <net/net_debug.h>
> +#include <net/devmem.h>
>
> #ifdef CONFIG_PAGE_POOL_STATS
> /* Deprecated driver-facing API, use netlink instead */
> @@ -92,6 +94,11 @@ static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov)
> return ppiov - page_pool_iov_owner(ppiov)->ppiovs;
> }
>
> +static inline u32 page_pool_iov_binding_id(const struct page_pool_iov *ppiov)
> +{
> + return page_pool_iov_owner(ppiov)->binding->id;
> +}
> +
> static inline dma_addr_t
> page_pool_iov_dma_addr(const struct page_pool_iov *ppiov)
> {
> @@ -107,6 +114,46 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov)
> return page_pool_iov_owner(ppiov)->binding;
> }
>
> +static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov)
> +{
> + return refcount_read(&ppiov->refcount);
> +}
> +
> +static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
> + unsigned int count)
> +{
> + refcount_add(count, &ppiov->refcount);
> +}
> +
> +void __page_pool_iov_free(struct page_pool_iov *ppiov);
> +
> +static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
> + unsigned int count)
> +{
> + if (!refcount_sub_and_test(count, &ppiov->refcount))
> + return;
> +
> + __page_pool_iov_free(ppiov);
> +}
> +
> +/* page pool mm helpers */
> +
> +DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);
> +static inline bool page_is_page_pool_iov(const struct page *page)
> +{
> + return static_branch_unlikely(&page_pool_mem_providers) &&
> + (unsigned long)page & PP_IOV;
> +}
> +
> +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> +{
> + if (page_is_page_pool_iov(page))
> + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> +
> + DEBUG_NET_WARN_ON_ONCE(true);
> + return NULL;
> +}
> +
> /**
> * page_pool_dev_alloc_pages() - allocate a page.
> * @pool: pool from which to allocate
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index 44faee7a7b02..136930a238de 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -134,8 +134,15 @@ struct memory_provider_ops {
> bool (*release_page)(struct page_pool *pool, struct page *page);
> };
>
> +extern const struct memory_provider_ops dmabuf_devmem_ops;
> +
> /* page_pool_iov support */
>
> +/* We overload the LSB of the struct page pointer to indicate whether it's
> + * a page or page_pool_iov.
> + */
> +#define PP_IOV 0x01UL
> +
> /* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
> * entry from the dmabuf is inserted into the genpool as a chunk, and needs
> * this owner struct to keep track of some metadata necessary to create
> @@ -159,6 +166,8 @@ struct page_pool_iov {
> struct dmabuf_genpool_chunk_owner *owner;
>
> refcount_t refcount;
> +
> + struct page_pool *pp;
> };
>
> struct page_pool {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index f5c84d2a4510..423c88564a00 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -12,6 +12,7 @@
>
> #include <net/page_pool/helpers.h>
> #include <net/xdp.h>
> +#include <net/netdev_rx_queue.h>
>
> #include <linux/dma-direction.h>
> #include <linux/dma-mapping.h>
> @@ -20,12 +21,15 @@
> #include <linux/poison.h>
> #include <linux/ethtool.h>
> #include <linux/netdevice.h>
> +#include <linux/genalloc.h>
> +#include <net/devmem.h>
>
> #include <trace/events/page_pool.h>
>
> #include "page_pool_priv.h"
>
> -static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
> +DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
> +EXPORT_SYMBOL(page_pool_mem_providers);
>
> #define DEFER_TIME (msecs_to_jiffies(1000))
> #define DEFER_WARN_INTERVAL (60 * HZ)
> @@ -175,6 +179,7 @@ static void page_pool_producer_unlock(struct page_pool *pool,
> static int page_pool_init(struct page_pool *pool,
> const struct page_pool_params *params)
> {
> + struct netdev_dmabuf_binding *binding = NULL;
> unsigned int ring_qsize = 1024; /* Default */
> int err;
>
> @@ -237,6 +242,14 @@ static int page_pool_init(struct page_pool *pool,
> /* Driver calling page_pool_create() also call page_pool_destroy() */
> refcount_set(&pool->user_cnt, 1);
>
> + if (pool->p.queue)
> + binding = READ_ONCE(pool->p.queue->binding);
> +
> + if (binding) {
> + pool->mp_ops = &dmabuf_devmem_ops;
> + pool->mp_priv = binding;
> + }

Hmm, I don't understand why would we replace a nice transparent
api with page pool relying on a queue having devmem specific
pointer? It seemed more flexible and cleaner in the last RFC.

> +
> if (pool->mp_ops) {
> err = pool->mp_ops->init(pool);
> if (err) {
> @@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> }
> }
> EXPORT_SYMBOL(page_pool_update_nid);
> +
> +void __page_pool_iov_free(struct page_pool_iov *ppiov)
> +{
> + if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
> + return;
> +
> + netdev_free_dmabuf(ppiov);
> +}
> +EXPORT_SYMBOL_GPL(__page_pool_iov_free);

I didn't look too deep but I don't think I immediately follow
the pp refcounting. It increments pages_state_hold_cnt on
allocation, but IIUC doesn't mark skbs for recycle? Then, they all
will be put down via page_pool_iov_put_many() bypassing
page_pool_return_page() and friends. That will call
netdev_free_dmabuf(), which doesn't bump pages_state_release_cnt.

At least I couldn't make it work with io_uring, and for my purposes,
I forced all puts to go through page_pool_return_page(), which calls
the ->release_page callback. The callback will put the reference and
ask its page pool to account release_cnt. It also gets rid of
__page_pool_iov_free(), as we'd need to add a hook there for
customization otherwise.

I didn't care about overhead because the hot path for me is getting
buffers from a ring, which is somewhat analogous to sock_devmem_dontneed(),
but done on pp allocations under napi, and it's done separately.

Completely untested with TCP devmem:

https://github.com/isilence/linux/commit/14bd56605183dc80b540999e8058c79ac92ae2d8

> +
> +/*** "Dmabuf devmem memory provider" ***/
> +
> +static int mp_dmabuf_devmem_init(struct page_pool *pool)
> +{
> + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> +
> + if (!binding)
> + return -EINVAL;
> +
> + if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> + return -EOPNOTSUPP;
> +
> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> + return -EOPNOTSUPP;
> +
> + netdev_dmabuf_binding_get(binding);
> + return 0;
> +}
> +
> +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
> + gfp_t gfp)
> +{
> + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> + struct page_pool_iov *ppiov;
> +
> + ppiov = netdev_alloc_dmabuf(binding);
> + if (!ppiov)
> + return NULL;
> +
> + ppiov->pp = pool;
> + pool->pages_state_hold_cnt++;
> + trace_page_pool_state_hold(pool, (struct page *)ppiov,
> + pool->pages_state_hold_cnt);
> + return (struct page *)((unsigned long)ppiov | PP_IOV);
> +}
> +
> +static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
> +{
> + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> +
> + netdev_dmabuf_binding_put(binding);
> +}
> +
> +static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
> + struct page *page)
> +{
> + struct page_pool_iov *ppiov;
> +
> + if (WARN_ON_ONCE(!page_is_page_pool_iov(page)))
> + return false;
> +
> + ppiov = page_to_page_pool_iov(page);
> + page_pool_iov_put_many(ppiov, 1);
> + /* We don't want the page pool put_page()ing our page_pool_iovs. */
> + return false;
> +}
> +
> +const struct memory_provider_ops dmabuf_devmem_ops = {
> + .init = mp_dmabuf_devmem_init,
> + .destroy = mp_dmabuf_devmem_destroy,
> + .alloc_pages = mp_dmabuf_devmem_alloc_pages,
> + .release_page = mp_dmabuf_devmem_release_page,
> +};
> +EXPORT_SYMBOL(dmabuf_devmem_ops);

--
Pavel Begunkov

2023-12-08 23:13:36

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/8/23 00:52, Mina Almasry wrote:
> Implement a memory provider that allocates dmabuf devmem page_pool_iovs.
>
> The provider receives a reference to the struct netdev_dmabuf_binding
> via the pool->mp_priv pointer. The driver needs to set this pointer for
> the provider in the page_pool_params.
>
> The provider obtains a reference on the netdev_dmabuf_binding which
> guarantees the binding and the underlying mapping remains alive until
> the provider is destroyed.
>
> Usage of PP_FLAG_DMA_MAP is required for this memory provider such that
> the page_pool can provide the driver with the dma-addrs of the devmem.
>
> Support for PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.
>
> Signed-off-by: Willem de Bruijn <[email protected]>
> Signed-off-by: Kaiyuan Zhang <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
[...]
> +void __page_pool_iov_free(struct page_pool_iov *ppiov);
> +
> +static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
> + unsigned int count)
> +{
> + if (!refcount_sub_and_test(count, &ppiov->refcount))
> + return;
> +
> + __page_pool_iov_free(ppiov);
> +}
> +
> +/* page pool mm helpers */
> +
> +DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);
> +static inline bool page_is_page_pool_iov(const struct page *page)
> +{
> + return static_branch_unlikely(&page_pool_mem_providers) &&
> + (unsigned long)page & PP_IOV;

Are there any recommendations against using static keys in widely
used inline functions? I'm not familiar with static key code
generation, but I think the compiler will bloat users with fat chunks
of code in unlikely paths. And I'd assume it creates an array of all
uses, which will be walked on enabling/disabling the branch.

> +}
> +
> +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> +{
> + if (page_is_page_pool_iov(page))
> + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> +
> + DEBUG_NET_WARN_ON_ONCE(true);
> + return NULL;
> +}
> +
> /**
> * page_pool_dev_alloc_pages() - allocate a page.
> * @pool: pool from which to allocate
--
Pavel Begunkov

2023-12-08 23:25:30

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Fri, Dec 8, 2023 at 2:56 PM Pavel Begunkov <[email protected]> wrote:
>
> On 12/8/23 00:52, Mina Almasry wrote:
...
> > + if (pool->p.queue)
> > + binding = READ_ONCE(pool->p.queue->binding);
> > +
> > + if (binding) {
> > + pool->mp_ops = &dmabuf_devmem_ops;
> > + pool->mp_priv = binding;
> > + }
>
> Hmm, I don't understand why would we replace a nice transparent
> api with page pool relying on a queue having devmem specific
> pointer? It seemed more flexible and cleaner in the last RFC.
>

Jakub requested this change and may chime in, but I suspect it's to
further abstract the devmem changes from the driver. In this iteration,
the driver grabs the netdev_rx_queue and passes it to the page_pool,
and any future configurations between the net stack and page_pool can
be passed this way with the driver unbothered.
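
As a hedged illustration of that wiring, the driver-side pool creation
would look roughly like this (everything except .queue is an assumed
placeholder):

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP, /* required by the devmem provider */
		.pool_size	= ring_size,	   /* assumed */
		.dev		= &pdev->dev,	   /* assumed */
		.dma_dir	= DMA_FROM_DEVICE,
		.queue		= __netif_get_rx_queue(dev, rxq_idx),
	};
	struct page_pool *pool = page_pool_create(&pp_params);

page_pool_init() then selects dmabuf_devmem_ops by itself when the
queue has a binding, so the same driver code works for devmem and
regular host memory.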

> > +
> > if (pool->mp_ops) {
> > err = pool->mp_ops->init(pool);
> > if (err) {
> > @@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> > }
> > }
> > EXPORT_SYMBOL(page_pool_update_nid);
> > +
> > +void __page_pool_iov_free(struct page_pool_iov *ppiov)
> > +{
> > + if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
> > + return;
> > +
> > + netdev_free_dmabuf(ppiov);
> > +}
> > +EXPORT_SYMBOL_GPL(__page_pool_iov_free);
>
> I didn't look too deep but I don't think I immediately follow
> the pp refcounting. It increments pages_state_hold_cnt on
> allocation, but IIUC doesn't mark skbs for recycle? Then, they all
> will be put down via page_pool_iov_put_many() bypassing
> page_pool_return_page() and friends. That will call
> netdev_free_dmabuf(), which doesn't bump pages_state_release_cnt.
>
> At least I couldn't make it work with io_uring, and for my purposes,
> I forced all puts to go through page_pool_return_page(), which calls
> the ->release_page callback. The callback will put the reference and
> ask its page pool to account release_cnt. It also gets rid of
> __page_pool_iov_free(), as we'd need to add a hook there for
> customization otherwise.
>
> I didn't care about overhead because the hot path for me is getting
> buffers from a ring, which is somewhat analogous to sock_devmem_dontneed(),
> but done on pp allocations under napi, and it's done separately.
>
> Completely untested with TCP devmem:
>
> https://github.com/isilence/linux/commit/14bd56605183dc80b540999e8058c79ac92ae2d8
>

This was a mistake in the last RFC, which should be fixed in v1. In
the RFC I was not marking the skbs as skb_mark_for_recycle(), so the
unreffing path wasn't as expected.

In this iteration, that should be completely fixed. I suspect since I
just posted this you're actually referring to the issue tested on the
last RFC? Correct me if wrong.

In this iteration, the reffing story:

- memory provider allocs ppiov and returns it to the page pool with
ppiov->refcount == 1.
- The page_pool gives the page to the driver. The driver may
obtain/release references with page_pool_page_[get|put]_many(), but
the driver is likely not doing that unless it's doing its own page
recycling.
- The net stack obtains references via skb_frag_ref() ->
page_pool_page_get_many()
- The net stack drops references via skb_frag_unref() ->
napi_pp_put_page() -> page_pool_return_page() and friends.

Thus, the issue where the unref path was skipping
page_pool_return_page() and friends should be resolved in this
iteration. Let me know if you think otherwise, but I think this was an
issue limited to the last RFC.
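
A hedged sketch of the rx-side requirement implied by the recycle
marking above (the allocation calls are assumed placeholders, not GVE
code):

	skb = napi_alloc_skb(napi, headlen);		/* assumed */
	/* 'page' may really be a PP_IOV-tagged page_pool_iov */
	skb_add_rx_frag(skb, 0, page, offset, len, PAGE_SIZE);
	/* Without this, skb_frag_unref() would not route back through
	 * napi_pp_put_page() and the provider's ->release_page().
	 */
	skb_mark_for_recycle(skb);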

> > +
> > +/*** "Dmabuf devmem memory provider" ***/
> > +
> > +static int mp_dmabuf_devmem_init(struct page_pool *pool)
> > +{
> > + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> > +
> > + if (!binding)
> > + return -EINVAL;
> > +
> > + if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> > + return -EOPNOTSUPP;
> > +
> > + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > + return -EOPNOTSUPP;
> > +
> > + netdev_dmabuf_binding_get(binding);
> > + return 0;
> > +}
> > +
> > +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
> > + gfp_t gfp)
> > +{
> > + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> > + struct page_pool_iov *ppiov;
> > +
> > + ppiov = netdev_alloc_dmabuf(binding);
> > + if (!ppiov)
> > + return NULL;
> > +
> > + ppiov->pp = pool;
> > + pool->pages_state_hold_cnt++;
> > + trace_page_pool_state_hold(pool, (struct page *)ppiov,
> > + pool->pages_state_hold_cnt);
> > + return (struct page *)((unsigned long)ppiov | PP_IOV);
> > +}
> > +
> > +static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
> > +{
> > + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> > +
> > + netdev_dmabuf_binding_put(binding);
> > +}
> > +
> > +static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
> > + struct page *page)
> > +{
> > + struct page_pool_iov *ppiov;
> > +
> > + if (WARN_ON_ONCE(!page_is_page_pool_iov(page)))
> > + return false;
> > +
> > + ppiov = page_to_page_pool_iov(page);
> > + page_pool_iov_put_many(ppiov, 1);
> > + /* We don't want the page pool put_page()ing our page_pool_iovs. */
> > + return false;
> > +}
> > +
> > +const struct memory_provider_ops dmabuf_devmem_ops = {
> > + .init = mp_dmabuf_devmem_init,
> > + .destroy = mp_dmabuf_devmem_destroy,
> > + .alloc_pages = mp_dmabuf_devmem_alloc_pages,
> > + .release_page = mp_dmabuf_devmem_release_page,
> > +};
> > +EXPORT_SYMBOL(dmabuf_devmem_ops);
>
> --
> Pavel Begunkov



--
Thanks,
Mina

2023-12-09 23:29:29

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

On 12/8/23 12:22 PM, Mina Almasry wrote:
> On Fri, Dec 8, 2023 at 9:48 AM David Ahern <[email protected]> wrote:
>>
>> On 12/7/23 5:52 PM, Mina Almasry wrote:
> ...
>>> +
>>> + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
>>> + if (rxq->binding == binding) {
>>> + /* We hold the rtnl_lock while binding/unbinding
>>> + * dma-buf, so we can't race with another thread that
>>> + * is also modifying this value. However, the driver
>>> + * may read this config while it's creating its
>>> + * rx-queues. WRITE_ONCE() here to match the
>>> + * READ_ONCE() in the driver.
>>> + */
>>> + WRITE_ONCE(rxq->binding, NULL);
>>> +
>>> + rxq_idx = get_netdev_rx_queue_index(rxq);
>>> +
>>> + netdev_restart_rx_queue(binding->dev, rxq_idx);
>>
>> Blindly restarting a queue when a dmabuf is heavy handed. If the dmabuf
>> has no outstanding references (ie., no references in the RxQ), then no
>> restart is needed.
>>
>
> I think I need to stop the queue while binding to a dmabuf for the
> sake of concurrency, no? I.e. the softirq thread may be delivering a
> packet, and in parallel a separate thread holds rtnl_lock and tries to
> bind the dma-buf. At that point the page_pool recreation will race
> with the driver doing page_pool_alloc_page(). I don't think I can
> insert a lock to handle this into the rx fast path, no?

I think it depends on the details of how entries are added and removed
from the pool. I am behind on the pp details at this point, so I do need
to do some homework.

>
> Also, this sounds like it requires (lots of) more changes. The
> page_pool + driver need to report how many pending references there
> are (with locking so we don't race with incoming packets), and have
> them reported via an ndo so that we can skip restarting the queue.
> Implementing the changes isn't a huge issue, but handling the
> concurrency may be a genuine blocker. Not sure it's worth the upside
> of not restarting the single rx queue?

It has to do with the usability of this overall solution. As I mentioned
most ML use cases can (and will want to) use many memory allocations for
receiving packets - e.g., allocations per message and receiving multiple
messages per socket connection.

>
>>> + }
>>> + }
>>> +
>>> + xa_erase(&netdev_dmabuf_bindings, binding->id);
>>> +
>>> + netdev_dmabuf_binding_put(binding);
>>> +}
>>> +
>>> +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
>>> + struct netdev_dmabuf_binding *binding)
>>> +{
>>> + struct netdev_rx_queue *rxq;
>>> + u32 xa_idx;
>>> + int err;
>>> +
>>> + rxq = __netif_get_rx_queue(dev, rxq_idx);
>>> +
>>> + if (rxq->binding)
>>> + return -EEXIST;
>>> +
>>> + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
>>> + GFP_KERNEL);
>>> + if (err)
>>> + return err;
>>> +
>>> + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
>>> + * race with another thread that is also modifying this value. However,
>>> + * the driver may read this config while it's creating its * rx-queues.
>>> + * WRITE_ONCE() here to match the READ_ONCE() in the driver.
>>> + */
>>> + WRITE_ONCE(rxq->binding, binding);
>>> +
>>> + err = netdev_restart_rx_queue(dev, rxq_idx);
>>
>> Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
>> binding to add entries to the page pool for the queue.
>
> To be honest, I think maybe there's a slight disconnect between how
> you think the page_pool works, and my primitive understanding of how
> it works. Today, I see a 1:1 mapping between rx-queue and page_pool in
> the code. I don't see 1:many or many:1 mappings.

I am not referring to 1:N or N:1 for page pool and queues. I am
referring to entries within a single page pool for a single Rx queue.


>
> In theory mapping 1 rx-queue to n page_pools is trivial: the driver
> can call page_pool_create() multiple times to generate n queues and
> decide for incoming packets which one to use.
>
> However, mapping n rx-queues to 1 page_pool seems like a can of worms.
> I see code in the page_pool that looks to me (and Willem) like it's
> safe only because the page_pool is used from the same napi context.
> with a n rx-queueue: 1 page_pool mapping, that is no longer true, no?
> There is a tail end of issues to resolve to be able to map 1 page_pool
> to n queues as I understand and even if resolved I'm not sure the
> maintainers are interested in taking the code.
>
> So, per my humble understanding there is no such thing as "add entries
> to the page pool for the (specific) queue", the page_pool is always
> used by 1 queue.
>
> Note that even though this limitation exists, we still support binding
> 1 dma-buf to multiple queues, because multiple page pools can use the
> same netdev_dmabuf_binding. I should add that to the docs.
>
>> If the pool was
>> previously empty, then maybe the queue needs to be "started" in the
>> sense of creating with h/w or just pushing buffers into the queue and
>> moving the pidx.
>>
>>
>
> I don't think it's enough to add buffers to the page_pool, no? The
> existing buffers in the page_pool (host mem) must be purged. I think
> maybe the queue needs to be stopped as well so that we don't race with
> incoming packets and end up with skbs with devmem and non-devmem frags
> (unless you're thinking it becomes a requirement to support that, I
> think things are complicated as-is and it's a good simplification).
> When we already purge the existing buffers & restart the queue, it's
> little effort to migrate this to become in line with Jakub's queue-api
> that he also wants to use for per-queue configuration & ndo_stop/open.
>

2023-12-10 03:05:36

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/8/23 23:25, Mina Almasry wrote:
> On Fri, Dec 8, 2023 at 2:56 PM Pavel Begunkov <[email protected]> wrote:
>>
>> On 12/8/23 00:52, Mina Almasry wrote:
> ...
>>> + if (pool->p.queue)
>>> + binding = READ_ONCE(pool->p.queue->binding);
>>> +
>>> + if (binding) {
>>> + pool->mp_ops = &dmabuf_devmem_ops;
>>> + pool->mp_priv = binding;
>>> + }
>>
>> Hmm, I don't understand why would we replace a nice transparent
>> api with page pool relying on a queue having devmem specific
>> pointer? It seemed more flexible and cleaner in the last RFC.
>>
>
> Jakub requested this change and may chime in, but I suspect it's to
> further abstract the devmem changes from driver. In this iteration,
> the driver grabs the netdev_rx_queue and passes it to the page_pool,
> and any future configurations between the net stack and page_pool can
> be passed this way with the driver unbothered.

Ok, that makes sense, but even if passed via an rx queue I'd at least
hope it keeps abstract provider parameters, e.g. ops, rather than being
hard coded with devmem specific code.

It might even be better done with a helper like
create_page_pool_from_queue(), unless some deeper interaction b/w pp
and rx queues is predicted.

>>> +
>>> if (pool->mp_ops) {
>>> err = pool->mp_ops->init(pool);
>>> if (err) {
>>> @@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>>> }
>>> }
>>> EXPORT_SYMBOL(page_pool_update_nid);
>>> +
>>> +void __page_pool_iov_free(struct page_pool_iov *ppiov)
>>> +{
>>> + if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
>>> + return;
>>> +
>>> + netdev_free_dmabuf(ppiov);
>>> +}
>>> +EXPORT_SYMBOL_GPL(__page_pool_iov_free);
>>
>> I didn't look too deep but I don't think I immediately follow
>> the pp refcounting. It increments pages_state_hold_cnt on
>> allocation, but IIUC doesn't mark skbs for recycle? Then, they all
>> will be put down via page_pool_iov_put_many() bypassing
>> page_pool_return_page() and friends. That will call
>> netdev_free_dmabuf(), which doesn't bump pages_state_release_cnt.
>>
>> At least I couldn't make it work with io_uring, and for my purposes,
>> I forced all puts to go through page_pool_return_page(), which calls
>> the ->release_page callback. The callback will put the reference and
>> ask its page pool to account release_cnt. It also gets rid of
>> __page_pool_iov_free(), as we'd need to add a hook there for
>> customization otherwise.
>>
>> I didn't care about overhead because the hot path for me is getting
>> buffers from a ring, which is somewhat analogous to sock_devmem_dontneed(),
>> but done on pp allocations under napi, and it's done separately.
>>
>> Completely untested with TCP devmem:
>>
>> https://github.com/isilence/linux/commit/14bd56605183dc80b540999e8058c79ac92ae2d8
>>
>
> This was a mistake in the last RFC, which should be fixed in v1. In
> the RFC I was not marking the skbs as skb_mark_for_recycle(), so the
> unreffing path wasn't as expected.
>
> In this iteration, that should be completely fixed. I suspect since I
> just posted this you're actually referring to the issue tested on the
> last RFC? Correct me if wrong.

Right, it was with RFCv3

> In this iteration, the reffing story:
>
> - memory provider allocs ppiov and returns it to the page pool with
> ppiov->refcount == 1.
> - The page_pool gives the page to the driver. The driver may
> obtain/release references with page_pool_page_[get|put]_many(), but
> the driver is likely not doing that unless it's doing its own page
> recycling.
> - The net stack obtains references via skb_frag_ref() ->
> page_pool_page_get_many()
> - The net stack drops references via skb_frag_unref() ->
> napi_pp_put_page() -> page_pool_return_page() and friends.
>
> Thus, the issue where the unref path was skipping
> page_pool_return_page() and friends should be resolved in this
> iteration, let me know if you think otherwise, but I think this was an
> issue limited to the last RFC.

Then page_pool_iov_put_many() should, and supposedly would, never be
called by non-devmem code because all puts must circle back into
->release_page. Why add it into page_pool_page_put_many()?

@@ -731,6 +731,29 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
+ if (page_is_page_pool_iov(page)) {
...
+ page_pool_page_put_many(page, 1);
+ return NULL;
+ }

Well, I'm looking at this new branch from Patch 10. It can put
the buffer, but what if we race and it's actually the final put?
Looks like nobody is going to bump up pages_state_release_cnt.

If you remove the branch, let it fall into ->release and rely
on refcounting there, then the callback could also fix up
release_cnt or ask pp to do it, like in the patch I linked above

--
Pavel Begunkov

2023-12-10 03:48:36

by Shakeel Butt

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Thu, Dec 07, 2023 at 04:52:31PM -0800, Mina Almasry wrote:
[...]
>
> Today, the majority of the Device-to-Device data transfers the network are

'the network' in above can be removed.

> implemented as the following low level operations: Device-to-Host copy,
> Host-to-Host network transfer, and Host-to-Device copy.
>

[...]

>
> ** Part 5: recvmsg() APIs
>
> We define user APIs for the user to send and receive device memory.
>
> Not included with this RFC is the GVE devmem TCP support, just to

no more RFC

> simplify the review. Code available here if desired:
> https://github.com/mina/linux/tree/tcpdevmem
>
> This RFC is built on top of net-next with Jakub's pp-providers changes

no more RFC

[...]
>
> ** Test Setup
>
> Kernel: net-next with this RFC and memory provider API cherry-picked

no more RFC

> locally.
>
> Hardware: Google Cloud A3 VMs.
>
> NIC: GVE with header split & RSS & flow steering support.
>

2023-12-11 02:20:03

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 06/16] netdev: support binding dma-buf to netdevice

On Sat, Dec 9, 2023 at 3:29 PM David Ahern <[email protected]> wrote:
>
> On 12/8/23 12:22 PM, Mina Almasry wrote:
> > On Fri, Dec 8, 2023 at 9:48 AM David Ahern <[email protected]> wrote:
> >>
> >> On 12/7/23 5:52 PM, Mina Almasry wrote:
> > ...
> >>> +
> >>> + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> >>> + if (rxq->binding == binding) {
> >>> + /* We hold the rtnl_lock while binding/unbinding
> >>> + * dma-buf, so we can't race with another thread that
> >>> + * is also modifying this value. However, the driver
> >>> + * may read this config while it's creating its
> >>> + * rx-queues. WRITE_ONCE() here to match the
> >>> + * READ_ONCE() in the driver.
> >>> + */
> >>> + WRITE_ONCE(rxq->binding, NULL);
> >>> +
> >>> + rxq_idx = get_netdev_rx_queue_index(rxq);
> >>> +
> >>> + netdev_restart_rx_queue(binding->dev, rxq_idx);
> >>
> >> Blindly restarting a queue when a dmabuf is heavy handed. If the dmabuf
> >> has no outstanding references (ie., no references in the RxQ), then no
> >> restart is needed.
> >>
> >
> > I think I need to stop the queue while binding to a dmabuf for the
> > sake of concurrency, no? I.e. the softirq thread may be delivering a
> > packet, and in parallel a separate thread holds rtnl_lock and tries to
> > bind the dma-buf. At that point the page_pool recreation will race
> > with the driver doing page_pool_alloc_page(). I don't think I can
> > insert a lock to handle this into the rx fast path, no?
>
> I think it depends on the details of how entries are added and removed
> from the pool. I am behind on the pp details at this point, so I do need
> to do some homework.
>

I think it also depends on the details of how to invalidate buffers
posted to the rx queue of a particular driver. For GVE, as far as I
understand, when the queue is started it allocates a bunch of buffers
and posts them to the rx queue. Then it processes the completion
descriptors from the hardware and posts new buffers to replace the
ones consumed, so any started queue would have posted buffers in it.

As far as I know we also don't support invalidating posted buffers
without first stopping the queue, replacing the buffers, and starting
again. But I don't think these are limitations overly specific to GVE,
I believe non-RDMA NICs work similarly?

But I'd stress that what I'm proposing here should be extensible to
the capabilities of specific drivers. If a driver allows invalidating
posted buffers on the fly, I imagine it can extend the queue API to
declare that support to the netstack in a generic way, and the net
stack can invalidate buffers from the previous page pool and supply
the new one.

> >
> > Also, this sounds like it requires (lots of) more changes. The
> > page_pool + driver need to report how many pending references there
> > are (with locking so we don't race with incoming packets), and have
> > them reported via an ndo so that we can skip restarting the queue.
> > Implementing the changes isn't a huge issue, but handling the
> > concurrency may be a genuine blocker. Not sure it's worth the upside
> > of not restarting the single rx queue?
>
> It has to do with the usability of this overall solution. As I mentioned
> most ML use cases can (and will want to) use many memory allocations for
> receiving packets - e.g., allocations per message and receiving multiple
> messages per socket connection.
>

We support that by flow steering different flows to different RX
queues. Our NICs don't support smart choosing of which page_pool to
place the packet in (based on an ntuple rule or what not). So flows
that must land in a given dmabuf are flow steered to that dmabuf, and
flows that need to land in host memory are not flow steered and are
RSS'd to the non-dmabuf bound queues. This should also be extensible
by folks that have NICs with the appropriate support.

> >
> >>> + }
> >>> + }
> >>> +
> >>> + xa_erase(&netdev_dmabuf_bindings, binding->id);
> >>> +
> >>> + netdev_dmabuf_binding_put(binding);
> >>> +}
> >>> +
> >>> +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> >>> + struct netdev_dmabuf_binding *binding)
> >>> +{
> >>> + struct netdev_rx_queue *rxq;
> >>> + u32 xa_idx;
> >>> + int err;
> >>> +
> >>> + rxq = __netif_get_rx_queue(dev, rxq_idx);
> >>> +
> >>> + if (rxq->binding)
> >>> + return -EEXIST;
> >>> +
> >>> + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> >>> + GFP_KERNEL);
> >>> + if (err)
> >>> + return err;
> >>> +
> >>> + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> >>> + * race with another thread that is also modifying this value. However,
> >>> + * the driver may read this config while it's creating its * rx-queues.
> >>> + * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> >>> + */
> >>> + WRITE_ONCE(rxq->binding, binding);
> >>> +
> >>> + err = netdev_restart_rx_queue(dev, rxq_idx);
> >>
> >> Similarly, here binding a dmabuf to a queue. I was expecting the dmabuf
> >> binding to add entries to the page pool for the queue.
> >
> > To be honest, I think maybe there's a slight disconnect between how
> > you think the page_pool works, and my primitive understanding of how
> > it works. Today, I see a 1:1 mapping between rx-queue and page_pool in
> > the code. I don't see 1:many or many:1 mappings.
>
> I am not referring to 1:N or N:1 for page pool and queues. I am
> referring to entries within a single page pool for a single Rx queue.
>
>

Thanks, glad to hear that. I was afraid there is a miscommunication here.

> >
> > In theory mapping 1 rx-queue to n page_pools is trivial: the driver
> > can call page_pool_create() multiple times to generate n queues and
> > decide for incoming packets which one to use.
> >
> > However, mapping n rx-queues to 1 page_pool seems like a can of worms.
> > I see code in the page_pool that looks to me (and Willem) like it's
> > safe only because the page_pool is used from the same napi context.
> > with a n rx-queueue: 1 page_pool mapping, that is no longer true, no?
> > There is a tail end of issues to resolve to be able to map 1 page_pool
> > to n queues as I understand and even if resolved I'm not sure the
> > maintainers are interested in taking the code.
> >
> > So, per my humble understanding there is no such thing as "add entries
> > to the page pool for the (specific) queue", the page_pool is always
> > used by 1 queue.
> >
> > Note that even though this limitation exists, we still support binding
> > 1 dma-buf to multiple queues, because multiple page pools can use the
> > same netdev_dmabuf_binding. I should add that to the docs.
> >
> >> If the pool was
> >> previously empty, then maybe the queue needs to be "started" in the
> >> sense of creating with h/w or just pushing buffers into the queue and
> >> moving the pidx.
> >>
> >>
> >
> > I don't think it's enough to add buffers to the page_pool, no? The
> > existing buffers in the page_pool (host mem) must be purged. I think
> > maybe the queue needs to be stopped as well so that we don't race with
> > incoming packets and end up with skbs with devmem and non-devmem frags
> > (unless you're thinking it becomes a requirement to support that, I
> > think things are complicated as-is and it's a good simplification).
> > When we already purge the existing buffers & restart the queue, it's
> > little effort to migrate this to become in line with Jakub's queue-api
> > that he also wants to use for per-queue configuration & ndo_stop/open.
> >
>


--
Thanks,
Mina

2023-12-11 02:27:00

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Sun, Dec 10, 2023 at 6:04 PM Yunsheng Lin <[email protected]> wrote:
>
> On 2023/12/9 0:05, Mina Almasry wrote:
> > On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
> >>
> >>
> >> As mentioned before, it seems we need to have the above checking every
> >> time we need to do some per-page handling in page_pool core, is there
> >> a plan in your mind how to remove those kind of checking in the future?
> >>
> >
> > I see 2 ways to remove the checking, both infeasible:
> >
> > 1. Allocate a wrapper struct that pulls out all the fields the page pool needs:
> >
> > struct netmem {
> > /* common fields */
> > refcount_t refcount;
> > bool is_pfmemalloc;
> > int nid;
> > ...
> > union {
> > struct dmabuf_genpool_chunk_owner *owner;
> > struct page * page;
> > };
> > };
> >
> > The page pool can then not care if the underlying memory is iov or
> > page. However this introduces significant memory bloat as this struct
> > needs to be allocated for each page or ppiov, which I imagine is not
> > acceptable for the upside of removing a few static_branch'd if
> > statements with no performance cost.
> >
> > 2. Create a unified struct for page and dmabuf memory, which the mm
> > folks have repeatedly nacked, and I imagine will repeatedly nack in
> > the future.
> >
> > So I imagine the special handling of ppiov in some form is critical
> > and the checking may not be removable.
>
> If the above is true, perhaps devmem is not really supposed to be integrated
> into page_pool.
>
> Adding a check for every per-page handling in page_pool core is just too
> hacky to be really considered a longterm solution.
>

The only other option is to implement another page_pool for ppiov and
have the driver create page_pool or ppiov_pool depending on the state
of the netdev_rx_queue (or some helper in the net stack to do that for
the driver). This introduces some code duplication. The ppiov_pool &
page_pool would look similar in implementation.

But this was all discussed in detail in RFC v2, and the last response I
heard from Jesper was in favor of this approach, if I understand
correctly:

https://lore.kernel.org/netdev/[email protected]/

Would love to have the maintainer weigh in here.

> It is somewhat ironic that devmem is using static_branch to alleviate the
> performance impact for normal memory at the possible cost of performance
> degradation for devmem; does it not defeat some purpose of integrating devmem
> into page_pool?
>

I don't see the issue. The static branch sets the non-ppiov path as
default if no memory providers are in use, and flips it when they are,
making the default branch prediction ideal in both cases.
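
A hedged sketch of that flip (where exactly the inc/dec happens is an
assumption; only the static key itself comes from this series):

	/* page_is_page_pool_iov() stays a patched-out, always-false
	 * branch until the first provider-backed pool is created.
	 */
	static int example_mp_init(struct page_pool *pool)
	{
		/* ... provider-specific setup ... */
		static_branch_inc(&page_pool_mem_providers);
		return 0;
	}

	static void example_mp_destroy(struct page_pool *pool)
	{
		static_branch_dec(&page_pool_mem_providers);
	}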

> >
> >> Even though a static_branch check is added in page_is_page_pool_iov(), it
> >> does not make much sense that a core has two different 'struct' for its
> >> most basic data.
> >>
> >> IMHO, the ppiov for dmabuf is forced fitting into page_pool without much
> >> design consideration at this point.
> >>
> > ...
> >>
> >> For now, the above may work for the rx part as it seems that you are
> >> only enabling rx for dmabuf for now.
> >>
> >> What is the plan to enable tx for dmabuf? If it is also integrated into
> >> page_pool? There was an attempt to enable page_pool for tx, Eric seemed to
> >> have some comment about this:
> >> https://lkml.kernel.org/netdev/[email protected]/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2
> >>
> >> If tx is not integrated into page_pool, do we need to create a new layer for
> >> the tx dmabuf?
> >>
> >
> > I imagine the TX path will reuse page_pool_iov, page_pool_iov_*()
> > helpers, and page_pool_page_*() helpers, but will not need any core
> > page_pool changes. This is because the TX path will have to piggyback
>
> We may need another bit/flags checking to demux between page_pool owned
> devmem and non-page_pool owned devmem.
>

The way I'm imagining the support, I don't see the need for such
flags. We'd be re-using generic helpers like
page_pool_iov_get_dma_address() and what not that don't need that
checking.

> Also calling page_pool_*() on non-page_pool owned devmem is confusing
> enough that we may need a thin layer handling non-page_pool owned devmem
> in the end.
>

The page_pool_page* & page_pool_iov* functions can be renamed if
confusing. I would think that's no issue (note that the page_pool_*
functions need not be called for TX path).

> > on MSG_ZEROCOPY (devmem is not copyable), so no memory allocation from
> > the page_pool (or otherwise) is needed or possible. RFCv1 had a TX
> > implementation based on dmabuf pages without page_pool involvement, I
> > imagine I'll do something similar.
> It would be good to have a tx implementation for the next version, so
> that we can have a whole picture of devmem.
>
> >



--
Thanks,
Mina

2023-12-11 02:31:04

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Sat, Dec 9, 2023 at 7:05 PM Pavel Begunkov <[email protected]> wrote:
>
> On 12/8/23 23:25, Mina Almasry wrote:
> > On Fri, Dec 8, 2023 at 2:56 PM Pavel Begunkov <[email protected]> wrote:
> >>
> >> On 12/8/23 00:52, Mina Almasry wrote:
> > ...
> >>> + if (pool->p.queue)
> >>> + binding = READ_ONCE(pool->p.queue->binding);
> >>> +
> >>> + if (binding) {
> >>> + pool->mp_ops = &dmabuf_devmem_ops;
> >>> + pool->mp_priv = binding;
> >>> + }
> >>
> >> Hmm, I don't understand why would we replace a nice transparent
> >> api with page pool relying on a queue having devmem specific
> >> pointer? It seemed more flexible and cleaner in the last RFC.
> >>
> >
> > Jakub requested this change and may chime in, but I suspect it's to
> > further abstract the devmem changes from driver. In this iteration,
> > the driver grabs the netdev_rx_queue and passes it to the page_pool,
> > and any future configurations between the net stack and page_pool can
> > be passed this way with the driver unbothered.
>
> Ok, that makes sense, but even if passed via an rx queue I'd
> at least hope it keeping abstract provider parameters, e.g.
> ops, but not hard coded with devmem specific code.
>
> It might even be better done with a helper like
> create_page_pool_from_queue(), unless there is some deeper
> interaction b/w pp and rx queues is predicted.
>

Off hand I don't see the need for a new create_page_pool_from_queue().
page_pool_create() already takes in a param arg that lets us pass in
the queue as well as any other params.

> >>> +
> >>> if (pool->mp_ops) {
> >>> err = pool->mp_ops->init(pool);
> >>> if (err) {
> >>> @@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
> >>> }
> >>> }
> >>> EXPORT_SYMBOL(page_pool_update_nid);
> >>> +
> >>> +void __page_pool_iov_free(struct page_pool_iov *ppiov)
> >>> +{
> >>> + if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
> >>> + return;
> >>> +
> >>> + netdev_free_dmabuf(ppiov);
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(__page_pool_iov_free);
> >>
> >> I didn't look too deep but I don't think I immediately follow
> >> the pp refcounting. It increments pages_state_hold_cnt on
> >> allocation, but IIUC doesn't mark skbs for recycle? Then, they all
> >> will be put down via page_pool_iov_put_many() bypassing
> >> page_pool_return_page() and friends. That will call
> >> netdev_free_dmabuf(), which doesn't bump pages_state_release_cnt.
> >>
> >> At least I couldn't make it work with io_uring, and for my purposes,
> >> I forced all puts to go through page_pool_return_page(), which calls
> >> the ->release_page callback. The callback will put the reference and
> >> ask its page pool to account release_cnt. It also gets rid of
> >> __page_pool_iov_free(), as we'd need to add a hook there for
> >> customization otherwise.
> >>
> >> I didn't care about overhead because the hot path for me is getting
> >> buffers from a ring, which is somewhat analogous to sock_devmem_dontneed(),
> >> but done on pp allocations under napi, and it's done separately.
> >>
> >> Completely untested with TCP devmem:
> >>
> >> https://github.com/isilence/linux/commit/14bd56605183dc80b540999e8058c79ac92ae2d8
> >>
> >
> > This was a mistake in the last RFC, which should be fixed in v1. In
> > the RFC I was not marking the skbs as skb_mark_for_recycle(), so the
> > unreffing path wasn't as expected.
> >
> > In this iteration, that should be completely fixed. I suspect since I
> > just posted this you're actually referring to the issue tested on the
> > last RFC? Correct me if wrong.
>
> Right, it was with RFCv3
>
> > In this iteration, the reffing story:
> >
> > - memory provider allocs ppiov and returns it to the page pool with
> > ppiov->refcount == 1.
> > - The page_pool gives the page to the driver. The driver may
> > obtain/release references with page_pool_page_[get|put]_many(), but
> > the driver is likely not doing that unless it's doing its own page
> > recycling.
> > - The net stack obtains references via skb_frag_ref() ->
> > page_pool_page_get_many()
> > - The net stack drops references via skb_frag_unref() ->
> > napi_pp_put_page() -> page_pool_return_page() and friends.
> >
> > Thus, the issue where the unref path was skipping
> > page_pool_return_page() and friends should be resolved in this
> > iteration, let me know if you think otherwise, but I think this was an
> > issue limited to the last RFC.
>
> Then page_pool_iov_put_many() should and supposedly would never be
> called by non devmap code because all puts must circle back into
> ->release_page. Why adding it to into page_pool_page_put_many()?
>
> @@ -731,6 +731,29 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> + if (page_is_page_pool_iov(page)) {
> ...
> + page_pool_page_put_many(page, 1);
> + return NULL;
> + }
>
> Well, I'm looking at this new branch from Patch 10, it can put
> the buffer, but what if we race at it's actually the final put?
> Looks like nobody is going to to bump up pages_state_release_cnt
>

Good catch, I think indeed the release_cnt would be incorrect in this
case. I think the race is benign in the sense that the ppiov will be
freed correctly and available for allocation when the page_pool next
needs it; the issue is with the stats AFAICT.

> If you remove the branch, let it fall into ->release and rely
> on refcounting there, then the callback could also fix up
> release_cnt or ask pp to do it, like in the patch I linked above
>

Sadly I don't think this is possible due to the reasons I mention in
the commit message of that patch. Prematurely releasing ppiov and not
having them be candidates for recycling shows me a 4-5x degradation in
performance.

What I could do here is detect that the refcount was dropped to 0 and
fix up the stats in that case.
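
One possible shape of that fix-up, as a hedged sketch (both helpers
below are hypothetical names, not part of this series):

	if (page_is_page_pool_iov(page)) {
		struct page_pool_iov *ppiov = page_to_page_pool_iov(page);

		/* hypothetical: like page_pool_iov_put_many(), but
		 * reports whether this put dropped the last reference
		 */
		if (page_pool_iov_put_many_and_test(ppiov, 1))
			page_pool_inc_release_cnt(pool); /* hypothetical */
		return NULL;
	}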

--
Thanks,
Mina

2023-12-11 04:05:16

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Sun, Dec 10, 2023 at 6:26 PM Mina Almasry <[email protected]> wrote:
>
> On Sun, Dec 10, 2023 at 6:04 PM Yunsheng Lin <[email protected]> wrote:
> >
> > On 2023/12/9 0:05, Mina Almasry wrote:
> > > On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
> > >>
> > >>
> > >> As mentioned before, it seems we need to have the above checking every
> > >> time we need to do some per-page handling in page_pool core, is there
> > >> a plan in your mind how to remove those kind of checking in the future?
> > >>
> > >
> > > I see 2 ways to remove the checking, both infeasible:
> > >
> > > 1. Allocate a wrapper struct that pulls out all the fields the page pool needs:
> > >
> > > struct netmem {
> > > /* common fields */
> > > refcount_t refcount;
> > > bool is_pfmemalloc;
> > > int nid;
> > > ...
> > > union {
> > > struct dmabuf_genpool_chunk_owner *owner;
> > > struct page * page;
> > > };
> > > };
> > >
> > > The page pool can then not care if the underlying memory is iov or
> > > page. However this introduces significant memory bloat as this struct
> > > needs to be allocated for each page or ppiov, which I imagine is not
> > > acceptable for the upside of removing a few static_branch'd if
> > > statements with no performance cost.
> > >
> > > 2. Create a unified struct for page and dmabuf memory, which the mm
> > > folks have repeatedly nacked, and I imagine will repeatedly nack in
> > > the future.
> > >
> > > So I imagine the special handling of ppiov in some form is critical
> > > and the checking may not be removable.
> >
> > If the above is true, perhaps devmem is not really supposed to be integrated
> > into page_pool.
> >
> > Adding a check for every piece of per-page handling in page_pool core is just
> > too hacky to really be considered a long-term solution.
> >
>
> The only other option is to implement another page_pool for ppiov and
> have the driver create page_pool or ppiov_pool depending on the state
> of the netdev_rx_queue (or some helper in the net stack to do that for
> the driver). This introduces some code duplication. The ppiov_pool &
> page_pool would look similar in implementation.
>
> But this was all discussed in detail in RFC v2 and the last response I
> heard from Jesper was in favor if this approach, if I understand
> correctly:
>
> https://lore.kernel.org/netdev/[email protected]/
>
> Would love to have the maintainer weigh in here.
>

I should note we may be able to remove some of the checking, but maybe not all.

- Checks that disable page fragging for ppiov can be removed once
ppiov has frag support (in this series or follow up).

- If we use page->pp_frag_count (or page->pp_ref_count) for
refcounting ppiov, we can remove the if checking in the refcounting.

- We may be able to store the dma_addr of the ppiov in page->dma_addr,
but I'm unsure if that actually works, because the dma_buf dma addr is
dma_addr_t (u32 or u64), but page->dma_addr is unsigned long (4 bytes
on 32-bit). But if it works for pages I may be able to make it work
for ppiov as well.

- Checks that obtain the page->pp can work with ppiov if we align the
offset of page->pp and ppiov->pp.

- Checks around page->pp_magic can be removed if we also have an
offset-aligned ppiov->pp_magic (a rough layout sketch follows below).

Sadly I don't see us removing the checking for these other cases:

- page_is_pfmemalloc(): I'm not allowed to pass a non-struct page into
that helper.

- page_to_nid(): I'm not allowed to pass a non-struct page into that helper.

- page_pool_free_va(): ppiov have no va.

- page_pool_sync_for_dev/page_pool_dma_map: ppiov backed by dma-buf
fundamentally can't get mapped again.

Are the removal (or future removal) of these checks enough to resolve this?
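
For the pp/pp_magic alignment above, I'm thinking of something roughly
like this (illustrative layout only, not what the series has today;
leading padding is needed so the fields line up with the pp union
inside struct page):

struct page_pool_iov {
        unsigned long _pad;             /* lines up with page->flags */
        unsigned long pp_magic;         /* same offset as page->pp_magic */
        struct page_pool *pp;           /* same offset as page->pp */
        struct dmabuf_genpool_chunk_owner *owner;
        refcount_t refcount;
};

static_assert(offsetof(struct page_pool_iov, pp_magic) ==
              offsetof(struct page, pp_magic));
static_assert(offsetof(struct page_pool_iov, pp) ==
              offsetof(struct page, pp));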

> > It is somewhat ironic that devmem is using a static_branch to alleviate the
> > performance impact for normal memory at the possible cost of performance
> > degradation for devmem; does it not defeat some of the purpose of integrating
> > devmem into page_pool?
> >
>
> I don't see the issue. The static branch sets the non-ppiov path as
> default if no memory providers are in use, and flips it when they are,
> making the default branch prediction ideal in both cases.
>
> > >
> > >> Even though a static_branch check is added in page_is_page_pool_iov(), it
> > >> does not make much sense that a core has two different structs for its
> > >> most basic data.
> > >>
> > >> IMHO, the ppiov for dmabuf is force-fitted into page_pool without much
> > >> design consideration at this point.
> > >>
> > > ...
> > >>
> > >> For now, the above may work for the rx part, as it seems that you are
> > >> only enabling rx for dmabuf for now.
> > >>
> > >> What is the plan to enable tx for dmabuf? Is it also integrated into
> > >> page_pool? There was an attempt to enable page_pool for tx; Eric seemed to
> > >> have some comment about this:
> > >> https://lkml.kernel.org/netdev/[email protected]/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2
> > >>
> > >> If tx is not integrated into page_pool, do we need to create a new layer for
> > >> the tx dmabuf?
> > >>
> > >
> > > I imagine the TX path will reuse page_pool_iov, page_pool_iov_*()
> > > helpers, and page_pool_page_*() helpers, but will not need any core
> > > page_pool changes. This is because the TX path will have to piggyback
> >
> > We may need another bit/flags checking to demux between page_pool owned
> > devmem and non-page_pool owned devmem.
> >
>
> The way I'm imagining the support, I don't see the need for such
> flags. We'd be re-using generic helpers like
> page_pool_iov_get_dma_address() and what not that don't need that
> checking.
>
> > Also calling page_pool_*() on non-page_pool owned devmem is confusing
> > enough that we may need a thin layer handling non-page_pool owned devmem
> > in the end.
> >
>
> The page_pool_page* & page_pool_iov* functions can be renamed if
> confusing. I would think that's no issue (note that the page_pool_*
> functions need not be called for TX path).
>
> > > on MSG_ZEROCOPY (devmem is not copyable), so no memory allocation from
> > > the page_pool (or otherwise) is needed or possible. RFCv1 had a TX
> > > implementation based on dmabuf pages without page_pool involvement, I
> > > imagine I'll do something similar.
> > It would be good to have a tx implementation for the next version, so
> > that we can have a whole picture of devmem.
> >
> > >
>
>
>
> --
> Thanks,
> Mina



--
Thanks,
Mina

2023-12-11 18:14:34

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Mon, Dec 11, 2023 at 3:51 AM Yunsheng Lin <[email protected]> wrote:
>
> On 2023/12/11 12:04, Mina Almasry wrote:
> > On Sun, Dec 10, 2023 at 6:26 PM Mina Almasry <[email protected]> wrote:
> >>
> >> On Sun, Dec 10, 2023 at 6:04 PM Yunsheng Lin <[email protected]> wrote:
> >>>
> >>> On 2023/12/9 0:05, Mina Almasry wrote:
> >>>> On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
> >>>>>
> >>>>>
> >>>>> As mentioned before, it seems we need to have the above checking every
> >>>>> time we need to do some per-page handling in page_pool core, is there
> >>>>> a plan in your mind how to remove those kind of checking in the future?
> >>>>>
> >>>>
> >>>> I see 2 ways to remove the checking, both infeasible:
> >>>>
> >>>> 1. Allocate a wrapper struct that pulls out all the fields the page pool needs:
> >>>>
> >>>> struct netmem {
> >>>> /* common fields */
> >>>> refcount_t refcount;
> >>>> bool is_pfmemalloc;
> >>>> int nid;
> >>>> ...
> >>>> union {
> >>>> struct dmabuf_genpool_chunk_owner *owner;
> >>>> struct page * page;
> >>>> };
> >>>> };
> >>>>
> >>>> The page pool can then not care if the underlying memory is iov or
> >>>> page. However this introduces significant memory bloat as this struct
> >>>> needs to be allocated for each page or ppiov, which I imagine is not
> >>>> acceptable for the upside of removing a few static_branch'd if
> >>>> statements with no performance cost.
> >>>>
> >>>> 2. Create a unified struct for page and dmabuf memory, which the mm
> >>>> folks have repeatedly nacked, and I imagine will repeatedly nack in
> >>>> the future.
> >>>>
> >>>> So I imagine the special handling of ppiov in some form is critical
> >>>> and the checking may not be removable.
> >>>
> >>> If the above is true, perhaps devmem is not really supposed to be intergated
> >>> into page_pool.
> >>>
> >>> Adding a checking for every per-page handling in page_pool core is just too
> >>> hacky to be really considerred a longterm solution.
> >>>
> >>
> >> The only other option is to implement another page_pool for ppiov and
> >> have the driver create page_pool or ppiov_pool depending on the state
> >> of the netdev_rx_queue (or some helper in the net stack to do that for
> >> the driver). This introduces some code duplication. The ppiov_pool &
> >> page_pool would look similar in implementation.
>
> I think there is a design pattern already to deal with this kind of problem:
> refactoring common code used by both page_pool and ppiov into a library to
> avoid code duplication if most of it has a similar implementation.
>

Code can be refactored when it's identical, not merely similar. I
suspect the two page_pools would be only similar, and if you're not
willing to take devmem handling into the page_pool itself, then
refactoring page_pool code into shared helpers that do devmem handling
may not be an option either.

> >>
> >> But this was all discussed in detail in RFC v2 and the last response I
> >> heard from Jesper was in favor if this approach, if I understand
> >> correctly:
> >>
> >> https://lore.kernel.org/netdev/[email protected]/
> >>
> >> Would love to have the maintainer weigh in here.
> >>
> >
> > I should note we may be able to remove some of the checking, but maybe not all.
> >
> > - Checks that disable page fragging for ppiov can be removed once
> > ppiov has frag support (in this series or follow up).
> >
> > - If we use page->pp_frag_count (or page->pp_ref_count) for
> > refcounting ppiov, we can remove the if checking in the refcounting.
> >

I'm not sure this is actually possible in the short term. The
page_pool uses both page->_refcount and page->pp_frag_count for
refcounting, and I will not be able to remove the special handling
around page->_refcount as I'm not allowed to call page_ref_*() APIs on
a non-struct page.

> > - We may be able to store the dma_addr of the ppiov in page->dma_addr,
> > but I'm unsure if that actually works, because the dma_buf dmaddr is
> > dma_addr_t (u32 or u64), but page->dma_addr is unsigned long (4 bytes
> > I think). But if it works for pages I may be able to make it work for
> > ppiov as well.
> >
> > - Checks that obtain the page->pp can work with ppiov if we align the
> > offset of page->pp and ppiov->pp.
> >
> > - Checks around page->pp_magic can be removed if we also have offset
> > aligned ppiov->pp_magic.
> >
> > Sadly I don't see us removing the checking for these other cases:
> >
> > - page_is_pfmemalloc(): I'm not allowed to pass a non-struct page into
> > that helper.
>
> We can do a similar trick to the above, as bit 1 of page->pp_magic is used to
> indicate whether it is a pfmemalloc page.
>

Likely yes.

> >
> > - page_to_nid(): I'm not allowed to pass a non-struct page into that helper.
>
> Yes, this one needs a special case.
>
> >
> > - page_pool_free_va(): ppiov have no va.
>
> Doesn't the skb_frags_readable() check protect page_pool_free_va()
> from being called on devmem?
>

This function seems to be only called from veth which doesn't support
devmem. I can remove the handling there.

> >
> > - page_pool_sync_for_dev/page_pool_dma_map: ppiov backed by dma-buf
> > fundamentally can't get mapped again.
>
> Can we just fail the page_pool creation with PP_FLAG_DMA_MAP and
> DMA_ATTR_SKIP_CPU_SYNC flags for devmem provider?
>

Jakub says PP_FLAG_DMA_MAP must be enabled for devmem, such that the
page_pool handles the dma mapping of the devmem and the driver doesn't
do its own mapping.

We could fail page_pool creation when PP_FLAG_DMA_SYNC_DEV is set for a
devmem provider, and then remove the check from
page_pool_sync_for_dev(), I think.
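
i.e. something like this in the provider init (sketch of the idea, not
the series' current code):

static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
        struct netdev_dmabuf_binding *binding = pool->mp_priv;

        if (!binding)
                return -EINVAL;

        /* The page_pool must own the dma mapping of the dma-buf... */
        if (!(pool->p.flags & PP_FLAG_DMA_MAP))
                return -EOPNOTSUPP;

        /* ...but it cannot sync it for the device, so reject that. */
        if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
                return -EOPNOTSUPP;

        /* remaining init elided */
        return 0;
}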

> >
> > Are the removal (or future removal) of these checks enough to resolve this?
>
> Yes, that is somewhat similar to my proposal; the biggest objection seems to
> be that we need safe type checking for it to work correctly.
>
> >
> >>> It is somewhat ironical that devmem is using static_branch to alliviate the
> >>> performance impact for normal memory at the possible cost of performance
> >>> degradation for devmem, does it not defeat some purpose of intergating devmem
> >>> to page_pool?
> >>>
> >>
> >> I don't see the issue. The static branch sets the non-ppiov path as
> >> default if no memory providers are in use, and flips it when they are,
> >> making the default branch prediction ideal in both cases.
>
> You are assuming that we are not using the page_pool for both normal memory
> and devmem at the same time. But as I understand it, a generic solution
> should not make that assumption.
>
> >>
> >>>>
> >>>>> Even though a static_branch check is added in page_is_page_pool_iov(), it
> >>>>> does not make much sense that a core has tow different 'struct' for its
> >>>>> most basic data.
> >>>>>
> >>>>> IMHO, the ppiov for dmabuf is forced fitting into page_pool without much
> >>>>> design consideration at this point.
> >>>>>
> >>>> ...
> >>>>>
> >>>>> For now, the above may work for the the rx part as it seems that you are
> >>>>> only enabling rx for dmabuf for now.
> >>>>>
> >>>>> What is the plan to enable tx for dmabuf? If it is also intergrated into
> >>>>> page_pool? There was a attempt to enable page_pool for tx, Eric seemed to
> >>>>> have some comment about this:
> >>>>> https://lkml.kernel.org/netdev/[email protected]/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2
> >>>>>
> >>>>> If tx is not intergrated into page_pool, do we need to create a new layer for
> >>>>> the tx dmabuf?
> >>>>>
> >>>>
> >>>> I imagine the TX path will reuse page_pool_iov, page_pool_iov_*()
> >>>> helpers, and page_pool_page_*() helpers, but will not need any core
> >>>> page_pool changes. This is because the TX path will have to piggyback
> >>>
> >>> We may need another bit/flags checking to demux between page_pool owned
> >>> devmem and non-page_pool owned devmem.
> >>>
> >>
> >> The way I'm imagining the support, I don't see the need for such
> >> flags. We'd be re-using generic helpers like
> >> page_pool_iov_get_dma_address() and what not that don't need that
> >> checking.
> >>
> >>> Also calling page_pool_*() on non-page_pool owned devmem is confusing
> >>> enough that we may need a thin layer handling non-page_pool owned devmem
> >>> in the end.
> >>>
> >>
> >> The page_pool_page* & page_pool_iov* functions can be renamed if
> >> confusing. I would think that's no issue (note that the page_pool_*
>
> When you rename those functions, you will have a thin layer automatically.
>
> >> functions need not be called for TX path).
> >>
> >>>> on MSG_ZEROCOPY (devmem is not copyable), so no memory allocation from
> >>>> the page_pool (or otherwise) is needed or possible. RFCv1 had a TX
> >>>> implementation based on dmabuf pages without page_pool involvement, I
> >>>> imagine I'll do something similar.
> >>> It would be good to have a tx implementation for the next version, so
> >>> that we can have a whole picture of devmem.



--
Thanks,
Mina

2023-12-11 20:37:25

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/11/23 02:30, Mina Almasry wrote:
> On Sat, Dec 9, 2023 at 7:05 PM Pavel Begunkov <[email protected]> wrote:
>>
>> On 12/8/23 23:25, Mina Almasry wrote:
>>> On Fri, Dec 8, 2023 at 2:56 PM Pavel Begunkov <[email protected]> wrote:
>>>>
>>>> On 12/8/23 00:52, Mina Almasry wrote:
>>> ...
>>>>> + if (pool->p.queue)
>>>>> + binding = READ_ONCE(pool->p.queue->binding);
>>>>> +
>>>>> + if (binding) {
>>>>> + pool->mp_ops = &dmabuf_devmem_ops;
>>>>> + pool->mp_priv = binding;
>>>>> + }
>>>>
>>>> Hmm, I don't understand why would we replace a nice transparent
>>>> api with page pool relying on a queue having devmem specific
>>>> pointer? It seemed more flexible and cleaner in the last RFC.
>>>>
>>>
>>> Jakub requested this change and may chime in, but I suspect it's to
>>> further abstract the devmem changes from driver. In this iteration,
>>> the driver grabs the netdev_rx_queue and passes it to the page_pool,
>>> and any future configurations between the net stack and page_pool can
>>> be passed this way with the driver unbothered.
>>
>> Ok, that makes sense, but even if passed via an rx queue I'd
>> at least hope it keeping abstract provider parameters, e.g.
>> ops, but not hard coded with devmem specific code.
>>
>> It might even be better done with a helper like
>> create_page_pool_from_queue(), unless there is some deeper
>> interaction b/w pp and rx queues is predicted.
>>
>
> Off hand I don't see the need for a new create_page_pool_from_queue().
> page_pool_create() already takes in a param arg that lets us pass in
> the queue as well as any other params.
>
>>>>> +
>>>>> if (pool->mp_ops) {
>>>>> err = pool->mp_ops->init(pool);
>>>>> if (err) {
>>>>> @@ -1020,3 +1033,77 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
>>>>> }
>>>>> }
>>>>> EXPORT_SYMBOL(page_pool_update_nid);
>>>>> +
>>>>> +void __page_pool_iov_free(struct page_pool_iov *ppiov)
>>>>> +{
>>>>> + if (WARN_ON(ppiov->pp->mp_ops != &dmabuf_devmem_ops))
>>>>> + return;
>>>>> +
>>>>> + netdev_free_dmabuf(ppiov);
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(__page_pool_iov_free);
>>>>
>>>> I didn't look too deep but I don't think I immediately follow
>>>> the pp refcounting. It increments pages_state_hold_cnt on
>>>> allocation, but IIUC doesn't mark skbs for recycle? Then, they all
>>>> will be put down via page_pool_iov_put_many() bypassing
>>>> page_pool_return_page() and friends. That will call
>>>> netdev_free_dmabuf(), which doesn't bump pages_state_release_cnt.
>>>>
>>>> At least I couldn't make it work with io_uring, and for my purposes,
>>>> I forced all puts to go through page_pool_return_page(), which calls
>>>> the ->release_page callback. The callback will put the reference and
>>>> ask its page pool to account release_cnt. It also gets rid of
>>>> __page_pool_iov_free(), as we'd need to add a hook there for
>>>> customization otherwise.
>>>>
>>>> I didn't care about overhead because the hot path for me is getting
>>>> buffers from a ring, which is somewhat analogous to sock_devmem_dontneed(),
>>>> but done on pp allocations under napi, and it's done separately.
>>>>
>>>> Completely untested with TCP devmem:
>>>>
>>>> https://github.com/isilence/linux/commit/14bd56605183dc80b540999e8058c79ac92ae2d8
>>>>
>>>
>>> This was a mistake in the last RFC, which should be fixed in v1. In
>>> the RFC I was not marking the skbs as skb_mark_for_recycle(), so the
>>> unreffing path wasn't as expected.
>>>
>>> In this iteration, that should be completely fixed. I suspect since I
>>> just posted this you're actually referring to the issue tested on the
>>> last RFC? Correct me if wrong.
>>
>> Right, it was with RFCv3
>>
>>> In this iteration, the reffing story:
>>>
>>> - memory provider allocs ppiov and returns it to the page pool with
>>> ppiov->refcount == 1.
>>> - The page_pool gives the page to the driver. The driver may
>>> obtain/release references with page_pool_page_[get|put]_many(), but
>>> the driver is likely not doing that unless it's doing its own page
>>> recycling.
>>> - The net stack obtains references via skb_frag_ref() ->
>>> page_pool_page_get_many()
>>> - The net stack drops references via skb_frag_unref() ->
>>> napi_pp_put_page() -> page_pool_return_page() and friends.
>>>
>>> Thus, the issue where the unref path was skipping
>>> page_pool_return_page() and friends should be resolved in this
>>> iteration, let me know if you think otherwise, but I think this was an
>>> issue limited to the last RFC.
>>
>> Then page_pool_iov_put_many() should and supposedly would never be
>> called by non-devmem code because all puts must circle back into
>> ->release_page. Why add it into page_pool_page_put_many()?
>>
>> @@ -731,6 +731,29 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
>> + if (page_is_page_pool_iov(page)) {
>> ...
>> + page_pool_page_put_many(page, 1);
>> + return NULL;
>> + }
>>
>> Well, I'm looking at this new branch from Patch 10, it can put
>> the buffer, but what if we race and it's actually the final put?
>> Looks like nobody is going to bump up pages_state_release_cnt
>>
>
> Good catch, I think indeed the release_cnt would be incorrect in this
> case. I think the race is benign in the sense that the ppiov will be
> freed correctly and available for allocation when the page_pool next
> needs it; the issue is with the stats AFAICT.

hold_cnt + release_cnt is used for refcounting. In this case it'll
leak the pool when you try to destroy it.
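
For reference, paraphrasing the teardown logic from memory (the exact
code in net/core/page_pool.c may differ): destroy keeps retrying while
this is non-zero, so a missed release_cnt increment keeps the pool
alive forever.

static s32 page_pool_inflight(struct page_pool *pool)
{
        u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
        u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);

        /* pages handed out to the driver minus pages returned */
        return (s32)(hold_cnt - release_cnt);
}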


>> If you remove the branch, let it fall into ->release and rely
>> on refcounting there, then the callback could also fix up
>> release_cnt or ask pp to do it, like in the patch I linked above
>>
>
> Sadly I don't think this is possible due to the reasons I mention in
> the commit message of that patch. Prematurely releasing ppiov and not
> having them be candidates for recycling shows me a 4-5x degradation in
> performance.

I don't think I follow. The concept is to only recycle a buffer (i.e.
make it available for allocation) when its refs drop to zero, which is
IMHO the only way it can work, and IIUC what this patchset is doing.

That's also what I suggest doing, but through a slightly different path.
Let's say at some moment there are 2 refs (e.g. 1 for an skb and
1 for userspace/xarray).

Say it first puts the skb:

napi_pp_put_page()
-> page_pool_return_page()
-> mp_ops->release_page()
-> need_to_free = put_buf()
// not last ref, need_to_free==false,
// don't recycle, don't increase release_cnt

Then you put the last ref:

page_pool_iov_put_many()
-> page_pool_return_page()
-> mp_ops->release_page()
-> need_to_free = put_buf()
// last ref, need_to_free==true,
// recycle and release_cnt++

And that last put can even be recycled right into the
pp / ptr_ring, in which case it doesn't need to touch
release_cnt. Does that make sense? I don't see where
a 4-5x degradation would come from.
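
To spell out the ->release_page side of that, an untested sketch (the
put/recycle helper names are made up):

static bool mp_release_page(struct page_pool *pool, struct page *page)
{
        struct page_pool_iov *ppiov = page_to_page_pool_iov(page);

        if (!page_pool_iov_put_and_test(ppiov, 1))
                return false;   /* not the last ref: no recycle, no release_cnt++ */

        /* Last ref: make the buffer available for allocation again. */
        mp_recycle_ppiov(pool, ppiov);
        return true;            /* pp accounts it via release_cnt++ */
}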


> What I could do here is detect that the refcount was dropped to 0 and
> fix up the stats in that case.

--
Pavel Begunkov

2023-12-12 05:58:23

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

Please don't spread scatterlists further. They are a bad data structure
that mixes input data (page, offset, len) and output data (phys_addr,
dma_offset, dma_len), and does so in a horrible way for iommu mappings
that can coalesce. Jason and coworkers have been looking into the long
overdue API to better support batch mapping of better data structures,
and this is a prime example of new code that should be using it.

2023-12-12 08:08:22

by Ilias Apalodimas

[permalink] [raw]
Subject: Re: [net-next v1 02/16] net: page_pool: create hooks for custom page providers

Hi Mina,

Apologies for not participating in the party earlier.

On Fri, 8 Dec 2023 at 02:52, Mina Almasry <[email protected]> wrote:
>
> From: Jakub Kicinski <[email protected]>
>
> The page providers which try to reuse the same pages will
> need to hold onto the ref, even if page gets released from
> the pool - as in releasing the page from the pp just transfers
> the "ownership" reference from pp to the provider, and provider
> will wait for other references to be gone before feeding this
> page back into the pool.
>
> Signed-off-by: Jakub Kicinski <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
>
> ---
>
> This is implemented by Jakub in his RFC:
> https://lore.kernel.org/netdev/[email protected]/T/
>
> I take no credit for the idea or implementation; I only added minor
> edits to make this workable with device memory TCP, and removed some
> hacky test code. This is a critical dependency of device memory TCP
> and thus I'm pulling it into this series to make it revewable and
> mergable.
>
> RFC v3 -> v1
> - Removed unusued mem_provider. (Yunsheng).
> - Replaced memory_provider & mp_priv with netdev_rx_queue (Jakub).
>
> ---
> include/net/page_pool/types.h | 12 ++++++++++
> net/core/page_pool.c | 43 +++++++++++++++++++++++++++++++----
> 2 files changed, 50 insertions(+), 5 deletions(-)
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index ac286ea8ce2d..0e9fa79a5ef1 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -51,6 +51,7 @@ struct pp_alloc_cache {
> * @dev: device, for DMA pre-mapping purposes
> * @netdev: netdev this pool will serve (leave as NULL if none or multiple)
> * @napi: NAPI which is the sole consumer of pages, otherwise NULL
> + * @queue: struct netdev_rx_queue this page_pool is being created for.
> * @dma_dir: DMA mapping direction
> * @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
> * @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
> @@ -63,6 +64,7 @@ struct page_pool_params {
> int nid;
> struct device *dev;
> struct napi_struct *napi;
> + struct netdev_rx_queue *queue;
> enum dma_data_direction dma_dir;
> unsigned int max_len;
> unsigned int offset;
> @@ -125,6 +127,13 @@ struct page_pool_stats {
> };
> #endif
>
> +struct memory_provider_ops {
> + int (*init)(struct page_pool *pool);
> + void (*destroy)(struct page_pool *pool);
> + struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
> + bool (*release_page)(struct page_pool *pool, struct page *page);
> +};
> +
> struct page_pool {
> struct page_pool_params_fast p;
>
> @@ -174,6 +183,9 @@ struct page_pool {
> */
> struct ptr_ring ring;
>
> + void *mp_priv;
> + const struct memory_provider_ops *mp_ops;
> +
> #ifdef CONFIG_PAGE_POOL_STATS
> /* recycle stats are per-cpu to avoid locking */
> struct page_pool_recycle_stats __percpu *recycle_stats;
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ca1b3b65c9b5..f5c84d2a4510 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -25,6 +25,8 @@
>
> #include "page_pool_priv.h"
>
> +static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);

We could add the existing page pool mechanisms as another 'provider',
but I assume this is coded like this for performance reasons (IOW skip
the expensive indirect call for the default case?)

> +
> #define DEFER_TIME (msecs_to_jiffies(1000))
> #define DEFER_WARN_INTERVAL (60 * HZ)
>
> @@ -174,6 +176,7 @@ static int page_pool_init(struct page_pool *pool,
> const struct page_pool_params *params)
> {
> unsigned int ring_qsize = 1024; /* Default */
> + int err;
>
> memcpy(&pool->p, &params->fast, sizeof(pool->p));
> memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
> @@ -234,10 +237,25 @@ static int page_pool_init(struct page_pool *pool,
> /* Driver calling page_pool_create() also call page_pool_destroy() */
> refcount_set(&pool->user_cnt, 1);
>
> + if (pool->mp_ops) {
> + err = pool->mp_ops->init(pool);
> + if (err) {
> + pr_warn("%s() mem-provider init failed %d\n",
> + __func__, err);
> + goto free_ptr_ring;
> + }
> +
> + static_branch_inc(&page_pool_mem_providers);
> + }
> +
> if (pool->p.flags & PP_FLAG_DMA_MAP)
> get_device(pool->p.dev);
>
> return 0;
> +
> +free_ptr_ring:
> + ptr_ring_cleanup(&pool->ring, NULL);
> + return err;
> }
>
> static void page_pool_uninit(struct page_pool *pool)
> @@ -519,7 +537,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
> return page;
>
> /* Slow-path: cache empty, do real allocation */
> - page = __page_pool_alloc_pages_slow(pool, gfp);
> + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)

Why do we need && pool->mp_ops? In the init function, we only bump
page_pool_mem_providers if the ops are there.

> + page = pool->mp_ops->alloc_pages(pool, gfp);
> + else
> + page = __page_pool_alloc_pages_slow(pool, gfp);
> return page;
> }
> EXPORT_SYMBOL(page_pool_alloc_pages);
> @@ -576,10 +597,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
> void page_pool_return_page(struct page_pool *pool, struct page *page)
> {
> int count;
> + bool put;
>
> - __page_pool_release_page_dma(pool, page);
> -
> - page_pool_clear_pp_info(page);
> + put = true;
> + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)

ditto

> + put = pool->mp_ops->release_page(pool, page);
> + else
> + __page_pool_release_page_dma(pool, page);
>
> /* This may be the last page returned, releasing the pool, so
> * it is not safe to reference pool afterwards.
> @@ -587,7 +611,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
> count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
> trace_page_pool_state_release(pool, page, count);
>
> - put_page(page);
> + if (put) {
> + page_pool_clear_pp_info(page);
> + put_page(page);
> + }
> /* An optimization would be to call __free_pages(page, pool->p.order)
> * knowing page is not part of page-cache (thus avoiding a
> * __page_cache_release() call).
> @@ -857,6 +884,12 @@ static void __page_pool_destroy(struct page_pool *pool)
>
> page_pool_unlist(pool);
> page_pool_uninit(pool);
> +
> + if (pool->mp_ops) {

Same here. Using a mix of pool->mp_ops and page_pool_mem_providers
will work, but since we always check the ptr on init, can't we simply
rely on page_pool_mem_providers for the rest of the code?

Thanks
/Ilias
> + pool->mp_ops->destroy(pool);
> + static_branch_dec(&page_pool_mem_providers);
> + }
> +
> kfree(pool);
> }
>
> --
> 2.43.0.472.g3155946c3a-goog
>

2023-12-12 12:25:55

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Thu, Dec 07, 2023 at 04:52:39PM -0800, Mina Almasry wrote:

> +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> +{
> + if (page_is_page_pool_iov(page))
> + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> +
> + DEBUG_NET_WARN_ON_ONCE(true);
> + return NULL;
> +}

We already asked not to do this, please do not allocate weird things
and call them 'struct page' when they are not. It undermines the
maintainability of the mm to have things mis-typed like
this. Introduce a new type for your thing so the compiler can check it
properly.

Jason

2023-12-12 13:07:52

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 08:25:35AM -0400, Jason Gunthorpe wrote:
> > +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> > +{
> > + if (page_is_page_pool_iov(page))
> > + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> > +
> > + DEBUG_NET_WARN_ON_ONCE(true);
> > + return NULL;
> > +}
>
> We already asked not to do this, please do not allocate weird things
> can call them 'struct page' when they are not. It undermines the
> maintainability of the mm to have things mis-typed like
> this. Introduce a new type for your thing so the compiler can check it
> properly.

Yes. Or even better avoid this mess entirely..

2023-12-12 14:27:23

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 4:25 AM Jason Gunthorpe <[email protected]> wrote:
>
> On Thu, Dec 07, 2023 at 04:52:39PM -0800, Mina Almasry wrote:
>
> > +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> > +{
> > + if (page_is_page_pool_iov(page))
> > + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> > +
> > + DEBUG_NET_WARN_ON_ONCE(true);
> > + return NULL;
> > +}
>
> We already asked not to do this, please do not allocate weird things
> can call them 'struct page' when they are not. It undermines the
> maintainability of the mm to have things mis-typed like
> this. Introduce a new type for your thing so the compiler can check it
> properly.
>

There is a new type introduced, it's the page_pool_iov. We set the LSB
on page_pool_iov* and cast it to page* only to avoid the churn of
renaming page* to page_pool_iov* in the page_pool and all the net
drivers using it. Is that not a reasonable compromise in your opinion?
Since the LSB is set on the resulting page pointers, they are not
actually usable as pages, and are never passed to mm APIs per your
requirement.
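
For reference, the encode side is just the mirror of the decode helper
quoted above (the exact helper name in the series may differ):

static inline struct page *page_pool_iov_to_page(struct page_pool_iov *ppiov)
{
        return (struct page *)((unsigned long)ppiov | PP_IOV);
}

/* With the PP_IOV bit set, the result is never a dereferenceable
 * struct page pointer, so it cannot accidentally reach mm APIs.
 */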

--
Thanks,
Mina

2023-12-12 14:29:35

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Tue, Dec 12, 2023 at 3:17 AM Yunsheng Lin <[email protected]> wrote:
>
> On 2023/12/12 2:14, Mina Almasry wrote:
> > On Mon, Dec 11, 2023 at 3:51 AM Yunsheng Lin <[email protected]> wrote:
> >>
> >> On 2023/12/11 12:04, Mina Almasry wrote:
> >>> On Sun, Dec 10, 2023 at 6:26 PM Mina Almasry <[email protected]> wrote:
> >>>>
> >>>> On Sun, Dec 10, 2023 at 6:04 PM Yunsheng Lin <[email protected]> wrote:
> >>>>>
> >>>>> On 2023/12/9 0:05, Mina Almasry wrote:
> >>>>>> On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> As mentioned before, it seems we need to have the above checking every
> >>>>>>> time we need to do some per-page handling in page_pool core, is there
> >>>>>>> a plan in your mind how to remove those kind of checking in the future?
> >>>>>>>
> >>>>>>
> >>>>>> I see 2 ways to remove the checking, both infeasible:
> >>>>>>
> >>>>>> 1. Allocate a wrapper struct that pulls out all the fields the page pool needs:
> >>>>>>
> >>>>>> struct netmem {
> >>>>>> /* common fields */
> >>>>>> refcount_t refcount;
> >>>>>> bool is_pfmemalloc;
> >>>>>> int nid;
> >>>>>> ...
> >>>>>> union {
> >>>>>> struct dmabuf_genpool_chunk_owner *owner;
> >>>>>> struct page * page;
> >>>>>> };
> >>>>>> };
> >>>>>>
> >>>>>> The page pool can then not care if the underlying memory is iov or
> >>>>>> page. However this introduces significant memory bloat as this struct
> >>>>>> needs to be allocated for each page or ppiov, which I imagine is not
> >>>>>> acceptable for the upside of removing a few static_branch'd if
> >>>>>> statements with no performance cost.
> >>>>>>
> >>>>>> 2. Create a unified struct for page and dmabuf memory, which the mm
> >>>>>> folks have repeatedly nacked, and I imagine will repeatedly nack in
> >>>>>> the future.
> >>>>>>
> >>>>>> So I imagine the special handling of ppiov in some form is critical
> >>>>>> and the checking may not be removable.
> >>>>>
> >>>>> If the above is true, perhaps devmem is not really supposed to be intergated
> >>>>> into page_pool.
> >>>>>
> >>>>> Adding a checking for every per-page handling in page_pool core is just too
> >>>>> hacky to be really considerred a longterm solution.
> >>>>>
> >>>>
> >>>> The only other option is to implement another page_pool for ppiov and
> >>>> have the driver create page_pool or ppiov_pool depending on the state
> >>>> of the netdev_rx_queue (or some helper in the net stack to do that for
> >>>> the driver). This introduces some code duplication. The ppiov_pool &
> >>>> page_pool would look similar in implementation.
> >>
> >> I think there is a design pattern already to deal with this kind of problem,
> >> refactoring common code used by both page_pool and ppiov into a library to
> >> aovid code duplication if most of them have similar implementation.
> >>
> >
> > Code can be refactored if it's identical, not if it is similar. I
>
> Similarity indicates an opportunity to refactor out the common
> code, like the page_frag case below:
> https://patchwork.kernel.org/project/netdevbpf/cover/[email protected]/
>
> But until we do a proof-of-concept implementation, it is hard to tell if
> it is feasible or not.
>
> > suspect the page_pools will be only similar, and if you're not willing
> > to take devmem handling into the page pool then refactoring page_pool
> > code into helpers that do devmem handling may also not be an option.
> >
> >>>>
> >>>> But this was all discussed in detail in RFC v2 and the last response I
> >>>> heard from Jesper was in favor if this approach, if I understand
> >>>> correctly:
> >>>>
> >>>> https://lore.kernel.org/netdev/[email protected]/
> >>>>
> >>>> Would love to have the maintainer weigh in here.
> >>>>
> >>>
> >>> I should note we may be able to remove some of the checking, but maybe not all.
> >>>
> >>> - Checks that disable page fragging for ppiov can be removed once
> >>> ppiov has frag support (in this series or follow up).
> >>>
> >>> - If we use page->pp_frag_count (or page->pp_ref_count) for
> >>> refcounting ppiov, we can remove the if checking in the refcounting.
> >>>
> >
> > I'm not sure this is actually possible in the short term. The
> > page_pool uses both page->_refcount and page->pp_frag_count for
> > refcounting, and I will not be able to remove the special handling
> > around page->_refcount as i'm not allowed to call page_ref_*() APIs on
> > a non-struct page.
>
> the page_ref_*() API may be avoided using the below patch:
> https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
>

Even after the patch above, you're still calling page_ref_count() in
the page_pool to check for recycling, so after that patch you're still
using page->_refcount.

> But I am not sure how to do that for tx part if devmem for tx is not
> intergating into page_pool, that is why I suggest having a tx implementation
> for the next version, so that we can have a whole picture of devmem.
>

I strongly prefer to keep the TX implementation in a separate series.
This series is complicated to implement and review as it is, and is
hitting the 15 patch limit anyway.

> >
> >>> - We may be able to store the dma_addr of the ppiov in page->dma_addr,
> >>> but I'm unsure if that actually works, because the dma_buf dmaddr is
> >>> dma_addr_t (u32 or u64), but page->dma_addr is unsigned long (4 bytes
> >>> I think). But if it works for pages I may be able to make it work for
> >>> ppiov as well.
> >>>
> >>> - Checks that obtain the page->pp can work with ppiov if we align the
> >>> offset of page->pp and ppiov->pp.
> >>>
> >>> - Checks around page->pp_magic can be removed if we also have offset
> >>> aligned ppiov->pp_magic.
> >>>
> >>> Sadly I don't see us removing the checking for these other cases:
> >>>
> >>> - page_is_pfmemalloc(): I'm not allowed to pass a non-struct page into
> >>> that helper.
> >>
> >> We can do similar trick like above as bit 1 of page->pp_magic is used to
> >> indicate that if it is a pfmemalloc page.
> >>
> >
> > Likely yes.
> >
> >>>
> >>> - page_to_nid(): I'm not allowed to pass a non-struct page into that helper.
> >>
> >> Yes, this one need special case.
> >>
> >>>
> >>> - page_pool_free_va(): ppiov have no va.
> >>
> >> Doesn't the skb_frags_readable() checking will protect the page_pool_free_va()
> >> from being called on devmem?
> >>
> >
> > This function seems to be only called from veth which doesn't support
> > devmem. I can remove the handling there.
> >
> >>>
> >>> - page_pool_sync_for_dev/page_pool_dma_map: ppiov backed by dma-buf
> >>> fundamentally can't get mapped again.
> >>
> >> Can we just fail the page_pool creation with PP_FLAG_DMA_MAP and
> >> DMA_ATTR_SKIP_CPU_SYNC flags for devmem provider?
> >>
> >
> > Jakub says PP_FLAG_DMA_MAP must be enabled for devmem, such that the
> > page_pool handles the dma mapping of the devmem and the driver doesn't
> > use it on its own.
>
> I am not sure what benefit enabling DMA_MAP brings for devmem; since devmem
> seems to call dma_buf_map_attachment() in netdev_bind_dmabuf(), it does not
> really need PP_FLAG_DMA_MAP enabled to get the dma addr for the devmem chunk.

--
Thanks,
Mina

2023-12-12 14:40:04

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 06:26:51AM -0800, Mina Almasry wrote:
> On Tue, Dec 12, 2023 at 4:25 AM Jason Gunthorpe <[email protected]> wrote:
> >
> > On Thu, Dec 07, 2023 at 04:52:39PM -0800, Mina Almasry wrote:
> >
> > > +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> > > +{
> > > + if (page_is_page_pool_iov(page))
> > > + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> > > +
> > > + DEBUG_NET_WARN_ON_ONCE(true);
> > > + return NULL;
> > > +}
> >
> > We already asked not to do this, please do not allocate weird things
> > can call them 'struct page' when they are not. It undermines the
> > maintainability of the mm to have things mis-typed like
> > this. Introduce a new type for your thing so the compiler can check it
> > properly.
> >
>
> There is a new type introduced, it's the page_pool_iov. We set the LSB
> on page_pool_iov* and cast it to page* only to avoid the churn of
> renaming page* to page_pool_iov* in the page_pool and all the net
> drivers using it. Is that not a reasonable compromise in your opinion?
> Since the LSB is set on the resulting page pointers, they are not
> actually usuable as pages, and are never passed to mm APIs per your
> requirement.

There were two asks, the one you did was to never pass this non-struct
page memory to the mm, which is great.

The other was to not mistype things, and don't type something as
struct page when it is, in fact, not.

I fear what you've done is make it so only one driver calls these
special functions and left the other drivers passing the struct page
directly to the mm and sort of obfuscating why it is OK based on this
netdev knowledge of not enabling/using the static branch in the other
cases.

Perhaps you can simply avoid this by arranging for this driver to also
exclusively use some special type to indicate the dual nature of the
pointer, and leave the other drivers using the struct page version.

Jason

2023-12-12 14:48:01

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 02/16] net: page_pool: create hooks for custom page providers

On Tue, Dec 12, 2023 at 12:07 AM Ilias Apalodimas
<[email protected]> wrote:
>
> Hi Mina,
>
> Apologies for not participating in the party earlier.
>

No worries, thanks for looking.

> On Fri, 8 Dec 2023 at 02:52, Mina Almasry <[email protected]> wrote:
> >
> > From: Jakub Kicinski <[email protected]>
> >
> > The page providers which try to reuse the same pages will
> > need to hold onto the ref, even if page gets released from
> > the pool - as in releasing the page from the pp just transfers
> > the "ownership" reference from pp to the provider, and provider
> > will wait for other references to be gone before feeding this
> > page back into the pool.
> >
> > Signed-off-by: Jakub Kicinski <[email protected]>
> > Signed-off-by: Mina Almasry <[email protected]>
> >
> > ---
> >
> > This is implemented by Jakub in his RFC:
> > https://lore.kernel.org/netdev/[email protected]/T/
> >
> > I take no credit for the idea or implementation; I only added minor
> > edits to make this workable with device memory TCP, and removed some
> > hacky test code. This is a critical dependency of device memory TCP
> > and thus I'm pulling it into this series to make it revewable and
> > mergable.
> >
> > RFC v3 -> v1
> > - Removed unusued mem_provider. (Yunsheng).
> > - Replaced memory_provider & mp_priv with netdev_rx_queue (Jakub).
> >
> > ---
> > include/net/page_pool/types.h | 12 ++++++++++
> > net/core/page_pool.c | 43 +++++++++++++++++++++++++++++++----
> > 2 files changed, 50 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> > index ac286ea8ce2d..0e9fa79a5ef1 100644
> > --- a/include/net/page_pool/types.h
> > +++ b/include/net/page_pool/types.h
> > @@ -51,6 +51,7 @@ struct pp_alloc_cache {
> > * @dev: device, for DMA pre-mapping purposes
> > * @netdev: netdev this pool will serve (leave as NULL if none or multiple)
> > * @napi: NAPI which is the sole consumer of pages, otherwise NULL
> > + * @queue: struct netdev_rx_queue this page_pool is being created for.
> > * @dma_dir: DMA mapping direction
> > * @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
> > * @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
> > @@ -63,6 +64,7 @@ struct page_pool_params {
> > int nid;
> > struct device *dev;
> > struct napi_struct *napi;
> > + struct netdev_rx_queue *queue;
> > enum dma_data_direction dma_dir;
> > unsigned int max_len;
> > unsigned int offset;
> > @@ -125,6 +127,13 @@ struct page_pool_stats {
> > };
> > #endif
> >
> > +struct memory_provider_ops {
> > + int (*init)(struct page_pool *pool);
> > + void (*destroy)(struct page_pool *pool);
> > + struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
> > + bool (*release_page)(struct page_pool *pool, struct page *page);
> > +};
> > +
> > struct page_pool {
> > struct page_pool_params_fast p;
> >
> > @@ -174,6 +183,9 @@ struct page_pool {
> > */
> > struct ptr_ring ring;
> >
> > + void *mp_priv;
> > + const struct memory_provider_ops *mp_ops;
> > +
> > #ifdef CONFIG_PAGE_POOL_STATS
> > /* recycle stats are per-cpu to avoid locking */
> > struct page_pool_recycle_stats __percpu *recycle_stats;
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index ca1b3b65c9b5..f5c84d2a4510 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -25,6 +25,8 @@
> >
> > #include "page_pool_priv.h"
> >
> > +static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
>
> We could add the existing page pool mechanisms as another 'provider',
> but I assume this is coded like this for performance reasons (IOW skip
> the expensive ptr call for the default case?)
>

Correct, it's done like this for performance reasons.

> > +
> > #define DEFER_TIME (msecs_to_jiffies(1000))
> > #define DEFER_WARN_INTERVAL (60 * HZ)
> >
> > @@ -174,6 +176,7 @@ static int page_pool_init(struct page_pool *pool,
> > const struct page_pool_params *params)
> > {
> > unsigned int ring_qsize = 1024; /* Default */
> > + int err;
> >
> > memcpy(&pool->p, &params->fast, sizeof(pool->p));
> > memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
> > @@ -234,10 +237,25 @@ static int page_pool_init(struct page_pool *pool,
> > /* Driver calling page_pool_create() also call page_pool_destroy() */
> > refcount_set(&pool->user_cnt, 1);
> >
> > + if (pool->mp_ops) {
> > + err = pool->mp_ops->init(pool);
> > + if (err) {
> > + pr_warn("%s() mem-provider init failed %d\n",
> > + __func__, err);
> > + goto free_ptr_ring;
> > + }
> > +
> > + static_branch_inc(&page_pool_mem_providers);
> > + }
> > +
> > if (pool->p.flags & PP_FLAG_DMA_MAP)
> > get_device(pool->p.dev);
> >
> > return 0;
> > +
> > +free_ptr_ring:
> > + ptr_ring_cleanup(&pool->ring, NULL);
> > + return err;
> > }
> >
> > static void page_pool_uninit(struct page_pool *pool)
> > @@ -519,7 +537,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
> > return page;
> >
> > /* Slow-path: cache empty, do real allocation */
> > - page = __page_pool_alloc_pages_slow(pool, gfp);
> > + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
>
> Why do we need && pool->mp_ops? On the init function, we only bump
> page_pool_mem_providers if the ops are there
>

Note that page_pool_mem_providers is a global static key (not part of
the page_pool struct), so if you have 2 page_pools on the system, one
using devmem and one not, we need to check pool->mp_ops to make sure
this particular page_pool is using a memory provider.

> > + page = pool->mp_ops->alloc_pages(pool, gfp);
> > + else
> > + page = __page_pool_alloc_pages_slow(pool, gfp);
> > return page;
> > }
> > EXPORT_SYMBOL(page_pool_alloc_pages);
> > @@ -576,10 +597,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
> > void page_pool_return_page(struct page_pool *pool, struct page *page)
> > {
> > int count;
> > + bool put;
> >
> > - __page_pool_release_page_dma(pool, page);
> > -
> > - page_pool_clear_pp_info(page);
> > + put = true;
> > + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
>
> ditto
>
> > + put = pool->mp_ops->release_page(pool, page);
> > + else
> > + __page_pool_release_page_dma(pool, page);
> >
> > /* This may be the last page returned, releasing the pool, so
> > * it is not safe to reference pool afterwards.
> > @@ -587,7 +611,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
> > count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
> > trace_page_pool_state_release(pool, page, count);
> >
> > - put_page(page);
> > + if (put) {
> > + page_pool_clear_pp_info(page);
> > + put_page(page);
> > + }
> > /* An optimization would be to call __free_pages(page, pool->p.order)
> > * knowing page is not part of page-cache (thus avoiding a
> > * __page_cache_release() call).
> > @@ -857,6 +884,12 @@ static void __page_pool_destroy(struct page_pool *pool)
> >
> > page_pool_unlist(pool);
> > page_pool_uninit(pool);
> > +
> > + if (pool->mp_ops) {
>
> Same here. Using a mix of pool->mp_ops and page_pool_mem_providers
> will work, but since we always check the ptr on init, can't we simply
> rely on page_pool_mem_providers for the rest of the code?
>
> Thanks
> /Ilias
> > + pool->mp_ops->destroy(pool);
> > + static_branch_dec(&page_pool_mem_providers);
> > + }
> > +
> > kfree(pool);
> > }
> >
> > --
> > 2.43.0.472.g3155946c3a-goog
> >



--
Thanks,
Mina

2023-12-12 14:58:56

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 6:39 AM Jason Gunthorpe <[email protected]> wrote:
>
> On Tue, Dec 12, 2023 at 06:26:51AM -0800, Mina Almasry wrote:
> > On Tue, Dec 12, 2023 at 4:25 AM Jason Gunthorpe <[email protected]> wrote:
> > >
> > > On Thu, Dec 07, 2023 at 04:52:39PM -0800, Mina Almasry wrote:
> > >
> > > > +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> > > > +{
> > > > + if (page_is_page_pool_iov(page))
> > > > + return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
> > > > +
> > > > + DEBUG_NET_WARN_ON_ONCE(true);
> > > > + return NULL;
> > > > +}
> > >
> > > We already asked not to do this, please do not allocate weird things
> > > can call them 'struct page' when they are not. It undermines the
> > > maintainability of the mm to have things mis-typed like
> > > this. Introduce a new type for your thing so the compiler can check it
> > > properly.
> > >
> >
> > There is a new type introduced, it's the page_pool_iov. We set the LSB
> > on page_pool_iov* and cast it to page* only to avoid the churn of
> > renaming page* to page_pool_iov* in the page_pool and all the net
> > drivers using it. Is that not a reasonable compromise in your opinion?
> > Since the LSB is set on the resulting page pointers, they are not
> > actually usuable as pages, and are never passed to mm APIs per your
> > requirement.
>
> There were two asks, the one you did was to never pass this non-struct
> page memory to the mm, which is great.
>
> The other was to not mistype things, and don't type something as
> struct page when it is, in fact, not.
>
> I fear what you've done is make it so only one driver calls these
> special functions and left the other drivers passing the struct page
> directly to the mm and sort of obfuscating why it is OK based on this
> netdev knowledge of not enabling/using the static branch in the other
> cases.
>

Jason, we set the LSB on page_pool_iov pointers before casting them to
struct page pointers. The resulting pointers are not usable as page
pointers at all.

In order to use the resulting pointers, the driver _must_ use the
special functions that first clear the LSB. It is impossible for the
driver to 'accidentally' use the resulting page pointers with the LSB
set - the kernel would just crash trying to dereference such a
pointer.

The way it works currently is that drivers that support devmem TCP
will declare that support to the net stack, and use the special
functions that clear the LSB and cast the struct back to
page_pool_iov. The drivers that don't support devmem TCP will not
declare support and will get pages allocated from the mm stack from
the page_pool and use them as pages normally.
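
For example, on its rx buffer-posting path a devmem-aware driver ends
up doing something roughly like this (illustrative, not lifted from
GVE):

struct page *page;
dma_addr_t dma;

page = page_pool_alloc_pages(pool, GFP_ATOMIC);
if (!page)
        return -ENOMEM;

if (page_is_page_pool_iov(page))
        dma = page_pool_iov_get_dma_address(page_to_page_pool_iov(page));
else
        dma = page_pool_get_dma_addr(page);

/* post 'dma' to the rx descriptor ring as usual */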

> Perhaps you can simply avoid this by arranging for this driver to also
> exclusively use some special type to indicate the dual nature of the
> pointer and leave the other drivers as using the struct page version.
>

This is certainly possible, but it requires us to rename all the page
pointers in the page_pool to the new type, and requires the driver
adding devmem TCP support to rename all the page* pointer instances to
the new type. It's possible but it introduces lots of code churn. Is
the LSB + cast not a reasonable compromise here? I feel like the trick
of setting the least significant bit on a pointer to indicate it's
something else has a fair amount of precedent in the kernel.

--
Thanks,
Mina

2023-12-12 15:08:57

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 06:58:17AM -0800, Mina Almasry wrote:

> Jason, we set the LSB on page_pool_iov pointers before casting it to
> struct page pointers. The resulting pointers are not useable as page
> pointers at all.

I understand that, the second ask is about maintainability of the mm
by using correct types.

> > Perhaps you can simply avoid this by arranging for this driver to also
> > exclusively use some special type to indicate the dual nature of the
> > pointer and leave the other drivers as using the struct page version.
>
> This is certainly possible, but it requires us to rename all the page
> pointers in the page_pool to the new type, and requires the driver
> adding devmem TCP support to rename all the page* pointer instances to
> the new type. It's possible but it introduces lots of code churn. Is
> the LSB + cast not a reasonable compromise here? I feel like the trick
> of setting the least significant bit on a pointer to indicate it's
> something else has a fair amount of precedent in the kernel.

Linus himself has complained about exactly this before, and written a cleanup:

https://lore.kernel.org/linux-mm/[email protected]/

If you mangle a pointer *so it is no longer a pointer* then give it a
proper opaque type so the compiler can check everything statically and
require that the necessary converters are called in all cases.

You call it churn, I call it future maintainability. :(

No objection to using the LSB, just properly type an LSB-mangled
pointer so everyone knows what is going on and don't call it MM's
struct page *.

I would say this is important here because it is a large driver-facing
API surface.
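
Something like the below is the kind of thing I mean, i.e. an opaque
handle instead of a lying 'struct page *' (names are only illustrative):

typedef struct {
        unsigned long val;      /* page or page_pool_iov pointer, LSB-tagged */
} netmem_ref;

static inline bool netmem_is_iov(netmem_ref ref)
{
        return ref.val & 1UL;
}

static inline struct page *netmem_to_page(netmem_ref ref)
{
        return netmem_is_iov(ref) ? NULL : (struct page *)ref.val;
}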

Jason

2023-12-12 19:08:35

by Simon Horman

[permalink] [raw]
Subject: Re: [net-next v1 14/16] net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags

On Thu, Dec 07, 2023 at 04:52:45PM -0800, Mina Almasry wrote:
> Add an interface for the user to notify the kernel that it is done
> reading the devmem dmabuf frags returned as cmsg. The kernel will
> drop the reference on the frags to make them available for re-use.
>
> Signed-off-by: Willem de Bruijn <[email protected]>
> Signed-off-by: Kaiyuan Zhang <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>

...

> diff --git a/net/core/sock.c b/net/core/sock.c
> index fef349dd72fa..521bdc4ff260 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1051,6 +1051,41 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
> return 0;
> }
>
> +static noinline_for_stack int
> +sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen)
> +{
> + struct dmabuf_token tokens[128];

Hi Mina,

I am guessing it is mostly due to the line above,
but on x86 32-bit builds I see:

warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]

One way to shrink the frame might be to pull the tokens in over several
smaller on-stack batches; a rough sketch is at the end of this mail.

> + unsigned int num_tokens, i, j;
> + int ret;
> +
> + if (sk->sk_type != SOCK_STREAM || sk->sk_protocol != IPPROTO_TCP)
> + return -EBADF;
> +
> + if (optlen % sizeof(struct dmabuf_token) || optlen > sizeof(tokens))
> + return -EINVAL;
> +
> + num_tokens = optlen / sizeof(struct dmabuf_token);
> + if (copy_from_sockptr(tokens, optval, optlen))
> + return -EFAULT;
> +
> + ret = 0;
> + for (i = 0; i < num_tokens; i++) {
> + for (j = 0; j < tokens[i].token_count; j++) {
> + struct page *page = xa_erase(&sk->sk_user_pages,
> + tokens[i].token_start + j);
> +
> + if (page) {
> + if (WARN_ON_ONCE(!napi_pp_put_page(page,
> + false)))
> + page_pool_page_put_many(page, 1);
> + ret++;
> + }
> + }
> + }
> +
> + return ret;
> +}
> +
> void sockopt_lock_sock(struct sock *sk)
> {
> /* When current->bpf_ctx is set, the setsockopt is called from

...
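
The batching I had in mind, very roughly (untested;
copy_from_sockptr_offset() lets you pull the tokens in chunks):

struct dmabuf_token tokens[16]; /* small enough to keep the frame < 1 KiB */
unsigned int copied = 0;

while (copied < optlen) {
        unsigned int chunk = min_t(unsigned int, optlen - copied,
                                   sizeof(tokens));

        if (copy_from_sockptr_offset(tokens, optval, copied, chunk))
                return -EFAULT;

        /* process chunk / sizeof(*tokens) tokens as the loop above does */
        copied += chunk;
}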

2023-12-12 19:14:34

by Simon Horman

[permalink] [raw]
Subject: Re: [net-next v1 15/16] net: add devmem TCP documentation

On Thu, Dec 07, 2023 at 04:52:46PM -0800, Mina Almasry wrote:
> Signed-off-by: Mina Almasry <[email protected]>
> ---
> Documentation/networking/devmem.rst | 270 ++++++++++++++++++++++++++++
> 1 file changed, 270 insertions(+)
> create mode 100644 Documentation/networking/devmem.rst
>
> diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
> new file mode 100644
> index 000000000000..ed0d9c88b708
> --- /dev/null
> +++ b/Documentation/networking/devmem.rst
> @@ -0,0 +1,270 @@

Hi Mina,

Please consider adding an SPDX header here.

And please consider adding devmem to index.rst,
as make htmldocs currently warns:

.../devmem.rst: WARNING: document isn't included in any toctree

....

2023-12-13 01:10:04

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Tue, Dec 12, 2023 at 7:08 AM Jason Gunthorpe <[email protected]> wrote:
>
> On Tue, Dec 12, 2023 at 06:58:17AM -0800, Mina Almasry wrote:
>
> > Jason, we set the LSB on page_pool_iov pointers before casting it to
> > struct page pointers. The resulting pointers are not useable as page
> > pointers at all.
>
> I understand that, the second ask is about maintainability of the mm
> by using correct types.
>
> > > Perhaps you can simply avoid this by arranging for this driver to also
> > > exclusively use some special type to indicate the dual nature of the
> > > pointer and leave the other drivers as using the struct page version.
> >
> > This is certainly possible, but it requires us to rename all the page
> > pointers in the page_pool to the new type, and requires the driver
> > adding devmem TCP support to rename all the page* pointer instances to
> > the new type. It's possible but it introduces lots of code churn. Is
> > the LSB + cast not a reasonable compromise here? I feel like the trick
> > of setting the least significant bit on a pointer to indicate it's
> > something else has a fair amount of precedent in the kernel.
>
> Linus himself has complained about exactly this before, and written a cleanup:
>
> https://lore.kernel.org/linux-mm/[email protected]/
>
> If you mangle a pointer *so it is no longer a pointer* then give it a
> proper opaque type so the compiler can check everything statically and
> require that the necessary converters are called in all cases.
>
> You call it churn, I call it future maintainability. :(
>
> No objection to using the LSB, just properly type a LSB mangled
> pointer so everyone knows what is going on and don't call it MM's
> struct page *.
>
> I would say this is important here because it is a large driver facing
> API surface.
>

OK, I imagine this is not that hard to implement - it's really whether
the change is acceptable to reviewers.

I figure I can start by implementing a no-op abstraction to page*:

typedef struct page netmem_t

and replace the page* in the following places with netmem_t*:

1. page_pool API (not internals)
2. drivers using the page_pool.
3. skb_frag_t.

I think that change needs to be a separate series by itself. Then the
devmem patches would, on top of that, change netmem_t such that it can
be a union between struct page and page_pool_iov, and add the special
handling of page_pool_iov. Does this sound reasonable?


--
Thanks,
Mina
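
A rough sketch of what that first, no-op stage could look like; the converted
allocation signature at the end is only an example, not an actual patch:

#include <linux/gfp.h>

struct page;
struct page_pool;

/* Stage 1: a pure alias, so the page_pool API, drivers and skb_frag_t can
 * be converted with no functional change. Later it can become a distinct
 * type that is either a page or a page_pool_iov.
 */
typedef struct page netmem_t;

/* e.g. a page_pool allocation entry point after the rename: */
netmem_t *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);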

2023-12-13 02:19:23

by David Ahern

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/12/23 6:09 PM, Mina Almasry wrote:
> OK, I imagine this is not that hard to implement - it's really whether
> the change is acceptable to reviewers.
>
> I figure I can start by implementing a no-op abstraction to page*:
>
> typedef struct page netmem_t
>
> and replace the page* in the following places with netmem_t*:
>
> 1. page_pool API (not internals)
> 2. drivers using the page_pool.
> 3. skb_frag_t.
>

Accessors to the skb_frag_t fields are now consolidated in
include/linux/skbuff.h (the one IB driver was fixed in September by
4ececeb83986), so changing skb_frag_t from a bio_vec to something like:

typedef struct skb_frag {
        void *addr;
        unsigned int length;
        unsigned int offset;
} skb_frag_t;

is trivial. From there, addr can default to `struct page *`. If the LSB is
set, strip it and return `struct page_pool_iov *` or `struct buffer_pool *`.
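
To flesh that accessor idea out a little, a sketch of what it might look
like; the struct and helper names are placeholders, and the `struct
buffer_pool` variant mentioned above is left out:

#include <linux/types.h>

struct page;
struct page_pool_iov;

typedef struct skb_frag_sketch {
        void *addr;     /* struct page *, or page_pool_iov * with the LSB set */
        unsigned int length;
        unsigned int offset;
} skb_frag_sketch_t;

#define SKB_FRAG_IOV_BIT        1UL

static inline bool skb_frag_sketch_is_iov(const skb_frag_sketch_t *frag)
{
        return (unsigned long)frag->addr & SKB_FRAG_IOV_BIT;
}

static inline struct page *skb_frag_sketch_page(const skb_frag_sketch_t *frag)
{
        return skb_frag_sketch_is_iov(frag) ? NULL : (struct page *)frag->addr;
}

static inline struct page_pool_iov *
skb_frag_sketch_iov(const skb_frag_sketch_t *frag)
{
        if (!skb_frag_sketch_is_iov(frag))
                return NULL;
        return (struct page_pool_iov *)((unsigned long)frag->addr &
                                        ~SKB_FRAG_IOV_BIT);
}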

2023-12-13 07:49:34

by Yinjun Zhang

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Thu, 7 Dec 2023 16:52:39 -0800, Mina Almasry wrote:
<...>
> +static int mp_dmabuf_devmem_init(struct page_pool *pool)
> +{
> + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> +
> + if (!binding)
> + return -EINVAL;
> +
> + if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> + return -EOPNOTSUPP;
> +
> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> + return -EOPNOTSUPP;
> +
> + netdev_dmabuf_binding_get(binding);
> + return 0;
> +}
> +
> +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool,
> + gfp_t gfp)
> +{
> + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> + struct page_pool_iov *ppiov;
> +
> + ppiov = netdev_alloc_dmabuf(binding);

Since it only supports one-page allocation, we'd better add a check in
`ops->init()` that `pool->p.order` must be 0.

> + if (!ppiov)
> + return NULL;
> +
> + ppiov->pp = pool;
> + pool->pages_state_hold_cnt++;
> + trace_page_pool_state_hold(pool, (struct page *)ppiov,
> + pool->pages_state_hold_cnt);
> + return (struct page *)((unsigned long)ppiov | PP_IOV);
> +}
<...>
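
Concretely, the suggested check would land in the init callback along these
lines (sketched against the code quoted above; the exact error code is a
guess):

static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
        struct netdev_dmabuf_binding *binding = pool->mp_priv;

        if (!binding)
                return -EINVAL;

        /* the provider only hands out single, order-0 buffers */
        if (pool->p.order != 0)
                return -E2BIG;

        if (!(pool->p.flags & PP_FLAG_DMA_MAP))
                return -EOPNOTSUPP;

        if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
                return -EOPNOTSUPP;

        netdev_dmabuf_binding_get(binding);
        return 0;
}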

2023-12-13 07:53:14

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 09/16] page_pool: device memory support

On Sun, Dec 10, 2023 at 8:04 PM Mina Almasry <[email protected]> wrote:
>
> On Sun, Dec 10, 2023 at 6:26 PM Mina Almasry <[email protected]> wrote:
> >
> > On Sun, Dec 10, 2023 at 6:04 PM Yunsheng Lin <[email protected]> wrote:
> > >
> > > On 2023/12/9 0:05, Mina Almasry wrote:
> > > > On Fri, Dec 8, 2023 at 1:30 AM Yunsheng Lin <[email protected]> wrote:
> > > >>
> > > >>
> > > >> As mentioned before, it seems we need to have the above check every
> > > >> time we do some per-page handling in page_pool core; is there
> > > >> a plan in your mind for how to remove that kind of checking in the future?
> > > >>
> > > >
> > > > I see 2 ways to remove the checking, both infeasible:
> > > >
> > > > 1. Allocate a wrapper struct that pulls out all the fields the page pool needs:
> > > >
> > > > struct netmem {
> > > > /* common fields */
> > > > refcount_t refcount;
> > > > bool is_pfmemalloc;
> > > > int nid;
> > > > ...
> > > > union {
> > > > struct dmabuf_genpool_chunk_owner *owner;
> > > > struct page * page;
> > > > };
> > > > };
> > > >
> > > > The page pool can then not care if the underlying memory is iov or
> > > > page. However this introduces significant memory bloat as this struct
> > > > needs to be allocated for each page or ppiov, which I imagine is not
> > > > acceptable for the upside of removing a few static_branch'd if
> > > > statements with no performance cost.
> > > >
> > > > 2. Create a unified struct for page and dmabuf memory, which the mm
> > > > folks have repeatedly nacked, and I imagine will repeatedly nack in
> > > > the future.
> > > >
> > > > So I imagine the special handling of ppiov in some form is critical
> > > > and the checking may not be removable.
> > >
> > > If the above is true, perhaps devmem is not really supposed to be integrated
> > > into page_pool.
> > >
> > > Adding a check for every per-page handling path in page_pool core is just too
> > > hacky to really be considered a long-term solution.
> > >
> >
> > The only other option is to implement another page_pool for ppiov and
> > have the driver create page_pool or ppiov_pool depending on the state
> > of the netdev_rx_queue (or some helper in the net stack to do that for
> > the driver). This introduces some code duplication. The ppiov_pool &
> > page_pool would look similar in implementation.
> >
> > But this was all discussed in detail in RFC v2 and the last response I
> > heard from Jesper was in favor of this approach, if I understand
> > correctly:
> >
> > https://lore.kernel.org/netdev/[email protected]/
> >
> > Would love to have the maintainer weigh in here.
> >
>
> I should note we may be able to remove some of the checking, but maybe not all.
>
> - Checks that disable page fragging for ppiov can be removed once
> ppiov has frag support (in this series or follow up).
>
> - If we use page->pp_frag_count (or page->pp_ref_count) for
> refcounting ppiov, we can remove the if checking in the refcounting.
>
> - We may be able to store the dma_addr of the ppiov in page->dma_addr,
> but I'm unsure if that actually works, because the dma_buf dma_addr is
> dma_addr_t (u32 or u64), but page->dma_addr is unsigned long (4 bytes
> I think). But if it works for pages I may be able to make it work for
> ppiov as well.
>
> - Checks that obtain the page->pp can work with ppiov if we align the
> offset of page->pp and ppiov->pp.
>
> - Checks around page->pp_magic can be removed if we also have offset
> aligned ppiov->pp_magic.
>
> Sadly I don't see us removing the checking for these other cases:
>
> - page_is_pfmemalloc(): I'm not allowed to pass a non-struct page into
> that helper.
>
> - page_to_nid(): I'm not allowed to pass a non-struct page into that helper.
>
> - page_pool_free_va(): ppiov have no va.
>
> - page_pool_sync_for_dev/page_pool_dma_map: ppiov backed by dma-buf
> fundamentally can't get mapped again.
>
> Are the removal (or future removal) of these checks enough to resolve this?
>

I took a deeper look here, and with some effort I'm able to remove
almost all the custom checks for ppiov. The only remaining checks for
devmem are the checks around these mm calls:

page_is_pfmemalloc()
page_to_nid()
page_ref_count()
compound_head()

page_is_pfmemalloc() checks can potentially be removed by using a bit
in page->pp_magic to indicate pfmemalloc().

The other 3, I'm not sure I can remove. They rely on the page flags or
other fields not specific to page_pool pages. The next version should
come with the most minimal amount of devmem checks for the page_pool.

> > > It is somewhat ironic that devmem is using static_branch to alleviate the
> > > performance impact for normal memory at the possible cost of performance
> > > degradation for devmem; does it not defeat some of the purpose of integrating
> > > devmem into page_pool?
> > >
> >
> > I don't see the issue. The static branch sets the non-ppiov path as
> > default if no memory providers are in use, and flips it when they are,
> > making the default branch prediction ideal in both cases.
> >
> > > >
> > > >> Even though a static_branch check is added in page_is_page_pool_iov(), it
> > > >> does not make much sense that a core has two different 'struct' for its
> > > >> most basic data.
> > > >>
> > > >> IMHO, the ppiov for dmabuf is forced fitting into page_pool without much
> > > >> design consideration at this point.
> > > >>
> > > > ...
> > > >>
> > > >> For now, the above may work for the rx part as it seems that you are
> > > >> only enabling rx for dmabuf for now.
> > > >>
> > > >> What is the plan to enable tx for dmabuf? Is it also integrated into
> > > >> page_pool? There was an attempt to enable page_pool for tx; Eric seemed to
> > > >> have some comment about this:
> > > >> https://lkml.kernel.org/netdev/[email protected]/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2
> > > >>
> > > >> If tx is not integrated into page_pool, do we need to create a new layer for
> > > >> the tx dmabuf?
> > > >>
> > > >
> > > > I imagine the TX path will reuse page_pool_iov, page_pool_iov_*()
> > > > helpers, and page_pool_page_*() helpers, but will not need any core
> > > > page_pool changes. This is because the TX path will have to piggyback
> > >
> > > We may need another bit/flags checking to demux between page_pool owned
> > > devmem and non-page_pool owned devmem.
> > >
> >
> > The way I'm imagining the support, I don't see the need for such
> > flags. We'd be re-using generic helpers like
> > page_pool_iov_get_dma_address() and what not that don't need that
> > checking.
> >
> > > Also calling page_pool_*() on non-page_pool owned devmem is confusing
> > > enough that we may need a thin layer handling non-page_pool owned devmem
> > > in the end.
> > >
> >
> > The page_pool_page* & page_pool_iov* functions can be renamed if
> > confusing. I would think that's no issue (note that the page_pool_*
> > functions need not be called for TX path).
> >
> > > > on MSG_ZEROCOPY (devmem is not copyable), so no memory allocation from
> > > > the page_pool (or otherwise) is needed or possible. RFCv1 had a TX
> > > > implementation based on dmabuf pages without page_pool involvement, I
> > > > imagine I'll do something similar.
> > > It would be good to have a tx implementation for the next version, so
> > > that we can have a whole picture of devmem.
> > >
> > > >
> >
> >
> >
> > --
> > Thanks,
> > Mina
>
>
>
> --
> Thanks,
> Mina



--
Thanks,
Mina
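
For reference, the static_branch-guarded check being discussed looks roughly
like the following; the static key and helper names follow the series, but
the definitions here are paraphrased rather than quoted:

#include <linux/jump_label.h>

struct page;
struct page_pool_iov;

DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);

#define PP_IOV  0x01UL

static inline bool page_is_page_pool_iov(const struct page *page)
{
        /* With no memory providers bound, the branch is patched out and the
         * normal-page path costs essentially nothing; the key flips only
         * when devmem is actually in use.
         */
        return static_branch_unlikely(&page_pool_mem_providers) &&
               ((unsigned long)page & PP_IOV);
}

static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
{
        if (page_is_page_pool_iov(page))
                return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);
        return NULL;
}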

2023-12-14 01:16:46

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [net-next v1 03/16] queue_api: define queue api

On Thu, 7 Dec 2023 16:52:34 -0800 Mina Almasry wrote:
> This API enables the net stack to reset the queues used for devmem.

Nice, thanks for moving this forward. FWIW when I started hacking on it
the API looked more like:
https://github.com/kuba-moo/linux/commit/7af8abfa4fdff248e21fc76aecc334004a0f322f
which passes the config objects to the queue callbacks as an argument.
Storing in struct netdev_rx_queue makes implementing prepare / swap
harder. But that's just FYI, we can refactor later. The queue config
rabbit hole is pretty deep.
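
For context, the shape being described, with the per-queue config passed to
the callbacks rather than stashed in struct netdev_rx_queue, would look
something like this; all names here are illustrative and are not the ndos
from patch 03/16:

struct net_device;

/* Opaque, driver-defined per-queue objects; pure placeholders here. */
struct queue_config;
struct queue_mem;

struct queue_mgmt_ops_sketch {
        /* allocate resources for queue idx from cfg without touching the
         * live queue, so the old queue keeps running until swap time
         */
        struct queue_mem *(*queue_mem_alloc)(struct net_device *dev,
                                             struct queue_config *cfg, int idx);
        void (*queue_mem_free)(struct net_device *dev, struct queue_mem *mem);

        /* stop the old instance, then start the new one from the
         * pre-allocated memory and the config object
         */
        int (*queue_stop)(struct net_device *dev, int idx);
        int (*queue_start)(struct net_device *dev, struct queue_config *cfg,
                           struct queue_mem *mem, int idx);
};

Keeping the config as an explicit argument is what makes a prepare/commit
style swap straightforward, which is the point being made above.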

2023-12-14 01:18:07

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [net-next v1 05/16] net: netdev netlink api to bind dma-buf to a net device

On Thu, 7 Dec 2023 16:52:36 -0800 Mina Almasry wrote:
> + name: type
> + doc: rx or tx queue
> + type: u8
> + enum: queue-type

nit: the queue/napi GET was applied to net-next, would be good to stick
to the same types (s/u8/u32)

2023-12-14 06:21:18

by patchwork-bot+netdevbpf

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

Hello:

This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <[email protected]>:

On Thu, 7 Dec 2023 16:52:31 -0800 you wrote:
> Major changes in v1:
> --------------
>
> 1. Implemented MVP queue API ndos to remove the userspace-visible
> driver reset.
>
> 2. Fixed issues in the napi_pp_put_page() devmem frag unref path.
>
> [...]

Here is the summary with links:
- [net-next,v1,01/16] net: page_pool: factor out releasing DMA from releasing the page
https://git.kernel.org/netdev/net-next/c/c3f687d8dfeb
- [net-next,v1,02/16] net: page_pool: create hooks for custom page providers
(no matching commit)
- [net-next,v1,03/16] queue_api: define queue api
(no matching commit)
- [net-next,v1,04/16] gve: implement queue api
(no matching commit)
- [net-next,v1,05/16] net: netdev netlink api to bind dma-buf to a net device
(no matching commit)
- [net-next,v1,06/16] netdev: support binding dma-buf to netdevice
(no matching commit)
- [net-next,v1,07/16] netdev: netdevice devmem allocator
(no matching commit)
- [net-next,v1,08/16] memory-provider: dmabuf devmem memory provider
(no matching commit)
- [net-next,v1,09/16] page_pool: device memory support
(no matching commit)
- [net-next,v1,10/16] page_pool: don't release iov on elevanted refcount
(no matching commit)
- [net-next,v1,11/16] net: support non paged skb frags
(no matching commit)
- [net-next,v1,12/16] net: add support for skbs with unreadable frags
(no matching commit)
- [net-next,v1,13/16] tcp: RX path for devmem TCP
(no matching commit)
- [net-next,v1,14/16] net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
(no matching commit)
- [net-next,v1,15/16] net: add devmem TCP documentation
(no matching commit)
- [net-next,v1,16/16] selftests: add ncdevmem, netcat for devmem TCP
(no matching commit)

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html


2023-12-14 06:49:14

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Thu, Dec 14, 2023 at 06:20:27AM +0000, [email protected] wrote:
> Hello:
>
> This series was applied to netdev/net-next.git (main)
> by Jakub Kicinski <[email protected]>:

Umm, this is still very broken in interaction with other subsystems.
Please don't push ahead so quickly.

2023-12-14 06:52:36

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Wed, Dec 13, 2023 at 10:49 PM Christoph Hellwig <[email protected]> wrote:
>
> On Thu, Dec 14, 2023 at 06:20:27AM +0000, [email protected] wrote:
> > Hello:
> >
> > This series was applied to netdev/net-next.git (main)
> > by Jakub Kicinski <[email protected]>:
>
> Umm, this is still very broken in interaction with other subsystems.
> Please don't push ahead so quickly.
>

The bot is just a bit optimistic. Only this first patch was applied.
It does not interact with other subsystems.

- [net-next,v1,01/16] net: page_pool: factor out releasing DMA from
releasing the page

--
Thanks,
Mina

2023-12-14 06:59:41

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [net-next v1 00/16] Device Memory TCP

On Wed, Dec 13, 2023 at 10:51:25PM -0800, Mina Almasry wrote:
> On Wed, Dec 13, 2023 at 10:49 PM Christoph Hellwig <[email protected]> wrote:
> >
> > On Thu, Dec 14, 2023 at 06:20:27AM +0000, [email protected] wrote:
> > > Hello:
> > >
> > > This series was applied to netdev/net-next.git (main)
> > > by Jakub Kicinski <[email protected]>:
> >
> > Umm, this is still very broken in interaction with other subsystems.
> > Please don't push ahead so quickly.
> >
>
> The bot is just a bit optimistic. Only this first patch was applied.
> It does not interact with other subsystems.
>
> - [net-next,v1,01/16] net: page_pool: factor out releasing DMA from
> releasing the page

Ah, that makes sense. Thanks for the update!

2023-12-14 20:03:51

by Mina Almasry

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On Mon, Dec 11, 2023 at 12:37 PM Pavel Begunkov <[email protected]> wrote:
...
> >> If you remove the branch, let it fall into ->release and rely
> >> on refcounting there, then the callback could also fix up
> >> release_cnt or ask pp to do it, like in the patch I linked above
> >>
> >
> > Sadly I don't think this is possible due to the reasons I mention in
> > the commit message of that patch. Prematurely releasing ppiov and not
> > having them be candidates for recycling shows me a 4-5x degradation in
> > performance.
>
> I don't think I follow. The concept is to only recycle a buffer (i.e.
> make it available for allocation) when its refs drop to zero, which is
> IMHO the only way it can work, and IIUC what this patchset is doing.
>
> That's also what I suggest doing, but through a slightly different path.
> Let's say at some moment there are 2 refs (e.g. 1 for an skb and
> 1 for userspace/xarray).
>
> Say it first puts the skb:
>
> napi_pp_put_page()
> -> page_pool_return_page()
> -> mp_ops->release_page()
> -> need_to_free = put_buf()
> // not last ref, need_to_free==false,
> // don't recycle, don't increase release_cnt
>
> Then you put the last ref:
>
> page_pool_iov_put_many()
> -> page_pool_return_page()
> -> mp_ops->release_page()
> -> need_to_free = put_buf()
> // last ref, need_to_free==true,
> // recycle and release_cnt++
>
> And that last put can even be recycled right into the
> pp / ptr_ring, in which case it doesn't need to touch
> release_cnt. Does it make sense? I don't see where
> 4-5x degradation would come from
>
>

Sorry for the late reply, I have been working on this locally.

What you're saying makes sense, and I'm no longer sure why I was
seeing a perf degradation without '[net-next v1 10/16] page_pool:
don't release iov on elevanted refcount'. However, even though what
you're saying is technically correct, AFAIU it's actually semantically
wrong. When a page is released by the page_pool, we should call
page_pool_clear_pp_info() and completely disconnect the page from the
pool. If we call release_page() on a page and then the page pool sees
it again in page_pool_return_page(), I think that is considered a bug.
In fact I think what you're proposing is a result of a bug: we don't
call a page_pool_clear_pp_info() equivalent on releasing ppiov.

However, I'm reasonably confident I figured out the right thing to do
here. The page_pool uses page->pp_frag_count for its refcounting
(pp_frag_count is a misnomer; it's being renamed to pp_ref_count in
Liang's series[1]). In this series I used a get_page/put_page
equivalent for refcounting. Once I transitioned to using
pp_[frag|ref]_count for refcounting inside the page_pool, the issue
went away, and I no longer need the patch 'page_pool: don't release
iov on elevanted refcount'.

There is an additional upside, since pages and ppiovs are both being
refcounted using pp_[frag|ref]_count, we get some unified handling for
ppiov and we reduce the checks around ppiov. This should be fixed
properly in the next series.

I still need to do some work (~1 week) before I upload the next
version as there is a new requirement from MM that we transition to a
new type and not re-use page*, but I uploaded my changes github with
the refcounting issues resolved in case they're useful to you. Sorry
for the churn:

https://github.com/mina/linux/commits/tcpdevmem-v1.5/

[1] https://patchwork.kernel.org/project/netdevbpf/list/?series=809049&state=*

--
Thanks,
Mina
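
A sketch of the put path being described, where recycling happens only when
the pool-side reference count drops to zero; the field and helper names are
guesses at the v1.5 code, not quotes from it:

#include <linux/atomic.h>

struct page_pool;

struct page_pool_iov_sketch {
        struct page_pool *pp;
        atomic_long_t pp_ref_count;     /* plays the role of page->pp_ref_count */
};

/* Assumed to exist for the sketch: hand a free buffer back to the pool. */
void page_pool_recycle_iov(struct page_pool *pool,
                           struct page_pool_iov_sketch *ppiov);

static void ppiov_put(struct page_pool *pool,
                      struct page_pool_iov_sketch *ppiov, long nr)
{
        /* Not the last reference: nothing to do, the buffer stays with
         * whoever still holds refs (an skb, the user-facing xarray, ...).
         */
        if (!atomic_long_sub_and_test(nr, &ppiov->pp_ref_count))
                return;

        /* Last reference: only now does the buffer become a candidate for
         * recycling, matching how pages are handled via pp_ref_count.
         */
        page_pool_recycle_iov(pool, ppiov);
}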

2023-12-20 00:00:35

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [net-next v1 08/16] memory-provider: dmabuf devmem memory provider

On 12/14/23 20:03, Mina Almasry wrote:
> On Mon, Dec 11, 2023 at 12:37 PM Pavel Begunkov <[email protected]> wrote:
> ...
>>>> If you remove the branch, let it fall into ->release and rely
>>>> on refcounting there, then the callback could also fix up
>>>> release_cnt or ask pp to do it, like in the patch I linked above
>>>>
>>>
>>> Sadly I don't think this is possible due to the reasons I mention in
>>> the commit message of that patch. Prematurely releasing ppiov and not
>>> having them be candidates for recycling shows me a 4-5x degradation in
>>> performance.
>>
>> I don't think I follow. The concept is to only recycle a buffer (i.e.
>> make it available for allocation) when its refs drop to zero, which is
>> IMHO the only way it can work, and IIUC what this patchset is doing.
>>
>> That's also what I suggest doing, but through a slightly different path.
>> Let's say at some moment there are 2 refs (e.g. 1 for an skb and
>> 1 for userspace/xarray).
>>
>> Say it first puts the skb:
>>
>> napi_pp_put_page()
>> -> page_pool_return_page()
>> -> mp_ops->release_page()
>> -> need_to_free = put_buf()
>> // not last ref, need_to_free==false,
>> // don't recycle, don't increase release_cnt
>>
>> Then you put the last ref:
>>
>> page_pool_iov_put_many()
>> -> page_pool_return_page()
>> -> mp_ops->release_page()
>> -> need_to_free = put_buf()
>> // last ref, need_to_free==true,
>> // recycle and release_cnt++
>>
>> And that last put can even be recycled right into the
>> pp / ptr_ring, in which case it doesn't need to touch
>> release_cnt. Does it make sense? I don't see where
>> 4-5x degradation would come from
>>
>>
>
> Sorry for the late reply, I have been working on this locally.
>
> What you're saying makes sense, and I'm no longer sure why I was
> seeing a perf degradation without '[net-next v1 10/16] page_pool:
> don't release iov on elevanted refcount'. However, even though what
> you're saying is technically correct, AFAIU it's actually semantically
> wrong. When a page is released by the page_pool, we should call
> page_pool_clear_pp_info() and completely disconnect the page from the
> pool. If we call release_page() on a page and then the page pool sees
> it again in page_pool_return_page(), I think that is considered a bug.

You're adding a new feature whose semantics are already different
from what is in there; you can extend it any way, as long as it
makes sense and is agreed on. IMHO, it does. But well, if there is
a better solution I'm all for it.

> In fact I think what you're proposing is a result of a bug: we don't
> call a page_pool_clear_pp_info() equivalent on releasing ppiov.

I don't get it, what bug? page_pool_clear_pp_info() is not called
for ppiov because it doesn't make sense to call it for ppiov:
there is no reason to clear ppiov->pp, nor is there any pp_magic.


> However, I'm reasonably confident I figured out the right thing to do
> here. The page_pool uses page->pp_frag_count for its refcounting
> (pp_frag_count is a misnomer; it's being renamed to pp_ref_count in
> Liang's series[1]). In this series I used a get_page/put_page
> equivalent for refcounting. Once I transitioned to using
> pp_[frag|ref]_count for refcounting inside the page_pool, the issue
> went away, and I no longer need the patch 'page_pool: don't release
> iov on elevanted refcount'.

Lovely, I'll take a look later! (also assuming it's in v5)


> There is an additional upside, since pages and ppiovs are both being
> refcounted using pp_[frag|ref]_count, we get some unified handling for
> ppiov and we reduce the checks around ppiov. This should be fixed
> properly in the next series.
>
> I still need to do some work (~1 week) before I upload the next
> version as there is a new requirement from MM that we transition to a
> new type and not re-use page*, but I uploaded my changes github with
> the refcounting issues resolved in case they're useful to you. Sorry
> for the churn:
>
> https://github.com/mina/linux/commits/tcpdevmem-v1.5/
>
> [1] https://patchwork.kernel.org/project/netdevbpf/list/?series=809049&state=*
>

--
Pavel Begunkov

2024-03-05 11:46:35

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [net-next v1 04/16] gve: implement queue api

On Fri, Dec 8, 2023, at 01:52, Mina Almasry wrote:
> +static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
> +{
> + struct gve_per_rx_queue_mem_dqo *gve_q_mem;
..
> +
> + gve_q_mem = kvcalloc(1, sizeof(*gve_q_mem), GFP_KERNEL);
> + if (!gve_q_mem)
> + goto err;

[minor comment]

The structure does not seem overly large; even if you have
an array here, I don't see why you would need a vmalloc-type
allocation for struct gve_per_rx_queue_mem_dqo.

Arnd
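
In other words, the same lines as quoted above with the allocator swapped
(untested):

        gve_q_mem = kzalloc(sizeof(*gve_q_mem), GFP_KERNEL);
        if (!gve_q_mem)
                goto err;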