2024-05-30 20:16:51

by Mina Almasry

Subject: [PATCH net-next v10 00/14] Device Memory TCP

v10: https://patchwork.kernel.org/project/netdevbpf/list/?series=852422&state=*
====

Major Changes:
--------------

v9 was sent right before the merge window closed (sorry!). v10 is almost
a re-send of the series now that the merge window re-opened. Only
rebased to latest net-next and addressed some minor iterative comments
received on v9.

As usual, the full devmem TCP changes including the full GVE driver
implementation is here:

https://github.com/mina/linux/commits/tcpdevmem-v10/

Detailed changelog:
-------------------

- Fixed tokens leaking in DONTNEED setsockopt (Nikolay).
- Moved net_iov_dma_addr() to devmem.c and made it a devmem-specific
helper (David).
- Renamed the alloc_pages hook to alloc_netmems, as alloc_pages is now
defined as a preprocessor macro and reusing the name causes a build error.

v9:
===

Major Changes:
--------------

The GVE queue API has been merged. Submitting this version as non-RFC after
rebasing on top of the merged API, and dropping the out-of-tree queue API
I was carrying on github. Addressed the little feedback v8 received.

Detailed changelog:
------------------
- Added new patch from David Wei to this series for
netdev_rx_queue_restart()
- Fixed sparse error.
- Removed CONFIG_ checks in netmem_is_net_iov()
- Flipped skb->readable to skb->unreadable
- Minor fixes to selftests & docs.

RFC v8:
=======

Major Changes:
--------------

- Fixed build error generated by patch-by-patch build.
- Applied docs suggestions from Randy.

RFC v7:
=======

Major Changes:
--------------

This revision largely rebases on top of net-next and addresses the feedback
RFCv6 received from folks, namely Jakub, Yunsheng, Arnd, David, & Pavel.

The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.

As usual the full devmem TCP changes including the full GVE driver
implementation is here:

https://github.com/mina/linux/commits/tcpdevmem-v7/

Detailed changelog:

- Use admin-perm in netlink API.
- Addressed feedback from Jakub with regards to netlink API
implementation.
- Renamed devmem.c functions to something more appropriate for that
file.
- Improved the performance seen through the page_pool benchmark.
- Fixed the value definition of all the SO_DEVMEM_* uapi.
- Various fixes to documentation.

Perf - page-pool benchmark:
---------------------------

Improved performance of bench_page_pool_simple.ko tests compared to v6:

https://pastebin.com/raw/v5dYRg8L

net-next base: 8 cycle fast path.
RFC v6: 10 cycle fast path.
RFC v7: 9 cycle fast path.
RFC v7 with CONFIG_DMA_SHARED_BUFFER disabled: 8 cycle fast path,
same as baseline.

Perf - Devmem TCP benchmark:
----------------------------

Perf is about the same regardless of the changes in v7, namely the
removal of the static_branch_unlikely to improve the page_pool benchmark
performance:

189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX i.e. ~95% line rate.

RFC v6:
=======

Major Changes:
--------------

This revision largely rebases on top of net-next and addresses the little
feedback RFCv5 received.

The series remains in RFC because the queue-API ndos defined in this
series are not yet implemented. I have a GVE implementation I carry out
of tree for my testing. An upstreamable GVE implementation is in the
works. Aside from that, in my estimation all the patches are ready for
review/merge. Please do take a look.

As usual the full devmem TCP changes including the full GVE driver
implementation is here:

https://github.com/mina/linux/commits/tcpdevmem-v6/

This version also comes with some performance data recorded in the cover
letter (see below changelog).

Detailed changelog:

- Rebased on top of the merged netmem_ref changes.

- Converted skb->dmabuf to skb->readable (Pavel). Pavel's original
suggestion was to remove the skb->dmabuf flag entirely, but when I
looked into it closely, I found that if we remove the flag, we have
to dereference the shinfo(skb) pointer to obtain the first frag and
tell whether an skb is readable or not. This can cause a performance
regression if it dirties the cache line when the shinfo(skb) was not
really needed. Instead, I converted the skb->dmabuf flag into a
generic skb->readable flag which can be re-used by io_uring 0-copy RX.

- Squashed a few locking optimizations from Eric Dumazet in the RX path
and the DEVMEM_DONTNEED setsockopt.

- Expanded the tests a bit. Added validation for invalid scenarios and
added some more coverage.

Perf - page-pool benchmark:
---------------------------

bench_page_pool_simple.ko tests with and without these changes:
https://pastebin.com/raw/ncHDwAbn

AFAIK the number that really matters in the perf tests is
'tasklet_page_pool01_fast_path Per elem'. It measures at about 8
cycles without the changes, with some 1 cycle noise in some results.

With the patches this regresses to 9 cycles, with the same occasional
1 cycle noise when running the test repeatedly.

Lastly I tried disabling the static_branch_unlikely() check in
netmem_is_net_iov(). To my surprise, disabling it brings the fast path
back to 8 cycles, but the 1 cycle noise remains.

Perf - Devmem TCP benchmark:
----------------------------

189/200gbps bi-directional throughput with RX devmem TCP and regular TCP
TX i.e. ~95% line rate.

Major changes in RFC v5:
========================

1. Rebased on top of 'Abstract page from net stack' series and used the
new netmem type to refer to LSB set pointers instead of re-using
struct page.

2. Downgraded this series back to RFC and called it RFC v5. This is
because this series is now dependent on 'Abstract page from net
stack'[1] and the queue API. Both are removed from the series to
reduce the patch count, and those bits are fairly independent,
pre-requisite work.

3. Reworked the page_pool devmem support to use netmem and for some
more unified handling.

4. Reworked the reference counting of net_iov (renamed from
page_pool_iov) to use pp_ref_count for refcounting.

The full changes including the dependent series and GVE page pool
support is here:

https://github.com/mina/linux/commits/tcpdevmem-rfcv5/

[1] https://patchwork.kernel.org/project/netdevbpf/list/?series=810774

Major changes in v1:
====================

1. Implemented MVP queue API ndos to remove the userspace-visible
driver reset.

2. Fixed issues in the napi_pp_put_page() devmem frag unref path.

3. Removed RFC tag.

Many smaller comments addressed across all the patches (patches have
individual changelogs).

Full tree including the rest of the GVE driver changes:
https://github.com/mina/linux/commits/tcpdevmem-v1

Changes in RFC v3:
==================

1. Pulled in the memory-provider dependency from Jakub's RFC[1] to make the
series reviewable and mergeable.

2. Implemented multi-rx-queue binding which was a todo in v2.

3. Fix to cmsg handling.

The sticking point in RFC v2[2] was the device reset required to refill
the device rx-queues after the dmabuf bind/unbind. As I understand it,
the suggested solution is a subset of the per-queue management ops that
Jakub proposed, or something similar:

https://lore.kernel.org/netdev/[email protected]/

This is not addressed in this revision, because:

1. This point was discussed at netconf & netdev and there is openness to
using the current approach of requiring a device reset.

2. Implementing individual queue resetting seems to be difficult for my
test bed with GVE. My prototype to test this ran into issues with the
rx-queues not coming back up properly if reset individually. At the
moment I'm unsure if it's a mistake in the POC or a genuine issue in
the virtualization stack behind GVE, which currently doesn't test
individual rx-queue restart.

3. Our use cases are not bothered by requiring a device reset to refill
the buffer queues, and we'd like to support NICs that run into this
limitation with resetting individual queues.

My thought is that drivers that have trouble with per-queue configs can
use the support in this series, while drivers that support new netdev
ops to reset individual queues can automatically reset the queue as
part of the dma-buf bind/unbind.

The same approach with device resets is presented again for consideration
with other sticking points addressed.

This proposal includes only the rx devmem path proposed for merge. For a
snapshot of my entire tree which includes the GVE POC page pool support &
device memory support:

https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3

[1] https://lore.kernel.org/netdev/[email protected]/T/
[2] https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4pSOD1Jw@mail.gmail.com/T/

Changes in RFC v2:
==================

The sticking point in RFC v1[1] was the dma-buf pages approach we used to
deliver the device memory to the TCP stack. RFC v2 is a proof-of-concept
that attempts to resolve this by implementing scatterlist support in the
networking stack, such that we can import the dma-buf scatterlist
directly. This is the approach proposed at a high level here[2].

Detailed changes:
1. Replaced dma-buf pages approach with importing scatterlist into the
page pool.
2. Replace the dma-buf pages centric API with a netlink API.
3. Removed the TX path implementation - there is no issue with
implementing the TX path with scatterlist approach, but leaving
out the TX path makes it easier to review.
4. Functionality is tested with this proposal, but I have not conducted
perf testing yet. I'm not sure there are regressions, but I removed
perf claims from the cover letter until they can be re-confirmed.
5. Added Signed-off-by: tags for contributors to the implementation.
6. Fixed some bugs with the RX path since RFC v1.

Any feedback welcome, but specifically the biggest pending questions
needing feedback IMO are:

1. Feedback on the scatterlist-based approach in general.
2. Netlink API (Patch 1 & 2).
3. Approach to handle all the drivers that expect to receive pages from
the page pool (Patch 6).

[1] https://lore.kernel.org/netdev/[email protected]/T/
[2] https://lore.kernel.org/netdev/CAHS8izPm6XRS54LdCDZVd0C75tA1zHSu6jLVO8nzTLXCc=H7Nw@mail.gmail.com/

==================

* TL;DR:

Device memory TCP (devmem TCP) is a proposal for transferring data to and/or
from device memory efficiently, without bouncing the data to a host memory
buffer.

* Problem:

A large share of data transfers have device memory as the source and/or
destination. Accelerators have drastically increased the volume of such transfers.
Some examples include:
- ML accelerators transferring large amounts of training data from storage into
GPU/TPU memory. In some cases ML training setup time can be as long as 50% of
TPU compute time; improving data transfer throughput & efficiency can help
improve GPU/TPU utilization.

- Distributed training, where ML accelerators, such as GPUs on different hosts,
exchange data among them.

- Distributed raw block storage applications transfer large amounts of data to and
from remote SSDs; much of this data does not require host processing.

Today, the majority of Device-to-Device data transfers over the network are
implemented as the following low level operations: Device-to-Host copy,
Host-to-Host network transfer, and Host-to-Device copy.

The implementation is suboptimal, especially for bulk data transfers, and can
put significant strain on system resources, such as host memory bandwidth,
PCIe bandwidth, etc. One important reason behind the current state is the
kernel's lack of semantics to express device-to-network transfers.

* Proposal:

In this patch series we attempt to optimize this use case by implementing
socket APIs that enable the user to:

1. send device memory across the network directly, and
2. receive incoming network packets directly into device memory.

Packet _payloads_ go directly from the NIC to device memory for receive and from
device memory to NIC for transmit.
Packet _headers_ go to/from host memory and are processed by the TCP/IP stack
normally. The NIC _must_ support header split to achieve this.

Advantages:

- Alleviate host memory bandwidth pressure, compared to existing
network-transfer + device-copy semantics.

- Alleviate PCIe BW pressure, by limiting data transfer to the lowest level
of the PCIe tree, compared to the traditional path which sends data through the
root complex.

* Patch overview:

** Part 1: netlink API

Gives the user the ability to bind a dma-buf to an RX queue.
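
For illustration, a bind request can be built from userspace roughly as
below with libnl-3 (error handling omitted; ifindex, dmabuf_fd and
rxq_idx are placeholders; the command/attribute constants are the ones
added by the netlink patch in this series):

	/* libnl-3: <netlink/genl/genl.h>, <netlink/genl/ctrl.h> */
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg = nlmsg_alloc();
	struct nlattr *q;
	int family;

	genl_connect(sk);
	family = genl_ctrl_resolve(sk, "netdev");

	genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
		    NETDEV_CMD_BIND_RX, 1);
	nla_put_u32(msg, NETDEV_A_BIND_DMABUF_IFINDEX, ifindex);
	nla_put_u32(msg, NETDEV_A_BIND_DMABUF_DMABUF_FD, dmabuf_fd);

	/* One nested queue-dmabuf attribute per rx queue to bind. */
	q = nla_nest_start(msg, NETDEV_A_BIND_DMABUF_QUEUES);
	nla_put_u8(msg, NETDEV_A_QUEUE_DMABUF_TYPE, NETDEV_QUEUE_TYPE_RX);
	nla_put_u32(msg, NETDEV_A_QUEUE_DMABUF_IDX, rxq_idx);
	nla_nest_end(msg, q);

	nl_send_auto(sk, msg);
	/* The reply carries NETDEV_A_BIND_DMABUF_DMABUF_ID on success. */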

** Part 2: scatterlist support

Currently the standard for device memory sharing is DMABUF, which doesn't
generate struct pages. On the other hand, the networking stack (skbs, drivers, and
the page pool) operates on pages. We have 2 options:

1. Generate struct pages for dmabuf device memory, or,
2. Modify the networking stack to process scatterlist.

Approach #1 was attempted in RFC v1. RFC v2 implements approach #2.

** Part 3: page pool support

We piggyback on the page pool memory providers proposal:
https://github.com/kuba-moo/linux/tree/pp-providers

It allows the page pool to define a memory provider that provides the
page allocation and freeing. It helps abstract most of the device memory
TCP changes from the driver.
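
To make the hook surface concrete, a skeleton provider against these ops
might look as below. This is only a sketch: the names are invented, and
it glosses over the pp_info and DMA setup that a real provider (like the
dmabuf one later in this series) must perform:

static int mp_noop_init(struct page_pool *pool)
{
	/* Validate pool->p and set up pool->mp_priv here. */
	return 0;
}

static void mp_noop_destroy(struct page_pool *pool)
{
}

/* Called on the allocation slow path instead of
 * __page_pool_alloc_pages_slow() when the pool has a provider.
 */
static struct page *mp_noop_alloc_netmems(struct page_pool *pool, gfp_t gfp)
{
	return alloc_pages_node(pool->p.nid, gfp, pool->p.order);
}

/* Return true to let the pool clear pp info and put_page() the page;
 * return false to keep the "ownership" ref and recycle it later.
 */
static bool mp_noop_release_page(struct page_pool *pool, struct page *page)
{
	return true;
}

static const struct memory_provider_ops mp_noop_ops = {
	.init		= mp_noop_init,
	.destroy	= mp_noop_destroy,
	.alloc_netmems	= mp_noop_alloc_netmems,
	.release_page	= mp_noop_release_page,
};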

** Part 4: support for unreadable skb frags

Page pool iovs are not accessible by the host; we implement changes
throughout the networking stack to correctly handle skbs with unreadable
frags.

** Part 5: recvmsg() APIs

We define user APIs for the user to send and receive device memory.
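
A rough sketch of the receive side, for orientation (fd is a connected
TCP socket; the uapi names - MSG_SOCK_DEVMEM, struct dmabuf_cmsg,
SCM_DEVMEM_DMABUF, SO_DEVMEM_DONTNEED, struct dmabuf_token - are the
ones proposed in the RX path and DONTNEED patches; treat the exact
fields as illustrative):

	/* needs <sys/socket.h> plus the series' uapi headers */
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
	struct dmabuf_token token;
	struct msghdr msg = {};
	struct cmsghdr *cm;

	msg.msg_control = ctrl;
	msg.msg_controllen = sizeof(ctrl);

	/* Payload stays in the bound dma-buf; recvmsg() only returns
	 * cmsgs describing where each frag landed.
	 */
	recvmsg(fd, &msg, MSG_SOCK_DEVMEM);

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

		if (cm->cmsg_type != SCM_DEVMEM_DMABUF)
			continue;

		/* dc->frag_offset/frag_size locate the payload inside
		 * the dma-buf. Once done with the frag, return its token
		 * so the kernel can reuse the buffer.
		 */
		token.token_start = dc->frag_token;
		token.token_count = 1;
		setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &token,
			   sizeof(token));
	}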

Not included with this series is the GVE devmem TCP support, just to
simplify the review. Code available here if desired:
https://github.com/mina/linux/tree/tcpdevmem

This series is built on top of net-next with Jakub's pp-providers changes
cherry-picked.

* NIC dependencies:

1. (strict) Devmem TCP requires the NIC to support header split, i.e. the
capability to split incoming packets into a header + payload and to put
each into a separate buffer. Devmem TCP works by using device memory
for the packet payload, and host memory for the packet headers.

2. (optional) Devmem TCP works better with flow steering support & RSS support,
i.e. the NIC's ability to steer flows into certain rx queues. This allows the
sysadmin to enable devmem TCP on a subset of the rx queues, and steer
devmem TCP traffic onto these queues and non-devmem TCP traffic elsewhere.

The NIC I have access to with these properties is the GVE with DQO support
running in Google Cloud, but any NIC that supports these features would suffice.
I may be able to help reviewers bring up devmem TCP on their NICs.

* Testing:

The series includes a udmabuf kselftest that shows a simple use case of
devmem TCP and validates the entire data path end to end without
a dependency on a specific dmabuf provider.
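
For reference, the selftest sources its dma-buf from udmabuf, roughly as
below (error handling omitted; len is a placeholder and must be
page-aligned):

	/* needs <fcntl.h>, <sys/ioctl.h>, <sys/mman.h>, <linux/udmabuf.h> */
	struct udmabuf_create create = { .offset = 0, .size = len };
	int devfd = open("/dev/udmabuf", O_RDWR);
	int memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
	int dmabuf_fd;

	ftruncate(memfd, len);
	/* udmabuf requires the backing memfd to be sealed against shrinking. */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	create.memfd = memfd;
	dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);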

** Test Setup

Kernel: net-next with this series and memory provider API cherry-picked
locally.

Hardware: Google Cloud A3 VMs.

NIC: GVE with header split & RSS & flow steering support.

Cc: Pavel Begunkov <[email protected]>
Cc: David Wei <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Yunsheng Lin <[email protected]>
Cc: Shailend Chand <[email protected]>
Cc: Harshitha Ramamurthy <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Jeroen de Borst <[email protected]>
Cc: Praveen Kaligineedi <[email protected]>



Jakub Kicinski (1):
net: page_pool: create hooks for custom page providers

Mina Almasry (13):
netdev: add netdev_rx_queue_restart()
net: netdev netlink api to bind dma-buf to a net device
netdev: support binding dma-buf to netdevice
netdev: netdevice devmem allocator
page_pool: convert to use netmem
page_pool: devmem support
memory-provider: dmabuf devmem memory provider
net: support non paged skb frags
net: add support for skbs with unreadable frags
tcp: RX path for devmem TCP
net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags
net: add devmem TCP documentation
selftests: add ncdevmem, netcat for devmem TCP

Documentation/netlink/specs/netdev.yaml | 57 +++
Documentation/networking/devmem.rst | 258 +++++++++++
Documentation/networking/index.rst | 1 +
arch/alpha/include/uapi/asm/socket.h | 6 +
arch/mips/include/uapi/asm/socket.h | 6 +
arch/parisc/include/uapi/asm/socket.h | 6 +
arch/sparc/include/uapi/asm/socket.h | 6 +
include/linux/skbuff.h | 61 ++-
include/linux/skbuff_ref.h | 11 +-
include/linux/socket.h | 1 +
include/net/devmem.h | 124 ++++++
include/net/netdev_rx_queue.h | 5 +
include/net/netmem.h | 208 ++++++++-
include/net/page_pool/helpers.h | 153 +++++--
include/net/page_pool/types.h | 33 +-
include/net/sock.h | 2 +
include/net/tcp.h | 5 +-
include/trace/events/page_pool.h | 29 +-
include/uapi/asm-generic/socket.h | 6 +
include/uapi/linux/netdev.h | 19 +
include/uapi/linux/uio.h | 17 +
net/bpf/test_run.c | 5 +-
net/core/Makefile | 3 +-
net/core/datagram.c | 6 +
net/core/dev.c | 6 +-
net/core/devmem.c | 384 +++++++++++++++++
net/core/gro.c | 8 +-
net/core/netdev-genl-gen.c | 23 +
net/core/netdev-genl-gen.h | 6 +
net/core/netdev-genl.c | 103 +++++
net/core/netdev_rx_queue.c | 74 ++++
net/core/page_pool.c | 368 +++++++++-------
net/core/skbuff.c | 83 +++-
net/core/sock.c | 61 +++
net/ipv4/esp4.c | 3 +-
net/ipv4/tcp.c | 254 ++++++++++-
net/ipv4/tcp_input.c | 13 +-
net/ipv4/tcp_ipv4.c | 10 +
net/ipv4/tcp_minisocks.c | 2 +
net/ipv4/tcp_output.c | 5 +-
net/ipv6/esp6.c | 3 +-
net/packet/af_packet.c | 4 +-
tools/include/uapi/linux/netdev.h | 19 +
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 542 ++++++++++++++++++++++++
46 files changed, 2738 insertions(+), 267 deletions(-)
create mode 100644 Documentation/networking/devmem.rst
create mode 100644 include/net/devmem.h
create mode 100644 net/core/devmem.c
create mode 100644 net/core/netdev_rx_queue.c
create mode 100644 tools/testing/selftests/net/ncdevmem.c

--
2.45.1.288.g0e0cd299f1-goog



2024-05-30 20:17:29

by Mina Almasry

Subject: [PATCH net-next v10 01/14] netdev: add netdev_rx_queue_restart()

Add a netdev_rx_queue_restart() helper, declared in netdev_rx_queue.h
and implemented in a new net/core/netdev_rx_queue.c.

Signed-off-by: David Wei <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v9: https://lore.kernel.org/all/[email protected]/
(submitted by David).
- fixed SPDX license identifier (Simon).
- Rebased on top of merged queue API definition, and changed
implementation to match that.
- Replace rtnl_lock() with rtnl_is_locked() to make it usable from my
netlink code where rtnl is already locked.
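
Since the helper only asserts rtnl via rtnl_is_locked(), a caller sketch
looks like this (illustrative; the netlink bind code calls it with rtnl
already held):

	rtnl_lock();
	err = netdev_rx_queue_restart(dev, rxq_idx);
	rtnl_unlock();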

---
include/net/netdev_rx_queue.h | 3 ++
net/core/Makefile | 1 +
net/core/netdev_rx_queue.c | 74 +++++++++++++++++++++++++++++++++++
3 files changed, 78 insertions(+)
create mode 100644 net/core/netdev_rx_queue.c

diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index aa1716fb0e53c..e78ca52d67fbf 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -54,4 +54,7 @@ get_netdev_rx_queue_index(struct netdev_rx_queue *queue)
return index;
}
#endif
+
+int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq);
+
#endif
diff --git a/net/core/Makefile b/net/core/Makefile
index 62be9aef25285..f82232b358a2c 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o

obj-y += net-sysfs.o
obj-y += hotdata.o
+obj-y += netdev_rx_queue.o
obj-$(CONFIG_PAGE_POOL) += page_pool.o page_pool_user.o
obj-$(CONFIG_PROC_FS) += net-procfs.o
obj-$(CONFIG_NET_PKTGEN) += pktgen.o
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
new file mode 100644
index 0000000000000..b3899358e5a9c
--- /dev/null
+++ b/net/core/netdev_rx_queue.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <linux/netdevice.h>
+#include <net/netdev_queues.h>
+#include <net/netdev_rx_queue.h>
+
+int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
+{
+ void *new_mem, *old_mem;
+ int err;
+
+ if (!dev->queue_mgmt_ops || !dev->queue_mgmt_ops->ndo_queue_stop ||
+ !dev->queue_mgmt_ops->ndo_queue_mem_free ||
+ !dev->queue_mgmt_ops->ndo_queue_mem_alloc ||
+ !dev->queue_mgmt_ops->ndo_queue_start)
+ return -EOPNOTSUPP;
+
+ DEBUG_NET_WARN_ON_ONCE(!rtnl_is_locked());
+
+ new_mem = kvzalloc(dev->queue_mgmt_ops->ndo_queue_mem_size, GFP_KERNEL);
+ if (!new_mem)
+ return -ENOMEM;
+
+ old_mem = kvzalloc(dev->queue_mgmt_ops->ndo_queue_mem_size, GFP_KERNEL);
+ if (!old_mem) {
+ err = -ENOMEM;
+ goto err_free_new_mem;
+ }
+
+ err = dev->queue_mgmt_ops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
+ if (err)
+ goto err_free_old_mem;
+
+ err = dev->queue_mgmt_ops->ndo_queue_stop(dev, old_mem, rxq_idx);
+ if (err)
+ goto err_free_new_queue_mem;
+
+ err = dev->queue_mgmt_ops->ndo_queue_start(dev, new_mem, rxq_idx);
+ if (err)
+ goto err_start_queue;
+
+ dev->queue_mgmt_ops->ndo_queue_mem_free(dev, old_mem);
+
+ kvfree(old_mem);
+ kvfree(new_mem);
+
+ return 0;
+
+err_start_queue:
+ /* Restarting the queue with old_mem should be successful as we haven't
+ * changed any of the queue configuration, and there is not much we can
+ * do to recover from a failure here.
+ *
+ * WARN if we fail to recover the old rx queue, and at least free
+ * old_mem so we don't also leak that.
+ */
+ if (dev->queue_mgmt_ops->ndo_queue_start(dev, old_mem, rxq_idx)) {
+ WARN(1,
+ "Failed to restart old queue in error path. RX queue %d may be unhealthy.",
+ rxq_idx);
+ dev->queue_mgmt_ops->ndo_queue_mem_free(dev, old_mem);
+ }
+
+err_free_new_queue_mem:
+ dev->queue_mgmt_ops->ndo_queue_mem_free(dev, new_mem);
+
+err_free_old_mem:
+ kvfree(old_mem);
+
+err_free_new_mem:
+ kvfree(new_mem);
+
+ return err;
+}
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:17:32

by Mina Almasry

Subject: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

From: Jakub Kicinski <[email protected]>

The page providers which try to reuse the same pages will
need to hold onto the ref, even if the page gets released from
the pool - as in, releasing the page from the pp just transfers
the "ownership" reference from the pp to the provider, and the provider
will wait for other references to be gone before feeding this
page back into the pool.

Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

- This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/[email protected]/T/

I take no credit for the idea or implementation; I only added minor
edits to make this workable with device memory TCP, and removed some
hacky test code. This is a critical dependency of device memory TCP
and thus I'm pulling it into this series to make it reviewable and
mergeable.

- There is a pending discussion about the acceptance of the page_pool
memory provider hooks:

https://lore.kernel.org/netdev/[email protected]/

I'm unsure if the discussion has been resolved yet. Sending the series
anyway to get reviews/feedback on the (unrelated) rest of the series.

Cc: Christoph Hellwig <[email protected]>

v10:
- Renamed alloc_pages -> alloc_netmems. alloc_pages is now
a preprocessor macro, and reusing the string results in a build error.

RFC v3 -> v1
- Removed unused mem_provider. (Yunsheng).
- Replaced memory_provider & mp_priv with netdev_rx_queue (Jakub).

---
include/net/page_pool/types.h | 12 ++++++++++
net/core/page_pool.c | 43 +++++++++++++++++++++++++++++++----
2 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index b088d131aeb0d..b038b838f042f 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -51,6 +51,7 @@ struct pp_alloc_cache {
* @dev: device, for DMA pre-mapping purposes
* @netdev: netdev this pool will serve (leave as NULL if none or multiple)
* @napi: NAPI which is the sole consumer of pages, otherwise NULL
+ * @queue: struct netdev_rx_queue this page_pool is being created for.
* @dma_dir: DMA mapping direction
* @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
* @offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
@@ -64,6 +65,7 @@ struct page_pool_params {
int nid;
struct device *dev;
struct napi_struct *napi;
+ struct netdev_rx_queue *queue;
enum dma_data_direction dma_dir;
unsigned int max_len;
unsigned int offset;
@@ -127,6 +129,13 @@ struct page_pool_stats {
};
#endif

+struct memory_provider_ops {
+ int (*init)(struct page_pool *pool);
+ void (*destroy)(struct page_pool *pool);
+ struct page *(*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+ bool (*release_page)(struct page_pool *pool, struct page *page);
+};
+
struct page_pool {
struct page_pool_params_fast p;

@@ -193,6 +202,9 @@ struct page_pool {
*/
struct ptr_ring ring;

+ void *mp_priv;
+ const struct memory_provider_ops *mp_ops;
+
#ifdef CONFIG_PAGE_POOL_STATS
/* recycle stats are per-cpu to avoid locking */
struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f4444b4e39e63..251c9356c9202 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,6 +26,8 @@

#include "page_pool_priv.h"

+static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+
#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)

@@ -186,6 +188,7 @@ static int page_pool_init(struct page_pool *pool,
int cpuid)
{
unsigned int ring_qsize = 1024; /* Default */
+ int err;

page_pool_struct_check();

@@ -267,7 +270,22 @@ static int page_pool_init(struct page_pool *pool,
if (pool->dma_map)
get_device(pool->p.dev);

+ if (pool->mp_ops) {
+ err = pool->mp_ops->init(pool);
+ if (err) {
+ pr_warn("%s() mem-provider init failed %d\n", __func__,
+ err);
+ goto free_ptr_ring;
+ }
+
+ static_branch_inc(&page_pool_mem_providers);
+ }
+
return 0;
+
+free_ptr_ring:
+ ptr_ring_cleanup(&pool->ring, NULL);
+ return err;
}

static void page_pool_uninit(struct page_pool *pool)
@@ -569,7 +587,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
return page;

/* Slow-path: cache empty, do real allocation */
- page = __page_pool_alloc_pages_slow(pool, gfp);
+ if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+ page = pool->mp_ops->alloc_netmems(pool, gfp);
+ else
+ page = __page_pool_alloc_pages_slow(pool, gfp);
return page;
}
EXPORT_SYMBOL(page_pool_alloc_pages);
@@ -627,10 +648,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
void page_pool_return_page(struct page_pool *pool, struct page *page)
{
int count;
+ bool put;

- __page_pool_release_page_dma(pool, page);
-
- page_pool_clear_pp_info(page);
+ put = true;
+ if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+ put = pool->mp_ops->release_page(pool, page);
+ else
+ __page_pool_release_page_dma(pool, page);

/* This may be the last page returned, releasing the pool, so
* it is not safe to reference pool afterwards.
@@ -638,7 +662,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
trace_page_pool_state_release(pool, page, count);

- put_page(page);
+ if (put) {
+ page_pool_clear_pp_info(page);
+ put_page(page);
+ }
/* An optimization would be to call __free_pages(page, pool->p.order)
* knowing page is not part of page-cache (thus avoiding a
* __page_cache_release() call).
@@ -937,6 +964,12 @@ static void __page_pool_destroy(struct page_pool *pool)

page_pool_unlist(pool);
page_pool_uninit(pool);
+
+ if (pool->mp_ops) {
+ pool->mp_ops->destroy(pool);
+ static_branch_dec(&page_pool_mem_providers);
+ }
+
kfree(pool);
}

--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:18:19

by Mina Almasry

Subject: [PATCH net-next v10 03/14] net: netdev netlink api to bind dma-buf to a net device

API takes the dma-buf fd as input, and binds it to the netdevice. The
user can specify the rx queues to bind the dma-buf to.

Suggested-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v7:
- Use flags: [ admin-perm ] instead of a CAP_NET_ADMIN check.

Changes in v1:
- Add rx-queue-type to distinguish rx from tx (Jakub)
- Return dma-buf ID from netlink API (David, Stan)

Changes in RFC-v3:
- Support binding multiple rx rx-queues

---
Documentation/netlink/specs/netdev.yaml | 53 +++++++++++++++++++++++++
include/uapi/linux/netdev.h | 19 +++++++++
net/core/netdev-genl-gen.c | 19 +++++++++
net/core/netdev-genl-gen.h | 2 +
net/core/netdev-genl.c | 6 +++
tools/include/uapi/linux/netdev.h | 19 +++++++++
6 files changed, 118 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 11a32373365ab..e5e91b39e9b71 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -268,6 +268,45 @@ attribute-sets:
name: napi-id
doc: ID of the NAPI instance which services this queue.
type: u32
+ -
+ name: queue-dmabuf
+ attributes:
+ -
+ name: type
+ doc: rx or tx queue
+ type: u8
+ enum: queue-type
+ -
+ name: idx
+ doc: queue index
+ type: u32
+
+ -
+ name: bind-dmabuf
+ attributes:
+ -
+ name: ifindex
+ doc: netdev ifindex to bind the dma-buf to.
+ type: u32
+ checks:
+ min: 1
+ -
+ name: queues
+ doc: receive queues to bind the dma-buf to.
+ type: nest
+ nested-attributes: queue-dmabuf
+ multi-attr: true
+ -
+ name: dmabuf-fd
+ doc: dmabuf file descriptor to bind.
+ type: u32
+ -
+ name: dmabuf-id
+ doc: id of the dmabuf binding
+ type: u32
+ checks:
+ min: 1
+

-
name: qstats
@@ -575,6 +614,20 @@ operations:
attributes:
- ifindex
reply: *queue-get-op
+ -
+ name: bind-rx
+ doc: Bind dmabuf to netdev
+ attribute-set: bind-dmabuf
+ flags: [ admin-perm ]
+ do:
+ request:
+ attributes:
+ - ifindex
+ - dmabuf-fd
+ - queues
+ reply:
+ attributes:
+ - dmabuf-id
-
name: napi-get
doc: Get information about NAPI instances configured on the system.
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index a8188202413ec..53244084e5330 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -136,6 +136,24 @@ enum {
NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};

+enum {
+ NETDEV_A_QUEUE_DMABUF_TYPE = 1,
+ NETDEV_A_QUEUE_DMABUF_IDX,
+
+ __NETDEV_A_QUEUE_DMABUF_MAX,
+ NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1)
+};
+
+enum {
+ NETDEV_A_BIND_DMABUF_IFINDEX = 1,
+ NETDEV_A_BIND_DMABUF_QUEUES,
+ NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ NETDEV_A_BIND_DMABUF_DMABUF_ID,
+
+ __NETDEV_A_BIND_DMABUF_MAX,
+ NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1)
+};
+
enum {
NETDEV_A_QSTATS_IFINDEX = 1,
NETDEV_A_QSTATS_QUEUE_TYPE,
@@ -183,6 +201,7 @@ enum {
NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
NETDEV_CMD_PAGE_POOL_STATS_GET,
NETDEV_CMD_QUEUE_GET,
+ NETDEV_CMD_BIND_RX,
NETDEV_CMD_NAPI_GET,
NETDEV_CMD_QSTATS_GET,

diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 8350a0afa9ec7..9acd0d893765a 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -27,6 +27,11 @@ const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFIND
[NETDEV_A_PAGE_POOL_IFINDEX] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_page_pool_ifindex_range),
};

+const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1] = {
+ [NETDEV_A_QUEUE_DMABUF_TYPE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [NETDEV_A_QUEUE_DMABUF_IDX] = { .type = NLA_U32, },
+};
+
/* NETDEV_CMD_DEV_GET - do */
static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1] = {
[NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
@@ -58,6 +63,13 @@ static const struct nla_policy netdev_queue_get_dump_nl_policy[NETDEV_A_QUEUE_IF
[NETDEV_A_QUEUE_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
};

+/* NETDEV_CMD_BIND_RX - do */
+static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_BIND_DMABUF_DMABUF_FD + 1] = {
+ [NETDEV_A_BIND_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+ [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .type = NLA_U32, },
+ [NETDEV_A_BIND_DMABUF_QUEUES] = NLA_POLICY_NESTED(netdev_queue_dmabuf_nl_policy),
+};
+
/* NETDEV_CMD_NAPI_GET - do */
static const struct nla_policy netdev_napi_get_do_nl_policy[NETDEV_A_NAPI_ID + 1] = {
[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
@@ -130,6 +142,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
.maxattr = NETDEV_A_QUEUE_IFINDEX,
.flags = GENL_CMD_CAP_DUMP,
},
+ {
+ .cmd = NETDEV_CMD_BIND_RX,
+ .doit = netdev_nl_bind_rx_doit,
+ .policy = netdev_bind_rx_nl_policy,
+ .maxattr = NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
{
.cmd = NETDEV_CMD_NAPI_GET,
.doit = netdev_nl_napi_get_doit,
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index 4db40fd5b4a9e..ca5a0983f2834 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -13,6 +13,7 @@

/* Common nested types */
extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
+extern const struct nla_policy netdev_queue_dmabuf_nl_policy[NETDEV_A_QUEUE_DMABUF_IDX + 1];

int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
@@ -26,6 +27,7 @@ int netdev_nl_page_pool_stats_get_dumpit(struct sk_buff *skb,
int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_queue_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info);
int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 1f6ae6379e0fc..e254503723626 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -721,6 +721,12 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
return err;
}

+/* Stub */
+int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ return 0;
+}
+
static int netdev_genl_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index a8188202413ec..53244084e5330 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -136,6 +136,24 @@ enum {
NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};

+enum {
+ NETDEV_A_QUEUE_DMABUF_TYPE = 1,
+ NETDEV_A_QUEUE_DMABUF_IDX,
+
+ __NETDEV_A_QUEUE_DMABUF_MAX,
+ NETDEV_A_QUEUE_DMABUF_MAX = (__NETDEV_A_QUEUE_DMABUF_MAX - 1)
+};
+
+enum {
+ NETDEV_A_BIND_DMABUF_IFINDEX = 1,
+ NETDEV_A_BIND_DMABUF_QUEUES,
+ NETDEV_A_BIND_DMABUF_DMABUF_FD,
+ NETDEV_A_BIND_DMABUF_DMABUF_ID,
+
+ __NETDEV_A_BIND_DMABUF_MAX,
+ NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1)
+};
+
enum {
NETDEV_A_QSTATS_IFINDEX = 1,
NETDEV_A_QSTATS_QUEUE_TYPE,
@@ -183,6 +201,7 @@ enum {
NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
NETDEV_CMD_PAGE_POOL_STATS_GET,
NETDEV_CMD_QUEUE_GET,
+ NETDEV_CMD_BIND_RX,
NETDEV_CMD_NAPI_GET,
NETDEV_CMD_QSTATS_GET,

--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:18:42

by Mina Almasry

Subject: [PATCH net-next v10 04/14] netdev: support binding dma-buf to netdevice

Add a net_devmem_dmabuf_binding struct which represents the
dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
rx queues on the netdevice. At bind time, the dma_buf_attach
& dma_buf_map_attachment calls occur. The entries in the sg_table from
the mapping are inserted into a genpool to make them ready
for allocation.

The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
holds the dma-buf offset of the base of the chunk and the dma_addr of
the chunk. Both are needed to use allocations that come from this chunk.

We create a new type that represents an allocation from the genpool:
net_iov. We set the net_iov allocation size in the
genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally
allocated by the page pool and given to the drivers.

The user can unbind the dmabuf from the netdevice by closing the netlink
socket that established the binding. We do this so that the binding is
automatically unbound even if the userspace process crashes.

Binding and unbinding leave an indicator in struct netdev_rx_queue
that the given queue is bound, but the binding doesn't take effect until
the driver actually reconfigures its queues and re-initializes its page
pool.
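
For example, a driver could observe the binding while (re)configuring a
queue roughly as below (hypothetical driver code; the READ_ONCE() pairs
with the WRITE_ONCE() done at bind time):

static bool my_drv_rxq_bound_to_dmabuf(struct net_device *dev, int rxq_idx)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);

	/* Non-NULL mp_priv means a dma-buf is bound; it only takes
	 * effect once the queue is restarted and the driver creates its
	 * page_pool with rxq->mp_params.
	 */
	return !!READ_ONCE(rxq->mp_params.mp_priv);
}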

The net_devmem_dmabuf_binding struct is refcounted, and releases its
resources only when all the refs are released.

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v10:
- Moved net_iov_dma_addr() to devmem.h and made it devmem specific
helper (David).

v9: https://lore.kernel.org/all/[email protected]/
- Removed net_devmem_restart_rx_queues and put it in its own patch
(David).

v8:
- move dmabuf_devmem_ops usage to later patch to avoid patch-by-patch
build error.

v7:
- Use IS_ERR() instead of IS_ERR_OR_NULL() for the dma_buf_get() return
value.
- Changes netdev_* naming in devmem.c to net_devmem_* (Yunsheng).
- DMA_BIDIRECTIONAL -> DMA_FROM_DEVICE (Yunsheng).
- Added a comment around recovering of the old rx queue in
net_devmem_restart_rx_queue(), and added freeing of old_mem if the
restart of the old queue fails. (Yunsheng).
- Use kernel-family sock-priv (Jakub).
- Put pp_memory_provider_params in netdev_rx_queue instead of the
dma-buf specific binding (Pavel & David).
- Move queue management ops to queue_mgmt_ops instead of netdev_ops
(Jakub).
- Remove excess whitespaces (Jakub).
- Use genlmsg_iput (Jakub).

v6:
- Validate rx queue index
- Refactor new functions into devmem.c (Pavel)

v5:
- Renamed page_pool_iov to net_iov, and moved that support to devmem.h
or netmem.h.

v1:
- Introduce devmem.h instead of bloating netdevice.h (Jakub)
- ENOTSUPP -> EOPNOTSUPP (checkpatch.pl I think)
- Remove unneeded rcu protection for binding->list (rtnl protected)
- Removed extraneous err_binding_put: label.
- Removed dma_addr += len (Paolo).
- Don't override err on netdev_bind_dmabuf_to_queue failure.
- Rename devmem -> dmabuf (David).
- Add id to dmabuf binding (David/Stan).
- Fix missing xa_destroy bound_rq_list.
- Use queue api to reset bound RX queues (Jakub).
- Update netlink API for rx-queue type (tx/rx) (Jakub).

RFC v3:
- Support multi rx-queue binding

---
Documentation/netlink/specs/netdev.yaml | 4 +
include/net/devmem.h | 111 +++++++++++
include/net/netdev_rx_queue.h | 2 +
include/net/netmem.h | 10 +
include/net/page_pool/types.h | 5 +
net/core/Makefile | 2 +-
net/core/dev.c | 3 +
net/core/devmem.c | 254 ++++++++++++++++++++++++
net/core/netdev-genl-gen.c | 4 +
net/core/netdev-genl-gen.h | 4 +
net/core/netdev-genl.c | 101 +++++++++-
11 files changed, 497 insertions(+), 3 deletions(-)
create mode 100644 include/net/devmem.h
create mode 100644 net/core/devmem.c

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index e5e91b39e9b71..16a2994523811 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -669,6 +669,10 @@ operations:
- tx-packets
- tx-bytes

+kernel-family:
+ headers: [ "linux/list.h"]
+ sock-priv: struct list_head
+
mcast-groups:
list:
-
diff --git a/include/net/devmem.h b/include/net/devmem.h
new file mode 100644
index 0000000000000..fa03bdabdffd9
--- /dev/null
+++ b/include/net/devmem.h
@@ -0,0 +1,111 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Device memory TCP support
+ *
+ * Authors: Mina Almasry <[email protected]>
+ * Willem de Bruijn <[email protected]>
+ * Kaiyuan Zhang <[email protected]>
+ *
+ */
+#ifndef _NET_DEVMEM_H
+#define _NET_DEVMEM_H
+
+struct net_devmem_dmabuf_binding {
+ struct dma_buf *dmabuf;
+ struct dma_buf_attachment *attachment;
+ struct sg_table *sgt;
+ struct net_device *dev;
+ struct gen_pool *chunk_pool;
+
+ /* The user holds a ref (via the netlink API) for as long as they want
+ * the binding to remain alive. Each page pool using this binding holds
+ * a ref to keep the binding alive. Each allocated net_iov holds a
+ * ref.
+ *
+ * The binding undoes itself and unmaps the underlying dmabuf once all
+ * those refs are dropped and the binding is no longer desired or in
+ * use.
+ */
+ refcount_t ref;
+
+ /* The list of bindings currently active. Used for netlink to notify us
+ * of the user dropping the bind.
+ */
+ struct list_head list;
+
+ /* rxq's this binding is active on. */
+ struct xarray bound_rxq_list;
+
+ /* ID of this binding. Globally unique to all bindings currently
+ * active.
+ */
+ u32 id;
+};
+
+/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
+ * entry from the dmabuf is inserted into the genpool as a chunk, and needs
+ * this owner struct to keep track of some metadata necessary to create
+ * allocations from this chunk.
+ */
+struct dmabuf_genpool_chunk_owner {
+ /* Offset into the dma-buf where this chunk starts. */
+ unsigned long base_virtual;
+
+ /* dma_addr of the start of the chunk. */
+ dma_addr_t base_dma_addr;
+
+ /* Array of net_iovs for this chunk. */
+ struct net_iov *niovs;
+ size_t num_niovs;
+
+ struct net_devmem_dmabuf_binding *binding;
+};
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
+int net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+ struct net_devmem_dmabuf_binding **out);
+void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
+int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct net_devmem_dmabuf_binding *binding);
+#else
+static inline void
+__net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline int net_devmem_bind_dmabuf(struct net_device *dev,
+ unsigned int dmabuf_fd,
+ struct net_devmem_dmabuf_binding **out)
+{
+ return -EOPNOTSUPP;
+}
+static inline void
+net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+}
+
+static inline int
+net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct net_devmem_dmabuf_binding *binding)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
+static inline void
+net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
+{
+ refcount_inc(&binding->ref);
+}
+
+static inline void
+net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
+{
+ if (!refcount_dec_and_test(&binding->ref))
+ return;
+
+ __net_devmem_dmabuf_binding_free(binding);
+}
+
+#endif /* _NET_DEVMEM_H */
diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h
index e78ca52d67fbf..ac34f5fb4f71d 100644
--- a/include/net/netdev_rx_queue.h
+++ b/include/net/netdev_rx_queue.h
@@ -6,6 +6,7 @@
#include <linux/netdevice.h>
#include <linux/sysfs.h>
#include <net/xdp.h>
+#include <net/page_pool/types.h>

/* This structure contains an instance of an RX queue. */
struct netdev_rx_queue {
@@ -25,6 +26,7 @@ struct netdev_rx_queue {
* Readers and writers must hold RTNL
*/
struct napi_struct *napi;
+ struct pp_memory_provider_params mp_params;
} ____cacheline_aligned_in_smp;

/*
diff --git a/include/net/netmem.h b/include/net/netmem.h
index d8b810245c1da..72e932a1a9489 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -8,6 +8,16 @@
#ifndef _NET_NETMEM_H
#define _NET_NETMEM_H

+#include <net/devmem.h>
+
+/* net_iov */
+
+struct net_iov {
+ struct dmabuf_genpool_chunk_owner *owner;
+};
+
+/* netmem */
+
/**
* typedef netmem_ref - a nonexistent type marking a reference to generic
* network memory.
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index b038b838f042f..6166fb869c35d 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -136,6 +136,11 @@ struct memory_provider_ops {
bool (*release_page)(struct page_pool *pool, struct page *page);
};

+struct pp_memory_provider_params {
+ const struct memory_provider_ops *mp_ops;
+ void *mp_priv;
+};
+
struct page_pool {
struct page_pool_params_fast p;

diff --git a/net/core/Makefile b/net/core/Makefile
index f82232b358a2c..6b43611fb4a43 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -13,7 +13,7 @@ obj-y += dev.o dev_addr_lists.o dst.o netevent.o \
neighbour.o rtnetlink.o utils.o link_watch.o filter.o \
sock_diag.o dev_ioctl.o tso.o sock_reuseport.o \
fib_notifier.o xdp.o flow_offload.o gro.o \
- netdev-genl.o netdev-genl-gen.o gso.o
+ netdev-genl.o netdev-genl-gen.o gso.o devmem.o

obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o

diff --git a/net/core/dev.c b/net/core/dev.c
index 85fe8138f3e4e..981f5ef9e8580 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -158,6 +158,9 @@
#include <net/page_pool/types.h>
#include <net/page_pool/helpers.h>
#include <net/rps.h>
+#include <linux/genalloc.h>
+#include <linux/dma-buf.h>
+#include <net/devmem.h>

#include "dev.h"
#include "net-sysfs.h"
diff --git a/net/core/devmem.c b/net/core/devmem.c
new file mode 100644
index 0000000000000..d82f92d7cf9ce
--- /dev/null
+++ b/net/core/devmem.c
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Devmem TCP
+ *
+ * Authors: Mina Almasry <[email protected]>
+ * Willem de Bruijn <[email protected]>
+ * Kaiyuan Zhang <[email protected]>
+ */
+
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/netdevice.h>
+#include <trace/events/page_pool.h>
+#include <net/netdev_rx_queue.h>
+#include <net/page_pool/types.h>
+#include <net/page_pool/helpers.h>
+#include <linux/genalloc.h>
+#include <linux/dma-buf.h>
+#include <net/devmem.h>
+#include <net/netdev_queues.h>
+
+/* Device memory support */
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
+ struct gen_pool_chunk *chunk,
+ void *not_used)
+{
+ struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
+
+ kvfree(owner->niovs);
+ kfree(owner);
+}
+
+void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
+{
+ size_t size, avail;
+
+ gen_pool_for_each_chunk(binding->chunk_pool,
+ net_devmem_dmabuf_free_chunk_owner, NULL);
+
+ size = gen_pool_size(binding->chunk_pool);
+ avail = gen_pool_avail(binding->chunk_pool);
+
+ if (!WARN(size != avail, "can't destroy genpool. size=%zu, avail=%zu",
+ size, avail))
+ gen_pool_destroy(binding->chunk_pool);
+
+ dma_buf_unmap_attachment(binding->attachment, binding->sgt,
+ DMA_FROM_DEVICE);
+ dma_buf_detach(binding->dmabuf, binding->attachment);
+ dma_buf_put(binding->dmabuf);
+ xa_destroy(&binding->bound_rxq_list);
+ kfree(binding);
+}
+
+/* Protected by rtnl_lock() */
+static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
+
+void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+ struct netdev_rx_queue *rxq;
+ unsigned long xa_idx;
+ unsigned int rxq_idx;
+
+ if (!binding)
+ return;
+
+ if (binding->list.next)
+ list_del(&binding->list);
+
+ xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
+ if (rxq->mp_params.mp_priv == binding) {
+ /* We hold the rtnl_lock while binding/unbinding
+ * dma-buf, so we can't race with another thread that
+ * is also modifying this value. However, the page_pool
+ * may read this config while it's creating its
+ * rx-queues. WRITE_ONCE() here to match the
+ * READ_ONCE() in the page_pool.
+ */
+ WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
+ WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
+
+ rxq_idx = get_netdev_rx_queue_index(rxq);
+
+ netdev_rx_queue_restart(binding->dev, rxq_idx);
+ }
+ }
+
+ xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+ net_devmem_dmabuf_binding_put(binding);
+}
+
+int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
+ struct net_devmem_dmabuf_binding *binding)
+{
+ struct netdev_rx_queue *rxq;
+ u32 xa_idx;
+ int err;
+
+ if (rxq_idx >= dev->num_rx_queues)
+ return -ERANGE;
+
+ rxq = __netif_get_rx_queue(dev, rxq_idx);
+ if (rxq->mp_params.mp_priv)
+ return -EEXIST;
+
+ err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
+ GFP_KERNEL);
+ if (err)
+ return err;
+
+ /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
+ * race with another thread that is also modifying this value. However,
+ * the driver may read this config while it's creating its rx-queues.
+ * WRITE_ONCE() here to match the READ_ONCE() in the driver.
+ */
+ WRITE_ONCE(rxq->mp_params.mp_priv, binding);
+
+ err = netdev_rx_queue_restart(dev, rxq_idx);
+ if (err)
+ goto err_xa_erase;
+
+ return 0;
+
+err_xa_erase:
+ WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
+ WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
+ xa_erase(&binding->bound_rxq_list, xa_idx);
+
+ return err;
+}
+
+int net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
+ struct net_devmem_dmabuf_binding **out)
+{
+ struct net_devmem_dmabuf_binding *binding;
+ static u32 id_alloc_next;
+ struct scatterlist *sg;
+ struct dma_buf *dmabuf;
+ unsigned int sg_idx, i;
+ unsigned long virtual;
+ int err;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return -EBADFD;
+
+ binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
+ dev_to_node(&dev->dev));
+ if (!binding) {
+ err = -ENOMEM;
+ goto err_put_dmabuf;
+ }
+
+ binding->dev = dev;
+
+ err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
+ binding, xa_limit_32b, &id_alloc_next,
+ GFP_KERNEL);
+ if (err < 0)
+ goto err_free_binding;
+
+ xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC);
+
+ refcount_set(&binding->ref, 1);
+
+ binding->dmabuf = dmabuf;
+
+ binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
+ if (IS_ERR(binding->attachment)) {
+ err = PTR_ERR(binding->attachment);
+ goto err_free_id;
+ }
+
+ binding->sgt =
+ dma_buf_map_attachment(binding->attachment, DMA_FROM_DEVICE);
+ if (IS_ERR(binding->sgt)) {
+ err = PTR_ERR(binding->sgt);
+ goto err_detach;
+ }
+
+ /* For simplicity we expect to make PAGE_SIZE allocations, but the
+ * binding can be much more flexible than that. We may be able to
+ * allocate MTU sized chunks here. Leave that for future work...
+ */
+ binding->chunk_pool =
+ gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
+ if (!binding->chunk_pool) {
+ err = -ENOMEM;
+ goto err_unmap;
+ }
+
+ virtual = 0;
+ for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
+ dma_addr_t dma_addr = sg_dma_address(sg);
+ struct dmabuf_genpool_chunk_owner *owner;
+ size_t len = sg_dma_len(sg);
+ struct net_iov *niov;
+
+ owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
+ dev_to_node(&dev->dev));
+ if (!owner) {
+ err = -ENOMEM;
+ goto err_free_chunks;
+ }
+
+ owner->base_virtual = virtual;
+ owner->base_dma_addr = dma_addr;
+ owner->num_niovs = len / PAGE_SIZE;
+ owner->binding = binding;
+
+ err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
+ dma_addr, len, dev_to_node(&dev->dev),
+ owner);
+ if (err) {
+ err = -EINVAL;
+ goto err_free_chunks;
+ }
+
+ owner->niovs = kvmalloc_array(owner->num_niovs,
+ sizeof(*owner->niovs),
+ GFP_KERNEL);
+ if (!owner->niovs) {
+ err = -ENOMEM;
+ goto err_free_chunks;
+ }
+
+ for (i = 0; i < owner->num_niovs; i++) {
+ niov = &owner->niovs[i];
+ niov->owner = owner;
+ }
+
+ virtual += len;
+ }
+
+ *out = binding;
+
+ return 0;
+
+err_free_chunks:
+ gen_pool_for_each_chunk(binding->chunk_pool,
+ net_devmem_dmabuf_free_chunk_owner, NULL);
+ gen_pool_destroy(binding->chunk_pool);
+err_unmap:
+ dma_buf_unmap_attachment(binding->attachment, binding->sgt,
+ DMA_FROM_DEVICE);
+err_detach:
+ dma_buf_detach(dmabuf, binding->attachment);
+err_free_id:
+ xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+err_free_binding:
+ kfree(binding);
+err_put_dmabuf:
+ dma_buf_put(dmabuf);
+ return err;
+}
+#endif
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 9acd0d893765a..3dcd25049e593 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -9,6 +9,7 @@
#include "netdev-genl-gen.h"

#include <uapi/linux/netdev.h>
+#include <linux/list.h>

/* Integer value ranges */
static const struct netlink_range_validation netdev_a_page_pool_id_range = {
@@ -187,4 +188,7 @@ struct genl_family netdev_nl_family __ro_after_init = {
.n_split_ops = ARRAY_SIZE(netdev_nl_ops),
.mcgrps = netdev_nl_mcgrps,
.n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps),
+ .sock_priv_size = sizeof(struct list_head),
+ .sock_priv_init = (void *)netdev_nl_sock_priv_init,
+ .sock_priv_destroy = (void *)netdev_nl_sock_priv_destroy,
};
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index ca5a0983f2834..2c431b7dcbc84 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -10,6 +10,7 @@
#include <net/genetlink.h>

#include <uapi/linux/netdev.h>
+#include <linux/list.h>

/* Common nested types */
extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
@@ -40,4 +41,7 @@ enum {

extern struct genl_family netdev_nl_family;

+void netdev_nl_sock_priv_init(struct list_head *priv);
+void netdev_nl_sock_priv_destroy(struct list_head *priv);
+
#endif /* _LINUX_NETDEV_GEN_H */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index e254503723626..d6007bf4933c6 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -10,6 +10,7 @@
#include <net/netdev_rx_queue.h>
#include <net/netdev_queues.h>
#include <net/busy_poll.h>
+#include <net/devmem.h>

#include "netdev-genl-gen.h"
#include "dev.h"
@@ -721,10 +722,92 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
return err;
}

-/* Stub */
int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
{
- return 0;
+ struct nlattr *tb[ARRAY_SIZE(netdev_queue_dmabuf_nl_policy)];
+ struct net_devmem_dmabuf_binding *out_binding;
+ struct list_head *sock_binding_list;
+ u32 ifindex, dmabuf_fd, rxq_idx;
+ struct net_device *netdev;
+ struct sk_buff *rsp;
+ struct nlattr *attr;
+ int rem, err = 0;
+ void *hdr;
+
+ if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
+ GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) ||
+ GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES))
+ return -EINVAL;
+
+ ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
+ dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]);
+
+ rtnl_lock();
+
+ netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+ if (!netdev) {
+ err = -ENODEV;
+ goto err_unlock;
+ }
+
+ err = net_devmem_bind_dmabuf(netdev, dmabuf_fd, &out_binding);
+ if (err)
+ goto err_unlock;
+
+ nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
+ genlmsg_len(info->genlhdr), rem) {
+ if (nla_type(attr) != NETDEV_A_BIND_DMABUF_QUEUES)
+ continue;
+
+ err = nla_parse_nested(
+ tb, ARRAY_SIZE(netdev_queue_dmabuf_nl_policy) - 1, attr,
+ netdev_queue_dmabuf_nl_policy, info->extack);
+ if (err < 0)
+ goto err_unbind;
+
+ rxq_idx = nla_get_u32(tb[NETDEV_A_QUEUE_DMABUF_IDX]);
+
+ err = net_devmem_bind_dmabuf_to_queue(netdev, rxq_idx,
+ out_binding);
+ if (err)
+ goto err_unbind;
+ }
+
+ sock_binding_list = genl_sk_priv_get(&netdev_nl_family,
+ NETLINK_CB(skb).sk);
+ if (IS_ERR(sock_binding_list)) {
+ err = PTR_ERR(sock_binding_list);
+ goto err_unbind;
+ }
+
+ list_add(&out_binding->list, sock_binding_list);
+
+ rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ if (!rsp) {
+ err = -ENOMEM;
+ goto err_unbind;
+ }
+
+ hdr = genlmsg_iput(rsp, info);
+ if (!hdr) {
+ err = -EMSGSIZE;
+ goto err_genlmsg_free;
+ }
+
+ nla_put_u32(rsp, NETDEV_A_BIND_DMABUF_DMABUF_ID, out_binding->id);
+ genlmsg_end(rsp, hdr);
+
+ rtnl_unlock();
+
+ return genlmsg_reply(rsp, info);
+
+err_genlmsg_free:
+ nlmsg_free(rsp);
+err_unbind:
+ net_devmem_unbind_dmabuf(out_binding);
+err_unlock:
+ rtnl_unlock();
+ return err;
}

static int netdev_genl_netdevice_event(struct notifier_block *nb,
@@ -771,3 +854,17 @@ static int __init netdev_genl_init(void)
}

subsys_initcall(netdev_genl_init);
+
+void netdev_nl_sock_priv_init(struct list_head *priv)
+{
+ INIT_LIST_HEAD(priv);
+}
+
+void netdev_nl_sock_priv_destroy(struct list_head *priv)
+{
+ struct net_devmem_dmabuf_binding *binding;
+ struct net_devmem_dmabuf_binding *temp;
+
+ list_for_each_entry_safe(binding, temp, priv, list)
+ net_devmem_unbind_dmabuf(binding);
+}
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:19:21

by Mina Almasry

Subject: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

Implement the netdev devmem allocator. The allocator takes a given struct
net_devmem_dmabuf_binding as input and allocates net_iov from that
binding.

The allocation simply delegates to the binding's genpool for the
allocation logic and wraps the returned memory region in a net_iov
struct.
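
A minimal usage sketch (illustrative only, not part of the patch): a
caller holding a binding pairs the two helpers like so, with the
surrounding function being hypothetical:

    /* Hypothetical example; only the two net_devmem_* helpers below
     * are from this patch.
     */
    static int devmem_alloc_example(struct net_devmem_dmabuf_binding *binding)
    {
            struct net_iov *niov;

            /* Takes a reference on the binding on success. */
            niov = net_devmem_alloc_dmabuf(binding);
            if (!niov)
                    return -ENOMEM;

            /* ... hand the net_iov to the rx path ... */

            /* Returns the chunk to the genpool, drops the binding ref. */
            net_devmem_free_dmabuf(niov);
            return 0;
    }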

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v8:
- Rename netdev_dmabuf_binding -> net_devmem_dmabuf_binding to avoid
patch-by-patch build error.
- Move niov->pp_magic/pp/pp_ref_count usage to a later patch to avoid
patch-by-patch build error.

v7:
- netdev_ -> net_devmem_* naming (Yunsheng).

v6:
- Add comment on net_iov_dma_addr to explain why we don't use
niov->dma_addr (Pavel)
- Refactor new functions into net/core/devmem.c (Pavel)

v1:
- Rename devmem -> dmabuf (David).

---
include/net/devmem.h | 13 +++++++++++++
include/net/netmem.h | 18 ++++++++++++++++++
net/core/devmem.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 75 insertions(+)

diff --git a/include/net/devmem.h b/include/net/devmem.h
index fa03bdabdffd9..cd3186f5d1fbd 100644
--- a/include/net/devmem.h
+++ b/include/net/devmem.h
@@ -68,7 +68,20 @@ int net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct net_devmem_dmabuf_binding *binding);
+struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
+void net_devmem_free_dmabuf(struct net_iov *ppiov);
#else
+static inline struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+ return NULL;
+}
+
+static inline void net_devmem_free_dmabuf(struct net_iov *ppiov)
+{
+}
+
static inline void
__net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
{
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 72e932a1a9489..01dbdd216fae7 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -14,8 +14,26 @@

struct net_iov {
struct dmabuf_genpool_chunk_owner *owner;
+ unsigned long dma_addr;
};

+static inline struct dmabuf_genpool_chunk_owner *
+net_iov_owner(const struct net_iov *niov)
+{
+ return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+ return niov - net_iov_owner(niov)->niovs;
+}
+
+static inline struct net_devmem_dmabuf_binding *
+net_iov_binding(const struct net_iov *niov)
+{
+ return net_iov_owner(niov)->binding;
+}
+
/* netmem */

/**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index d82f92d7cf9ce..d5fac8edf621d 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
kfree(owner);
}

+static inline dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
+{
+ struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+
+ return owner->base_dma_addr +
+ ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
+}
+
void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
{
size_t size, avail;
@@ -54,6 +62,42 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
kfree(binding);
}

+struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+ struct dmabuf_genpool_chunk_owner *owner;
+ unsigned long dma_addr;
+ struct net_iov *niov;
+ ssize_t offset;
+ ssize_t index;
+
+ dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
+ (void **)&owner);
+ if (!dma_addr)
+ return NULL;
+
+ offset = dma_addr - owner->base_dma_addr;
+ index = offset / PAGE_SIZE;
+ niov = &owner->niovs[index];
+
+ niov->dma_addr = 0;
+
+ net_devmem_dmabuf_binding_get(binding);
+
+ return niov;
+}
+
+void net_devmem_free_dmabuf(struct net_iov *niov)
+{
+ struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+ unsigned long dma_addr = net_devmem_get_dma_addr(niov);
+
+ if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
+ gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);
+
+ net_devmem_dmabuf_binding_put(binding);
+}
+
/* Protected by rtnl_lock() */
static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);

--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:19:41

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 07/14] page_pool: devmem support

Convert netmem to be a union of struct page and struct net_iov. Overload
the LSB of the netmem_ref to indicate that it's a net_iov; otherwise
it's a page.

Currently these entries in struct page are rented by the page_pool and
used exclusively by the net stack:

struct {
	unsigned long pp_magic;
	struct page_pool *pp;
	unsigned long _pp_mapping_pad;
	unsigned long dma_addr;
	atomic_long_t pp_ref_count;
};

Mirror these (and only these) entries into struct net_iov and implement
netmem helpers that can access these common fields regardless of
whether the underlying type is page or net_iov.

Implement checks for net_iov in the netmem helpers that delegate to mm
APIs, to ensure net_iovs are never passed to the mm stack.
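
For illustration, the LSB tagging amounts to the following encode step
(a sketch; the series defines an equivalent helper, and the decode side
is __netmem_clear_lsb() in the diff below):

    /* Sketch of the tagging scheme described above; NET_IOV and the
     * netmem_ref/struct net_iov types are from this patch.
     */
    static inline netmem_ref net_iov_to_netmem(struct net_iov *niov)
    {
            return (__force netmem_ref)((unsigned long)niov | NET_IOV);
    }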

Signed-off-by: Mina Almasry <[email protected]>

---

v9: https://lore.kernel.org/netdev/[email protected]/
- Remove CONFIG checks in netmem_is_net_iov() (Pavel/David/Jens)

v7:
- Remove static_branch_unlikely from netmem_to_net_iov(). We're getting
better results from the fast path in bench_page_pool_simple tests
without the static_branch_unlikely, and the addition of
static_branch_unlikely doesn't improve performance of devmem TCP.

Additionally only check netmem_to_net_iov() if
CONFIG_DMA_SHARED_BUFFER is enabled, otherwise dmabuf net_iovs cannot
exist anyway.

net-next base: 8 cycle fast path.
with static_branch_unlikely: 10 cycle fast path.
without static_branch_unlikely: 9 cycle fast path.
CONFIG_DMA_SHARED_BUFFER disabled: 8 cycle fast path as baseline.

Performance of devmem TCP is at ~95% line rate regardless of whether
static_branch_unlikely is used.

v6:
- Rebased on top of the merged netmem_ref type.
- Rebased on top of the merged skb_pp_frag_ref() changes.

v5:
- Use netmem instead of page* with LSB set.
- Use pp_ref_count for refcounting net_iov.
- Removed many of the custom checks for netmem.

v1:
- Disable fragmentation support for iov properly.
- fix napi_pp_put_page() path (Yunsheng).
- Use pp_frag_count for devmem refcounting.

Cc: [email protected]
Cc: Matthew Wilcox <[email protected]>

---
include/net/netmem.h | 137 ++++++++++++++++++++++++++++++--
include/net/page_pool/helpers.h | 25 +++---
net/core/devmem.c | 3 +
net/core/page_pool.c | 26 +++---
net/core/skbuff.c | 22 +++--
5 files changed, 167 insertions(+), 46 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 664df8325ece5..35ad237fdf29e 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -9,14 +9,51 @@
#define _NET_NETMEM_H

#include <net/devmem.h>
+#include <net/net_debug.h>

/* net_iov */

+DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);
+
+/* We overload the LSB of the struct page pointer to indicate whether it's
+ * a page or net_iov.
+ */
+#define NET_IOV 0x01UL
+
struct net_iov {
+ unsigned long __unused_padding;
+ unsigned long pp_magic;
+ struct page_pool *pp;
struct dmabuf_genpool_chunk_owner *owner;
unsigned long dma_addr;
+ atomic_long_t pp_ref_count;
};

+/* These fields in struct page are used by the page_pool and net stack:
+ *
+ * struct {
+ * unsigned long pp_magic;
+ * struct page_pool *pp;
+ * unsigned long _pp_mapping_pad;
+ * unsigned long dma_addr;
+ * atomic_long_t pp_ref_count;
+ * };
+ *
+ * We mirror the page_pool fields here so the page_pool can access these fields
+ * without worrying whether the underlying fields belong to a page or net_iov.
+ *
+ * The non-net stack fields of struct page are private to the mm stack and must
+ * never be mirrored to net_iov.
+ */
+#define NET_IOV_ASSERT_OFFSET(pg, iov) \
+ static_assert(offsetof(struct page, pg) == \
+ offsetof(struct net_iov, iov))
+NET_IOV_ASSERT_OFFSET(pp_magic, pp_magic);
+NET_IOV_ASSERT_OFFSET(pp, pp);
+NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
+NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
+#undef NET_IOV_ASSERT_OFFSET
+
static inline struct dmabuf_genpool_chunk_owner *
net_iov_owner(const struct net_iov *niov)
{
@@ -47,20 +84,22 @@ net_iov_binding(const struct net_iov *niov)
*/
typedef unsigned long __bitwise netmem_ref;

+static inline bool netmem_is_net_iov(const netmem_ref netmem)
+{
+ return (__force unsigned long)netmem & NET_IOV;
+}
+
/* This conversion fails (returns NULL) if the netmem_ref is not struct page
* backed.
- *
- * Currently struct page is the only possible netmem, and this helper never
- * fails.
*/
static inline struct page *netmem_to_page(netmem_ref netmem)
{
+ if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
+ return NULL;
+
return (__force struct page *)netmem;
}

-/* Converting from page to netmem is always safe, because a page can always be
- * a netmem.
- */
static inline netmem_ref page_to_netmem(struct page *page)
{
return (__force netmem_ref)page;
@@ -68,17 +107,103 @@ static inline netmem_ref page_to_netmem(struct page *page)

static inline int netmem_ref_count(netmem_ref netmem)
{
+ /* The non-pp refcount of net_iov is always 1. On net_iov, we only
+ * support pp refcounting which uses the pp_ref_count field.
+ */
+ if (netmem_is_net_iov(netmem))
+ return 1;
+
return page_ref_count(netmem_to_page(netmem));
}

static inline unsigned long netmem_to_pfn(netmem_ref netmem)
{
+ if (netmem_is_net_iov(netmem))
+ return 0;
+
return page_to_pfn(netmem_to_page(netmem));
}

+static inline struct net_iov *__netmem_clear_lsb(netmem_ref netmem)
+{
+ return (struct net_iov *)((__force unsigned long)netmem & ~NET_IOV);
+}
+
+static inline unsigned long netmem_get_pp_magic(netmem_ref netmem)
+{
+ return __netmem_clear_lsb(netmem)->pp_magic;
+}
+
+static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
+{
+ __netmem_clear_lsb(netmem)->pp_magic |= pp_magic;
+}
+
+static inline void netmem_clear_pp_magic(netmem_ref netmem)
+{
+ __netmem_clear_lsb(netmem)->pp_magic = 0;
+}
+
+static inline struct page_pool *netmem_get_pp(netmem_ref netmem)
+{
+ return __netmem_clear_lsb(netmem)->pp;
+}
+
+static inline void netmem_set_pp(netmem_ref netmem, struct page_pool *pool)
+{
+ __netmem_clear_lsb(netmem)->pp = pool;
+}
+
+static inline unsigned long netmem_get_dma_addr(netmem_ref netmem)
+{
+ return __netmem_clear_lsb(netmem)->dma_addr;
+}
+
+static inline void netmem_set_dma_addr(netmem_ref netmem,
+ unsigned long dma_addr)
+{
+ __netmem_clear_lsb(netmem)->dma_addr = dma_addr;
+}
+
+static inline atomic_long_t *netmem_get_pp_ref_count_ref(netmem_ref netmem)
+{
+ return &__netmem_clear_lsb(netmem)->pp_ref_count;
+}
+
+static inline bool netmem_is_pref_nid(netmem_ref netmem, int pref_nid)
+{
+ /* Assume net_iov are on the preferred node without actually
+ * checking...
+ *
+ * This check is only used to check for recycling memory in the page
+ * pool's fast paths. Currently the only implementation of net_iov
+ * is dmabuf device memory. It's a deliberate decision by the user to
+ * bind a certain dmabuf to a certain netdev, and the netdev rx queue
+ * would not be able to reallocate memory from another dmabuf that
+ * exists on the preferred node, so, this check doesn't make much sense
+ * in this case. Assume all net_iovs can be recycled for now.
+ */
+ if (netmem_is_net_iov(netmem))
+ return true;
+
+ return page_to_nid(netmem_to_page(netmem)) == pref_nid;
+}
+
static inline netmem_ref netmem_compound_head(netmem_ref netmem)
{
+ /* niov are never compounded */
+ if (netmem_is_net_iov(netmem))
+ return netmem;
+
return page_to_netmem(compound_head(netmem_to_page(netmem)));
}

+static inline void *netmem_address(netmem_ref netmem)
+{
+ if (netmem_is_net_iov(netmem))
+ return NULL;
+
+ return page_address(netmem_to_page(netmem));
+}
+
#endif /* _NET_NETMEM_H */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 5e129d5304f53..1770c7be24afc 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -217,7 +217,7 @@ page_pool_get_dma_dir(const struct page_pool *pool)

static inline void page_pool_fragment_netmem(netmem_ref netmem, long nr)
{
- atomic_long_set(&netmem_to_page(netmem)->pp_ref_count, nr);
+ atomic_long_set(netmem_get_pp_ref_count_ref(netmem), nr);
}

/**
@@ -245,7 +245,7 @@ static inline void page_pool_fragment_page(struct page *page, long nr)

static inline long page_pool_unref_netmem(netmem_ref netmem, long nr)
{
- struct page *page = netmem_to_page(netmem);
+ atomic_long_t *pp_ref_count = netmem_get_pp_ref_count_ref(netmem);
long ret;

/* If nr == pp_ref_count then we have cleared all remaining
@@ -262,19 +262,19 @@ static inline long page_pool_unref_netmem(netmem_ref netmem, long nr)
* initially, and only overwrite it when the page is partitioned into
* more than one piece.
*/
- if (atomic_long_read(&page->pp_ref_count) == nr) {
+ if (atomic_long_read(pp_ref_count) == nr) {
/* As we have ensured nr is always one for constant case using
* the BUILD_BUG_ON(), only need to handle the non-constant case
* here for pp_ref_count draining, which is a rare case.
*/
BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
if (!__builtin_constant_p(nr))
- atomic_long_set(&page->pp_ref_count, 1);
+ atomic_long_set(pp_ref_count, 1);

return 0;
}

- ret = atomic_long_sub_return(nr, &page->pp_ref_count);
+ ret = atomic_long_sub_return(nr, pp_ref_count);
WARN_ON(ret < 0);

/* We are the last user here too, reset pp_ref_count back to 1 to
@@ -283,7 +283,7 @@ static inline long page_pool_unref_netmem(netmem_ref netmem, long nr)
* page_pool_unref_page() currently.
*/
if (unlikely(!ret))
- atomic_long_set(&page->pp_ref_count, 1);
+ atomic_long_set(pp_ref_count, 1);

return ret;
}
@@ -402,9 +402,7 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,

static inline dma_addr_t page_pool_get_dma_addr_netmem(netmem_ref netmem)
{
- struct page *page = netmem_to_page(netmem);
-
- dma_addr_t ret = page->dma_addr;
+ dma_addr_t ret = netmem_get_dma_addr(netmem);

if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
ret <<= PAGE_SHIFT;
@@ -427,18 +425,17 @@ static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
static inline bool page_pool_set_dma_addr_netmem(netmem_ref netmem,
dma_addr_t addr)
{
- struct page *page = netmem_to_page(netmem);
-
if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) {
- page->dma_addr = addr >> PAGE_SHIFT;
+ netmem_set_dma_addr(netmem, addr >> PAGE_SHIFT);

/* We assume page alignment to shave off bottom bits,
* if this "compression" doesn't work we need to drop.
*/
- return addr != (dma_addr_t)page->dma_addr << PAGE_SHIFT;
+ return addr != (dma_addr_t)netmem_get_dma_addr(netmem)
+ << PAGE_SHIFT;
}

- page->dma_addr = addr;
+ netmem_set_dma_addr(netmem, addr);
return false;
}

diff --git a/net/core/devmem.c b/net/core/devmem.c
index d5fac8edf621d..fe9865699abb1 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -80,7 +80,10 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
index = offset / PAGE_SIZE;
niov = &owner->niovs[index];

+ niov->pp_magic = 0;
+ niov->pp = NULL;
niov->dma_addr = 0;
+ atomic_long_set(&niov->pp_ref_count, 0);

net_devmem_dmabuf_binding_get(binding);

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 39e94f8a39259..fa2a1f7ba0067 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -26,7 +26,7 @@

#include "page_pool_priv.h"

-static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);

#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)
@@ -375,7 +375,7 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
if (unlikely(!netmem))
break;

- if (likely(page_to_nid(netmem_to_page(netmem)) == pref_nid)) {
+ if (likely(netmem_is_pref_nid(netmem, pref_nid))) {
pool->alloc.cache[pool->alloc.count++] = netmem;
} else {
/* NUMA mismatch;
@@ -471,10 +471,8 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)

static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)
{
- struct page *page = netmem_to_page(netmem);
-
- page->pp = pool;
- page->pp_magic |= PP_SIGNATURE;
+ netmem_set_pp(netmem, pool);
+ netmem_or_pp_magic(netmem, PP_SIGNATURE);

/* Ensuring all pages have been split into one fragment initially:
* page_pool_set_pp_info() is only called once for every page when it
@@ -489,10 +487,8 @@ static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)

static void page_pool_clear_pp_info(netmem_ref netmem)
{
- struct page *page = netmem_to_page(netmem);
-
- page->pp_magic = 0;
- page->pp = NULL;
+ netmem_clear_pp_magic(netmem);
+ netmem_set_pp(netmem, NULL);
}

static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
@@ -719,8 +715,9 @@ static bool page_pool_recycle_in_cache(netmem_ref netmem,

static bool __page_pool_page_can_be_recycled(netmem_ref netmem)
{
- return page_ref_count(netmem_to_page(netmem)) == 1 &&
- !page_is_pfmemalloc(netmem_to_page(netmem));
+ return netmem_is_net_iov(netmem) ||
+ (page_ref_count(netmem_to_page(netmem)) == 1 &&
+ !page_is_pfmemalloc(netmem_to_page(netmem)));
}

/* If the page refcnt == 1, this will try to recycle the page.
@@ -742,7 +739,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
* refcnt == 1 means page_pool owns page, and can recycle it.
*
* page is NOT reusable when allocated when system is under
- * some pressure. (page_is_pfmemalloc)
+ * some pressure. (page_pool_page_is_pfmemalloc)
*/
if (likely(__page_pool_page_can_be_recycled(netmem))) {
/* Read barrier done in page_ref_count / READ_ONCE */
@@ -755,6 +752,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
/* Page found as candidate for recycling */
return netmem;
}
+
/* Fallback/non-XDP mode: API user have elevated refcnt.
*
* Many drivers split up the page into fragments, and some
@@ -976,7 +974,7 @@ static void page_pool_empty_ring(struct page_pool *pool)
/* Empty recycle ring */
while ((netmem = (__force netmem_ref)ptr_ring_consume_bh(&pool->ring))) {
/* Verify the refcnt invariant of cached pages */
- if (!(page_ref_count(netmem_to_page(netmem)) == 1))
+ if (!(netmem_ref_count(netmem) == 1))
pr_crit("%s() page_pool refcnt %d violation\n",
__func__, netmem_ref_count(netmem));

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 92d1748fef465..2e49a6ea21af3 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,9 +904,9 @@ static void skb_clone_fraglist(struct sk_buff *skb)
skb_get(list);
}

-static bool is_pp_page(struct page *page)
+static bool is_pp_netmem(netmem_ref netmem)
{
- return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
+ return (netmem_get_pp_magic(netmem) & ~0x3UL) == PP_SIGNATURE;
}

int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
@@ -1004,9 +1004,7 @@ EXPORT_SYMBOL(skb_cow_data_for_xdp);
#if IS_ENABLED(CONFIG_PAGE_POOL)
bool napi_pp_put_page(netmem_ref netmem)
{
- struct page *page = netmem_to_page(netmem);
-
- page = compound_head(page);
+ netmem = netmem_compound_head(netmem);

/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
* in order to preserve any existing bits, such as bit 0 for the
@@ -1015,10 +1013,10 @@ bool napi_pp_put_page(netmem_ref netmem)
* and page_is_pfmemalloc() is checked in __page_pool_put_page()
* to avoid recycling the pfmemalloc page.
*/
- if (unlikely(!is_pp_page(page)))
+ if (unlikely(!is_pp_netmem(netmem)))
return false;

- page_pool_put_full_netmem(page->pp, page_to_netmem(page), false);
+ page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, false);

return true;
}
@@ -1045,7 +1043,7 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data)
static int skb_pp_frag_ref(struct sk_buff *skb)
{
struct skb_shared_info *shinfo;
- struct page *head_page;
+ netmem_ref head_netmem;
int i;

if (!skb->pp_recycle)
@@ -1054,11 +1052,11 @@ static int skb_pp_frag_ref(struct sk_buff *skb)
shinfo = skb_shinfo(skb);

for (i = 0; i < shinfo->nr_frags; i++) {
- head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
- if (likely(is_pp_page(head_page)))
- page_pool_ref_page(head_page);
+ head_netmem = netmem_compound_head(shinfo->frags[i].netmem);
+ if (likely(is_pp_netmem(head_netmem)))
+ page_pool_ref_netmem(head_netmem);
else
- page_ref_inc(head_page);
+ page_ref_inc(netmem_to_page(head_netmem));
}
return 0;
}
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:20:20

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 09/14] net: support non paged skb frags

Make skb_frag_page() return NULL when the frag is not backed by a page,
and fix its relevant callers to handle this case.
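
The resulting caller convention is sketched below (a hypothetical
helper, mirroring the illegal_highdma() fix in this patch):

    static bool frag_is_highmem(const skb_frag_t *frag)
    {
            struct page *page = skb_frag_page(frag);

            /* net_iov-backed frags have no page; treat as not highmem. */
            return page && PageHighMem(page);
    }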

Signed-off-by: Mina Almasry <[email protected]>


---

v10:
- Fixed newly generated kdoc warnings found by patchwork. While at it,
fixed the Return sections of the functions I touched.

v6:
- Rebased on top of the merged netmem changes.

Changes in v1:
- Fix illegal_highdma() (Yunsheng).
- Rework napi_pp_put_page() slightly to reduce code churn (Willem).

---
include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++++-
include/linux/skbuff_ref.h | 9 ++++----
net/core/dev.c | 3 ++-
net/core/gro.c | 3 ++-
net/core/skbuff.c | 11 ++++++++++
net/ipv4/esp4.c | 3 ++-
net/ipv4/tcp.c | 3 +++
net/ipv6/esp6.c | 3 ++-
8 files changed, 67 insertions(+), 10 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index fe7d8dbef77e1..0a4df0025e6dc 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3492,21 +3492,58 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
fragto->offset = fragfrom->offset;
}

+/* Return: true if the skb_frag contains a net_iov. */
+static inline bool skb_frag_is_net_iov(const skb_frag_t *frag)
+{
+ return netmem_is_net_iov(frag->netmem);
+}
+
+/**
+ * skb_frag_net_iov - retrieve the net_iov referred to by fragment
+ * @frag: the fragment
+ *
+ * Return: the &struct net_iov associated with @frag. Returns NULL if this
+ * frag has no associated net_iov.
+ */
+static inline struct net_iov *skb_frag_net_iov(const skb_frag_t *frag)
+{
+ if (!skb_frag_is_net_iov(frag))
+ return NULL;
+
+ return netmem_to_net_iov(frag->netmem);
+}
+
/**
* skb_frag_page - retrieve the page referred to by a paged fragment
* @frag: the paged fragment
*
- * Returns the &struct page associated with @frag.
+ * Return: the &struct page associated with @frag. Returns NULL if this frag
+ * has no associated page.
*/
static inline struct page *skb_frag_page(const skb_frag_t *frag)
{
+ if (skb_frag_is_net_iov(frag))
+ return NULL;
+
return netmem_to_page(frag->netmem);
}

+/**
+ * skb_frag_netmem - retrieve the netmem referred to by a fragment
+ * @frag: the fragment
+ *
+ * Return: the &netmem_ref associated with @frag.
+ */
+static inline netmem_ref skb_frag_netmem(const skb_frag_t *frag)
+{
+ return frag->netmem;
+}
+
int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
unsigned int headroom);
int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
struct bpf_prog *prog);
+
/**
* skb_frag_address - gets the address of the data contained in a paged fragment
* @frag: the paged fragment buffer
@@ -3516,6 +3553,9 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
*/
static inline void *skb_frag_address(const skb_frag_t *frag)
{
+ if (!skb_frag_page(frag))
+ return NULL;
+
return page_address(skb_frag_page(frag)) + skb_frag_off(frag);
}

diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
index 16c241a234728..0f3c58007488a 100644
--- a/include/linux/skbuff_ref.h
+++ b/include/linux/skbuff_ref.h
@@ -34,14 +34,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)

bool napi_pp_put_page(netmem_ref netmem);

-static inline void
-skb_page_unref(struct page *page, bool recycle)
+static inline void skb_page_unref(netmem_ref netmem, bool recycle)
{
#ifdef CONFIG_PAGE_POOL
- if (recycle && napi_pp_put_page(page_to_netmem(page)))
+ if (recycle && napi_pp_put_page(netmem))
return;
#endif
- put_page(page);
+ put_page(netmem_to_page(netmem));
}

/**
@@ -54,7 +53,7 @@ skb_page_unref(struct page *page, bool recycle)
*/
static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
{
- skb_page_unref(skb_frag_page(frag), recycle);
+ skb_page_unref(skb_frag_netmem(frag), recycle);
}

/**
diff --git a/net/core/dev.c b/net/core/dev.c
index 981f5ef9e8580..bbbf3af99a14b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3432,8 +3432,9 @@ static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
if (!(dev->features & NETIF_F_HIGHDMA)) {
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct page *page = skb_frag_page(frag);

- if (PageHighMem(skb_frag_page(frag)))
+ if (page && PageHighMem(page))
return 1;
}
}
diff --git a/net/core/gro.c b/net/core/gro.c
index b3b43de1a6502..26f09c3e830b7 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -408,7 +408,8 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
pinfo = skb_shinfo(skb);
frag0 = &pinfo->frags[0];

- if (pinfo->nr_frags && !PageHighMem(skb_frag_page(frag0)) &&
+ if (pinfo->nr_frags && skb_frag_page(frag0) &&
+ !PageHighMem(skb_frag_page(frag0)) &&
(!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2e49a6ea21af3..99cf1ee73836d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1352,6 +1352,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
struct page *p;
u8 *vaddr;

+ if (skb_frag_is_net_iov(frag)) {
+ printk("%sskb frag %d: not readable\n", level, i);
+ len -= skb_frag_size(frag);
+ if (!len)
+ break;
+ continue;
+ }
+
skb_frag_foreach_page(frag, skb_frag_off(frag),
skb_frag_size(frag), p, p_off, p_len,
copied) {
@@ -3142,6 +3150,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe,
for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) {
const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];

+ if (WARN_ON_ONCE(!skb_frag_page(f)))
+ return false;
+
if (__splice_segment(skb_frag_page(f),
skb_frag_off(f), skb_frag_size(f),
offset, len, spd, false, sk, pipe))
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 3968d3f98e083..4ce0bc41e7806 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -115,7 +115,8 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
*/
if (req->src != req->dst)
for (sg = sg_next(req->src); sg; sg = sg_next(sg))
- skb_page_unref(sg_page(sg), skb->pp_recycle);
+ skb_page_unref(page_to_netmem(sg_page(sg)),
+ skb->pp_recycle);
}

#ifdef CONFIG_INET_ESPINTCP
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 5fa68e7f6ddbf..679cb51aaaf2b 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2171,6 +2171,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
break;
}
page = skb_frag_page(frags);
+ if (WARN_ON_ONCE(!page))
+ break;
+
prefetchw(page);
pages[pages_to_map++] = page;
length += PAGE_SIZE;
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 34a9a5b9ed00b..0318aea0d4ce6 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -132,7 +132,8 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb)
*/
if (req->src != req->dst)
for (sg = sg_next(req->src); sg; sg = sg_next(sg))
- skb_page_unref(sg_page(sg), skb->pp_recycle);
+ skb_page_unref(page_to_netmem(sg_page(sg)),
+ skb->pp_recycle);
}

#ifdef CONFIG_INET6_ESPINTCP
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:20:16

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 06/14] page_pool: convert to use netmem

Abstract the memory type from the page_pool so we can later add support
for new memory types. Convert the page_pool to use the new netmem type
abstraction rather than struct page directly.

As of this patch the netmem type is a no-op abstraction: it's always a
struct page underneath. All the page pool internals are converted to
use netmem instead of struct page, and the page pool now exports
two APIs:

1. The existing struct page API.
2. The new netmem API.

Keeping the existing API is transitional; we do not want to refactor all
the drivers currently using the page pool at once.

The netmem abstraction is currently a no-op: the page_pool uses
page_to_netmem() to convert allocated pages to netmem, and uses
netmem_to_page() to convert the netmem back to pages to pass to mm APIs.

Follow-up patches in this series add non-paged netmem support to the
page_pool. This change is factored out on its own to limit the code
churn to this one patch, for ease of code review.
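
For example (a sketch; while netmem is page-only the conversion is an
identity round trip, and the function here is hypothetical):

    static void netmem_roundtrip_example(struct page_pool *pool)
    {
            struct page *page = page_pool_dev_alloc_pages(pool);

            if (!page)
                    return;

            /* Identity while netmem is always page-backed. */
            WARN_ON_ONCE(netmem_to_page(page_to_netmem(page)) != page);

            page_pool_put_full_page(pool, page, false);
    }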

Signed-off-by: Mina Almasry <[email protected]>

---

v9:
- Fix sparse error (Simon).

v8:
- Fix napi_pp_put_page() taking netmem instead of page to fix
patch-by-patch build error.
- Add net/netmem.h include in this patch to fix patch-by-patch build
error.

v6:

- Rebased on top of the merged netmem_ref type.

Cc: [email protected]
Cc: Matthew Wilcox <[email protected]>

---
include/linux/skbuff_ref.h | 4 +-
include/net/netmem.h | 15 ++
include/net/page_pool/helpers.h | 120 ++++++++----
include/net/page_pool/types.h | 18 +-
include/trace/events/page_pool.h | 29 +--
net/bpf/test_run.c | 5 +-
net/core/page_pool.c | 307 +++++++++++++++++--------------
net/core/skbuff.c | 8 +-
8 files changed, 305 insertions(+), 201 deletions(-)

diff --git a/include/linux/skbuff_ref.h b/include/linux/skbuff_ref.h
index 11f0a40634033..16c241a234728 100644
--- a/include/linux/skbuff_ref.h
+++ b/include/linux/skbuff_ref.h
@@ -32,13 +32,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
__skb_frag_ref(&skb_shinfo(skb)->frags[f]);
}

-bool napi_pp_put_page(struct page *page);
+bool napi_pp_put_page(netmem_ref netmem);

static inline void
skb_page_unref(struct page *page, bool recycle)
{
#ifdef CONFIG_PAGE_POOL
- if (recycle && napi_pp_put_page(page))
+ if (recycle && napi_pp_put_page(page_to_netmem(page)))
return;
#endif
put_page(page);
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 01dbdd216fae7..664df8325ece5 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -66,4 +66,19 @@ static inline netmem_ref page_to_netmem(struct page *page)
return (__force netmem_ref)page;
}

+static inline int netmem_ref_count(netmem_ref netmem)
+{
+ return page_ref_count(netmem_to_page(netmem));
+}
+
+static inline unsigned long netmem_to_pfn(netmem_ref netmem)
+{
+ return page_to_pfn(netmem_to_page(netmem));
+}
+
+static inline netmem_ref netmem_compound_head(netmem_ref netmem)
+{
+ return page_to_netmem(compound_head(netmem_to_page(netmem)));
+}
+
#endif /* _NET_NETMEM_H */
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 873631c79ab16..5e129d5304f53 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -55,6 +55,8 @@
#include <linux/dma-mapping.h>

#include <net/page_pool/types.h>
+#include <net/net_debug.h>
+#include <net/netmem.h>

#ifdef CONFIG_PAGE_POOL_STATS
/* Deprecated driver-facing API, use netlink instead */
@@ -103,7 +105,7 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
* Get a page fragment from the page allocator or page_pool caches.
*
* Return:
- * Return allocated page fragment, otherwise return NULL.
+ * Return allocated page fragment, otherwise return 0.
*/
static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
unsigned int *offset,
@@ -114,22 +116,22 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
return page_pool_alloc_frag(pool, offset, size, gfp);
}

-static inline struct page *page_pool_alloc(struct page_pool *pool,
- unsigned int *offset,
- unsigned int *size, gfp_t gfp)
+static inline netmem_ref page_pool_alloc(struct page_pool *pool,
+ unsigned int *offset,
+ unsigned int *size, gfp_t gfp)
{
unsigned int max_size = PAGE_SIZE << pool->p.order;
- struct page *page;
+ netmem_ref netmem;

if ((*size << 1) > max_size) {
*size = max_size;
*offset = 0;
- return page_pool_alloc_pages(pool, gfp);
+ return page_pool_alloc_netmem(pool, gfp);
}

- page = page_pool_alloc_frag(pool, offset, *size, gfp);
- if (unlikely(!page))
- return NULL;
+ netmem = page_pool_alloc_frag_netmem(pool, offset, *size, gfp);
+ if (unlikely(!netmem))
+ return 0;

/* There is very likely not enough space for another fragment, so append
* the remaining size to the current fragment to avoid truesize
@@ -140,7 +142,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
pool->frag_offset = max_size;
}

- return page;
+ return netmem;
}

/**
@@ -154,7 +156,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
* utilization and performance penalty.
*
* Return:
- * Return allocated page or page fragment, otherwise return NULL.
+ * Return allocated page or page fragment, otherwise return 0.
*/
static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
unsigned int *offset,
@@ -162,7 +164,7 @@ static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
{
gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);

- return page_pool_alloc(pool, offset, size, gfp);
+ return netmem_to_page(page_pool_alloc(pool, offset, size, gfp));
}

static inline void *page_pool_alloc_va(struct page_pool *pool,
@@ -172,7 +174,8 @@ static inline void *page_pool_alloc_va(struct page_pool *pool,
struct page *page;

/* Mask off __GFP_HIGHMEM to ensure we can use page_address() */
- page = page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM);
+ page = netmem_to_page(
+ page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM));
if (unlikely(!page))
return NULL;

@@ -189,7 +192,7 @@ static inline void *page_pool_alloc_va(struct page_pool *pool,
* it returns va of the allocated page or page fragment.
*
* Return:
- * Return the va for the allocated page or page fragment, otherwise return NULL.
+ * Return the va for the allocated page or page fragment, otherwise return 0.
*/
static inline void *page_pool_dev_alloc_va(struct page_pool *pool,
unsigned int *size)
@@ -212,6 +215,11 @@ page_pool_get_dma_dir(const struct page_pool *pool)
return pool->p.dma_dir;
}

+static inline void page_pool_fragment_netmem(netmem_ref netmem, long nr)
+{
+ atomic_long_set(&netmem_to_page(netmem)->pp_ref_count, nr);
+}
+
/**
* page_pool_fragment_page() - split a fresh page into fragments
* @page: page to split
@@ -232,11 +240,12 @@ page_pool_get_dma_dir(const struct page_pool *pool)
*/
static inline void page_pool_fragment_page(struct page *page, long nr)
{
- atomic_long_set(&page->pp_ref_count, nr);
+ page_pool_fragment_netmem(page_to_netmem(page), nr);
}

-static inline long page_pool_unref_page(struct page *page, long nr)
+static inline long page_pool_unref_netmem(netmem_ref netmem, long nr)
{
+ struct page *page = netmem_to_page(netmem);
long ret;

/* If nr == pp_ref_count then we have cleared all remaining
@@ -279,15 +288,41 @@ static inline long page_pool_unref_page(struct page *page, long nr)
return ret;
}

+static inline long page_pool_unref_page(struct page *page, long nr)
+{
+ return page_pool_unref_netmem(page_to_netmem(page), nr);
+}
+
+static inline void page_pool_ref_netmem(netmem_ref netmem)
+{
+ atomic_long_inc(&netmem_to_page(netmem)->pp_ref_count);
+}
+
static inline void page_pool_ref_page(struct page *page)
{
- atomic_long_inc(&page->pp_ref_count);
+ page_pool_ref_netmem(page_to_netmem(page));
}

-static inline bool page_pool_is_last_ref(struct page *page)
+static inline bool page_pool_is_last_ref(netmem_ref netmem)
{
/* If page_pool_unref_page() returns 0, we were the last user */
- return page_pool_unref_page(page, 1) == 0;
+ return page_pool_unref_netmem(netmem, 1) == 0;
+}
+
+static inline void page_pool_put_netmem(struct page_pool *pool,
+ netmem_ref netmem,
+ unsigned int dma_sync_size,
+ bool allow_direct)
+{
+ /* When page_pool isn't compiled-in, net/core/xdp.c doesn't
+ * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
+ */
+#ifdef CONFIG_PAGE_POOL
+ if (!page_pool_is_last_ref(netmem))
+ return;
+
+ page_pool_put_unrefed_netmem(pool, netmem, dma_sync_size, allow_direct);
+#endif
}

/**
@@ -308,15 +343,15 @@ static inline void page_pool_put_page(struct page_pool *pool,
unsigned int dma_sync_size,
bool allow_direct)
{
- /* When page_pool isn't compiled-in, net/core/xdp.c doesn't
- * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
- */
-#ifdef CONFIG_PAGE_POOL
- if (!page_pool_is_last_ref(page))
- return;
+ page_pool_put_netmem(pool, page_to_netmem(page), dma_sync_size,
+ allow_direct);
+}

- page_pool_put_unrefed_page(pool, page, dma_sync_size, allow_direct);
-#endif
+static inline void page_pool_put_full_netmem(struct page_pool *pool,
+ netmem_ref netmem,
+ bool allow_direct)
+{
+ page_pool_put_netmem(pool, netmem, -1, allow_direct);
}

/**
@@ -331,7 +366,7 @@ static inline void page_pool_put_page(struct page_pool *pool,
static inline void page_pool_put_full_page(struct page_pool *pool,
struct page *page, bool allow_direct)
{
- page_pool_put_page(pool, page, -1, allow_direct);
+ page_pool_put_netmem(pool, page_to_netmem(page), -1, allow_direct);
}

/**
@@ -365,6 +400,18 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,
page_pool_put_page(pool, virt_to_head_page(va), -1, allow_direct);
}

+static inline dma_addr_t page_pool_get_dma_addr_netmem(netmem_ref netmem)
+{
+ struct page *page = netmem_to_page(netmem);
+
+ dma_addr_t ret = page->dma_addr;
+
+ if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
+ ret <<= PAGE_SHIFT;
+
+ return ret;
+}
+
/**
* page_pool_get_dma_addr() - Retrieve the stored DMA address.
* @page: page allocated from a page pool
@@ -374,16 +421,14 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,
*/
static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
{
- dma_addr_t ret = page->dma_addr;
-
- if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
- ret <<= PAGE_SHIFT;
-
- return ret;
+ return page_pool_get_dma_addr_netmem(page_to_netmem((struct page *)page));
}

-static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+static inline bool page_pool_set_dma_addr_netmem(netmem_ref netmem,
+ dma_addr_t addr)
{
+ struct page *page = netmem_to_page(netmem);
+
if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) {
page->dma_addr = addr >> PAGE_SHIFT;

@@ -419,6 +464,11 @@ static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
page_pool_get_dma_dir(pool));
}

+static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+ return page_pool_set_dma_addr_netmem(page_to_netmem(page), addr);
+}
+
static inline bool page_pool_put(struct page_pool *pool)
{
return refcount_dec_and_test(&pool->user_cnt);
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 6166fb869c35d..edc3066e1ea56 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -6,6 +6,7 @@
#include <linux/dma-direction.h>
#include <linux/ptr_ring.h>
#include <linux/types.h>
+#include <net/netmem.h>

#define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
* map/unmap
@@ -40,7 +41,7 @@
#define PP_ALLOC_CACHE_REFILL 64
struct pp_alloc_cache {
u32 count;
- struct page *cache[PP_ALLOC_CACHE_SIZE];
+ netmem_ref cache[PP_ALLOC_CACHE_SIZE];
};

/**
@@ -74,7 +75,7 @@ struct page_pool_params {
struct net_device *netdev;
unsigned int flags;
/* private: used by test code only */
- void (*init_callback)(struct page *page, void *arg);
+ void (*init_callback)(netmem_ref netmem, void *arg);
void *init_arg;
);
};
@@ -132,8 +133,8 @@ struct page_pool_stats {
struct memory_provider_ops {
int (*init)(struct page_pool *pool);
void (*destroy)(struct page_pool *pool);
- struct page *(*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
- bool (*release_page)(struct page_pool *pool, struct page *page);
+ netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+ bool (*release_page)(struct page_pool *pool, netmem_ref netmem);
};

struct pp_memory_provider_params {
@@ -164,7 +165,7 @@ struct page_pool {
*/
__cacheline_group_begin(frag) __aligned(4 * sizeof(long));
long frag_users;
- struct page *frag_page;
+ netmem_ref frag_page;
unsigned int frag_offset;
__cacheline_group_end(frag);

@@ -236,8 +237,12 @@ struct page_pool {
};

struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
+netmem_ref page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);
struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset,
unsigned int size, gfp_t gfp);
+netmem_ref page_pool_alloc_frag_netmem(struct page_pool *pool,
+ unsigned int *offset, unsigned int size,
+ gfp_t gfp);
struct page_pool *page_pool_create(const struct page_pool_params *params);
struct page_pool *page_pool_create_percpu(const struct page_pool_params *params,
int cpuid);
@@ -267,6 +272,9 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
}
#endif

+void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
+ unsigned int dma_sync_size,
+ bool allow_direct);
void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
unsigned int dma_sync_size,
bool allow_direct);
diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index 6834356b2d2ae..c5b6383ff2760 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -42,51 +42,52 @@ TRACE_EVENT(page_pool_release,
TRACE_EVENT(page_pool_state_release,

TP_PROTO(const struct page_pool *pool,
- const struct page *page, u32 release),
+ netmem_ref netmem, u32 release),

- TP_ARGS(pool, page, release),
+ TP_ARGS(pool, netmem, release),

TP_STRUCT__entry(
__field(const struct page_pool *, pool)
- __field(const struct page *, page)
+ __field(netmem_ref, netmem)
__field(u32, release)
__field(unsigned long, pfn)
),

TP_fast_assign(
__entry->pool = pool;
- __entry->page = page;
+ __entry->netmem = netmem;
__entry->release = release;
- __entry->pfn = page_to_pfn(page);
+ __entry->pfn = netmem_to_pfn(netmem);
),

- TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
- __entry->pool, __entry->page, __entry->pfn, __entry->release)
+ TP_printk("page_pool=%p netmem=%lu pfn=0x%lx release=%u",
+ __entry->pool, (__force unsigned long)__entry->netmem,
+ __entry->pfn, __entry->release)
);

TRACE_EVENT(page_pool_state_hold,

TP_PROTO(const struct page_pool *pool,
- const struct page *page, u32 hold),
+ netmem_ref netmem, u32 hold),

- TP_ARGS(pool, page, hold),
+ TP_ARGS(pool, netmem, hold),

TP_STRUCT__entry(
__field(const struct page_pool *, pool)
- __field(const struct page *, page)
+ __field(netmem_ref, netmem)
__field(u32, hold)
__field(unsigned long, pfn)
),

TP_fast_assign(
__entry->pool = pool;
- __entry->page = page;
+ __entry->netmem = netmem;
__entry->hold = hold;
- __entry->pfn = page_to_pfn(page);
+ __entry->pfn = netmem_to_pfn(netmem);
),

- TP_printk("page_pool=%p page=%p pfn=0x%lx hold=%u",
- __entry->pool, __entry->page, __entry->pfn, __entry->hold)
+ TP_printk("page_pool=%p netmem=%lu pfn=0x%lx hold=%u",
+ __entry->pool, __entry->netmem, __entry->pfn, __entry->hold)
);

TRACE_EVENT(page_pool_update_nid,
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index f6aad4ed2ab2f..b4cde1cc887d0 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -127,9 +127,10 @@ struct xdp_test_data {
#define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
#define TEST_XDP_MAX_BATCH 256

-static void xdp_test_run_init_page(struct page *page, void *arg)
+static void xdp_test_run_init_page(netmem_ref netmem, void *arg)
{
- struct xdp_page_head *head = phys_to_virt(page_to_phys(page));
+ struct xdp_page_head *head =
+ phys_to_virt(page_to_phys(netmem_to_page(netmem)));
struct xdp_buff *new_ctx, *orig_ctx;
u32 headroom = XDP_PACKET_HEADROOM;
struct xdp_test_data *xdp = arg;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 251c9356c9202..39e94f8a39259 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -345,19 +345,18 @@ struct page_pool *page_pool_create(const struct page_pool_params *params)
}
EXPORT_SYMBOL(page_pool_create);

-static void page_pool_return_page(struct page_pool *pool, struct page *page);
+static void page_pool_return_page(struct page_pool *pool, netmem_ref netmem);

-noinline
-static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
+static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
{
struct ptr_ring *r = &pool->ring;
- struct page *page;
+ netmem_ref netmem;
int pref_nid; /* preferred NUMA node */

/* Quicker fallback, avoid locks when ring is empty */
if (__ptr_ring_empty(r)) {
alloc_stat_inc(pool, empty);
- return NULL;
+ return 0;
}

/* Softirq guarantee CPU and thus NUMA node is stable. This,
@@ -372,57 +371,57 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)

/* Refill alloc array, but only if NUMA match */
do {
- page = __ptr_ring_consume(r);
- if (unlikely(!page))
+ netmem = (__force netmem_ref)__ptr_ring_consume(r);
+ if (unlikely(!netmem))
break;

- if (likely(page_to_nid(page) == pref_nid)) {
- pool->alloc.cache[pool->alloc.count++] = page;
+ if (likely(page_to_nid(netmem_to_page(netmem)) == pref_nid)) {
+ pool->alloc.cache[pool->alloc.count++] = netmem;
} else {
/* NUMA mismatch;
* (1) release 1 page to page-allocator and
* (2) break out to fallthrough to alloc_pages_node.
* This limit stress on page buddy alloactor.
*/
- page_pool_return_page(pool, page);
+ page_pool_return_page(pool, netmem);
alloc_stat_inc(pool, waive);
- page = NULL;
+ netmem = 0;
break;
}
} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);

/* Return last page */
if (likely(pool->alloc.count > 0)) {
- page = pool->alloc.cache[--pool->alloc.count];
+ netmem = pool->alloc.cache[--pool->alloc.count];
alloc_stat_inc(pool, refill);
}

- return page;
+ return netmem;
}

/* fast path */
-static struct page *__page_pool_get_cached(struct page_pool *pool)
+static netmem_ref __page_pool_get_cached(struct page_pool *pool)
{
- struct page *page;
+ netmem_ref netmem;

/* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
if (likely(pool->alloc.count)) {
/* Fast-path */
- page = pool->alloc.cache[--pool->alloc.count];
+ netmem = pool->alloc.cache[--pool->alloc.count];
alloc_stat_inc(pool, fast);
} else {
- page = page_pool_refill_alloc_cache(pool);
+ netmem = page_pool_refill_alloc_cache(pool);
}

- return page;
+ return netmem;
}

static void __page_pool_dma_sync_for_device(const struct page_pool *pool,
- const struct page *page,
+ netmem_ref netmem,
u32 dma_sync_size)
{
#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
- dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+ dma_addr_t dma_addr = page_pool_get_dma_addr_netmem(netmem);

dma_sync_size = min(dma_sync_size, pool->p.max_len);
__dma_sync_single_for_device(pool->p.dev, dma_addr + pool->p.offset,
@@ -432,14 +431,14 @@ static void __page_pool_dma_sync_for_device(const struct page_pool *pool,

static __always_inline void
page_pool_dma_sync_for_device(const struct page_pool *pool,
- const struct page *page,
+ netmem_ref netmem,
u32 dma_sync_size)
{
if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
- __page_pool_dma_sync_for_device(pool, page, dma_sync_size);
+ __page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
}

-static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
{
dma_addr_t dma;

@@ -448,17 +447,17 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
* into page private data (i.e 32bit cpu with 64bit DMA caps)
* This mapping is kept for lifetime of page, until leaving pool.
*/
- dma = dma_map_page_attrs(pool->p.dev, page, 0,
- (PAGE_SIZE << pool->p.order),
- pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
- DMA_ATTR_WEAK_ORDERING);
+ dma = dma_map_page_attrs(pool->p.dev, netmem_to_page(netmem), 0,
+ (PAGE_SIZE << pool->p.order), pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC |
+ DMA_ATTR_WEAK_ORDERING);
if (dma_mapping_error(pool->p.dev, dma))
return false;

- if (page_pool_set_dma_addr(page, dma))
+ if (page_pool_set_dma_addr_netmem(netmem, dma))
goto unmap_failed;

- page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+ page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);

return true;

@@ -470,9 +469,10 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
return false;
}

-static void page_pool_set_pp_info(struct page_pool *pool,
- struct page *page)
+static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)
{
+ struct page *page = netmem_to_page(netmem);
+
page->pp = pool;
page->pp_magic |= PP_SIGNATURE;

@@ -482,13 +482,15 @@ static void page_pool_set_pp_info(struct page_pool *pool,
* is dirtying the same cache line as the page->pp_magic above, so
* the overhead is negligible.
*/
- page_pool_fragment_page(page, 1);
+ page_pool_fragment_netmem(netmem, 1);
if (pool->has_init_callback)
- pool->slow.init_callback(page, pool->slow.init_arg);
+ pool->slow.init_callback(netmem, pool->slow.init_arg);
}

-static void page_pool_clear_pp_info(struct page *page)
+static void page_pool_clear_pp_info(netmem_ref netmem)
{
+ struct page *page = netmem_to_page(netmem);
+
page->pp_magic = 0;
page->pp = NULL;
}
@@ -503,34 +505,34 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
if (unlikely(!page))
return NULL;

- if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page))) {
+ if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page)))) {
put_page(page);
return NULL;
}

alloc_stat_inc(pool, slow_high_order);
- page_pool_set_pp_info(pool, page);
+ page_pool_set_pp_info(pool, page_to_netmem(page));

/* Track how many pages are held 'in-flight' */
pool->pages_state_hold_cnt++;
- trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
+ trace_page_pool_state_hold(pool, page_to_netmem(page),
+ pool->pages_state_hold_cnt);
return page;
}

/* slow path */
-noinline
-static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
- gfp_t gfp)
+static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
+ gfp_t gfp)
{
const int bulk = PP_ALLOC_CACHE_REFILL;
unsigned int pp_order = pool->p.order;
bool dma_map = pool->dma_map;
- struct page *page;
+ netmem_ref netmem;
int i, nr_pages;

/* Don't support bulk alloc for high-order pages */
if (unlikely(pp_order))
- return __page_pool_alloc_page_order(pool, gfp);
+ return page_to_netmem(__page_pool_alloc_page_order(pool, gfp));

/* Unnecessary as alloc cache is empty, but guarantees zero count */
if (unlikely(pool->alloc.count > 0))
@@ -539,59 +541,66 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);

- nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
- pool->alloc.cache);
+ nr_pages = alloc_pages_bulk_array_node(gfp,
+ pool->p.nid, bulk,
+ (struct page **)pool->alloc.cache);
if (unlikely(!nr_pages))
- return NULL;
+ return 0;

/* Pages have been filled into alloc.cache array, but count is zero and
* page element have not been (possibly) DMA mapped.
*/
for (i = 0; i < nr_pages; i++) {
- page = pool->alloc.cache[i];
- if (dma_map && unlikely(!page_pool_dma_map(pool, page))) {
- put_page(page);
+ netmem = pool->alloc.cache[i];
+ if (dma_map && unlikely(!page_pool_dma_map(pool, netmem))) {
+ put_page(netmem_to_page(netmem));
continue;
}

- page_pool_set_pp_info(pool, page);
- pool->alloc.cache[pool->alloc.count++] = page;
+ page_pool_set_pp_info(pool, netmem);
+ pool->alloc.cache[pool->alloc.count++] = netmem;
/* Track how many pages are held 'in-flight' */
pool->pages_state_hold_cnt++;
- trace_page_pool_state_hold(pool, page,
+ trace_page_pool_state_hold(pool, netmem,
pool->pages_state_hold_cnt);
}

/* Return last page */
if (likely(pool->alloc.count > 0)) {
- page = pool->alloc.cache[--pool->alloc.count];
+ netmem = pool->alloc.cache[--pool->alloc.count];
alloc_stat_inc(pool, slow);
} else {
- page = NULL;
+ netmem = 0;
}

/* When page just alloc'ed is should/must have refcnt 1. */
- return page;
+ return netmem;
}

/* For using page_pool replace: alloc_pages() API calls, but provide
* synchronization guarantee for allocation side.
*/
-struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+netmem_ref page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
{
- struct page *page;
+ netmem_ref netmem;

/* Fast-path: Get a page from cache */
- page = __page_pool_get_cached(pool);
- if (page)
- return page;
+ netmem = __page_pool_get_cached(pool);
+ if (netmem)
+ return netmem;

/* Slow-path: cache empty, do real allocation */
if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
- page = pool->mp_ops->alloc_netmems(pool, gfp);
+ netmem = pool->mp_ops->alloc_netmems(pool, gfp);
else
- page = __page_pool_alloc_pages_slow(pool, gfp);
- return page;
+ netmem = __page_pool_alloc_pages_slow(pool, gfp);
+ return netmem;
+}
+EXPORT_SYMBOL(page_pool_alloc_netmem);
+
+struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+{
+ return netmem_to_page(page_pool_alloc_netmem(pool, gfp));
}
EXPORT_SYMBOL(page_pool_alloc_pages);
ALLOW_ERROR_INJECTION(page_pool_alloc_pages, NULL);
@@ -620,8 +629,8 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict)
return inflight;
}

-static __always_inline
-void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
+static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
+ netmem_ref netmem)
{
dma_addr_t dma;

@@ -631,13 +640,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
*/
return;

- dma = page_pool_get_dma_addr(page);
+ dma = page_pool_get_dma_addr_netmem(netmem);

/* When page is unmapped, it cannot be returned to our pool */
dma_unmap_page_attrs(pool->p.dev, dma,
PAGE_SIZE << pool->p.order, pool->p.dma_dir,
DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
- page_pool_set_dma_addr(page, 0);
+ page_pool_set_dma_addr_netmem(netmem, 0);
}

/* Disconnects a page (from a page_pool). API users can have a need
@@ -645,26 +654,26 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
* a regular page (that will eventually be returned to the normal
* page-allocator via put_page).
*/
-void page_pool_return_page(struct page_pool *pool, struct page *page)
+void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
{
int count;
bool put;

put = true;
if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
- put = pool->mp_ops->release_page(pool, page);
+ put = pool->mp_ops->release_page(pool, netmem);
else
- __page_pool_release_page_dma(pool, page);
+ __page_pool_release_page_dma(pool, netmem);

/* This may be the last page returned, releasing the pool, so
* it is not safe to reference pool afterwards.
*/
count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
- trace_page_pool_state_release(pool, page, count);
+ trace_page_pool_state_release(pool, netmem, count);

if (put) {
- page_pool_clear_pp_info(page);
- put_page(page);
+ page_pool_clear_pp_info(netmem);
+ put_page(netmem_to_page(netmem));
}
/* An optimization would be to call __free_pages(page, pool->p.order)
* knowing page is not part of page-cache (thus avoiding a
@@ -672,14 +681,14 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
*/
}

-static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
+static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
{
int ret;
/* BH protection not needed if current is softirq */
if (in_softirq())
- ret = ptr_ring_produce(&pool->ring, page);
+ ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
else
- ret = ptr_ring_produce_bh(&pool->ring, page);
+ ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);

if (!ret) {
recycle_stat_inc(pool, ring);
@@ -694,7 +703,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
*
* Caller must provide appropriate safe context.
*/
-static bool page_pool_recycle_in_cache(struct page *page,
+static bool page_pool_recycle_in_cache(netmem_ref netmem,
struct page_pool *pool)
{
if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
@@ -703,14 +712,15 @@ static bool page_pool_recycle_in_cache(struct page *page,
}

/* Caller MUST have verified/know (page_ref_count(page) == 1) */
- pool->alloc.cache[pool->alloc.count++] = page;
+ pool->alloc.cache[pool->alloc.count++] = netmem;
recycle_stat_inc(pool, cached);
return true;
}

-static bool __page_pool_page_can_be_recycled(const struct page *page)
+static bool __page_pool_page_can_be_recycled(netmem_ref netmem)
{
- return page_ref_count(page) == 1 && !page_is_pfmemalloc(page);
+ return page_ref_count(netmem_to_page(netmem)) == 1 &&
+ !page_is_pfmemalloc(netmem_to_page(netmem));
}

/* If the page refcnt == 1, this will try to recycle the page.
@@ -719,8 +729,8 @@ static bool __page_pool_page_can_be_recycled(const struct page *page)
* If the page refcnt != 1, then the page will be returned to memory
* subsystem.
*/
-static __always_inline struct page *
-__page_pool_put_page(struct page_pool *pool, struct page *page,
+static __always_inline netmem_ref
+__page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
unsigned int dma_sync_size, bool allow_direct)
{
lockdep_assert_no_hardirq();
@@ -734,16 +744,16 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
* page is NOT reusable when allocated when system is under
* some pressure. (page_is_pfmemalloc)
*/
- if (likely(__page_pool_page_can_be_recycled(page))) {
+ if (likely(__page_pool_page_can_be_recycled(netmem))) {
/* Read barrier done in page_ref_count / READ_ONCE */

- page_pool_dma_sync_for_device(pool, page, dma_sync_size);
+ page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);

- if (allow_direct && page_pool_recycle_in_cache(page, pool))
- return NULL;
+ if (allow_direct && page_pool_recycle_in_cache(netmem, pool))
+ return 0;

/* Page found as candidate for recycling */
- return page;
+ return netmem;
}
/* Fallback/non-XDP mode: API user have elevated refcnt.
*
@@ -759,9 +769,9 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
* will be invoking put_page.
*/
recycle_stat_inc(pool, released_refcnt);
- page_pool_return_page(pool, page);
+ page_pool_return_page(pool, netmem);

- return NULL;
+ return 0;
}

static bool page_pool_napi_local(const struct page_pool *pool)
@@ -787,19 +797,28 @@ static bool page_pool_napi_local(const struct page_pool *pool)
return napi && READ_ONCE(napi->list_owner) == cpuid;
}

-void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
- unsigned int dma_sync_size, bool allow_direct)
+void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
+ unsigned int dma_sync_size, bool allow_direct)
{
if (!allow_direct)
allow_direct = page_pool_napi_local(pool);

- page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
- if (page && !page_pool_recycle_in_ring(pool, page)) {
+ netmem =
+ __page_pool_put_page(pool, netmem, dma_sync_size, allow_direct);
+ if (netmem && !page_pool_recycle_in_ring(pool, netmem)) {
/* Cache full, fallback to free pages */
recycle_stat_inc(pool, ring_full);
- page_pool_return_page(pool, page);
+ page_pool_return_page(pool, netmem);
}
}
+EXPORT_SYMBOL(page_pool_put_unrefed_netmem);
+
+void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page,
+ unsigned int dma_sync_size, bool allow_direct)
+{
+ page_pool_put_unrefed_netmem(pool, page_to_netmem(page), dma_sync_size,
+ allow_direct);
+}
EXPORT_SYMBOL(page_pool_put_unrefed_page);

/**
@@ -827,16 +846,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
allow_direct = page_pool_napi_local(pool);

for (i = 0; i < count; i++) {
- struct page *page = virt_to_head_page(data[i]);
+ netmem_ref netmem = page_to_netmem(virt_to_head_page(data[i]));

/* It is not the last user for the page frag case */
- if (!page_pool_is_last_ref(page))
+ if (!page_pool_is_last_ref(netmem))
continue;

- page = __page_pool_put_page(pool, page, -1, allow_direct);
+ netmem = __page_pool_put_page(pool, netmem, -1, allow_direct);
/* Approved for bulk recycling in ptr_ring cache */
- if (page)
- data[bulk_len++] = page;
+ if (netmem)
+ data[bulk_len++] = (__force void *)netmem;
}

if (!bulk_len)
@@ -862,98 +881,106 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
* since put_page() with refcnt == 1 can be an expensive operation
*/
for (; i < bulk_len; i++)
- page_pool_return_page(pool, data[i]);
+ page_pool_return_page(pool, (__force netmem_ref)data[i]);
}
EXPORT_SYMBOL(page_pool_put_page_bulk);

-static struct page *page_pool_drain_frag(struct page_pool *pool,
- struct page *page)
+static netmem_ref page_pool_drain_frag(struct page_pool *pool,
+ netmem_ref netmem)
{
long drain_count = BIAS_MAX - pool->frag_users;

/* Some user is still using the page frag */
- if (likely(page_pool_unref_page(page, drain_count)))
- return NULL;
+ if (likely(page_pool_unref_netmem(netmem, drain_count)))
+ return 0;

- if (__page_pool_page_can_be_recycled(page)) {
- page_pool_dma_sync_for_device(pool, page, -1);
- return page;
+ if (__page_pool_page_can_be_recycled(netmem)) {
+ page_pool_dma_sync_for_device(pool, netmem, -1);
+ return netmem;
}

- page_pool_return_page(pool, page);
- return NULL;
+ page_pool_return_page(pool, netmem);
+ return 0;
}

static void page_pool_free_frag(struct page_pool *pool)
{
long drain_count = BIAS_MAX - pool->frag_users;
- struct page *page = pool->frag_page;
+ netmem_ref netmem = pool->frag_page;

- pool->frag_page = NULL;
+ pool->frag_page = 0;

- if (!page || page_pool_unref_page(page, drain_count))
+ if (!netmem || page_pool_unref_netmem(netmem, drain_count))
return;

- page_pool_return_page(pool, page);
+ page_pool_return_page(pool, netmem);
}

-struct page *page_pool_alloc_frag(struct page_pool *pool,
- unsigned int *offset,
- unsigned int size, gfp_t gfp)
+netmem_ref page_pool_alloc_frag_netmem(struct page_pool *pool,
+ unsigned int *offset, unsigned int size,
+ gfp_t gfp)
{
unsigned int max_size = PAGE_SIZE << pool->p.order;
- struct page *page = pool->frag_page;
+ netmem_ref netmem = pool->frag_page;

if (WARN_ON(size > max_size))
- return NULL;
+ return 0;

size = ALIGN(size, dma_get_cache_alignment());
*offset = pool->frag_offset;

- if (page && *offset + size > max_size) {
- page = page_pool_drain_frag(pool, page);
- if (page) {
+ if (netmem && *offset + size > max_size) {
+ netmem = page_pool_drain_frag(pool, netmem);
+ if (netmem) {
alloc_stat_inc(pool, fast);
goto frag_reset;
}
}

- if (!page) {
- page = page_pool_alloc_pages(pool, gfp);
- if (unlikely(!page)) {
- pool->frag_page = NULL;
- return NULL;
+ if (!netmem) {
+ netmem = page_pool_alloc_netmem(pool, gfp);
+ if (unlikely(!netmem)) {
+ pool->frag_page = 0;
+ return 0;
}

- pool->frag_page = page;
+ pool->frag_page = netmem;

frag_reset:
pool->frag_users = 1;
*offset = 0;
pool->frag_offset = size;
- page_pool_fragment_page(page, BIAS_MAX);
- return page;
+ page_pool_fragment_netmem(netmem, BIAS_MAX);
+ return netmem;
}

pool->frag_users++;
pool->frag_offset = *offset + size;
alloc_stat_inc(pool, fast);
- return page;
+ return netmem;
+}
+EXPORT_SYMBOL(page_pool_alloc_frag_netmem);
+
+struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset,
+ unsigned int size, gfp_t gfp)
+{
+ return netmem_to_page(page_pool_alloc_frag_netmem(pool, offset, size,
+ gfp));
}
EXPORT_SYMBOL(page_pool_alloc_frag);

static void page_pool_empty_ring(struct page_pool *pool)
{
- struct page *page;
+ netmem_ref netmem;

/* Empty recycle ring */
- while ((page = ptr_ring_consume_bh(&pool->ring))) {
+ while ((netmem = (__force netmem_ref)ptr_ring_consume_bh(&pool->ring))) {
/* Verify the refcnt invariant of cached pages */
- if (!(page_ref_count(page) == 1))
+ if (!(page_ref_count(netmem_to_page(netmem)) == 1))
pr_crit("%s() page_pool refcnt %d violation\n",
- __func__, page_ref_count(page));
+ __func__, netmem_ref_count(netmem));

- page_pool_return_page(pool, page);
+ page_pool_return_page(pool, netmem);
}
}

@@ -975,7 +1002,7 @@ static void __page_pool_destroy(struct page_pool *pool)

static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
{
- struct page *page;
+ netmem_ref netmem;

if (pool->destroy_cnt)
return;
@@ -985,8 +1012,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
* call concurrently.
*/
while (pool->alloc.count) {
- page = pool->alloc.cache[--pool->alloc.count];
- page_pool_return_page(pool, page);
+ netmem = pool->alloc.cache[--pool->alloc.count];
+ page_pool_return_page(pool, netmem);
}
}

@@ -1092,15 +1119,15 @@ EXPORT_SYMBOL(page_pool_destroy);
/* Caller must provide appropriate safe context, e.g. NAPI. */
void page_pool_update_nid(struct page_pool *pool, int new_nid)
{
- struct page *page;
+ netmem_ref netmem;

trace_page_pool_update_nid(pool, new_nid);
pool->p.nid = new_nid;

/* Flush pool alloc cache, as refill will check NUMA node */
while (pool->alloc.count) {
- page = pool->alloc.cache[--pool->alloc.count];
- page_pool_return_page(pool, page);
+ netmem = pool->alloc.cache[--pool->alloc.count];
+ page_pool_return_page(pool, netmem);
}
}
EXPORT_SYMBOL(page_pool_update_nid);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 466999a7515e6..92d1748fef465 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1002,8 +1002,10 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
EXPORT_SYMBOL(skb_cow_data_for_xdp);

#if IS_ENABLED(CONFIG_PAGE_POOL)
-bool napi_pp_put_page(struct page *page)
+bool napi_pp_put_page(netmem_ref netmem)
{
+ struct page *page = netmem_to_page(netmem);
+
page = compound_head(page);

/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
@@ -1016,7 +1018,7 @@ bool napi_pp_put_page(struct page *page)
if (unlikely(!is_pp_page(page)))
return false;

- page_pool_put_full_page(page->pp, page, false);
+ page_pool_put_full_netmem(page->pp, page_to_netmem(page), false);

return true;
}
@@ -1027,7 +1029,7 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
return false;
- return napi_pp_put_page(virt_to_page(data));
+ return napi_pp_put_page(page_to_netmem(virt_to_page(data)));
}

/**
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:20:29

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 08/14] memory-provider: dmabuf devmem memory provider

Implement a memory provider that allocates dmabuf devmem in the form of
net_iov.

The provider receives a reference to the struct netdev_dmabuf_binding
via the pool->mp_priv pointer. The driver needs to set this pointer for
the provider in the net_iov.

The provider obtains a reference on the netdev_dmabuf_binding which
guarantees that the binding and the underlying mapping remain alive until
the provider is destroyed.

Usage of PP_FLAG_DMA_MAP is required for this memory provider such that
the page_pool can provide the driver with the dma-addrs of the devmem.

Support for PP_FLAG_DMA_SYNC_DEV and for p.order != 0 is omitted for
simplicity.
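
As a rough driver-side sketch (illustrative only: the .queue hookup is what
this series adds, the remaining fields are the usual page_pool parameters),
a pool created with PP_FLAG_DMA_MAP against a bound rx-queue picks up
dmabuf_devmem_ops automatically in page_pool_init():

    struct page_pool_params pp_params = {
            .flags     = PP_FLAG_DMA_MAP,  /* required by this provider */
            .order     = 0,                /* provider rejects order != 0 */
            .pool_size = ring_size,        /* illustrative */
            .nid       = NUMA_NO_NODE,
            .dev       = &pdev->dev,       /* device doing the DMA */
            .dma_dir   = DMA_FROM_DEVICE,
            .queue     = rxq,              /* netdev_rx_queue whose mp_params
                                            * were set by the netlink binding
                                            */
    };
    struct page_pool *pool = page_pool_create(&pp_params);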

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v8:
- Use skb_frag_size instead of frag->bv_len to fix patch-by-patch build
error

v6:
- refactor new memory provider functions into net/core/devmem.c (Pavel)

v2:
- Disable devmem for p.order != 0

v1:
- static_branch check in page_is_page_pool_iov() (Willem & Paolo).
- PP_DEVMEM -> PP_IOV (David).
- Require PP_FLAG_DMA_MAP (Jakub).

---
include/net/netmem.h | 15 ++++++
include/net/page_pool/helpers.h | 22 +++++++++
include/net/page_pool/types.h | 2 +
net/core/devmem.c | 83 +++++++++++++++++++++++++++++++++
net/core/page_pool.c | 38 +++++++--------
5 files changed, 138 insertions(+), 22 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 35ad237fdf29e..7c28d6fac6242 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -100,6 +100,21 @@ static inline struct page *netmem_to_page(netmem_ref netmem)
return (__force struct page *)netmem;
}

+static inline struct net_iov *netmem_to_net_iov(netmem_ref netmem)
+{
+ if (netmem_is_net_iov(netmem))
+ return (struct net_iov *)((__force unsigned long)netmem &
+ ~NET_IOV);
+
+ DEBUG_NET_WARN_ON_ONCE(true);
+ return NULL;
+}
+
+static inline netmem_ref net_iov_to_netmem(struct net_iov *niov)
+{
+ return (__force netmem_ref)((unsigned long)niov | NET_IOV);
+}
+
static inline netmem_ref page_to_netmem(struct page *page)
{
return (__force netmem_ref)page;
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 1770c7be24afc..731f2d1e1ee10 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -477,4 +477,26 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
page_pool_update_nid(pool, new_nid);
}

+static inline void page_pool_set_pp_info(struct page_pool *pool,
+ netmem_ref netmem)
+{
+ netmem_set_pp(netmem, pool);
+ netmem_or_pp_magic(netmem, PP_SIGNATURE);
+
+ /* Ensuring all pages have been split into one fragment initially:
+ * page_pool_set_pp_info() is only called once for every page when it
+ * is allocated from the page allocator and page_pool_fragment_page()
+ * is dirtying the same cache line as the page->pp_magic above, so
+ * the overhead is negligible.
+ */
+ page_pool_fragment_netmem(netmem, 1);
+ if (pool->has_init_callback)
+ pool->slow.init_callback(netmem, pool->slow.init_arg);
+}
+
+static inline void page_pool_clear_pp_info(netmem_ref netmem)
+{
+ netmem_clear_pp_magic(netmem);
+ netmem_set_pp(netmem, NULL);
+}
#endif /* _NET_PAGE_POOL_HELPERS_H */
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index edc3066e1ea56..87a7799460267 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -142,6 +142,8 @@ struct pp_memory_provider_params {
void *mp_priv;
};

+extern const struct memory_provider_ops dmabuf_devmem_ops;
+
struct page_pool {
struct page_pool_params_fast p;

diff --git a/net/core/devmem.c b/net/core/devmem.c
index fe9865699abb1..e591449a3cf1b 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -163,6 +163,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
* the driver may read this config while it's creating its rx-queues.
* WRITE_ONCE() here to match the READ_ONCE() in the driver.
*/
+ WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);
WRITE_ONCE(rxq->mp_params.mp_priv, binding);

err = netdev_rx_queue_restart(dev, rxq_idx);
@@ -298,4 +299,86 @@ int net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
dma_buf_put(dmabuf);
return err;
}
+
+/*** "Dmabuf devmem memory provider" ***/
+
+static int mp_dmabuf_devmem_init(struct page_pool *pool)
+{
+ struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+
+ if (!binding)
+ return -EINVAL;
+
+ if (!pool->dma_map)
+ return -EOPNOTSUPP;
+
+ if (pool->dma_sync)
+ return -EOPNOTSUPP;
+
+ if (pool->p.order != 0)
+ return -E2BIG;
+
+ net_devmem_dmabuf_binding_get(binding);
+ return 0;
+}
+
+static netmem_ref mp_dmabuf_devmem_alloc_netmems(struct page_pool *pool,
+ gfp_t gfp)
+{
+ struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+ netmem_ref netmem;
+ struct net_iov *niov;
+ dma_addr_t dma_addr;
+
+ niov = net_devmem_alloc_dmabuf(binding);
+ if (!niov)
+ return 0;
+
+ dma_addr = net_devmem_get_dma_addr(niov);
+
+ netmem = net_iov_to_netmem(niov);
+
+ page_pool_set_pp_info(pool, netmem);
+
+ if (page_pool_set_dma_addr_netmem(netmem, dma_addr))
+ goto err_free;
+
+ pool->pages_state_hold_cnt++;
+ trace_page_pool_state_hold(pool, netmem, pool->pages_state_hold_cnt);
+ return netmem;
+
+err_free:
+ net_devmem_free_dmabuf(niov);
+ return 0;
+}
+
+static void mp_dmabuf_devmem_destroy(struct page_pool *pool)
+{
+ struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+
+ net_devmem_dmabuf_binding_put(binding);
+}
+
+static bool mp_dmabuf_devmem_release_page(struct page_pool *pool,
+ netmem_ref netmem)
+{
+ WARN_ON_ONCE(!netmem_is_net_iov(netmem));
+ WARN_ON_ONCE(atomic_long_read(netmem_get_pp_ref_count_ref(netmem)) !=
+ 1);
+
+ page_pool_clear_pp_info(netmem);
+
+ net_devmem_free_dmabuf(netmem_to_net_iov(netmem));
+
+ /* We don't want the page pool put_page()ing our net_iovs. */
+ return false;
+}
+
+const struct memory_provider_ops dmabuf_devmem_ops = {
+ .init = mp_dmabuf_devmem_init,
+ .destroy = mp_dmabuf_devmem_destroy,
+ .alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
+ .release_page = mp_dmabuf_devmem_release_page,
+};
+EXPORT_SYMBOL(dmabuf_devmem_ops);
#endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index fa2a1f7ba0067..b625791a0fe77 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@

#include <net/page_pool/helpers.h>
#include <net/xdp.h>
+#include <net/netdev_rx_queue.h>

#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
@@ -21,12 +22,15 @@
#include <linux/poison.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>
+#include <linux/genalloc.h>
+#include <net/devmem.h>

#include <trace/events/page_pool.h>

#include "page_pool_priv.h"

DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
+EXPORT_SYMBOL(page_pool_mem_providers);

#define DEFER_TIME (msecs_to_jiffies(1000))
#define DEFER_WARN_INTERVAL (60 * HZ)
@@ -187,7 +191,9 @@ static int page_pool_init(struct page_pool *pool,
const struct page_pool_params *params,
int cpuid)
{
+ const struct memory_provider_ops *mp_ops = NULL;
unsigned int ring_qsize = 1024; /* Default */
+ void *mp_priv = NULL;
int err;

page_pool_struct_check();
@@ -270,6 +276,16 @@ static int page_pool_init(struct page_pool *pool,
if (pool->dma_map)
get_device(pool->p.dev);

+ if (pool->p.queue) {
+ mp_ops = READ_ONCE(pool->p.queue->mp_params.mp_ops);
+ mp_priv = READ_ONCE(pool->p.queue->mp_params.mp_priv);
+ }
+
+ if (mp_ops && mp_priv) {
+ pool->mp_ops = mp_ops;
+ pool->mp_priv = mp_priv;
+ }
+
if (pool->mp_ops) {
err = pool->mp_ops->init(pool);
if (err) {
@@ -469,28 +485,6 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
return false;
}

-static void page_pool_set_pp_info(struct page_pool *pool, netmem_ref netmem)
-{
- netmem_set_pp(netmem, pool);
- netmem_or_pp_magic(netmem, PP_SIGNATURE);
-
- /* Ensuring all pages have been split into one fragment initially:
- * page_pool_set_pp_info() is only called once for every page when it
- * is allocated from the page allocator and page_pool_fragment_page()
- * is dirtying the same cache line as the page->pp_magic above, so
- * the overhead is negligible.
- */
- page_pool_fragment_netmem(netmem, 1);
- if (pool->has_init_callback)
- pool->slow.init_callback(netmem, pool->slow.init_arg);
-}
-
-static void page_pool_clear_pp_info(netmem_ref netmem)
-{
- netmem_clear_pp_magic(netmem);
- netmem_set_pp(netmem, NULL);
-}
-
static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
gfp_t gfp)
{
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:20:54

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 12/14] net: add SO_DEVMEM_DONTNEED setsockopt to release RX frags

Add an interface for the user to notify the kernel that it is done
reading the devmem dmabuf frags returned as cmsg. The kernel will
drop the reference on the frags to make them available for reuse.
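
A minimal userspace sketch (frag_token here stands for a token received in
a dmabuf_cmsg from an earlier recvmsg() call):

    struct dmabuf_token token = {
            .token_start = frag_token,
            .token_count = 1,
    };
    int ret;

    ret = setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &token,
                     sizeof(token));
    /* On success, ret is the number of frags actually freed. */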

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v10:
- Fix leak of tokens (Nikolay).

v7:
- Updated SO_DEVMEM_* uapi to use the next available entry (Arnd).

v6:
- Squash in locking optimizations from [email protected]. With his
changes we lock the xarray once per sock_devmem_dontneed operation
rather than once per frag.

Changes in v1:
- devmemtoken -> dmabuf_token (David).
- Use napi_pp_put_page() for refcounting (Yunsheng).
- Fix build error with missing socket options on other asms.

---
arch/alpha/include/uapi/asm/socket.h | 1 +
arch/mips/include/uapi/asm/socket.h | 1 +
arch/parisc/include/uapi/asm/socket.h | 1 +
arch/sparc/include/uapi/asm/socket.h | 1 +
include/uapi/asm-generic/socket.h | 1 +
include/uapi/linux/uio.h | 4 ++
net/core/sock.c | 61 +++++++++++++++++++++++++++
7 files changed, 70 insertions(+)

diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h
index ef4656a41058a..251b73c5481ea 100644
--- a/arch/alpha/include/uapi/asm/socket.h
+++ b/arch/alpha/include/uapi/asm/socket.h
@@ -144,6 +144,7 @@
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 79
#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED 80

#if !defined(__KERNEL__)

diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h
index 414807d55e33f..8ab7582291abf 100644
--- a/arch/mips/include/uapi/asm/socket.h
+++ b/arch/mips/include/uapi/asm/socket.h
@@ -155,6 +155,7 @@
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 79
#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED 80

#if !defined(__KERNEL__)

diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
index 2b817efd45444..38fc0b188e084 100644
--- a/arch/parisc/include/uapi/asm/socket.h
+++ b/arch/parisc/include/uapi/asm/socket.h
@@ -136,6 +136,7 @@
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 79
#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED 80

#if !defined(__KERNEL__)

diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h
index 00248fc689773..57084ed2f3c4e 100644
--- a/arch/sparc/include/uapi/asm/socket.h
+++ b/arch/sparc/include/uapi/asm/socket.h
@@ -137,6 +137,7 @@
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 0x0058
#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED 0x0059

#if !defined(__KERNEL__)

diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index 25a2f5255f523..1acb77780f103 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -135,6 +135,7 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_DONTNEED 97
#define SO_DEVMEM_LINEAR 98
#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
#define SO_DEVMEM_DMABUF 99
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 3a22ddae376a2..d17f8fcd93ec9 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -33,6 +33,10 @@ struct dmabuf_cmsg {
*/
};

+struct dmabuf_token {
+ __u32 token_start;
+ __u32 token_count;
+};
/*
* UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
*/
diff --git a/net/core/sock.c b/net/core/sock.c
index 521e6373d4f73..128d6a57a7af0 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -124,6 +124,7 @@
#include <linux/netdevice.h>
#include <net/protocol.h>
#include <linux/skbuff.h>
+#include <linux/skbuff_ref.h>
#include <net/net_namespace.h>
#include <net/request_sock.h>
#include <net/sock.h>
@@ -1049,6 +1050,62 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
return 0;
}

+#ifdef CONFIG_PAGE_POOL
+static noinline_for_stack int
+sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen)
+{
+ unsigned int num_tokens, i, j, k, netmem_num = 0;
+ struct dmabuf_token *tokens;
+ netmem_ref netmems[16];
+ int ret = 0;
+
+ if (sk->sk_type != SOCK_STREAM || sk->sk_protocol != IPPROTO_TCP)
+ return -EBADF;
+
+ if (optlen % sizeof(struct dmabuf_token) ||
+ optlen > sizeof(*tokens) * 128)
+ return -EINVAL;
+
+ tokens = kvmalloc_array(128, sizeof(*tokens), GFP_KERNEL);
+ if (!tokens)
+ return -ENOMEM;
+
+ num_tokens = optlen / sizeof(struct dmabuf_token);
+ if (copy_from_sockptr(tokens, optval, optlen)) {
+ kvfree(tokens);
+ return -EFAULT;
+ }
+
+ xa_lock_bh(&sk->sk_user_frags);
+ for (i = 0; i < num_tokens; i++) {
+ for (j = 0; j < tokens[i].token_count; j++) {
+ netmem_ref netmem = (__force netmem_ref)__xa_erase(
+ &sk->sk_user_frags, tokens[i].token_start + j);
+
+ if (netmem &&
+ !WARN_ON_ONCE(!netmem_is_net_iov(netmem))) {
+ netmems[netmem_num++] = netmem;
+ if (netmem_num == ARRAY_SIZE(netmems)) {
+ xa_unlock_bh(&sk->sk_user_frags);
+ for (k = 0; k < netmem_num; k++)
+ WARN_ON_ONCE(!napi_pp_put_page(netmems[k]));
+ netmem_num = 0;
+ xa_lock_bh(&sk->sk_user_frags);
+ }
+ ret++;
+ }
+ }
+ }
+
+ xa_unlock_bh(&sk->sk_user_frags);
+ for (k = 0; k < netmem_num; k++)
+ WARN_ON_ONCE(!napi_pp_put_page(netmems[k]));
+
+ kvfree(tokens);
+ return ret;
+}
+#endif
+
void sockopt_lock_sock(struct sock *sk)
{
/* When current->bpf_ctx is set, the setsockopt is called from
@@ -1200,6 +1257,10 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
ret = -EOPNOTSUPP;
return ret;
}
+#ifdef CONFIG_PAGE_POOL
+ case SO_DEVMEM_DONTNEED:
+ return sock_devmem_dontneed(sk, optval, optlen);
+#endif
}

sockopt_lock_sock(sk);
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:21:11

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 11/14] tcp: RX path for devmem TCP

In tcp_recvmsg_locked(), detect if the skb being received by the user
is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM
flag - pass it to tcp_recvmsg_devmem() for custom handling.

tcp_recvmsg_devmem() copies any data in the skb header to the linear
buffer, and returns a cmsg to the user indicating the number of bytes
returned in the linear buffer.

tcp_recvmsg_devmem() then loops over the inaccessible devmem skb frags,
and returns to the user a cmsg_devmem indicating the location of the
data in the dmabuf device memory. cmsg_devmem contains this information:

1. the offset into the dmabuf where the payload starts. 'frag_offset'.
2. the size of the frag. 'frag_size'.
3. an opaque token 'frag_token' to return to the kernel when the buffer
is to be released.

The pages awaiting freeing are stored in the newly added
sk->sk_user_frags, and each page passed to userspace is get_page()'d.
This reference is dropped once the userspace indicates that it is
done reading this page. All pages are released when the socket is
destroyed.
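
For illustration, a receiver that has mmap()ed the dmabuf (buf_mem and cm
below are illustrative names; this assumes a dmabuf that supports CPU
access, e.g. udmabuf) can locate one frag's payload as:

    struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

    /* frag_offset is an offset into the dmabuf, not into host memory */
    char *frag = buf_mem + dc->frag_offset;
    size_t len = dc->frag_size;

    /* ... consume [frag, frag + len), then hand dc->frag_token back to
     * the kernel via SO_DEVMEM_DONTNEED so the buffer can be reused.
     */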

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---

v7:
- Updated the SO_DEVMEM_* uapi to use the next available entries (Arnd).
- Updated dmabuf_cmsg struct to be __u64 padded (Arnd).
- Squashed fix from Eric to initialize sk_user_frags for passive
sockets (Eric).

v6
- skb->dmabuf -> skb->readable (Pavel)
- Fixed asm definitions of SO_DEVMEM_LINEAR/SO_DEVMEM_DMABUF not found
on some archs.
- Squashed in locking optimizations from [email protected]. With this
change we lock the xarray once per tcp_recvmsg_dmabuf() rather
than once per frag in xa_alloc().

Changes in v1:
- Added dmabuf_id to dmabuf_cmsg (David/Stan).
- Devmem -> dmabuf (David).
- Change tcp_recvmsg_dmabuf() check to skb->dmabuf (Paolo).
- Use __skb_frag_ref() & napi_pp_put_page() for refcounting (Yunsheng).

RFC v3:
- Fixed issue with put_cmsg() failing silently.

---
arch/alpha/include/uapi/asm/socket.h | 5 +
arch/mips/include/uapi/asm/socket.h | 5 +
arch/parisc/include/uapi/asm/socket.h | 5 +
arch/sparc/include/uapi/asm/socket.h | 5 +
include/linux/socket.h | 1 +
include/net/netmem.h | 13 ++
include/net/sock.h | 2 +
include/uapi/asm-generic/socket.h | 5 +
include/uapi/linux/uio.h | 13 ++
net/ipv4/tcp.c | 248 +++++++++++++++++++++++++-
net/ipv4/tcp_ipv4.c | 10 ++
net/ipv4/tcp_minisocks.c | 2 +
12 files changed, 309 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h
index e94f621903fee..ef4656a41058a 100644
--- a/arch/alpha/include/uapi/asm/socket.h
+++ b/arch/alpha/include/uapi/asm/socket.h
@@ -140,6 +140,11 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_LINEAR 78
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 79
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)

#if __BITS_PER_LONG == 64
diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h
index 60ebaed28a4ca..414807d55e33f 100644
--- a/arch/mips/include/uapi/asm/socket.h
+++ b/arch/mips/include/uapi/asm/socket.h
@@ -151,6 +151,11 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_LINEAR 78
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 79
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)

#if __BITS_PER_LONG == 64
diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
index be264c2b1a117..2b817efd45444 100644
--- a/arch/parisc/include/uapi/asm/socket.h
+++ b/arch/parisc/include/uapi/asm/socket.h
@@ -132,6 +132,11 @@
#define SO_PASSPIDFD 0x404A
#define SO_PEERPIDFD 0x404B

+#define SO_DEVMEM_LINEAR 78
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 79
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)

#if __BITS_PER_LONG == 64
diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h
index 682da3714686c..00248fc689773 100644
--- a/arch/sparc/include/uapi/asm/socket.h
+++ b/arch/sparc/include/uapi/asm/socket.h
@@ -133,6 +133,11 @@
#define SO_PASSPIDFD 0x0055
#define SO_PEERPIDFD 0x0056

+#define SO_DEVMEM_LINEAR 0x0057
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 0x0058
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)


diff --git a/include/linux/socket.h b/include/linux/socket.h
index 89d16b90370bd..b0defc2ea40ed 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -327,6 +327,7 @@ struct ucred {
* plain text and require encryption
*/

+#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */
#define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */
#define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */
diff --git a/include/net/netmem.h b/include/net/netmem.h
index 7c28d6fac6242..1f213a3aedf06 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -65,6 +65,19 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
return niov - net_iov_owner(niov)->niovs;
}

+static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
+{
+ struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+
+ return owner->base_virtual +
+ ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
+}
+
+static inline u32 net_iov_binding_id(const struct net_iov *niov)
+{
+ return net_iov_owner(niov)->binding->id;
+}
+
static inline struct net_devmem_dmabuf_binding *
net_iov_binding(const struct net_iov *niov)
{
diff --git a/include/net/sock.h b/include/net/sock.h
index 5f4d0629348f3..87f89738da370 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -337,6 +337,7 @@ struct sk_filter;
* @sk_txtime_report_errors: set report errors mode for SO_TXTIME
* @sk_txtime_unused: unused txtime flags
* @ns_tracker: tracker for netns reference
+ * @sk_user_frags: xarray of pages the user is holding a reference on.
*/
struct sock {
/*
@@ -542,6 +543,7 @@ struct sock {
#endif
struct rcu_head sk_rcu;
netns_tracker ns_tracker;
+ struct xarray sk_user_frags;
};

enum sk_pacing {
diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index 8ce8a39a1e5f0..25a2f5255f523 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -135,6 +135,11 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

+#define SO_DEVMEM_LINEAR 98
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 99
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+
#if !defined(__KERNEL__)

#if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__))
diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index 059b1a9147f4f..3a22ddae376a2 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -20,6 +20,19 @@ struct iovec
__kernel_size_t iov_len; /* Must be size_t (1003.1g) */
};

+struct dmabuf_cmsg {
+ __u64 frag_offset; /* offset into the dmabuf where the frag starts.
+ */
+ __u32 frag_size; /* size of the frag. */
+ __u32 frag_token; /* token representing this frag for
+ * DEVMEM_DONTNEED.
+ */
+ __u32 dmabuf_id; /* dmabuf id this frag belongs to. */
+ __u32 flags; /* Currently unused. Reserved for future
+ * uses.
+ */
+};
+
/*
* UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
*/
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 55d85b1df0f39..496fe77a9ef85 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -468,6 +468,7 @@ void tcp_init_sock(struct sock *sk)

set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
sk_sockets_allocated_inc(sk);
+ xa_init_flags(&sk->sk_user_frags, XA_FLAGS_ALLOC1);
}
EXPORT_SYMBOL(tcp_init_sock);

@@ -2317,6 +2318,213 @@ static int tcp_inq_hint(struct sock *sk)
return inq;
}

+/* batch __xa_alloc() calls and reduce xa_lock()/xa_unlock() overhead. */
+struct tcp_xa_pool {
+ u8 max; /* max <= MAX_SKB_FRAGS */
+ u8 idx; /* idx <= max */
+ __u32 tokens[MAX_SKB_FRAGS];
+ netmem_ref netmems[MAX_SKB_FRAGS];
+};
+
+static void tcp_xa_pool_commit(struct sock *sk, struct tcp_xa_pool *p,
+ bool lock)
+{
+ int i;
+
+ if (!p->max)
+ return;
+ if (lock)
+ xa_lock_bh(&sk->sk_user_frags);
+ /* Commit part that has been copied to user space. */
+ for (i = 0; i < p->idx; i++)
+ __xa_cmpxchg(&sk->sk_user_frags, p->tokens[i], XA_ZERO_ENTRY,
+ (__force void *)p->netmems[i], GFP_KERNEL);
+ /* Rollback what has been pre-allocated and is no longer needed. */
+ for (; i < p->max; i++)
+ __xa_erase(&sk->sk_user_frags, p->tokens[i]);
+ if (lock)
+ xa_unlock_bh(&sk->sk_user_frags);
+ p->max = 0;
+ p->idx = 0;
+}
+
+static int tcp_xa_pool_refill(struct sock *sk, struct tcp_xa_pool *p,
+ unsigned int max_frags)
+{
+ int err, k;
+
+ if (p->idx < p->max)
+ return 0;
+
+ xa_lock_bh(&sk->sk_user_frags);
+
+ tcp_xa_pool_commit(sk, p, false);
+ for (k = 0; k < max_frags; k++) {
+ err = __xa_alloc(&sk->sk_user_frags, &p->tokens[k],
+ XA_ZERO_ENTRY, xa_limit_31b, GFP_KERNEL);
+ if (err)
+ break;
+ }
+
+ xa_unlock_bh(&sk->sk_user_frags);
+
+ p->max = k;
+ p->idx = 0;
+ return k ? 0 : err;
+}
+
+/* On error, returns the -errno. On success, returns number of bytes sent to the
+ * user. May not consume all of @remaining_len.
+ */
+static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
+ unsigned int offset, struct msghdr *msg,
+ int remaining_len)
+{
+ struct dmabuf_cmsg dmabuf_cmsg = { 0 };
+ struct tcp_xa_pool tcp_xa_pool;
+ unsigned int start;
+ int i, copy, n;
+ int sent = 0;
+ int err = 0;
+
+ tcp_xa_pool.max = 0;
+ tcp_xa_pool.idx = 0;
+ do {
+ start = skb_headlen(skb);
+
+ if (skb_frags_readable(skb)) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ /* Copy header. */
+ copy = start - offset;
+ if (copy > 0) {
+ copy = min(copy, remaining_len);
+
+ n = copy_to_iter(skb->data + offset, copy,
+ &msg->msg_iter);
+ if (n != copy) {
+ err = -EFAULT;
+ goto out;
+ }
+
+ offset += copy;
+ remaining_len -= copy;
+
+ /* First a dmabuf_cmsg for # bytes copied to user
+ * buffer.
+ */
+ memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
+ dmabuf_cmsg.frag_size = copy;
+ err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
+ sizeof(dmabuf_cmsg), &dmabuf_cmsg);
+ if (err || msg->msg_flags & MSG_CTRUNC) {
+ msg->msg_flags &= ~MSG_CTRUNC;
+ if (!err)
+ err = -ETOOSMALL;
+ goto out;
+ }
+
+ sent += copy;
+
+ if (remaining_len == 0)
+ goto out;
+ }
+
+ /* after that, send information of dmabuf pages through a
+ * sequence of cmsg
+ */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct net_iov *niov;
+ u64 frag_offset;
+ int end;
+
+ /* !skb_frags_readable() should indicate that ALL the
+ * frags in this skb are dmabuf net_iovs. We're checking
+ * for that flag above, but also check individual frags
+ * here. If the tcp stack is not setting
+ * skb_frags_readable() correctly, we still don't want
+ * to crash here.
+ */
+ if (!skb_frag_net_iov(frag)) {
+ net_err_ratelimited("Found non-dmabuf skb with net_iov");
+ err = -ENODEV;
+ goto out;
+ }
+
+ niov = skb_frag_net_iov(frag);
+ end = start + skb_frag_size(frag);
+ copy = end - offset;
+
+ if (copy > 0) {
+ copy = min(copy, remaining_len);
+
+ frag_offset = net_iov_virtual_addr(niov) +
+ skb_frag_off(frag) + offset -
+ start;
+ dmabuf_cmsg.frag_offset = frag_offset;
+ dmabuf_cmsg.frag_size = copy;
+ err = tcp_xa_pool_refill(sk, &tcp_xa_pool,
+ skb_shinfo(skb)->nr_frags - i);
+ if (err)
+ goto out;
+
+ /* Will perform the exchange later */
+ dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
+ dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+
+ offset += copy;
+ remaining_len -= copy;
+
+ err = put_cmsg(msg, SOL_SOCKET,
+ SO_DEVMEM_DMABUF,
+ sizeof(dmabuf_cmsg),
+ &dmabuf_cmsg);
+ if (err || msg->msg_flags & MSG_CTRUNC) {
+ msg->msg_flags &= ~MSG_CTRUNC;
+ if (!err)
+ err = -ETOOSMALL;
+ goto out;
+ }
+
+ atomic_long_inc(&niov->pp_ref_count);
+ tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag);
+
+ sent += copy;
+
+ if (remaining_len == 0)
+ goto out;
+ }
+ start = end;
+ }
+
+ tcp_xa_pool_commit(sk, &tcp_xa_pool, true);
+ if (!remaining_len)
+ goto out;
+
+ /* if remaining_len is not satisfied yet, we need to go to the
+ * next frag in the frag_list to satisfy remaining_len.
+ */
+ skb = skb_shinfo(skb)->frag_list ?: skb->next;
+
+ offset = offset - start;
+ } while (skb);
+
+ if (remaining_len) {
+ err = -EFAULT;
+ goto out;
+ }
+
+out:
+ tcp_xa_pool_commit(sk, &tcp_xa_pool, true);
+ if (!sent)
+ sent = err;
+
+ return sent;
+}
+
/*
* This routine copies from a sock struct into the user buffer.
*
@@ -2330,6 +2538,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
int *cmsg_flags)
{
struct tcp_sock *tp = tcp_sk(sk);
+ int last_copied_dmabuf = -1; /* uninitialized */
int copied = 0;
u32 peek_seq;
u32 *seq;
@@ -2509,15 +2718,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
}

if (!(flags & MSG_TRUNC)) {
- err = skb_copy_datagram_msg(skb, offset, msg, used);
- if (err) {
- /* Exception. Bailout! */
- if (!copied)
- copied = -EFAULT;
+ if (last_copied_dmabuf != -1 &&
+ last_copied_dmabuf != !skb_frags_readable(skb))
break;
+
+ if (skb_frags_readable(skb)) {
+ err = skb_copy_datagram_msg(skb, offset, msg,
+ used);
+ if (err) {
+ /* Exception. Bailout! */
+ if (!copied)
+ copied = -EFAULT;
+ break;
+ }
+ } else {
+ if (!(flags & MSG_SOCK_DEVMEM)) {
+ /* dmabuf skbs can only be received
+ * with the MSG_SOCK_DEVMEM flag.
+ */
+ if (!copied)
+ copied = -EFAULT;
+
+ break;
+ }
+
+ err = tcp_recvmsg_dmabuf(sk, skb, offset, msg,
+ used);
+ if (err <= 0) {
+ if (!copied)
+ copied = -EFAULT;
+
+ break;
+ }
+ used = err;
}
}

+ last_copied_dmabuf = !skb_frags_readable(skb);
+
WRITE_ONCE(*seq, *seq + used);
copied += used;
len -= used;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 041c7eda9abe2..18ad570c1e2e8 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -79,6 +79,7 @@
#include <linux/seq_file.h>
#include <linux/inetdevice.h>
#include <linux/btf_ids.h>
+#include <linux/skbuff_ref.h>

#include <crypto/hash.h>
#include <linux/scatterlist.h>
@@ -2511,6 +2512,15 @@ static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
void tcp_v4_destroy_sock(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
+ __maybe_unused unsigned long index;
+ __maybe_unused void *netmem;
+
+#ifdef CONFIG_PAGE_POOL
+ xa_for_each(&sk->sk_user_frags, index, netmem)
+ WARN_ON_ONCE(!napi_pp_put_page((__force netmem_ref)netmem));
+#endif
+
+ xa_destroy(&sk->sk_user_frags);

trace_tcp_destroy_sock(sk);

diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index b93619b2384b3..97fb3e1e67bee 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -625,6 +625,8 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,

__TCP_INC_STATS(sock_net(sk), TCP_MIB_PASSIVEOPENS);

+ xa_init_flags(&newsk->sk_user_frags, XA_FLAGS_ALLOC1);
+
return newsk;
}
EXPORT_SYMBOL(tcp_create_openreq_child);
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:21:14

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 13/14] net: add devmem TCP documentation

Add documentation outlining the usage and details of devmem TCP.

Signed-off-by: Mina Almasry <[email protected]>

---

v9: https://lore.kernel.org/netdev/[email protected]/
- Bagas doc suggestions.

v8:
- Applied docs suggestions (Randy). Thanks!

v7:
- Applied docs suggestions (Jakub).

v2:

- Missing spdx (simon)
- add to index.rst (simon)

---
Documentation/networking/devmem.rst | 258 ++++++++++++++++++++++++++++
Documentation/networking/index.rst | 1 +
2 files changed, 259 insertions(+)
create mode 100644 Documentation/networking/devmem.rst

diff --git a/Documentation/networking/devmem.rst b/Documentation/networking/devmem.rst
new file mode 100644
index 0000000000000..f32acfd62075d
--- /dev/null
+++ b/Documentation/networking/devmem.rst
@@ -0,0 +1,258 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Device Memory TCP
+=================
+
+
+Intro
+=====
+
+Device memory TCP (devmem TCP) enables receiving data directly into device
+memory (dmabuf). The feature is currently implemented for TCP sockets.
+
+
+Opportunity
+-----------
+
+A large number of data transfers have device memory as the source and/or
+destination. Accelerators drastically increased the prevalence of such
+transfers. Some examples include:
+
+- Distributed training, where ML accelerators, such as GPUs on different hosts,
+ exchange data.
+
+- Distributed raw block storage applications transfer large amounts of data with
+ remote SSDs. Much of this data does not require host processing.
+
+Typically the Device-to-Device data transfers in the network are implemented as
+the following low-level operations: Device-to-Host copy, Host-to-Host network
+transfer, and Host-to-Device copy.
+
+The flow involving host copies is suboptimal, especially for bulk data transfers,
+and can put significant strains on system resources such as host memory
+bandwidth and PCIe bandwidth.
+
+Devmem TCP optimizes this use case by implementing socket APIs that enable
+the user to receive incoming network packets directly into device memory.
+
+Packet payloads go directly from the NIC to device memory.
+
+Packet headers go to host memory and are processed by the TCP/IP stack
+normally. The NIC must support header split to achieve this.
+
+Advantages:
+
+- Alleviate host memory bandwidth pressure, compared to existing
+ network-transfer + device-copy semantics.
+
+- Alleviate PCIe bandwidth pressure, by limiting data transfer to the lowest
+ level of the PCIe tree, compared to the traditional path which sends data
+ through the root complex.
+
+
+More Info
+---------
+
+ slides, video
+ https://netdevconf.org/0x17/sessions/talk/device-memory-tcp.html
+
+ patchset
+ [RFC PATCH v6 00/12] Device Memory TCP
+ https://lore.kernel.org/netdev/[email protected]/
+
+
+Interface
+=========
+
+Example
+-------
+
+tools/testing/selftests/net/ncdevmem.c:do_server shows an example of setting up
+the RX path of this API.
+
+NIC Setup
+---------
+
+Header split, flow steering, & RSS are required features for devmem TCP.
+
+Header split is used to split incoming packets into a header buffer in host
+memory, and a payload buffer in device memory.
+
+Flow steering & RSS are used to ensure that only flows targeting devmem land on
+an RX queue bound to devmem.
+
+Enable header split & flow steering::
+
+ # enable header split
+ ethtool -G eth1 tcp-data-split on
+
+
+ # enable flow steering
+ ethtool -K eth1 ntuple on
+
+Configure RSS to steer all traffic away from the target RX queue (queue 15 in
+this example)::
+
+ ethtool --set-rxfh-indir eth1 equal 15
+
+
+The user must bind a dmabuf to any number of RX queues on a given NIC using
+the netlink API::
+
+ /* Bind dmabuf to NIC RX queue 15 */
+ struct netdev_queue_dmabuf *queues;
+ queues = malloc(sizeof(*queues) * 1);
+
+ queues[0]._present.type = 1;
+ queues[0]._present.idx = 1;
+ queues[0].type = NETDEV_QUEUE_TYPE_RX;
+ queues[0].idx = 15;
+
+ *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+
+ req = netdev_bind_rx_req_alloc();
+ netdev_bind_rx_req_set_ifindex(req, 1 /* ifindex */);
+ netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);
+ __netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+ rsp = netdev_bind_rx(*ys, req);
+
+ dmabuf_id = rsp->dmabuf_id;
+
+
+The netlink API returns a dmabuf_id: a unique ID that refers to this dmabuf
+that has been bound.
+
+Socket Setup
+------------
+
+The socket must be flow steered to the dmabuf bound RX queue::
+
+ ethtool -N eth1 flow-type tcp4 ... queue 15
+
+
+Receiving data
+--------------
+
+The user application must signal to the kernel that it is capable of receiving
+devmem data by passing the MSG_SOCK_DEVMEM flag to recvmsg::
+
+ ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
+
+Applications that do not specify the MSG_SOCK_DEVMEM flag will receive an EFAULT
+on devmem data.
+
+Devmem data is received directly into the dmabuf bound to the NIC in 'NIC
+Setup', and the kernel signals this to the user via the SCM_DEVMEM_* cmsgs::
+
+ for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+ if (cm->cmsg_level != SOL_SOCKET ||
+ (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+ cm->cmsg_type != SCM_DEVMEM_LINEAR))
+ continue;
+
+ dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+
+ if (cm->cmsg_type == SCM_DEVMEM_DMABUF) {
+ /* Frag landed in dmabuf.
+ *
+ * dmabuf_cmsg->dmabuf_id is the dmabuf the
+ * frag landed on.
+ *
+ * dmabuf_cmsg->frag_offset is the offset into
+ * the dmabuf where the frag starts.
+ *
+ * dmabuf_cmsg->frag_size is the size of the
+ * frag.
+ *
+ * dmabuf_cmsg->frag_token is a token used to
+ * refer to this frag for later freeing.
+ */
+
+ struct dmabuf_token token;
+ token.token_start = dmabuf_cmsg->frag_token;
+ token.token_count = 1;
+ continue;
+ }
+
+ if (cm->cmsg_type == SCM_DEVMEM_LINEAR)
+ /* Frag landed in linear buffer.
+ *
+ * dmabuf_cmsg->frag_size is the size of the
+ * frag.
+ */
+ continue;
+
+ }
+
+Applications may receive 2 cmsgs:
+
+- SCM_DEVMEM_DMABUF: this indicates the fragment landed in the dmabuf indicated
+ by dmabuf_id.
+
+- SCM_DEVMEM_LINEAR: this indicates the fragment landed in the linear buffer.
+ This typically happens when the NIC is unable to split the packet at the
+ header boundary, such that part (or all) of the payload landed in host
+ memory.
+
+Applications may receive no SCM_DEVMEM_* cmsgs. That indicates non-devmem,
+regular TCP data that landed on an RX queue not bound to a dmabuf.
+
+
+Freeing frags
+-------------
+
+Frags received via SCM_DEVMEM_DMABUF are pinned by the kernel while the user
+processes the frag. The user must return the frag to the kernel via
+SO_DEVMEM_DONTNEED::
+
+ ret = setsockopt(client_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, &token,
+ sizeof(token));
+
+The user must ensure the tokens are returned to the kernel in a timely manner.
+Failure to do so will exhaust the limited dmabuf memory that is bound to the
+RX queue and will lead to packet drops.
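+
+Tokens may also be returned in batches. As a sketch, freeing two frags with
+one call (the kernel currently accepts at most 128 tokens per call;
+frag_token_a and frag_token_b stand for tokens from earlier cmsgs)::
+
+   struct dmabuf_token tokens[2];
+
+   tokens[0].token_start = frag_token_a;
+   tokens[0].token_count = 1;
+   tokens[1].token_start = frag_token_b;
+   tokens[1].token_count = 1;
+
+   ret = setsockopt(client_fd, SOL_SOCKET, SO_DEVMEM_DONTNEED, tokens,
+                    sizeof(tokens));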
+
+
+Implementation & Caveats
+========================
+
+Unreadable skbs
+---------------
+
+Devmem payloads are inaccessible to the kernel processing the packets. This
+results in a few quirks for payloads of devmem skbs:
+
+- Loopback is not functional. Loopback relies on copying the payload, which is
+ not possible with devmem skbs.
+
+- Software checksum calculation fails.
+
+- tcpdump and bpf can't access devmem packet payloads.
+
+
+Testing
+=======
+
+More realistic example code can be found in the kernel source under
+tools/testing/selftests/net/ncdevmem.c
+
+ncdevmem is a devmem TCP netcat. It works very similarly to netcat, but
+receives data directly into a udmabuf.
+
+To run ncdevmem, you need to run it as a server on the machine under test,
+and you need to run netcat on a peer to provide the TX data.
+
+ncdevmem also has a validation mode that expects a repeating pattern of
+incoming data and validates it. For example, you can launch ncdevmem on
+the server with::
+
+ ncdevmem -s <server IP> -c <client IP> -f eth1 -d 3 -n 0000:06:00.0 -l \
+ -p 5201 -v 7
+
+On client side, use regular netcat to send TX data to ncdevmem process
+on the server::
+
+ yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+ tr \\n \\0 | head -c 5G | nc <server IP> 5201 -p 5201
diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst
index 7664c0bfe461c..64e8ae22520ad 100644
--- a/Documentation/networking/index.rst
+++ b/Documentation/networking/index.rst
@@ -48,6 +48,7 @@ Contents:
cdc_mbim
dccp
dctcp
+ devmem
dns_resolver
driver
eql
--
2.45.1.288.g0e0cd299f1-goog


2024-05-30 20:21:50

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 14/14] selftests: add ncdevmem, netcat for devmem TCP

ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it
sends and receives data using the devmem TCP APIs. It uses udmabuf as
the dmabuf provider. It is compatible with a regular netcat running on
a peer, or a ncdevmem running on a peer.

In addition to normal netcat support, ncdevmem has a validation mode,
where it sends a specific pattern and validates this pattern on the
receiver side to ensure data integrity.

Suggested-by: Stanislav Fomichev <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>

---
v9: https://lore.kernel.org/netdev/[email protected]/
- Remove unused nic_pci_addr entry (Cong).

v6:
- Updated to bind 8 queues.
- Added RSS configuration.
- Added some more tests for the netlink API.

Changes in v1:
- Many more general cleanups (Willem).
- Removed driver reset (Jakub).
- Removed hardcoded if index (Paolo).

RFC v2:
- General cleanups (Willem).

---
tools/testing/selftests/net/.gitignore | 1 +
tools/testing/selftests/net/Makefile | 5 +
tools/testing/selftests/net/ncdevmem.c | 542 +++++++++++++++++++++++++
3 files changed, 548 insertions(+)
create mode 100644 tools/testing/selftests/net/ncdevmem.c

diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore
index 49a56eb5d0368..9cd3c99c6e5d4 100644
--- a/tools/testing/selftests/net/.gitignore
+++ b/tools/testing/selftests/net/.gitignore
@@ -17,6 +17,7 @@ ipv6_flowlabel
ipv6_flowlabel_mgr
log.txt
msg_zerocopy
+ncdevmem
nettest
psock_fanout
psock_snd
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index bd01e4a0be2c2..df3fdd727ffb3 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -5,6 +5,10 @@ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
CFLAGS += -I../../../../usr/include/ $(KHDR_INCLUDES)
# Additional include paths needed by kselftest.h
CFLAGS += -I../
+CFLAGS += -I../../../net/ynl/generated/
+CFLAGS += -I../../../net/ynl/lib/
+
+LDLIBS += ../../../net/ynl/lib/ynl.a ../../../net/ynl/generated/protos.a

TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh \
rtnetlink.sh xfrm_policy.sh test_blackhole_dev.sh
@@ -91,6 +95,7 @@ TEST_PROGS += fdb_flush.sh
TEST_PROGS += fq_band_pktlimit.sh
TEST_PROGS += vlan_hw_filter.sh
TEST_PROGS += bpf_offload.py
+TEST_GEN_FILES += ncdevmem

TEST_FILES := settings
TEST_FILES += in_netns.sh lib.sh net_helper.sh setup_loopback.sh setup_veth.sh
diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c
new file mode 100644
index 0000000000000..e00255e54f77b
--- /dev/null
+++ b/tools/testing/selftests/net/ncdevmem.c
@@ -0,0 +1,542 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#define __EXPORTED_HEADERS__
+
+#include <linux/uio.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <stdbool.h>
+#include <string.h>
+#include <errno.h>
+#define __iovec_defined
+#include <fcntl.h>
+#include <malloc.h>
+#include <error.h>
+
+#include <arpa/inet.h>
+#include <sys/socket.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+
+#include <linux/memfd.h>
+#include <linux/if.h>
+#include <linux/dma-buf.h>
+#include <linux/udmabuf.h>
+#include <libmnl/libmnl.h>
+#include <linux/types.h>
+#include <linux/netlink.h>
+#include <linux/genetlink.h>
+#include <linux/netdev.h>
+#include <time.h>
+
+#include "netdev-user.h"
+#include <ynl.h>
+
+#define PAGE_SHIFT 12
+#define TEST_PREFIX "ncdevmem"
+#define NUM_PAGES 16000
+
+#ifndef MSG_SOCK_DEVMEM
+#define MSG_SOCK_DEVMEM 0x2000000
+#endif
+
+/*
+ * tcpdevmem netcat. Works similarly to netcat but does device memory TCP
+ * instead of regular TCP. Uses udmabuf to mock a dmabuf provider.
+ *
+ * Usage:
+ *
+ * On server:
+ * ncdevmem -s <server IP> -c <client IP> -f eth1 -d 3 -n 0000:06:00.0 -l \
+ * -p 5201 -v 7
+ *
+ * On client:
+ * yes $(echo -e \\x01\\x02\\x03\\x04\\x05\\x06) | \
+ * tr \\n \\0 | \
+ * head -c 5G | \
+ * nc <server IP> 5201 -p 5201
+ *
+ * Note this is compatible with regular netcat. i.e. the sender or receiver can
+ * be replaced with regular netcat to test the RX or TX path in isolation.
+ */
+
+static char *server_ip = "192.168.1.4";
+static char *client_ip = "192.168.1.2";
+static char *port = "5201";
+static size_t do_validation;
+static int start_queue = 8;
+static int num_queues = 8;
+static char *ifname = "eth1";
+static unsigned int ifindex = 3;
+static unsigned int iterations;
+static unsigned int dmabuf_id;
+
+void print_bytes(void *ptr, size_t size)
+{
+ unsigned char *p = ptr;
+ int i;
+
+ for (i = 0; i < size; i++)
+ printf("%02hhX ", p[i]);
+ printf("\n");
+}
+
+void print_nonzero_bytes(void *ptr, size_t size)
+{
+ unsigned char *p = ptr;
+ unsigned int i;
+
+ for (i = 0; i < size; i++)
+ putchar(p[i]);
+ printf("\n");
+}
+
+void validate_buffer(void *line, size_t size)
+{
+ static unsigned char seed = 1;
+ unsigned char *ptr = line;
+ int errors = 0;
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ if (ptr[i] != seed) {
+ fprintf(stderr,
+ "Failed validation: expected=%u, actual=%u, index=%lu\n",
+ seed, ptr[i], i);
+ errors++;
+ if (errors > 20)
+ error(1, 0, "validation failed.");
+ }
+ seed++;
+ if (seed == do_validation)
+ seed = 0;
+ }
+
+ fprintf(stdout, "Validated buffer\n");
+}
+
+static void reset_flow_steering(void)
+{
+ char command[256];
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off",
+ "eth1");
+ system(command);
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on",
+ "eth1");
+ system(command);
+}
+
+static void configure_rss(void)
+{
+ char command[256];
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command), "sudo ethtool -X %s equal %d",
+ ifname, start_queue);
+ system(command);
+}
+
+static void configure_flow_steering(void)
+{
+ char command[256];
+
+ memset(command, 0, sizeof(command));
+ snprintf(command, sizeof(command),
+ "sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d",
+ ifname, client_ip, server_ip, port, port, start_queue);
+ system(command);
+}
+
+static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
+ struct netdev_queue_dmabuf *queues,
+ unsigned int n_queue_index, struct ynl_sock **ys)
+{
+ struct netdev_bind_rx_req *req = NULL;
+ struct netdev_bind_rx_rsp *rsp = NULL;
+ struct ynl_error yerr;
+
+ *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
+ if (!*ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return -1;
+ }
+
+ req = netdev_bind_rx_req_alloc();
+ netdev_bind_rx_req_set_ifindex(req, ifindex);
+ netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);
+ __netdev_bind_rx_req_set_queues(req, queues, n_queue_index);
+
+ rsp = netdev_bind_rx(*ys, req);
+ if (!rsp) {
+ perror("netdev_bind_rx");
+ goto err_close;
+ }
+
+ if (!rsp->_present.dmabuf_id) {
+ perror("dmabuf_id not present");
+ goto err_close;
+ }
+
+ printf("got dmabuf id=%d\n", rsp->dmabuf_id);
+ dmabuf_id = rsp->dmabuf_id;
+
+ netdev_bind_rx_req_free(req);
+ netdev_bind_rx_rsp_free(rsp);
+
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
+ netdev_bind_rx_req_free(req);
+ ynl_sock_destroy(*ys);
+ return -1;
+}
+
+static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size)
+{
+ struct udmabuf_create create;
+ int ret;
+
+ *devfd = open("/dev/udmabuf", O_RDWR);
+ if (*devfd < 0) {
+ error(70, 0,
+ "%s: [skip,no-udmabuf: Unable to access DMA buffer device file]\n",
+ TEST_PREFIX);
+ }
+
+ *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
+ if (*memfd < 0)
+ error(70, 0, "%s: [skip,no-memfd]\n", TEST_PREFIX);
+
+ /* Required for udmabuf */
+ ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK);
+ if (ret < 0)
+ error(73, 0, "%s: [skip,fcntl-add-seals]\n", TEST_PREFIX);
+
+ ret = ftruncate(*memfd, dmabuf_size);
+ if (ret == -1)
+ error(74, 0, "%s: [FAIL,memfd-truncate]\n", TEST_PREFIX);
+
+ memset(&create, 0, sizeof(create));
+
+ create.memfd = *memfd;
+ create.offset = 0;
+ create.size = dmabuf_size;
+ *buf = ioctl(*devfd, UDMABUF_CREATE, &create);
+ if (*buf < 0)
+ error(75, 0, "%s: [FAIL, create udmabuf]\n", TEST_PREFIX);
+}
+
+int do_server(void)
+{
+ char ctrl_data[sizeof(int) * 20000];
+ struct netdev_queue_dmabuf *queues;
+ size_t non_page_aligned_frags = 0;
+ struct sockaddr_in client_addr;
+ struct sockaddr_in server_sin;
+ size_t page_aligned_frags = 0;
+ int devfd, memfd, buf, ret;
+ size_t total_received = 0;
+ socklen_t client_addr_len;
+ bool is_devmem = false;
+ char *buf_mem = NULL;
+ struct ynl_sock *ys;
+ size_t dmabuf_size;
+ char iobuf[819200];
+ char buffer[256];
+ int socket_fd;
+ int client_fd;
+ size_t i = 0;
+ int opt = 1;
+
+ dmabuf_size = getpagesize() * NUM_PAGES;
+
+ create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
+
+ reset_flow_steering();
+
+ /* Configure RSS to divert all traffic from our devmem queues */
+ configure_rss();
+
+ /* Flow steer our devmem flows to start_queue */
+ configure_flow_steering();
+
+ sleep(1);
+
+ queues = malloc(sizeof(*queues) * num_queues);
+
+ for (i = 0; i < num_queues; i++) {
+ queues[i]._present.type = 1;
+ queues[i]._present.idx = 1;
+ queues[i].type = NETDEV_QUEUE_TYPE_RX;
+ queues[i].idx = start_queue + i;
+ }
+
+ if (bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+ error(1, 0, "Failed to bind\n");
+
+ buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+ buf, 0);
+ if (buf_mem == MAP_FAILED)
+ error(1, 0, "mmap()");
+
+ server_sin.sin_family = AF_INET;
+ server_sin.sin_port = htons(atoi(port));
+
+ ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr);
+ if (ret != 1)
+ error(79, 0, "%s: [FAIL, parse server address]\n", TEST_PREFIX);
+
+ socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0);
+ if (socket_fd < 0)
+ error(errno, errno, "%s: [FAIL, create socket]\n", TEST_PREFIX);
+
+ ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt,
+ sizeof(opt));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+ ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt,
+ sizeof(opt));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, set sock opt]\n", TEST_PREFIX);
+
+ printf("binding to address %s:%d\n", server_ip,
+ ntohs(server_sin.sin_port));
+
+ ret = bind(socket_fd, (struct sockaddr *)&server_sin,
+ sizeof(server_sin));
+ if (ret)
+ error(errno, errno, "%s: [FAIL, bind]\n", TEST_PREFIX);
+
+ ret = listen(socket_fd, 1);
+ if (ret)
+ error(errno, errno, "%s: [FAIL, listen]\n", TEST_PREFIX);
+
+ client_addr_len = sizeof(client_addr);
+
+ inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer,
+ sizeof(buffer));
+ printf("Waiting or connection on %s:%d\n", buffer,
+ ntohs(server_sin.sin_port));
+ client_fd = accept(socket_fd, (struct sockaddr *)&client_addr,
+ &client_addr_len);
+
+ inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer,
+ sizeof(buffer));
+ printf("Got connection from %s:%d\n", buffer,
+ ntohs(client_addr.sin_port));
+
+ while (1) {
+ struct iovec iov = { .iov_base = iobuf,
+ .iov_len = sizeof(iobuf) };
+ struct dmabuf_cmsg *dmabuf_cmsg = NULL;
+ struct dma_buf_sync sync = { 0 };
+ struct cmsghdr *cm = NULL;
+ struct msghdr msg = { 0 };
+ struct dmabuf_token token;
+ ssize_t ret;
+
+ is_devmem = false;
+ printf("\n\n");
+
+ msg.msg_iov = &iov;
+ msg.msg_iovlen = 1;
+ msg.msg_control = ctrl_data;
+ msg.msg_controllen = sizeof(ctrl_data);
+ ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM);
+ printf("recvmsg ret=%ld\n", ret);
+ if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
+ continue;
+ if (ret < 0) {
+ perror("recvmsg");
+ continue;
+ }
+ if (ret == 0) {
+ printf("client exited\n");
+ goto cleanup;
+ }
+
+ i++;
+ for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
+ if (cm->cmsg_level != SOL_SOCKET ||
+ (cm->cmsg_type != SCM_DEVMEM_DMABUF &&
+ cm->cmsg_type != SCM_DEVMEM_LINEAR)) {
+ fprintf(stdout, "skipping non-devmem cmsg\n");
+ continue;
+ }
+
+ dmabuf_cmsg = (struct dmabuf_cmsg *)CMSG_DATA(cm);
+ is_devmem = true;
+
+ if (cm->cmsg_type == SCM_DEVMEM_LINEAR) {
+ /* TODO: process data copied from skb's linear
+ * buffer.
+ */
+ fprintf(stdout,
+ "SCM_DEVMEM_LINEAR. dmabuf_cmsg->frag_size=%u\n",
+ dmabuf_cmsg->frag_size);
+
+ continue;
+ }
+
+ token.token_start = dmabuf_cmsg->frag_token;
+ token.token_count = 1;
+
+ total_received += dmabuf_cmsg->frag_size;
+ printf("received frag_page=%llu, in_page_offset=%llu, frag_offset=%llu, frag_size=%u, token=%u, total_received=%lu, dmabuf_id=%u\n",
+ dmabuf_cmsg->frag_offset >> PAGE_SHIFT,
+ dmabuf_cmsg->frag_offset % getpagesize(),
+ dmabuf_cmsg->frag_offset, dmabuf_cmsg->frag_size,
+ dmabuf_cmsg->frag_token, total_received,
+ dmabuf_cmsg->dmabuf_id);
+
+ if (dmabuf_cmsg->dmabuf_id != dmabuf_id)
+ error(1, 0,
+ "received on wrong dmabuf_id: flow steering error\n");
+
+ if (dmabuf_cmsg->frag_size % getpagesize())
+ non_page_aligned_frags++;
+ else
+ page_aligned_frags++;
+
+ sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START;
+ ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+ if (do_validation)
+ validate_buffer(
+ ((unsigned char *)buf_mem) +
+ dmabuf_cmsg->frag_offset,
+ dmabuf_cmsg->frag_size);
+ else
+ print_nonzero_bytes(
+ ((unsigned char *)buf_mem) +
+ dmabuf_cmsg->frag_offset,
+ dmabuf_cmsg->frag_size);
+
+ sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END;
+ ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync);
+
+ ret = setsockopt(client_fd, SOL_SOCKET,
+ SO_DEVMEM_DONTNEED, &token,
+ sizeof(token));
+ if (ret != 1)
+ error(1, 0,
+ "SO_DEVMEM_DONTNEED not enough tokens");
+ }
+ if (!is_devmem)
+ error(1, 0, "flow steering error\n");
+
+ printf("total_received=%lu\n", total_received);
+ }
+
+ fprintf(stdout, "%s: ok\n", TEST_PREFIX);
+
+ fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n",
+ page_aligned_frags, non_page_aligned_frags);
+
+cleanup:
+
+ munmap(buf_mem, dmabuf_size);
+ close(client_fd);
+ close(socket_fd);
+ close(buf);
+ close(memfd);
+ close(devfd);
+ ynl_sock_destroy(ys);
+
+ return 0;
+}
+
+void run_devmem_tests(void)
+{
+ struct netdev_queue_dmabuf *queues;
+ int devfd, memfd, buf;
+ struct ynl_sock *ys;
+ size_t dmabuf_size;
+ size_t i = 0;
+
+ dmabuf_size = getpagesize() * NUM_PAGES;
+
+ create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
+
+ /* Configure RSS to divert all traffic from our devmem queues */
+ configure_rss();
+
+ sleep(1);
+
+ queues = malloc(sizeof(*queues) * num_queues);
+
+ for (i = 0; i < num_queues; i++) {
+ queues[i]._present.type = 1;
+ queues[i]._present.idx = 1;
+ queues[i].type = NETDEV_QUEUE_TYPE_RX;
+ queues[i].idx = start_queue + i;
+ }
+
+ if (bind_rx_queue(ifindex, buf, queues, num_queues, &ys))
+ error(1, 0, "Failed to bind\n");
+
+ /* Closing the netlink socket does an implicit unbind */
+ ynl_sock_destroy(ys);
+}
+
+int main(int argc, char *argv[])
+{
+ int is_server = 0, opt;
+
+ while ((opt = getopt(argc, argv, "ls:c:p:v:q:t:f:n:i:d:")) != -1) {
+ switch (opt) {
+ case 'l':
+ is_server = 1;
+ break;
+ case 's':
+ server_ip = optarg;
+ break;
+ case 'c':
+ client_ip = optarg;
+ break;
+ case 'p':
+ port = optarg;
+ break;
+ case 'v':
+ do_validation = atoll(optarg);
+ break;
+ case 'q':
+ num_queues = atoi(optarg);
+ break;
+ case 't':
+ start_queue = atoi(optarg);
+ break;
+ case 'f':
+ ifname = optarg;
+ break;
+ case 'd':
+ ifindex = atoi(optarg);
+ break;
+ case 'i':
+ iterations = atoll(optarg);
+ break;
+ case '?':
+ printf("unknown option: %c\n", optopt);
+ break;
+ }
+ }
+
+ for (; optind < argc; optind++)
+ printf("extra arguments: %s\n", argv[optind]);
+
+ run_devmem_tests();
+
+ if (is_server)
+ return do_server();
+
+ return 0;
+}
--
2.45.1.288.g0e0cd299f1-goog
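
For reference, an example invocation of the resulting test binary (the
binary name, interface, ifindex, addresses, and queue indices below are
placeholders, not values taken from the patch):

  # server side: steer the test flow to the dmabuf-bound queue and
  # validate received payloads against the repeating-seed pattern
  ./ncdevmem -l -f eth1 -d 3 -s 192.168.1.4 -c 192.168.1.5 \
        -p 5201 -v 7 -q 1 -t 15

Here -l selects do_server(), -v 7 makes validate_buffer() expect a seed
that wraps at 7, and -q 1 / -t 15 bind a single RX queue starting at
index 15.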


2024-05-30 20:21:57

by Mina Almasry

[permalink] [raw]
Subject: [PATCH net-next v10 10/14] net: add support for skbs with unreadable frags

For device memory TCP, we expect the skb headers to be available in host
memory for access, and we expect the skb frags to be in device memory
and inaccessible to the host. We expect no mixing and matching of device
memory frags (inaccessible) with host memory frags (accessible) within
the same skb.

Add an skb->unreadable flag which indicates whether the frags in this
skb are device memory frags or not.

__skb_fill_netmem_desc() now checks frags added to skbs for net_iov,
and marks the skb as skb->unreadable accordingly.

Add checks through the network stack to avoid accessing the frags of
devmem skbs and to avoid coalescing devmem skbs with non-devmem skbs.

Signed-off-by: Willem de Bruijn <[email protected]>
Signed-off-by: Kaiyuan Zhang <[email protected]>
Signed-off-by: Mina Almasry <[email protected]>


---

v9: https://lore.kernel.org/netdev/[email protected]/
- change skb->readable to skb->unreadable (Pavel/David).

skb->readable was very complicated, because by default skbs are readable
so the flag needed to be set to true in all code paths where new skbs
were created or cloned. Forgetting to set skb->readable=true in some
paths caused crashes.

Flip it to skb->unreadable so that the default 0 value works well, and
we only need to set it to true when we add unreadable frags.
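
As a sketch of why the default-0 encoding is safer (illustrative, not
part of the patch):

	/* A freshly allocated skb has unreadable == 0, i.e. it is
	 * readable by default, so none of the many skb allocation or
	 * clone paths need to initialize the flag.
	 */
	struct sk_buff *skb = alloc_skb(128, GFP_KERNEL);

	WARN_ON(!skb_frags_readable(skb));	/* cannot fire */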

v6
- skb->dmabuf -> skb->readable (Pavel). Pavel's original suggestion was
to remove the skb->dmabuf flag entirely, but when I looked into it
closely, I found the issue that if we remove the flag we have to
dereference the shinfo(skb) pointer to obtain the first frag, which
can cause a performance regression if it dirties the cache line when
the shinfo(skb) was not really needed. Instead, I converted the
skb->dmabuf flag into a generic skb->readable flag which can be
re-used by io_uring.

Changes in v1:
- Rename devmem -> dmabuf (David).
- Flip skb_frags_not_readable (Jakub).

---
include/linux/skbuff.h | 19 +++++++++++++++--
include/net/tcp.h | 5 +++--
net/core/datagram.c | 6 ++++++
net/core/gro.c | 5 ++++-
net/core/skbuff.c | 48 ++++++++++++++++++++++++++++++++++++++++--
net/ipv4/tcp.c | 3 +++
net/ipv4/tcp_input.c | 13 +++++++++---
net/ipv4/tcp_output.c | 5 ++++-
net/packet/af_packet.c | 4 ++--
9 files changed, 95 insertions(+), 13 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0a4df0025e6dc..9c9be08f96aa7 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -827,6 +827,8 @@ enum skb_tstamp_type {
* @csum_level: indicates the number of consecutive checksums found in
* the packet minus one that have been verified as
* CHECKSUM_UNNECESSARY (max 3)
+ * @unreadable: indicates that at least 1 of the fragments in this skb is
+ * unreadable.
* @dst_pending_confirm: need to confirm neighbour
* @decrypted: Decrypted SKB
* @slow_gro: state present at GRO time, slower prepare step required
@@ -1008,7 +1010,7 @@ struct sk_buff {
#if IS_ENABLED(CONFIG_IP_SCTP)
__u8 csum_not_inet:1;
#endif
-
+ __u8 unreadable:1;
#if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
__u16 tc_index; /* traffic control index */
#endif
@@ -1800,6 +1802,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb)
__skb_zcopy_downgrade_managed(skb);
}

+/* Return true if frags in this skb are readable by the host. */
+static inline bool skb_frags_readable(const struct sk_buff *skb)
+{
+ return !skb->unreadable;
+}
+
static inline void skb_mark_not_on_list(struct sk_buff *skb)
{
skb->next = NULL;
@@ -2516,10 +2524,17 @@ static inline void skb_len_add(struct sk_buff *skb, int delta)
static inline void __skb_fill_netmem_desc(struct sk_buff *skb, int i,
netmem_ref netmem, int off, int size)
{
- struct page *page = netmem_to_page(netmem);
+ struct page *page;

__skb_fill_netmem_desc_noacc(skb_shinfo(skb), i, netmem, off, size);

+ if (netmem_is_net_iov(netmem)) {
+ skb->unreadable = true;
+ return;
+ }
+
+ page = netmem_to_page(netmem);
+
/* Propagate page pfmemalloc to the skb if we can. The problem is
* that not all callers have unique ownership of the page but rely
* on page_is_pfmemalloc doing the right thing(tm).
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 32815a40dea16..181a4834a60fc 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1060,7 +1060,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb)

static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb)
{
- return likely(!TCP_SKB_CB(skb)->eor);
+ return likely(!TCP_SKB_CB(skb)->eor && skb_frags_readable(skb));
}

static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
@@ -1068,7 +1068,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to,
{
return likely(tcp_skb_can_collapse_to(to) &&
mptcp_skb_can_collapse(to, from) &&
- skb_pure_zcopy_same(to, from));
+ skb_pure_zcopy_same(to, from) &&
+ skb_frags_readable(to) == skb_frags_readable(from));
}

/* Events passed to congestion control interface */
diff --git a/net/core/datagram.c b/net/core/datagram.c
index e614cfd8e14a5..b29f881df0e83 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -407,6 +407,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
return 0;
}

+ if (!skb_frags_readable(skb))
+ goto short_copy;
+
/* Copy paged appendix. Hmm... why does this look so complicated? */
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
@@ -619,6 +622,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
if (msg && msg->msg_ubuf && msg->sg_from_iter)
return msg->sg_from_iter(sk, skb, from, length);

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
frag = skb_shinfo(skb)->nr_frags;

while (length && iov_iter_count(from)) {
diff --git a/net/core/gro.c b/net/core/gro.c
index 26f09c3e830b7..7b9d018f552bd 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -422,6 +422,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
{
struct skb_shared_info *pinfo = skb_shinfo(skb);

+ if (WARN_ON_ONCE(!skb_frags_readable(skb)))
+ return;
+
BUG_ON(skb->end - skb->tail < grow);

memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
@@ -443,7 +446,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb)
{
int grow = skb_gro_offset(skb) - skb_headlen(skb);

- if (grow > 0)
+ if (grow > 0 && skb_frags_readable(skb))
gro_pull_from_frag0(skb, grow);
}

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 99cf1ee73836d..7257bfbb8d2c2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1951,6 +1951,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
if (skb_shared(skb) || skb_unclone(skb, gfp_mask))
return -EINVAL;

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
if (!num_frags)
goto release;

@@ -2124,6 +2127,9 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
unsigned int size;
int headerlen;

+ if (!skb_frags_readable(skb))
+ return NULL;
+
if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST))
return NULL;

@@ -2462,6 +2468,9 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
struct sk_buff *n;
int oldheadroom;

+ if (!skb_frags_readable(skb))
+ return NULL;
+
if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST))
return NULL;

@@ -2806,6 +2815,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
*/
int i, k, eat = (skb->tail + delta) - skb->end;

+ if (!skb_frags_readable(skb))
+ return NULL;
+
if (eat > 0 || skb_cloned(skb)) {
if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0,
GFP_ATOMIC))
@@ -2959,6 +2971,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len)
to += copy;
}

+ if (!skb_frags_readable(skb))
+ goto fault;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
skb_frag_t *f = &skb_shinfo(skb)->frags[i];
@@ -3147,6 +3162,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe,
/*
* then map the fragments
*/
+ if (!skb_frags_readable(skb))
+ return false;
+
for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) {
const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];

@@ -3370,6 +3388,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len)
from += copy;
}

+ if (!skb_frags_readable(skb))
+ goto fault;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
int end;
@@ -3449,6 +3470,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
pos = copy;
}

+ if (!skb_frags_readable(skb))
+ return 0;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
@@ -3549,6 +3573,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
pos = copy;
}

+ if (!skb_frags_readable(skb))
+ return 0;
+
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
int end;

@@ -4040,6 +4067,7 @@ static inline void skb_split_inside_header(struct sk_buff *skb,
skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i];

skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags;
+ skb1->unreadable = skb->unreadable;
skb_shinfo(skb)->nr_frags = 0;
skb1->data_len = skb->data_len;
skb1->len += skb1->data_len;
@@ -4054,6 +4082,7 @@ static inline void skb_split_no_header(struct sk_buff *skb,
{
int i, k = 0;
const int nfrags = skb_shinfo(skb)->nr_frags;
+ const int unreadable = skb->unreadable;

skb_shinfo(skb)->nr_frags = 0;
skb1->len = skb1->data_len = skb->len - len;
@@ -4087,6 +4116,12 @@ static inline void skb_split_no_header(struct sk_buff *skb,
pos += size;
}
skb_shinfo(skb1)->nr_frags = k;
+
+ if (skb_shinfo(skb)->nr_frags)
+ skb->unreadable = unreadable;
+
+ if (skb_shinfo(skb1)->nr_frags)
+ skb1->unreadable = unreadable;
}

/**
@@ -4322,6 +4357,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
return block_limit - abs_offset;
}

+ if (!skb_frags_readable(st->cur_skb))
+ return 0;
+
if (st->frag_idx == 0 && !st->frag_data)
st->stepped_offset += skb_headlen(st->cur_skb);

@@ -5934,7 +5972,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
if (to->pp_recycle != from->pp_recycle)
return false;

- if (len <= skb_tailroom(to)) {
+ if (skb_frags_readable(from) != skb_frags_readable(to))
+ return false;
+
+ if (len <= skb_tailroom(to) && skb_frags_readable(from)) {
if (len)
BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
*delta_truesize = 0;
@@ -6111,6 +6152,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len)
if (!pskb_may_pull(skb, write_len))
return -ENOMEM;

+ if (!skb_frags_readable(skb))
+ return -EFAULT;
+
if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
return 0;

@@ -6790,7 +6834,7 @@ void skb_condense(struct sk_buff *skb)
{
if (skb->data_len) {
if (skb->data_len > skb->end - skb->tail ||
- skb_cloned(skb))
+ skb_cloned(skb) || !skb_frags_readable(skb))
return;

/* Nice, we can free page frag(s) right now */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 679cb51aaaf2b..55d85b1df0f39 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2154,6 +2154,9 @@ static int tcp_zerocopy_receive(struct sock *sk,
skb = tcp_recv_skb(sk, seq, &offset);
}

+ if (!skb_frags_readable(skb))
+ break;
+
if (TCP_SKB_CB(skb)->has_rxtstamp) {
tcp_update_recv_tstamps(skb, tss);
zc->msg_flags |= TCP_CMSG_TS;
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 5aadf64e554d8..2b6c448e274be 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5352,6 +5352,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) {
n = tcp_skb_next(skb, list);

+ if (!skb_frags_readable(skb))
+ goto skip_this;
+
/* No new bits? It is possible on ofo queue. */
if (!before(start, TCP_SKB_CB(skb)->end_seq)) {
skb = tcp_collapse_one(sk, skb, list, root);
@@ -5372,17 +5375,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
break;
}

- if (n && n != tail && mptcp_skb_can_collapse(skb, n) &&
+ if (n && n != tail && skb_frags_readable(n) &&
+ mptcp_skb_can_collapse(skb, n) &&
TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) {
end_of_skbs = false;
break;
}

+skip_this:
/* Decided to skip this, advance start seq. */
start = TCP_SKB_CB(skb)->end_seq;
}
if (end_of_skbs ||
- (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) ||
+ !skb_frags_readable(skb))
return;

__skb_queue_head_init(&tmp);
@@ -5424,7 +5430,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
if (!skb ||
skb == tail ||
!mptcp_skb_can_collapse(nskb, skb) ||
- (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)))
+ (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) ||
+ !skb_frags_readable(skb))
goto end;
if (skb_cmp_decrypted(skb, nskb))
goto end;
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index f97e098f18a52..16c7c4839b683 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2344,7 +2344,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)

if (unlikely(TCP_SKB_CB(skb)->eor) ||
tcp_has_tx_tstamp(skb) ||
- !skb_pure_zcopy_same(skb, next))
+ !skb_pure_zcopy_same(skb, next) ||
+ skb_frags_readable(skb) != skb_frags_readable(next))
return false;

len -= skb->len;
@@ -3264,6 +3265,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb)
return false;
if (skb_cloned(skb))
return false;
+ if (!skb_frags_readable(skb))
+ return false;
/* Some heuristics for collapsing over SACK'd could be invented */
if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED)
return false;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index fce3908875912..2ad47a6f37619 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2155,7 +2155,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
}
}

- snaplen = skb->len;
+ snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb);

res = run_filter(skb, sk, snaplen);
if (!res)
@@ -2275,7 +2275,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
}
}

- snaplen = skb->len;
+ snaplen = skb_frags_readable(skb) ? skb->len : skb_headlen(skb);

res = run_filter(skb, sk, snaplen);
if (!res)
--
2.45.1.288.g0e0cd299f1-goog


2024-05-31 06:57:51

by David Wei

[permalink] [raw]
Subject: Re: [PATCH net-next v10 01/14] netdev: add netdev_rx_queue_restart()

On 2024-05-30 13:16, Mina Almasry wrote:
[...]
> +err_start_queue:
> + /* Restarting the queue with old_mem should be successful as we haven't
> + * changed any of the queue configuration, and there is not much we can
> + * do to recover from a failure here.
> + *
> + * WARN if we fail to recover the old rx queue, and at least free
> + * old_mem so we don't also leak that.
> + */
> + if (dev->queue_mgmt_ops->ndo_queue_start(dev, old_mem, rxq_idx)) {
> + WARN(1,
> + "Failed to restart old queue in error path. RX queue %d may be unhealthy.",
> + rxq_idx);
> + dev->queue_mgmt_ops->ndo_queue_mem_free(dev, &old_mem);

This should be ->ndo_queue_mem_free(dev, old_mem).

2024-06-01 13:09:28

by Bagas Sanjaya

[permalink] [raw]
Subject: Re: [PATCH net-next v10 13/14] net: add devmem TCP documentation

On Thu, May 30, 2024 at 08:16:12PM +0000, Mina Almasry wrote:
> Add documentation outlining the usage and details of devmem TCP.
>
> Signed-off-by: Mina Almasry <[email protected]>
>

The doc LGTM, thanks!

Reviewed-by: Bagas Sanjaya <[email protected]>

--
An old man doll... just what I always wanted! - Clara


Attachments:
(No filename) (335.00 B)
signature.asc (235.00 B)
Download all attachments

2024-06-01 13:52:51

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> I'm unsure if the discussion has been resolved yet. Sending the series
> anyway to get reviews/feedback on the (unrelated) rest of the series.

As far as I'm concerned it is not. I've not seen any convincing
argument for more than page/folio allocator including larger order /
huge page and dmabuf.


2024-06-03 12:53:25

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 01/14] netdev: add netdev_rx_queue_restart()

On 5/30/24 21:16, Mina Almasry wrote:
> Add netdev_rx_queue_restart() function to netdev_rx_queue.h
>
> Signed-off-by: David Wei <[email protected]>
> Signed-off-by: Mina Almasry <[email protected]>
>
> ---
...
> diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
> new file mode 100644
> index 0000000000000..b3899358e5a9c
> --- /dev/null
> +++ b/net/core/netdev_rx_queue.c
> @@ -0,0 +1,74 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +
> +#include <linux/netdevice.h>
> +#include <net/netdev_queues.h>
> +#include <net/netdev_rx_queue.h>
> +
> +int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
> +{
> + void *new_mem, *old_mem;
> + int err;

I believe it should also do:

if (!dev->queue_mgmt_ops)
return -EOPNOTSUPP;

> +
> + if (!dev->queue_mgmt_ops->ndo_queue_stop ||
> + !dev->queue_mgmt_ops->ndo_queue_mem_free ||
> + !dev->queue_mgmt_ops->ndo_queue_mem_alloc ||
> + !dev->queue_mgmt_ops->ndo_queue_start)
> + return -EOPNOTSUPP;
> +
> + DEBUG_NET_WARN_ON_ONCE(!rtnl_is_locked());
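
Folding that in, the guard at the top of netdev_rx_queue_restart()
would read (sketch combining the quoted code with the suggestion):

	if (!dev->queue_mgmt_ops ||
	    !dev->queue_mgmt_ops->ndo_queue_stop ||
	    !dev->queue_mgmt_ops->ndo_queue_mem_free ||
	    !dev->queue_mgmt_ops->ndo_queue_mem_alloc ||
	    !dev->queue_mgmt_ops->ndo_queue_start)
		return -EOPNOTSUPP;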

--
Pavel Begunkov

2024-06-03 14:17:48

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig <[email protected]> wrote:
>
> On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> > I'm unsure if the discussion has been resolved yet. Sending the series
> > anyway to get reviews/feedback on the (unrelated) rest of the series.
>
> As far as I'm concerned it is not. I've not seen any convincing
> argument for more than page/folio allocator including larger order /
> huge page and dmabuf.
>

Thanks Christoph, this particular patch series adds dmabuf, so I
assume no objection there. I assume the objection is that you want the
generic, extensible hooks removed.

To be honest, I don't think the hooks are an integral part of the
design, and at this point I think we've argued for them enough. I
think we can easily achieve the same thing with just raw if statements
in a couple of places. We can always add the hooks if and only if we
actually justify many memory providers.

Any objections to me removing the hooks and directing to memory
allocations via simple if statements? Something like (very rough
draft, doesn't compile):

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 92be1aaf18ccc..2cc986455bce6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -557,8 +557,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool
*pool, gfp_t gfp)
return netmem;

/* Slow-path: cache empty, do real allocation */
- if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
- netmem = pool->mp_ops->alloc_pages(pool, gfp);
+ if (unlikely(page_pool_is_dmabuf(pool)))
+ netmem = mp_dmabuf_devmem_alloc_pages(pool, gfp);
else
netmem = __page_pool_alloc_pages_slow(pool, gfp);
return netmem;
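
For completeness: page_pool_is_dmabuf() above does not exist in the
series; one plausible definition, assuming the pool records its dmabuf
binding in pool->mp_priv as the devmem patches do, would be:

	static inline bool page_pool_is_dmabuf(const struct page_pool *pool)
	{
		/* Illustrative only: treat any pool with a memory
		 * provider private pointer as dmabuf-backed.
		 */
		return !!pool->mp_priv;
	}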


--
Thanks,
Mina

2024-06-03 14:53:55

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/3/24 15:17, Mina Almasry wrote:
> On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig <[email protected]> wrote:
>>
>> On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
>>> I'm unsure if the discussion has been resolved yet. Sending the series
>>> anyway to get reviews/feedback on the (unrelated) rest of the series.
>>
>> As far as I'm concerned it is not. I've not seen any convincing
>> argument for more than page/folio allocator including larger order /
>> huge page and dmabuf.
>>
>
> Thanks Christoph, this particular patch series adds dmabuf, so I
> assume no objection there. I assume the objection is that you want the
> generic, extensible hooks removed.
>
> To be honest, I don't think the hooks are an integral part of the
> design, and at this point I think we've argued for them enough. I
> think we can easily achieve the same thing with just raw if statements
> in a couple of places. We can always add the hooks if and only if we
> actually justify many memory providers.
>
> Any objections to me removing the hooks and directing to memory
> allocations via simple if statements? Something like (very rough
> draft, doesn't compile):

The question for Christoph is what exactly is the objection here? Why we
would not be using well defined ops when we know there will be more
users? Repeating what I said in the last thread, for io_uring it's used
to implement the flow of buffers from userspace to the kernel, the ABI,
which is orthogonal to the issue of what memory type it is and how it
came there. And even if you mandate unnecessary dmabuf condoms for user
memory in one form or another IMHO for no clear reason, the callbacks
(or yet another if-else) would still be needed.

Sure, Mina can drop and hard-code the devmem path to ease the pain for
him and delay the discussion, but then shortly after I will be
re-sending same shit. So, what's the convincing argument _not_ to have
it?


>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 92be1aaf18ccc..2cc986455bce6 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -557,8 +557,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool
> *pool, gfp_t gfp)
> return netmem;
>
> /* Slow-path: cache empty, do real allocation */
> - if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
> - netmem = pool->mp_ops->alloc_pages(pool, gfp);
> + if (unlikely(page_pool_is_dmabuf(pool)))
> + netmem = mp_dmabuf_devmem_alloc_pages(pool, gfp);
> else
> netmem = __page_pool_alloc_pages_slow(pool, gfp);
> return netmem;
>
>

--
Pavel Begunkov

2024-06-03 15:44:37

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 3, 2024 at 7:52 AM Pavel Begunkov <[email protected]> wrote:
>
> On 6/3/24 15:17, Mina Almasry wrote:
> > On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig <[email protected]> wrote:
> >>
> >> On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> >>> I'm unsure if the discussion has been resolved yet. Sending the series
> >>> anyway to get reviews/feedback on the (unrelated) rest of the series.
> >>
> >> As far as I'm concerned it is not. I've not seen any convincing
> >> argument for more than page/folio allocator including larger order /
> >> huge page and dmabuf.
> >>
> >
> > Thanks Christoph, this particular patch series adds dmabuf, so I
> > assume no objection there. I assume the objection is that you want the
> > generic, extensible hooks removed.
> >
> > To be honest, I don't think the hooks are an integral part of the
> > design, and at this point I think we've argued for them enough. I
> > think we can easily achieve the same thing with just raw if statements
> > in a couple of places. We can always add the hooks if and only if we
> > actually justify many memory providers.
> >
> > Any objections to me removing the hooks and directing to memory
> > allocations via simple if statements? Something like (very rough
> > draft, doesn't compile):
>
> The question for Christoph is what exactly is the objection here? Why we
> would not be using well defined ops when we know there will be more
> users? Repeating what I said in the last thread, for io_uring it's used
> to implement the flow of buffers from userspace to the kernel, the ABI,
> which is orthogonal to the issue of what memory type it is and how it
> came there. And even if you mandate unnecessary dmabuf condoms for user
> memory in one form or another IMHO for no clear reason, the callbacks
> (or yet another if-else) would still be needed.
>
> Sure, Mina can drop and hard-code the devmem path to ease the pain for
> him and delay the discussion, but then shortly after I will be
> re-sending same shit.

You don't need to re-send the same ops again, right? You can add io
uring support without ops. Something like:

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 92be1aaf18ccc..2cc986455bce6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -557,8 +557,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool
*pool, gfp_t gfp)
return netmem;

/* Slow-path: cache empty, do real allocation */
- if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
- netmem = pool->mp_ops->alloc_pages(pool, gfp);
+ if (unlikely(page_pool_is_dmabuf(pool)))
+ netmem = mp_dmabuf_devmem_alloc_pages(pool, gfp);
+ else if (unlikely(page_pool_is_iouring(pool)))
+ netmem = mp_io_uring_alloc_pages(pool, gfp);
else
netmem = __page_pool_alloc_pages_slow(pool, gfp);
return netmem;

So IMO, the ops themselves, which Christoph is repeatedly nacking, are
not that important.

I humbly think the energy should be spent convincing maintainers of
the use case of io uring memory, not the ops. The ops are a cosmetic
change to the code, and can be added later. Christoph is nacking the
ops because it gives people too much rope [1].

But if you disagree and think the ops themselves are important for a
reason I missed, I'm happy waiting until agreement is reached here.
Sorry, just voicing my 2 cents.

[1] https://lore.kernel.org/netdev/[email protected]/

--
Thanks,
Mina

2024-06-04 10:13:57

by Paolo Abeni

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> diff --git a/net/core/devmem.c b/net/core/devmem.c
> index d82f92d7cf9ce..d5fac8edf621d 100644
> --- a/net/core/devmem.c
> +++ b/net/core/devmem.c
> @@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
> kfree(owner);
> }
>
> +static inline dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)

Minor nit: please no 'inline' keyword in c files.

Thanks,

Paolo


2024-06-04 10:25:35

by Paolo Abeni

[permalink] [raw]
Subject: Re: [PATCH net-next v10 06/14] page_pool: convert to use netmem

On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
> index 6834356b2d2ae..c5b6383ff2760 100644
> --- a/include/trace/events/page_pool.h
> +++ b/include/trace/events/page_pool.h
> @@ -42,51 +42,52 @@ TRACE_EVENT(page_pool_release,
> TRACE_EVENT(page_pool_state_release,
>
> TP_PROTO(const struct page_pool *pool,
> - const struct page *page, u32 release),
> + netmem_ref netmem, u32 release),

This causes a sparse warning, as the caller is still passing a 'page'
argument.

>
> - TP_ARGS(pool, page, release),
> + TP_ARGS(pool, netmem, release),
>
> TP_STRUCT__entry(
> __field(const struct page_pool *, pool)
> - __field(const struct page *, page)
> + __field(netmem_ref, netmem)
> __field(u32, release)
> __field(unsigned long, pfn)
> ),
>
> TP_fast_assign(
> __entry->pool = pool;
> - __entry->page = page;
> + __entry->netmem = netmem;
> __entry->release = release;
> - __entry->pfn = page_to_pfn(page);
> + __entry->pfn = netmem_to_pfn(netmem);
> ),
>
> - TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
> - __entry->pool, __entry->page, __entry->pfn, __entry->release)
> + TP_printk("page_pool=%p netmem=%lu pfn=0x%lx release=%u",
> + __entry->pool, (__force unsigned long)__entry->netmem,
> + __entry->pfn, __entry->release)
> );
>
> TRACE_EVENT(page_pool_state_hold,
>
> TP_PROTO(const struct page_pool *pool,
> - const struct page *page, u32 hold),
> + netmem_ref netmem, u32 hold),

Same sparse warning here: the caller still passes a 'page'.
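
The caller-side fix is a conversion at the call site, e.g. (sketch;
page_to_netmem() is the helper this series introduces):

	trace_page_pool_state_release(pool, page_to_netmem(page),
				      release);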

Thanks,

Paolo


2024-06-04 10:47:21

by Paolo Abeni

[permalink] [raw]
Subject: Re: [PATCH net-next v10 10/14] net: add support for skbs with unreadable frags

On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> diff --git a/net/core/gro.c b/net/core/gro.c
> index 26f09c3e830b7..7b9d018f552bd 100644
> --- a/net/core/gro.c
> +++ b/net/core/gro.c
> @@ -422,6 +422,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
> {
> struct skb_shared_info *pinfo = skb_shinfo(skb);
>
> + if (WARN_ON_ONCE(!skb_frags_readable(skb)))
> + return;
> +
> BUG_ON(skb->end - skb->tail < grow);
>
> memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
> @@ -443,7 +446,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb)
> {
> int grow = skb_gro_offset(skb) - skb_headlen(skb);
>
> - if (grow > 0)
> + if (grow > 0 && skb_frags_readable(skb))
> gro_pull_from_frag0(skb, grow);
> }

I'm unsure if this was already mentioned, so please pardon the possible
duplicate...

The above code is quite critical performance-wise, and the previous
patch already prevents frag0 from being set to a non-paged frag, so what
about dropping the above additional checks?

thanks!

Paolo


2024-06-04 10:54:20

by Paolo Abeni

[permalink] [raw]
Subject: Re: [PATCH net-next v10 11/14] tcp: RX path for devmem TCP

On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> @@ -2317,6 +2318,213 @@ static int tcp_inq_hint(struct sock *sk)
> return inq;
> }
>
> +/* batch __xa_alloc() calls and reduce xa_lock()/xa_unlock() overhead. */
> +struct tcp_xa_pool {
> + u8 max; /* max <= MAX_SKB_FRAGS */
> + u8 idx; /* idx <= max */
> + __u32 tokens[MAX_SKB_FRAGS];
> + netmem_ref netmems[MAX_SKB_FRAGS];
> +};
> +
> +static void tcp_xa_pool_commit(struct sock *sk, struct tcp_xa_pool *p,
> + bool lock)
> +{
> + int i;
> +
> + if (!p->max)
> + return;
> + if (lock)
> + xa_lock_bh(&sk->sk_user_frags);

The conditional lock here confuses sparse.

I think you can avoid it by providing an unlocked version (no need to
check for '!p->max'; the only caller wanting the unlocked version
already performs such a check) and a locked one calling the other.
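
A sketch of that split (function names assumed; the batched
__xa_alloc() loop from the patch is elided):

	static void tcp_xa_pool_commit_locked(struct sock *sk,
					      struct tcp_xa_pool *p)
	{
		/* Caller holds the sk->sk_user_frags lock; do the
		 * batched __xa_alloc() work here.
		 */
	}

	static void tcp_xa_pool_commit(struct sock *sk,
				       struct tcp_xa_pool *p)
	{
		if (!p->max)
			return;

		xa_lock_bh(&sk->sk_user_frags);
		tcp_xa_pool_commit_locked(sk, p);
		xa_unlock_bh(&sk->sk_user_frags);
	}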

Cheers,

Paolo


2024-06-04 16:16:36

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Tue, 04 Jun 2024 12:13:15 +0200
Paolo Abeni <[email protected]> wrote:

> On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > index d82f92d7cf9ce..d5fac8edf621d 100644
> > --- a/net/core/devmem.c
> > +++ b/net/core/devmem.c
> > @@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
> > kfree(owner);
> > }
> >
> > +static inline dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
>
> Minor nit: please no 'inline' keyword in c files.

I'm curious. Is this a networking rule? I use 'inline' in my C code all the
time.

-- Steve

2024-06-04 16:43:47

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Tue, 4 Jun 2024 13:31:58 -0300
Jason Gunthorpe <[email protected]> wrote:

> On Tue, Jun 04, 2024 at 12:15:51PM -0400, Steven Rostedt wrote:
> > On Tue, 04 Jun 2024 12:13:15 +0200
> > Paolo Abeni <[email protected]> wrote:
> >
> > > On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> > > > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > > > index d82f92d7cf9ce..d5fac8edf621d 100644
> > > > --- a/net/core/devmem.c
> > > > +++ b/net/core/devmem.c
> > > > @@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
> > > > kfree(owner);
> > > > }
> > > >
> > > > +static inline dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
> > >
> > > Minor nit: please no 'inline' keyword in c files.
> >
> > I'm curious. Is this a networking rule? I use 'inline' in my C code all the
> > time.
>
> It mostly comes from Documentation/process/coding-style.rst:
>
> 15) The inline disease
> ----------------------
>
> There appears to be a common misperception that gcc has a magic "make me
> faster" speedup option called ``inline``. While the use of inlines can be
> appropriate (for example as a means of replacing macros, see Chapter 12), it
> very often is not. Abundant use of the inline keyword leads to a much bigger
> kernel, which in turn slows the system as a whole down, due to a bigger
> icache footprint for the CPU and simply because there is less memory
> available for the pagecache. Just think about it; a pagecache miss causes a
> disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles
> that can go into these 5 milliseconds.
>
> A reasonable rule of thumb is to not put inline at functions that have more
> than 3 lines of code in them. An exception to this rule are the cases where
> a parameter is known to be a compiletime constant, and as a result of this
> constantness you *know* the compiler will be able to optimize most of your
> function away at compile time. For a good example of this later case, see
> the kmalloc() inline function.
>
> Often people argue that adding inline to functions that are static and used
> only once is always a win since there is no space tradeoff. While this is
> technically correct, gcc is capable of inlining these automatically without
> help, and the maintenance issue of removing the inline when a second user
> appears outweighs the potential value of the hint that tells gcc to do
> something it would have done anyway.
>

Interesting, as I sped up the ftrace ring buffer by a substantial amount by
adding strategic __always_inline, noinline, likely() and unlikely()
throughout the code. It had to do with what was considered the fast path
and slow path, and not actually the size of the function. gcc got it
horribly wrong.

-- Steve

2024-06-04 17:06:33

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Tue, Jun 04, 2024 at 12:15:51PM -0400, Steven Rostedt wrote:
> On Tue, 04 Jun 2024 12:13:15 +0200
> Paolo Abeni <[email protected]> wrote:
>
> > On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> > > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > > index d82f92d7cf9ce..d5fac8edf621d 100644
> > > --- a/net/core/devmem.c
> > > +++ b/net/core/devmem.c
> > > @@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
> > > kfree(owner);
> > > }
> > >
> > > +static inline dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
> >
> > Minor nit: please no 'inline' keyword in c files.
>
> I'm curious. Is this a networking rule? I use 'inline' in my C code all the
> time.

It mostly comes from Documentation/process/coding-style.rst:

15) The inline disease
----------------------

There appears to be a common misperception that gcc has a magic "make me
faster" speedup option called ``inline``. While the use of inlines can be
appropriate (for example as a means of replacing macros, see Chapter 12), it
very often is not. Abundant use of the inline keyword leads to a much bigger
kernel, which in turn slows the system as a whole down, due to a bigger
icache footprint for the CPU and simply because there is less memory
available for the pagecache. Just think about it; a pagecache miss causes a
disk seek, which easily takes 5 milliseconds. There are a LOT of cpu cycles
that can go into these 5 milliseconds.

A reasonable rule of thumb is to not put inline at functions that have more
than 3 lines of code in them. An exception to this rule are the cases where
a parameter is known to be a compiletime constant, and as a result of this
constantness you *know* the compiler will be able to optimize most of your
function away at compile time. For a good example of this later case, see
the kmalloc() inline function.

Often people argue that adding inline to functions that are static and used
only once is always a win since there is no space tradeoff. While this is
technically correct, gcc is capable of inlining these automatically without
help, and the maintenance issue of removing the inline when a second user
appears outweighs the potential value of the hint that tells gcc to do
something it would have done anyway.

Jason
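
A minimal illustration of the compile-time-constant exception described
above (a generic example, not taken from the kernel):

	static inline unsigned int order_of(unsigned int n)
	{
		/* With a constant n the compiler folds the whole loop
		 * to a constant, so inlining costs nothing; with a
		 * variable n it would bloat every call site.
		 */
		unsigned int order = 0;

		while ((1U << order) < n)
			order++;
		return order;
	}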

2024-06-04 23:45:13

by Andrew Lunn

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

> Interesting, as I sped up the ftrace ring buffer by a substantial amount by
> adding strategic __always_inline, noinline, likely() and unlikely()
> throughout the code. It had to do with what was considered the fast path
> and slow path, and not actually the size of the function. gcc got it
> horribly wrong.

And what did the compiler people say when you reported gcc was getting
it wrong?

Our assumption is, the compiler is better than a human at deciding
this. Or at least, a human who does not spend a long time profiling
and tuning. If this assumption is not true, we probably should be
trying to figure out why, and improving the compiler when
possible. That will benefit everybody.

Andrew


2024-06-05 00:34:25

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Wed, 5 Jun 2024 01:44:37 +0200
Andrew Lunn <[email protected]> wrote:

> > Interesting, as I sped up the ftrace ring buffer by a substantial amount by
> > adding strategic __always_inline, noinline, likely() and unlikely()
> > throughout the code. It had to do with what was considered the fast path
> > and slow path, and not actually the size of the function. gcc got it
> > horribly wrong.
>
> And what did the compiler people say when you reported gcc was getting
> it wrong?
>
> Our assumption is, the compiler is better than a human at deciding
> this. Or at least, a human who does not spend a long time profiling
> and tuning. If this assumption is not true, we probably should be
> trying to figure out why, and improving the compiler when
> possible. That will benefit everybody.
>

How is the compiler going to know which path is going to be taken the most?
There's two main paths in the ring buffer logic. One when an event stays on
the sub-buffer, the other when the event crosses over to a new sub buffer.
As there's 100s of events that happen on the same sub-buffer for every one
time there's a cross over, I optimized the paths that stayed on the
sub-buffer, which caused the time for those events to go from 250ns down to
150ns! That's a 40% speedup.

I added the unlikely/likely and 'always_inline' and 'noinline' paths to
make sure the "staying on the buffer" path was always the hot path, and
keeping it tight in cache.

How is a compiler going to know that?

-- Steve
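
Schematically, the kind of annotation being described looks like this
(illustrative names, not the actual ring buffer code):

	if (likely(tail + length <= sub_buffer_size)) {
		/* Hot path: the event fits in the current sub-buffer;
		 * keep this inlined and tight in the icache.
		 */
		write_event(buffer, event, length);
	} else {
		/* Cold path: taken once per hundreds of events, so a
		 * function call is acceptable here.
		 */
		move_to_new_sub_buffer(buffer, event, length);
	}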

2024-06-05 00:53:10

by Andrew Lunn

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

> How is the compiler going to know which path is going to be taken the most?
> There's two main paths in the ring buffer logic. One when an event stays on
> the sub-buffer, the other when the event crosses over to a new sub buffer.
> As there's 100s of events that happen on the same sub-buffer for every one
> time there's a cross over, I optimized the paths that stayed on the
> sub-buffer, which caused the time for those events to go from 250ns down to
> 150ns! That's a 40% speedup.
>
> I added the unlikely/likely and 'always_inline' and 'noinline' paths to
> make sure the "staying on the buffer" path was always the hot path, and
> keeping it tight in cache.
>
> How is a compiler going to know that?

It might have some heuristics to try to guess unlikely/likely, but
that is not what we are talking about here.

How much difference did 'always_inline' and 'noinline' make? Hopefully
the likely is enough of a clue it should prefer to inline whatever is
in that branch, where as for the unlikely case it can do a function
call.

But compilers is not my thing, which is why i would reach out to the
compiler people and ask them, is it expected to get this wrong, could
it be made better?

Andrew

2024-06-05 09:46:18

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 03, 2024 at 07:17:05AM -0700, Mina Almasry wrote:
> On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig <[email protected]> wrote:
> >
> > On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
> > > I'm unsure if the discussion has been resolved yet. Sending the series
> > > anyway to get reviews/feedback on the (unrelated) rest of the series.
> >
> > As far as I'm concerned it is not. I've not seen any convincing
> > argument for more than page/folio allocator including larger order /
> > huge page and dmabuf.
> >
>
> Thanks Christoph, this particular patch series adds dmabuf, so I
> assume no objection there. I assume the objection is that you want the
> generic, extensible hooks removed.

Exactly! Note that this isn't a review of the dmabuf bits as there
are people more qualified with me.

> To be honest, I don't think the hooks are an integral part of the
> design, and at this point I think we've argued for them enough. I
> think we can easily achieve the same thing with just raw if statements
> in a couple of places. We can always add the hooks if and only if we
> actually justify many memory providers.
>
> Any objections to me removing the hooks and directing to memory
> allocations via simple if statements? Something like (very rough
> draft, doesn't compile):

I like this approach, thanks!

You might still want to keep the static key, though.
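
Combining that with the draft above would look roughly like this
(sketch; the dmabuf allocator's argument list is assumed):

	/* Slow-path: cache empty, do real allocation */
	if (static_branch_unlikely(&page_pool_mem_providers) &&
	    page_pool_is_dmabuf(pool))
		netmem = mp_dmabuf_devmem_alloc_pages(pool, gfp);
	else
		netmem = __page_pool_alloc_pages_slow(pool, gfp);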

2024-06-05 12:07:53

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 03, 2024 at 03:52:32PM +0100, Pavel Begunkov wrote:
> The question for Christoph is what exactly is the objection here? Why we
> would not be using well defined ops when we know there will be more
> users?

The point is that there should be no more users. If you need another
case you are doing something very wrong.


2024-06-06 01:43:56

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Wed, 5 Jun 2024 02:52:29 +0200
Andrew Lunn <[email protected]> wrote:

> > How is a compiler going to know that?
>
> It might have some heuristics to try to guess unlikely/likely, but
> that is not what we are talking about here.
>
> How much difference did 'always_inline' and 'noinline' make? Hopefully
> the likely is enough of a clue it should prefer to inline whatever is
> in that branch, where as for the unlikely case it can do a function
> call.

Perhaps, but one of the issues was that I have lots of small functions that
are used all over the place, and gcc tends to change them to function
calls, instead of duplicating them. I did this analysis back in 2016, so
maybe it became better.

>
> But compilers is not my thing, which is why i would reach out to the
> compiler people and ask them, is it expected to get this wrong, could
> it be made better?

Well, I actually do work with the compiler folks, and we are actually
trying to get a session at GNU Cauldron where Linux kernel folks can talk
with the gcc compiler folks.

I've stared at so many objdump outputs, that I can now pretty much see the
assembly that my C code makes ;-)

-- Steve

2024-06-06 01:49:00

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH net-next v10 06/14] page_pool: convert to use netmem

On Thu, 30 May 2024 20:16:05 +0000
Mina Almasry <[email protected]> wrote:

> @@ -42,51 +42,52 @@ TRACE_EVENT(page_pool_release,
> TRACE_EVENT(page_pool_state_release,
>
> TP_PROTO(const struct page_pool *pool,
> - const struct page *page, u32 release),
> + netmem_ref netmem, u32 release),
>
> - TP_ARGS(pool, page, release),
> + TP_ARGS(pool, netmem, release),
>
> TP_STRUCT__entry(
> __field(const struct page_pool *, pool)
> - __field(const struct page *, page)
> + __field(netmem_ref, netmem)

Why make this of type "netmem_ref" and not just "unsigned long"?

> __field(u32, release)
> __field(unsigned long, pfn)
> ),
>
> TP_fast_assign(
> __entry->pool = pool;
> - __entry->page = page;
> + __entry->netmem = netmem;

You could have this be:

__entry->netmem = (__force unsigned long)netmem;

> __entry->release = release;
> - __entry->pfn = page_to_pfn(page);
> + __entry->pfn = netmem_to_pfn(netmem);
> ),
>
> - TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
> - __entry->pool, __entry->page, __entry->pfn, __entry->release)
> + TP_printk("page_pool=%p netmem=%lu pfn=0x%lx release=%u",
> + __entry->pool, (__force unsigned long)__entry->netmem,

And not have to expose the above text to user space (look at the format
file it produces).

It being of type "netmem_ref" in the ring buffer is useless.

-- Steve


> + __entry->pfn, __entry->release)
> );
>
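
Folding both suggestions together, the event definition would read
(sketch):

	TP_STRUCT__entry(
		__field(const struct page_pool *, pool)
		__field(unsigned long, netmem)
		__field(u32, release)
		__field(unsigned long, pfn)
	),

	TP_fast_assign(
		__entry->pool = pool;
		__entry->netmem = (__force unsigned long)netmem;
		__entry->release = release;
		__entry->pfn = netmem_to_pfn(netmem);
	),

	TP_printk("page_pool=%p netmem=%lu pfn=0x%lx release=%u",
		  __entry->pool, __entry->netmem,
		  __entry->pfn, __entry->release)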

2024-06-06 17:00:22

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 10/14] net: add support for skbs with unreadable frags

On Thu, Jun 6, 2024 at 9:49 AM Mina Almasry <[email protected]> wrote:
>
> On Tue, Jun 4, 2024 at 3:46 AM Paolo Abeni <[email protected]> wrote:
> >
> > On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> > > diff --git a/net/core/gro.c b/net/core/gro.c
> > > index 26f09c3e830b7..7b9d018f552bd 100644
> > > --- a/net/core/gro.c
> > > +++ b/net/core/gro.c
> > > @@ -422,6 +422,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
> > > {
> > > struct skb_shared_info *pinfo = skb_shinfo(skb);
> > >
> > > + if (WARN_ON_ONCE(!skb_frags_readable(skb)))
> > > + return;
> > > +
> > > BUG_ON(skb->end - skb->tail < grow);
> > >
> > > memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
> > > @@ -443,7 +446,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb)
> > > {
> > > int grow = skb_gro_offset(skb) - skb_headlen(skb);
> > >
> > > - if (grow > 0)
> > > + if (grow > 0 && skb_frags_readable(skb))
> > > gro_pull_from_frag0(skb, grow);
> > > }
> >
> > I'm unsure if this was already mentioned, so please pardon the possible
> > duplicate...
> >
> > The above code is quite critical performance-wise, and the previous
> > patch already prevents frag0 from being set to a non-paged frag,
>
>
> Hi Paolo!
>
> The last patch, d4d25dd237a61 ("net: support non paged skb frags"),
> AFAICT doesn't prevent frag0 from being a non-paged frag. What we do
> is set ->frag0=skb->data, then prevent it from being reset to
> skb_frag_address() for non-paged skbs. ->frag0 will likely actually be
> a bad value for non-paged frags, so we need to check in
> gro_pull_from_frag0() so that we don't accidentally pull from a bad
> ->frag0 value.
>
> What I think I should do here is what you said. I should make sure
> frag0 and frag0_len is not set if it's a non-paged frag. Then, we
> don't need special checks in gro_pull_from_frag0 I think, because
> skb_gro_may_pull() should detect that frag0_len is 0 and should
> prevent a pull.
>
> I will apply this fix to the next iteration for your review. Let me
> know if I missed something.
>
>

Actually, sorry you're right. As written, d4d25dd237a61 ("net: support
non paged skb frags") prevents frag0 from being a non-paged frag. I
can just drop these excessive checks with no downside. Sorry for the
noise!

--
Thanks,
Mina

2024-06-06 17:30:47

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 10/14] net: add support for skbs with unreadable frags

On Tue, Jun 4, 2024 at 3:46 AM Paolo Abeni <[email protected]> wrote:
>
> On Thu, 2024-05-30 at 20:16 +0000, Mina Almasry wrote:
> > diff --git a/net/core/gro.c b/net/core/gro.c
> > index 26f09c3e830b7..7b9d018f552bd 100644
> > --- a/net/core/gro.c
> > +++ b/net/core/gro.c
> > @@ -422,6 +422,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
> > {
> > struct skb_shared_info *pinfo = skb_shinfo(skb);
> >
> > + if (WARN_ON_ONCE(!skb_frags_readable(skb)))
> > + return;
> > +
> > BUG_ON(skb->end - skb->tail < grow);
> >
> > memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
> > @@ -443,7 +446,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb)
> > {
> > int grow = skb_gro_offset(skb) - skb_headlen(skb);
> >
> > - if (grow > 0)
> > + if (grow > 0 && skb_frags_readable(skb))
> > gro_pull_from_frag0(skb, grow);
> > }
>
> I'm unsure if this was already mentioned, so please pardon the possible
> duplicate...
>
> The above code is quite critical performance-wise, and the previous
> patch already prevents frag0 from being set to a non-paged frag,


Hi Paolo!

The last patch, d4d25dd237a61 ("net: support non paged skb frags"),
AFAICT doesn't prevent frag0 from being a non-paged frag. What we do
is set ->frag0=skb->data, then prevent it from being reset to
skb_frag_address() for non-paged skbs. ->frag0 will likely actually be
a bad value for non-paged frags, so we need to check in
gro_pull_from_frag0() so that we don't accidentally pull from a bad
->frag0 value.

What I think I should do here is what you said. I should make sure
frag0 and frag0_len is not set if it's a non-paged frag. Then, we
don't need special checks in gro_pull_from_frag0 I think, because
skb_gro_may_pull() should detect that frag0_len is 0 and should
prevent a pull.

I will apply this fix to the next iteration for your review. Let me
know if I missed something.


> so what
> about dropping the above additional checks?
>


--
Thanks,
Mina

2024-06-07 07:57:14

by Niklas Schnelle

[permalink] [raw]
Subject: Re: [PATCH net-next v10 05/14] netdev: netdevice devmem allocator

On Tue, 2024-06-04 at 20:27 -0400, Steven Rostedt wrote:
> On Wed, 5 Jun 2024 01:44:37 +0200
> Andrew Lunn <[email protected]> wrote:
>
> > > Interesting, as I sped up the ftrace ring buffer by a substantial amount by
> > > adding strategic __always_inline, noinline, likely() and unlikely()
> > > throughout the code. It had to do with what was considered the fast path
> > > and slow path, and not actually the size of the function. gcc got it
> > > horribly wrong.
> >
> > And what did the compiler people say when you reported gcc was getting
> > it wrong?
> >
> > Our assumption is, the compiler is better than a human at deciding
> > this. Or at least, a human who does not spend a long time profiling
> > and tuning. If this assumption is not true, we probably should be
> > trying to figure out why, and improving the compiler when
> > possible. That will benefit everybody.
> >
>
> How is the compiler going to know which path is going to be taken the most?
> There's two main paths in the ring buffer logic. One when an event stays on
> the sub-buffer, the other when the event crosses over to a new sub-buffer.
> As there's 100s of events that happen on the same sub-buffer for every one
> time there's a crossover, I optimized the paths that stayed on the
> sub-buffer, which caused the time for those events to go from 250ns down to
> 150ns! That's a 40% speedup.
>
> I added the unlikely/likely and 'always_inline' and 'noinline' annotations to
> make sure the "staying on the buffer" path was always the hot path,
> keeping it tight in cache.
>
> How is a compiler going to know that?
>
> -- Steve
>
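
The annotations Steve describes look roughly like this (a generic
sketch with made-up names, not the actual ring-buffer code):

	/* Hot path: the event fits in the current sub-buffer. The
	 * likely() hint plus forced inlining keeps this path short and
	 * tight in the instruction cache.
	 */
	static __always_inline void *rb_reserve(struct rb_ctx *rb, int len)
	{
		if (likely(rb->write + len <= rb->subbuf_size))
			return rb->subbuf + rb->write;

		/* Cold path: crossing to a new sub-buffer happens once per
		 * hundreds of events; rb_new_subbuf() is marked noinline so
		 * it stays out of the hot path's code footprint.
		 */
		return rb_new_subbuf(rb, len);
	}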

Isn't this basically a perfect example of something where
profile-guided optimization should work?

Thanks,
Niklas

2024-06-07 12:41:29

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 06/14] page_pool: convert to use netmem

On 6/6/24 02:48, Steven Rostedt wrote:
> On Thu, 30 May 2024 20:16:05 +0000
> Mina Almasry <[email protected]> wrote:
>
>> @@ -42,51 +42,52 @@ TRACE_EVENT(page_pool_release,
>> TRACE_EVENT(page_pool_state_release,
>>
>> TP_PROTO(const struct page_pool *pool,
>> - const struct page *page, u32 release),
>> + netmem_ref netmem, u32 release),
>>
>> - TP_ARGS(pool, page, release),
>> + TP_ARGS(pool, netmem, release),
>>
>> TP_STRUCT__entry(
>> __field(const struct page_pool *, pool)
>> - __field(const struct page *, page)
>> + __field(netmem_ref, netmem)
>
> Why make this of type "netmem_ref" and not just "unsigned long"?
>
>> __field(u32, release)
>> __field(unsigned long, pfn)
>> ),
>>
>> TP_fast_assign(
>> __entry->pool = pool;
>> - __entry->page = page;
>> + __entry->netmem = netmem;
>
> You could have this be:
>
> __entry->netmem = (__force unsigned long)netmem;
>
>> __entry->release = release;
>> - __entry->pfn = page_to_pfn(page);
>> + __entry->pfn = netmem_to_pfn(netmem);
>> ),
>>
>> - TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
>> - __entry->pool, __entry->page, __entry->pfn, __entry->release)
>> + TP_printk("page_pool=%p netmem=%lu pfn=0x%lx release=%u",
>> + __entry->pool, (__force unsigned long)__entry->netmem,
>
> And not have to expose the above text to user space (look at the format
> file it produces).
>
> It being of type "netmem_ref" in the ring buffer is useless.

netmem is a pointer with one bit serving as a flag; considering the
mangling, it might be better to %p it and perhaps also print its
type (page* vs iov) separately.
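
Something like this would do it (a sketch; it assumes the field is
stored as an unsigned long per Steve's suggestion, and that NET_IOV is
the low-bit flag as defined in this series):

	/* Mask the type bit off for a stable %p, print the type on its own */
	TP_printk("page_pool=%p netmem=%p is_net_iov=%lu pfn=0x%lx release=%u",
		  __entry->pool,
		  (void *)(__entry->netmem & ~NET_IOV),
		  __entry->netmem & NET_IOV,
		  __entry->pfn, __entry->release)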

--
Pavel Begunkov

2024-06-07 13:46:15

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/5/24 09:24, Christoph Hellwig wrote:
> On Mon, Jun 03, 2024 at 03:52:32PM +0100, Pavel Begunkov wrote:
>> The question for Christoph is what exactly is the objection here? Why we
>> would not be using well defined ops when we know there will be more
>> users?
>
> The point is that there should be no more users. If you need another

Does that "No more" stops after devmem tcp? Or after io_uring
proposal? For the latter I explained why io_uring has to do it
for good design and that's it's not even related to the memory
type used.

> case you are doing something very wrong.

That's not a very illuminating answer.

--
Pavel Begunkov

2024-06-07 13:47:20

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/3/24 16:43, Mina Almasry wrote:
> On Mon, Jun 3, 2024 at 7:52 AM Pavel Begunkov <[email protected]> wrote:
>>
>> On 6/3/24 15:17, Mina Almasry wrote:
>>> On Fri, May 31, 2024 at 10:35 PM Christoph Hellwig <[email protected]> wrote:
>>>>
>>>> On Thu, May 30, 2024 at 08:16:01PM +0000, Mina Almasry wrote:
>>>>> I'm unsure if the discussion has been resolved yet. Sending the series
>>>>> anyway to get reviews/feedback on the (unrelated) rest of the series.
>>>>
>>>> As far as I'm concerned it is not. I've not seen any convincing
>>>> argument for more than the page/folio allocator (including larger
>>>> order / huge pages) and dmabuf.
>>>>
>>>
>>> Thanks Christoph, this particular patch series adds dmabuf, so I
>>> assume no objection there. I assume the objection is that you want the
>>> generic, extensible hooks removed.
>>>
>>> To be honest, I don't think the hooks are an integral part of the
>>> design, and at this point I think we've argued for them enough. I
>>> think we can easily achieve the same thing with just raw if statements
>>> in a couple of places. We can always add the hooks if and only if we
>>> actually justify many memory providers.
>>>
>>> Any objections to me removing the hooks and directing memory
>>> allocations via simple if statements? Something like (very rough
>>> draft, doesn't compile):
>>
>> The question for Christoph is what exactly is the objection here? Why we
>> would not be using well defined ops when we know there will be more
>> users? Repeating what I said in the last thread, for io_uring it's used
>> to implement the flow of buffers from userspace to the kernel, the ABI,
>> which is orthogonal to the issue of what memory type it is and how it
>> came there. And even if you mandate unnecessary dmabuf condoms for user
>> memory in one form or another IMHO for no clear reason, the callbacks
>> (or yet another if-else) would still be needed.
>>
>> Sure, Mina can drop and hard-code the devmem path to ease the pain for
>> him and delay the discussion, but then shortly after I will be
>> re-sending the same shit.
>
> You don't need to re-send the same ops again, right? You can add io
> uring support without ops. Something like:
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 92be1aaf18ccc..2cc986455bce6 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -557,8 +557,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool
> *pool, gfp_t gfp)
> return netmem;
>
> /* Slow-path: cache empty, do real allocation */
> - if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
> - netmem = pool->mp_ops->alloc_pages(pool, gfp);
> + if (unlikely(page_pool_is_dmabuf(pool)))
> + netmem = mp_dmabuf_devmem_alloc_pages();
> + else if (unlikely(page_pool_is_iouring(pool)))
> + netmem = mp_io_uring_alloc_pages();
> else
> netmem = __page_pool_alloc_pages_slow(pool, gfp);
> return netmem;
>
> So IMO, the ops themselves, which Christoph is repeatedly nacking, are
> not that important.
>
> I humbly think the energy should be spent convincing maintainers of
> the use case of io uring memory, not the ops. The ops are a cosmetic

I haven't seen any arguments against it from the (net) maintainers so
far. Nor do I see any objection against callbacks from them (considering
that either option adds an if).

And just not to confuse folks, it's just user pages, not some
weird special io_uring memory.

> change to the code, and can be added later. Christoph is nacking the
> ops because it gives people too much rope [1].

Yes, it is cosmetic, just as much as removing it is a cosmetic
change. You can apply the same "too much rope" argument basically
to anything.

Take io_uring: nothing would change in the process, it'd still
be sent to net and reviewed exactly the same way, while being less
clean, with poorer subsystem separation, allowing custom
formats / argument lists, etc. I think it's cleaner with callbacks;
Mr. Christoph has other beliefs and keeps coercing others toward them,
even though from time to time it backfires for the author, just
personal experience.


> But if you disagree and think the ops themselves are important for a
> reason I missed, I'm happy waiting until agreement is reached here.
> Sorry, just voicing my 2 cents.
>
> [1] https://lore.kernel.org/netdev/[email protected]/
>

--
Pavel Begunkov

2024-06-07 14:51:28

by David Ahern

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/7/24 7:42 AM, Pavel Begunkov wrote:
> I haven't seen any arguments against from the (net) maintainers so
> far. Nor I see any objection against callbacks from them (considering
> that either option adds an if).

I have said before I do not understand why the dmabuf paradigm is not
sufficient for both device memory and host memory. A less than ideal
control path to put hostmem in a dmabuf wrapper vs extra checks and
changes in the datapath. The former should always be preferred.

I also do not understand why the ifq cache and overloading xdp functions
have stuck around; I always thought both were added by Jonathan to
simplify kernel ports during early POC days.

2024-06-07 14:57:35

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Fri, Jun 07, 2024 at 08:27:29AM -0600, David Ahern wrote:
> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
> > I haven't seen any arguments against from the (net) maintainers so
> > far. Nor I see any objection against callbacks from them (considering
> > that either option adds an if).
>
> I have said before I do not understand why the dmabuf paradigm is not
> sufficient for both device memory and host memory. A less than ideal
> control path to put hostmem in a dmabuf wrapper vs extra checks and
> changes in the datapath. The former should always be preferred.

I think Pavel explained this - his project is principally to replace
the lifetime policy of pages in the data plane. He wants to change
when a page is considered available for re-allocation because
userspace may continue to use the page after the netstack thinks it is
done with it. It sounds like having a different source of the pages is
the less important part.

IMHO it seems to compose poorly if you can only use the io_uring
lifecycle model with io_uring registered memory, and not with DMABUF
memory registered through Mina's mechanism.

Jason

2024-06-07 15:43:25

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/7/24 15:27, David Ahern wrote:
> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
>> I haven't seen any arguments against from the (net) maintainers so
>> far. Nor I see any objection against callbacks from them (considering
>> that either option adds an if).
>
> I have said before I do not understand why the dmabuf paradigm is not
> sufficient for both device memory and host memory. A less than ideal
> control path to put hostmem in a dmabuf wrapper vs extra checks and
> changes in the datapath. The former should always be preferred.

If we're talking about types of memory specifically, I'm not strictly
against wrapping into dmabuf in kernel, but that just doesn't give
anything.
But the main reason for allocations there is the difference in
approaches to the API. With io_uring the allocation callback is
responsible for getting buffers back from the user (via a shared
ring). No locking for the ring, and buffers are already in the
context (napi) where they would be consumed from. Removes some
headaches for the user (like batching before returning buffers),
and should go better with smaller buffers and such.
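
Roughly like this (a hypothetical sketch, all names made up; the real
hook lives in the zcrx patches):

	/* Provider alloc hook backed by a user-filled refill ring. It runs
	 * in napi context, so the ring needs no locking.
	 */
	static netmem_ref io_zcrx_alloc_netmems(struct page_pool *pool, gfp_t gfp)
	{
		struct io_zcrx_ifq *ifq = pool->mp_priv;
		u32 idx;

		/* Pop one buffer index that userspace returned via the
		 * shared ring; an empty ring means no buffer to hand out.
		 */
		if (!io_zcrx_refill_pop(ifq, &idx))
			return 0;

		return io_zcrx_buf_to_netmem(ifq, idx);
	}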

> I also do not understand why the ifq cache

I'm not sure what you mean by ifq cache. Can you elaborate?

> and overloading xdp functions

Assuming it's about setup via xdp, it was marked for remaking in
RFCs for longer than desired but it's gone now in our tree (but
maybe not in the latest series).

> have stuck around; I always thought both were added by Jonathan to
> simplify kernel ports during early POC days.

--
Pavel Begunkov

2024-06-07 15:47:19

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/7/24 16:42, Pavel Begunkov wrote:
> On 6/7/24 15:27, David Ahern wrote:
>> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
>>> I haven't seen any arguments against from the (net) maintainers so
>>> far. Nor I see any objection against callbacks from them (considering
>>> that either option adds an if).
>>
>> I have said before I do not understand why the dmabuf paradigm is not
>> sufficient for both device memory and host memory. A less than ideal
>> control path to put hostmem in a dmabuf wrapper vs extra checks and
>> changes in the datapath. The former should always be preferred.
>
> If we're talking about types of memory specifically, I'm not strictly
> against wrapping into dmabuf in kernel, but that just doesn't give
> anything.

And the reason I don't have too strong of an opinion on that is
mainly because it's just setup/cleanup path.

> But the main reason for allocations there is the difference in
> approaches to the api. With io_uring the allocation callback is
> responsible for getting buffers back from the user (via a shared
> ring). No locking for the ring, and buffers are already in the
> context (napi) where they would be consumed from. Removes some
> headaches for the user (like batching before returning buffers),
> and should go better with smaller buffers and such.
>
>> I also do not understand why the ifq cache
>
> I'm not sure what you mean by ifq cache. Can you elaborate?
>
>> and overloading xdp functions
>
> Assuming it's about setup via xdp, it was marked for remaking in
> RFCs for longer than desired but it's gone now in our tree (but
> maybe not in the latest series).
>
>> have stuck around; I always thought both were added by Jonathan to
>> simplify kernel ports during early POC days.
>

--
Pavel Begunkov

2024-06-07 17:07:51

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Fri, Jun 7, 2024 at 8:47 AM Pavel Begunkov <[email protected]> wrote:
>
> On 6/7/24 16:42, Pavel Begunkov wrote:
> > On 6/7/24 15:27, David Ahern wrote:
> >> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
> >>> I haven't seen any arguments against from the (net) maintainers so
> >>> far. Nor I see any objection against callbacks from them (considering
> >>> that either option adds an if).
> >>
> >> I have said before I do not understand why the dmabuf paradigm is not
> >> sufficient for both device memory and host memory. A less than ideal
> >> control path to put hostmem in a dmabuf wrapper vs extra checks and
> >> changes in the datapath. The former should always be preferred.
> >
> > If we're talking about types of memory specifically, I'm not strictly
> > against wrapping into dmabuf in kernel, but that just doesn't give
> > anything.
>
> And the reason I don't have too strong of an opinion on that is
> mainly because it's just setup/cleanup path.
>

I agree wrapping io_uring memory in dmabuf seems to be an unnecessary detour.
I never understood the need or upside to do that, but it could be a
lack of understanding on my part.

However, the concern that David brings up may materialize. I've had to
spend a lot of time minimizing or justifying checks to the code with
page pool benchmarks that detect even 1 cycle regressions. You may be
asked to run the same benchmarks and minimize similar overhead.

The benchmark in question is Jesper's bench_page_pool_simple. I've
forked it and applied it on top of net-next here:
https://github.com/mina/linux/commit/927596f87ab5791a8a6ba8597ba2189747396e54

As io_uring ZC comes close to merging, I suspect it would be good to
run this to understand the regression in the fast path, if any. If
there are little to no regressions, I have no concerns over io uring
memory not being wrapped in dmabufs, and David may agree as well.

--
Thanks,
Mina

2024-06-10 01:07:38

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/10/24 01:37, David Wei wrote:
> On 2024-06-07 17:52, Jason Gunthorpe wrote:
>> IMHO it seems to compose poorly if you can only use the io_uring
>> lifecycle model with io_uring registered memory, and not with DMABUF
>> memory registered through Mina's mechanism.
>
> By this, do you mean io_uring must be exclusively used to use this
> feature?
>
> And you'd rather see the two decoupled, so userspace can register w/ say
> dmabuf then pass it to io_uring?

Personally, I have no clue what Jason means. You can just as
well say that it's poorly composable that write(2) to a disk
cannot post a completion into a XDP ring, or a netlink socket,
or io_uring's main completion queue, or name any other API.

The devmem TCP callback can implement it in a way feasible to
the project, but it cannot directly post events to an unrelated
API like io_uring. And devmem attaches buffers to a socket,
for which a ring for returning buffers might even be a nuisance.

--
Pavel Begunkov

2024-06-10 01:12:51

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/7/24 17:59, Mina Almasry wrote:
> On Fri, Jun 7, 2024 at 8:47 AM Pavel Begunkov <[email protected]> wrote:
>>
>> On 6/7/24 16:42, Pavel Begunkov wrote:
>>> On 6/7/24 15:27, David Ahern wrote:
>>>> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
>>>>> I haven't seen any arguments against from the (net) maintainers so
>>>>> far. Nor I see any objection against callbacks from them (considering
>>>>> that either option adds an if).
>>>>
>>>> I have said before I do not understand why the dmabuf paradigm is not
>>>> sufficient for both device memory and host memory. A less than ideal
>>>> control path to put hostmem in a dmabuf wrapper vs extra checks and
>>>> changes in the datapath. The former should always be preferred.
>>>
>>> If we're talking about types of memory specifically, I'm not strictly
>>> against wrapping into dmabuf in kernel, but that just doesn't give
>>> anything.
>>
>> And the reason I don't have too strong of an opinion on that is
>> mainly because it's just setup/cleanup path.
>>
>
> I agree wrapping io uring in dmabuf seems to be an unnecessary detour.
> I never understood the need or upside to do that, but it could be a
> lack of understanding on my part.
>
> However, the concern that David brings up may materialize. I've had to
> spend a lot of time minimizing or justifying checks to the code with
> page pool benchmarks that detect even 1 cycle regressions. You may be
> asked to run the same benchmarks and minimize similar overhead.
>
> The benchmark in question is Jesper's bench_page_pool_simple. I've
> forked it and applied it on top of net-next here:
> https://github.com/mina/linux/commit/927596f87ab5791a8a6ba8597ba2189747396e54
>
> As io_uring ZC comes close to merging, I suspect it would be good to
> run this to understand the regression in the fast path, if any. If
> there are no to little regressions, I have no concerns over io uring
> memory not being wrapped in dmabufs, and David may agree as well.

That's the easiest part, as io_uring is only reusing the call points
you added for devmem and thus doesn't add anything new on top
of the hot paths.

--
Pavel Begunkov

2024-06-10 03:03:19

by David Wei

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 2024-06-07 17:52, Jason Gunthorpe wrote:
> IMHO it seems to compose poorly if you can only use the io_uring
> lifecycle model with io_uring registered memory, and not with DMABUF
> memory registered through Mina's mechanism.

By this, do you mean io_uring must be exclusively used to use this
feature?

And you'd rather see the two decoupled, so userspace can register w/ say
dmabuf then pass it to io_uring?

>
> Jason

2024-06-10 03:33:16

by David Wei

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 2024-06-07 17:27, David Ahern wrote:
> I also do not understand why the ifq cache and overloading xdp functions
> have stuck around; I always thought both were added by Jonathan to
> simplify kernel ports during early POC days.

Setting up an Rx queue for ZC w/ a different pp will be done properly
using the new queue API that Mina merged recently. Those custom XDP
hooks will be gone in a non-RFC patchset.

2024-06-10 12:18:12

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
> On 6/10/24 01:37, David Wei wrote:
> > On 2024-06-07 17:52, Jason Gunthorpe wrote:
> > > IMHO it seems to compose poorly if you can only use the io_uring
> > > lifecycle model with io_uring registered memory, and not with DMABUF
> > > memory registered through Mina's mechanism.
> >
> > By this, do you mean io_uring must be exclusively used to use this
> > feature?
> >
> > And you'd rather see the two decoupled, so userspace can register w/ say
> > dmabuf then pass it to io_uring?
>
> Personally, I have no clue what Jason means. You can just as
> well say that it's poorly composable that write(2) to a disk
> cannot post a completion into a XDP ring, or a netlink socket,
> or io_uring's main completion queue, or name any other API.

There is no reason you shouldn't be able to use your fast io_uring
completion and lifecycle flow with DMABUF backed memory. Those are not
wildly different things and there is good reason they should work
together.

Pretending they are totally different just because two different
people wrote them is a very siloed view.

> The devmem TCP callback can implement it in a way feasible to
> the project, but it cannot directly post events to an unrelated
> API like io_uring. And devmem attaches buffers to a socket,
> for which a ring for returning buffers might even be a nuisance.

If you can't compose your io_uring completion mechanism with a DMABUF
provided backing store then I think it needs more work.

Jason

2024-06-10 12:39:11

by Christian König

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

Am 10.06.24 um 14:16 schrieb Jason Gunthorpe:
> On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
>> On 6/10/24 01:37, David Wei wrote:
>>> On 2024-06-07 17:52, Jason Gunthorpe wrote:
>>>> IMHO it seems to compose poorly if you can only use the io_uring
>>>> lifecycle model with io_uring registered memory, and not with DMABUF
>>>> memory registered through Mina's mechanism.
>>> By this, do you mean io_uring must be exclusively used to use this
>>> feature?
>>>
>>> And you'd rather see the two decoupled, so userspace can register w/ say
>>> dmabuf then pass it to io_uring?
>> Personally, I have no clue what Jason means. You can just as
>> well say that it's poorly composable that write(2) to a disk
>> cannot post a completion into a XDP ring, or a netlink socket,
>> or io_uring's main completion queue, or name any other API.
> There is no reason you shouldn't be able to use your fast io_uring
> completion and lifecycle flow with DMABUF backed memory. Those are not
> widly different things and there is good reason they should work
> together.

Well there is the fundamental problem that you can't use io_uring to
implement the semantics necessary for a dma_fence.

That's why we had to reject the io_uring work on DMA-buf sharing from
Google a few years ago.

But this only affects the dma_fence synchronization part of DMA-buf,
*not* the general buffer sharing.

Regards,
Christian.

>
> Pretending they are totally different just because two different
> people wrote them is a very siloed view.
>
>> The devmem TCP callback can implement it in a way feasible to
>> the project, but it cannot directly post events to an unrelated
>> API like io_uring. And devmem attaches buffers to a socket,
>> for which a ring for returning buffers might even be a nuisance.
> If you can't compose your io_uring completion mechanism with a DMABUF
> provided backing store then I think it needs more work.
>
> Jason


2024-06-10 15:19:56

by David Ahern

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/10/24 6:16 AM, Jason Gunthorpe wrote:
> On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
>> On 6/10/24 01:37, David Wei wrote:
>>> On 2024-06-07 17:52, Jason Gunthorpe wrote:
>>>> IMHO it seems to compose poorly if you can only use the io_uring
>>>> lifecycle model with io_uring registered memory, and not with DMABUF
>>>> memory registered through Mina's mechanism.
>>>
>>> By this, do you mean io_uring must be exclusively used to use this
>>> feature?
>>>
>>> And you'd rather see the two decoupled, so userspace can register w/ say
>>> dmabuf then pass it to io_uring?
>>
>> Personally, I have no clue what Jason means. You can just as
>> well say that it's poorly composable that write(2) to a disk
>> cannot post a completion into a XDP ring, or a netlink socket,
>> or io_uring's main completion queue, or name any other API.
>
> There is no reason you shouldn't be able to use your fast io_uring
> completion and lifecycle flow with DMABUF backed memory. Those are not
> widly different things and there is good reason they should work
> together.
>
> Pretending they are totally different just because two different
> people wrote them is a very siloed view.
>
>> The devmem TCP callback can implement it in a way feasible to
>> the project, but it cannot directly post events to an unrelated
>> API like io_uring. And devmem attaches buffers to a socket,
>> for which a ring for returning buffers might even be a nuisance.
>
> If you can't compose your io_uring completion mechanism with a DMABUF
> provided backing store then I think it needs more work.
>

Exactly. io_uring, page_pool, dmabuf - all kernel building blocks for
solutions. This is why I was pushing for Mina's set not to be using the
name `devmem` - it is but one type of memory and with dmabuf it should
not matter if it is gpu or host (or something else later on - cxl?).


2024-06-10 16:22:54

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 5:38 AM Christian König
<[email protected]> wrote:
>
> Am 10.06.24 um 14:16 schrieb Jason Gunthorpe:
> > On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
> >> On 6/10/24 01:37, David Wei wrote:
> >>> On 2024-06-07 17:52, Jason Gunthorpe wrote:
> >>>> IMHO it seems to compose poorly if you can only use the io_uring
> >>>> lifecycle model with io_uring registered memory, and not with DMABUF
> >>>> memory registered through Mina's mechanism.
> >>> By this, do you mean io_uring must be exclusively used to use this
> >>> feature?
> >>>
> >>> And you'd rather see the two decoupled, so userspace can register w/ say
> >>> dmabuf then pass it to io_uring?
> >> Personally, I have no clue what Jason means. You can just as
> >> well say that it's poorly composable that write(2) to a disk
> >> cannot post a completion into a XDP ring, or a netlink socket,
> >> or io_uring's main completion queue, or name any other API.
> > There is no reason you shouldn't be able to use your fast io_uring
> > completion and lifecycle flow with DMABUF backed memory. Those are not
> > widly different things and there is good reason they should work
> > together.
>
> Well there is the fundamental problem that you can't use io_uring to
> implement the semantics necessary for a dma_fence.
>
> That's why we had to reject the io_uring work on DMA-buf sharing from
> Google a few years ago.
>

Any chance someone can link me to this? io_uring, as far as my
primitive understanding goes, is not yet widely adopted at Google, and
I'm curious what this effort is.

--
Thanks,
Mina

2024-06-10 19:20:38

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/10/24 16:16, David Ahern wrote:
> On 6/10/24 6:16 AM, Jason Gunthorpe wrote:
>> On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
>>> On 6/10/24 01:37, David Wei wrote:
>>>> On 2024-06-07 17:52, Jason Gunthorpe wrote:
>>>>> IMHO it seems to compose poorly if you can only use the io_uring
>>>>> lifecycle model with io_uring registered memory, and not with DMABUF
>>>>> memory registered through Mina's mechanism.
>>>>
>>>> By this, do you mean io_uring must be exclusively used to use this
>>>> feature?
>>>>
>>>> And you'd rather see the two decoupled, so userspace can register w/ say
>>>> dmabuf then pass it to io_uring?
>>>
>>> Personally, I have no clue what Jason means. You can just as
>>> well say that it's poorly composable that write(2) to a disk
>>> cannot post a completion into a XDP ring, or a netlink socket,
>>> or io_uring's main completion queue, or name any other API.
>>
>> There is no reason you shouldn't be able to use your fast io_uring
>> completion and lifecycle flow with DMABUF backed memory. Those are not
>> widly different things and there is good reason they should work
>> together.

Let's not mix up devmem TCP and dmabuf specifically; as I see it,
your question was concerning the latter: "... DMABUF memory registered
through Mina's mechanism". io_uring's zcrx can trivially get dmabuf
support in future; as mentioned, it's mostly the setup side. The ABI,
buffer workflow and some details are a separate issue, and I don't
see how further integration aside from what we're already sharing
is beneficial; on the contrary, it'll complicate things.

>> Pretending they are totally different just because two different
>> people wrote them is a very siloed view.

io_uring zcrx and devmem? They are not; nobody is saying otherwise.
_Very_ similar approaches if anything, but with a different API, which
is the reason we already use common infra.

>>> The devmem TCP callback can implement it in a way feasible to
>>> the project, but it cannot directly post events to an unrelated
>>> API like io_uring. And devmem attaches buffers to a socket,
>>> for which a ring for returning buffers might even be a nuisance.
>>
>> If you can't compose your io_uring completion mechanism with a DMABUF
>> provided backing store then I think it needs more work.

As per above, it conflates devmem TCP with dmabuf.

> exactly. io_uring, page_pool, dmabuf - all kernel building blocks for
> solutions. This why I was pushing for Mina's set not to be using the
> name `devmem` - it is but one type of memory and with dmabuf it should
> not matter if it is gpu or host (or something else later on - cxl?).

--
Pavel Begunkov

2024-06-10 19:33:24

by Pavel Begunkov

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/10/24 16:41, Mina Almasry wrote:
> On Mon, Jun 10, 2024 at 5:38 AM Christian König
> <[email protected]> wrote:
>>
>> Am 10.06.24 um 14:16 schrieb Jason Gunthorpe:
>>> On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
>>>> On 6/10/24 01:37, David Wei wrote:
>>>>> On 2024-06-07 17:52, Jason Gunthorpe wrote:
>>>>>> IMHO it seems to compose poorly if you can only use the io_uring
>>>>>> lifecycle model with io_uring registered memory, and not with DMABUF
>>>>>> memory registered through Mina's mechanism.
>>>>> By this, do you mean io_uring must be exclusively used to use this
>>>>> feature?
>>>>>
>>>>> And you'd rather see the two decoupled, so userspace can register w/ say
>>>>> dmabuf then pass it to io_uring?
>>>> Personally, I have no clue what Jason means. You can just as
>>>> well say that it's poorly composable that write(2) to a disk
>>>> cannot post a completion into a XDP ring, or a netlink socket,
>>>> or io_uring's main completion queue, or name any other API.
>>> There is no reason you shouldn't be able to use your fast io_uring
>>> completion and lifecycle flow with DMABUF backed memory. Those are not
>>> widly different things and there is good reason they should work
>>> together.
>>
>> Well there is the fundamental problem that you can't use io_uring to
>> implement the semantics necessary for a dma_fence.
>>
>> That's why we had to reject the io_uring work on DMA-buf sharing from
>> Google a few years ago.
>>
>
> Any chance someone can link me to this? io_uring, as far as my
> primitive understanding goes, is not yet very adopted at Google, and
> I'm curious what this effort is.

I'm curious as well, I don't remember it floating anywhere in mailing
lists. The only discussion I recall was about
DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT, but it didn't get through only because
someone pushed for eventfds.

--
Pavel Begunkov

2024-06-10 19:57:56

by Daniel Vetter

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 02:38:18PM +0200, Christian König wrote:
> Am 10.06.24 um 14:16 schrieb Jason Gunthorpe:
> > On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
> > > On 6/10/24 01:37, David Wei wrote:
> > > > On 2024-06-07 17:52, Jason Gunthorpe wrote:
> > > > > IMHO it seems to compose poorly if you can only use the io_uring
> > > > > lifecycle model with io_uring registered memory, and not with DMABUF
> > > > > memory registered through Mina's mechanism.
> > > > By this, do you mean io_uring must be exclusively used to use this
> > > > feature?
> > > >
> > > > And you'd rather see the two decoupled, so userspace can register w/ say
> > > > dmabuf then pass it to io_uring?
> > > Personally, I have no clue what Jason means. You can just as
> > > well say that it's poorly composable that write(2) to a disk
> > > cannot post a completion into a XDP ring, or a netlink socket,
> > > or io_uring's main completion queue, or name any other API.
> > There is no reason you shouldn't be able to use your fast io_uring
> > completion and lifecycle flow with DMABUF backed memory. Those are not
> > widly different things and there is good reason they should work
> > together.
>
> Well there is the fundamental problem that you can't use io_uring to
> implement the semantics necessary for a dma_fence.
>
> That's why we had to reject the io_uring work on DMA-buf sharing from Google
> a few years ago.
>
> But this only affects the dma_fence synchronization part of DMA-buf, but
> *not* the general buffer sharing.

More precisely, it only impacts the userspace/data access implicit
synchronization part of dma-buf. For tracking buffer movements like on
invalidations/refault with a dynamic dma-buf importer/exporter I think the
dma-fence rules are acceptable. At least they've been for rdma drivers.

But the escape hatch is to (temporarily) pin the dma-buf, which is exactly
what direct I/O also does when accessing pages. So aside from the still
unsolved question on how we should account/track pinned dma-buf, there
shouldn't be an issue. Or at least I'm failing to see one.

And for synchronization to data access the dma-fence stuff on dma-buf is
anyway rather deprecated on the gpu side too, exactly because of all these
limitations. On the gpu side we've been moving to free-standing
drm_syncobj instead, but those are fairly gpu specific and any other
subsystem should be able to just reuse what they have already to signal
transaction completions.

Cheers, Sima

>
> Regards,
> Christian.
>
> >
> > Pretending they are totally different just because two different
> > people wrote them is a very siloed view.
> >
> > > The devmem TCP callback can implement it in a way feasible to
> > > the project, but it cannot directly post events to an unrelated
> > > API like io_uring. And devmem attaches buffers to a socket,
> > > for which a ring for returning buffers might even be a nuisance.
> > If you can't compose your io_uring completion mechanism with a DMABUF
> > provided backing store then I think it needs more work.
> >
> > Jason
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2024-06-10 22:15:44

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 08:20:08PM +0100, Pavel Begunkov wrote:
> On 6/10/24 16:16, David Ahern wrote:

> > > There is no reason you shouldn't be able to use your fast io_uring
> > > completion and lifecycle flow with DMABUF backed memory. Those are not
> > > widly different things and there is good reason they should work
> > > together.
>
> Let's not mix up devmem TCP and dmabuf specifically, as I see it
> your question was concerning the latter: "... DMABUF memory registered
> through Mina's mechanism". io_uring's zcrx can trivially get dmabuf
> support in future, as mentioned it's mostly the setup side. ABI,
> buffer workflow and some details is a separate issue, and I don't
> see how further integration aside from what we're already sharing
> is beneficial, on opposite it'll complicate things.

Again, I am talking about composability here: duplicating the DMABUF
stuff into io_uring is not composable, it is just duplicating things.

It does not match the view that there should be two distinct layers
here, one that provides the pages and one that manages the
lifecycle. As HCH pushes for, pages either come from the allocator and
get to use the struct folio, or they come from a dmabuf and they
don't. That is it, the only two choices.

The iouring stuff is trying to confuse the source of the pages with
the lifecycle - which is surely convenient, but is why Christoph is
opposing it.

Jason

2024-06-11 07:42:01

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 02:38:18PM +0200, Christian König wrote:
> Well there is the fundamental problem that you can't use io_uring to
> implement the semantics necessary for a dma_fence.

What is the exact problem there?


2024-06-11 07:42:52

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 09:16:43AM -0600, David Ahern wrote:
>
> exactly. io_uring, page_pool, dmabuf - all kernel building blocks for
> solutions. This why I was pushing for Mina's set not to be using the
> name `devmem` - it is but one type of memory and with dmabuf it should
> not matter if it is gpu or host (or something else later on - cxl?).

While not really related to the rest of the discussion, I agree.
It really is dmabuf integration now, so let's call it that?

2024-06-11 08:22:44

by Christian König

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

Am 11.06.24 um 08:25 schrieb Christoph Hellwig:
> On Mon, Jun 10, 2024 at 02:38:18PM +0200, Christian König wrote:
>> Well there is the fundamental problem that you can't use io_uring to
>> implement the semantics necessary for a dma_fence.
> What is the exact problem there?

It's an intentional design decision that dma_fences can be waited on
with quite a bunch of locks held. Including the DMA-buf reservation
lock, mmap lock, anything page fault related, shrinker etc...

When you give userspace control over the signaling of a dma_fence then
that has the same effect as returning to userspace with those locks held
- you can basically trivially deadlock the system.
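
The classic shape of the problem (illustrative pseudo-code, not any
real driver):

	/* Kernel side, e.g. an eviction or shrinker path, waits on the
	 * fence while holding locks:
	 */
	dma_resv_lock(obj->resv, NULL);
	dma_fence_wait(fence, false);	/* signals only when userspace acts */

	/* Meanwhile the userspace that must signal the fence is blocked
	 * in a page fault that needs the same lock. Neither side can make
	 * progress: deadlock.
	 */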

I think nearly a dozen implementations fell into that trap:
https://www.kernel.org/doc/html/v5.9/driver-api/dma-buf.html#indefinite-dma-fences

It's well understood and documented by now why this approach doesn't
work. So it's not much of an issue anymore; we just have to reject
implementations from time to time which try doing the same thing again.

Regards,
Christian.

2024-06-11 10:07:58

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Fri, Jun 07, 2024 at 02:45:55PM +0100, Pavel Begunkov wrote:
> On 6/5/24 09:24, Christoph Hellwig wrote:
> > On Mon, Jun 03, 2024 at 03:52:32PM +0100, Pavel Begunkov wrote:
> > > The question for Christoph is what exactly is the objection here? Why we
> > > would not be using well defined ops when we know there will be more
> > > users?
> >
> > The point is that there should be no more users. If you need another
>
> Does that "No more" stops after devmem tcp?

There should be no other memory source other than the page allocator
and dmabuf. If you need different lifetime control for your
zero-copy proposal, don't mix that up with the control of the memory
source.


2024-06-11 17:49:41

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 11:26 PM Christoph Hellwig <[email protected]> wrote:
>
> On Mon, Jun 10, 2024 at 09:16:43AM -0600, David Ahern wrote:
> >
> > exactly. io_uring, page_pool, dmabuf - all kernel building blocks for
> > solutions. This why I was pushing for Mina's set not to be using the
> > name `devmem` - it is but one type of memory and with dmabuf it should
> > not matter if it is gpu or host (or something else later on - cxl?).
>
> While not really related to the rest of the discussion, I agree.
> It really is dmabuf integration now, so let's call it that?

My mental model is that the feature folks care about is the ability to
use TCP with device memory, and dmabuf is an implementation detail
- the format that device memory is packaged in. Although not
likely given this discussion, in theory we could want to extend devmem
TCP to support p2pdma for nvme, or some other format if a new one
arises in device drivers. I also think it's more obvious to an end
user what 'devmem TCP' aims to do rather than 'dmabuf TCP' especially
if the user is not a kernel developer familiar with dmabuf.

--
Thanks,
Mina

2024-06-11 18:09:52

by Mina Almasry

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Mon, Jun 10, 2024 at 3:15 PM Jason Gunthorpe <[email protected]> wrote:
>
> On Mon, Jun 10, 2024 at 08:20:08PM +0100, Pavel Begunkov wrote:
> > On 6/10/24 16:16, David Ahern wrote:
>
> > > > There is no reason you shouldn't be able to use your fast io_uring
> > > > completion and lifecycle flow with DMABUF backed memory. Those are not
> > > > widly different things and there is good reason they should work
> > > > together.
> >
> > Let's not mix up devmem TCP and dmabuf specifically, as I see it
> > your question was concerning the latter: "... DMABUF memory registered
> > through Mina's mechanism". io_uring's zcrx can trivially get dmabuf
> > support in future, as mentioned it's mostly the setup side. ABI,
> > buffer workflow and some details is a separate issue, and I don't
> > see how further integration aside from what we're already sharing
> > is beneficial, on opposite it'll complicate things.
>
> Again, I am talking about composability here, duplicating the DMABUF
> stuff into io_uring is not composable, it is just duplicating things.
>
> It does not match the view that there should be two distinct layers
> here, one that provides the pages and one that manages the
> lifecycle. As HCH pushes for pages either come from the allocator and
> get to use the struct folio or the come from a dmabuf and they
> don't. That is it, the only two choices.
>
> The iouring stuff is trying to confuse the source of the pages with
> the lifecycle - which is surely convenient, but is why Christoph is
> opposing it.
>

Just curious: in Pavel's effort, io_uring - which is not a device - is
trying to share memory with the page_pool, which is also not a device.
And Pavel is being asked to wrap the memory in a dmabuf. Is dmabuf
going to be the kernel's standard for any memory sharing between any 2
components in the future, even when they're not devices? As in you
expect dmabuf exporters which are not devices to be added to the
kernel? Currently the only dmabuf exporter which is not a device
(AFAIK) is udmabuf, which is used for testing and emulation, not
really a production thing, I think.

--
Thanks,
Mina

2024-06-12 12:06:26

by Jason Gunthorpe

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On Tue, Jun 11, 2024 at 11:09:15AM -0700, Mina Almasry wrote:

> Just curious: in Pavel's effort, io_uring - which is not a device - is
> trying to share memory with the page_pool, which is also not a device.
> And Pavel is being asked to wrap the memory in a dmabuf. Is dmabuf
> going to be the kernel's standard for any memory sharing between any 2
> components in the future, even when they're not devices?

dmabuf is how we are refcounting non-struct page memory; there is
nothing about it that says it has to be MMIO memory, or even that the
memory doesn't have struct pages.

All it says is that the memory is alive according to dmabuf
refcounting rules. And the importer obviously doesn't get to touch the
underlying folios, if any.
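
For what it's worth, the exporter API itself doesn't require a device.
A minimal sketch of exporting plain kernel memory (ops elided, names
made up):

	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct dma_buf *dmabuf;

	exp_info.ops = &my_dmabuf_ops;	/* attach/map_dma_buf/release/... */
	exp_info.size = nr_pages << PAGE_SHIFT;
	exp_info.flags = O_RDWR | O_CLOEXEC;
	exp_info.priv = my_pages;	/* whatever backs the buffer */

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);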

Jason

2024-06-12 15:58:37

by David Ahern

[permalink] [raw]
Subject: Re: [PATCH net-next v10 02/14] net: page_pool: create hooks for custom page providers

On 6/12/24 6:06 AM, Jason Gunthorpe wrote:
> On Tue, Jun 11, 2024 at 11:09:15AM -0700, Mina Almasry wrote:
>
>> Just curious: in Pavel's effort, io_uring - which is not a device - is
>> trying to share memory with the page_pool, which is also not a device.
>> And Pavel is being asked to wrap the memory in a dmabuf. Is dmabuf
>> going to be the kernel's standard for any memory sharing between any 2
>> components in the future, even when they're not devices?
>
> dmabuf is how we are refcounting non-struct page memory; there is
> nothing about it that says it has to be MMIO memory, or even that the
> memory doesn't have struct pages.
>
> All it says is that the memory is alive according to dmabuf
> refcounting rules. And the importer obviously doesn't get to touch the
> underlying folios, if any.
>

In addition, the io_uring developers should be considering the use case
of device memory. There is no reason for this design to be limited to
host memory. io_uring should not care (it is not peeking inside the
memory buffers); they are just memory references.

One of io_uring's primary benefits is avoiding system calls. io_uring
works with TCP sockets. Let it work with any dmabuf without concern of
memory type. The performance benefits the Google crowd sees with system
call based apps should be even better with io_uring.

Focus on primitives, building blocks with solid APIs for other
subsystems to leverage and let them be wired up in ways you cannot
imagine today.