2022-06-28 15:27:34

by Dylan Yudaken

Subject: [PATCH for-next 0/8] io_uring: multishot recv

This series adds support for multishot recv/recvmsg to io_uring.

The idea is that generally socket applications will be continually
enqueuing a new recv() when the previous one completes. This can be
improved on by allowing the application to queue a multishot receive,
which will post completions as and when data is available. It uses the
provided buffers feature to receive new data into a pool provided by
the application.

This is more performant in a few ways:
* Subsequent receives are queued up straight away without requiring the
application to finish a processing loop.
* If there is more data in the socket (say the provided buffer
size is smaller than the socket buffer) then the data is immediately
returned, improving batching.
* Poll is only armed once and reused, saving CPU cycles

Running a small network benchmark [1] shows improved QPS of ~6-8% over a range of loads.

[1]: https://github.com/DylanZA/netbench/tree/multishot_recv
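
As an illustration (not part of the series), the sketch below shows how an
application might arm a multishot receive with liburing. It assumes the
IORING_RECV_MULTISHOT flag added later in this series and the SQE layout
used here, where the recv flags are read from sqe->addr2; BGID, NBUFS,
BUF_SZ, RECV_TAG and sockfd are placeholder names:

#include <liburing.h>

#define BGID      0	/* provided buffer group id (placeholder) */
#define NBUFS     64
#define BUF_SZ    4096
#define RECV_TAG  1	/* user_data tag so recv CQEs can be told apart */

static char bufs[NBUFS][BUF_SZ];

static int arm_multishot_recv(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe;

	/* hand a pool of buffers to the kernel: group BGID, ids 0..NBUFS-1 */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_SZ, NBUFS, BGID, 0);

	/* one multishot recv: no buffer or length, the kernel picks from BGID */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_recv(sqe, sockfd, NULL, 0, 0);
	sqe->addr2 = IORING_RECV_MULTISHOT;	/* recv flags live in addr2 in this series */
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	sqe->user_data = RECV_TAG;

	return io_uring_submit(ring);
}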

Dylan Yudaken (8):
io_uring: allow 0 length for buffer select
io_uring: restore bgid in io_put_kbuf
io_uring: allow iov_len = 0 for recvmsg and buffer select
io_uring: recycle buffers on error
io_uring: clean up io_poll_check_events return values
io_uring: add IOU_STOP_MULTISHOT return code
io_uring: add IORING_RECV_MULTISHOT flag
io_uring: multishot recv

 include/uapi/linux/io_uring.h |   5 ++
 io_uring/io_uring.h           |   7 ++
 io_uring/kbuf.c               |   4 +-
 io_uring/kbuf.h               |   8 ++-
 io_uring/net.c                | 119 ++++++++++++++++++++++++++++------
 io_uring/poll.c               |  30 ++++++---
 6 files changed, 140 insertions(+), 33 deletions(-)


base-commit: 755441b9029317d981269da0256e0a7e5a7fe2cc
--
2.30.2


2022-06-28 15:30:20

by Jens Axboe

Subject: Re: [PATCH for-next 0/8] io_uring: multishot recv

On 6/28/22 9:02 AM, Dylan Yudaken wrote:
> This series adds support for multishot recv/recvmsg to io_uring.
>
> The idea is that generally socket applications will be continually
> enqueuing a new recv() when the previous one completes. This can be
> improved on by allowing the application to queue a multishot receive,
> which will post completions as and when data is available. It uses the
> provided buffers feature to receive new data into a pool provided by
> the application.
>
> This is more performant in a few ways:
> * Subsequent receives are queued up straight away without requiring the
> application to finish a processing loop.
> * If there is more data in the socket (say the provided buffer
> size is smaller than the socket buffer) then the data is immediately
> returned, improving batching.
> * Poll is only armed once and reused, saving CPU cycles

The latter is really a big deal; it saves a substantial amount of wait
queue locking and manipulation.

In general this looks good to me. I agree on allowing a length of 0; we
strictly don't need a length, as it is implicit in the provided buffer
anyway (and capped by it, ultimately). Nice cleanups leading into the
real change too.

Added some individual comments on select patches.

--
Jens Axboe

2022-06-28 15:41:59

by Dylan Yudaken

Subject: [PATCH for-next 8/8] io_uring: multishot recv

Support multishot receive for io_uring.
Typical server applications will run a loop where, for each recv CQE,
they requeue another recv/recvmsg.

This can be simplified by using the existing multishot functionality
combined with io_uring's provided buffers.
The API is to add the IORING_RECV_MULTISHOT flag to the SQE. CQEs will
then be posted (with the IORING_CQE_F_MORE flag set) whenever data is
available and has been read. Once an error occurs or the socket ends,
the multishot will be removed and a final completion without
IORING_CQE_F_MORE will be posted.
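
As an illustration (not part of the patch), continuing the userspace sketch
from the cover letter, a completion loop for this API might look roughly as
follows. Each recv CQE carries the selected buffer id in cqe->flags, and the
application hands the buffer back to the group once it has consumed the data;
BGID, BUF_SZ, bufs and RECV_TAG are the placeholders from that sketch:

static void recv_loop(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	struct io_uring_sqe *sqe;

	for (;;) {
		if (io_uring_wait_cqe(ring, &cqe))
			break;
		if (cqe->user_data != RECV_TAG) {
			/* e.g. the provide-buffers completion: nothing to do */
			io_uring_cqe_seen(ring, cqe);
			continue;
		}

		if (cqe->res > 0 && (cqe->flags & IORING_CQE_F_BUFFER)) {
			unsigned int bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;

			/* ... consume cqe->res bytes from bufs[bid] here ... */

			/* return the buffer to group BGID so it can be reused */
			sqe = io_uring_get_sqe(ring);
			io_uring_prep_provide_buffers(sqe, bufs[bid], BUF_SZ, 1,
						      BGID, bid);
			io_uring_submit(ring);
		}

		if (!(cqe->flags & IORING_CQE_F_MORE)) {
			/* multishot ended: error, EOF or no buffers left */
			io_uring_cqe_seen(ring, cqe);
			break;
		}
		io_uring_cqe_seen(ring, cqe);
	}
}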

The benefit of this is that recv is much more performant.
* Subsequent receives are queued up straight away without requiring the
application to finish a processing loop.
* If there is more data in the socket (say the provided buffer size is
smaller than the socket buffer) then the data is immediately
returned, improving batching.
* Poll is only armed once and reused, saving CPU cycles

Signed-off-by: Dylan Yudaken <[email protected]>
---
io_uring/net.c | 93 +++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 81 insertions(+), 12 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 0268c4603f5d..9bf8c6c0b549 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -389,6 +389,8 @@ int io_recvmsg_prep_async(struct io_kiocb *req)
return ret;
}

+#define RECVMSG_FLAGS (IORING_RECVSEND_POLL_FIRST | IORING_RECV_MULTISHOT)
+
int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
struct io_sr_msg *sr = io_kiocb_to_cmd(req);
@@ -399,13 +401,22 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
sr->len = READ_ONCE(sqe->len);
sr->flags = READ_ONCE(sqe->addr2);
- if (sr->flags & ~IORING_RECVSEND_POLL_FIRST)
+ if (sr->flags & ~(RECVMSG_FLAGS))
return -EINVAL;
sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
if (sr->msg_flags & MSG_DONTWAIT)
req->flags |= REQ_F_NOWAIT;
if (sr->msg_flags & MSG_ERRQUEUE)
req->flags |= REQ_F_CLEAR_POLLIN;
+ if (sr->flags & IORING_RECV_MULTISHOT) {
+ if (!(req->flags & REQ_F_BUFFER_SELECT))
+ return -EINVAL;
+ if (sr->msg_flags & MSG_WAITALL)
+ return -EINVAL;
+ if (req->opcode == IORING_OP_RECV && sr->len)
+ return -EINVAL;
+ req->flags |= REQ_F_APOLL_MULTISHOT;
+ }

#ifdef CONFIG_COMPAT
if (req->ctx->compat)
@@ -415,6 +426,14 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
return 0;
}

+static inline void io_recv_prep_retry(struct io_kiocb *req)
+{
+ struct io_sr_msg *sr = io_kiocb_to_cmd(req);
+
+ sr->done_io = 0;
+ sr->len = 0; /* get from the provided buffer */
+}
+
int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_sr_msg *sr = io_kiocb_to_cmd(req);
@@ -424,6 +443,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
unsigned flags;
int ret, min_ret = 0;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ size_t len = sr->len;

sock = sock_from_file(req->file);
if (unlikely(!sock))
@@ -442,16 +462,17 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
(sr->flags & IORING_RECVSEND_POLL_FIRST))
return io_setup_async_msg(req, kmsg);

+retry_multishot:
if (io_do_buffer_select(req)) {
void __user *buf;

- buf = io_buffer_select(req, &sr->len, issue_flags);
+ buf = io_buffer_select(req, &len, issue_flags);
if (!buf)
return -ENOBUFS;
kmsg->fast_iov[0].iov_base = buf;
- kmsg->fast_iov[0].iov_len = sr->len;
+ kmsg->fast_iov[0].iov_len = len;
iov_iter_init(&kmsg->msg.msg_iter, READ, kmsg->fast_iov, 1,
- sr->len);
+ len);
}

flags = sr->msg_flags;
@@ -463,8 +484,15 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
kmsg->msg.msg_get_inq = 1;
ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg, kmsg->uaddr, flags);
if (ret < min_ret) {
- if (ret == -EAGAIN && force_nonblock)
- return io_setup_async_msg(req, kmsg);
+ if (ret == -EAGAIN && force_nonblock) {
+ ret = io_setup_async_msg(req, kmsg);
+ if (ret == -EAGAIN && (req->flags & IO_APOLL_MULTI_POLLED) ==
+ IO_APOLL_MULTI_POLLED) {
+ io_kbuf_recycle(req, issue_flags);
+ ret = IOU_ISSUE_SKIP_COMPLETE;
+ }
+ return ret;
+ }
if (ret == -ERESTARTSYS)
ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
@@ -491,8 +519,24 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
cflags = io_put_kbuf(req, issue_flags);
if (kmsg->msg.msg_inq)
cflags |= IORING_CQE_F_SOCK_NONEMPTY;
+
+ if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
+ io_req_set_res(req, ret, cflags);
+ return IOU_OK;
+ }
+
+ if (ret > 0) {
+ if (io_post_aux_cqe(req->ctx, req->cqe.user_data, ret,
+ cflags | IORING_CQE_F_MORE)) {
+ io_recv_prep_retry(req);
+ goto retry_multishot;
+ } else {
+ ret = -ECANCELED;
+ }
+ }
+
io_req_set_res(req, ret, cflags);
- return IOU_OK;
+ return req->flags & REQ_F_POLLED ? IOU_STOP_MULTISHOT : ret;
}

int io_recv(struct io_kiocb *req, unsigned int issue_flags)
@@ -505,6 +549,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
unsigned flags;
int ret, min_ret = 0;
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+ size_t len = sr->len;

if (!(req->flags & REQ_F_POLLED) &&
(sr->flags & IORING_RECVSEND_POLL_FIRST))
@@ -514,16 +559,17 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
if (unlikely(!sock))
return -ENOTSOCK;

+retry_multishot:
if (io_do_buffer_select(req)) {
void __user *buf;

- buf = io_buffer_select(req, &sr->len, issue_flags);
+ buf = io_buffer_select(req, &len, issue_flags);
if (!buf)
return -ENOBUFS;
sr->buf = buf;
}

- ret = import_single_range(READ, sr->buf, sr->len, &iov, &msg.msg_iter);
+ ret = import_single_range(READ, sr->buf, len, &iov, &msg.msg_iter);
if (unlikely(ret))
goto out_free;

@@ -543,8 +589,14 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)

ret = sock_recvmsg(sock, &msg, flags);
if (ret < min_ret) {
- if (ret == -EAGAIN && force_nonblock)
- return -EAGAIN;
+ if (ret == -EAGAIN && force_nonblock) {
+ if ((req->flags & IO_APOLL_MULTI_POLLED) == IO_APOLL_MULTI_POLLED) {
+ io_kbuf_recycle(req, issue_flags);
+ ret = IOU_ISSUE_SKIP_COMPLETE;
+ }
+
+ return ret;
+ }
if (ret == -ERESTARTSYS)
ret = -EINTR;
if (ret > 0 && io_net_retry(sock, flags)) {
@@ -570,8 +622,25 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
cflags = io_put_kbuf(req, issue_flags);
if (msg.msg_inq)
cflags |= IORING_CQE_F_SOCK_NONEMPTY;
+
+
+ if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
+ io_req_set_res(req, ret, cflags);
+ return IOU_OK;
+ }
+
+ if (ret > 0) {
+ if (io_post_aux_cqe(req->ctx, req->cqe.user_data, ret,
+ cflags | IORING_CQE_F_MORE)) {
+ io_recv_prep_retry(req);
+ goto retry_multishot;
+ } else {
+ ret = -ECANCELED;
+ }
+ }
+
io_req_set_res(req, ret, cflags);
- return IOU_OK;
+ return req->flags & REQ_F_POLLED ? IOU_STOP_MULTISHOT : ret;
}

int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
--
2.30.2

2022-06-28 15:49:51

by Dylan Yudaken

Subject: [PATCH for-next 3/8] io_uring: allow iov_len = 0 for recvmsg and buffer select

When using BUFFER_SELECT there is no technical requirement that the user
actually provides an iov, and skipping it removes one copy_from_user
call.

So allow iov_len to be 0.
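
As an illustration (not part of the patch), a userspace sketch of what this
permits, assuming liburing: a buffer-select recvmsg no longer has to describe
an iovec at all. The helper name and the bgid parameter are placeholders:

#include <liburing.h>
#include <sys/socket.h>

static int queue_recvmsg_bufselect(struct io_uring *ring, int sockfd, int bgid)
{
	/* must stay alive until completion; msg_iov == NULL, msg_iovlen == 0 */
	static struct msghdr msg;
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_recvmsg(sqe, sockfd, &msg, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;	/* length comes from the provided buffer */
	sqe->buf_group = bgid;

	return io_uring_submit(ring);
}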

Signed-off-by: Dylan Yudaken <[email protected]>
---
io_uring/net.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 19a805c3814c..5e84f7ab92a3 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -300,12 +300,18 @@ static int __io_recvmsg_copy_hdr(struct io_kiocb *req,
return ret;

if (req->flags & REQ_F_BUFFER_SELECT) {
- if (iov_len > 1)
+ if (iov_len == 0) {
+ sr->len = iomsg->fast_iov[0].iov_len = 0;
+ iomsg->fast_iov[0].iov_base = NULL;
+ iomsg->free_iov = NULL;
+ } else if (iov_len > 1) {
return -EINVAL;
- if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
- return -EFAULT;
- sr->len = iomsg->fast_iov[0].iov_len;
- iomsg->free_iov = NULL;
+ } else {
+ if (copy_from_user(iomsg->fast_iov, uiov, sizeof(*uiov)))
+ return -EFAULT;
+ sr->len = iomsg->fast_iov[0].iov_len;
+ iomsg->free_iov = NULL;
+ }
} else {
iomsg->free_iov = iomsg->fast_iov;
ret = __import_iovec(READ, uiov, iov_len, UIO_FASTIOV,
--
2.30.2