2020-05-17 11:05:31

by Pavel Begunkov

Subject: [PATCH for-5.7 0/2] fortify async punt preparation

[2] fixes FORCE_ASYNC. I don't want to go through every bit, so I'm
not sure whether [1] fixes an actual issue, but it's safer this way.
Please consider these for-5.7.

IMHO, the whole preparation path has become a bit messy; it could
definitely use some rethinking in the future.

Pavel Begunkov (2):
io_uring: don't prepare DRAIN reqs twice
io_uring: fix FORCE_ASYNC req preparation

fs/io_uring.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)

--
2.24.0


2020-05-17 11:06:07

by Pavel Begunkov

Subject: [PATCH 1/2] io_uring: don't prepare DRAIN reqs twice

If req->io is not NULL, the request has already been prepared. Don't
prepare it again; running the prep handler twice on the same request
is dangerous.
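
To illustrate why outside the kernel, here is a minimal user-space
sketch of the hazard, not kernel code: struct fake_req and prep() are
made-up stand-ins for struct io_kiocb and io_req_defer_prep().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct io_kiocb: ->io holds per-request
 * async state that prep fills in exactly once. */
struct fake_req {
	void *io;	/* NULL until the request is prepared */
};

/* Stand-in for the prep step: allocates and fills req->io. */
static int prep(struct fake_req *req, const char *sqe_data)
{
	req->io = strdup(sqe_data);
	return req->io ? 0 : -1;
}

int main(void)
{
	struct fake_req req = { .io = NULL };

	prep(&req, "original sqe");	/* first, legitimate prep */

	/* Unconditionally preparing again would leak the first
	 * allocation and clobber state the request already relies on.
	 * The guard below is the shape of the fix: prep only while
	 * req->io is still NULL. */
	if (!req.io)
		prep(&req, "stale sqe");

	printf("req->io = %s\n", (char *)req.io);
	free(req.io);
	return 0;
}

The patch applies the same guard in io_req_defer_prep()'s caller.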

Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5a19120345e4..9e81781d7632 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5098,12 +5098,13 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
return 0;

- if (!req->io && io_alloc_async_ctx(req))
- return -EAGAIN;
-
- ret = io_req_defer_prep(req, sqe);
- if (ret < 0)
- return ret;
+ if (!req->io) {
+ if (io_alloc_async_ctx(req))
+ return -EAGAIN;
+ ret = io_req_defer_prep(req, sqe);
+ if (ret < 0)
+ return ret;
+ }

spin_lock_irq(&ctx->completion_lock);
if (!req_need_defer(req) && list_empty(&ctx->defer_list)) {
--
2.24.0

2020-05-17 11:09:11

by Pavel Begunkov

Subject: [PATCH 2/2] io_uring: fix FORCE_ASYNC req preparation

As with other requests that are not inlined, allocate req->io for
FORCE_ASYNC requests, so they can be prepared properly.
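
For context on why the allocation matters: once a request is punted
to async execution, the SQE memory it was submitted with may be
reused, so the request needs its own stable copy of that state in
req->io. Below is a minimal user-space sketch under that assumption;
sqe_slot, areq and prep_for_async() are hypothetical names, not the
kernel API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* sqe_slot models a submission queue entry whose memory the
 * application may reuse right after submission; areq models a
 * request punted to async context (the role of req->io). */
struct sqe_slot {
	char data[32];
};

struct areq {
	char *async_copy;	/* owned copy of the SQE payload */
};

/* Before going async, allocate and fill the request's own context
 * so it no longer depends on the SQE memory. */
static int prep_for_async(struct areq *req, const struct sqe_slot *sqe)
{
	req->async_copy = strdup(sqe->data);
	return req->async_copy ? 0 : -1;
}

int main(void)
{
	struct sqe_slot ring_slot;
	struct areq req = { .async_copy = NULL };

	snprintf(ring_slot.data, sizeof(ring_slot.data), "write fd=3");
	if (prep_for_async(&req, &ring_slot))
		return 1;

	/* The ring slot is reused for the next submission... */
	snprintf(ring_slot.data, sizeof(ring_slot.data), "read fd=7");

	/* ...but the punted request still sees its own stable copy. */
	printf("async worker executes: %s\n", req.async_copy);
	free(req.async_copy);
	return 0;
}

The patch makes the FORCE_ASYNC path take the same pre-copy step the
other punted paths already take.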

Signed-off-by: Pavel Begunkov <[email protected]>
---
fs/io_uring.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 9e81781d7632..3d0a08560689 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5692,9 +5692,15 @@ static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
io_double_put_req(req);
}
} else if (req->flags & REQ_F_FORCE_ASYNC) {
- ret = io_req_defer_prep(req, sqe);
- if (unlikely(ret < 0))
- goto fail_req;
+ if (!req->io) {
+ ret = -EAGAIN;
+ if (io_alloc_async_ctx(req))
+ goto fail_req;
+ ret = io_req_defer_prep(req, sqe);
+ if (unlikely(ret < 0))
+ goto fail_req;
+ }
+
/*
* Never try inline submit of IOSQE_ASYNC is set, go straight
* to async execution.
--
2.24.0

2020-05-17 15:25:12

by Jens Axboe

Subject: Re: [PATCH for-5.7 0/2] fortify async punt preparation

On 5/17/20 5:02 AM, Pavel Begunkov wrote:
> [2] fixes FORCE_ASYNC. I don't want to go through every bit, so I'm
> not sure whether [1] fixes an actual issue, but it's safer this way.
> Please consider these for-5.7.

LGTM, applied.

--
Jens Axboe