2021-06-22 12:21:21

by Olivier Langlois

Subject: [PATCH v4] io_uring: reduce latency by reissuing the operation

It is quite frequent that when an operation fails and returns EAGAIN,
the data becomes available between that failure and the call to
vfs_poll() done by io_arm_poll_handler().

Detecting the situation and reissuing the operation is much faster
than going ahead and pushing the operation to the io-wq.

Performance improvement testing has been performed with:
Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
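
For reference, a receiver matching this setup could look roughly like the
sketch below (a minimal liburing sketch under assumptions: the ring is
already initialized with io_uring_queue_init() and sockfd is a connected
TCP socket; the buffer size and error handling are illustrative, not the
actual benchmark code):

#include <liburing.h>

static char buf[4096];

/* Single-threaded recv loop: whenever the socket has no data yet,
 * the submitted recv hits the -EAGAIN path that this patch optimizes. */
static int recv_loop(struct io_uring *ring, int sockfd)
{
	struct io_uring_cqe *cqe;
	int ret;

	for (;;) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

		if (!sqe)
			return -EBUSY;	/* SQ ring full; not expected with one request in flight */
		io_uring_prep_recv(sqe, sockfd, buf, sizeof(buf), 0);
		io_uring_submit(ring);
		ret = io_uring_wait_cqe(ring, &cqe);
		if (ret < 0)
			return ret;
		ret = cqe->res;
		io_uring_cqe_seen(ring, cqe);
		if (ret <= 0)
			return ret;	/* socket error or stream EOF */
		/* consume ret bytes from buf here */
	}
}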

4 measurements have been taken:
1. The time it takes to process a read request when data is already available
2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
3. The time it takes to execute io_queue_async_work()
4. The time it takes to complete a read request asynchronously

2.25% of all the read operations did use the new path.
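
(The raw timings below appear to be in nanoseconds: the gap between the
io-wq insert and reissue averages, 8983.8 - 7882.7 ≈ 1101 ns, matches the
1.1 usec figure in the conclusion.) A point such as the "insert io-wq
time" can be measured by bracketing the path with ktime_get_ns(); a
hedged sketch of the idea, not the actual instrumentation that produced
these numbers:

	/* Illustrative only: time how long queueing the request
	 * to the io-wq takes, in nanoseconds. */
	u64 t0 = ktime_get_ns();

	io_queue_async_work(req);
	trace_printk("io-wq insert: %llu ns\n", ktime_get_ns() - t0);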

ready data (baseline)
avg 3657.94182918628
min 580
max 20098
stddev 1213.15975908162

reissue completion
average 7882.67567567568
min 2316
max 28811
stddev 1982.79172973284

insert io-wq time
average 8983.82276995305
min 3324
max 87816
stddev 2551.60056552038

async time completion
average 24670.4758861127
min 10758
max 102612
stddev 3483.92416873804

Conclusion:
On average, reissuing the sqe with the patched code is 1.1 usec faster,
and in the worst-case scenario 59 usec faster, than placing the request
on the io-wq.

On average, the completion time when reissuing the sqe with the patched
code is 16.79 usec faster, and in the worst-case scenario 73.8 usec
faster, than async completion.

Signed-off-by: Olivier Langlois <[email protected]>
---
fs/io_uring.c | 31 ++++++++++++++++++++++---------
1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fc8637f591a6..5efa67c2f974 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5152,7 +5152,13 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
return mask;
}

-static bool io_arm_poll_handler(struct io_kiocb *req)
+enum {
+ IO_APOLL_OK,
+ IO_APOLL_ABORTED,
+ IO_APOLL_READY
+};
+
+static int io_arm_poll_handler(struct io_kiocb *req)
{
const struct io_op_def *def = &io_op_defs[req->opcode];
struct io_ring_ctx *ctx = req->ctx;
@@ -5162,22 +5168,22 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
int rw;

if (!req->file || !file_can_poll(req->file))
- return false;
+ return IO_APOLL_ABORTED;
if (req->flags & REQ_F_POLLED)
- return false;
+ return IO_APOLL_ABORTED;
if (def->pollin)
rw = READ;
else if (def->pollout)
rw = WRITE;
else
- return false;
+ return IO_APOLL_ABORTED;
/* if we can't nonblock try, then no point in arming a poll handler */
if (!io_file_supports_async(req, rw))
- return false;
+ return IO_APOLL_ABORTED;

apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
if (unlikely(!apoll))
- return false;
+ return IO_APOLL_ABORTED;
apoll->double_poll = NULL;

req->flags |= REQ_F_POLLED;
@@ -5203,12 +5209,14 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
if (ret || ipt.error) {
io_poll_remove_double(req);
spin_unlock_irq(&ctx->completion_lock);
- return false;
+ if (ret)
+ return IO_APOLL_READY;
+ return IO_APOLL_ABORTED;
}
spin_unlock_irq(&ctx->completion_lock);
trace_io_uring_poll_arm(ctx, req, req->opcode, req->user_data,
mask, apoll->poll.events);
- return true;
+ return IO_APOLL_OK;
}

static bool __io_poll_remove_one(struct io_kiocb *req,
@@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
int ret;

+issue_sqe:
ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);

/*
@@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
io_put_req(req);
}
} else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
- if (!io_arm_poll_handler(req)) {
+ switch (io_arm_poll_handler(req)) {
+ case IO_APOLL_READY:
+ goto issue_sqe;
+ case IO_APOLL_ABORTED:
/*
* Queued up for async execution, worker will release
* submit reference when the iocb is actually submitted.
*/
io_queue_async_work(req);
+ break;
}
} else {
io_req_complete_failed(req, ret);
--
2.32.0


2021-06-22 17:56:44

by Pavel Begunkov

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On 6/22/21 1:17 PM, Olivier Langlois wrote:
> It is quite frequent that when an operation fails and returns EAGAIN,
> the data becomes available between that failure and the call to
> vfs_poll() done by io_arm_poll_handler().
>
> Detecting the situation and reissuing the operation is much faster
> than going ahead and pushing the operation to the io-wq.
>
> Performance improvement testing has been performed with:
> Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
>
> 4 measurements have been taken:
> 1. The time it takes to process a read request when data is already available
> 2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
> 3. The time it takes to execute io_queue_async_work()
> 4. The time it takes to complete a read request asynchronously
>
> 2.25% of all the read operations did use the new path.
>
> ready data (baseline)
> avg 3657.94182918628
> min 580
> max 20098
> stddev 1213.15975908162
>
> reissue completion
> average 7882.67567567568
> min 2316
> max 28811
> stddev 1982.79172973284
>
> insert io-wq time
> average 8983.82276995305
> min 3324
> max 87816
> stddev 2551.60056552038
>
> async time completion
> average 24670.4758861127
> min 10758
> max 102612
> stddev 3483.92416873804
>
> Conclusion:
> On average, reissuing the sqe with the patched code is 1.1 usec faster,
> and in the worst-case scenario 59 usec faster, than placing the request
> on the io-wq.
>
> On average, the completion time when reissuing the sqe with the patched
> code is 16.79 usec faster, and in the worst-case scenario 73.8 usec
> faster, than async completion.
>
> Signed-off-by: Olivier Langlois <[email protected]>
> ---
> fs/io_uring.c | 31 ++++++++++++++++++++++---------
> 1 file changed, 22 insertions(+), 9 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index fc8637f591a6..5efa67c2f974 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c

[...]

> static bool __io_poll_remove_one(struct io_kiocb *req,
> @@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
> struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
> int ret;
>
> +issue_sqe:
> ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
>
> /*
> @@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
> io_put_req(req);
> }
> } else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
> - if (!io_arm_poll_handler(req)) {
> + switch (io_arm_poll_handler(req)) {
> + case IO_APOLL_READY:
> + goto issue_sqe;
> + case IO_APOLL_ABORTED:
> /*
> * Queued up for async execution, worker will release
> * submit reference when the iocb is actually submitted.
> */
> io_queue_async_work(req);
> + break;

Hmm, why is there a new break here? It will miscount @linked_timeout
if you do that. Every io_prep_linked_timeout() should be matched with
io_queue_linked_timeout().


> }
> } else {
> io_req_complete_failed(req, ret);
>

--
Pavel Begunkov

2021-06-22 18:10:09

by Pavel Begunkov

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On 6/22/21 6:54 PM, Pavel Begunkov wrote:
> On 6/22/21 1:17 PM, Olivier Langlois wrote:
>> It is quite frequent that when an operation fails and returns EAGAIN,
>> the data becomes available between that failure and the call to
>> vfs_poll() done by io_arm_poll_handler().
>>
>> Detecting the situation and reissuing the operation is much faster
>> than going ahead and pushing the operation to the io-wq.
>>
>> Performance improvement testing has been performed with:
>> Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
>>
>> 4 measurements have been taken:
>> 1. The time it takes to process a read request when data is already available
>> 2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
>> 3. The time it takes to execute io_queue_async_work()
>> 4. The time it takes to complete a read request asynchronously
>>
>> 2.25% of all the read operations did use the new path.
>>
>> ready data (baseline)
>> avg 3657.94182918628
>> min 580
>> max 20098
>> stddev 1213.15975908162
>>
>> reissue completion
>> average 7882.67567567568
>> min 2316
>> max 28811
>> stddev 1982.79172973284
>>
>> insert io-wq time
>> average 8983.82276995305
>> min 3324
>> max 87816
>> stddev 2551.60056552038
>>
>> async time completion
>> average 24670.4758861127
>> min 10758
>> max 102612
>> stddev 3483.92416873804
>>
>> Conclusion:
>> On average, reissuing the sqe with the patched code is 1.1 usec faster,
>> and in the worst-case scenario 59 usec faster, than placing the request
>> on the io-wq.
>>
>> On average, the completion time when reissuing the sqe with the patched
>> code is 16.79 usec faster, and in the worst-case scenario 73.8 usec
>> faster, than async completion.
>>
>> Signed-off-by: Olivier Langlois <[email protected]>
>> ---
>> fs/io_uring.c | 31 ++++++++++++++++++++++---------
>> 1 file changed, 22 insertions(+), 9 deletions(-)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index fc8637f591a6..5efa67c2f974 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>
> [...]
>
>> static bool __io_poll_remove_one(struct io_kiocb *req,
>> @@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
>> struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
>> int ret;
>>
>> +issue_sqe:
>> ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
>>
>> /*
>> @@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
>> io_put_req(req);
>> }
>> } else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
>> - if (!io_arm_poll_handler(req)) {
>> + switch (io_arm_poll_handler(req)) {
>> + case IO_APOLL_READY:
>> + goto issue_sqe;
>> + case IO_APOLL_ABORTED:
>> /*
>> * Queued up for async execution, worker will release
>> * submit reference when the iocb is actually submitted.
>> */
>> io_queue_async_work(req);
>> + break;
>
> Hmm, why is there a new break here? It will miscount @linked_timeout
> if you do that. Every io_prep_linked_timeout() should be matched with
> io_queue_linked_timeout().

Never mind, I said some nonsense and apparently need some coffee


>> }
>> } else {
>> io_req_complete_failed(req, ret);
>>
>

--
Pavel Begunkov

2021-06-22 19:07:48

by Olivier Langlois

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On Tue, 2021-06-22 at 19:01 +0100, Pavel Begunkov wrote:
> On 6/22/21 6:54 PM, Pavel Begunkov wrote:
> > On 6/22/21 1:17 PM, Olivier Langlois wrote:
> > >
> >
> > >  static bool __io_poll_remove_one(struct io_kiocb *req,
> > > @@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
> > >         struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
> > >         int ret;
> > >
> > > +issue_sqe:
> > >         ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
> > >
> > >         /*
> > > @@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
> > >                         io_put_req(req);
> > >                 }
> > >         } else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
> > > -               if (!io_arm_poll_handler(req)) {
> > > +               switch (io_arm_poll_handler(req)) {
> > > +               case IO_APOLL_READY:
> > > +                       goto issue_sqe;
> > > +               case IO_APOLL_ABORTED:
> > >                         /*
> > >                          * Queued up for async execution, worker will release
> > >                          * submit reference when the iocb is actually submitted.
> > >                          */
> > >                         io_queue_async_work(req);
> > > +                       break;
> >
> > Hmm, why is there a new break here? It will miscount @linked_timeout
> > if you do that. Every io_prep_linked_timeout() should be matched with
> > io_queue_linked_timeout().
>
> Never mind, I said some nonsense and apparently need some coffee

But this is a pertinent question, imho. I guess that you could get away
without it since it is the last case of the switch statement... I am
not sure what the kernel coding standard says about that.

However, I can tell you that there was also a break statement at the
end of the IO_APOLL_READY case, and checkpatch.pl complained that it
was useless since it followed a goto statement. Therefore, I removed
that one.

checkpatch.pl remained silent about the other break, which is why I
left it there.
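
To illustrate the two break statements being discussed, this is the
shape of the code (a sketch, not a verbatim quote of the patch):

	switch (io_arm_poll_handler(req)) {
	case IO_APOLL_READY:
		goto issue_sqe;
		break;	/* unreachable after goto: checkpatch.pl flagged it, so it was removed */
	case IO_APOLL_ABORTED:
		io_queue_async_work(req);
		break;	/* reachable final break: checkpatch.pl stays silent */
	}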

Greetings,


2021-06-22 20:53:15

by Pavel Begunkov

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On 6/22/21 8:05 PM, Olivier Langlois wrote:
> On Tue, 2021-06-22 at 19:01 +0100, Pavel Begunkov wrote:
>> On 6/22/21 6:54 PM, Pavel Begunkov wrote:
>>> On 6/22/21 1:17 PM, Olivier Langlois wrote:
>>>>
>>>
>>>>  static bool __io_poll_remove_one(struct io_kiocb *req,
>>>> @@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb
>>>> *req)
>>>>         struct io_kiocb *linked_timeout =
>>>> io_prep_linked_timeout(req);
>>>>         int ret;
>>>>  
>>>> +issue_sqe:
>>>>         ret = io_issue_sqe(req,
>>>> IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
>>>>  
>>>>         /*
>>>> @@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct
>>>> io_kiocb *req)
>>>>                         io_put_req(req);
>>>>                 }
>>>>         } else if (ret == -EAGAIN && !(req->flags &
>>>> REQ_F_NOWAIT)) {
>>>> -               if (!io_arm_poll_handler(req)) {
>>>> +               switch (io_arm_poll_handler(req)) {
>>>> +               case IO_APOLL_READY:
>>>> +                       goto issue_sqe;
>>>> +               case IO_APOLL_ABORTED:
>>>>                         /*
>>>>                          * Queued up for async execution, worker
>>>> will release
>>>>                          * submit reference when the iocb is
>>>> actually submitted.
>>>>                          */
>>>>                         io_queue_async_work(req);
>>>> +                       break;
>>>
>>> Hmm, why is there a new break here? It will miscount @linked_timeout
>>> if you do that. Every io_prep_linked_timeout() should be matched with
>>> io_queue_linked_timeout().
>>
>> Never mind, I said some nonsense and apparently need some coffee
>
> But this is a pertinent question, imho. I guess that you could get away

It appeared to me that it doesn't go down to the end of the function
but returns early instead; that's the nonsense part.

> without it since it is the last case of the switch statement... I am
> not sure what the kernel coding standard says about that.

breaks are preferable, and falling through should be explicitly
marked with fallthrough;
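
For context, kernel style spells out an intentional fall-through with
the fallthrough pseudo-keyword; a generic sketch (the STAGE_* names and
helper functions here are made up for illustration):

	switch (stage) {
	case STAGE_PREP:
		prep_request();
		fallthrough;	/* deliberate: prep always continues into submission */
	case STAGE_SUBMIT:
		submit_request();
		break;
	default:
		break;
	}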

> However, I can tell you that there was also a break statement at the
> end of the IO_APOLL_READY case, and checkpatch.pl complained that it
> was useless since it followed a goto statement. Therefore, I removed
> that one.
>
> checkpatch.pl remained silent about the other break, which is why I
> left it there.

--
Pavel Begunkov

2021-06-22 20:54:47

by Pavel Begunkov

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On 6/22/21 1:17 PM, Olivier Langlois wrote:
> It is quite frequent that when an operation fails and returns EAGAIN,
> the data becomes available between that failure and the call to
> vfs_poll() done by io_arm_poll_handler().

Looks good

Reviewed-by: Pavel Begunkov <[email protected]>

> Detecting the situation and reissuing the operation is much faster
> than going ahead and pushing the operation to the io-wq.
>
> Performance improvement testing has been performed with:
> Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
>
> 4 measurements have been taken:
> 1. The time it takes to process a read request when data is already available
> 2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
> 3. The time it takes to execute io_queue_async_work()
> 4. The time it takes to complete a read request asynchronously
>
> 2.25% of all the read operations did use the new path.
>
> ready data (baseline)
> avg 3657.94182918628
> min 580
> max 20098
> stddev 1213.15975908162
>
> reissue completion
> average 7882.67567567568
> min 2316
> max 28811
> stddev 1982.79172973284
>
> insert io-wq time
> average 8983.82276995305
> min 3324
> max 87816
> stddev 2551.60056552038
>
> async time completion
> average 24670.4758861127
> min 10758
> max 102612
> stddev 3483.92416873804
>
> Conclusion:
> On average, reissuing the sqe with the patched code is 1.1 usec faster,
> and in the worst-case scenario 59 usec faster, than placing the request
> on the io-wq.
>
> On average, the completion time when reissuing the sqe with the patched
> code is 16.79 usec faster, and in the worst-case scenario 73.8 usec
> faster, than async completion.
>
> Signed-off-by: Olivier Langlois <[email protected]>
> ---
> fs/io_uring.c | 31 ++++++++++++++++++++++---------
> 1 file changed, 22 insertions(+), 9 deletions(-)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index fc8637f591a6..5efa67c2f974 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -5152,7 +5152,13 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
> return mask;
> }
>
> -static bool io_arm_poll_handler(struct io_kiocb *req)
> +enum {
> + IO_APOLL_OK,
> + IO_APOLL_ABORTED,
> + IO_APOLL_READY
> +};
> +
> +static int io_arm_poll_handler(struct io_kiocb *req)
> {
> const struct io_op_def *def = &io_op_defs[req->opcode];
> struct io_ring_ctx *ctx = req->ctx;
> @@ -5162,22 +5168,22 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
> int rw;
>
> if (!req->file || !file_can_poll(req->file))
> - return false;
> + return IO_APOLL_ABORTED;
> if (req->flags & REQ_F_POLLED)
> - return false;
> + return IO_APOLL_ABORTED;
> if (def->pollin)
> rw = READ;
> else if (def->pollout)
> rw = WRITE;
> else
> - return false;
> + return IO_APOLL_ABORTED;
> /* if we can't nonblock try, then no point in arming a poll handler */
> if (!io_file_supports_async(req, rw))
> - return false;
> + return IO_APOLL_ABORTED;
>
> apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
> if (unlikely(!apoll))
> - return false;
> + return IO_APOLL_ABORTED;
> apoll->double_poll = NULL;
>
> req->flags |= REQ_F_POLLED;
> @@ -5203,12 +5209,14 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
> if (ret || ipt.error) {
> io_poll_remove_double(req);
> spin_unlock_irq(&ctx->completion_lock);
> - return false;
> + if (ret)
> + return IO_APOLL_READY;
> + return IO_APOLL_ABORTED;
> }
> spin_unlock_irq(&ctx->completion_lock);
> trace_io_uring_poll_arm(ctx, req, req->opcode, req->user_data,
> mask, apoll->poll.events);
> - return true;
> + return IO_APOLL_OK;
> }
>
> static bool __io_poll_remove_one(struct io_kiocb *req,
> @@ -6437,6 +6445,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
> struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
> int ret;
>
> +issue_sqe:
> ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
>
> /*
> @@ -6456,12 +6465,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
> io_put_req(req);
> }
> } else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
> - if (!io_arm_poll_handler(req)) {
> + switch (io_arm_poll_handler(req)) {
> + case IO_APOLL_READY:
> + goto issue_sqe;
> + case IO_APOLL_ABORTED:
> /*
> * Queued up for async execution, worker will release
> * submit reference when the iocb is actually submitted.
> */
> io_queue_async_work(req);
> + break;
> }
> } else {
> io_req_complete_failed(req, ret);
>

--
Pavel Begunkov

2021-06-25 00:46:25

by Jens Axboe

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On 6/22/21 6:17 AM, Olivier Langlois wrote:
> It is quite frequent that when an operation fails and returns EAGAIN,
> the data becomes available between that failure and the call to
> vfs_poll() done by io_arm_poll_handler().
>
> Detecting the situation and reissuing the operation is much faster
> than going ahead and pushing the operation to the io-wq.
>
> Performance improvement testing has been performed with:
> Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
>
> 4 measurements have been taken:
> 1. The time it takes to process a read request when data is already available
> 2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
> 3. The time it takes to execute io_queue_async_work()
> 4. The time it takes to complete a read request asynchronously
>
> 2.25% of all the read operations did use the new path.
>
> ready data (baseline)
> avg 3657.94182918628
> min 580
> max 20098
> stddev 1213.15975908162
>
> reissue completion
> average 7882.67567567568
> min 2316
> max 28811
> stddev 1982.79172973284
>
> insert io-wq time
> average 8983.82276995305
> min 3324
> max 87816
> stddev 2551.60056552038
>
> async time completion
> average 24670.4758861127
> min 10758
> max 102612
> stddev 3483.92416873804
>
> Conclusion:
> On average, reissuing the sqe with the patched code is 1.1 usec faster,
> and in the worst-case scenario 59 usec faster, than placing the request
> on the io-wq.
>
> On average, the completion time when reissuing the sqe with the patched
> code is 16.79 usec faster, and in the worst-case scenario 73.8 usec
> faster, than async completion.

Thanks for respinning with a (much) better commit message. Applied.

--
Jens Axboe

2021-06-25 08:16:32

by David Laight

Subject: RE: [PATCH v4] io_uring: reduce latency by reissuing the operation

From: Jens Axboe
> Sent: 25 June 2021 01:45
>
> On 6/22/21 6:17 AM, Olivier Langlois wrote:
> > It is quite frequent that when an operation fails and returns EAGAIN,
> > the data becomes available between that failure and the call to
> > vfs_poll() done by io_arm_poll_handler().
> >
> > Detecting the situation and reissuing the operation is much faster
> > than going ahead and pushing the operation to the io-wq.
> >
> > Performance improvement testing has been performed with:
> > Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
> >
> > 4 measurements have been taken:
> > 1. The time it takes to process a read request when data is already available
> > 2. The time it takes to process a request by calling io_issue_sqe()
> > twice after vfs_poll() indicated that data was available
> > 3. The time it takes to execute io_queue_async_work()
> > 4. The time it takes to complete a read request asynchronously
> >
> > 2.25% of all the read operations did use the new path.

How much slower is it when the data to complete the read isn't
available?

I suspect there are different workflows where that is almost
always true.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2021-06-28 06:45:49

by Olivier Langlois

Subject: Re: [PATCH v4] io_uring: reduce latency by reissuing the operation

On Fri, 2021-06-25 at 08:15 +0000, David Laight wrote:
> From: Jens Axboe
> > Sent: 25 June 2021 01:45
> >
> > On 6/22/21 6:17 AM, Olivier Langlois wrote:
> > > It is quite frequent that when an operation fails and returns EAGAIN,
> > > the data becomes available between that failure and the call to
> > > vfs_poll() done by io_arm_poll_handler().
> > >
> > > Detecting the situation and reissuing the operation is much faster
> > > than going ahead and pushing the operation to the io-wq.
> > >
> > > Performance improvement testing has been performed with:
> > > Single thread, 1 TCP connection receiving a 5 Mbps stream, no sqpoll.
> > >
> > > 4 measurements have been taken:
> > > 1. The time it takes to process a read request when data is already available
> > > 2. The time it takes to process a request by calling io_issue_sqe() twice after vfs_poll() indicated that data was available
> > > 3. The time it takes to execute io_queue_async_work()
> > > 4. The time it takes to complete a read request asynchronously
> > >
> > > 2.25% of all the read operations did use the new path.
>
> How much slower is it when the data to complete the read isn't
> available?
>
> I suspect there are different workflows where that is almost
> always true.
>
David,

In the case where the data to complete the read isn't available, the
request will be processed exactly as it was before the patch: ideally
through io_uring's fast-poll feature, or, if that is not possible
because arming the poll has been aborted, by punting the request to
the io-wq.
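
Put differently, the post-patch decision tree in __io_queue_sqe() looks
roughly like this (condensed pseudocode of the diff in the patch, not a
new change):

issue_sqe:
	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
	if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
		switch (io_arm_poll_handler(req)) {
		case IO_APOLL_READY:	/* data became available meanwhile */
			goto issue_sqe;	/* reissue inline, skipping the io-wq */
		case IO_APOLL_ABORTED:	/* the poll could not be armed */
			io_queue_async_work(req);	/* punt to the io-wq */
			break;
		/* IO_APOLL_OK: poll armed, request resumes on wakeup */
		}
	}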

Greetings,