If a misconfigured application does polled IO against a device
that has no poll queue, the process will keep issuing syscalls
between user space and kernel space, just as in normal polled IO,
so CPU utilization stays at 100% while the IO actually completes
through interrupts.

This patch returns -EOPNOTSUPP when the underlying device has no
poll queue, avoiding burning CPU for no performance benefit.
Signed-off-by: hexue <[email protected]>
---
block/blk-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 01186333c88e..0afcd74ae939 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -913,7 +913,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
q = bdev_get_queue(bdev);
if (cookie == BLK_QC_T_NONE ||
!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
- return 0;
+ return -EOPNOTSUPP;
blk_flush_plug(current->plug, false);
--
2.40.1
On Fri, May 31, 2024 at 05:10:15PM +0800, hexue wrote:
> If a misconfigured application does polled IO against a device
> that has no poll queue, the process will keep issuing syscalls
> between user space and kernel space, just as in normal polled IO,
> so CPU utilization stays at 100% while the IO actually completes
> through interrupts.
>
> This patch returns -EOPNOTSUPP when the underlying device has no
> poll queue, avoiding burning CPU for no performance benefit.
This feels like the wrong place to check for this.
As we've dropped synchronous polling, we now only support
thread-based polling, right now only through io_uring.
So we need to ensure REQ_POLLED doesn't even get set for any
other I/O.
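For reference, here is roughly where REQ_POLLED gets set today in the
block device direct IO path (paraphrased from mainline block/fops.c
and include/linux/bio.h; details vary by kernel version):

	/* include/linux/bio.h: mark a bio for polled completion */
	static inline void bio_set_polled(struct bio *bio, struct kiocb *kiocb)
	{
		bio->bi_opf |= REQ_POLLED;
		/* async polled IO must not sleep waiting for requests */
		if (!is_sync_kiocb(kiocb))
			bio->bi_opf |= REQ_NOWAIT;
	}

	/* block/fops.c, in the direct IO submission paths: the flag is
	 * only set when the caller asked for high-priority (polled) IO
	 * via IOCB_HIPRI, e.g. from an io_uring IOPOLL ring */
	if (iocb->ki_flags & IOCB_HIPRI)
		bio_set_polled(bio, iocb);

So gating where IOCB_HIPRI/REQ_POLLED can be set would reject polling
before a bio is ever submitted, rather than deep in bio_poll().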
On Sat, Jun 1, 2024 at 00:38:49 -0700, Christoph Hellwig wrote:
>On Fri, May 31, 2024 at 05:10:15PM +0800, hexue wrote:
>> If a misconfigured application does polled IO against a device
>> that has no poll queue, the process will keep issuing syscalls
>> between user space and kernel space, just as in normal polled IO,
>> so CPU utilization stays at 100% while the IO actually completes
>> through interrupts.
>>
>> This patch returns -EOPNOTSUPP when the underlying device has no
>> poll queue, avoiding burning CPU for no performance benefit.
>
>This feels like the wrong place to check for this.
>
>As we've dropped synchronous polling, we now only support
>thread-based polling, right now only through io_uring.
>
>So we need to ensure REQ_POLLED doesn't even get set for any
>other I/O.
Sorry, I'm not entirely clear on the impact of REQ_POLLED in this
context. From what I can find, REQ_POLLED is set on a request when it
is polled through io_uring or io_uring_cmd, but here we only check the
state of the request_queue.
Do you mean that making this check here may not be a good choice
because it could affect other IO (going through the same path), and
that the change should target io_uring only?
On 6/1/24 1:38 AM, Christoph Hellwig wrote:
> On Fri, May 31, 2024 at 05:10:15PM +0800, hexue wrote:
>> If a misconfigured application does polled IO against a device
>> that has no poll queue, the process will keep issuing syscalls
>> between user space and kernel space, just as in normal polled IO,
>> so CPU utilization stays at 100% while the IO actually completes
>> through interrupts.
>>
>> This patch returns -EOPNOTSUPP when the underlying device has no
>> poll queue, avoiding burning CPU for no performance benefit.
>
> This feels like the wrong place to check for this.
>
> As we've dropped synchronous polling, we now only support
> thread-based polling, right now only through io_uring.
>
> So we need to ensure REQ_POLLED doesn't even get set for any
> other I/O.
We happily allow polled IO for async IO, even if the destination
queue isn't polled (or it doesn't exist). This is different than the old
sync polled support.
It'll work just fine; it just won't really do what you expect in the
sense that IRQs are still being triggered. The app side won't wait
however, it'll just busy poll on the completion and either race with the
IRQ delivery or find it once completed.
So I think the bigger question here is if we want to change that. It can
indicate a bad configuration, but there's also risk there in terms of
breaking a setup that already works for someone. You'd get -EOPNOTSUPP
rather than just (suboptimal) completed IO.
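To make the tradeoff concrete, here is a minimal sketch of the kind of
setup that works today but would start failing. The device path and
queue depth are made up; it assumes liburing and a device whose driver
was loaded without poll queues:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <liburing.h>

	int main(void)
	{
		struct io_uring ring;
		struct io_uring_sqe *sqe;
		struct io_uring_cqe *cqe;
		void *buf;
		int fd, ret;

		/* IOPOLL ring: completions are reaped by busy-polling */
		if (io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL) < 0)
			return 1;

		/* IOPOLL requires O_DIRECT */
		fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
		if (fd < 0 || posix_memalign(&buf, 4096, 4096))
			return 1;

		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_read(sqe, fd, buf, 4096, 0);
		io_uring_submit(&ring);

		/* Today this spins in bio_poll() until the IRQ completes
		 * the IO; with the patch applied, the poll would instead
		 * fail with -EOPNOTSUPP. */
		ret = io_uring_wait_cqe(&ring, &cqe);
		printf("res = %d\n", ret ? ret : cqe->res);
		io_uring_cqe_seen(&ring, cqe);
		io_uring_queue_exit(&ring);
		return 0;
	}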
--
Jens Axboe
On Wed, Jun 12, 2024 at 02:53:27PM -0600, Jens Axboe wrote:
> > So we need to ensure REQ_POLLED doesn't even get set for any
> > other I/O.
>
> We happily allow polled IO for async IO, even if the destination
> queue isn't polled (or it doesn't exist). This is different than the old
> sync polled support.
Yes, and for that to work we can't start returning -EOPNOTSUPP as in
this patch, as the cookie can be BLK_QC_T_NONE for all kinds of reasons.
So if we want some kind of error handling that people don't even
bother to poll for devices where it is not supported we need that
check much earlier (probably in io_uring).
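For reference, io_uring already has an early gate at submission time;
roughly the following lives in io_rw_init_file() in io_uring/rw.c in
current mainline (paraphrased, details vary by version), and a
per-device poll-capability check would presumably sit near it:

	if (ctx->flags & IORING_SETUP_IOPOLL) {
		/* IOPOLL only makes sense for O_DIRECT against files
		 * that implement ->iopoll(); everything else is
		 * rejected up front */
		if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
			return -EOPNOTSUPP;

		kiocb->private = NULL;
		kiocb->ki_flags |= IOCB_HIPRI;
		kiocb->ki_complete = io_complete_rw_iopoll;
	}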
On 6/13/24 2:07 AM, Christoph Hellwig wrote:
> On Wed, Jun 12, 2024 at 02:53:27PM -0600, Jens Axboe wrote:
>>> So we need to ensure REQ_POLLED doesn't even get set for any
>>> other I/O.
>>
>> We happily allow polled IO for async IO, even if the destination
>> queue isn't polled (or it doesn't exist). This is different than the old
>> sync polled support.
>
> Yes, and for that to work we can't start returning -EOPNOTSUPP as in
> this patch, as the cookie can be BLK_QC_T_NONE for all kinds of reasons.
>
> So if we want some kind of error handling that people don't even
> bother to poll for devices where it is not supported we need that
> check much earlier (probably in io_uring).
There's just no way we can do that; who knows if you'll run into a
polled queue or not further down the stack.
IMHO there's nothing wrong with the current code. If you do stupid
things (do polled IO without having polled queues), then you get to
collect stupid prizes (potentially excessive CPU usage).
--
Jens Axboe