Date: Sun, 20 Jun 2021 15:05:14 -0400
From: Olivier Langlois <olivier@trillion01.com>
To: Jens Axboe, Pavel Begunkov, io-uring@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Olivier Langlois <olivier@trillion01.com>
Subject: [PATCH v2] io_uring: reduce latency by reissuing the operation

It is quite frequent that when an operation fails and returns EAGAIN, the data becomes
available between that failure and the call to vfs_poll() done by
io_arm_poll_handler(). Detecting the situation and reissuing the operation
is much faster than going ahead and pushing the operation to the io-wq.

Signed-off-by: Olivier Langlois <olivier@trillion01.com>
---
 fs/io_uring.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fa8794c61af7..6e037304429a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5143,7 +5143,10 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
 	return mask;
 }
 
-static bool io_arm_poll_handler(struct io_kiocb *req)
+#define IO_ARM_POLL_OK 0
+#define IO_ARM_POLL_ERR 1
+#define IO_ARM_POLL_READY 2
+static int io_arm_poll_handler(struct io_kiocb *req)
 {
 	const struct io_op_def *def = &io_op_defs[req->opcode];
 	struct io_ring_ctx *ctx = req->ctx;
@@ -5153,22 +5156,22 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
 	int rw;
 
 	if (!req->file || !file_can_poll(req->file))
-		return false;
+		return IO_ARM_POLL_ERR;
 	if (req->flags & REQ_F_POLLED)
-		return false;
+		return IO_ARM_POLL_ERR;
 	if (def->pollin)
 		rw = READ;
 	else if (def->pollout)
 		rw = WRITE;
 	else
-		return false;
+		return IO_ARM_POLL_ERR;
 	/* if we can't nonblock try, then no point in arming a poll handler */
 	if (!io_file_supports_async(req, rw))
-		return false;
+		return IO_ARM_POLL_ERR;
 
 	apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
 	if (unlikely(!apoll))
-		return false;
+		return IO_ARM_POLL_ERR;
 	apoll->double_poll = NULL;
 
 	req->flags |= REQ_F_POLLED;
@@ -5194,12 +5197,12 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
 	if (ret || ipt.error) {
 		io_poll_remove_double(req);
 		spin_unlock_irq(&ctx->completion_lock);
-		return false;
+		return ret ? IO_ARM_POLL_READY : IO_ARM_POLL_ERR;
 	}
 	spin_unlock_irq(&ctx->completion_lock);
 	trace_io_uring_poll_arm(ctx, req->opcode, req->user_data, mask,
				apoll->poll.events);
-	return true;
+	return IO_ARM_POLL_OK;
 }
 
 static bool __io_poll_remove_one(struct io_kiocb *req,
@@ -6416,6 +6419,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
 	struct io_kiocb *linked_timeout = io_prep_linked_timeout(req);
 	int ret;
 
+issue_sqe:
 	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
 
 	/*
@@ -6435,12 +6439,16 @@ static void __io_queue_sqe(struct io_kiocb *req)
 			io_put_req(req);
 		}
 	} else if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {
-		if (!io_arm_poll_handler(req)) {
+		switch (io_arm_poll_handler(req)) {
+		case IO_ARM_POLL_READY:
+			goto issue_sqe;
+		case IO_ARM_POLL_ERR:
			/*
			 * Queued up for async execution, worker will release
			 * submit reference when the iocb is actually submitted.
			 */
			io_queue_async_work(req);
+			break;
		}
	} else {
		io_req_complete_failed(req, ret);
-- 
2.32.0