From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 15/15] io_uring: support true async buffered reads, if file provides it
Date: Thu, 18 Jun 2020 08:43:55 -0600
Message-Id: <20200618144355.17324-16-axboe@kernel.dk>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200618144355.17324-1-axboe@kernel.dk>
References: <20200618144355.17324-1-axboe@kernel.dk>

If the file is flagged with FMODE_BUF_RASYNC, then we don't have to
punt the buffered read to an io-wq worker. Instead we can rely on page
unlocking callbacks to support retry-based async IO. This is a lot
more efficient than doing async thread offload.

The retry is done similarly to how we handle poll-based retry. From
the unlock callback, we simply queue the retry to a task_work-based
handler.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
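As a userspace illustration of what this enables, here is a minimal
sketch using liburing, assuming liburing is installed and the kernel
supports IORING_OP_READ (5.6+); the file name "testfile", the queue
depth of 8, and the 4KB read size are arbitrary choices, not anything
this patch mandates. The submitted read is a plain buffered read, so
on a file whose filesystem sets FMODE_BUF_RASYNC, a page cache miss
is now retried from the page unlock callback instead of being punted
to an io-wq worker; the program looks the same either way:

/* Sketch: one buffered read through io_uring, via liburing. */
#include <fcntl.h>
#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd;

	fd = open("testfile", O_RDONLY);	/* buffered, no O_DIRECT */
	if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	if (!sqe)
		return 1;
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	io_uring_submit(&ring);

	if (io_uring_wait_cqe(&ring, &cqe))
		return 1;
	printf("read returned %d\n", cqe->res);	/* byte count or -errno */
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}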
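The retry builds on the same nonblocking semantics that RWF_NOWAIT
exposes to userspace: a buffered read whose pages are not in the page
cache fails with -EAGAIN instead of blocking on IO. Inside the kernel,
io_read()'s forced-nonblocking first attempt hits that same -EAGAIN,
and io_rw_should_retry() below now arms the unlock callback at that
point instead of always punting. A rough sketch with plain preadv2(2),
again with "testfile" as an arbitrary name, assuming a glibc and
kernel new enough for preadv2()/RWF_NOWAIT (roughly glibc 2.26+,
Linux 4.14+):

/* Sketch: RWF_NOWAIT returns EAGAIN on a page cache miss. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>

int main(void)
{
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	int fd = open("testfile", O_RDONLY);
	ssize_t ret;

	if (fd < 0)
		return 1;
	ret = preadv2(fd, &iov, 1, 0, RWF_NOWAIT);
	if (ret < 0 && errno == EAGAIN)
		printf("not cached; a blocking read would have waited\n");
	else if (ret >= 0)
		printf("read %zd bytes straight from the page cache\n", ret);
	return 0;
}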
 fs/io_uring.c | 145 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 137 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 40413fb9d07b..94282be1c413 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -78,6 +78,7 @@
 #include <linux/fs_struct.h>
 #include <linux/splice.h>
 #include <linux/task_work.h>
+#include <linux/pagemap.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/io_uring.h>
@@ -503,6 +504,8 @@ struct io_async_rw {
 	struct iovec			*iov;
 	ssize_t				nr_segs;
 	ssize_t				size;
+	struct wait_page_queue		wpq;
+	struct callback_head		task_work;
 };
 
 struct io_async_ctx {
@@ -2750,6 +2753,126 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
 	return 0;
 }
 
+static void __io_async_buf_error(struct io_kiocb *req, int error)
+{
+	struct io_ring_ctx *ctx = req->ctx;
+
+	spin_lock_irq(&ctx->completion_lock);
+	io_cqring_fill_event(req, error);
+	io_commit_cqring(ctx);
+	spin_unlock_irq(&ctx->completion_lock);
+
+	io_cqring_ev_posted(ctx);
+	req_set_fail_links(req);
+	io_double_put_req(req);
+}
+
+static void io_async_buf_cancel(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	__io_async_buf_error(req, -ECANCELED);
+}
+
+static void io_async_buf_retry(struct callback_head *cb)
+{
+	struct io_async_rw *rw;
+	struct io_ring_ctx *ctx;
+	struct io_kiocb *req;
+
+	rw = container_of(cb, struct io_async_rw, task_work);
+	req = rw->wpq.wait.private;
+	ctx = req->ctx;
+
+	__set_current_state(TASK_RUNNING);
+	if (!io_sq_thread_acquire_mm(ctx, req)) {
+		mutex_lock(&ctx->uring_lock);
+		__io_queue_sqe(req, NULL);
+		mutex_unlock(&ctx->uring_lock);
+	} else {
+		__io_async_buf_error(req, -EFAULT);
+	}
+}
+
+static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+			     int sync, void *arg)
+{
+	struct wait_page_queue *wpq;
+	struct io_kiocb *req = wait->private;
+	struct io_async_rw *rw = &req->io->rw;
+	struct wait_page_key *key = arg;
+	struct task_struct *tsk;
+	int ret;
+
+	wpq = container_of(wait, struct wait_page_queue, wait);
+
+	ret = wake_page_match(wpq, key);
+	if (ret != 1)
+		return ret;
+
+	list_del_init(&wait->entry);
+
+	init_task_work(&rw->task_work, io_async_buf_retry);
+	/* submit ref gets dropped, acquire a new one */
+	refcount_inc(&req->refs);
+	tsk = req->task;
+	ret = task_work_add(tsk, &rw->task_work, true);
+	if (unlikely(ret)) {
+		/* queue just for cancelation */
+		init_task_work(&rw->task_work, io_async_buf_cancel);
+		tsk = io_wq_get_task(req->ctx->io_wq);
+		task_work_add(tsk, &rw->task_work, true);
+	}
+	wake_up_process(tsk);
+	return 1;
+}
+
+static bool io_rw_should_retry(struct io_kiocb *req)
+{
+	struct kiocb *kiocb = &req->rw.kiocb;
+	int ret;
+
+	/* never retry for NOWAIT, we just complete with -EAGAIN */
+	if (req->flags & REQ_F_NOWAIT)
+		return false;
+
+	/* already tried, or we're doing O_DIRECT */
+	if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_WAITQ))
+		return false;
+	/*
+	 * just use poll if we can, and don't attempt if the fs doesn't
+	 * support callback based unlocks
+	 */
+	if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
+		return false;
+
+	/*
+	 * If request type doesn't require req->io to defer in general,
+	 * we need to allocate it here
+	 */
+	if (!req->io && __io_alloc_async_ctx(req))
+		return false;
+
+	ret = kiocb_wait_page_queue_init(kiocb, &req->io->rw.wpq,
+					 io_async_buf_func, req);
+	if (!ret) {
+		io_get_req_task(req);
+		return true;
+	}
+
+	return false;
+}
+
+static int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter)
+{
+	if (req->file->f_op->read_iter)
+		return call_read_iter(req->file, &req->rw.kiocb, iter);
+	return loop_rw_iter(READ, req->file, &req->rw.kiocb, iter);
+}
+
 static int io_read(struct io_kiocb *req, bool force_nonblock)
 {
 	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
@@ -2784,10 +2907,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 		unsigned long nr_segs = iter.nr_segs;
 		ssize_t ret2 = 0;
 
-		if (req->file->f_op->read_iter)
-			ret2 = call_read_iter(req->file, kiocb, &iter);
-		else
-			ret2 = loop_rw_iter(READ, req->file, kiocb, &iter);
+		ret2 = io_iter_do_read(req, &iter);
 
 		/* Catch -EAGAIN return for forced non-blocking submission */
 		if (!force_nonblock || (ret2 != -EAGAIN && ret2 != -EIO)) {
@@ -2799,17 +2919,26 @@ static int io_read(struct io_kiocb *req, bool force_nonblock)
 			ret = io_setup_async_rw(req, io_size, iovec,
 						inline_vecs, &iter);
 			if (ret)
-				goto out_free;
+				goto out;
 			/* any defer here is final, must blocking retry */
 			if (!(req->flags & REQ_F_NOWAIT) &&
 			    !file_can_poll(req->file))
 				req->flags |= REQ_F_MUST_PUNT;
+			/* if we can retry, do so with the callbacks armed */
+			if (io_rw_should_retry(req)) {
+				ret2 = io_iter_do_read(req, &iter);
+				if (ret2 == -EIOCBQUEUED) {
+					goto out;
+				} else if (ret2 != -EAGAIN) {
+					kiocb_done(kiocb, ret2);
+					goto out;
+				}
+			}
+			kiocb->ki_flags &= ~IOCB_WAITQ;
 			return -EAGAIN;
 		}
 	}
-out_free:
-	kfree(iovec);
-	req->flags &= ~REQ_F_NEED_CLEANUP;
+out:
 	return ret;
 }
 
-- 
2.27.0