From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Xiaoguang Wang, Pavel Begunkov, Jens Axboe
Subject: [PATCH 5.10 518/717] io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
Date: Mon, 28 Dec 2020 13:48:36 +0100
Message-Id: <20201228125045.774596353@linuxfoundation.org>
In-Reply-To: <20201228125020.963311703@linuxfoundation.org>
References: <20201228125020.963311703@linuxfoundation.org>

From: Xiaoguang Wang

commit c07e6719511e77c4b289f62bfe96423eb6ea061d upstream.

io_iopoll_complete() does not hold completion_lock to complete polled io,
so in io_wq_submit_work() we cannot call io_req_complete() directly to
complete polled io; otherwise there may be concurrent access to the cqring,
defer_list, etc., which is not safe.

Commit dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete
polled io") fixed this issue, but Pavel reported that under IOPOLL, requests
other than read/write can also be issued, such as buffer registration and
unregistration (IORING_OP_PROVIDE_BUFFERS or IORING_OP_REMOVE_BUFFERS), so
that fix is not sufficient.

Given that io_iopoll_complete() is always called under uring_lock, we can
also take uring_lock here for polled io to fix this issue.
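With this change applied, the failure path in io_wq_submit_work() ends up
looking roughly like the sketch below. This is only a simplified rendering
of the hunk that follows (the in-code comment and the unchanged surrounding
function body are elided), not additional code:

        if (ret) {
                struct io_ring_ctx *lock_ctx = NULL;

                /* Polled (IOPOLL) completions are otherwise serialized by
                 * uring_lock in io_iopoll_complete(), so take it here too. */
                if (req->ctx->flags & IORING_SETUP_IOPOLL)
                        lock_ctx = req->ctx;

                if (lock_ctx)
                        mutex_lock(&lock_ctx->uring_lock);

                req_set_fail_links(req);
                io_req_complete(req, ret);      /* may touch cqring, defer_list */

                if (lock_ctx)
                        mutex_unlock(&lock_ctx->uring_lock);
        }

        return io_steal_work(req);

No extra locking is taken in the non-polled case, since those completions
already go through the normal completion path protected by completion_lock.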
Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: Xiaoguang Wang
Reviewed-by: Pavel Begunkov
[axboe: don't deref 'req' after completing it]
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/io_uring.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6081,19 +6081,28 @@ static struct io_wq_work *io_wq_submit_w
         }
 
         if (ret) {
+                struct io_ring_ctx *lock_ctx = NULL;
+
+                if (req->ctx->flags & IORING_SETUP_IOPOLL)
+                        lock_ctx = req->ctx;
+
                 /*
-                 * io_iopoll_complete() does not hold completion_lock to complete
-                 * polled io, so here for polled io, just mark it done and still let
-                 * io_iopoll_complete() complete it.
+                 * io_iopoll_complete() does not hold completion_lock to
+                 * complete polled io, so here for polled io, we can not call
+                 * io_req_complete() directly, otherwise there maybe concurrent
+                 * access to cqring, defer_list, etc, which is not safe. Given
+                 * that io_iopoll_complete() is always called under uring_lock,
+                 * so here for polled io, we also get uring_lock to complete
+                 * it.
                  */
-                if (req->ctx->flags & IORING_SETUP_IOPOLL) {
-                        struct kiocb *kiocb = &req->rw.kiocb;
+                if (lock_ctx)
+                        mutex_lock(&lock_ctx->uring_lock);
+
+                req_set_fail_links(req);
+                io_req_complete(req, ret);
 
-                        kiocb_done(kiocb, ret, NULL);
-                } else {
-                        req_set_fail_links(req);
-                        io_req_complete(req, ret);
-                }
+                if (lock_ctx)
+                        mutex_unlock(&lock_ctx->uring_lock);
         }
 
         return io_steal_work(req);