From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dylan Yudaken,
 Pavel Begunkov, Jens Axboe, Sasha Levin
Subject: [PATCH 6.0 823/862] io_uring: fix CQE reordering
Date: Wed, 19 Oct 2022 10:35:10 +0200
Message-Id: <20221019083326.281411913@linuxfoundation.org>
X-Mailer: git-send-email 2.38.0
In-Reply-To: <20221019083249.951566199@linuxfoundation.org>
References: <20221019083249.951566199@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Pavel Begunkov

[ Upstream commit aa1df3a360a0c50e0f0086a785d75c2785c29967 ]

Overflowing CQEs may result in reordering, which is buggy in case of
links, F_MORE and so on. If we guarantee that we don't reorder for
the unlikely event of a CQ ring overflow, then we can further extend
this to not have to terminate multishot requests if it happens. For
other operations, like zerocopy sends, we have no choice but to honor
CQE ordering.
Reported-by: Dylan Yudaken
Signed-off-by: Pavel Begunkov
Link: https://lore.kernel.org/r/ec3bc55687b0768bbe20fb62d7d06cfced7d7e70.1663892031.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 io_uring/io_uring.c | 12 ++++++++++--
 io_uring/io_uring.h | 12 +++++++++---
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index a22a32acf590..c5dd483a7de2 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -567,7 +567,7 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 	io_cq_lock(ctx);
 	while (!list_empty(&ctx->cq_overflow_list)) {
-		struct io_uring_cqe *cqe = io_get_cqe(ctx);
+		struct io_uring_cqe *cqe = io_get_cqe_overflow(ctx, true);
 		struct io_overflow_cqe *ocqe;
 
 		if (!cqe && !force)
@@ -694,12 +694,19 @@ bool io_req_cqe_overflow(struct io_kiocb *req)
  * control dependency is enough as we're using WRITE_ONCE to
  * fill the cq entry
  */
-struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
+struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow)
 {
 	struct io_rings *rings = ctx->rings;
 	unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
 	unsigned int free, queued, len;
 
+	/*
+	 * Posting into the CQ when there are pending overflowed CQEs may break
+	 * ordering guarantees, which will affect links, F_MORE users and more.
+	 * Force overflow the completion.
+	 */
+	if (!overflow && (ctx->check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT)))
+		return NULL;
+
 	/* userspace may cheat modifying the tail, be safe and do min */
 	queued = min(__io_cqring_events(ctx), ctx->cq_entries);
@@ -2232,6 +2239,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 
 	do {
 		io_cqring_overflow_flush(ctx);
+
 		if (io_cqring_events(ctx) >= min_events)
 			return 0;
 		if (!io_run_task_work())
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 2f73f83af960..45809ae6f64e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -24,7 +24,7 @@ enum {
 	IOU_STOP_MULTISHOT = -ECANCELED,
 };
 
-struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
+struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx, bool overflow);
 bool io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(void);
 void io_req_complete_failed(struct io_kiocb *req, s32 res);
@@ -91,7 +91,8 @@ static inline void io_cq_lock(struct io_ring_ctx *ctx)
 
 void io_cq_unlock_post(struct io_ring_ctx *ctx);
 
-static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
+static inline struct io_uring_cqe *io_get_cqe_overflow(struct io_ring_ctx *ctx,
+						       bool overflow)
 {
 	if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
 		struct io_uring_cqe *cqe = ctx->cqe_cached;
@@ -103,7 +104,12 @@ static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 		return cqe;
 	}
 
-	return __io_get_cqe(ctx);
+	return __io_get_cqe(ctx, overflow);
+}
+
+static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
+{
+	return io_get_cqe_overflow(ctx, false);
 }
 
 static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
-- 
2.35.1