From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Forza, Jens Axboe
Subject: [PATCH 5.13 100/104] io_uring: fix race in unified task_work running
Date: Mon, 2 Aug 2021 15:45:37 +0200
Message-Id: <20210802134347.295977332@linuxfoundation.org>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210802134344.028226640@linuxfoundation.org>
References: <20210802134344.028226640@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jens Axboe

commit 110aa25c3ce417a44e35990cf8ed22383277933a upstream.

We use a bit to manage whether we need to add the shared task_work, but
a list + lock for the pending work. Before aborting a current run of
the task_work we check if the list is empty, but we do so without
grabbing the lock that protects it. This can lead to races where we
think we have nothing left to run, when in practice we could be racing
with a task adding new work to the list. If we do hit that race
condition, we could be left with work items that need processing, but
the shared task_work is not active.

Ensure that we grab the lock before checking if the list is empty, so
we know if it's safe to exit the run or not.
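To make the shape of the race and of the fix concrete, here is a minimal
userspace sketch, assuming only POSIX threads and C11 atomics. The names
worker_ctx, queue_work() and run_pending() are invented for illustration
and are not io_uring or kernel APIs: the mutex stands in for
tctx->task_lock, the linked list for tctx->task_list, and the atomic flag
for the task_state bit.

/*
 * Hypothetical userspace model of the splice-under-lock pattern; not
 * io_uring code, just the same structure with invented names.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct work_item {
	struct work_item *next;
	int payload;
};

struct worker_ctx {
	pthread_mutex_t lock;		/* stands in for tctx->task_lock   */
	struct work_item *pending;	/* stands in for tctx->task_list   */
	atomic_bool scheduled;		/* stands in for the task_state bit */
};

/* Submitter side: queue an item, arm a run if none is pending. */
static void queue_work(struct worker_ctx *ctx, struct work_item *item)
{
	pthread_mutex_lock(&ctx->lock);
	item->next = ctx->pending;
	ctx->pending = item;
	pthread_mutex_unlock(&ctx->lock);

	if (!atomic_exchange(&ctx->scheduled, true)) {
		/* In the kernel this is where a new task_work run is queued. */
	}
}

/* Runner side: the shape of the loop after the fix. */
static void run_pending(struct worker_ctx *ctx)
{
	atomic_store(&ctx->scheduled, false);	/* clear_bit() at the top */

	for (;;) {
		struct work_item *list;

		/*
		 * Splice the whole pending list out while holding the lock.
		 * The buggy variant used an unlocked "is the pending list
		 * empty?" test as the loop condition, so it could conclude
		 * it was done while a submitter was concurrently adding an
		 * item, leaving that item queued with no active run.
		 */
		pthread_mutex_lock(&ctx->lock);
		list = ctx->pending;
		ctx->pending = NULL;
		pthread_mutex_unlock(&ctx->lock);

		/* The exit decision is now based on the locked snapshot. */
		if (!list)
			break;

		while (list) {
			struct work_item *next = list->next;

			printf("processed %d\n", list->payload);
			free(list);
			list = next;
		}
	}
}

int main(void)
{
	struct worker_ctx ctx = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.scheduled = false,
	};
	struct work_item *item = calloc(1, sizeof(*item));

	if (!item)
		return 1;
	item->payload = 42;
	queue_work(&ctx, item);
	run_pending(&ctx);	/* prints "processed 42" and returns */
	return 0;
}

Build with something like "cc -pthread sketch.c"; the single-threaded
main() only exercises the structure, it does not reproduce the race.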
Link: https://lore.kernel.org/io-uring/c6bd5987-e9ae-cd02-49d0-1b3ac1ef65b1@tnonline.net/
Cc: stable@vger.kernel.org # 5.11+
Reported-by: Forza
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1899,7 +1899,7 @@ static void tctx_task_work(struct callba
 
 	clear_bit(0, &tctx->task_state);
 
-	while (!wq_list_empty(&tctx->task_list)) {
+	while (true) {
 		struct io_ring_ctx *ctx = NULL;
 		struct io_wq_work_list list;
 		struct io_wq_work_node *node;
@@ -1909,6 +1909,9 @@ static void tctx_task_work(struct callba
 		INIT_WQ_LIST(&tctx->task_list);
 		spin_unlock_irq(&tctx->task_lock);
 
+		if (wq_list_empty(&list))
+			break;
+
 		node = list.first;
 		while (node) {
 			struct io_wq_work_node *next = node->next;
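With this change, the only check that can terminate the loop is made on the
snapshot spliced out of tctx->task_list while tctx->task_lock is held,
mirroring the run_pending() sketch above: any entry a concurrent submitter
queued before the splice is drained in this run, and an entry queued after
the splice should be picked up either by a later iteration or, because the
submit path re-arms task_work through the task_state bit (outside this
hunk), by a fresh run.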