From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Maurizio Lombardi, Keith Busch, Sagi Grimberg, John Meneghini,
	Christoph Hellwig, Sasha Levin, kch@nvidia.com,
	linux-nvme@lists.infradead.org
Subject: [PATCH AUTOSEL 5.15 40/68] nvmet-tcp: fix a race condition between release_queue and io_work
Date: Tue, 30 Nov 2021 09:46:36 -0500
Message-Id: <20211130144707.944580-40-sashal@kernel.org>
In-Reply-To: <20211130144707.944580-1-sashal@kernel.org>
References: <20211130144707.944580-1-sashal@kernel.org>

From: Maurizio Lombardi

[ Upstream commit a208fc56721775987c1b86e20d86d7e0d017c0b2 ]

If the initiator executes a reset controller operation while performing
I/O, the target kernel will crash because of a race condition between
release_queue and io_work: nvmet_tcp_uninit_data_in_cmds() may be
executed while io_work is still running. Calling flush_work() was not
sufficient to prevent this, because io_work can requeue itself.

Fix this bug by using cancel_work_sync() to prevent io_work from
requeuing itself, and set rcv_state to NVMET_TCP_RECV_ERR to make sure
no more data is received from the socket.
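
For context on why cancel_work_sync() is the right call here:
flush_work() only waits for the instance of the work item that is
currently queued or running, so a work function that requeues itself can
still be executing after the flush returns. cancel_work_sync() also
removes a pending (self-)requeue and guarantees the work is neither
pending nor executing when it returns. A minimal, illustrative sketch of
the same pattern follows; the demo_* names are hypothetical and not part
of nvmet-tcp:

  // SPDX-License-Identifier: GPL-2.0
  /* Illustrative sketch only: demo_* names are hypothetical. */
  #include <linux/module.h>
  #include <linux/workqueue.h>
  #include <linux/atomic.h>

  static struct work_struct demo_work;
  static atomic_t demo_should_stop = ATOMIC_INIT(0);

  static void demo_work_fn(struct work_struct *w)
  {
  	/* ... process some data ... */

  	/* Like io_work, requeue ourselves if there may be more to do. */
  	if (!atomic_read(&demo_should_stop))
  		schedule_work(&demo_work);
  }

  static void demo_teardown(void)
  {
  	atomic_set(&demo_should_stop, 1);

  	/*
  	 * flush_work(&demo_work) would only wait for the currently
  	 * queued/running instance; a concurrent self-requeue could still
  	 * run after teardown continues.
  	 *
  	 * cancel_work_sync() waits for the running instance and removes a
  	 * pending requeue, so the work is neither pending nor executing
  	 * when it returns.
  	 */
  	cancel_work_sync(&demo_work);
  }

  static int __init demo_init(void)
  {
  	INIT_WORK(&demo_work, demo_work_fn);
  	schedule_work(&demo_work);
  	return 0;
  }

  static void __exit demo_exit(void)
  {
  	demo_teardown();
  }

  module_init(demo_init);
  module_exit(demo_exit);
  MODULE_LICENSE("GPL");

In the patch below the same idea applies: restoring the socket callbacks
stops new wakeups, cancel_work_sync() stops the self-requeue loop, and
setting rcv_state to NVMET_TCP_RECV_ERR keeps any data still sitting in
the socket from being received.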
Signed-off-by: Maurizio Lombardi
Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: John Meneghini
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/target/tcp.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 84c387e4bf431..18f36256095f6 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1437,7 +1437,9 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	mutex_unlock(&nvmet_tcp_queue_mutex);
 
 	nvmet_tcp_restore_socket_callbacks(queue);
-	flush_work(&queue->io_work);
+	cancel_work_sync(&queue->io_work);
+	/* stop accepting incoming data */
+	queue->rcv_state = NVMET_TCP_RECV_ERR;
 
 	nvmet_tcp_uninit_data_in_cmds(queue);
 	nvmet_sq_destroy(&queue->nvme_sq);
-- 
2.33.0