From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Elad Grupi, Sagi Grimberg, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.10 241/663] nvmet-tcp: fix potential race of tcp socket closing accept_work
Date: Mon, 1 Mar 2021 17:08:09 +0100
Message-Id: <20210301161153.745307078@linuxfoundation.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210301161141.760350206@linuxfoundation.org>
References: <20210301161141.760350206@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sagi Grimberg

[ Upstream commit 0fbcfb089a3f2f2a731d01f0aec8f7697a849c28 ]

When we accept a TCP connection and allocate an nvmet-tcp queue we
should make sure not to fully establish it or reference it as the
connection may be already closing, which triggers queue release work,
which does not fence against queue establishment.

In order to address such a race, we make sure to check the sk_state and
contain the queue reference to be done underneath the sk_callback_lock
such that the queue release work correctly fences against it.
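For context, the release side takes the same lock before tearing the
queue down, which is what makes the sk_state check race-free. Roughly,
the teardown counterpart (a condensed sketch of
nvmet_tcp_restore_socket_callbacks() from the same file, shown here
only for illustration and not part of this patch) looks like this:

static void nvmet_tcp_restore_socket_callbacks(struct nvmet_tcp_queue *queue)
{
	struct socket *sock = queue->sock;

	/*
	 * Same sk_callback_lock as in nvmet_tcp_set_queue_sock(): release
	 * work cannot unhook the callbacks while establishment is
	 * publishing them, and vice versa. If establishment lost the race,
	 * sk_user_data was never set and there is nothing to unhook.
	 */
	write_lock_bh(&sock->sk->sk_callback_lock);
	sock->sk->sk_data_ready = queue->data_ready;
	sock->sk->sk_state_change = queue->state_change;
	sock->sk->sk_write_space = queue->write_space;
	sock->sk->sk_user_data = NULL;
	write_unlock_bh(&sock->sk->sk_callback_lock);
}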
Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
Reported-by: Elad Grupi
Signed-off-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/target/tcp.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 577ce7d403ae9..8b0485ada315b 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1485,17 +1485,27 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
 	if (inet->rcv_tos > 0)
 		ip_sock_set_tos(sock->sk, inet->rcv_tos);
 
+	ret = 0;
 	write_lock_bh(&sock->sk->sk_callback_lock);
-	sock->sk->sk_user_data = queue;
-	queue->data_ready = sock->sk->sk_data_ready;
-	sock->sk->sk_data_ready = nvmet_tcp_data_ready;
-	queue->state_change = sock->sk->sk_state_change;
-	sock->sk->sk_state_change = nvmet_tcp_state_change;
-	queue->write_space = sock->sk->sk_write_space;
-	sock->sk->sk_write_space = nvmet_tcp_write_space;
+	if (sock->sk->sk_state != TCP_ESTABLISHED) {
+		/*
+		 * If the socket is already closing, don't even start
+		 * consuming it
+		 */
+		ret = -ENOTCONN;
+	} else {
+		sock->sk->sk_user_data = queue;
+		queue->data_ready = sock->sk->sk_data_ready;
+		sock->sk->sk_data_ready = nvmet_tcp_data_ready;
+		queue->state_change = sock->sk->sk_state_change;
+		sock->sk->sk_state_change = nvmet_tcp_state_change;
+		queue->write_space = sock->sk->sk_write_space;
+		sock->sk->sk_write_space = nvmet_tcp_write_space;
+		queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
+	}
 	write_unlock_bh(&sock->sk->sk_callback_lock);
 
-	return 0;
+	return ret;
 }
 
 static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
@@ -1543,8 +1553,6 @@ static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
 	if (ret)
 		goto out_destroy_sq;
 
-	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
-
 	return 0;
 out_destroy_sq:
 	mutex_lock(&nvmet_tcp_queue_mutex);
-- 
2.27.0