From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chao Leng, Sagi Grimberg, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.4 33/60] nvme-rdma: fix possible hang when failing to set io queues
Date: Mon, 22 Mar 2021 13:28:21 +0100
Message-Id: <20210322121923.477871583@linuxfoundation.org>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20210322121922.372583154@linuxfoundation.org>
References: <20210322121922.372583154@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Sagi Grimberg

[ Upstream commit c4c6df5fc84659690d4391d1fba155cd94185295 ]

We only set up I/O queues for NVMe controllers, and it makes absolutely
no sense to allow a controller (re)connect without any I/O queues. If we
happen to fail setting the queue count for any reason, we should not
allow this to be a successful reconnect, as I/O has no chance of going
through. Instead, just fail and schedule another reconnect.

Reported-by: Chao Leng
Fixes: 711023071960 ("nvme-rdma: add a NVMe over Fabrics RDMA host driver")
Signed-off-by: Sagi Grimberg
Reviewed-by: Chao Leng
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/rdma.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index da6030010432..b8c0f75bfb7b 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -666,8 +666,11 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 		return ret;
 
 	ctrl->ctrl.queue_count = nr_io_queues + 1;
-	if (ctrl->ctrl.queue_count < 2)
-		return 0;
+	if (ctrl->ctrl.queue_count < 2) {
+		dev_err(ctrl->ctrl.device,
+			"unable to set any I/O queues\n");
+		return -ENOMEM;
+	}
 
 	dev_info(ctrl->ctrl.device, "creating %d I/O queues.\n",
 		nr_io_queues);
-- 
2.30.1
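
[Editor's note] To make the behaviour change easier to see, here is a minimal,
stand-alone user-space model of the logic, not the driver code itself: the
names fake_ctrl, alloc_io_queues and reconnect_attempt are invented for
illustration, and only the queue_count < 2 check, the error message, and the
-ENOMEM return mirror the patch. Before the fix, a zero I/O queue count was
reported as success, so the reconnect path believed the controller was usable
even though no I/O could flow; after the fix, the caller sees an error and can
schedule another reconnect attempt.

	/* Stand-alone sketch of the fixed behaviour; all names are illustrative. */
	#include <errno.h>
	#include <stdio.h>

	struct fake_ctrl {
		int queue_count;	/* admin queue + I/O queues */
	};

	/* Mirrors the check added by the patch to the queue allocation path. */
	static int alloc_io_queues(struct fake_ctrl *ctrl, int nr_io_queues)
	{
		ctrl->queue_count = nr_io_queues + 1;
		if (ctrl->queue_count < 2) {
			fprintf(stderr, "unable to set any I/O queues\n");
			return -ENOMEM;		/* the old code returned 0 here */
		}
		printf("creating %d I/O queues\n", nr_io_queues);
		return 0;
	}

	/* A caller that treats any error as "retry the reconnect later". */
	static void reconnect_attempt(struct fake_ctrl *ctrl, int nr_io_queues)
	{
		int ret = alloc_io_queues(ctrl, nr_io_queues);

		if (ret)
			printf("reconnect failed (%d), scheduling another attempt\n", ret);
		else
			printf("reconnect succeeded\n");
	}

	int main(void)
	{
		struct fake_ctrl ctrl = { 0 };

		reconnect_attempt(&ctrl, 0);	/* now fails instead of "succeeding" */
		reconnect_attempt(&ctrl, 4);	/* normal case */
		return 0;
	}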