From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chao Leng, Sagi Grimberg, Christoph Hellwig, Sasha Levin
Subject: [PATCH 5.11 076/120] nvme-rdma: fix possible hang when failing to set io queues
Date: Mon, 22 Mar 2021 13:27:39 +0100
Message-Id: <20210322121932.213310441@linuxfoundation.org>
In-Reply-To: <20210322121929.669628946@linuxfoundation.org>
References: <20210322121929.669628946@linuxfoundation.org>
X-Mailer: git-send-email 2.31.0
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sagi Grimberg

[ Upstream commit c4c6df5fc84659690d4391d1fba155cd94185295 ]

We only set up I/O queues for NVMe controllers, and it makes no sense
to allow a controller (re)connect without any I/O queues. If we happen
to fail setting the queue count for any reason, we should not treat
this as a successful reconnect, as I/O has no chance of going through.
Instead, just fail and schedule another reconnect.

Reported-by: Chao Leng
Fixes: 711023071960 ("nvme-rdma: add a NVMe over Fabrics RDMA host driver")
Signed-off-by: Sagi Grimberg
Reviewed-by: Chao Leng
Signed-off-by: Christoph Hellwig
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/rdma.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 746392eade45..0c3da10c1f29 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -736,8 +736,11 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 		return ret;
 
 	ctrl->ctrl.queue_count = nr_io_queues + 1;
-	if (ctrl->ctrl.queue_count < 2)
-		return 0;
+	if (ctrl->ctrl.queue_count < 2) {
+		dev_err(ctrl->ctrl.device,
+			"unable to set any I/O queues\n");
+		return -ENOMEM;
+	}
 
 	dev_info(ctrl->ctrl.device,
 		"creating %d I/O queues.\n", nr_io_queues);
-- 
2.30.1
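
For reviewers who want to see the behavioural difference in isolation, below
is a minimal, self-contained C sketch of the control flow this patch changes.
It is not the kernel source: struct ctrl, alloc_io_queues() and reconnect()
are illustrative stand-ins for the nvme-rdma structures and the (re)connect
path. Before the fix, a zero return with no I/O queues let the reconnect
"succeed" with nothing capable of carrying I/O; after the fix, the error
return lets the caller fail the attempt and schedule another reconnect.

/*
 * Simplified sketch of the flow the patch changes.  Names are
 * illustrative stand-ins, not the actual nvme-rdma symbols.
 */
#include <errno.h>
#include <stdio.h>

struct ctrl {
	int queue_count;		/* admin queue + I/O queues */
};

/* Stand-in for nvme_rdma_alloc_io_queues() after the patch. */
static int alloc_io_queues(struct ctrl *c, int nr_io_queues)
{
	c->queue_count = nr_io_queues + 1;
	if (c->queue_count < 2) {
		fprintf(stderr, "unable to set any I/O queues\n");
		return -ENOMEM;		/* was "return 0" before the fix */
	}
	printf("creating %d I/O queues\n", nr_io_queues);
	return 0;
}

/*
 * Stand-in for the (re)connect path: an error return lets it schedule
 * another reconnect instead of declaring the controller live with no
 * way to issue I/O.
 */
static void reconnect(struct ctrl *c, int nr_io_queues)
{
	if (alloc_io_queues(c, nr_io_queues)) {
		printf("setup failed, scheduling another reconnect\n");
		return;
	}
	printf("controller live with %d queues\n", c->queue_count);
}

int main(void)
{
	struct ctrl c = { 0 };

	reconnect(&c, 4);	/* normal case: I/O queues granted */
	reconnect(&c, 0);	/* failure case: no I/O queues, retry later */
	return 0;
}

With the old "return 0", the failure case above would have fallen through to
the "controller live" branch with queue_count == 1, which is the situation
the patch title describes as a possible hang.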