From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Harris, James R",
    Roy Shterman, Sagi Grimberg
Subject: [PATCH 5.1 97/98] nvme-tcp: fix queue
 mapping when queue count is limited
Date: Thu, 20 Jun 2019 19:58:04 +0200
Message-Id: <20190620174354.311087942@linuxfoundation.org>
In-Reply-To: <20190620174349.443386789@linuxfoundation.org>
References: <20190620174349.443386789@linuxfoundation.org>

From: Sagi Grimberg

commit 6486199378a505c58fddc47459631235c9fb7638 upstream.

When the controller supports fewer queues than requested, we should
make sure that queue mapping does the right thing and does not assume
that all queues are available. This fixes a crash when the controller
supports fewer queues than requested.

The rules are:
1. if no write queues are requested, we assign the available queues
   to the default queue map. The default and read queue maps share the
   existing queues.
2. if write queues are requested:
   - first make sure that the read queue map gets the requested
     nr_io_queues count
   - then grant the default queue map the minimum between the requested
     nr_write_queues and the remaining queues. If there are no available
     queues to dedicate to the default queue map, fall back to (1) and
     share all the queues in the existing queue map.

Also, provide a log indication of how we constructed the different
queue maps.

Reported-by: Harris, James R
Tested-by: Jim Harris
Cc: <stable@vger.kernel.org> # v5.0+
Suggested-by: Roy Shterman
Signed-off-by: Sagi Grimberg
Signed-off-by: Greg Kroah-Hartman

---
 drivers/nvme/host/tcp.c |   57 ++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 50 insertions(+), 7 deletions(-)

--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -111,6 +111,7 @@ struct nvme_tcp_ctrl {
 	struct work_struct	err_work;
 	struct delayed_work	connect_work;
 	struct nvme_tcp_request async_req;
+	u32			io_queues[HCTX_MAX_TYPES];
 };
 
 static LIST_HEAD(nvme_tcp_ctrl_list);
@@ -1564,6 +1565,35 @@ static unsigned int nvme_tcp_nr_io_queue
 	return nr_io_queues;
 }
 
+static void nvme_tcp_set_io_queues(struct nvme_ctrl *nctrl,
+		unsigned int nr_io_queues)
+{
+	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
+	struct nvmf_ctrl_options *opts = nctrl->opts;
+
+	if (opts->nr_write_queues && opts->nr_io_queues < nr_io_queues) {
+		/*
+		 * separate read/write queues
+		 * hand out dedicated default queues only after we have
+		 * sufficient read queues.
+		 */
+		ctrl->io_queues[HCTX_TYPE_READ] = opts->nr_io_queues;
+		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ];
+		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
+			min(opts->nr_write_queues, nr_io_queues);
+		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
+	} else {
+		/*
+		 * shared read/write queues
+		 * either no write queues were requested, or we don't have
+		 * sufficient queue count to have dedicated default queues.
+		 */
+		ctrl->io_queues[HCTX_TYPE_DEFAULT] =
+			min(opts->nr_io_queues, nr_io_queues);
+		nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT];
+	}
+}
+
 static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 {
 	unsigned int nr_io_queues;
@@ -1581,6 +1611,8 @@ static int nvme_tcp_alloc_io_queues(stru
 	dev_info(ctrl->device,
 		"creating %d I/O queues.\n", nr_io_queues);
 
+	nvme_tcp_set_io_queues(ctrl, nr_io_queues);
+
 	return __nvme_tcp_alloc_io_queues(ctrl);
 }
 
@@ -2089,23 +2121,34 @@ static blk_status_t nvme_tcp_queue_rq(st
 static int nvme_tcp_map_queues(struct blk_mq_tag_set *set)
 {
 	struct nvme_tcp_ctrl *ctrl = set->driver_data;
+	struct nvmf_ctrl_options *opts = ctrl->ctrl.opts;
 
-	set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
-	set->map[HCTX_TYPE_READ].nr_queues = ctrl->ctrl.opts->nr_io_queues;
-	if (ctrl->ctrl.opts->nr_write_queues) {
+	if (opts->nr_write_queues && ctrl->io_queues[HCTX_TYPE_READ]) {
 		/* separate read/write queues */
 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
-			ctrl->ctrl.opts->nr_write_queues;
+			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+		set->map[HCTX_TYPE_READ].nr_queues =
+			ctrl->io_queues[HCTX_TYPE_READ];
 		set->map[HCTX_TYPE_READ].queue_offset =
-			ctrl->ctrl.opts->nr_write_queues;
+			ctrl->io_queues[HCTX_TYPE_DEFAULT];
 	} else {
-		/* mixed read/write queues */
+		/* shared read/write queues */
 		set->map[HCTX_TYPE_DEFAULT].nr_queues =
-			ctrl->ctrl.opts->nr_io_queues;
+			ctrl->io_queues[HCTX_TYPE_DEFAULT];
+		set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+		set->map[HCTX_TYPE_READ].nr_queues =
+			ctrl->io_queues[HCTX_TYPE_DEFAULT];
 		set->map[HCTX_TYPE_READ].queue_offset = 0;
 	}
 	blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
 	blk_mq_map_queues(&set->map[HCTX_TYPE_READ]);
+
+	dev_info(ctrl->ctrl.device,
+		"mapped %d/%d default/read queues.\n",
+		ctrl->io_queues[HCTX_TYPE_DEFAULT],
+		ctrl->io_queues[HCTX_TYPE_READ]);
+
 	return 0;
 }
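
The allocation rules in the commit message can be exercised in
isolation. Below is a minimal user-space sketch of the same arithmetic;
the struct and function names are hypothetical stand-ins for
nvmf_ctrl_options and nvme_tcp_set_io_queues(), and only the split
logic mirrors the patch.

#include <stdio.h>

struct opts {
	unsigned int nr_io_queues;    /* requested read queues */
	unsigned int nr_write_queues; /* requested write queues */
};

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

/* Split the queues the controller actually granted between the maps. */
static void set_io_queues(const struct opts *opts, unsigned int granted,
			  unsigned int *nr_default, unsigned int *nr_read)
{
	if (opts->nr_write_queues && opts->nr_io_queues < granted) {
		/* separate maps: satisfy the read map first */
		*nr_read = opts->nr_io_queues;
		granted -= *nr_read;
		*nr_default = min_u(opts->nr_write_queues, granted);
	} else {
		/* shared maps: no dedicated read queues */
		*nr_read = 0;
		*nr_default = min_u(opts->nr_io_queues, granted);
	}
}

int main(void)
{
	struct opts opts = { .nr_io_queues = 4, .nr_write_queues = 4 };
	unsigned int def, rd;

	/* controller grants 6 of the 8 requested queues: 4 read, 2 default */
	set_io_queues(&opts, 6, &def, &rd);
	printf("granted=6 -> default=%u read=%u\n", def, rd);

	/*
	 * controller grants only 3: no room for dedicated defaults,
	 * so fall back to rule (1) and share all 3 queues
	 */
	set_io_queues(&opts, 3, &def, &rd);
	printf("granted=3 -> default=%u read=%u\n", def, rd);
	return 0;
}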
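
The last hunk's layout can be sketched the same way: with separate maps
the default (write) queues occupy hardware-queue offsets
[0, nr_default) and the read queues follow at offset nr_default, while
in the shared case both map types alias the same range starting at 0.
The qmap struct below is a hypothetical stand-in for
struct blk_mq_queue_map; the driver's extra opts->nr_write_queues check
is folded into nr_read being nonzero, and the blk_mq_map_queues() calls
that spread CPUs over each map are omitted.

#include <stdio.h>

struct qmap {              /* stand-in for struct blk_mq_queue_map */
	unsigned int nr_queues;
	unsigned int queue_offset;
};

static void map_queues(unsigned int nr_default, unsigned int nr_read,
		       struct qmap *def, struct qmap *rd)
{
	if (nr_read) {
		/* separate read/write queues: read map follows default map */
		def->nr_queues = nr_default;
		def->queue_offset = 0;
		rd->nr_queues = nr_read;
		rd->queue_offset = nr_default;
	} else {
		/* shared read/write queues: both maps alias [0, nr_default) */
		def->nr_queues = nr_default;
		def->queue_offset = 0;
		rd->nr_queues = nr_default;
		rd->queue_offset = 0;
	}
}

int main(void)
{
	struct qmap def, rd;

	/* the 2 default + 4 read split from the previous example */
	map_queues(2, 4, &def, &rd);
	printf("default: %u queues at offset %u\n", def.nr_queues, def.queue_offset);
	printf("read:    %u queues at offset %u\n", rd.nr_queues, rd.queue_offset);
	return 0;
}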