From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sagi Grimberg, Max Gurtovoy, Sasha Levin
Subject: [PATCH 5.3 020/197] nvme-rdma: Fix max_hw_sectors calculation
Date: Sun, 27 Oct 2019 21:58:58 +0100
Message-Id: <20191027203352.763389443@linuxfoundation.org>
In-Reply-To: <20191027203351.684916567@linuxfoundation.org>
References: <20191027203351.684916567@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Max Gurtovoy

[ Upstream commit ff13c1b87c97275b82b2af99044b4abf6861b28f ]

By default, the NVMe/RDMA driver should support a max io_size of 1 MiB
(or up to the maximum size supported by the HCA). Currently, one will
see that /sys/class/block/<bdev>/queue/max_hw_sectors_kb is 1020
instead of 1024.

A non-power-of-2 value can cause performance degradation due to
unnecessary splitting of IO requests and unoptimized allocation units.

The number of pages per MR has been fixed here, so there is no longer
any need to reduce max_sectors by 1.

Reviewed-by: Sagi Grimberg
Signed-off-by: Max Gurtovoy
Signed-off-by: Sagi Grimberg
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/rdma.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 1a6449bc547b9..cc1956349a2af 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -427,7 +427,7 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue)
 static int nvme_rdma_get_max_fr_pages(struct ib_device *ibdev)
 {
 	return min_t(u32, NVME_RDMA_MAX_SEGMENTS,
-		     ibdev->attrs.max_fast_reg_page_list_len);
+		     ibdev->attrs.max_fast_reg_page_list_len - 1);
 }
 
 static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
@@ -437,7 +437,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	const int cq_factor = send_wr_factor + 1;	/* + RECV */
 	int comp_vector, idx = nvme_rdma_queue_idx(queue);
 	enum ib_poll_context poll_ctx;
-	int ret;
+	int ret, pages_per_mr;
 
 	queue->device = nvme_rdma_find_get_device(queue->cm_id);
 	if (!queue->device) {
@@ -479,10 +479,16 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 		goto out_destroy_qp;
 	}
 
+	/*
+	 * Currently we don't use SG_GAPS MR's so if the first entry is
+	 * misaligned we'll end up using two entries for a single data page,
+	 * so one additional entry is required.
+	 */
+	pages_per_mr = nvme_rdma_get_max_fr_pages(ibdev) + 1;
 	ret = ib_mr_pool_init(queue->qp, &queue->qp->rdma_mrs,
 			      queue->queue_size,
 			      IB_MR_TYPE_MEM_REG,
-			      nvme_rdma_get_max_fr_pages(ibdev), 0);
+			      pages_per_mr, 0);
 	if (ret) {
 		dev_err(queue->ctrl->ctrl.device,
 			"failed to initialize MR pool sized %d for QID %d\n",
@@ -824,8 +830,8 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (error)
 		goto out_stop_queue;
 
-	ctrl->ctrl.max_hw_sectors =
-		(ctrl->max_fr_pages - 1) << (ilog2(SZ_4K) - 9);
+	ctrl->ctrl.max_segments = ctrl->max_fr_pages;
+	ctrl->ctrl.max_hw_sectors = ctrl->max_fr_pages << (ilog2(SZ_4K) - 9);
 
 	error = nvme_init_identify(&ctrl->ctrl);
 	if (error)
-- 
2.20.1
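
For anyone checking the changelog arithmetic, the following stand-alone sketch
(illustration only, not part of the patch; it assumes max_fr_pages resolves to
256, the NVME_RDMA_MAX_SEGMENTS cap, with a 4 KiB page size) shows why the old
"- 1" produced 1020 KiB while the new calculation yields the expected 1024 KiB:

#include <stdio.h>

#define ILOG2_SZ_4K	12	/* ilog2(SZ_4K) */

int main(void)
{
	unsigned int max_fr_pages = 256;	/* assumed HCA/driver limit */

	/* before the patch: (max_fr_pages - 1) pages of 4 KiB -> 2040 sectors */
	unsigned int old_sectors = (max_fr_pages - 1) << (ILOG2_SZ_4K - 9);
	/* after the patch: the full max_fr_pages pages -> 2048 sectors */
	unsigned int new_sectors = max_fr_pages << (ILOG2_SZ_4K - 9);

	/* a sector is 512 bytes, so sectors / 2 is the sysfs _kb value */
	printf("old max_hw_sectors_kb = %u\n", old_sectors / 2);	/* 1020 */
	printf("new max_hw_sectors_kb = %u\n", new_sectors / 2);	/* 1024 */
	return 0;
}

2048 is a power of 2, so 1 MiB requests no longer get split at an odd 1020 KiB
boundary; the extra page needed for a misaligned first entry is now accounted
for in the MR pool (pages_per_mr) instead of being subtracted from
max_hw_sectors.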