From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: kbusch@kernel.org, axboe@fb.com, hch@lst.de, sagi@grimberg.me
Cc: baolin.wang@linux.alibaba.com, baolin.wang7@gmail.com, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] nvme-pci: Move the sg table allocation/free into init/exit_request
Date: Sun, 28 Jun 2020 18:34:46 +0800
Message-Id: <4eedad1efab91f4529de19e14ba374da405aea3f.1593340208.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Move the sg table allocation and freeing into init_request() and
exit_request(), instead of allocating the sg table each time a request
is queued, which
can benefit I/O performance by removing a per-I/O mempool allocation
from the submission path.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 drivers/nvme/host/pci.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b1d18f0..cf7c997 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -410,9 +410,25 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
 	iod->nvmeq = nvmeq;
 	nvme_req(req)->ctrl = &dev->ctrl;
+
+	iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
+	if (!iod->sg)
+		return -ENOMEM;
+
+	sg_init_table(iod->sg, NVME_MAX_SEGS);
 	return 0;
 }
 
+static void nvme_exit_request(struct blk_mq_tag_set *set, struct request *req,
+			      unsigned int hctx_idx)
+{
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_dev *dev = set->driver_data;
+
+	mempool_free(iod->sg, dev->iod_mempool);
+	iod->sg = NULL;
+}
+
 static int queue_irq_offset(struct nvme_dev *dev)
 {
 	/* if we have more than 1 vec, admin queue offsets us by 1 */
@@ -557,8 +573,6 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 		dma_pool_free(dev->prp_page_pool, addr, dma_addr);
 		dma_addr = next_dma_addr;
 	}
-
-	mempool_free(iod->sg, dev->iod_mempool);
 }
 
 static void nvme_print_sgl(struct scatterlist *sgl, int nents)
@@ -808,10 +822,6 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
 	}
 
 	iod->dma_len = 0;
-	iod->sg = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
-	if (!iod->sg)
-		return BLK_STS_RESOURCE;
-	sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
 	iod->nents = blk_rq_map_sg(req->q, req, iod->sg);
 	if (!iod->nents)
 		goto out;
@@ -1557,6 +1567,7 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
 	.complete = nvme_pci_complete_rq,
 	.init_hctx = nvme_admin_init_hctx,
 	.init_request = nvme_init_request,
+	.exit_request = nvme_exit_request,
 	.timeout = nvme_timeout,
 };
 
@@ -1566,6 +1577,7 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
 	.commit_rqs = nvme_commit_rqs,
 	.init_hctx = nvme_init_hctx,
 	.init_request = nvme_init_request,
+	.exit_request = nvme_exit_request,
 	.map_queues = nvme_pci_map_queues,
 	.timeout = nvme_timeout,
 	.poll = nvme_poll,
-- 
1.8.3.1