From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	James Smart <james.smart@broadcom.com>,
	Logan Gunthorpe <logang@deltatee.com>
Date: Thu, 29 Mar 2018 10:07:21 -0600
Message-Id: <20180329160721.4691-5-logang@deltatee.com>
In-Reply-To: <20180329160721.4691-1-logang@deltatee.com>
References: <20180329160721.4691-1-logang@deltatee.com>
X-Mailer: git-send-email 2.11.0
Subject: [PATCH 4/4] nvmet-fc: Use new SGL alloc/free helper for requests
List-ID: <linux-kernel.vger.kernel.org>

Use the new helpers introduced earlier to allocate the SGLs for the
request. To do this, we drop the apparently redundant data_sg and
data_sg_cnt members, as they are identical to the existing req.sg and
req.sg_cnt.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/fc.c | 38 +++++++++++---------------------------
 1 file changed, 11 insertions(+), 27 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 9f2f8ab83158..00135ff7d1c2 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -74,8 +74,6 @@ struct nvmet_fc_fcp_iod {
 	struct nvme_fc_cmd_iu		cmdiubuf;
 	struct nvme_fc_ersp_iu		rspiubuf;
 	dma_addr_t			rspdma;
-	struct scatterlist		*data_sg;
-	int				data_sg_cnt;
 	u32				offset;
 	enum nvmet_fcp_datadir		io_dir;
 	bool				active;
@@ -1696,43 +1694,34 @@ EXPORT_SYMBOL_GPL(nvmet_fc_rcv_ls_req);
 static int
 nvmet_fc_alloc_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 {
-	struct scatterlist *sg;
-	unsigned int nent;
 	int ret;
 
-	sg = sgl_alloc(fod->req.transfer_len, GFP_KERNEL, &nent);
-	if (!sg)
-		goto out;
+	ret = nvmet_req_alloc_sgl(&fod->req, &fod->queue->nvme_sq);
+	if (ret < 0)
+		return NVME_SC_INTERNAL;
 
-	fod->data_sg = sg;
-	fod->data_sg_cnt = nent;
-	ret = fc_dma_map_sg(fod->tgtport->dev, sg, nent,
+	ret = fc_dma_map_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
 				((fod->io_dir == NVMET_FCP_WRITE) ?
 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
 				/* note: write from initiator perspective */
 	if (!ret)
-		goto out;
+		return NVME_SC_INTERNAL;
 
 	return 0;
-
-out:
-	return NVME_SC_INTERNAL;
 }
 
 static void
 nvmet_fc_free_tgt_pgs(struct nvmet_fc_fcp_iod *fod)
 {
-	if (!fod->data_sg || !fod->data_sg_cnt)
+	if (!fod->req.sg || !fod->req.sg_cnt)
 		return;
 
-	fc_dma_unmap_sg(fod->tgtport->dev, fod->data_sg, fod->data_sg_cnt,
+	fc_dma_unmap_sg(fod->tgtport->dev, fod->req.sg, fod->req.sg_cnt,
 				((fod->io_dir == NVMET_FCP_WRITE) ?
 					DMA_FROM_DEVICE : DMA_TO_DEVICE));
 
-	sgl_free(fod->data_sg);
-	fod->data_sg = NULL;
-	fod->data_sg_cnt = 0;
-}
+	nvmet_req_free_sgl(&fod->req);
+}
 
 static bool
 queue_90percent_full(struct nvmet_fc_tgt_queue *q, u32 sqhd)
@@ -1871,7 +1860,7 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
 	fcpreq->fcp_error = 0;
 	fcpreq->rsplen = 0;
 
-	fcpreq->sg = &fod->data_sg[fod->offset / PAGE_SIZE];
+	fcpreq->sg = &fod->req.sg[fod->offset / PAGE_SIZE];
 	fcpreq->sg_cnt = DIV_ROUND_UP(tlen, PAGE_SIZE);
 
 	/*
@@ -2083,7 +2072,7 @@ __nvmet_fc_fcp_nvme_cmd_done(struct nvmet_fc_tgtport *tgtport,
 	 * There may be a status where data still was intended to
 	 * be moved
 	 */
-	if ((fod->io_dir == NVMET_FCP_READ) && (fod->data_sg_cnt)) {
+	if ((fod->io_dir == NVMET_FCP_READ) && (fod->req.sg_cnt)) {
 		/* push the data over before sending rsp */
 		nvmet_fc_transfer_fcp_data(tgtport, fod,
 						NVMET_FCOP_READDATA);
@@ -2153,9 +2142,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 	/* clear any response payload */
 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
 
-	fod->data_sg = NULL;
-	fod->data_sg_cnt = 0;
-
 	ret = nvmet_req_init(&fod->req,
 				&fod->queue->nvme_cq,
 				&fod->queue->nvme_sq,
@@ -2178,8 +2164,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 			return;
 		}
 	}
-	fod->req.sg = fod->data_sg;
-	fod->req.sg_cnt = fod->data_sg_cnt;
 	fod->offset = 0;
 
 	if (fod->io_dir == NVMET_FCP_WRITE) {
-- 
2.11.0
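
For readers not following the rest of the series, here is a minimal sketch
of the alloc/free pairing this patch moves to. nvmet_req_alloc_sgl() and
nvmet_req_free_sgl() are the helpers referenced above, with the signatures
visible in the hunks; the function names example_alloc()/example_free() and
the behaviour noted in the comments are illustrative assumptions, not code
from fc.c:

    /*
     * Sketch only: how a target driver pairs the common nvmet SGL
     * helpers after this patch. Assumes nvmet_req_alloc_sgl() sizes
     * the SGL from req->transfer_len and fills req->sg/req->sg_cnt
     * on success, returning a negative value on failure.
     */
    static int example_alloc(struct nvmet_req *req, struct nvmet_sq *sq)
    {
    	if (nvmet_req_alloc_sgl(req, sq) < 0)
    		return NVME_SC_INTERNAL;	/* allocation failed */
    
    	/* req->sg and req->sg_cnt now describe the data buffer */
    	return 0;
    }
    
    static void example_free(struct nvmet_req *req)
    {
    	if (!req->sg || !req->sg_cnt)	/* nothing was allocated */
    		return;
    
    	nvmet_req_free_sgl(req);	/* releases the SGL allocated above */
    }

Since struct nvmet_req already carries req.sg/req.sg_cnt, keeping a second
data_sg/data_sg_cnt copy in struct nvmet_fc_fcp_iod only duplicated state;
dropping it accounts for the net 16 lines removed in the diffstat.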