From: Logan Gunthorpe
To: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-block@vger.kernel.org
Cc: Stephen Bates, Christoph Hellwig, Keith Busch, Sagi Grimberg,
	Bjorn Helgaas, Jason Gunthorpe, Max Gurtovoy, Dan Williams,
	Jérôme Glisse, Benjamin Herrenschmidt, Alex Williamson,
	Christian König, Logan Gunthorpe
Date: Thu, 30 Aug 2018 12:53:51 -0600
Message-Id: <20180830185352.3369-13-logang@deltatee.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180830185352.3369-1-logang@deltatee.com>
References: <20180830185352.3369-1-logang@deltatee.com>
Subject: [PATCH v5 12/13] nvmet: Introduce helper functions to allocate and free request SGLs

Add helpers to allocate and free the SGL in a struct nvmet_req:

    int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq)
    void nvmet_req_free_sgl(struct nvmet_req *req)

This will be expanded in a future patch to implement peer-to-peer memory
DMAs and should be common to all target drivers. The presently unused
'sq' argument of the alloc function will be needed to decide whether to
use peer-to-peer memory and, if so, to obtain the correct provider from
which to allocate it.

The new helpers are used in nvmet-rdma. Because req.transfer_len is now
used as the length of the SGL, it is set earlier and cleared on any
error. It also seems unnecessary to accumulate the length, as the
map_sgl functions should only ever be called once per request.
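For illustration, a minimal sketch of how a fabrics target driver might
pair these helpers in its map and release paths. The surrounding function
name and its placement are hypothetical; the real usage for nvmet-rdma is
in the diff below.

    /*
     * Hypothetical transport map path (function name is illustrative
     * only); assumes req->transfer_len has already been set from the
     * command's SGL descriptor.
     */
    static u16 example_map_data(struct nvmet_req *req, struct nvmet_sq *sq)
    {
            int ret;

            /* no data to transfer */
            if (!req->transfer_len)
                    return 0;

            /* fills in req->sg / req->sg_cnt, returns -ENOMEM on failure */
            ret = nvmet_req_alloc_sgl(req, sq);
            if (ret < 0) {
                    req->transfer_len = 0;
                    return NVME_SC_INTERNAL;
            }

            /* ... hand req->sg / req->sg_cnt to the transport's DMA setup ... */
            return 0;
    }

    /* and in the release path, paired with the allocation: */
    nvmet_req_free_sgl(req);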
Signed-off-by: Logan Gunthorpe
Cc: Christoph Hellwig
Cc: Sagi Grimberg
---
 drivers/nvme/target/core.c  | 18 ++++++++++++++++++
 drivers/nvme/target/nvmet.h |  2 ++
 drivers/nvme/target/rdma.c  | 20 ++++++++++++--------
 3 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ebf3e7a6c49e..6a1c8d5f552b 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -725,6 +725,24 @@ void nvmet_req_execute(struct nvmet_req *req)
 }
 EXPORT_SYMBOL_GPL(nvmet_req_execute);
 
+int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq)
+{
+	req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
+	if (!req->sg)
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
+
+void nvmet_req_free_sgl(struct nvmet_req *req)
+{
+	sgl_free(req->sg);
+	req->sg = NULL;
+	req->sg_cnt = 0;
+}
+EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
+
 static inline bool nvmet_cc_en(u32 cc)
 {
 	return (cc >> NVME_CC_EN_SHIFT) & 0x1;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index ec9af4ee03b6..7d6cb61021e4 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -336,6 +336,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 void nvmet_req_uninit(struct nvmet_req *req);
 void nvmet_req_execute(struct nvmet_req *req);
 void nvmet_req_complete(struct nvmet_req *req, u16 status);
+int nvmet_req_alloc_sgl(struct nvmet_req *req, struct nvmet_sq *sq);
+void nvmet_req_free_sgl(struct nvmet_req *req);
 
 void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
 		u16 size);
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 3533e918ea37..e148dee72ba5 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -489,7 +489,7 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
 	}
 
 	if (rsp->req.sg != rsp->cmd->inline_sg)
-		sgl_free(rsp->req.sg);
+		nvmet_req_free_sgl(&rsp->req);
 
 	if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list)))
 		nvmet_rdma_process_wr_wait_list(queue);
@@ -638,24 +638,24 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 {
 	struct rdma_cm_id *cm_id = rsp->queue->cm_id;
 	u64 addr = le64_to_cpu(sgl->addr);
-	u32 len = get_unaligned_le24(sgl->length);
 	u32 key = get_unaligned_le32(sgl->key);
 	int ret;
 
+	rsp->req.transfer_len = get_unaligned_le24(sgl->length);
+
 	/* no data command? */
-	if (!len)
+	if (!rsp->req.transfer_len)
 		return 0;
 
-	rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt);
-	if (!rsp->req.sg)
-		return NVME_SC_INTERNAL;
+	ret = nvmet_req_alloc_sgl(&rsp->req, &rsp->queue->nvme_sq);
+	if (ret < 0)
+		goto error_out;
 
 	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
 			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
 			nvmet_data_dir(&rsp->req));
 	if (ret < 0)
-		return NVME_SC_INTERNAL;
-	rsp->req.transfer_len += len;
+		goto error_out;
 	rsp->n_rdma += ret;
 
 	if (invalidate) {
@@ -664,6 +664,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	}
 
 	return 0;
+
+error_out:
+	rsp->req.transfer_len = 0;
+	return NVME_SC_INTERNAL;
 }
 
 static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
-- 
2.11.0