Date: Mon, 20 Jul 2020 16:16:06 +0200
From: Christoph Hellwig
To: Logan Gunthorpe
Cc: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
    Christoph Hellwig, Sagi Grimberg, Keith Busch, Jens Axboe,
    Chaitanya Kulkarni, Max Gurtovoy, Stephen Bates
Subject: Re: [PATCH v15 7/9] nvmet-passthru: Add passthru code to process commands
Message-ID: <20200720141606.GF4627@lst.de>
References: <20200716203319.16022-1-logang@deltatee.com>
 <20200716203319.16022-8-logang@deltatee.com>
In-Reply-To: <20200716203319.16022-8-logang@deltatee.com>

On Thu, Jul 16, 2020 at 02:33:17PM -0600, Logan Gunthorpe wrote:
> Add passthru command handling capability for the NVMeOF target and
> export passthru APIs which are used to integrate passthru
> code with nvmet-core.
>
> The new file passthru.c handles passthru cmd parsing and execution.
> In the passthru mode, we create a block layer request from the nvmet
> request and map the data on to the block layer request.
>
> Admin commands and features are on a white list as there are a number
> of each that don't make too much sense with passthrough. We use a
> white list so that new commands can be considered before being blindly
> passed through. In both cases, vendor specific commands are always
> allowed.
>
> We also blacklist reservation IO commands as the underlying device
> cannot differentiate between multiple hosts behind a fabric.

I'm still not so happy about having to look up the namespace and still
wonder if we should generalize the connect_q to a passthrough_q.  But I
guess we can do that later and then reduce some of the exports here..

A few more comments below:

> +	struct {
> +		struct request		*rq;
> +		struct work_struct	work;
> +		u16 (*end_req)(struct nvmet_req *req);
> +	} p;

Do we really need the callback for the two identify fixups, or should
we just hardcode them to avoid indirect function calls?

> +++ b/drivers/nvme/target/passthru.c
> @@ -0,0 +1,457 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * NVMe Over Fabrics Target Passthrough command implementation.
> + *
> + * Copyright (c) 2017-2018 Western Digital Corporation or its
> + * affiliates.
> + */

I think you forgot to add your own copyright here.

> +static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
> +{
> +	int sg_cnt = req->sg_cnt;
> +	struct scatterlist *sg;
> +	int op_flags = 0;
> +	struct bio *bio;
> +	int i, ret;
> +
> +	if (req->cmd->common.opcode == nvme_cmd_flush)
> +		op_flags = REQ_FUA;
> +	else if (nvme_is_write(req->cmd))
> +		op_flags = REQ_SYNC | REQ_IDLE;
> +
> +

Double empty line here..
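Going back to the end_req callback above: the two identify fixups could
just be dispatched directly in the completion path instead, e.g.
something like this (untested sketch; the override helper names are
taken from this series):

	if (req->cmd->common.opcode == nvme_admin_identify) {
		switch (req->cmd->identify.cns) {
		case NVME_ID_CNS_CTRL:
			status = nvmet_passthru_override_id_ctrl(req);
			break;
		case NVME_ID_CNS_NS:
			status = nvmet_passthru_override_id_ns(req);
			break;
		}
	}

That drops the function pointer from struct nvmet_req entirely.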
> +	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
> +	bio->bi_end_io = bio_put;
> +
> +	for_each_sg(req->sg, sg, req->sg_cnt, i) {
> +		if (bio_add_page(bio, sg_page(sg), sg->length,
> +				 sg->offset) != sg->length) {

bio_add_page is only for non-passthrough requests, this needs to use
bio_add_pc_page.

> +	if (blk_rq_nr_phys_segments(rq) > queue_max_segments(rq->q)) {
> +		status = NVME_SC_INVALID_FIELD;
> +		goto fail_out;
> +	}
> +
> +	if ((blk_rq_payload_bytes(rq) >> 9) > queue_max_hw_sectors(rq->q)) {
> +		status = NVME_SC_INVALID_FIELD;
> +		goto fail_out;
> +	}

Which should also take care of these checks.

> +static void nvmet_passthru_set_host_behaviour(struct nvmet_req *req)
> +{
> +	struct nvme_ctrl *ctrl = nvmet_req_passthru_ctrl(req);
> +	struct nvme_feat_host_behavior *host;
> +	u16 status;
> +	int ret;
> +
> +	host = kzalloc(sizeof(*host) * 2, GFP_KERNEL);
> +	ret = nvme_get_features(ctrl, NVME_FEAT_HOST_BEHAVIOR, 0,
> +				host, sizeof(*host), NULL);

Missing kzalloc return check.
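i.e. something like this (untested; the error label name is made up,
use whatever cleanup path the function already has):

	host = kzalloc(sizeof(*host) * 2, GFP_KERNEL);
	if (!host) {
		status = NVME_SC_INTERNAL;
		goto out_complete_req;
	}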