Date: Mon, 4 Jul 2022 12:00:42 -0600
From: Mathieu Poirier
To: Arnaud Pouliquen
Cc: Bjorn Andersson, linux-remoteproc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
	Rob Herring, Christoph Hellwig, Stefano Stabellini, Bruce Ashfield
Subject: Re: [PATCH v6 1/4] remoteproc: core: Introduce rproc_rvdev_add_device function
Message-ID: <20220704180042.GD2375750@p14s>
References: <20220603163158.612513-1-arnaud.pouliquen@foss.st.com>
	<20220603163158.612513-2-arnaud.pouliquen@foss.st.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20220603163158.612513-2-arnaud.pouliquen@foss.st.com>
On Fri, Jun 03, 2022 at 06:31:55PM +0200, Arnaud Pouliquen wrote:
> In preparation of the migration of the management of rvdev in
> remoteproc_virtio.c, this patch spins off a new function to manage the
> remoteproc virtio device creation.
> 
> The rproc_rvdev_add_device will be moved to remoteproc_virtio.c.
> 
> The rproc_vdev_data structure is introduced to provide information for
> the rvdev creation. This structure allows to manage the rvdev and vrings
> allocation in the rproc_rvdev_add_device function.
> 
> Signed-off-by: Arnaud Pouliquen
> ---
> Updates vs previous revision (based on Mathieu Poirier's comments):
> - rproc_vdev_data struct change index field type to u32
> - remove rproc_rvdev_remove_device function creation
> ---
>  drivers/remoteproc/remoteproc_core.c     | 145 +++++++++++++----------
>  drivers/remoteproc/remoteproc_internal.h |  15 +++
>  2 files changed, 97 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index c510125769b9..c438c32f7f0d 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -484,74 +484,23 @@ static int copy_dma_range_map(struct device *to, struct device *from)
>  	return 0;
>  }
>  
> -/**
> - * rproc_handle_vdev() - handle a vdev fw resource
> - * @rproc: the remote processor
> - * @ptr: the vring resource descriptor
> - * @offset: offset of the resource entry
> - * @avail: size of available data (for sanity checking the image)
> - *
> - * This resource entry requests the host to statically register a virtio
> - * device (vdev), and setup everything needed to support it. It contains
> - * everything needed to make it possible: the virtio device id, virtio
> - * device features, vrings information, virtio config space, etc...
> - *
> - * Before registering the vdev, the vrings are allocated from non-cacheable
> - * physically contiguous memory. Currently we only support two vrings per
> - * remote processor (temporary limitation). We might also want to consider
> - * doing the vring allocation only later when ->find_vqs() is invoked, and
> - * then release them upon ->del_vqs().
> - *
> - * Note: @da is currently not really handled correctly: we dynamically
> - * allocate it using the DMA API, ignoring requested hard coded addresses,
> - * and we don't take care of any required IOMMU programming. This is all
> - * going to be taken care of when the generic iommu-based DMA API will be
> - * merged. Meanwhile, statically-addressed iommu-based firmware images should
> - * use RSC_DEVMEM resource entries to map their required @da to the physical
> - * address of their base CMA region (ouch, hacky!).
> - *
> - * Return: 0 on success, or an appropriate error code otherwise
> - */
> -static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
> -			     int offset, int avail)
> +static struct rproc_vdev *
> +rproc_rvdev_add_device(struct rproc *rproc, struct rproc_vdev_data *rvdev_data)
>  {
> -	struct fw_rsc_vdev *rsc = ptr;
> -	struct device *dev = &rproc->dev;
>  	struct rproc_vdev *rvdev;
> -	int i, ret;
> +	struct fw_rsc_vdev *rsc = rvdev_data->rsc;
>  	char name[16];
> -
> -	/* make sure resource isn't truncated */
> -	if (struct_size(rsc, vring, rsc->num_of_vrings) + rsc->config_len >
> -	    avail) {
> -		dev_err(dev, "vdev rsc is truncated\n");
> -		return -EINVAL;
> -	}
> -
> -	/* make sure reserved bytes are zeroes */
> -	if (rsc->reserved[0] || rsc->reserved[1]) {
> -		dev_err(dev, "vdev rsc has non zero reserved bytes\n");
> -		return -EINVAL;
> -	}
> -
> -	dev_dbg(dev, "vdev rsc: id %d, dfeatures 0x%x, cfg len %d, %d vrings\n",
> -		rsc->id, rsc->dfeatures, rsc->config_len, rsc->num_of_vrings);
> -
> -	/* we currently support only two vrings per rvdev */
> -	if (rsc->num_of_vrings > ARRAY_SIZE(rvdev->vring)) {
> -		dev_err(dev, "too many vrings: %d\n", rsc->num_of_vrings);
> -		return -EINVAL;
> -	}
> +	int i, ret;
>  
>  	rvdev = kzalloc(sizeof(*rvdev), GFP_KERNEL);
>  	if (!rvdev)
> -		return -ENOMEM;
> +		return ERR_PTR(-ENOMEM);
>  
>  	kref_init(&rvdev->refcount);
>  
> -	rvdev->id = rsc->id;
> +	rvdev->id = rvdev_data->id;
>  	rvdev->rproc = rproc;
> -	rvdev->index = rproc->nb_vdev++;
> +	rvdev->index = rvdev_data->index;
>  
>  	/* Initialise vdev subdevice */
>  	snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
> @@ -563,7 +512,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
>  	ret = device_register(&rvdev->dev);
>  	if (ret) {
>  		put_device(&rvdev->dev);
> -		return ret;
> +		return ERR_PTR(ret);
>  	}
>  
>  	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
> @@ -576,7 +525,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
>  	ret = dma_coerce_mask_and_coherent(&rvdev->dev,
>  					   dma_get_mask(rproc->dev.parent));
>  	if (ret) {
> -		dev_warn(dev,
> +		dev_warn(&rvdev->dev,
>  			 "Failed to set DMA mask %llx. Trying to continue... (%pe)\n",
>  			 dma_get_mask(rproc->dev.parent), ERR_PTR(ret));
>  	}
> @@ -589,7 +538,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
>  	}
>  
>  	/* remember the resource offset*/
> -	rvdev->rsc_offset = offset;
> +	rvdev->rsc_offset = rvdev_data->rsc_offset;
>  
>  	/* allocate the vring resources */
>  	for (i = 0; i < rsc->num_of_vrings; i++) {
> @@ -605,14 +554,14 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
>  
>  	rproc_add_subdev(rproc, &rvdev->subdev);
>  
> -	return 0;
> +	return rvdev;
>  
>  unwind_vring_allocations:
>  	for (i--; i >= 0; i--)
>  		rproc_free_vring(&rvdev->vring[i]);
>  free_rvdev:
>  	device_unregister(&rvdev->dev);
> -	return ret;
> +	return ERR_PTR(ret);
>  }
>  
>  void rproc_vdev_release(struct kref *ref)
> @@ -632,6 +581,76 @@ void rproc_vdev_release(struct kref *ref)
>  	device_unregister(&rvdev->dev);
>  }
>  
> +/**
> + * rproc_handle_vdev() - handle a vdev fw resource
> + * @rproc: the remote processor
> + * @ptr: the vring resource descriptor
> + * @offset: offset of the resource entry
> + * @avail: size of available data (for sanity checking the image)
> + *
> + * This resource entry requests the host to statically register a virtio
> + * device (vdev), and setup everything needed to support it. It contains
> + * everything needed to make it possible: the virtio device id, virtio
> + * device features, vrings information, virtio config space, etc...
> + *
> + * Before registering the vdev, the vrings are allocated from non-cacheable
> + * physically contiguous memory. Currently we only support two vrings per
> + * remote processor (temporary limitation). We might also want to consider
> + * doing the vring allocation only later when ->find_vqs() is invoked, and
> + * then release them upon ->del_vqs().
> + *
> + * Note: @da is currently not really handled correctly: we dynamically
> + * allocate it using the DMA API, ignoring requested hard coded addresses,
> + * and we don't take care of any required IOMMU programming. This is all
> + * going to be taken care of when the generic iommu-based DMA API will be
> + * merged. Meanwhile, statically-addressed iommu-based firmware images should
> + * use RSC_DEVMEM resource entries to map their required @da to the physical
> + * address of their base CMA region (ouch, hacky!).
> + *
> + * Return: 0 on success, or an appropriate error code otherwise
> + */
> +static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
> +			     int offset, int avail)
> +{
> +	struct fw_rsc_vdev *rsc = ptr;
> +	struct device *dev = &rproc->dev;
> +	struct rproc_vdev *rvdev;
> +	struct rproc_vdev_data rvdev_data;
> +
> +	/* make sure resource isn't truncated */
> +	if (struct_size(rsc, vring, rsc->num_of_vrings) + rsc->config_len >
> +	    avail) {
> +		dev_err(dev, "vdev rsc is truncated\n");
> +		return -EINVAL;
> +	}
> +
> +	/* make sure reserved bytes are zeroes */
> +	if (rsc->reserved[0] || rsc->reserved[1]) {
> +		dev_err(dev, "vdev rsc has non zero reserved bytes\n");
> +		return -EINVAL;
> +	}
> +
> +	dev_dbg(dev, "vdev rsc: id %d, dfeatures 0x%x, cfg len %d, %d vrings\n",
> +		rsc->id, rsc->dfeatures, rsc->config_len, rsc->num_of_vrings);
> +
> +	/* we currently support only two vrings per rvdev */
> +	if (rsc->num_of_vrings > ARRAY_SIZE(rvdev->vring)) {
> +		dev_err(dev, "too many vrings: %d\n", rsc->num_of_vrings);
> +		return -EINVAL;
> +	}
> +
> +	rvdev_data.id = rsc->id;
> +	rvdev_data.index = rproc->nb_vdev++;
> +	rvdev_data.rsc_offset = offset;
> +	rvdev_data.rsc = rsc;
> +
> +	rvdev = rproc_rvdev_add_device(rproc, &rvdev_data);
> +	if (IS_ERR(rvdev))
> +		return PTR_ERR(rvdev);
> +
> +	return 0;
> +}
> +
>  /**
>   * rproc_handle_trace() - handle a shared trace buffer resource
>   * @rproc: the remote processor
> diff --git a/drivers/remoteproc/remoteproc_internal.h b/drivers/remoteproc/remoteproc_internal.h
> index 72d4d3d7d94d..caa1ef91be14 100644
> --- a/drivers/remoteproc/remoteproc_internal.h
> +++ b/drivers/remoteproc/remoteproc_internal.h
> @@ -24,6 +24,21 @@ struct rproc_debug_trace {
>  	struct rproc_mem_entry trace_mem;
>  };
>  
> +/**
> + * struct rproc_vdev_data - remoteproc virtio device data
> + * @rsc_offset: offset of the vdev's resource entry
> + * @id: virtio device id (as in virtio_ids.h)
> + * @index: vdev position versus other vdev declared in resource table
> + * @rsc: pointer to the vdev resource entry. Valid onlyduring vdev init as the resource can

s/onlyduring/only during

More comments coming tomorrow.

> + *	 be cached by rproc.
> + */
> +struct rproc_vdev_data {
> +	u32 rsc_offset;
> +	unsigned int id;
> +	u32 index;
> +	struct fw_rsc_vdev *rsc;
> +};
> +
>  /* from remoteproc_core.c */
>  void rproc_release(struct kref *kref);
>  irqreturn_t rproc_vq_interrupt(struct rproc *rproc, int vq_id);
> -- 
> 2.24.3
> 
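[Editor's note, not part of the original mail] The central change in this patch is that rproc_rvdev_add_device() returns a `struct rproc_vdev *` with the errno encoded in the pointer via ERR_PTR(), instead of an int; the caller then tests with IS_ERR() and decodes with PTR_ERR(). The sketch below is a minimal userspace re-implementation of that idiom (the real helpers live in the kernel's include/linux/err.h; `struct rvdev` and `rvdev_create()` here are hypothetical stand-ins, not the driver's actual code):

```c
#include <errno.h>
#include <stdlib.h>

/* Simplified versions of the helpers from include/linux/err.h:
 * errno values occupy the top 4095 addresses of the pointer space,
 * so a single return value can carry either a pointer or an error. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical allocator mirroring the patch's contract: it returns
 * either a valid object or an ERR_PTR()-encoded errno, never NULL. */
struct rvdev {
	int id;
};

static struct rvdev *rvdev_create(int simulate_failure)
{
	struct rvdev *rvdev;

	if (simulate_failure)
		return ERR_PTR(-ENOMEM);

	rvdev = calloc(1, sizeof(*rvdev));
	if (!rvdev)
		return ERR_PTR(-ENOMEM);

	return rvdev;
}
```

A caller follows the same pattern as the new rproc_handle_vdev(): `rvdev = rvdev_create(0); if (IS_ERR(rvdev)) return PTR_ERR(rvdev);`. The benefit, and presumably the reason for the change here, is that the constructor can hand back the object it created while still propagating a precise error code.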