From: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
To: Bjorn Andersson, Mathieu Poirier
Cc: Rob Herring, Christoph Hellwig, Stefano Stabellini, Bruce Ashfield, linux-kernel@vger.kernel.org
Subject: [PATCH v6 1/4] remoteproc: core: Introduce rproc_rvdev_add_device function
Date: Fri, 3 Jun 2022 18:31:55 +0200
Message-ID: <20220603163158.612513-2-arnaud.pouliquen@foss.st.com>
X-Mailer: git-send-email 2.24.3
In-Reply-To: <20220603163158.612513-1-arnaud.pouliquen@foss.st.com>
References: <20220603163158.612513-1-arnaud.pouliquen@foss.st.com>

In preparation for migrating the rvdev management to remoteproc_virtio.c, spin off a new function dedicated to the creation of the remoteproc virtio device. This rproc_rvdev_add_device() function will then be moved to remoteproc_virtio.c.

The rproc_vdev_data structure is introduced to pass the information needed for the rvdev creation. It allows rproc_rvdev_add_device() to handle the rvdev and vring allocations itself.

Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
---
Updates vs previous revision (based on Mathieu Poirier's comments):
- rproc_vdev_data struct: change the "index" field type to u32
- drop the creation of the rproc_rvdev_remove_device function
---
 drivers/remoteproc/remoteproc_core.c     | 145 +++++++++++++----------
 drivers/remoteproc/remoteproc_internal.h |  15 +++
 2 files changed, 97 insertions(+), 63 deletions(-)
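
Note for reviewers (not part of the patch): a minimal caller-side sketch of how the new helper is intended to be used. It simply mirrors the rvdev_data setup performed by the reworked rproc_handle_vdev() in the diff below; the wrapper name example_handle_vdev() is hypothetical, all other identifiers come from this patch.

static int example_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc,
			       int offset)
{
	struct rproc_vdev_data rvdev_data;
	struct rproc_vdev *rvdev;

	/* Gather everything rproc_rvdev_add_device() needs into one struct. */
	rvdev_data.id = rsc->id;		/* virtio device id */
	rvdev_data.index = rproc->nb_vdev++;	/* position among the vdevs */
	rvdev_data.rsc_offset = offset;		/* offset of the resource entry */
	rvdev_data.rsc = rsc;			/* only valid during init */

	/* The helper returns the rvdev, or an ERR_PTR() value on failure. */
	rvdev = rproc_rvdev_add_device(rproc, &rvdev_data);
	if (IS_ERR(rvdev))
		return PTR_ERR(rvdev);

	return 0;
}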

diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index c510125769b9..c438c32f7f0d 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -484,74 +484,23 @@ static int copy_dma_range_map(struct device *to, struct device *from)
 	return 0;
 }
 
-/**
- * rproc_handle_vdev() - handle a vdev fw resource
- * @rproc: the remote processor
- * @ptr: the vring resource descriptor
- * @offset: offset of the resource entry
- * @avail: size of available data (for sanity checking the image)
- *
- * This resource entry requests the host to statically register a virtio
- * device (vdev), and setup everything needed to support it. It contains
- * everything needed to make it possible: the virtio device id, virtio
- * device features, vrings information, virtio config space, etc...
- *
- * Before registering the vdev, the vrings are allocated from non-cacheable
- * physically contiguous memory. Currently we only support two vrings per
- * remote processor (temporary limitation). We might also want to consider
- * doing the vring allocation only later when ->find_vqs() is invoked, and
- * then release them upon ->del_vqs().
- *
- * Note: @da is currently not really handled correctly: we dynamically
- * allocate it using the DMA API, ignoring requested hard coded addresses,
- * and we don't take care of any required IOMMU programming. This is all
- * going to be taken care of when the generic iommu-based DMA API will be
- * merged. Meanwhile, statically-addressed iommu-based firmware images should
- * use RSC_DEVMEM resource entries to map their required @da to the physical
- * address of their base CMA region (ouch, hacky!).
- *
- * Return: 0 on success, or an appropriate error code otherwise
- */
-static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
-			     int offset, int avail)
+static struct rproc_vdev *
+rproc_rvdev_add_device(struct rproc *rproc, struct rproc_vdev_data *rvdev_data)
 {
-	struct fw_rsc_vdev *rsc = ptr;
-	struct device *dev = &rproc->dev;
 	struct rproc_vdev *rvdev;
-	int i, ret;
+	struct fw_rsc_vdev *rsc = rvdev_data->rsc;
 	char name[16];
-
-	/* make sure resource isn't truncated */
-	if (struct_size(rsc, vring, rsc->num_of_vrings) + rsc->config_len >
-	    avail) {
-		dev_err(dev, "vdev rsc is truncated\n");
-		return -EINVAL;
-	}
-
-	/* make sure reserved bytes are zeroes */
-	if (rsc->reserved[0] || rsc->reserved[1]) {
-		dev_err(dev, "vdev rsc has non zero reserved bytes\n");
-		return -EINVAL;
-	}
-
-	dev_dbg(dev, "vdev rsc: id %d, dfeatures 0x%x, cfg len %d, %d vrings\n",
-		rsc->id, rsc->dfeatures, rsc->config_len, rsc->num_of_vrings);
-
-	/* we currently support only two vrings per rvdev */
-	if (rsc->num_of_vrings > ARRAY_SIZE(rvdev->vring)) {
-		dev_err(dev, "too many vrings: %d\n", rsc->num_of_vrings);
-		return -EINVAL;
-	}
+	int i, ret;
 
 	rvdev = kzalloc(sizeof(*rvdev), GFP_KERNEL);
 	if (!rvdev)
-		return -ENOMEM;
+		return ERR_PTR(-ENOMEM);
 
 	kref_init(&rvdev->refcount);
 
-	rvdev->id = rsc->id;
+	rvdev->id = rvdev_data->id;
 	rvdev->rproc = rproc;
-	rvdev->index = rproc->nb_vdev++;
+	rvdev->index = rvdev_data->index;
 
 	/* Initialise vdev subdevice */
 	snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
@@ -563,7 +512,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
 	ret = device_register(&rvdev->dev);
 	if (ret) {
 		put_device(&rvdev->dev);
-		return ret;
+		return ERR_PTR(ret);
 	}
 
 	ret = copy_dma_range_map(&rvdev->dev, rproc->dev.parent);
@@ -576,7 +525,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
 	ret = dma_coerce_mask_and_coherent(&rvdev->dev,
 					   dma_get_mask(rproc->dev.parent));
 	if (ret) {
-		dev_warn(dev,
+		dev_warn(&rvdev->dev,
 			 "Failed to set DMA mask %llx. Trying to continue... (%pe)\n",
 			 dma_get_mask(rproc->dev.parent), ERR_PTR(ret));
 	}
@@ -589,7 +538,7 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
 	}
 
 	/* remember the resource offset*/
-	rvdev->rsc_offset = offset;
+	rvdev->rsc_offset = rvdev_data->rsc_offset;
 
 	/* allocate the vring resources */
 	for (i = 0; i < rsc->num_of_vrings; i++) {
@@ -605,14 +554,14 @@ static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
 
 	rproc_add_subdev(rproc, &rvdev->subdev);
 
-	return 0;
+	return rvdev;
 
 unwind_vring_allocations:
 	for (i--; i >= 0; i--)
 		rproc_free_vring(&rvdev->vring[i]);
 free_rvdev:
 	device_unregister(&rvdev->dev);
-	return ret;
+	return ERR_PTR(ret);
 }
 
 void rproc_vdev_release(struct kref *ref)
@@ -632,6 +581,76 @@ void rproc_vdev_release(struct kref *ref)
 	device_unregister(&rvdev->dev);
 }
 
+/**
+ * rproc_handle_vdev() - handle a vdev fw resource
+ * @rproc: the remote processor
+ * @ptr: the vring resource descriptor
+ * @offset: offset of the resource entry
+ * @avail: size of available data (for sanity checking the image)
+ *
+ * This resource entry requests the host to statically register a virtio
+ * device (vdev), and setup everything needed to support it. It contains
+ * everything needed to make it possible: the virtio device id, virtio
+ * device features, vrings information, virtio config space, etc...
+ *
+ * Before registering the vdev, the vrings are allocated from non-cacheable
+ * physically contiguous memory. Currently we only support two vrings per
+ * remote processor (temporary limitation). We might also want to consider
+ * doing the vring allocation only later when ->find_vqs() is invoked, and
+ * then release them upon ->del_vqs().
+ *
+ * Note: @da is currently not really handled correctly: we dynamically
+ * allocate it using the DMA API, ignoring requested hard coded addresses,
+ * and we don't take care of any required IOMMU programming. This is all
+ * going to be taken care of when the generic iommu-based DMA API will be
+ * merged. Meanwhile, statically-addressed iommu-based firmware images should
+ * use RSC_DEVMEM resource entries to map their required @da to the physical
+ * address of their base CMA region (ouch, hacky!).
+ *
+ * Return: 0 on success, or an appropriate error code otherwise
+ */
+static int rproc_handle_vdev(struct rproc *rproc, void *ptr,
+			     int offset, int avail)
+{
+	struct fw_rsc_vdev *rsc = ptr;
+	struct device *dev = &rproc->dev;
+	struct rproc_vdev *rvdev;
+	struct rproc_vdev_data rvdev_data;
+
+	/* make sure resource isn't truncated */
+	if (struct_size(rsc, vring, rsc->num_of_vrings) + rsc->config_len >
+	    avail) {
+		dev_err(dev, "vdev rsc is truncated\n");
+		return -EINVAL;
+	}
+
+	/* make sure reserved bytes are zeroes */
+	if (rsc->reserved[0] || rsc->reserved[1]) {
+		dev_err(dev, "vdev rsc has non zero reserved bytes\n");
+		return -EINVAL;
+	}
+
+	dev_dbg(dev, "vdev rsc: id %d, dfeatures 0x%x, cfg len %d, %d vrings\n",
+		rsc->id, rsc->dfeatures, rsc->config_len, rsc->num_of_vrings);
+
+	/* we currently support only two vrings per rvdev */
+	if (rsc->num_of_vrings > ARRAY_SIZE(rvdev->vring)) {
+		dev_err(dev, "too many vrings: %d\n", rsc->num_of_vrings);
+		return -EINVAL;
+	}
+
+	rvdev_data.id = rsc->id;
+	rvdev_data.index = rproc->nb_vdev++;
+	rvdev_data.rsc_offset = offset;
+	rvdev_data.rsc = rsc;
+
+	rvdev = rproc_rvdev_add_device(rproc, &rvdev_data);
+	if (IS_ERR(rvdev))
+		return PTR_ERR(rvdev);
+
+	return 0;
+}
+
 /**
  * rproc_handle_trace() - handle a shared trace buffer resource
  * @rproc: the remote processor
diff --git a/drivers/remoteproc/remoteproc_internal.h b/drivers/remoteproc/remoteproc_internal.h
index 72d4d3d7d94d..caa1ef91be14 100644
--- a/drivers/remoteproc/remoteproc_internal.h
+++ b/drivers/remoteproc/remoteproc_internal.h
@@ -24,6 +24,21 @@ struct rproc_debug_trace {
 	struct rproc_mem_entry trace_mem;
 };
 
+/**
+ * struct rproc_vdev_data - remoteproc virtio device data
+ * @rsc_offset: offset of the vdev's resource entry
+ * @id: virtio device id (as in virtio_ids.h)
+ * @index: vdev position versus other vdev declared in resource table
+ * @rsc: pointer to the vdev resource entry. Valid only during vdev init as the resource can
+ *       be cached by rproc.
+ */
+struct rproc_vdev_data {
+	u32 rsc_offset;
+	unsigned int id;
+	u32 index;
+	struct fw_rsc_vdev *rsc;
+};
+
 /* from remoteproc_core.c */
 void rproc_release(struct kref *kref);
 irqreturn_t rproc_vq_interrupt(struct rproc *rproc, int vq_id);
-- 
2.24.3