Date: Wed, 4 Nov 2020 14:29:38 -0700
From: Mathieu Poirier
To: Ben Levinsky
Cc: michael.auchter@ni.com, stefanos@xilinx.com, devicetree@vger.kernel.org,
 linux-remoteproc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v22 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver
Message-ID: <20201104212938.GE2893396@xps15>
References: <20201103182339.14286-1-ben.levinsky@xilinx.com>
 <20201103182339.14286-6-ben.levinsky@xilinx.com>
In-Reply-To: <20201103182339.14286-6-ben.levinsky@xilinx.com>

On Tue, Nov 03, 2020 at 10:23:39AM -0800, Ben Levinsky wrote:
> R5 is included in the Xilinx Zynq UltraScale MPSoC, so by adding this
> remoteproc driver we can boot the R5 sub-system in two different
> configurations -
> * split
> * lock-step
> 
> The Xilinx R5 remoteproc driver boots the R5's via calls to the Xilinx
> Platform Management Unit that handles the R5 configuration, memory access
> and R5 lifecycle management. The interface to this manager is done in this
> driver via zynqmp_pm_* function calls.
> 
> Signed-off-by: Wendy Liang
> Signed-off-by: Michal Simek
> Signed-off-by: Ed Mooring
> Signed-off-by: Jason Wu
> Signed-off-by: Ben Levinsky
> ---
> - update documented param list for zynqmp_r5_probe
> - remove use-after-free lines in zynqmp_r5_probe,
>   zynqmp_r5_remoteproc_probe, and zynqmp_r5_remoteproc_remove
> - update path for error handling in zynqmp_r5_probe and
>   zynqmp_r5_remoteproc_probe
> - as the mbox properties are optional, update zynqmp_r5_rproc_kick
>   so that the mailbox functionality only occurs if these
>   properties are provided
> ---
>  drivers/remoteproc/Kconfig                |   8 +
>  drivers/remoteproc/Makefile               |   1 +
>  drivers/remoteproc/zynqmp_r5_remoteproc.c | 778 ++++++++++++++++++++++
>  3 files changed, 787 insertions(+)
>  create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c

Many of my comments from V19 have not been addressed and as such I will not
move forward with this set. Moreover I will not consider a new revision
before November 18th.
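For readers following along: as the commit message says, the driver itself
does almost no hardware programming; R5 configuration and lifecycle are
delegated to the platform-management firmware through zynqmp_pm_*() calls.
A minimal sketch of that call sequence (the zynqmp_pm_*() functions and enum
values are the ones used in the patch below; the wrapper function itself is
hypothetical and error handling is elided):

	/* Hypothetical helper: configure and wake one R5 core identified
	 * by its power-domain node id, mirroring r5_set_mode() and
	 * zynqmp_r5_rproc_start() from the patch below.
	 */
	static int r5_boot_sketch(u32 pnode_id, enum rpu_oper_mode mode,
				  u64 bootaddr)
	{
		enum rpu_oper_mode cur;

		zynqmp_pm_get_rpu_mode(pnode_id, &cur);
		if (cur != mode)
			/* split or lockstep, from the device tree */
			zynqmp_pm_set_rpu_mode(pnode_id, mode);

		/* lockstep combines the TCM banks, split keeps them apart */
		zynqmp_pm_set_tcm_config(pnode_id,
					 mode == PM_RPU_MODE_LOCKSTEP ?
					 PM_RPU_TCM_COMB : PM_RPU_TCM_SPLIT);

		/* high boot addresses use the high vector, otherwise TCM */
		return zynqmp_pm_request_wake(pnode_id, 1,
					      (bootaddr & 0xF0000000) == 0xF0000000 ?
					      PM_RPU_BOOTMEM_HIVEC :
					      PM_RPU_BOOTMEM_LOVEC,
					      ZYNQMP_PM_REQUEST_ACK_NO);
	}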
> + */
> +struct zynqmp_r5_rproc {
> +	unsigned char rx_mc_buf[RX_MBOX_CLIENT_BUF_MAX];
> +	struct mbox_client tx_mc;
> +	struct mbox_client rx_mc;
> +	struct work_struct mbox_work;
> +	struct sk_buff_head tx_mc_skbs;
> +	struct device *dev;
> +	struct rproc *rproc;
> +	struct mbox_chan *tx_chan;
> +	struct mbox_chan *rx_chan;
> +	u32 pnode_id;
> +	struct list_head elem;
> +};
> +
> +/*
> + * r5_set_mode - set RPU operation mode
> + * @z_rproc: Remote processor private data
> + * @rpu_mode: mode specified by device tree to configure the RPU to
> + *
> + * set RPU operation mode
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int r5_set_mode(struct zynqmp_r5_rproc *z_rproc,
> +		       enum rpu_oper_mode rpu_mode)
> +{
> +	enum rpu_tcm_comb tcm_mode;
> +	enum rpu_oper_mode cur_rpu_mode;
> +	int ret;
> +
> +	ret = zynqmp_pm_get_rpu_mode(z_rproc->pnode_id, &cur_rpu_mode);
> +	if (ret < 0)
> +		return ret;
> +
> +	if (rpu_mode != cur_rpu_mode) {
> +		ret = zynqmp_pm_set_rpu_mode(z_rproc->pnode_id,
> +					     rpu_mode);
> +		if (ret < 0)
> +			return ret;
> +	}
> +
> +	tcm_mode = (rpu_mode == PM_RPU_MODE_LOCKSTEP) ?
> +		   PM_RPU_TCM_COMB : PM_RPU_TCM_SPLIT;
> +	return zynqmp_pm_set_tcm_config(z_rproc->pnode_id, tcm_mode);
> +}
> +
> +/*
> + * release TCM banks when powering down R5 core
> + */
> +static int tcm_mem_release(struct rproc *rproc, struct rproc_mem_entry *mem)
> +{
> +	u32 pnode_id = (u64)mem->priv;
> +
> +	iounmap(mem->va);
> +	return zynqmp_pm_release_node(pnode_id);
> +}
> +
> +/*
> + * given ID corresponding to R5 core in Xilinx Platform management (xpm) API,
> + * try to use xpm wake call to wake R5 core
> + */
> +static int zynqmp_r5_rproc_start(struct rproc *rproc)
> +{
> +	struct zynqmp_r5_rproc *z_rproc = rproc->priv;
> +	enum rpu_boot_mem bootmem;
> +
> +	bootmem = (rproc->bootaddr & 0xF0000000) == 0xF0000000 ?
> +		  PM_RPU_BOOTMEM_HIVEC : PM_RPU_BOOTMEM_LOVEC;
> +
> +	dev_dbg(rproc->dev.parent, "RPU boot from %s.",
"OCM" : "TCM"); > + > + return zynqmp_pm_request_wake(z_rproc->pnode_id, 1, > + bootmem, ZYNQMP_PM_REQUEST_ACK_NO); > +} > + > +/* > + * given ID corresponding to R5 core in Xilinx Platform management (xpm) API, > + * try to use xpm power down call to power off R5 core > + */ > +static int zynqmp_r5_rproc_stop(struct rproc *rproc) > +{ > + struct zynqmp_r5_rproc *z_rproc = rproc->priv; > + > + return zynqmp_pm_force_pwrdwn(z_rproc->pnode_id, > + ZYNQMP_PM_REQUEST_ACK_BLOCKING); > +} > + > +/* > + * map in physical addr for DDR mem carveout in rproc > + */ > +static int zynqmp_r5_rproc_mem_alloc(struct rproc *rproc, > + struct rproc_mem_entry *mem) > +{ > + void *va; > + > + va = ioremap_wc(mem->dma, mem->len); > + if (IS_ERR_OR_NULL(va)) > + return -ENOMEM; > + > + /* Update memory entry va */ > + mem->va = va; > + > + return 0; > +} > + > +/* unmap rproc_mem_entry virtual addr */ > +static int zynqmp_r5_rproc_mem_release(struct rproc *rproc, > + struct rproc_mem_entry *mem) > +{ > + iounmap(mem->va); > + return 0; > +} > + > +/* construct rproc mem carveouts for DDR regions specified in device tree */ > +static int parse_mem_regions(struct rproc *rproc) > +{ > + int num_mems, i; > + struct zynqmp_r5_rproc *z_rproc = rproc->priv; > + struct device *dev = &rproc->dev; > + struct device_node *np = z_rproc->dev->of_node; > + struct rproc_mem_entry *mem; > + > + num_mems = of_count_phandle_with_args(np, DDR_LIST_PROP, NULL); > + if (num_mems <= 0) > + return 0; > + > + for (i = 0; i < num_mems; i++) { > + struct device_node *node; > + struct reserved_mem *rmem; > + > + node = of_parse_phandle(np, DDR_LIST_PROP, i); > + if (!node) > + return -EINVAL; > + > + rmem = of_reserved_mem_lookup(node); > + if (!rmem) > + return -EINVAL; > + > + if (strstr(node->name, "vdev0vring")) { > + int vring_id; > + char name[16]; > + > + /* > + * expecting form of "rpuXvdev0vringX as documented > + * in xilinx remoteproc device tree binding > + */ > + if (strlen(node->name) < 15) { > + dev_err(dev, "%pOF is less than 14 chars", > + node); > + return -EINVAL; > + } > + > + /* > + * can be 1 of multiple vring IDs per IPC channel > + * e.g. 
> +/* construct rproc mem carveouts for DDR regions specified in device tree */
> +static int parse_mem_regions(struct rproc *rproc)
> +{
> +	int num_mems, i;
> +	struct zynqmp_r5_rproc *z_rproc = rproc->priv;
> +	struct device *dev = &rproc->dev;
> +	struct device_node *np = z_rproc->dev->of_node;
> +	struct rproc_mem_entry *mem;
> +
> +	num_mems = of_count_phandle_with_args(np, DDR_LIST_PROP, NULL);
> +	if (num_mems <= 0)
> +		return 0;
> +
> +	for (i = 0; i < num_mems; i++) {
> +		struct device_node *node;
> +		struct reserved_mem *rmem;
> +
> +		node = of_parse_phandle(np, DDR_LIST_PROP, i);
> +		if (!node)
> +			return -EINVAL;
> +
> +		rmem = of_reserved_mem_lookup(node);
> +		if (!rmem)
> +			return -EINVAL;
> +
> +		if (strstr(node->name, "vdev0vring")) {
> +			int vring_id;
> +			char name[16];
> +
> +			/*
> +			 * expecting form of "rpuXvdev0vringX" as documented
> +			 * in xilinx remoteproc device tree binding
> +			 */
> +			if (strlen(node->name) < 15) {
> +				dev_err(dev, "%pOF is less than 15 chars",
> +					node);
> +				return -EINVAL;
> +			}
> +
> +			/*
> +			 * can be 1 of multiple vring IDs per IPC channel,
> +			 * e.g. 'vdev0vring0' and 'vdev0vring1'
> +			 */
> +			vring_id = node->name[14] - '0';
> +			snprintf(name, sizeof(name), "vdev0vring%d", vring_id);
> +			/* Register vring */
> +			mem = rproc_mem_entry_init(dev, NULL,
> +						   (dma_addr_t)rmem->base,
> +						   rmem->size, rmem->base,
> +						   zynqmp_r5_rproc_mem_alloc,
> +						   zynqmp_r5_rproc_mem_release,
> +						   name);
> +		} else {
> +			/* Register DMA region */
> +			int (*alloc)(struct rproc *r,
> +				     struct rproc_mem_entry *rme);
> +			int (*release)(struct rproc *r,
> +				       struct rproc_mem_entry *rme);
> +			char name[20];
> +
> +			if (strstr(node->name, "vdev0buffer")) {
> +				alloc = NULL;
> +				release = NULL;
> +				strcpy(name, "vdev0buffer");
> +			} else {
> +				alloc = zynqmp_r5_rproc_mem_alloc;
> +				release = zynqmp_r5_rproc_mem_release;
> +				strcpy(name, node->name);
> +			}
> +
> +			mem = rproc_mem_entry_init(dev, NULL,
> +						   (dma_addr_t)rmem->base,
> +						   rmem->size, rmem->base,
> +						   alloc, release, name);
> +		}
> +		if (!mem)
> +			return -ENOMEM;
> +
> +		rproc_add_carveout(rproc, mem);
> +	}
> +
> +	return 0;
> +}
> +
> +/* call Xilinx Platform manager to request access to TCM bank */
> +static int zynqmp_r5_pm_request_tcm(struct device_node *tcm_node,
> +				    struct device *dev, u32 *pnode_id)
> +{
> +	int ret;
> +
> +	ret = of_property_read_u32(tcm_node, "pnode-id", pnode_id);
> +	if (ret)
> +		return ret;
> +
> +	return zynqmp_pm_request_node(*pnode_id, ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> +				      ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> +}
> +
> +/*
> + * Given TCM bank entry,
> + * this callback will set device address for R5 running on TCM
> + * and also setup virtual address for TCM bank remoteproc carveout
> + */
> +static int tcm_mem_alloc(struct rproc *rproc,
> +			 struct rproc_mem_entry *mem)
> +{
> +	void *va;
> +	struct device *dev = rproc->dev.parent;
> +
> +	va = ioremap_wc(mem->dma, mem->len);
> +	if (IS_ERR_OR_NULL(va))
> +		return -ENOMEM;
> +
> +	/* Update memory entry va */
> +	mem->va = va;
> +
> +	va = devm_ioremap_wc(dev, mem->da, mem->len);
> +	if (!va)
> +		return -ENOMEM;
> +	/* As R5 is 32 bit, wipe out extra high bits */
> +	mem->da &= 0x000fffff;
> +	/*
> +	 * TCM Banks 0A and 0B (0xffe00000 and 0xffe20000)
> +	 * are handled by the above line of code, so nothing
> +	 * more is needed for these 2 banks
> +	 */
> +
> +	/*
> +	 * TCM Banks 1A and 1B (0xffe90000 and 0xffeb0000) still
> +	 * need to be translated to 0x0 and 0x20000
> +	 */
> +	if (mem->da == 0x90000 || mem->da == 0xB0000)
> +		mem->da -= 0x90000;
> +
> +	/* if translated TCM bank address is not valid, report error */
> +	if (mem->da != 0x0 && mem->da != 0x20000) {
> +		dev_err(dev, "invalid TCM bank address: %x\n", mem->da);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
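The two-step device-address fixup in tcm_mem_alloc() above is easier to see
with concrete values: each TCM bank has a global address (in the 0xffexxxxx
range) but must be handed to the ELF loader at the R5 core's own view of TCM
(0x0 or 0x20000). A standalone sketch of the same arithmetic (illustration
only, not driver code):

	#include <assert.h>

	/* Mirrors the translation in tcm_mem_alloc() above. */
	static unsigned int tcm_global_to_r5(unsigned int da)
	{
		da &= 0x000fffff;	/* drop the 0xffexxxxx high bits */
		if (da == 0x90000 || da == 0xb0000)
			da -= 0x90000;	/* banks 1A/1B -> 0x0 and 0x20000 */
		return da;
	}

	int main(void)
	{
		assert(tcm_global_to_r5(0xffe00000) == 0x00000); /* bank 0A */
		assert(tcm_global_to_r5(0xffe20000) == 0x20000); /* bank 0B */
		assert(tcm_global_to_r5(0xffe90000) == 0x00000); /* bank 1A */
		assert(tcm_global_to_r5(0xffeb0000) == 0x20000); /* bank 1B */
		return 0;
	}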
> +/*
> + * Given R5 node in remoteproc instance,
> + * allocate remoteproc carveout for TCM memory
> + * needed for firmware to be loaded
> + */
> +static int parse_tcm_banks(struct rproc *rproc)
> +{
> +	int i, num_banks;
> +	struct zynqmp_r5_rproc *z_rproc = rproc->priv;
> +	struct device *dev = &rproc->dev;
> +	struct device_node *r5_node = z_rproc->dev->of_node;
> +
> +	/* go through TCM banks for r5 node */
> +	num_banks = of_count_phandle_with_args(r5_node, BANK_LIST_PROP, NULL);
> +	if (num_banks <= 0) {
> +		dev_err(dev, "need to specify TCM banks\n");
> +		return -EINVAL;
> +	}
> +	for (i = 0; i < num_banks; i++) {
> +		struct resource rsc;
> +		resource_size_t size;
> +		struct device_node *dt_node;
> +		struct rproc_mem_entry *mem;
> +		int ret;
> +		u32 pnode_id; /* zynqmp_pm* fn's expect u32 */
> +
> +		dt_node = of_parse_phandle(r5_node, BANK_LIST_PROP, i);
> +		if (!dt_node)
> +			return -EINVAL;
> +
> +		if (of_device_is_available(dt_node)) {
> +			ret = of_address_to_resource(dt_node, 0, &rsc);
> +			if (ret < 0)
> +				return ret;
> +
> +			ret = zynqmp_r5_pm_request_tcm(dt_node, dev, &pnode_id);
> +			if (ret < 0)
> +				return ret;
> +
> +			/* add carveout */
> +			size = resource_size(&rsc);
> +			mem = rproc_mem_entry_init(dev, NULL, rsc.start,
> +						   (int)size, rsc.start,
> +						   tcm_mem_alloc,
> +						   tcm_mem_release,
> +						   rsc.name);
> +			if (!mem)
> +				return -ENOMEM;
> +
> +			mem->priv = (void *)(u64)pnode_id;
> +			rproc_add_carveout(rproc, mem);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * when loading firmware, load in needed DDR, TCM memory regions and wire
> + * these into remoteproc core's carveouts
> + */
> +static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> +{
> +	int ret;
> +
> +	ret = parse_tcm_banks(rproc);
> +	if (ret)
> +		return ret;
> +
> +	ret = parse_mem_regions(rproc);
> +	if (ret)
> +		return ret;
> +
> +	ret = rproc_elf_load_rsc_table(rproc, fw);
> +	if (ret == -EINVAL) {
> +		/*
> +		 * resource table is only required for IPC.
> +		 * if not present, this is not necessarily an error;
> +		 * for example, when loading an R5 hello world application,
> +		 * simply inform the user and keep going.
> +		 */
> +		dev_info(&rproc->dev, "no resource table found.\n");
> +		ret = 0;
> +	}
> +	return ret;
> +}
> +
> +/* kick a firmware */
> +static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
> +{
> +	struct sk_buff *skb;
> +	unsigned int skb_len;
> +	struct zynqmp_ipi_message *mb_msg;
> +	int ret;
> +
> +	struct device *dev = rproc->dev.parent;
> +	struct zynqmp_r5_rproc *z_rproc = rproc->priv;
> +
> +	if (of_property_read_bool(dev->of_node, "mboxes")) {
> +		skb_len = (unsigned int)(sizeof(vqid) + sizeof(mb_msg));
> +		skb = alloc_skb(skb_len, GFP_ATOMIC);
> +		if (!skb)
> +			return;
> +
> +		mb_msg = (struct zynqmp_ipi_message *)skb_put(skb, skb_len);
> +		mb_msg->len = sizeof(vqid);
> +		memcpy(mb_msg->data, &vqid, sizeof(vqid));
> +
> +		skb_queue_tail(&z_rproc->tx_mc_skbs, skb);
> +		ret = mbox_send_message(z_rproc->tx_chan, mb_msg);
> +		if (ret < 0) {
> +			dev_warn(dev, "Failed to kick remote.\n");
> +			skb_dequeue_tail(&z_rproc->tx_mc_skbs);
> +			kfree_skb(skb);
> +		}
> +	} else {
> +		(void)skb;
> +		(void)skb_len;
> +		(void)mb_msg;
> +		(void)ret;
> +		(void)vqid;
> +	}
> +}
> +
> +static struct rproc_ops zynqmp_r5_rproc_ops = {
> +	.start		= zynqmp_r5_rproc_start,
> +	.stop		= zynqmp_r5_rproc_stop,
> +	.load		= rproc_elf_load_segments,
> +	.parse_fw	= zynqmp_r5_parse_fw,
> +	.find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
> +	.sanity_check	= rproc_elf_sanity_check,
> +	.get_boot_addr	= rproc_elf_get_boot_addr,
> +	.kick		= zynqmp_r5_rproc_kick,
> +};
> +
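For clarity on the kick path above: the virtqueue index is the entire mailbox
payload, wrapped in the IPI framing of a length word followed by data bytes.
A standalone sketch of that framing (the struct here is a hypothetical
stand-in for struct zynqmp_ipi_message using fixed-width types; illustration
only):

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct ipi_msg_sketch {		/* stand-in for zynqmp_ipi_message */
		uint32_t len;		/* payload length in bytes */
		uint8_t data[];		/* payload: here, the 4-byte vqid */
	};

	int main(void)
	{
		int vqid = 0;		/* virtqueue to kick */
		struct ipi_msg_sketch *msg;

		msg = malloc(sizeof(*msg) + sizeof(vqid));
		if (!msg)
			return 1;

		msg->len = sizeof(vqid);
		memcpy(msg->data, &vqid, sizeof(vqid));
		printf("len=%u vqid=%d\n", (unsigned int)msg->len, vqid);
		free(msg);
		return 0;
	}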
> +/**
> + * event_notified_idr_cb() - event notified idr callback
> + * @id: idr id
> + * @ptr: pointer to idr private data
> + * @data: data passed to idr_for_each callback
> + *
> + * Pass notification to remoteproc virtio
> + *
> + * Return: 0. having a return is to satisfy the idr_for_each() function
> + * pointer input argument requirement.
> + **/
> +static int event_notified_idr_cb(int id, void *ptr, void *data)
> +{
> +	struct rproc *rproc = data;
> +
> +	(void)rproc_vq_interrupt(rproc, id);
> +	return 0;
> +}
> +
> +/**
> + * handle_event_notified() - remoteproc notification work function
> + * @work: pointer to the work structure
> + *
> + * It checks each registered remoteproc notify ID.
> + */
> +static void handle_event_notified(struct work_struct *work)
> +{
> +	struct rproc *rproc;
> +	struct zynqmp_r5_rproc *z_rproc;
> +
> +	z_rproc = container_of(work, struct zynqmp_r5_rproc, mbox_work);
> +
> +	(void)mbox_send_message(z_rproc->rx_chan, NULL);
> +	rproc = z_rproc->rproc;
> +	/*
> +	 * We only use IPI for interrupt. The firmware side may or may
> +	 * not write the notifyid when it triggers the IPI.
> +	 * And thus, we scan through all the registered notifyids.
> +	 */
> +	idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
> +}
> +
> +/**
> + * zynqmp_r5_mb_rx_cb() - Receive channel mailbox callback
> + * @cl: mailbox client
> + * @mssg: message pointer
> + *
> + * It will schedule the R5 notification work.
> + */
> +static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *mssg)
> +{
> +	struct zynqmp_r5_rproc *z_rproc;
> +
> +	z_rproc = container_of(cl, struct zynqmp_r5_rproc, rx_mc);
> +	if (mssg) {
> +		struct zynqmp_ipi_message *ipi_msg, *buf_msg;
> +		size_t len;
> +
> +		ipi_msg = (struct zynqmp_ipi_message *)mssg;
> +		buf_msg = (struct zynqmp_ipi_message *)z_rproc->rx_mc_buf;
> +		len = (ipi_msg->len >= IPI_BUF_LEN_MAX) ?
> +		      IPI_BUF_LEN_MAX : ipi_msg->len;
> +		buf_msg->len = len;
> +		memcpy(buf_msg->data, ipi_msg->data, len);
> +	}
> +	schedule_work(&z_rproc->mbox_work);
> +}
> +
> +/**
> + * zynqmp_r5_mb_tx_done() - Request has been sent to the remote
> + * @cl: mailbox client
> + * @mssg: pointer to the message which has been sent
> + * @r: status of last TX - OK or error
> + *
> + * It will be called by the mailbox framework when the last TX has completed.
> + */
> +static void zynqmp_r5_mb_tx_done(struct mbox_client *cl, void *mssg, int r)
> +{
> +	struct zynqmp_r5_rproc *z_rproc;
> +	struct sk_buff *skb;
> +
> +	if (!mssg)
> +		return;
> +	z_rproc = container_of(cl, struct zynqmp_r5_rproc, tx_mc);
> +	skb = skb_dequeue(&z_rproc->tx_mc_skbs);
> +	kfree_skb(skb);
> +}
> +
> +/**
> + * zynqmp_r5_setup_mbox() - Setup mailboxes
> + * this is used for each individual R5 core
> + *
> + * @z_rproc: pointer to the ZynqMP R5 processor platform data
> + * @node: pointer of the device node
> + *
> + * Function to setup mailboxes to talk to RPU.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_setup_mbox(struct zynqmp_r5_rproc *z_rproc,
> +				struct device_node *node)
> +{
> +	struct mbox_client *mclient;
> +
> +	/* Setup TX mailbox channel client */
> +	mclient = &z_rproc->tx_mc;
> +	mclient->rx_callback = NULL;
> +	mclient->tx_block = false;
> +	mclient->knows_txdone = false;
> +	mclient->tx_done = zynqmp_r5_mb_tx_done;
> +	mclient->dev = z_rproc->dev;
> +
> +	/* Setup RX mailbox channel client */
> +	mclient = &z_rproc->rx_mc;
> +	mclient->dev = z_rproc->dev;
> +	mclient->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mclient->tx_block = false;
> +	mclient->knows_txdone = false;
> +
> +	INIT_WORK(&z_rproc->mbox_work, handle_event_notified);
> +
> +	/* Request TX and RX channels */
> +	z_rproc->tx_chan = mbox_request_channel_byname(&z_rproc->tx_mc, "tx");
> +	if (IS_ERR(z_rproc->tx_chan)) {
> +		dev_err(z_rproc->dev, "failed to request mbox tx channel.\n");
> +		z_rproc->tx_chan = NULL;
> +		return -EINVAL;
> +	}
> +
> +	z_rproc->rx_chan = mbox_request_channel_byname(&z_rproc->rx_mc, "rx");
> +	if (IS_ERR(z_rproc->rx_chan)) {
> +		dev_err(z_rproc->dev, "failed to request mbox rx channel.\n");
> +		z_rproc->rx_chan = NULL;
> +		return -EINVAL;
> +	}
> +	skb_queue_head_init(&z_rproc->tx_mc_skbs);
> +
> +	return 0;
> +}
> +
> +/**
> + * zynqmp_r5_probe() - Probes ZynqMP R5 processor device node
> + * this is called for each individual R5 core to
> + * set up mailbox, Xilinx platform manager unique ID,
> + * add to rproc core
> + *
> + * @pdev: domain platform device for current R5 core
> + * @node: pointer of the device node for current R5 core
> + * @rpu_mode: mode to configure RPU, split or lockstep
> + * @core: Xilinx specific remoteproc structure used later to link
> + *        in to cluster of cores
> + *
> + * Function to retrieve the information of the ZynqMP R5 device node.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_probe(struct platform_device *pdev,
> +			   struct device_node *node,
> +			   enum rpu_oper_mode rpu_mode,
> +			   struct zynqmp_r5_rproc **core)
> +{
> +	int ret;
> +	struct device *dev = &pdev->dev;
> +	struct rproc *rproc_ptr;
> +	struct zynqmp_r5_rproc *z_rproc;
> +
> +	/* Allocate remoteproc instance */
> +	rproc_ptr = devm_rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops,
> +				     NULL, sizeof(struct zynqmp_r5_rproc));
> +	if (!rproc_ptr)
> +		return -ENOMEM;
> +	rproc_ptr->auto_boot = false;
> +	z_rproc = rproc_ptr->priv;
> +	z_rproc->rproc = rproc_ptr;
> +	z_rproc->dev = dev;
> +	/* Set up DMA mask */
> +	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> +	if (ret)
> +		goto error;
> +	/* Get R5 power domain node */
> +	ret = of_property_read_u32(node, "pnode-id", &z_rproc->pnode_id);
> +	if (ret)
> +		goto error;
> +
> +	ret = r5_set_mode(z_rproc, rpu_mode);
> +	if (ret)
> +		goto error;
> +
> +	if (of_property_read_bool(node, "mboxes")) {
> +		ret = zynqmp_r5_setup_mbox(z_rproc, node);
> +		if (ret)
> +			goto error;
> +	}
> +	/* Add R5 remoteproc */
> +	ret = devm_rproc_add(dev, rproc_ptr);
> +	if (ret)
> +		goto error;
> +	*core = z_rproc;
> +
> +error:
> +	return ret;
> +}
> +
> +/*
> + * called when driver is probed; for each R5 core specified in DT,
> + * set up as needed to do remoteproc-related operations
> + */
> +static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
> +{
> +	int ret, i;
> +	struct device *dev = &pdev->dev;
> +	struct device_node *nc;
> +	enum rpu_oper_mode rpu_mode;
> +	struct list_head *cluster; /* list to track each core's rproc */
> +	struct zynqmp_r5_rproc *z_rproc;
> +	struct platform_device *child_pdev;
> +	struct list_head *pos;
> +
> +	rpu_mode = of_property_read_bool(dev->of_node, "lockstep-mode") ?
> +		   PM_RPU_MODE_LOCKSTEP : PM_RPU_MODE_SPLIT;
> +	dev_dbg(dev, "RPU configuration: %s\n",
> +		rpu_mode == PM_RPU_MODE_LOCKSTEP ? "lockstep" : "split");
> +
> +	/*
> +	 * if 2 RPUs are provided but one is lockstep, then we have an
> +	 * invalid configuration.
> +	 */
> +	i = of_get_available_child_count(dev->of_node);
> +	if ((rpu_mode == PM_RPU_MODE_LOCKSTEP && i != 1) || i > MAX_RPROCS)
> +		return -EINVAL;
> +
> +	cluster = devm_kzalloc(dev, sizeof(*cluster), GFP_KERNEL);
> +	if (!cluster)
> +		return -ENOMEM;
> +	INIT_LIST_HEAD(cluster);
> +
> +	ret = devm_of_platform_populate(dev);
> +	if (ret) {
> +		dev_err(dev, "devm_of_platform_populate failed, ret = %d\n",
> +			ret);
> +		return ret;
> +	}
> +
> +	/* probe each individual r5 core's remoteproc-related info */
> +	i = 0;
> +	for_each_available_child_of_node(dev->of_node, nc) {
> +		child_pdev = of_find_device_by_node(nc);
> +		if (!child_pdev) {
> +			dev_err(dev, "could not get R5 core platform device\n");
> +			ret = -ENODEV;
> +			goto out;
> +		}
> +		ret = zynqmp_r5_probe(child_pdev, nc, rpu_mode, &z_rproc);
> +		dev_dbg(dev, "%s to probe rpu %pOF\n",
"Failed" : "Able", > + nc); > + if (!z_rproc) > + ret = -EINVAL; > + if (ret) > + goto out; > + > + put_device(&child_pdev->dev); > + list_add_tail(&z_rproc->elem, cluster); > + i++; > + } > + /* wire in so each core can be cleaned up at drive remove */ > + platform_set_drvdata(pdev, cluster); > + return 0; > +out: > + /* undo core0 upon any failures on core1 in split-mode */ > + if (rpu_mode == PM_RPU_MODE_SPLIT && i == 1 && ret != 0) { > + list_for_each(pos, cluster) { > + z_rproc = list_entry(pos, struct zynqmp_r5_rproc, elem); > + if (of_property_read_bool(z_rproc->dev->of_node, "mboxes")) { > + mbox_free_channel(z_rproc->tx_chan); > + mbox_free_channel(z_rproc->rx_chan); > + } > + } > + } > + return ret; > +} > + > +/* > + * for each core, clean up the following: > + * single rproc entry > + * mailbox tx, rx > + */ > +static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev) > +{ > + struct list_head *pos, *cluster = (struct list_head *) > + platform_get_drvdata(pdev); > + struct zynqmp_r5_rproc *z_rproc = NULL; > + > + list_for_each(pos, cluster) { > + z_rproc = list_entry(pos, struct zynqmp_r5_rproc, elem); > + > + mbox_free_channel(z_rproc->tx_chan); > + mbox_free_channel(z_rproc->rx_chan); > + } > + return 0; > +} > + > +/* Match table for OF platform binding */ > +static const struct of_device_id zynqmp_r5_remoteproc_match[] = { > + { .compatible = "xlnx,zynqmp-r5-remoteproc", }, > + { /* end of list */ }, > +}; > +MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match); > + > +static struct platform_driver zynqmp_r5_remoteproc_driver = { > + .probe = zynqmp_r5_remoteproc_probe, > + .remove = zynqmp_r5_remoteproc_remove, > + .driver = { > + .name = "zynqmp_r5_remoteproc", > + .of_match_table = zynqmp_r5_remoteproc_match, > + }, > +}; > +module_platform_driver(zynqmp_r5_remoteproc_driver); > + > +MODULE_AUTHOR("Ben Levinsky "); > +MODULE_LICENSE("GPL v2"); > -- > 2.17.1 >