2020-04-24 17:38:39

by Ben Levinsky

Subject: [PATCH v4 0/5] remoteproc: Add zynqmp_r5 driver

Provide a basic driver to control the Arm R5 co-processor found on
the Xilinx Zynq UltraScale+ MPSoC and Versal SoCs.

Currently it is able to start and stop the processor and to load
an ELF image onto it.

The driver was tested on Xilinx ZynqMP and Versal.

Changes since v1:
- remove domain struct as per review from Mathieu
Changes since v2:
- add xilinx-related platform mgmt fn's instead of wrapping around
function pointer in xilinx eemi ops struct
- update zynqmp_r5 yaml parsing to not raise warnings for extra
information in children of R5 node. The warning "node has a unit
name, but no reg or ranges property" will still be raised though
as this particular node is needed to describe the
'#address-cells' and '#size-cells' information.
changes since v3:
- add default values for enums
- fix formatting as per checkpatch.pl --strict. Note that 1 warning and 1 check
are still raised, as fixing each of them would push the affected line over
80 characters.
- remove warning '/example-0/rpu@ff9a0000/r5@0:
node has a unit name, but no reg or ranges property'
by adding reg to r5 node.


Ben Levinsky (5):
firmware: xilinx: Add ZynqMP firmware ioctl enums for RPU
configuration.
firmware: xilinx: Add shutdown/wakeup APIs
firmware: xilinx: Add RPU configuration APIs
dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc
bindings
remoteproc: Add initial zynqmp R5 remoteproc driver

.../remoteproc/xilinx,zynqmp-r5-remoteproc.yaml | 126 +++
drivers/firmware/xilinx/zynqmp.c | 134 +++
drivers/remoteproc/Kconfig | 10 +
drivers/remoteproc/Makefile | 1 +
drivers/remoteproc/zynqmp_r5_remoteproc.c | 902 +++++++++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 75 ++
6 files changed, 1248 insertions(+)
create mode 100644 Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c

--
2.7.4


2020-04-24 17:39:18

by Ben Levinsky

Subject: [PATCH v4 1/5] firmware: xilinx: Add ZynqMP firmware ioctl enums for RPU configuration.

Add ZynqMP firmware ioctl enums for RPU configuration.
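
For illustration only, a minimal sketch of how these enums are expected
to pair up when issued through the PM_IOCTL EEMI call (the
zynqmp_pm_invoke_fn() usage mirrors the wrappers added later in this
series; example_set_lockstep() and the rpu_node_id argument are
placeholders, not part of this patch):

static int example_set_lockstep(u32 rpu_node_id)
{
	u32 out[PAYLOAD_ARG_CNT];

	/* IOCTL_SET_RPU_OPER_MODE takes an enum rpu_oper_mode value */
	return zynqmp_pm_invoke_fn(PM_IOCTL, rpu_node_id,
				   IOCTL_SET_RPU_OPER_MODE,
				   PM_RPU_MODE_LOCKSTEP, 0, out);
}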

Signed-off-by: Ben Levinsky <[email protected]>
---
changes since v3:
- add default values for enums
---
include/linux/firmware/xlnx-zynqmp.h | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index 5968df8..bb347df 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -104,6 +104,10 @@ enum pm_ret_status {
};

enum pm_ioctl_id {
+ IOCTL_GET_RPU_OPER_MODE = 0,
+ IOCTL_SET_RPU_OPER_MODE = 1,
+ IOCTL_RPU_BOOT_ADDR_CONFIG = 2,
+ IOCTL_TCM_COMB_CONFIG = 3,
IOCTL_SD_DLL_RESET = 6,
IOCTL_SET_SD_TAPDELAY,
IOCTL_SET_PLL_FRAC_MODE,
@@ -129,6 +133,21 @@ enum pm_query_id {
PM_QID_CLOCK_GET_MAX_DIVISOR,
};

+enum rpu_oper_mode {
+ PM_RPU_MODE_LOCKSTEP = 0,
+ PM_RPU_MODE_SPLIT = 1,
+};
+
+enum rpu_boot_mem {
+ PM_RPU_BOOTMEM_LOVEC = 0,
+ PM_RPU_BOOTMEM_HIVEC = 1,
+};
+
+enum rpu_tcm_comb {
+ PM_RPU_TCM_SPLIT = 0,
+ PM_RPU_TCM_COMB = 1,
+};
+
enum zynqmp_pm_reset_action {
PM_RESET_ACTION_RELEASE,
PM_RESET_ACTION_ASSERT,
--
2.7.4

2020-04-24 17:39:39

by Ben Levinsky

Subject: [PATCH v4 2/5] firmware: xilinx: Add shutdown/wakeup APIs

Add shutdown/wakeup EEMI operations to shut down or bring up
a resource.
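
As an illustration (a sketch only, not part of this patch; the
rpu_node_id and boot_addr arguments are placeholders), a client such as
a remoteproc driver could use these as follows:

static int example_cycle_rpu(u32 rpu_node_id, u64 boot_addr)
{
	int ret;

	/* Wake the target and have it resume from boot_addr */
	ret = zynqmp_pm_request_wakeup(rpu_node_id, true, boot_addr,
				       ZYNQMP_PM_REQUEST_ACK_NO);
	if (ret < 0)
		return ret;

	/* ... later, forcefully power the target back down */
	return zynqmp_pm_force_powerdown(rpu_node_id,
					 ZYNQMP_PM_REQUEST_ACK_BLOCKING);
}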

Signed-off-by: Ben Levinsky <[email protected]>
---
changes since v2:
- add xilinx-related platform mgmt fn's instead of wrapping around
function pointer in xilinx eemi ops struct

changes since v3:
- fix formatting
---
drivers/firmware/xilinx/zynqmp.c | 35 +++++++++++++++++++++++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 22 ++++++++++++++++++++++
2 files changed, 57 insertions(+)

diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index bfaf29a..16a8d69 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -845,6 +845,41 @@ int zynqmp_pm_release_node(const u32 node)
EXPORT_SYMBOL_GPL(zynqmp_pm_release_node);

/**
+ * zynqmp_pm_force_powerdown - PM call to request for another PU or subsystem to
+ * be powered down forcefully
+ * @target: Node ID of the targeted PU or subsystem
+ * @ack: Flag to specify whether acknowledge is requested
+ *
+ * Return: status, either success or error+reason
+ */
+int zynqmp_pm_force_powerdown(const u32 target,
+ const enum zynqmp_pm_request_ack ack)
+{
+ return zynqmp_pm_invoke_fn(PM_FORCE_POWERDOWN, target, ack, 0, 0, NULL);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_force_powerdown);
+
+/**
+ * zynqmp_pm_request_wakeup - PM call to wake up selected master or subsystem
+ * @node: Node ID of the master or subsystem
+ * @set_addr: Specifies whether the address argument is relevant
+ * @address: Address from which to resume when woken up
+ * @ack: Flag to specify whether acknowledge requested
+ *
+ * Return: status, either success or error+reason
+ */
+int zynqmp_pm_request_wakeup(const u32 node,
+ const bool set_addr,
+ const u64 address,
+ const enum zynqmp_pm_request_ack ack)
+{
+ /* set_addr flag is encoded into 1st bit of address */
+ return zynqmp_pm_invoke_fn(PM_REQUEST_WAKEUP, node, address | set_addr,
+ address >> 32, ack, NULL);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_request_wakeup);
+
+/**
* zynqmp_pm_set_requirement() - PM call to set requirement for PM slaves
* @node: Node ID of the slave
* @capabilities: Requested capabilities of the slave
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index bb347df..4d83a7d 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -64,6 +64,8 @@

enum pm_api_id {
PM_GET_API_VERSION = 1,
+ PM_FORCE_POWERDOWN = 8,
+ PM_REQUEST_WAKEUP = 10,
PM_SYSTEM_SHUTDOWN = 12,
PM_REQUEST_NODE = 13,
PM_RELEASE_NODE,
@@ -376,6 +378,12 @@ int zynqmp_pm_write_pggs(u32 index, u32 value);
int zynqmp_pm_read_pggs(u32 index, u32 *value);
int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
int zynqmp_pm_set_boot_health_status(u32 value);
+int zynqmp_pm_force_powerdown(const u32 target,
+ const enum zynqmp_pm_request_ack ack);
+int zynqmp_pm_request_wakeup(const u32 node,
+ const bool set_addr,
+ const u64 address,
+ const enum zynqmp_pm_request_ack ack);
#else
static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
{
@@ -526,6 +534,20 @@ static inline int zynqmp_pm_set_boot_health_status(u32 value)
{
return -ENODEV;
}
+
+static inline int zynqmp_pm_force_powerdown(const u32 target,
+ const enum zynqmp_pm_request_ack ack)
+{
+ return -ENODEV;
+}
+
+static inline int zynqmp_pm_request_wakeup(const u32 node,
+ const bool set_addr,
+ const u64 address,
+ const enum zynqmp_pm_request_ack ack)
+{
+ return -ENODEV;
+}
#endif

#endif /* __FIRMWARE_ZYNQMP_H__ */
--
2.7.4

2020-04-24 17:39:57

by Ben Levinsky

Subject: [PATCH v4 3/5] firmware: xilinx: Add RPU configuration APIs

Add APIs to configure the RPU operation mode and TCM, and to query
the current power state of a sub-system on the ZynqMP platform.
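
For illustration only (a sketch, not part of this patch; the
rpu_node_id argument is a placeholder), the intended call sequence for
putting an RPU into lockstep mode with combined TCM looks roughly like:

static int example_config_lockstep(u32 rpu_node_id)
{
	u32 out[PAYLOAD_ARG_CNT] = {0};
	int ret;

	ret = zynqmp_pm_get_rpu_mode(rpu_node_id, 0, 0, out);
	if (ret < 0)
		return ret;

	if (out[0] != PM_RPU_MODE_LOCKSTEP) {
		ret = zynqmp_pm_set_rpu_mode(rpu_node_id,
					     PM_RPU_MODE_LOCKSTEP, 0, out);
		if (ret < 0)
			return ret;
	}

	/* Lockstep mode uses the combined TCM configuration */
	return zynqmp_pm_set_tcm_config(rpu_node_id, PM_RPU_TCM_COMB, 0, out);
}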

Signed-off-by: Ben Levinsky <[email protected]>
---
Changes since v2:
- add xilinx-related platform mgmt fn's instead of wrapping around
function pointer in xilinx eemi ops struct
changes since v3:
- fix formatting
---

drivers/firmware/xilinx/zynqmp.c | 99 ++++++++++++++++++++++++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 34 +++++++++++++
2 files changed, 133 insertions(+)

diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index 16a8d69..20c1f58 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -845,6 +845,61 @@ int zynqmp_pm_release_node(const u32 node)
EXPORT_SYMBOL_GPL(zynqmp_pm_release_node);

/**
+ * zynqmp_pm_get_rpu_mode() - Get RPU mode
+ * @node_id: Node ID of the device
+ * @arg1: Argument 1 to requested IOCTL call
+ * @arg2: Argument 2 to requested IOCTL call
+ * @out: Returned output value
+ *
+ * Return: Returns status, either success or error+reason
+ */
+int zynqmp_pm_get_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return zynqmp_pm_invoke_fn(PM_IOCTL, node_id,
+ IOCTL_GET_RPU_OPER_MODE, 0, 0, out);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_get_rpu_mode);
+
+/**
+ * zynqmp_pm_set_rpu_mode() - Set RPU mode
+ * @node_id: Node ID of the device
+ * @arg1: Argument 1 to requested IOCTL call
+ * @arg2: Argument 2 to requested IOCTL call
+ * @out: Returned output value
+ *
+ * This function is used to set RPU mode.
+ *
+ * Return: Returns status, either success or error+reason
+ */
+int zynqmp_pm_set_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return zynqmp_pm_invoke_fn(PM_IOCTL, node_id,
+ IOCTL_SET_RPU_OPER_MODE, 0, 0, out);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_set_rpu_mode);
+
+/**
+ * zynqmp_pm_set_tcm_config - configure TCM
+ * @node_id: Node ID of the device
+ * @arg1: Argument 1 to requested IOCTL call
+ * @arg2: Argument 2 to requested IOCTL call
+ * @out: Returned output value
+ *
+ * This function is used to set the TCM configuration.
+ *
+ * Return: Returns status, either success or error+reason
+ */
+int zynqmp_pm_set_tcm_config(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return zynqmp_pm_invoke_fn(PM_IOCTL, node_id,
+ IOCTL_TCM_COMB_CONFIG, 0, 0, out);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_set_tcm_config);
+
+/**
* zynqmp_pm_force_powerdown - PM call to request for another PU or subsystem to
* be powered down forcefully
* @target: Node ID of the targeted PU or subsystem
@@ -880,6 +935,50 @@ int zynqmp_pm_request_wakeup(const u32 node,
EXPORT_SYMBOL_GPL(zynqmp_pm_request_wakeup);

/**
+ * zynqmp_pm_get_node_status - PM call to request a node's current power state
+ * @node: ID of the component or sub-system in question
+ * @status: Current operating state of the requested node
+ * @requirements: Current requirements asserted on the node,
+ * used for slave nodes only.
+ * @usage: Usage information, used for slave nodes only:
+ * PM_USAGE_NO_MASTER - No master is currently using
+ * the node
+ * PM_USAGE_CURRENT_MASTER - Only requesting master is
+ * currently using the node
+ * PM_USAGE_OTHER_MASTER - Only other masters are
+ * currently using the node
+ * PM_USAGE_BOTH_MASTERS - Both the current and at least
+ * one other master is currently
+ * using the node
+ *
+ * Return: status, either success or error+reason
+ */
+int zynqmp_pm_get_node_status(const u32 node,
+ u32 *status, u32 *requirements,
+ u32 *usage)
+
+{
+ u32 ret_payload[PAYLOAD_ARG_CNT];
+ int ret;
+
+ if (!status)
+ return -EINVAL;
+
+ ret = zynqmp_pm_invoke_fn(PM_GET_NODE_STATUS, node, 0, 0, 0,
+ ret_payload);
+ if (ret_payload[0] == XST_PM_SUCCESS) {
+ *status = ret_payload[1];
+ if (requirements)
+ *requirements = ret_payload[2];
+ if (usage)
+ *usage = ret_payload[3];
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_get_node_status);
+
+/**
* zynqmp_pm_set_requirement() - PM call to set requirement for PM slaves
* @node: Node ID of the slave
* @capabilities: Requested capabilities of the slave
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index 4d83a7d..0caca9e 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -64,6 +64,7 @@

enum pm_api_id {
PM_GET_API_VERSION = 1,
+ PM_GET_NODE_STATUS = 3,
PM_FORCE_POWERDOWN = 8,
PM_REQUEST_WAKEUP = 10,
PM_SYSTEM_SHUTDOWN = 12,
@@ -384,6 +385,14 @@ int zynqmp_pm_request_wakeup(const u32 node,
const bool set_addr,
const u64 address,
const enum zynqmp_pm_request_ack ack);
+int zynqmp_pm_get_node_status(const u32 node, u32 *status,
+ u32 *requirements, u32 *usage);
+int zynqmp_pm_get_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out);
+int zynqmp_pm_set_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out);
+int zynqmp_pm_set_tcm_config(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out);
#else
static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
{
@@ -548,6 +557,31 @@ static inline int zynqmp_pm_request_wakeup(const u32 node,
{
return -ENODEV;
}
+
+static inline int zynqmp_pm_get_node_status(const u32 node,
+ u32 *status, u32 *requirements,
+ u32 *usage)
+{
+ return -ENODEV;
+}
+
+static inline int zynqmp_pm_get_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return -ENODEV;
+}
+
+static inline int zynqmp_pm_set_rpu_mode(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return -ENODEV;
+}
+
+static inline int zynqmp_pm_set_tcm_config(u32 node_id,
+ u32 arg1, u32 arg2, u32 *out)
+{
+ return -ENODEV;
+}
#endif

#endif /* __FIRMWARE_ZYNQMP_H__ */
--
2.7.4

2020-04-24 17:41:55

by Ben Levinsky

Subject: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Add binding for ZynqMP R5 OpenAMP.

Represent the RPU domain resources in one device node. Each RPU
processor is a subnode of the top RPU domain node.

Signed-off-by: Ben Levinsky <[email protected]>
Signed-off-by: Jason Wu <[email protected]>
Signed-off-by: Wendy Liang <[email protected]>
Signed-off-by: Michal Simek <[email protected]>
---
Changes since v2:
- update zynqmp_r5 yaml parsing to not raise warnings for extra
information in children of R5 node. The warning "node has a unit
name, but no reg or ranges property" will still be raised though
as this particular node is needed to describe the
'#address-cells' and '#size-cells' information.
Changes since v3:
- remove warning '/example-0/rpu@ff9a0000/r5@0:
node has a unit name, but no reg or ranges property'
by adding reg to r5 node.
---

.../remoteproc/xilinx,zynqmp-r5-remoteproc.yaml | 127 +++++++++++++++++++++
1 file changed, 127 insertions(+)
create mode 100644 Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml

diff --git a/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
new file mode 100644
index 0000000..41520b6
--- /dev/null
+++ b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
@@ -0,0 +1,127 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Xilinx R5 remote processor controller bindings
+
+description:
+ This document defines the binding for the remoteproc component that loads and
+ boots firmware on the Xilinx ZynqMP and Versal family of SoCs.
+
+maintainers:
+ - Ed Mooring <[email protected]>
+ - Ben Levinsky <[email protected]>
+
+properties:
+ compatible:
+ const: "xlnx,zynqmp-r5-remoteproc-1.0"
+
+ core_conf:
+ description:
+ R5 core configuration (valid string - split or lockstep)
+ maxItems: 1
+
+ interrupts:
+ description:
+ Interrupt mapping for remoteproc IPI. It is required if the
+ user uses the remoteproc driver with the RPMsg kernel driver.
+ maxItems: 6
+
+ memory-region:
+ maxItems: 4
+ minItems: 4
+ pnode-id:
+ maxItems: 1
+ mboxes:
+ maxItems: 2
+ mbox-names:
+ maxItems: 2
+
+ r5@0:
+ type: object
+ required:
+ - '#address-cells'
+ - '#size-cells'
+ - pnode-id
+examples:
+ - |
+ reserved-memory {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
+ no-map;
+ reg = <0x3ed40000 0x4000>;
+ };
+ rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
+ no-map;
+ reg = <0x3ed44000 0x4000>;
+ };
+ rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
+ no-map;
+ reg = <0x3ed48000 0x100000>;
+ };
+ rproc_0_reserved: rproc@3ed00000 {
+ no-map;
+ reg = <0x3ed00000 0x40000>;
+ };
+ };
+ rpu: rpu@ff9a0000 {
+ compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ core_conf = "split";
+ reg = <0xFF9A0000 0x10000>;
+ r5_0: r5@0 {
+ ranges;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0xFF9A0100 0x1000>;
+ memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+ pnode-id = <0x7>;
+ mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+ mbox-names = "tx", "rx";
+ tcm_0_a: tcm_0@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0xFFE00000 0x10000>;
+ pnode-id = <0xf>;
+ };
+ tcm_0_b: tcm_0@1 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ reg = <0xFFE20000 0x10000>;
+ pnode-id = <0x10>;
+ };
+ };
+ };
+
+
+ zynqmp_ipi1 {
+ compatible = "xlnx,zynqmp-ipi-mailbox";
+ interrupt-parent = <&gic>;
+ interrupts = <0 29 4>;
+ xlnx,ipi-id = <7>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ /* APU<->RPU0 IPI mailbox controller */
+ ipi_mailbox_rpu0: mailbox@ff990600 {
+ reg = <0xff990600 0x20>,
+ <0xff990620 0x20>,
+ <0xff9900c0 0x20>,
+ <0xff9900e0 0x20>;
+ reg-names = "local_request_region",
+ "local_response_region",
+ "remote_request_region",
+ "remote_response_region";
+ #mbox-cells = <1>;
+ xlnx,ipi-id = <1>;
+ };
+ };
+
+...
--
2.7.4

2020-04-24 17:42:16

by Ben Levinsky

Subject: [PATCH v4 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

The R5 is included in the Xilinx Zynq UltraScale+ MPSoC, so by adding
this remoteproc driver, we can boot the R5 sub-system in different
configurations.

Acked-by: Stefano Stabellini <[email protected]>
Acked-by: Ben Levinsky <[email protected]>
Reviewed-by: Radhey Shyam Pandey <[email protected]>
Signed-off-by: Ben Levinsky <[email protected]>
Signed-off-by: Wendy Liang <[email protected]>
Signed-off-by: Michal Simek <[email protected]>
Signed-off-by: Ed Mooring <[email protected]>
Signed-off-by: Jason Wu <[email protected]>
Tested-by: Ben Levinsky <[email protected]>
---
Changes since v1:
- remove domain struct as per review from Mathieu
Changes since v2:
- add xilinx-related platform mgmt fn's instead of wrapping around
function pointer in xilinx eemi ops struct
Changes since v3:
- fix formatting from checkpatch.pl --strict
- update commit message

---
drivers/remoteproc/Kconfig | 10 +
drivers/remoteproc/Makefile | 1 +
drivers/remoteproc/zynqmp_r5_remoteproc.c | 902 ++++++++++++++++++++++++++++++
3 files changed, 913 insertions(+)
create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index fbaed07..f094c84 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -222,6 +222,16 @@ config ST_REMOTEPROC
processor framework.
This can be either built-in or a loadable module.

+config ZYNQMP_R5_REMOTEPROC
+ tristate "ZynqMP_r5 remoteproc support"
+ depends on ARM64 && PM && ARCH_ZYNQMP
+ select RPMSG_VIRTIO
+ select MAILBOX
+ select ZYNQMP_IPI_MBOX
+ help
+ Say y here to support ZynqMP R5 remote processors via the remote
+ processor framework.
+
config ST_SLIM_REMOTEPROC
tristate

diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
index 0effd38..806ac3f 100644
--- a/drivers/remoteproc/Makefile
+++ b/drivers/remoteproc/Makefile
@@ -27,5 +27,6 @@ obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
qcom_wcnss_pil-y += qcom_wcnss.o
qcom_wcnss_pil-y += qcom_wcnss_iris.o
obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
+obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC) += zynqmp_r5_remoteproc.o
obj-$(CONFIG_ST_SLIM_REMOTEPROC) += st_slim_rproc.o
obj-$(CONFIG_STM32_RPROC) += stm32_rproc.o
diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c b/drivers/remoteproc/zynqmp_r5_remoteproc.c
new file mode 100644
index 0000000..e3fb4fb
--- /dev/null
+++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
@@ -0,0 +1,902 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Zynq R5 Remote Processor driver
+ *
+ * Copyright (C) 2015 - 2018 Xilinx Inc.
+ * Copyright (C) 2015 Jason Wu <[email protected]>
+ *
+ * Based on the original OMAP and Zynq Remote Processor drivers
+ *
+ * Copyright (C) 2012 Michal Simek <[email protected]>
+ * Copyright (C) 2012 PetaLogix
+ * Copyright (C) 2011 Texas Instruments, Inc.
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/atomic.h>
+#include <linux/cpu.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/firmware/xlnx-zynqmp.h>
+#include <linux/genalloc.h>
+#include <linux/idr.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/mailbox_client.h>
+#include <linux/mailbox/zynqmp-ipi-message.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/pfn.h>
+#include <linux/platform_device.h>
+#include <linux/remoteproc.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/sysfs.h>
+
+#include "remoteproc_internal.h"
+
+#define MAX_RPROCS 2 /* Support up to 2 RPU */
+#define MAX_MEM_PNODES 4 /* Max power nodes for one RPU memory instance */
+
+#define DEFAULT_FIRMWARE_NAME "rproc-rpu-fw"
+
+/* PM proc states */
+#define PM_PROC_STATE_ACTIVE 1U
+
+/* IPI buffer MAX length */
+#define IPI_BUF_LEN_MAX 32U
+/* RX mailbox client buffer max length */
+#define RX_MBOX_CLIENT_BUF_MAX (IPI_BUF_LEN_MAX + \
+ sizeof(struct zynqmp_ipi_message))
+
+static bool autoboot __read_mostly;
+
+/**
+ * struct zynqmp_r5_mem - zynqmp rpu memory data
+ * @pnode_id: TCM power domain ids
+ * @res: memory resource
+ * @node: list node
+ */
+struct zynqmp_r5_mem {
+ u32 pnode_id[MAX_MEM_PNODES];
+ struct resource res;
+ struct list_head node;
+};
+
+/**
+ * struct zynqmp_r5_pdata - zynqmp rpu remote processor private data
+ * @dev: device of RPU instance
+ * @rproc: rproc handle
+ * @pnode_id: RPU CPU power domain id
+ * @mems: memory resources
+ * @is_r5_mode_set: indicate if r5 operation mode is set
+ * @tx_mc: tx mailbox client
+ * @rx_mc: rx mailbox client
+ * @tx_chan: tx mailbox channel
+ * @rx_chan: rx mailbox channel
+ * @workqueue: workqueue for the RPU remoteproc
+ * @tx_mc_skbs: socket buffers for tx mailbox client
+ * @rx_mc_buf: rx mailbox client buffer to save the rx message
+ */
+struct zynqmp_r5_pdata {
+ struct device dev;
+ struct rproc *rproc;
+ u32 pnode_id;
+ struct list_head mems;
+ bool is_r5_mode_set;
+ struct mbox_client tx_mc;
+ struct mbox_client rx_mc;
+ struct mbox_chan *tx_chan;
+ struct mbox_chan *rx_chan;
+ struct work_struct workqueue;
+ struct sk_buff_head tx_mc_skbs;
+ unsigned char rx_mc_buf[RX_MBOX_CLIENT_BUF_MAX];
+};
+
+/**
+ * table of RPUs
+ */
+struct zynqmp_r5_pdata rpus[MAX_RPROCS];
+/**
+ * RPU core configuration
+ */
+enum rpu_oper_mode rpu_mode;
+
+/*
+ * r5_set_mode - set RPU operation mode
+ * @pdata: Remote processor private data
+ *
+ * set RPU operation mode
+ *
+ * Return: 0 for success, negative value for failure
+ */
+static int r5_set_mode(struct zynqmp_r5_pdata *pdata)
+{
+ u32 val[PAYLOAD_ARG_CNT] = {0}, expect;
+ struct device *dev = &pdata->dev;
+ int ret;
+
+ if (pdata->is_r5_mode_set)
+ return 0;
+ expect = (u32)rpu_mode;
+ ret = zynqmp_pm_get_rpu_mode(pdata->pnode_id, 0, 0, val);
+ if (ret < 0) {
+ dev_err(dev, "failed to get RPU oper mode.\n");
+ return ret;
+ }
+ if (val[0] == expect) {
+ dev_dbg(dev, "RPU mode matches: %x\n", val[0]);
+ } else {
+ ret = zynqmp_pm_set_rpu_mode(pdata->pnode_id,
+ expect, 0, val);
+ if (ret < 0) {
+ dev_err(dev,
+ "failed to set RPU oper mode.\n");
+ return ret;
+ }
+ }
+ if (expect == (u32)PM_RPU_MODE_LOCKSTEP)
+ expect = (u32)PM_RPU_TCM_COMB;
+ else
+ expect = (u32)PM_RPU_TCM_SPLIT;
+ ret = zynqmp_pm_set_tcm_config(pdata->pnode_id,
+ expect, 0, val);
+ if (ret < 0) {
+ dev_err(dev, "failed to config TCM to %x.\n",
+ expect);
+ return ret;
+ }
+ pdata->is_r5_mode_set = true;
+ return 0;
+}
+
+/**
+ * r5_is_running - check if r5 is running
+ * @pdata: Remote processor private data
+ *
+ * check if R5 is running
+ *
+ * Return: true if r5 is running, false otherwise
+ */
+static bool r5_is_running(struct zynqmp_r5_pdata *pdata)
+{
+ u32 status, requirements, usage;
+ struct device *dev = &pdata->dev;
+
+ if (zynqmp_pm_get_node_status(pdata->pnode_id,
+ &status, &requirements, &usage)) {
+ dev_err(dev, "Failed to get RPU node %d status.\n",
+ pdata->pnode_id);
+ return false;
+ } else if (status != PM_PROC_STATE_ACTIVE) {
+ dev_dbg(dev, "RPU is not running.\n");
+ return false;
+ }
+
+ dev_dbg(dev, "RPU is running.\n");
+ return true;
+}
+
+/*
+ * ZynqMP R5 remoteproc memory release function
+ */
+static int zynqmp_r5_mem_release(struct rproc *rproc,
+ struct rproc_mem_entry *mem)
+{
+ struct zynqmp_r5_mem *priv;
+ int i, ret;
+ struct device *dev = &rproc->dev;
+
+ priv = mem->priv;
+ if (!priv)
+ return 0;
+ for (i = 0; i < MAX_MEM_PNODES; i++) {
+ if (priv->pnode_id[i]) {
+ dev_dbg(dev, "%s, pnode %d\n",
+ __func__, priv->pnode_id[i]);
+ ret = zynqmp_pm_release_node(priv->pnode_id[i]);
+ if (ret < 0) {
+ dev_err(dev,
+ "failed to release power node: %u\n",
+ priv->pnode_id[i]);
+ return ret;
+ }
+ } else {
+ break;
+ }
+ }
+ return 0;
+}
+
+/*
+ * ZynqMP R5 remoteproc operations
+ */
+static int zynqmp_r5_rproc_start(struct rproc *rproc)
+{
+ struct device *dev = rproc->dev.parent;
+ struct zynqmp_r5_pdata *local = rproc->priv;
+ enum rpu_boot_mem bootmem;
+ int ret;
+ /* Set up R5 */
+ ret = r5_set_mode(local);
+ if (ret) {
+ dev_err(dev, "failed to set R5 operation mode.\n");
+ return ret;
+ }
+ if ((rproc->bootaddr & 0xF0000000) == 0xF0000000)
+ bootmem = PM_RPU_BOOTMEM_HIVEC;
+ else
+ bootmem = PM_RPU_BOOTMEM_LOVEC;
+ dev_info(dev, "RPU boot from %s.",
+ bootmem == PM_RPU_BOOTMEM_HIVEC ? "OCM" : "TCM");
+ ret = zynqmp_pm_request_wakeup(local->pnode_id, 1,
+ bootmem, ZYNQMP_PM_REQUEST_ACK_NO);
+ if (ret < 0) {
+ dev_err(dev, "failed to boot R5.\n");
+ return ret;
+ }
+ return 0;
+}
+
+static int zynqmp_r5_rproc_stop(struct rproc *rproc)
+{
+ struct zynqmp_r5_pdata *local = rproc->priv;
+ int ret;
+
+ ret = zynqmp_pm_force_powerdown(local->pnode_id,
+ ZYNQMP_PM_REQUEST_ACK_BLOCKING);
+ if (ret < 0) {
+ dev_err(&local->dev, "failed to shutdown R5.\n");
+ return ret;
+ }
+ local->is_r5_mode_set = false;
+ return 0;
+}
+
+static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
+{
+ int num_mems, i, ret;
+ struct zynqmp_r5_pdata *pdata = rproc->priv;
+ struct device *dev = &pdata->dev;
+ struct device_node *np = dev->of_node;
+ struct rproc_mem_entry *mem;
+ struct device_node *child;
+ struct resource rsc;
+
+ num_mems = of_count_phandle_with_args(np, "memory-region", NULL);
+ if (num_mems <= 0)
+ return 0;
+
+ for (i = 0; i < num_mems; i++) {
+ struct device_node *node;
+ struct zynqmp_r5_mem *zynqmp_mem;
+ struct reserved_mem *rmem;
+
+ node = of_parse_phandle(np, "memory-region", i);
+ rmem = of_reserved_mem_lookup(node);
+ if (!rmem) {
+ dev_err(dev, "unable to acquire memory-region\n");
+ return -EINVAL;
+ }
+ if (strstr(node->name, "vdev") &&
+ strstr(node->name, "buffer")) {
+ int id;
+ char name[16];
+
+ id = node->name[8] - '0';
+ snprintf(name, sizeof(name), "vdev%dbuffer", id);
+ /* Register DMA region */
+ mem = rproc_mem_entry_init(dev, NULL,
+ (dma_addr_t)rmem->base,
+ rmem->size, rmem->base,
+ NULL, NULL,
+ name);
+ if (!mem) {
+ dev_err(dev, "unable to initialize memory-region %s\n",
+ node->name);
+ return -ENOMEM;
+ }
+ dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
+ mem->dma);
+ rproc_add_carveout(rproc, mem);
+ continue;
+ } else if (strstr(node->name, "vdev") &&
+ strstr(node->name, "vring")) {
+ int id, vring_id;
+ char name[16];
+
+ id = node->name[8] - '0';
+ vring_id = node->name[14] - '0';
+ snprintf(name, sizeof(name), "vdev%dvring%d", id,
+ vring_id);
+ /* Register vring */
+ mem = rproc_mem_entry_init(dev, NULL,
+ (dma_addr_t)rmem->base,
+ rmem->size, rmem->base,
+ NULL, NULL,
+ name);
+ if (!mem) {
+ dev_err(dev, "unable to initialize memory-region %s\n",
+ node->name);
+ return -ENOMEM;
+ }
+ mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
+ if (!mem->va)
+ return -ENOMEM;
+ dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
+ mem->dma);
+ rproc_add_carveout(rproc, mem);
+ continue;
+ } else {
+ mem = rproc_of_resm_mem_entry_init(dev, i,
+ rmem->size,
+ rmem->base,
+ node->name);
+ if (!mem) {
+ dev_err(dev, "unable to initialize memory-region %s\n",
+ node->name);
+ return -ENOMEM;
+ }
+ mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
+ if (!mem->va)
+ return -ENOMEM;
+
+ rproc_add_carveout(rproc, mem);
+ }
+ if (!mem)
+ return -ENOMEM;
+
+ /*
+ * It is non-DMA memory, used for firmware loading.
+ * It will be added to the R5 remoteproc mappings later.
+ */
+ zynqmp_mem = devm_kzalloc(dev, sizeof(*zynqmp_mem), GFP_KERNEL);
+ if (!zynqmp_mem)
+ return -ENOMEM;
+ ret = of_address_to_resource(node, 0, &zynqmp_mem->res);
+ if (ret) {
+ dev_err(dev, "unable to resolve memory region.\n");
+ return ret;
+ }
+ list_add_tail(&zynqmp_mem->node, &pdata->mems);
+ dev_dbg(dev, "%s, non-dma mem %s\n",
+ __func__, of_node_full_name(node));
+ }
+
+ /* map TCM memories */
+ for_each_available_child_of_node(np, child) {
+ struct property *prop;
+ const __be32 *cur;
+ u32 pnode_id;
+ void *va;
+ dma_addr_t dma;
+ resource_size_t size;
+
+ ret = of_address_to_resource(child, 0, &rsc);
+
+ i = 0;
+ of_property_for_each_u32(child, "pnode-id", prop, cur,
+ pnode_id) {
+ ret = zynqmp_pm_request_node(pnode_id,
+ ZYNQMP_PM_CAPABILITY_ACCESS, 0,
+ ZYNQMP_PM_REQUEST_ACK_BLOCKING);
+ if (ret < 0) {
+ dev_err(dev, "failed to request power node: %u\n",
+ pnode_id);
+ return ret;
+ }
+ ret = r5_set_mode(pdata);
+ if (ret < 0) {
+ dev_err(dev, "failed to set R5 operation mode.\n");
+ return ret;
+ }
+ }
+ size = resource_size(&rsc);
+
+ va = devm_ioremap_wc(dev, rsc.start, size);
+ if (!va)
+ return -ENOMEM;
+
+ /* zero out tcm base address */
+ if (rsc.start & 0xffe00000) {
+ rsc.start &= 0x000fffff;
+ /* handle TCM banks 1 a and b
+ * (0xffe90000 and 0xffeb0000)
+ */
+ if (rsc.start & 0x80000)
+ rsc.start -= 0x90000;
+ }
+
+ dma = (dma_addr_t)rsc.start;
+ mem = rproc_mem_entry_init(dev, va, dma, (int)size, rsc.start,
+ NULL, zynqmp_r5_mem_release,
+ rsc.name);
+ if (!mem)
+ return -ENOMEM;
+
+ rproc_add_carveout(rproc, mem);
+ }
+
+ ret = rproc_elf_load_rsc_table(rproc, fw);
+ if (ret == -EINVAL)
+ ret = 0;
+ return ret;
+}
+
+/* kick a firmware */
+static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
+{
+ struct device *dev = rproc->dev.parent;
+ struct zynqmp_r5_pdata *local = rproc->priv;
+
+ dev_dbg(dev, "KICK Firmware to start send messages vqid %d\n", vqid);
+
+ if (vqid < 0) {
+ /* If vqid is negative, do not pass it to the
+ * mailbox, as vqid is supposed to be 0 or positive.
+ * This also gives a way to just kick the remote
+ * without using the IPI buffer. A proper way to pass
+ * a short message would be better, but needs to be
+ * synced upstream first; for now, a negative vqid
+ * means no message is passed via the IPI buffer and
+ * only the interrupt is raised. This is faster as
+ * no message has to be copied into the IPI buffer.
+ *
+ * The return value is ignored, as a failure only
+ * means kicks are already pending in the mailbox
+ * queue.
+ */
+ (void)mbox_send_message(local->tx_chan, NULL);
+ } else {
+ struct sk_buff *skb;
+ unsigned int skb_len;
+ struct zynqmp_ipi_message *mb_msg;
+ int ret;
+
+ skb_len = (unsigned int)(sizeof(vqid) + sizeof(mb_msg));
+ skb = alloc_skb(skb_len, GFP_ATOMIC);
+ if (!skb) {
+ dev_err(dev,
+ "Failed to allocate skb to kick remote.\n");
+ return;
+ }
+ mb_msg = (struct zynqmp_ipi_message *)skb_put(skb, skb_len);
+ mb_msg->len = sizeof(vqid);
+ memcpy(mb_msg->data, &vqid, sizeof(vqid));
+ skb_queue_tail(&local->tx_mc_skbs, skb);
+ ret = mbox_send_message(local->tx_chan, mb_msg);
+ if (ret < 0) {
+ dev_warn(dev, "Failed to kick remote.\n");
+ skb_dequeue_tail(&local->tx_mc_skbs);
+ kfree_skb(skb);
+ }
+ }
+}
+
+static struct rproc_ops zynqmp_r5_rproc_ops = {
+ .start = zynqmp_r5_rproc_start,
+ .stop = zynqmp_r5_rproc_stop,
+ .load = rproc_elf_load_segments,
+ .parse_fw = zynqmp_r5_parse_fw,
+ .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
+ .sanity_check = rproc_elf_sanity_check,
+ .get_boot_addr = rproc_elf_get_boot_addr,
+ .kick = zynqmp_r5_rproc_kick,
+};
+
+/* zynqmp_r5_mem_probe() - probes RPU TCM memory device
+ * @pdata: pointer to the RPU remoteproc private data
+ * @node: pointer to the memory node
+ *
+ * Function to retrieve memories resources for RPU TCM memory device.
+ */
+static int zynqmp_r5_mem_probe(struct zynqmp_r5_pdata *pdata,
+ struct device_node *node)
+{
+ struct device *dev;
+ struct zynqmp_r5_mem *mem;
+ int ret;
+
+ dev = &pdata->dev;
+ mem = devm_kzalloc(dev, sizeof(*mem), GFP_KERNEL);
+ if (!mem)
+ return -ENOMEM;
+ ret = of_address_to_resource(node, 0, &mem->res);
+ if (ret < 0) {
+ dev_err(dev, "failed to get resource of memory %s",
+ of_node_full_name(node));
+ return -EINVAL;
+ }
+
+ /* Get the power domain id */
+ if (of_find_property(node, "pnode-id", NULL)) {
+ struct property *prop;
+ const __be32 *cur;
+ u32 val;
+ int i = 0;
+
+ of_property_for_each_u32(node, "pnode-id", prop, cur, val)
+ mem->pnode_id[i++] = val;
+ }
+ list_add_tail(&mem->node, &pdata->mems);
+ return 0;
+}
+
+/**
+ * zynqmp_r5_release() - ZynqMP R5 device release function
+ * @dev: pointer to the device struct of ZynqMP R5
+ *
+ * Function to release ZynqMP R5 device.
+ */
+static void zynqmp_r5_release(struct device *dev)
+{
+ struct zynqmp_r5_pdata *pdata;
+ struct rproc *rproc;
+ struct sk_buff *skb;
+
+ pdata = dev_get_drvdata(dev);
+ rproc = pdata->rproc;
+ if (rproc) {
+ rproc_del(rproc);
+ rproc_free(rproc);
+ }
+ if (pdata->tx_chan)
+ mbox_free_channel(pdata->tx_chan);
+ if (pdata->rx_chan)
+ mbox_free_channel(pdata->rx_chan);
+ /* Discard all SKBs */
+ while (!skb_queue_empty(&pdata->tx_mc_skbs)) {
+ skb = skb_dequeue(&pdata->tx_mc_skbs);
+ kfree_skb(skb);
+ }
+
+ put_device(dev->parent);
+}
+
+/**
+ * event_notified_idr_cb() - event notified idr callback
+ * @id: idr id
+ * @ptr: pointer to idr private data
+ * @data: data passed to idr_for_each callback
+ *
+ * Pass notification to remoteproc virtio
+ *
+ * Return: 0. having return is to satisfy the idr_for_each() function
+ * pointer input argument requirement.
+ **/
+static int event_notified_idr_cb(int id, void *ptr, void *data)
+{
+ struct rproc *rproc = data;
+
+ (void)rproc_vq_interrupt(rproc, id);
+ return 0;
+}
+
+/**
+ * handle_event_notified() - remoteproc notification work function
+ * @work: pointer to the work structure
+ *
+ * It checks each registered remoteproc notify IDs.
+ */
+static void handle_event_notified(struct work_struct *work)
+{
+ struct rproc *rproc;
+ struct zynqmp_r5_pdata *local;
+
+ local = container_of(work, struct zynqmp_r5_pdata, workqueue);
+
+ (void)mbox_send_message(local->rx_chan, NULL);
+ rproc = local->rproc;
+ /*
+ * We only use the IPI for the interrupt. The firmware side may or
+ * may not write the notifyid when it triggers the IPI.
+ * Thus, we scan through all the registered notifyids.
+ */
+ idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
+}
+
+/**
+ * zynqmp_r5_mb_rx_cb() - Receive channel mailbox callback
+ * @cl: mailbox client
+ * @mssg: message pointer
+ *
+ * It will schedule the R5 notification work.
+ */
+static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *mssg)
+{
+ struct zynqmp_r5_pdata *local;
+
+ local = container_of(cl, struct zynqmp_r5_pdata, rx_mc);
+ if (mssg) {
+ struct zynqmp_ipi_message *ipi_msg, *buf_msg;
+ size_t len;
+
+ ipi_msg = (struct zynqmp_ipi_message *)mssg;
+ buf_msg = (struct zynqmp_ipi_message *)local->rx_mc_buf;
+ len = (ipi_msg->len >= IPI_BUF_LEN_MAX) ?
+ IPI_BUF_LEN_MAX : ipi_msg->len;
+ buf_msg->len = len;
+ memcpy(buf_msg->data, ipi_msg->data, len);
+ }
+ schedule_work(&local->workqueue);
+}
+
+/**
+ * zynqmp_r5_mb_tx_done() - Request has been sent to the remote
+ * @cl: mailbox client
+ * @mssg: pointer to the message which has been sent
+ * @r: status of last TX - OK or error
+ *
+ * It will be called by the mailbox framework when the last TX has done.
+ */
+static void zynqmp_r5_mb_tx_done(struct mbox_client *cl, void *mssg, int r)
+{
+ struct zynqmp_r5_pdata *local;
+ struct sk_buff *skb;
+
+ if (!mssg)
+ return;
+ local = container_of(cl, struct zynqmp_r5_pdata, tx_mc);
+ skb = skb_dequeue(&local->tx_mc_skbs);
+ kfree_skb(skb);
+}
+
+/**
+ * zynqmp_r5_setup_mbox() - Setup mailboxes
+ *
+ * @pdata: pointer to the ZynqMP R5 processor platform data
+ * @node: pointer of the device node
+ *
+ * Function to setup mailboxes to talk to RPU.
+ *
+ * Return: 0 for success, negative value for failure.
+ */
+static int zynqmp_r5_setup_mbox(struct zynqmp_r5_pdata *pdata,
+ struct device_node *node)
+{
+ struct device *dev = &pdata->dev;
+ struct mbox_client *mclient;
+
+ /* Setup TX mailbox channel client */
+ mclient = &pdata->tx_mc;
+ mclient->dev = dev;
+ mclient->rx_callback = NULL;
+ mclient->tx_block = false;
+ mclient->knows_txdone = false;
+ mclient->tx_done = zynqmp_r5_mb_tx_done;
+
+ /* Setup TX mailbox channel client */
+ mclient = &pdata->rx_mc;
+ mclient->dev = dev;
+ mclient->rx_callback = zynqmp_r5_mb_rx_cb;
+ mclient->tx_block = false;
+ mclient->knows_txdone = false;
+
+ INIT_WORK(&pdata->workqueue, handle_event_notified);
+
+ /* Request TX and RX channels */
+ pdata->tx_chan = mbox_request_channel_byname(&pdata->tx_mc, "tx");
+ if (IS_ERR(pdata->tx_chan)) {
+ dev_err(dev, "failed to request mbox tx channel.\n");
+ pdata->tx_chan = NULL;
+ return -EINVAL;
+ }
+ pdata->rx_chan = mbox_request_channel_byname(&pdata->rx_mc, "rx");
+ if (IS_ERR(pdata->rx_chan)) {
+ dev_err(dev, "failed to request mbox rx channel.\n");
+ pdata->rx_chan = NULL;
+ return -EINVAL;
+ }
+ skb_queue_head_init(&pdata->tx_mc_skbs);
+ return 0;
+}
+
+/**
+ * zynqmp_r5_probe() - Probes ZynqMP R5 processor device node
+ * @pdata: pointer to the ZynqMP R5 processor platform data
+ * @pdev: parent RPU domain platform device
+ * @node: pointer of the device node
+ *
+ * Function to retrieve the information of the ZynqMP R5 device node.
+ *
+ * Return: 0 for success, negative value for failure.
+ */
+static int zynqmp_r5_probe(struct zynqmp_r5_pdata *pdata,
+ struct platform_device *pdev,
+ struct device_node *node)
+{
+ struct device *dev = &pdata->dev;
+ struct rproc *rproc;
+ struct device_node *nc;
+ int ret;
+
+ /* Create device for ZynqMP R5 device */
+ dev->parent = &pdev->dev;
+ dev->release = zynqmp_r5_release;
+ dev->of_node = node;
+ dev_set_name(dev, "%s", of_node_full_name(node));
+ dev_set_drvdata(dev, pdata);
+ ret = device_register(dev);
+ if (ret) {
+ dev_err(dev, "failed to register device.\n");
+ return ret;
+ }
+ get_device(&pdev->dev);
+
+ /* Allocate remoteproc instance */
+ rproc = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops, NULL, 0);
+ if (!rproc) {
+ dev_err(dev, "rproc allocation failed.\n");
+ ret = -ENOMEM;
+ goto error;
+ }
+ rproc->auto_boot = autoboot;
+ pdata->rproc = rproc;
+ rproc->priv = pdata;
+
+ /*
+ * The device has not been spawned from a device tree, so
+ * arch_setup_dma_ops has not been not called, thus leaving
+ * the device with dummy DMA ops.
+ * Fix this by inheriting the parent's DMA ops and mask.
+ */
+ rproc->dev.dma_mask = pdev->dev.dma_mask;
+ set_dma_ops(&rproc->dev, get_dma_ops(&pdev->dev));
+
+ /* Probe R5 memory devices */
+ INIT_LIST_HEAD(&pdata->mems);
+ for_each_available_child_of_node(node, nc) {
+ ret = zynqmp_r5_mem_probe(pdata, nc);
+ if (ret) {
+ dev_err(dev, "failed to probe memory %s.\n",
+ of_node_full_name(nc));
+ goto error;
+ }
+ }
+
+ /* Set up DMA mask */
+ ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_warn(dev, "dma_set_coherent_mask failed: %d\n", ret);
+ /* If DMA is not configured yet, try to configure it. */
+ ret = of_dma_configure(dev, node, true);
+ if (ret) {
+ dev_err(dev, "failed to configure DMA.\n");
+ goto error;
+ }
+ }
+
+ /* Get R5 power domain node */
+ ret = of_property_read_u32(node, "pnode-id", &pdata->pnode_id);
+ if (ret) {
+ dev_err(dev, "failed to get power node id.\n");
+ goto error;
+ }
+
+ /* Check if R5 is running */
+ if (r5_is_running(pdata)) {
+ atomic_inc(&rproc->power);
+ rproc->state = RPROC_RUNNING;
+ }
+
+ if (!of_get_property(dev->of_node, "mboxes", NULL)) {
+ dev_info(dev, "no mailboxes.\n");
+ } else {
+ ret = zynqmp_r5_setup_mbox(pdata, node);
+ if (ret < 0)
+ goto error;
+ }
+
+ /* Add R5 remoteproc */
+ ret = rproc_add(rproc);
+ if (ret) {
+ dev_err(dev, "rproc registration failed\n");
+ goto error;
+ }
+ return 0;
+error:
+ if (pdata->rproc)
+ rproc_free(pdata->rproc);
+ pdata->rproc = NULL;
+ device_unregister(dev);
+ put_device(&pdev->dev);
+ return ret;
+}
+
+static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
+{
+ const unsigned char *prop;
+ int ret = 0, i;
+ struct device *dev = &pdev->dev;
+ struct device_node *nc;
+
+ prop = of_get_property(dev->of_node, "core_conf", NULL);
+ if (!prop) {
+ dev_err(&pdev->dev, "core_conf property is not specified.\n");
+ return -EINVAL;
+ }
+
+ dev_info(dev, "RPU core_conf: %s\n", prop);
+ if (!strcmp(prop, "split")) {
+ rpu_mode = PM_RPU_MODE_SPLIT;
+ } else if (!strcmp(prop, "lockstep")) {
+ rpu_mode = PM_RPU_MODE_LOCKSTEP;
+ } else {
+ dev_err(dev,
+ "Invalid core_conf mode provided - %s , %d\n",
+ prop, (int)rpu_mode);
+ return -EINVAL;
+ }
+
+ i = 0;
+ for_each_available_child_of_node(dev->of_node, nc) {
+ ret = zynqmp_r5_probe(&rpus[i], pdev, nc);
+ if (ret) {
+ dev_err(dev, "failed to probe rpu %s.\n",
+ of_node_full_name(nc));
+ return ret;
+ }
+ i++;
+ }
+
+ return 0;
+}
+
+static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev)
+{
+ int i;
+
+ for (i = 0; i < MAX_RPROCS; i++) {
+ struct zynqmp_r5_pdata *rpu = &rpus[i];
+ struct rproc *rproc;
+
+ rproc = rpu->rproc;
+ if (rproc) {
+ rproc_del(rproc);
+ rproc_free(rproc);
+ rpu->rproc = NULL;
+ }
+ if (rpu->tx_chan) {
+ mbox_free_channel(rpu->tx_chan);
+ rpu->tx_chan = NULL;
+ }
+ if (rpu->rx_chan) {
+ mbox_free_channel(rpu->rx_chan);
+ rpu->rx_chan = NULL;
+ }
+
+ device_unregister(&rpu->dev);
+ }
+
+ return 0;
+}
+
+/* Match table for OF platform binding */
+static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
+ { .compatible = "xlnx,zynqmp-r5-remoteproc-1.0", },
+ { /* end of list */ },
+};
+MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match);
+
+static struct platform_driver zynqmp_r5_remoteproc_driver = {
+ .probe = zynqmp_r5_remoteproc_probe,
+ .remove = zynqmp_r5_remoteproc_remove,
+ .driver = {
+ .name = "zynqmp_r5_remoteproc",
+ .of_match_table = zynqmp_r5_remoteproc_match,
+ },
+};
+module_platform_driver(zynqmp_r5_remoteproc_driver);
+
+module_param_named(autoboot, autoboot, bool, 0444);
+MODULE_PARM_DESC(autoboot,
+ "enable | disable autoboot. (default: true)");
+
+MODULE_AUTHOR("Ben Levinsky <[email protected]>");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("ZynqMP R5 remote processor control driver");
--
2.7.4

2020-05-05 20:57:54

by Mathieu Poirier

Subject: Re: [PATCH v4 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

Hi Ben,

On Fri, Apr 24, 2020 at 10:36:10AM -0700, Ben Levinsky wrote:
> The R5 is included in the Xilinx Zynq UltraScale+ MPSoC, so by adding
> this remoteproc driver, we can boot the R5 sub-system in different
> configurations.
>
> Acked-by: Stefano Stabellini <[email protected]>
> Acked-by: Ben Levinsky <[email protected]>
> Reviewed-by: Radhey Shyam Pandey <[email protected]>
> Signed-off-by: Ben Levinsky <[email protected]>
> Signed-off-by: Wendy Liang <[email protected]>
> Signed-off-by: Michal Simek <[email protected]>
> Signed-off-by: Ed Mooring <[email protected]>
> Signed-off-by: Jason Wu <[email protected]>
> Tested-by: Ben Levinsky <[email protected]>
> ---
> Changes since v1:
> - remove domain struct as per review from Mathieu
> Changes since v2:
> - add xilinx-related platform mgmt fn's instead of wrapping around
> function pointer in xilinx eemi ops struct
> Changes since v3:
> - fix formatting from checkpatch.pl --strict
> - update commit message
>
> ---
> drivers/remoteproc/Kconfig | 10 +
> drivers/remoteproc/Makefile | 1 +
> drivers/remoteproc/zynqmp_r5_remoteproc.c | 902 ++++++++++++++++++++++++++++++
> 3 files changed, 913 insertions(+)
> create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c
>
> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> index fbaed07..f094c84 100644
> --- a/drivers/remoteproc/Kconfig
> +++ b/drivers/remoteproc/Kconfig
> @@ -222,6 +222,16 @@ config ST_REMOTEPROC
> processor framework.
> This can be either built-in or a loadable module.
>
> +config ZYNQMP_R5_REMOTEPROC
> + tristate "ZynqMP_r5 remoteproc support"

s/ZynqMP_r5/"ZynqMP R5"/

> + depends on ARM64 && PM && ARCH_ZYNQMP
> + select RPMSG_VIRTIO
> + select MAILBOX
> + select ZYNQMP_IPI_MBOX
> + help
> + Say y here to support ZynqMP R5 remote processors via the remote
> + processor framework.
> +

Please move the config block after STM32_RPROC. Otherwise it splits the ST
blocks.

> config ST_SLIM_REMOTEPROC
> tristate
>
> diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
> index 0effd38..806ac3f 100644
> --- a/drivers/remoteproc/Makefile
> +++ b/drivers/remoteproc/Makefile
> @@ -27,5 +27,6 @@ obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
> qcom_wcnss_pil-y += qcom_wcnss.o
> qcom_wcnss_pil-y += qcom_wcnss_iris.o
> obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
> +obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC) += zynqmp_r5_remoteproc.o
> obj-$(CONFIG_ST_SLIM_REMOTEPROC) += st_slim_rproc.o
> obj-$(CONFIG_STM32_RPROC) += stm32_rproc.o
> diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> new file mode 100644
> index 0000000..e3fb4fb
> --- /dev/null
> +++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> @@ -0,0 +1,902 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Zynq R5 Remote Processor driver
> + *
> + * Copyright (C) 2015 - 2018 Xilinx Inc.
> + * Copyright (C) 2015 Jason Wu <[email protected]>
> + *
> + * Based on the original OMAP and Zynq Remote Processor drivers
> + *
> + * Copyright (C) 2012 Michal Simek <[email protected]>
> + * Copyright (C) 2012 PetaLogix
> + * Copyright (C) 2011 Texas Instruments, Inc.
> + * Copyright (C) 2011 Google, Inc.
> + */
> +
> +#include <linux/atomic.h>
> +#include <linux/cpu.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> +#include <linux/firmware/xlnx-zynqmp.h>
> +#include <linux/genalloc.h>
> +#include <linux/idr.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/mailbox_client.h>
> +#include <linux/mailbox/zynqmp-ipi-message.h>
> +#include <linux/module.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/pfn.h>
> +#include <linux/platform_device.h>
> +#include <linux/remoteproc.h>
> +#include <linux/skbuff.h>
> +#include <linux/slab.h>
> +#include <linux/sysfs.h>
> +
> +#include "remoteproc_internal.h"
> +
> +#define MAX_RPROCS 2 /* Support up to 2 RPU */
> +#define MAX_MEM_PNODES 4 /* Max power nodes for one RPU memory instance */
> +
> +#define DEFAULT_FIRMWARE_NAME "rproc-rpu-fw"
> +
> +/* PM proc states */
> +#define PM_PROC_STATE_ACTIVE 1U
> +
> +/* IPI buffer MAX length */
> +#define IPI_BUF_LEN_MAX 32U
> +/* RX mailbox client buffer max length */
> +#define RX_MBOX_CLIENT_BUF_MAX (IPI_BUF_LEN_MAX + \
> + sizeof(struct zynqmp_ipi_message))
> +
> +static bool autoboot __read_mostly;
> +
> +/**
> + * struct zynqmp_r5_mem - zynqmp rpu memory data
> + * @pnode_id: TCM power domain ids
> + * @res: memory resource
> + * @node: list node
> + */
> +struct zynqmp_r5_mem {
> + u32 pnode_id[MAX_MEM_PNODES];
> + struct resource res;
> + struct list_head node;
> +};
> +
> +/**
> + * struct zynqmp_r5_pdata - zynqmp rpu remote processor private data
> + * @dev: device of RPU instance
> + * @rproc: rproc handle
> + * @pnode_id: RPU CPU power domain id
> + * @mems: memory resources
> + * @is_r5_mode_set: indicate if r5 operation mode is set
> + * @tx_mc: tx mailbox client
> + * @rx_mc: rx mailbox client
> + * @tx_chan: tx mailbox channel
> + * @rx_chan: rx mailbox channel
> + * @workqueue: workqueue for the RPU remoteproc
> + * @tx_mc_skbs: socket buffers for tx mailbox client
> + * @rx_mc_buf: rx mailbox client buffer to save the rx message
> + */
> +struct zynqmp_r5_pdata {
> + struct device dev;
> + struct rproc *rproc;
> + u32 pnode_id;
> + struct list_head mems;
> + bool is_r5_mode_set;
> + struct mbox_client tx_mc;
> + struct mbox_client rx_mc;
> + struct mbox_chan *tx_chan;
> + struct mbox_chan *rx_chan;
> + struct work_struct workqueue;
> + struct sk_buff_head tx_mc_skbs;
> + unsigned char rx_mc_buf[RX_MBOX_CLIENT_BUF_MAX];
> +};
> +
> +/**
> + * table of RPUs
> + */
> +struct zynqmp_r5_pdata rpus[MAX_RPROCS];
> +/**
> + * RPU core configuration
> + */
> +enum rpu_oper_mode rpu_mode;
> +
> +/*
> + * r5_set_mode - set RPU operation mode
> + * @pdata: Remote processor private data
> + *
> + * set RPU operation mode
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int r5_set_mode(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 val[PAYLOAD_ARG_CNT] = {0}, expect;
> + struct device *dev = &pdata->dev;
> + int ret;
> +
> + if (pdata->is_r5_mode_set)
> + return 0;
> + expect = (u32)rpu_mode;
> + ret = zynqmp_pm_get_rpu_mode(pdata->pnode_id, 0, 0, val);
> + if (ret < 0) {
> + dev_err(dev, "failed to get RPU oper mode.\n");
> + return ret;
> + }
> + if (val[0] == expect) {
> + dev_dbg(dev, "RPU mode matches: %x\n", val[0]);
> + } else {
> + ret = zynqmp_pm_set_rpu_mode(pdata->pnode_id,
> + expect, 0, val);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to set RPU oper mode.\n");
> + return ret;
> + }
> + }
> + if (expect == (u32)PM_RPU_MODE_LOCKSTEP)
> + expect = (u32)PM_RPU_TCM_COMB;
> + else
> + expect = (u32)PM_RPU_TCM_SPLIT;
> + ret = zynqmp_pm_set_tcm_config(pdata->pnode_id,
> + expect, 0, val);
> + if (ret < 0) {
> + dev_err(dev, "failed to config TCM to %x.\n",
> + expect);
> + return ret;
> + }
> + pdata->is_r5_mode_set = true;
> + return 0;
> +}
> +
> +/**
> + * r5_is_running - check if r5 is running
> + * @pdata: Remote processor private data
> + *
> + * check if R5 is running
> + *
> + * Return: true if r5 is running, false otherwise
> + */
> +static bool r5_is_running(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 status, requirements, usage;
> + struct device *dev = &pdata->dev;
> +
> + if (zynqmp_pm_get_node_status(pdata->pnode_id,
> + &status, &requirements, &usage)) {
> + dev_err(dev, "Failed to get RPU node %d status.\n",
> + pdata->pnode_id);
> + return false;
> + } else if (status != PM_PROC_STATE_ACTIVE) {
> + dev_dbg(dev, "RPU is not running.\n");
> + return false;
> + }
> +
> + dev_dbg(dev, "RPU is running.\n");
> + return true;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc memory release function
> + */
> +static int zynqmp_r5_mem_release(struct rproc *rproc,
> + struct rproc_mem_entry *mem)
> +{
> + struct zynqmp_r5_mem *priv;
> + int i, ret;
> + struct device *dev = &rproc->dev;
> +
> + priv = mem->priv;
> + if (!priv)
> + return 0;
> + for (i = 0; i < MAX_MEM_PNODES; i++) {
> + if (priv->pnode_id[i]) {
> + dev_dbg(dev, "%s, pnode %d\n",
> + __func__, priv->pnode_id[i]);
> + ret = zynqmp_pm_release_node(priv->pnode_id[i]);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to release power node: %u\n",
> + priv->pnode_id[i]);
> + return ret;
> + }
> + } else {
> + break;
> + }
> + }
> + return 0;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc operations
> + */
> +static int zynqmp_r5_rproc_start(struct rproc *rproc)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;
> + enum rpu_boot_mem bootmem;
> + int ret;
> + /* Set up R5 */
> + ret = r5_set_mode(local);
> + if (ret) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + if ((rproc->bootaddr & 0xF0000000) == 0xF0000000)
> + bootmem = PM_RPU_BOOTMEM_HIVEC;
> + else
> + bootmem = PM_RPU_BOOTMEM_LOVEC;
> + dev_info(dev, "RPU boot from %s.",
> + bootmem == PM_RPU_BOOTMEM_HIVEC ? "OCM" : "TCM");
> + ret = zynqmp_pm_request_wakeup(local->pnode_id, 1,
> + bootmem, ZYNQMP_PM_REQUEST_ACK_NO);
> + if (ret < 0) {
> + dev_err(dev, "failed to boot R5.\n");
> + return ret;
> + }
> + return 0;
> +}
> +
> +static int zynqmp_r5_rproc_stop(struct rproc *rproc)
> +{
> + struct zynqmp_r5_pdata *local = rproc->priv;
> + int ret;
> +
> + ret = zynqmp_pm_force_powerdown(local->pnode_id,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(&local->dev, "failed to shutdown R5.\n");
> + return ret;
> + }
> + local->is_r5_mode_set = false;
> + return 0;
> +}
> +
> +static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> +{
> + int num_mems, i, ret;
> + struct zynqmp_r5_pdata *pdata = rproc->priv;
> + struct device *dev = &pdata->dev;
> + struct device_node *np = dev->of_node;
> + struct rproc_mem_entry *mem;
> + struct device_node *child;
> + struct resource rsc;
> +
> + num_mems = of_count_phandle_with_args(np, "memory-region", NULL);
> + if (num_mems <= 0)
> + return 0;
> +
> + for (i = 0; i < num_mems; i++) {
> + struct device_node *node;
> + struct zynqmp_r5_mem *zynqmp_mem;
> + struct reserved_mem *rmem;
> +
> + node = of_parse_phandle(np, "memory-region", i);
> + rmem = of_reserved_mem_lookup(node);
> + if (!rmem) {
> + dev_err(dev, "unable to acquire memory-region\n");
> + return -EINVAL;
> + }
> + if (strstr(node->name, "vdev") &&
> + strstr(node->name, "buffer")) {
> + int id;
> + char name[16];
> +
> + id = node->name[8] - '0';
> + snprintf(name, sizeof(name), "vdev%dbuffer", id);
> + /* Register DMA region */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,
> + NULL, NULL,
> + name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else if (strstr(node->name, "vdev") &&
> + strstr(node->name, "vring")) {
> + int id, vring_id;
> + char name[16];
> +
> + id = node->name[8] - '0';
> + vring_id = node->name[14] - '0';
> + snprintf(name, sizeof(name), "vdev%dvring%d", id,
> + vring_id);
> + /* Register vring */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,
> + NULL, NULL,
> + name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
> + if (!mem->va)
> + return -ENOMEM;
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else {
> + mem = rproc_of_resm_mem_entry_init(dev, i,
> + rmem->size,
> + rmem->base,
> + node->name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
> + if (!mem->va)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> + if (!mem)
> + return -ENOMEM;
> +
> + /*
> + * It is non-DMA memory, used for firmware loading.
> + * It will be added to the R5 remoteproc mappings later.
> + */
> + zynqmp_mem = devm_kzalloc(dev, sizeof(*zynqmp_mem), GFP_KERNEL);
> + if (!zynqmp_mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &zynqmp_mem->res);
> + if (ret) {
> + dev_err(dev, "unable to resolve memory region.\n");
> + return ret;
> + }
> + list_add_tail(&zynqmp_mem->node, &pdata->mems);
> + dev_dbg(dev, "%s, non-dma mem %s\n",
> + __func__, of_node_full_name(node));
> + }
> +
> + /* map TCM memories */
> + for_each_available_child_of_node(np, child) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 pnode_id;
> + void *va;
> + dma_addr_t dma;
> + resource_size_t size;
> +
> + ret = of_address_to_resource(child, 0, &rsc);
> +
> + i = 0;
> + of_property_for_each_u32(child, "pnode-id", prop, cur,
> + pnode_id) {
> + ret = zynqmp_pm_request_node(pnode_id,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to request power node: %u\n",
> + pnode_id);
> + return ret;
> + }
> + ret = r5_set_mode(pdata);
> + if (ret < 0) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + }
> + size = resource_size(&rsc);
> +
> + va = devm_ioremap_wc(dev, rsc.start, size);
> + if (!va)
> + return -ENOMEM;
> +
> + /* zero out tcm base address */
> + if (rsc.start & 0xffe00000) {
> + rsc.start &= 0x000fffff;
> + /* handle TCM banks 1 a and b
> + * (0xffe90000 and 0xffeb0000)
> + */
> + if (rsc.start & 0x80000)
> + rsc.start -= 0x90000;
> + }
> +
> + dma = (dma_addr_t)rsc.start;
> + mem = rproc_mem_entry_init(dev, va, dma, (int)size, rsc.start,
> + NULL, zynqmp_r5_mem_release,
> + rsc.name);
> + if (!mem)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> +
> + ret = rproc_elf_load_rsc_table(rproc, fw);
> + if (ret == -EINVAL)
> + ret = 0;
> + return ret;
> +}
> +
> +/* kick a firmware */
> +static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;
> +
> + dev_dbg(dev, "KICK Firmware to start send messages vqid %d\n", vqid);
> +
> + if (vqid < 0) {
> + /* If vqid is negative, does not pass the vqid to
> + * mailbox. As vqid is supposed to be 0 or possive.
> + * It also gives a way to just kick instead but
> + * not use the IPI buffer. It is better to provide
> + * a proper way to pass the short message, which will
> + * need to sync to upstream first, for now,
> + * use negative vqid to assume no message will be
> + * passed with IPI buffer, but just raise interrupt.
> + * This will be faster as it doesn't need to copy the
> + * message to the IPI buffer.
> + *
> + * It will ignore the return, as failure is due to
> + * there already kicks in the mailbox queue.
> + */
> + (void)mbox_send_message(local->tx_chan, NULL);
> + } else {
> + struct sk_buff *skb;
> + unsigned int skb_len;
> + struct zynqmp_ipi_message *mb_msg;
> + int ret;
> +
> + skb_len = (unsigned int)(sizeof(vqid) + sizeof(mb_msg));
> + skb = alloc_skb(skb_len, GFP_ATOMIC);
> + if (!skb) {
> + dev_err(dev,
> + "Failed to allocate skb to kick remote.\n");
> + return;
> + }
> + mb_msg = (struct zynqmp_ipi_message *)skb_put(skb, skb_len);
> + mb_msg->len = sizeof(vqid);
> + memcpy(mb_msg->data, &vqid, sizeof(vqid));
> + skb_queue_tail(&local->tx_mc_skbs, skb);
> + ret = mbox_send_message(local->tx_chan, mb_msg);
> + if (ret < 0) {
> + dev_warn(dev, "Failed to kick remote.\n");
> + skb_dequeue_tail(&local->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> + }
> +}
> +
> +static struct rproc_ops zynqmp_r5_rproc_ops = {
> + .start = zynqmp_r5_rproc_start,
> + .stop = zynqmp_r5_rproc_stop,
> + .load = rproc_elf_load_segments,
> + .parse_fw = zynqmp_r5_parse_fw,
> + .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
> + .sanity_check = rproc_elf_sanity_check,
> + .get_boot_addr = rproc_elf_get_boot_addr,
> + .kick = zynqmp_r5_rproc_kick,
> +};
> +
> +/* zynqmp_r5_mem_probe() - probes RPU TCM memory device
> + * @pdata: pointer to the RPU remoteproc private data
> + * @node: pointer to the memory node
> + *
> + * Function to retrieve memories resources for RPU TCM memory device.

s/memories//

> + */
> +static int zynqmp_r5_mem_probe(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev;
> + struct zynqmp_r5_mem *mem;
> + int ret;
> +
> + dev = &pdata->dev;
> + mem = devm_kzalloc(dev, sizeof(*mem), GFP_KERNEL);
> + if (!mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &mem->res);
> + if (ret < 0) {
> + dev_err(dev, "failed to get resource of memory %s",
> + of_node_full_name(node));
> + return -EINVAL;
> + }
> +
> + /* Get the power domain id */
> + if (of_find_property(node, "pnode-id", NULL)) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 val;
> + int i = 0;

Please move the above variable declarations to the top of the function.

> +
> + of_property_for_each_u32(node, "pnode-id", prop, cur, val)
> + mem->pnode_id[i++] = val;
> + }
> + list_add_tail(&mem->node, &pdata->mems);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_release() - ZynqMP R5 device release function
> + * @dev: pointer to the device struct of ZynqMP R5
> + *
> + * Function to release ZynqMP R5 device.
> + */
> +static void zynqmp_r5_release(struct device *dev)
> +{
> + struct zynqmp_r5_pdata *pdata;
> + struct rproc *rproc;
> + struct sk_buff *skb;
> +
> + pdata = dev_get_drvdata(dev);
> + rproc = pdata->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + }
> + if (pdata->tx_chan)
> + mbox_free_channel(pdata->tx_chan);
> + if (pdata->rx_chan)
> + mbox_free_channel(pdata->rx_chan);
> + /* Discard all SKBs */
> + while (!skb_queue_empty(&pdata->tx_mc_skbs)) {
> + skb = skb_dequeue(&pdata->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> +
> + put_device(dev->parent);
> +}
> +
> +/**
> + * event_notified_idr_cb() - event notified idr callback
> + * @id: idr id
> + * @ptr: pointer to idr private data
> + * @data: data passed to idr_for_each callback
> + *
> + * Pass notification to remtoeproc virtio
> + *
> + * Return: 0. having return is to satisfy the idr_for_each() function
> + * pointer input argument requirement.
> + **/
> +static int event_notified_idr_cb(int id, void *ptr, void *data)
> +{
> + struct rproc *rproc = data;
> +
> + (void)rproc_vq_interrupt(rproc, id);
> + return 0;
> +}
> +
> +/**
> + * handle_event_notified() - remoteproc notification work funciton
> + * @work: pointer to the work structure
> + *
> + * It checks each registered remoteproc notify IDs.
> + */
> +static void handle_event_notified(struct work_struct *work)
> +{
> + struct rproc *rproc;
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(work, struct zynqmp_r5_pdata, workqueue);
> +
> + (void)mbox_send_message(local->rx_chan, NULL);
> + rproc = local->rproc;
> + /*
> + * We only use IPI for interrupt. The firmware side may or may
> + * not write the notifyid when it trigger IPI.
> + * And thus, we scan through all the registered notifyids.
> + */
> + idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
> +}
> +
> +/**
> + * zynqmp_r5_mb_rx_cb() - Receive channel mailbox callback
> + * @cl: mailbox client
> + * @mssg: message pointer
> + *
> + * It will schedule the R5 notification work.
> + */
> +static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *mssg)
> +{
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(cl, struct zynqmp_r5_pdata, rx_mc);
> + if (mssg) {
> + struct zynqmp_ipi_message *ipi_msg, *buf_msg;
> + size_t len;
> +
> + ipi_msg = (struct zynqmp_ipi_message *)mssg;
> + buf_msg = (struct zynqmp_ipi_message *)local->rx_mc_buf;
> + len = (ipi_msg->len >= IPI_BUF_LEN_MAX) ?
> + IPI_BUF_LEN_MAX : ipi_msg->len;
> + buf_msg->len = len;
> + memcpy(buf_msg->data, ipi_msg->data, len);
> + }
> + schedule_work(&local->workqueue);
> +}
> +
> +/**
> + * zynqmp_r5_mb_tx_done() - Request has been sent to the remote
> + * @cl: mailbox client
> + * @mssg: pointer to the message which has been sent
> + * @r: status of last TX - OK or error
> + *
> + * It will be called by the mailbox framework when the last TX has done.
> + */
> +static void zynqmp_r5_mb_tx_done(struct mbox_client *cl, void *mssg, int r)
> +{
> + struct zynqmp_r5_pdata *local;
> + struct sk_buff *skb;
> +
> + if (!mssg)
> + return;
> + local = container_of(cl, struct zynqmp_r5_pdata, tx_mc);
> + skb = skb_dequeue(&local->tx_mc_skbs);
> + kfree_skb(skb);
> +}
> +
> +/**
> + * zynqmp_r5_setup_mbox() - Setup mailboxes
> + *
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @node: pointer of the device node
> + *
> + * Function to setup mailboxes to talk to RPU.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_setup_mbox(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct mbox_client *mclient;
> +
> + /* Setup TX mailbox channel client */
> + mclient = &pdata->tx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = NULL;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> + mclient->tx_done = zynqmp_r5_mb_tx_done;
> +
> + /* Setup TX mailbox channel client */
> + mclient = &pdata->rx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = zynqmp_r5_mb_rx_cb;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> +
> + INIT_WORK(&pdata->workqueue, handle_event_notified);
> +
> + /* Request TX and RX channels */
> + pdata->tx_chan = mbox_request_channel_byname(&pdata->tx_mc, "tx");
> + if (IS_ERR(pdata->tx_chan)) {
> + dev_err(dev, "failed to request mbox tx channel.\n");
> + pdata->tx_chan = NULL;
> + return -EINVAL;
> + }
> + pdata->rx_chan = mbox_request_channel_byname(&pdata->rx_mc, "rx");
> + if (IS_ERR(pdata->rx_chan)) {
> + dev_err(dev, "failed to request mbox rx channel.\n");
> + pdata->rx_chan = NULL;
> + return -EINVAL;
> + }
> + skb_queue_head_init(&pdata->tx_mc_skbs);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_probe() - Probes ZynqMP R5 processor device node
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @pdev: parent RPU domain platform device
> + * @node: pointer of the device node
> + *
> + * Function to retrieve the information of the ZynqMP R5 device node.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_probe(struct zynqmp_r5_pdata *pdata,
> + struct platform_device *pdev,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct rproc *rproc;
> + struct device_node *nc;
> + int ret;
> +
> + /* Create device for ZynqMP R5 device */
> + dev->parent = &pdev->dev;
> + dev->release = zynqmp_r5_release;
> + dev->of_node = node;
> + dev_set_name(dev, "%s", of_node_full_name(node));
> + dev_set_drvdata(dev, pdata);
> + ret = device_register(dev);

Is there any other reason to use device_register() other than automatically
calling zynqmp_r5_release()? If not, device_initialize() is a better option.

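For illustration, here is a rough, untested sketch of that flow - same fields
as in your patch, just without registering the device on a bus:

        /*
         * device_initialize() is enough to get refcounting and the
         * ->release() callback; no bus registration is needed.
         */
        device_initialize(dev);
        dev->parent = &pdev->dev;
        dev->release = zynqmp_r5_release;
        dev->of_node = node;
        dev_set_name(dev, "%s", of_node_full_name(node));
        dev_set_drvdata(dev, pdata);
        get_device(&pdev->dev);

        /*
         * Later on, put_device(dev) drops the last reference and ends
         * up calling zynqmp_r5_release().
         */
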
> + if (ret) {
> + dev_err(dev, "failed to register device.\n");
> + return ret;
> + }
> + get_device(&pdev->dev);
> +
> + /* Allocate remoteproc instance */
> + rproc = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops, NULL, 0);
> + if (!rproc) {
> + dev_err(dev, "rproc allocation failed.\n");
> + ret = -ENOMEM;
> + goto error;
> + }
> + rproc->auto_boot = autoboot;
> + pdata->rproc = rproc;
> + rproc->priv = pdata;
> +
> + /*
> + * The device has not been spawned from a device tree, so
> + * arch_setup_dma_ops has not been not called, thus leaving

s/not called/called

> + * the device with dummy DMA ops.
> + * Fix this by inheriting the parent's DMA ops and mask.
> + */
> + rproc->dev.dma_mask = pdev->dev.dma_mask;
> + set_dma_ops(&rproc->dev, get_dma_ops(&pdev->dev));
> +
> + /* Probe R5 memory devices */
> + INIT_LIST_HEAD(&pdata->mems);
> + for_each_available_child_of_node(node, nc) {
> + ret = zynqmp_r5_mem_probe(pdata, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe memory %s.\n",
> + of_node_full_name(nc));
> + goto error;
> + }
> + }
> +
> + /* Set up DMA mask */
> + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> + if (ret) {
> + dev_warn(dev, "dma_set_coherent_mask failed: %d\n", ret);
> + /* If DMA is not configured yet, try to configure it. */
> + ret = of_dma_configure(dev, node, true);
> + if (ret) {
> + dev_err(dev, "failed to configure DMA.\n");
> + goto error;
> + }
> + }
> +
> + /* Get R5 power domain node */
> + ret = of_property_read_u32(node, "pnode-id", &pdata->pnode_id);
> + if (ret) {
> + dev_err(dev, "failed to get power node id.\n");
> + goto error;
> + }
> +
> + /* Check if R5 is running */
> + if (r5_is_running(pdata)) {
> + atomic_inc(&rproc->power);
> + rproc->state = RPROC_RUNNING;
> + }

Please hold off on scenarios where the remoteproc core needs to synchronise with
a remote processor. We are currently discussing[1] the best way to do this. It
would be great if you could have a look to see if the current proposal covers
Xilinx's needs.

[1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=277049

> +
> + if (!of_get_property(dev->of_node, "mboxes", NULL)) {
> + dev_info(dev, "no mailboxes.\n");

This thing can work with no mailboxes?

> + } else {
> + ret = zynqmp_r5_setup_mbox(pdata, node);
> + if (ret < 0)
> + goto error;
> + }
> +
> + /* Add R5 remoteproc */
> + ret = rproc_add(rproc);
> + if (ret) {
> + dev_err(dev, "rproc registration failed\n");
> + goto error;
> + }
> + return 0;
> +error:
> + if (pdata->rproc)
> + rproc_free(pdata->rproc);
> + pdata->rproc = NULL;
> + device_unregister(dev);
> + put_device(&pdev->dev);

Seems to me all you need to do here is call put_device(); zynqmp_r5_release()
will then be called and do all that cleanup for you.

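The error label could then shrink to roughly this (untested, and it assumes
zynqmp_r5_release() is taught to cope with an rproc that was never added):

error:
        /*
         * Drops the last reference; zynqmp_r5_release() then does the
         * rproc/mailbox/skb cleanup and puts the parent device.
         */
        put_device(dev);
        return ret;
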
> + return ret;
> +}
> +
> +static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
> +{
> + const unsigned char *prop;
> + int ret = 0, i;

'Sparse' will complain that variable 'ret' doesn't need to be initialised.

> + struct device *dev = &pdev->dev;
> + struct device_node *nc;
> +
> + prop = of_get_property(dev->of_node, "core_conf", NULL);
> + if (!prop) {
> + dev_err(&pdev->dev, "core_conf is not used.\n");
> + return -EINVAL;
> + }
> +
> + dev_info(dev, "RPU core_conf: %s\n", prop);

s/dev_info/dev_dbg

> + if (!strcmp(prop, "split")) {
> + rpu_mode = PM_RPU_MODE_SPLIT;
> + } else if (!strcmp(prop, "lockstep")) {
> + rpu_mode = PM_RPU_MODE_LOCKSTEP;

Can you bring the rpu_oper_mode, rpu_boot_mem and rpu_tcm_comb enumerations
into this file?

> + } else {
> + dev_err(dev,
> + "Invalid core_conf mode provided - %s , %d\n",
> + prop, (int)rpu_mode);
> + return -EINVAL;
> + }
> +
> + i = 0;
> + for_each_available_child_of_node(dev->of_node, nc) {
> + ret = zynqmp_r5_probe(&rpus[i], pdev, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe rpu %s.\n",
> + of_node_full_name(nc));
> + return ret;
> + }
> + i++;
> + }
> +
> + return 0;
> +}
> +
> +static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev)
> +{
> + int i;
> +
> + for (i = 0; i < MAX_RPROCS; i++) {
> + struct zynqmp_r5_pdata *rpu = &rpus[i];
> + struct rproc *rproc;
> +
> + rproc = rpu->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + rpu->rproc = NULL;
> + }
> + if (rpu->tx_chan) {
> + mbox_free_channel(rpu->tx_chan);
> + rpu->tx_chan = NULL;
> + }
> + if (rpu->rx_chan) {
> + mbox_free_channel(rpu->rx_chan);
> + rpu->rx_chan = NULL;
> + }

Isn't all this already done in zynqmp_r5_release()?

Moreover, if you declare a structure zynqmp_r5_cluster, put rpus[MAX_RPROCS]
and rpu_mode in it, and set that as pdev->dev's drvdata, there wouldn't be a
need for global variables. Because zynqmp_r5_pdata::dev->parent == pdev->dev,
it is possible to access the zynqmp_r5_cluster from anywhere.

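Roughly what I have in mind - an untested sketch, and the names are only
placeholders:

struct zynqmp_r5_cluster {
        struct zynqmp_r5_pdata rpus[MAX_RPROCS];
        enum rpu_oper_mode rpu_mode;
};

static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
{
        struct zynqmp_r5_cluster *cluster;

        cluster = devm_kzalloc(&pdev->dev, sizeof(*cluster), GFP_KERNEL);
        if (!cluster)
                return -ENOMEM;
        platform_set_drvdata(pdev, cluster);

        /* parse core_conf into cluster->rpu_mode, probe cluster->rpus[] */
        return 0;
}

/* and from any RPU instance: */
static struct zynqmp_r5_cluster *to_cluster(struct zynqmp_r5_pdata *pdata)
{
        return dev_get_drvdata(pdata->dev.parent);
}
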
I am out of time for today - more comments to come tomorrow.

Mathieu


> +
> + device_unregister(&rpu->dev);
> + }
> +
> + return 0;
> +}
> +
> +/* Match table for OF platform binding */
> +static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
> + { .compatible = "xlnx,zynqmp-r5-remoteproc-1.0", },
> + { /* end of list */ },
> +};
> +MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match);
> +
> +static struct platform_driver zynqmp_r5_remoteproc_driver = {
> + .probe = zynqmp_r5_remoteproc_probe,
> + .remove = zynqmp_r5_remoteproc_remove,
> + .driver = {
> + .name = "zynqmp_r5_remoteproc",
> + .of_match_table = zynqmp_r5_remoteproc_match,
> + },
> +};
> +module_platform_driver(zynqmp_r5_remoteproc_driver);
> +
> +module_param_named(autoboot, autoboot, bool, 0444);
> +MODULE_PARM_DESC(autoboot,
> + "enable | disable autoboot. (default: true)");
> +
> +MODULE_AUTHOR("Ben Levinsky <[email protected]>");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("ZynqMP R5 remote processor control driver");
> --
> 2.7.4
>

2020-05-06 21:42:04

by Mathieu Poirier

[permalink] [raw]
Subject: Re: [PATCH v4 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

Hi Ben,

As promised, here are more comments...

On Fri, Apr 24, 2020 at 10:36:10AM -0700, Ben Levinsky wrote:
> R5 is included in Xilinx Zynq UltraScale MPSoC so by adding this
> remotproc driver, we can boot the R5 sub-system in different
> configurations.
>
> Acked-by: Stefano Stabellini <[email protected]>
> Acked-by: Ben Levinsky <[email protected]>
> Reviewed-by: Radhey Shyam Pandey <[email protected]>
> Signed-off-by: Ben Levinsky <[email protected]>
> Signed-off-by: Wendy Liang <[email protected]>
> Signed-off-by: Michal Simek <[email protected]>
> Signed-off-by: Ed Mooring <[email protected]>
> Signed-off-by: Jason Wu <[email protected]>
> Tested-by: Ben Levinsky <[email protected]>
> ---
> Changes since v1:
> - remove domain struct as per review from Mathieu
> Changes since v2:
> - add xilinx-related platform mgmt fn's instead of wrapping around
> function pointer in xilinx eemi ops struct
> Changes since v3:
> - fix formatting from checpatch.pl --strict
> - update commit message
>
> ---
> drivers/remoteproc/Kconfig | 10 +
> drivers/remoteproc/Makefile | 1 +
> drivers/remoteproc/zynqmp_r5_remoteproc.c | 902 ++++++++++++++++++++++++++++++
> 3 files changed, 913 insertions(+)
> create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c
>
> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> index fbaed07..f094c84 100644
> --- a/drivers/remoteproc/Kconfig
> +++ b/drivers/remoteproc/Kconfig
> @@ -222,6 +222,16 @@ config ST_REMOTEPROC
> processor framework.
> This can be either built-in or a loadable module.
>
> +config ZYNQMP_R5_REMOTEPROC
> + tristate "ZynqMP_r5 remoteproc support"
> + depends on ARM64 && PM && ARCH_ZYNQMP
> + select RPMSG_VIRTIO
> + select MAILBOX
> + select ZYNQMP_IPI_MBOX
> + help
> + Say y here to support ZynqMP R5 remote processors via the remote
> + processor framework.
> +
> config ST_SLIM_REMOTEPROC
> tristate
>
> diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
> index 0effd38..806ac3f 100644
> --- a/drivers/remoteproc/Makefile
> +++ b/drivers/remoteproc/Makefile
> @@ -27,5 +27,6 @@ obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
> qcom_wcnss_pil-y += qcom_wcnss.o
> qcom_wcnss_pil-y += qcom_wcnss_iris.o
> obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
> +obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC) += zynqmp_r5_remoteproc.o
> obj-$(CONFIG_ST_SLIM_REMOTEPROC) += st_slim_rproc.o
> obj-$(CONFIG_STM32_RPROC) += stm32_rproc.o
> diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> new file mode 100644
> index 0000000..e3fb4fb
> --- /dev/null
> +++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> @@ -0,0 +1,902 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Zynq R5 Remote Processor driver
> + *
> + * Copyright (C) 2015 - 2018 Xilinx Inc.
> + * Copyright (C) 2015 Jason Wu <[email protected]>
> + *
> + * Based on origin OMAP and Zynq Remote Processor driver
> + *
> + * Copyright (C) 2012 Michal Simek <[email protected]>
> + * Copyright (C) 2012 PetaLogix
> + * Copyright (C) 2011 Texas Instruments, Inc.
> + * Copyright (C) 2011 Google, Inc.
> + */
> +
> +#include <linux/atomic.h>
> +#include <linux/cpu.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> +#include <linux/firmware/xlnx-zynqmp.h>
> +#include <linux/genalloc.h>
> +#include <linux/idr.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/mailbox_client.h>
> +#include <linux/mailbox/zynqmp-ipi-message.h>
> +#include <linux/module.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/pfn.h>
> +#include <linux/platform_device.h>
> +#include <linux/remoteproc.h>
> +#include <linux/skbuff.h>
> +#include <linux/slab.h>
> +#include <linux/sysfs.h>
> +
> +#include "remoteproc_internal.h"
> +
> +#define MAX_RPROCS 2 /* Support up to 2 RPU */
> +#define MAX_MEM_PNODES 4 /* Max power nodes for one RPU memory instance */
> +
> +#define DEFAULT_FIRMWARE_NAME "rproc-rpu-fw"
> +
> +/* PM proc states */
> +#define PM_PROC_STATE_ACTIVE 1U
> +
> +/* IPI buffer MAX length */
> +#define IPI_BUF_LEN_MAX 32U
> +/* RX mailbox client buffer max length */
> +#define RX_MBOX_CLIENT_BUF_MAX (IPI_BUF_LEN_MAX + \
> + sizeof(struct zynqmp_ipi_message))
> +
> +static bool autoboot __read_mostly;
> +
> +/**
> + * struct zynqmp_r5_mem - zynqmp rpu memory data
> + * @pnode_id: TCM power domain ids
> + * @res: memory resource
> + * @node: list node
> + */
> +struct zynqmp_r5_mem {
> + u32 pnode_id[MAX_MEM_PNODES];
> + struct resource res;
> + struct list_head node;
> +};
> +
> +/**
> + * struct zynqmp_r5_pdata - zynqmp rpu remote processor private data
> + * @dev: device of RPU instance
> + * @rproc: rproc handle
> + * @pnode_id: RPU CPU power domain id
> + * @mems: memory resources
> + * @is_r5_mode_set: indicate if r5 operation mode is set
> + * @tx_mc: tx mailbox client
> + * @rx_mc: rx mailbox client
> + * @tx_chan: tx mailbox channel
> + * @rx_chan: rx mailbox channel
> + * @workqueue: workqueue for the RPU remoteproc
> + * @tx_mc_skbs: socket buffers for tx mailbox client
> + * @rx_mc_buf: rx mailbox client buffer to save the rx message
> + */
> +struct zynqmp_r5_pdata {
> + struct device dev;
> + struct rproc *rproc;
> + u32 pnode_id;
> + struct list_head mems;
> + bool is_r5_mode_set;
> + struct mbox_client tx_mc;
> + struct mbox_client rx_mc;
> + struct mbox_chan *tx_chan;
> + struct mbox_chan *rx_chan;
> + struct work_struct workqueue;

When declaring a work_struct, people will usually call it "work" because it is
work to be executed on the system_wq. Not mandatory, but certainly helpful when
reading the code.

> + struct sk_buff_head tx_mc_skbs;
> + unsigned char rx_mc_buf[RX_MBOX_CLIENT_BUF_MAX];
> +};
> +
> +/**
> + * table of RPUs
> + */
> +struct zynqmp_r5_pdata rpus[MAX_RPROCS];
> +/**
> + * RPU core configuration
> + */
> +enum rpu_oper_mode rpu_mode;
> +
> +/*
> + * r5_set_mode - set RPU operation mode
> + * @pdata: Remote processor private data
> + *
> + * set RPU oepration mode
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int r5_set_mode(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 val[PAYLOAD_ARG_CNT] = {0}, expect;
> + struct device *dev = &pdata->dev;
> + int ret;
> +
> + if (pdata->is_r5_mode_set)
> + return 0;

As far as I can see the above condition isn't needed because 1) function
zynqmp_r5_rproc_stop() sets the flag to false and 2) a remote processor can't be
started twice.

As such you could have rproc->rpu_mode set in the _probe() function and simply
use it there.

> + expect = (u32)rpu_mode;
> + ret = zynqmp_pm_get_rpu_mode(pdata->pnode_id, 0, 0, val);
> + if (ret < 0) {
> + dev_err(dev, "failed to get RPU oper mode.\n");
> + return ret;
> + }
> + if (val[0] == expect) {
> + dev_dbg(dev, "RPU mode matches: %x\n", val[0]);
> + } else {
> + ret = zynqmp_pm_set_rpu_mode(pdata->pnode_id,
> + expect, 0, val);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to set RPU oper mode.\n");
> + return ret;
> + }
> + }
> + if (expect == (u32)PM_RPU_MODE_LOCKSTEP)
> + expect = (u32)PM_RPU_TCM_COMB;
> + else
> + expect = (u32)PM_RPU_TCM_SPLIT;
> + ret = zynqmp_pm_set_tcm_config(pdata->pnode_id,
> + expect, 0, val);

tcm_mode = (expect == (u32)PM_RPU_MODE_LOCKSTEP) ?
	   PM_RPU_TCM_COMB : PM_RPU_TCM_SPLIT;

ret = zynqmp_pm_set_tcm_config(pdata->pnode_id, tcm_mode, 0, val);

> + if (ret < 0) {
> + dev_err(dev, "failed to config TCM to %x.\n",
> + expect);
> + return ret;
> + }
> + pdata->is_r5_mode_set = true;
> + return 0;
> +}
> +
> +/**
> + * r5_is_running - check if r5 is running
> + * @pdata: Remote processor private data
> + *
> + * check if R5 is running
> + *
> + * Return: true if r5 is running, false otherwise
> + */
> +static bool r5_is_running(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 status, requirements, usage;
> + struct device *dev = &pdata->dev;
> +
> + if (zynqmp_pm_get_node_status(pdata->pnode_id,
> + &status, &requirements, &usage)) {
> + dev_err(dev, "Failed to get RPU node %d status.\n",
> + pdata->pnode_id);
> + return false;
> + } else if (status != PM_PROC_STATE_ACTIVE) {
> + dev_dbg(dev, "RPU is not running.\n");
> + return false;
> + }
> +
> + dev_dbg(dev, "RPU is running.\n");
> + return true;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc memory release function
> + */
> +static int zynqmp_r5_mem_release(struct rproc *rproc,
> + struct rproc_mem_entry *mem)
> +{
> + struct zynqmp_r5_mem *priv;
> + int i, ret;
> + struct device *dev = &rproc->dev;
> +
> + priv = mem->priv;
> + if (!priv)
> + return 0;
> + for (i = 0; i < MAX_MEM_PNODES; i++) {
> + if (priv->pnode_id[i]) {
> + dev_dbg(dev, "%s, pnode %d\n",
> + __func__, priv->pnode_id[i]);
> + ret = zynqmp_pm_release_node(priv->pnode_id[i]);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to release power node: %u\n",
> + priv->pnode_id[i]);
> + return ret;
> + }
> + } else {
> + break;
> + }
> + }
> + return 0;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc operations
> + */
> +static int zynqmp_r5_rproc_start(struct rproc *rproc)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;

Please use the same name when referencing zynqmp_r5_pdata. So far I have seen
rpu, pdata and local. Which one you choose is irrelevant but consistency will
help me a lot.

> + enum rpu_boot_mem bootmem;
> + int ret;
> + /* Set up R5 */
> + ret = r5_set_mode(local);
> + if (ret) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + if ((rproc->bootaddr & 0xF0000000) == 0xF0000000)
> + bootmem = PM_RPU_BOOTMEM_HIVEC;
> + else
> + bootmem = PM_RPU_BOOTMEM_LOVEC;
> + dev_info(dev, "RPU boot from %s.",
> + bootmem == PM_RPU_BOOTMEM_HIVEC ? "OCM" : "TCM");
> + ret = zynqmp_pm_request_wakeup(local->pnode_id, 1,
> + bootmem, ZYNQMP_PM_REQUEST_ACK_NO);
> + if (ret < 0) {
> + dev_err(dev, "failed to boot R5.\n");
> + return ret;
> + }
> + return 0;
> +}
> +
> +static int zynqmp_r5_rproc_stop(struct rproc *rproc)
> +{
> + struct zynqmp_r5_pdata *local = rproc->priv;
> + int ret;
> +
> + ret = zynqmp_pm_force_powerdown(local->pnode_id,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(&local->dev, "failed to shutdown R5.\n");
> + return ret;
> + }
> + local->is_r5_mode_set = false;
> + return 0;
> +}
> +
> +static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> +{
> + int num_mems, i, ret;
> + struct zynqmp_r5_pdata *pdata = rproc->priv;
> + struct device *dev = &pdata->dev;
> + struct device_node *np = dev->of_node;
> + struct rproc_mem_entry *mem;
> + struct device_node *child;
> + struct resource rsc;
> +
> + num_mems = of_count_phandle_with_args(np, "memory-region", NULL);
> + if (num_mems <= 0)
> + return 0;
> +
> + for (i = 0; i < num_mems; i++) {
> + struct device_node *node;
> + struct zynqmp_r5_mem *zynqmp_mem;
> + struct reserved_mem *rmem;
> +
> + node = of_parse_phandle(np, "memory-region", i);
> + rmem = of_reserved_mem_lookup(node);
> + if (!rmem) {
> + dev_err(dev, "unable to acquire memory-region\n");
> + return -EINVAL;
> + }
> + if (strstr(node->name, "vdev") &&
> + strstr(node->name, "buffer")) {
> + int id;
> + char name[16];
> +
> + id = node->name[8] - '0';
> + snprintf(name, sizeof(name), "vdev%dbuffer", id);
> + /* Register DMA region */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,
> + NULL, NULL,
> + name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else if (strstr(node->name, "vdev") &&
> + strstr(node->name, "vring")) {
> + int id, vring_id;
> + char name[16];
> +
> + id = node->name[8] - '0';
> + vring_id = node->name[14] - '0';
> + snprintf(name, sizeof(name), "vdev%dvring%d", id,
> + vring_id);
> + /* Register vring */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,
> + NULL, NULL,
> + name);
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
> + if (!mem->va)
> + return -ENOMEM;
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else {
> + mem = rproc_of_resm_mem_entry_init(dev, i,
> + rmem->size,
> + rmem->base,
> + node->name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);
> + if (!mem->va)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> + if (!mem)
> + return -ENOMEM;
> +
> + /*
> + * It is non-DMA memory, used for firmware loading.
> + * It will be added to the R5 remoteproc mappings later.
> + */
> + zynqmp_mem = devm_kzalloc(dev, sizeof(*zynqmp_mem), GFP_KERNEL);
> + if (!zynqmp_mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &zynqmp_mem->res);
> + if (ret) {
> + dev_err(dev, "unable to resolve memory region.\n");
> + return ret;
> + }
> + list_add_tail(&zynqmp_mem->node, &pdata->mems);
> + dev_dbg(dev, "%s, non-dma mem %s\n",
> + __func__, of_node_full_name(node));
> + }
> +
> + /* map TCM memories */
> + for_each_available_child_of_node(np, child) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 pnode_id;
> + void *va;
> + dma_addr_t dma;
> + resource_size_t size;
> +
> + ret = of_address_to_resource(child, 0, &rsc);
> +
> + i = 0;
> + of_property_for_each_u32(child, "pnode-id", prop, cur,
> + pnode_id) {
> + ret = zynqmp_pm_request_node(pnode_id,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to request power node: %u\n",
> + pnode_id);
> + return ret;
> + }
> + ret = r5_set_mode(pdata);
> + if (ret < 0) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + }
> + size = resource_size(&rsc);
> +
> + va = devm_ioremap_wc(dev, rsc.start, size);
> + if (!va)
> + return -ENOMEM;
> +
> + /* zero out tcm base address */
> + if (rsc.start & 0xffe00000) {
> + rsc.start &= 0x000fffff;
> + /* handle tcm banks 1 a and b
> + * (0xffe9000 and oxffeb0000)
> + */
> + if (rsc.start & 0x80000)
> + rsc.start -= 0x90000;
> + }
> +
> + dma = (dma_addr_t)rsc.start;
> + mem = rproc_mem_entry_init(dev, va, dma, (int)size, rsc.start,
> + NULL, zynqmp_r5_mem_release,
> + rsc.name);
> + if (!mem)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> +
> + ret = rproc_elf_load_rsc_table(rproc, fw);
> + if (ret == -EINVAL)
> + ret = 0;
> + return ret;
> +}
> +
> +/* kick a firmware */
> +static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;
> +
> + dev_dbg(dev, "KICK Firmware to start send messages vqid %d\n", vqid);
> +
> + if (vqid < 0) {
> + /* If vqid is negative, does not pass the vqid to
> + * mailbox. As vqid is supposed to be 0 or possive.
> + * It also gives a way to just kick instead but
> + * not use the IPI buffer. It is better to provide
> + * a proper way to pass the short message, which will
> + * need to sync to upstream first, for now,
> + * use negative vqid to assume no message will be
> + * passed with IPI buffer, but just raise interrupt.
> + * This will be faster as it doesn't need to copy the
> + * message to the IPI buffer.
> + *
> + * It will ignore the return, as failure is due to
> + * there already kicks in the mailbox queue.
> + */
> + (void)mbox_send_message(local->tx_chan, NULL);
> + } else {
> + struct sk_buff *skb;
> + unsigned int skb_len;
> + struct zynqmp_ipi_message *mb_msg;
> + int ret;
> +
> + skb_len = (unsigned int)(sizeof(vqid) + sizeof(mb_msg));
> + skb = alloc_skb(skb_len, GFP_ATOMIC);
> + if (!skb) {
> + dev_err(dev,
> + "Failed to allocate skb to kick remote.\n");
> + return;
> + }
> + mb_msg = (struct zynqmp_ipi_message *)skb_put(skb, skb_len);
> + mb_msg->len = sizeof(vqid);
> + memcpy(mb_msg->data, &vqid, sizeof(vqid));
> + skb_queue_tail(&local->tx_mc_skbs, skb);
> + ret = mbox_send_message(local->tx_chan, mb_msg);
> + if (ret < 0) {
> + dev_warn(dev, "Failed to kick remote.\n");
> + skb_dequeue_tail(&local->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> + }
> +}
> +
> +static struct rproc_ops zynqmp_r5_rproc_ops = {
> + .start = zynqmp_r5_rproc_start,
> + .stop = zynqmp_r5_rproc_stop,
> + .load = rproc_elf_load_segments,
> + .parse_fw = zynqmp_r5_parse_fw,
> + .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
> + .sanity_check = rproc_elf_sanity_check,
> + .get_boot_addr = rproc_elf_get_boot_addr,
> + .kick = zynqmp_r5_rproc_kick,
> +};
> +
> +/* zynqmp_r5_mem_probe() - probes RPU TCM memory device
> + * @pdata: pointer to the RPU remoteproc private data
> + * @node: pointer to the memory node
> + *
> + * Function to retrieve memories resources for RPU TCM memory device.
> + */
> +static int zynqmp_r5_mem_probe(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev;
> + struct zynqmp_r5_mem *mem;
> + int ret;
> +
> + dev = &pdata->dev;
> + mem = devm_kzalloc(dev, sizeof(*mem), GFP_KERNEL);
> + if (!mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &mem->res);
> + if (ret < 0) {
> + dev_err(dev, "failed to get resource of memory %s",
> + of_node_full_name(node));
> + return -EINVAL;
> + }
> +
> + /* Get the power domain id */
> + if (of_find_property(node, "pnode-id", NULL)) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 val;
> + int i = 0;
> +
> + of_property_for_each_u32(node, "pnode-id", prop, cur, val)
> + mem->pnode_id[i++] = val;
> + }
> + list_add_tail(&mem->node, &pdata->mems);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_release() - ZynqMP R5 device release function
> + * @dev: pointer to the device struct of ZynqMP R5
> + *
> + * Function to release ZynqMP R5 device.
> + */
> +static void zynqmp_r5_release(struct device *dev)
> +{
> + struct zynqmp_r5_pdata *pdata;
> + struct rproc *rproc;
> + struct sk_buff *skb;
> +
> + pdata = dev_get_drvdata(dev);
> + rproc = pdata->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + }
> + if (pdata->tx_chan)
> + mbox_free_channel(pdata->tx_chan);
> + if (pdata->rx_chan)
> + mbox_free_channel(pdata->rx_chan);
> + /* Discard all SKBs */
> + while (!skb_queue_empty(&pdata->tx_mc_skbs)) {
> + skb = skb_dequeue(&pdata->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> +
> + put_device(dev->parent);
> +}
> +
> +/**
> + * event_notified_idr_cb() - event notified idr callback
> + * @id: idr id
> + * @ptr: pointer to idr private data
> + * @data: data passed to idr_for_each callback
> + *
> + * Pass notification to remtoeproc virtio

s/remtoeproc/remoteproc

> + *
> + * Return: 0. having return is to satisfy the idr_for_each() function
> + * pointer input argument requirement.
> + **/
> +static int event_notified_idr_cb(int id, void *ptr, void *data)
> +{
> + struct rproc *rproc = data;
> +
> + (void)rproc_vq_interrupt(rproc, id);
> + return 0;
> +}
> +
> +/**
> + * handle_event_notified() - remoteproc notification work funciton
> + * @work: pointer to the work structure
> + *
> + * It checks each registered remoteproc notify IDs.
> + */
> +static void handle_event_notified(struct work_struct *work)
> +{
> + struct rproc *rproc;
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(work, struct zynqmp_r5_pdata, workqueue);
> +
> + (void)mbox_send_message(local->rx_chan, NULL);
> + rproc = local->rproc;
> + /*
> + * We only use IPI for interrupt. The firmware side may or may
> + * not write the notifyid when it trigger IPI.
> + * And thus, we scan through all the registered notifyids.
> + */
> + idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
> +}
> +
> +/**
> + * zynqmp_r5_mb_rx_cb() - Receive channel mailbox callback
> + * @cl: mailbox client
> + * @mssg: message pointer
> + *
> + * It will schedule the R5 notification work.
> + */
> +static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *mssg)
> +{
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(cl, struct zynqmp_r5_pdata, rx_mc);
> + if (mssg) {
> + struct zynqmp_ipi_message *ipi_msg, *buf_msg;
> + size_t len;
> +
> + ipi_msg = (struct zynqmp_ipi_message *)mssg;
> + buf_msg = (struct zynqmp_ipi_message *)local->rx_mc_buf;
> + len = (ipi_msg->len >= IPI_BUF_LEN_MAX) ?
> + IPI_BUF_LEN_MAX : ipi_msg->len;
> + buf_msg->len = len;
> + memcpy(buf_msg->data, ipi_msg->data, len);
> + }
> + schedule_work(&local->workqueue);
> +}
> +
> +/**
> + * zynqmp_r5_mb_tx_done() - Request has been sent to the remote
> + * @cl: mailbox client
> + * @mssg: pointer to the message which has been sent
> + * @r: status of last TX - OK or error
> + *
> + * It will be called by the mailbox framework when the last TX has done.
> + */
> +static void zynqmp_r5_mb_tx_done(struct mbox_client *cl, void *mssg, int r)
> +{
> + struct zynqmp_r5_pdata *local;
> + struct sk_buff *skb;
> +
> + if (!mssg)
> + return;
> + local = container_of(cl, struct zynqmp_r5_pdata, tx_mc);
> + skb = skb_dequeue(&local->tx_mc_skbs);
> + kfree_skb(skb);
> +}

I certainly won't pretend to understand the specifics of Xilinx's mailbox system
and as such can't comment on what the above 3 functions do. On the flip side
the code looks semantically correct.

I will continue tomorrow... Function zynqmp_r5_parse_fw() pretty much knocked
me off my feet.

> +
> +/**
> + * zynqmp_r5_setup_mbox() - Setup mailboxes
> + *
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @node: pointer of the device node
> + *
> + * Function to setup mailboxes to talk to RPU.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_setup_mbox(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct mbox_client *mclient;
> +
> + /* Setup TX mailbox channel client */
> + mclient = &pdata->tx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = NULL;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> + mclient->tx_done = zynqmp_r5_mb_tx_done;
> +
> + /* Setup TX mailbox channel client */
> + mclient = &pdata->rx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = zynqmp_r5_mb_rx_cb;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> +
> + INIT_WORK(&pdata->workqueue, handle_event_notified);
> +
> + /* Request TX and RX channels */
> + pdata->tx_chan = mbox_request_channel_byname(&pdata->tx_mc, "tx");
> + if (IS_ERR(pdata->tx_chan)) {
> + dev_err(dev, "failed to request mbox tx channel.\n");
> + pdata->tx_chan = NULL;
> + return -EINVAL;
> + }
> + pdata->rx_chan = mbox_request_channel_byname(&pdata->rx_mc, "rx");
> + if (IS_ERR(pdata->rx_chan)) {
> + dev_err(dev, "failed to request mbox rx channel.\n");
> + pdata->rx_chan = NULL;
> + return -EINVAL;
> + }
> + skb_queue_head_init(&pdata->tx_mc_skbs);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_probe() - Probes ZynqMP R5 processor device node
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @pdev: parent RPU domain platform device
> + * @node: pointer of the device node
> + *
> + * Function to retrieve the information of the ZynqMP R5 device node.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_probe(struct zynqmp_r5_pdata *pdata,
> + struct platform_device *pdev,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct rproc *rproc;
> + struct device_node *nc;
> + int ret;
> +
> + /* Create device for ZynqMP R5 device */
> + dev->parent = &pdev->dev;
> + dev->release = zynqmp_r5_release;
> + dev->of_node = node;
> + dev_set_name(dev, "%s", of_node_full_name(node));
> + dev_set_drvdata(dev, pdata);
> + ret = device_register(dev);
> + if (ret) {
> + dev_err(dev, "failed to register device.\n");
> + return ret;
> + }
> + get_device(&pdev->dev);
> +
> + /* Allocate remoteproc instance */
> + rproc = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops, NULL, 0);
> + if (!rproc) {
> + dev_err(dev, "rproc allocation failed.\n");
> + ret = -ENOMEM;
> + goto error;
> + }
> + rproc->auto_boot = autoboot;
> + pdata->rproc = rproc;
> + rproc->priv = pdata;
> +
> + /*
> + * The device has not been spawned from a device tree, so
> + * arch_setup_dma_ops has not been not called, thus leaving
> + * the device with dummy DMA ops.
> + * Fix this by inheriting the parent's DMA ops and mask.
> + */
> + rproc->dev.dma_mask = pdev->dev.dma_mask;
> + set_dma_ops(&rproc->dev, get_dma_ops(&pdev->dev));
> +
> + /* Probe R5 memory devices */
> + INIT_LIST_HEAD(&pdata->mems);
> + for_each_available_child_of_node(node, nc) {
> + ret = zynqmp_r5_mem_probe(pdata, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe memory %s.\n",
> + of_node_full_name(nc));
> + goto error;
> + }
> + }
> +
> + /* Set up DMA mask */
> + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> + if (ret) {
> + dev_warn(dev, "dma_set_coherent_mask failed: %d\n", ret);
> + /* If DMA is not configured yet, try to configure it. */
> + ret = of_dma_configure(dev, node, true);
> + if (ret) {
> + dev_err(dev, "failed to configure DMA.\n");
> + goto error;
> + }
> + }
> +
> + /* Get R5 power domain node */
> + ret = of_property_read_u32(node, "pnode-id", &pdata->pnode_id);
> + if (ret) {
> + dev_err(dev, "failed to get power node id.\n");
> + goto error;
> + }
> +
> + /* Check if R5 is running */
> + if (r5_is_running(pdata)) {
> + atomic_inc(&rproc->power);
> + rproc->state = RPROC_RUNNING;
> + }
> +
> + if (!of_get_property(dev->of_node, "mboxes", NULL)) {
> + dev_info(dev, "no mailboxes.\n");
> + } else {
> + ret = zynqmp_r5_setup_mbox(pdata, node);
> + if (ret < 0)
> + goto error;
> + }
> +
> + /* Add R5 remoteproc */
> + ret = rproc_add(rproc);
> + if (ret) {
> + dev_err(dev, "rproc registration failed\n");
> + goto error;
> + }
> + return 0;
> +error:
> + if (pdata->rproc)
> + rproc_free(pdata->rproc);
> + pdata->rproc = NULL;
> + device_unregister(dev);
> + put_device(&pdev->dev);
> + return ret;
> +}
> +
> +static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
> +{
> + const unsigned char *prop;
> + int ret = 0, i;
> + struct device *dev = &pdev->dev;
> + struct device_node *nc;
> +
> + prop = of_get_property(dev->of_node, "core_conf", NULL);
> + if (!prop) {
> + dev_err(&pdev->dev, "core_conf is not used.\n");
> + return -EINVAL;
> + }
> +
> + dev_info(dev, "RPU core_conf: %s\n", prop);
> + if (!strcmp(prop, "split")) {
> + rpu_mode = PM_RPU_MODE_SPLIT;
> + } else if (!strcmp(prop, "lockstep")) {
> + rpu_mode = PM_RPU_MODE_LOCKSTEP;
> + } else {
> + dev_err(dev,
> + "Invalid core_conf mode provided - %s , %d\n",
> + prop, (int)rpu_mode);
> + return -EINVAL;
> + }
> +
> + i = 0;
> + for_each_available_child_of_node(dev->of_node, nc) {
> + ret = zynqmp_r5_probe(&rpus[i], pdev, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe rpu %s.\n",
> + of_node_full_name(nc));
> + return ret;
> + }
> + i++;
> + }
> +
> + return 0;
> +}
> +
> +static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev)
> +{
> + int i;
> +
> + for (i = 0; i < MAX_RPROCS; i++) {
> + struct zynqmp_r5_pdata *rpu = &rpus[i];
> + struct rproc *rproc;
> +
> + rproc = rpu->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + rpu->rproc = NULL;
> + }
> + if (rpu->tx_chan) {
> + mbox_free_channel(rpu->tx_chan);
> + rpu->tx_chan = NULL;
> + }
> + if (rpu->rx_chan) {
> + mbox_free_channel(rpu->rx_chan);
> + rpu->rx_chan = NULL;
> + }
> +
> + device_unregister(&rpu->dev);
> + }
> +
> + return 0;
> +}
> +
> +/* Match table for OF platform binding */
> +static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
> + { .compatible = "xlnx,zynqmp-r5-remoteproc-1.0", },
> + { /* end of list */ },
> +};
> +MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match);
> +
> +static struct platform_driver zynqmp_r5_remoteproc_driver = {
> + .probe = zynqmp_r5_remoteproc_probe,
> + .remove = zynqmp_r5_remoteproc_remove,
> + .driver = {
> + .name = "zynqmp_r5_remoteproc",
> + .of_match_table = zynqmp_r5_remoteproc_match,
> + },
> +};
> +module_platform_driver(zynqmp_r5_remoteproc_driver);
> +
> +module_param_named(autoboot, autoboot, bool, 0444);
> +MODULE_PARM_DESC(autoboot,
> + "enable | disable autoboot. (default: true)");
> +
> +MODULE_AUTHOR("Ben Levinsky <[email protected]>");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("ZynqMP R5 remote processor control driver");
> --
> 2.7.4
>

2020-05-07 22:12:33

by Mathieu Poirier

[permalink] [raw]
Subject: Re: [PATCH v4 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

On Fri, Apr 24, 2020 at 10:36:10AM -0700, Ben Levinsky wrote:
> R5 is included in Xilinx Zynq UltraScale MPSoC so by adding this
> remotproc driver, we can boot the R5 sub-system in different
> configurations.
>
> Acked-by: Stefano Stabellini <[email protected]>
> Acked-by: Ben Levinsky <[email protected]>
> Reviewed-by: Radhey Shyam Pandey <[email protected]>
> Signed-off-by: Ben Levinsky <[email protected]>
> Signed-off-by: Wendy Liang <[email protected]>
> Signed-off-by: Michal Simek <[email protected]>
> Signed-off-by: Ed Mooring <[email protected]>
> Signed-off-by: Jason Wu <[email protected]>
> Tested-by: Ben Levinsky <[email protected]>
> ---
> Changes since v1:
> - remove domain struct as per review from Mathieu
> Changes since v2:
> - add xilinx-related platform mgmt fn's instead of wrapping around
> function pointer in xilinx eemi ops struct
> Changes since v3:
> - fix formatting from checpatch.pl --strict
> - update commit message
>
> ---
> drivers/remoteproc/Kconfig | 10 +
> drivers/remoteproc/Makefile | 1 +
> drivers/remoteproc/zynqmp_r5_remoteproc.c | 902 ++++++++++++++++++++++++++++++
> 3 files changed, 913 insertions(+)
> create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c
>
> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> index fbaed07..f094c84 100644
> --- a/drivers/remoteproc/Kconfig
> +++ b/drivers/remoteproc/Kconfig
> @@ -222,6 +222,16 @@ config ST_REMOTEPROC
> processor framework.
> This can be either built-in or a loadable module.
>
> +config ZYNQMP_R5_REMOTEPROC
> + tristate "ZynqMP_r5 remoteproc support"
> + depends on ARM64 && PM && ARCH_ZYNQMP
> + select RPMSG_VIRTIO
> + select MAILBOX
> + select ZYNQMP_IPI_MBOX
> + help
> + Say y here to support ZynqMP R5 remote processors via the remote
> + processor framework.
> +
> config ST_SLIM_REMOTEPROC
> tristate
>
> diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
> index 0effd38..806ac3f 100644
> --- a/drivers/remoteproc/Makefile
> +++ b/drivers/remoteproc/Makefile
> @@ -27,5 +27,6 @@ obj-$(CONFIG_QCOM_WCNSS_PIL) += qcom_wcnss_pil.o
> qcom_wcnss_pil-y += qcom_wcnss.o
> qcom_wcnss_pil-y += qcom_wcnss_iris.o
> obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
> +obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC) += zynqmp_r5_remoteproc.o
> obj-$(CONFIG_ST_SLIM_REMOTEPROC) += st_slim_rproc.o
> obj-$(CONFIG_STM32_RPROC) += stm32_rproc.o
> diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> new file mode 100644
> index 0000000..e3fb4fb
> --- /dev/null
> +++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> @@ -0,0 +1,902 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Zynq R5 Remote Processor driver
> + *
> + * Copyright (C) 2015 - 2018 Xilinx Inc.
> + * Copyright (C) 2015 Jason Wu <[email protected]>
> + *
> + * Based on origin OMAP and Zynq Remote Processor driver
> + *
> + * Copyright (C) 2012 Michal Simek <[email protected]>
> + * Copyright (C) 2012 PetaLogix
> + * Copyright (C) 2011 Texas Instruments, Inc.
> + * Copyright (C) 2011 Google, Inc.
> + */
> +
> +#include <linux/atomic.h>
> +#include <linux/cpu.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> +#include <linux/firmware/xlnx-zynqmp.h>
> +#include <linux/genalloc.h>
> +#include <linux/idr.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/mailbox_client.h>
> +#include <linux/mailbox/zynqmp-ipi-message.h>
> +#include <linux/module.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/pfn.h>
> +#include <linux/platform_device.h>
> +#include <linux/remoteproc.h>
> +#include <linux/skbuff.h>
> +#include <linux/slab.h>
> +#include <linux/sysfs.h>
> +
> +#include "remoteproc_internal.h"
> +
> +#define MAX_RPROCS 2 /* Support up to 2 RPU */
> +#define MAX_MEM_PNODES 4 /* Max power nodes for one RPU memory instance */
> +
> +#define DEFAULT_FIRMWARE_NAME "rproc-rpu-fw"
> +
> +/* PM proc states */
> +#define PM_PROC_STATE_ACTIVE 1U
> +
> +/* IPI buffer MAX length */
> +#define IPI_BUF_LEN_MAX 32U
> +/* RX mailbox client buffer max length */
> +#define RX_MBOX_CLIENT_BUF_MAX (IPI_BUF_LEN_MAX + \
> + sizeof(struct zynqmp_ipi_message))
> +
> +static bool autoboot __read_mostly;
> +
> +/**
> + * struct zynqmp_r5_mem - zynqmp rpu memory data
> + * @pnode_id: TCM power domain ids
> + * @res: memory resource
> + * @node: list node
> + */
> +struct zynqmp_r5_mem {
> + u32 pnode_id[MAX_MEM_PNODES];
> + struct resource res;
> + struct list_head node;
> +};
> +
> +/**
> + * struct zynqmp_r5_pdata - zynqmp rpu remote processor private data
> + * @dev: device of RPU instance
> + * @rproc: rproc handle
> + * @pnode_id: RPU CPU power domain id
> + * @mems: memory resources
> + * @is_r5_mode_set: indicate if r5 operation mode is set
> + * @tx_mc: tx mailbox client
> + * @rx_mc: rx mailbox client
> + * @tx_chan: tx mailbox channel
> + * @rx_chan: rx mailbox channel
> + * @workqueue: workqueue for the RPU remoteproc
> + * @tx_mc_skbs: socket buffers for tx mailbox client
> + * @rx_mc_buf: rx mailbox client buffer to save the rx message
> + */
> +struct zynqmp_r5_pdata {
> + struct device dev;
> + struct rproc *rproc;
> + u32 pnode_id;
> + struct list_head mems;
> + bool is_r5_mode_set;
> + struct mbox_client tx_mc;
> + struct mbox_client rx_mc;
> + struct mbox_chan *tx_chan;
> + struct mbox_chan *rx_chan;
> + struct work_struct workqueue;
> + struct sk_buff_head tx_mc_skbs;
> + unsigned char rx_mc_buf[RX_MBOX_CLIENT_BUF_MAX];
> +};
> +
> +/**
> + * table of RPUs
> + */
> +struct zynqmp_r5_pdata rpus[MAX_RPROCS];
> +/**
> + * RPU core configuration
> + */
> +enum rpu_oper_mode rpu_mode;
> +
> +/*
> + * r5_set_mode - set RPU operation mode
> + * @pdata: Remote processor private data
> + *
> + * set RPU oepration mode
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int r5_set_mode(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 val[PAYLOAD_ARG_CNT] = {0}, expect;
> + struct device *dev = &pdata->dev;
> + int ret;
> +
> + if (pdata->is_r5_mode_set)
> + return 0;
> + expect = (u32)rpu_mode;
> + ret = zynqmp_pm_get_rpu_mode(pdata->pnode_id, 0, 0, val);
> + if (ret < 0) {
> + dev_err(dev, "failed to get RPU oper mode.\n");
> + return ret;
> + }
> + if (val[0] == expect) {
> + dev_dbg(dev, "RPU mode matches: %x\n", val[0]);
> + } else {
> + ret = zynqmp_pm_set_rpu_mode(pdata->pnode_id,
> + expect, 0, val);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to set RPU oper mode.\n");
> + return ret;
> + }
> + }
> + if (expect == (u32)PM_RPU_MODE_LOCKSTEP)
> + expect = (u32)PM_RPU_TCM_COMB;
> + else
> + expect = (u32)PM_RPU_TCM_SPLIT;
> + ret = zynqmp_pm_set_tcm_config(pdata->pnode_id,
> + expect, 0, val);
> + if (ret < 0) {
> + dev_err(dev, "failed to config TCM to %x.\n",
> + expect);
> + return ret;
> + }
> + pdata->is_r5_mode_set = true;
> + return 0;
> +}
> +
> +/**
> + * r5_is_running - check if r5 is running
> + * @pdata: Remote processor private data
> + *
> + * check if R5 is running
> + *
> + * Return: true if r5 is running, false otherwise
> + */
> +static bool r5_is_running(struct zynqmp_r5_pdata *pdata)
> +{
> + u32 status, requirements, usage;
> + struct device *dev = &pdata->dev;
> +
> + if (zynqmp_pm_get_node_status(pdata->pnode_id,
> + &status, &requirements, &usage)) {
> + dev_err(dev, "Failed to get RPU node %d status.\n",
> + pdata->pnode_id);
> + return false;
> + } else if (status != PM_PROC_STATE_ACTIVE) {
> + dev_dbg(dev, "RPU is not running.\n");
> + return false;
> + }
> +
> + dev_dbg(dev, "RPU is running.\n");
> + return true;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc memory release function
> + */
> +static int zynqmp_r5_mem_release(struct rproc *rproc,
> + struct rproc_mem_entry *mem)
> +{
> + struct zynqmp_r5_mem *priv;
> + int i, ret;
> + struct device *dev = &rproc->dev;
> +
> + priv = mem->priv;
> + if (!priv)
> + return 0;
> + for (i = 0; i < MAX_MEM_PNODES; i++) {
> + if (priv->pnode_id[i]) {
> + dev_dbg(dev, "%s, pnode %d\n",
> + __func__, priv->pnode_id[i]);
> + ret = zynqmp_pm_release_node(priv->pnode_id[i]);
> + if (ret < 0) {
> + dev_err(dev,
> + "failed to release power node: %u\n",
> + priv->pnode_id[i]);
> + return ret;
> + }
> + } else {
> + break;
> + }
> + }
> + return 0;
> +}
> +
> +/*
> + * ZynqMP R5 remoteproc operations
> + */
> +static int zynqmp_r5_rproc_start(struct rproc *rproc)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;
> + enum rpu_boot_mem bootmem;
> + int ret;
> + /* Set up R5 */
> + ret = r5_set_mode(local);
> + if (ret) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + if ((rproc->bootaddr & 0xF0000000) == 0xF0000000)
> + bootmem = PM_RPU_BOOTMEM_HIVEC;
> + else
> + bootmem = PM_RPU_BOOTMEM_LOVEC;
> + dev_info(dev, "RPU boot from %s.",
> + bootmem == PM_RPU_BOOTMEM_HIVEC ? "OCM" : "TCM");
> + ret = zynqmp_pm_request_wakeup(local->pnode_id, 1,
> + bootmem, ZYNQMP_PM_REQUEST_ACK_NO);
> + if (ret < 0) {
> + dev_err(dev, "failed to boot R5.\n");
> + return ret;
> + }
> + return 0;
> +}
> +
> +static int zynqmp_r5_rproc_stop(struct rproc *rproc)
> +{
> + struct zynqmp_r5_pdata *local = rproc->priv;
> + int ret;
> +
> + ret = zynqmp_pm_force_powerdown(local->pnode_id,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(&local->dev, "failed to shutdown R5.\n");
> + return ret;
> + }
> + local->is_r5_mode_set = false;
> + return 0;
> +}
> +
> +static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> +{
> + int num_mems, i, ret;
> + struct zynqmp_r5_pdata *pdata = rproc->priv;
> + struct device *dev = &pdata->dev;
> + struct device_node *np = dev->of_node;
> + struct rproc_mem_entry *mem;
> + struct device_node *child;
> + struct resource rsc;
> +
> + num_mems = of_count_phandle_with_args(np, "memory-region", NULL);
> + if (num_mems <= 0)
> + return 0;
> +
> + for (i = 0; i < num_mems; i++) {
> + struct device_node *node;
> + struct zynqmp_r5_mem *zynqmp_mem;
> + struct reserved_mem *rmem;
> +
> + node = of_parse_phandle(np, "memory-region", i);
> + rmem = of_reserved_mem_lookup(node);
> + if (!rmem) {
> + dev_err(dev, "unable to acquire memory-region\n");
> + return -EINVAL;
> + }

Here you can construct a string like "rpu0vdev0buffer" by using rproc->index and
compare that with node->name.

> + if (strstr(node->name, "vdev") &&
> + strstr(node->name, "buffer")) {
> + int id;
> + char name[16];
> +
> + id = node->name[8] - '0';

I think a better way is to just strip off the "rpuX" portion of the newly constructed
string (above comment) and feed that to rproc_mem_entry_init().

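Untested sketch of that idea - it assumes rproc->index lines up with the rpuX
numbering used in the node names and stays a single digit, so "rpuX" is always
four characters:

        char expected[32];

        snprintf(expected, sizeof(expected), "rpu%dvdev0buffer", rproc->index);
        if (!strcmp(node->name, expected)) {
                /*
                 * Drop the "rpuX" prefix so the carveout keeps the
                 * "vdev0buffer" name the remoteproc core looks up.
                 */
                mem = rproc_mem_entry_init(dev, NULL,
                                           (dma_addr_t)rmem->base,
                                           rmem->size, rmem->base,
                                           NULL, NULL, expected + 4);
                if (!mem)
                        return -ENOMEM;
                rproc_add_carveout(rproc, mem);
        }
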
> + snprintf(name, sizeof(name), "vdev%dbuffer", id);
> + /* Register DMA region */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,

So the application processor and the R5 have the same view of the memory?

> + NULL, NULL,
> + name);
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else if (strstr(node->name, "vdev") &&
> + strstr(node->name, "vring")) {
> + int id, vring_id;
> + char name[16];
> +
> + id = node->name[8] - '0';
> + vring_id = node->name[14] - '0';
> + snprintf(name, sizeof(name), "vdev%dvring%d", id,
> + vring_id);
> + /* Register vring */
> + mem = rproc_mem_entry_init(dev, NULL,
> + (dma_addr_t)rmem->base,
> + rmem->size, rmem->base,
> + NULL, NULL,
> + name);
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);

Any reason why this can't be done via the alloc() and release() functions
provided by rproc_mem_entry_init()? Moreover this will core dump if mem is
NULL.

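Something along these lines (untested, function names are just placeholders)
would let the core map the region only when the carveout is actually
allocated, and keeps the NULL check ahead of the mem->va dereference:

static int zynqmp_r5_mem_alloc(struct rproc *rproc,
                               struct rproc_mem_entry *mem)
{
        /* map the reserved region when the core allocates the carveout */
        mem->va = ioremap_wc(mem->dma, mem->len);
        if (!mem->va)
                return -ENOMEM;

        return 0;
}

static int zynqmp_r5_mem_release_va(struct rproc *rproc,
                                    struct rproc_mem_entry *mem)
{
        iounmap(mem->va);
        return 0;
}

and then in zynqmp_r5_parse_fw():

        mem = rproc_mem_entry_init(dev, NULL, (dma_addr_t)rmem->base,
                                   rmem->size, rmem->base,
                                   zynqmp_r5_mem_alloc,
                                   zynqmp_r5_mem_release_va, name);
        if (!mem)
                return -ENOMEM;
        rproc_add_carveout(rproc, mem);
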
> + if (!mem->va)
> + return -ENOMEM;
> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + dev_dbg(dev, "parsed %s at %llx\r\n", mem->name,
> + mem->dma);
> + rproc_add_carveout(rproc, mem);
> + continue;
> + } else {
> + mem = rproc_of_resm_mem_entry_init(dev, i,
> + rmem->size,
> + rmem->base,
> + node->name);

Is this to handle rproc_0_reserved? If so, what is it used for and if not, what
memory region is this handling?

> + if (!mem) {
> + dev_err(dev, "unable to initialize memory-region %s\n",
> + node->name);
> + return -ENOMEM;
> + }
> + mem->va = devm_ioremap_wc(dev, rmem->base, rmem->size);

If you need to do an ioremap on it, probably best to use rproc_mem_entry_init().

> + if (!mem->va)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> + if (!mem)
> + return -ENOMEM;

What is this check for? As far as I can tell all the return values for @mem are
checked in the above loop.

> +
> + /*
> + * It is non-DMA memory, used for firmware loading.
> + * It will be added to the R5 remoteproc mappings later.
> + */
> + zynqmp_mem = devm_kzalloc(dev, sizeof(*zynqmp_mem), GFP_KERNEL);
> + if (!zynqmp_mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &zynqmp_mem->res);
> + if (ret) {
> + dev_err(dev, "unable to resolve memory region.\n");
> + return ret;
> + }
> + list_add_tail(&zynqmp_mem->node, &pdata->mems);

I am curious as to why this is needed since all that information is found in the
carveouts.

I will stop here for this set. Please address the comments coming out of this
set and we can proceed incrementally with the next one.

Thanks,
Mathieu

> + dev_dbg(dev, "%s, non-dma mem %s\n",
> + __func__, of_node_full_name(node));
> + }
> +
> + /* map TCM memories */
> + for_each_available_child_of_node(np, child) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 pnode_id;
> + void *va;
> + dma_addr_t dma;
> + resource_size_t size;
> +
> + ret = of_address_to_resource(child, 0, &rsc);
> +
> + i = 0;
> + of_property_for_each_u32(child, "pnode-id", prop, cur,
> + pnode_id) {
> + ret = zynqmp_pm_request_node(pnode_id,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to request power node: %u\n",
> + pnode_id);
> + return ret;
> + }
> + ret = r5_set_mode(pdata);
> + if (ret < 0) {
> + dev_err(dev, "failed to set R5 operation mode.\n");
> + return ret;
> + }
> + }
> + size = resource_size(&rsc);
> +
> + va = devm_ioremap_wc(dev, rsc.start, size);
> + if (!va)
> + return -ENOMEM;
> +
> + /* zero out tcm base address */
> + if (rsc.start & 0xffe00000) {
> + rsc.start &= 0x000fffff;
> + /* handle tcm banks 1 a and b
> + * (0xffe9000 and oxffeb0000)
> + */
> + if (rsc.start & 0x80000)
> + rsc.start -= 0x90000;
> + }
> +
> + dma = (dma_addr_t)rsc.start;
> + mem = rproc_mem_entry_init(dev, va, dma, (int)size, rsc.start,
> + NULL, zynqmp_r5_mem_release,
> + rsc.name);
> + if (!mem)
> + return -ENOMEM;
> +
> + rproc_add_carveout(rproc, mem);
> + }
> +
> + ret = rproc_elf_load_rsc_table(rproc, fw);
> + if (ret == -EINVAL)
> + ret = 0;
> + return ret;
> +}
> +
> +/* kick a firmware */
> +static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
> +{
> + struct device *dev = rproc->dev.parent;
> + struct zynqmp_r5_pdata *local = rproc->priv;
> +
> + dev_dbg(dev, "KICK Firmware to start send messages vqid %d\n", vqid);
> +
> + if (vqid < 0) {
> + /* If vqid is negative, does not pass the vqid to
> + * mailbox, as vqid is supposed to be 0 or positive.
> + * It also gives a way to just kick instead but
> + * not use the IPI buffer. It is better to provide
> + * a proper way to pass the short message, which will
> + * need to sync to upstream first, for now,
> + * use negative vqid to assume no message will be
> + * passed with IPI buffer, but just raise interrupt.
> + * This will be faster as it doesn't need to copy the
> + * message to the IPI buffer.
> + *
> + * The return value is ignored, as failure only means
> + * there are already kicks in the mailbox queue.
> + */
> + (void)mbox_send_message(local->tx_chan, NULL);
> + } else {
> + struct sk_buff *skb;
> + unsigned int skb_len;
> + struct zynqmp_ipi_message *mb_msg;
> + int ret;
> +
> + skb_len = (unsigned int)(sizeof(vqid) + sizeof(mb_msg));
> + skb = alloc_skb(skb_len, GFP_ATOMIC);
> + if (!skb) {
> + dev_err(dev,
> + "Failed to allocate skb to kick remote.\n");
> + return;
> + }
> + mb_msg = (struct zynqmp_ipi_message *)skb_put(skb, skb_len);
> + mb_msg->len = sizeof(vqid);
> + memcpy(mb_msg->data, &vqid, sizeof(vqid));
> + skb_queue_tail(&local->tx_mc_skbs, skb);
> + ret = mbox_send_message(local->tx_chan, mb_msg);
> + if (ret < 0) {
> + dev_warn(dev, "Failed to kick remote.\n");
> + skb_dequeue_tail(&local->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> + }
> +}
> +
> +static struct rproc_ops zynqmp_r5_rproc_ops = {
> + .start = zynqmp_r5_rproc_start,
> + .stop = zynqmp_r5_rproc_stop,
> + .load = rproc_elf_load_segments,
> + .parse_fw = zynqmp_r5_parse_fw,
> + .find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
> + .sanity_check = rproc_elf_sanity_check,
> + .get_boot_addr = rproc_elf_get_boot_addr,
> + .kick = zynqmp_r5_rproc_kick,
> +};
> +
> +/* zynqmp_r5_mem_probe() - probes RPU TCM memory device
> + * @pdata: pointer to the RPU remoteproc private data
> + * @node: pointer to the memory node
> + *
> + * Function to retrieve memories resources for RPU TCM memory device.
> + */
> +static int zynqmp_r5_mem_probe(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev;
> + struct zynqmp_r5_mem *mem;
> + int ret;
> +
> + dev = &pdata->dev;
> + mem = devm_kzalloc(dev, sizeof(*mem), GFP_KERNEL);
> + if (!mem)
> + return -ENOMEM;
> + ret = of_address_to_resource(node, 0, &mem->res);
> + if (ret < 0) {
> + dev_err(dev, "failed to get resource of memory %s",
> + of_node_full_name(node));
> + return -EINVAL;
> + }
> +
> + /* Get the power domain id */
> + if (of_find_property(node, "pnode-id", NULL)) {
> + struct property *prop;
> + const __be32 *cur;
> + u32 val;
> + int i = 0;
> +
> + of_property_for_each_u32(node, "pnode-id", prop, cur, val)
> + mem->pnode_id[i++] = val;
> + }
> + list_add_tail(&mem->node, &pdata->mems);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_release() - ZynqMP R5 device release function
> + * @dev: pointer to the device struct of ZynqMP R5
> + *
> + * Function to release ZynqMP R5 device.
> + */
> +static void zynqmp_r5_release(struct device *dev)
> +{
> + struct zynqmp_r5_pdata *pdata;
> + struct rproc *rproc;
> + struct sk_buff *skb;
> +
> + pdata = dev_get_drvdata(dev);
> + rproc = pdata->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + }
> + if (pdata->tx_chan)
> + mbox_free_channel(pdata->tx_chan);
> + if (pdata->rx_chan)
> + mbox_free_channel(pdata->rx_chan);
> + /* Discard all SKBs */
> + while (!skb_queue_empty(&pdata->tx_mc_skbs)) {
> + skb = skb_dequeue(&pdata->tx_mc_skbs);
> + kfree_skb(skb);
> + }
> +
> + put_device(dev->parent);
> +}
> +
> +/**
> + * event_notified_idr_cb() - event notified idr callback
> + * @id: idr id
> + * @ptr: pointer to idr private data
> + * @data: data passed to idr_for_each callback
> + *
> + * Pass notification to remoteproc virtio
> + *
> + * Return: 0. having return is to satisfy the idr_for_each() function
> + * pointer input argument requirement.
> + **/
> +static int event_notified_idr_cb(int id, void *ptr, void *data)
> +{
> + struct rproc *rproc = data;
> +
> + (void)rproc_vq_interrupt(rproc, id);
> + return 0;
> +}
> +
> +/**
> + * handle_event_notified() - remoteproc notification work function
> + * @work: pointer to the work structure
> + *
> + * It checks each registered remoteproc notify IDs.
> + */
> +static void handle_event_notified(struct work_struct *work)
> +{
> + struct rproc *rproc;
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(work, struct zynqmp_r5_pdata, workqueue);
> +
> + (void)mbox_send_message(local->rx_chan, NULL);
> + rproc = local->rproc;
> + /*
> + * We only use IPI for interrupt. The firmware side may or may
> + * not write the notifyid when it triggers IPI.
> + * And thus, we scan through all the registered notifyids.
> + */
> + idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
> +}
> +
> +/**
> + * zynqmp_r5_mb_rx_cb() - Receive channel mailbox callback
> + * @cl: mailbox client
> + * @mssg: message pointer
> + *
> + * It will schedule the R5 notification work.
> + */
> +static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *mssg)
> +{
> + struct zynqmp_r5_pdata *local;
> +
> + local = container_of(cl, struct zynqmp_r5_pdata, rx_mc);
> + if (mssg) {
> + struct zynqmp_ipi_message *ipi_msg, *buf_msg;
> + size_t len;
> +
> + ipi_msg = (struct zynqmp_ipi_message *)mssg;
> + buf_msg = (struct zynqmp_ipi_message *)local->rx_mc_buf;
> + len = (ipi_msg->len >= IPI_BUF_LEN_MAX) ?
> + IPI_BUF_LEN_MAX : ipi_msg->len;
> + buf_msg->len = len;
> + memcpy(buf_msg->data, ipi_msg->data, len);
> + }
> + schedule_work(&local->workqueue);
> +}
> +
> +/**
> + * zynqmp_r5_mb_tx_done() - Request has been sent to the remote
> + * @cl: mailbox client
> + * @mssg: pointer to the message which has been sent
> + * @r: status of last TX - OK or error
> + *
> + * It will be called by the mailbox framework when the last TX has done.
> + */
> +static void zynqmp_r5_mb_tx_done(struct mbox_client *cl, void *mssg, int r)
> +{
> + struct zynqmp_r5_pdata *local;
> + struct sk_buff *skb;
> +
> + if (!mssg)
> + return;
> + local = container_of(cl, struct zynqmp_r5_pdata, tx_mc);
> + skb = skb_dequeue(&local->tx_mc_skbs);
> + kfree_skb(skb);
> +}
> +
> +/**
> + * zynqmp_r5_setup_mbox() - Setup mailboxes
> + *
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @node: pointer of the device node
> + *
> + * Function to setup mailboxes to talk to RPU.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_setup_mbox(struct zynqmp_r5_pdata *pdata,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct mbox_client *mclient;
> +
> + /* Setup TX mailbox channel client */
> + mclient = &pdata->tx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = NULL;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> + mclient->tx_done = zynqmp_r5_mb_tx_done;
> +
> + /* Setup RX mailbox channel client */
> + mclient = &pdata->rx_mc;
> + mclient->dev = dev;
> + mclient->rx_callback = zynqmp_r5_mb_rx_cb;
> + mclient->tx_block = false;
> + mclient->knows_txdone = false;
> +
> + INIT_WORK(&pdata->workqueue, handle_event_notified);
> +
> + /* Request TX and RX channels */
> + pdata->tx_chan = mbox_request_channel_byname(&pdata->tx_mc, "tx");
> + if (IS_ERR(pdata->tx_chan)) {
> + dev_err(dev, "failed to request mbox tx channel.\n");
> + pdata->tx_chan = NULL;
> + return -EINVAL;
> + }
> + pdata->rx_chan = mbox_request_channel_byname(&pdata->rx_mc, "rx");
> + if (IS_ERR(pdata->rx_chan)) {
> + dev_err(dev, "failed to request mbox rx channel.\n");
> + pdata->rx_chan = NULL;
> + return -EINVAL;
> + }
> + skb_queue_head_init(&pdata->tx_mc_skbs);
> + return 0;
> +}
> +
> +/**
> + * zynqmp_r5_probe() - Probes ZynqMP R5 processor device node
> + * @pdata: pointer to the ZynqMP R5 processor platform data
> + * @pdev: parent RPU domain platform device
> + * @node: pointer of the device node
> + *
> + * Function to retrieve the information of the ZynqMP R5 device node.
> + *
> + * Return: 0 for success, negative value for failure.
> + */
> +static int zynqmp_r5_probe(struct zynqmp_r5_pdata *pdata,
> + struct platform_device *pdev,
> + struct device_node *node)
> +{
> + struct device *dev = &pdata->dev;
> + struct rproc *rproc;
> + struct device_node *nc;
> + int ret;
> +
> + /* Create device for ZynqMP R5 device */
> + dev->parent = &pdev->dev;
> + dev->release = zynqmp_r5_release;
> + dev->of_node = node;
> + dev_set_name(dev, "%s", of_node_full_name(node));
> + dev_set_drvdata(dev, pdata);
> + ret = device_register(dev);
> + if (ret) {
> + dev_err(dev, "failed to register device.\n");
> + return ret;
> + }
> + get_device(&pdev->dev);
> +
> + /* Allocate remoteproc instance */
> + rproc = rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops, NULL, 0);
> + if (!rproc) {
> + dev_err(dev, "rproc allocation failed.\n");
> + ret = -ENOMEM;
> + goto error;
> + }
> + rproc->auto_boot = autoboot;
> + pdata->rproc = rproc;
> + rproc->priv = pdata;
> +
> + /*
> + * The device has not been spawned from a device tree, so
> + * arch_setup_dma_ops has not been not called, thus leaving
> + * the device with dummy DMA ops.
> + * Fix this by inheriting the parent's DMA ops and mask.
> + */
> + rproc->dev.dma_mask = pdev->dev.dma_mask;
> + set_dma_ops(&rproc->dev, get_dma_ops(&pdev->dev));
> +
> + /* Probe R5 memory devices */
> + INIT_LIST_HEAD(&pdata->mems);
> + for_each_available_child_of_node(node, nc) {
> + ret = zynqmp_r5_mem_probe(pdata, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe memory %s.\n",
> + of_node_full_name(nc));
> + goto error;
> + }
> + }
> +
> + /* Set up DMA mask */
> + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> + if (ret) {
> + dev_warn(dev, "dma_set_coherent_mask failed: %d\n", ret);
> + /* If DMA is not configured yet, try to configure it. */
> + ret = of_dma_configure(dev, node, true);
> + if (ret) {
> + dev_err(dev, "failed to configure DMA.\n");
> + goto error;
> + }
> + }
> +
> + /* Get R5 power domain node */
> + ret = of_property_read_u32(node, "pnode-id", &pdata->pnode_id);
> + if (ret) {
> + dev_err(dev, "failed to get power node id.\n");
> + goto error;
> + }
> +
> + /* Check if R5 is running */
> + if (r5_is_running(pdata)) {
> + atomic_inc(&rproc->power);
> + rproc->state = RPROC_RUNNING;
> + }
> +
> + if (!of_get_property(dev->of_node, "mboxes", NULL)) {
> + dev_info(dev, "no mailboxes.\n");
> + } else {
> + ret = zynqmp_r5_setup_mbox(pdata, node);
> + if (ret < 0)
> + goto error;
> + }
> +
> + /* Add R5 remoteproc */
> + ret = rproc_add(rproc);
> + if (ret) {
> + dev_err(dev, "rproc registration failed\n");
> + goto error;
> + }
> + return 0;
> +error:
> + if (pdata->rproc)
> + rproc_free(pdata->rproc);
> + pdata->rproc = NULL;
> + device_unregister(dev);
> + put_device(&pdev->dev);
> + return ret;
> +}
> +
> +static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
> +{
> + const unsigned char *prop;
> + int ret = 0, i;
> + struct device *dev = &pdev->dev;
> + struct device_node *nc;
> +
> + prop = of_get_property(dev->of_node, "core_conf", NULL);
> + if (!prop) {
> + dev_err(&pdev->dev, "core_conf is not used.\n");
> + return -EINVAL;
> + }
> +
> + dev_info(dev, "RPU core_conf: %s\n", prop);
> + if (!strcmp(prop, "split")) {
> + rpu_mode = PM_RPU_MODE_SPLIT;
> + } else if (!strcmp(prop, "lockstep")) {
> + rpu_mode = PM_RPU_MODE_LOCKSTEP;
> + } else {
> + dev_err(dev,
> + "Invalid core_conf mode provided - %s , %d\n",
> + prop, (int)rpu_mode);
> + return -EINVAL;
> + }
> +
> + i = 0;
> + for_each_available_child_of_node(dev->of_node, nc) {
> + ret = zynqmp_r5_probe(&rpus[i], pdev, nc);
> + if (ret) {
> + dev_err(dev, "failed to probe rpu %s.\n",
> + of_node_full_name(nc));
> + return ret;
> + }
> + i++;
> + }
> +
> + return 0;
> +}
> +
> +static int zynqmp_r5_remoteproc_remove(struct platform_device *pdev)
> +{
> + int i;
> +
> + for (i = 0; i < MAX_RPROCS; i++) {
> + struct zynqmp_r5_pdata *rpu = &rpus[i];
> + struct rproc *rproc;
> +
> + rproc = rpu->rproc;
> + if (rproc) {
> + rproc_del(rproc);
> + rproc_free(rproc);
> + rpu->rproc = NULL;
> + }
> + if (rpu->tx_chan) {
> + mbox_free_channel(rpu->tx_chan);
> + rpu->tx_chan = NULL;
> + }
> + if (rpu->rx_chan) {
> + mbox_free_channel(rpu->rx_chan);
> + rpu->rx_chan = NULL;
> + }
> +
> + device_unregister(&rpu->dev);
> + }
> +
> + return 0;
> +}
> +
> +/* Match table for OF platform binding */
> +static const struct of_device_id zynqmp_r5_remoteproc_match[] = {
> + { .compatible = "xlnx,zynqmp-r5-remoteproc-1.0", },
> + { /* end of list */ },
> +};
> +MODULE_DEVICE_TABLE(of, zynqmp_r5_remoteproc_match);
> +
> +static struct platform_driver zynqmp_r5_remoteproc_driver = {
> + .probe = zynqmp_r5_remoteproc_probe,
> + .remove = zynqmp_r5_remoteproc_remove,
> + .driver = {
> + .name = "zynqmp_r5_remoteproc",
> + .of_match_table = zynqmp_r5_remoteproc_match,
> + },
> +};
> +module_platform_driver(zynqmp_r5_remoteproc_driver);
> +
> +module_param_named(autoboot, autoboot, bool, 0444);
> +MODULE_PARM_DESC(autoboot,
> + "enable | disable autoboot. (default: true)");
> +
> +MODULE_AUTHOR("Ben Levinsky <[email protected]>");
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("ZynqMP R5 remote processor control driver");
> --
> 2.7.4
>

2020-05-11 22:19:58

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Fri, Apr 24, 2020 at 10:36:09AM -0700, Ben Levinsky wrote:
> Add binding for ZynqMP R5 OpenAMP.
>
> Represent the RPU domain resources in one device node. Each RPU
> processor is a subnode of the top RPU domain node.

This needs to be sorted out as part of the system DT effort that Xilinx
is working on. I can't see this binding co-existing with it.

>
> Signed-off-by: Ben Levinsky <[email protected]>
> Signed-off-by: Jason Wu <[email protected]>
> Signed-off-by: Wendy Liang <[email protected]>
> Signed-off-by: Michal Simek <[email protected]>
> ---
> Changes since v2:
> - update zynqmp_r5 yaml parsing to not raise warnings for extra
> information in children of R5 node. The warning "node has a unit
> name, but no reg or ranges property" will still be raised though
> as this particular node is needed to describe the
> '#address-cells' and '#size-cells' information.
> Changes since 3:
> - remove warning '/example-0/rpu@ff9a0000/r5@0:
> node has a unit name, but no reg or ranges property'
> by adding reg to r5 node.
> ---
>
> .../remoteproc/xilinx,zynqmp-r5-remoteproc.yaml | 127 +++++++++++++++++++++
> 1 file changed, 127 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
>
> diff --git a/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
> new file mode 100644
> index 0000000..41520b6
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml
> @@ -0,0 +1,127 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Xilinx R5 remote processor controller bindings
> +
> +description:
> + This document defines the binding for the remoteproc component that loads and
> + boots firmwares on the Xilinx Zynqmp and Versal family chipset.
> +
> +maintainers:
> + - Ed Mooring <[email protected]>
> + - Ben Levinsky <[email protected]>
> +
> +properties:
> + compatible:
> + const: "xlnx,zynqmp-r5-remoteproc-1.0"
> +
> + core_conf:
> + description:
> + R5 core configuration (valid string - split or lock-step)
> + maxItems: 1
> +
> + interrupts:
> + description:
> + Interrupt mapping for remoteproc IPI. It is required if the
> + user uses the remoteproc driver with the RPMsg kernel driver.
> + maxItems: 6
> +
> + memory-region:
> + maxItems: 4
> + minItems: 4
> + pnode-id:
> + maxItems: 1

What is this?

> + mboxes:
> + maxItems: 2
> + mbox-names:
> + maxItems: 2
> +
> + r5@0:
> + type: object
> + required:
> + - '#address-cells'
> + - '#size-cells'
> + - pnode-id
> +examples:
> + - |
> + reserved-memory {
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> + rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
> + no-map;
> + reg = <0x3ed40000 0x4000>;
> + };
> + rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
> + no-map;
> + reg = <0x3ed44000 0x4000>;
> + };
> + rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
> + no-map;
> + reg = <0x3ed48000 0x100000>;
> + };
> + rproc_0_reserved: rproc@3ed000000 {
> + no-map;
> + reg = <0x3ed00000 0x40000>;
> + };
> + };
> + rpu: rpu@ff9a0000 {
> + compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> + core_conf = "split";

If split, then where is the 2nd core?

> + reg = <0xFF9A0000 0x10000>;
> + r5_0: r5@0 {

Unit-addresses are based on 'reg' values.

> + ranges;
> + #address-cells = <1>;
> + #size-cells = <1>;
> + reg = <0xFF9A0100 0x1000>;
> + memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
> + pnode-id = <0x7>;
> + mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
> + mbox-names = "tx", "rx";
> + tcm_0_a: tcm_0@0 {
> + #address-cells = <1>;
> + #size-cells = <1>;
> + reg = <0xFFE00000 0x10000>;
> + pnode-id = <0xf>;

These nodes probably need some sort of compatible. And don't the TCMs
have different addresses for R5 vs. the A cores?

> + };
> + tcm_0_b: tcm_0@1 {
> + #address-cells = <2>;
> + #size-cells = <2>;
> + reg = <0xFFE20000 0x10000>;
> + pnode-id = <0x10>;
> + };
> + };
> + };
> +
> +
> + zynqmp_ipi1 {
> + compatible = "xlnx,zynqmp-ipi-mailbox";
> + interrupt-parent = <&gic>;
> + interrupts = <0 29 4>;
> + xlnx,ipi-id = <7>;
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> +
> + /* APU<->RPU0 IPI mailbox controller */
> + ipi_mailbox_rpu0: mailbox@ff90000 {
> + reg = <0xff990600 0x20>,
> + <0xff990620 0x20>,
> + <0xff9900c0 0x20>,
> + <0xff9900e0 0x20>;
> + reg-names = "local_request_region",
> + "local_response_region",
> + "remote_request_region",
> + "remote_response_region";
> + #mbox-cells = <1>;
> + xlnx,ipi-id = <1>;
> + };
> + };
> +
> +...
> --
> 2.7.4
>

2020-05-26 17:44:41

by Ben Levinsky

[permalink] [raw]
Subject: RE: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Hi Rob,

The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in its current form is using a "classic" remoteproc device tree node as described here.

I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.

Will also update as per your comments below in a v5.

Best Regards,
Ben Levinsky

-----Original Message-----
From: Rob Herring <[email protected]>
Sent: Monday, May 11, 2020 3:18 PM
To: Ben Levinsky <[email protected]>
Cc: [email protected]; [email protected]; Michal Simek <[email protected]>; Jolly Shah <[email protected]>; Rajan Vaja <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Jason Wu <[email protected]>; Jiaying Liang <[email protected]>
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Fri, Apr 24, 2020 at 10:36:09AM -0700, Ben Levinsky wrote:
> Add binding for ZynqMP R5 OpenAMP.
>
> Represent the RPU domain resources in one device node. Each RPU
> processor is a subnode of the top RPU domain node.

This needs to be sorted out as part of the system DT effort that Xilinx is working on. I can't see this binding co-existing with it.

>
> Signed-off-by: Ben Levinsky <[email protected]>
> Signed-off-by: Jason Wu <[email protected]>
> Signed-off-by: Wendy Liang <[email protected]>
> Signed-off-by: Michal Simek <[email protected]>
> ---
> Changes since v2:
> - update zynqmp_r5 yaml parsing to not raise warnings for extra
> information in children of R5 node. The warning "node has a unit
> name, but no reg or ranges property" will still be raised though
> as this particular node is needed to describe the
> '#address-cells' and '#size-cells' information.
> Changes since 3:
> - remove warning '/example-0/rpu@ff9a0000/r5@0:
> node has a unit name, but no reg or ranges property'
> by adding reg to r5 node.
> ---
>
> .../remoteproc/xilinx,zynqmp-r5-remoteproc.yaml | 127 +++++++++++++++++++++
> 1 file changed, 127 insertions(+)
> create mode 100644
> Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remotepr
> oc.yaml
>
> diff --git
> a/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remote
> proc.yaml
> b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-remote
> proc.yaml
> new file mode 100644
> index 0000000..41520b6
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/remoteproc/xilinx,zynqmp-r5-re
> +++ moteproc.yaml
> @@ -0,0 +1,127 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) %YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/remoteproc/xilinx,zynqmp-r5-remoteproc.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Xilinx R5 remote processor controller bindings
> +
> +description:
> + This document defines the binding for the remoteproc component that
> +loads and
> + boots firmwares on the Xilinx Zynqmp and Versal family chipset.
> +
> +maintainers:
> + - Ed Mooring <[email protected]>
> + - Ben Levinsky <[email protected]>
> +
> +properties:
> + compatible:
> + const: "xlnx,zynqmp-r5-remoteproc-1.0"
> +
> + core_conf:
> + description:
> + R5 core configuration (valid string - split or lock-step)
> + maxItems: 1
> +
> + interrupts:
> + description:
> + Interrupt mapping for remoteproc IPI. It is required if the
> + user uses the remoteproc driver with the RPMsg kernel driver.
> + maxItems: 6
> +
> + memory-region:
> + maxItems: 4
> + minItems: 4
> + pnode-id:
> + maxItems: 1

What is this?
[Ben Levinsky] I will add a description for this. This is used by the Xilinx power management code later on when configuring the RPU.

> + mboxes:
> + maxItems: 2
> + mbox-names:
> + maxItems: 2
> +
> + r5@0:
> + type: object
> + required:
> + - '#address-cells'
> + - '#size-cells'
> + - pnode-id
> +examples:
> + - |
> + reserved-memory {
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> + rpu0vdev0vring0: rpu0vdev0vring0@3ed40000 {
> + no-map;
> + reg = <0x3ed40000 0x4000>;
> + };
> + rpu0vdev0vring1: rpu0vdev0vring1@3ed44000 {
> + no-map;
> + reg = <0x3ed44000 0x4000>;
> + };
> + rpu0vdev0buffer: rpu0vdev0buffer@3ed48000 {
> + no-map;
> + reg = <0x3ed48000 0x100000>;
> + };
> + rproc_0_reserved: rproc@3ed000000 {
> + no-map;
> + reg = <0x3ed00000 0x40000>;
> + };
> + };
> + rpu: rpu@ff9a0000 {
> + compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> + core_conf = "split";

If split, then where is the 2nd core?
[Ben Levinsky] Will fix, I will add second core in v5.

> + reg = <0xFF9A0000 0x10000>;
> + r5_0: r5@0 {

Unit-addresses are based on 'reg' values.
[Ben Levinsky] Will fix this, thanks

> + ranges;
> + #address-cells = <1>;
> + #size-cells = <1>;
> + reg = <0xFF9A0100 0x1000>;
> + memory-region = <&rproc_0_reserved>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
> + pnode-id = <0x7>;
> + mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
> + mbox-names = "tx", "rx";
> + tcm_0_a: tcm_0@0 {
> + #address-cells = <1>;
> + #size-cells = <1>;
> + reg = <0xFFE00000 0x10000>;
> + pnode-id = <0xf>;

These nodes probably need some sort of compatible. And don't the TCMs have different addresses for R5 vs. the A cores?
[Ben Levinsky] I can add a compatible. The addressing here is absolute (i.e. 0xffex-xxxx) as it is used from the point of view of the A core here.

> + };
> + tcm_0_b: tcm_0@1 {
> + #address-cells = <2>;
> + #size-cells = <2>;
> + reg = <0xFFE20000 0x10000>;
> + pnode-id = <0x10>;
> + };
> + };
> + };
> +
> +
> + zynqmp_ipi1 {
> + compatible = "xlnx,zynqmp-ipi-mailbox";
> + interrupt-parent = <&gic>;
> + interrupts = <0 29 4>;
> + xlnx,ipi-id = <7>;
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;
> +
> + /* APU<->RPU0 IPI mailbox controller */
> + ipi_mailbox_rpu0: mailbox@ff90000 {
> + reg = <0xff990600 0x20>,
> + <0xff990620 0x20>,
> + <0xff9900c0 0x20>,
> + <0xff9900e0 0x20>;
> + reg-names = "local_request_region",
> + "local_response_region",
> + "remote_request_region",
> + "remote_response_region";
> + #mbox-cells = <1>;
> + xlnx,ipi-id = <1>;
> + };
> + };
> +
> +...
> --
> 2.7.4
>

2020-06-10 20:41:04

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Tue, May 26, 2020 at 11:40 AM Ben Levinsky <[email protected]> wrote:
>
> Hi Rob,
>
> The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in the current form is using an "classic" remoteproc device tree node as described here.

I would rather not have 2 possible bindings to maintain. If there's
been no rush to upstream this til now, then it can wait longer.

>
> I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.

There's obviously going to still need to be some sort of description
of the interface between cores, but this has parts that obviously
conflict with what's getting defined for system DT. The TCMs are the
most obvious. If you can remove (or hardcode in the driver) what
conflicts, then perhaps this can be upstreamed now.

Rob
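
For reference, a rough sketch of what "hardcode in the driver" could look like for the TCM banks, reusing the addresses and power node ids from the example binding above; the table layout is invented here for illustration, not something from the posted series:

	struct zynqmp_r5_tcm_bank {
		phys_addr_t addr;	/* TCM base as seen from the APU */
		size_t size;
		u32 pnode_id;		/* Xilinx power-management node id */
	};

	/* R5 core 0: TCM bank A and bank B */
	static const struct zynqmp_r5_tcm_bank r5_0_tcm[] = {
		{ .addr = 0xffe00000, .size = 0x10000, .pnode_id = 0xf },
		{ .addr = 0xffe20000, .size = 0x10000, .pnode_id = 0x10 },
	};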

2020-06-30 00:38:34

by Stefano Stabellini

[permalink] [raw]
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Wed, 10 Jun 2020, Rob Herring wrote:
> On Tue, May 26, 2020 at 11:40 AM Ben Levinsky <[email protected]> wrote:
> >
> > Hi Rob,
> >
> > The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in the current form is using an "classic" remoteproc device tree node as described here.
>
> I would rather not have 2 possible bindings to maintain. If there's
> been no rush to upstream this til now, then it can wait longer.
>
> >
> > I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.
>
> There's obviously going to still need to be some sort of description
> of the interface between cores, but this has parts that obviously
> conflict with what's getting defined for system DT. The TCMs are the
> most obvious. If you can remove (or hardcode in the driver) what
> conflicts, then perhaps this can be upstreamed now.


Hi Rob,

Sorry it took a while to answer back but we wanted to do some research
to make sure the reply is correct.


The System Device Tree version of the OpenAMP remoteproc bindings aims
at being simpler and vendor-neutral. As anything else System Device
Tree, Lopper will read it and generate a "traditional" device tree with
the existing remoteproc bindings. In that sense, it might not affect
Linux directly.

However, given the fragmentation of the remoteproc bindings across
multiple vendors (they are all different), I think it is a good idea for
Linux, for System Device Tree, and in general to come up with simpler
remoteproc bindings, more aligned between the vendors. If nothing else,
it is going to make Lopper's development easier.


So I think it is a good idea to take this opportunity to simplify the
Xilinx remoteproc bindings as you suggested. The idea of removing the
TCM nodes is a good one. In addition I asked Ben to have a look at
whether the mboxes and mbox-names properties can be removed too.

Ben will reply with a simplified bindings proposal.

2020-06-30 02:24:09

by Bjorn Andersson

[permalink] [raw]
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Mon 29 Jun 17:37 PDT 2020, Stefano Stabellini wrote:

> On Wed, 10 Jun 2020, Rob Herring wrote:
> > On Tue, May 26, 2020 at 11:40 AM Ben Levinsky <[email protected]> wrote:
> > >
> > > Hi Rob,
> > >
> > > The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in the current form is using an "classic" remoteproc device tree node as described here.
> >
> > I would rather not have 2 possible bindings to maintain. If there's
> > been no rush to upstream this til now, then it can wait longer.
> >
> > >
> > > I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.
> >
> > There's obviously going to still need to be some sort of description
> > of the interface between cores, but this has parts that obviously
> > conflict with what's getting defined for system DT. The TCMs are the
> > most obvious. If you can remove (or hardcode in the driver) what
> > conflicts, then perhaps this can be upstreamed now.
>
>
> Hi Rob,
>
> Sorry it took a while to answer back but we wanted to do some research
> to make sure the reply is correct.
>
>
> The System Device Tree version of the OpenAMP remoteproc bindings aims
> at being simpler and vendor-neutral. As anything else System Device
> Tree, Lopper will read it and generate a "traditional" device tree with
> the existing remoteproc bindings. In that sense, it might not affect
> Linux directly.
>

Can you give some examples of how you will be able to describe the
hardware involved in powering/clocking resources surrounding your
remoteproc and the necessary resources in a "simpler and vendor neutral"
way that then can be further lopped(?) into something that Linux can use
to control any remoteproc?

> However, given the fragmentation of the remoteproc bindings across
> multiple vendors (they are all different), I think it is a good idea for
> Linux, for System Device Tree, and in general to come up with simpler
> remoteproc bindings, more aligned between the vendors. If nothing else,
> it is going to make Lopper's development easier.
>

In my view the big reason for the fragmentation between bindings is
because they all describe different hardware. There has been common
properties of remoteprocs discussed, but apart from the firmware-name
property I don't think we have agreed on any.

>
> So I think it is a good idea to take this opportunity to simplify the
> Xilinx remoteproc bindings as you suggested. The idea of to removing the
> TCM nodes is a good one. In addition I asked Ben to have a look at
> whether the mboxes and mbox-names properties can be removed too.
>

If your remoteproc uses a mailbox for signaling, then this should be
described in devicetree. This will allow you to reuse components in
other designs where either part is replaced or reused.

Regards,
Bjorn

> Ben will reply with a simplified bindings proposal.

2020-06-30 15:41:03

by Ben Levinsky

[permalink] [raw]
Subject: RE: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Hi Bjorn,



-----Original Message-----
From: Bjorn Andersson <[email protected]>
Sent: Monday, June 29, 2020 7:20 PM
To: Stefano Stabellini <[email protected]>
Cc: Rob Herring <[email protected]>; Ben Levinsky <[email protected]>; [email protected]; Michal Simek <[email protected]>; Jolly Shah <[email protected]>; Rajan Vaja <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Stefano Stabellini <[email protected]>
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Mon 29 Jun 17:37 PDT 2020, Stefano Stabellini wrote:

> On Wed, 10 Jun 2020, Rob Herring wrote:
> > On Tue, May 26, 2020 at 11:40 AM Ben Levinsky <[email protected]> wrote:
> > >
> > > Hi Rob,
> > >
> > > The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in the current form is using an "classic" remoteproc device tree node as described here.
> >
> > I would rather not have 2 possible bindings to maintain. If there's
> > been no rush to upstream this til now, then it can wait longer.
> >
> > >
> > > I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.
> >
> > There's obviously going to still need to be some sort of description
> > of the interface between cores, but this has parts that obviously
> > conflict with what's getting defined for system DT. The TCMs are the
> > most obvious. If you can remove (or hardcode in the driver) what
> > conflicts, then perhaps this can be upstreamed now.
>
>
> Hi Rob,
>
> Sorry it took a while to answer back but we wanted to do some research
> to make sure the reply is correct.
>
>
> The System Device Tree version of the OpenAMP remoteproc bindings aims
> at being simpler and vendor-neutral. As anything else System Device
> Tree, Lopper will read it and generate a "traditional" device tree
> with the existing remoteproc bindings. In that sense, it might not
> affect Linux directly.
>

Can you give some examples of how you will be able to describe the hardware involved in powering/clocking resources surrounding your remoteproc and the necessary resources in a "simpler and vendor neutral"
way that then can be further lopped(?) into something that Linux can use to control any remoteproc?

> However, given the fragmentation of the remoteproc bindings across
> multiple vendors (they are all different), I think it is a good idea
> for Linux, for System Device Tree, and in general to come up with
> simpler remoteproc bindings, more aligned between the vendors. If
> nothing else, it is going to make Lopper's development easier.
>

In my view the big reason for the fragmentation between bindings is because they all describe different hardware. There has been common properties of remoteprocs discussed, but apart from the firmware-name property I don't think we have agreed on any.

>
> So I think it is a good idea to take this opportunity to simplify the
> Xilinx remoteproc bindings as you suggested. The idea of to removing
> the TCM nodes is a good one. In addition I asked Ben to have a look at
> whether the mboxes and mbox-names properties can be removed too.
>

If your remoteproc uses a mailbox for signaling, then this should be described in devicetree. This will allow you to reuse components in other designs where either part is replaced or reused.

[Ben Levinsky] The Xilinx R5 remoteproc binding can optionally use the mailbox. That is, if loading a simple binary that does not make use of IPC, the mailbox is optional and not needed. Not sure if you would still prefer to have the example showcase use of the mailbox then.


Regards,
Bjorn

> Ben will reply with a simplified bindings proposal.

2020-06-30 18:53:10

by Ben Levinsky

[permalink] [raw]
Subject: RE: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Hi Rob,

Below is proposal simplified device tree binding as per the below discussion.

Best Regards,
Ben

reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	elf_load: rproc@30000000 {
		no-map;
		reg = <0x30000000 0x40000>;
	};
};

rpu {
	compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;
	lockstep-mode = <0>;

	r5_0 {
		ranges;
		#address-cells = <1>;
		#size-cells = <1>;
		memory-region = <&elf_load>;
		pnode-id = <0x10>;
	};
};
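
If a binding along these lines is adopted, the driver probe could presumably read lockstep-mode in place of the core_conf string, roughly as below (sketch only, with the property semantics assumed, not taken from the series):

	u32 lockstep = 0;

	/* assumed semantics: non-zero = lock-step, 0 = split */
	of_property_read_u32(dev->of_node, "lockstep-mode", &lockstep);
	rpu_mode = lockstep ? PM_RPU_MODE_LOCKSTEP : PM_RPU_MODE_SPLIT;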

-----Original Message-----
From: Ben Levinsky
Sent: Tuesday, June 30, 2020 8:39 AM
To: Bjorn Andersson <[email protected]>; Stefano Stabellini <[email protected]>
Cc: Rob Herring <[email protected]>; [email protected]; Michal Simek <[email protected]>; Jolly Shah <[email protected]>; Rajan Vaja <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Stefano Stabellini <[email protected]>
Subject: RE: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Hi Bjorn,



-----Original Message-----
From: Bjorn Andersson <[email protected]>
Sent: Monday, June 29, 2020 7:20 PM
To: Stefano Stabellini <[email protected]>
Cc: Rob Herring <[email protected]>; Ben Levinsky <[email protected]>; [email protected]; Michal Simek <[email protected]>; Jolly Shah <[email protected]>; Rajan Vaja <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; Stefano Stabellini <[email protected]>
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

On Mon 29 Jun 17:37 PDT 2020, Stefano Stabellini wrote:

> On Wed, 10 Jun 2020, Rob Herring wrote:
> > On Tue, May 26, 2020 at 11:40 AM Ben Levinsky <[email protected]> wrote:
> > >
> > > Hi Rob,
> > >
> > > The Xilinx R5 Remoteproc driver has been around for a long time -- admittedly we should have upstreamed it long ago. The driver in the current form is using an "classic" remoteproc device tree node as described here.
> >
> > I would rather not have 2 possible bindings to maintain. If there's
> > been no rush to upstream this til now, then it can wait longer.
> >
> > >
> > > I am working with Stefano to come up with an appropriate System Device Tree representation but it is not going to be ready right away. Our preference would be to upstream the remoteproc node and driver in their current forms while system device tree is maturing.
> >
> > There's obviously going to still need to be some sort of description
> > of the interface between cores, but this has parts that obviously
> > conflict with what's getting defined for system DT. The TCMs are the
> > most obvious. If you can remove (or hardcode in the driver) what
> > conflicts, then perhaps this can be upstreamed now.
>
>
> Hi Rob,
>
> Sorry it took a while to answer back but we wanted to do some research
> to make sure the reply is correct.
>
>
> The System Device Tree version of the OpenAMP remoteproc bindings aims
> at being simpler and vendor-neutral. As anything else System Device
> Tree, Lopper will read it and generate a "traditional" device tree
> with the existing remoteproc bindings. In that sense, it might not
> affect Linux directly.
>

Can you give some examples of how you will be able to describe the hardware involved in powering/clocking resources surrounding your remoteproc and the necessary resources in a "simpler and vendor neutral"
way that then can be further lopped(?) into something that Linux can use to control any remoteproc?

> However, given the fragmentation of the remoteproc bindings across
> multiple vendors (they are all different), I think it is a good idea
> for Linux, for System Device Tree, and in general to come up with
> simpler remoteproc bindings, more aligned between the vendors. If
> nothing else, it is going to make Lopper's development easier.
>

In my view the big reason for the fragmentation between bindings is because they all describe different hardware. There has been common properties of remoteprocs discussed, but apart from the firmware-name property I don't think we have agreed on any.

>
> So I think it is a good idea to take this opportunity to simplify the
> Xilinx remoteproc bindings as you suggested. The idea of to removing
> the TCM nodes is a good one. In addition I asked Ben to have a look at
> whether the mboxes and mbox-names properties can be removed too.
>

If your remoteproc uses a mailbox for signaling, then this should be described in devicetree. This will allow you to reuse components in other designs where either part is replaced or reused.

[Ben Levinsky] The Xilinx R5 remoteproc binding can optionally use the mailbox. That is if loading a simple binary that is not making use of IPC then the mailbox is optional and not needed. Not sure if you still would prefer to have the example showcasing use of mailbox then.


Regards,
Bjorn

> Ben will reply with a simplified bindings proposal.

2020-07-10 17:43:28

by Stefano Stabellini

[permalink] [raw]
Subject: Re: [PATCH v4 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings

Sorry for the late reply, a couple of conferences kept me busy.


On Mon, 29 Jun 2020, Bjorn Andersson wrote:
> > However, given the fragmentation of the remoteproc bindings across
> > multiple vendors (they are all different), I think it is a good idea for
> > Linux, for System Device Tree, and in general to come up with simpler
> > remoteproc bindings, more aligned between the vendors. If nothing else,
> > it is going to make Lopper's development easier.
> >
>
> In my view the big reason for the fragmentation between bindings is
> because they all describe different hardware. There has been common
> properties of remoteprocs discussed, but apart from the firmware-name
> property I don't think we have agreed on any.

Yeah, it is as you wrote.

I meant to say that there might be room for improvement if the vendors
come together and agree on a few more common properties. However, I
don't have any concrete suggestions on this yet. Also, as mentioned, we
can work with today's bindings just fine from a system device tree
perspective.


> Can you give some examples of how you will be able to describe the
> hardware involved in powering/clocking resources surrounding your
> remoteproc and the necessary resources in a "simpler and vendor neutral"
> way that then can be further lopped(?) into something that Linux can use
> to control any remoteproc?

The description at the system device tree level looks a bit different,
which might make the problem a bit easier, or at least different.

Let me give you some context. Lopper
(https://github.com/devicetree-org/lopper) is a tool that takes a system
device tree as input and generates one or more traditional device trees
as output (i.e. today's device tree for Linux.)

System device tree comes with the description of multiple "execution
domains" (https://connect.linaro.org/resources/ltd20/ltd20-205/) and
the ability to assign resources to each of them. That part is
vendor-neutral. We also have the ability to define a vendor-specific
flag when assigning resources.

Altogether it enables us to describe an openamp/remoteproc system with
very little vendor-specific info. I am working on a full example of an
input system device tree with openamp information and the resulting
traditional Linux devicetree. I'll make sure to reach out when I have it
ready.



> > So I think it is a good idea to take this opportunity to simplify the
> > Xilinx remoteproc bindings as you suggested. The idea of to removing the
> > TCM nodes is a good one. In addition I asked Ben to have a look at
> > whether the mboxes and mbox-names properties can be removed too.
> >
>
> If your remoteproc uses a mailbox for signaling, then this should be
> described in devicetree. This will allow you to reuse components in
> other designs where either part is replaced or reused.

OK