2023-10-13 04:23:56

by Tanmay Shah

Subject: [PATCH v6 0/4] add zynqmp TCM bindings

Tightly-Coupled Memories (TCMs) are low-latency memories that provide
predictable instruction execution and predictable data load/store
timing. Each Cortex-R5F processor contains two exclusive 64 KB memory
banks on the ATCM and BTCM ports, for a total of 128 KB of memory per
core. In lockstep mode, the banks of both cores are combined, so the
cluster sees 128 KB of ATCM and 128 KB of BTCM.

As per the Zynq UltraScale+ Technical Reference Manual (UG1085), the
following is the address space of the TCM memory. The bindings in this
patch series introduce properties to accommodate this address space,
with address translation between the Linux and Cortex-R5 views.

| Mode               | R5 View (start) | Linux View (start) | Notes                |
| ------------------ | --------------- | ------------------ | -------------------- |
| *Split Mode*       |                 |                    |                      |
| R5_0 ATCM (64 KB)  | 0x0000_0000     | 0xFFE0_0000        |                      |
| R5_0 BTCM (64 KB)  | 0x0002_0000     | 0xFFE2_0000        |                      |
| R5_1 ATCM (64 KB)  | 0x0000_0000     | 0xFFE9_0000        | alias of 0xFFE1_0000 |
| R5_1 BTCM (64 KB)  | 0x0002_0000     | 0xFFEB_0000        | alias of 0xFFE3_0000 |
| *Lockstep Mode*    |                 |                    |                      |
| R5_0 ATCM (128 KB) | 0x0000_0000     | 0xFFE0_0000        |                      |
| R5_0 BTCM (128 KB) | 0x0002_0000     | 0xFFE2_0000        |                      |
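
To make the translation concrete, here is a minimal standalone C sketch
(not part of the series; struct tcm_range and r5_to_linux are made-up
names for this illustration) of the mapping that the split-mode ranges
property encodes: each entry maps a (core index, R5-view offset) pair
onto a Linux bus address.

#include <stdint.h>
#include <stdio.h>

struct tcm_range {
	uint32_t core;     /* child address cell 0: R5 core index */
	uint32_t r5_addr;  /* child address cell 1: R5-view start */
	uint64_t bus_addr; /* parent address: Linux-view start */
	uint32_t len;      /* bank length in bytes */
};

/* split-mode entries from the table above */
static const struct tcm_range ranges[] = {
	{ 0, 0x00000, 0xffe00000, 0x10000 }, /* R5_0 ATCM */
	{ 0, 0x20000, 0xffe20000, 0x10000 }, /* R5_0 BTCM */
	{ 1, 0x00000, 0xffe90000, 0x10000 }, /* R5_1 ATCM */
	{ 1, 0x20000, 0xffeb0000, 0x10000 }, /* R5_1 BTCM */
};

/* translate an R5-view address to the Linux view; returns 0 if unmapped */
static uint64_t r5_to_linux(uint32_t core, uint32_t da)
{
	size_t i;

	for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		if (ranges[i].core == core && da >= ranges[i].r5_addr &&
		    da - ranges[i].r5_addr < ranges[i].len)
			return ranges[i].bus_addr + (da - ranges[i].r5_addr);
	}
	return 0;
}

int main(void)
{
	/* R5_1 BTCM offset 0x100 -> prints 0xffeb0100 */
	printf("0x%llx\n", (unsigned long long)r5_to_linux(1, 0x20100));
	return 0;
}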

References:
UG1085 TCM address space:
https://docs.xilinx.com/r/en-US/ug1085-zynq-ultrascale-trm/Tightly-Coupled-Memory-Address-Map

Changes in v6:
- Introduce new node entry for r5f cluster split mode dts and
keep it disabled by default.
- Keep remoteproc lockstep mode enabled by default to maintain
backward compatibility.
- Enable split mode only for the zcu102 board to demonstrate its use
- Remove spurious change
- Handle errors in add_pm_domains function
- Remove redundant code to handle errors from remove_pm_domains
- Add the missing '.' at the end of the commit message
- remove redundant initialization of variables
- remove the fail_tcm label and the code that frees memory acquired
using the devm_* APIs, as it is freed automatically when the device is
released
- add an extra check to see whether the "reg" property is present

Changes in v5:
- maintain Rob's Ack on bindings patch as no changes in bindings
- split previous patch into multiple patches
- Use pm domain framework to turn on/off TCM
- Add support for parsing TCM information from the device-tree
- maintain backward compatibility with previous bindings without
TCM information available in device-tree

This patch series continues the previous effort to upstream the ZynqMP
TCM bindings:
Previous v4 version link:
https://lore.kernel.org/all/[email protected]/

Previous v3 version link:
https://lore.kernel.org/all/[email protected]/

Radhey Shyam Pandey (1):
dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings

Tanmay Shah (3):
dts: zynqmp: add properties for TCM in remoteproc
remoteproc: zynqmp: add pm domains support
remoteproc: zynqmp: parse TCM from device tree

.../remoteproc/xlnx,zynqmp-r5fss.yaml | 131 ++++++-
.../boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts | 8 +
arch/arm64/boot/dts/xilinx/zynqmp.dtsi | 60 ++-
drivers/remoteproc/xlnx_r5_remoteproc.c | 368 ++++++++++++++++--
4 files changed, 517 insertions(+), 50 deletions(-)


base-commit: a7d272979d3a89b117ca2c547dc8a465c4f28635
--
2.25.1


2023-10-13 04:24:04

by Tanmay Shah

Subject: [PATCH v6 2/4] dts: zynqmp: add properties for TCM in remoteproc

Add properties, as per the new bindings, in the zynqmp remoteproc nodes
to represent the TCM address and size.

This patch also adds an alternative remoteproc node to represent the
remoteproc cluster in split mode. By default lockstep mode is enabled,
and users should disable it before using the split mode dts. Both
device-tree nodes can't be used simultaneously; one of them must be
disabled. For the zcu102-1.0 and zcu102-1.1 boards the remoteproc
split mode dts node is enabled and the lockstep mode dts is disabled.

Signed-off-by: Tanmay Shah <[email protected]>
---

Changes in v6:
- Introduce new node entry for r5f cluster split mode dts and
keep it disabled by default.
- Keep remoteproc lockstep mode enabled by default to maintain
backward compatibility.
- Enable split mode only for the zcu102 board to demonstrate its use

.../boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts | 8 +++
arch/arm64/boot/dts/xilinx/zynqmp.dtsi | 60 +++++++++++++++++--
2 files changed, 63 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts
index c8f71a1aec89..495ca94b45db 100644
--- a/arch/arm64/boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts
+++ b/arch/arm64/boot/dts/xilinx/zynqmp-zcu102-rev1.0.dts
@@ -14,6 +14,14 @@ / {
compatible = "xlnx,zynqmp-zcu102-rev1.0", "xlnx,zynqmp-zcu102", "xlnx,zynqmp";
};

+&rproc_split {
+ status = "okay";
+};
+
+&rproc_lockstep {
+ status = "disabled";
+};
+
&eeprom {
#address-cells = <1>;
#size-cells = <1>;
diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
index b61fc99cd911..602e6aba7ac5 100644
--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
@@ -247,19 +247,69 @@ fpga_full: fpga-full {
ranges;
};

- remoteproc {
+ rproc_lockstep: remoteproc@ffe00000 {
compatible = "xlnx,zynqmp-r5fss";
xlnx,cluster-mode = <1>;

- r5f-0 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x20000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x20000>,
+ <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+ <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+ r5f@0 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x0 0x0 0x0 0x20000>, <0x0 0x20000 0x0 0x20000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
+ memory-region = <&rproc_0_fw_image>;
+ };
+
+ r5f@1 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
+ memory-region = <&rproc_1_fw_image>;
+ };
+ };
+
+ rproc_split: remoteproc-split@ffe00000 {
+ status = "disabled";
+ compatible = "xlnx,zynqmp-r5fss";
+ xlnx,cluster-mode = <0>;
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+ <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+ <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+ r5f@0 {
compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware PD_RPU_0>;
+ reg = <0x0 0x0 0x0 0x10000>, <0x0 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
memory-region = <&rproc_0_fw_image>;
};

- r5f-1 {
+ r5f@1 {
compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware PD_RPU_1>;
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
memory-region = <&rproc_1_fw_image>;
};
};
--
2.25.1

2023-10-13 04:24:26

by Tanmay Shah

Subject: [PATCH v6 4/4] remoteproc: zynqmp: parse TCM from device tree

The ZynqMP TCM information was hardcoded in the driver. Now the TCM
information is available in the device-tree, so parse it in the driver
as per the new bindings.

Signed-off-by: Tanmay Shah <[email protected]>
---

Changes in v6:
- Add the missing '.' at the end of the commit message
- remove redundant initialization of variables
- remove the fail_tcm label and the code that frees memory acquired
using the devm_* APIs, as it is freed automatically when the device is
released
- add an extra check to see whether the "reg" property is present

drivers/remoteproc/xlnx_r5_remoteproc.c | 106 ++++++++++++++++++++++--
1 file changed, 98 insertions(+), 8 deletions(-)

diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index 04e95d880184..8c3575970c1d 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -75,8 +75,8 @@ struct mbox_info {
};

/*
- * Hardcoded TCM bank values. This will be removed once TCM bindings are
- * accepted for system-dt specifications and upstreamed in linux kernel
+ * Hardcoded TCM bank values. This will stay in driver to maintain backward
+ * compatibility with device-tree that does not have TCM information.
*/
static const struct mem_bank_data zynqmp_tcm_banks_split[] = {
{0xffe00000UL, 0x0, 0x10000UL, PD_R5_0_ATCM, "atcm0"}, /* TCM 64KB each */
@@ -613,7 +613,8 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
bank_name);
if (!rproc_mem) {
ret = -ENOMEM;
- zynqmp_pm_release_node(pm_domain_id);
+ if (pm_domain_id)
+ zynqmp_pm_release_node(pm_domain_id);
goto release_tcm_split;
}

@@ -626,7 +627,8 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
/* If failed, Turn off all TCM banks turned on before */
for (i--; i >= 0; i--) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
- zynqmp_pm_release_node(pm_domain_id);
+ if (pm_domain_id)
+ zynqmp_pm_release_node(pm_domain_id);
}
return ret;
}
@@ -940,6 +942,8 @@ static int zynqmp_r5_add_pm_domains(struct rproc *rproc)
}
}

+ return 0;
+
fail_add_pm_domains_lockstep:
while (j >= 1) {
if (r5_core->pm_dev_link2 && !IS_ERR_OR_NULL(r5_core->pm_dev_link2[j]))
@@ -1102,6 +1106,83 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
return ERR_PTR(ret);
}

+static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
+{
+ struct zynqmp_r5_core *r5_core;
+ int i, j, tcm_bank_count, ret;
+ struct platform_device *cpdev;
+ struct mem_bank_data *tcm;
+ struct device_node *np;
+ struct resource *res;
+ u64 abs_addr, size;
+ struct device *dev;
+
+ for (i = 0; i < cluster->core_count; i++) {
+ r5_core = cluster->r5_cores[i];
+ dev = r5_core->dev;
+ np = dev_of_node(dev);
+
+ /* we have address cell 2 and size cell as 2 */
+ ret = of_property_count_elems_of_size(np, "reg",
+ 4 * sizeof(u32));
+ if (ret <= 0) {
+ dev_err(dev, "can't get reg property err %d\n", ret);
+ return -EINVAL;
+ }
+
+ tcm_bank_count = ret;
+
+ r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
+ sizeof(struct mem_bank_data *),
+ GFP_KERNEL);
+ if (!r5_core->tcm_banks)
+ ret = -ENOMEM;
+
+ r5_core->tcm_bank_count = tcm_bank_count;
+ for (j = 0; j < tcm_bank_count; j++) {
+ tcm = devm_kzalloc(dev, sizeof(struct mem_bank_data *),
+ GFP_KERNEL);
+ if (!tcm)
+ return -ENOMEM;
+
+ r5_core->tcm_banks[j] = tcm;
+
+ /* get tcm address without translation */
+ ret = of_property_read_reg(np, j, &abs_addr, &size);
+ if (ret) {
+ dev_err(dev, "failed to get reg property\n");
+ return ret;
+ }
+
+ /*
+ * remote processor can address only 32 bits
+ * so convert 64-bits into 32-bits. This will discard
+ * any unwanted upper 32-bits.
+ */
+ tcm->da = (u32)abs_addr;
+ tcm->size = (u32)size;
+
+ cpdev = to_platform_device(dev);
+ res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
+ if (!res) {
+ dev_err(dev, "failed to get tcm resource\n");
+ return -EINVAL;
+ }
+
+ tcm->addr = (u32)res->start;
+ tcm->bank_name = (char *)res->name;
+ res = devm_request_mem_region(dev, tcm->addr, tcm->size,
+ tcm->bank_name);
+ if (!res) {
+ dev_err(dev, "failed to request tcm resource\n");
+ return -EINVAL;
+ }
+ }
+ }
+
+ return 0;
+}
+
/**
* zynqmp_r5_get_tcm_node()
* Ideally this function should parse tcm node and store information
@@ -1180,10 +1261,19 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
struct zynqmp_r5_core *r5_core;
int ret, i;

- ret = zynqmp_r5_get_tcm_node(cluster);
- if (ret < 0) {
- dev_err(dev, "can't get tcm node, err %d\n", ret);
- return ret;
+ r5_core = cluster->r5_cores[0];
+ if (of_find_property(r5_core->np, "reg", NULL)) {
+ ret = zynqmp_r5_get_tcm_node_from_dt(cluster);
+ if (ret) {
+ dev_err(dev, "can't get tcm node from dt, err %d\n", ret);
+ return ret;
+ }
+ } else {
+ ret = zynqmp_r5_get_tcm_node(cluster);
+ if (ret < 0) {
+ dev_err(dev, "can't get tcm node, err %d\n", ret);
+ return ret;
+ }
}

for (i = 0; i < cluster->core_count; i++) {
--
2.25.1

2023-10-13 04:24:28

by Tanmay Shah

Subject: [PATCH v6 3/4] remoteproc: zynqmp: add pm domains support

Use the TCM pm domains extracted from the device-tree to power the TCM
on/off via the generic pm domain framework.

Signed-off-by: Tanmay Shah <[email protected]>
---

Changes in v6:
- Remove spurious change
- Handle errors in add_pm_domains function
- Remove redundant code to handle errors from remove_pm_domains

drivers/remoteproc/xlnx_r5_remoteproc.c | 262 ++++++++++++++++++++++--
1 file changed, 243 insertions(+), 19 deletions(-)

diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index 4395edea9a64..04e95d880184 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -16,6 +16,7 @@
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/remoteproc.h>
+#include <linux/pm_domain.h>

#include "remoteproc_internal.h"

@@ -102,6 +103,12 @@ static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
* @rproc: rproc handle
* @pm_domain_id: RPU CPU power domain id
* @ipi: pointer to mailbox information
+ * @num_pm_dev: number of tcm pm domain devices for this core
+ * @pm_dev1: pm domain virtual devices for power domain framework
+ * @pm_dev_link1: pm domain device links after registration
+ * @pm_dev2: used only in lockstep mode. second core's pm domain virtual devices
+ * @pm_dev_link2: used only in lockstep mode. second core's pm device links after
+ * registration
*/
struct zynqmp_r5_core {
struct device *dev;
@@ -111,6 +118,11 @@ struct zynqmp_r5_core {
struct rproc *rproc;
u32 pm_domain_id;
struct mbox_info *ipi;
+ int num_pm_dev;
+ struct device **pm_dev1;
+ struct device_link **pm_dev_link1;
+ struct device **pm_dev2;
+ struct device_link **pm_dev_link2;
};

/**
@@ -575,12 +587,21 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
bank_size = r5_core->tcm_banks[i]->size;
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;

- ret = zynqmp_pm_request_node(pm_domain_id,
- ZYNQMP_PM_CAPABILITY_ACCESS, 0,
- ZYNQMP_PM_REQUEST_ACK_BLOCKING);
- if (ret < 0) {
- dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
- goto release_tcm_split;
+ /*
+ * If TCM information is available in device-tree then
+ * in that case, pm domain framework will power on/off TCM.
+ * In that case pm_domain_id is set to 0. If hardcode
+ * bindings from driver is used, then only this driver will
+ * use pm_domain_id.
+ */
+ if (pm_domain_id) {
+ ret = zynqmp_pm_request_node(pm_domain_id,
+ ZYNQMP_PM_CAPABILITY_ACCESS, 0,
+ ZYNQMP_PM_REQUEST_ACK_BLOCKING);
+ if (ret < 0) {
+ dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
+ goto release_tcm_split;
+ }
}

dev_dbg(dev, "TCM carveout split mode %s addr=%llx, da=0x%x, size=0x%lx",
@@ -646,13 +667,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
for (i = 0; i < num_banks; i++) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;

- /* Turn on each TCM bank individually */
- ret = zynqmp_pm_request_node(pm_domain_id,
- ZYNQMP_PM_CAPABILITY_ACCESS, 0,
- ZYNQMP_PM_REQUEST_ACK_BLOCKING);
- if (ret < 0) {
- dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
- goto release_tcm_lockstep;
+ if (pm_domain_id) {
+ /* Turn on each TCM bank individually */
+ ret = zynqmp_pm_request_node(pm_domain_id,
+ ZYNQMP_PM_CAPABILITY_ACCESS, 0,
+ ZYNQMP_PM_REQUEST_ACK_BLOCKING);
+ if (ret < 0) {
+ dev_err(dev, "failed to turn on TCM 0x%x",
+ pm_domain_id);
+ goto release_tcm_lockstep;
+ }
}

bank_size = r5_core->tcm_banks[i]->size;
@@ -687,7 +711,8 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
/* If failed, Turn off all TCM banks turned on before */
for (i--; i >= 0; i--) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
- zynqmp_pm_release_node(pm_domain_id);
+ if (pm_domain_id)
+ zynqmp_pm_release_node(pm_domain_id);
}
return ret;
}
@@ -758,6 +783,192 @@ static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
return ret;
}

+static void zynqmp_r5_remove_pm_domains(struct rproc *rproc)
+{
+ struct zynqmp_r5_core *r5_core = rproc->priv;
+ struct device *dev = r5_core->dev;
+ struct zynqmp_r5_cluster *cluster;
+ int i;
+
+ cluster = platform_get_drvdata(to_platform_device(dev->parent));
+
+ for (i = 1; i < r5_core->num_pm_dev; i++) {
+ device_link_del(r5_core->pm_dev_link1[i]);
+ dev_pm_domain_detach(r5_core->pm_dev1[i], false);
+ }
+
+ kfree(r5_core->pm_dev1);
+ r5_core->pm_dev1 = NULL;
+ kfree(r5_core->pm_dev_link1);
+ r5_core->pm_dev_link1 = NULL;
+
+ if (cluster->mode == SPLIT_MODE) {
+ r5_core->num_pm_dev = 0;
+ return;
+ }
+
+ for (i = 1; i < r5_core->num_pm_dev; i++) {
+ device_link_del(r5_core->pm_dev_link2[i]);
+ dev_pm_domain_detach(r5_core->pm_dev2[i], false);
+ }
+
+ kfree(r5_core->pm_dev2);
+ r5_core->pm_dev2 = NULL;
+ kfree(r5_core->pm_dev_link2);
+ r5_core->pm_dev_link2 = NULL;
+ r5_core->num_pm_dev = 0;
+}
+
+static int zynqmp_r5_add_pm_domains(struct rproc *rproc)
+{
+ struct zynqmp_r5_core *r5_core = rproc->priv;
+ struct device *dev = r5_core->dev, *dev2;
+ struct zynqmp_r5_cluster *cluster;
+ struct platform_device *pdev;
+ struct device_node *np;
+ int i, j, num_pm_dev, ret;
+
+ cluster = dev_get_drvdata(dev->parent);
+
+ /* get number of power-domains */
+ num_pm_dev = of_count_phandle_with_args(r5_core->np, "power-domains",
+ "#power-domain-cells");
+
+ if (num_pm_dev <= 0)
+ return -EINVAL;
+
+ r5_core->pm_dev1 = kcalloc(num_pm_dev,
+ sizeof(struct device *),
+ GFP_KERNEL);
+ if (!r5_core->pm_dev1)
+ ret = -ENOMEM;
+
+ r5_core->pm_dev_link1 = kcalloc(num_pm_dev,
+ sizeof(struct device_link *),
+ GFP_KERNEL);
+ if (!r5_core->pm_dev_link1) {
+ kfree(r5_core->pm_dev1);
+ r5_core->pm_dev1 = NULL;
+ return -ENOMEM;
+ }
+
+ r5_core->num_pm_dev = num_pm_dev;
+
+ /*
+ * start from 2nd entry in power-domains property list as
+ * for zynqmp we only add TCM power domains and not core's power domain.
+ */
+ for (i = 1; i < r5_core->num_pm_dev; i++) {
+ r5_core->pm_dev1[i] = dev_pm_domain_attach_by_id(dev, i);
+ if (IS_ERR_OR_NULL(r5_core->pm_dev1[i])) {
+ dev_dbg(dev, "failed to attach pm domain %d, ret=%ld\n", i,
+ PTR_ERR(r5_core->pm_dev1[i]));
+ ret = -EINVAL;
+ goto fail_add_pm_domains;
+ }
+ r5_core->pm_dev_link1[i] = device_link_add(dev, r5_core->pm_dev1[i],
+ DL_FLAG_STATELESS |
+ DL_FLAG_RPM_ACTIVE |
+ DL_FLAG_PM_RUNTIME);
+ if (!r5_core->pm_dev_link1[i]) {
+ dev_pm_domain_detach(r5_core->pm_dev1[i], true);
+ r5_core->pm_dev1[i] = NULL;
+ ret = -EINVAL;
+ goto fail_add_pm_domains;
+ }
+ }
+
+ if (cluster->mode == SPLIT_MODE)
+ return 0;
+
+ r5_core->pm_dev2 = kcalloc(num_pm_dev,
+ sizeof(struct device *),
+ GFP_KERNEL);
+ if (!r5_core->pm_dev2) {
+ ret = -ENOMEM;
+ goto fail_add_pm_domains;
+ }
+
+ r5_core->pm_dev_link2 = kcalloc(num_pm_dev,
+ sizeof(struct device_link *),
+ GFP_KERNEL);
+ if (!r5_core->pm_dev_link2) {
+ kfree(r5_core->pm_dev2);
+ r5_core->pm_dev2 = NULL;
+ ret = -ENOMEM;
+ goto fail_add_pm_domains;
+ }
+
+ /* get second core's device to detach its power-domains */
+ np = of_get_next_child(cluster->dev->of_node, of_node_get(dev->of_node));
+
+ pdev = of_find_device_by_node(np);
+ if (!pdev) {
+ dev_err(cluster->dev, "core1 platform device not available\n");
+ kfree(r5_core->pm_dev2);
+ kfree(r5_core->pm_dev_link2);
+ r5_core->pm_dev2 = NULL;
+ r5_core->pm_dev_link2 = NULL;
+ ret = -EINVAL;
+ goto fail_add_pm_domains;
+ }
+
+ dev2 = &pdev->dev;
+
+ /* for zynqmp we only add TCM power domains and not core's power domain */
+ for (j = 1; j < r5_core->num_pm_dev; j++) {
+ r5_core->pm_dev2[j] = dev_pm_domain_attach_by_id(dev2, j);
+ if (!r5_core->pm_dev2[j]) {
+ dev_dbg(dev, "can't attach to pm domain %d\n", j);
+ ret = -EINVAL;
+ goto fail_add_pm_domains_lockstep;
+ } else if (IS_ERR(r5_core->pm_dev2[j])) {
+ dev_dbg(dev, "can't attach to pm domain %d\n", j);
+ ret = PTR_ERR(r5_core->pm_dev2[j]);
+ goto fail_add_pm_domains_lockstep;
+ }
+
+ r5_core->pm_dev_link2[j] = device_link_add(dev, r5_core->pm_dev2[j],
+ DL_FLAG_STATELESS |
+ DL_FLAG_RPM_ACTIVE |
+ DL_FLAG_PM_RUNTIME);
+ if (!r5_core->pm_dev_link2[j]) {
+ dev_pm_domain_detach(r5_core->pm_dev2[j], true);
+ r5_core->pm_dev2[j] = NULL;
+ ret = -ENODEV;
+ goto fail_add_pm_domains_lockstep;
+ }
+ }
+
+fail_add_pm_domains_lockstep:
+ while (j >= 1) {
+ if (r5_core->pm_dev_link2 && !IS_ERR_OR_NULL(r5_core->pm_dev_link2[j]))
+ device_link_del(r5_core->pm_dev_link2[j]);
+ if (r5_core->pm_dev2 && !IS_ERR_OR_NULL(r5_core->pm_dev2[j]))
+ dev_pm_domain_detach(r5_core->pm_dev2[j], true);
+ j--;
+ }
+ kfree(r5_core->pm_dev2);
+ r5_core->pm_dev2 = NULL;
+ kfree(r5_core->pm_dev_link2);
+ r5_core->pm_dev_link2 = NULL;
+
+fail_add_pm_domains:
+ while (i >= 1) {
+ if (r5_core->pm_dev_link1 && !IS_ERR_OR_NULL(r5_core->pm_dev_link1[i]))
+ device_link_del(r5_core->pm_dev_link1[i]);
+ if (r5_core->pm_dev1 && !IS_ERR_OR_NULL(r5_core->pm_dev1[i]))
+ dev_pm_domain_detach(r5_core->pm_dev1[i], true);
+ i--;
+ }
+ kfree(r5_core->pm_dev1);
+ r5_core->pm_dev1 = NULL;
+ kfree(r5_core->pm_dev_link1);
+ r5_core->pm_dev_link1 = NULL;
+
+ return ret;
+}
+
/**
* zynqmp_r5_rproc_prepare()
* adds carveouts for TCM bank and reserved memory regions
@@ -770,19 +981,30 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
{
int ret;

+ ret = zynqmp_r5_add_pm_domains(rproc);
+ if (ret) {
+ dev_err(&rproc->dev, "failed to add pm domains\n");
+ return ret;
+ }
+
ret = add_tcm_banks(rproc);
if (ret) {
dev_err(&rproc->dev, "failed to get TCM banks, err %d\n", ret);
- return ret;
+ goto fail_prepare;
}

ret = add_mem_regions_carveout(rproc);
if (ret) {
dev_err(&rproc->dev, "failed to get reserve mem regions %d\n", ret);
- return ret;
+ goto fail_prepare;
}

return 0;
+
+fail_prepare:
+ zynqmp_r5_remove_pm_domains(rproc);
+
+ return ret;
}

/**
@@ -801,11 +1023,13 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)

r5_core = rproc->priv;

+ zynqmp_r5_remove_pm_domains(rproc);
+
for (i = 0; i < r5_core->tcm_bank_count; i++) {
pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
- if (zynqmp_pm_release_node(pm_domain_id))
- dev_warn(r5_core->dev,
- "can't turn off TCM bank 0x%x", pm_domain_id);
+ if (pm_domain_id && zynqmp_pm_release_node(pm_domain_id))
+ dev_dbg(r5_core->dev,
+ "can't turn off TCM bank 0x%x", pm_domain_id);
}

return 0;
--
2.25.1

2023-10-13 04:24:53

by Tanmay Shah

Subject: [PATCH v6 1/4] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings

From: Radhey Shyam Pandey <[email protected]>

Introduce bindings for the TCM memory address space on the AMD-Xilinx
Zynq UltraScale+ platform. This helps define TCM in the device-tree
and makes its access platform-agnostic and data-driven.

Tightly-Coupled Memories (TCMs) are low-latency memories that provide
predictable instruction execution and predictable data load/store
timing. Each Cortex-R5F processor contains two 64-bit wide 64 KB memory
banks on the ATCM and BTCM ports, for a total of 128 KB of memory.

The TCM resources (reg, reg-names and power-domains) are documented for
each TCM in the R5 node. reg and reg-names are made required properties
because we don't want to hardcode TCM addresses for future platforms;
for the Zynq UltraScale+ legacy implementation, the driver will ensure
that an old dts without reg/reg-names still works and a stable ABI is
maintained.

It also extends the examples for TCM split and lockstep modes.

Signed-off-by: Radhey Shyam Pandey <[email protected]>
Signed-off-by: Tanmay Shah <[email protected]>
Acked-by: Rob Herring <[email protected]>
---
.../remoteproc/xlnx,zynqmp-r5fss.yaml | 131 +++++++++++++++---
1 file changed, 113 insertions(+), 18 deletions(-)

diff --git a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
index 78aac69f1060..9ecd63ea1b38 100644
--- a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
+++ b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
@@ -20,6 +20,17 @@ properties:
compatible:
const: xlnx,zynqmp-r5fss

+ "#address-cells":
+ const: 2
+
+ "#size-cells":
+ const: 2
+
+ ranges:
+ description: |
+ Standard ranges definition providing address translations for
+ local R5F TCM address spaces to bus addresses.
+
xlnx,cluster-mode:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [0, 1, 2]
@@ -37,7 +48,7 @@ properties:
2: single cpu mode

patternProperties:
- "^r5f-[a-f0-9]+$":
+ "^r5f@[0-9a-f]+$":
type: object
description: |
The RPU is located in the Low Power Domain of the Processor Subsystem.
@@ -54,8 +65,19 @@ patternProperties:
compatible:
const: xlnx,zynqmp-r5f

+ reg:
+ items:
+ - description: ATCM internal memory region
+ - description: BTCM internal memory region
+
+ reg-names:
+ items:
+ - const: atcm
+ - const: btcm
+
power-domains:
- maxItems: 1
+ minItems: 1
+ maxItems: 3

mboxes:
minItems: 1
@@ -102,34 +124,107 @@ patternProperties:
required:
- compatible
- power-domains
+ - reg
+ - reg-names

unevaluatedProperties: false

required:
- compatible
+ - "#address-cells"
+ - "#size-cells"
+ - ranges

additionalProperties: false

examples:
- |
- remoteproc {
- compatible = "xlnx,zynqmp-r5fss";
- xlnx,cluster-mode = <1>;
-
- r5f-0 {
- compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware 0x7>;
- memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>, <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
- mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
- mbox-names = "tx", "rx";
+ #include <dt-bindings/power/xlnx-zynqmp-power.h>
+
+ //Split mode configuration
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ remoteproc@ffe00000 {
+ compatible = "xlnx,zynqmp-r5fss";
+ xlnx,cluster-mode = <0>;
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>,
+ <0x1 0x0 0x0 0xffe90000 0x0 0x10000>,
+ <0x1 0x20000 0x0 0xffeb0000 0x0 0x10000>;
+
+ r5f@0 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x0 0x0 0x0 0x10000>, <0x0 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
+ memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+ <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+ mbox-names = "tx", "rx";
+ };
+
+ r5f@1 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
+ memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+ <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+ mbox-names = "tx", "rx";
+ };
};
+ };

- r5f-1 {
- compatible = "xlnx,zynqmp-r5f";
- power-domains = <&zynqmp_firmware 0x8>;
- memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>, <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
- mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
- mbox-names = "tx", "rx";
+ - |
+ //Lockstep configuration
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ remoteproc@ffe00000 {
+ compatible = "xlnx,zynqmp-r5fss";
+ xlnx,cluster-mode = <1>;
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x20000>,
+ <0x0 0x20000 0x0 0xffe20000 0x0 0x20000>;
+
+ r5f@0 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x0 0x0 0x0 0x20000>, <0x0 0x20000 0x0 0x20000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_0>,
+ <&zynqmp_firmware PD_R5_0_ATCM>,
+ <&zynqmp_firmware PD_R5_0_BTCM>;
+ memory-region = <&rproc_0_fw_image>, <&rpu0vdev0buffer>,
+ <&rpu0vdev0vring0>, <&rpu0vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
+ mbox-names = "tx", "rx";
+ };
+
+ r5f@1 {
+ compatible = "xlnx,zynqmp-r5f";
+ reg = <0x1 0x0 0x0 0x10000>, <0x1 0x20000 0x0 0x10000>;
+ reg-names = "atcm", "btcm";
+ power-domains = <&zynqmp_firmware PD_RPU_1>,
+ <&zynqmp_firmware PD_R5_1_ATCM>,
+ <&zynqmp_firmware PD_R5_1_BTCM>;
+ memory-region = <&rproc_1_fw_image>, <&rpu1vdev0buffer>,
+ <&rpu1vdev0vring0>, <&rpu1vdev0vring1>;
+ mboxes = <&ipi_mailbox_rpu1 0>, <&ipi_mailbox_rpu1 1>;
+ mbox-names = "tx", "rx";
+ };
};
};
...
--
2.25.1

2023-10-18 17:38:39

by Mathieu Poirier

Subject: Re: [PATCH v6 3/4] remoteproc: zynqmp: add pm domains support

Good morning,

On Thu, Oct 12, 2023 at 09:22:28PM -0700, Tanmay Shah wrote:
> Use the TCM pm domains extracted from the device-tree to power the TCM
> on/off via the generic pm domain framework.
>
> Signed-off-by: Tanmay Shah <[email protected]>
> ---
>
> Changes in v6:
> - Remove spurious change
> - Handle errors in add_pm_domains function
> - Remove redundant code to handle errors from remove_pm_domains
>
> drivers/remoteproc/xlnx_r5_remoteproc.c | 262 ++++++++++++++++++++++--
> 1 file changed, 243 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index 4395edea9a64..04e95d880184 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -16,6 +16,7 @@
> #include <linux/of_reserved_mem.h>
> #include <linux/platform_device.h>
> #include <linux/remoteproc.h>
> +#include <linux/pm_domain.h>
>
> #include "remoteproc_internal.h"
>
> @@ -102,6 +103,12 @@ static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> * @rproc: rproc handle
> * @pm_domain_id: RPU CPU power domain id
> * @ipi: pointer to mailbox information
> + * @num_pm_dev: number of tcm pm domain devices for this core
> + * @pm_dev1: pm domain virtual devices for power domain framework
> + * @pm_dev_link1: pm domain device links after registration
> + * @pm_dev2: used only in lockstep mode. second core's pm domain virtual devices
> + * @pm_dev_link2: used only in lockstep mode. second core's pm device links after
> + * registration
> */
> struct zynqmp_r5_core {
> struct device *dev;
> @@ -111,6 +118,11 @@ struct zynqmp_r5_core {
> struct rproc *rproc;
> u32 pm_domain_id;
> struct mbox_info *ipi;
> + int num_pm_dev;
> + struct device **pm_dev1;

s/pm_dev1/pm_dev_core0

> + struct device_link **pm_dev_link1;

s/pm_dev_link1/pm_dev_core0_link;

> + struct device **pm_dev2;

s/pm_dev2/pm_dev_core1

> + struct device_link **pm_dev_link2;

s/pm_dev_link2/pm_dev_core1_link;

> };
>
> /**
> @@ -575,12 +587,21 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> bank_size = r5_core->tcm_banks[i]->size;
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
>
> - ret = zynqmp_pm_request_node(pm_domain_id,
> - ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> - ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> - if (ret < 0) {
> - dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> - goto release_tcm_split;
> + /*
> + * If TCM information is available in device-tree then
> + * in that case, pm domain framework will power on/off TCM.
> + * In that case pm_domain_id is set to 0. If hardcode
> + * bindings from driver is used, then only this driver will
> + * use pm_domain_id.
> + */
> + if (pm_domain_id) {
> + ret = zynqmp_pm_request_node(pm_domain_id,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> + goto release_tcm_split;
> + }

This should go in the next patch.

> }
>
> dev_dbg(dev, "TCM carveout split mode %s addr=%llx, da=0x%x, size=0x%lx",
> @@ -646,13 +667,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> for (i = 0; i < num_banks; i++) {
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
>
> - /* Turn on each TCM bank individually */
> - ret = zynqmp_pm_request_node(pm_domain_id,
> - ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> - ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> - if (ret < 0) {
> - dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> - goto release_tcm_lockstep;
> + if (pm_domain_id) {
> + /* Turn on each TCM bank individually */
> + ret = zynqmp_pm_request_node(pm_domain_id,
> + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> + if (ret < 0) {
> + dev_err(dev, "failed to turn on TCM 0x%x",
> + pm_domain_id);
> + goto release_tcm_lockstep;
> + }

Same

> }
>
> bank_size = r5_core->tcm_banks[i]->size;
> @@ -687,7 +711,8 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> /* If failed, Turn off all TCM banks turned on before */
> for (i--; i >= 0; i--) {
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> - zynqmp_pm_release_node(pm_domain_id);
> + if (pm_domain_id)
> + zynqmp_pm_release_node(pm_domain_id);
> }
> return ret;
> }
> @@ -758,6 +783,192 @@ static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> return ret;
> }
>
> +static void zynqmp_r5_remove_pm_domains(struct rproc *rproc)
> +{
> + struct zynqmp_r5_core *r5_core = rproc->priv;
> + struct device *dev = r5_core->dev;
> + struct zynqmp_r5_cluster *cluster;
> + int i;
> +
> + cluster = platform_get_drvdata(to_platform_device(dev->parent));
> +
> + for (i = 1; i < r5_core->num_pm_dev; i++) {
> + device_link_del(r5_core->pm_dev_link1[i]);
> + dev_pm_domain_detach(r5_core->pm_dev1[i], false);
> + }
> +
> + kfree(r5_core->pm_dev1);
> + r5_core->pm_dev1 = NULL;
> + kfree(r5_core->pm_dev_link1);
> + r5_core->pm_dev_link1 = NULL;
> +
> + if (cluster->mode == SPLIT_MODE) {
> + r5_core->num_pm_dev = 0;
> + return;
> + }
> +
> + for (i = 1; i < r5_core->num_pm_dev; i++) {
> + device_link_del(r5_core->pm_dev_link2[i]);
> + dev_pm_domain_detach(r5_core->pm_dev2[i], false);
> + }
> +
> + kfree(r5_core->pm_dev2);
> + r5_core->pm_dev2 = NULL;
> + kfree(r5_core->pm_dev_link2);
> + r5_core->pm_dev_link2 = NULL;
> + r5_core->num_pm_dev = 0;
> +}
> +
> +static int zynqmp_r5_add_pm_domains(struct rproc *rproc)
> +{
> + struct zynqmp_r5_core *r5_core = rproc->priv;
> + struct device *dev = r5_core->dev, *dev2;
> + struct zynqmp_r5_cluster *cluster;
> + struct platform_device *pdev;
> + struct device_node *np;
> + int i, j, num_pm_dev, ret;
> +
> + cluster = dev_get_drvdata(dev->parent);
> +
> + /* get number of power-domains */
> + num_pm_dev = of_count_phandle_with_args(r5_core->np, "power-domains",
> + "#power-domain-cells");
> +
> + if (num_pm_dev <= 0)
> + return -EINVAL;
> +
> + r5_core->pm_dev1 = kcalloc(num_pm_dev,
> + sizeof(struct device *),
> + GFP_KERNEL);
> + if (!r5_core->pm_dev1)
> + ret = -ENOMEM;
> +
> + r5_core->pm_dev_link1 = kcalloc(num_pm_dev,
> + sizeof(struct device_link *),
> + GFP_KERNEL);
> + if (!r5_core->pm_dev_link1) {
> + kfree(r5_core->pm_dev1);
> + r5_core->pm_dev1 = NULL;
> + return -ENOMEM;
> + }
> +
> + r5_core->num_pm_dev = num_pm_dev;
> +
> + /*
> + * start from 2nd entry in power-domains property list as
> + * for zynqmp we only add TCM power domains and not core's power domain.
> + */

It would be worth mentioning where the 1st entry gets added.

> + for (i = 1; i < r5_core->num_pm_dev; i++) {
> + r5_core->pm_dev1[i] = dev_pm_domain_attach_by_id(dev, i);
> + if (IS_ERR_OR_NULL(r5_core->pm_dev1[i])) {
> + dev_dbg(dev, "failed to attach pm domain %d, ret=%ld\n", i,
> + PTR_ERR(r5_core->pm_dev1[i]));
> + ret = -EINVAL;
> + goto fail_add_pm_domains;
> + }
> + r5_core->pm_dev_link1[i] = device_link_add(dev, r5_core->pm_dev1[i],
> + DL_FLAG_STATELESS |
> + DL_FLAG_RPM_ACTIVE |
> + DL_FLAG_PM_RUNTIME);
> + if (!r5_core->pm_dev_link1[i]) {
> + dev_pm_domain_detach(r5_core->pm_dev1[i], true);
> + r5_core->pm_dev1[i] = NULL;
> + ret = -EINVAL;

Cleanup for this iteration is properly done here. As such the while() loop in
fail_add_pm_domains needs to be while (--i >= 0). See my comment below.

> + goto fail_add_pm_domains;
> + }
> + }
> +
> + if (cluster->mode == SPLIT_MODE)
> + return 0;
> +
> + r5_core->pm_dev2 = kcalloc(num_pm_dev,
> + sizeof(struct device *),
> + GFP_KERNEL);
> + if (!r5_core->pm_dev2) {
> + ret = -ENOMEM;
> + goto fail_add_pm_domains;
> + }
> +
> + r5_core->pm_dev_link2 = kcalloc(num_pm_dev,
> + sizeof(struct device_link *),
> + GFP_KERNEL);
> + if (!r5_core->pm_dev_link2) {
> + kfree(r5_core->pm_dev2);
> + r5_core->pm_dev2 = NULL;
> + ret = -ENOMEM;
> + goto fail_add_pm_domains;
> + }
> +
> + /* get second core's device to detach its power-domains */
> + np = of_get_next_child(cluster->dev->of_node, of_node_get(dev->of_node));
> +
> + pdev = of_find_device_by_node(np);
> + if (!pdev) {
> + dev_err(cluster->dev, "core1 platform device not available\n");
> + kfree(r5_core->pm_dev2);
> + kfree(r5_core->pm_dev_link2);
> + r5_core->pm_dev2 = NULL;
> + r5_core->pm_dev_link2 = NULL;
> + ret = -EINVAL;
> + goto fail_add_pm_domains;
> + }
> +
> + dev2 = &pdev->dev;
> +
> + /* for zynqmp we only add TCM power domains and not core's power domain */
> + for (j = 1; j < r5_core->num_pm_dev; j++) {
> + r5_core->pm_dev2[j] = dev_pm_domain_attach_by_id(dev2, j);
> + if (!r5_core->pm_dev2[j]) {
> + dev_dbg(dev, "can't attach to pm domain %d\n", j);
> + ret = -EINVAL;
> + goto fail_add_pm_domains_lockstep;
> + } else if (IS_ERR(r5_core->pm_dev2[j])) {
> + dev_dbg(dev, "can't attach to pm domain %d\n", j);
> + ret = PTR_ERR(r5_core->pm_dev2[j]);
> + goto fail_add_pm_domains_lockstep;
> + }
> +
> + r5_core->pm_dev_link2[j] = device_link_add(dev, r5_core->pm_dev2[j],
> + DL_FLAG_STATELESS |
> + DL_FLAG_RPM_ACTIVE |
> + DL_FLAG_PM_RUNTIME);
> + if (!r5_core->pm_dev_link2[j]) {
> + dev_pm_domain_detach(r5_core->pm_dev2[j], true);
> + r5_core->pm_dev2[j] = NULL;
> + ret = -ENODEV;
> + goto fail_add_pm_domains_lockstep;
> + }
> + }
> +
> +fail_add_pm_domains_lockstep:
> + while (j >= 1) {
> + if (r5_core->pm_dev_link2 && !IS_ERR_OR_NULL(r5_core->pm_dev_link2[j]))
> + device_link_del(r5_core->pm_dev_link2[j]);
> + if (r5_core->pm_dev2 && !IS_ERR_OR_NULL(r5_core->pm_dev2[j]))
> + dev_pm_domain_detach(r5_core->pm_dev2[j], true);
> + j--;
> + }
> + kfree(r5_core->pm_dev2);
> + r5_core->pm_dev2 = NULL;
> + kfree(r5_core->pm_dev_link2);
> + r5_core->pm_dev_link2 = NULL;
> +
> +fail_add_pm_domains:
> + while (i >= 1) {
> + if (r5_core->pm_dev_link1 && !IS_ERR_OR_NULL(r5_core->pm_dev_link1[i]))
> + device_link_del(r5_core->pm_dev_link1[i]);

Because the cleanup is properly done above, we can start the loop at the
previous value of 'i', i.e. --i. The added bonus is that you don't need
the if() statement.

Another problem with starting at 'i' is that you get an out-of-bounds
access when all PM domains have been properly added for core 0 but fail
for core 1.

> + if (r5_core->pm_dev1 && !IS_ERR_OR_NULL(r5_core->pm_dev1[i]))
> + dev_pm_domain_detach(r5_core->pm_dev1[i], true);

Same as above.

I will stop here for this revision.

Mathieu


> + i--;
> + }
> + kfree(r5_core->pm_dev1);
> + r5_core->pm_dev1 = NULL;
> + kfree(r5_core->pm_dev_link1);
> + r5_core->pm_dev_link1 = NULL;
> +
> + return ret;
> +}
> +
> /**
> * zynqmp_r5_rproc_prepare()
> * adds carveouts for TCM bank and reserved memory regions
> @@ -770,19 +981,30 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> {
> int ret;
>
> + ret = zynqmp_r5_add_pm_domains(rproc);
> + if (ret) {
> + dev_err(&rproc->dev, "failed to add pm domains\n");
> + return ret;
> + }
> +
> ret = add_tcm_banks(rproc);
> if (ret) {
> dev_err(&rproc->dev, "failed to get TCM banks, err %d\n", ret);
> - return ret;
> + goto fail_prepare;
> }
>
> ret = add_mem_regions_carveout(rproc);
> if (ret) {
> dev_err(&rproc->dev, "failed to get reserve mem regions %d\n", ret);
> - return ret;
> + goto fail_prepare;
> }
>
> return 0;
> +
> +fail_prepare:
> + zynqmp_r5_remove_pm_domains(rproc);
> +
> + return ret;
> }
>
> /**
> @@ -801,11 +1023,13 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
>
> r5_core = rproc->priv;
>
> + zynqmp_r5_remove_pm_domains(rproc);
> +
> for (i = 0; i < r5_core->tcm_bank_count; i++) {
> pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> - if (zynqmp_pm_release_node(pm_domain_id))
> - dev_warn(r5_core->dev,
> - "can't turn off TCM bank 0x%x", pm_domain_id);
> + if (pm_domain_id && zynqmp_pm_release_node(pm_domain_id))
> + dev_dbg(r5_core->dev,
> + "can't turn off TCM bank 0x%x", pm_domain_id);
> }
>
> return 0;
> --
> 2.25.1
>

2023-10-19 11:04:36

by kernel test robot

Subject: Re: [PATCH v6 4/4] remoteproc: zynqmp: parse TCM from device tree

Hi Tanmay,

kernel test robot noticed the following build warnings:

[auto build test WARNING on a7d272979d3a89b117ca2c547dc8a465c4f28635]

url: https://github.com/intel-lab-lkp/linux/commits/Tanmay-Shah/dt-bindings-remoteproc-add-Tightly-Coupled-Memory-TCM-bindings/20231017-120805
base: a7d272979d3a89b117ca2c547dc8a465c4f28635
patch link: https://lore.kernel.org/r/20231013042229.3954527-5-tanmay.shah%40amd.com
patch subject: [PATCH v6 4/4] remoteproc: zynqmp: parse TCM from device tree
config: arm64-randconfig-002-20231019 (https://download.01.org/0day-ci/archive/20231019/[email protected]/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231019/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

drivers/remoteproc/xlnx_r5_remoteproc.c: In function 'zynqmp_r5_get_tcm_node_from_dt':
>> drivers/remoteproc/xlnx_r5_remoteproc.c:1162:28: warning: array subscript 'struct mem_bank_data[0]' is partly outside array bounds of 'unsigned char[8]' [-Warray-bounds=]
1162 | tcm->da = (u32)abs_addr;
| ^~
In file included from include/linux/dma-mapping.h:8,
from drivers/remoteproc/xlnx_r5_remoteproc.c:8:
In function 'devm_kzalloc',
inlined from 'zynqmp_r5_get_tcm_node_from_dt' at drivers/remoteproc/xlnx_r5_remoteproc.c:1143:10:
include/linux/device.h:314:16: note: object of size 8 allocated by 'devm_kmalloc'
314 | return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/remoteproc/xlnx_r5_remoteproc.c: In function 'zynqmp_r5_get_tcm_node_from_dt':
drivers/remoteproc/xlnx_r5_remoteproc.c:1163:28: warning: array subscript 'struct mem_bank_data[0]' is partly outside array bounds of 'unsigned char[8]' [-Warray-bounds=]
1163 | tcm->size = (u32)size;
| ^~
In function 'devm_kzalloc',
inlined from 'zynqmp_r5_get_tcm_node_from_dt' at drivers/remoteproc/xlnx_r5_remoteproc.c:1143:10:
include/linux/device.h:314:16: note: object of size 8 allocated by 'devm_kmalloc'
314 | return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/remoteproc/xlnx_r5_remoteproc.c: In function 'zynqmp_r5_get_tcm_node_from_dt':
drivers/remoteproc/xlnx_r5_remoteproc.c:1172:28: warning: array subscript 'struct mem_bank_data[0]' is partly outside array bounds of 'unsigned char[8]' [-Warray-bounds=]
1172 | tcm->addr = (u32)res->start;
| ^~
In function 'devm_kzalloc',
inlined from 'zynqmp_r5_get_tcm_node_from_dt' at drivers/remoteproc/xlnx_r5_remoteproc.c:1143:10:
include/linux/device.h:314:16: note: object of size 8 allocated by 'devm_kmalloc'
314 | return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/remoteproc/xlnx_r5_remoteproc.c: In function 'zynqmp_r5_get_tcm_node_from_dt':
drivers/remoteproc/xlnx_r5_remoteproc.c:1173:28: warning: array subscript 'struct mem_bank_data[0]' is partly outside array bounds of 'unsigned char[8]' [-Warray-bounds=]
1173 | tcm->bank_name = (char *)res->name;
| ^~
In function 'devm_kzalloc',
inlined from 'zynqmp_r5_get_tcm_node_from_dt' at drivers/remoteproc/xlnx_r5_remoteproc.c:1143:10:
include/linux/device.h:314:16: note: object of size 8 allocated by 'devm_kmalloc'
314 | return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from include/linux/device.h:17:
drivers/remoteproc/xlnx_r5_remoteproc.c: In function 'zynqmp_r5_get_tcm_node_from_dt':
drivers/remoteproc/xlnx_r5_remoteproc.c:1174:74: warning: array subscript 'struct mem_bank_data[0]' is partly outside array bounds of 'unsigned char[8]' [-Warray-bounds=]
1174 | res = devm_request_mem_region(dev, tcm->addr, tcm->size,
| ^~
include/linux/ioport.h:306:63: note: in definition of macro 'devm_request_mem_region'
306 | __devm_request_region(dev, &iomem_resource, (start), (n), (name))
| ^
In function 'devm_kzalloc',
inlined from 'zynqmp_r5_get_tcm_node_from_dt' at drivers/remoteproc/xlnx_r5_remoteproc.c:1143:10:
include/linux/device.h:314:16: note: object of size 8 allocated by 'devm_kmalloc'
314 | return devm_kmalloc(dev, size, gfp | __GFP_ZERO);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +1162 drivers/remoteproc/xlnx_r5_remoteproc.c

1108
1109 static int zynqmp_r5_get_tcm_node_from_dt(struct zynqmp_r5_cluster *cluster)
1110 {
1111 struct zynqmp_r5_core *r5_core;
1112 int i, j, tcm_bank_count, ret;
1113 struct platform_device *cpdev;
1114 struct mem_bank_data *tcm;
1115 struct device_node *np;
1116 struct resource *res;
1117 u64 abs_addr, size;
1118 struct device *dev;
1119
1120 for (i = 0; i < cluster->core_count; i++) {
1121 r5_core = cluster->r5_cores[i];
1122 dev = r5_core->dev;
1123 np = dev_of_node(dev);
1124
1125 /* we have address cell 2 and size cell as 2 */
1126 ret = of_property_count_elems_of_size(np, "reg",
1127 4 * sizeof(u32));
1128 if (ret <= 0) {
1129 dev_err(dev, "can't get reg property err %d\n", ret);
1130 return -EINVAL;
1131 }
1132
1133 tcm_bank_count = ret;
1134
1135 r5_core->tcm_banks = devm_kcalloc(dev, tcm_bank_count,
1136 sizeof(struct mem_bank_data *),
1137 GFP_KERNEL);
1138 if (!r5_core->tcm_banks)
1139 ret = -ENOMEM;
1140
1141 r5_core->tcm_bank_count = tcm_bank_count;
1142 for (j = 0; j < tcm_bank_count; j++) {
1143 tcm = devm_kzalloc(dev, sizeof(struct mem_bank_data *),
1144 GFP_KERNEL);
1145 if (!tcm)
1146 return -ENOMEM;
1147
1148 r5_core->tcm_banks[j] = tcm;
1149
1150 /* get tcm address without translation */
1151 ret = of_property_read_reg(np, j, &abs_addr, &size);
1152 if (ret) {
1153 dev_err(dev, "failed to get reg property\n");
1154 return ret;
1155 }
1156
1157 /*
1158 * remote processor can address only 32 bits
1159 * so convert 64-bits into 32-bits. This will discard
1160 * any unwanted upper 32-bits.
1161 */
> 1162 tcm->da = (u32)abs_addr;
1163 tcm->size = (u32)size;
1164
1165 cpdev = to_platform_device(dev);
1166 res = platform_get_resource(cpdev, IORESOURCE_MEM, j);
1167 if (!res) {
1168 dev_err(dev, "failed to get tcm resource\n");
1169 return -EINVAL;
1170 }
1171
1172 tcm->addr = (u32)res->start;
1173 tcm->bank_name = (char *)res->name;
1174 res = devm_request_mem_region(dev, tcm->addr, tcm->size,
1175 tcm->bank_name);
1176 if (!res) {
1177 dev_err(dev, "failed to request tcm resource\n");
1178 return -EINVAL;
1179 }
1180 }
1181 }
1182
1183 return 0;
1184 }
1185
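
All of these warnings trace back to the devm_kzalloc() call at line 1143
above: it sizes the allocation with sizeof(struct mem_bank_data *), i.e.
the size of a pointer (8 bytes on arm64), while the loop then writes
through the result as a full struct mem_bank_data. A likely fix (a
sketch of the obvious correction, not a posted patch) is to size the
allocation by the object itself:

	/* allocate the struct itself, not a pointer-sized block */
	tcm = devm_kzalloc(dev, sizeof(*tcm), GFP_KERNEL);
	if (!tcm)
		return -ENOMEM;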

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-30 16:09:39

by Tanmay Shah

Subject: Re: [PATCH v6 3/4] remoteproc: zynqmp: add pm domains support

Hi Mathieu,

I agree with all the comments; I will address them in the next revision.

Thanks,

Tanmay

On 10/18/23 12:38 PM, Mathieu Poirier wrote:
> Good morning,
>
> On Thu, Oct 12, 2023 at 09:22:28PM -0700, Tanmay Shah wrote:
> > Use the TCM pm domains extracted from the device-tree to power the TCM
> > on/off via the generic pm domain framework.
> >
> > Signed-off-by: Tanmay Shah <[email protected]>
> > ---
> >
> > Changes in v6:
> > - Remove spurious change
> > - Handle errors in add_pm_domains function
> > - Remove redundant code to handle errors from remove_pm_domains
> >
> > drivers/remoteproc/xlnx_r5_remoteproc.c | 262 ++++++++++++++++++++++--
> > 1 file changed, 243 insertions(+), 19 deletions(-)
> >
> > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > index 4395edea9a64..04e95d880184 100644
> > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > @@ -16,6 +16,7 @@
> > #include <linux/of_reserved_mem.h>
> > #include <linux/platform_device.h>
> > #include <linux/remoteproc.h>
> > +#include <linux/pm_domain.h>
> >
> > #include "remoteproc_internal.h"
> >
> > @@ -102,6 +103,12 @@ static const struct mem_bank_data zynqmp_tcm_banks_lockstep[] = {
> > * @rproc: rproc handle
> > * @pm_domain_id: RPU CPU power domain id
> > * @ipi: pointer to mailbox information
> > + * @num_pm_dev: number of tcm pm domain devices for this core
> > + * @pm_dev1: pm domain virtual devices for power domain framework
> > + * @pm_dev_link1: pm domain device links after registration
> > + * @pm_dev2: used only in lockstep mode. second core's pm domain virtual devices
> > + * @pm_dev_link2: used only in lockstep mode. second core's pm device links after
> > + * registration
> > */
> > struct zynqmp_r5_core {
> > struct device *dev;
> > @@ -111,6 +118,11 @@ struct zynqmp_r5_core {
> > struct rproc *rproc;
> > u32 pm_domain_id;
> > struct mbox_info *ipi;
> > + int num_pm_dev;
> > + struct device **pm_dev1;
>
> s/pm_dev1/pm_dev_core0
>
> > + struct device_link **pm_dev_link1;
>
> s/pm_dev_link1/pm_dev_core0_link;
>
> > + struct device **pm_dev2;
>
> s/pm_dev2/pm_dev_core1
>
> > + struct device_link **pm_dev_link2;
>
> s/pm_dev_link2/pm_dev_core1_link;
>
> > };
> >
> > /**
> > @@ -575,12 +587,21 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
> > bank_size = r5_core->tcm_banks[i]->size;
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> >
> > - ret = zynqmp_pm_request_node(pm_domain_id,
> > - ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > - ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > - if (ret < 0) {
> > - dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > - goto release_tcm_split;
> > + /*
> > + * If TCM information is available in device-tree then
> > + * in that case, pm domain framework will power on/off TCM.
> > + * In that case pm_domain_id is set to 0. If hardcode
> > + * bindings from driver is used, then only this driver will
> > + * use pm_domain_id.
> > + */
> > + if (pm_domain_id) {
> > + ret = zynqmp_pm_request_node(pm_domain_id,
> > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > + if (ret < 0) {
> > + dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > + goto release_tcm_split;
> > + }
>
> This should go in the next patch.
>
> > }
> >
> > dev_dbg(dev, "TCM carveout split mode %s addr=%llx, da=0x%x, size=0x%lx",
> > @@ -646,13 +667,16 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > for (i = 0; i < num_banks; i++) {
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> >
> > - /* Turn on each TCM bank individually */
> > - ret = zynqmp_pm_request_node(pm_domain_id,
> > - ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > - ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > - if (ret < 0) {
> > - dev_err(dev, "failed to turn on TCM 0x%x", pm_domain_id);
> > - goto release_tcm_lockstep;
> > + if (pm_domain_id) {
> > + /* Turn on each TCM bank individually */
> > + ret = zynqmp_pm_request_node(pm_domain_id,
> > + ZYNQMP_PM_CAPABILITY_ACCESS, 0,
> > + ZYNQMP_PM_REQUEST_ACK_BLOCKING);
> > + if (ret < 0) {
> > + dev_err(dev, "failed to turn on TCM 0x%x",
> > + pm_domain_id);
> > + goto release_tcm_lockstep;
> > + }
>
> Same
>
> > }
> >
> > bank_size = r5_core->tcm_banks[i]->size;
> > @@ -687,7 +711,8 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
> > /* If failed, Turn off all TCM banks turned on before */
> > for (i--; i >= 0; i--) {
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > - zynqmp_pm_release_node(pm_domain_id);
> > + if (pm_domain_id)
> > + zynqmp_pm_release_node(pm_domain_id);
> > }
> > return ret;
> > }
> > @@ -758,6 +783,192 @@ static int zynqmp_r5_parse_fw(struct rproc *rproc, const struct firmware *fw)
> > return ret;
> > }
> >
> > +static void zynqmp_r5_remove_pm_domains(struct rproc *rproc)
> > +{
> > + struct zynqmp_r5_core *r5_core = rproc->priv;
> > + struct device *dev = r5_core->dev;
> > + struct zynqmp_r5_cluster *cluster;
> > + int i;
> > +
> > + cluster = platform_get_drvdata(to_platform_device(dev->parent));
> > +
> > + for (i = 1; i < r5_core->num_pm_dev; i++) {
> > + device_link_del(r5_core->pm_dev_link1[i]);
> > + dev_pm_domain_detach(r5_core->pm_dev1[i], false);
> > + }
> > +
> > + kfree(r5_core->pm_dev1);
> > + r5_core->pm_dev1 = NULL;
> > + kfree(r5_core->pm_dev_link1);
> > + r5_core->pm_dev_link1 = NULL;
> > +
> > + if (cluster->mode == SPLIT_MODE) {
> > + r5_core->num_pm_dev = 0;
> > + return;
> > + }
> > +
> > + for (i = 1; i < r5_core->num_pm_dev; i++) {
> > + device_link_del(r5_core->pm_dev_link2[i]);
> > + dev_pm_domain_detach(r5_core->pm_dev2[i], false);
> > + }
> > +
> > + kfree(r5_core->pm_dev2);
> > + r5_core->pm_dev2 = NULL;
> > + kfree(r5_core->pm_dev_link2);
> > + r5_core->pm_dev_link2 = NULL;
> > + r5_core->num_pm_dev = 0;
> > +}
> > +
> > +static int zynqmp_r5_add_pm_domains(struct rproc *rproc)
> > +{
> > + struct zynqmp_r5_core *r5_core = rproc->priv;
> > + struct device *dev = r5_core->dev, *dev2;
> > + struct zynqmp_r5_cluster *cluster;
> > + struct platform_device *pdev;
> > + struct device_node *np;
> > + int i, j, num_pm_dev, ret;
> > +
> > + cluster = dev_get_drvdata(dev->parent);
> > +
> > + /* get number of power-domains */
> > + num_pm_dev = of_count_phandle_with_args(r5_core->np, "power-domains",
> > + "#power-domain-cells");
> > +
> > + if (num_pm_dev <= 0)
> > + return -EINVAL;
> > +
> > + r5_core->pm_dev1 = kcalloc(num_pm_dev,
> > + sizeof(struct device *),
> > + GFP_KERNEL);
> > + if (!r5_core->pm_dev1)
> > + ret = -ENOMEM;
> > +
> > + r5_core->pm_dev_link1 = kcalloc(num_pm_dev,
> > + sizeof(struct device_link *),
> > + GFP_KERNEL);
> > + if (!r5_core->pm_dev_link1) {
> > + kfree(r5_core->pm_dev1);
> > + r5_core->pm_dev1 = NULL;
> > + return -ENOMEM;
> > + }
> > +
> > + r5_core->num_pm_dev = num_pm_dev;
> > +
> > + /*
> > + * start from 2nd entry in power-domains property list as
> > + * for zynqmp we only add TCM power domains and not core's power domain.
> > + */
>
> It would be worth mentioning where the 1st entry gets added.
>
> > + for (i = 1; i < r5_core->num_pm_dev; i++) {
> > + r5_core->pm_dev1[i] = dev_pm_domain_attach_by_id(dev, i);
> > + if (IS_ERR_OR_NULL(r5_core->pm_dev1[i])) {
> > + dev_dbg(dev, "failed to attach pm domain %d, ret=%ld\n", i,
> > + PTR_ERR(r5_core->pm_dev1[i]));
> > + ret = -EINVAL;
> > + goto fail_add_pm_domains;
> > + }
> > + r5_core->pm_dev_link1[i] = device_link_add(dev, r5_core->pm_dev1[i],
> > + DL_FLAG_STATELESS |
> > + DL_FLAG_RPM_ACTIVE |
> > + DL_FLAG_PM_RUNTIME);
> > + if (!r5_core->pm_dev_link1[i]) {
> > + dev_pm_domain_detach(r5_core->pm_dev1[i], true);
> > + r5_core->pm_dev1[i] = NULL;
> > + ret = -EINVAL;
>
> Cleanup for this iteration is properly done here. As such the while() loop in
> fail_add_pm_domains needs to be while (--i >= 0). See my comment below.
>
> > + goto fail_add_pm_domains;
> > + }
> > + }
> > +
> > + if (cluster->mode == SPLIT_MODE)
> > + return 0;
> > +
> > + r5_core->pm_dev2 = kcalloc(num_pm_dev,
> > + sizeof(struct device *),
> > + GFP_KERNEL);
> > + if (!r5_core->pm_dev2) {
> > + ret = -ENOMEM;
> > + goto fail_add_pm_domains;
> > + }
> > +
> > + r5_core->pm_dev_link2 = kcalloc(num_pm_dev,
> > + sizeof(struct device_link *),
> > + GFP_KERNEL);
> > + if (!r5_core->pm_dev_link2) {
> > + kfree(r5_core->pm_dev2);
> > + r5_core->pm_dev2 = NULL;
> > + ret = -ENOMEM;
> > + goto fail_add_pm_domains;
> > + }
> > +
> > + /* get second core's device to detach its power-domains */
> > + np = of_get_next_child(cluster->dev->of_node, of_node_get(dev->of_node));
> > +
> > + pdev = of_find_device_by_node(np);
> > + if (!pdev) {
> > + dev_err(cluster->dev, "core1 platform device not available\n");
> > + kfree(r5_core->pm_dev2);
> > + kfree(r5_core->pm_dev_link2);
> > + r5_core->pm_dev2 = NULL;
> > + r5_core->pm_dev_link2 = NULL;
> > + ret = -EINVAL;
> > + goto fail_add_pm_domains;
> > + }
> > +
> > + dev2 = &pdev->dev;
> > +
> > + /* for zynqmp we only add TCM power domains and not core's power domain */
> > + for (j = 1; j < r5_core->num_pm_dev; j++) {
> > + r5_core->pm_dev2[j] = dev_pm_domain_attach_by_id(dev2, j);
> > + if (!r5_core->pm_dev2[j]) {
> > + dev_dbg(dev, "can't attach to pm domain %d\n", j);
> > + ret = -EINVAL;
> > + goto fail_add_pm_domains_lockstep;
> > + } else if (IS_ERR(r5_core->pm_dev2[j])) {
> > + dev_dbg(dev, "can't attach to pm domain %d\n", j);
> > + ret = PTR_ERR(r5_core->pm_dev2[j]);
> > + goto fail_add_pm_domains_lockstep;
> > + }
> > +
> > + r5_core->pm_dev_link2[j] = device_link_add(dev, r5_core->pm_dev2[j],
> > + DL_FLAG_STATELESS |
> > + DL_FLAG_RPM_ACTIVE |
> > + DL_FLAG_PM_RUNTIME);
> > + if (!r5_core->pm_dev_link2[j]) {
> > + dev_pm_domain_detach(r5_core->pm_dev2[j], true);
> > + r5_core->pm_dev2[j] = NULL;
> > + ret = -ENODEV;
> > + goto fail_add_pm_domains_lockstep;
> > + }
> > + }
> > +
> > +fail_add_pm_domains_lockstep:
> > + while (j >= 1) {
> > + if (r5_core->pm_dev_link2 && !IS_ERR_OR_NULL(r5_core->pm_dev_link2[j]))
> > + device_link_del(r5_core->pm_dev_link2[j]);
> > + if (r5_core->pm_dev2 && !IS_ERR_OR_NULL(r5_core->pm_dev2[j]))
> > + dev_pm_domain_detach(r5_core->pm_dev2[j], true);
> > + j--;
> > + }
> > + kfree(r5_core->pm_dev2);
> > + r5_core->pm_dev2 = NULL;
> > + kfree(r5_core->pm_dev_link2);
> > + r5_core->pm_dev_link2 = NULL;
> > +
> > +fail_add_pm_domains:
> > + while (i >= 1) {
> > + if (r5_core->pm_dev_link1 && !IS_ERR_OR_NULL(r5_core->pm_dev_link1[i]))
> > + device_link_del(r5_core->pm_dev_link1[i]);
>
> Because the cleanup is properly done above, we can start the loop at the
> previous value of 'i', i.e. --i. The added bonus is that you don't need
> the if() statement.
>
> Another problem with starting at 'i' is that you get an out-of-bounds
> access when all PM domains have been properly added for core 0 but fail
> for core 1.
>
> > + if (r5_core->pm_dev1 && !IS_ERR_OR_NULL(r5_core->pm_dev1[i]))
> > + dev_pm_domain_detach(r5_core->pm_dev1[i], true);
>
> Same as above.
>
> I will stop here for this revision.
>
> Mathieu
>
>
> > + i--;
> > + }
> > + kfree(r5_core->pm_dev1);
> > + r5_core->pm_dev1 = NULL;
> > + kfree(r5_core->pm_dev_link1);
> > + r5_core->pm_dev_link1 = NULL;
> > +
> > + return ret;
> > +}
> > +
> > /**
> > * zynqmp_r5_rproc_prepare()
> > * adds carveouts for TCM bank and reserved memory regions
> > @@ -770,19 +981,30 @@ static int zynqmp_r5_rproc_prepare(struct rproc *rproc)
> > {
> > int ret;
> >
> > + ret = zynqmp_r5_add_pm_domains(rproc);
> > + if (ret) {
> > + dev_err(&rproc->dev, "failed to add pm domains\n");
> > + return ret;
> > + }
> > +
> > ret = add_tcm_banks(rproc);
> > if (ret) {
> > dev_err(&rproc->dev, "failed to get TCM banks, err %d\n", ret);
> > - return ret;
> > + goto fail_prepare;
> > }
> >
> > ret = add_mem_regions_carveout(rproc);
> > if (ret) {
> > dev_err(&rproc->dev, "failed to get reserve mem regions %d\n", ret);
> > - return ret;
> > + goto fail_prepare;
> > }
> >
> > return 0;
> > +
> > +fail_prepare:
> > + zynqmp_r5_remove_pm_domains(rproc);
> > +
> > + return ret;
> > }
> >
> > /**
> > @@ -801,11 +1023,13 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
> >
> > r5_core = rproc->priv;
> >
> > + zynqmp_r5_remove_pm_domains(rproc);
> > +
> > for (i = 0; i < r5_core->tcm_bank_count; i++) {
> > pm_domain_id = r5_core->tcm_banks[i]->pm_domain_id;
> > - if (zynqmp_pm_release_node(pm_domain_id))
> > - dev_warn(r5_core->dev,
> > - "can't turn off TCM bank 0x%x", pm_domain_id);
> > + if (pm_domain_id && zynqmp_pm_release_node(pm_domain_id))
> > + dev_dbg(r5_core->dev,
> > + "can't turn off TCM bank 0x%x", pm_domain_id);
> > }
> >
> > return 0;
> > --
> > 2.25.1
> >