2024-04-18 09:27:10

by Varadarajan Narayanan

Subject: [PATCH v9 0/6] Add interconnect driver for IPQ9574 SoC

MSM platforms manage NoC related clocks and scaling from the RPM.
However, in IPQ SoCs, the RPM is not involved in managing NoC
related clocks and there is no NoC scaling.

There is still a requirement to enable some NoC interface
clocks for accessing the peripherals present in the
system. Hence, add a minimalistic interconnect driver that
establishes a path from the processor/memory to those peripherals
and vice versa.
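The paths above are consumed through the standard interconnect
bindings. As a hedged sketch (the node name, unit address and
property set are illustrative assumptions; the MASTER/SLAVE macros
come from the dt-bindings header added in this series):

```dts
/* Illustrative consumer fragment, not part of this series:
 * a PCIe node requesting its NoC paths from the gcc provider.
 */
#include <dt-bindings/interconnect/qcom,ipq9574.h>

pcie0: pcie@28000000 {
	/* ... other properties elided ... */
	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
	interconnect-names = "pcie-mem", "cpu-pcie";
};
```

The consumer driver then votes on these paths with
icc_get()/icc_set_bw(), which the minimal provider translates into
enabling the corresponding NoC branch clocks.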

---
v9: Squash icc-clk driver change and cbf-msm8996 change
Remove HWS_DATA macro
v8: Change icc-clk driver to take master and slave ids instead
of auto generating
Remove ICC_xxx defines from dt-bindings header
Define MASTER/SLAVE_xxx macros from 0 .. n

v7: Fix macro names in dt-bindings header
Do clock get in icc driver

v6: Removed 'Reviewed-by: Krzysztof' from dt-bindings patch
Remove clock get from ICC driver as suggested by Stephen Boyd
so that the actual peripheral can do the clock get
first_id -> icc_first_node_id
Remove tristate from INTERCONNECT_CLK
v5:
Split gcc-ipq9574.c and common.c changes into separate patches
Introduce devm_icc_clk_register
Fix error handling
v4:
gcc-ipq9574.c
Use clk_hw instead of indices
common.c
Do icc register in qcom_cc_probe() call stream
common.h
Add icc clock info to qcom_cc_desc structure

v3:
qcom,ipq9574.h
Move 'first id' define to clock driver
gcc-ipq9574.c:
Use indexed identifiers here to avoid confusion
Fix error messages and move code to common.c as it can be
shared with future SoCs

v2:
qcom,ipq9574.h
Fix license identifier
Rename macros
qcom,ipq9574-gcc.yaml
Include interconnect-cells
gcc-ipq9574.c
Update commit log
Remove IS_ENABLED(CONFIG_INTERCONNECT) and auto select it from Kconfig
ipq9574.dtsi
Moved to separate patch
Include interconnect-cells to clock controller node
drivers/clk/qcom/Kconfig:
Auto select CONFIG_INTERCONNECT & CONFIG_INTERCONNECT_CLK

Varadarajan Narayanan (6):
interconnect: icc-clk: Allow user to specify master/slave ids
dt-bindings: interconnect: Add Qualcomm IPQ9574 support
interconnect: icc-clk: Add devm_icc_clk_register
clk: qcom: common: Add interconnect clocks support
clk: qcom: ipq9574: Use icc-clk for enabling NoC related clocks
arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

.../bindings/clock/qcom,ipq9574-gcc.yaml | 3 +
arch/arm64/boot/dts/qcom/ipq9574.dtsi | 2 +
drivers/clk/qcom/Kconfig | 2 +
drivers/clk/qcom/clk-cbf-8996.c | 7 ++-
drivers/clk/qcom/common.c | 35 ++++++++++-
drivers/clk/qcom/common.h | 9 +++
drivers/clk/qcom/gcc-ipq9574.c | 31 ++++++++++
drivers/interconnect/icc-clk.c | 24 +++++++-
.../dt-bindings/interconnect/qcom,ipq9574.h | 59 +++++++++++++++++++
include/linux/interconnect-clk.h | 4 ++
10 files changed, 171 insertions(+), 5 deletions(-)
create mode 100644 include/dt-bindings/interconnect/qcom,ipq9574.h

--
2.34.1



2024-04-18 09:28:15

by Varadarajan Narayanan

Subject: [PATCH v9 5/6] clk: qcom: ipq9574: Use icc-clk for enabling NoC related clocks

Use the icc-clk framework to enable a few clocks so that paths
can be created and the peripherals connected on those NoCs used.

Signed-off-by: Varadarajan Narayanan <[email protected]>
---
v9: Remove HWS_DATA macro
v8: Bind clock and interconnect using master and slave ids
Use indices instead of clock pointers
v7: Auto select INTERCONNECT & INTERCONNECT_CLK in COMMON_CLK_QCOM
to address build break with random config build test, with the
following combination

CONFIG_COMMON_CLK_QCOM=y
and
CONFIG_INTERCONNECT_CLK=m

the following error is seen as devm_icc_clk_register is in a
module and being referenced from vmlinux.

powerpc64-linux-ld: drivers/clk/qcom/common.o: in function `qcom_cc_really_probe':
>> common.c:(.text+0x980): undefined reference to `devm_icc_clk_register'

v6: Move enum to dt-bindings and share between here and DT
first_id -> icc_first_node_id
v5: Split from common.c changes into separate patch
No functional changes
---
drivers/clk/qcom/Kconfig | 2 ++
drivers/clk/qcom/gcc-ipq9574.c | 31 +++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+)

diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
index 8ab08e7b5b6c..b65a373f2e6b 100644
--- a/drivers/clk/qcom/Kconfig
+++ b/drivers/clk/qcom/Kconfig
@@ -17,6 +17,8 @@ menuconfig COMMON_CLK_QCOM
select RATIONAL
select REGMAP_MMIO
select RESET_CONTROLLER
+ select INTERCONNECT
+ select INTERCONNECT_CLK

if COMMON_CLK_QCOM

diff --git a/drivers/clk/qcom/gcc-ipq9574.c b/drivers/clk/qcom/gcc-ipq9574.c
index 0a3f846695b8..b691d19d66e1 100644
--- a/drivers/clk/qcom/gcc-ipq9574.c
+++ b/drivers/clk/qcom/gcc-ipq9574.c
@@ -4,6 +4,7 @@
*/

#include <linux/clk-provider.h>
+#include <linux/interconnect-clk.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
@@ -12,6 +13,7 @@

#include <dt-bindings/clock/qcom,ipq9574-gcc.h>
#include <dt-bindings/reset/qcom,ipq9574-gcc.h>
+#include <dt-bindings/interconnect/qcom,ipq9574.h>

#include "clk-alpha-pll.h"
#include "clk-branch.h"
@@ -4301,6 +4303,32 @@ static const struct qcom_reset_map gcc_ipq9574_resets[] = {
[GCC_WCSS_Q6_TBU_BCR] = { 0x12054, 0 },
};

+#define IPQ_APPS_ID 9574 /* some unique value */
+
+static struct qcom_icc_hws_data icc_ipq9574_hws[] = {
+ { MASTER_ANOC_PCIE0, SLAVE_ANOC_PCIE0, GCC_ANOC_PCIE0_1LANE_M_CLK },
+ { MASTER_SNOC_PCIE0, SLAVE_SNOC_PCIE0, GCC_SNOC_PCIE0_1LANE_S_CLK },
+ { MASTER_ANOC_PCIE1, SLAVE_ANOC_PCIE1, GCC_ANOC_PCIE1_1LANE_M_CLK },
+ { MASTER_SNOC_PCIE1, SLAVE_SNOC_PCIE1, GCC_SNOC_PCIE1_1LANE_S_CLK },
+ { MASTER_ANOC_PCIE2, SLAVE_ANOC_PCIE2, GCC_ANOC_PCIE2_2LANE_M_CLK },
+ { MASTER_SNOC_PCIE2, SLAVE_SNOC_PCIE2, GCC_SNOC_PCIE2_2LANE_S_CLK },
+ { MASTER_ANOC_PCIE3, SLAVE_ANOC_PCIE3, GCC_ANOC_PCIE3_2LANE_M_CLK },
+ { MASTER_SNOC_PCIE3, SLAVE_SNOC_PCIE3, GCC_SNOC_PCIE3_2LANE_S_CLK },
+ { MASTER_USB, SLAVE_USB, GCC_SNOC_USB_CLK },
+ { MASTER_USB_AXI, SLAVE_USB_AXI, GCC_ANOC_USB_AXI_CLK },
+ { MASTER_NSSNOC_NSSCC, SLAVE_NSSNOC_NSSCC, GCC_NSSNOC_NSSCC_CLK },
+ { MASTER_NSSNOC_SNOC_0, SLAVE_NSSNOC_SNOC_0, GCC_NSSNOC_SNOC_CLK },
+ { MASTER_NSSNOC_SNOC_1, SLAVE_NSSNOC_SNOC_1, GCC_NSSNOC_SNOC_1_CLK },
+ { MASTER_NSSNOC_PCNOC_1, SLAVE_NSSNOC_PCNOC_1, GCC_NSSNOC_PCNOC_1_CLK },
+ { MASTER_NSSNOC_QOSGEN_REF, SLAVE_NSSNOC_QOSGEN_REF, GCC_NSSNOC_QOSGEN_REF_CLK },
+ { MASTER_NSSNOC_TIMEOUT_REF, SLAVE_NSSNOC_TIMEOUT_REF, GCC_NSSNOC_TIMEOUT_REF_CLK },
+ { MASTER_NSSNOC_XO_DCD, SLAVE_NSSNOC_XO_DCD, GCC_NSSNOC_XO_DCD_CLK },
+ { MASTER_NSSNOC_ATB, SLAVE_NSSNOC_ATB, GCC_NSSNOC_ATB_CLK },
+ { MASTER_MEM_NOC_NSSNOC, SLAVE_MEM_NOC_NSSNOC, GCC_MEM_NOC_NSSNOC_CLK },
+ { MASTER_NSSNOC_MEMNOC, SLAVE_NSSNOC_MEMNOC, GCC_NSSNOC_MEMNOC_CLK },
+ { MASTER_NSSNOC_MEM_NOC_1, SLAVE_NSSNOC_MEM_NOC_1, GCC_NSSNOC_MEM_NOC_1_CLK },
+};
+
static const struct of_device_id gcc_ipq9574_match_table[] = {
{ .compatible = "qcom,ipq9574-gcc" },
{ }
@@ -4323,6 +4351,9 @@ static const struct qcom_cc_desc gcc_ipq9574_desc = {
.num_resets = ARRAY_SIZE(gcc_ipq9574_resets),
.clk_hws = gcc_ipq9574_hws,
.num_clk_hws = ARRAY_SIZE(gcc_ipq9574_hws),
+ .icc_hws = icc_ipq9574_hws,
+ .num_icc_hws = ARRAY_SIZE(icc_ipq9574_hws),
+ .icc_first_node_id = IPQ_APPS_ID,
};

static int gcc_ipq9574_probe(struct platform_device *pdev)
--
2.34.1


2024-04-18 09:28:21

by Varadarajan Narayanan

Subject: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

IPQ SoCs don't involve the RPM in managing NoC related clocks and
there is no NoC scaling; Linux itself handles these clocks.
However, these should not be exposed as plain clocks; instead
they should align with other Qualcomm SoCs that handle such
clocks from an interconnect provider.

Hence, add icc provider capability to the gcc node so that
peripherals can use the interconnect facility to enable these
clocks.

Reviewed-by: Dmitry Baryshkov <[email protected]>
Signed-off-by: Varadarajan Narayanan <[email protected]>
---
arch/arm64/boot/dts/qcom/ipq9574.dtsi | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
index 7f2e5cbf3bbb..5b3e69379b1f 100644
--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
@@ -8,6 +8,7 @@

#include <dt-bindings/clock/qcom,apss-ipq.h>
#include <dt-bindings/clock/qcom,ipq9574-gcc.h>
+#include <dt-bindings/interconnect/qcom,ipq9574.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/qcom,ipq9574-gcc.h>
#include <dt-bindings/thermal/thermal.h>
@@ -306,6 +307,7 @@ gcc: clock-controller@1800000 {
#clock-cells = <1>;
#reset-cells = <1>;
#power-domain-cells = <1>;
+ #interconnect-cells = <1>;
};

tcsr_mutex: hwlock@1905000 {
--
2.34.1


2024-04-18 09:29:00

by Varadarajan Narayanan

Subject: [PATCH v9 3/6] interconnect: icc-clk: Add devm_icc_clk_register

Wrap icc_clk_register() to create devm_icc_clk_register(), so
that the resources are released properly.

Reviewed-by: Dmitry Baryshkov <[email protected]>
Signed-off-by: Varadarajan Narayanan <[email protected]>
---
v8: Added Reviewed-by: Dmitry Baryshkov
v7: Simplify devm_icc_clk_register implementation as suggested in review
v5: Introduced devm_icc_clk_register
---
drivers/interconnect/icc-clk.c | 18 ++++++++++++++++++
include/linux/interconnect-clk.h | 2 ++
2 files changed, 20 insertions(+)

diff --git a/drivers/interconnect/icc-clk.c b/drivers/interconnect/icc-clk.c
index 2be193fd7d8f..f788db15cd76 100644
--- a/drivers/interconnect/icc-clk.c
+++ b/drivers/interconnect/icc-clk.c
@@ -148,6 +148,24 @@ struct icc_provider *icc_clk_register(struct device *dev,
}
EXPORT_SYMBOL_GPL(icc_clk_register);

+static void devm_icc_release(void *res)
+{
+ icc_clk_unregister(res);
+}
+
+int devm_icc_clk_register(struct device *dev, unsigned int first_id,
+ unsigned int num_clocks, const struct icc_clk_data *data)
+{
+ struct icc_provider *prov;
+
+ prov = icc_clk_register(dev, first_id, num_clocks, data);
+ if (IS_ERR(prov))
+ return PTR_ERR(prov);
+
+ return devm_add_action_or_reset(dev, devm_icc_release, prov);
+}
+EXPORT_SYMBOL_GPL(devm_icc_clk_register);
+
/**
* icc_clk_unregister() - unregister a previously registered clk interconnect provider
* @provider: provider returned by icc_clk_register()
diff --git a/include/linux/interconnect-clk.h b/include/linux/interconnect-clk.h
index 170898faaacb..9bcee3e9c56c 100644
--- a/include/linux/interconnect-clk.h
+++ b/include/linux/interconnect-clk.h
@@ -19,6 +19,8 @@ struct icc_provider *icc_clk_register(struct device *dev,
unsigned int first_id,
unsigned int num_clocks,
const struct icc_clk_data *data);
+int devm_icc_clk_register(struct device *dev, unsigned int first_id,
+ unsigned int num_clocks, const struct icc_clk_data *data);
void icc_clk_unregister(struct icc_provider *provider);

#endif
--
2.34.1


2024-04-18 09:29:39

by Varadarajan Narayanan

Subject: [PATCH v9 4/6] clk: qcom: common: Add interconnect clocks support

Unlike MSM platforms that manage NoC related clocks and scaling
from the RPM, IPQ SoCs don't involve the RPM in managing NoC
related clocks and there is no NoC scaling.

However, there is a requirement to enable some NoC interface
clocks for accessing the peripheral controllers present on
these NoCs. Though exposing these as normal clocks would work,
having a minimalistic interconnect driver to handle these clocks
makes it consistent with other Qualcomm platforms, resulting
in common code paths. This is similar to msm8996-cbf's usage of
the icc-clk framework.

Signed-off-by: Varadarajan Narayanan <[email protected]>
---
v9: Remove HWS_DATA macro
v8: Explicitly set master and slave ids
v7: Restore clk_get
v6: first_id -> icc_first_node_id
Remove clock get so that the peripheral that uses the clock
can do the clock get
v5: Split changes in common.c to separate patch
Fix error handling
Use devm_icc_clk_register instead of icc_clk_register
v4: Use clk_hw instead of indices
Do icc register in qcom_cc_probe() call stream
Add icc clock info to qcom_cc_desc structure
v3: Use indexed identifiers here to avoid confusion
Fix error messages and move to common.c
v2: Move DTS to separate patch
Update commit log
Auto select CONFIG_INTERCONNECT & CONFIG_INTERCONNECT_CLK to fix build error
---
drivers/clk/qcom/common.c | 35 ++++++++++++++++++++++++++++++++++-
drivers/clk/qcom/common.h | 9 +++++++++
2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/clk/qcom/common.c b/drivers/clk/qcom/common.c
index 75f09e6e057e..a6410b1828ca 100644
--- a/drivers/clk/qcom/common.c
+++ b/drivers/clk/qcom/common.c
@@ -8,6 +8,7 @@
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include <linux/clk-provider.h>
+#include <linux/interconnect-clk.h>
#include <linux/reset-controller.h>
#include <linux/of.h>

@@ -234,6 +235,38 @@ static struct clk_hw *qcom_cc_clk_hw_get(struct of_phandle_args *clkspec,
return cc->rclks[idx] ? &cc->rclks[idx]->hw : NULL;
}

+static int qcom_cc_icc_register(struct device *dev,
+ const struct qcom_cc_desc *desc)
+{
+ struct icc_clk_data *icd;
+ struct clk_hw *hws;
+ int i;
+
+ if (!IS_ENABLED(CONFIG_INTERCONNECT_CLK))
+ return 0;
+
+ if (!desc->icc_hws)
+ return 0;
+
+ icd = devm_kcalloc(dev, desc->num_icc_hws, sizeof(*icd), GFP_KERNEL);
+ if (!icd)
+ return -ENOMEM;
+
+ for (i = 0; i < desc->num_icc_hws; i++) {
+ icd[i].master_id = desc->icc_hws[i].master_id;
+ icd[i].slave_id = desc->icc_hws[i].slave_id;
+ hws = &desc->clks[desc->icc_hws[i].clk_id]->hw;
+ icd[i].clk = devm_clk_hw_get_clk(dev, hws, "icc");
+ if (!icd[i].clk)
+ return dev_err_probe(dev, -ENOENT,
+ "(%d) clock entry is null\n", i);
+ icd[i].name = clk_hw_get_name(hws);
+ }
+
+ return devm_icc_clk_register(dev, desc->icc_first_node_id,
+ desc->num_icc_hws, icd);
+}
+
int qcom_cc_really_probe(struct platform_device *pdev,
const struct qcom_cc_desc *desc, struct regmap *regmap)
{
@@ -303,7 +336,7 @@ int qcom_cc_really_probe(struct platform_device *pdev,
if (ret)
return ret;

- return 0;
+ return qcom_cc_icc_register(dev, desc);
}
EXPORT_SYMBOL_GPL(qcom_cc_really_probe);

diff --git a/drivers/clk/qcom/common.h b/drivers/clk/qcom/common.h
index 9c8f7b798d9f..ea96b2ca3cac 100644
--- a/drivers/clk/qcom/common.h
+++ b/drivers/clk/qcom/common.h
@@ -19,6 +19,12 @@ struct clk_hw;
#define PLL_VOTE_FSM_ENA BIT(20)
#define PLL_VOTE_FSM_RESET BIT(21)

+struct qcom_icc_hws_data {
+ int master_id;
+ int slave_id;
+ int clk_id;
+};
+
struct qcom_cc_desc {
const struct regmap_config *config;
struct clk_regmap **clks;
@@ -29,6 +35,9 @@ struct qcom_cc_desc {
size_t num_gdscs;
struct clk_hw **clk_hws;
size_t num_clk_hws;
+ struct qcom_icc_hws_data *icc_hws;
+ size_t num_icc_hws;
+ unsigned int icc_first_node_id;
};

/**
--
2.34.1


2024-04-22 23:11:12

by Konrad Dybcio

Subject: Re: [PATCH v9 4/6] clk: qcom: common: Add interconnect clocks support



On 4/18/24 11:23, Varadarajan Narayanan wrote:
> Unlike MSM platforms that manage NoC related clocks and scaling
> from RPM, IPQ SoCs dont involve RPM in managing NoC related
> clocks and there is no NoC scaling.
>
> However, there is a requirement to enable some NoC interface
> clocks for accessing the peripheral controllers present on
> these NoCs. Though exposing these as normal clocks would work,
> having a minimalistic interconnect driver to handle these clocks
> would make it consistent with other Qualcomm platforms resulting
> in common code paths. This is similar to msm8996-cbf's usage of
> icc-clk framework.
>
> Signed-off-by: Varadarajan Narayanan <[email protected]>
> ---

One more thing that we don't seem to have squared out is how to handle
sync_state. We can just jerryrig icc_sync_state in here, but I'm not
sure how that goes with some floating ideas of clk_sync_state that have
been tried in the past

Konrad

2024-04-23 13:14:55

by Konrad Dybcio

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc



On 4/18/24 11:23, Varadarajan Narayanan wrote:
> IPQ SoCs dont involve RPM in managing NoC related clocks and
> there is no NoC scaling. Linux itself handles these clocks.
> However, these should not be exposed as just clocks and align
> with other Qualcomm SoCs that handle these clocks from a
> interconnect provider.
>
> Hence include icc provider capability to the gcc node so that
> peripherals can use the interconnect facility to enable these
> clocks.
>
> Reviewed-by: Dmitry Baryshkov <[email protected]>
> Signed-off-by: Varadarajan Narayanan <[email protected]>
> ---

If this is all you do to enable interconnect (which is not the case,
as this patch only satisfies the bindings checker, the meaningful
change happens in the previous patch) and nothing explodes, this is
an apparent sign of your driver doing nothing.

The expected reaction to "enabling interconnect" without defining the
required paths for your hardware would be a crash-on-sync_state, as all
unused (from Linux's POV) resources ought to be shut down.

Because you lack sync_state, the interconnects silently retain the state
that they were left in (which is not deterministic), and that's precisely
what we want to avoid.

Konrad

2024-04-25 10:28:08

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>
>
> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > IPQ SoCs dont involve RPM in managing NoC related clocks and
> > there is no NoC scaling. Linux itself handles these clocks.
> > However, these should not be exposed as just clocks and align
> > with other Qualcomm SoCs that handle these clocks from a
> > interconnect provider.
> >
> > Hence include icc provider capability to the gcc node so that
> > peripherals can use the interconnect facility to enable these
> > clocks.
> >
> > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > ---
>
> If this is all you do to enable interconnect (which is not the case,
> as this patch only satisfies the bindings checker, the meaningful
> change happens in the previous patch) and nothing explodes, this is
> an apparent sign of your driver doing nothing.

It appears to do nothing because we are just enabling the clock
provider to also act as an interconnect provider. Only when the
consumers are enabled with interconnect usage will this create
paths and turn on the relevant NoC clocks.

This interconnect will be used by the PCIe and NSS blocks. When
those patches were posted earlier, they were put on hold until
an interconnect driver was available.

Once this patch gets in, PCIe for example will make use of icc.
Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.

The 'pcieX' nodes will include the following entries.

	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
	interconnect-names = "pcie-mem", "cpu-pcie";

> The expected reaction to "enabling interconnect" without defining the
> required paths for your hardware would be a crash-on-sync_state, as all
> unused (from Linux's POV) resources ought to be shut down.
>
> Because you lack sync_state, the interconnects silently retain the state
> that they were left in (which is not deterministic), and that's precisely
> what we want to avoid.

I tried setting 'sync_state' to icc_sync_state; it was invoked
and I didn't see any crash.

Thanks
Varada

2024-04-25 10:32:16

by Varadarajan Narayanan

Subject: Re: [PATCH v9 4/6] clk: qcom: common: Add interconnect clocks support

On Tue, Apr 23, 2024 at 01:05:16AM +0200, Konrad Dybcio wrote:
>
>
> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > Unlike MSM platforms that manage NoC related clocks and scaling
> > from RPM, IPQ SoCs dont involve RPM in managing NoC related
> > clocks and there is no NoC scaling.
> >
> > However, there is a requirement to enable some NoC interface
> > clocks for accessing the peripheral controllers present on
> > these NoCs. Though exposing these as normal clocks would work,
> > having a minimalistic interconnect driver to handle these clocks
> > would make it consistent with other Qualcomm platforms resulting
> > in common code paths. This is similar to msm8996-cbf's usage of
> > icc-clk framework.
> >
> > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > ---
>
> One more thing that we don't seem to have squared out is how to handle
> .sync_state. We can just jerryrig icc_sync_state in here, but I'm not
> sure how that goes with some floating ideas of clk_sync_state that have
> been tried in the past

Ok. Will add that and post a new version.

Thanks
Varada

2024-04-30 10:05:49

by Konrad Dybcio

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>
>>
>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>> IPQ SoCs dont involve RPM in managing NoC related clocks and
>>> there is no NoC scaling. Linux itself handles these clocks.
>>> However, these should not be exposed as just clocks and align
>>> with other Qualcomm SoCs that handle these clocks from a
>>> interconnect provider.
>>>
>>> Hence include icc provider capability to the gcc node so that
>>> peripherals can use the interconnect facility to enable these
>>> clocks.
>>>
>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>> ---
>>
>> If this is all you do to enable interconnect (which is not the case,
>> as this patch only satisfies the bindings checker, the meaningful
>> change happens in the previous patch) and nothing explodes, this is
>> an apparent sign of your driver doing nothing.
>
> It appears to do nothing because, we are just enabling the clock
> provider to also act as interconnect provider. Only when the
> consumers are enabled with interconnect usage, this will create
> paths and turn on the relevant NOC clocks.

No, with sync_state it actually does "something" (sets the interconnect
path bandwidths to zero). And *this* patch does nothing functionally,
it only makes the dt checker happy.

>
> This interconnect will be used by the PCIe and NSS blocks. When
> those patches were posted earlier, they were put on hold until
> interconnect driver is available.
>
> Once this patch gets in, PCIe for example will make use of icc.
> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
>
> The 'pcieX' nodes will include the following entries.
>
> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> interconnect-names = "pcie-mem", "cpu-pcie";

Okay. What about USB that's already enabled? And BIMC/MEMNOC?

>
>> The expected reaction to "enabling interconnect" without defining the
>> required paths for your hardware would be a crash-on-sync_state, as all
>> unused (from Linux's POV) resources ought to be shut down.
>>
>> Because you lack sync_state, the interconnects silently retain the state
>> that they were left in (which is not deterministic), and that's precisely
>> what we want to avoid.
>
> I tried to set 'sync_state' to icc_sync_state to be invoked and
> didn't see any crash.

Have you confirmed that the registers are actually written to, and with
correct values?

Konrad

2024-05-02 09:32:50

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> >>
> >>
> >> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> >>> IPQ SoCs dont involve RPM in managing NoC related clocks and
> >>> there is no NoC scaling. Linux itself handles these clocks.
> >>> However, these should not be exposed as just clocks and align
> >>> with other Qualcomm SoCs that handle these clocks from a
> >>> interconnect provider.
> >>>
> >>> Hence include icc provider capability to the gcc node so that
> >>> peripherals can use the interconnect facility to enable these
> >>> clocks.
> >>>
> >>> Reviewed-by: Dmitry Baryshkov <[email protected]>
> >>> Signed-off-by: Varadarajan Narayanan <[email protected]>
> >>> ---
> >>
> >> If this is all you do to enable interconnect (which is not the case,
> >> as this patch only satisfies the bindings checker, the meaningful
> >> change happens in the previous patch) and nothing explodes, this is
> >> an apparent sign of your driver doing nothing.
> >
> > It appears to do nothing because, we are just enabling the clock
> > provider to also act as interconnect provider. Only when the
> > consumers are enabled with interconnect usage, this will create
> > paths and turn on the relevant NOC clocks.
>
> No, with sync_state it actually does "something" (sets the interconnect
> path bandwidths to zero). And *this* patch does nothing functionally,
> it only makes the dt checker happy.

I understand.

> > This interconnect will be used by the PCIe and NSS blocks. When
> > those patches were posted earlier, they were put on hold until
> > interconnect driver is available.
> >
> > Once this patch gets in, PCIe for example will make use of icc.
> > Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> >
> > The 'pcieX' nodes will include the following entries.
> >
> > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > interconnect-names = "pcie-mem", "cpu-pcie";
>
> Okay. What about USB that's already enabled? And BIMC/MEMNOC?

For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
clock. Hence, interconnect is not specified there.
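
A hedged illustration of that arrangement (the node name, unit
address and clock list are assumptions; only the "iface" entry is
the point here):

```dts
/* Illustrative fragment: the USB controller picks up the NoC
 * clock as its "iface" clock, so no interconnects property is
 * used for it.
 */
usb@8af8800 {
	clocks = <&gcc GCC_ANOC_USB_AXI_CLK>;
	clock-names = "iface";
	/* ... remaining clocks and properties elided ... */
};
```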

MEMNOC to System NOC interfaces seem to be enabled automatically.
Software doesn't have to turn on or program specific clocks.

> >> The expected reaction to "enabling interconnect" without defining the
> >> required paths for your hardware would be a crash-on-sync_state, as all
> >> unused (from Linux's POV) resources ought to be shut down.
> >>
> >> Because you lack sync_state, the interconnects silently retain the state
> >> that they were left in (which is not deterministic), and that's precisely
> >> what we want to avoid.
> >
> > I tried to set 'sync_state' to icc_sync_state to be invoked and
> > didn't see any crash.
>
> Have you confirmed that the registers are actually written to, and with
> correct values?

I tried the following combinations:

1. Top of tree linux-next + This patch set

* icc_sync_state called
* No crash or hang observed
* From /sys/kernel/debug/clk/clk_summary can see the
relevant clocks are set to the expected rates (compared
with downstream kernel)

2. Top of tree linux-next + This patch set + PCIe enablement

* icc_sync_state NOT called
* No crash or hang observed
* From /sys/kernel/debug/clk/clk_summary can see the
relevant clocks are set to the expected rates (compared
with downstream kernel)

Does this answer your question? Please let me know if you were
looking for some other information.

Thanks
Varada

2024-05-03 13:53:02

by Georgi Djakov

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

Hi Varada,

Thank you for your work on this!

On 2.05.24 12:30, Varadarajan Narayanan wrote:
> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>>>
>>>>
>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>>>> IPQ SoCs dont involve RPM in managing NoC related clocks and
>>>>> there is no NoC scaling. Linux itself handles these clocks.
>>>>> However, these should not be exposed as just clocks and align
>>>>> with other Qualcomm SoCs that handle these clocks from a
>>>>> interconnect provider.
>>>>>
>>>>> Hence include icc provider capability to the gcc node so that
>>>>> peripherals can use the interconnect facility to enable these
>>>>> clocks.
>>>>>
>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>>>> ---
>>>>
>>>> If this is all you do to enable interconnect (which is not the case,
>>>> as this patch only satisfies the bindings checker, the meaningful
>>>> change happens in the previous patch) and nothing explodes, this is
>>>> an apparent sign of your driver doing nothing.
>>>
>>> It appears to do nothing because, we are just enabling the clock
>>> provider to also act as interconnect provider. Only when the
>>> consumers are enabled with interconnect usage, this will create
>>> paths and turn on the relevant NOC clocks.
>>
>> No, with sync_state it actually does "something" (sets the interconnect
>> path bandwidths to zero). And *this* patch does nothing functionally,
>> it only makes the dt checker happy.
>
> I understand.
>
>>> This interconnect will be used by the PCIe and NSS blocks. When
>>> those patches were posted earlier, they were put on hold until
>>> interconnect driver is available.
>>>
>>> Once this patch gets in, PCIe for example will make use of icc.
>>> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
>>>
>>> The 'pcieX' nodes will include the following entries.
>>>
>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>>> interconnect-names = "pcie-mem", "cpu-pcie";
>>
>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
>
> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> clock. Hence, interconnect is not specified there.
>
> MEMNOC to System NOC interfaces seem to be enabled automatically.
> Software doesn't have to turn on or program specific clocks.
>
>>>> The expected reaction to "enabling interconnect" without defining the
>>>> required paths for your hardware would be a crash-on-sync_state, as all
>>>> unused (from Linux's POV) resources ought to be shut down.
>>>>
>>>> Because you lack sync_state, the interconnects silently retain the state
>>>> that they were left in (which is not deterministic), and that's precisely
>>>> what we want to avoid.
>>>
>>> I tried to set 'sync_state' to icc_sync_state to be invoked and
>>> didn't see any crash.
>>
>> Have you confirmed that the registers are actually written to, and with
>> correct values?
>
> I tried the following combinations:-
>
> 1. Top of tree linux-next + This patch set
>
> * icc_sync_state called
> * No crash or hang observed
> * From /sys/kernel/debug/clk/clk_summary can see the
> relevant clocks are set to the expected rates (compared
> with downstream kernel)
>
> 2. Top of tree linux-next + This patch set + PCIe enablement
>
> * icc_sync_state NOT called

If sync_state() is not being called, that usually means that there
are interconnect consumers (PCIe?), or dependencies of theirs, that
haven't probed successfully. That can be checked in
/sys/class/devlink/.../status, though I am not sure how this works
for PCI devices.

You can also manually force a call to sync_state by writing "1" to
the interconnect provider's /sys/devices/.../state_synced

Anyway, the question is whether PCIe and NSS work without this
driver. If they do, is that because the clocks are turned on by
default or by the boot loader?

Then, if an interconnect path (clock) gets disabled, either when we
reach sync_state (with no bandwidth requests) or when we explicitly
call icc_set_bw() with 0 bandwidth values, I would expect that
these PCIe and NSS devices would not function anymore (it might
save some power etc.), and if this is unexpected we should see
a crash or hang...

Can you confirm this?

Thanks,
Georgi

> * No crash or hang observed
> * From /sys/kernel/debug/clk/clk_summary can see the
> relevant clocks are set to the expected rates (compared
> with downstream kernel)
>
> Does this answer your question? Please let me know if you were
> looking for some other information.
>
> Thanks
> Varada
>


2024-05-08 06:53:23

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> Hi Varada,
>
> Thank you for your work on this!
>
> On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > >
> > > > >
> > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > IPQ SoCs don't involve RPM in managing NoC related clocks and
> > > > > > there is no NoC scaling. Linux itself handles these clocks.
> > > > > > However, these should not be exposed as just clocks; they should
> > > > > > align with other Qualcomm SoCs that handle these clocks from an
> > > > > > interconnect provider.
> > > > > >
> > > > > > Hence, add icc provider capability to the gcc node so that
> > > > > > peripherals can use the interconnect facility to enable these
> > > > > > clocks.
> > > > > >
> > > > > > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > > > > > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > > > > > ---
> > > > >
> > > > > If this is all you do to enable interconnect (which is not the case,
> > > > > as this patch only satisfies the bindings checker, the meaningful
> > > > > change happens in the previous patch) and nothing explodes, this is
> > > > > an apparent sign of your driver doing nothing.
> > > >
> > > > It appears to do nothing because we are just enabling the clock
> > > > provider to also act as an interconnect provider. Only when the
> > > > consumers are enabled with interconnect usage will this create
> > > > paths and turn on the relevant NoC clocks.
> > >
> > > No, with sync_state it actually does "something" (sets the interconnect
> > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > it only makes the dt checker happy.
> >
> > I understand.
> >
> > > > This interconnect will be used by the PCIe and NSS blocks. When
> > > > those patches were posted earlier, they were put on hold until
> > > > an interconnect driver became available.
> > > >
> > > > Once this patch gets in, PCIe for example will make use of icc.
> > > > Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> > > >
> > > > The 'pcieX' nodes will include the following entries.
> > > >
> > > > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > > > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > > > interconnect-names = "pcie-mem", "cpu-pcie";
> > >
> > > Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> >
> > For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> > clock. Hence, interconnect is not specified there.
> >
> > MEMNOC to System NOC interfaces seem to be enabled automatically.
> > Software doesn't have to turn on or program specific clocks.
> >
> > > > > The expected reaction to "enabling interconnect" without defining the
> > > > > required paths for your hardware would be a crash-on-sync_state, as all
> > > > > unused (from Linux's POV) resources ought to be shut down.
> > > > >
> > > > > Because you lack sync_state, the interconnects silently retain the state
> > > > > that they were left in (which is not deterministic), and that's precisely
> > > > > what we want to avoid.
> > > >
> > > > I tried setting 'sync_state' to icc_sync_state so that it gets invoked and
> > > > didn't see any crash.
> > >
> > > Have you confirmed that the registers are actually written to, and with
> > > correct values?
> >
> > I tried the following combinations:-
> >
> > 1. Top of tree linux-next + This patch set
> >
> > * icc_sync_state called
> > * No crash or hang observed
> > * From /sys/kernel/debug/clk/clk_summary can see the
> > relevant clocks are set to the expected rates (compared
> > with downstream kernel)
> >
> > 2. Top of tree linux-next + This patch set + PCIe enablement
> >
> > * icc_sync_state NOT called
>
> If sync_state() is not being called, that usually means that some
> interconnect consumers (PCIe?) or their dependencies haven't
> probed successfully. That can be checked in /sys/class/devlink/.../status,
> though I am not sure how this works for PCI devices.
>
> You can also manually force a call to sync_state by writing "1" to
> the interconnect provider's /sys/devices/.../state_synced
>
> Anyway, the question is if PCIe and NSS work without this driver?

No.

> If they work, is this because the clocks are turned on by default
> or by the boot loader?

Initially, the PCIe/NSS drivers enabled these clocks directly
by listing them in their DT nodes. Based on community
feedback this was removed, and after that PCIe/NSS did not work.

> Then if an interconnect path (clock) gets disabled either when we
> reach a sync_state (with no bandwidth requests) or we explicitly
> call icc_set_bw() with 0 bandwidth values, I would expect that
> these PCIe and NSS devices would not function anymore (it might
> save some power etc.) and if this is unexpected we should see a
> crash or hang...
>
> Can you confirm this?

With ICC enabled, icc_set_bw (with non-zero values) is called by
PCIe and NSS drivers. Haven't checked icc_set_bw with zero
values.

PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
NSS: ppe_icc_init -> icc_set_bw

I believe sync_state is not getting called since there is a
non-zero bandwidth request, which seems to be aligned with
your explanation.

Thanks
Varada

>
> Thanks,
> Georgi
>
> > * No crash or hang observed
> > * From /sys/kernel/debug/clk/clk_summary can see the
> > relevant clocks are set to the expected rates (compared
> > with downstream kernel)
> >
> > Does this answer your question? Please let me know if you were
> > looking for some other information.
> >
> > Thanks
> > Varada
> >
>

2024-05-08 08:10:44

by Dmitry Baryshkov

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
<[email protected]> wrote:
>
> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> > Hi Varada,
> >
> > Thank you for your work on this!
> >
> > On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > > >
> > > > > >
> > > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > > IPQ SoCs don't involve RPM in managing NoC related clocks and
> > > > > > > there is no NoC scaling. Linux itself handles these clocks.
> > > > > > > However, these should not be exposed as just clocks; they should
> > > > > > > align with other Qualcomm SoCs that handle these clocks from an
> > > > > > > interconnect provider.
> > > > > > >
> > > > > > > Hence, add icc provider capability to the gcc node so that
> > > > > > > peripherals can use the interconnect facility to enable these
> > > > > > > clocks.
> > > > > > >
> > > > > > > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > > > > > > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > > > > > > ---
> > > > > >
> > > > > > If this is all you do to enable interconnect (which is not the case,
> > > > > > as this patch only satisfies the bindings checker, the meaningful
> > > > > > change happens in the previous patch) and nothing explodes, this is
> > > > > > an apparent sign of your driver doing nothing.
> > > > >
> > > > > It appears to do nothing because we are just enabling the clock
> > > > > provider to also act as an interconnect provider. Only when the
> > > > > consumers are enabled with interconnect usage will this create
> > > > > paths and turn on the relevant NoC clocks.
> > > >
> > > > No, with sync_state it actually does "something" (sets the interconnect
> > > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > > it only makes the dt checker happy.
> > >
> > > I understand.
> > >
> > > > > This interconnect will be used by the PCIe and NSS blocks. When
> > > > > those patches were posted earlier, they were put on hold until
> > > > > an interconnect driver became available.
> > > > >
> > > > > Once this patch gets in, PCIe for example will make use of icc.
> > > > > Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> > > > >
> > > > > The 'pcieX' nodes will include the following entries.
> > > > >
> > > > > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > > > > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > > > > interconnect-names = "pcie-mem", "cpu-pcie";
> > > >
> > > > Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> > >
> > > For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> > > clock. Hence, interconnect is not specified there.
> > >
> > > MEMNOC to System NOC interfaces seem to be enabled automatically.
> > > Software doesn't have to turn on or program specific clocks.
> > >
> > > > > > The expected reaction to "enabling interconnect" without defining the
> > > > > > required paths for your hardware would be a crash-on-sync_state, as all
> > > > > > unused (from Linux's POV) resources ought to be shut down.
> > > > > >
> > > > > > Because you lack sync_state, the interconnects silently retain the state
> > > > > > that they were left in (which is not deterministic), and that's precisely
> > > > > > what we want to avoid.
> > > > >
> > > > > I tried setting 'sync_state' to icc_sync_state so that it gets invoked and
> > > > > didn't see any crash.
> > > >
> > > > Have you confirmed that the registers are actually written to, and with
> > > > correct values?
> > >
> > > I tried the following combinations:-
> > >
> > > 1. Top of tree linux-next + This patch set
> > >
> > > * icc_sync_state called
> > > * No crash or hang observed
> > > * From /sys/kernel/debug/clk/clk_summary can see the
> > > relevant clocks are set to the expected rates (compared
> > > with downstream kernel)
> > >
> > > 2. Top of tree linux-next + This patch set + PCIe enablement
> > >
> > > * icc_sync_state NOT called
> >
> > If sync_state() is not being called, that usually means that some
> > interconnect consumers (PCIe?) or their dependencies haven't
> > probed successfully. That can be checked in /sys/class/devlink/.../status,
> > though I am not sure how this works for PCI devices.
> >
> > You can also manually force a call to sync_state by writing "1" to
> > the interconnect provider's /sys/devices/.../state_synced
> >
> > Anyway, the question is if PCIe and NSS work without this driver?
>
> No.
>
> > If they work, is this because the clocks are turned on by default
> > or by the boot loader?
>
> Initially, the PCIe/NSS drivers enabled these clocks directly
> by listing them in their DT nodes. Based on community
> feedback this was removed, and after that PCIe/NSS did not work.
>
> > Then if an interconnect path (clock) gets disabled either when we
> > reach a sync_state (with no bandwidth requests) or we explicitly
> > call icc_set_bw() with 0 bandwidth values, I would expect that
> > these PCIe and NSS devices would not function anymore (it might
> > save some power etc.) and if this is unexpected we should see a
> > crash or hang...
> >
> > Can you confirm this?
>
> With ICC enabled, icc_set_bw (with non-zero values) is called by
> PCIe and NSS drivers. Haven't checked icc_set_bw with zero
> values.
>
> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> NSS: ppe_icc_init -> icc_set_bw
>
> I believe sync_state is not getting called since there is a
> non-zero bandwidth request, which seems to be aligned with
> your explanation.

This doesn't look correct. sync_state is called once all
consumers have probed. It doesn't matter whether those consumers have
non-zero bandwidth requests or not.


--
With best wishes
Dmitry

2024-06-06 14:07:12

by Konrad Dybcio

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> <[email protected]> wrote:
>>
>> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
>>> Hi Varada,
>>>
>>> Thank you for your work on this!
>>>
>>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
>>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
>>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
>>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
>>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
>>>>>>>> However, these should not be exposed as just clocks; they should
>>>>>>>> align with other Qualcomm SoCs that handle these clocks from an
>>>>>>>> interconnect provider.
>>>>>>>>
>>>>>>>> Hence, add icc provider capability to the gcc node so that
>>>>>>>> peripherals can use the interconnect facility to enable these
>>>>>>>> clocks.
>>>>>>>>
>>>>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>>>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>>>>>>> ---
>>>>>>>
>>>>>>> If this is all you do to enable interconnect (which is not the case,
>>>>>>> as this patch only satisfies the bindings checker, the meaningful
>>>>>>> change happens in the previous patch) and nothing explodes, this is
>>>>>>> an apparent sign of your driver doing nothing.
>>>>>>
>>>>>> It appears to do nothing because we are just enabling the clock
>>>>>> provider to also act as an interconnect provider. Only when the
>>>>>> consumers are enabled with interconnect usage will this create
>>>>>> paths and turn on the relevant NoC clocks.
>>>>>
>>>>> No, with sync_state it actually does "something" (sets the interconnect
>>>>> path bandwidths to zero). And *this* patch does nothing functionally,
>>>>> it only makes the dt checker happy.
>>>>
>>>> I understand.
>>>>
>>>>>> This interconnect will be used by the PCIe and NSS blocks. When
>>>>>> those patches were posted earlier, they were put on hold until
>>>>>> an interconnect driver became available.
>>>>>>
>>>>>> Once this patch gets in, PCIe for example will make use of icc.
>>>>>> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
>>>>>>
>>>>>> The 'pcieX' nodes will include the following entries.
>>>>>>
>>>>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>>>>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>>>>>> interconnect-names = "pcie-mem", "cpu-pcie";
>>>>>
>>>>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
>>>>
>>>> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
>>>> clock. Hence, interconnect is not specified there.
>>>>
>>>> MEMNOC to System NOC interfaces seem to be enabled automatically.
>>>> Software doesn't have to turn on or program specific clocks.
>>>>
>>>>>>> The expected reaction to "enabling interconnect" without defining the
>>>>>>> required paths for your hardware would be a crash-on-sync_state, as all
>>>>>>> unused (from Linux's POV) resources ought to be shut down.
>>>>>>>
>>>>>>> Because you lack sync_state, the interconnects silently retain the state
>>>>>>> that they were left in (which is not deterministic), and that's precisely
>>>>>>> what we want to avoid.
>>>>>>
>>>>>> I tried setting 'sync_state' to icc_sync_state so that it gets invoked and
>>>>>> didn't see any crash.
>>>>>
>>>>> Have you confirmed that the registers are actually written to, and with
>>>>> correct values?
>>>>
>>>> I tried the following combinations:-
>>>>
>>>> 1. Top of tree linux-next + This patch set
>>>>
>>>> * icc_sync_state called
>>>> * No crash or hang observed
>>>> * From /sys/kernel/debug/clk/clk_summary can see the
>>>> relevant clocks are set to the expected rates (compared
>>>> with downstream kernel)
>>>>
>>>> 2. Top of tree linux-next + This patch set + PCIe enablement
>>>>
>>>> * icc_sync_state NOT called
>>>
>>> If sync_state() is not being called, that usually means that some
>>> interconnect consumers (PCIe?) or their dependencies haven't
>>> probed successfully. That can be checked in /sys/class/devlink/.../status,
>>> though I am not sure how this works for PCI devices.
>>>
>>> You can also manually force a call to sync_state by writing "1" to
>>> the interconnect provider's /sys/devices/.../state_synced
>>>
>>> Anyway, the question is if PCIe and NSS work without this driver?
>>
>> No.
>>
>>> If they work, is this because the clocks are turned on by default
>>> or by the boot loader?
>>
>> Initially, the PCIe/NSS drivers enabled these clocks directly
>> by listing them in their DT nodes. Based on community
>> feedback this was removed, and after that PCIe/NSS did not work.
>>
>>> Then if an interconnect path (clock) gets disabled either when we
>>> reach a sync_state (with no bandwidth requests) or we explicitly
>>> call icc_set_bw() with 0 bandwidth values, I would expect that
>>> these PCIe and NSS devices would not function anymore (it might
>>> save some power etc.) and if this is unexpected we should see a
>>> crash or hang...
>>>
>>> Can you confirm this?
>>
>> With ICC enabled, icc_set_bw (with non-zero values) is called by
>> PCIe and NSS drivers. Haven't checked icc_set_bw with zero
>> values.
>>
>> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
>> NSS: ppe_icc_init -> icc_set_bw
>>
>> I believe sync_state is not getting called since there is a
>> non-zero bandwidth request, which seems to be aligned with
>> your explanation.
>
> This doesn't look correct. sync_state is being called once all
> consumers are probed. It doesn't matter whether those consumers have
> non-zero bandwidth requests or not.

/sys/kernel/debug/devices_deferred may have some useful info, too

Konrad

2024-06-11 09:42:44

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > <[email protected]> wrote:
> >>
> >> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> >>> Hi Varada,
> >>>
> >>> Thank you for your work on this!
> >>>
> >>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
> >>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> >>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> >>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> >>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
> >>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
> >>>>>>>> However, these should not be exposed as just clocks; they should
> >>>>>>>> align with other Qualcomm SoCs that handle these clocks from an
> >>>>>>>> interconnect provider.
> >>>>>>>>
> >>>>>>>> Hence, add icc provider capability to the gcc node so that
> >>>>>>>> peripherals can use the interconnect facility to enable these
> >>>>>>>> clocks.
> >>>>>>>>
> >>>>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
> >>>>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
> >>>>>>>> ---
> >>>>>>>
> >>>>>>> If this is all you do to enable interconnect (which is not the case,
> >>>>>>> as this patch only satisfies the bindings checker, the meaningful
> >>>>>>> change happens in the previous patch) and nothing explodes, this is
> >>>>>>> an apparent sign of your driver doing nothing.
> >>>>>>
> >>>>>> It appears to do nothing because we are just enabling the clock
> >>>>>> provider to also act as an interconnect provider. Only when the
> >>>>>> consumers are enabled with interconnect usage will this create
> >>>>>> paths and turn on the relevant NoC clocks.
> >>>>>
> >>>>> No, with sync_state it actually does "something" (sets the interconnect
> >>>>> path bandwidths to zero). And *this* patch does nothing functionally,
> >>>>> it only makes the dt checker happy.
> >>>>
> >>>> I understand.
> >>>>
> >>>>>> This interconnect will be used by the PCIe and NSS blocks. When
> >>>>>> those patches were posted earlier, they were put on hold until
> >>>>>> an interconnect driver became available.
> >>>>>>
> >>>>>> Once this patch gets in, PCIe for example will make use of icc.
> >>>>>> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> >>>>>>
> >>>>>> The 'pcieX' nodes will include the following entries.
> >>>>>>
> >>>>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> >>>>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> >>>>>> interconnect-names = "pcie-mem", "cpu-pcie";
> >>>>>
> >>>>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> >>>>
> >>>> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> >>>> clock. Hence, interconnect is not specified there.
> >>>>
> >>>> MEMNOC to System NOC interfaces seem to be enabled automatically.
> >>>> Software doesn't have to turn on or program specific clocks.
> >>>>
> >>>>>>> The expected reaction to "enabling interconnect" without defining the
> >>>>>>> required paths for your hardware would be a crash-on-sync_state, as all
> >>>>>>> unused (from Linux's POV) resources ought to be shut down.
> >>>>>>>
> >>>>>>> Because you lack sync_state, the interconnects silently retain the state
> >>>>>>> that they were left in (which is not deterministic), and that's precisely
> >>>>>>> what we want to avoid.
> >>>>>>
> >>>>>> I tried setting 'sync_state' to icc_sync_state so that it gets invoked and
> >>>>>> didn't see any crash.
> >>>>>
> >>>>> Have you confirmed that the registers are actually written to, and with
> >>>>> correct values?
> >>>>
> >>>> I tried the following combinations:-
> >>>>
> >>>> 1. Top of tree linux-next + This patch set
> >>>>
> >>>> * icc_sync_state called
> >>>> * No crash or hang observed
> >>>> * From /sys/kernel/debug/clk/clk_summary can see the
> >>>> relevant clocks are set to the expected rates (compared
> >>>> with downstream kernel)
> >>>>
> >>>> 2. Top of tree linux-next + This patch set + PCIe enablement
> >>>>
> >>>> * icc_sync_state NOT called
> >>>
> >>> If sync_state() is not being called, that usually means that some
> >>> interconnect consumers (PCIe?) or their dependencies haven't
> >>> probed successfully. That can be checked in /sys/class/devlink/.../status,
> >>> though I am not sure how this works for PCI devices.
> >>>
> >>> You can also manually force a call to sync_state by writing "1" to
> >>> the interconnect provider's /sys/devices/.../state_synced
> >>>
> >>> Anyway, the question is if PCIe and NSS work without this driver?
> >>
> >> No.
> >>
> >>> If they work, is this because the clocks are turned on by default
> >>> or by the boot loader?
> >>
> >> Initially, the PCIe/NSS drivers enabled these clocks directly
> >> by listing them in their DT nodes. Based on community
> >> feedback this was removed, and after that PCIe/NSS did not work.
> >>
> >>> Then if an interconnect path (clock) gets disabled either when we
> >>> reach a sync_state (with no bandwidth requests) or we explicitly
> >>> call icc_set_bw() with 0 bandwidth values, I would expect that
> >>> these PCIe and NSS devices would not function anymore (it might
> >>> save some power etc.) and if this is unexpected we should see a
> >>> crash or hang...
> >>>
> >>> Can you confirm this?
> >>
> >> With ICC enabled, icc_set_bw (with non-zero values) is called by
> >> PCIe and NSS drivers. Haven't checked icc_set_bw with zero
> >> values.
> >>
> >> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> >> NSS: ppe_icc_init -> icc_set_bw
> >>
> >> I believe sync_state is not getting called since there is a
> >> non-zero bandwidth request, which seems to be aligned with
> >> your explanation.
> >
> > This doesn't look correct. sync_state is being called once all
> > consumers are probed. It doesn't matter whether those consumers have
> > non-zero bandwidth requests or not.
>
> /sys/kernel/debug/devices_deferred may have some useful info, too

/sys/kernel/debug/devices_deferred seems to be empty

# mount | grep -w debugfs
none on /sys/kernel/debug type debugfs (rw,relatime)

# cat /sys/kernel/debug/devices_deferred | wc -l
0

Added the following print to icc_sync_state,

@@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
 	struct icc_node *n;
 	static int count;
 
+	printk("--> %s: %d %d\n", __func__, providers_count, count);
 	count++;
 
 	if (count < providers_count)
 		return;

icc_sync_state seems to be called once,

# dmesg | grep icc_sync_state
[ 12.260544] --> icc_sync_state: 2 0

Since 'providers_count' is greater than 'count', icc_sync_state
returns before doing anything.

Thanks
Varada

2024-06-11 11:34:42

by Georgi Djakov

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On 11.06.24 12:42, Varadarajan Narayanan wrote:
> On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
>> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
>>> On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
>>> <[email protected]> wrote:
>>>>
>>>> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
>>>>> Hi Varada,
>>>>>
>>>>> Thank you for your work on this!
>>>>>
>>>>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
>>>>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
>>>>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
>>>>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>>>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
>>>>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
>>>>>>>>>> However, these should not be exposed as just clocks; they should
>>>>>>>>>> align with other Qualcomm SoCs that handle these clocks from an
>>>>>>>>>> interconnect provider.
>>>>>>>>>>
>>>>>>>>>> Hence, add icc provider capability to the gcc node so that
>>>>>>>>>> peripherals can use the interconnect facility to enable these
>>>>>>>>>> clocks.
>>>>>>>>>>
>>>>>>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>>>>>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>>>>>>>>> ---
>>>>>>>>>
>>>>>>>>> If this is all you do to enable interconnect (which is not the case,
>>>>>>>>> as this patch only satisfies the bindings checker, the meaningful
>>>>>>>>> change happens in the previous patch) and nothing explodes, this is
>>>>>>>>> an apparent sign of your driver doing nothing.
>>>>>>>>
>>>>>>>> It appears to do nothing because we are just enabling the clock
>>>>>>>> provider to also act as an interconnect provider. Only when the
>>>>>>>> consumers are enabled with interconnect usage will this create
>>>>>>>> paths and turn on the relevant NoC clocks.
>>>>>>>
>>>>>>> No, with sync_state it actually does "something" (sets the interconnect
>>>>>>> path bandwidths to zero). And *this* patch does nothing functionally,
>>>>>>> it only makes the dt checker happy.
>>>>>>
>>>>>> I understand.
>>>>>>
>>>>>>>> This interconnect will be used by the PCIe and NSS blocks. When
>>>>>>>> those patches were posted earlier, they were put on hold until
>>>>>>>> an interconnect driver became available.
>>>>>>>>
>>>>>>>> Once this patch gets in, PCIe for example will make use of icc.
>>>>>>>> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
>>>>>>>>
>>>>>>>> The 'pcieX' nodes will include the following entries.
>>>>>>>>
>>>>>>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>>>>>>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>>>>>>>> interconnect-names = "pcie-mem", "cpu-pcie";
>>>>>>>
>>>>>>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
>>>>>>
>>>>>> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
>>>>>> clock. Hence, interconnect is not specified there.
>>>>>>
>>>>>> MEMNOC to System NOC interfaces seem to be enabled automatically.
>>>>>> Software doesn't have to turn on or program specific clocks.
>>>>>>
>>>>>>>>> The expected reaction to "enabling interconnect" without defining the
>>>>>>>>> required paths for your hardware would be a crash-on-sync_state, as all
>>>>>>>>> unused (from Linux's POV) resources ought to be shut down.
>>>>>>>>>
>>>>>>>>> Because you lack sync_state, the interconnects silently retain the state
>>>>>>>>> that they were left in (which is not deterministic), and that's precisely
>>>>>>>>> what we want to avoid.
>>>>>>>>
>>>>>>>> I tried setting 'sync_state' to icc_sync_state so that it gets invoked and
>>>>>>>> didn't see any crash.
>>>>>>>
>>>>>>> Have you confirmed that the registers are actually written to, and with
>>>>>>> correct values?
>>>>>>
>>>>>> I tried the following combinations:-
>>>>>>
>>>>>> 1. Top of tree linux-next + This patch set
>>>>>>
>>>>>> * icc_sync_state called
>>>>>> * No crash or hang observed
>>>>>> * From /sys/kernel/debug/clk/clk_summary can see the
>>>>>> relevant clocks are set to the expected rates (compared
>>>>>> with downstream kernel)
>>>>>>
>>>>>> 2. Top of tree linux-next + This patch set + PCIe enablement
>>>>>>
>>>>>> * icc_sync_state NOT called
>>>>>
>>>>> If sync_state() is not being called, that usually means that some
>>>>> interconnect consumers (PCIe?) or their dependencies haven't
>>>>> probed successfully. That can be checked in /sys/class/devlink/.../status,
>>>>> though I am not sure how this works for PCI devices.
>>>>>
>>>>> You can also manually force a call to sync_state by writing "1" to
>>>>> the interconnect provider's /sys/devices/.../state_synced
>>>>>
>>>>> Anyway, the question is if PCIe and NSS work without this driver?
>>>>
>>>> No.
>>>>
>>>>> If they work, is this because the clocks are turned on by default
>>>>> or by the boot loader?
>>>>
>>>> Initially, the PCIe/NSS drivers enabled these clocks directly
>>>> by listing them in their DT nodes. Based on community
>>>> feedback this was removed, and after that PCIe/NSS did not work.
>>>>
>>>>> Then if an interconnect path (clock) gets disabled either when we
>>>>> reach a sync_state (with no bandwidth requests) or we explicitly
>>>>> call icc_set_bw() with 0 bandwidth values, I would expect that
>>>>> these PCIe and NSS devices would not function anymore (it might
>>>>> save some power etc.) and if this is unexpected we should see a
>>>>> crash or hang...
>>>>>
>>>>> Can you confirm this?
>>>>
>>>> With ICC enabled, icc_set_bw (with non-zero values) is called by
>>>> PCIe and NSS drivers. Haven't checked icc_set_bw with zero
>>>> values.
>>>>
>>>> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
>>>> NSS: ppe_icc_init -> icc_set_bw
>>>>
>>>> I believe sync_state is not getting called since there is a
>>>> non-zero bandwidth request, which seems to be aligned with
>>>> your explanation.
>>>
>>> This doesn't look correct. sync_state is being called once all
>>> consumers are probed. It doesn't matter whether those consumers have
>>> non-zero bandwidth requests or not.
>>
>> /sys/kernel/debug/devices_deferred may have some useful info, too
>
> /sys/kernel/debug/devices_deferred seems to be empty
>
> # mount | grep -w debugfs
> none on /sys/kernel/debug type debugfs (rw,relatime)
>
> # cat /sys/kernel/debug/devices_deferred | wc -l
> 0
>
> Added the following print to icc_sync_state,
>
> @@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
>  	struct icc_node *n;
>  	static int count;
>  
> +	printk("--> %s: %d %d\n", __func__, providers_count, count);
>  	count++;
>  
>  	if (count < providers_count)
>  		return;
>
> icc_sync_state seems to be called once,
>
> # dmesg | grep icc_sync_state
> [ 12.260544] --> icc_sync_state: 2 0
>
> Since 'providers_count' is greater than 'count', icc_sync_state
> seems to return before doing anything.

Is there also another interconnect provider on this platform, other
than the gcc? Check for DT nodes that have the #interconnect-cells
property. Are all providers probing successfully?

All providers must probe, as there might be paths that cross multiple
providers and we can't get into sync-state with a topology that is
only partially initialized.

Thanks,
Georgi
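The providers_count gating that the debug print above exposes can be sketched in a few lines of plain C. This is only a simplified model of the counting logic at the top of icc_sync_state() in drivers/interconnect/core.c, not the kernel's actual code; `icc_sync_state_model` and the hardcoded `providers_count` are illustrative stand-ins:

```c
#include <stdbool.h>

/*
 * Simplified model of the gate at the top of icc_sync_state().
 * Illustrative only: once the gate opens, the real function walks
 * every provider and node and disables paths that no consumer has
 * requested bandwidth on.
 */
static int providers_count = 2;	/* e.g. gcc + nsscc both registered */

static bool icc_sync_state_model(void)
{
	static int count;

	count++;
	if (count < providers_count)
		return false;	/* not every provider has reached sync_state */

	/* ... aggregate requests and shut down unused paths ... */
	return true;
}
```

With two registered providers but only one of them wiring up a sync_state callback, the function is entered once, `count` stays below `providers_count`, and the early return matches the "icc_sync_state: 2 0" seen in the dmesg output above.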

2024-06-12 06:30:50

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
> On 11.06.24 12:42, Varadarajan Narayanan wrote:
> > On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> > > On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > > > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > > > <[email protected]> wrote:
> > > > >
> > > > > On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> > > > > > Hi Varada,
> > > > > >
> > > > > > Thank you for your work on this!
> > > > > >
> > > > > > On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > > > > > > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > > > > > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > > > > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > > > > > > IPQ SoCs don't involve RPM in managing NoC related clocks and
> > > > > > > > > > > there is no NoC scaling. Linux itself handles these clocks.
> > > > > > > > > > > However, these should not be exposed as just clocks and align
> > > > > > > > > > > with other Qualcomm SoCs that handle these clocks from an
> > > > > > > > > > > interconnect provider.
> > > > > > > > > > >
> > > > > > > > > > > Hence include icc provider capability to the gcc node so that
> > > > > > > > > > > peripherals can use the interconnect facility to enable these
> > > > > > > > > > > clocks.
> > > > > > > > > > >
> > > > > > > > > > > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > > > > > > > > > > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > > > > > > > > > > ---
> > > > > > > > > >
> > > > > > > > > > If this is all you do to enable interconnect (which is not the case,
> > > > > > > > > > as this patch only satisfies the bindings checker, the meaningful
> > > > > > > > > > change happens in the previous patch) and nothing explodes, this is
> > > > > > > > > > an apparent sign of your driver doing nothing.
> > > > > > > > >
> > > > > > > > > It appears to do nothing because, we are just enabling the clock
> > > > > > > > > provider to also act as interconnect provider. Only when the
> > > > > > > > > consumers are enabled with interconnect usage, this will create
> > > > > > > > > paths and turn on the relevant NOC clocks.
> > > > > > > >
> > > > > > > > No, with sync_state it actually does "something" (sets the interconnect
> > > > > > > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > > > > > > it only makes the dt checker happy.
> > > > > > >
> > > > > > > I understand.
> > > > > > >
> > > > > > > > > This interconnect will be used by the PCIe and NSS blocks. When
> > > > > > > > > those patches were posted earlier, they were put on hold until
> > > > > > > > > interconnect driver is available.
> > > > > > > > >
> > > > > > > > > Once this patch gets in, PCIe for example will make use of icc.
> > > > > > > > > Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> > > > > > > > >
> > > > > > > > > The 'pcieX' nodes will include the following entries.
> > > > > > > > >
> > > > > > > > > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > > > > > > > > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > > > > > > > > interconnect-names = "pcie-mem", "cpu-pcie";
> > > > > > > >
> > > > > > > > Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> > > > > > >
> > > > > > > For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> > > > > > > clock. Hence, interconnect is not specified there.
> > > > > > >
> > > > > > > MEMNOC to System NOC interfaces seem to be enabled automatically.
> > > > > > > Software doesn't have to turn on or program specific clocks.
> > > > > > >
> > > > > > > > > > The expected reaction to "enabling interconnect" without defining the
> > > > > > > > > > required paths for your hardware would be a crash-on-sync_state, as all
> > > > > > > > > > unused (from Linux's POV) resources ought to be shut down.
> > > > > > > > > >
> > > > > > > > > > Because you lack sync_state, the interconnects silently retain the state
> > > > > > > > > > that they were left in (which is not deterministic), and that's precisely
> > > > > > > > > > what we want to avoid.
> > > > > > > > >
> > > > > > > > > I tried to set 'sync_state' to icc_sync_state to be invoked and
> > > > > > > > > didn't see any crash.
> > > > > > > >
> > > > > > > > Have you confirmed that the registers are actually written to, and with
> > > > > > > > correct values?
> > > > > > >
> > > > > > > I tried the following combinations:-
> > > > > > >
> > > > > > > 1. Top of tree linux-next + This patch set
> > > > > > >
> > > > > > > * icc_sync_state called
> > > > > > > * No crash or hang observed
> > > > > > > * From /sys/kernel/debug/clk/clk_summary can see the
> > > > > > > relevant clocks are set to the expected rates (compared
> > > > > > > with downstream kernel)
> > > > > > >
> > > > > > > 2. Top of tree linux-next + This patch set + PCIe enablement
> > > > > > >
> > > > > > > * icc_sync_state NOT called
> > > > > >
> > > > > > If sync_state() is not being called, that usually means that there
> > > > > > are interconnect consumers that haven't probed successfully (PCIe?)
> > > > > > or their dependencies. That can be checked in /sys/class/devlink/.../status
> > > > > > But i am not sure how this works for PCI devices however.
> > > > > >
> > > > > > You can also manually force a call to sync_state by writing "1" to
> > > > > > the interconnect provider's /sys/devices/.../state_synced
> > > > > >
> > > > > > Anyway, the question is if PCIe and NSS work without this driver?
> > > > >
> > > > > No.
> > > > >
> > > > > > If they work, is this because the clocks are turned on by default
> > > > > > or by the boot loader?
> > > > >
> > > > > Initially, the PCIe/NSS driver enabled these clocks directly
> > > > > by having them in their DT nodes itself. Based on community
> > > > > feedback this was removed and after that PCIe/NSS did not work.
> > > > >
> > > > > > Then if an interconnect path (clock) gets disabled either when we
> > > > > > reach a sync_state (with no bandwidth requests) or we explicitly
> > > > > > call icc_set_bw() with 0 bandwidth values, i would expect that
> > > > > > these PCIe and NSS devices would not function anymore (it might
> > > > > > save some power etc) and if this is unexpected we should see a
>>>>>> crash or hang...
> > > > > >
> > > > > > Can you confirm this?
> > > > >
> > > > > With ICC enabled, icc_set_bw (with non-zero values) is called by
> > > > > PCIe and NSS drivers. Haven't checked with icc_set_bw with zero
> > > > > values.
> > > > >
> > > > > PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> > > > > NSS: ppe_icc_init -> icc_set_bw
> > > > >
> > > > > I believe sync_state is not getting called since there is a
> > > > > non-zero set bandwidth request. Which seems to be aligned with
> > > > > your explanation.
> > > >
> > > > This doesn't look correct. sync_state is being called once all
> > > > consumers are probed. It doesn't matter whether those consumers have
> > > > non-zero bandwidth requests or not.
> > >
> > > /sys/kernel/debug/devices_deferred may have some useful info, too
> >
> > /sys/kernel/debug/devices_deferred seems to be empty
> >
> > # mount | grep -w debugfs
> > none on /sys/kernel/debug type debugfs (rw,relatime)
> >
> > # cat /sys/kernel/debug/devices_deferred | wc -l
> > 0
> >
> > Added the following print to icc_sync_state,
> >
> > @@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
> > struct icc_node *n;
> > static int count;
> >
> > + printk("--> %s: %d %d\n", __func__, providers_count, count);
> > count++;
> >
> > if (count < providers_count)
> > return;
> >
> > icc_sync_state seems to be called once,
> >
> > # dmesg | grep icc_sync_state
> > [ 12.260544] --> icc_sync_state: 2 0
> >
> > Since 'providers_count' is greater than 'count', icc_sync_state
> > seems to return before doing anything.
>
> Is there also another interconnect provider on this platform, other
> than the gcc? Check for DT nodes that have the #interconnect-cells
> property.

Yes, there are two interconnect providers:

# find /proc/device-tree/ -name '#interconnect-cells'
/proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
/proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells

Note: gcc => clock-controller@1800000
nsscc => clock-controller@39b00000

> Are all providers probing successfully?

Yes. I printed the return value of their probe functions...

# dmesg | grep probe:
[ 0.037815] --> gcc_ipq9574_probe: return 0
[ 2.078215] --> nss_cc_ipq9574_probe: return 0


> All providers must probe, as there might be paths that cross multiple
> providers and we can't get into sync-state with a topology that is
> only partially initialized.

It does look like both providers' probes have completed. And
there aren't any paths that cross providers:

interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
<&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;

interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
<&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;

interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
<&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;

interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;

interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
<&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
<&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
<&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
<&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
<&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
<&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;

Thanks
Varada

2024-06-12 08:48:57

by Georgi Djakov

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On 12.06.24 9:30, Varadarajan Narayanan wrote:
> On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
>> On 11.06.24 12:42, Varadarajan Narayanan wrote:
>>> On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
>>>> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
>>>>> On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
>>>>>>> Hi Varada,
>>>>>>>
>>>>>>> Thank you for your work on this!
>>>>>>>
>>>>>>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
>>>>>>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
>>>>>>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
>>>>>>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>>>>>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
>>>>>>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
>>>>>>>>>>>> However, these should not be exposed as just clocks and align
>>>>>>>>>>>> with other Qualcomm SoCs that handle these clocks from an
>>>>>>>>>>>> interconnect provider.
>>>>>>>>>>>>
>>>>>>>>>>>> Hence include icc provider capability to the gcc node so that
>>>>>>>>>>>> peripherals can use the interconnect facility to enable these
>>>>>>>>>>>> clocks.
>>>>>>>>>>>>
>>>>>>>>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>>>>>>>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>>>>>>>>>>> ---
>>>>>>>>>>>
>>>>>>>>>>> If this is all you do to enable interconnect (which is not the case,
>>>>>>>>>>> as this patch only satisfies the bindings checker, the meaningful
>>>>>>>>>>> change happens in the previous patch) and nothing explodes, this is
>>>>>>>>>>> an apparent sign of your driver doing nothing.
>>>>>>>>>>
>>>>>>>>>> It appears to do nothing because, we are just enabling the clock
>>>>>>>>>> provider to also act as interconnect provider. Only when the
>>>>>>>>>> consumers are enabled with interconnect usage, this will create
>>>>>>>>>> paths and turn on the relevant NOC clocks.
>>>>>>>>>
>>>>>>>>> No, with sync_state it actually does "something" (sets the interconnect
>>>>>>>>> path bandwidths to zero). And *this* patch does nothing functionally,
>>>>>>>>> it only makes the dt checker happy.
>>>>>>>>
>>>>>>>> I understand.
>>>>>>>>
>>>>>>>>>> This interconnect will be used by the PCIe and NSS blocks. When
>>>>>>>>>> those patches were posted earlier, they were put on hold until
>>>>>>>>>> interconnect driver is available.
>>>>>>>>>>
>>>>>>>>>> Once this patch gets in, PCIe for example will make use of icc.
>>>>>>>>>> Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
>>>>>>>>>>
>>>>>>>>>> The 'pcieX' nodes will include the following entries.
>>>>>>>>>>
>>>>>>>>>> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>>>>>>>>>> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>>>>>>>>>> interconnect-names = "pcie-mem", "cpu-pcie";
>>>>>>>>>
>>>>>>>>> Okay. What about USB that's already enabled? And BIMC/MEMNOC?
>>>>>>>>
>>>>>>>> For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
>>>>>>>> clock. Hence, interconnect is not specified there.
>>>>>>>>
>>>>>>>> MEMNOC to System NOC interfaces seem to be enabled automatically.
>>>>>>>> Software doesn't have to turn on or program specific clocks.
>>>>>>>>
>>>>>>>>>>> The expected reaction to "enabling interconnect" without defining the
>>>>>>>>>>> required paths for your hardware would be a crash-on-sync_state, as all
>>>>>>>>>>> unused (from Linux's POV) resources ought to be shut down.
>>>>>>>>>>>
>>>>>>>>>>> Because you lack sync_state, the interconnects silently retain the state
>>>>>>>>>>> that they were left in (which is not deterministic), and that's precisely
>>>>>>>>>>> what we want to avoid.
>>>>>>>>>>
>>>>>>>>>> I tried to set 'sync_state' to icc_sync_state to be invoked and
>>>>>>>>>> didn't see any crash.
>>>>>>>>>
>>>>>>>>> Have you confirmed that the registers are actually written to, and with
>>>>>>>>> correct values?
>>>>>>>>
>>>>>>>> I tried the following combinations:-
>>>>>>>>
>>>>>>>> 1. Top of tree linux-next + This patch set
>>>>>>>>
>>>>>>>> * icc_sync_state called
>>>>>>>> * No crash or hang observed
>>>>>>>> * From /sys/kernel/debug/clk/clk_summary can see the
>>>>>>>> relevant clocks are set to the expected rates (compared
>>>>>>>> with downstream kernel)
>>>>>>>>
>>>>>>>> 2. Top of tree linux-next + This patch set + PCIe enablement
>>>>>>>>
>>>>>>>> * icc_sync_state NOT called
>>>>>>>
>>>>>>> If sync_state() is not being called, that usually means that there
>>>>>>> are interconnect consumers that haven't probed successfully (PCIe?)
>>>>>>> or their dependencies. That can be checked in /sys/class/devlink/.../status
>>>>>>> But i am not sure how this works for PCI devices however.
>>>>>>>
>>>>>>> You can also manually force a call to sync_state by writing "1" to
>>>>>>> the interconnect provider's /sys/devices/.../state_synced
>>>>>>>
>>>>>>> Anyway, the question is if PCIe and NSS work without this driver?
>>>>>>
>>>>>> No.
>>>>>>
>>>>>>> If they work, is this because the clocks are turned on by default
>>>>>>> or by the boot loader?
>>>>>>
>>>>>> Initially, the PCIe/NSS driver enabled these clocks directly
>>>>>> by having them in their DT nodes itself. Based on community
>>>>>> feedback this was removed and after that PCIe/NSS did not work.
>>>>>>
>>>>>>> Then if an interconnect path (clock) gets disabled either when we
>>>>>>> reach a sync_state (with no bandwidth requests) or we explicitly
>>>>>>> call icc_set_bw() with 0 bandwidth values, i would expect that
>>>>>>> these PCIe and NSS devices would not function anymore (it might
>>>>>>> save some power etc) and if this is unexpected we should see a
>>>>>>> crash or hang...
>>>>>>>
>>>>>>> Can you confirm this?
>>>>>>
>>>>>> With ICC enabled, icc_set_bw (with non-zero values) is called by
>>>>>> PCIe and NSS drivers. Haven't checked with icc_set_bw with zero
>>>>>> values.
>>>>>>
>>>>>> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
>>>>>> NSS: ppe_icc_init -> icc_set_bw
>>>>>>
>>>>>> I believe sync_state is not getting called since there is a
>>>>>> non-zero set bandwidth request. Which seems to be aligned with
>>>>>> your explanation.
>>>>>
>>>>> This doesn't look correct. sync_state is being called once all
>>>>> consumers are probed. It doesn't matter whether those consumers have
>>>>> non-zero bandwidth requests or not.
>>>>
>>>> /sys/kernel/debug/devices_deferred may have some useful info, too
>>>
>>> /sys/kernel/debug/devices_deferred seems to be empty
>>>
>>> # mount | grep -w debugfs
>>> none on /sys/kernel/debug type debugfs (rw,relatime)
>>>
>>> # cat /sys/kernel/debug/devices_deferred | wc -l
>>> 0
>>>
>>> Added the following print to icc_sync_state,
>>>
>>> @@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
>>> struct icc_node *n;
>>> static int count;
>>>
>>> + printk("--> %s: %d %d\n", __func__, providers_count, count);
>>> count++;
>>>
>>> if (count < providers_count)
>>> return;
>>>
>>> icc_sync_state seems to be called once,
>>>
>>> # dmesg | grep icc_sync_state
>>> [ 12.260544] --> icc_sync_state: 2 0
>>>
>>> Since 'providers_count' is greater than 'count', icc_sync_state
>>> seems to return before doing anything.
>>
>> Is there also another interconnect provider on this platform, other
>> than the gcc? Check for DT nodes that have the #interconnect-cells
>> property.
>
> Yes there are two interconnect providers
>
> # find /proc/device-tree/ -name '#interconnect-cells'
> /proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
> /proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells
>
> Note: gcc => clock-controller@1800000
> nsscc => clock-controller@39b00000
>
>> Are all providers probing successfully?
>
> Yes. I printed the return value of their probe functions...
>
> # dmesg | grep probe:
> [ 0.037815] --> gcc_ipq9574_probe: return 0
> [ 2.078215] --> nss_cc_ipq9574_probe: return 0
>
>
>> All providers must probe, as there might be paths that cross multiple
>> providers and we can't get into sync-state with a topology that is
>> only partially initialized.
>
> It does look like both the providers' probe has completed. And,
> there aren't any paths that cross providers
>
> interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
> <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
> <&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
> <&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>
> interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
> <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
> <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
> <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
> <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
> <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
> <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;

Are the above consumers also probing successfully? Especially the one with
the nsscc paths? Is nss_cc_ipq9574 also using icc_sync_state? sync_state
will be called only when all consumers of the specific provider have probed.

The idea of sync state is to allow all consumers to probe and to request
their paths. Only after that will the framework take into account the
bandwidth values that have been requested by consumers and disable unused
paths.

Sorry, but I am doing a bit of guessing here, as I am missing the complete
picture. So you add interconnect-cells to nsscc, but which DT node
requests the nss and gcc paths? I am failing to find these on the
mailing lists.

BR,
Georgi
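The consumer-side condition described above (sync_state is delivered per provider only after every device-linked consumer has probed) can also be modeled with a small sketch. `struct provider_model` and `consumer_probed` below are made-up names for illustration, not driver-core API:

```c
#include <stdbool.h>

/*
 * Toy model of the driver-core rule: a provider's .sync_state()
 * callback is delivered only once every consumer device-linked to
 * it (e.g. via an "interconnects" DT property) has probed.
 * All names here are illustrative, not kernel API.
 */
struct provider_model {
	int consumers;	/* consumers linked to this provider */
	int probed;	/* how many of them have probed so far */
	bool synced;	/* whether sync_state() has been delivered */
};

static void consumer_probed(struct provider_model *p)
{
	p->probed++;
	if (p->probed == p->consumers)
		p->synced = true;	/* driver core invokes sync_state */
}
```

A consumer that never finishes probing (for example a deferred PCIe node) keeps `probed` below `consumers` indefinitely, so sync_state is never delivered for that provider.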

2024-06-12 10:28:53

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Wed, Jun 12, 2024 at 11:48:17AM +0300, Georgi Djakov wrote:
> On 12.06.24 9:30, Varadarajan Narayanan wrote:
> > On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
> > > On 11.06.24 12:42, Varadarajan Narayanan wrote:
> > > > On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> > > > > On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > > > > > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> > > > > > > > Hi Varada,
> > > > > > > >
> > > > > > > > Thank you for your work on this!
> > > > > > > >
> > > > > > > > On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > > > > > > > > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > > > > > > > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > > > > > > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > > > > > > > > IPQ SoCs don't involve RPM in managing NoC related clocks and
> > > > > > > > > > > > > there is no NoC scaling. Linux itself handles these clocks.
> > > > > > > > > > > > > However, these should not be exposed as just clocks and align
> > > > > > > > > > > > > with other Qualcomm SoCs that handle these clocks from an
> > > > > > > > > > > > > interconnect provider.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hence include icc provider capability to the gcc node so that
> > > > > > > > > > > > > peripherals can use the interconnect facility to enable these
> > > > > > > > > > > > > clocks.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > > > > > > > > > > > > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > > > > > > > > > > > > ---
> > > > > > > > > > > >
> > > > > > > > > > > > If this is all you do to enable interconnect (which is not the case,
> > > > > > > > > > > > as this patch only satisfies the bindings checker, the meaningful
> > > > > > > > > > > > change happens in the previous patch) and nothing explodes, this is
> > > > > > > > > > > > an apparent sign of your driver doing nothing.
> > > > > > > > > > >
> > > > > > > > > > > It appears to do nothing because, we are just enabling the clock
> > > > > > > > > > > provider to also act as interconnect provider. Only when the
> > > > > > > > > > > consumers are enabled with interconnect usage, this will create
> > > > > > > > > > > paths and turn on the relevant NOC clocks.
> > > > > > > > > >
> > > > > > > > > > No, with sync_state it actually does "something" (sets the interconnect
> > > > > > > > > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > > > > > > > > it only makes the dt checker happy.
> > > > > > > > >
> > > > > > > > > I understand.
> > > > > > > > >
> > > > > > > > > > > This interconnect will be used by the PCIe and NSS blocks. When
> > > > > > > > > > > those patches were posted earlier, they were put on hold until
> > > > > > > > > > > interconnect driver is available.
> > > > > > > > > > >
> > > > > > > > > > > Once this patch gets in, PCIe for example will make use of icc.
> > > > > > > > > > > Please refer to https://lore.kernel.org/linux-arm-msm/[email protected]/.
> > > > > > > > > > >
> > > > > > > > > > > The 'pcieX' nodes will include the following entries.
> > > > > > > > > > >
> > > > > > > > > > > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > > > > > > > > > > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> > > > > > > > > > > interconnect-names = "pcie-mem", "cpu-pcie";
> > > > > > > > > >
> > > > > > > > > > Okay. What about USB that's already enabled? And BIMC/MEMNOC?
> > > > > > > > >
> > > > > > > > > For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
> > > > > > > > > clock. Hence, interconnect is not specified there.
> > > > > > > > >
> > > > > > > > > MEMNOC to System NOC interfaces seem to be enabled automatically.
> > > > > > > > > Software doesn't have to turn on or program specific clocks.
> > > > > > > > >
> > > > > > > > > > > > The expected reaction to "enabling interconnect" without defining the
> > > > > > > > > > > > required paths for your hardware would be a crash-on-sync_state, as all
> > > > > > > > > > > > unused (from Linux's POV) resources ought to be shut down.
> > > > > > > > > > > >
> > > > > > > > > > > > Because you lack sync_state, the interconnects silently retain the state
> > > > > > > > > > > > that they were left in (which is not deterministic), and that's precisely
> > > > > > > > > > > > what we want to avoid.
> > > > > > > > > > >
> > > > > > > > > > > I tried to set 'sync_state' to icc_sync_state to be invoked and
> > > > > > > > > > > didn't see any crash.
> > > > > > > > > >
> > > > > > > > > > Have you confirmed that the registers are actually written to, and with
> > > > > > > > > > correct values?
> > > > > > > > >
> > > > > > > > > I tried the following combinations:-
> > > > > > > > >
> > > > > > > > > 1. Top of tree linux-next + This patch set
> > > > > > > > >
> > > > > > > > > * icc_sync_state called
> > > > > > > > > * No crash or hang observed
> > > > > > > > > * From /sys/kernel/debug/clk/clk_summary can see the
> > > > > > > > > relevant clocks are set to the expected rates (compared
> > > > > > > > > with downstream kernel)
> > > > > > > > >
> > > > > > > > > 2. Top of tree linux-next + This patch set + PCIe enablement
> > > > > > > > >
> > > > > > > > > * icc_sync_state NOT called
> > > > > > > >
> > > > > > > > If sync_state() is not being called, that usually means that there
> > > > > > > > are interconnect consumers that haven't probed successfully (PCIe?)
> > > > > > > > or their dependencies. That can be checked in /sys/class/devlink/.../status
> > > > > > > > But i am not sure how this works for PCI devices however.
> > > > > > > >
> > > > > > > > You can also manually force a call to sync_state by writing "1" to
> > > > > > > > the interconnect provider's /sys/devices/.../state_synced
> > > > > > > >
> > > > > > > > Anyway, the question is if PCIe and NSS work without this driver?
> > > > > > >
> > > > > > > No.
> > > > > > >
> > > > > > > > If they work, is this because the clocks are turned on by default
> > > > > > > > or by the boot loader?
> > > > > > >
> > > > > > > Initially, the PCIe/NSS driver enabled these clocks directly
> > > > > > > by having them in their DT nodes itself. Based on community
> > > > > > > feedback this was removed and after that PCIe/NSS did not work.
> > > > > > >
> > > > > > > > Then if an interconnect path (clock) gets disabled either when we
> > > > > > > > reach a sync_state (with no bandwidth requests) or we explicitly
> > > > > > > > call icc_set_bw() with 0 bandwidth values, i would expect that
> > > > > > > > these PCIe and NSS devices would not function anymore (it might
> > > > > > > > save some power etc) and if this is unexpected we should see a
> > > > > > > > crash or hang...
> > > > > > > >
> > > > > > > > Can you confirm this?
> > > > > > >
> > > > > > > With ICC enabled, icc_set_bw (with non-zero values) is called by
> > > > > > > PCIe and NSS drivers. Haven't checked with icc_set_bw with zero
> > > > > > > values.
> > > > > > >
> > > > > > > PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> > > > > > > NSS: ppe_icc_init -> icc_set_bw
> > > > > > >
> > > > > > > I believe sync_state is not getting called since there is a
> > > > > > > non-zero set bandwidth request. Which seems to be aligned with
> > > > > > > your explanation.
> > > > > >
> > > > > > This doesn't look correct. sync_state is being called once all
> > > > > > consumers are probed. It doesn't matter whether those consumers have
> > > > > > non-zero bandwidth requests or not.
> > > > >
> > > > > /sys/kernel/debug/devices_deferred may have some useful info, too
> > > >
> > > > /sys/kernel/debug/devices_deferred seems to be empty
> > > >
> > > > # mount | grep -w debugfs
> > > > none on /sys/kernel/debug type debugfs (rw,relatime)
> > > >
> > > > # cat /sys/kernel/debug/devices_deferred | wc -l
> > > > 0
> > > >
> > > > Added the following print to icc_sync_state,
> > > >
> > > > @@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
> > > > struct icc_node *n;
> > > > static int count;
> > > >
> > > > + printk("--> %s: %d %d\n", __func__, providers_count, count);
> > > > count++;
> > > >
> > > > if (count < providers_count)
> > > > return;
> > > >
> > > > icc_sync_state seems to be called once,
> > > >
> > > > # dmesg | grep icc_sync_state
> > > > [ 12.260544] --> icc_sync_state: 2 0
> > > >
> > > > Since 'providers_count' is greater than 'count', icc_sync_state
> > > > seems to return before doing anything.
> > >
> > > Is there also another interconnect provider on this platform, other
> > > than the gcc? Check for DT nodes that have the #interconnect-cells
> > > property.
> >
> > Yes there are two interconnect providers
> >
> > # find /proc/device-tree/ -name '#interconnect-cells'
> > /proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
> > /proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells
> >
> > Note: gcc => clock-controller@1800000
> > nsscc => clock-controller@39b00000
> >
> > > Are all providers probing successfully?
> >
> > Yes. I printed the return value of their probe functions...
> >
> > # dmesg | grep probe:
> > [ 0.037815] --> gcc_ipq9574_probe: return 0
> > [ 2.078215] --> nss_cc_ipq9574_probe: return 0
> >
> >
> > > All providers must probe, as there might be paths that cross multiple
> > > providers and we can't get into sync-state with a topology that is
> > > only partially initialized.
> >
> > It does look like both the providers' probe has completed. And,
> > there aren't any paths that cross providers
> >
> > interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
> > <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
> >
> > interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
> > <&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;
> >
> > interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
> > <&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;
> >
> > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
> > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
> >
> > interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
> > <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
> > <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
> > <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
> > <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
> > <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
> > <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;
>
> Are the above consumers also probing successfully? Especially the one with
> the nsscc paths? Is nss_cc_ipq9574 also using icc_sync_state? Sync state
> will be called when all consumers of the specific provider are probed.

nsscc_ipq9574 was not using icc_sync_state. After adding that, I
can see the following messages printed from icc_sync_state. I
also added a print to confirm if 'p->set(n, n);' is called.

[ 12.260138] --> icc_sync_state: 2 2
---> [ 12.260166] qcom,gcc-ipq9574 1800000.clock-controller: interconnect provider is in synced state
[ 12.262429] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie0_1lane_m_clk_master)
[ 12.271206] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie0_1lane_m_clk_slave)
[ 12.281225] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie0_1lane_s_clk_master)
[ 12.291118] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie0_1lane_s_clk_slave)
[ 12.300902] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie1_1lane_m_clk_master)
[ 12.310797] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie1_1lane_m_clk_slave)
[ 12.320596] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie1_1lane_s_clk_master)
[ 12.330494] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie1_1lane_s_clk_slave)
[ 12.340299] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie2_2lane_m_clk_master)
[ 12.350224] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie2_2lane_m_clk_slave)
[ 12.360013] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie2_2lane_s_clk_master)
[ 12.369904] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie2_2lane_s_clk_slave)
[ 12.379709] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie3_2lane_m_clk_master)
[ 12.389616] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie3_2lane_m_clk_slave)
[ 12.399415] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie3_2lane_s_clk_master)
[ 12.409312] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie3_2lane_s_clk_slave)
[ 12.419119] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_usb_clk_master)
[ 12.429017] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_usb_clk_slave)
[ 12.437781] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_usb_axi_clk_master)
[ 12.446813] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_usb_axi_clk_slave)
[ 12.456098] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_nsscc_clk_master)
[ 12.465474] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_nsscc_clk_slave)
[ 12.474767] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_clk_master)
[ 12.484138] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_clk_slave)
[ 12.493424] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_1_clk_master)
[ 12.502713] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_1_clk_slave)
[ 12.512261] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_pcnoc_1_clk_master)
[ 12.521379] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_pcnoc_1_clk_slave)
[ 12.531098] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_qosgen_ref_clk_master)
[ 12.540651] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_qosgen_ref_clk_slave)
[ 12.550456] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_timeout_ref_clk_master)
[ 12.559922] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_timeout_ref_clk_slave)
[ 12.569986] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_xo_dcd_clk_master)
[ 12.579886] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_xo_dcd_clk_slave)
[ 12.589344] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_atb_clk_master)
[ 12.598466] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_atb_clk_slave)
[ 12.607834] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_mem_noc_nssnoc_clk_master)
[ 12.617039] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_mem_noc_nssnoc_clk_slave)
[ 12.626497] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_memnoc_clk_master)
[ 12.636049] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_memnoc_clk_slave)
[ 12.645507] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_mem_noc_1_clk_master)
[ 12.654668] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_mem_noc_1_clk_slave)
---> [ 12.664354] qcom,nsscc-ipq9574 39b00000.clock-controller: interconnect provider is in synced state
[ 12.674069] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_clk_master)
[ 12.683012] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_clk_slave)
[ 12.692646] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_cfg_clk_master)
[ 12.702369] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_cfg_clk_slave)
[ 12.712349] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_nss_csr_clk_master)
[ 12.722431] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_nss_csr_clk_slave)
[ 12.732404] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_qsb_clk_master)
[ 12.742473] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_qsb_clk_slave)
[ 12.752801] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_ahb_clk_master)
[ 12.762611] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_ahb_clk_slave)

> The idea of sync state is to allow all consumers to probe and to request
> their paths. Only after that, the framework will take into account the
> bandwidth values that have been requested from consumers and disable unused
> paths.
>
> Sorry, but I am doing a bit of guessing here as I am missing the complete
> picture. So you add interconnect-cells to nsscc, but what is this DT node
> that requests the nss and gcc paths? I am failing to find these on the
> mailing lists.

The gcc based interconnect paths are referenced by PCIe controller
nodes. Please refer to this patch

[PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes
https://lore.kernel.org/linux-arm-msm/[email protected]/

Sorry, I did not post the nsscc-related patches since this base ICC
patch hasn't reached closure. The nsscc patches are very similar
to this gcc-based series. I wanted to gather the issues raised
here and address them in the nsscc series so that it is in a more
acceptable shape.

Thanks
Varada

2024-06-12 12:54:40

by Georgi Djakov

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On 12.06.24 13:28, Varadarajan Narayanan wrote:
> On Wed, Jun 12, 2024 at 11:48:17AM +0300, Georgi Djakov wrote:
>> On 12.06.24 9:30, Varadarajan Narayanan wrote:
>>> On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
>>>> On 11.06.24 12:42, Varadarajan Narayanan wrote:
>>>>> On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
>>>>>> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
>>>>>>> On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
>>>>>>>>> Hi Varada,
>>>>>>>>>
>>>>>>>>> Thank you for your work on this!
>>>>>>>>>
>>>>>>>>> On 2.05.24 12:30, Varadarajan Narayanan wrote:
>>>>>>>>>> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
>>>>>>>>>>> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
>>>>>>>>>>>> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 4/18/24 11:23, Varadarajan Narayanan wrote:
>>>>>>>>>>>>>> IPQ SoCs don't involve RPM in managing NoC related clocks and
>>>>>>>>>>>>>> there is no NoC scaling. Linux itself handles these clocks.
>>>>>>>>>>>>>> However, these should not be exposed as just clocks and align
>>>>>>>>>>>>>> with other Qualcomm SoCs that handle these clocks from an
>>>>>>>>>>>>>> interconnect provider.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hence include icc provider capability to the gcc node so that
>>>>>>>>>>>>>> peripherals can use the interconnect facility to enable these
>>>>>>>>>>>>>> clocks.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Reviewed-by: Dmitry Baryshkov <[email protected]>
>>>>>>>>>>>>>> Signed-off-by: Varadarajan Narayanan <[email protected]>
>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>
>>>>>>>>>>>>> If this is all you do to enable interconnect (which is not the case,
>>>>>>>>>>>>> as this patch only satisfies the bindings checker, the meaningful
>>>>>>>>>>>>> change happens in the previous patch) and nothing explodes, this is
>>>>>>>>>>>>> an apparent sign of your driver doing nothing.
>>>>>>>>>>>>
>>>>>>>>>>>> It appears to do nothing because we are just enabling the clock
>>>>>>>>>>>> provider to also act as an interconnect provider. Only when the
>>>>>>>>>>>> consumers are enabled with interconnect usage will this create
>>>>>>>>>>>> paths and turn on the relevant NoC clocks.
>>>>>>>>>>>
>>>>>>>>>>> No, with sync_state it actually does "something" (sets the interconnect
>>>>>>>>>>> path bandwidths to zero). And *this* patch does nothing functionally,
>>>>>>>>>>> it only makes the dt checker happy.
>>>>>>>>>>

[..]

>
> nsscc_ipq9574 was not using icc_sync_state. After adding that, I
> can see the following messages printed from icc_sync_state. I
> also added a print to confirm if 'p->set(n, n);' is called.

Ok, that's good! So now when all providers are using sync_state, we
can go back to the initial comment from Konrad. I think you should
re-check the tests that you did, as the current results just lead to
more questions than answers. Maybe it was just the sync-state that
was missing, or there is some other issue.

BR,
Georgi

[..]
>
> The gcc based interconnect paths are referenced by PCIe controller
> nodes. Please refer to this patch
>
> [PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes
> https://lore.kernel.org/linux-arm-msm/[email protected]/
>
> Sorry, I did not post the nsscc-related patches since this base ICC
> patch hasn't reached closure. The nsscc patches are very similar
> to this gcc-based series. I wanted to gather the issues raised
> here and address them in the nsscc series so that it is in a more
> acceptable shape.
>
> Thanks
> Varada


2024-06-13 03:49:42

by Varadarajan Narayanan

Subject: Re: [PATCH v9 6/6] arm64: dts: qcom: ipq9574: Add icc provider ability to gcc

On Wed, Jun 12, 2024 at 03:52:51PM +0300, Georgi Djakov wrote:
> On 12.06.24 13:28, Varadarajan Narayanan wrote:
> > On Wed, Jun 12, 2024 at 11:48:17AM +0300, Georgi Djakov wrote:
> > > On 12.06.24 9:30, Varadarajan Narayanan wrote:
> > > > On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
> > > > > On 11.06.24 12:42, Varadarajan Narayanan wrote:
> > > > > > On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> > > > > > > On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> > > > > > > > On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> > > > > > > > <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> > > > > > > > > > Hi Varada,
> > > > > > > > > >
> > > > > > > > > > Thank you for your work on this!
> > > > > > > > > >
> > > > > > > > > > On 2.05.24 12:30, Varadarajan Narayanan wrote:
> > > > > > > > > > > On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> > > > > > > > > > > > On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> > > > > > > > > > > > > On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On 4/18/24 11:23, Varadarajan Narayanan wrote:
> > > > > > > > > > > > > > > IPQ SoCs don't involve RPM in managing NoC related clocks and
> > > > > > > > > > > > > > > there is no NoC scaling. Linux itself handles these clocks.
> > > > > > > > > > > > > > > However, these should not be exposed as just clocks and align
> > > > > > > > > > > > > > > with other Qualcomm SoCs that handle these clocks from an
> > > > > > > > > > > > > > > interconnect provider.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hence include icc provider capability to the gcc node so that
> > > > > > > > > > > > > > > peripherals can use the interconnect facility to enable these
> > > > > > > > > > > > > > > clocks.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Reviewed-by: Dmitry Baryshkov <[email protected]>
> > > > > > > > > > > > > > > Signed-off-by: Varadarajan Narayanan <[email protected]>
> > > > > > > > > > > > > > > ---
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > If this is all you do to enable interconnect (which is not the case,
> > > > > > > > > > > > > > as this patch only satisfies the bindings checker, the meaningful
> > > > > > > > > > > > > > change happens in the previous patch) and nothing explodes, this is
> > > > > > > > > > > > > > an apparent sign of your driver doing nothing.
> > > > > > > > > > > > >
> > > > > > > > > > > > > It appears to do nothing because we are just enabling the clock
> > > > > > > > > > > > > provider to also act as an interconnect provider. Only when the
> > > > > > > > > > > > > consumers are enabled with interconnect usage will this create
> > > > > > > > > > > > > paths and turn on the relevant NoC clocks.
> > > > > > > > > > > >
> > > > > > > > > > > > No, with sync_state it actually does "something" (sets the interconnect
> > > > > > > > > > > > path bandwidths to zero). And *this* patch does nothing functionally,
> > > > > > > > > > > > it only makes the dt checker happy.
> > > > > > > > > > >
>
> [..]
>
> >
> > nsscc_ipq9574 was not using icc_sync_state. After adding that, I
> > can see the following messages printed from icc_sync_state. I
> > also added a print to confirm if 'p->set(n, n);' is called.
>
> Ok, that's good! So now when all providers are using sync_state, we
> can go back to the initial comment from Konrad. I think you should
> re-check the tests that you did, as the current results just lead to
> more questions than answers. Maybe it was just the sync-state that
> was missing, or there is some other issue.

Georgi,

Thanks very much for the clarifications. Will re-test the patches
and update the thread.

-Varada

> [..]
> >
> > The gcc based interconnect paths are referenced by PCIe controller
> > nodes. Please refer to this patch
> >
> > [PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes
> > https://lore.kernel.org/linux-arm-msm/[email protected]/
> >
> > Sorry, I did not post the nsscc-related patches since this base ICC
> > patch hasn't reached closure. The nsscc patches are very similar
> > to this gcc-based series. I wanted to gather the issues raised
> > here and address them in the nsscc series so that it is in a more
> > acceptable shape.
> >
> > Thanks
> > Varada
>