Hi,
A recent patch series, targeting enhancements in the OPP core, ended up
breaking cpufreq on some of the Qualcomm platforms [1]. Necessary adjustments,
a bit hacky though, were made in the OPP core to get it working for now, but
it would be better to solve the problem at hand in a cleaner way. This
patchset is an attempt towards that.

cpufreq-hw is a hardware engine which takes care of frequency management for
the CPUs. The engine manages the clocks for the CPU devices, but it isn't the
end consumer of those clocks; the end consumers here are the CPUs themselves.

For this reason, it looks incorrect to keep the clock related properties in
the cpufreq-hw node. They should really be present at the end user, i.e. the
CPUs.

The case is currently simple, as all the devices (i.e. the CPUs) that the
engine manages share the same clock names. What if the clock names are
different for different CPUs or clusters? How will keeping the clock
properties in the cpufreq-hw node work in that case?

This design creates further problems for frameworks like OPP, which expect
all such details (clocks) to be present in the end device node itself, instead
of in another related node.

This patchset moves the clock properties to the nodes that actually use them,
i.e. the CPU nodes, and makes the necessary adjustments elsewhere.
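
For illustration, this is roughly how things end up looking with this series
applied (taken from the sc7180/sdm845 changes below and trimmed to the
relevant properties):

	CPU0: cpu@0 {
		...
		clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
		clock-names = "xo", "alternate";
		operating-points-v2 = <&cpu0_opp_table>;
		qcom,freq-domain = <&cpufreq_hw 0>;
		...
	};

	cpufreq_hw: cpufreq@18323000 {
		compatible = "qcom,cpufreq-hw";
		/* clocks/clock-names aren't required here anymore */
		#freq-domain-cells = <1>;
	};

This also means that, if the clock names ever differ between CPUs or clusters,
that can be expressed naturally in the per-CPU nodes.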

After this is applied, I can drop the unnecessary change from the OPP core,
but I first wanted to discuss whether this is a step in the right direction,
hence the RFC.
--
Viresh
[1] https://lore.kernel.org/lkml/[email protected]/
Viresh Kumar (4):
dt-bindings: cpufreq-qcom-hw: Move clocks to CPU nodes
arm64: dts: qcom: Move clocks to CPU nodes
cpufreq: qcom-cpufreq-hw: Clocks are moved to CPU nodes
cpufreq: qcom-cpufreq-hw: Register config_clks helper
.../bindings/cpufreq/cpufreq-qcom-hw.yaml | 31 ++++----
arch/arm64/boot/dts/qcom/sc7180.dtsi | 19 ++++-
arch/arm64/boot/dts/qcom/sc7280.dtsi | 18 ++++-
arch/arm64/boot/dts/qcom/sdm845.dtsi | 19 ++++-
arch/arm64/boot/dts/qcom/sm6350.dtsi | 18 ++++-
arch/arm64/boot/dts/qcom/sm8150.dtsi | 19 ++++-
arch/arm64/boot/dts/qcom/sm8250.dtsi | 18 ++++-
arch/arm64/boot/dts/qcom/sm8350.dtsi | 19 ++++-
arch/arm64/boot/dts/qcom/sm8450.dtsi | 18 ++++-
drivers/cpufreq/qcom-cpufreq-hw.c | 75 ++++++++++++++-----
10 files changed, 199 insertions(+), 55 deletions(-)
--
2.31.1.272.g89b43f80a514
The clock-specific properties must be part of the consumer nodes, i.e. the
CPUs here, instead of the node that manages the frequency engine.
Move the clock properties to the CPU nodes instead.
Signed-off-by: Viresh Kumar <[email protected]>
---
arch/arm64/boot/dts/qcom/sc7180.dtsi | 19 ++++++++++++++++---
arch/arm64/boot/dts/qcom/sc7280.dtsi | 18 ++++++++++++++++--
arch/arm64/boot/dts/qcom/sdm845.dtsi | 19 ++++++++++++++++---
arch/arm64/boot/dts/qcom/sm6350.dtsi | 18 ++++++++++++++++--
arch/arm64/boot/dts/qcom/sm8150.dtsi | 19 ++++++++++++++++---
arch/arm64/boot/dts/qcom/sm8250.dtsi | 18 ++++++++++++++++--
arch/arm64/boot/dts/qcom/sm8350.dtsi | 19 ++++++++++++++++---
arch/arm64/boot/dts/qcom/sm8450.dtsi | 18 ++++++++++++++++--
8 files changed, 128 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index 5dcaac23a138..4c9a5f5e4ab4 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -138,6 +138,8 @@ &LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -164,6 +166,8 @@ &LITTLE_CPU_SLEEP_1
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -186,6 +190,8 @@ &LITTLE_CPU_SLEEP_1
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -208,6 +214,8 @@ &LITTLE_CPU_SLEEP_1
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -230,6 +238,8 @@ &LITTLE_CPU_SLEEP_1
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -252,6 +262,8 @@ &LITTLE_CPU_SLEEP_1
capacity-dmips-mhz = <415>;
dynamic-power-coefficient = <137>;
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -274,6 +286,8 @@ &BIG_CPU_SLEEP_1
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <480>;
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu6_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -296,6 +310,8 @@ &BIG_CPU_SLEEP_1
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <480>;
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu6_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
@@ -3538,9 +3554,6 @@ cpufreq_hw: cpufreq@18323000 {
reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>;
reg-names = "freq-domain0", "freq-domain1";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
-
#freq-domain-cells = <1>;
};
diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
index e66fc67de206..f7600dbdd1e1 100644
--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
@@ -172,6 +172,8 @@ CPU0: cpu@0 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_0>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -195,6 +197,8 @@ CPU1: cpu@100 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -215,6 +219,8 @@ CPU2: cpu@200 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -235,6 +241,8 @@ CPU3: cpu@300 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -255,6 +263,8 @@ CPU4: cpu@400 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -275,6 +285,8 @@ CPU5: cpu@500 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -295,6 +307,8 @@ CPU6: cpu@600 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -315,6 +329,8 @@ CPU7: cpu@700 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
operating-points-v2 = <&cpu7_opp_table>;
interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
<&epss_l3 MASTER_EPSS_L3_APPS &epss_l3 SLAVE_EPSS_L3_SHARED>;
@@ -4915,8 +4931,6 @@ cpufreq_hw: cpufreq@18591000 {
reg = <0 0x18591000 0 0x1000>,
<0 0x18592000 0 0x1000>,
<0 0x18593000 0 0x1000>;
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
- clock-names = "xo", "alternate";
#freq-domain-cells = <1>;
};
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index 0692ae0e60a4..3154a8f67f76 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -202,6 +202,8 @@ &LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -227,6 +229,8 @@ &LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -249,6 +253,8 @@ &LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -271,6 +277,8 @@ &LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -293,6 +301,8 @@ CPU4: cpu@400 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
dynamic-power-coefficient = <442>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -315,6 +325,8 @@ CPU5: cpu@500 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
dynamic-power-coefficient = <442>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -337,6 +349,8 @@ CPU6: cpu@600 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
dynamic-power-coefficient = <442>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -359,6 +373,8 @@ CPU7: cpu@700 {
&BIG_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
dynamic-power-coefficient = <442>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@@ -5022,9 +5038,6 @@ cpufreq_hw: cpufreq@17d43000 {
interrupts-extended = <&lmh_cluster0 0>, <&lmh_cluster1 0>;
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
-
#freq-domain-cells = <1>;
};
diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
index d4f8f33f3f0c..645fb73fdad2 100644
--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
@@ -43,6 +43,8 @@ CPU0: cpu@0 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_0>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_0: l2-cache {
@@ -62,6 +64,8 @@ CPU1: cpu@100 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_100: l2-cache {
@@ -78,6 +82,8 @@ CPU2: cpu@200 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_200: l2-cache {
@@ -94,6 +100,8 @@ CPU3: cpu@300 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_300: l2-cache {
@@ -110,6 +118,8 @@ CPU4: cpu@400 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_400: l2-cache {
@@ -126,6 +136,8 @@ CPU5: cpu@500 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <100>;
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_500: l2-cache {
@@ -143,6 +155,8 @@ CPU6: cpu@600 {
capacity-dmips-mhz = <1894>;
dynamic-power-coefficient = <703>;
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
#cooling-cells = <2>;
L2_600: l2-cache {
@@ -159,6 +173,8 @@ CPU7: cpu@700 {
capacity-dmips-mhz = <1894>;
dynamic-power-coefficient = <703>;
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
#cooling-cells = <2>;
L2_700: l2-cache {
@@ -1462,8 +1478,6 @@ cpufreq_hw: cpufreq@18323000 {
compatible = "qcom,cpufreq-hw";
reg = <0 0x18323000 0 0x1000>, <0 0x18325800 0 0x1000>;
reg-names = "freq-domain0", "freq-domain1";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
#freq-domain-cells = <1>;
};
diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
index 8ea44c4b56b4..bb38e36ae659 100644
--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
@@ -51,6 +51,8 @@ CPU0: cpu@0 {
capacity-dmips-mhz = <488>;
dynamic-power-coefficient = <232>;
next-level-cache = <&L2_0>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -75,6 +77,8 @@ CPU1: cpu@100 {
capacity-dmips-mhz = <488>;
dynamic-power-coefficient = <232>;
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -97,6 +101,8 @@ CPU2: cpu@200 {
capacity-dmips-mhz = <488>;
dynamic-power-coefficient = <232>;
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -118,6 +124,8 @@ CPU3: cpu@300 {
capacity-dmips-mhz = <488>;
dynamic-power-coefficient = <232>;
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -139,6 +147,8 @@ CPU4: cpu@400 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <369>;
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -160,6 +170,8 @@ CPU5: cpu@500 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <369>;
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -181,6 +193,8 @@ CPU6: cpu@600 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <369>;
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -202,6 +216,8 @@ CPU7: cpu@700 {
capacity-dmips-mhz = <1024>;
dynamic-power-coefficient = <421>;
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 2>;
operating-points-v2 = <&cpu7_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -4102,9 +4118,6 @@ cpufreq_hw: cpufreq@18323000 {
reg-names = "freq-domain0", "freq-domain1",
"freq-domain2";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
-
#freq-domain-cells = <1>;
};
diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
index cf0c97bd5ad3..29c496e85dda 100644
--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
@@ -101,6 +101,8 @@ CPU0: cpu@0 {
next-level-cache = <&L2_0>;
power-domains = <&CPU_PD0>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -125,6 +127,8 @@ CPU1: cpu@100 {
next-level-cache = <&L2_100>;
power-domains = <&CPU_PD1>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -146,6 +150,8 @@ CPU2: cpu@200 {
next-level-cache = <&L2_200>;
power-domains = <&CPU_PD2>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -167,6 +173,8 @@ CPU3: cpu@300 {
next-level-cache = <&L2_300>;
power-domains = <&CPU_PD3>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -188,6 +196,8 @@ CPU4: cpu@400 {
next-level-cache = <&L2_400>;
power-domains = <&CPU_PD4>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -209,6 +219,8 @@ CPU5: cpu@500 {
next-level-cache = <&L2_500>;
power-domains = <&CPU_PD5>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -231,6 +243,8 @@ CPU6: cpu@600 {
next-level-cache = <&L2_600>;
power-domains = <&CPU_PD6>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
operating-points-v2 = <&cpu4_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -252,6 +266,8 @@ CPU7: cpu@700 {
next-level-cache = <&L2_700>;
power-domains = <&CPU_PD7>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 2>;
operating-points-v2 = <&cpu7_opp_table>;
interconnects = <&gem_noc MASTER_AMPSS_M0 &mc_virt SLAVE_EBI_CH0>,
@@ -5020,8 +5036,6 @@ cpufreq_hw: cpufreq@18591000 {
reg-names = "freq-domain0", "freq-domain1",
"freq-domain2";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
- clock-names = "xo", "alternate";
interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
index 743cba9b683c..c7e9447f0388 100644
--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
@@ -66,6 +66,8 @@ CPU0: cpu@0 {
reg = <0x0 0x0>;
enable-method = "psci";
next-level-cache = <&L2_0>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
power-domains = <&CPU_PD0>;
power-domain-names = "psci";
@@ -85,6 +87,8 @@ CPU1: cpu@100 {
reg = <0x0 0x100>;
enable-method = "psci";
next-level-cache = <&L2_100>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
power-domains = <&CPU_PD1>;
power-domain-names = "psci";
@@ -101,6 +105,8 @@ CPU2: cpu@200 {
reg = <0x0 0x200>;
enable-method = "psci";
next-level-cache = <&L2_200>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
power-domains = <&CPU_PD2>;
power-domain-names = "psci";
@@ -117,6 +123,8 @@ CPU3: cpu@300 {
reg = <0x0 0x300>;
enable-method = "psci";
next-level-cache = <&L2_300>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
power-domains = <&CPU_PD3>;
power-domain-names = "psci";
@@ -133,6 +141,8 @@ CPU4: cpu@400 {
reg = <0x0 0x400>;
enable-method = "psci";
next-level-cache = <&L2_400>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
power-domains = <&CPU_PD4>;
power-domain-names = "psci";
@@ -149,6 +159,8 @@ CPU5: cpu@500 {
reg = <0x0 0x500>;
enable-method = "psci";
next-level-cache = <&L2_500>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
power-domains = <&CPU_PD5>;
power-domain-names = "psci";
@@ -166,6 +178,8 @@ CPU6: cpu@600 {
reg = <0x0 0x600>;
enable-method = "psci";
next-level-cache = <&L2_600>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
power-domains = <&CPU_PD6>;
power-domain-names = "psci";
@@ -182,6 +196,8 @@ CPU7: cpu@700 {
reg = <0x0 0x700>;
enable-method = "psci";
next-level-cache = <&L2_700>;
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 2>;
power-domains = <&CPU_PD7>;
power-domain-names = "psci";
@@ -2074,9 +2090,6 @@ cpufreq_hw: cpufreq@18591000 {
<0 0x18593000 0 0x1000>;
reg-names = "freq-domain0", "freq-domain1", "freq-domain2";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
- clock-names = "xo", "alternate";
-
#freq-domain-cells = <1>;
};
diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
index 7d08fad76371..229cf5eb6447 100644
--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
@@ -48,6 +48,8 @@ CPU0: cpu@0 {
next-level-cache = <&L2_0>;
power-domains = <&CPU_PD0>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_0: l2-cache {
@@ -67,6 +69,8 @@ CPU1: cpu@100 {
next-level-cache = <&L2_100>;
power-domains = <&CPU_PD1>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_100: l2-cache {
@@ -83,6 +87,8 @@ CPU2: cpu@200 {
next-level-cache = <&L2_200>;
power-domains = <&CPU_PD2>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_200: l2-cache {
@@ -99,6 +105,8 @@ CPU3: cpu@300 {
next-level-cache = <&L2_300>;
power-domains = <&CPU_PD3>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 0>;
#cooling-cells = <2>;
L2_300: l2-cache {
@@ -115,6 +123,8 @@ CPU4: cpu@400 {
next-level-cache = <&L2_400>;
power-domains = <&CPU_PD4>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
#cooling-cells = <2>;
L2_400: l2-cache {
@@ -131,6 +141,8 @@ CPU5: cpu@500 {
next-level-cache = <&L2_500>;
power-domains = <&CPU_PD5>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
#cooling-cells = <2>;
L2_500: l2-cache {
@@ -148,6 +160,8 @@ CPU6: cpu@600 {
next-level-cache = <&L2_600>;
power-domains = <&CPU_PD6>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 1>;
#cooling-cells = <2>;
L2_600: l2-cache {
@@ -164,6 +178,8 @@ CPU7: cpu@700 {
next-level-cache = <&L2_700>;
power-domains = <&CPU_PD7>;
power-domain-names = "psci";
+ clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ clock-names = "xo", "alternate";
qcom,freq-domain = <&cpufreq_hw 2>;
#cooling-cells = <2>;
L2_700: l2-cache {
@@ -2998,8 +3014,6 @@ cpufreq_hw: cpufreq@17d91000 {
<0 0x17d92000 0 0x1000>,
<0 0x17d93000 0 0x1000>;
reg-names = "freq-domain0", "freq-domain1", "freq-domain2";
- clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
- clock-names = "xo", "alternate";
interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
--
2.31.1.272.g89b43f80a514
On Wed, Jul 13, 2022 at 12:22:55PM +0530, Viresh Kumar wrote:
> Hi,
>
> A recent patch series, targeting enhancements in the OPP core, ended up
> breaking cpufreq on some of the Qualcomm platforms [1]. Necessary adjustments,
> a bit hacky though, were made in the OPP core to get it working for now, but
> it would be better to solve the problem at hand in a cleaner way. This
> patchset is an attempt towards that.
>
> cpufreq-hw is a hardware engine which takes care of frequency management for
> the CPUs. The engine manages the clocks for the CPU devices, but it isn't the
> end consumer of those clocks; the end consumers here are the CPUs themselves.
>
> For this reason, it looks incorrect to keep the clock related properties in
> the cpufreq-hw node. They should really be present at the end user, i.e. the
> CPUs.
>
> The case is currently simple, as all the devices (i.e. the CPUs) that the
> engine manages share the same clock names. What if the clock names are
> different for different CPUs or clusters? How will keeping the clock
> properties in the cpufreq-hw node work in that case?
>
> This design creates further problems for frameworks like OPP, which expect
> all such details (clocks) to be present in the end device node itself, instead
> of in another related node.
>
> This patchset moves the clock properties to the nodes that actually use them,
> i.e. the CPU nodes, and makes the necessary adjustments elsewhere.
>
> After this is applied, I can drop the unnecessary change from the OPP core,
> but I first wanted to discuss whether this is a step in the right direction,
> hence the RFC.
>
The clocks defined in the devicetree currently (CXO, GPLL) are the source
clocks of the EPSS block (cpufreq-hw). And EPSS will supply clock and
voltage through other blocks to the CPU domains. Even though the end
consumers of the source clocks are the CPUs, those clocks do not reach
the CPUs directly but go through some other blocks inside EPSS.
Initially I was tempted to add cpufreq-hw as the clock provider and have
it source clocks to the individual CPUs. This somehow models the clock
topology also, but after having a discussion with Bjorn we concluded that
it is best to leave it as it is.
The main issue that Bjorn pointed out was the fact that the clocks coming
out of EPSS are not exactly of the same frequency that was requested.
EPSS will do its own logic to generate the clocks to the CPUs based on
the input frequency vote and limits.
Thanks,
Mani
> --
> Viresh
>
> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Viresh Kumar (4):
> dt-bindings: cpufreq-qcom-hw: Move clocks to CPU nodes
> arm64: dts: qcom: Move clocks to CPU nodes
> cpufreq: qcom-cpufreq-hw: Clocks are moved to CPU nodes
> cpufreq: qcom-cpufreq-hw: Register config_clks helper
>
> .../bindings/cpufreq/cpufreq-qcom-hw.yaml | 31 ++++----
> arch/arm64/boot/dts/qcom/sc7180.dtsi | 19 ++++-
> arch/arm64/boot/dts/qcom/sc7280.dtsi | 18 ++++-
> arch/arm64/boot/dts/qcom/sdm845.dtsi | 19 ++++-
> arch/arm64/boot/dts/qcom/sm6350.dtsi | 18 ++++-
> arch/arm64/boot/dts/qcom/sm8150.dtsi | 19 ++++-
> arch/arm64/boot/dts/qcom/sm8250.dtsi | 18 ++++-
> arch/arm64/boot/dts/qcom/sm8350.dtsi | 19 ++++-
> arch/arm64/boot/dts/qcom/sm8450.dtsi | 18 ++++-
> drivers/cpufreq/qcom-cpufreq-hw.c | 75 ++++++++++++++-----
> 10 files changed, 199 insertions(+), 55 deletions(-)
>
> --
> 2.31.1.272.g89b43f80a514
>
On 15-07-22, 21:39, Manivannan Sadhasivam wrote:
> The clocks defined in the devicetree currently (CXO, GPLL) are the source
> clocks of the EPSS block (cpufreq-hw). And EPSS will supply clock and
> voltage through other blocks to the CPU domains. Even though the end
> consumers of the source clocks are the CPUs, those clocks do not reach
> the CPUs directly but go through some other blocks inside EPSS.
Fair enough, so these clocks should be present in the cpufreq-hw node,
as they were.
> Initially I was tempted to add cpufreq-hw as the clock provider and have
> it source clocks to the individual CPUs. This somehow models the clock
> topology also
Right.
> , but after having a discussion with Bjorn we concluded that
> it is best to leave it as it is.
>
> The main issue that Bjorn pointed out was the fact that the clocks coming
> out of EPSS are not exactly of the same frequency that was requested.
> EPSS will do its own logic to generate the clocks to the CPUs based on
> the input frequency vote and limits.
The OPP tables, which are part of the CPU nodes, mention clock rates.
Are these values for the cxo/gpll clocks or the clock that reaches the
CPUs? I believe the latter. The DT is not really complete if the CPU
node mentions the frequency, but not the source clock. It works for
you because you don't want to do clk_set_rate() in this case, but then
it leaves other frameworks, like OPP, confused and rightly so.
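
To put it another way, this is the kind of pairing the OPP core expects to
find in DT; the provider phandle is deliberately left abstract here and the
frequency is just a placeholder:

	CPU0: cpu@0 {
		...
		/* whichever clock actually feeds the CPU */
		clocks = <&some_clock_provider 0>;
		operating-points-v2 = <&cpu0_opp_table>;
	};

	cpu0_opp_table: opp-table {
		compatible = "operating-points-v2";

		opp-300000000 {
			opp-hz = /bits/ 64 <300000000>;
		};
		...
	};

The opp-hz values describe the rate the CPU runs at, so the clock they refer
to should be discoverable from the very same node.
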
Normally there is always a difference between the frequency value the OPP
table contains and what the hardware actually programs, though it is
usually small. It shouldn't prevent us from having the hierarchy clearly
defined in the DT.
Based on your description, I think it would be better to make
cpufreq-hw a clock provider and CPUs the consumer of it. It would then
allow the OPP core to not carry the hack to make it all work.
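
Something along these lines, just to make the idea concrete. Note that the
current binding doesn't define #clock-cells or a per-domain clock index, so
this is only a sketch of the proposal:

	cpufreq_hw: cpufreq@18323000 {
		compatible = "qcom,cpufreq-hw";
		clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
		clock-names = "xo", "alternate";
		#clock-cells = <1>;
		#freq-domain-cells = <1>;
	};

	CPU0: cpu@0 {
		...
		/* clock provided by the frequency domain, index is hypothetical */
		clocks = <&cpufreq_hw 0>;
		qcom,freq-domain = <&cpufreq_hw 0>;
		operating-points-v2 = <&cpu0_opp_table>;
	};

The OPP core would then have a clock it can clk_get(), without anyone ever
having to call clk_set_rate() on it.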
--
viresh
On 18-07-22, 07:27, Viresh Kumar wrote:
> The OPP tables, which are part of the CPU nodes, mention clock rates.
> Are these values for the cxo/gpll clocks or the clock that reaches the
> CPUs? I believe the latter. The DT is not really complete if the CPU
> node mentions the frequency, but not the source clock. It works for
> you because you don't want to do clk_set_rate() in this case, but then
> it leaves other frameworks, like OPP, confused and rightly so.
>
> Normally there is always a difference between the frequency value the OPP
> table contains and what the hardware actually programs, though it is
> usually small. It shouldn't prevent us from having the hierarchy clearly
> defined in the DT.
>
> Based on your description, I think it would be better to make
> cpufreq-hw a clock provider and CPUs the consumer of it. It would then
> allow the OPP core to not carry the hack to make it all work.
Bjorn / Mani,
Can we please get this sorted out ? I don't want to carry an unnecessary hack in
the OPP core for this.
--
viresh
On Mon, Aug 01, 2022 at 08:07:56AM +0530, Viresh Kumar wrote:
> On 18-07-22, 07:27, Viresh Kumar wrote:
> > The OPP tables, which are part of the CPU nodes, mention clock rates.
> > Are these values for the cxo/gpll clocks or the clock that reaches the
> > CPUs? I believe the latter. The DT is not really complete if the CPU
> > node mentions the frequency, but not the source clock. It works for
> > you because you don't want to do clk_set_rate() in this case, but then
> > it leaves other frameworks, like OPP, confused and rightly so.
> >
> > Normally there is always a difference between the frequency value the OPP
> > table contains and what the hardware actually programs, though it is
> > usually small. It shouldn't prevent us from having the hierarchy clearly
> > defined in the DT.
> >
> > Based on your description, I think it would be better to make
> > cpufreq-hw a clock provider and CPUs the consumer of it. It would then
> > allow the OPP core to not carry the hack to make it all work.
>
> Bjorn / Mani,
>
> Can we please get this sorted out ? I don't want to carry an unnecessary hack in
> the OPP core for this.
>
I'm waiting for inputs from Bjorn.
@Bjorn: What do you think of the proposal to add qcom-cpufreq-hw as the clk
provider for CPUs?
Thanks,
Mani
> --
> viresh
--
மணிவண்ணன் சதாசிவம்
On Mon, Aug 01, 2022 at 08:07:56AM +0530, Viresh Kumar wrote:
> On 18-07-22, 07:27, Viresh Kumar wrote:
> > The OPP tables, which are part of the CPU nodes, mention clock rates.
> > Are these values for the cxo/gpll clocks or the clock that reaches the
> > CPUs? I believe the latter. The DT is not really complete if the CPU
> > node mentions the frequency, but not the source clock. It works for
> > you because you don't want to do clk_set_rate() in this case, but then
> > it leaves other frameworks, like OPP, confused and rightly so.
> >
> > Normally there is always a difference between the frequency value the OPP
> > table contains and what the hardware actually programs, though it is
> > usually small. It shouldn't prevent us from having the hierarchy clearly
> > defined in the DT.
> >
> > Based on your description, I think it would be better to make
> > cpufreq-hw a clock provider and CPUs the consumer of it. It would then
> > allow the OPP core to not carry the hack to make it all work.
>
> Bjorn / Mani,
>
> Can we please get this sorted out ? I don't want to carry an unnecessary hack in
> the OPP core for this.
>
Conceptually, it sounds like a good idea to express the clock feeding
the CPU clusters, which is controlled by the OSM/EPSS. But do you
expect the OPP framework to actually do something with the clock, or
just to ensure that the relationship is properly described?
FWIW, the possible discrepancy between the requested frequency and the
actual frequency comes from the fact that OSM/EPSS throttles the cluster
frequency based on a number of different factors (thermal, voltages
...).
This is reported back to the kernel using the thermal pressure
interface. It would be quite interesting to see some investigation in
how efficient the kernel is at making use of this feedback.
Regards,
Bjorn
On 29-08-22, 22:24, Bjorn Andersson wrote:
> Conceptually, it sounds like a good idea to express the clock feeding
> the CPU clusters, which is controlled by the OSM/EPSS. But do you
> expect the OPP framework to actually do something with the clock, or
> just to ensure that the relationship is properly described?
No, the OPP core will never try to set the clock rate in your case,
though it will do clk_get().
> FWIW, the possible discrepancy between the requested frequency and the
> actual frequency comes from the fact that OSM/EPSS throttles the cluster
> frequency based on a number of different factors (thermal, voltages
> ...).
> This is reported back to the kernel using the thermal pressure
> interface. It would be quite interesting to see some investigation in
> how efficient the kernel is at making use of this feedback.
--
viresh
On Tue, Aug 30, 2022 at 11:10:42AM +0530, Viresh Kumar wrote:
> On 29-08-22, 22:24, Bjorn Andersson wrote:
> > Conceptually, it sounds like a good idea to express the clock feeding
> > the CPU clusters, which is controlled by the OSM/EPSS. But do you
> > expect the OPP framework to actually do something with the clock, or
> > just to ensure that the relationship is properly described?
>
> No, the OPP core will never try to set the clock rate in your case,
> though it will do clk_get().
>
Okay. Then I think it is a fair argument to make qcom-cpufreq-hw as the
clock provider for CPUs.
I will send the RFC soon.
Thanks,
Mani
> > FWIW, the possible discrepancy between the requested frequency and the
> > actual frequency comes from the fact that OSM/EPSS throttles the cluster
> > frequency based on a number of different factors (thermal, voltages
> > ...).
> > This is reported back to the kernel using the thermal pressure
> > interface. It would be quite interesting to see some investigation in
> > how efficient the kernel is at making use of this feedback.
>
> --
> viresh
On 30-08-22, 11:50, Manivannan Sadhasivam wrote:
> On Tue, Aug 30, 2022 at 11:10:42AM +0530, Viresh Kumar wrote:
> > On 29-08-22, 22:24, Bjorn Andersson wrote:
> > > Conceptually, it sounds like a good idea to express the clock feeding
> > > the CPU clusters, which is controlled by the OSM/EPSS. But do you
> > > expect the OPP framework to actually do something with the clock, or
> > > just to ensure that the relationship is properly described?
> >
> > No, the OPP core will never try to set the clock rate in your case,
> > though it will do clk_get().
> >
>
> Okay. Then I think it is a fair argument to make qcom-cpufreq-hw as the
> clock provider for CPUs.
>
> I will send the RFC soon.
Ping.
--
viresh
On Tue, Sep 20, 2022 at 03:58:03PM +0530, Viresh Kumar wrote:
> On 30-08-22, 11:50, Manivannan Sadhasivam wrote:
> > On Tue, Aug 30, 2022 at 11:10:42AM +0530, Viresh Kumar wrote:
> > > On 29-08-22, 22:24, Bjorn Andersson wrote:
> > > > Conceptually, it sounds like a good idea to express the clock feeding
> > > > the CPU clusters, which is controlled by the OSM/EPSS. But do you
> > > > expect the OPP framework to actually do something with the clock, or
> > > > just to ensure that the relationship is properly described?
> > >
> > > No, the OPP core will never try to set the clock rate in your case,
> > > though it will do clk_get().
> > >
> >
> > Okay. Then I think it is a fair argument to make qcom-cpufreq-hw as the
> > clock provider for CPUs.
> >
> > I will send the RFC soon.
>
> Ping.
>
Didn't get time so far. Will get to this once I'm back from vacation.
Thanks,
Mani
> --
> viresh
--
மணிவண்ணன் சதாசிவம்