Introduce core voltage scaling for NVIDIA Tegra20/30 SoCs, which reduces
power consumption and heating of the Tegra chips. The Tegra SoC has
multiple hardware units that belong to the core power domain of the SoC
and share the core voltage. The voltage must be selected in accordance
with the minimum requirement of every core hardware unit.
The minimum core voltage requirement depends on:
1. Clock enable state of a hardware unit.
2. Clock frequency.
3. Unit's internal idling/active state.
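To illustrate the aggregation rule, the CORE voltage is effectively the
maximum of the per-unit minimums, where clock-gated or idling units
contribute nothing. A conceptual sketch (names are illustrative, not
code from this series):

#include <linux/types.h>

struct core_unit {
	bool clk_enabled;			/* 1. clock enable state */
	unsigned long clk_rate;			/* 2. clock frequency */
	bool active;				/* 3. internal idling/active state */
	int (*min_uV)(unsigned long rate);	/* per-unit OPP lookup */
};

static int core_domain_min_uV(const struct core_unit *unit, unsigned int num)
{
	int uV, min_uV = 0;	/* no requirement while every unit is idle */
	unsigned int i;

	for (i = 0; i < num; i++) {
		if (!unit[i].clk_enabled || !unit[i].active)
			continue;

		uV = unit[i].min_uV(unit[i].clk_rate);
		if (uV > min_uV)
			min_uV = uV;
	}

	return min_uV;
}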
This series was tested on Acer A500 (T20), AC100 (T20), Nexus 7 (T30) and
Ouya (T30) devices. I also added voltage scaling to the Ventana (T20) and
Cardhu (T30) boards, which are tested by NVIDIA's CI farm. Tegra30 is now
up to 5°C cooler on the Nexus 7 and stays cool on the Ouya (instead of
becoming burning hot) while the system is idling. It should be possible
to improve this further by implementing more advanced power management
features in the kernel drivers.
The DVFS support is opt-in for all boards, meaning that older DTBs will
continue to work like they did before this series. It should be possible
to easily add core voltage scaling support for Tegra114+ SoCs on top of
this groundwork later on, if anyone wants to implement it.
WARNING(!) This series is made on top of the memory interconnect patches
which are currently under review [1]. The Tegra EMC driver
and devicetree-related patches need to be applied on top of
the ICC series.
[1] https://patchwork.ozlabs.org/project/linux-tegra/list/?series=212196
Dmitry Osipenko (30):
dt-bindings: host1x: Document OPP and voltage regulator properties
dt-bindings: mmc: tegra: Document OPP and voltage regulator properties
dt-bindings: pwm: tegra: Document OPP and voltage regulator properties
media: dt: bindings: tegra-vde: Document OPP and voltage regulator
properties
dt-binding: usb: ci-hdrc-usb2: Document OPP and voltage regulator
properties
dt-bindings: usb: tegra-ehci: Document OPP and voltage regulator
properties
soc/tegra: Add sync state API
soc/tegra: regulators: Support Tegra SoC device sync state API
soc/tegra: regulators: Fix lockup when voltage-spread is out of range
regulator: Allow skipping disabled regulators in
regulator_check_consumers()
drm/tegra: dc: Support OPP and SoC core voltage scaling
drm/tegra: gr2d: Correct swapped device-tree compatibles
drm/tegra: gr2d: Support OPP and SoC core voltage scaling
drm/tegra: gr3d: Support OPP and SoC core voltage scaling
drm/tegra: hdmi: Support OPP and SoC core voltage scaling
gpu: host1x: Support OPP and SoC core voltage scaling
mmc: sdhci-tegra: Support OPP and core voltage scaling
pwm: tegra: Support OPP and core voltage scaling
media: staging: tegra-vde: Support OPP and SoC core voltage scaling
usb: chipidea: tegra: Support OPP and SoC core voltage scaling
usb: host: ehci-tegra: Support OPP and SoC core voltage scaling
memory: tegra20-emc: Support Tegra SoC device state syncing
memory: tegra30-emc: Support Tegra SoC device state syncing
ARM: tegra: Add OPP tables for Tegra20 peripheral devices
ARM: tegra: Add OPP tables for Tegra30 peripheral devices
ARM: tegra: ventana: Add voltage supplies to DVFS-capable devices
ARM: tegra: paz00: Add voltage supplies to DVFS-capable devices
ARM: tegra: acer-a500: Add voltage supplies to DVFS-capable devices
ARM: tegra: cardhu-a04: Add voltage supplies to DVFS-capable devices
ARM: tegra: nexus7: Add voltage supplies to DVFS-capable devices
.../display/tegra/nvidia,tegra20-host1x.txt | 56 +++
.../bindings/media/nvidia,tegra-vde.txt | 12 +
.../bindings/mmc/nvidia,tegra20-sdhci.txt | 12 +
.../bindings/pwm/nvidia,tegra20-pwm.txt | 13 +
.../devicetree/bindings/usb/ci-hdrc-usb2.txt | 4 +
.../bindings/usb/nvidia,tegra20-ehci.txt | 2 +
.../boot/dts/tegra20-acer-a500-picasso.dts | 30 +-
arch/arm/boot/dts/tegra20-paz00.dts | 40 +-
.../arm/boot/dts/tegra20-peripherals-opp.dtsi | 386 ++++++++++++++++
arch/arm/boot/dts/tegra20-ventana.dts | 65 ++-
arch/arm/boot/dts/tegra20.dtsi | 14 +
.../tegra30-asus-nexus7-grouper-common.dtsi | 23 +
arch/arm/boot/dts/tegra30-cardhu-a04.dts | 44 ++
.../arm/boot/dts/tegra30-peripherals-opp.dtsi | 415 ++++++++++++++++++
arch/arm/boot/dts/tegra30.dtsi | 13 +
drivers/gpu/drm/tegra/Kconfig | 1 +
drivers/gpu/drm/tegra/dc.c | 138 +++++-
drivers/gpu/drm/tegra/dc.h | 5 +
drivers/gpu/drm/tegra/gr2d.c | 140 +++++-
drivers/gpu/drm/tegra/gr3d.c | 136 ++++++
drivers/gpu/drm/tegra/hdmi.c | 63 ++-
drivers/gpu/host1x/Kconfig | 1 +
drivers/gpu/host1x/dev.c | 87 ++++
drivers/memory/tegra/tegra20-emc.c | 8 +-
drivers/memory/tegra/tegra30-emc.c | 8 +-
drivers/mmc/host/Kconfig | 1 +
drivers/mmc/host/sdhci-tegra.c | 70 ++-
drivers/pwm/Kconfig | 1 +
drivers/pwm/pwm-tegra.c | 84 +++-
drivers/regulator/core.c | 12 +-
.../soc/samsung/exynos-regulator-coupler.c | 2 +-
drivers/soc/tegra/common.c | 152 ++++++-
drivers/soc/tegra/regulators-tegra20.c | 25 +-
drivers/soc/tegra/regulators-tegra30.c | 30 +-
drivers/staging/media/tegra-vde/Kconfig | 1 +
drivers/staging/media/tegra-vde/vde.c | 127 ++++++
drivers/staging/media/tegra-vde/vde.h | 1 +
drivers/usb/chipidea/Kconfig | 1 +
drivers/usb/chipidea/ci_hdrc_tegra.c | 79 ++++
drivers/usb/host/Kconfig | 1 +
drivers/usb/host/ehci-tegra.c | 79 ++++
include/linux/regulator/coupler.h | 6 +-
include/soc/tegra/common.h | 22 +
43 files changed, 2360 insertions(+), 50 deletions(-)
--
2.27.0
Document new DVFS OPP table and voltage regulator properties of the
Tegra EHCI controller.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt | 2 ++
1 file changed, 2 insertions(+)
diff --git a/Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt b/Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt
index f60785f73d3d..e4070ae21fd9 100644
--- a/Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt
+++ b/Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt
@@ -21,3 +21,5 @@ Required properties :
Optional properties:
- nvidia,needs-double-reset : boolean is to be set for some of the Tegra20
USB ports, which need reset twice due to hardware issues.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
--
2.27.0
Downscaling of the CORE voltage isn't allowed because some of the
hardware units that are supplied by the CORE regulator are usually left
ON at boot time. The new sync state API resolves this problem for us.
All drivers of the devices that are known to be ON at boot time should
now sync their state. Once everything is synced, the voltage of the CORE
domain can be scaled without any limitations.
Make the Tegra20/30 regulator couplers use the new sync state API.
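For illustration (this sketch is not part of the patch), the consumer
side of the API could look roughly as follows. The helper name
tegra_soc_device_sync_state() is an assumption standing in for what the
"soc/tegra: Add sync state API" patch exports, while
tegra_soc_dvfs_state_synced() is the query actually used by the couplers
in the diff below:

#include <linux/platform_device.h>
#include <soc/tegra/common.h>

static int foo_probe(struct platform_device *pdev)
{
	/* ... usual probe work: clocks, OPP table, enabling the unit ... */

	/*
	 * Report that this device's voltage requirement is now expressed.
	 * Once all boot-time-active devices have synced, the regulator
	 * coupler lifts the boot-time CORE voltage floor.
	 */
	tegra_soc_device_sync_state(&pdev->dev);	/* assumed helper name */

	return 0;
}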
Tested-by: Peter Geis <[email protected]>
Tested-by: Nicolas Chauvet <[email protected]>
Signed-off-by: Dmitry Osipenko <[email protected]>
---
drivers/soc/tegra/regulators-tegra20.c | 19 ++++++++++++++++++-
drivers/soc/tegra/regulators-tegra30.c | 22 +++++++++++++++++++++-
2 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/drivers/soc/tegra/regulators-tegra20.c b/drivers/soc/tegra/regulators-tegra20.c
index 367a71a3cd10..8782e399a58c 100644
--- a/drivers/soc/tegra/regulators-tegra20.c
+++ b/drivers/soc/tegra/regulators-tegra20.c
@@ -16,6 +16,8 @@
#include <linux/regulator/driver.h>
#include <linux/regulator/machine.h>
+#include <soc/tegra/common.h>
+
struct tegra_regulator_coupler {
struct regulator_coupler coupler;
struct regulator_dev *core_rdev;
@@ -38,6 +40,21 @@ static int tegra20_core_limit(struct tegra_regulator_coupler *tegra,
int core_cur_uV;
int err;
+ /*
+ * Tegra20 SoC has critical DVFS-capable devices that are
+ * permanently active or active at boot time, like the EMC
+ * (DRAM controller) or the Host1x bus, for example.
+ *
+ * The voltage of the CORE SoC power domain shall not be dropped below
+ * a minimum level, which is determined by the device's clock rate.
+ * This means that we can't fully allow CORE voltage scaling until
+ * the state of all DVFS-critical CORE devices is synced.
+ */
+ if (tegra_soc_dvfs_state_synced()) {
+ pr_info_once("voltage state synced\n");
+ return 0;
+ }
+
if (tegra->core_min_uV > 0)
return tegra->core_min_uV;
@@ -58,7 +75,7 @@ static int tegra20_core_limit(struct tegra_regulator_coupler *tegra,
*/
tegra->core_min_uV = core_max_uV;
- pr_info("core minimum voltage limited to %duV\n", tegra->core_min_uV);
+ pr_info("core voltage initialized to %duV\n", tegra->core_min_uV);
return tegra->core_min_uV;
}
diff --git a/drivers/soc/tegra/regulators-tegra30.c b/drivers/soc/tegra/regulators-tegra30.c
index 7f21f31de09d..f7a5260edffe 100644
--- a/drivers/soc/tegra/regulators-tegra30.c
+++ b/drivers/soc/tegra/regulators-tegra30.c
@@ -16,6 +16,7 @@
#include <linux/regulator/driver.h>
#include <linux/regulator/machine.h>
+#include <soc/tegra/common.h>
#include <soc/tegra/fuse.h>
struct tegra_regulator_coupler {
@@ -39,6 +40,21 @@ static int tegra30_core_limit(struct tegra_regulator_coupler *tegra,
int core_cur_uV;
int err;
+ /*
+ * Tegra30 SoC has critical DVFS-capable devices that are
+ * permanently active or active at boot time, like the EMC
+ * (DRAM controller) or the Host1x bus, for example.
+ *
+ * The voltage of the CORE SoC power domain shall not be dropped below
+ * a minimum level, which is determined by the device's clock rate.
+ * This means that we can't fully allow CORE voltage scaling until
+ * the state of all DVFS-critical CORE devices is synced.
+ */
+ if (tegra_soc_dvfs_state_synced()) {
+ pr_info_once("voltage state synced\n");
+ return 0;
+ }
+
if (tegra->core_min_uV > 0)
return tegra->core_min_uV;
@@ -59,7 +75,7 @@ static int tegra30_core_limit(struct tegra_regulator_coupler *tegra,
*/
tegra->core_min_uV = core_max_uV;
- pr_info("core minimum voltage limited to %duV\n", tegra->core_min_uV);
+ pr_info("core voltage initialized to %duV\n", tegra->core_min_uV);
return tegra->core_min_uV;
}
@@ -143,6 +159,10 @@ static int tegra30_voltage_update(struct tegra_regulator_coupler *tegra,
if (core_min_uV < 0)
return core_min_uV;
+ err = regulator_check_voltage(core_rdev, &core_min_uV, &core_max_uV);
+ if (err)
+ return err;
+
err = regulator_check_consumers(core_rdev, &core_min_uV, &core_max_uV,
PM_SUSPEND_ON);
if (err)
--
2.27.0
Document new DVFS OPP table and voltage regulator properties of the
Host1x bus and devices sitting on the bus.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
.../display/tegra/nvidia,tegra20-host1x.txt | 56 +++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
index 34d993338453..0593c8df70bb 100644
--- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
+++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
@@ -20,6 +20,18 @@ Required properties:
- reset-names: Must include the following entries:
- host1x
+Optional properties:
+- operating-points-v2: See ../bindings/opp/opp.txt for details.
+- core-supply: Phandle of voltage regulator of the SoC "core" power domain.
+
+For each opp entry in 'operating-points-v2' table of host1x and its modules:
+- opp-supported-hw: One bitfield indicating:
+ On Tegra20: SoC process ID mask
+ On Tegra30+: SoC speedo ID mask
+
+ A bitwise AND is performed against the value and if any bit
+ matches, the OPP gets enabled.
+
Each host1x client module having to perform DMA through the Memory Controller
should have the interconnect endpoints set to the Memory Client and External
Memory respectively.
@@ -45,6 +57,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
- vi: video input
@@ -128,6 +142,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- epp: encoder pre-processor
@@ -147,6 +163,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- isp: image signal processor
@@ -166,6 +184,7 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- gr2d: 2D graphics engine
@@ -185,6 +204,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- gr3d: 3D graphics engine
@@ -209,6 +230,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- dc: display controller
@@ -241,6 +264,8 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- hdmi: High Definition Multimedia Interface
@@ -267,6 +292,8 @@ of the following host1x client modules:
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- nvidia,panel: phandle of a display panel
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- tvo: TV encoder output
@@ -277,6 +304,10 @@ of the following host1x client modules:
- clocks: Must contain one entry, for the module clock.
See ../clocks/clock-bindings.txt for details.
+ Optional properties:
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
+
- dsi: display serial interface
Required properties:
@@ -305,6 +336,8 @@ of the following host1x client modules:
- nvidia,panel: phandle of a display panel
- nvidia,ganged-mode: contains a phandle to a second DSI controller to gang
up with in order to support up to 8 data lanes
+ - operating-points-v2: See ../bindings/opp/opp.txt for details.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- sor: serial output resource
@@ -394,6 +427,7 @@ of the following host1x client modules:
- interconnect-names: Must include name of the interconnect path for each
interconnect entry. Consult TRM documentation for information about
available memory clients, see MEMORY CONTROLLER section.
+ - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
Example:
@@ -408,6 +442,8 @@ Example:
clocks = <&tegra_car TEGRA20_CLK_HOST1X>;
resets = <&tegra_car 28>;
reset-names = "host1x";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
#address-cells = <1>;
#size-cells = <1>;
@@ -421,6 +457,8 @@ Example:
clocks = <&tegra_car TEGRA20_CLK_MPE>;
resets = <&tegra_car 60>;
reset-names = "mpe";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
vi@54080000 {
@@ -429,6 +467,8 @@ Example:
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
assigned-clocks = <&tegra_car TEGRA210_CLK_VI>;
assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_C4_OUT0>;
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
clocks = <&tegra_car TEGRA210_CLK_VI>;
power-domains = <&pd_venc>;
@@ -510,6 +550,8 @@ Example:
clocks = <&tegra_car TEGRA20_CLK_EPP>;
resets = <&tegra_car 19>;
reset-names = "epp";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
isp {
@@ -528,6 +570,8 @@ Example:
clocks = <&tegra_car TEGRA20_CLK_GR2D>;
resets = <&tegra_car 21>;
reset-names = "2d";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
gr3d {
@@ -536,6 +580,8 @@ Example:
clocks = <&tegra_car TEGRA20_CLK_GR3D>;
resets = <&tegra_car 24>;
reset-names = "3d";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
dc@54200000 {
@@ -547,6 +593,8 @@ Example:
clock-names = "dc", "parent";
resets = <&tegra_car 27>;
reset-names = "dc";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
interconnects = <&mc TEGRA20_MC_DISPLAY0A &emc>,
<&mc TEGRA20_MC_DISPLAY0B &emc>,
@@ -571,6 +619,8 @@ Example:
clock-names = "dc", "parent";
resets = <&tegra_car 26>;
reset-names = "dc";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
interconnects = <&mc TEGRA20_MC_DISPLAY0AB &emc>,
<&mc TEGRA20_MC_DISPLAY0BB &emc>,
@@ -596,6 +646,8 @@ Example:
resets = <&tegra_car 51>;
reset-names = "hdmi";
status = "disabled";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
tvo {
@@ -604,6 +656,8 @@ Example:
interrupts = <0 76 0x04>;
clocks = <&tegra_car TEGRA20_CLK_TVO>;
status = "disabled";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
dsi {
@@ -615,6 +669,8 @@ Example:
resets = <&tegra_car 48>;
reset-names = "dsi";
status = "disabled";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
};
--
2.27.0
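To make the opp-supported-hw matching rule documented above concrete,
its semantics boil down to this small sketch (not the OPP core's actual
code):

#include <linux/bits.h>
#include <linux/types.h>

/*
 * An OPP is enabled if its opp-supported-hw mask ANDed with the
 * driver-reported version is non-zero. E.g. a Tegra20 with process
 * ID 1 reports BIT(1), so a <0x0003> mask matches and <0x0004> doesn't.
 */
static bool opp_enabled(u32 opp_supported_hw, unsigned int soc_id)
{
	return (opp_supported_hw & BIT(soc_id)) != 0;
}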
Document new DVFS OPP table and voltage regulator properties of the
PWM controller.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
.../devicetree/bindings/pwm/nvidia,tegra20-pwm.txt | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/Documentation/devicetree/bindings/pwm/nvidia,tegra20-pwm.txt b/Documentation/devicetree/bindings/pwm/nvidia,tegra20-pwm.txt
index 74c41e34c3b6..d4d1c44a2c04 100644
--- a/Documentation/devicetree/bindings/pwm/nvidia,tegra20-pwm.txt
+++ b/Documentation/devicetree/bindings/pwm/nvidia,tegra20-pwm.txt
@@ -32,6 +32,17 @@ The PWM node will have following optional properties.
pinctrl-names: Pin state names. Must be "default" and "sleep".
pinctrl-0: phandle for the default/active state of pin configurations.
pinctrl-1: phandle for the sleep state of pin configurations.
+core-supply: phandle for voltage regulator of the SoC "core" power domain.
+
+operating-points-v2: see ../bindings/opp/opp.txt for details.
+
+For each opp entry in 'operating-points-v2' table:
+- opp-supported-hw: One bitfield indicating:
+ On Tegra20: SoC process ID mask
+ On Tegra30+: SoC speedo ID mask
+
+ A bitwise AND is performed against the value and if any bit
+ matches, the OPP gets enabled.
Example:
@@ -42,6 +53,8 @@ Example:
clocks = <&tegra_car 17>;
resets = <&tegra_car 17>;
reset-names = "pwm";
+ operating-points-v2 = <&dvfs_opp_table>;
+ core-supply = <&vdd_core>;
};
--
2.27.0
Add OPP tables for Tegra30 SoC devices.
Signed-off-by: Dmitry Osipenko <[email protected]>
---
.../arm/boot/dts/tegra30-peripherals-opp.dtsi | 415 ++++++++++++++++++
arch/arm/boot/dts/tegra30.dtsi | 13 +
2 files changed, 428 insertions(+)
diff --git a/arch/arm/boot/dts/tegra30-peripherals-opp.dtsi b/arch/arm/boot/dts/tegra30-peripherals-opp.dtsi
index cbe84d25e726..f8c522099dfe 100644
--- a/arch/arm/boot/dts/tegra30-peripherals-opp.dtsi
+++ b/arch/arm/boot/dts/tegra30-peripherals-opp.dtsi
@@ -380,4 +380,419 @@ opp@900000000 {
opp-peak-kBps = <7200000>;
};
};
+
+ vde_dvfs_opp_table: vde-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@228000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <228000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@247000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <247000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@275000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <275000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@304000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <304000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@332000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <332000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@352000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <352000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@380000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <380000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@400000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <400000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@416000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <416000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@437000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <437000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@484000000,1250 {
+ opp-microvolt = <1250000 1250000 1350000>;
+ opp-hz = /bits/ 64 <484000000>;
+ opp-supported-hw = <0x000C>;
+ };
+
+ opp@520000000,1300 {
+ opp-microvolt = <1300000 1300000 1350000>;
+ opp-hz = /bits/ 64 <520000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@600000000,1350 {
+ opp-microvolt = <1350000 1350000 1350000>;
+ opp-hz = /bits/ 64 <600000000>;
+ opp-supported-hw = <0x0004>;
+ };
+ };
+
+ gr2d_dvfs_opp_table: gr2d-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@267000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <267000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@285000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <285000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@304000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <304000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@332000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <332000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@361000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <361000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@380000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <380000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@408000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <408000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@416000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <416000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@446000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <446000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@484000000,1250 {
+ opp-microvolt = <1250000 1250000 1350000>;
+ opp-hz = /bits/ 64 <484000000>;
+ opp-supported-hw = <0x000C>;
+ };
+
+ opp@520000000,1300 {
+ opp-microvolt = <1300000 1300000 1350000>;
+ opp-hz = /bits/ 64 <520000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@600000000,1350 {
+ opp-microvolt = <1350000 1350000 1350000>;
+ opp-hz = /bits/ 64 <600000000>;
+ opp-supported-hw = <0x0004>;
+ };
+ };
+
+ gr3d_dvfs_opp_table: gr3d-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@234000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <234000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@247000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <247000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@285000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <285000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@304000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <304000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@332000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <332000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@361000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <361000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@380000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <380000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@408000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <408000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@416000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <416000000>;
+ opp-supported-hw = <0x0003>;
+ };
+
+ opp@446000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <446000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@484000000,1250 {
+ opp-microvolt = <1250000 1250000 1350000>;
+ opp-hz = /bits/ 64 <484000000>;
+ opp-supported-hw = <0x000C>;
+ };
+
+ opp@520000000,1300 {
+ opp-microvolt = <1300000 1300000 1350000>;
+ opp-hz = /bits/ 64 <520000000>;
+ opp-supported-hw = <0x0004>;
+ };
+
+ opp@600000000,1350 {
+ opp-microvolt = <1350000 1350000 1350000>;
+ opp-hz = /bits/ 64 <600000000>;
+ opp-supported-hw = <0x0004>;
+ };
+ };
+
+ host1x_dvfs_opp_table: host1x-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@152000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <152000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@188000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <188000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@222000000,1100 {
+ opp-microvolt = <1100000 1100000 1350000>;
+ opp-hz = /bits/ 64 <222000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@242000000,1250 {
+ opp-microvolt = <1250000 1250000 1350000>;
+ opp-hz = /bits/ 64 <242000000>;
+ opp-supported-hw = <0x0008>;
+ };
+
+ opp@254000000,1150 {
+ opp-microvolt = <1150000 1150000 1350000>;
+ opp-hz = /bits/ 64 <254000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@267000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <267000000>;
+ opp-supported-hw = <0x0007>;
+ };
+
+ opp@300000000,1350 {
+ opp-microvolt = <1350000 1350000 1350000>;
+ opp-hz = /bits/ 64 <300000000>;
+ opp-supported-hw = <0x0004>;
+ };
+ };
+
+ usbd_dvfs_opp_table: usbd-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@480000000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <480000000>;
+ };
+ };
+
+ usb2_dvfs_opp_table: usb2-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@480000000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <480000000>;
+ };
+ };
+
+ usb3_dvfs_opp_table: usb3-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@480000000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <480000000>;
+ };
+ };
+
+ sdmmc1_dvfs_opp_table: sdmmc1-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@104000000 {
+ opp-microvolt = <950000 950000 1350000>;
+ opp-hz = /bits/ 64 <104000000>;
+ };
+
+ opp@208000000 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <208000000>;
+ };
+ };
+
+ sdmmc3_dvfs_opp_table: sdmmc3-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@104000000 {
+ opp-microvolt = <950000 950000 1350000>;
+ opp-hz = /bits/ 64 <104000000>;
+ };
+
+ opp@208000000 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <208000000>;
+ };
+ };
+
+ hdmi_dvfs_opp_table: hdmi-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@148500000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <148500000>;
+ };
+ };
+
+ pwm_dvfs_opp_table: pwm-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@408000000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <408000000>;
+ };
+ };
+
+ dc0_dvfs_opp_table: dc0-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@120000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <120000000>;
+ opp-supported-hw = <0x0009>;
+ };
+
+ opp@155000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <155000000>;
+ opp-supported-hw = <0x0006>;
+ };
+
+ opp@190000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <190000000>;
+ opp-supported-hw = <0x0009>;
+ };
+
+ opp@268000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <268000000>;
+ opp-supported-hw = <0x0006>;
+ };
+ };
+
+ dc1_dvfs_opp_table: dc1-opp-table {
+ compatible = "operating-points-v2";
+
+ opp@120000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <120000000>;
+ opp-supported-hw = <0x0009>;
+ };
+
+ opp@155000000,1000 {
+ opp-microvolt = <1000000 1000000 1350000>;
+ opp-hz = /bits/ 64 <155000000>;
+ opp-supported-hw = <0x0006>;
+ };
+
+ opp@190000000,1200 {
+ opp-microvolt = <1200000 1200000 1350000>;
+ opp-hz = /bits/ 64 <190000000>;
+ opp-supported-hw = <0x0009>;
+ };
+
+ opp@268000000,1050 {
+ opp-microvolt = <1050000 1050000 1350000>;
+ opp-hz = /bits/ 64 <268000000>;
+ opp-supported-hw = <0x0006>;
+ };
+ };
};
diff --git a/arch/arm/boot/dts/tegra30.dtsi b/arch/arm/boot/dts/tegra30.dtsi
index 44a6dbba7081..c387d46f737c 100644
--- a/arch/arm/boot/dts/tegra30.dtsi
+++ b/arch/arm/boot/dts/tegra30.dtsi
@@ -123,6 +123,7 @@ host1x@50000000 {
resets = <&tegra_car 28>;
reset-names = "host1x";
iommus = <&mc TEGRA_SWGROUP_HC>;
+ operating-points-v2 = <&host1x_dvfs_opp_table>;
#address-cells = <1>;
#size-cells = <1>;
@@ -180,6 +181,7 @@ gr2d@54140000 {
clocks = <&tegra_car TEGRA30_CLK_GR2D>;
resets = <&tegra_car 21>;
reset-names = "2d";
+ operating-points-v2 = <&gr2d_dvfs_opp_table>;
iommus = <&mc TEGRA_SWGROUP_G2>;
};
@@ -193,6 +195,7 @@ gr3d@54180000 {
resets = <&tegra_car 24>,
<&tegra_car 98>;
reset-names = "3d", "3d2";
+ operating-points-v2 = <&gr3d_dvfs_opp_table>;
iommus = <&mc TEGRA_SWGROUP_NV>,
<&mc TEGRA_SWGROUP_NV2>;
@@ -207,6 +210,7 @@ dc@54200000 {
clock-names = "dc", "parent";
resets = <&tegra_car 27>;
reset-names = "dc";
+ operating-points-v2 = <&dc0_dvfs_opp_table>;
iommus = <&mc TEGRA_SWGROUP_DC>;
@@ -237,6 +241,7 @@ dc@54240000 {
clock-names = "dc", "parent";
resets = <&tegra_car 26>;
reset-names = "dc";
+ operating-points-v2 = <&dc1_dvfs_opp_table>;
iommus = <&mc TEGRA_SWGROUP_DCB>;
@@ -268,6 +273,7 @@ hdmi@54280000 {
resets = <&tegra_car 51>;
reset-names = "hdmi";
status = "disabled";
+ operating-points-v2 = <&hdmi_dvfs_opp_table>;
};
tvo@542c0000 {
@@ -466,6 +472,7 @@ vde@6001a000 {
reset-names = "vde", "mc";
resets = <&tegra_car 61>, <&mc TEGRA30_MC_RESET_VDE>;
iommus = <&mc TEGRA_SWGROUP_VDE>;
+ operating-points-v2 = <&vde_dvfs_opp_table>;
};
apbmisc@70000800 {
@@ -574,6 +581,7 @@ pwm: pwm@7000a000 {
resets = <&tegra_car 17>;
reset-names = "pwm";
status = "disabled";
+ operating-points-v2 = <&pwm_dvfs_opp_table>;
};
rtc@7000e000 {
@@ -906,6 +914,7 @@ mmc@78000000 {
resets = <&tegra_car 14>;
reset-names = "sdhci";
status = "disabled";
+ operating-points-v2 = <&sdmmc1_dvfs_opp_table>;
};
mmc@78000200 {
@@ -928,6 +937,7 @@ mmc@78000400 {
resets = <&tegra_car 69>;
reset-names = "sdhci";
status = "disabled";
+ operating-points-v2 = <&sdmmc3_dvfs_opp_table>;
};
mmc@78000600 {
@@ -952,6 +962,7 @@ usb@7d000000 {
nvidia,needs-double-reset;
nvidia,phy = <&phy1>;
status = "disabled";
+ operating-points-v2 = <&usbd_dvfs_opp_table>;
};
phy1: usb-phy@7d000000 {
@@ -991,6 +1002,7 @@ usb@7d004000 {
reset-names = "usb";
nvidia,phy = <&phy2>;
status = "disabled";
+ operating-points-v2 = <&usb2_dvfs_opp_table>;
};
phy2: usb-phy@7d004000 {
@@ -1029,6 +1041,7 @@ usb@7d008000 {
reset-names = "usb";
nvidia,phy = <&phy3>;
status = "disabled";
+ operating-points-v2 = <&usb3_dvfs_opp_table>;
};
phy3: usb-phy@7d008000 {
--
2.27.0
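As a rough sketch of how a consumer driver ties such an OPP table to
the "core" supply through the standard OPP helpers (generic names; the
individual driver patches in this series differ in their details):

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/pm_opp.h>

static int foo_init_opp_table(struct device *dev)
{
	static const char * const reg_names[] = { "core" };
	struct opp_table *opp_table;
	int err;

	/* Matches the core-supply property from the DT bindings */
	opp_table = dev_pm_opp_set_regulators(dev, reg_names,
					      ARRAY_SIZE(reg_names));
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	err = dev_pm_opp_of_add_table(dev);
	if (err) {
		dev_pm_opp_put_regulators(opp_table);
		return err;
	}

	/*
	 * From here on, dev_pm_opp_set_rate(dev, rate) sets both the
	 * clock rate and the matching CORE voltage from the table.
	 */
	return 0;
}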
On Thu, Nov 05, 2020 at 02:43:57AM +0300, Dmitry Osipenko wrote:
> Introduce core voltage scaling for NVIDIA Tegra20/30 SoCs, which reduces
> power consumption and heating of the Tegra chips. The Tegra SoC has
> multiple hardware units that belong to the core power domain of the SoC
> and share the core voltage. The voltage must be selected in accordance
> with the minimum requirement of every core hardware unit.
[...]
Just looked briefly through the series - it looks like there is a lot of
code duplication in *_init_opp_table() functions. Could this be made
more generic / data-driven?
Best Regards
Michał Mirosław
+ Viresh
On Thu, 5 Nov 2020 at 00:44, Dmitry Osipenko <[email protected]> wrote:
>
> Introduce core voltage scaling for NVIDIA Tegra20/30 SoCs, which reduces
> power consumption and heating of the Tegra chips. The Tegra SoC has
> multiple hardware units that belong to the core power domain of the SoC
> and share the core voltage. The voltage must be selected in accordance
> with the minimum requirement of every core hardware unit.
>
> [...]
>
I need some more time to review this, but just a quick check found a
few potential issues...
The "core-supply", that you specify as a regulator for each
controller's device node, is not the way we describe power domains.
Instead, it seems like you should register a power-domain provider
(with the help of genpd) and implement the ->set_performance_state()
callback for it. Each device node should then be hooked up to this
power-domain, rather than to a "core-supply". For DT bindings, please
have a look at Documentation/devicetree/bindings/power/power-domain.yaml
and Documentation/devicetree/bindings/power/power_domain.txt.
With regard to the "sync state" problem (preventing performance state
changes until all consumers have been attached), this can then be
managed by the genpd provider driver instead.
Kind regards
Uffe
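For reference, a minimal sketch of the kind of genpd provider being
suggested, with the CORE regulator hidden behind
->set_performance_state(). All names are illustrative, and the
state-equals-microvolts convention is an assumption, not code from this
series:

#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/pm_domain.h>
#include <linux/regulator/consumer.h>

struct core_domain {
	struct generic_pm_domain genpd;
	struct regulator *reg;		/* the CORE rail */
};

static int core_set_performance_state(struct generic_pm_domain *genpd,
				      unsigned int state)
{
	struct core_domain *domain =
		container_of(genpd, struct core_domain, genpd);

	/* The state is assumed to be the required voltage in microvolts */
	return regulator_set_voltage(domain->reg, state, INT_MAX);
}

Such a provider would assign the callback to genpd.set_performance_state,
register itself with pm_genpd_init() and expose the domain through
of_genpd_add_provider_simple(), letting consumers express their votes via
required-opps instead of a core-supply.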
On 05-11-20, 10:45, Ulf Hansson wrote:
> + Viresh
Thanks Ulf. I found a bug in the OPP core because you cc'd me here :)
> On Thu, 5 Nov 2020 at 00:44, Dmitry Osipenko <[email protected]> wrote:
> I need some more time to review this, but just a quick check found a
> few potential issues...
>
> The "core-supply", that you specify as a regulator for each
> controller's device node, is not the way we describe power domains.
Maybe I misunderstood your comment here, but there are two ways of
scaling the voltage of a device depending on whether it is a regulator
(and can be modeled as one in the kernel) or a power domain.
In the case of Qcom earlier (when we added the performance-state stuff),
the eventual hardware was out of the kernel's control and we didn't want
(weren't allowed) to model it as a virtual regulator just to pass the
votes to the RPM. And so we did what we did.
But if the hardware (where the voltage is required to be changed) is
indeed a regulator and is modeled as one, then what Dmitry has done
looks okay, i.e. add a supply in the device's node and a microvolt
property in the DT entries.
--
viresh
On Thu, 5 Nov 2020 at 11:06, Viresh Kumar <[email protected]> wrote:
>
> On 05-11-20, 10:45, Ulf Hansson wrote:
> > + Viresh
>
> Thanks Ulf. I found a bug in the OPP core because you cc'd me here :)
Happy to help. :-)
>
> > On Thu, 5 Nov 2020 at 00:44, Dmitry Osipenko <[email protected]> wrote:
> > I need some more time to review this, but just a quick check found a
> > few potential issues...
> >
> > The "core-supply", that you specify as a regulator for each
> > controller's device node, is not the way we describe power domains.
>
> Maybe I misunderstood your comment here, but there are two ways of
> scaling the voltage of a device depending on whether it is a regulator
> (and can be modeled as one in the kernel) or a power domain.
I am not objecting about scaling the voltage through a regulator,
that's fine to me. However, encoding a power domain as a regulator
(even if it may seem like a regulator) isn't. Well, unless Mark Brown
has changed his mind about this.
In this case, it seems like the regulator supply belongs in the
description of the power domain provider.
>
> In the case of Qcom earlier (when we added the performance-state stuff),
> the eventual hardware was out of the kernel's control and we didn't want
> (weren't allowed) to model it as a virtual regulator just to pass the
> votes to the RPM. And so we did what we did.
>
> But if the hardware (where the voltage is required to be changed) is
> indeed a regulator and is modeled as one, then what Dmitry has done
> looks okay, i.e. add a supply in the device's node and a microvolt
> property in the DT entries.
I guess I haven't paid enough attention to how power domain regulators
are being described then. I was under the impression that the CPUfreq
case was a bit specific - and we had legacy bindings to stick with.
Can you point me to some other existing examples of where power domain
regulators are specified as a regulator in each device's node?
Kind regards
Uffe
On 05-11-20, 11:34, Ulf Hansson wrote:
> I am not objecting about scaling the voltage through a regulator,
> that's fine to me. However, encoding a power domain as a regulator
> (even if it may seem like a regulator) isn't. Well, unless Mark Brown
> has changed his mind about this.
>
> In this case, it seems like the regulator supply belongs in the
> description of the power domain provider.
Okay, I wasn't sure if it is a power domain or a regulator here. Btw,
how do we identify if it is a power domain or a regulator?
> > In case of Qcom earlier (when we added the performance-state stuff),
> > the eventual hardware was out of kernel's control and we didn't wanted
> > (allowed) to model it as a virtual regulator just to pass the votes to
> > the RPM. And so we did what we did.
> >
> > But if the hardware (where the voltage is required to be changed) is
> > indeed a regulator and is modeled as one, then what Dmitry has done
> > looks okay. i.e. add a supply in the device's node and microvolt
> > property in the DT entries.
>
> I guess I haven't paid enough attention to how power domain regulators
> are being described then. I was under the impression that the CPUfreq
> case was a bit specific - and we had legacy bindings to stick with.
>
> Can you point me to some other existing examples of where power domain
> regulators are specified as a regulator in each device's node?
No, I thought it is a regulator here and not a power domain.
--
viresh
On Thu, 5 Nov 2020 at 11:40, Viresh Kumar <[email protected]> wrote:
>
> On 05-11-20, 11:34, Ulf Hansson wrote:
> > I am not objecting about scaling the voltage through a regulator,
> > that's fine to me. However, encoding a power domain as a regulator
> > (even if it may seem like a regulator) isn't. Well, unless Mark Brown
> > has changed his mind about this.
> >
> > In this case, it seems like the regulator supply belongs in the
> > description of the power domain provider.
>
> Okay, I wasn't sure if it is a power domain or a regulator here. Btw,
> how do we identify if it is a power domain or a regulator?
Good question. It's not a crystal clear line between them, I think.
A power domain, to me, means that some part of the silicon (a group of
controllers or just a single piece, for example) needs some kind of
resource (typically a power rail) to be enabled to be functional, to
start with. If there are operating points involved, that's also a
clear indication to me that it's not a regular regulator.
Maybe we should try to specify this more exactly in some
documentation, somewhere.
>
> > > In the case of Qcom earlier (when we added the performance-state stuff),
> > > the eventual hardware was out of the kernel's control and we didn't want
> > > (weren't allowed) to model it as a virtual regulator just to pass the
> > > votes to the RPM. And so we did what we did.
> > >
> > > But if the hardware (where the voltage is required to be changed) is
> > > indeed a regulator and is modeled as one, then what Dmitry has done
> > > looks okay, i.e. add a supply in the device's node and a microvolt
> > > property in the DT entries.
> >
> > I guess I haven't paid enough attention to how power domain regulators
> > are being described then. I was under the impression that the CPUfreq
> > case was a bit specific - and we had legacy bindings to stick with.
> >
> > Can you point me to some other existing examples of where power domain
> > regulators are specified as a regulator in each device's node?
>
> No, I thought it is a regulator here and not a power domain.
Okay, thanks!
Kind regards
Uffe
On 05-11-20, 11:56, Ulf Hansson wrote:
> On Thu, 5 Nov 2020 at 11:40, Viresh Kumar <[email protected]> wrote:
> > Btw, how do we identify if it is a power domain or a regulator?
To be honest, I was a bit afraid and embarrassed to ask this question,
and was expecting people to make fun of me in return :)
> Good question. It's not a crystal clear line between them, I think.
And I was relieved after reading this :)
> A power domain, to me, means that some part of the silicon (a group of
> controllers or just a single piece, for example) needs some kind of
> resource (typically a power rail) to be enabled to be functional, to
> start with.
Isn't this part of what a regulator does as well? I.e.
enabling/disabling of the regulator or power to a group of
controllers.
Over that, the regulator does voltage/current scaling as well, which
normally the power domains don't do (though we did that in the
performance-state case).
> If there are operating points involved, that's also a
> clear indication to me that it's not a regular regulator.
Is there any example of that? I hope by OPP you meant both freq and
voltage here. I am not sure if I know of a case where a power domain
handles both of them.
> Maybe we should try to specify this more exactly in some
> documentation, somewhere.
I think yes, it is very much required. And in the absence of that, I
think many (or most) of the platforms that also need to scale the
voltage would have modeled their hardware as a regulator and not a PM
domain.
What I always thought was:
- Module that can just enable/disable power to a block of SoC is a
power domain.
- Module that can enable/disable as well as scale voltage is a
regulator.
And so I thought that this patchset has done the right thing. This
changed a bit with the qcom stuff where the IP to be configured was in
control of RPM and not Linux and so we couldn't add it as a regulator.
If it was controlled by Linux, it would have been a regulator in
kernel for sure :)
--
viresh
On Thu, 5 Nov 2020 at 12:13, Viresh Kumar <[email protected]> wrote:
>
> On 05-11-20, 11:56, Ulf Hansson wrote:
> > On Thu, 5 Nov 2020 at 11:40, Viresh Kumar <[email protected]> wrote:
> > > Btw, how do we identify if it is a power domain or a regulator ?
>
> To be honest, I was a bit afraid and embarrassed to ask this question,
> and was expecting people to make fun of me in return :)
>
> > Good question. It's not a crystal clear line between them, I think.
>
> And I was relieved after reading this :)
>
> > A power domain, to me, means that some part of the silicon (a group of
> > controllers or just a single piece, for example) needs some kind of
> > resource (typically a power rail) to be enabled to be functional, to
> > start with.
>
> Isn't this part of what a regulator does as well? I.e.
> enabling/disabling of the regulator or power to a group of
> controllers.
It could, but it shouldn't.
>
> Over that, the regulator does voltage/current scaling as well, which
> normally the power domains don't do (though we did that in the
> performance-state case).
>
> > If there are operating points involved, that's also a
> > clear indication to me that it's not a regular regulator.
>
> Is there any example of that? I hope by OPP you meant both freq and
> voltage here. I am not sure if I know of a case where a power domain
> handles both of them.
It may be both voltage and frequency - but in some cases only voltage.
From a HW point of view, many legacy ARM platforms have power domains
that work like this.
As you know, the DVFS case has for many years not been solved in a
generic way, but mostly via platform-specific hacks.
The worst ones are probably those hacking clock drivers (which I myself
have also contributed to). Have a look at clk_prcmu_opp_prepare(), for
example, which is used by the UX500 platform. Another option has been
to use the devfreq framework, but it has limitations in this regard
too.
That said, I am hoping that people start moving towards deploying and
implementing DVFS through the power-domain approach, together with the
OPPs. Maybe there are still some pieces missing from an infrastructure
point of view, but that should become more evident as more people start
using it.
>
> > Maybe we should try to specify this more exactly in some
> > documentation, somewhere.
>
> I think yes, it is very much required. And in the absence of that, I
> think many (or most) of the platforms that also need to scale the
> voltage would have modeled their hardware as a regulator and not a PM
> domain.
>
> What I always thought was:
>
> - Module that can just enable/disable power to a block of SoC is a
> power domain.
>
> - Module that can enable/disable as well as scale voltage is a
> regulator.
>
> And so I thought that this patchset has done the right thing. This
> changed a bit with the qcom stuff where the IP to be configured was in
> control of RPM and not Linux and so we couldn't add it as a regulator.
> If it was controlled by Linux, it would have been a regulator in
> kernel for sure :)
In my view, DT bindings have consistently been pushed back on over the
years when they have tried to model power domains as regulator supplies
from consumer device nodes. Hence, people have tried other things, as
I mentioned above.
I definitely agree that we need to update some documentation,
explaining things more exactly. Additionally, it seems like a talk at
some conference would make sense, as a way to spread the word.
Kind regards
Uffe
05.11.2020 04:45, Michał Mirosław wrote:
> On Thu, Nov 05, 2020 at 02:43:57AM +0300, Dmitry Osipenko wrote:
>> Introduce core voltage scaling for NVIDIA Tegra20/30 SoCs, which reduces
>> power consumption and heating of the Tegra chips. The Tegra SoC has
>> multiple hardware units that belong to the core power domain of the SoC
>> and share the core voltage. The voltage must be selected in accordance
>> with the minimum requirement of every core hardware unit.
> [...]
>
> Just looked briefly through the series - it looks like there is a lot of
> code duplication in *_init_opp_table() functions. Could this be made
> more generic / data-driven?
Indeed, it should be possible to add a common helper. I had a quick
thought about doing it too, but then decided to defer it for starters
since there were some differences among the needs of the drivers. I'll
take a closer look for the v2, thanks!
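For instance, a common helper could consolidate the duplicated
supported-hw selection and OPP table registration. A sketch under the
assumption that it would live in drivers/soc/tegra/common.c; the name
tegra_core_dev_init_opp_table() is made up:

#include <linux/bits.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/pm_opp.h>
#include <soc/tegra/fuse.h>

int tegra_core_dev_init_opp_table(struct device *dev)
{
	struct opp_table *hw_table;
	u32 hw_version;
	int err;

	/*
	 * All CORE devices select OPPs by the process ID (Tegra20) or
	 * the speedo ID (Tegra30+), matching the opp-supported-hw
	 * bindings from the earlier patches.
	 */
	if (of_machine_is_compatible("nvidia,tegra20"))
		hw_version = BIT(tegra_sku_info.soc_process_id);
	else
		hw_version = BIT(tegra_sku_info.soc_speedo_id);

	hw_table = dev_pm_opp_set_supported_hw(dev, &hw_version, 1);
	if (IS_ERR(hw_table))
		return PTR_ERR(hw_table);

	err = dev_pm_opp_of_add_table(dev);
	if (err) {
		dev_pm_opp_put_supported_hw(hw_table);
		return err;
	}

	return 0;
}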
05.11.2020 12:45, Ulf Hansson wrote:
...
> I need some more time to review this, but just a quick check found a
> few potential issues...
Thank you for starting the review! I'm pretty sure it will take a couple
of revisions until all the questions are resolved :)
> The "core-supply", that you specify as a regulator for each
> controller's device node, is not the way we describe power domains.
> Instead, it seems like you should register a power-domain provider
> (with the help of genpd) and implement the ->set_performance_state()
> callback for it. Each device node should then be hooked up to this
> power-domain, rather than to a "core-supply". For DT bindings, please
> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
> and Documentation/devicetree/bindings/power/power_domain.txt.
>
> With regard to the "sync state" problem (preventing performance state
> changes until all consumers have been attached), this can then be
> managed by the genpd provider driver instead.
I'll need to take a closer look at GENPD, thank you for the suggestion.
Sounds like a software GENPD driver which manages clocks and voltages
could be a good idea, but it could also be unnecessary
over-engineering. Let's see...
05.11.2020 18:22, Dmitry Osipenko wrote:
> 05.11.2020 12:45, Ulf Hansson wrote:
> ...
>> I need some more time to review this, but just a quick check found a
>> few potential issues...
>
> Thank you for starting the review! I'm pretty sure it will take a couple
> of revisions until all the questions are resolved :)
>
>> The "core-supply", that you specify as a regulator for each
>> controller's device node, is not the way we describe power domains.
>> Instead, it seems like you should register a power-domain provider
>> (with the help of genpd) and implement the ->set_performance_state()
>> callback for it. Each device node should then be hooked up to this
>> power-domain, rather than to a "core-supply". For DT bindings, please
>> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
>> and Documentation/devicetree/bindings/power/power_domain.txt.
>>
>> With regard to the "sync state" problem (preventing performance state
>> changes until all consumers have been attached), this can then be
>> managed by the genpd provider driver instead.
>
> I'll need to take a closer look at GENPD, thank you for the suggestion.
>
> Sounds like a software GENPD driver which manages clocks and voltages
> could be a good idea, but it could also be unnecessary
> over-engineering. Let's see...
>
Hello Ulf and all,
I took a detailed look at the GENPD and tried to implement it. Here is
what was found:
1. The GENPD framework doesn't aggregate performance requests from the
attached devices. This means that if deviceA requests performance state
10 and then deviceB requests state 3, the framework will set the
domain's state to 3 instead of 10.
https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L376
2. The GENPD framework has a sync() callback in the genpd.domain
structure, but this callback isn't allowed to be used by the GENPD
implementation. The GENPD framework always overrides that callback for
its own needs. Hence GENPD doesn't allow solving the bootstrapping
state-synchronization problem in a nice way.
https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L2606
3. Tegra doesn't have a dedicated hardware power-controller for the core
domain; instead, there is only an external voltage regulator. Hence we
would need to create a phony device-tree node for the virtual power
domain, which is probably the wrong thing to do.
===
Perhaps it should be possible to create some hacks to work around
bullets 2 and 3 in order to achieve what we need for DVFS on Tegra, but
bullet 1 isn't solvable without changing how the GENPD core works.
Altogether, GENPD in its current form is the wrong abstraction for
system-wide DVFS in a case where multiple devices share a power domain
and this domain is a voltage regulator. The regulator framework is the
correct abstraction in this case for today.
On 08-11-20, 15:19, Dmitry Osipenko wrote:
> I took a detailed look at the GENPD and tried to implement it. Here is
> what was found:
>
> 1. The GENPD framework doesn't aggregate performance requests from the
> attached devices. This means that if deviceA requests performance state
> 10 and then deviceB requests state 3, the framework will set the
> domain's state to 3 instead of 10.
It does. Look at _genpd_reeval_performance_state().
--
viresh
09.11.2020 07:43, Viresh Kumar wrote:
> On 08-11-20, 15:19, Dmitry Osipenko wrote:
>> I took a detailed look at GENPD and tried to implement it. Here is
>> what I found:
>>
>> 1. The GENPD framework doesn't aggregate performance requests from the
>> attached devices. This means that if deviceA requests performance state
>> 10 and then deviceB requests state 3, the framework will set the domain's
>> state to 3 instead of 10.
>
> It does. Look at _genpd_reeval_performance_state().
>
Thanks, I probably had a bug in the quick prototype and then overlooked
that function.
09.11.2020 07:47, Dmitry Osipenko wrote:
> 09.11.2020 07:43, Viresh Kumar wrote:
>> On 08-11-20, 15:19, Dmitry Osipenko wrote:
>>> I took a detailed look at GENPD and tried to implement it. Here is
>>> what I found:
>>>
>>> 1. The GENPD framework doesn't aggregate performance requests from the
>>> attached devices. This means that if deviceA requests performance state
>>> 10 and then deviceB requests state 3, the framework will set the domain's
>>> state to 3 instead of 10.
>>
>> It does. Look at _genpd_reeval_performance_state().
>>
>
> Thanks, I probably had a bug in the quick prototype and then overlooked
> that function.
>
If it's okay to have a non-hardware device-tree node for the domain, then
I can try again.
What I also haven't mentioned is that GENPD adds some extra complexity
to some drivers (3d, video decoder) because we will need to handle both
the new GENPD and the legacy Tegra-specific pre-GENPD era domains.
I'm also not exactly sure what the topology of the domains should look
like, because Tegra has a power controller (PMC) which manages the power
rails of a few hardware units. Perhaps it should be
device -> PMC domain -> CORE domain
but I'm not exactly sure for now.
On 09-11-20, 08:10, Dmitry Osipenko wrote:
> 09.11.2020 07:47, Dmitry Osipenko wrote:
> > 09.11.2020 07:43, Viresh Kumar wrote:
> >> On 08-11-20, 15:19, Dmitry Osipenko wrote:
> >>> I took a detailed look at GENPD and tried to implement it. Here is
> >>> what I found:
> >>>
> >>> 1. The GENPD framework doesn't aggregate performance requests from the
> >>> attached devices. This means that if deviceA requests performance state
> >>> 10 and then deviceB requests state 3, the framework will set the domain's
> >>> state to 3 instead of 10.
> >>
> >> It does. Look at _genpd_reeval_performance_state().
> >>
> >
> > Thanks, I probably had a bug in the quick prototype and then overlooked
> > that function.
> >
>
> If it's okay to have a non-hardware device-tree node for the domain, then
> I can try again.
>
> What I also haven't mentioned is that GENPD adds some extra complexity
> to some drivers (3d, video decoder) because we will need to handle both
> the new GENPD and the legacy Tegra-specific pre-GENPD era domains.
>
> I'm also not exactly sure what the topology of the domains should look
> like, because Tegra has a power controller (PMC) which manages the power
> rails of a few hardware units. Perhaps it should be
>
> device -> PMC domain -> CORE domain
>
> but I'm not exactly sure for now.
I am also confused about whether it should be a domain or a regulator,
but that is for Ulf to tell :)
--
viresh
On Thu, 05 Nov 2020 02:43:58 +0300, Dmitry Osipenko wrote:
> Document new DVFS OPP table and voltage regulator properties of the
> Host1x bus and devices sitting on the bus.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> .../display/tegra/nvidia,tegra20-host1x.txt | 56 +++++++++++++++++++
> 1 file changed, 56 insertions(+)
>
Reviewed-by: Rob Herring <[email protected]>
On Thu, 05 Nov 2020 02:44:00 +0300, Dmitry Osipenko wrote:
> Document new DVFS OPP table and voltage regulator properties of the
> PWM controller.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> .../devicetree/bindings/pwm/nvidia,tegra20-pwm.txt | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
Reviewed-by: Rob Herring <[email protected]>
On Thu, 05 Nov 2020 02:44:03 +0300, Dmitry Osipenko wrote:
> Document new DVFS OPP table and voltage regulator properties of the
> Tegra EHCI controller.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> Documentation/devicetree/bindings/usb/nvidia,tegra20-ehci.txt | 2 ++
> 1 file changed, 2 insertions(+)
>
Reviewed-by: Rob Herring <[email protected]>
On Sun, 8 Nov 2020 at 13:19, Dmitry Osipenko <[email protected]> wrote:
>
> 05.11.2020 18:22, Dmitry Osipenko wrote:
> > 05.11.2020 12:45, Ulf Hansson wrote:
> > ...
> >> I need some more time to review this, but just a quick check found a
> >> few potential issues...
> >
> > Thank you for starting the review! I'm pretty sure it will take a couple
> > of revisions until all the questions are resolved :)
> >
> >> The "core-supply", that you specify as a regulator for each
> >> controller's device node, is not the way we describe power domains.
> >> Instead, it seems like you should register a power-domain provider
> >> (with the help of genpd) and implement the ->set_performance_state()
> >> callback for it. Each device node should then be hooked up to this
> >> power-domain, rather than to a "core-supply". For DT bindings, please
> >> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
> >> and Documentation/devicetree/bindings/power/power_domain.txt.
> >>
> >> In regards to the "sync state" problem (preventing to change
> >> performance states until all consumers have been attached), this can
> >> then be managed by the genpd provider driver instead.
> >
> > I'll need to take a closer look at GENPD, thank you for the suggestion.
> >
> > Sounds like a software GENPD driver which manages clocks and voltages
> > could be a good idea, but it could also be unnecessary
> > over-engineering. Let's see...
> >
>
> Hello Ulf and all,
>
> I took a detailed look at GENPD and tried to implement it. Here is
> what I found:
>
> 1. The GENPD framework doesn't aggregate performance requests from the
> attached devices. This means that if deviceA requests performance state
> 10 and then deviceB requests state 3, the framework will set the domain's
> state to 3 instead of 10.
>
> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L376
As Viresh also stated, genpd does aggregate the votes. It even
performs the aggregation hierarchically (a genpd is allowed to have
parent(s) to model a topology).
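For illustration, the per-domain aggregation conceptually boils down to
taking the maximum of all device votes. A simplified sketch (not the
exact kernel code, which also folds in the votes of child domains):

#include <linux/kernel.h>
#include <linux/pm_domain.h>

static unsigned int genpd_max_requested_state(struct generic_pm_domain *genpd)
{
	struct pm_domain_data *pdd;
	unsigned int state = 0;

	/* Pick the highest performance state requested by any attached device. */
	list_for_each_entry(pdd, &genpd->dev_list, list_node)
		state = max(state, to_gpd_data(pdd)->performance_state);

	return state;
}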
>
> 2. The GENPD framework has a sync() callback in the genpd.domain structure,
> but this callback isn't allowed to be used by the GENPD implementation.
> The GENPD framework always overrides that callback for its own needs.
> Hence GENPD doesn't allow solving the bootstrapping
> state-synchronization problem in a nice way.
>
> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L2606
That ->sync() callback isn't the callback you are looking for; it's a
PM-domain-specific callback and has other purposes.
To solve the problem you refer to, your genpd provider driver (a
platform driver) should assign its ->sync_state() callback. The
->sync_state() callback will be invoked, when all consumer devices
have been attached (and probed) to their corresponding provider.
You may have a look at drivers/cpuidle/cpuidle-psci-domain.c, to see
an example of how this works. If there is anything unclear, just tell
me and I will try to help.
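To make this concrete, here is a minimal sketch of how a provider
driver could wire this up; the "tegra-core-domain" driver and its
callbacks are hypothetical names, not code from this series:

#include <linux/module.h>
#include <linux/platform_device.h>

static int tegra_core_domain_probe(struct platform_device *pdev)
{
	/* Register the genpd provider here and keep the core voltage at
	 * a safe maximum until every consumer has probed. */
	return 0;
}

static void tegra_core_domain_sync_state(struct device *dev)
{
	/* Invoked once all consumers have been attached and probed; the
	 * aggregated performance vote is now trustworthy, so the
	 * boot-time voltage boost can be dropped. */
	dev_dbg(dev, "all consumers probed, dropping voltage boost\n");
}

static struct platform_driver tegra_core_domain_driver = {
	.probe = tegra_core_domain_probe,
	.driver = {
		.name = "tegra-core-domain",
		.sync_state = tegra_core_domain_sync_state,
	},
};
module_platform_driver(tegra_core_domain_driver);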
>
> 3. Tegra doesn't have a dedicated hardware power-controller for the core
> domain; instead there is only an external voltage regulator. Hence we
> will need to create a phony device-tree node for the virtual power
> domain, which is probably the wrong thing to do.
No, this is absolutely the correct thing to do.
This isn't a virtual power domain, it's a real power domain. You only
happen to model the control of it as a regulator, as it fits nicely
with that for *this* SoC. Don't get me wrong, that's fine as long as
the supply is specified only in the power-domain provider node.
On another SoC, you might have a different FW interface for the power
domain provider that doesn't fit well with the regulator. When that
happens, all you need to do is to implement a new power domain
provider and potentially re-define the power domain topology. More
importantly, you don't need to re-invent yet another slew of
device-specific bindings for each SoC.
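As a rough illustration, such a provider could look like the sketch
below, with the supply handled entirely inside the provider. All names
and the state-to-microvolts mapping are illustrative assumptions, not
code from this series:

#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/pm_domain.h>
#include <linux/regulator/consumer.h>

static struct regulator *core_reg; /* the external core supply */

/* Assumption: the performance state value encodes the target microvolts. */
static int tegra_core_set_performance_state(struct generic_pm_domain *genpd,
					    unsigned int state)
{
	return regulator_set_voltage(core_reg, state, INT_MAX);
}

static struct generic_pm_domain tegra_core_domain = {
	.name = "tegra-core",
	.set_performance_state = tegra_core_set_performance_state,
};

static int tegra_core_domain_register(struct device_node *np)
{
	int err;

	err = pm_genpd_init(&tegra_core_domain, NULL, false);
	if (err)
		return err;

	/* Consumers then reference this domain via a power-domains phandle. */
	return of_genpd_add_provider_simple(np, &tegra_core_domain);
}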
>
> ===
>
> Perhaps it should be possible to create some hacks to work around
> bullets 2 and 3 in order to achieve what we need for DVFS on Tegra, but
> bullet 1 isn't solvable without changing how the GENPD core works.
>
> Altogether, GENPD in its current form is the wrong abstraction for
> system-wide DVFS in a case where multiple devices share a power domain
> and this domain is a voltage regulator. The regulator framework is the
> correct abstraction in this case for today.
Well, I admit it's a bit complex. But it solves the problem in a
nicely abstracted way that should work for everybody, at least in my
opinion.
Although, let's not exclude that there are pieces missing in genpd or
the opp layer, as this DVFS feature is rather new - but then we should
just extend/fix it.
Kind regards
Uffe
On Thu, 5 Nov 2020 at 00:44, Dmitry Osipenko <[email protected]> wrote:
>
> Document new DVFS OPP table and voltage regulator properties of the
> Host1x bus and devices sitting on the bus.
>
> Signed-off-by: Dmitry Osipenko <[email protected]>
> ---
> .../display/tegra/nvidia,tegra20-host1x.txt | 56 +++++++++++++++++++
> 1 file changed, 56 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
> index 34d993338453..0593c8df70bb 100644
> --- a/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
> +++ b/Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
> @@ -20,6 +20,18 @@ Required properties:
> - reset-names: Must include the following entries:
> - host1x
>
> +Optional properties:
> +- operating-points-v2: See ../bindings/opp/opp.txt for details.
> +- core-supply: Phandle of voltage regulator of the SoC "core" power domain.
> +
> +For each opp entry in 'operating-points-v2' table of host1x and its modules:
> +- opp-supported-hw: One bitfield indicating:
> + On Tegra20: SoC process ID mask
> + On Tegra30+: SoC speedo ID mask
> +
> + A bitwise AND is performed against the value and if any bit
> + matches, the OPP gets enabled.
> +
> Each host1x client module having to perform DMA through the Memory Controller
> should have the interconnect endpoints set to the Memory Client and External
> Memory respectively.
> @@ -45,6 +57,8 @@ of the following host1x client modules:
> - interconnect-names: Must include name of the interconnect path for each
> interconnect entry. Consult TRM documentation for information about
> available memory clients, see MEMORY CONTROLLER section.
> + - core-supply: Phandle of voltage regulator of the SoC "core" power domain.
> + - operating-points-v2: See ../bindings/opp/opp.txt for details.
>
As discussed in the thread for the cover letter, we already have DT
bindings for power-domains (providers and consumers). Please use them
instead of adding SoC-specific bindings to each peripheral device.
[...]
Kind regards
Uffe
11.11.2020 14:38, Ulf Hansson wrote:
> On Sun, 8 Nov 2020 at 13:19, Dmitry Osipenko <[email protected]> wrote:
>>
>> 05.11.2020 18:22, Dmitry Osipenko wrote:
>>> 05.11.2020 12:45, Ulf Hansson wrote:
>>> ...
>>>> I need some more time to review this, but just a quick check found a
>>>> few potential issues...
>>>
>>> Thank you for starting the review! I'm pretty sure it will take a couple
>>> of revisions until all the questions are resolved :)
>>>
>>>> The "core-supply", that you specify as a regulator for each
>>>> controller's device node, is not the way we describe power domains.
>>>> Instead, it seems like you should register a power-domain provider
>>>> (with the help of genpd) and implement the ->set_performance_state()
>>>> callback for it. Each device node should then be hooked up to this
>>>> power-domain, rather than to a "core-supply". For DT bindings, please
>>>> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
>>>> and Documentation/devicetree/bindings/power/power_domain.txt.
>>>>
>>>> In regards to the "sync state" problem (preventing to change
>>>> performance states until all consumers have been attached), this can
>>>> then be managed by the genpd provider driver instead.
>>>
>>> I'll need to take a closer look at GENPD, thank you for the suggestion.
>>>
>>> Sounds like a software GENPD driver which manages clocks and voltages
>>> could be a good idea, but it could also be unnecessary
>>> over-engineering. Let's see...
>>>
>>
>> Hello Ulf and all,
>>
>> I took a detailed look at GENPD and tried to implement it. Here is
>> what I found:
>>
>> 1. The GENPD framework doesn't aggregate performance requests from the
>> attached devices. This means that if deviceA requests performance state
>> 10 and then deviceB requests state 3, the framework will set the domain's
>> state to 3 instead of 10.
>>
>> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L376
>
> As Viresh also stated, genpd does aggregate the votes. It even
> performs the aggregation hierarchically (a genpd is allowed to have
> parent(s) to model a topology).
Yes, I already found and fixed the bug which confused me previously and
it's working well now.
>> 2. The GENPD framework has a sync() callback in the genpd.domain structure,
>> but this callback isn't allowed to be used by the GENPD implementation.
>> The GENPD framework always overrides that callback for its own needs.
>> Hence GENPD doesn't allow solving the bootstrapping
>> state-synchronization problem in a nice way.
>>
>> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L2606
>
> That ->sync() callback isn't the callback you are looking for; it's a
> PM-domain-specific callback and has other purposes.
>
> To solve the problem you refer to, your genpd provider driver (a
> platform driver) should assign its ->sync_state() callback. The
> ->sync_state() callback will be invoked, when all consumer devices
> have been attached (and probed) to their corresponding provider.
>
> You may have a look at drivers/cpuidle/cpuidle-psci-domain.c, to see
> an example of how this works. If there is anything unclear, just tell
> me and I will try to help.
Indeed, thank you for the clarification. This variant works well.
>> 3. Tegra doesn't have a dedicated hardware power-controller for the core
>> domain; instead there is only an external voltage regulator. Hence we
>> will need to create a phony device-tree node for the virtual power
>> domain, which is probably the wrong thing to do.
>
> No, this is absolutely the correct thing to do.
>
> This isn't a virtual power domain, it's a real power domain. You only
> happen to model the control of it as a regulator, as it fits nicely
> with that for *this* SoC. Don't get me wrong, that's fine as long as
> the supply is specified only in the power-domain provider node.
>
> On another SoC, you might have a different FW interface for the power
> domain provider that doesn't fit well with the regulator. When that
> happens, all you need to do is to implement a new power domain
> provider and potentially re-define the power domain topology. More
> importantly, you don't need to re-invent yet another slew of
> device-specific bindings for each SoC.
>
>>
>> ===
>>
>> Perhaps it should be possible to create some hacks to work around
>> bullets 2 and 3 in order to achieve what we need for DVFS on Tegra, but
>> bullet 1 isn't solvable without changing how the GENPD core works.
>>
>> Altogether, GENPD in its current form is the wrong abstraction for
>> system-wide DVFS in a case where multiple devices share a power domain
>> and this domain is a voltage regulator. The regulator framework is the
>> correct abstraction in this case for today.
>
> Well, I admit it's a bit complex. But it solves the problem in a
> nicely abstracted way that should work for everybody, at least in my
> opinion.
The OPP framework supports both voltage regulators and power domains,
hiding the implementation details from drivers. This means that the OPP
API usage will be the same regardless of which approach (regulator or
power domain) is used for a particular SoC.
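For example, the driver side could stay as simple as the following
sketch regardless of the backend (hedged: the exact set of OPP helpers
varies between kernel versions):

#include <linux/pm_opp.h>

static int tegra_dev_init_opp(struct device *dev, unsigned long rate)
{
	int err;

	/* Parse the operating-points-v2 table from the device node. */
	err = dev_pm_opp_of_add_table(dev);
	if (err)
		return err;

	/* Picks the matching OPP and adjusts the clock rate together with
	 * the regulator voltage or the power-domain performance state,
	 * whichever backs the table. */
	return dev_pm_opp_set_rate(dev, rate);
}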
> Although, let's not exclude that there are pieces missing in genpd or
> the opp layer, as this DVFS feature is rather new - but then we should
> just extend/fix it.
It would be nice to have per-device GENPD performance stats.
Thierry, could you please let me know what you think about replacing
the regulator with the power domain? Do you think it's a worthwhile change?
The difference in comparison to using a voltage regulator directly is
minimal; basically the core-supply phandle is replaced with a
power-domain phandle in a device tree.
The only thing which makes me feel a bit uncomfortable is that there is
no real hardware node for the power domain in a device-tree.
On Thu, Nov 12, 2020 at 10:57:27PM +0300, Dmitry Osipenko wrote:
> 11.11.2020 14:38, Ulf Hansson wrote:
> > On Sun, 8 Nov 2020 at 13:19, Dmitry Osipenko <[email protected]> wrote:
> >>
> >> 05.11.2020 18:22, Dmitry Osipenko wrote:
> >>> 05.11.2020 12:45, Ulf Hansson wrote:
> >>> ...
> >>>> I need some more time to review this, but just a quick check found a
> >>>> few potential issues...
> >>>
> >>> Thank you for starting the review! I'm pretty sure it will take a couple
> >>> of revisions until all the questions are resolved :)
> >>>
> >>>> The "core-supply", that you specify as a regulator for each
> >>>> controller's device node, is not the way we describe power domains.
> >>>> Instead, it seems like you should register a power-domain provider
> >>>> (with the help of genpd) and implement the ->set_performance_state()
> >>>> callback for it. Each device node should then be hooked up to this
> >>>> power-domain, rather than to a "core-supply". For DT bindings, please
> >>>> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
> >>>> and Documentation/devicetree/bindings/power/power_domain.txt.
> >>>>
> >>>> In regards to the "sync state" problem (preventing to change
> >>>> performance states until all consumers have been attached), this can
> >>>> then be managed by the genpd provider driver instead.
> >>>
> >>> I'll need to take a closer look at GENPD, thank you for the suggestion.
> >>>
> >>> Sounds like a software GENPD driver which manages clocks and voltages
> >>> could be a good idea, but it could also be unnecessary
> >>> over-engineering. Let's see...
> >>>
> >>
> >> Hello Ulf and all,
> >>
> >> I took a detailed look at GENPD and tried to implement it. Here is
> >> what I found:
> >>
> >> 1. The GENPD framework doesn't aggregate performance requests from the
> >> attached devices. This means that if deviceA requests performance state
> >> 10 and then deviceB requests state 3, the framework will set the domain's
> >> state to 3 instead of 10.
> >>
> >> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L376
> >
> > As Viresh also stated, genpd does aggregate the votes. It even
> > performs the aggregation hierarchically (a genpd is allowed to have
> > parent(s) to model a topology).
>
> Yes, I already found and fixed the bug which confused me previously and
> it's working well now.
>
> >> 2. The GENPD framework has a sync() callback in the genpd.domain structure,
> >> but this callback isn't allowed to be used by the GENPD implementation.
> >> The GENPD framework always overrides that callback for its own needs.
> >> Hence GENPD doesn't allow solving the bootstrapping
> >> state-synchronization problem in a nice way.
> >>
> >> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L2606
> >
> > That ->sync() callback isn't the callback you are looking for; it's a
> > PM-domain-specific callback and has other purposes.
> >
> > To solve the problem you refer to, your genpd provider driver (a
> > platform driver) should assign its ->sync_state() callback. The
> > ->sync_state() callback will be invoked, when all consumer devices
> > have been attached (and probed) to their corresponding provider.
> >
> > You may have a look at drivers/cpuidle/cpuidle-psci-domain.c, to see
> > an example of how this works. If there is anything unclear, just tell
> > me and I will try to help.
>
> Indeed, thank you for the clarification. This variant works well.
>
> >> 3. Tegra doesn't have a dedicated hardware power-controller for the core
> >> domain; instead there is only an external voltage regulator. Hence we
> >> will need to create a phony device-tree node for the virtual power
> >> domain, which is probably the wrong thing to do.
> >
> > No, this is absolutely the correct thing to do.
> >
> > This isn't a virtual power domain, it's a real power domain. You only
> > happen to model the control of it as a regulator, as it fits nicely
> > with that for *this* SoC. Don't get me wrong, that's fine as long as
> > the supply is specified only in the power-domain provider node.
> >
> > On another SoC, you might have a different FW interface for the power
> > domain provider that doesn't fit well with the regulator. When that
> > happens, all you need to do is to implement a new power domain
> > provider and potentially re-define the power domain topology. More
> > importantly, you don't need to re-invent yet another slew of
> > device-specific bindings for each SoC.
> >
> >>
> >> ===
> >>
> >> Perhaps it should be possible to create some hacks to work around
> >> bullets 2 and 3 in order to achieve what we need for DVFS on Tegra, but
> >> bullet 1 isn't solvable without changing how the GENPD core works.
> >>
> >> Altogether, GENPD in its current form is the wrong abstraction for
> >> system-wide DVFS in a case where multiple devices share a power domain
> >> and this domain is a voltage regulator. The regulator framework is the
> >> correct abstraction in this case for today.
> >
> > Well, I admit it's a bit complex. But it solves the problem in a
> > nicely abstracted way that should work for everybody, at least in my
> > opinion.
>
> The OPP framework supports both voltage regulators and power domains,
> hiding the implementation details from drivers. This means that the OPP
> API usage will be the same regardless of which approach (regulator or
> power domain) is used for a particular SoC.
>
> > Although, let's not exclude that there are pieces missing in genpd or
> > the opp layer, as this DVFS feature is rather new - but then we should
> > just extend/fix it.
>
> It would be nice to have per-device GENPD performance stats.
>
> Thierry, could you please let me know what you think about replacing
> the regulator with the power domain? Do you think it's a worthwhile change?
>
> The difference in comparison to using a voltage regulator directly is
> minimal; basically the core-supply phandle is replaced with a
> power-domain phandle in a device tree.
These new power-domain handles would have to be added to devices that
potentially already have a power-domain handle, right? Isn't that going
to cause issues? I vaguely recall that we already have multiple power
domains for the XUSB controller and we have to jump through extra hoops
to make that work.
> The only thing which makes me feel a bit uncomfortable is that there is
> no real hardware node for the power domain in a device-tree.
Could we anchor the new power domain at the PMC for example? That would
allow us to avoid the "virtual" node. On the other hand, if we were to
use a regulator, we'd be adding a node for that, right? So isn't this
effectively going to be the same node if we use a power domain? Both
software constructs are using the same voltage regulator, so they should
be able to be described by the same device tree node, shouldn't they?
Thierry
12.11.2020 23:43, Thierry Reding wrote:
>> The difference in comparison to using a voltage regulator directly is
>> minimal; basically the core-supply phandle is replaced with a
>> power-domain phandle in a device tree.
> These new power-domain handles would have to be added to devices that
> potentially already have a power-domain handle, right? Isn't that going
> to cause issues? I vaguely recall that we already have multiple power
> domains for the XUSB controller and we have to jump through extra hoops
> to make that work.
I modeled the core PD as a parent of the PMC sub-domains, which
is presumably a correct way to represent the domain topology.
https://gist.github.com/digetx/dfd92c7f7e0aa6cef20403c4298088d7
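In genpd terms that topology is just a parent/child registration,
roughly like in the sketch below; the domain objects are illustrative
placeholders for whatever providers end up existing:

#include <linux/pm_domain.h>

/* Illustrative domain objects; in a real driver these would come from
 * the providers for the core rail and the PMC power islands. */
extern struct generic_pm_domain tegra_core_domain;
extern struct generic_pm_domain pmc_3d_domain;
extern struct generic_pm_domain pmc_venc_domain;

static int tegra_pd_topology_init(void)
{
	int err;

	/* Each PMC island becomes a child of the core domain, so device
	 * votes propagate up to the shared core voltage. */
	err = pm_genpd_add_subdomain(&tegra_core_domain, &pmc_3d_domain);
	if (err)
		return err;

	return pm_genpd_add_subdomain(&tegra_core_domain, &pmc_venc_domain);
}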
>> The only thing which makes me feel a bit uncomfortable is that there is
>> no real hardware node for the power domain in a device-tree.
> Could we anchor the new power domain at the PMC for example? That would
> allow us to avoid the "virtual" node.
I had a thought about using PMC for the core domain, but I'm not sure
whether it would be an entirely correct hardware description, although
it would be nice to have it this way.
This is what Tegra TRM says about PMC:
"The Power Management Controller (PMC) block interacts with an external
or Power Manager Unit (PMU). The PMC mostly controls the entry and exit
of the system from different sleep modes. It provides power-gating
controllers for SOC and CPU power-islands and also provides scratch
storage to save some of the context during sleep modes (when CPU and/or
SOC power rails are off). Additionally, PMC interacts with the external
Power Manager Unit (PMU)."
The core voltage regulator is a part of the PMU.
Not all core SoC devices are behind PMC, IIUC.
> On the other hand, if we were to
> use a regulator, we'd be adding a node for that, right? So isn't this
> effectively going to be the same node if we use a power domain? Both
> software constructs are using the same voltage regulator, so they should
> be able to be described by the same device tree node, shouldn't they?
I'm not exactly sure what you mean by "use a regulator" and "we'd
be adding a node for that"; could you please clarify? This v1 approach
uses a core-supply phandle (i.e. a regulator is used); it doesn't require
extra nodes.
On Thu, 12 Nov 2020 at 23:14, Dmitry Osipenko <[email protected]> wrote:
>
> 12.11.2020 23:43, Thierry Reding wrote:
> >> The difference in comparison to using a voltage regulator directly is
> >> minimal; basically the core-supply phandle is replaced with a
> >> power-domain phandle in a device tree.
> > These new power-domain handles would have to be added to devices that
> > potentially already have a power-domain handle, right? Isn't that going
> > to cause issues? I vaguely recall that we already have multiple power
> > domains for the XUSB controller and we have to jump through extra hoops
> > to make that work.
>
> I modeled the core PD as a parent of the PMC sub-domains, which
> is presumably a correct way to represent the domain topology.
>
> https://gist.github.com/digetx/dfd92c7f7e0aa6cef20403c4298088d7
That could make sense, it seems.
Anyway, this made me realize that
dev_pm_genpd_set_performance_state(dev) returns -EINVAL in case the
device's genpd doesn't have the ->set_performance_state() callback
assigned. This may not be correct. Instead, we should likely consider an
empty callback as okay and continue to walk the topology upwards to the
parent domain, etc.
Just wanted to point this out. I intend to post a patch for this as soon
as I can.
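For reference, this is the consumer-side call in question; the wrapper
is illustrative, and how the state value is chosen (e.g. from an OPP
table) is up to the driver:

#include <linux/pm_domain.h>

static int tegra_dev_vote_perf(struct device *dev, unsigned int state)
{
	/* Votes for a performance state on the device's PM domain; with
	 * the change proposed above, the vote would also propagate to a
	 * parent domain when the child lacks ->set_performance_state(). */
	return dev_pm_genpd_set_performance_state(dev, state);
}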
[...]
Kind regards
Uffe
13.11.2020 17:45, Ulf Hansson wrote:
> On Thu, 12 Nov 2020 at 23:14, Dmitry Osipenko <[email protected]> wrote:
>>
>> 12.11.2020 23:43, Thierry Reding wrote:
>>>> The difference in comparison to using a voltage regulator directly is
>>>> minimal; basically the core-supply phandle is replaced with a
>>>> power-domain phandle in a device tree.
>>> These new power-domain handles would have to be added to devices that
>>> potentially already have a power-domain handle, right? Isn't that going
>>> to cause issues? I vaguely recall that we already have multiple power
>>> domains for the XUSB controller and we have to jump through extra hoops
>>> to make that work.
>>
>> I modeled the core PD as a parent of the PMC sub-domains, which
>> is presumably a correct way to represent the domain topology.
>>
>> https://gist.github.com/digetx/dfd92c7f7e0aa6cef20403c4298088d7
>
> That could make sense, it seems.
>
> Anyway, this made me realize that
> dev_pm_genpd_set_performance_state(dev) returns -EINVAL in case the
> device's genpd doesn't have the ->set_performance_state() callback
> assigned. This may not be correct. Instead, we should likely consider an
> empty callback as okay and continue to walk the topology upwards to the
> parent domain, etc.
>
> Just wanted to point this out. I intend to post a patch for this as soon
> as I can.
Thank you, I was also going to make the same change, but haven't
bothered to do it so far. Please feel free to CC me on the patch.
On Fri, Nov 13, 2020 at 01:14:45AM +0300, Dmitry Osipenko wrote:
> 12.11.2020 23:43, Thierry Reding wrote:
> >> The difference in comparison to using a voltage regulator directly is
> >> minimal; basically the core-supply phandle is replaced with a
> >> power-domain phandle in a device tree.
> > These new power-domain handles would have to be added to devices that
> > potentially already have a power-domain handle, right? Isn't that going
> > to cause issues? I vaguely recall that we already have multiple power
> > domains for the XUSB controller and we have to jump through extra hoops
> > to make that work.
>
> I modeled the core PD as a parent of the PMC sub-domains, which
> is presumably a correct way to represent the domain topology.
>
> https://gist.github.com/digetx/dfd92c7f7e0aa6cef20403c4298088d7
>
> >> The only thing which makes me feel a bit uncomfortable is that there is
> >> no real hardware node for the power domain in a device-tree.
> > Could we anchor the new power domain at the PMC for example? That would
> > allow us to avoid the "virtual" node.
>
> I had a thought about using PMC for the core domain, but I'm not sure
> whether it would be an entirely correct hardware description, although
> it would be nice to have it this way.
>
> This is what Tegra TRM says about PMC:
>
> "The Power Management Controller (PMC) block interacts with an external
> or Power Manager Unit (PMU). The PMC mostly controls the entry and exit
> of the system from different sleep modes. It provides power-gating
> controllers for SOC and CPU power-islands and also provides scratch
> storage to save some of the context during sleep modes (when CPU and/or
> SOC power rails are off). Additionally, PMC interacts with the external
> Power Manager Unit (PMU)."
>
> The core voltage regulator is a part of the PMU.
>
> Not all core SoC devices are behind PMC, IIUC.
There are usually some SoC devices that are always-on. Things like the
RTC, for example, can never be power-gated, as far as I recall. On newer
chips there are usually many more blocks that can't be power-gated at
all.
> > On the other hand, if we were to
> > use a regulator, we'd be adding a node for that, right? So isn't this
> > effectively going to be the same node if we use a power domain? Both
> > software constructs are using the same voltage regulator, so they should
> > be able to be described by the same device tree node, shouldn't they?
>
> I'm not exactly sure what you mean by "use a regulator" and "we'd
> be adding a node for that"; could you please clarify? This v1 approach
> uses a core-supply phandle (i.e. a regulator is used); it doesn't require
> extra nodes.
What I meant to say was that the actual supply voltage is generated by
some device (typically one of the SD outputs of the PMIC). Whether we
model this as a power domain or a regulator doesn't really matter,
right? So I'm wondering if the device that generates the voltage should
be the power domain provider, just like it is the provider of the
regulator if this was modelled as a regulator.
Thierry
13.11.2020 19:35, Thierry Reding wrote:
> On Fri, Nov 13, 2020 at 01:14:45AM +0300, Dmitry Osipenko wrote:
>> 12.11.2020 23:43, Thierry Reding wrote:
>>>> The difference in comparison to using a voltage regulator directly is
>>>> minimal; basically the core-supply phandle is replaced with a
>>>> power-domain phandle in a device tree.
>>> These new power-domain handles would have to be added to devices that
>>> potentially already have a power-domain handle, right? Isn't that going
>>> to cause issues? I vaguely recall that we already have multiple power
>>> domains for the XUSB controller and we have to jump through extra hoops
>>> to make that work.
>>
>> I modeled the core PD as a parent of the PMC sub-domains, which
>> is presumably a correct way to represent the domain topology.
>>
>> https://gist.github.com/digetx/dfd92c7f7e0aa6cef20403c4298088d7
>>
>>>> The only thing which makes me feel a bit uncomfortable is that there is
>>>> no real hardware node for the power domain in a device-tree.
>>> Could we anchor the new power domain at the PMC for example? That would
>>> allow us to avoid the "virtual" node.
>>
>> I had a thought about using PMC for the core domain, but I'm not sure
>> whether it would be an entirely correct hardware description, although
>> it would be nice to have it this way.
>>
>> This is what Tegra TRM says about PMC:
>>
>> "The Power Management Controller (PMC) block interacts with an external
>> or Power Manager Unit (PMU). The PMC mostly controls the entry and exit
>> of the system from different sleep modes. It provides power-gating
>> controllers for SOC and CPU power-islands and also provides scratch
>> storage to save some of the context during sleep modes (when CPU and/or
>> SOC power rails are off). Additionally, PMC interacts with the external
>> Power Manager Unit (PMU)."
>>
>> The core voltage regulator is a part of the PMU.
>>
>> Not all core SoC devices are behind PMC, IIUC.
>
> There are usually some SoC devices that are always-on. Things like the
> RTC, for example, can never be power-gated, as far as I recall. On newer
> chips there are usually many more blocks that can't be power-gated at
> all.
The RTC is actually a special power domain on Tegra; it's not a part of
the CORE domain, and the two are separate from each other.
We need to know which blocks belong to a power domain and what the power
topology of these blocks is. I think we already have this knowledge, so
it shouldn't be a problem.
>>> On the other hand, if we were to
>>> use a regulator, we'd be adding a node for that, right? So isn't this
>>> effectively going to be the same node if we use a power domain? Both
>>> software constructs are using the same voltage regulator, so they should
>>> be able to be described by the same device tree node, shouldn't they?
>>
>> I'm not exactly sure what you mean by "use a regulator" and "we'd
>> be adding a node for that"; could you please clarify? This v1 approach
>> uses a core-supply phandle (i.e. a regulator is used); it doesn't require
>> extra nodes.
>
> What I meant to say was that the actual supply voltage is generated by
> some device (typically one of the SD outputs of the PMIC). Whether we
> model this as a power domain or a regulator doesn't really matter,
> right? So I'm wondering if the device that generates the voltage should
> be the power domain provider, just like it is the provider of the
> regulator if this was modelled as a regulator.
Technically this could be done and it shouldn't be difficult to add
GENPD support to the regulator framework, but I think this is an
inaccurate hardware description.
It wouldn't be correct to describe internal SoC parts as
directly connected to an external voltage regulator. The core voltage
regulator is connected to one of several power rails of the Tegra
chip. There is no good way to describe this hardware in terms of voltage
regulators, which is why this v1 series added a core-supply to each
SoC component of each board's DT individually.
It's actually one of the benefits of using a separate DT node for the
power-domain, which describes the "Tegra Core" part of the Tegra SoC,
and thus it all stays within tegra.dtsi. This means that the PD explicitly
belongs to the SoC internals, as opposed to describing the PD like it's an
external/off-chip component.
Initially I didn't much like that there is no hardware address to back
up the power domain node in a DT, but actually there is no address for
the power rail either. Hence it should be better to describe the hardware
by keeping the PD internal to the SoC. Note that the PD may potentially
require knowledge about specifics of a particular SoC, while an external
regulator doesn't belong to the SoC. Also, I guess technically there could
be multiple external regulators which power a single SoC rail.
01.12.2020 16:57, Mark Brown wrote:
> On Thu, 5 Nov 2020 02:43:57 +0300, Dmitry Osipenko wrote:
>> Introduce core voltage scaling for NVIDIA Tegra20/30 SoCs, which reduces
>> power consumption and heating of the Tegra chips. Tegra SoC has multiple
>> hardware units which belong to a core power domain of the SoC and share
>> the core voltage. The voltage must be selected in accordance to a minimum
>> requirement of every core hardware unit.
>>
>> The minimum core voltage requirement depends on:
>>
>> [...]
>
> Applied to
>
> https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next
>
> Thanks!
>
> [1/1] regulator: Allow skipping disabled regulators in regulator_check_consumers()
> (no commit info)
>
> All being well this means that it will be integrated into the linux-next
> tree (usually sometime in the next 24 hours) and sent to Linus during
> the next merge window (or sooner if it is a bug fix), however if
> problems are discovered then the patch may be dropped or reverted.
>
> You may get further e-mails resulting from automated or manual testing
> and review of the tree, please engage with people reporting problems and
> send followup patches addressing any issues that are reported if needed.
>
> If any updates are required or you are submitting further changes they
> should be sent as incremental updates against current git, existing
> patches will not be replaced.
>
> Please add any relevant lists and maintainers to the CCs when replying
> to this mail.
Hello Mark,
Could you please hold off on this patch? It won't be needed in v2, which
will use power domains.
Also, I'm not sure whether the "sound" tree is suitable for any of the
patches in this series.
On Tue, Dec 01, 2020 at 05:17:20PM +0300, Dmitry Osipenko wrote:
> 01.12.2020 16:57, Mark Brown wrote:
> > [1/1] regulator: Allow skipping disabled regulators in regulator_check_consumers()
> > (no commit info)
> Could you please hold off on this patch? It won't be needed in v2, which
> will use power domains.
> Also, I'm not sure whether the "sound" tree is suitable for any of the
> patches in this series.
It didn't actually get applied (note the "no commit info") - it looks
like b4's matching code got confused and decided to generate mails for
anything that I've ever downloaded and not posted.
01.12.2020 17:34, Mark Brown wrote:
> On Tue, Dec 01, 2020 at 05:17:20PM +0300, Dmitry Osipenko wrote:
>> 01.12.2020 16:57, Mark Brown wrote:
>
>>> [1/1] regulator: Allow skipping disabled regulators in regulator_check_consumers()
>>> (no commit info)
>
>> Could you please hold off on this patch? It won't be needed in v2, which
>> will use power domains.
>
>> Also, I'm not sure whether the "sound" tree is suitable for any of the
>> patches in this series.
>
> It didn't actually get applied (note the "no commit info") - it looks
> like b4's matching code got confused and decided to generate mails for
> anything that I've ever downloaded and not posted.
>
Alright, thank you for the clarification.