The controller device is the top-level parent of the PCIe host bridge,
the PCI-PCI bridge and the PCIe endpoint, as shown below.
  PCIe controller (top-level parent, parent of the host bridge)
          |
          v
  PCIe host bridge (parent of the PCI-PCI bridge)
          |
          v
  PCI-PCI bridge (parent of the endpoint driver)
          |
          v
  PCIe endpoint driver
Now, when the controller device attempts runtime suspend, the PM
framework checks the runtime PM state of its child device (the host
bridge), finds that runtime PM is disabled there, and so allows the
parent (the controller device) to enter runtime suspend. Only if the
child device's runtime PM state were 'active' would it prevent the
parent from suspending.
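(As an aside, not part of this change: a minimal sketch of that
child/parent accounting, using a hypothetical 'child' device that is
assumed to provide runtime PM callbacks, and only the standard runtime
PM helpers; the real bookkeeping lives in drivers/base/power/runtime.c.)

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical child device, for illustration only. */
static void example_child_keeps_parent_awake(struct device *child)
{
	pm_runtime_enable(child);
	if (pm_runtime_resume_and_get(child))	/* child and its enabled ancestors resume */
		return;

	/*
	 * While the child is RPM_ACTIVE it is counted against its parent,
	 * so the parent's runtime suspend is refused. A child whose
	 * runtime PM was never enabled contributes nothing here, which is
	 * the host bridge situation described above.
	 */

	pm_runtime_put(child);	/* child may suspend again, then so may its parent */
}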
Since runtime PM is disabled for the host bridge, the state of the
devices below it is not taken into account by the PM framework when it
decides whether the top-level parent, the PCIe controller, may suspend.
So the PM framework allows the controller to enter runtime suspend
irrespective of the state of the devices under the host bridge. This
breaks the runtime PM topology and can cause real PM issues, such as
the controller going into runtime suspend while an endpoint driver is
still doing transfers.
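(Again for illustration only: a hypothetical endpoint transfer path.
The function name is made up, but the runtime PM calls are the standard
ones an endpoint driver would use around I/O.)

#include <linux/pci.h>
#include <linux/pm_runtime.h>

/* Hypothetical endpoint driver transfer path, illustration only. */
static int example_ep_transfer(struct pci_dev *pdev)
{
	int ret;

	ret = pm_runtime_resume_and_get(&pdev->dev);	/* hold the device chain active */
	if (ret)
		return ret;

	/* ... program DMA / issue MMIO transfers here ... */

	pm_runtime_mark_last_busy(&pdev->dev);
	pm_runtime_put_autosuspend(&pdev->dev);
	return 0;
}

With runtime PM disabled on the host bridge, that reference is never
reflected in the host bridge's own runtime PM state, so the controller
sees no active child and can still suspend underneath the busy
endpoint; once the host bridge has runtime PM enabled, the reference
keeps the whole chain up to the controller awake.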
So enable runtime PM for the host bridge, so that the controller goes
to runtime suspend only when all of the child devices have gone to
runtime suspend.
Signed-off-by: Krishna chaitanya chundru <[email protected]>
---
Changes in v3:
- Moved the runtime PM API calls from the dwc driver to the PCI framework,
  as this is applicable to all controller drivers (suggested by mani)
- Updated the commit message.
- Link to v2: https://lore.kernel.org/all/[email protected]
Changes in v2:
- Updated commit message as suggested by mani.
- Link to v1: https://lore.kernel.org/r/[email protected]
---
---
drivers/pci/probe.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 20475ca30505..b7f9ff75b0b3 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -3108,6 +3108,10 @@ int pci_host_probe(struct pci_host_bridge *bridge)
 		pcie_bus_configure_settings(child);
 
 	pci_bus_add_devices(bus);
+
+	pm_runtime_set_active(&bridge->dev);
+	pm_runtime_enable(&bridge->dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(pci_host_probe);
---
base-commit: 30417e6592bfc489a78b3fe564cfe1960e383829
change-id: 20240609-runtime_pm-7e6de1190113
Best regards,
--
Krishna chaitanya chundru <[email protected]>
…
> So enable runtime PM for the host bridge, so that the controller goes
> to runtime suspend only when all of the child devices have gone to
> runtime suspend.
Would a "Fixes:" tag be applicable for this change?
Regards,
Markus
On 6/8/2024 8:14 PM, Krishna chaitanya chundru wrote:
[...]
> diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
> index 20475ca30505..b7f9ff75b0b3 100644
> --- a/drivers/pci/probe.c
> +++ b/drivers/pci/probe.c
> @@ -3108,6 +3108,10 @@ int pci_host_probe(struct pci_host_bridge *bridge)
>  		pcie_bus_configure_settings(child);
>  
>  	pci_bus_add_devices(bus);
> +
> +	pm_runtime_set_active(&bridge->dev);
> +	pm_runtime_enable(&bridge->dev);
Can you consider using devm_pm_runtime_enable() instead of
pm_runtime_enable()?
It takes care of calling pm_runtime_disable() when the bridge device is
removed. A PCIe controller driver calls pci_host_probe() after
allocating the bridge device, and on removal it calls
pci_remove_host_bus(), after which the bridge device is freed, but
nothing ever calls pm_runtime_disable() on it. I don't see a specific
functional issue here, since the bridge device is freed anyway, but we
do have a mechanism to undo what probe() did when the bridge device
goes away, so perhaps we should use it.
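For reference, a rough sketch of how the tail of pci_host_probe() might
look with that suggestion applied (untested; it assumes the function's
existing 'ret' local can be reused, otherwise one would be declared):

	pci_bus_add_devices(bus);

	pm_runtime_set_active(&bridge->dev);

	/*
	 * devm_pm_runtime_enable() enables runtime PM and registers a
	 * devres action that calls pm_runtime_disable() when the bridge
	 * device's resources are released, so the removal path needs no
	 * explicit pm_runtime_disable().
	 */
	ret = devm_pm_runtime_enable(&bridge->dev);
	if (ret)
		return ret;

	return 0;
}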
Regards,
Mayank
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(pci_host_probe);