On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> It becomes a common situation on some platforms that certain hardware
> setup needs to be done on the last standing cpu, and rpmh-rsc[1] is such
> an existing example. As figuring out the last standing cpu is really
> something generic, it adds CPU_LAST_PM_ENTER (and CPU_FIRST_PM_EXIT)
> event support to cpu_pm helper, so that individual driver can be
> notified when the last standing cpu is about to enter low power state.
Sorry for not getting back on the previous email thread.
When I meant I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
thinking new ones to be added as alternative. With this OSI cpuidle, we
have introduces the concept of power domains and I was check if we can
associate these requirements to them rather than introducing the first
and last cpu notion. The power domains already identify them in order
to turn on or off. Not sure if there is any notification mechanism in
genpd/power domains. I really don't like this addition. It is disintegrating
all the solutions for OSI and makes it hard to understand.
One solution I can think of(not sure if others like or if that is feasible)
is to create a parent power domain that encloses all the last level CPU
power domains, which means when the last one is getting powered off, you
will be asked to power off and you can take whatever action you want.
--
Regards,
Sudeep
On Wed, Feb 23, 2022 at 07:30:50PM +0000, Sudeep Holla wrote:
> On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> > It becomes a common situation on some platforms that certain hardware
> > setup needs to be done on the last standing cpu, and rpmh-rsc[1] is such
> > an existing example. As figuring out the last standing cpu is really
> > something generic, it adds CPU_LAST_PM_ENTER (and CPU_FIRST_PM_EXIT)
> > event support to cpu_pm helper, so that individual driver can be
> > notified when the last standing cpu is about to enter low power state.
>
> Sorry for not getting back on the previous email thread.
> When I meant I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
> thinking new ones to be added as alternative. With this OSI cpuidle, we
> have introduces the concept of power domains and I was check if we can
> associate these requirements to them rather than introducing the first
> and last cpu notion. The power domains already identify them in order
> to turn on or off. Not sure if there is any notification mechanism in
> genpd/power domains. I really don't like this addition. It is disintegrating
> all the solutions for OSI and makes it hard to understand.
>
> One solution I can think of(not sure if others like or if that is feasible)
> is to create a parent power domain that encloses all the last level CPU
> power domains, which means when the last one is getting powered off, you
> will be asked to power off and you can take whatever action you want.
Thanks Sudeep for the input! Yes, it works for me (if I understand your
suggestion correctly). So the needed changes on top of the current
version would be:
1) Declare MPM as a PD (power domain) provider and have it be the
parent PD of cpu cluster (the platform has only one cluster including
4 cpus).
diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
index 5bc5ce0b5d77..0cd0a9722ec5 100644
--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
@@ -240,6 +240,7 @@ CPU_PD3: cpu3 {
CLUSTER_PD: cpu-cluster0 {
#power-domain-cells = <0>;
+ power-domains = <&mpm>;
domain-idle-states = <&CLUSTER_SLEEP_0>;
};
};
@@ -490,6 +491,7 @@ mpm: interrupt-controller@45f01b8 {
interrupt-controller;
interrupt-parent = <&intc>;
#interrupt-cells = <2>;
+ #power-domain-cells = <0>;
qcom,mpm-pin-count = <96>;
qcom,mpm-pin-map = <2 275>, /* tsens0_tsens_upper_lower_int */
<5 296>, /* lpass_irq_out_sdc */
2) Add PD in MPM driver and call qcom_mpm_enter_sleep() from .power_off
hook of the PD.
diff --git a/drivers/irqchip/qcom-mpm.c b/drivers/irqchip/qcom-mpm.c
index d3d8251e57e4..f4409c169a3a 100644
--- a/drivers/irqchip/qcom-mpm.c
+++ b/drivers/irqchip/qcom-mpm.c
@@ -4,7 +4,6 @@
* Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
*/
-#include <linux/cpu_pm.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/init.h>
@@ -18,6 +17,7 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
#include <linux/slab.h>
#include <linux/soc/qcom/irq.h>
#include <linux/spinlock.h>
@@ -84,7 +84,7 @@ struct qcom_mpm_priv {
unsigned int map_cnt;
unsigned int reg_stride;
struct irq_domain *domain;
- struct notifier_block pm_nb;
+ struct generic_pm_domain genpd;
};
static u32 qcom_mpm_read(struct qcom_mpm_priv *priv, unsigned int reg,
@@ -312,23 +312,12 @@ static int qcom_mpm_enter_sleep(struct qcom_mpm_priv *priv)
return 0;
}
-static int qcom_mpm_cpu_pm_callback(struct notifier_block *nb,
- unsigned long action, void *data)
+static int mpm_pd_power_off(struct generic_pm_domain *genpd)
{
- struct qcom_mpm_priv *priv = container_of(nb, struct qcom_mpm_priv,
- pm_nb);
- int ret = NOTIFY_OK;
-
- switch (action) {
- case CPU_LAST_PM_ENTER:
- if (qcom_mpm_enter_sleep(priv))
- ret = NOTIFY_BAD;
- break;
- default:
- ret = NOTIFY_DONE;
- }
+ struct qcom_mpm_priv *priv = container_of(genpd, struct qcom_mpm_priv,
+ genpd);
- return ret;
+ return qcom_mpm_enter_sleep(priv);
}
static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
@@ -336,6 +325,7 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
struct platform_device *pdev = of_find_device_by_node(np);
struct device *dev = &pdev->dev;
struct irq_domain *parent_domain;
+ struct generic_pm_domain *genpd;
struct qcom_mpm_priv *priv;
unsigned int pin_cnt;
int i, irq;
@@ -387,6 +377,26 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
if (irq < 0)
return irq;
+ genpd = &priv->genpd;
+ genpd->flags = GENPD_FLAG_IRQ_SAFE;
+ genpd->power_off = mpm_pd_power_off;
+
+ genpd->name = devm_kasprintf(dev, GFP_KERNEL, "%s", dev_name(dev));
+ if (!genpd->name)
+ return -ENOMEM;
+
+ ret = pm_genpd_init(genpd, NULL, false);
+ if (ret) {
+ dev_err(dev, "failed to init genpd: %d\n", ret);
+ return ret;
+ }
+
+ ret = of_genpd_add_provider_simple(np, genpd);
+ if (ret) {
+ dev_err(dev, "failed to add genpd provider: %d\n", ret);
+ goto remove_genpd;
+ }
+
priv->mbox_client.dev = dev;
priv->mbox_chan = mbox_request_channel(&priv->mbox_client, 0);
if (IS_ERR(priv->mbox_chan)) {
@@ -420,15 +430,14 @@ static int qcom_mpm_init(struct device_node *np, struct device_node *parent)
goto remove_domain;
}
- priv->pm_nb.notifier_call = qcom_mpm_cpu_pm_callback;
- cpu_pm_register_notifier(&priv->pm_nb);
-
return 0;
remove_domain:
irq_domain_remove(priv->domain);
free_mbox:
mbox_free_channel(priv->mbox_chan);
+remove_genpd:
+ pm_genpd_remove(genpd);
return ret;
}
Let's me know if this is what you are asking for, thanks!
Shawn
On Fri, Feb 25, 2022 at 12:33:11PM +0800, Shawn Guo wrote:
> On Wed, Feb 23, 2022 at 07:30:50PM +0000, Sudeep Holla wrote:
> > On Wed, Feb 23, 2022 at 08:55:34PM +0800, Shawn Guo wrote:
> > > It becomes a common situation on some platforms that certain hardware
> > > setup needs to be done on the last standing cpu, and rpmh-rsc[1] is such
> > > an existing example. As figuring out the last standing cpu is really
> > > something generic, it adds CPU_LAST_PM_ENTER (and CPU_FIRST_PM_EXIT)
> > > event support to cpu_pm helper, so that individual driver can be
> > > notified when the last standing cpu is about to enter low power state.
> >
> > Sorry for not getting back on the previous email thread.
> > When I meant I didn't want to use CPU_CLUSTER_PM_{ENTER,EXIT}, I wasn't
> > thinking new ones to be added as alternative. With this OSI cpuidle, we
> > have introduces the concept of power domains and I was check if we can
> > associate these requirements to them rather than introducing the first
> > and last cpu notion. The power domains already identify them in order
> > to turn on or off. Not sure if there is any notification mechanism in
> > genpd/power domains. I really don't like this addition. It is disintegrating
> > all the solutions for OSI and makes it hard to understand.
> >
> > One solution I can think of(not sure if others like or if that is feasible)
> > is to create a parent power domain that encloses all the last level CPU
> > power domains, which means when the last one is getting powered off, you
> > will be asked to power off and you can take whatever action you want.
>
> Thanks Sudeep for the input! Yes, it works for me (if I understand your
> suggestion correctly). So the needed changes on top of the current
> version would be:
>
> 1) Declare MPM as a PD (power domain) provider and have it be the
> parent PD of cpu cluster (the platform has only one cluster including
> 4 cpus).
>
[...]
>
> Let's me know if this is what you are asking for, thanks!
Matches exactly. I don't know if there is anything I am missing to see,
but if this possible, for me it is easier to understand as this is all
linked to power-domains like other things in OSI cpuidle.
So yes, I prefer this, but let us see what others have to say about this.
--
Regards,
Sudeep