2022-10-18 15:37:50

by Ulf Hansson

Subject: [PATCH v3 0/6] soc: qcom: Add APSS RSC to the CPU cluster PM domain

Changes in v3:
- Re-worked the genpd patch (patch4) and updated the commit-msg.
- No other changes.

The APSS RSC is part of the CPU cluster PM domain and it's responsible for
flushing 'sleep' and 'wake' votes to avoid wasting energy. In particular, this
needs to be done when the last CPU in the cluster enters a deeper idle state.

To make this work, this series attaches the APSS RSC device to its
corresponding CPU cluster PM domain, which is managed by the
cpuidle-psci-domain through genpd.

More details are available in the commit messages for each patch.

Assuming there is agreement to apply this, I suggest that Bjorn funnels it via
his qcom tree for v6.2.

Note that there is also a related error report [1], [2], which this series
should fix. However, as the $subject series seems a bit too heavy for stable
kernels, there is another minor workaround in the pipe [2] for 6.1-rc back to
v5.18.

That said, we may then need to revert [2] for v6.2, assuming it gets merged
for v6.1-rc[n].

Kind regards
Ulf Hansson

[1]
https://lore.kernel.org/linux-pm/[email protected]/

[2]
https://lore.kernel.org/linux-arm-msm/[email protected]/


Lina Iyer (1):
soc: qcom: rpmh-rsc: Attach RSC to cluster PM domain

Maulik Shah (5):
dt-bindings: soc: qcom: Update devicetree binding document for
rpmh-rsc
arm64: dts: qcom: Add power-domains property for apps_rsc
PM: domains: Store the next hrtimer wakeup in genpd
soc: qcom: rpmh-rsc: Save base address of drv
soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup

.../bindings/soc/qcom/qcom,rpmh-rsc.yaml | 5 +
arch/arm64/boot/dts/qcom/sm8150.dtsi | 1 +
arch/arm64/boot/dts/qcom/sm8250.dtsi | 1 +
arch/arm64/boot/dts/qcom/sm8350.dtsi | 1 +
arch/arm64/boot/dts/qcom/sm8450.dtsi | 1 +
drivers/base/power/domain.c | 26 ++++
drivers/base/power/domain_governor.c | 3 +
drivers/soc/qcom/rpmh-internal.h | 9 +-
drivers/soc/qcom/rpmh-rsc.c | 146 ++++++++++++++++--
drivers/soc/qcom/rpmh.c | 4 +-
include/linux/pm_domain.h | 7 +
11 files changed, 188 insertions(+), 16 deletions(-)

--
2.34.1


2022-10-18 15:43:05

by Ulf Hansson

Subject: [PATCH v3 4/6] PM: domains: Store the next hrtimer wakeup in genpd

From: Maulik Shah <[email protected]>

The arch timer cannot wake up the Qualcomm Technologies, Inc. (QTI) SoCs
from the deeper CPUidle states. To be able to wake up from these deeper
states, another always-on timer needs to be programmed through the so-called
CONTROL_TCS.

As the RSC is part of the CPU subsystem and the corresponding APSS RSC device
is attached to the cluster PM domain (through genpd), it is responsible for
programming the always-on timer before entering any of these deeper CPUidle
states.

However, programming the timer requires information about the next hrtimer
wakeup for the cluster PM domain, which is currently only known by genpd.
Therefore, let's share this data through a new genpd helper function,
dev_pm_genpd_get_next_hrtimer().
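
For context, here is a minimal sketch of how a genpd-attached consumer could
use the new helper from its GENPD_NOTIFY_PRE_OFF handler. It is illustrative
only; struct foo_drv and foo_program_wakeup_timer() are placeholders, while
the actual user is the rpmh-rsc change later in this series.

static int foo_pd_callback(struct notifier_block *nb, unsigned long action,
			   void *data)
{
	struct foo_drv *drv = container_of(nb, struct foo_drv, pd_nb);
	ktime_t next;

	if (action == GENPD_NOTIFY_PRE_OFF) {
		/* Aggregated next hrtimer wakeup of the domain, or KTIME_MAX */
		next = dev_pm_genpd_get_next_hrtimer(drv->dev);
		if (next != KTIME_MAX)
			foo_program_wakeup_timer(drv, next);
	}

	return NOTIFY_OK;
}

/* Registration, typically done at probe time: */
drv->pd_nb.notifier_call = foo_pd_callback;
ret = dev_pm_genpd_add_notifier(drv->dev, &drv->pd_nb);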

Signed-off-by: Maulik Shah <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
[Ulf: Reworked the code and updated the commit message]
Signed-off-by: Ulf Hansson <[email protected]>
Tested-by: Dmitry Baryshkov <[email protected]> # SM8450
---
drivers/base/power/domain.c | 26 ++++++++++++++++++++++++++
drivers/base/power/domain_governor.c | 3 +++
include/linux/pm_domain.h | 7 +++++++
3 files changed, 36 insertions(+)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index ead135c7044c..c2dec386c72e 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -494,6 +494,31 @@ void dev_pm_genpd_set_next_wakeup(struct device *dev, ktime_t next)
}
EXPORT_SYMBOL_GPL(dev_pm_genpd_set_next_wakeup);

+/**
+ * dev_pm_genpd_get_next_hrtimer - Return the next_hrtimer for the genpd
+ * @dev: A device that is attached to the genpd.
+ *
+ * This routine should typically be called for a device, at the point of when a
+ * GENPD_NOTIFY_PRE_OFF notification has been sent for it.
+ *
+ * Returns the aggregated value of the genpd's next hrtimer or KTIME_MAX if no
+ * valid value have been set.
+ */
+ktime_t dev_pm_genpd_get_next_hrtimer(struct device *dev)
+{
+ struct generic_pm_domain *genpd;
+
+ genpd = dev_to_genpd_safe(dev);
+ if (!genpd)
+ return KTIME_MAX;
+
+ if (genpd->gd)
+ return genpd->gd->next_hrtimer;
+
+ return KTIME_MAX;
+}
+EXPORT_SYMBOL_GPL(dev_pm_genpd_get_next_hrtimer);
+
static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
unsigned int state_idx = genpd->state_idx;
@@ -1994,6 +2019,7 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
gd->max_off_time_ns = -1;
gd->max_off_time_changed = true;
gd->next_wakeup = KTIME_MAX;
+ gd->next_hrtimer = KTIME_MAX;
}

/* Use only one "off" state if there were no states declared */
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 282a3a135827..cc2c3a5a6d35 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -375,6 +375,9 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
if (idle_duration_ns <= 0)
return false;

+ /* Store the next domain_wakeup to allow consumers to use it. */
+ genpd->gd->next_hrtimer = domain_wakeup;
+
/*
* Find the deepest idle state that has its residency value satisfied
* and by also taking into account the power off latency for the state.
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index ebc351698090..1cd41bdf73cf 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -17,6 +17,7 @@
#include <linux/notifier.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>
+#include <linux/time64.h>

/*
* Flags to control the behaviour of a genpd.
@@ -95,6 +96,7 @@ struct genpd_governor_data {
s64 max_off_time_ns;
bool max_off_time_changed;
ktime_t next_wakeup;
+ ktime_t next_hrtimer;
bool cached_power_down_ok;
bool cached_power_down_state_idx;
};
@@ -232,6 +234,7 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state);
int dev_pm_genpd_add_notifier(struct device *dev, struct notifier_block *nb);
int dev_pm_genpd_remove_notifier(struct device *dev);
void dev_pm_genpd_set_next_wakeup(struct device *dev, ktime_t next);
+ktime_t dev_pm_genpd_get_next_hrtimer(struct device *dev);

extern struct dev_power_governor simple_qos_governor;
extern struct dev_power_governor pm_domain_always_on_gov;
@@ -293,6 +296,10 @@ static inline int dev_pm_genpd_remove_notifier(struct device *dev)
static inline void dev_pm_genpd_set_next_wakeup(struct device *dev, ktime_t next)
{ }

+static inline ktime_t dev_pm_genpd_get_next_hrtimer(struct device *dev)
+{
+ return KTIME_MAX;
+}
#define simple_qos_governor (*(struct dev_power_governor *)(NULL))
#define pm_domain_always_on_gov (*(struct dev_power_governor *)(NULL))
#endif
--
2.34.1

2022-10-18 15:51:00

by Ulf Hansson

Subject: [PATCH v3 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup

From: Maulik Shah <[email protected]>

The next wakeup timer value needs to be set in the always-on domain timer,
as the arch timer interrupt cannot wake up the SoC if, after the deepest
CPUidle states, the SoC also enters its deepest low power state.

To wake up the SoC in such scenarios, the earliest wakeup time is set in
CONTROL_TCS, and the firmware takes care of setting up its own timer in the
always-on domain with the next wakeup time. That timer wakes up the RSC and
sets the resources back to the wake state.
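
In short, the change below converts the next hrtimer deadline into an absolute
arch timer counter value. A rough sketch of the arithmetic, using the constants
from the patch and assuming that loops_per_jiffy tracks the arch timer
frequency (as with timer-based delays on arm64):

	/* Relative time until the next hrtimer wakeup */
	wakeup_us = ktime_to_us(ktime_sub(next_hrtimer, ktime_get()));

	/* 0x10C7 ~= 2^32 / 10^6, so this is wakeup_us * timer_freq / 10^6 */
	wakeup_cycles = (wakeup_us * 0x10C7UL * loops_per_jiffy * HZ) >> 32;

	/* Make it absolute, so the firmware can program its always-on timer */
	wakeup_cycles += arch_timer_read_counter();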

Signed-off-by: Maulik Shah <[email protected]>
Reviewed-by: Ulf Hansson <[email protected]>
Tested-by: Dmitry Baryshkov <[email protected]> # SM8450
---
drivers/soc/qcom/rpmh-internal.h | 3 ++
drivers/soc/qcom/rpmh-rsc.c | 61 ++++++++++++++++++++++++++++++++
drivers/soc/qcom/rpmh.c | 4 ++-
3 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 7866bb1e5361..39f53586f724 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -112,6 +112,7 @@ struct rpmh_ctrlr {
* @tcs_wait: Wait queue used to wait for @tcs_in_use to free up a
* slot
* @client: Handle to the DRV's client.
+ * @dev: RSC device.
*/
struct rsc_drv {
const char *name;
@@ -127,12 +128,14 @@ struct rsc_drv {
spinlock_t lock;
wait_queue_head_t tcs_wait;
struct rpmh_ctrlr client;
+ struct device *dev;
};

int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
const struct tcs_request *msg);
void rpmh_rsc_invalidate(struct rsc_drv *drv);
+void rpmh_rsc_write_next_wakeup(struct rsc_drv *drv);

void rpmh_tx_done(const struct tcs_request *msg, int r);
int rpmh_flush(struct rpmh_ctrlr *ctrlr);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 8e01697f59af..25b838bf9078 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -12,6 +12,7 @@
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
+#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/notifier.h>
@@ -25,6 +26,7 @@
#include <linux/spinlock.h>
#include <linux/wait.h>

+#include <clocksource/arm_arch_timer.h>
#include <soc/qcom/cmd-db.h>
#include <soc/qcom/tcs.h>
#include <dt-bindings/soc/qcom,rpmh-rsc.h>
@@ -49,6 +51,14 @@
#define DRV_NCPT_MASK 0x1F
#define DRV_NCPT_SHIFT 27

+/* Offsets for CONTROL TCS Registers */
+#define RSC_DRV_CTL_TCS_DATA_HI 0x38
+#define RSC_DRV_CTL_TCS_DATA_HI_MASK 0xFFFFFF
+#define RSC_DRV_CTL_TCS_DATA_HI_VALID BIT(31)
+#define RSC_DRV_CTL_TCS_DATA_LO 0x40
+#define RSC_DRV_CTL_TCS_DATA_LO_MASK 0xFFFFFFFF
+#define RSC_DRV_CTL_TCS_DATA_SIZE 32
+
/* Offsets for common TCS Registers, one bit per TCS */
#define RSC_DRV_IRQ_ENABLE 0x00
#define RSC_DRV_IRQ_STATUS 0x04
@@ -142,6 +152,14 @@
* +---------------------------------------------------+
*/

+#define USECS_TO_CYCLES(time_usecs) \
+ xloops_to_cycles((time_usecs) * 0x10C7UL)
+
+static inline unsigned long xloops_to_cycles(unsigned long xloops)
+{
+ return (xloops * loops_per_jiffy * HZ) >> 32;
+}
+
static inline void __iomem *
tcs_reg_addr(const struct rsc_drv *drv, int reg, int tcs_id)
{
@@ -756,6 +774,48 @@ static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
return set < max;
}

+/**
+ * rpmh_rsc_write_next_wakeup() - Write next wakeup in CONTROL_TCS.
+ * @drv: The controller
+ *
+ * Writes maximum wakeup cycles when called from suspend.
+ * Writes earliest hrtimer wakeup when called from idle.
+ */
+void rpmh_rsc_write_next_wakeup(struct rsc_drv *drv)
+{
+ ktime_t now, wakeup;
+ u64 wakeup_us, wakeup_cycles = ~0;
+ u32 lo, hi;
+
+ if (!drv->tcs[CONTROL_TCS].num_tcs || !drv->genpd_nb.notifier_call)
+ return;
+
+ /* Set highest time when system (timekeeping) is suspended */
+ if (system_state == SYSTEM_SUSPEND)
+ goto exit;
+
+ /* Find the earliest hrtimer wakeup from online cpus */
+ wakeup = dev_pm_genpd_get_next_hrtimer(drv->dev);
+
+ /* Find the relative wakeup in kernel time scale */
+ now = ktime_get();
+ wakeup = ktime_sub(wakeup, now);
+ wakeup_us = ktime_to_us(wakeup);
+
+ /* Convert the wakeup to arch timer scale */
+ wakeup_cycles = USECS_TO_CYCLES(wakeup_us);
+ wakeup_cycles += arch_timer_read_counter();
+
+exit:
+ lo = wakeup_cycles & RSC_DRV_CTL_TCS_DATA_LO_MASK;
+ hi = wakeup_cycles >> RSC_DRV_CTL_TCS_DATA_SIZE;
+ hi &= RSC_DRV_CTL_TCS_DATA_HI_MASK;
+ hi |= RSC_DRV_CTL_TCS_DATA_HI_VALID;
+
+ writel_relaxed(lo, drv->base + RSC_DRV_CTL_TCS_DATA_LO);
+ writel_relaxed(hi, drv->base + RSC_DRV_CTL_TCS_DATA_HI);
+}
+
/**
* rpmh_rsc_cpu_pm_callback() - Check if any of the AMCs are busy.
* @nfb: Pointer to the notifier block in struct rsc_drv.
@@ -1035,6 +1095,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&drv->client.batch_cache);

dev_set_drvdata(&pdev->dev, drv);
+ drv->dev = &pdev->dev;

ret = devm_of_platform_populate(&pdev->dev);
if (ret && pdev->dev.pm_domain) {
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 01765ee9cdfb..3a53ed99d03c 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -450,7 +450,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)

if (!ctrlr->dirty) {
pr_debug("Skipping flush, TCS has latest data.\n");
- goto exit;
+ goto write_next_wakeup;
}

/* Invalidate the TCSes first to avoid stale data */
@@ -479,6 +479,8 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)

ctrlr->dirty = false;

+write_next_wakeup:
+ rpmh_rsc_write_next_wakeup(ctrlr_to_drv(ctrlr));
exit:
spin_unlock(&ctrlr->cache_lock);
return ret;
--
2.34.1

2022-10-18 16:06:02

by Ulf Hansson

Subject: [PATCH v3 2/6] soc: qcom: rpmh-rsc: Attach RSC to cluster PM domain

From: Lina Iyer <[email protected]>

The RSC is part of the CPU subsystem and powers off the CPU domains when all
the CPUs are idle and no RPMH transactions are pending from any of the
drivers. The RSC needs to flush the 'sleep' and 'wake' votes, which are
critical for saving power when all the CPUs are idle.

Let's make the RSC part of the CPU PM domains by attaching it to the cluster
power domain. By registering for PM domain notifications, the RSC driver can
be notified that the last CPU is powering down the domain. At that point,
let's flush the 'sleep' and 'wake' votes that are stored in the data buffers
into the hardware, and also write the next wakeup into CONTROL_TCS.
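
Condensed, the genpd notifier pattern used here boils down to the following
sketch (the complete wiring, including the fallback to the CPU PM notifier
and the error handling, is in the diff below):

	/* At probe, when the device sits in a PM domain */
	pm_runtime_enable(dev);
	drv->genpd_nb.notifier_call = rpmh_rsc_pd_callback;
	ret = dev_pm_genpd_add_notifier(dev, &drv->genpd_nb);

	/* In the callback: flush cached votes before the domain powers off */
	if (action == GENPD_NOTIFY_PRE_OFF &&
	    (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client)))
		return NOTIFY_BAD;
	return NOTIFY_OK;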

Signed-off-by: Lina Iyer <[email protected]>
Signed-off-by: Maulik Shah <[email protected]>
Reviewed-by: Ulf Hansson <[email protected]>
Tested-by: Dmitry Baryshkov <[email protected]> # SM8450
---
drivers/soc/qcom/rpmh-internal.h | 4 +-
drivers/soc/qcom/rpmh-rsc.c | 67 ++++++++++++++++++++++++++++++--
2 files changed, 66 insertions(+), 5 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 344ba687c13b..cd3d6ce137e3 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -97,7 +97,8 @@ struct rpmh_ctrlr {
* @rsc_pm: CPU PM notifier for controller.
* Used when solver mode is not present.
* @cpus_in_pm: Number of CPUs not in idle power collapse.
- * Used when solver mode is not present.
+ * Used when solver mode and "power-domains" is not present.
+ * @genpd_nb: PM Domain notifier for cluster genpd notifications.
* @tcs: TCS groups.
* @tcs_in_use: S/W state of the TCS; only set for ACTIVE_ONLY
* transfers, but might show a sleep/wake TCS in use if
@@ -117,6 +118,7 @@ struct rsc_drv {
int id;
int num_tcs;
struct notifier_block rsc_pm;
+ struct notifier_block genpd_nb;
atomic_t cpus_in_pm;
struct tcs_group tcs[TCS_TYPE_NR];
DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 01c2f50cb97e..050b5f5c9f62 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -14,10 +14,13 @@
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
+#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
@@ -834,6 +837,50 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
return ret;
}

+/**
+ * rpmh_rsc_pd_callback() - Check if any of the AMCs are busy.
+ * @nfb: Pointer to the genpd notifier block in struct rsc_drv.
+ * @action: GENPD_NOTIFY_PRE_OFF, GENPD_NOTIFY_OFF, GENPD_NOTIFY_PRE_ON or GENPD_NOTIFY_ON.
+ * @v: Unused
+ *
+ * This function is given to dev_pm_genpd_add_notifier() so we can be informed
+ * about when cluster-pd is going down. When cluster go down we know no more active
+ * transfers will be started so we write sleep/wake sets. This function gets
+ * called from cpuidle code paths and also at system suspend time.
+ *
+ * If AMCs are not busy then writes cached sleep and wake messages to TCSes.
+ * The firmware then takes care of triggering them when entering deepest low power modes.
+ *
+ * Return:
+ * * NOTIFY_OK - success
+ * * NOTIFY_BAD - failure
+ */
+static int rpmh_rsc_pd_callback(struct notifier_block *nfb,
+ unsigned long action, void *v)
+{
+ struct rsc_drv *drv = container_of(nfb, struct rsc_drv, genpd_nb);
+
+ /* We don't need to lock as genpd on/off are serialized */
+ if ((action == GENPD_NOTIFY_PRE_OFF) &&
+ (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client)))
+ return NOTIFY_BAD;
+
+ return NOTIFY_OK;
+}
+
+static int rpmh_rsc_pd_attach(struct rsc_drv *drv, struct device *dev)
+{
+ int ret;
+
+ pm_runtime_enable(dev);
+ drv->genpd_nb.notifier_call = rpmh_rsc_pd_callback;
+ ret = dev_pm_genpd_add_notifier(dev, &drv->genpd_nb);
+ if (ret)
+ pm_runtime_disable(dev);
+
+ return ret;
+}
+
static int rpmh_probe_tcs_config(struct platform_device *pdev,
struct rsc_drv *drv, void __iomem *base)
{
@@ -963,7 +1010,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
return ret;

/*
- * CPU PM notification are not required for controllers that support
+ * CPU PM/genpd notification are not required for controllers that support
* 'HW solver' mode where they can be in autonomous mode executing low
* power mode to power down.
*/
@@ -971,8 +1018,14 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
if (!solver_config) {
- drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
- cpu_pm_register_notifier(&drv->rsc_pm);
+ if (pdev->dev.pm_domain) {
+ ret = rpmh_rsc_pd_attach(drv, &pdev->dev);
+ if (ret)
+ return ret;
+ } else {
+ drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
+ cpu_pm_register_notifier(&drv->rsc_pm);
+ }
}

/* Enable the active TCS to send requests immediately */
@@ -985,7 +1038,13 @@ static int rpmh_rsc_probe(struct platform_device *pdev)

dev_set_drvdata(&pdev->dev, drv);

- return devm_of_platform_populate(&pdev->dev);
+ ret = devm_of_platform_populate(&pdev->dev);
+ if (ret && pdev->dev.pm_domain) {
+ dev_pm_genpd_remove_notifier(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ }
+
+ return ret;
}

static const struct of_device_id rpmh_drv_match[] = {
--
2.34.1

2022-10-18 16:30:04

by Ulf Hansson

Subject: [PATCH v3 5/6] soc: qcom: rpmh-rsc: Save base address of drv

From: Maulik Shah <[email protected]>

Save the drv's base address in struct rsc_drv. It is used to read the drv's
configuration, such as whether solver mode is supported, and to write into
the CONTROL_TCS registers.

Signed-off-by: Maulik Shah <[email protected]>
Reviewed-by: Ulf Hansson <[email protected]>
Tested-by: Dmitry Baryshkov <[email protected]> # SM8450
---
drivers/soc/qcom/rpmh-internal.h | 2 ++
drivers/soc/qcom/rpmh-rsc.c | 18 ++++++++----------
2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index cd3d6ce137e3..7866bb1e5361 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -91,6 +91,7 @@ struct rpmh_ctrlr {
* Resource State Coordinator controller (RSC)
*
* @name: Controller identifier.
+ * @base: Start address of the DRV registers in this controller.
* @tcs_base: Start address of the TCS registers in this controller.
* @id: Instance id in the controller (Direct Resource Voter).
* @num_tcs: Number of TCSes in this DRV.
@@ -114,6 +115,7 @@ struct rpmh_ctrlr {
*/
struct rsc_drv {
const char *name;
+ void __iomem *base;
void __iomem *tcs_base;
int id;
int num_tcs;
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 050b5f5c9f62..8e01697f59af 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -881,8 +881,7 @@ static int rpmh_rsc_pd_attach(struct rsc_drv *drv, struct device *dev)
return ret;
}

-static int rpmh_probe_tcs_config(struct platform_device *pdev,
- struct rsc_drv *drv, void __iomem *base)
+static int rpmh_probe_tcs_config(struct platform_device *pdev, struct rsc_drv *drv)
{
struct tcs_type_config {
u32 type;
@@ -896,9 +895,9 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset);
if (ret)
return ret;
- drv->tcs_base = base + offset;
+ drv->tcs_base = drv->base + offset;

- config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
+ config = readl_relaxed(drv->base + DRV_PRNT_CHLD_CONFIG);

max_tcs = config;
max_tcs &= DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->id);
@@ -960,7 +959,6 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
char drv_id[10] = {0};
int ret, irq;
u32 solver_config;
- void __iomem *base;

/*
* Even though RPMh doesn't directly use cmd-db, all of its children
@@ -987,11 +985,11 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
drv->name = dev_name(&pdev->dev);

snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
- base = devm_platform_ioremap_resource_byname(pdev, drv_id);
- if (IS_ERR(base))
- return PTR_ERR(base);
+ drv->base = devm_platform_ioremap_resource_byname(pdev, drv_id);
+ if (IS_ERR(drv->base))
+ return PTR_ERR(drv->base);

- ret = rpmh_probe_tcs_config(pdev, drv, base);
+ ret = rpmh_probe_tcs_config(pdev, drv);
if (ret)
return ret;

@@ -1014,7 +1012,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
* 'HW solver' mode where they can be in autonomous mode executing low
* power mode to power down.
*/
- solver_config = readl_relaxed(base + DRV_SOLVER_CONFIG);
+ solver_config = readl_relaxed(drv->base + DRV_SOLVER_CONFIG);
solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
if (!solver_config) {
--
2.34.1

2022-10-18 18:00:05

by kernel test robot

Subject: Re: [PATCH v3 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup

Hi Ulf,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on robh/for-next]
[also build test WARNING on rafael-pm/linux-next arm/for-next arm64/for-next/core clk/clk-next kvmarm/next rockchip/for-next shawnguo/for-next soc/for-next linus/master v6.1-rc1 next-20221018]
[cannot apply to xilinx-xlnx/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Ulf-Hansson/soc-qcom-Add-APSS-RSC-to-the-CPU-cluster-PM-domain/20221018-233137
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20221018152837.619426-7-ulf.hansson%40linaro.org
patch subject: [PATCH v3 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup
config: m68k-allyesconfig
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/2f9e747dca385eca0eb09df0df78572223ca9486
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Ulf-Hansson/soc-qcom-Add-APSS-RSC-to-the-CPU-cluster-PM-domain/20221018-233137
git checkout 2f9e747dca385eca0eb09df0df78572223ca9486
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash drivers/soc/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

drivers/soc/qcom/rpmh-rsc.c: In function 'xloops_to_cycles':
>> drivers/soc/qcom/rpmh-rsc.c:160:48: warning: right shift count >= width of type [-Wshift-count-overflow]
160 | return (xloops * loops_per_jiffy * HZ) >> 32;
| ^~


vim +160 drivers/soc/qcom/rpmh-rsc.c

100
101 /*
102 * Here's a high level overview of how all the registers in RPMH work
103 * together:
104 *
105 * - The main rpmh-rsc address is the base of a register space that can
106 * be used to find overall configuration of the hardware
107 * (DRV_PRNT_CHLD_CONFIG). Also found within the rpmh-rsc register
108 * space are all the TCS blocks. The offset of the TCS blocks is
109 * specified in the device tree by "qcom,tcs-offset" and used to
110 * compute tcs_base.
111 * - TCS blocks come one after another. Type, count, and order are
112 * specified by the device tree as "qcom,tcs-config".
113 * - Each TCS block has some registers, then space for up to 16 commands.
114 * Note that though address space is reserved for 16 commands, fewer
115 * might be present. See ncpt (num cmds per TCS).
116 *
117 * Here's a picture:
118 *
119 * +---------------------------------------------------+
120 * |RSC |
121 * | ctrl |
122 * | |
123 * | Drvs: |
124 * | +-----------------------------------------------+ |
125 * | |DRV0 | |
126 * | | ctrl/config | |
127 * | | IRQ | |
128 * | | | |
129 * | | TCSes: | |
130 * | | +------------------------------------------+ | |
131 * | | |TCS0 | | | | | | | | | | | | | | |
132 * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | |
133 * | | | | | | | | | | | | | | | | | |
134 * | | +------------------------------------------+ | |
135 * | | +------------------------------------------+ | |
136 * | | |TCS1 | | | | | | | | | | | | | | |
137 * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | |
138 * | | | | | | | | | | | | | | | | | |
139 * | | +------------------------------------------+ | |
140 * | | +------------------------------------------+ | |
141 * | | |TCS2 | | | | | | | | | | | | | | |
142 * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | |
143 * | | | | | | | | | | | | | | | | | |
144 * | | +------------------------------------------+ | |
145 * | | ...... | |
146 * | +-----------------------------------------------+ |
147 * | +-----------------------------------------------+ |
148 * | |DRV1 | |
149 * | | (same as DRV0) | |
150 * | +-----------------------------------------------+ |
151 * | ...... |
152 * +---------------------------------------------------+
153 */
154
155 #define USECS_TO_CYCLES(time_usecs) \
156 xloops_to_cycles((time_usecs) * 0x10C7UL)
157
158 static inline unsigned long xloops_to_cycles(unsigned long xloops)
159 {
> 160 return (xloops * loops_per_jiffy * HZ) >> 32;
161 }
162

--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-11-07 17:12:48

by Bjorn Andersson

Subject: Re: [PATCH v3 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup

On Wed, Oct 19, 2022 at 01:47:17AM +0800, kernel test robot wrote:
[..]
> 155 #define USECS_TO_CYCLES(time_usecs) \
> 156 xloops_to_cycles((time_usecs) * 0x10C7UL)
> 157
> 158 static inline unsigned long xloops_to_cycles(unsigned long xloops)

Any objections to me changing the type to u64 while applying the
patches?

Regards,
Bjorn

> 159 {
> > 160 return (xloops * loops_per_jiffy * HZ) >> 32;
> 161 }
> 162
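
For reference, a sketch of what such a change could look like (illustrative;
the warning comes from shifting an unsigned long, which is only 32 bits wide
on m68k, by 32):

static inline u64 xloops_to_cycles(u64 xloops)
{
	return (xloops * loops_per_jiffy * HZ) >> 32;
}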

2022-11-08 10:35:06

by Ulf Hansson

Subject: Re: [PATCH v3 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup

On Mon, 7 Nov 2022 at 18:01, Bjorn Andersson <[email protected]> wrote:
>
> On Wed, Oct 19, 2022 at 01:47:17AM +0800, kernel test robot wrote:
> [..]
> > 155 #define USECS_TO_CYCLES(time_usecs) \
> > 156 xloops_to_cycles((time_usecs) * 0x10C7UL)
> > 157
> > 158 static inline unsigned long xloops_to_cycles(unsigned long xloops)
>
> Any objections to me changing the type to u64 while applying the
> patches?

No objections. Thanks for making the improvement!

>
> Regards,
> Bjorn
>
> > 159 {
> > > 160 return (xloops * loops_per_jiffy * HZ) >> 32;
> > 161 }
> > 162

Kind regards
Uffe

2022-11-10 04:59:07

by Bjorn Andersson

Subject: Re: (subset) [PATCH v3 0/6] soc: qcom: Add APSS RSC to the CPU cluster PM domain

On Tue, 18 Oct 2022 17:28:31 +0200, Ulf Hansson wrote:
> Changes in v3:
> - Re-worked the genpd patch (patch4) and updated the commit-msg.
> - No other changes.
>
> The APSS RSC is a part of the CPU cluster PM domain and it's responsible for
> flushing 'sleep' and 'wake' votes to avoid wasting energy. In particular, this
> needs to be done when last CPU in the cluster enters a deeper idlestate.
>
> [...]

Applied, thanks!

[3/6] arm64: dts: qcom: Add power-domains property for apps_rsc
commit: 2ffa0ca4d37a1fef0b423f32007067fbce8708a3

Best regards,
--
Bjorn Andersson <[email protected]>