2020-03-03 12:57:51

by Maulik Shah

Subject: [PATCH v10 0/3] Invoke rpmh_flush for non OSI targets

Changes in v10:
- Address Evan's comments to update commit message on change 2
- Add Evan's Reviewed by on change 2
- Remove comment from rpmh_flush() related to last CPU invoking it
- Rebase all changes on top of next-20200302

Changes in v9:
- Keep rpmh_flush() to invoke from within cache_lock
- Remove comments related to only last cpu invoking rpmh_flush()

Changes in v8:
- Address Stephen's comments on changes 2 and 3
- Add Reviewed by from Stephen on change 1

Changes in v7:
- Address Srinivas's comments to update commit text
- Add Reviewed by from Srinivas

Changes in v6:
- Drop changes 1 & 2 from v5 as they already landed in the maintainer tree
- Drop changes 3 & 4 from v5 as there is no user at present for the power domain in RSC
- Rename subject appropriately since power domain changes are dropped
- Rebase other changes on top of next-20200221

Changes in v5:
- Add Rob's Acked by on dt-bindings change
- Drop firmware psci change
- Update cpuidle stats in dtsi to follow PC mode
- Include change to update dirty flag when data is updated from [4]
- Add change to invoke rpmh_flush when caches are dirty

Changes in v4:
- Add change to allow hierarchical topology in PC mode
- Drop hierarchical domain idle states converter from v3
- Merge sc7180 dtsi change to add low power modes

Changes in v3:
- Address Rob's comment on dt property value
- Address Stephen's comments on rpmh-rsc driver change
- Include sc7180 cpuidle low power mode changes from [1]
- Include hierarchical domain idle states converter change from [2]

Changes in v2:
- Add Stephen's Reviewed-By to the first three patches
- Addressed Stephen's comments on fourth patch
- Include changes to connect rpmh domain to cpuidle and genpds

Resource State Coordinator (RSC) is responsible for powering off or lowering
the requirements from the CPU subsystem on associated hardware like buses,
clocks, and regulators when all CPUs and the cluster are powered down.

The RSC power domain uses last-man activities provided by the genpd framework,
based on Ulf Hansson's patch series[3], when the cluster of CPUs enters its
deepest idle state. As part of domain poweroff, RSC can lower resource state
requirements by flushing the cached sleep and wake state votes for various
resources.

[1] https://patchwork.kernel.org/patch/11218965
[2] https://patchwork.kernel.org/patch/10941671
[3] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=222355
[4] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=236503

Maulik Shah (3):
arm64: dts: qcom: sc7180: Add cpuidle low power states
soc: qcom: rpmh: Update dirty flag only when data changes
soc: qcom: rpmh: Invoke rpmh_flush for dirty caches

arch/arm64/boot/dts/qcom/sc7180.dtsi | 78 ++++++++++++++++++++++++++++++++++++
drivers/soc/qcom/rpmh.c | 27 ++++++++++---
2 files changed, 100 insertions(+), 5 deletions(-)

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


2020-03-03 12:58:09

by Maulik Shah

Subject: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches

Add changes to invoke rpmh_flush() from within cache_lock when the data
in the cache is dirty.

This is done only if OSI is not supported in PSCI. If OSI is supported,
rpmh_flush() can be invoked when the last CPU enters power collapse, the
deepest low power mode.

Also remove "depends on COMPILE_TEST" for Kconfig option QCOM_RPMH so the
driver is only compiled for arm64, which supports the psci_has_osi_support()
API.

Signed-off-by: Maulik Shah <[email protected]>
Reviewed-by: Srinivas Rao L <[email protected]>
---
drivers/soc/qcom/Kconfig | 2 +-
drivers/soc/qcom/rpmh.c | 37 ++++++++++++++++++++++---------------
2 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index d0a73e7..2e581bc 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM

config QCOM_RPMH
bool "Qualcomm RPM-Hardened (RPMH) Communication"
- depends on ARCH_QCOM && ARM64 || COMPILE_TEST
+ depends on ARCH_QCOM && ARM64
help
Support for communication with the hardened-RPM blocks in
Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index f28afe4..dafb0da 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -12,6 +12,7 @@
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/psci.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
@@ -158,6 +159,13 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
}

unlock:
+ if (ctrlr->dirty && !psci_has_osi_support()) {
+ if (rpmh_flush(ctrlr)) {
+ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+ return ERR_PTR(-EINVAL);
+ }
+ }
+
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

return req;
@@ -285,26 +293,35 @@ int rpmh_write(const struct device *dev, enum rpmh_state state,
}
EXPORT_SYMBOL(rpmh_write);

-static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
+static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
{
unsigned long flags;

spin_lock_irqsave(&ctrlr->cache_lock, flags);
+
list_add_tail(&req->list, &ctrlr->batch_cache);
ctrlr->dirty = true;
+
+ if (!psci_has_osi_support()) {
+ if (rpmh_flush(ctrlr)) {
+ spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+ return -EINVAL;
+ }
+ }
+
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
+
+ return 0;
}

static int flush_batch(struct rpmh_ctrlr *ctrlr)
{
struct batch_cache_req *req;
const struct rpmh_request *rpm_msg;
- unsigned long flags;
int ret = 0;
int i;

/* Send Sleep/Wake requests to the controller, expect no response */
- spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry(req, &ctrlr->batch_cache, list) {
for (i = 0; i < req->count; i++) {
rpm_msg = req->rpm_msgs + i;
@@ -314,7 +331,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
break;
}
}
- spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

return ret;
}
@@ -386,10 +402,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
cmd += n[i];
}

- if (state != RPMH_ACTIVE_ONLY_STATE) {
- cache_batch(ctrlr, req);
- return 0;
- }
+ if (state != RPMH_ACTIVE_ONLY_STATE)
+ return cache_batch(ctrlr, req);

for (i = 0; i < count; i++) {
struct completion *compl = &compls[i];
@@ -455,9 +469,6 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
* Return: -EBUSY if the controller is busy, probably waiting on a response
* to a RPMH request sent earlier.
*
- * This function is always called from the sleep code from the last CPU
- * that is powering down the entire system. Since no other RPMH API would be
- * executing at this time, it is safe to run lockless.
*/
int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
@@ -474,10 +485,6 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
if (ret)
return ret;

- /*
- * Nobody else should be calling this function other than system PM,
- * hence we can run without locks.
- */
list_for_each_entry(p, &ctrlr->cache, list) {
if (!is_req_valid(p)) {
pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

2020-03-03 12:58:40

by Maulik Shah

Subject: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

Currently the rpmh ctrlr dirty flag is set in all cases regardless of whether
the data has actually changed or not. Add changes to update the dirty flag
only when data is changed to newer values. Update the dirty flag every time
data in the batch cache is updated, since rpmh_flush() may get invoked from
any CPU instead of only the last CPU going to low power mode.

Also move dirty flag updates to happen from within cache_lock, and remove an
unnecessary INIT_LIST_HEAD() call and a default case from the switch.

Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
Signed-off-by: Maulik Shah <[email protected]>
Reviewed-by: Srinivas Rao L <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
drivers/soc/qcom/rpmh.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index eb0ded0..f28afe4 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,

req->addr = cmd->addr;
req->sleep_val = req->wake_val = UINT_MAX;
- INIT_LIST_HEAD(&req->list);
list_add_tail(&req->list, &ctrlr->cache);

existing:
switch (state) {
case RPMH_ACTIVE_ONLY_STATE:
- if (req->sleep_val != UINT_MAX)
+ if (req->sleep_val != UINT_MAX) {
req->wake_val = cmd->data;
+ ctrlr->dirty = true;
+ }
break;
case RPMH_WAKE_ONLY_STATE:
- req->wake_val = cmd->data;
+ if (req->wake_val != cmd->data) {
+ req->wake_val = cmd->data;
+ ctrlr->dirty = true;
+ }
break;
case RPMH_SLEEP_STATE:
- req->sleep_val = cmd->data;
- break;
- default:
+ if (req->sleep_val != cmd->data) {
+ req->sleep_val = cmd->data;
+ ctrlr->dirty = true;
+ }
break;
}

- ctrlr->dirty = true;
unlock:
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

@@ -287,6 +291,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)

spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_add_tail(&req->list, &ctrlr->batch_cache);
+ ctrlr->dirty = true;
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}

@@ -323,6 +328,7 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
kfree(req);
INIT_LIST_HEAD(&ctrlr->batch_cache);
+ ctrlr->dirty = true;
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}

@@ -507,7 +513,6 @@ int rpmh_invalidate(const struct device *dev)
int ret;

invalidate_batch(ctrlr);
- ctrlr->dirty = true;

do {
ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

2020-03-04 23:23:26

by Doug Anderson

Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches

Hi,

On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah <[email protected]> wrote:
>
> Add changes to invoke rpmh flush() from within cache_lock when the data
> in cache is dirty.
>
> This is done only if OSI is not supported in PSCI. If OSI is supported
> rpmh_flush can get invoked when the last cpu going to power collapse
> deepest low power mode.
>
> Also remove "depends on COMPILE_TEST" for Kconfig option QCOM_RPMH so the
> driver is only compiled for arm64 which supports psci_has_osi_support()
> API.
>
> Signed-off-by: Maulik Shah <[email protected]>
> Reviewed-by: Srinivas Rao L <[email protected]>
> ---
> drivers/soc/qcom/Kconfig | 2 +-
> drivers/soc/qcom/rpmh.c | 37 ++++++++++++++++++++++---------------
> 2 files changed, 23 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> index d0a73e7..2e581bc 100644
> --- a/drivers/soc/qcom/Kconfig
> +++ b/drivers/soc/qcom/Kconfig
> @@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM
>
> config QCOM_RPMH
> bool "Qualcomm RPM-Hardened (RPMH) Communication"
> - depends on ARCH_QCOM && ARM64 || COMPILE_TEST
> + depends on ARCH_QCOM && ARM64
> help
> Support for communication with the hardened-RPM blocks in
> Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index f28afe4..dafb0da 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -12,6 +12,7 @@
> #include <linux/module.h>
> #include <linux/of.h>
> #include <linux/platform_device.h>
> +#include <linux/psci.h>
> #include <linux/slab.h>
> #include <linux/spinlock.h>
> #include <linux/types.h>
> @@ -158,6 +159,13 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
> }
>
> unlock:
> + if (ctrlr->dirty && !psci_has_osi_support()) {
> + if (rpmh_flush(ctrlr)) {
> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> + return ERR_PTR(-EINVAL);
> + }
> + }
> +
> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
> return req;
> @@ -285,26 +293,35 @@ int rpmh_write(const struct device *dev, enum rpmh_state state,
> }
> EXPORT_SYMBOL(rpmh_write);
>
> -static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
> +static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
> {
> unsigned long flags;
>
> spin_lock_irqsave(&ctrlr->cache_lock, flags);
> +
> list_add_tail(&req->list, &ctrlr->batch_cache);
> ctrlr->dirty = true;
> +
> + if (!psci_has_osi_support()) {
> + if (rpmh_flush(ctrlr)) {

The whole API here is a bit unfortunate. From what I can tell,
callers of this code almost always call rpmh_write_batch() in
triplicate, AKA:

rpmh_write_batch(active, ...)
rpmh_write_batch(wake, ...)
rpmh_write_batch(sleep, ...)

...that's going to end up writing the whole sleep/wake sets twice
every single time, right? I know you talked about trying to keep
separate dirty bits for sleep/wake and maybe that would help, but it
might not be so easy due to the comparison of "sleep_val" and
"wake_val" in is_req_valid().

I guess we can keep the inefficiency for now and see how much it hits
us, but it feels ugly.


> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> + return -EINVAL;

nit: why not add "int ret = 0" to the top of the function, then here:

if (rpmh_flush(ctrl))
ret = -EINVAL;

...then at the end "return ret". It avoids the 2nd copy of the unlock?

Also: Why throw away the return value of rpmh_flush and replace it
with -EINVAL? Trying to avoid -EBUSY? ...oh, should you handle
-EBUSY? AKA:

if (!psci_has_osi_support()) {
do {
ret = rpmh_flush(ctrl);
} while (ret == -EBUSY);
}
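
For concreteness, here is a rough userspace sketch of that single-unlock,
retry-on-EBUSY shape. All the names and stubs here are made up for
illustration; it models only the control flow being suggested, not the real
RPMH driver:

```c
#include <assert.h>
#include <errno.h>

/* Toy stand-ins for the kernel pieces involved; this models only the
 * control flow of the suggestion (single unlock site, retry on -EBUSY),
 * not the real RPMH driver. */
static int flush_result;   /* what the fake rpmh_flush() reports */
static int lock_depth;     /* checks that lock/unlock stay paired */

static void fake_lock(void)   { lock_depth++; }
static void fake_unlock(void) { lock_depth--; }

static int fake_rpmh_flush(void)
{
	int r = flush_result;

	if (r == -EBUSY)        /* pretend a retry succeeds */
		flush_result = 0;
	return r;
}

/* cache_batch() reshaped: one unlock site, -EBUSY retried under the lock */
static int cache_batch_sketch(int osi_supported)
{
	int ret = 0;

	fake_lock();
	/* list_add_tail(&req->list, ...); ctrlr->dirty = true; */
	if (!osi_supported) {
		do {
			ret = fake_rpmh_flush();
		} while (ret == -EBUSY);
	}
	fake_unlock();

	return ret;
}
```

The point is just that every exit path goes through the same unlock, and the
real error code from the flush is propagated instead of being replaced by
-EINVAL.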


> + }
> + }
> +
> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +
> + return 0;
> }
>
> static int flush_batch(struct rpmh_ctrlr *ctrlr)
> {
> struct batch_cache_req *req;
> const struct rpmh_request *rpm_msg;
> - unsigned long flags;
> int ret = 0;
> int i;
>
> /* Send Sleep/Wake requests to the controller, expect no response */
> - spin_lock_irqsave(&ctrlr->cache_lock, flags);
> list_for_each_entry(req, &ctrlr->batch_cache, list) {
> for (i = 0; i < req->count; i++) {
> rpm_msg = req->rpm_msgs + i;
> @@ -314,7 +331,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
> break;
> }
> }
> - spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
> return ret;
> }
> @@ -386,10 +402,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
> cmd += n[i];
> }
>
> - if (state != RPMH_ACTIVE_ONLY_STATE) {
> - cache_batch(ctrlr, req);
> - return 0;
> - }
> + if (state != RPMH_ACTIVE_ONLY_STATE)
> + return cache_batch(ctrlr, req);
>
> for (i = 0; i < count; i++) {
> struct completion *compl = &compls[i];
> @@ -455,9 +469,6 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
> * Return: -EBUSY if the controller is busy, probably waiting on a response
> * to a RPMH request sent earlier.
> *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.

nit: you've now got an extra "blank" (just has a "*" on it) line at
the end of your comment block.

nit: in v9, Evan suggested "We should probably replace that with a
comment indicating that we assume ctrlr->cache_lock is already held".
Maybe you could do that?

Also: presumably you _will_ still be called by the sleep code from the
last CPU on systems with OSI. Is that true? If that's not true then
you should change your function to static. If that is true, then your
comment should be something like "this function will either be called
from sleep code on the last CPU (thus no spinlock needed) or with the
spinlock already held".


-Doug

2020-03-04 23:23:48

by Doug Anderson

Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

Hi,

On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah <[email protected]> wrote:
>
> Currently rpmh ctrlr dirty flag is set for all cases regardless of data
> is really changed or not. Add changes to update dirty flag when data is
> changed to newer values. Update dirty flag everytime when data in batch
> cache is updated since rpmh_flush() may get invoked from any CPU instead
> of only last CPU going to low power mode.
>
> Also move dirty flag updates to happen from within cache_lock and remove
> unnecessary INIT_LIST_HEAD() call and a default case from switch.
>
> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
> Signed-off-by: Maulik Shah <[email protected]>
> Reviewed-by: Srinivas Rao L <[email protected]>
> Reviewed-by: Evan Green <[email protected]>
> ---
> drivers/soc/qcom/rpmh.c | 21 +++++++++++++--------
> 1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index eb0ded0..f28afe4 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>
> req->addr = cmd->addr;
> req->sleep_val = req->wake_val = UINT_MAX;
> - INIT_LIST_HEAD(&req->list);
> list_add_tail(&req->list, &ctrlr->cache);
>
> existing:
> switch (state) {
> case RPMH_ACTIVE_ONLY_STATE:
> - if (req->sleep_val != UINT_MAX)
> + if (req->sleep_val != UINT_MAX) {
> req->wake_val = cmd->data;
> + ctrlr->dirty = true;
> + }

You could maybe avoid a few additional "dirty" cases by changing the
above "if" to:

if (req->sleep_val != UINT_MAX &&
    req->wake_val != cmd->data)

...since otherwise writing an "ACTIVE_ONLY" thing over and over again
with the same value would keep saying "dirty".


Looking at this code makes me wonder a bit about how it's supposed to
work, though. Let's look at a sequence of 3 commands called in two
different orders:

rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0xaa);
rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0xbb);

==> End result will be a cache entry (addr=0x10, wake=0xaa, sleep=0xbb)


rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0xbb);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0xaa);
rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);

==> End result will be a cache entry (addr=0x10, wake=0x99, sleep=0xbb)


Said another way, it seems weird that a vote for "active" counts as a
vote for "wake", but only if a sleep vote was made beforehand?
Howzat?
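
That ordering effect is easy to see with a tiny userspace model of the switch
in cache_rpm_request() (names are illustrative, not the actual rpmh code):

```c
#include <assert.h>
#include <limits.h>

/* Minimal model of the existing cache_rpm_request() switch, just enough
 * to reproduce the ordering effect described above. */
enum state { ACTIVE_ONLY, WAKE_ONLY, SLEEP };

struct req {
	unsigned int sleep_val;
	unsigned int wake_val;
};

static void apply(struct req *r, enum state s, unsigned int data)
{
	switch (s) {
	case ACTIVE_ONLY:
		/* active only counts as a wake vote if sleep was voted first */
		if (r->sleep_val != UINT_MAX)
			r->wake_val = data;
		break;
	case WAKE_ONLY:
		r->wake_val = data;
		break;
	case SLEEP:
		r->sleep_val = data;
		break;
	}
}
```

Running the two orders from the example through apply() ends with
wake=0xaa in the first case and wake=0x99 in the second, which is exactly
the inconsistency in question.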


Maybe at one point in time it was assumed that wake's point was just
to undo sleep? That is, if:

state_orig = /* the state before sleep happens */
state_sleep = apply(state_orig, sleep_actions)
state_wake = apply(state_sleep, wake_actions)

The code is assuming "state_orig == state_wake".

...it sorta makes sense that "state_orig == state_wake" would be true,
but if we were really making that requirement we really should have
structured RPMH's APIs differently. We shouldn't have even allowed
the callers to specify "WAKE_ONLY" state and we should have just
constructed it from the "ACTIVE_ONLY" state.


To summarize:

a) If the only allowable use of "WAKE_ONLY" is to undo "SLEEP_ONLY"
then we should re-think the API and stop letting callers to
rpmh_write(), rpmh_write_async(), or rpmh_write_batch() ever specify
"WAKE_ONLY". The code should just assume that "wake_only =
active_only if (active_only != sleep_only)". In other words, RPMH
should programmatically figure out the "wake" state based on the
sleep/active state and not force callers to do this.

b) If "WAKE_ONLY" is allowed to do other things (or if it's not RPMH's
job to enforce/assume this) then we should fully skip calling
cache_rpm_request() for RPMH_ACTIVE_ONLY_STATE.


NOTE: this discussion also makes me wonder about the is_req_valid()
function. That will skip sending a sleep/wake entry if the sleep and
wake entries are equal to each other. ...but if sleep and wake are
both different than "active" it'll be a problem.


> break;
> case RPMH_WAKE_ONLY_STATE:
> - req->wake_val = cmd->data;
> + if (req->wake_val != cmd->data) {
> + req->wake_val = cmd->data;
> + ctrlr->dirty = true;

As far as I can tell from the code, you can also avoid dirty if
req->sleep_val == UINT_MAX since nothing will be sent if either
sleep_val or wake_val are UINT_MAX. Same in the sleep case where we
can avoid dirty if wake_val == UINT_MAX.


> + }
> break;
> case RPMH_SLEEP_STATE:
> - req->sleep_val = cmd->data;
> - break;
> - default:
> + if (req->sleep_val != cmd->data) {
> + req->sleep_val = cmd->data;
> + ctrlr->dirty = true;
> + }
> break;
> }

I wonder if instead of putting the dirty everywhere above it's better
to cache the old value before the switch, then do:

ctrl->dirty = (req->sleep_val != old_sleep_val ||
req->wake_val != old_wake_val) &&
req->sleep_val != UINT_MAX &&
req->wake_val != UINT_MAX;
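
A rough standalone sketch of that idea (illustrative only; it uses |= rather
than plain assignment so an earlier dirty state is not clobbered by a later
no-op write, which is a hedge on the literal suggestion):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Sketch of "cache the old values, compute dirty once after the switch".
 * Names are made up; this is not the real rpmh driver. */
struct req2 { unsigned int sleep_val, wake_val; };
struct ctrlr2 { bool dirty; };

enum state2 { S2_ACTIVE, S2_WAKE, S2_SLEEP };

static void cache_vote(struct ctrlr2 *c, struct req2 *r,
		       enum state2 s, unsigned int data)
{
	unsigned int old_sleep = r->sleep_val;
	unsigned int old_wake = r->wake_val;

	switch (s) {
	case S2_ACTIVE:
		if (r->sleep_val != UINT_MAX)
			r->wake_val = data;
		break;
	case S2_WAKE:
		r->wake_val = data;
		break;
	case S2_SLEEP:
		r->sleep_val = data;
		break;
	}

	/* Dirty only when something changed and both halves are valid,
	 * since nothing is sent while either side is still UINT_MAX. */
	c->dirty |= (r->sleep_val != old_sleep || r->wake_val != old_wake) &&
		    r->sleep_val != UINT_MAX && r->wake_val != UINT_MAX;
}
```

With this shape, rewriting the same value never marks the controller dirty,
and a half-initialized entry (either side still UINT_MAX) stays clean.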


-Doug

2020-03-05 11:10:59

by Maulik Shah

Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes


On 3/5/2020 4:51 AM, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah <[email protected]> wrote:
>> Currently rpmh ctrlr dirty flag is set for all cases regardless of data
>> is really changed or not. Add changes to update dirty flag when data is
>> changed to newer values. Update dirty flag everytime when data in batch
>> cache is updated since rpmh_flush() may get invoked from any CPU instead
>> of only last CPU going to low power mode.
>>
>> Also move dirty flag updates to happen from within cache_lock and remove
>> unnecessary INIT_LIST_HEAD() call and a default case from switch.
>>
>> Fixes: 600513dfeef3 ("drivers: qcom: rpmh: cache sleep/wake state requests")
>> Signed-off-by: Maulik Shah <[email protected]>
>> Reviewed-by: Srinivas Rao L <[email protected]>
>> Reviewed-by: Evan Green <[email protected]>
>> ---
>> drivers/soc/qcom/rpmh.c | 21 +++++++++++++--------
>> 1 file changed, 13 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index eb0ded0..f28afe4 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -133,26 +133,30 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>>
>> req->addr = cmd->addr;
>> req->sleep_val = req->wake_val = UINT_MAX;
>> - INIT_LIST_HEAD(&req->list);
>> list_add_tail(&req->list, &ctrlr->cache);
>>
>> existing:
>> switch (state) {
>> case RPMH_ACTIVE_ONLY_STATE:
>> - if (req->sleep_val != UINT_MAX)
>> + if (req->sleep_val != UINT_MAX) {
>> req->wake_val = cmd->data;
>> + ctrlr->dirty = true;
>> + }
> You could maybe avoid a few additional "dirty" cases by changing the
> above "if" to:
>
> if (req->sleep_val != UINT_MAX &&
>     req->wake_val != cmd->data)
>
> ...since otherwise writing an "ACTIVE_ONLY" thing over and over again
> with the same value would keep saying "dirty".
>
>
> Looking at this code makes me wonder a bit about how it's supposed to
> work, though. Let's look at a sequence of 3 commands called in two
> different orders:
>
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0xaa);
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0xbb);
>
> ==> End result will be a cache entry (addr=0x10, wake=0xaa, sleep=0xbb)
>
>
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0xbb);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0xaa);
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
>
> ==> End result will be a cache entry (addr=0x10, wake=0x99, sleep=0xbb)
>
>
> Said another way, it seems weird that a vote for "active" counts as a
> vote for "wake", but only if a sleep vote was made beforehand?
> Howzat?
>
>
> Maybe at one point in time it was assumed that wake's point was just
> to undo sleep? That is, if:
>
> state_orig = /* the state before sleep happens */
> state_sleep = apply(state_orig, sleep_actions)
> state_wake = apply(state_sleep, wake_actions)
>
> The code is assuming "state_orig == state_wake".
>
> ...it sorta makes sense that "state_orig == state_wake" would be true,
> but if we were really making that requirement we really should have
> structured RPMH's APIs differently. We shouldn't have even allowed
> the callers to specify "WAKE_ONLY" state and we should have just
> constructed it from the "ACTIVE_ONLY" state.
>
>
> To summarize:
>
> a) If the only allowable use of "WAKE_ONLY" is to undo "SLEEP_ONLY"
> then we should re-think the API and stop letting callers to
> rpmh_write(), rpmh_write_async(), or rpmh_write_batch() ever specify
> "WAKE_ONLY". The code should just assume that "wake_only =
> active_only if (active_only != sleep_only)". In other words, RPMH
> should programmatically figure out the "wake" state based on the
> sleep/active state and not force callers to do this.
>
> b) If "WAKE_ONLY" is allowed to do other things (or if it's not RPMH's
> job to enforce/assume this) then we should fully skip calling
> cache_rpm_request() for RPMH_ACTIVE_ONLY_STATE.
>
>
> NOTE: this discussion also makes me wonder about the is_req_valid()
> function. That will skip sending a sleep/wake entry if the sleep and
> wake entries are equal to each other. ...but if sleep and wake are
> both different than "active" it'll be a problem.

Hi Doug,

To answer the above points: yes, in general the understanding is that wake
almost always needs to be equal to active. However, there can be valid reasons
why callers need to request them differently in the first place.

At present callers send 3 types of requests.
rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);

Now, let's assume we handle this in the rpmh driver since wake == active, and
callers send only 2 types of request (let's call them active and sleep, since
we assume wake == active and don't want the 3rd request as it is handled in the
rpmh driver). So callers will now invoke 2 types of request:

rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);

With the first type of request, the driver now needs to serve 2 purposes:
(a)    cache ACTIVE request votes as WAKE votes
(b)    trigger it immediately (in an ACTIVE TCS), as it also needs to complete immediately.

For SLEEP, nothing changes. Now when entering sleep we do rpmh_flush() to
program all WAKE and SLEEP requests... so far so good.

Now consider a corner case.

There is something called solver mode in RSC, where the HW can be in an
autonomous mode executing low power modes. For this it may want to "only"
program WAKE and SLEEP votes, and the controller would then be in solver mode,
entering and exiting sleep autonomously.

There is no ACTIVE set request, and hence no requirement to send it right away
as an ACTIVE vote.

If we had only 2 types of request, the caller would again need a way to
differentiate, to tell the rpmh driver that when it invokes

rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);

caching it as WAKE is fine ((a) above) but it should not be triggered ((b) above).

So we would need to modify this API again and pass another argument saying
whether to do (a + b) or only (a). But callers can already differentiate by
using RPMH_ACTIVE_ONLY_STATE or RPMH_WAKE_ONLY_STATE.

I think at least for now we should leave it as it is, unless we really see an
impact from callers invoking all 3 types of request, and take into account all
such corner cases before making any such change. We can take it up separately
if needed, along with the optimization pointed out in the v9 series
discussions.
>
>> break;
>> case RPMH_WAKE_ONLY_STATE:
>> - req->wake_val = cmd->data;
>> + if (req->wake_val != cmd->data) {
>> + req->wake_val = cmd->data;
>> + ctrlr->dirty = true;
> As far as I can tell from the code, you can also avoid dirty if
> req->sleep_val == UINT_MAX since nothing will be sent if either
> sleep_val or wake_val are UINT_MAX. Same in the sleep case where we
> can avoid dirty if wake_val == UINT_MAX.
>
>
>> + }
>> break;
>> case RPMH_SLEEP_STATE:
>> - req->sleep_val = cmd->data;
>> - break;
>> - default:
>> + if (req->sleep_val != cmd->data) {
>> + req->sleep_val = cmd->data;
>> + ctrlr->dirty = true;
>> + }
>> break;
>> }
> I wonder if instead of putting the dirty everywhere above it's better
> to cache the old value before the switch, then do:
>
> ctrl->dirty = (req->sleep_val != old_sleep_val ||
> req->wake_val != old_wake_val) &&
> req->sleep_val != UINT_MAX &&
> req->wake_val != UINT_MAX;
>
>
> -Doug

Thanks, this seems good. v11 is on the way.

Thanks,
Maulik

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

2020-03-05 11:31:41

by Maulik Shah

Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches


On 3/5/2020 4:52 AM, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah <[email protected]> wrote:
>> Add changes to invoke rpmh flush() from within cache_lock when the data
>> in cache is dirty.
>>
>> This is done only if OSI is not supported in PSCI. If OSI is supported
>> rpmh_flush can get invoked when the last cpu going to power collapse
>> deepest low power mode.
>>
>> Also remove "depends on COMPILE_TEST" for Kconfig option QCOM_RPMH so the
>> driver is only compiled for arm64 which supports psci_has_osi_support()
>> API.
>>
>> Signed-off-by: Maulik Shah <[email protected]>
>> Reviewed-by: Srinivas Rao L <[email protected]>
>> ---
>> drivers/soc/qcom/Kconfig | 2 +-
>> drivers/soc/qcom/rpmh.c | 37 ++++++++++++++++++++++---------------
>> 2 files changed, 23 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
>> index d0a73e7..2e581bc 100644
>> --- a/drivers/soc/qcom/Kconfig
>> +++ b/drivers/soc/qcom/Kconfig
>> @@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM
>>
>> config QCOM_RPMH
>> bool "Qualcomm RPM-Hardened (RPMH) Communication"
>> - depends on ARCH_QCOM && ARM64 || COMPILE_TEST
>> + depends on ARCH_QCOM && ARM64
>> help
>> Support for communication with the hardened-RPM blocks in
>> Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index f28afe4..dafb0da 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -12,6 +12,7 @@
>> #include <linux/module.h>
>> #include <linux/of.h>
>> #include <linux/platform_device.h>
>> +#include <linux/psci.h>
>> #include <linux/slab.h>
>> #include <linux/spinlock.h>
>> #include <linux/types.h>
>> @@ -158,6 +159,13 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>> }
>>
>> unlock:
>> + if (ctrlr->dirty && !psci_has_osi_support()) {
>> + if (rpmh_flush(ctrlr)) {
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> + return ERR_PTR(-EINVAL);
>> + }
>> + }
>> +
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> return req;
>> @@ -285,26 +293,35 @@ int rpmh_write(const struct device *dev, enum rpmh_state state,
>> }
>> EXPORT_SYMBOL(rpmh_write);
>>
>> -static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>> +static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
>> {
>> unsigned long flags;
>>
>> spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> +
>> list_add_tail(&req->list, &ctrlr->batch_cache);
>> ctrlr->dirty = true;
>> +
>> + if (!psci_has_osi_support()) {
>> + if (rpmh_flush(ctrlr)) {
> The whole API here is a bit unfortunate. From what I can tell,
> callers of this code almost always call rpmh_write_batch() in
> triplicate, AKA:
>
> rpmh_write_batch(active, ...)
> rpmh_write_batch(wake, ...)
> rpmh_write_batch(sleep, ...)
>
> ...that's going to end up writing the whole sleep/wake sets twice
> every single time, right? I know you talked about trying to keep
> separate dirty bits for sleep/wake and maybe that would help, but it
> might not be so easy due to the comparison of "sleep_val" and
> "wake_val" in is_req_valid().
>
> I guess we can keep the inefficiency for now and see how much it hits
> us, but it feels ugly.
>
>
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> + return -EINVAL;
> nit: why not add "int ret = 0" to the top of the function, then here:
>
> if (rpmh_flush(ctrl))
> ret = -EINVAL;
>
> ...then at the end "return ret". It avoids the 2nd copy of the unlock?
Done.
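
[Editorial sketch: the single-unlock shape suggested in the nit above could look roughly like this. A pthread mutex stands in for the kernel's spin_lock_irqsave()/spin_unlock_irqrestore() pair, and `fake_`-prefixed stubs replace psci_has_osi_support() and rpmh_flush(); none of this is the actual driver code.]

```c
#include <pthread.h>

/*
 * Illustrative sketch (not the real driver code) of the single-exit
 * locking shape: take the lock once, remember any error in `ret`,
 * unlock once, return once.
 */
struct fake_ctrlr {
	pthread_mutex_t cache_lock;
	int dirty;
	int flush_fails;	/* force a flush failure for testing */
};

static int fake_psci_has_osi_support(void)
{
	return 0;		/* non-OSI target: flush on every update */
}

static int fake_rpmh_flush(struct fake_ctrlr *ctrlr)
{
	if (ctrlr->flush_fails)
		return -1;
	ctrlr->dirty = 0;
	return 0;
}

static int cache_batch_sketch(struct fake_ctrlr *ctrlr)
{
	int ret = 0;

	pthread_mutex_lock(&ctrlr->cache_lock);
	ctrlr->dirty = 1;	/* the new request dirties the cache */
	if (!fake_psci_has_osi_support() && fake_rpmh_flush(ctrlr))
		ret = -22;	/* -EINVAL, as in the patch */
	pthread_mutex_unlock(&ctrlr->cache_lock);	/* single unlock */

	return ret;
}
```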
>
> Also: Why throw away the return value of rpmh_flush and replace it
> with -EINVAL? Trying to avoid -EBUSY? ...oh, should you handle
> -EBUSY? AKA:
>
> if (!psci_has_osi_support()) {
> do {
> ret = rpmh_flush(ctrl);
> } while (ret == -EBUSY);
> }

Done. The return value from rpmh_flush() can be -EAGAIN, not -EBUSY.

I will update the comment accordingly and include the change below in the next series:

https://patchwork.kernel.org/patch/11364067/

This should take care of callers not having to handle -EAGAIN.

>
>
>> + }
>> + }
>> +
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> +
>> + return 0;
>> }
>>
>> static int flush_batch(struct rpmh_ctrlr *ctrlr)
>> {
>> struct batch_cache_req *req;
>> const struct rpmh_request *rpm_msg;
>> - unsigned long flags;
>> int ret = 0;
>> int i;
>>
>> /* Send Sleep/Wake requests to the controller, expect no response */
>> - spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> list_for_each_entry(req, &ctrlr->batch_cache, list) {
>> for (i = 0; i < req->count; i++) {
>> rpm_msg = req->rpm_msgs + i;
>> @@ -314,7 +331,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>> break;
>> }
>> }
>> - spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>
>> return ret;
>> }
>> @@ -386,10 +402,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>> cmd += n[i];
>> }
>>
>> - if (state != RPMH_ACTIVE_ONLY_STATE) {
>> - cache_batch(ctrlr, req);
>> - return 0;
>> - }
>> + if (state != RPMH_ACTIVE_ONLY_STATE)
>> + return cache_batch(ctrlr, req);
>>
>> for (i = 0; i < count; i++) {
>> struct completion *compl = &compls[i];
>> @@ -455,9 +469,6 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>> * Return: -EBUSY if the controller is busy, probably waiting on a response
>> * to a RPMH request sent earlier.
>> *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
> nit: you've now got an extra "blank" (just has a "*" on it) line at
> the end of your comment block.
Done.
> nit: in v9, Evan suggested "We should probably replace that with a
> comment indicating that we assume ctrlr->cache_lock is already held".
> Maybe you could do that?
Yes, I left it for the reason below, since we can still call it from the sleep code.
I will mention the same in v11.

Thanks,
Maulik
>
> Also: presumably you _will_ still be called by the sleep code from the
> last CPU on systems with OSI. Is that true? If that's not true then
> you should change your function to static. If that is true, then your
> comment should be something like "this function will either be called
> from sleep code on the last CPU (thus no spinlock needed) or with the
> spinlock already held".
>
>
> -Doug

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

2020-03-05 22:22:06

by Doug Anderson

[permalink] [raw]
Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches

Hi,

On Thu, Mar 5, 2020 at 3:30 AM Maulik Shah <[email protected]> wrote:
>
> >> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> >> + return -EINVAL;
> > nit: why not add "int ret = 0" to the top of the function, then here:
> >
> > if (rpmh_flush(ctrl))
> > ret = -EINVAL;
> >
> > ...then at the end "return ret". It avoids the 2nd copy of the unlock?
> Done.
> >
> > Also: Why throw away the return value of rpmh_flush and replace it
> > with -EINVAL? Trying to avoid -EBUSY? ...oh, should you handle
> > -EBUSY? AKA:
> >
> > if (!psci_has_osi_support()) {
> > do {
> > ret = rpmh_flush(ctrl);
> > } while (ret == -EBUSY);
> > }
>
> Done, the return value from rpmh_flush() can be -EAGAIN, not -EBUSY.
>
> i will update the comment accordingly and will include below change as well in next series.
>
> https://patchwork.kernel.org/patch/11364067/
>
> this should address for caller to not handle -EAGAIN.

A few issues, I guess.

1. I _think_ it's important that you enable interrupts between
retries. If you're on the same CPU that the interrupt is routed to
and you were waiting for 'tcs_in_use' to be cleared you'll be in
trouble otherwise. ...I think we need to audit all of the places that
are looping based on -EAGAIN and confirm that interrupts are enabled
between retries. Before your patch series the only looping I see was
in rpmh_invalidate() and the lock wasn't held. After your series it's
also in rpmh_flush() which is called under spin_lock_irqsave() which
will be a problem.
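
[Editorial sketch of the hazard in point 1: a retry loop that holds the lock across retries spins with interrupts masked in the real code, so the IRQ that clears 'tcs_in_use' can never run on that CPU. Dropping the lock between attempts opens a window for it. A pthread mutex and `fake_` stubs stand in for the kernel primitives; all names are illustrative.]

```c
#include <pthread.h>

#define FAKE_EAGAIN 11

static pthread_mutex_t fake_lock = PTHREAD_MUTEX_INITIALIZER;
static int busy_attempts = 3;	/* pretend the TCS is busy for a while */

static int fake_flush(void)
{
	return busy_attempts-- > 0 ? -FAKE_EAGAIN : 0;
}

/*
 * Retry loop that releases the lock between attempts, so that (in the
 * real driver) interrupts are re-enabled and the completion IRQ can
 * land while we wait.
 */
static int flush_with_retries(void)
{
	int ret;

	do {
		pthread_mutex_lock(&fake_lock);
		ret = fake_flush();
		pthread_mutex_unlock(&fake_lock);	/* IRQ window here */
	} while (ret == -FAKE_EAGAIN);

	return ret;
}
```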

2. The RPMH code uses both -EBUSY and -EAGAIN so I looked carefully at
this again. You're right that -EBUSY seems to be exclusively returned
by things only called by rpmh_rsc_send_data() and that function
handles the retries. ...but looking at this made me find a broken
corner case with the "zero active tcs" case (assuming you care about
this case as per your other thread). Specifically if you have "zero
active tcs" then get_tcs_for_msg() can call rpmh_rsc_invalidate()
which can return -EAGAIN. That will return the -EAGAIN out of
tcs_write() into rpmh_rsc_send_data(). rpmh_rsc_send_data() only
handles -EBUSY, not -EAGAIN.
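
[Editorial sketch of the corner case in point 2: with "zero active tcs", tcs_write() can bubble -EAGAIN up from rpmh_rsc_invalidate() while the send loop only retries on -EBUSY. A fix in the spirit of the discussion retries on both; everything below is illustrative, not the real rpmh-rsc code.]

```c
#define FAKE_EBUSY   16
#define FAKE_EAGAIN2 11

/* Simulate a send that hits -EBUSY, then -EAGAIN, then succeeds. */
static const int fake_results[] = { -FAKE_EBUSY, -FAKE_EAGAIN2, 0 };
static int fake_idx;

static int fake_tcs_write(void)
{
	return fake_results[fake_idx++];
}

/* Retry on both transient errors instead of only -EBUSY. */
static int send_data_sketch(void)
{
	int ret;

	do {
		ret = fake_tcs_write();
	} while (ret == -FAKE_EBUSY || ret == -FAKE_EAGAIN2);

	return ret;
}
```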

-Doug

2020-03-05 22:23:23

by Doug Anderson

[permalink] [raw]
Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

Hi,

On Thu, Mar 5, 2020 at 3:10 AM Maulik Shah <[email protected]> wrote:
>
> > To summarize:
> >
> > a) If the only allowable use of "WAKE_ONLY" is to undo "SLEEP_ONLY"
> > then we should re-think the API and stop letting callers to
> > rpmh_write(), rpmh_write_async(), or rpmh_write_batch() ever specify
> > "WAKE_ONLY". The code should just assume that "wake_only =
> > active_only if (active_only != sleep_only)". In other words, RPMH
> > should programmatically figure out the "wake" state based on the
> > sleep/active state and not force callers to do this.
> >
> > b) If "WAKE_ONLY" is allowed to do other things (or if it's not RPMH's
> > job to enforce/assume this) then we should fully skip calling
> > cache_rpm_request() for RPMH_ACTIVE_ONLY_STATE.
> >
> >
> > NOTE: this discussion also makes me wonder about the is_req_valid()
> > function. That will skip sending a sleep/wake entry if the sleep and
> > wake entries are equal to each other. ...but if sleep and wake are
> > both different than "active" it'll be a problem.
>
> Hi Doug,
>
> To answer above points, yes in general it’s the understanding that wake is
> almost always need to be equal to active. However, there can be valid reasons
> for which the callers are enforced to call them differently in the first place.
>
> At present caller send 3 types of request.
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99);
>
> Now, Lets assume we handle this in rpmh driver since wake=active and the caller
> send only 2 type of request (lets call it active and sleep, since we have assumption
> of wake=active, and we don’t want 3rd request as its handled in rpmh driver)
> So callers will now invoke 2 types of request.
>
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
>
> with first type request, it now needs to serve 2 purpose
> (a) cache ACTIVE request votes as WAKE votes
> (b) trigger it out immediately (in ACTIVE TCS) as it need to be also complete immediately.
>
> For SLEEP, nothing changes. Now when entering to sleep we do rpmh_flush() to program all
> WAKE and SLEEP request…so far so good…
>
> Now consider a corner case,
>
> There is something called a solver mode in RSC where HW could be in autonomous mode executing
> low power modes. For this it may want to “only” program WAKE and SLEEP votes and then controller
> would be in solver mode entering and exiting sleep autonomously.
>
> There is no ACTIVE set request and hence no requirement to send it right away as ACTIVE vote.
>
> If we have only 2 type of request, caller again need to differentiate to tell rpmh driver that
> when it invoke
>
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
>
> with this caching it as WAKE is fine ((a) in above) but do not trigger it ((b) in above)
>
> so we need to again modify this API and pass another argument saying whether to do (a + b) or only (a).
> but caller can already differentiate by using RPMH_ACTIVE_ONLY_STATE or RPMH_WAKE_ONLY_STATE.
>
> i think at least for now, leave it as it is, unless we really see any impact by caller invoking all
> 3 types of request and take in account all such corner cases before i make any such change.
> we can take it separate if needed along with optimization pointed in v9 series discussions.

I totally don't understand what solver mode is for and when it's used,
but I'm willing to set that aside for now I guess. From looking at
what you did for v12 it looks like the way you're expecting things to
function is this:

* ACTIVE: set wake state and trigger active change right away.

* SLEEP: set only sleep state

* WAKE: set only wake state, will take effect after next sleep/wake
unless changed again before that happens.
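
[Editorial sketch of the three state semantics listed above, mirroring the v13 cache_rpm_request() snippet quoted later in this thread: an ACTIVE_ONLY write also updates the cached wake value, so wake follows active unless a later WAKE_ONLY write overrides it. Types and names are illustrative stand-ins.]

```c
enum fake_rpmh_state {
	FAKE_RPMH_SLEEP_STATE,
	FAKE_RPMH_WAKE_ONLY_STATE,
	FAKE_RPMH_ACTIVE_ONLY_STATE,
};

struct fake_cache_req {
	unsigned int addr;
	unsigned int sleep_val;
	unsigned int wake_val;
};

static void cache_sketch(struct fake_cache_req *req,
			 enum fake_rpmh_state state, unsigned int data)
{
	switch (state) {
	case FAKE_RPMH_ACTIVE_ONLY_STATE:
		/* fallthrough: an active write also caches as wake */
	case FAKE_RPMH_WAKE_ONLY_STATE:
		req->wake_val = data;
		break;
	case FAKE_RPMH_SLEEP_STATE:
		req->sleep_val = data;
		break;
	}
}
```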


...I'll look at the code with this understanding, now. Presumably also:

* We should document this.

* If we see clients that are explicitly setting _both_ active and wake
to the same thing then we can change the clients. That means the only
people using "WAKE" mode would be those clients that explicitly want
the deferred action (presumably those using "solver" mode).

Do those seem correct?


If that's correct, I guess one subtle corner-case bug in
is_req_valid(). Specifically if it's ever valid to do this:

rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x0);

...then it won't work. You'll transition between sleep/wake and stay
with "data=0x99". Since "sleep == wake" then is_req_valid() will
return "false" and we won't bother programming the commands for
sleep/wake. One simple way to solve this is remove the
"req->sleep_val != req->wake_val" optimization in is_req_valid().
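
[Editorial sketch of the optimization being debated: a cached request is only flushed when both values are set and sleep != wake, since cycling through sleep/wake would otherwise cause no level change. The UINT_MAX "unset" sentinel follows the driver's convention; the struct is a stand-in for the real cache_req.]

```c
#include <limits.h>

struct fake_req {
	unsigned int sleep_val;
	unsigned int wake_val;
};

/* Flush only when both values are set and they actually differ. */
static int is_req_valid_sketch(const struct fake_req *req)
{
	return req->sleep_val != UINT_MAX &&
	       req->wake_val != UINT_MAX &&
	       req->sleep_val != req->wake_val;
}
```

With the sequence above (active=0x99, then sleep=0x0 and wake=0x0), sleep equals wake, so the sketch returns false and nothing is programmed for sleep/wake.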


I guess we should also document that "batch" doesn't work like this.
The "batch" API is really designed around having exactly one "batch"
caller (the interconnect code) and we assume that the batch code will
be sending us pre-optimized commands I guess? Specifically there
doesn't seem to be anything trying to catch batch writes to "active"
and also applying them to "wake".


-Doug

2020-03-10 11:02:58

by Maulik Shah

[permalink] [raw]
Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches


On 3/6/2020 3:50 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 5, 2020 at 3:30 AM Maulik Shah <[email protected]> wrote:
>>>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>>> + return -EINVAL;
>>> nit: why not add "int ret = 0" to the top of the function, then here:
>>>
>>> if (rpmh_flush(ctrl))
>>> ret = -EINVAL;
>>>
>>> ...then at the end "return ret". It avoids the 2nd copy of the unlock?
>> Done.
>>> Also: Why throw away the return value of rpmh_flush and replace it
>>> with -EINVAL? Trying to avoid -EBUSY? ...oh, should you handle
>>> -EBUSY? AKA:
>>>
>>> if (!psci_has_osi_support()) {
>>> do {
>>> ret = rpmh_flush(ctrl);
>>> } while (ret == -EBUSY);
>>> }
>> Done, the return value from rpmh_flush() can be -EAGAIN, not -EBUSY.
>>
>> i will update the comment accordingly and will include below change as well in next series.
>>
>> https://patchwork.kernel.org/patch/11364067/
>>
>> this should address for caller to not handle -EAGAIN.
> A few issues, I guess.
>
> 1. I _think_ it's important that you enable interrupts between
> retries. If you're on the same CPU that the interrupt is routed to
> and you were waiting for 'tcs_in_use' to be cleared you'll be in
> trouble otherwise. ...I think we need to audit all of the places that
> are looping based on -EAGAIN and confirm that interrupts are enabled
> between retries. Before your patch series the only looping I see was
> in rpmh_invalidate() and the lock wasn't held. After your series it's
> also in rpmh_flush() which is called under spin_lock_irqsave() which
> will be a problem.
I will take a look at the interrupts part.
>
> 2. The RPMH code uses both -EBUSY and -EAGAIN so I looked carefully at
> this again. You're right that -EBUSY seems to be exclusively returned
> by things only called by rpmh_rsc_send_data() and that function
> handles the retries. ...but looking at this made me find a broken
> corner case with the "zero active tcs" case (assuming you care about
> this case as per your other thread). Specifically if you have "zero
> active tcs" then get_tcs_for_msg() can call rpmh_rsc_invalidate()
> which can return -EAGAIN. That will return the -EAGAIN out of
> tcs_write() into rpmh_rsc_send_data(). rpmh_rsc_send_data() only
> handles -EBUSY, not -EAGAIN.
>
> -Doug

Thanks, Doug. I will have a patch to fix this.

Thanks,
Maulik


2020-03-10 11:03:58

by Maulik Shah

[permalink] [raw]
Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes


On 3/6/2020 3:52 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 5, 2020 at 3:10 AM Maulik Shah <[email protected]> wrote:
>>> [snip — quoted in full earlier in the thread]
> I totally don't understand what solver mode is for and when it's used,
> but I'm willing to set that aside for now I guess. From looking at
> what you did for v12 it looks like the way you're expecting things to
> function is this:
>
> * ACTIVE: set wake state and trigger active change right away.
>
> * SLEEP: set only sleep state
>
> * WAKE: set only wake state, will take effect after next sleep/wake
> unless changed again before that happens.
>
>
> ...I'll look at the code with this understanding, now. Presumably also:
>
> * We should document this.
Okay, I will document the above.
> * If we see clients that are explicitly setting _both_ active and wake
> to the same thing then we can change the clients. That means the only
> people using "WAKE" mode would be those clients that explicitly want
> the deferred action (presumably those using "solver" mode).
>
> Do those seem correct?
Correct, but I suggest changing the clients only once the solver mode changes go in.
Until then, leave the clients calling ACTIVE and WAKE requests separately (even with the same data).
>
> If that's correct, I guess one subtle corner-case bug in
> is_req_valid(). Specifically if it's ever valid to do this:
>
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x0);
This scenario will never be hit in solver mode.
When in solver mode, only WAKE and SLEEP requests are allowed through.
> ...then it won't work.
It will work out just fine, as said above.
> You'll transition between sleep/wake and stay
> with "data=0x99". Since "sleep == wake" then is_req_valid() will
> return "false" and we won't bother programming the commands for
> sleep/wake. One simple way to solve this is remove the
> "req->sleep_val != req->wake_val" optimization in is_req_valid().

This check still needs to be kept.

The clients may invoke the calls with data like the example below:

rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x99);
rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99); (we assume wake=active)

The ACTIVE request is sent immediately, and the resource has already come up to the 0x99 level.

Now, while flushing, there is no point in programming this in the SLEEP TCS: when the command triggers from the SLEEP TCS, it won't make any real level transition, since the resource was already brought up to the 0x99 level by the previous ACTIVE command. The same reason goes for not programming it in the WAKE TCS.

>
> I guess we should also document that "batch" doesn't work like this.
> The "batch" API is really designed around having exactly one "batch"
> caller (the interconnect code) and we assume that the batch code will
> be sending us pre-optimized commands I guess? Specifically there
> doesn't seem to be anything trying to catch batch writes to "active"
> and also applying them to "wake".
Okay, I will document the above.

Thanks,
Maulik

> -Doug


2020-03-10 15:47:15

by Doug Anderson

[permalink] [raw]
Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

Hi,


On Tue, Mar 10, 2020 at 4:03 AM Maulik Shah <[email protected]> wrote:
>
>
> On 3/6/2020 3:52 AM, Doug Anderson wrote:
> > Hi,
> >
> > On Thu, Mar 5, 2020 at 3:10 AM Maulik Shah <[email protected]> wrote:
> >>> [snip — quoted in full earlier in the thread]
> >
> > * We should document this.
> Okay, i will document above.
> > * If we see clients that are explicitly setting _both_ active and wake
> > to the same thing then we can change the clients. That means the only
> > people using "WAKE" mode would be those clients that explicitly want
> > the deferred action (presumably those using "solver" mode).
> >
> > Do those seem correct?
> Correct. but i suggest to change clients only once solver mode changes go in.
> until then leave clients to call ACTIVE and WAKE request separately (even with same data)

If you want to leave clients to call ACTIVE and WAKE requests
separately for now, then please change your patch ("soc: qcom: rpmh:
Update dirty flag only when data changes"), which (from v13) has this
in cache_rpm_request():

switch (state) {
case RPMH_ACTIVE_ONLY_STATE:
case RPMH_WAKE_ONLY_STATE:
req->wake_val = cmd->data;

...that bit of code is what led me to request that you document that
setting the active-only state also sets the wake-only state, so either
change that code or add the documentation.


> > If that's correct, I guess one subtle corner-case bug in
> > is_req_valid(). Specifically if it's ever valid to do this:
> >
> > rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> > rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
> > rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x0);
> This scenario will never hit in solver mode.
> when in solver, only WAKE and SLEEP requests are allowed to go through.

OK, I guess this stems from me not understanding solver mode. I guess
for now it's unlikely to cause problems given the current usage, so we
can ignore for now.


> > ...then it won't work.
> will work out just fine, as said above.
> > You'll transition between sleep/wake and stay
> > with "data=0x99". Since "sleep == wake" then is_req_valid() will
> > return "false" and we won't bother programming the commands for
> > sleep/wake. One simple way to solve this is remove the
> > "req->sleep_val != req->wake_val" optimization in is_req_valid().
>
> This will still need to keep check.
>
> the clients may invoke with below example data...
>
> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x99);
> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99); (we assume wake=active)
>
> while ACTIVE is immediatly sent, and resource already came to 0x99 level.
>
> Now while flushing, there is no point in programming in SLEEP TCS as such when cmd triggers
> from SLEEP TCS then it won't make any real level transition since its already brought up to
> 0x99 level with previous ACTIVE cmd. same reason goes for not programming it in WAKE TCS.

"Need" is perhaps too strong of a word if I understand things. The
example above would still work even if we didn't check "sleep == wake"
it would just program an extra (but pointless) command. Right?

...but I guess we can debate about it later. For now I'm not aware of
anyone setting WAKE to something other than ACTIVE.


-Doug

2020-03-11 05:41:25

by Maulik Shah

[permalink] [raw]
Subject: Re: [PATCH v10 2/3] soc: qcom: rpmh: Update dirty flag only when data changes

Hi,

On 3/10/2020 9:16 PM, Doug Anderson wrote:
> Hi,
>
>
> On Tue, Mar 10, 2020 at 4:03 AM Maulik Shah <[email protected]> wrote:
>>
>> On 3/6/2020 3:52 AM, Doug Anderson wrote:
>>> Hi,
>>>
>>> On Thu, Mar 5, 2020 at 3:10 AM Maulik Shah <[email protected]> wrote:
>>>>> [snip — quoted in full earlier in the thread]
>> Okay, I will document the above.
>>> * If we see clients that are explicitly setting _both_ active and wake
>>> to the same thing then we can change the clients. That means the only
>>> people using "WAKE" mode would be those clients that explicitly want
>>> the deferred action (presumably those using "solver" mode).
>>>
>>> Do those seem correct?
>> Correct, but I suggest changing clients only once the solver mode changes go in.
>> Until then, leave clients calling ACTIVE and WAKE requests separately (even with the same data).
> If you want to leave clients to call ACTIVE and WAKE requests
> separately for now, then please change your patch ("soc: qcom: rpmh:
> Update dirty flag only when data changes"), which (from v13) has this
> in cache_rpm_request():
>
>     switch (state) {
>     case RPMH_ACTIVE_ONLY_STATE:
>     case RPMH_WAKE_ONLY_STATE:
>             req->wake_val = cmd->data;
>
> ...that bit of code is what led me to request that you document that
> setting the active-only state also sets the wake-only state, so either
> change that code or add the documentation.

Yes, I will add the documentation.
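To make the semantics being documented concrete, here is a small standalone model of the caching behavior under discussion (an illustrative sketch only — the enum and struct names mirror the kernel's rpmh driver, but this is not the actual driver code and omits locking, batching, and the TCS write path):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: one cached request per address, holding the
 * last sleep vote and the last wake vote. */
enum rpmh_state {
	RPMH_SLEEP_STATE,
	RPMH_WAKE_ONLY_STATE,
	RPMH_ACTIVE_ONLY_STATE,
};

struct cache_req {
	uint32_t addr;
	uint32_t sleep_val;
	uint32_t wake_val;
};

/* Caching rule from the thread: an ACTIVE_ONLY request is also
 * cached as the WAKE vote (fallthrough), since wake=active is the
 * assumed behavior for non-solver clients. */
static void cache_rpm_request(struct cache_req *req,
			      enum rpmh_state state, uint32_t data)
{
	switch (state) {
	case RPMH_ACTIVE_ONLY_STATE:
		/* fall through: active implies wake */
	case RPMH_WAKE_ONLY_STATE:
		req->wake_val = data;
		break;
	case RPMH_SLEEP_STATE:
		req->sleep_val = data;
		break;
	}
}
```

Under this model, `rpmh_write(RPMH_ACTIVE_ONLY_STATE, ...)` both triggers the active TCS (not modeled here) and updates the cached wake vote, which is exactly the behavior the documentation needs to spell out.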

>
>
>>> If that's correct, I guess one subtle corner-case bug in
>>> is_req_valid(). Specifically if it's ever valid to do this:
>>>
>>> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
>>> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x0);
>>> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x0);
>> This scenario will never be hit in solver mode.
>> When in solver mode, only WAKE and SLEEP requests are allowed through.
> OK, I guess this stems from me not understanding solver mode. I guess
> for now it's unlikely to cause problems given the current usage, so we
> can ignore for now.
>
>
>>> ...then it won't work.
>> It will work out just fine, as said above.
>>> You'll transition between sleep/wake and stay
>>> with "data=0x99". Since "sleep == wake" then is_req_valid() will
>>> return "false" and we won't bother programming the commands for
>>> sleep/wake. One simple way to solve this is remove the
>>> "req->sleep_val != req->wake_val" optimization in is_req_valid().
>> We still need to keep this check.
>>
>> the clients may invoke with below example data...
>>
>> rpmh_write(RPMH_ACTIVE_ONLY_STATE, addr=0x10, data=0x99);
>> rpmh_write(RPMH_SLEEP_STATE, addr=0x10, data=0x99);
>> rpmh_write(RPMH_WAKE_ONLY_STATE, addr=0x10, data=0x99); (we assume wake=active)
>>
>> The ACTIVE request is sent immediately, and the resource has already come to the 0x99 level.
>>
>> Now while flushing, there is no point in programming the SLEEP TCS: when the command triggers
>> from the SLEEP TCS it won't make any real level transition, since the resource was already brought
>> to the 0x99 level by the previous ACTIVE command. The same reasoning goes for not programming it
>> in the WAKE TCS.
> "Need" is perhaps too strong of a word if I understand things. The
> example above would still work even if we didn't check "sleep == wake"
> it would just program an extra (but pointless) command. Right?
>
> ...but I guess we can debate about it later. For now I'm not aware of
> anyone setting WAKE to something other than ACTIVE.
>
>
> -Doug
It's not just programming an extra command, nor is it pointless; it's needed to keep it that way.
Let me explain the effect of this command...

The resource for which this extra command would be sent is already at the required level,
so the following might happen, in my understanding:

The resource may be in its own low power mode. When this extra command is sent, it needs to
come out of low power mode and apply the newly requested level from the sleep command, but it is
already at that level; so, without doing any real level transition, it will again go back to sleep.
The same happens for the wake command. So there can be a power hit,
and this extra command need not be sent.
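The skip condition being defended above can be sketched as a small standalone check (a hedged sketch: `UNSET_VAL` is an assumed "no vote cached" sentinel, and the real is_req_valid() in the kernel driver may differ in detail):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed sentinel meaning "no vote has been cached for this state". */
#define UNSET_VAL UINT32_MAX

struct cache_req {
	uint32_t addr;
	uint32_t sleep_val;
	uint32_t wake_val;
};

/* A cached request is worth flushing to the SLEEP/WAKE TCS only when
 * both votes exist and they differ; identical votes would only force
 * the resource out of its low power mode to "transition" to the level
 * it is already at, costing power for nothing. */
static bool is_req_valid(const struct cache_req *req)
{
	return req->sleep_val != UNSET_VAL &&
	       req->wake_val != UNSET_VAL &&
	       req->sleep_val != req->wake_val;
}
```

With the example from the thread (sleep = wake = 0x99 after the ACTIVE request already landed), the check returns false and rpmh_flush() skips the redundant command, which is the power-saving behavior described above.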

Thanks,
Maulik

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation