2013-08-23 19:46:16

by Colin Cross

[permalink] [raw]
Subject: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

Calling cpuidle_enter_state is expected to return with interrupts
enabled, but interrupts must be disabled before starting the
ready loop synchronization stage. Call local_irq_disable after
each call to cpuidle_enter_state for the safe state.

CC: [email protected]
Signed-off-by: Colin Cross <[email protected]>
---
drivers/cpuidle/coupled.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 2a297f8..db92bcb 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -460,6 +460,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
}
entered_state = cpuidle_enter_state(dev, drv,
dev->safe_state_index);
+ local_irq_disable();
}

/* Read barrier ensures online_count is read after prevent is cleared */
@@ -485,6 +486,7 @@ retry:

entered_state = cpuidle_enter_state(dev, drv,
dev->safe_state_index);
+ local_irq_disable();
}

if (cpuidle_coupled_clear_pokes(dev->cpu)) {
--
1.8.3
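
For readers following the thread, a condensed sketch of the main waiting loop (the second hunk above) as it looks with this patch applied; this is paraphrased from cpuidle_enter_state_coupled(), not the verbatim kernel source:

	/* waiting stage: spin in the safe state until all coupled cpus arrive */
	while (!cpuidle_coupled_cpus_waiting(coupled)) {
		/* clear_pokes() briefly enables irqs, then disables them again */
		if (cpuidle_coupled_clear_pokes(dev->cpu)) {
			cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
			goto out;
		}

		/* cpuidle_enter_state() returns with irqs enabled ... */
		entered_state = cpuidle_enter_state(dev, drv,
						    dev->safe_state_index);
		/* ... so turn them back off before the ready-loop sync (this patch) */
		local_irq_disable();
	}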


2013-08-23 19:45:23

by Colin Cross

[permalink] [raw]
Subject: [PATCH 2/3] cpuidle: coupled: abort idle if pokes are pending

Joseph Lo <[email protected]> reported a lockup on Tegra3 caused
by a race condition in coupled cpuidle. When two or more cpus
enter idle at the same time, the first cpus to arrive may go to the
ready loop without processing pending pokes from the last cpu to
arrive.

This patch adds a check for pending pokes once all cpus have been
synchronized in the ready loop and resets the coupled state and
retries if any cpus failed to handle their pending poke.

Retrying on all cpus may trigger the same issue again, so this patch
also adds a check to ensure that each cpu has received at least one
poke between when it enters the waiting loop and when it moves on to
the ready loop.

Reported-by: Joseph Lo <[email protected]>
CC: [email protected]
Signed-off-by: Colin Cross <[email protected]>
---
drivers/cpuidle/coupled.c | 107 +++++++++++++++++++++++++++++++++++-----------
1 file changed, 82 insertions(+), 25 deletions(-)

diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index db92bcb..bbc4ba5 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -106,6 +106,7 @@ struct cpuidle_coupled {
cpumask_t coupled_cpus;
int requested_state[NR_CPUS];
atomic_t ready_waiting_counts;
+ atomic_t abort_barrier;
int online_count;
int refcnt;
int prevent;
@@ -122,12 +123,19 @@ static DEFINE_MUTEX(cpuidle_coupled_lock);
static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);

/*
- * The cpuidle_coupled_poked_mask mask is used to avoid calling
+ * The cpuidle_coupled_poke_pending mask is used to avoid calling
* __smp_call_function_single with the per cpu call_single_data struct already
* in use. This prevents a deadlock where two cpus are waiting for each others
* call_single_data struct to be available
*/
-static cpumask_t cpuidle_coupled_poked_mask;
+static cpumask_t cpuidle_coupled_poke_pending;
+
+/*
+ * The cpuidle_coupled_poke_pending mask is used to ensure that each cpu has
+ * been poked once to minimize entering the ready loop with a poke pending,
+ * which would require aborting and retrying.
+ */
+static cpumask_t cpuidle_coupled_poked;

/**
* cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
@@ -291,10 +299,11 @@ static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
return state;
}

-static void cpuidle_coupled_poked(void *info)
+static void cpuidle_coupled_handle_poke(void *info)
{
int cpu = (unsigned long)info;
- cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
+ cpumask_set_cpu(cpu, &cpuidle_coupled_poked);
+ cpumask_clear_cpu(cpu, &cpuidle_coupled_poke_pending);
}

/**
@@ -313,7 +322,7 @@ static void cpuidle_coupled_poke(int cpu)
{
struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);

- if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+ if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poke_pending))
__smp_call_function_single(cpu, csd, 0);
}

@@ -340,30 +349,19 @@ static void cpuidle_coupled_poke_others(int this_cpu,
* @coupled: the struct coupled that contains the current cpu
* @next_state: the index in drv->states of the requested state for this cpu
*
- * Updates the requested idle state for the specified cpuidle device,
- * poking all coupled cpus out of idle if necessary to let them see the new
- * state.
+ * Updates the requested idle state for the specified cpuidle device.
+ * Returns the number of waiting cpus.
*/
-static void cpuidle_coupled_set_waiting(int cpu,
+static int cpuidle_coupled_set_waiting(int cpu,
struct cpuidle_coupled *coupled, int next_state)
{
- int w;
-
coupled->requested_state[cpu] = next_state;

/*
- * If this is the last cpu to enter the waiting state, poke
- * all the other cpus out of their waiting state so they can
- * enter a deeper state. This can race with one of the cpus
- * exiting the waiting state due to an interrupt and
- * decrementing waiting_count, see comment below.
- *
* The atomic_inc_return provides a write barrier to order the write
* to requested_state with the later write that increments ready_count.
*/
- w = atomic_inc_return(&coupled->ready_waiting_counts) & WAITING_MASK;
- if (w == coupled->online_count)
- cpuidle_coupled_poke_others(cpu, coupled);
+ return atomic_inc_return(&coupled->ready_waiting_counts) & WAITING_MASK;
}

/**
@@ -418,13 +416,24 @@ static void cpuidle_coupled_set_done(int cpu, struct cpuidle_coupled *coupled)
static int cpuidle_coupled_clear_pokes(int cpu)
{
local_irq_enable();
- while (cpumask_test_cpu(cpu, &cpuidle_coupled_poked_mask))
+ while (cpumask_test_cpu(cpu, &cpuidle_coupled_poke_pending))
cpu_relax();
local_irq_disable();

return need_resched() ? -EINTR : 0;
}

+static bool cpuidle_coupled_any_pokes_pending(struct cpuidle_coupled *coupled)
+{
+ cpumask_t cpus;
+ int ret;
+
+ cpumask_and(&cpus, cpu_online_mask, &coupled->coupled_cpus);
+ ret = cpumask_and(&cpus, &cpuidle_coupled_poke_pending, &cpus);
+
+ return ret;
+}
+
/**
* cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
* @dev: struct cpuidle_device for the current cpu
@@ -449,6 +458,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
{
int entered_state = -1;
struct cpuidle_coupled *coupled = dev->coupled;
+ int w;

if (!coupled)
return -EINVAL;
@@ -466,14 +476,33 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
/* Read barrier ensures online_count is read after prevent is cleared */
smp_rmb();

- cpuidle_coupled_set_waiting(dev->cpu, coupled, next_state);
+reset:
+ cpumask_clear_cpu(dev->cpu, &cpuidle_coupled_poked);
+
+ w = cpuidle_coupled_set_waiting(dev->cpu, coupled, next_state);
+ /*
+ * If this is the last cpu to enter the waiting state, poke
+ * all the other cpus out of their waiting state so they can
+ * enter a deeper state. This can race with one of the cpus
+ * exiting the waiting state due to an interrupt and
+ * decrementing waiting_count, see comment below.
+ */
+ if (w == coupled->online_count) {
+ cpumask_set_cpu(dev->cpu, &cpuidle_coupled_poked);
+ cpuidle_coupled_poke_others(dev->cpu, coupled);
+ }

retry:
/*
* Wait for all coupled cpus to be idle, using the deepest state
- * allowed for a single cpu.
+ * allowed for a single cpu. If this was not the poking cpu, wait
+ * for at least one poke before leaving to avoid a race where
+ * two cpus could arrive at the waiting loop at the same time,
+ * but the first of the two to arrive could skip the loop without
+ * processing the pokes from the last to arrive.
*/
- while (!cpuidle_coupled_cpus_waiting(coupled)) {
+ while (!cpuidle_coupled_cpus_waiting(coupled) ||
+ !cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked)) {
if (cpuidle_coupled_clear_pokes(dev->cpu)) {
cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
goto out;
@@ -495,6 +524,12 @@ retry:
}

/*
+ * Make sure final poke status for this cpu is visible before setting
+ * cpu as ready.
+ */
+ smp_wmb();
+
+ /*
* All coupled cpus are probably idle. There is a small chance that
* one of the other cpus just became active. Increment the ready count,
* and spin until all coupled cpus have incremented the counter. Once a
@@ -513,6 +548,28 @@ retry:
cpu_relax();
}

+ /*
+ * Make sure read of all cpus ready is done before reading pending pokes
+ */
+ smp_rmb();
+
+ /*
+ * There is a small chance that a cpu left and reentered idle after this
+ * cpu saw that all cpus were waiting. The cpu that reentered idle will
+ * have sent this cpu a poke, which will still be pending after the
+ * ready loop. The pending interrupt may be lost by the interrupt
+ * controller when entering the deep idle state. It's not possible to
+ * clear a pending interrupt without turning interrupts on and handling
+ * it, and it's too late to turn on interrupts here, so reset the
+ * coupled idle state of all cpus and retry.
+ */
+ if (cpuidle_coupled_any_pokes_pending(coupled)) {
+ cpuidle_coupled_set_done(dev->cpu, coupled);
+ /* Wait for all cpus to see the pending pokes */
+ cpuidle_coupled_parallel_barrier(dev, &coupled->abort_barrier);
+ goto reset;
+ }
+
/* all cpus have acked the coupled state */
next_state = cpuidle_coupled_get_state(dev, coupled);

@@ -598,7 +655,7 @@ have_coupled:
coupled->refcnt++;

csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
- csd->func = cpuidle_coupled_poked;
+ csd->func = cpuidle_coupled_handle_poke;
csd->info = (void *)(unsigned long)dev->cpu;

return 0;
--
1.8.3
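
Condensed, the waiting/ready flow this patch establishes in cpuidle_enter_state_coupled() looks roughly like the sketch below (paraphrased from the diff above, not the verbatim source):

	reset:
		/* forget any poke this cpu has already handled */
		cpumask_clear_cpu(dev->cpu, &cpuidle_coupled_poked);

		w = cpuidle_coupled_set_waiting(dev->cpu, coupled, next_state);
		if (w == coupled->online_count) {
			/* last cpu to arrive: mark itself poked and poke the others */
			cpumask_set_cpu(dev->cpu, &cpuidle_coupled_poked);
			cpuidle_coupled_poke_others(dev->cpu, coupled);
		}

		/* wait until every cpu is waiting and this cpu has seen a poke */
		while (!cpuidle_coupled_cpus_waiting(coupled) ||
		       !cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked)) {
			/* handle pokes / enter the safe state, as before */
		}

		/* ... increment ready_count and spin until all cpus are ready ... */

		if (cpuidle_coupled_any_pokes_pending(coupled)) {
			/* a late poke could be lost in the deep state: back out, retry */
			cpuidle_coupled_set_done(dev->cpu, coupled);
			cpuidle_coupled_parallel_barrier(dev, &coupled->abort_barrier);
			goto reset;
		}

		/* all cpus have acked the coupled state; enter it */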

2013-08-23 19:45:20

by Colin Cross

[permalink] [raw]
Subject: [PATCH 3/3] cpuidle: coupled: fix race condition between pokes and safe state

The coupled cpuidle waiting loop clears pending pokes before
entering the safe state. If a poke arrives just before the
pokes are cleared, but after the while loop condition checks,
the poke will be lost and the cpu will stay in the safe state
until another interrupt arrives. This may cause the cpu that
sent the poke to spin in the ready loop with interrupts off
until another cpu receives an interrupt, and if no other cpus
have interrupts routed to them it can spin forever.

Change the return value of cpuidle_coupled_clear_pokes to
indicate whether a poke was cleared, and move the need_resched()
checks into the callers. In the waiting loop, if a poke was
cleared, restart the loop to repeat the while condition checks.

Reported-by: Neil Zhang <[email protected]>
CC: [email protected]
Signed-off-by: Colin Cross <[email protected]>
---
drivers/cpuidle/coupled.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index bbc4ba5..c91230b 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -408,19 +408,22 @@ static void cpuidle_coupled_set_done(int cpu, struct cpuidle_coupled *coupled)
* been processed and the poke bit has been cleared.
*
* Other interrupts may also be processed while interrupts are enabled, so
- * need_resched() must be tested after turning interrupts off again to make sure
+ * need_resched() must be tested after this function returns to make sure
* the interrupt didn't schedule work that should take the cpu out of idle.
*
- * Returns 0 if need_resched was false, -EINTR if need_resched was true.
+ * Returns 0 if no poke was pending, 1 if a poke was cleared.
*/
static int cpuidle_coupled_clear_pokes(int cpu)
{
+ if (!cpumask_test_cpu(cpu, &cpuidle_coupled_poke_pending))
+ return 0;
+
local_irq_enable();
while (cpumask_test_cpu(cpu, &cpuidle_coupled_poke_pending))
cpu_relax();
local_irq_disable();

- return need_resched() ? -EINTR : 0;
+ return 1;
}

static bool cpuidle_coupled_any_pokes_pending(struct cpuidle_coupled *coupled)
@@ -464,7 +467,8 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
return -EINVAL;

while (coupled->prevent) {
- if (cpuidle_coupled_clear_pokes(dev->cpu)) {
+ cpuidle_coupled_clear_pokes(dev->cpu);
+ if (need_resched()) {
local_irq_enable();
return entered_state;
}
@@ -503,7 +507,10 @@ retry:
*/
while (!cpuidle_coupled_cpus_waiting(coupled) ||
!cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked)) {
- if (cpuidle_coupled_clear_pokes(dev->cpu)) {
+ if (cpuidle_coupled_clear_pokes(dev->cpu))
+ continue;
+
+ if (need_resched()) {
cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
goto out;
}
@@ -518,7 +525,8 @@ retry:
local_irq_disable();
}

- if (cpuidle_coupled_clear_pokes(dev->cpu)) {
+ cpuidle_coupled_clear_pokes(dev->cpu);
+ if (need_resched()) {
cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
goto out;
}
--
1.8.3
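
The resulting caller pattern in the waiting loop can be summarized as below (condensed from the diff above, not the verbatim source): a cleared poke now restarts the loop so the while condition is re-evaluated, and need_resched() is consulted separately.

	while (!cpuidle_coupled_cpus_waiting(coupled) ||
	       !cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked)) {
		/* a poke was cleared: re-check the loop condition first */
		if (cpuidle_coupled_clear_pokes(dev->cpu))
			continue;

		/* no poke was pending; now decide whether to leave idle */
		if (need_resched()) {
			cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
			goto out;
		}

		entered_state = cpuidle_enter_state(dev, drv,
						    dev->safe_state_index);
		local_irq_disable();
	}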

2013-08-23 23:15:20

by Stephen Warren

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On 08/23/2013 01:45 PM, Colin Cross wrote:
> Calling cpuidle_enter_state is expected to return with interrupts
> enabled, but interrupts must be disabled before starting the
> ready loop synchronization stage. Call local_irq_disable after
> each call to cpuidle_enter_state for the safe state.

Tested-by: Stephen Warren <[email protected]>

Note: I tested the current Tegra cpuidle code that's in next-20130819.
IIRC, Joseph reported the issue when trying to enable some additional
feature in Tegra30 cpuidle. I didn't actually try to enable whatever
that was; I just briefly tested for regressions in the existing code
configuration.

2013-08-24 00:22:33

by Colin Cross

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Fri, Aug 23, 2013 at 4:09 PM, Stephen Warren <[email protected]> wrote:
> On 08/23/2013 01:45 PM, Colin Cross wrote:
>> Calling cpuidle_enter_state is expected to return with interrupts
>> enabled, but interrupts must be disabled before starting the
>> ready loop synchronization stage. Call local_irq_disable after
>> each call to cpuidle_enter_state for the safe state.
>
> Tested-by: Stephen Warren <[email protected]>
>
> Note: I tested the current Tegra cpuidle code that's in next-20130819.
> IIRC, Joseph reported the issue when trying to enable some additional
> feature in Tegra30 cpuidle. I didn't actually try to enable whatever
> that was; I just briefly tested for regressions in the existing code
> configuration.

Is that for the series or just this patch?

2013-08-26 09:11:58

by Joseph Lo

[permalink] [raw]
Subject: Re: [PATCH 2/3] cpuidle: coupled: abort idle if pokes are pending

On Sat, 2013-08-24 at 03:45 +0800, Colin Cross wrote:
> Joseph Lo <[email protected]> reported a lockup on Tegra3 caused
> by a race condition in coupled cpuidle. When two or more cpus
Actually this issue can be reproduced on both Tegra20 and Tegra30 platforms.
I suggest replacing Tegra3 with Tegra20 here, since coupled CPU idle is only
enabled on Tegra20 in mainline right now.
> enter idle at the same time, the first cpus to arrive may go to the
> ready loop without processing pending pokes from the last cpu to
> arrive.
>
> This patch adds a check for pending pokes once all cpus have been
> synchronized in the ready loop and resets the coupled state and
> retries if any cpus failed to handle their pending poke.
>
> Retrying on all cpus may trigger the same issue again, so this patch
> also adds a check to ensure that each cpu has received at least one
> poke between when it enters the waiting loop and when it moves on to
> the ready loop.
>
> Reported-by: Joseph Lo <[email protected]>
> CC: [email protected]
> Signed-off-by: Colin Cross <[email protected]>
> ---
> drivers/cpuidle/coupled.c | 107 +++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 82 insertions(+), 25 deletions(-)
>
[snip]
> +/*
> + * The cpuidle_coupled_poke_pending mask is used to ensure that each cpu has
s/cpuidle_coupled_poke_pending/cpuidle_coupled_poked/? :)
> + * been poked once to minimize entering the ready loop with a poke pending,
> + * which would require aborting and retrying.
> + */
> +static cpumask_t cpuidle_coupled_poked;
>
I fixed this issue on Tegra20 by checking whether an SGI is pending and
aborting the coupled state if so. The lockup can still be reproduced easily
if I remove that check, so I tested that case with this patch applied, and
the result is good. This patch does indeed fix the issue.

I also tested the other two patches. They didn't cause any
regression.

So this series:
Tested-by: Joseph Lo <[email protected]>
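
For reference, the Tegra20-side workaround Joseph describes amounts to something like the sketch below. The helper name tegra_pending_sgi() and the abort mechanics are assumptions for illustration, not taken from this thread:

	/* sketch: shared state among the coupled cpus entering the deep state */
	static bool abort_flag;

	if (tegra_pending_sgi())
		abort_flag = true;	/* a poke (SGI) is already pending */

	/* make sure every coupled cpu sees the flag before deciding */
	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

	if (abort_flag) {
		abort_flag = false;
		return false;		/* hypothetical: fall back to the safe state */
	}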

2013-08-26 15:55:20

by Stephen Warren

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On 08/23/2013 06:22 PM, Colin Cross wrote:
> On Fri, Aug 23, 2013 at 4:09 PM, Stephen Warren <[email protected]> wrote:
>> On 08/23/2013 01:45 PM, Colin Cross wrote:
>>> Calling cpuidle_enter_state is expected to return with interrupts
>>> enabled, but interrupts must be disabled before starting the
>>> ready loop synchronization stage. Call local_irq_disable after
>>> each call to cpuidle_enter_state for the safe state.
>>
>> Tested-by: Stephen Warren <[email protected]>
>>
>> Note: I tested the current Tegra cpuidle code that's in next-20130819.
>> IIRC, Joseph reported the issue when trying to enable some additional
>> feature in Tegra30 cpuidle. I didn't actually try to enable whatever
>> that was; I just briefly tested for regressions in the existing code
>> configuration.
>
> Is that for the series or just this patch?

The series.

2013-08-27 09:02:26

by Neil Zhang

[permalink] [raw]
Subject: RE: [PATCH 3/3] cpuidle: coupled: fix race condition between pokes and safe state

> -----Original Message-----
> From: Colin Cross [mailto:[email protected]]
> Sent: 2013-08-24 3:45
> To: [email protected]
> Cc: [email protected]; Neil Zhang; Joseph Lo;
> [email protected]; Colin Cross; [email protected]; Rafael J.
> Wysocki; Daniel Lezcano
> Subject: [PATCH 3/3] cpuidle: coupled: fix race condition between pokes and safe
> state
>
> The coupled cpuidle waiting loop clears pending pokes before entering the safe
> state. If a poke arrives just before the pokes are cleared, but after the while
> loop condition checks, the poke will be lost and the cpu will stay in the safe state
> until another interrupt arrives. This may cause the cpu that sent the poke to
> spin in the ready loop with interrupts off until another cpu receives an interrupt,
> and if no other cpus have interrupts routed to them it can spin forever.
>
> Change the return value of cpuidle_coupled_clear_pokes to indicate whether a
> poke was cleared, and move the need_resched() checks into the callers. In the
> waiting loop, if a poke was cleared, restart the loop to repeat the while
> condition checks.
>
> Reported-by: Neil Zhang <[email protected]>
> CC: [email protected]
> Signed-off-by: Colin Cross <[email protected]>
> ---
> drivers/cpuidle/coupled.c | 20 ++++++++++++++------
> 1 file changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> index bbc4ba5..c91230b 100644
> --- a/drivers/cpuidle/coupled.c
> +++ b/drivers/cpuidle/coupled.c
> @@ -408,19 +408,22 @@ static void cpuidle_coupled_set_done(int cpu, struct cpuidle_coupled *coupled)
> * been processed and the poke bit has been cleared.
> *
> * Other interrupts may also be processed while interrupts are enabled, so
> - * need_resched() must be tested after turning interrupts off again to make sure
> + * need_resched() must be tested after this function returns to make sure
> * the interrupt didn't schedule work that should take the cpu out of idle.
> *
> - * Returns 0 if need_resched was false, -EINTR if need_resched was true.
> + * Returns 0 if no poke was pending, 1 if a poke was cleared.
> */
> static int cpuidle_coupled_clear_pokes(int cpu)
> {
> + if (!cpumask_test_cpu(cpu, &cpuidle_coupled_poke_pending))
> + return 0;
> +
> local_irq_enable();
> while (cpumask_test_cpu(cpu, &cpuidle_coupled_poke_pending))
> cpu_relax();
> local_irq_disable();
>
> - return need_resched() ? -EINTR : 0;
> + return 1;
> }
>
> static bool cpuidle_coupled_any_pokes_pending(struct cpuidle_coupled *coupled)
> @@ -464,7 +467,8 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> return -EINVAL;
>
> while (coupled->prevent) {
> - if (cpuidle_coupled_clear_pokes(dev->cpu)) {
> + cpuidle_coupled_clear_pokes(dev->cpu);
> + if (need_resched()) {
> local_irq_enable();
> return entered_state;
> }
> @@ -503,7 +507,10 @@ retry:
> */
> while (!cpuidle_coupled_cpus_waiting(coupled) ||
> !cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked)) {
> - if (cpuidle_coupled_clear_pokes(dev->cpu)) {
> + if (cpuidle_coupled_clear_pokes(dev->cpu))
> + continue;
> +
> + if (need_resched()) {
> cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
> goto out;
> }
> @@ -518,7 +525,8 @@ retry:
> local_irq_disable();
> }
>
> - if (cpuidle_coupled_clear_pokes(dev->cpu)) {
> + cpuidle_coupled_clear_pokes(dev->cpu);
> + if (need_resched()) {
> cpuidle_coupled_set_not_waiting(dev->cpu, coupled);
> goto out;
> }
> --
> 1.8.3

I think this patch should also work.
Thanks.

Reviewed-by: Neil Zhang <[email protected]>

Best Regards,
Neil Zhang

2013-08-28 21:17:51

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Friday, August 23, 2013 12:45:10 PM Colin Cross wrote:
> Calling cpuidle_enter_state is expected to return with interrupts
> enabled, but interrupts must be disabled before starting the
> ready loop synchronization stage. Call local_irq_disable after
> each call to cpuidle_enter_state for the safe state.
>
> CC: [email protected]
> Signed-off-by: Colin Cross <[email protected]>

I've queued up all three for 3.12, but I wonder what stable versions they
should be included into? All of them or just a subset?

Rafael


> ---
> drivers/cpuidle/coupled.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> index 2a297f8..db92bcb 100644
> --- a/drivers/cpuidle/coupled.c
> +++ b/drivers/cpuidle/coupled.c
> @@ -460,6 +460,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> }
> entered_state = cpuidle_enter_state(dev, drv,
> dev->safe_state_index);
> + local_irq_disable();
> }
>
> /* Read barrier ensures online_count is read after prevent is cleared */
> @@ -485,6 +486,7 @@ retry:
>
> entered_state = cpuidle_enter_state(dev, drv,
> dev->safe_state_index);
> + local_irq_disable();
> }
>
> if (cpuidle_coupled_clear_pokes(dev->cpu)) {
>
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.

2013-08-28 22:01:02

by Colin Cross

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Wed, Aug 28, 2013 at 2:28 PM, Rafael J. Wysocki <[email protected]> wrote:
> On Friday, August 23, 2013 12:45:10 PM Colin Cross wrote:
>> Calling cpuidle_enter_state is expected to return with interrupts
>> enabled, but interrupts must be disabled before starting the
>> ready loop synchronization stage. Call local_irq_disable after
>> each call to cpuidle_enter_state for the safe state.
>>
>> CC: [email protected]
>> Signed-off-by: Colin Cross <[email protected]>
>
> I've queued up all three for 3.12, but I wonder what stable versions they
> should be included into? All of them or just a subset?

The patches apply cleanly back to v3.6.

Joseph Lo had 2 minor comments on patch 2 (changing Tegra3 to Tegra20
in the commit message, replacing cpuidle_coupled_poke_pending with
cpuidle_coupled_poked in the comment above cpuidle_coupled_poked), do
you want to fix those up locally or should I resend the series?

2013-08-29 00:40:09

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Wednesday, August 28, 2013 03:00:58 PM Colin Cross wrote:
> On Wed, Aug 28, 2013 at 2:28 PM, Rafael J. Wysocki <[email protected]> wrote:
> > On Friday, August 23, 2013 12:45:10 PM Colin Cross wrote:
> >> Calling cpuidle_enter_state is expected to return with interrupts
> >> enabled, but interrupts must be disabled before starting the
> >> ready loop synchronization stage. Call local_irq_disable after
> >> each call to cpuidle_enter_state for the safe state.
> >>
> >> CC: [email protected]
> >> Signed-off-by: Colin Cross <[email protected]>
> >
> > I've queued up all three for 3.12, but I wonder what stable versions they
> > should be included into? All of them or just a subset?
>
> The patches apply cleanly back to v3.6.
>
> Joseph Lo had 2 minor comments on patch 2 (changing Tegra3 to Tegra20
> in the commit message, replacing cpuidle_coupled_poke_pending with
> cpuidle_coupled_poked in the comment above cpuidle_coupled_poked), do
> you want to fix those up locally or should I resend the series?

I'd prefer it to be resent, then, but just the patch(es) that changed.

Thanks,
Rafael

2013-08-29 06:50:37

by Prashant Gaikwad

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Saturday 24 August 2013 01:15 AM, Colin Cross wrote:
> Calling cpuidle_enter_state is expected to return with interrupts
> enabled, but interrupts must be disabled before starting the
> ready loop synchronization stage. Call local_irq_disable after
> each call to cpuidle_enter_state for the safe state.
>
> CC: [email protected]
> Signed-off-by: Colin Cross <[email protected]>
> ---
> drivers/cpuidle/coupled.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> index 2a297f8..db92bcb 100644
> --- a/drivers/cpuidle/coupled.c
> +++ b/drivers/cpuidle/coupled.c
> @@ -460,6 +460,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> }
> entered_state = cpuidle_enter_state(dev, drv,
> dev->safe_state_index);
> + local_irq_disable();

Colin,

There is still a window where irqs remain enabled after exiting the safe
state, which may introduce corner cases.
Instead of this, could we pass a parameter to cpuidle_enter_state
indicating whether irqs should be re-enabled after exiting the idle state?
It would be false when entering the safe state from coupled idle.

> }
>
> /* Read barrier ensures online_count is read after prevent is cleared */
> @@ -485,6 +486,7 @@ retry:
>
> entered_state = cpuidle_enter_state(dev, drv,
> dev->safe_state_index);
> + local_irq_disable();
> }
>
> if (cpuidle_coupled_clear_pokes(dev->cpu)) {
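
A minimal sketch of the alternative Prashant is suggesting, purely for illustration; cpuidle_enter_state takes no such parameter in this series, and the body below is simplified:

	/* hypothetical signature: let coupled idle keep irqs off after the safe state */
	int cpuidle_enter_state(struct cpuidle_device *dev,
				struct cpuidle_driver *drv, int index,
				bool reenable_irqs)
	{
		int entered_state;

		entered_state = drv->states[index].enter(dev, drv, index);

		/* coupled idle would pass reenable_irqs = false for the safe state */
		if (reenable_irqs)
			local_irq_enable();

		return entered_state;
	}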

2013-08-29 07:12:21

by Colin Cross

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Wed, Aug 28, 2013 at 11:50 PM, Prashant Gaikwad <[email protected]> wrote:
> On Saturday 24 August 2013 01:15 AM, Colin Cross wrote:
>>
>> Calling cpuidle_enter_state is expected to return with interrupts
>> enabled, but interrupts must be disabled before starting the
>> ready loop synchronization stage. Call local_irq_disable after
>> each call to cpuidle_enter_state for the safe state.
>>
>> CC: [email protected]
>> Signed-off-by: Colin Cross <[email protected]>
>> ---
>> drivers/cpuidle/coupled.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
>> index 2a297f8..db92bcb 100644
>> --- a/drivers/cpuidle/coupled.c
>> +++ b/drivers/cpuidle/coupled.c
>> @@ -460,6 +460,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
>> }
>> entered_state = cpuidle_enter_state(dev, drv,
>> dev->safe_state_index);
>> + local_irq_disable();
>
>
> Colin,
>
> There is still a window where irqs remain enabled after exiting the safe
> state, which may introduce corner cases.
> Instead of this, could we pass a parameter to cpuidle_enter_state indicating
> whether irqs should be re-enabled after exiting the idle state? It would be
> false when entering the safe state from coupled idle.

It's fine for irqs to be enabled when exiting the safe state; in fact,
on further inspection this patch isn't strictly necessary at all - we
always enable interrupts inside cpuidle_coupled_clear_pokes soon after
cpuidle_enter_state returns, and then disable them again. It's
probably better to disable interrupts right after cpuidle_enter_state;
it makes sure interrupts are consistently disabled when calling
cpuidle_coupled_set_waiting, cpuidle_coupled_cpus_waiting and
cpuidle_coupled_clear_pokes, although that doesn't matter in the
current implementation.

Rafael, feel free to drop the stable annotation from this patch.

2013-08-29 19:58:21

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH 1/3] cpuidle: coupled: disable interrupts after entering safe state

On Thursday, August 29, 2013 12:12:17 AM Colin Cross wrote:
> On Wed, Aug 28, 2013 at 11:50 PM, Prashant Gaikwad <[email protected]> wrote:
> > On Saturday 24 August 2013 01:15 AM, Colin Cross wrote:
> >>
> >> Calling cpuidle_enter_state is expected to return with interrupts
> >> enabled, but interrupts must be disabled before starting the
> >> ready loop synchronization stage. Call local_irq_disable after
> >> each call to cpuidle_enter_state for the safe state.
> >>
> >> CC: [email protected]
> >> Signed-off-by: Colin Cross <[email protected]>
> >> ---
> >> drivers/cpuidle/coupled.c | 2 ++
> >> 1 file changed, 2 insertions(+)
> >>
> >> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> >> index 2a297f8..db92bcb 100644
> >> --- a/drivers/cpuidle/coupled.c
> >> +++ b/drivers/cpuidle/coupled.c
> >> @@ -460,6 +460,7 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
> >> }
> >> entered_state = cpuidle_enter_state(dev, drv,
> >> dev->safe_state_index);
> >> + local_irq_disable();
> >
> >
> > Colin,
> >
> > There is still a window where irqs remain enabled after exiting the safe
> > state, which may introduce corner cases.
> > Instead of this, could we pass a parameter to cpuidle_enter_state indicating
> > whether irqs should be re-enabled after exiting the idle state? It would be
> > false when entering the safe state from coupled idle.
>
> It's fine for irqs to be enabled when exiting the safe state; in fact,
> on further inspection this patch isn't strictly necessary at all - we
> always enable interrupts inside cpuidle_coupled_clear_pokes soon after
> cpuidle_enter_state returns, and then disable them again. It's
> probably better to disable interrupts right after cpuidle_enter_state;
> it makes sure interrupts are consistently disabled when calling
> cpuidle_coupled_set_waiting, cpuidle_coupled_cpus_waiting and
> cpuidle_coupled_clear_pokes, although that doesn't matter in the
> current implementation.
>
> Rafael, feel free to drop the stable annotation from this patch.

I will, thanks!

--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.