2014-04-02 14:20:46

by Krzysztof Kozlowski

Subject: [PATCH 0/2] Backport to 3.10 stable (Fix CPU0 stall after CPU1 hotplug)

Hi,

These two patches are good candidates for backporting to stable 3.10. They fix
a CPU0 stall due to timer list corruption after hotplugging CPU1.

1. Commit: 95731ebb114c
cpufreq: Fix governor start/stop race condition
2. Commit: 3617f2ca6d0e
cpufreq: Fix timer/workqueue corruption due to double queueing

Stall:
[ 130.127262] INFO: rcu_preempt detected stalls on CPUs/tasks: { 0} (detected by 1, t=12003 jiffies, g=172, c=171, q=0)
[ 130.132285] Task dump for CPU 0:
[ 130.135496] swapper/0 R running 0 0 0 0x00001000
[ 130.141983] [<c04ae870>] (__schedule+0x3bc/0x80c) from [<c075ffec>] (cpu_idle_force_poll+0x0/0x4)

List corruption:
[ 4244.528166] ------------[ cut here ]------------
[ 4244.528507] WARNING: at lib/list_debug.c:33 __list_add+0xa8/0xbc()
[ 4244.533418] list_add corruption. prev->next should be next (c08864b0), but was (null). (prev=c1105454).
[ 4244.542938] Modules linked in:
[ 4244.546024] CPU: 0 PID: 1 Comm: sh Tainted: G W 3.10.14-03857-g20b26f7e0c59-dirty #1354
[ 4244.554956] [<c0016194>] (unwind_backtrace+0x0/0x138) from [<c001336c>] (show_stack+0x10/0x14)
[ 4244.563518] [<c001336c>] (show_stack+0x10/0x14) from [<c0025438>] (warn_slowpath_common+0x4c/0x68)
[ 4244.572452] [<c0025438>] (warn_slowpath_common+0x4c/0x68) from [<c00254e8>] (warn_slowpath_fmt+0x30/0x40)
[ 4244.582008] [<c00254e8>] (warn_slowpath_fmt+0x30/0x40) from [<c022b9b8>] (__list_add+0xa8/0xbc)
[ 4244.590696] [<c022b9b8>] (__list_add+0xa8/0xbc) from [<c0034ac8>] (internal_add_timer+0x10/0x40)
[ 4244.599462] [<c0034ac8>] (internal_add_timer+0x10/0x40) from [<c00350e4>] (add_timer_on+0x74/0x124)
[ 4244.608493] [<c00350e4>] (add_timer_on+0x74/0x124) from [<c004354c>] (mod_delayed_work_on+0x50/0x68)
[ 4244.617622] [<c004354c>] (mod_delayed_work_on+0x50/0x68) from [<c0347df8>] (gov_queue_work+0x48/0xa4)
[ 4244.626805] [<c0347df8>] (gov_queue_work+0x48/0xa4) from [<c0348518>] (cpufreq_governor_dbs+0x29c/0x668)
[ 4244.636261] [<c0348518>] (cpufreq_governor_dbs+0x29c/0x668) from [<c034438c>] (__cpufreq_governor.part.9+0x30/0xd4)
[ 4244.646689] [<c034438c>] (__cpufreq_governor.part.9+0x30/0xd4) from [<c0345474>] (__cpufreq_remove_dev.isra.12+0x130/0x484)
[ 4244.657798] [<c0345474>] (__cpufreq_remove_dev.isra.12+0x130/0x484) from [<c04cc8fc>] (cpufreq_cpu_callback+0x70/0x80)
[ 4244.668466] [<c04cc8fc>] (cpufreq_cpu_callback+0x70/0x80) from [<c004e808>] (notifier_call_chain+0x44/0x84)
[ 4244.678184] [<c004e808>] (notifier_call_chain+0x44/0x84) from [<c0028ac0>] (__cpu_notify+0x2c/0x48)
[ 4244.687200] [<c0028ac0>] (__cpu_notify+0x2c/0x48) from [<c04c9208>] (_cpu_down+0x80/0x268)
[ 4244.695436] [<c04c9208>] (_cpu_down+0x80/0x268) from [<c04c9418>] (cpu_down+0x28/0x3c)
[ 4244.703334] [<c04c9418>] (cpu_down+0x28/0x3c) from [<c04c9b44>] (store_online+0x30/0x74)
[ 4244.711426] [<c04c9b44>] (store_online+0x30/0x74) from [<c02cb2b0>] (dev_attr_store+0x18/0x24)
[ 4244.720027] [<c02cb2b0>] (dev_attr_store+0x18/0x24) from [<c016b2d4>] (sysfs_write_file+0x80/0xb4)
[ 4244.728960] [<c016b2d4>] (sysfs_write_file+0x80/0xb4) from [<c010f2d8>] (vfs_write+0xbc/0x1bc)
[ 4244.737551] [<c010f2d8>] (vfs_write+0xbc/0x1bc) from [<c010f718>] (SyS_write+0x40/0x68)
[ 4244.745541] [<c010f718>] (SyS_write+0x40/0x68) from [<c000ec00>] (ret_fast_syscall+0x0/0x3c)
[ 4244.753881] ---[ end trace 6c85e0f7596a61f2 ]---
[ 4244.758467] ------------[ cut here ]------------
[ 4244.763115] WARNING: at lib/list_debug.c:36 __list_add+0x88/0xbc()
[ 4244.769247] list_add double add: new=c1105454, prev=c1105454, next=c08864b0.
[ 4244.776254] Modules linked in:
[ 4244.779337] CPU: 0 PID: 1 Comm: sh Tainted: G W 3.10.14-03857-g20b26f7e0c59-dirty #1354
[ 4244.788247] [<c0016194>] (unwind_backtrace+0x0/0x138) from [<c001336c>] (show_stack+0x10/0x14)
[ 4244.796830] [<c001336c>] (show_stack+0x10/0x14) from [<c0025438>] (warn_slowpath_common+0x4c/0x68)
[ 4244.805766] [<c0025438>] (warn_slowpath_common+0x4c/0x68) from [<c00254e8>] (warn_slowpath_fmt+0x30/0x40)
[ 4244.815325] [<c00254e8>] (warn_slowpath_fmt+0x30/0x40) from [<c022b998>] (__list_add+0x88/0xbc)
[ 4244.824009] [<c022b998>] (__list_add+0x88/0xbc) from [<c0034ac8>] (internal_add_timer+0x10/0x40)
[ 4244.832777] [<c0034ac8>] (internal_add_timer+0x10/0x40) from [<c00350e4>] (add_timer_on+0x74/0x124)
[ 4244.841811] [<c00350e4>] (add_timer_on+0x74/0x124) from [<c004354c>] (mod_delayed_work_on+0x50/0x68)
[ 4244.850935] [<c004354c>] (mod_delayed_work_on+0x50/0x68) from [<c0347df8>] (gov_queue_work+0x48/0xa4)
[ 4244.860120] [<c0347df8>] (gov_queue_work+0x48/0xa4) from [<c0348518>] (cpufreq_governor_dbs+0x29c/0x668)
[ 4244.869577] [<c0348518>] (cpufreq_governor_dbs+0x29c/0x668) from [<c034438c>] (__cpufreq_governor.part.9+0x30/0xd4)
[ 4244.880007] [<c034438c>] (__cpufreq_governor.part.9+0x30/0xd4) from [<c0345474>] (__cpufreq_remove_dev.isra.12+0x130/0x484)
[ 4244.891112] [<c0345474>] (__cpufreq_remove_dev.isra.12+0x130/0x484) from [<c04cc8fc>] (cpufreq_cpu_callback+0x70/0x80)
[ 4244.901780] [<c04cc8fc>] (cpufreq_cpu_callback+0x70/0x80) from [<c004e808>] (notifier_call_chain+0x44/0x84)
[ 4244.911499] [<c004e808>] (notifier_call_chain+0x44/0x84) from [<c0028ac0>] (__cpu_notify+0x2c/0x48)
[ 4244.920515] [<c0028ac0>] (__cpu_notify+0x2c/0x48) from [<c04c9208>] (_cpu_down+0x80/0x268)
[ 4244.928752] [<c04c9208>] (_cpu_down+0x80/0x268) from [<c04c9418>] (cpu_down+0x28/0x3c)
[ 4244.936653] [<c04c9418>] (cpu_down+0x28/0x3c) from [<c04c9b44>] (store_online+0x30/0x74)
[ 4244.944741] [<c04c9b44>] (store_online+0x30/0x74) from [<c02cb2b0>] (dev_attr_store+0x18/0x24)
[ 4244.953341] [<c02cb2b0>] (dev_attr_store+0x18/0x24) from [<c016b2d4>] (sysfs_write_file+0x80/0xb4)
[ 4244.962274] [<c016b2d4>] (sysfs_write_file+0x80/0xb4) from [<c010f2d8>] (vfs_write+0xbc/0x1bc)
[ 4244.970866] [<c010f2d8>] (vfs_write+0xbc/0x1bc) from [<c010f718>] (SyS_write+0x40/0x68)
[ 4244.978858] [<c010f718>] (SyS_write+0x40/0x68) from [<c000ec00>] (ret_fast_syscall+0x0/0x3c)
[ 4244.987198] ---[ end trace 6c85e0f7596a61f3 ]---


Best regards,
Krzysztof


2014-04-02 14:19:52

by Krzysztof Kozlowski

Subject: [PATCH 2/2] cpufreq: Fix timer/workqueue corruption due to double queueing

From: Stephen Boyd <[email protected]>

When a CPU is hot removed we'll cancel all the delayed work items
via gov_cancel_work(). Normally this will just cancel a delayed
timer on each CPU that the policy is managing and the work won't
run, but if the work is already running, the workqueue code will
wait for the work to finish before continuing, to prevent the
work items from re-queuing themselves as they normally do. This
scheme works most of the time, except for the case where the
work function determines that it should adjust the delay for all
the other CPUs that the policy is managing. If this scenario occurs,
the canceling CPU will cancel its own work but queue up the other
CPUs' works to run. For example:

CPU0                                        CPU1
----                                        ----
cpu_down()
 ...
 __cpufreq_remove_dev()
  cpufreq_governor_dbs()
   case CPUFREQ_GOV_STOP:
    gov_cancel_work(dbs_data, policy);
     cpu0 work is canceled
      timer is canceled
     cpu1 work is canceled              <work runs>
     <waits for cpu1>                   od_dbs_timer()
                                         gov_queue_work(*, *, true);
                                          cpu0 work queued
                                          cpu1 work queued
                                          cpu2 work queued
                                          ...
     cpu1 work is canceled
     cpu2 work is canceled
     ...

At the end of the GOV_STOP case cpu0 still has a work item queued
to run, although the code expects all of the works to have been
canceled. __cpufreq_remove_dev() will then proceed to
re-initialize all the other CPUs' works except for the CPU that is
going down. The CPUFREQ_GOV_START case in cpufreq_governor_dbs()
will trample over the queued work and debugobjects will spit out
a warning:

WARNING: at lib/debugobjects.c:260 debug_print_object+0x94/0xbc()
ODEBUG: init active (active state 0) object type: timer_list hint: delayed_work_timer_fn+0x0/0x10
Modules linked in:
CPU: 0 PID: 1491 Comm: sh Tainted: G W 3.10.0 #19
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c01904cc>] (warn_slowpath_common+0x4c/0x6c)
[<c01904cc>] (warn_slowpath_common+0x4c/0x6c) from [<c019056c>] (warn_slowpath_fmt+0x2c/0x3c)
[<c019056c>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0388a7c>] (debug_print_object+0x94/0xbc)
[<c0388a7c>] (debug_print_object+0x94/0xbc) from [<c0388e34>] (__debug_object_init+0x2d0/0x340)
[<c0388e34>] (__debug_object_init+0x2d0/0x340) from [<c019e3b0>] (init_timer_key+0x14/0xb0)
[<c019e3b0>] (init_timer_key+0x14/0xb0) from [<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8)
[<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8) from [<c06325a0>] (__cpufreq_governor+0xdc/0x1a4)
[<c06325a0>] (__cpufreq_governor+0xdc/0x1a4) from [<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434)
[<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434) from [<c08989f4>] (cpufreq_cpu_callback+0x60/0x80)
[<c08989f4>] (cpufreq_cpu_callback+0x60/0x80) from [<c08a43c0>] (notifier_call_chain+0x38/0x68)
[<c08a43c0>] (notifier_call_chain+0x38/0x68) from [<c01938e0>] (__cpu_notify+0x28/0x40)
[<c01938e0>] (__cpu_notify+0x28/0x40) from [<c0892ad4>] (_cpu_down+0x7c/0x2c0)
[<c0892ad4>] (_cpu_down+0x7c/0x2c0) from [<c0892d3c>] (cpu_down+0x24/0x40)
[<c0892d3c>] (cpu_down+0x24/0x40) from [<c0893ea8>] (store_online+0x2c/0x74)
[<c0893ea8>] (store_online+0x2c/0x74) from [<c04519d8>] (dev_attr_store+0x18/0x24)
[<c04519d8>] (dev_attr_store+0x18/0x24) from [<c02a69d4>] (sysfs_write_file+0x100/0x148)
[<c02a69d4>] (sysfs_write_file+0x100/0x148) from [<c0255c18>] (vfs_write+0xcc/0x174)
[<c0255c18>] (vfs_write+0xcc/0x174) from [<c0255f70>] (SyS_write+0x38/0x64)
[<c0255f70>] (SyS_write+0x38/0x64) from [<c0106120>] (ret_fast_syscall+0x0/0x30)

Signed-off-by: Stephen Boyd <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Cc: <[email protected]>
---
drivers/cpufreq/cpufreq_governor.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 87427360c77f..bce2cd216423 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -119,6 +119,9 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 {
 	int i;
 
+	if (!policy->governor_enabled)
+		return;
+
 	if (!all_cpus) {
 		__gov_queue_work(smp_processor_id(), dbs_data, delay);
 	} else {
--
1.7.9.5

2014-04-02 14:19:50

by Krzysztof Kozlowski

Subject: [PATCH 1/2] cpufreq: Fix governor start/stop race condition

From: Xiaoguang Chen <[email protected]>

Cpufreq governors' stop and start operations should be carried out
in sequence. Otherwise, there will be unexpected behavior, like in
the example below.

Suppose there are 4 CPUs and policy->cpu=CPU0, CPU1/2/3 are linked
to CPU0. The normal sequence is:

1) Current governor is userspace. An application tries to set the
governor to ondemand. It will call __cpufreq_set_policy() in
which it will stop the userspace governor and then start the
ondemand governor.

2) Current governor is userspace. CPU3 is brought online, and the
   hotplug notifier runs on CPU0. It will call
   cpufreq_add_policy_cpu(), in which it will first stop the
   userspace governor and then start it again.

If the sequence of the above two cases interleaves, it becomes:

1) Application stops userspace governor
2) Hotplug stops userspace governor

which is a problem, because the governor shouldn't be stopped twice
in a row. What happens next is:

3) Application starts ondemand governor
4) Hotplug starts a governor

In step 4, the hotplug is supposed to start the userspace governor,
but now the governor has been changed by the application to ondemand,
so the ondemand governor is started once again, which is incorrect.

The solution is to prevent policy governors from being stopped
multiple times in a row. A governor should only be stopped once
per policy. After it has been stopped, no further governor stop
operations should be executed.

Also add a mutex to serialize governor operations.

[rjw: Changelog. And you owe me a beverage of my choice.]
Signed-off-by: Xiaoguang Chen <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Cc: <[email protected]>
---
drivers/cpufreq/cpufreq.c | 24 ++++++++++++++++++++++++
include/linux/cpufreq.h | 1 +
2 files changed, 25 insertions(+)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index f8c28607ccd6..43cf60832468 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -49,6 +49,7 @@ static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
 static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
 #endif
 static DEFINE_RWLOCK(cpufreq_driver_lock);
+static DEFINE_MUTEX(cpufreq_governor_lock);
 
 /*
  * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure
@@ -1635,6 +1636,21 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
 
 	pr_debug("__cpufreq_governor for CPU %u, event %u\n",
 		 policy->cpu, event);
+
+	mutex_lock(&cpufreq_governor_lock);
+	if ((!policy->governor_enabled && (event == CPUFREQ_GOV_STOP)) ||
+	    (policy->governor_enabled && (event == CPUFREQ_GOV_START))) {
+		mutex_unlock(&cpufreq_governor_lock);
+		return -EBUSY;
+	}
+
+	if (event == CPUFREQ_GOV_STOP)
+		policy->governor_enabled = false;
+	else if (event == CPUFREQ_GOV_START)
+		policy->governor_enabled = true;
+
+	mutex_unlock(&cpufreq_governor_lock);
+
 	ret = policy->governor->governor(policy, event);
 
 	if (!ret) {
@@ -1642,6 +1658,14 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
 			policy->governor->initialized++;
 		else if (event == CPUFREQ_GOV_POLICY_EXIT)
 			policy->governor->initialized--;
+	} else {
+		/* Restore original values */
+		mutex_lock(&cpufreq_governor_lock);
+		if (event == CPUFREQ_GOV_STOP)
+			policy->governor_enabled = true;
+		else if (event == CPUFREQ_GOV_START)
+			policy->governor_enabled = false;
+		mutex_unlock(&cpufreq_governor_lock);
 	}
 
 	/* we keep one module reference alive for
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index d93905633dc7..125719d41285 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -111,6 +111,7 @@ struct cpufreq_policy {
 	unsigned int		policy; /* see above */
 	struct cpufreq_governor	*governor; /* see below */
 	void			*governor_data;
+	bool			governor_enabled; /* governor start/stop flag */
 
 	struct work_struct	update; /* if update_policy() needs to be
 					 * called, but you're in IRQ context */
--
1.7.9.5

2014-04-03 09:42:50

by Luis Henriques

Subject: Re: [PATCH 0/2] Backport to 3.10 stable (Fix CPU0 stall after CPU1 hotplug)

On Wed, Apr 02, 2014 at 04:19:36PM +0200, Krzysztof Kozlowski wrote:
> Hi,
>
> These two patches are good candidates for backporting to stable 3.10. They fix
> a CPU0 stall due to timer list corruption after hotplugging CPU1.
>
> 1. Commit: 95731ebb114c
> cpufreq: Fix governor start/stop race condition
> 2. Commit: 3617f2ca6d0e
> cpufreq: Fix timer/workqueue corruption due to double queueing
>

Thank you Krzysztof, I'll queue this 2nd patch for the 3.11 kernel as well
(the 1st one is already there).

Cheers,
--
Luís
