2015-04-10 02:48:09

by Liu, XinwuX

[permalink] [raw]
Subject: [PATCH] Fix the deadlock in uevent_buffer_pm_notify

From: "Liu, XinwuX" <[email protected]>

When suspend finishes, uevent_buffer_pm_notify() traverses buffer_list and
cleans up kobjects whose refcount has dropped to zero. Releasing the final
reference triggers kobject_uevent(), which tries to take uevent_buffer_mutex
again while the notifier already holds it, causing a deadlock.

Workqueue: auto_suspend try_to_suspend
Call Trace:
[<8086a9ad>] ? preempt_count_sub+0xad/0x100
[<8087c237>] ? __wake_up_common+0x47/0x70
[<8086a9ad>] ? preempt_count_sub+0xad/0x100
[<80ee5ea3>] schedule+0x23/0x60
[<80ee6147>] schedule_preempt_disabled+0x17/0x30
[<80ee876b>] __mutex_lock_slowpath+0x10b/0x370
[<80ee89e0>] mutex_lock+0x10/0x1c
[<80ab5813>] kobject_uevent_env+0x2d3/0x4a0
[<80ab59ea>] kobject_uevent+0xa/0x10
[<80ab4339>] kobject_cleanup+0xb9/0x1b0
[<80ee865d>] ? mutex_unlock+0xd/0x10
[<80ab53c4>] ? kobject_deliver_uevent+0x174/0x220
[<80ab5110>] ? uevent_net_exit+0x70/0x70
[<80ab41a5>] kobject_put+0x25/0x60
[<80ab550e>] ? uevent_buffer_pm_notify+0x9e/0xd0
[<80ac78bb>] ? list_del+0xb/0x20
[<80ab54ef>] uevent_buffer_pm_notify+0x7f/0xd0
[<80861597>] notifier_call_chain+0x47/0x90
[<80861713>] __blocking_notifier_call_chain+0x43/0x70
[<8086175f>] blocking_notifier_call_chain+0x1f/0x30
[<808865d6>] pm_notifier_call_chain+0x16/0x30
[<8088730d>] pm_suspend+0x1ad/0x220
[<808879c5>] try_to_suspend+0xa5/0xc0
[<80ee9b5d>] ? _raw_spin_unlock_irq+0x1d/0x40
[<80856893>] process_one_work+0x113/0x3e0
[<8084b49d>] ? mod_timer+0xed/0x200
[<8086aa55>] ? preempt_count_add+0x55/0xa0
[<80857a37>] worker_thread+0xf7/0x320
[<80857940>] ? manage_workers.isra.24+0x290/0x290
[<8085d21b>] kthread+0x9b/0xb0
[<80eea9f7>] ret_from_kernel_thread+0x1b/0x28
[<8085d180>] ? flush_kthread_worker+0xb0/0xb0

Change-Id: Iaf62ccd7a729f3eff1b5565f4b7b0d7eb01bc6a6
Signed-off-by: Liu, XinwuX <[email protected]>
---
lib/kobject_uevent.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
index 40d97a9..abebf46 100644
--- a/lib/kobject_uevent.c
+++ b/lib/kobject_uevent.c
@@ -507,7 +507,9 @@ int uevent_buffer_pm_notify(struct notifier_block *nb,
 			kobject_deliver_uevent(ub->kobj, ub->env, ub->action,
 					       ub->devpath, ub->subsys);
 			list_del(&ub->buffer_list);
+			mutex_unlock(&uevent_buffer_mutex);
 			kobject_put(ub->kobj);
+			mutex_lock(&uevent_buffer_mutex);
 			kfree(ub->env);
 			kfree(ub->devpath);
 			kfree(ub->subsys);
--
1.7.9.5


2015-04-10 12:45:27

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH] Fix the deadlock in uevent_buffer_pm_notify

On Sat, Apr 11, 2015 at 10:49:17AM +0800, Liu, XinwuX wrote:
> From: "Liu, XinwuX" <[email protected]>
>
> When suspend finishes, uevent_buffer_pm_notify() traverses buffer_list and
> cleans up kobjects whose refcount has dropped to zero. Releasing the final
> reference triggers kobject_uevent(), which tries to take uevent_buffer_mutex
> again while the notifier already holds it, causing a deadlock.
>
> Change-Id: Iaf62ccd7a729f3eff1b5565f4b7b0d7eb01bc6a6

I can't do anything with a patch that has a line like this in it, except
delete it from my inbox.

Please fix and resend.