From: David Hildenbrand <dahi@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: heiko.carstens@de.ibm.com, dahi@linux.vnet.ibm.com, borntraeger@de.ibm.com, rafael.j.wysocki@intel.com, paulmck@linux.vnet.ibm.com, peterz@infradead.org, oleg@redhat.com, bp@suse.de, jkosina@suse.cz
Subject: [PATCH v3] CPU hotplug: active_writer not woken up in some cases - deadlock
Date: Tue, 9 Dec 2014 13:23:31 +0100
Message-Id: <1418127811-22629-1-git-send-email-dahi@linux.vnet.ibm.com>

Commit b2c4623dcd07 ("rcu: More on deadlock between CPU hotplug and
expedited grace periods") introduced another problem that can easily be
reproduced by starting/stopping cpus in a loop, e.g.:

  for i in `seq 5000`; do
      echo 1 > /sys/devices/system/cpu/cpu1/online
      echo 0 > /sys/devices/system/cpu/cpu1/online
  done

This results in:

  INFO: task /cpu_start_stop:1 blocked for more than 120 seconds.
  Call Trace:
  ([<00000000006a028e>] __schedule+0x406/0x91c)
   [<0000000000130f60>] cpu_hotplug_begin+0xd0/0xd4
   [<0000000000130ff6>] _cpu_up+0x3e/0x1c4
   [<0000000000131232>] cpu_up+0xb6/0xd4
   [<00000000004a5720>] device_online+0x80/0xc0
   [<00000000004a57f0>] online_store+0x90/0xb0
  ...

and a deadlock.

The problem is that if the last reference in put_online_cpus() cannot
take cpu_hotplug.lock, the puts_pending count is incremented, but a
sleeping active_writer might never be woken up and therefore never
exits the loop in cpu_hotplug_begin().
This fix wakes up the active_writer proactively. The writer already goes
back to sleep if the refcount has not yet dropped to 0, so this is safe.
To avoid potential races, we have to:

- Protect active_writer with a spin lock. While holding this lock, we
  can be sure that the writer won't vanish or change (avoiding a
  use-after-free).
- Increment the cpu_hotplug.puts_pending count before testing for an
  active_writer (otherwise a wakeup might get lost).
- Set TASK_UNINTERRUPTIBLE in cpu_hotplug_begin() before the condition
  check (otherwise a wakeup might get lost).

The deadlock can no longer be reproduced with this fix applied.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
---
 kernel/cpu.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 90a3d01..7489b7a 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -58,6 +58,7 @@ static int cpu_hotplug_disabled;
 
 static struct {
 	struct task_struct *active_writer;
+	spinlock_t awr_lock; /* protects active_writer from being changed */
 	struct mutex lock; /* Synchronizes accesses to refcount, */
 	/*
 	 * Also blocks the new readers during
@@ -72,6 +73,7 @@ static struct {
 #endif
 } cpu_hotplug = {
 	.active_writer = NULL,
+	.awr_lock = __SPIN_LOCK_UNLOCKED(cpu_hotplug.awr_lock),
 	.lock = __MUTEX_INITIALIZER(cpu_hotplug.lock),
 	.refcount = 0,
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -116,7 +118,13 @@ void put_online_cpus(void)
 	if (cpu_hotplug.active_writer == current)
 		return;
 	if (!mutex_trylock(&cpu_hotplug.lock)) {
+		/* inc before testing for active_writer to not lose wake ups */
 		atomic_inc(&cpu_hotplug.puts_pending);
+		spin_lock(&cpu_hotplug.awr_lock);
+		/* we might be the last one */
+		if (unlikely(cpu_hotplug.active_writer))
+			wake_up_process(cpu_hotplug.active_writer);
+		spin_unlock(&cpu_hotplug.awr_lock);
 		cpuhp_lock_release();
 		return;
 	}
@@ -156,20 +164,24 @@ EXPORT_SYMBOL_GPL(put_online_cpus);
  */
 void cpu_hotplug_begin(void)
 {
+	spin_lock(&cpu_hotplug.awr_lock);
 	cpu_hotplug.active_writer = current;
+	spin_unlock(&cpu_hotplug.awr_lock);
 
 	cpuhp_lock_acquire();
 	for (;;) {
 		mutex_lock(&cpu_hotplug.lock);
+		__set_current_state(TASK_UNINTERRUPTIBLE);
 		if (atomic_read(&cpu_hotplug.puts_pending)) {
 			int delta;
 
 			delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
 			cpu_hotplug.refcount -= delta;
 		}
-		if (likely(!cpu_hotplug.refcount))
+		if (likely(!cpu_hotplug.refcount)) {
+			__set_current_state(TASK_RUNNING);
 			break;
-		__set_current_state(TASK_UNINTERRUPTIBLE);
+		}
 		mutex_unlock(&cpu_hotplug.lock);
 		schedule();
 	}
@@ -177,7 +189,9 @@ void cpu_hotplug_begin(void)
 
 void cpu_hotplug_done(void)
 {
+	spin_lock(&cpu_hotplug.awr_lock);
 	cpu_hotplug.active_writer = NULL;
+	spin_unlock(&cpu_hotplug.awr_lock);
 	mutex_unlock(&cpu_hotplug.lock);
 	cpuhp_lock_release();
 }
-- 
1.8.5.5