Subject: Re: [PATCH] kthread: Atomically set completion and perform dequeue in __kthread_parkme
From: Vikram Mulukutla
To: rusty@rustcorp.com.au, tj@kernel.org, tglx@linutronix.de, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Date: Mon, 26 Jun 2017 16:03:27 -0700
Message-ID: <318fac36-66cd-7f90-df61-44042119ee2e@codeaurora.org>
In-Reply-To: <1498515483-12743-1-git-send-email-markivx@codeaurora.org>
References: <1498515483-12743-1-git-send-email-markivx@codeaurora.org>

Correcting Thomas Gleixner's email address: s/linuxtronix/linutronix

On 6/26/2017 3:18 PM, Vikram Mulukutla wrote:
> kthread_park waits for the target kthread to park itself with
> __kthread_parkme using a completion variable. __kthread_parkme - which is
> invoked by the target kthread - sets the completion variable before
> calling schedule() to voluntarily get itself off of the runqueue.
>
> This causes an interesting race in the hotplug path.
> takedown_cpu() invoked for CPU_X attempts to park the cpuhp/X hotplug
> kthread before running the stopper thread on CPU_X. kthread_park doesn't
> guarantee that cpuhp/X is off of X's runqueue, only that the thread has
> executed __kthread_parkme and set the completion. cpuhp/X may have been
> preempted out before calling schedule() to voluntarily sleep.
> takedown_cpu proceeds to run the stopper thread on CPU_X, which promptly
> migrates the still-on-rq cpuhp/X thread off to another CPU, CPU_Y,
> setting its affinity mask to something other than CPU_X alone.
>
> This is OK - cpuhp/X may finally get itself off of CPU_Y's runqueue at
> some later point. But if that doesn't happen (for example, if there's
> an RT thread on CPU_Y), the kthread_unpark in a subsequent cpu_up call
> for CPU_X will race with the still-on-rq condition. Even now we're
> functionally OK because there is a wait_task_inactive in
> kthread_unpark(), BUT the following happens:
>
> [ 12.472745] BUG: scheduling while atomic: swapper/7/0/0x00000002
> [ 12.472749] Modules linked in:
> [ 12.472756] CPU: 7 PID: 0 Comm: swapper/7 Not tainted 4.9.32-perf+ #680
> [ 12.472758] Hardware name: XXXXX
> [ 12.472760] Call trace:
> [ 12.472773] [] dump_backtrace+0x0/0x198
> [ 12.472777] [] show_stack+0x14/0x1c
> [ 12.472781] [] dump_stack+0x8c/0xac
> [ 12.472786] [] __schedule_bug+0x54/0x70
> [ 12.472792] [] __schedule+0x6b4/0x928
> [ 12.472794] [] schedule+0x3c/0xa0
> [ 12.472797] [] schedule_hrtimeout_range_clock+0x80/0xec
> [ 12.472799] [] schedule_hrtimeout+0x18/0x20
> [ 12.472803] [] wait_task_inactive+0x1a0/0x1a4
> [ 12.472806] [] __kthread_bind_mask+0x20/0x7c
> [ 12.472809] [] __kthread_bind+0x28/0x30
> [ 12.472811] [] __kthread_unpark+0x5c/0x60
> [ 12.472814] [] kthread_unpark+0x24/0x2c
> [ 12.472818] [] cpuhp_online_idle+0x50/0x90
> [ 12.472822] [] cpu_startup_entry+0x3c/0x1d4
> [ 12.472824] [] secondary_start_kernel+0x164/0x1b4
>
> Since the kthread_unpark is invoked from a preemption-disabled
> context, wait_task_inactive's action of invoking schedule is invalid,
> causing the splat. Note that kthread_bind_mask is correctly attempting
> to re-set the affinity mask since cpuhp is a per-cpu smpboot thread.
>
> Instead of adding an expensive wait_task_inactive inside kthread_park()
> or trying to muck with the hotplug code, let's just ensure that the
> completion variable and the schedule happen atomically inside
> __kthread_parkme. This focuses the fix to the hotplug requirement alone,
> and removes the unnecessary migration of cpuhp/X.
>
> Signed-off-by: Vikram Mulukutla
> ---
>  kernel/kthread.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 26db528..7ad3354 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -171,9 +171,20 @@ static void __kthread_parkme(struct kthread *self)
>  {
>  	__set_current_state(TASK_PARKED);
>  	while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
> +		/*
> +		 * Why the preempt_disable?
> +		 * Hotplug needs to ensure that 'self' is off of the runqueue
> +		 * as well, before scheduling the stopper thread that will
> +		 * migrate tasks off of the runqueue that 'self' was running on.
> +		 * This avoids unnecessary migration work and also ensures that
> +		 * kthread_unpark in the cpu_up path doesn't race with
> +		 * __kthread_parkme.
> +		 */
> +		preempt_disable();
>  		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
>  			complete(&self->parked);
> -		schedule();
> +		schedule_preempt_disabled();
> +		preempt_enable();
>  		__set_current_state(TASK_PARKED);
>  	}
>  	clear_bit(KTHREAD_IS_PARKED, &self->flags);
>