Date: Tue, 1 May 2018 12:18:45 +0200
From: Peter Zijlstra
To: "Kohli, Gaurav"
Cc: tglx@linutronix.de, mpe@ellerman.id.au, mingo@kernel.org,
    bigeasy@linutronix.de, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, Neeraj Upadhyay, Will Deacon,
    Oleg Nesterov
Subject: Re: [PATCH v1] kthread/smpboot: Serialize kthread parking against wakeup
Message-ID: <20180501101845.GE12217@hirez.programming.kicks-ass.net>
References: <1524645199-5596-1-git-send-email-gkohli@codeaurora.org>
 <20180425200917.GZ4082@hirez.programming.kicks-ass.net>
 <20180426084131.GV4129@hirez.programming.kicks-ass.net>
 <20180426085719.GW4129@hirez.programming.kicks-ass.net>
 <4d3f68f8-e599-6b27-a2e8-9e96b401d57a@codeaurora.org>
 <20180430111744.GE4082@hirez.programming.kicks-ass.net>
 <3af3365b-4e3f-e388-8e90-45a3bd4120fd@codeaurora.org>
In-Reply-To: <3af3365b-4e3f-e388-8e90-45a3bd4120fd@codeaurora.org>

On Tue, May 01, 2018 at 01:20:26PM +0530, Kohli, Gaurav wrote:
> But in our older case, where we have seen the failure, below are the
> wakeup path and the ftraces; the wakeup occurred and completed before
> the schedule() call.
>
> So the final state of CPUHP is running, not parked. I have also pasted
> the debug ftraces that we got while reproducing the issue.
>
> The wakeup path for cpuhp is:
>
>   takedown_cpu -> kthread_park -> wake_up_process
>
>   39,034,311,742,395 apps (10240) Trace Printk cpuhp/0 (16) [000]
>   39015.625000: __kthread_parkme state=512 task=ffffffcc7458e680
>   flags: 0x5 -> state 5 -> state is parked inside the parkme function
>
>   39,034,311,846,510 apps (10240) Trace Printk cpuhp/0 (16) [000]
>   39015.625000: before schedule __kthread_parkme state=0
>   task=ffffffcc7458e680 flags: 0xd -> just before the schedule() call,
>   the state is running
>
> static void __kthread_parkme(struct kthread *self)
> {
> 	__set_current_state(TASK_PARKED);
> 	while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {
> 		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
> 			complete(&self->parked);
> 		schedule();
> 		__set_current_state(TASK_PARKED);
> 	}
> 	clear_bit(KTHREAD_IS_PARKED, &self->flags);
> 	__set_current_state(TASK_RUNNING);
> }
>
> So my point is that here too, if it gets rescheduled it can set
> TASK_PARKED, but it seems that after the takedown_cpu() call this
> thread never gets a chance to run, so the final state is TASK_RUNNING.
>
> With our current fix too, can't we observe the same scenario, where
> the final state is TASK_RUNNING?

I'm not sure I understand your concern. Losing the TASK_PARKED store
with the above code is obviously bad. But with the loop as proposed I
don't see a problem.
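To be concrete, the wait-loop shape I mean is roughly the below (an
untested sketch); the point is that TASK_PARKED is (re)asserted at the
top of every iteration, before the SHOULD_PARK test, so a wakeup that
lands between complete() and schedule() only costs one extra pass
through the loop:

static void __kthread_parkme(struct kthread *self)
{
	for (;;) {
		/*
		 * Re-assert TASK_PARKED on every iteration, before
		 * testing SHOULD_PARK, so a wakeup between complete()
		 * and schedule() cannot leave us in TASK_RUNNING for
		 * good; it merely causes one extra trip around here.
		 */
		set_current_state(TASK_PARKED);
		if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))
			break;
		if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))
			complete(&self->parked);
		schedule();
	}
	clear_bit(KTHREAD_IS_PARKED, &self->flags);
	__set_current_state(TASK_RUNNING);
}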
takedown_cpu() can proceed beyond smpboot_park_threads() and kill the
CPU before any of the threads are parked -- per having the complete()
before hitting schedule(). And, afaict, that is harmless.

When we go offline, sched_cpu_dying() -> migrate_tasks() will migrate
any still-runnable threads off the CPU. But because at this point the
thread must be in the PARKED wait-loop, it will hit schedule() and go
to sleep eventually.

Also note that kthread_unpark() does __kthread_bind() to rebind the
threads.

Aaaah... I think I've spotted a problem there. We clear SHOULD_PARK
before we rebind, so if the thread lost the first PARKED store, does
the completion, gets migrated, cycles through the loop and now observes
!SHOULD_PARK and bails out of the wait-loop, then __kthread_bind() will
wait forever.

Is that what you had in mind?
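For reference, the unpark ordering I mean is roughly the below (a
simplified sketch from memory, not verbatim kthread.c):

void kthread_unpark(struct task_struct *k)
{
	struct kthread *kthread = to_kthread(k);

	/*
	 * SHOULD_PARK is cleared before the rebind below; a thread
	 * that lost its TASK_PARKED store, did the completion and got
	 * migrated off the dead CPU can observe !SHOULD_PARK here and
	 * leave the park wait-loop before the rebind happens.
	 */
	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);

	if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
		/*
		 * A per-cpu thread lost its binding while the CPU was
		 * offline; set it again before waking the thread.
		 */
		if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))
			__kthread_bind(k, kthread->cpu, TASK_PARKED);
		wake_up_state(k, TASK_PARKED);
	}
}

The rebind relies on the thread actually sitting in the wait-loop in
TASK_PARKED at this point, which is exactly what goes missing in the
scenario above.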