Date: Thu, 23 Jan 2014 14:54:32 +0100
From: Frederic Weisbecker
To: Lei Wen
Cc: Viresh Kumar, Kevin Hilman, Lists linaro-kernel, Peter Zijlstra,
    Linux Kernel Mailing List, Steven Rostedt, Linaro Networking, Tejun Heo
Subject: Re: [QUERY]: Is using CPU hotplug right for isolating CPUs?
Message-ID: <20140123135430.GB13345@localhost.localdomain>
References: <20140120154058.GA9436@localhost.localdomain>

On Tue, Jan 21, 2014 at 10:07:58AM +0800, Lei Wen wrote:
> On Mon, Jan 20, 2014 at 11:41 PM, Frederic Weisbecker wrote:
> > On Mon, Jan 20, 2014 at 08:30:10PM +0530, Viresh Kumar wrote:
> >> On 20 January 2014 19:29, Lei Wen wrote:
> >> > Hi Viresh,
> >>
> >> Hi Lei,
> >>
> >> > I have one question regarding unbound workqueue migration in your case.
> >> > You use hotplug to migrate the unbound work to other CPUs, but its CPU
> >> > mask would still be 0xf, since it cannot be changed by cpuset.
> >> >
> >> > My question is: how do you prevent this unbound work from migrating back
> >> > to your isolated CPU? It seems to me there is no such mechanism in the
> >> > kernel. Am I understanding it wrong?
> >>
> >> These work items are normally queued back from the workqueue handler
> >> itself, and by default they are queued on the local CPU; that is the
> >> default behaviour of the workqueue subsystem. So they land on the same
> >> CPU again and again.
> >
> > But for workqueues having a global affinity, I think they can be
> > rescheduled later on the old CPUs. Although I'm not sure about that,
> > I'm Cc'ing Tejun.
>
> Agreed. Since the worker threads are allowed to run on all CPUs, nothing
> prevents the scheduler from migrating them.
>
> But here is one point: I see Viresh already set up two cpusets with
> scheduler load balancing disabled, so shouldn't that stop task migration
> between those two groups, since the sched domains changed?
>
> What is more, I also did a similar test and found that when I set up two
> such cpuset groups, like cores 0-2 in cpuset1 and core 3 in cpuset2, and
> then hot-unplug core 3, the cpuset's cpus member becomes empty even after
> I hotplug core 3 back again. Is that a bug?

Not sure, you may need to check cpuset internals.
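
For reference, the local-CPU requeue behaviour Viresh describes above could
look roughly like the sketch below. This is only an illustrative out-of-tree
module, not code from the thread: the two handlers, HOUSEKEEPING_CPU and the
module boilerplate are made up for the sketch, while INIT_DELAYED_WORK(),
schedule_delayed_work(), queue_delayed_work_on(), system_wq and
cancel_delayed_work_sync() are the real workqueue APIs involved.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define HOUSEKEEPING_CPU 0	/* assumed non-isolated CPU, for illustration */

static struct delayed_work local_requeue_work;
static struct delayed_work pinned_work;

/*
 * The pattern Viresh describes: the handler re-queues itself with the
 * default helpers, so the work is queued on whichever CPU just ran it
 * and tends to stay there.
 */
static void local_requeue_fn(struct work_struct *work)
{
	/* ... periodic housekeeping job ... */
	schedule_delayed_work(&local_requeue_work, HZ);
}

/*
 * Explicit alternative: always (re)queue on a known housekeeping CPU so
 * the work never lands on the isolated cores, whatever CPU ran it last.
 */
static void pinned_fn(struct work_struct *work)
{
	/* ... periodic housekeeping job ... */
	queue_delayed_work_on(HOUSEKEEPING_CPU, system_wq, &pinned_work, HZ);
}

static int __init wq_isolation_demo_init(void)
{
	INIT_DELAYED_WORK(&local_requeue_work, local_requeue_fn);
	INIT_DELAYED_WORK(&pinned_work, pinned_fn);

	schedule_delayed_work(&local_requeue_work, HZ);
	queue_delayed_work_on(HOUSEKEEPING_CPU, system_wq, &pinned_work, HZ);
	return 0;
}

static void __exit wq_isolation_demo_exit(void)
{
	cancel_delayed_work_sync(&local_requeue_work);
	cancel_delayed_work_sync(&pinned_work);
}

module_init(wq_isolation_demo_init);
module_exit(wq_isolation_demo_exit);
MODULE_LICENSE("GPL");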
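
Likewise, the cpuset layout Lei describes (cores 0-2 in cpuset1, core 3 in
cpuset2, load balancing between them disabled) could be set up with a small
userspace helper along these lines. The mount point /sys/fs/cgroup/cpuset and
the single memory node are assumptions about the test box, not something
stated in the thread; cpuset.cpus, cpuset.mems and cpuset.sched_load_balance
are the real cpuset control files. Whether cpuset2's cpus mask survives a
hot-unplug/replug of core 3 is exactly the question left open above.

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Write a string to a control file, bailing out on any error. */
static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fprintf(f, "%s", val) < 0 || fclose(f)) {
		perror(path);
		exit(1);
	}
}

int main(void)
{
	/* Stop load balancing at the root so the children get their own
	 * sched domains. */
	write_str("/sys/fs/cgroup/cpuset/cpuset.sched_load_balance", "0");

	/* cpuset1: the general-purpose cores 0-2 */
	mkdir("/sys/fs/cgroup/cpuset/cpuset1", 0755);
	write_str("/sys/fs/cgroup/cpuset/cpuset1/cpuset.cpus", "0-2");
	write_str("/sys/fs/cgroup/cpuset/cpuset1/cpuset.mems", "0");

	/* cpuset2: the isolated core 3, with balancing inside it off too */
	mkdir("/sys/fs/cgroup/cpuset/cpuset2", 0755);
	write_str("/sys/fs/cgroup/cpuset/cpuset2/cpuset.cpus", "3");
	write_str("/sys/fs/cgroup/cpuset/cpuset2/cpuset.mems", "0");
	write_str("/sys/fs/cgroup/cpuset/cpuset2/cpuset.sched_load_balance", "0");

	return 0;
}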