Date: Tue, 5 Oct 2004 19:47:02 -0700
From: Paul Jackson <pj@sgi.com>
To: colpatch@us.ibm.com
Cc: mbligh@aracnet.com, pwil3058@bigpond.net.au, frankeh@watson.ibm.com,
    dipankar@in.ibm.com, akpm@osdl.org, ckrm-tech@lists.sourceforge.net,
    efocht@hpce.nec.com, lse-tech@lists.sourceforge.net, hch@infradead.org,
    steiner@sgi.com, jbarnes@sgi.com, sylvain.jeaugey@bull.net, djh@sgi.com,
    linux-kernel@vger.kernel.org, Simon.Derr@bull.net, ak@suse.de,
    sivanich@sgi.com
Subject: Re: [Lse-tech] [PATCH] cpusets - big numa cpu and memory placement
Message-Id: <20041005194702.0644070b.pj@sgi.com>
In-Reply-To: <1097014749.4065.48.camel@arrakis>
Organization: SGI
X-Mailer: Sylpheed version 0.9.12 (GTK+ 1.2.10; i686-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Matthew wrote:
> By adding locking and reference counting, and simplifying the way
> in which sched_domains are created, linked, unlinked and eventually
> destroyed, we can use sched_domains as the implementation of cpusets.

I'd be inclined to turn this sideways from what you say.  Rather than
building cpusets on top of sched_domains, add another couple of
properties to cpusets:

 1) An isolated flag, that guarantees whatever isolation properties we
    agree that schedulers, allocators and resource managers require
    between domains, and

 2) For those cpusets which are so isolated, the option to add links of
    some form between that cpuset and distinct scheduler, allocator
    and/or resource domains.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson 1.650.933.1373