Date: Tue, 05 Oct 2004 12:34:46 -0700
From: "Martin J. Bligh"
To: Simon Derr
Cc: Paul Jackson, pwil3058@bigpond.net.au, frankeh@watson.ibm.com,
    dipankar@in.ibm.com, akpm@osdl.org, ckrm-tech@lists.sourceforge.net,
    efocht@hpce.nec.com, lse-tech@lists.sourceforge.net, hch@infradead.org,
    steiner@sgi.com, jbarnes@sgi.com, sylvain.jeaugey@bull.net, djh@sgi.com,
    linux-kernel@vger.kernel.org, colpatch@us.ibm.com, ak@suse.de,
    sivanich@sgi.com
Subject: Re: [Lse-tech] [PATCH] cpusets - big numa cpu and memory placement
Message-ID: <58780000.1097004886@flay>
In-Reply-To:
References: <20040805100901.3740.99823.84118@sam.engr.sgi.com>
    <20040805190500.3c8fb361.pj@sgi.com> <247790000.1091762644@[10.10.2.4]>
    <200408061730.06175.efocht@hpce.nec.com> <20040806231013.2b6c44df.pj@sgi.com>
    <411685D6.5040405@watson.ibm.com> <20041001164118.45b75e17.akpm@osdl.org>
    <20041001230644.39b551af.pj@sgi.com> <20041002145521.GA8868@in.ibm.com>
    <415ED3E3.6050008@watson.ibm.com> <415F37F9.6060002@bigpond.net.au>
    <821020000.1096814205@[10.10.2.4]> <20041003083936.7c844ec3.pj@sgi.com>
    <834330000.1096847619@[10.10.2.4]> <835810000.1096848156@[10.10.2.4]>
    <20041003175309.6b02b5c6.pj@sgi.com> <838090000.1096862199@[10.10.2.4]>
    <20041003212452.1a15a49a.pj@sgi.com> <843670000.1096902220@[10.10.2.4]>

> The idea was to have a system, and run all jobs on it through a batch
> scheduler. Some jobs cared about performance, some didn't.
>
> The ones who cared about performance got an 'exclusive' cpuset; the ones
> who didn't got a 'non-exclusive' cpuset.

OK, makes sense. Thanks for that.

> Of course, in our case, a valid argument is that 'exclusiveness' should
> not be enforced by the kernel but rather by the job scheduler.

Probably.

> But now I see that the discussion is going towards:
>  - fully exclusive cpusets, maybe even with no interrupt handling
>  - maybe only allowing exclusive cpusets, since non-exclusive cpusets are
>    tricky wrt CKRM.

Nope - personally I see us headed more toward exclusive cpusets, handling
the non-exclusive stuff via a more CKRM-style mechanism. I still think that
achieves what you need, though perhaps not in exactly the fashion you
envisioned.

M.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
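
As a rough sketch of the interface under discussion: assuming the cpuset
filesystem is mounted at /dev/cpuset and exposes the cpus, mems,
cpu_exclusive and tasks control files as in the posted patches (the cpuset
name and the CPU/node numbers below are made up for illustration), a batch
scheduler could carve out an exclusive cpuset for a performance-sensitive
job roughly like this:

/*
 * Illustrative sketch only; paths, names and numbers are assumptions.
 * Creates an exclusive cpuset and moves the launcher task into it, so
 * the job it execs inherits the placement.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_file(const char *path, const char *buf)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(buf, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}

int main(void)
{
	char pid[32];

	/* Create a child cpuset for the performance-sensitive job. */
	if (mkdir("/dev/cpuset/hpcjob", 0755) && errno != EEXIST) {
		perror("mkdir /dev/cpuset/hpcjob");
		return 1;
	}

	/* Give it CPUs 4-7 and memory node 1, and mark the CPUs exclusive. */
	write_file("/dev/cpuset/hpcjob/cpus", "4-7");
	write_file("/dev/cpuset/hpcjob/mems", "1");
	write_file("/dev/cpuset/hpcjob/cpu_exclusive", "1");

	/* Move the current task (the job launcher) into the new cpuset. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/dev/cpuset/hpcjob/tasks", pid);

	/* exec the job here; it and its children stay in the cpuset. */
	return 0;
}

The non-exclusive case is exactly what the CKRM-style mechanism mentioned
above would cover instead, so no equivalent sketch is given for it here.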