Subject: Re: [ckrm-tech] [PATCH 4/7] UBC: syscalls (user interface)
From: Chandra Seetharaman <sekharan@us.ibm.com>
Reply-To: sekharan@us.ibm.com
To: Andrew Morton <akpm@osdl.org>
Cc: Dave Hansen, Rik van Riel, ckrm-tech@lists.sourceforge.net,
	Christoph Hellwig, Linux Kernel Mailing List, Kirill Korotaev,
	Andrey Savochkin, Alan Cox, rohitseth@google.com, hugh@veritas.com,
	Ingo Molnar, Pavel Emelianov, devel@openvz.org, Andi Kleen
In-Reply-To: <20060818094248.cdca152d.akpm@osdl.org>
Date: Fri, 18 Aug 2006 11:17:45 -0700
Organization: IBM

On Fri, 2006-08-18 at 09:42 -0700, Andrew Morton wrote:
> On Fri, 18 Aug 2006 07:45:48 -0700
> Dave Hansen wrote:
>
> > On Fri, 2006-08-18 at 12:08 +0400, Andrey Savochkin wrote:
> > >
> > > A) Have separate memory management for each container,
> > >    with a separate buddy allocator, LRU lists, and page replacement
> > >    mechanism.  That implies considerable overhead, and the main
> > >    challenge there is sharing of pages between these separate
> > >    memory managers.
> >
> > Hold on here for just a sec...
> >
> > It is quite possible to do memory management aimed at one container
> > while that container's memory still participates in the main VM.
> >
> > There is overhead here, as the LRU scanning mechanisms get less
> > efficient, but I'd rather pay a penalty at LRU scanning time than
> > divide up the VM, or coarsely start failing allocations.
>
> I have this mad idea that you can divide a 128GB machine up into 256
> fake NUMA nodes, then you use each "node" as a 512MB unit of memory
> allocation.  So that 4.5GB job would be placed within an exclusive
> cpuset which has nine "mems" (what are these called?) and voila: the
> job has a hard 4.5GB limit, no kernel changes needed.

In this model, memory and container are tightly coupled, so memory can
sit unused/wasted in one container/resource group while a different
group hits its limit too often.  To minimize this effect, a resource
controller should provide both a minimum guarantee and a maximum limit
of resources for each resource group.

> Unfortunately this is not testable because numa=fake=256 doesn't come
> even vaguely close to working.  Am trying to get that fixed.
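If/when numa=fake gets fixed, the userspace side of your scheme is just
cpuset filesystem writes.  A rough, untested sketch in C (it assumes the
cpuset fs is mounted at /dev/cpuset; the cpuset name "job", the "0-3"
cpu list, and the nine-node mems list are illustrative only, following
your 4.5GB example):

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* Write a single value to a cpuset control file, or die. */
	static void write_file(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f || fprintf(f, "%s", val) < 0 || fclose(f)) {
			perror(path);
			exit(1);
		}
	}

	int main(void)
	{
		char pid[16];

		/* Create a child cpuset for the job. */
		if (mkdir("/dev/cpuset/job", 0755) && errno != EEXIST) {
			perror("/dev/cpuset/job");
			exit(1);
		}

		/* Nine 512MB fake nodes => a hard 4.5GB memory limit. */
		write_file("/dev/cpuset/job/mems", "0-8");

		/* Only memory is being partitioned here; the cpu list is
		   whatever the job should run on (illustrative). */
		write_file("/dev/cpuset/job/cpus", "0-3");

		/* Keep sibling cpusets off these memory nodes. */
		write_file("/dev/cpuset/job/mem_exclusive", "1");

		/* Move ourselves (the job launcher) into the cpuset; the
		   job inherits the placement across fork/exec. */
		snprintf(pid, sizeof(pid), "%d", getpid());
		write_file("/dev/cpuset/job/tasks", pid);

		return 0;
	}

Note, though, that this is exactly the static coupling I mention above:
all nine nodes are committed to the job up front whether it uses them or
not, which is where min/max guarantees from a resource controller would
help.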
-- 
----------------------------------------------------------------------
    Chandra Seetharaman               | Be careful what you choose....
    - sekharan@us.ibm.com             |      .......you may get it.
----------------------------------------------------------------------