Date: Thu, 12 Mar 2009 15:12:14 -0400 (EDT)
From: Christoph Lameter <cl@qirst.com>
To: David Rientjes
Cc: Andrew Morton, Pekka Enberg, Matt Mackall, Paul Menage, Randy Dunlap,
    KOSAKI Motohiro, linux-kernel@vger.kernel.org
Subject: Re: [patch -mm v2] cpusets: add memory_slab_hardwall flag

On Thu, 12 Mar 2009, David Rientjes wrote:

> Yes, jobs are running in the leaf with my above example. And it's quite
> possible that the higher level has segmented the machine for NUMA locality
> and then further divided that memory for individual jobs. When a job
> completes or is killed, the slab cache that it has allocated can be freed
> in its entirety with no partial slab fragmentation (i.e. there are no
> objects allocated from its slabs for disjoint, still running jobs). That
> cpuset may then serve another job.

Looks like we are talking about a different project here. Partial slabs
are shared between all processors with SLUB. SLAB shares the partial
slabs only among the processors on the same node.
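
A rough user-space sketch of that scoping difference follows. The struct
names only loosely mirror the 2.6.29-era kernel (kmem_cache_node in
mm/slub.c, kmem_list3 in mm/slab.c), so treat it as an illustration of
the sharing described above rather than as kernel code:

/*
 * Simplified sketch: how SLUB and SLAB scope their partial-slab lists.
 * Both keep per-NUMA-node bookkeeping, but the set of processors that
 * may refill from a given node's partial list differs.
 */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/*
 * SLUB: one partial list per node, but the slow allocation path on any
 * processor may fall back to scanning other nodes' lists, so a partial
 * slab is effectively visible machine-wide.
 */
struct slub_node_sketch {
	unsigned long nr_partial;
	struct list_head partial;	/* may be refilled from by any cpu */
};

/*
 * SLAB: also per-node lists, but a processor normally refills only from
 * the lists of the node it is running on, so sharing of partial slabs
 * stops at the node boundary.
 */
struct slab_node_sketch {
	struct list_head slabs_partial;	/* used by this node's cpus */
	struct list_head slabs_full;
	struct list_head slabs_free;
};

int main(void)
{
	printf("SLUB: per-node partial lists, shared by all cpus\n");
	printf("SLAB: per-node partial lists, used by that node's cpus\n");
	return 0;
}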