Date: Mon, 16 Mar 2009 15:17:10 -0700 (PDT)
From: David Rientjes <rientjes@google.com>
To: Christoph Lameter
Cc: Andrew Morton, Pekka Enberg, Matt Mackall, Paul Menage,
    Randy Dunlap, KOSAKI Motohiro, linux-kernel@vger.kernel.org
Subject: Re: [patch -mm v2] cpusets: add memory_slab_hardwall flag
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)

On Mon, 16 Mar 2009, Christoph Lameter wrote:

> If the nodes are exclusive to a load then the cpus attached to those
> nodes are also exclusive?

No, they are not exclusive. Here is my example (for the third time):
suppose mems are grouped by the cpus for which they have affinity:

	/dev/cpuset
	--> cpuset_A (cpus 0-1, mems 0-3)
	--> cpuset_B (cpus 2-3, mems 4-7)
	--> cpuset_C (cpus 4-5, mems 8-11)
	--> ...

Within that, we isolate mems for specific jobs:

	/dev/cpuset
	--> cpuset_A (cpus 0-1, mems 0-3)
		--> job_1 (mem 0)
		--> job_2 (mems 1-2)
		--> job_3 (mem 3)
	--> ...
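For concreteness, a hierarchy like the one above could be configured through the cpuset filesystem roughly as follows. This is only a sketch: it requires root, the mount point and mask values mirror the example, and the per-cpuset memory_slab_hardwall file exists only with the proposed patch applied.

```shell
# Mount the cpuset filesystem at the path used in the example
mkdir -p /dev/cpuset
mount -t cpuset none /dev/cpuset

# Top-level cpuset grouping mems by the cpus with affinity to them
mkdir /dev/cpuset/cpuset_A
echo 0-1 > /dev/cpuset/cpuset_A/cpus
echo 0-3 > /dev/cpuset/cpuset_A/mems

# A per-job child cpuset that isolates a subset of cpuset_A's mems
mkdir /dev/cpuset/cpuset_A/job_1
echo 0-1 > /dev/cpuset/cpuset_A/job_1/cpus
echo 0   > /dev/cpuset/cpuset_A/job_1/mems

# With the proposed patch, slab hardwall behavior would be enabled
# per-cpuset (file name taken from the patch title; only available
# on a kernel carrying this patch):
echo 1 > /dev/cpuset/cpuset_A/job_1/memory_slab_hardwall
```

The point of the per-cpuset flag is that only job_1 pays the hardwall cost; sibling cpusets that do not need strict slab isolation leave it disabled.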
> If so then there is no problem since the percpu queues are only in
> use for a specific load with a consistent restriction on cpusets and
> a consistent memory policy. Thus there is no need for
> memory_slab_hardwall.

All of those jobs may have different mempolicy requirements.
Specifically, some cpusets may require slab hardwall behavior for true
memory isolation or NUMA optimization while others do not. In other
words, there is _no_ way with slub to isolate slab allocations for
job_1 from job_2, job_3, etc. That is what memory_slab_hardwall
intends to address.