Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock
From: Steven Rostedt
To: paulmck@linux.vnet.ibm.com
Cc: Waiman Long, linux-kernel@vger.kernel.org, mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de, peterz@infradead.org, Valdis.Kletnieks@vt.edu, dhowells@redhat.com, edumazet@google.com, darren@dvhart.com, fweisbec@gmail.com, sbw@mit.edu, torvalds@linux-foundation.org
Date: Tue, 11 Jun 2013 13:01:55 -0400
Message-ID: <1370970115.9844.189.camel@gandalf.local.home>
In-Reply-To: <20130611163607.GG5146@linux.vnet.ibm.com>
References: <20130609193657.GA13392@linux.vnet.ibm.com> <51B748DA.2070306@hp.com> <20130611163607.GG5146@linux.vnet.ibm.com>

On Tue, 2013-06-11 at 09:36 -0700, Paul E. McKenney wrote:
> > I am a bit concerned about the size of the head queue table itself.
> > RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096, which means
> > a table size of 256. Maybe it would be better to dynamically allocate
> > the table at init time depending on the actual number of CPUs in the
> > system.
>
> But if your kernel is built for 4096 CPUs, the 32*256=8192 bytes of memory
> is way down in the noise. Systems that care about that small an amount
> of memory probably have a small enough number of CPUs that they can just
> turn off queueing at build time using CONFIG_TICKET_LOCK_QUEUED=n, right?

If this turns out to work for large machines, that means that distros
will enable it, and distros tend to bump up NR_CPUS, which is defined at
compile time and applies regardless of whether you are running with 2
CPUs or 1000 CPUs.

For now it's fine to use NR_CPUS, but I always try to avoid it. Working
in the ARM and POWER environment, you are used to lots of kernels
compiled specifically for the target. But in the x86 world, it is
basically one kernel for all environments, and there NR_CPUS does make
a big difference.

-- Steve
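
For illustration, here is a minimal sketch of what the dynamic allocation
Waiman suggests could look like. None of it is taken from the RFC patch:
the names (tkt_q_head, tkt_q_heads, tkt_q_nheads, tkt_q_init) are made-up
stand-ins, and the "one queue head per 16 CPUs" ratio is only inferred from
the 4096-CPU / 256-entry arithmetic quoted above.

/*
 * Illustrative sketch only, not from the RFC patch: size the queue-head
 * table from the CPUs that can actually exist on this boot, instead of
 * using a compile-time NR_CPUS-sized static array.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/cache.h>
#include <linux/cpumask.h>

struct tkt_q_head {
	int placeholder;	/* real per-queue state would live here */
} ____cacheline_aligned;

static struct tkt_q_head *tkt_q_heads;	/* was: a static array sized by NR_CPUS */
static unsigned int tkt_q_nheads;

static int __init tkt_q_init(void)
{
	/* num_possible_cpus() reflects this machine, not the distro's NR_CPUS. */
	tkt_q_nheads = max(num_possible_cpus() / 16, 1U);
	tkt_q_heads = kcalloc(tkt_q_nheads, sizeof(*tkt_q_heads), GFP_KERNEL);
	return tkt_q_heads ? 0 : -ENOMEM;
}
early_initcall(tkt_q_init);

On a 2-CPU desktop booting a distro kernel built with NR_CPUS=4096, this
would allocate a single entry rather than 256, which is exactly the case
Steve describes where the compile-time constant overshoots the hardware.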