Date: Thu, 10 Jul 2008 11:24:01 -0500
From: Christoph Lameter
To: "H. Peter Anvin"
Cc: Jeremy Fitzhardinge, "Eric W. Biederman", Ingo Molnar, Mike Travis,
    Andrew Morton, Jack Steiner, linux-kernel@vger.kernel.org,
    Arjan van de Ven
Subject: Re: [RFC 00/15] x86_64: Optimize percpu accesses
Message-ID: <487637A1.4080403@linux-foundation.org>
In-Reply-To: <48762DD2.5090802@zytor.com>

H. Peter Anvin wrote:
> but there is a distinct lack of wiggle room, which can be resolved
> either by using negative offsets, or by moving the kernel text area up a
> bit from -2 GB.

Let's say we reserve 256MB of cpu alloc space per processor. On a system
with 4k processors this requires 1TB of virtual address space for the per
cpu areas (note that there may be even more processors in the future).

Preferably we would calculate the address of a per cpu area as
PERCPU_START_ADDRESS + PERCPU_SIZE * smp_processor_id() instead of looking
it up in a table, because that saves a memory access on every per_cpu()
operation. The first per cpu area would ideally be the per cpu segment
generated by the linker.

How would that fit into the address map? In particular, the constraint
that the first per cpu area lie within 2GB of the kernel code must not be
violated unless we go to a zero based approach. Maybe there is another way
of arranging things that would allow for this?
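To make the sizing arithmetic and the proposed calculation concrete, here
is a minimal userspace sketch. PERCPU_START_ADDRESS, PERCPU_SIZE and
NR_CPUS below are illustrative assumptions only, not values from the patch
set; the table it would replace is the existing __per_cpu_offset[] lookup.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical values for illustration, not from the patch set: */
    #define PERCPU_START_ADDRESS 0xffffc10000000000ULL /* assumed base */
    #define PERCPU_SIZE          (256ULL << 20)  /* 256MB per processor */
    #define NR_CPUS              4096            /* "4k processors" */

    /*
     * Address of cpu N's area by pure arithmetic. No table lookup is
     * needed, which is what saves a memory access versus indexing
     * __per_cpu_offset[cpu] on every per_cpu() operation.
     */
    static uint64_t percpu_area(unsigned int cpu)
    {
            return PERCPU_START_ADDRESS + (uint64_t)cpu * PERCPU_SIZE;
    }

    int main(void)
    {
            uint64_t total = (uint64_t)NR_CPUS * PERCPU_SIZE;

            /* 4096 * 256MB = 1TB of virtual address space */
            printf("total percpu VA space: %llu GB\n",
                   (unsigned long long)(total >> 30));
            printf("cpu 0 area:    %#llx\n",
                   (unsigned long long)percpu_area(0));
            printf("cpu 4095 area: %#llx\n",
                   (unsigned long long)percpu_area(NR_CPUS - 1));
            return 0;
    }

Note that with a fixed power-of-two PERCPU_SIZE the multiply reduces to a
shift, so the address computation needs no memory access at all.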