Subject: Re: [PATCH] drivers/base: export gpl (un)register_memory_notifier
From: Dave Hansen
To: Christoph Raisch
Cc: apw, Greg KH, Jan-Bernd Themann, linux-kernel, linuxppc-dev@ozlabs.org, netdev, ossthema@linux.vnet.ibm.com, Badari Pulavarty, Thomas Q Klein, tklein@linux.ibm.com
References: <200802111724.12416.ossthema@de.ibm.com> <1202748429.8276.21.camel@nimitz.home.sr71.net> <200802131617.58646.ossthema@de.ibm.com> <1203009163.19205.42.camel@nimitz.home.sr71.net>
Date: Fri, 15 Feb 2008 08:55:38 -0800
Message-Id: <1203094538.8142.23.camel@nimitz.home.sr71.net>

On Fri, 2008-02-15 at 14:22 +0100, Christoph Raisch wrote:
> A translation from kernel to ehea_bmap space should be fast and
> predictable (ruling out hashes).
> If a driver doesn't know anything else about the mapping structure,
> the normal solution in kernel for this type of problem is a
> multi-level lookup table like pgd->pud->pmd->pte.
> This doesn't sound right to be implemented in a device driver.
>
> We didn't see from the existing code that such a mapping to a
> contiguous space already exists.
> Maybe we've missed it.

I've been thinking about that, and I don't think you really *need* to
keep a comprehensive map like that.
When the memory is in a particular configuration (a range of memory
present along with a unique set of holes), you get a unique ehea_bmap
configuration. That layout is completely predictable.

So, if at any time you want to figure out what the ehea_bmap address
for a particular *Linux* virtual address is, you just need to pretend
that you're creating the entire ehea_bmap, use the same algorithm,
figure out how you would have placed things, and use that result.

Now, that's going to be a slow, crappy linear search (but maybe not as
slow as recreating the silly thing). So, you might eventually run into
some scalability problems with a lot of packets going around. But I'd
be curious whether you do in practice.

The other idea is that you create a mapping that is precisely 1:1 with
kernel memory. Let's say you have two sections present, 0 and 100.
You have a high_section_index of 100, and you vmalloc() an array with
high_section_index + 1 entries. You need to create a *CONTIGUOUS* ehea
map? Create one like this:

	EHEA_VADDR -> Linux Section
	         0 -> 0
	         1 -> 0
	         2 -> 0
	         3 -> 0
	       ...
	       100 -> 100

It's contiguous. Each area points to a valid Linux memory address.
It's also discernible in O(1) to what EHEA address a given Linux
address is mapped. You just have a couple of duplicate entries.

-- Dave