From: Christoph Lameter
To: Andi Kleen
Cc: Dave Chinner, Pekka Enberg, Rik van Riel, akpm@linux-foundation.org,
	Miklos Szeredi, Nick Piggin, Hugh Dickins, linux-kernel@vger.kernel.org
Subject: slub: Sort slab cache list and establish maximum objects for defrag slabs
Date: Fri, 29 Jan 2010 14:49:35 -0600
Message-Id: <20100129205000.248965617@quilx.com>
References: <20100129204931.789743493@quilx.com>

When defragmenting slabs it is advantageous to have all defragmentable
slabs together at the beginning of the slab cache list, so that there is
no need to scan the complete list. Put defragmentable caches first when
adding a slab cache, and all others last.

Also determine the maximum number of objects in defragmentable slabs.
This allows the arrays holding references to these objects to be sized
when they are allocated later.
Reviewed-by: Rik van Riel
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
Signed-off-by: Christoph Lameter

---
 mm/slub.c |   26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2010-01-29 10:27:17.000000000 -0600
+++ linux-2.6/mm/slub.c	2010-01-29 10:27:21.000000000 -0600
@@ -189,6 +189,9 @@ static enum {
 static DECLARE_RWSEM(slub_lock);
 static LIST_HEAD(slab_caches);
 
+/* Maximum objects in defragmentable slabs */
+static unsigned int max_defrag_slab_objects;
+
 /*
  * Tracking user of a slab.
  */
@@ -2707,7 +2710,7 @@ static struct kmem_cache *create_kmalloc
 			flags, NULL))
 		goto panic;
 
-	list_add(&s->list, &slab_caches);
+	list_add_tail(&s->list, &slab_caches);
 
 	if (sysfs_slab_add(s))
 		goto panic;
@@ -2976,9 +2979,23 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
+/*
+ * Allocate a slab scratch space that is sufficient to keep at least
+ * max_defrag_slab_objects pointers to individual objects and also a bitmap
+ * for max_defrag_slab_objects.
+ */
+static inline void *alloc_scratch(void)
+{
+	return kmalloc(max_defrag_slab_objects * sizeof(void *) +
+		BITS_TO_LONGS(max_defrag_slab_objects) * sizeof(unsigned long),
+		GFP_KERNEL);
+}
+
 void kmem_cache_setup_defrag(struct kmem_cache *s,
 	kmem_defrag_get_func get, kmem_defrag_kick_func kick)
 {
+	int max_objects = oo_objects(s->max);
+
 	/*
 	 * Defragmentable slabs must have a ctor otherwise objects may be
 	 * in an undetermined state after they are allocated.
@@ -2986,6 +3003,11 @@ void kmem_cache_setup_defrag(struct kmem
 	BUG_ON(!s->ctor);
 	s->get = get;
 	s->kick = kick;
+	down_write(&slub_lock);
+	list_move(&s->list, &slab_caches);
+	if (max_objects > max_defrag_slab_objects)
+		max_defrag_slab_objects = max_objects;
+	up_write(&slub_lock);
 }
 EXPORT_SYMBOL(kmem_cache_setup_defrag);
@@ -3397,7 +3419,7 @@ struct kmem_cache *kmem_cache_create(con
 	if (s) {
 		if (kmem_cache_open(s, GFP_KERNEL, name,
 				size, align, flags, ctor)) {
-			list_add(&s->list, &slab_caches);
+			list_add_tail(&s->list, &slab_caches);
 			up_write(&slub_lock);
 			if (sysfs_slab_add(s)) {
 				down_write(&slub_lock);

-- 