Date: Tue, 16 Oct 2012 10:28:39 +0900
Subject: Re: [Q] Default SLAB allocator
From: JoonSoo Kim
To: Eric Dumazet
Cc: David Rientjes, Andi Kleen, Ezequiel Garcia, Linux Kernel Mailing List,
    linux-mm@kvack.org, Tim Bird, celinux-dev@lists.celinuxforum.org
In-Reply-To: <1350141021.21172.14949.camel@edumazet-glaptop>
References: <1350141021.21172.14949.camel@edumazet-glaptop>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset=ISO-8859-1

Hello, Eric.

2012/10/14 Eric Dumazet :
> SLUB was really bad in the common workload you describe (allocations
> done by one cpu, freeing done by other cpus), because all kfree() hit
> the slow path and cpus contend in __slab_free() in the loop guarded by
> cmpxchg_double_slab(). SLAB has a cache for this, while SLUB directly
> hit the main "struct page" to add the freed object to freelist.

Could you elaborate more on how 'netperf RR' makes the kernel do
"allocations done by one cpu, freeing done by other cpus", please?
I don't have enough background in the network subsystem, so I'm just curious.

> I played some months ago adding a percpu associative cache to SLUB, then
> just moved on other strategy.
>
> (Idea for this per cpu cache was to build a temporary free list of
> objects to batch accesses to struct page)

Is this implemented and submitted?
If it is, could you tell me the link for the patches?

Thanks!
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/