Date: Tue, 13 Oct 2009 18:33:06 -0700 (PDT)
From: David Rientjes
To: Christoph Lameter
Cc: Pekka Enberg, Tejun Heo, linux-kernel@vger.kernel.org,
    Mathieu Desnoyers, Mel Gorman, Zhang Yanmin
Subject: Re: [this_cpu_xx V6 7/7] this_cpu: slub aggressive use of this_cpu operations in the hotpaths

On Tue, 13 Oct 2009, Christoph Lameter wrote:

> > I ran 60-second netperf TCP_RR benchmarks with various thread counts over
> > two machines, both with four quad-core Opterons.  I ran the trials ten
> > times each, both with vanilla percpu#for-next at 9288f99 and with v6 of
> > this patchset.  The transfer rates were virtually identical, showing no
> > improvement or regression from this patchset in this benchmark.
> >
> > [ As I reported in http://marc.info/?l=linux-kernel&m=123839191416472,
> >   this benchmark continues to be the most significant regression slub has
> >   compared to slab. ]
>
> Hmmm... Last time I ran the in-kernel benchmarks this showed a reduction
> in cycle counts.  I have not gotten to rerun my tests yet.
>
> Can you also try the irqless hotpath?
>

v6 of your patchset applied to percpu#for-next, now at dec54bf ("this_cpu:
Use this_cpu_xx in trace_functions_graph.c"), works fine, but when I apply
the irqless patch from http://marc.info/?l=linux-kernel&m=125503037213262
it hangs my netserver machine within the first 60 seconds of running this
benchmark.  Both kernels include the fixes to kmem_cache_open() and
dma_kmalloc_cache() that you posted earlier.

I'll have to debug why that hang happens before collecting results.
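
[ For context, the hotpath change being discussed boils down to replacing
  the usual preempt_disable()/smp_processor_id()/per_cpu() sequence with a
  single this_cpu operation, which x86 can emit as one segment-prefixed
  instruction.  Below is a minimal, hypothetical module sketch of that
  transformation; it is not code from the patchset, and it assumes a kernel
  where the this_cpu_xx operations are available. ]

/*
 * Hypothetical demo, not from the patchset: the kind of transformation
 * the this_cpu_xx series applies to the slub hotpaths.
 */
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(unsigned long, alloc_count);

/* Old style: pin the cpu, compute the per-cpu address, then update it. */
static void count_alloc_old(void)
{
	unsigned long *p;

	preempt_disable();
	p = &per_cpu(alloc_count, smp_processor_id());
	(*p)++;
	preempt_enable();
}

/*
 * this_cpu style: a single per-cpu increment that is safe against
 * preemption on its own, so no preempt_disable()/smp_processor_id() pair.
 */
static void count_alloc_new(void)
{
	this_cpu_inc(alloc_count);
}

static int __init this_cpu_demo_init(void)
{
	int cpu;

	count_alloc_old();
	count_alloc_new();

	/* Dump each cpu's counter so the effect of both helpers is visible. */
	for_each_online_cpu(cpu)
		pr_info("cpu%d: alloc_count=%lu\n", cpu,
			per_cpu(alloc_count, cpu));
	return 0;
}

static void __exit this_cpu_demo_exit(void)
{
}

module_init(this_cpu_demo_init);
module_exit(this_cpu_demo_exit);
MODULE_LICENSE("GPL");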