Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock
From: Steven Rostedt
To: Davidlohr Bueso
Cc: Linus Torvalds, Paul McKenney, Linux Kernel Mailing List, Ingo Molnar,
    Lai Jiangshan (赖江山), Dipankar Sarma, Andrew Morton, Mathieu Desnoyers,
    Josh Triplett, niv@us.ibm.com, Thomas Gleixner, Peter Zijlstra,
    Valdis Kletnieks, David Howells, Eric Dumazet, Darren Hart,
    Frédéric Weisbecker, sbw@mit.edu
Date: Tue, 11 Jun 2013 14:10:31 -0400
Message-ID: <1370974231.9844.212.camel@gandalf.local.home>
In-Reply-To: <1370973186.1744.9.camel@buesod1.americas.hpqcorp.net>
References: <20130609193657.GA13392@linux.vnet.ibm.com>
            <1370911480.9844.160.camel@gandalf.local.home>
            <1370973186.1744.9.camel@buesod1.americas.hpqcorp.net>

On Tue, 2013-06-11 at 10:53 -0700, Davidlohr Bueso wrote:
> I hate to be the bearer of bad news, but I got some pretty bad aim7
> performance numbers with this patch on an 8-socket (80 core), 256 GB
> memory DL980 box against a vanilla 3.10-rc4 kernel:

This doesn't surprise
me, as the spinlock now contains a function call on any contention. Not to
mention the added i$ pressure on the embedded spinlock code having to set
up a function call. Even if the queues are not used, it adds a slight
overhead to all spinlocks, due to the code size increase as well as a
function call on all contention, which will also have an impact on i$ and
branch prediction.

>
> * shared workload:
> 10-100 users is in the noise area.
> 100-2000 users: -13% throughput.
>
> * high_systime workload:
> 10-700 users is in the noise area.
> 700-2000 users: -55% throughput.
>
> * disk:
> 10-100 users: -57% throughput.
> 100-1000 users: -25% throughput.
> 1000-2000 users: +8% throughput (this patch only benefits when we have a

Perhaps this is where it actually started using the queues?

> lot of concurrency).
>
> * custom:
> 10-100 users: -33% throughput.
> 100-2000 users: -46% throughput.
>
> * alltests:
> 10-1000 users is in the noise area.
> 1000-2000 users: -10% throughput.
>
> One notable exception is the short workload, where we actually see
> positive numbers:
> 10-100 users: +40% throughput.
> 100-2000 users: +69% throughput.

Perhaps short workloads have a cold cache, so the impact on the cache is
not as drastic? It would be interesting to see what perf reports on these
runs.

-- Steve