Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
From: Mike Galbraith
To: Alan Cox
Cc: Ingo Molnar, Jiri Kosina, Andrew Morton, Peter Zijlstra, David Miller,
    rjw@sisk.pl, s0mbre@tservice.net.ru, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org
Date: Mon, 27 Oct 2008 13:06:11 +0100
List: linux-kernel@vger.kernel.org

On Mon, 2008-10-27 at 11:33 +0000, Alan Cox wrote:
> > To get the best possible dbench numbers in CPU-bound dbench runs,
> > you have to throw away the scheduler completely and do this instead:
> >
> >  - first execute all requests of client 1
> >  - then execute all requests of client 2
> >  ....
> >  - execute all requests of client N
>
> Rubbish. If you do that you'll not get enough I/O in parallel to
> schedule the disk well (not that most of our I/O schedulers are doing
> the job well, and the vm writeback threads then mess it up, and the
> lack of Arjan's ioprio fixes then totally screws you).
>
> > The moment the clients are allowed to overlap, the moment their
> > requests are executed more fairly, the dbench numbers drop.
>
> Fairness isn't everything. Dbench is a fairly good tool for studying
> some real-world workloads. If your fairness hurts throughput that
> much, maybe your scheduler algorithm is just plain *wrong*, as it
> isn't adapting to the workload at all well.

This doesn't seem to be a scheduler/fairness issue: 2.6.22.19 uses the
O(1) scheduler and falls apart too. I posted the numbers and full
dbench output yesterday.

	-Mike
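As an aside, the batching idea Ingo describes above can be illustrated with a toy model (a hypothetical Python sketch, not anything from the actual scheduler or dbench): if every switch between clients carries a fixed cost (context switch, cache eviction), then executing each client's requests back to back minimizes the number of switches, while fair round-robin interleaving maximizes it.

```python
# Toy model (illustrative only, not real scheduler code): count
# client-to-client switches in an execution order, as a stand-in for
# context-switch / cache-eviction overhead.

def switches(order):
    """Number of adjacent request pairs in `order` run by different clients."""
    return sum(1 for a, b in zip(order, order[1:]) if a != b)

clients, requests = 4, 8

# "Throw away the scheduler": run all of client 0, then all of client 1, ...
batched = [c for c in range(clients) for _ in range(requests)]

# Fair interleaving: round-robin, one request per client per pass.
fair = [c for _ in range(requests) for c in range(clients)]

print(switches(batched))  # clients - 1 = 3 switches
print(switches(fair))     # clients * requests - 1 = 31 switches
```

The toy model only captures one side of the trade-off, which is Alan's point: the batched order wins on switch overhead but issues no I/O in parallel, so nothing is left for the disk scheduler to merge or reorder.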