Date: Sun, 23 Jun 2002 00:00:01 -0600 (MDT)
From: "Christopher E. Brown"
To: Andreas Dilger
Cc: "Griffiths, Richard A", "'Andrew Morton'", "'Jens Axboe'",
    Linux Kernel Mailing List
Subject: Re: ext3 performance bottleneck as the number of spindles gets large

On Sat, 22 Jun 2002, Andreas Dilger wrote:

> On Jun 22, 2002 22:02 -0600, Christopher E. Brown wrote:
> > On Thu, 20 Jun 2002, Griffiths, Richard A wrote:
> > >
> > > I should have mentioned the throughput we saw on 4 adapters and 6
> > > drives was 126KB/s. The max theoretical bus bandwidth is 640MB/s.
> >
> > This is *NOT* correct. Assuming a 64-bit 66MHz PCI bus, your max is
> > 503MB/sec minus PCI overhead...
>
> Assuming you only have a single PCI bus...

Yes, we could (for example) assume a DP264 board; it features 2/4/8-way
memory interleave, dual 21264 CPUs, and two separate 64-bit 66MHz PCI
buses.

However, multiple PCI buses are *rare* on x86. There are a lot of buses
chained via PCI-to-PCI bridges, but few systems with two or more PCI
buses of any type with parallel access to the CPU.

-- 
I route, therefore you are.
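
P.S. For anyone checking the arithmetic behind the 503MB/sec figure: a
64-bit bus moves 8 bytes per clock, so at 66MHz the raw ceiling is
8 * 66,000,000 = 528,000,000 bytes/sec, which works out to roughly
503MB/sec counted in binary megabytes, before any PCI arbitration or
protocol overhead. A minimal sketch in C, where the bus width and clock
rate are illustrative assumptions rather than measured values:

#include <stdio.h>

int main(void)
{
	/* Assumed bus parameters: 64-bit data path, 66MHz clock,
	 * one transfer per clock -- the best case, ignoring PCI
	 * arbitration and protocol overhead. */
	const double bus_width_bytes = 8.0;	/* 64 bits */
	const double clock_hz = 66.0e6;		/* 66MHz */

	double raw_bytes_per_sec = bus_width_bytes * clock_hz;
	double binary_mb_per_sec = raw_bytes_per_sec / (1024.0 * 1024.0);

	printf("raw ceiling: %.0f bytes/sec (~%.0fMB/sec binary)\n",
	       raw_bytes_per_sec, binary_mb_per_sec);
	return 0;
}

This prints a ceiling of about 503MB/sec; sustained throughput on a
real shared bus lands well below that once arbitration, bridge, and
device overheads are paid.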