hi
I have a PC with a Promise Ultra133 TX2 controller, with one 120 GB
drive per channel. I'm benchmarking this, using some 50 'dd if=file[1..50]
of=/dev/null' processes to simulate my application. This works for a few
seconds, giving me pretty good I/O speed, but then all the processes go
defunct and stay that way for some minutes (I really don't know how long).
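The benchmark described above can be sketched as a script. File names,
sizes, and the number of readers here are assumptions for illustration;
the real run used 50 large files on the RAID volume:

```shell
# Sketch of the parallel-read benchmark described above.
# Create a few small test files, then read them all concurrently
# with dd, discarding the data.
mkdir -p /tmp/ddtest
for i in $(seq 1 5); do
    dd if=/dev/zero of=/tmp/ddtest/file$i bs=1k count=64 2>/dev/null
done
for i in $(seq 1 5); do
    dd if=/tmp/ddtest/file$i of=/dev/null bs=1k &
done
wait
echo "all readers finished"
```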
I tried running the benchmark against the individual drives, and had no
problem there...
Anyone have a clue?
Configuration:
- Promise Ultra133 TX2 (20269 manually patched in; just added some
defines to alias it to the 20268)
- 2 WD1200BB drives (hde and hdg)
/etc/raidtab:
raiddev /dev/md0
raid-level 0
nr-raid-disks 2
persistent-superblock 0
chunk-size 4096
device /dev/hde
raid-disk 0
device /dev/hdg
raid-disk 1
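For reference, the raidtools-era setup steps that would go with this
raidtab are roughly as follows (these commands need real devices, so
this is just an assumed sketch):

```
mkraid /dev/md0        # build the striped array from /etc/raidtab
cat /proc/mdstat       # check that hde and hdg are both active
mkfs.xfs /dev/md0      # the thread puts XFS on the RAID volume
```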
--
Roy Sigurd Karlsbakk, MCSE, MCNE, CLS, LCA
Computers are like air conditioners.
They stop working when you open Windows.
> it would be interesting to write a simple benchmark
> that simply reads a file at a fixed rate. *that* would
> actually simulate your app.
sure. I'm using tux+wget for that. I was just playing around with dd
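A fixed-rate reader along those lines can be sketched in shell. The file
path, chunk size, and rate (~1 MB/s) are assumptions for illustration:

```shell
# Sketch: read a file at a roughly fixed rate by reading one 1 MB
# chunk per second. File path and rate are assumptions.
dd if=/dev/zero of=/tmp/ratefile bs=1M count=2 2>/dev/null   # test input
size=$(wc -c < /tmp/ratefile)
chunks=$(( (size + 1048575) / 1048576 ))
i=0
while [ $i -lt $chunks ]; do
    # skip=$i skips i input blocks of bs=1M, so each pass reads
    # the next 1 MB chunk
    dd if=/tmp/ratefile of=/dev/null bs=1M count=1 skip=$i 2>/dev/null
    sleep 1
    i=$((i + 1))
done
echo "read $chunks chunks"
```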
> sounds like a VM/balance problem. you didn't mention which kernel
> you're using.
2.4.16 w/tux + xfs. The filesystem used on the RAID volume is XFS.
> > I tried to do the bechmark with the individual drives, and no problem
> > there...
>
> much less bandwidth, and therefore VM pressure.
ok
On Tue, 2001-12-11 at 12:54, Roy Sigurd Karlsbakk wrote:
> 2.4.16 w/tux + xfs. The fs used on the raid vol is xfs
We just got to the bottom of a problem in XFS which was causing memory
not to be cleaned as efficiently as it should be - it led to dbench
lockups on low-memory systems. It is possible you are seeing a similar
effect - we dirty all the memory and then struggle to clean it up.
Try the attached patch.
Steve
On Tue, 2001-12-11 at 13:34, Steve Lord wrote:
> We just got to the bottom of a problem in xfs which was causing memory
> not to get cleaned as efficiently as it should be - it lead to dbench
> lockups on low memory systems.
>
> Try the attached patch.
OK, don't try that patch too hard - I have a better one without a race
condition on the buffer head... it should show up in the XFS CVS tree
later today. I am going to stress this one a little harder first!
Steve