"Justin T. Gibbs" wrote:
>
> ...
> > The evidence is here:
> >
> > http://marc.theaimsgroup.com/?l=linux-kernel&m=103302456113997&w=1
>
> Which unfortunately characterizes only a single symptom without breaking
> it down on a transaction by transaction basis. We need to understand
> how many writes were queued by the OS to the drive between each read to
> know if the drive is actually allowing writes to pass reads or not.
>
Given that I measured a two-second read latency with a tag depth of
four, that would be about 60 megabytes of write traffic after the
read was submitted. Say, 120 requests.
I'm not sure how old the disk is. It's a 36G Fujitsu SCA-2,
manufactured in 2000, perhaps?
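[Editorial sketch: the 60 MB / 120 request figure above can be sanity-checked with quick arithmetic. The ~30 MB/s sustained write bandwidth and ~512 KB average request size below are assumptions for illustration, not numbers stated in the mail:]

```python
# Back-of-envelope check of the write traffic queued behind a stalled read.
# Assumed (not from the mail): ~30 MB/s sustained write bandwidth for a
# drive of that vintage, ~512 KB average write request size.
read_latency_s = 2          # measured read latency (seconds)
write_bw_mb_s = 30          # assumed sustained write bandwidth (MB/s)
avg_request_kb = 512        # assumed average write request size (KB)

write_traffic_mb = read_latency_s * write_bw_mb_s        # MB written while the read waited
requests = write_traffic_mb * 1024 // avg_request_kb     # number of write requests

print(write_traffic_mb, "MB,", requests, "requests")
```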
>> Which unfortunately characterizes only a single symptom without breaking
>> it down on a transaction by transaction basis. We need to understand
>> how many writes were queued by the OS to the drive between each read to
>> know if the drive is actually allowing writes to pass reads or not.
>>
>
> Given that I measured a two-second read latency with a tag depth of
> four, that would be about 60 megabytes of write traffic after the
> read was submitted. Say, 120 requests.
I still don't follow your reasoning. Your benchmark reports the
aggregate latency of several reads (cat kernel/*.c), not the per-read
latency. The two are quite different, and unless you know the per-read
latency, and whether it was affected by filling the drive's entire
cache with pent-up writes (again, these are writes above and beyond
those still assigned tags), you are still speculating that writes
are passing reads.
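[Editorial sketch: the aggregate-vs-per-read distinction can be made concrete with a small timing harness. This is only an illustration; the file paths and sizes are placeholders, and it measures latency as seen from userspace through the page cache, not on the SCSI bus itself:]

```python
import os
import threading
import time


def flood_writes(path, stop, chunk=1 << 20):
    # Background writer: keep as much write traffic queued as possible,
    # fsync()ing so the traffic actually reaches the device.
    buf = b"\0" * chunk
    with open(path, "wb") as f:
        while not stop.is_set():
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())


def per_read_latencies(path, n_reads, chunk=1 << 16):
    # Time every read individually instead of the whole batch, so a
    # single starved read shows up as one large sample rather than
    # being averaged into an aggregate figure.
    latencies = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(n_reads):
            t0 = time.monotonic()
            os.read(fd, chunk)
            latencies.append(time.monotonic() - t0)
            os.lseek(fd, 0, os.SEEK_SET)  # re-read the same region
    finally:
        os.close(fd)
    return latencies
```

Run `flood_writes` in a `threading.Thread` against one file while calling `per_read_latencies` on another; comparing `max(latencies)` with `sum(latencies)` then shows whether one stalled read, or uniformly slow reads, produced the total.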
If you can tell me exactly how you ran your benchmark, I'll find the
information I want by using a SCSI bus analyzer to sniff the traffic
on the bus.
--
Justin