2005-09-17 04:31:10

by Al Boldi

Subject: Eradic disk access during reads

Monitoring disk access using gkrellm, I noticed that a command like

cat /dev/hda > /dev/null

shows eradic disk reads ranging from 0 to 80MB/s on an otherwise idle system.

1. Is this a hardware or software problem?
2. Is there a lightweight perf-mon tool (cmd-line) that would log this
behaviour graphically?

Thanks!

--
Al


2005-09-17 05:50:23

by Willy Tarreau

Subject: Re: Eradic disk access during reads

On Sat, Sep 17, 2005 at 07:26:11AM +0300, Al Boldi wrote:
> Monitoring disk access using gkrellm, I noticed that a command like
>
> cat /dev/hda > /dev/null
>
> shows eradic disk reads ranging from 0 to 80MB/s on an otherwise idle system.
>
> 1. Is this a hardware or software problem?

Difficult to tell without more info. Can be a broken IDE disk or defective
ribbon.

> 2. Is there a lightweight perf-mon tool (cmd-line) that would log this
> behaviour graphically?

You can do " readspeed </dev/hda | tr '\r' '\n' > log " with the readspeed
tool from there :
http://w.ods.org/tools/readspeed

Then you just have to graph $6 (kB/s) versus $1 (bytes read). There may
be other tools which do all this automatically though.
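
For example, a rough post-processing sketch, assuming awk and gnuplot are
installed (column meanings as described above, nothing more assumed about
the readspeed output):

  readspeed < /dev/hda | tr '\r' '\n' > log
  # keep only bytes read ($1) and kB/s ($6)
  awk '{ print $1, $6 }' log > rate.dat
  # plot kB/s versus bytes read
  echo "plot 'rate.dat' using 1:2 with lines" | gnuplot -persist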

Regards,
Willy

2005-09-17 16:41:31

by Al Boldi

Subject: Re: Eradic disk access during reads

Denis Vlasenko wrote:
> > > > Monitoring disk access using gkrellm, I noticed that a command like
> > > >
> > > > cat /dev/hda > /dev/null
> > > >
> > > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise
> > > > idle system.
> > > >
> > > > 1. Is this a hardware or software problem?
> > >
> > > Difficult to tell without more info. Can be a broken IDE disk or
> > > defective ribbon.
> >
> > Tried the same with 2.4.31 which shows steady behaviour with occasional
> > dips and pops in the msec range.
> >
> > > > 2. Is there a lightweight perf-mon tool (cmd-line) that would log
> > > > this behaviour graphically?
>
> Try attached one.

Nice!

> # dd if=/dev/hda of=/dev/null bs=16M

This is what I got; what do you get?

Thanks!

2.4.31 # nmeter t6 c x i b d100
18:56:36.009981 cpu [SSSSSSSS..] ctx 145 int 86 bio 4.7M 0
18:56:36.110327 cpu [SSSSSSSS..] ctx 145 int 87 bio 4.8M 0
18:56:36.210735 cpu [SSSSSSSS..] ctx 139 int 88 bio 4.7M 0
18:56:36.310315 cpu [SSSSSSSS..] ctx 142 int 85 bio 4.7M 0
18:56:36.409979 cpu [SSSSSS....] ctx 147 int 86 bio 4.7M 0
18:56:36.510764 cpu [SSSSSSSSS.] ctx 161 int 88 bio 4.8M 0
18:56:36.610213 cpu [SSSSSSSS..] ctx 165 int 85 bio 4.7M 0
18:56:36.710194 cpu [SSSSSSSSS.] ctx 152 int 89 bio 4.7M 4096
18:56:36.809977 cpu [SSSSSSS...] ctx 135 int 86 bio 4.7M 0
18:56:36.909970 cpu [SSSSSSS...] ctx 142 int 89 bio 4.7M 16k
18:56:37.009972 cpu [SSSSS.....] ctx 130 int 86 bio 4.7M 0
18:56:37.109979 cpu [SSSSS.....] ctx 142 int 86 bio 4.7M 0
18:56:37.210335 cpu [SSSSSSSS..] ctx 135 int 87 bio 4.8M 0
18:56:37.309973 cpu [SSSSS.....] ctx 100 int 68 bio 3.5M 0
18:56:37.410011 cpu [SSSSSSS...] ctx 146 int 87 bio 4.7M 0
18:56:37.509980 cpu [SSSSSSS...] ctx 145 int 86 bio 4.8M 0
18:56:37.610117 cpu [SSSSSSS...] ctx 137 int 87 bio 4.7M 0
18:56:37.709978 cpu [SSSSSSSSS.] ctx 133 int 87 bio 4.7M 0
18:56:37.809981 cpu [SSSSSSSS..] ctx 146 int 87 bio 4.8M 0
18:56:37.909972 cpu [SSSSS.....] ctx 139 int 86 bio 4.7M 0
18:56:38.010504 cpu [SSSSSS....] ctx 149 int 87 bio 4.7M 0
18:56:38.110025 cpu [SSSSSSS...] ctx 136 int 87 bio 4.7M 0
18:56:38.210122 cpu [SSSSSSS...] ctx 137 int 85 bio 4.7M 0
18:56:38.309975 cpu [SSSSSSS...] ctx 141 int 87 bio 4.7M 0
18:56:38.410095 cpu [SSSSSSSS..] ctx 138 int 86 bio 4.8M 0
18:56:38.509978 cpu [SSSSS.....] ctx 145 int 87 bio 4.7M 0
18:56:38.610115 cpu [SSSSS.....] ctx 145 int 86 bio 4.7M 0
18:56:38.710204 cpu [SSSSSSSS..] ctx 141 int 87 bio 4.7M 0
18:56:38.811411 cpu [SSSSSSS...] ctx 128 int 88 bio 4.8M 0
18:56:38.911221 cpu [SSSSSSSS..] ctx 131 int 86 bio 4.7M 0
18:56:39.010779 cpu [SSSSSSSS..] ctx 127 int 86 bio 4.7M 0
18:56:39.110211 cpu [SSSSS.....] ctx 127 int 86 bio 4.7M 0
18:56:39.210086 cpu [SSSSSSSSS.] ctx 131 int 86 bio 4.7M 0
18:56:39.313572 cpu [SSSSSSS...] ctx 137 int 90 bio 5.0M 0
18:56:39.410307 cpu [SSSSSSS...] ctx 139 int 84 bio 4.6M 0
18:56:39.510618 cpu [SSSSSSSSS.] ctx 131 int 85 bio 4.7M 0
18:56:39.610558 cpu [SSSSSSSS..] ctx 144 int 87 bio 4.7M 0
18:56:39.709981 cpu [SSSS......] ctx 141 int 86 bio 4.7M 0
18:56:39.810109 cpu [SSSSSSS...] ctx 139 int 86 bio 4.7M 0
18:56:39.910581 cpu [SSSSSSS...] ctx 144 int 87 bio 4.8M 0
18:56:40.010554 cpu [SSSSSSSSS.] ctx 144 int 87 bio 4.7M 0
18:56:40.110382 cpu [SSSSSSSS..] ctx 143 int 86 bio 4.7M 0
18:56:40.210709 cpu [SSSSSSSSS.] ctx 138 int 87 bio 4.8M 0
18:56:40.310512 cpu [SSSS......] ctx 144 int 86 bio 4.7M 0
18:56:40.410830 cpu [SSSSSSS...] ctx 147 int 87 bio 4.7M 0
18:56:40.510371 cpu [SSSSSSSS..] ctx 160 int 86 bio 4.7M 0
18:56:40.610664 cpu [SSSSSSSS..] ctx 170 int 87 bio 4.8M 0
18:56:40.710106 cpu [SSSSSSS...] ctx 149 int 87 bio 4.7M 0
18:56:40.810056 cpu [SSSSSSSS..] ctx 141 int 85 bio 4.6M 0
18:56:40.910416 cpu [SSSSSSS...] ctx 138 int 86 bio 4.8M 0
18:56:41.009974 cpu [SSSSSS....] ctx 142 int 87 bio 4.7M 0
18:56:41.110094 cpu [SSSSSS....] ctx 142 int 86 bio 4.7M 0
18:56:41.209972 cpu [SSSSSSSS..] ctx 139 int 87 bio 4.7M 0
18:56:41.310140 cpu [SSSSSSSS..] ctx 135 int 88 bio 4.8M 0

2.6.13 # nmeter t6 c x i b d100
18:09:22.117959 cpu [SSSSSSSSSD] ctx 80 int 47 bio 4.7M 0
18:09:22.217932 cpu [SSSSSSDDDD] ctx 83 int 48 bio 4.8M 0
18:09:22.319200 cpu [SSSSDDDDDI] ctx 81 int 56 bio 4.7M 28k
18:09:22.407979 cpu [SSSSSSSSSD] ctx 60 int 38 bio 3.8M 0
18:09:22.517960 cpu [SSSSSSSSDI] ctx 95 int 61 bio 5.2M 52k
18:09:22.617942 cpu [SSSSSSSSSD] ctx 77 int 47 bio 4.7M 0
18:09:22.718043 cpu [SSSSSDDDDD] ctx 81 int 48 bio 4.8M 0
18:09:22.817976 cpu [SSSSDDDDDD] ctx 80 int 48 bio 4.6M 0
18:09:22.918080 cpu [SSSSSSSDDI] ctx 80 int 48 bio 4.8M 0
18:09:23.017975 cpu [SSSSSSSSDD] ctx 80 int 47 bio 4.7M 0
18:09:23.117912 cpu [SSSSDDDDDD] ctx 36 int 25 bio 3.1M 0
18:09:23.217943 cpu [SSSSDDDDDD] ctx 44 int 29 bio 2.7M 0
18:09:23.317949 cpu [SSSSSSSDDD] ctx 80 int 47 bio 4.8M 0
18:09:23.417944 cpu [SSSSSSSSDI] ctx 84 int 49 bio 4.7M 0
18:09:23.517944 cpu [SSSSSSSSDD] ctx 78 int 47 bio 4.7M 0
18:09:23.617948 cpu [SSSSDDDDDD] ctx 67 int 42 bio 4.5M 8192
18:09:23.718238 cpu [SSSSDDDDDD] ctx 80 int 48 bio 4.7M 0
18:09:23.818039 cpu [SSSSSSSDDI] ctx 79 int 48 bio 4.7M 0
18:09:23.917961 cpu [SSSSSSSSDD] ctx 80 int 49 bio 4.7M 0
18:09:24.019101 cpu [SSSSSSSDDD] ctx 80 int 48 bio 4.8M 0
18:09:24.107934 cpu [SSSSSSSDII] ctx 71 int 42 bio 4.2M 0
18:09:24.217946 cpu [SSSSDDDDDD] ctx 90 int 53 bio 5.2M 0
18:09:24.317965 cpu [SSSSSSDDDI] ctx 83 int 49 bio 4.7M 0
18:09:24.417974 cpu [SSSSSSSSDD] ctx 78 int 46 bio 4.6M 0
18:09:24.517938 cpu [SSSSDDDDDD] ctx 48 int 30 bio 3.7M 0
18:09:24.617940 cpu [SSSDDDDDDD] ctx 80 int 48 bio 4.7M 0
18:09:24.717948 cpu [SSSSSSSDDI] ctx 79 int 48 bio 4.7M 0
18:09:24.817955 cpu [SSSSSSSSDI] ctx 82 int 49 bio 4.7M 0
18:09:24.917914 cpu [SSSSDDDDDD] ctx 31 int 21 bio 2.7M 0
18:09:25.018051 cpu [SSSDDDDDDD] ctx 31 int 23 bio 2.7M 0
18:09:25.117988 cpu [SSSSSSDDII] ctx 81 int 49 bio 4.8M 0
18:09:25.217970 cpu [SSSSSDDDDD] ctx 37 int 24 bio 3.1M 0
18:09:25.318115 cpu [SSSSSSSSDD] ctx 81 int 48 bio 4.8M 0
18:09:25.417959 cpu [SSSSSSSSDD] ctx 81 int 47 bio 4.7M 0
18:09:25.517943 cpu [SSSSSSDDDD] ctx 65 int 41 bio 4.3M 0
18:09:25.617959 cpu [SSSSDDDDDD] ctx 80 int 48 bio 4.7M 0
18:09:25.717945 cpu [SSSSSSSDDD] ctx 81 int 47 bio 4.8M 0
18:09:25.817979 cpu [SSSSSSSSDI] ctx 79 int 49 bio 4.6M 0
18:09:25.917913 cpu [SSSSDDDDDD] ctx 30 int 21 bio 2.7M 0
18:09:26.017940 cpu [SSSSDDDDDD] ctx 64 int 39 bio 3.7M 0
18:09:26.118932 cpu [SSSSDDDDDI] ctx 79 int 49 bio 4.7M 0
18:09:26.217959 cpu [SSSSSDDDDD] ctx 43 int 26 bio 3.3M 0
18:09:26.317948 cpu [SSSSSDDDDD] ctx 64 int 38 bio 4.1M 0
18:09:26.417962 cpu [SSSSSSSSDD] ctx 78 int 47 bio 4.7M 0
18:09:26.517958 cpu [SSSSDDDDDD] ctx 27 int 20 bio 2.2M 0
18:09:26.617978 cpu [SSSDDDDDDD] ctx 24 int 18 bio 1.6M 0
18:09:26.717966 cpu [SSSSSDDDDI] ctx 76 int 45 bio 4.5M 0
18:09:26.817963 cpu [SSSSSSDDDI] ctx 62 int 37 bio 4.0M 0
18:09:26.917972 cpu [SSSSSDDDDD] ctx 69 int 40 bio 4.3M 0
18:09:27.017971 cpu [SSSSSSSDDD] ctx 78 int 47 bio 4.7M 0
18:09:27.117942 cpu [SDDDDDDDDD] ctx 40 int 25 bio 3.3M 0
18:09:27.217949 cpu [SSSSSSSSDI] ctx 69 int 42 bio 4.2M 0
18:09:27.318990 cpu [SSSSSSSSDI] ctx 84 int 48 bio 4.8M 0
18:09:27.417950 cpu [SSSSSSSSSI] ctx 79 int 47 bio 4.6M 0
18:09:27.518161 cpu [SSSSSSSSDI] ctx 79 int 47 bio 4.8M 8192
18:09:27.617941 cpu [SSSSSSSDDD] ctx 87 int 51 bio 4.7M 24k

2005-09-17 10:45:05

by Denis Vlasenko

Subject: Re: Eradic disk access during reads

> > > Monitoring disk access using gkrellm, I noticed that a command like
> > >
> > > cat /dev/hda > /dev/null
> > >
> > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise idle
> > > system.
> > >
> > > 1. Is this a hardware or software problem?
> >
> > Difficult to tell without more info. Can be a broken IDE disk or defective
> > ribbon.
>
> Tried the same with 2.4.31 which shows steady behaviour with occasional dips
> and pops in the msec range.
>
> > > 2. Is there a lightweight perf-mon tool (cmd-line) that would log this
> > > behaviour graphically?
> >
> > You can do " readspeed </dev/hda | tr '\r' '\n' > log " with the readspeed
> > tool from there :
> > http://w.ods.org/tools/readspeed

Try attached one.

# dd if=/dev/hda of=/dev/null bs=16M

On another console:

# nmeter t6 b d1
...
13:42:28.352037 bio 0 0
13:42:28.354675 bio 0 0
13:42:28.357703 bio 0 0
13:42:28.360671 bio 0 0
13:42:28.364712 bio 0 0
13:42:28.367671 bio 128k 0
13:42:28.370673 bio 0 0
13:42:28.373656 bio 0 0
13:42:28.376653 bio 0 0
13:42:28.379680 bio 128k 0
13:42:28.382656 bio 0 0
13:42:28.385819 bio 0 0
13:42:28.388651 bio 0 0
13:42:28.391736 bio 128k 0
13:42:28.394673 bio 0 0
13:42:28.397656 bio 0 0
...
--
vda


Attachments:
(No filename) (1.28 kB)
d.tar.bz2 (18.82 kB)

2005-09-17 10:36:03

by Al Boldi

Subject: Re: Eradic disk access during reads

Willy Tarreau wrote:
> On Sat, Sep 17, 2005 at 07:26:11AM +0300, Al Boldi wrote:
> > Monitoring disk access using gkrellm, I noticed that a command like
> >
> > cat /dev/hda > /dev/null
> >
> > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise idle
> > system.
> >
> > 1. Is this a hardware or software problem?
>
> Difficult to tell without more info. Can be a broken IDE disk or defective
> ribbon.

Tried the same with 2.4.31 which shows steady behaviour with occasional dips
and pops in the msec range.

>
> > 2. Is there a lightweight perf-mon tool (cmd-line) that would log this
> > behaviour graphically?
>
> You can do " readspeed </dev/hda | tr '\r' '\n' > log " with the readspeed
> tool from there :
> http://w.ods.org/tools/readspeed

Does it have msec resolution?

Thanks!

--
Al

2005-09-17 18:46:49

by Willy Tarreau

Subject: Re: Eradic disk access during reads

On Sat, Sep 17, 2005 at 01:32:53PM +0300, Al Boldi wrote:
> Willy Tarreau wrote:
> > On Sat, Sep 17, 2005 at 07:26:11AM +0300, Al Boldi wrote:
> > > Monitoring disk access using gkrellm, I noticed that a command like
> > >
> > > cat /dev/hda > /dev/null
> > >
> > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise idle
> > > system.
> > >
> > > 1. Is this a hardware or software problem?
> >
> > Difficult to tell without more info. Can be a broken IDE disk or defective
> > ribbon.
>
> Tried the same with 2.4.31 which shows steady behaviour with occasional dips
> and pops in the msec range.
>
> >
> > > 2. Is there a lightweight perf-mon tool (cmd-line) that would log this
> > > behaviour graphically?
> >
> > You can do " readspeed </dev/hda | tr '\r' '\n' > log " with the readspeed
> > tool from there :
> > http://w.ods.org/tools/readspeed
>
> Does it have msec resolution?

no, it does not. It will only show you the average transfer rate during
the past second each second. It often helps me tune network, nfs, raid, ...

Denis' tool seems clearly more suited to analyse your problem.

Regards,
Willy

2005-09-18 04:17:53

by Al Boldi

Subject: Re: Eradic disk access during reads

Willy Tarreau wrote:
> On Sat, Sep 17, 2005 at 01:32:53PM +0300, Al Boldi wrote:
> > Willy Tarreau wrote:
> > > On Sat, Sep 17, 2005 at 07:26:11AM +0300, Al Boldi wrote:
> > > > Monitoring disk access using gkrellm, I noticed that a command like
> > > >
> > > > cat /dev/hda > /dev/null
> > > >
> > > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise
> > > > idle system.

New synonym: eradic=erratic :)

> Denis' tool seems clearly more suited to analyse your problem.

The problem seems to be a multi-access collision in the queue, which forces a
~50% reduction of thruput, which recovers with another multi-access
collision. Maybe?!

--
Al

2005-09-18 05:23:07

by Willy Tarreau

Subject: Re: Eradic disk access during reads

On Sun, Sep 18, 2005 at 07:16:46AM +0300, Al Boldi wrote:
> Willy Tarreau wrote:
> > On Sat, Sep 17, 2005 at 01:32:53PM +0300, Al Boldi wrote:
> > > Willy Tarreau wrote:
> > > > On Sat, Sep 17, 2005 at 07:26:11AM +0300, Al Boldi wrote:
> > > > > Monitoring disk access using gkrellm, I noticed that a command like
> > > > >
> > > > > cat /dev/hda > /dev/null
> > > > >
> > > > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise
> > > > > idle system.
>
> New synonym: eradic=erratic :)

I noticed it from the first mail too :-)

> > Denis' tool seems clearly more suited to analyse your problem.
>
> The problem seems to be a multi-access collision in the queue, which forces a
> ~50% reduction of thruput, which recovers with another multi-access
> collision. Maybe?!

Do you have anything else connected to the same controller? I encountered
a problem a long time ago with SATA drives (it was around 2.4.25, with the
beginning of SATA support), where I could not use RAID on my drives because
simultaneous accesses to the drives were not possible. I then realized that
any access to a drive during a read from or a write to another one would
stall. I don't remember if CDROM access caused the same trouble. This was
with a SATA-patched ata-piix driver, but fortunately this has been fixed by
now.

Regards,
Willy

2005-09-18 06:56:06

by Al Boldi

Subject: Re: Eradic disk access during reads

Willy Tarreau wrote:
> On Sun, Sep 18, 2005 at 07:16:46AM +0300, Al Boldi wrote:
> >
> > New synonym: eradic=erratic :)
> >
> > The problem seems to be a multi-access collision in the queue, which
> > forces a ~50% reduction of thruput, which recovers with another
> > multi-access collision. Maybe?!
>
> Do you have anything else connected to the same controller?

Nothing but hda.

This problem only shows up in 2.6.13; 2.4.31 works OK!

Maybe this is also associated with the block-dev read-ahead problem?
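
A quick sketch for checking that, assuming the usual blockdev and hdparm
tools are available:

  blockdev --getra /dev/hda       # current read-ahead, in 512-byte sectors
  hdparm -a /dev/hda              # the same setting, as reported by hdparm
  blockdev --setra 256 /dev/hda   # try another value, then rerun the dd test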

Thanks!

--
Al


2005-09-18 10:41:04

by Denis Vlasenko

Subject: Re: Eradic disk access during reads

> > > You can do " readspeed </dev/hda | tr '\r' '\n' > log " with the readspeed
> > > tool from there :
> > > http://w.ods.org/tools/readspeed
> >
> > Does it have msec resolution?
>
> no, it does not. It will only show you the average transfer rate during
> the past second each second. It often helps me tune network, nfs, raid, ...
>
> Denis' tool seems clearly more suited to analyse your problem.

Denis' tool just got a homepage :)

http://195.66.192.167/linux/vdautil-0.5/src/
--
vda

2005-09-18 11:01:13

by Denis Vlasenko

Subject: Re: Eradic disk access during reads

On Saturday 17 September 2005 19:38, Al Boldi wrote:
> > > > > Monitoring disk access using gkrellm, I noticed that a command like
> > > > >
> > > > > cat /dev/hda > /dev/null
> > > > >
> > > > > shows eradic disk reads ranging from 0 to 80MB/s on an otherwise
> > > > > idle system.
> > > > >
> > > > > 1. Is this a hardware or software problem?
> > > >
> > > > Difficult to tell without more info. Can be a broken IDE disk or
> > > > defective ribbon.
> > >
> > > Tried the same with 2.4.31 which shows steady behaviour with occasional
> > > dips and pops in the msec range.
> > >
> > > > > 2. Is there a lightweight perf-mon tool (cmd-line) that would log
> > > > > this behaviour graphically?
> >
> > Try attached one.
>
> Nice!
>
> > # dd if=/dev/hda of=/dev/null bs=16M
>
> This is what I got; what do you get?
>
> Thanks!
>
> 2.4.31 # nmeter t6 c x i b d100
> 18:56:36.009981 cpu [SSSSSSSS..] ctx 145 int 86 bio 4.7M 0
> 18:56:36.110327 cpu [SSSSSSSS..] ctx 145 int 87 bio 4.8M 0
> 18:56:36.210735 cpu [SSSSSSSS..] ctx 139 int 88 bio 4.7M 0
> 18:56:36.310315 cpu [SSSSSSSS..] ctx 142 int 85 bio 4.7M 0
>
> 2.6.13 # nmeter t6 c x i b d100
> 18:09:22.117959 cpu [SSSSSSSSSD] ctx 80 int 47 bio 4.7M 0
> 18:09:22.217932 cpu [SSSSSSDDDD] ctx 83 int 48 bio 4.8M 0
> 18:09:22.319200 cpu [SSSSDDDDDI] ctx 81 int 56 bio 4.7M 28k
> 18:09:22.407979 cpu [SSSSSSSSSD] ctx 60 int 38 bio 3.8M 0
> 18:09:22.517960 cpu [SSSSSSSSDI] ctx 95 int 61 bio 5.2M 52k

Well, I do not see any bursts you mention...
Anyway, my data:

# uname -a
Linux firebird 2.6.12-r4 #1 SMP Sun Jul 17 13:51:47 EEST 2005 i686 unknown unknown GNU/Linux

hda:

13:52:05.401922 cpu [..........] ctx 4 int 64 bio 0 0
13:52:05.500900 cpu [..........] ctx 6 int 100 bio 0 0
13:52:05.600879 cpu [..........] ctx 2 int 102 bio 0 0
13:52:05.700857 cpu [..........] ctx 5 int 101 bio 0 0
13:52:05.800836 cpu [..........] ctx 6 int 103 bio 0 0
13:52:05.900815 cpu [..........] ctx 2 int 100 bio 0 0
13:52:06.000795 cpu [..........] ctx 2 int 102 bio 0 0
13:52:06.100776 cpu [..........] ctx 3 int 100 bio 0 0
13:52:06.200752 cpu [..........] ctx 2 int 100 bio 0 0
13:52:06.300731 cpu [..........] ctx 2 int 100 bio 0 0
13:52:06.401724 cpu [SSDDDD....] ctx 36 int 133 bio 857k 0
13:52:06.501749 cpu [SSSSSSDDDI] ctx 44 int 134 bio 3.2M 0
13:52:06.601721 cpu [SSSSSSDDII] ctx 38 int 161 bio 4.0M 0
13:52:06.701674 cpu [SSSSSSDDDI] ctx 43 int 141 bio 3.7M 0
13:52:06.802877 cpu [SSSSSSDDDI] ctx 33 int 148 bio 3.6M 0
13:52:06.901690 cpu [SSSSSSSDDI] ctx 40 int 140 bio 3.5M 0
13:52:07.001663 cpu [SSSSSSDDDI] ctx 37 int 139 bio 4.1M 0
13:52:07.102638 cpu [SSSSSSUDDI] ctx 38 int 150 bio 4.1M 0
13:52:07.201653 cpu [SSSSSSSDDI] ctx 45 int 148 bio 4.1M 0
13:52:07.301605 cpu [SSSSSSSDII] ctx 38 int 142 bio 4.0M 0
13:52:07.401556 cpu [SSSSSSDDDI] ctx 42 int 143 bio 4.0M 0
13:52:07.501513 cpu [SSSSSSSDDI] ctx 35 int 145 bio 4.1M 0
13:52:07.601485 cpu [SSSSSSSDDI] ctx 29 int 142 bio 3.6M 0
13:52:07.701839 cpu [SSSSSSSSDI] ctx 30 int 141 bio 4.0M 0
13:52:07.801519 cpu [SSSSSSDDII] ctx 34 int 147 bio 4.0M 0
13:52:07.901527 cpu [SSSSSSSDDI] ctx 32 int 141 bio 4.1M 0
13:52:08.001448 cpu [SSSSSSSDDI] ctx 53 int 150 bio 3.7M 0
13:52:08.101378 cpu [SSSSSSDDDI] ctx 47 int 138 bio 3.7M 0
13:52:08.201395 cpu [SSSSSSSSDI] ctx 44 int 138 bio 3.8M 0
13:52:08.301555 cpu [SSSSSDDDII] ctx 33 int 136 bio 3.3M 0
13:52:08.402275 cpu [SSSSSSSDDD] ctx 43 int 150 bio 3.7M 16k
13:52:08.501617 cpu [SSSSSSSSDI] ctx 35 int 142 bio 4.0M 0
13:52:08.601614 cpu [SSSSSSSDDI] ctx 46 int 148 bio 4.0M 0
13:52:08.701250 cpu [SSSSSDDDDI] ctx 36 int 129 bio 2.7M 0
13:52:08.801316 cpu [SSSSSSSDDI] ctx 41 int 154 bio 3.8M 0
13:52:08.902231 cpu [SSSSSSSDDI] ctx 42 int 140 bio 3.8M 0
13:52:09.001419 cpu [SSSSSSSDDI] ctx 36 int 142 bio 4.1M 0
13:52:09.102196 cpu [SSSSSSSSDI] ctx 43 int 140 bio 3.8M 0
13:52:09.201450 cpu [SSSSSSDDII] ctx 34 int 142 bio 4.1M 0
13:52:09.301176 cpu [SSSSSSSDDI] ctx 39 int 138 bio 3.8M 0
13:52:09.401806 cpu [SSSSSSSSDI] ctx 55 int 157 bio 4.3M 0
13:52:09.501077 cpu [SSSSDDDDDI] ctx 32 int 121 bio 2.2M 0
13:52:09.601067 cpu [SSSDDDDDDD] ctx 24 int 127 bio 1.7M 0
13:52:09.701108 cpu [SSSSSSSDDI] ctx 50 int 153 bio 4.1M 0
13:52:09.802156 cpu [SSSSDDDDII] ctx 34 int 139 bio 2.7M 0
13:52:09.902026 cpu [SSSSSSSSDD] ctx 44 int 152 bio 4.0M 0
13:52:10.002041 cpu [SSSSDDDDDI] ctx 39 int 126 bio 2.2M 0
13:52:10.100997 cpu [SSSSDDDDDI] ctx 46 int 129 bio 2.7M 0
13:52:10.201964 cpu [SSSSSSSDDI] ctx 57 int 139 bio 3.6M 0
13:52:10.303468 cpu [SSSSDDDDDI] ctx 34 int 133 bio 2.2M 0
13:52:10.401523 cpu [SSSSSSSDDI] ctx 50 int 145 bio 3.6M 4096
13:52:10.500934 cpu [SSSSSSSSII] ctx 66 int 139 bio 4.1M 0
13:52:10.601854 cpu [SSSSSSSSDI] ctx 59 int 147 bio 4.0M 0
13:52:10.701938 cpu [SSSSSSSDDI] ctx 46 int 151 bio 3.7M 0
13:52:10.801827 cpu [SSSSSSSDDI] ctx 47 int 153 bio 4.0M 0
13:52:10.900829 cpu [SSSSSSSDII] ctx 58 int 142 bio 4.1M 100k
13:52:11.000810 cpu [SSSSSSSDII] ctx 62 int 153 bio 4.0M 0
13:52:11.100836 cpu [SSSSSSSSDI] ctx 57 int 143 bio 4.1M 0
13:52:11.201740 cpu [SSSSSSSDII] ctx 54 int 154 bio 4.1M 0
13:52:11.300827 cpu [SSSSSSSSDI] ctx 66 int 138 bio 4.1M 0
13:52:11.402139 cpu [SSSSSSSDII] ctx 62 int 149 bio 4.0M 4096
13:52:11.501748 cpu [SSSSSSSSDI] ctx 55 int 142 bio 4.1M 0
13:52:11.601638 cpu [SSSSSSSSDI] ctx 58 int 144 bio 4.0M 0
13:52:11.701665 cpu [SSSSSSSDII] ctx 59 int 143 bio 4.1M 0
13:52:11.801702 cpu [SSSSSSSSDI] ctx 57 int 142 bio 4.1M 0
13:52:11.900620 cpu [SSSSSSDDII] ctx 60 int 147 bio 3.9M 76k
13:52:12.001602 cpu [SSSSSSSSDI] ctx 59 int 141 bio 4.1M 0
13:52:12.102003 cpu [SSSSSSSSDI] ctx 60 int 141 bio 4.1M 0
13:52:12.202031 cpu [SSSSSSSDII] ctx 46 int 154 bio 4.1M 0
13:52:12.300878 cpu [SSSSSSSSDI] ctx 56 int 138 bio 4.0M 0
13:52:12.401747 cpu [SSSSSSSDDI] ctx 56 int 149 bio 4.1M 0
13:52:12.501482 cpu [SSSSSSSSDI] ctx 62 int 140 bio 4.0M 0
13:52:12.600455 cpu [SSSSSSSDII] ctx 57 int 151 bio 4.0M 0
13:52:12.700400 cpu [SSSSSUDDDI] ctx 35 int 138 bio 2.6M 0
13:52:12.801719 cpu [SSSSSSSDDI] ctx 48 int 140 bio 3.7M 0
13:52:12.901651 cpu [SSSSSSSSDI] ctx 50 int 148 bio 4.0M 0
13:52:13.001352 cpu [SSSSSSDDII] ctx 43 int 138 bio 4.1M 0
13:52:13.103070 cpu [SSSSSSSDDI] ctx 47 int 151 bio 4.0M 0
13:52:13.201283 cpu [SSSSSSD...] ctx 41 int 123 bio 2.3M 0
13:52:13.301254 cpu [..........] ctx 2 int 101 bio 0 0
13:52:13.401230 cpu [..........] ctx 2 int 100 bio 0 0
13:52:13.501211 cpu [..........] ctx 4 int 100 bio 0 0
13:52:13.601189 cpu [..........] ctx 2 int 102 bio 0 0
13:52:13.701168 cpu [..........] ctx 2 int 100 bio 0 0
13:52:13.801146 cpu [..........] ctx 2 int 101 bio 0 0
13:52:13.901125 cpu [..........] ctx 2 int 100 bio 0 0

/dev/hda:

Model=SAMSUNG SV0411N, FwRev=UA100-07, SerialNo=S01RJ10X122455
Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
RawCHS=16383/16/63, TrkSize=34902, SectSize=554, ECCbytes=4
BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=16
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=78242976
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
AdvancedPM=no WriteCache=enabled
Drive conforms to: (null):

* signifies the current active mode

Note: CPU is mostly busy in system (S), not in disk wait (D).
My CPU is not that new:

# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 11
model name : Intel(R) Celeron(TM) CPU 1200MHz
stepping : 1
cpu MHz : 1196.195
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse
bogomips : 2359.29


hdc (an old disk):

13:52:38.701903 cpu [..........] ctx 3 int 101 bio 0 0
13:52:38.801869 cpu [..........] ctx 2 int 100 bio 0 0
13:52:38.901849 cpu [..........] ctx 2 int 101 bio 0 0
13:52:39.001830 cpu [..........] ctx 4 int 101 bio 0 0
13:52:39.101107 cpu [SUDDDDDDI.] ctx 24 int 108 bio 1.0M 0
13:52:39.202024 cpu [SSDDDDDDDD] ctx 21 int 111 bio 1.2M 0
13:52:39.301787 cpu [SSDDDDDDDD] ctx 21 int 109 bio 1.2M 0
13:52:39.401764 cpu [SSDDDDDDDI] ctx 27 int 112 bio 1.2M 0
13:52:39.501744 cpu [SSDDDDDDDD] ctx 22 int 110 bio 1.2M 0
13:52:39.601725 cpu [SSDDDDDDDI] ctx 24 int 111 bio 1.2M 0
13:52:39.701702 cpu [SSDDDDDDDD] ctx 27 int 110 bio 1.2M 0
13:52:39.801677 cpu [SSDDDDDDDD] ctx 28 int 111 bio 1.2M 0
13:52:39.901909 cpu [SSDDDDDDDI] ctx 26 int 110 bio 1.2M 0
13:52:40.001772 cpu [SSDDDDDDDD] ctx 25 int 110 bio 1.1M 0
13:52:40.100723 cpu [SSDDDDDDDI] ctx 25 int 111 bio 1.2M 0
13:52:40.201718 cpu [SSSDDDDDDD] ctx 26 int 112 bio 1.2M 0
13:52:40.301569 cpu [SSDDDDDDDD] ctx 28 int 109 bio 1.2M 0
13:52:40.402552 cpu [SSDDDDDDDI] ctx 23 int 111 bio 1.2M 0
13:52:40.502535 cpu [SSDDDDDDDD] ctx 22 int 110 bio 1.2M 0
13:52:40.601507 cpu [SSDDDDDDDI] ctx 26 int 111 bio 1.2M 0
13:52:40.701493 cpu [SSDDDDDDDD] ctx 23 int 110 bio 1.2M 0
13:52:40.801470 cpu [SSDDDDDDDI] ctx 24 int 110 bio 1.2M 0
13:52:40.901451 cpu [SSDDDDDDDD] ctx 23 int 111 bio 1.2M 0
13:52:41.001448 cpu [SSDDDDDDDI] ctx 26 int 110 bio 1.2M 0
13:52:41.101449 cpu [SSDDDDDDDD] ctx 24 int 111 bio 1.1M 0
13:52:41.201394 cpu [SSDDDDDDDD] ctx 24 int 110 bio 1.2M 0
13:52:41.302404 cpu [SSDDDDDDDI] ctx 23 int 112 bio 1.2M 0
13:52:41.402345 cpu [SSDDDDDDDD] ctx 23 int 109 bio 1.2M 0
13:52:41.501318 cpu [SSDDDDDDDI] ctx 23 int 109 bio 1.2M 0
13:52:41.602298 cpu [SSDDDDDDDD] ctx 26 int 115 bio 1.2M 0
13:52:41.702281 cpu [SSDDDDDDDI] ctx 23 int 112 bio 1.2M 0
13:52:41.802260 cpu [SSDDDDDDDD] ctx 24 int 110 bio 1.2M 0
13:52:41.902239 cpu [SSSDDDDDDI] ctx 35 int 116 bio 1.2M 124k
13:52:42.002220 cpu [SSDDDDDDDD] ctx 24 int 132 bio 1.2M 92k
13:52:42.102199 cpu [SSDDDDDDDI] ctx 25 int 113 bio 1.2M 0
13:52:42.203194 cpu [SSDDDDDDDD] ctx 23 int 111 bio 1.2M 0
13:52:42.302944 cpu [SSDDDDDDDD] ctx 22 int 111 bio 1.2M 0
13:52:42.402144 cpu [SSDDDDDDDI] ctx 23 int 109 bio 1.1M 0
13:52:42.502114 cpu [SSDDDDDDDD] ctx 22 int 110 bio 1.2M 0
13:52:42.601088 cpu [SSDDDDDDDI] ctx 27 int 110 bio 1.2M 0
13:52:42.702070 cpu [SSDDDDDDDD] ctx 23 int 113 bio 1.2M 0
13:52:42.802048 cpu [SSSUDDDDDD] ctx 23 int 110 bio 1.2M 0
13:52:42.902029 cpu [SSDDDDDDDI] ctx 23 int 111 bio 1.2M 0
13:52:43.002007 cpu [SSDDDDDDDD] ctx 25 int 111 bio 1.2M 0
13:52:43.101981 cpu [SSDDDDDDDD] ctx 25 int 111 bio 1.2M 0
13:52:43.201966 cpu [SSDDDDDDDI] ctx 23 int 111 bio 1.2M 0
13:52:43.301945 cpu [SSDDDDDDDD] ctx 24 int 110 bio 1.2M 0
13:52:43.402606 cpu [SSDDDDDDDI] ctx 27 int 111 bio 1.2M 0
13:52:43.501933 cpu [SSDDDDDDDD] ctx 23 int 109 bio 1.1M 0
13:52:43.601882 cpu [SSDDDDDDDI] ctx 24 int 112 bio 1.2M 0
13:52:43.701860 cpu [SSDDDDDDDD] ctx 23 int 112 bio 1.2M 0
13:52:43.801839 cpu [SSDDDDDDDI] ctx 23 int 110 bio 1.2M 0
13:52:43.901820 cpu [SSDDDDDDDD] ctx 24 int 111 bio 1.2M 0
13:52:44.001792 cpu [SSDDDDDDDD] ctx 24 int 111 bio 1.2M 0
13:52:44.101809 cpu [SSSDDDDDDI] ctx 23 int 110 bio 996k 0
13:52:44.201740 cpu [SSSSSSSS..] ctx 12 int 100 bio 0 0
13:52:44.301706 cpu [..........] ctx 2 int 100 bio 0 0
13:52:44.402686 cpu [..........] ctx 2 int 102 bio 0 0
13:52:44.502664 cpu [..........] ctx 2 int 100 bio 0 0
13:52:44.602643 cpu [..........] ctx 4 int 100 bio 0 0
13:52:44.702621 cpu [..........] ctx 4 int 104 bio 0 0

/dev/hdc:

Model=Maxtor 90640D4, FwRev=PAS24F15, SerialNo=V41Y3ABA
Config={ Fixed }
RawCHS=13328/15/63, TrkSize=0, SectSize=0, ECCbytes=29
BuffType=DualPortCache, BuffSize=256kB, MaxMultSect=16, MultSect=16
CurCHS=13328/15/63, CurSects=12594960, LBA=yes, LBAsects=12594960
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 *udma2
AdvancedPM=no WriteCache=enabled
Drive conforms to: ATA/ATAPI-4 T13 1153D revision 17: 1 2 3 4

* signifies the current active mode
--
vda

2005-09-18 13:43:40

by Al Boldi

Subject: Re: Eradic disk access during reads

Denis Vlasenko wrote:
> On Saturday 17 September 2005 19:38, Al Boldi wrote:
> > > # dd if=/dev/hda of=/dev/null bs=16M
> > 2.4.31 # nmeter t6 c x i b d100
> > 18:56:36.009981 cpu [SSSSSSSS..] ctx 145 int 86 bio 4.7M 0
> > 18:56:36.110327 cpu [SSSSSSSS..] ctx 145 int 87 bio 4.8M 0
> > 18:56:36.210735 cpu [SSSSSSSS..] ctx 139 int 88 bio 4.7M 0
> > 18:56:36.310315 cpu [SSSSSSSS..] ctx 142 int 85 bio 4.7M 0
> >
> > 2.6.13 # nmeter t6 c x i b d100
> > 18:09:22.117959 cpu [SSSSSSSSSD] ctx 80 int 47 bio 4.7M 0
> > 18:09:22.217932 cpu [SSSSSSDDDD] ctx 83 int 48 bio 4.8M 0
> > 18:09:22.319200 cpu [SSSSDDDDDI] ctx 81 int 56 bio 4.7M 28k
> > 18:09:22.407979 cpu [SSSSSSSSSD] ctx 60 int 38 bio 3.8M 0
> > 18:09:22.517960 cpu [SSSSSSSSDI] ctx 95 int 61 bio 5.2M 52k
>
> Well, I do not see any bursts you mention...

Can you try 2.4.31?

> Anyway, my data:
> 13:52:09.201450 cpu [SSSSSSDDII] ctx 34 int 142 bio 4.1M 0
> 13:52:09.301176 cpu [SSSSSSSDDI] ctx 39 int 138 bio 3.8M 0
> 13:52:09.401806 cpu [SSSSSSSSDI] ctx 55 int 157 bio 4.3M 0
> 13:52:09.501077 cpu [SSSSDDDDDI] ctx 32 int 121 bio 2.2M 0
> 13:52:09.601067 cpu [SSSDDDDDDD] ctx 24 int 127 bio 1.7M 0
> 13:52:09.701108 cpu [SSSSSSSDDI] ctx 50 int 153 bio 4.1M 0
> 13:52:09.802156 cpu [SSSSDDDDII] ctx 34 int 139 bio 2.7M 0
> 13:52:09.902026 cpu [SSSSSSSSDD] ctx 44 int 152 bio 4.0M 0
> 13:52:10.002041 cpu [SSSSDDDDDI] ctx 39 int 126 bio 2.2M 0
> 13:52:10.100997 cpu [SSSSDDDDDI] ctx 46 int 129 bio 2.7M 0
> 13:52:10.201964 cpu [SSSSSSSDDI] ctx 57 int 139 bio 3.6M 0
> 13:52:10.303468 cpu [SSSSDDDDDI] ctx 34 int 133 bio 2.2M 0
> 13:52:10.401523 cpu [SSSSSSSDDI] ctx 50 int 145 bio 3.6M 4096
> 13:52:10.500934 cpu [SSSSSSSSII] ctx 66 int 139 bio 4.1M 0

Does not look smooth.

> /dev/hda:
> UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5

same here.

> My CPU is not that new:

PII - 400MHz here.

> hdc (an old disk):
> 13:52:40.201718 cpu [SSSDDDDDDD] ctx 26 int 112 bio 1.2M 0
> 13:52:40.301569 cpu [SSDDDDDDDD] ctx 28 int 109 bio 1.2M 0
> 13:52:40.402552 cpu [SSDDDDDDDI] ctx 23 int 111 bio 1.2M 0
> 13:52:40.502535 cpu [SSDDDDDDDD] ctx 22 int 110 bio 1.2M 0
> 13:52:40.601507 cpu [SSDDDDDDDI] ctx 26 int 111 bio 1.2M 0
> 13:52:40.701493 cpu [SSDDDDDDDD] ctx 23 int 110 bio 1.2M 0

Looks smooth.

Also, great meter! Best of all does not hog the CPU!
Could you add a top3 procs display?

Thanks!

--
Al

2005-09-18 16:03:00

by Denis Vlasenko

Subject: Re: Eradic disk access during reads

> > > > # dd if=/dev/hda of=/dev/null bs=16M
> > > 2.4.31 # nmeter t6 c x i b d100
> > > 18:56:36.009981 cpu [SSSSSSSS..] ctx 145 int 86 bio 4.7M 0
> > > 18:56:36.110327 cpu [SSSSSSSS..] ctx 145 int 87 bio 4.8M 0
> > > 18:56:36.210735 cpu [SSSSSSSS..] ctx 139 int 88 bio 4.7M 0
> > > 18:56:36.310315 cpu [SSSSSSSS..] ctx 142 int 85 bio 4.7M 0
> > >
> > > 2.6.13 # nmeter t6 c x i b d100
> > > 18:09:22.117959 cpu [SSSSSSSSSD] ctx 80 int 47 bio 4.7M 0
> > > 18:09:22.217932 cpu [SSSSSSDDDD] ctx 83 int 48 bio 4.8M 0
> > > 18:09:22.319200 cpu [SSSSDDDDDI] ctx 81 int 56 bio 4.7M 28k
> > > 18:09:22.407979 cpu [SSSSSSSSSD] ctx 60 int 38 bio 3.8M 0
> > > 18:09:22.517960 cpu [SSSSSSSSDI] ctx 95 int 61 bio 5.2M 52k
> >
> > Well, I do not see any bursts you mention...
>
> Can you try 2.4.31?

Not easy. I guess I have quite a few 2.6-only things here...

> same here.
>
> > My CPU is not that new:
>
> PII - 400MHz here.

I meant that the kernel seems to eat too much CPU here. This
is not expected; I expected the CPU bar to be all D.

> > hdc (an old disk):
> > 13:52:40.201718 cpu [SSSDDDDDDD] ctx 26 int 112 bio 1.2M 0
> > 13:52:40.301569 cpu [SSDDDDDDDD] ctx 28 int 109 bio 1.2M 0
> > 13:52:40.402552 cpu [SSDDDDDDDI] ctx 23 int 111 bio 1.2M 0
> > 13:52:40.502535 cpu [SSDDDDDDDD] ctx 22 int 110 bio 1.2M 0
> > 13:52:40.601507 cpu [SSDDDDDDDI] ctx 26 int 111 bio 1.2M 0
> > 13:52:40.701493 cpu [SSDDDDDDDD] ctx 23 int 110 bio 1.2M 0
>
> Looks smooth.
>
> Also, great meter! Best of all does not hog the CPU!
> Could you add a top3 procs display?

What is a "top3 procs display"?
--
vda

2005-09-18 16:35:04

by Willy Tarreau

Subject: Re: Eradic disk access during reads

On Sun, Sep 18, 2005 at 07:02:17PM +0300, Denis Vlasenko wrote:
> > > My CPU is not that new:
> >
> > PII - 400MHz here.
>
> I meant that the kernel seems to eat too much CPU here. This
> is not expected; I expected the CPU bar to be all D.

This is often caused by disks running in PIO instead of DMA.
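
A quick way to rule that out, assuming hdparm is available (just a sketch):

  hdparm -d /dev/hda                 # "using_dma = 1 (on)" means DMA is in use
  hdparm -i /dev/hda | grep -i dma   # the '*' marks the currently selected mode
  hdparm -d1 /dev/hda                # turn DMA on if it was off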

> > Also, great meter! Best of all does not hog the CPU!
> > Could you add a top3 procs display?
>
> What is a "top3 procs display"?

probably something which will turn your tool into sort of a complex and
unusable one when another session running 'top' could do the trick.

Oh, BTW, the first reason I wrote my tool was to avoid copying into
/dev/null which consumes a small amount of CPU. Thus, I made it a pure
data eater. It might be interesting to run it instead of 'dd' while your
tool is running, to see if system usage decreases a bit.
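
Concretely, something along these lines (a sketch; both commands were shown
earlier in the thread):

  # console 1: readspeed itself eats the data, no copy into /dev/null
  readspeed < /dev/hda | tr '\r' '\n' > readspeed.log
  # console 2: watch the cpu bar and bio rate at 100 ms resolution
  nmeter t6 c x i b d100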

Regards,
Willy

2005-09-18 18:10:53

by Al Boldi

Subject: Re: Eradic disk access during reads

Willy Tarreau wrote:
> On Sun, Sep 18, 2005 at 07:02:17PM +0300, Denis Vlasenko wrote:
> > > Also, great meter! Best of all does not hog the CPU!
> > > Could you add a top3 procs display?
> >
> > What is a "top3 procs display"?

top 3 procs that eat most of the CPU.

> probably something which will turn your tool into sort of a complex and
> unusable one when another session running 'top' could do the trick.

Top is super-expensive.
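
For a one-shot snapshot without running top, something like this is usually
enough (assuming procps ps; a sketch only):

  # header line plus the 3 processes currently using the most CPU
  ps -eo pcpu,pid,comm --sort=-pcpu | head -4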

> Oh, BTW, the first reason I wrote my tool was to avoid copying into
> /dev/null which consumes a small amount of CPU. Thus, I made it a pure
> data eater. It might be interesting to run it instead of 'dd' while your
> tool is running, to see if system usage decreases a bit.

Good idea!

Thanks!

--
Al