Hi all,
this has now happened to me five times, so the threshold
for writing to this list has been reached :-) :
(kernel 2.4.21-9.TLsmp BTW)
idea for enhancement of software raid 1:
every time the raid determines that a sector cannot
be read it could at least try to overwrite the bad area
with good data from the other disk.
Doing a re-sync of the raid happened to make the failed disk
error-free again. (It's a 200 GB disk, so re-syncing
takes some time.)
In all cases a SMART scan showed the sector as genuinely bad;
after the resync it was readable again and the SMART
scan was error-free again.
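Roughly what I have in mind, sketched in user space (plain files stand in for the mirror halves; the function and names are made up for illustration, this is not kernel code):

```python
# Illustrative sketch of the proposed RAID1 read-repair idea --
# not kernel code; the two "disks" are just seekable file objects.

SECTOR = 512  # bytes per sector

def mirror_read(disks, sector):
    """Try each mirror half in turn; when one copy is unreadable,
    rewrite it with the good data instead of failing the whole disk."""
    data = None
    bad = []
    for d in disks:
        try:
            d.seek(sector * SECTOR)
            data = d.read(SECTOR)
            break
        except OSError:
            bad.append(d)              # remember which copy failed
    if data is None:
        raise OSError("sector %d unreadable on all mirrors" % sector)
    for d in bad:                      # the rewrite lets the drive fix
        d.seek(sector * SECTOR)        # the sector in place or remap it
        d.write(data)
    return data
```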
I already had the disk replaced in the past (same model,
Model=Maxtor 6Y200P0, FwRev=YAR41BW0, SerialNo=Y63J7TSE)
the disk is not hot (SMART shows 19 °C, which may
be correct since the room is air-conditioned) and
the bad sectors were not that close together on
the surface:
# 1 Extended off-line Completed: read failure 40% 3512 0x02ab8a02
# 6 Extended off-line Completed: read failure 40% 2308 0x00057a35
# 9 Extended off-line Completed: read failure 40% 2291 0x01b63b6a
#11 Extended off-line Completed: read failure 40% 1861 0x01f67b1a
#18 Extended off-line Completed: read failure 40% 679 0x01d7052a
Interesting (look at Reallocated_Sector_Ct - it is still zero...):
4 Start_Stop_Count 0x0032 253 253 000 Old_age - 3
5 Reallocated_Sector_Ct 0x0033 253 253 063 Pre-fail - 0
6 Read_Channel_Margin 0x0001 253 253 100 Pre-fail - 0
What do you think? Ideas welcome.
Greetings and thanks for your time,
Karl
--
Karl Kiniger mailto:[email protected]
GE Medical Systems Kretztechnik GmbH & Co OHG
Tiefenbach 15 Tel: (++43) 7682-3800-710
A-4871 Zipf Austria Fax: (++43) 7682-3800-47
On 2005-01-18T22:18:01, "Kiniger, Karl (GE Healthcare)" <[email protected]> wrote:
> idea for enhancement of software raid 1:
>
> every time the raid determines that a sector cannot
> be read it could at least try to overwrite the bad area
> with good data from the other disk.
The idea is good and I'm sure we'd love to get a patch ;-)
Sincerely,
Lars Marowsky-Brée <[email protected]>
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business
On Tue, Jan 18, 2005 at 10:46:05PM +0100, Lars Marowsky-Bree wrote:
> On 2005-01-18T22:18:01, "Kiniger, Karl (GE Healthcare)" <[email protected]> wrote:
>
> > idea for enhancement of software raid 1:
> >
> > every time the raid determines that a sector cannot
> > be read it could at least try to overwrite the bad area
> > with good data from the other disk.
>
> The idea is good and I'm sure we'd love to get a patch ;-)
Don't count on me as a coder (absolutely no spare time
for the next couple of months).
some random thoughts:
nowadays hardware sector sizes are much bigger than 512 bytes and
the read error may affect some sectors +- the sector which actually
returned the error.
to keep the handling in userspace as much as possible:
the real problem is the long resync time. therefore it would
be sufficient to have a concept of "defective areas" per partition
and drive (a few of them, perhaps four or so, would be enough)
which will be excluded from reads/writes and some means to
re-synchronize these "defective areas" from the good counterparts
of the other disk. This would avoid having the whole partition being
marked as defective.
The repair could then be done in userspace given some kernel support.
There might be some corner cases though (e.g. a defective physical
sector spanning more than one partition) but I think they would be
rare.
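Something like this is what I mean by tracking a few "defective areas" per mirror half (again only an illustrative sketch with invented names, not a real md interface):

```python
# Sketch of the "defective areas" idea: remember a handful of bad
# extents per mirror half and resync only those, not the whole disk.
# The names and the limit of four areas are invented for illustration.

MAX_AREAS = 4

class MirrorHalf:
    def __init__(self, dev):
        self.dev = dev
        self.defective = []        # list of (start_sector, length)

    def mark_defective(self, start, length):
        if len(self.defective) >= MAX_AREAS:
            raise RuntimeError("too many bad areas -- fail whole disk")
        self.defective.append((start, length))

    def is_readable(self, sector):
        """Sectors inside a marked area are excluded from reads."""
        return not any(s <= sector < s + n for s, n in self.defective)

def resync_defective(bad_half, good_half, copy_fn):
    """Re-copy only the marked areas from the good counterpart,
    then clear the marks -- no full-partition resync needed."""
    for start, length in bad_half.defective:
        copy_fn(good_half.dev, bad_half.dev, start, length)
    bad_half.defective.clear()
```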
Has anybody else had problems with the newer Maxtors
(200 GB and up) ?
Karl
--
Karl Kiniger mailto:[email protected]
GE Medical Systems Kretztechnik GmbH & Co OHG
Tiefenbach 15 Tel: (++43) 7682-3800-710
A-4871 Zipf Austria Fax: (++43) 7682-3800-47
On Wed, Jan 19, 2005 at 11:48:52AM +0100, Kiniger wrote:
...
> some random thoughts:
>
> nowadays hardware sector sizes are much bigger than 512 bytes
No :)
> and
> the read error may affect some sectors +- the sector which actually
> returned the error.
That's right
>
> to keep the handling in userspace as much as possible:
>
> the real problem is the long resync time. therefore it would
> be sufficient to have a concept of "defective areas" per partition
> and drive (a few of them, perhaps four or so, would be enough)
> which will be excluded from reads/writes and some means to
> re-synchronize these "defective areas" from the good counterparts
> of the other disk. This would avoid having the whole partition being
> marked as defective.
I wonder if it's really worth it.
The original idea has some merit I think - but what you're suggesting
here is almost "bad block remapping" with transparent recovery and user
space policy agents etc. etc.
If a drive has problems reading the platter, it can usually be corrected
by overwriting the given sector (either the drive can actually overwrite
the sector in place, or it will re-allocate it with severe read
performance penalties following). But there's a reason why that sector
went bad, and you really want to get the disk replaced.
I think the current policy of marking the disk as failed when it has
failed is sensible.
Just my 0.02 Euro
--
/ jakob
Having looked at a lot of disks, I think that it is definitely worth
forcing a write to try to invoke the remap. With large drives, you
usually see several bad sectors in the normal case (drive vendors
allocate up to a couple of thousand spare sectors just for remapping).
Depending on the type of drive error, the act of writing is likely to
clean the questionable sector and leave you with a perfectly fine disk.
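A user-space repair along those lines could be as simple as the following sketch (a plain file stands in for the device node, and the good data would of course have to come from the other mirror; real code would also want O_DIRECT so the write is not satisfied from the page cache):

```python
# Sketch: force-rewrite one suspect sector so the drive can either
# fix it in place or remap it to one of its spare sectors.
import os

SECTOR = 512  # bytes per sector

def rewrite_sector(dev, sector, good_data):
    assert len(good_data) == SECTOR
    with open(dev, "r+b") as f:
        f.seek(sector * SECTOR)
        f.write(good_data)
        f.flush()
        os.fsync(f.fileno())   # make sure it actually reaches the disk
```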
Ric
Jakob Oestergaard wrote:
>I wonder if it's really worth it.
>
>I think the current policy of marking the disk as failed when it has
>failed is sensible.
>
> Depending on the type of drive error, the act of writing is likely to
> clean the questionable sector and leave you with a perfectly fine disk.
>
Definitely - and in fact some of the other Linux tools already know
about this and do it (for example, ext3's fsck can force-rewrite bad
blocks if you want).