2001-12-06 06:15:04

by Daniel Stodden

Subject: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }


hi.

over the last few days, i've been experiencing lengthy syslog
complaints like the following:

Dec 6 06:33:42 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Dec 6 06:33:42 bitch kernel: hdc: dma_intr: error=0x40 {
UncorrectableError }, LBAsect=1753708, sector=188216
Dec 6 06:33:42 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
sector 188216
Dec 6 06:33:42 bitch kernel: vs-13070: reiserfs_read_inode2: i/o
failure occurred trying to find stat data of [113248 120445 0x0 SD]
Dec 6 06:33:47 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Dec 6 06:33:47 bitch kernel: hdc: dma_intr: error=0x40 {
UncorrectableError }, LBAsect=1753708, sector=188216
Dec 6 06:33:47 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
sector 188216
Dec 6 06:33:47 bitch kernel: vs-13070: reiserfs_read_inode2: i/o
failure occurred trying to find stat data of [113248 120439 0x0 SD]
Dec 6 06:33:52 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Dec 6 06:33:52 bitch kernel: hdc: dma_intr: error=0x40 {
UncorrectableError }, LBAsect=1753708, sector=188216
Dec 6 06:33:52 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
sector 188216
Dec 6 06:33:52 bitch kernel: vs-13070: reiserfs_read_inode2: i/o
failure occurred trying to find stat data of [113248 120449 0x0 SD]
Dec 6 06:33:57 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
SeekComplete Error }
Dec 6 06:33:57 bitch kernel: hdc: dma_intr: error=0x40 {
UncorrectableError }, LBAsect=1753708, sector=188216
Dec 6 06:33:57 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
sector 188216

this always goes on for about 2 minutes, then may disappear for the rest of
the day or longer.
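
fwiw, status=0x51 and error=0x40 decode directly from the standard ATA
status/error register bits; a minimal user-space sketch of the decode
(the helper itself is illustrative only):

#include <stdio.h>

/* standard ATA status register bits */
#define ATA_STAT_ERR   0x01   /* ERR:  error register is valid */
#define ATA_STAT_DSC   0x10   /* DSC:  "SeekComplete" */
#define ATA_STAT_DRDY  0x40   /* DRDY: "DriveReady" */

/* standard ATA error register bits */
#define ATA_ERR_UNC    0x40   /* UNC: uncorrectable data error */

int main(void)
{
        unsigned char status = 0x51, error = 0x40;

        /* 0x51 == DRDY | DSC | ERR: the drive is ready and the seek
         * finished, but the command itself failed */
        if ((status & ATA_STAT_ERR) && (error & ATA_ERR_UNC))
                printf("uncorrectable media error (bad sector)\n");
        return 0;
}

(UNC means the drive itself is reporting an unreadable sector.)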

my main question is whether this could be a kernel problem or whether
i just need to hurry a little getting my backups up to date and think
about a new disk.


- linux is 2.4.16 + preemption + buggy nvidia-1.0-2013 modules
- board is an asus p5a, k6-2 550MHz, Aladdin chipset

thanx,
dns


bitch:/home/dns/src/mcnet/driver# uname -a
Linux bitch 2.4.16 #1 Wed Nov 28 03:18:38 CET 2001 i586 unknown

bitch:/home/dns/src/mcnet/driver# mount
/dev/md1 on / type reiserfs (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda4 on /boot type ext2 (rw)
/dev/md0 on /var type reiserfs (rw)
/dev/md2 on /home type reiserfs (rw)
/dev/hda3 on /export01 type reiserfs (rw)
/dev/hdc3 on /export02 type reiserfs (rw)
/dev/sda1 on /export03 type reiserfs (rw)
/proc/bus/usb on /proc/bus/usb type usbdevfs (rw)

bitch:/proc/ide# lsmod
Module Size Used by Tainted: P
ov511 37476 0
videodev 5120 1 [ov511]
hisax 137888 3 (autoclean)
isdn 104800 4 (autoclean) [hisax]
slhc 4704 1 (autoclean) [isdn]
sd_mod 10588 2 (autoclean)
ide-scsi 7776 0
ncr53c8xx 52000 1
scsi_mod 91384 3 [sd_mod ide-scsi ncr53c8xx]
eeprom 3200 0 (unused)
w83781d 17312 0 (unused)
i2c-proc 6176 0 [eeprom w83781d]
i2c-ali15x3 4484 0 (unused)
i2c-core 13288 0 [eeprom w83781d i2c-proc
i2c-ali15x3]
mousedev 4128 1
hid 13024 0 (unused)
input 3616 0 [mousedev hid]
usb-ohci 21024 0 (unused)
usbcore 49824 1 [ov511 hid usb-ohci]
rtc 6360 0 (autoclean)
unix 17124 11 (autoclean)

bitch:/proc/ide# cat /proc/interrupts
CPU0
0: 106679 XT-PIC timer
1: 4919 XT-PIC keyboard
2: 0 XT-PIC cascade
7: 108 XT-PIC HiSax
8: 2 XT-PIC rtc
10: 71 XT-PIC ncr53c8xx
12: 4091 XT-PIC usb-ohci
14: 25224 XT-PIC ide0
15: 17892 XT-PIC ide1
NMI: 0
ERR: 0

bitch:/home/dns/src/mcnet/driver# cat /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
md2 : active raid0 hdc7[1] hda7[0]
31848576 blocks 4k chunks

md0 : active raid0 hdc5[1] hda5[0]
1049088 blocks 4k chunks

md1 : active raid0 hdc6[1] hda6[0]
8389376 blocks 4k chunks

unused devices: <none>

bitch:/home/dns/src/mcnet/driver# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 8
model name : AMD-K6(tm) 3D processor
stepping : 12
cpu MHz : 551.273
cache size : 64 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr mce cx8 pge mmx syscall 3dnow
k6_mtrr
bogomips : 1101.00


bitch:/proc/ide# find . -type f | xargs cat

Ali M15x3 Chipset.
------------------
PCI Clock: 33.
CD_ROM FIFO:No , CD_ROM DMA:Yes
FIFO Status: contains 0 Words, runs.

-------------------primary channel-------------------secondary channel---------

channel status: Off Off
both channels togth: Yes Yes
Channel state: OK OK
Add. Setup Timing: 1T 1T
Command Act. Count: 8T 8T
Command Rec. Count: 16T 16T

----------------drive0-----------drive1------------drive0-----------drive1------

DMA enabled: Yes Yes Yes No
FIFO threshold: 4 Words 4 Words 8 Words 8 Words
FIFO mode: FIFO Off FIFO Off FIFO On FIFO On
Dt RW act. Cnt 3T 3T 3T 8T
Dt RW rec. Cnt 1T 1T 1T 16T

-----------------------------------UDMA Timings--------------------------------

UDMA: OK No OK No
UDMA timings: 2.5T 2.5T 2.5T 1.5T

ide-scsi version 0.9
ide-disk version 1.10
0010 3c01 0000 0000 0000 0000 0000 3202
0000 0000 0000 0000 0000 1803 0000 0000
0000 0000 0000 0004 0000 0000 0000 0000
0000 0505 0000 0000 0000 0000 0000 4307
0000 0000 0000 0000 0000 1408 0000 0000
0000 0000 0000 0009 0000 0000 0000 0000
0000 3c0a 0000 0000 0000 0000 0000 000c
0000 0000 0000 0000 0000 32c0 0000 0000
0000 0000 0000 32c1 0000 0000 0000 0000
0000 00c2 0000 0000 0000 0000 0000 00c4
0000 0000 0000 0000 0000 00c5 0000 0000
0000 0000 0000 00c6 0000 0000 0000 0000
0000 00c7 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 d800
0010 0b01 6400 0064 0000 0000 0000 0502
6400 0064 0000 0000 0000 0703 6100 e561
dc00 0400 0000 1204 6400 2764 0000 0000
0000 3305 6400 0164 0000 0000 0000 0b07
6400 0064 0000 0000 0000 0508 6400 0064
0000 0000 0000 1209 6400 7164 0012 0000
0000 130a 6400 0064 0000 0000 0000 320c
6400 2764 0000 0000 0000 32c0 6400 2764
0000 0000 0000 12c1 6400 2764 0000 0000
0000 02c2 b100 1fb1 0000 0000 0000 32c4
6400 0164 0000 0000 0000 22c5 6400 0164
0000 0000 0000 08c6 6400 0064 0000 0000
0000 0ac7 c800 00c8 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 06e2 1b01
0003 0001 1302 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 2400
physical 59560/16/63
logical 59560/16/63
1916
60036480
name value min max mode
---- ----- --- --- ----
bios_cyl 59560 0 65535 rw
bios_head 16 0 255 rw
bios_sect 63 0 63 rw
breada_readahead 4 0 127 rw
bswap 0 0 1 r
current_speed 66 0 69 rw
failures 0 0 65535 rw
file_readahead 124 0 16384 rw
ide_scsi 0 0 1 rw
init_speed 66 0 69 rw
io_32bit 0 0 3 rw
keepsettings 0 0 1 rw
lun 0 0 7 rw
max_failures 1 0 65535 rw
max_kb_per_request 127 1 127 rw
multcount 0 0 8 rw
nice1 1 0 1 rw
nowerr 0 0 1 rw
number 2 0 3 rw
pio_mode write-only 0 255 w
slow 0 0 1 rw
unmaskirq 0 0 1 rw
using_dma 1 0 1 rw
IBM-DTLA-307030
disk
045a 3fff 37c8 0010 0000 0000 003f 0000
0000 0000 2020 2020 2020 2020 2059 4b30
594b 464c 3733 3633 0003 0ef8 0028 5458
344f 4135 3043 4942 4d2d 4454 4c41 2d33
3037 3033 3020 2020 2020 2020 2020 2020
2020 2020 2020 2020 2020 2020 2020 8010
0000 2f00 4000 0200 0200 0007 3fff 0010
003f fc10 00fb 0100 1580 0394 0000 0007
0003 0078 0078 00f0 0078 0000 0000 0000
0000 0000 0000 001f 0000 0000 0000 0000
003c 0015 74eb 43ea 4000 7469 0002 4000
043f 000d 0000 0000 fffe 600b 80fe 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0001 000b 0001 0000 0000 001b 0000 0000
4000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 61a5
ide-disk version 1.10
pci
ide0
pci bus 00 device 78 vid 10b9 did 5229 channel 1
b9 10 29 52 05 00 80 02 c1 8a 01 01 00 20 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 01 02 04
00 00 00 7a 00 00 00 00 00 00 00 4a 00 80 ba 3a
00 00 00 89 00 55 2a 0a 01 00 31 31 01 00 31 00
00 00 1d 00 02 00 01 01 00 00 00 00 08 00 ca ec
e0 07 4f c2 00 00 00 00 21 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
1
2147483647
name value min max mode
---- ----- --- --- ----
bios_cyl 0 0 1023 rw
bios_head 0 0 255 rw
bios_sect 0 0 63 rw
current_speed 34 0 69 rw
ide_scsi 0 0 1 rw
init_speed 34 0 69 rw
io_32bit 0 0 3 rw
keepsettings 0 0 1 rw
log 0 0 1 rw
nice1 1 0 1 rw
number 1 0 3 rw
pio_mode write-only 0 255 w
slow 0 0 1 rw
transform 1 0 3 rw
unmaskirq 0 0 1 rw
using_dma 1 0 1 rw
ASUS CD-S500/A
cdrom
85c0 0000 0000 0000 0000 0000 0000 0000
0000 0000 2020 2020 2020 2020 2020 2020
2020 2020 2020 2020 0000 0000 0000 5631
2e31 4b20 2020 4153 5553 2043 442d 5335
3030 2f41 2020 2020 2020 2020 2020 2020
2020 2020 2020 2020 2020 2020 2020 0000
0000 0b00 0000 0400 0200 0006 0000 0000
0000 0000 0000 0000 0000 0000 0000 0407
0003 0078 0078 00e3 0078 0000 0000 0000
0000 0004 0009 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0007 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
ide-scsi version 0.9
0010 3c01 0000 0000 0000 0000 0000 3202
0000 0000 0000 0000 0000 1803 0000 0000
0000 0000 0000 0004 0000 0000 0000 0000
0000 0505 0000 0000 0000 0000 0000 4307
0000 0000 0000 0000 0000 1408 0000 0000
0000 0000 0000 0009 0000 0000 0000 0000
0000 3c0a 0000 0000 0000 0000 0000 000c
0000 0000 0000 0000 0000 32c0 0000 0000
0000 0000 0000 32c1 0000 0000 0000 0000
0000 00c2 0000 0000 0000 0000 0000 00c4
0000 0000 0000 0000 0000 00c5 0000 0000
0000 0000 0000 00c6 0000 0000 0000 0000
0000 00c7 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 d800
0010 0b01 6400 0064 0000 0000 0000 0502
6400 0064 0000 0000 0000 0703 6000 e360
e500 0400 0000 1204 6400 1064 0000 0000
0000 3305 6400 0064 0000 0000 0000 0b07
6400 0064 0000 0000 0000 0508 6400 0064
0000 0000 0000 1209 6400 5464 000e 0000
0000 130a 6400 0064 0000 0000 0000 320c
6400 1064 0000 0000 0000 32c0 6400 1064
0000 0000 0000 12c1 6400 1064 0000 0000
0000 02c2 a600 21a6 0000 0000 0000 32c4
6400 0064 0000 0000 0000 22c5 6400 0064
0000 0000 0000 08c6 6400 0064 0000 0000
0000 0ac7 c800 00c8 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 06e2 1b01
0003 0001 1302 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 b300
physical 59560/16/63
logical 59560/16/63
1916
60036480
name value min max mode
---- ----- --- --- ----
bios_cyl 59560 0 65535 rw
bios_head 16 0 255 rw
bios_sect 63 0 63 rw
breada_readahead 4 0 127 rw
bswap 0 0 1 r
current_speed 66 0 69 rw
failures 0 0 65535 rw
file_readahead 124 0 16384 rw
ide_scsi 0 0 1 rw
init_speed 66 0 69 rw
io_32bit 0 0 3 rw
keepsettings 0 0 1 rw
lun 0 0 7 rw
max_failures 1 0 65535 rw
max_kb_per_request 127 1 127 rw
multcount 0 0 8 rw
nice1 1 0 1 rw
nowerr 0 0 1 rw
number 0 0 3 rw
pio_mode write-only 0 255 w
slow 0 0 1 rw
unmaskirq 0 0 1 rw
using_dma 1 0 1 rw
IBM-DTLA-307030
disk
045a 3fff 37c8 0010 0000 0000 003f 0000
0000 0000 2020 2020 2020 2020 2059 4b45
594b 4e4a 3137 3932 0003 0ef8 0028 5458
344f 4136 3841 4942 4d2d 4454 4c41 2d33
3037 3033 3020 2020 2020 2020 2020 2020
2020 2020 2020 2020 2020 2020 2020 8010
0000 2f00 4000 0200 0200 0007 3fff 0010
003f fc10 00fb 0100 1580 0394 0000 0007
0003 0078 0078 00f0 0078 0000 0000 0000
0000 0000 0000 001f 0000 0000 0000 0000
003c 0015 74eb 43ea 4000 7469 0002 4000
043f 000d 0000 0000 fffe 603b 80fe 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0001 000b 0000 0000 0000 001b 0000 0000
4000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 10a5
ide-disk version 1.10
pci
ide1
pci bus 00 device 78 vid 10b9 did 5229 channel 0
b9 10 29 52 05 00 80 02 c1 8a 01 01 00 20 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 01 02 04
00 00 00 7a 00 00 00 00 00 00 00 4a 00 80 ba 3a
00 00 00 89 00 55 2a 0a 01 00 31 31 01 00 31 00
00 00 1d 00 02 00 01 01 00 00 00 00 00 00 ec ec
4f c2 4f c2 00 00 00 00 21 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0


--
___________________________________________________________________________
mailto:[email protected]


2001-12-06 06:56:02

by Sven.Riedel

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Thu, Dec 06, 2001 at 07:13:13AM +0100, Daniel Stodden wrote:
> over the last few days, i've been experiencing lengthy syslog
> complaints like the following:
>
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
> SeekComplete Error }
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=1753708, sector=188216
> Dec 6 06:33:42 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
> sector 188216

Looks like your disk is trying to read defective sectors, i.e. the disk
is dying. Since you're using DTLAs, I'd run the drive fitness test, and
if that finds any bad sectors, contact IBM for an exchange _without_ doing
the low-level format the drive fitness test will suggest to you.

Sorry :)

Regs,
Sven

--
Sven Riedel [email protected]
Osteroeder Str. 6 / App. 13 [email protected]
38678 Clausthal "Call me bored, but don't call me boring."
- Larry Wall

2001-12-06 15:05:13

by Matthias Andree

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Thu, 06 Dec 2001, Daniel Stodden wrote:

> over the last few days, i've been experiencing lengthy syslog
> complaints like the following:
>
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
> SeekComplete Error }
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=1753708, sector=188216

> my main question is whether this could be a kernel problem or whether
> i just need to hurry a little getting my backups up to date and think
> about a new disk.

1. Get your backup done quickly, as long as the drive lasts.

2. Run IBM's drive fitness test. DO NOT USE the "ERASE DISK" (low-level
format) features, no matter what IBM may say; check the
German-language DTLA FAQ: http://www.cooling-solutions.de/dtla-faq/
Write down the technical result code. Also check Anandtech's FAQ
(English language): http://www.anandtech.com/guides/viewfaq.html?i=71

3. (applies to Germany only; it's about where to ship the drive):
a- drive < 6 months old: the dealer is liable for all costs arising from
the defect; mail-order dealers must, for example, refund postage

b- drive > 6 months old: request an RMA from IBM's website and ship the
drive to IBM; freight costs are at your own expense -- shipping in my
case was to the Netherlands.

3.II: Make up your mind whether to ask the dealer or IBM (whichever
applies) to replace the drive with one of another series. Usually
I'd do that, but that may not be feasible with RAID
configurations. On a single drive, I'd even ask the dealer to
replace it with another brand, or IBM to replace a DTLA-3070xx with
an IC35L0x0AVER07.

2001-12-23 07:25:34

by Andre Hedrick

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }


Daniel,

Wouldn't it be great if Linux did not have such a lame design for handling
these problems? Just imagine if we had a properly formed
FS/BLOCK/MAIN-LOOP/LOW_LEVEL model, wherein an error of this nature in a
write-to-media failure could be passed back up the pathway to the FS,
requesting the FS to re-issue the storing of data to a new location.

Too bad the global design lacks this ability, and I suspect that nobody
gives a damn or, even worse, ever thought of this idea.

Now, the importance of the model is not to promote the use of dying media,
but to be able to secure the content in buffer-cache, from app.-space, or
user-space to the media, and to flag UID(0) that we are in deep DOO-DOO
and had better start backing up the content now.

I am just waiting to rip the lunch of somebody who disagrees with me on
the importance of data-recovery and relocation upon media failures.
Anyone claiming it is not important because the predictive nature of
device failure is unknown should maybe ask an expert in the industry to
get you the answer. So let's have some fun and light off a really good
flame war!

Regards,

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

On 6 Dec 2001, Daniel Stodden wrote:

>
> hi.
>
> over the last few days, i've been experiencing lengthy syslog
> complaints like the following:
>
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: status=0x51 { DriveReady
> SeekComplete Error }
> Dec 6 06:33:42 bitch kernel: hdc: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=1753708, sector=188216
> Dec 6 06:33:42 bitch kernel: end_request: I/O error, dev 16:06 (hdc),
> sector 188216
>
> [...]
>
> my main question is whether this could be a kernel problem or whether
> i just need to hurry a little getting my backups up to date and think
> about a new disk.

2001-12-23 07:55:31

by Andrew Morton

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

Andre Hedrick wrote:
>
> Daniel,
>
> Wouldn't it be great if Linux did not have such a lame design for handling
> these problems? Just imagine if we had a properly formed
> FS/BLOCK/MAIN-LOOP/LOW_LEVEL model, wherein an error of this nature in a
> write-to-media failure could be passed back up the pathway to the FS,
> requesting the FS to re-issue the storing of data to a new location.
>
> Too bad the global design lacks this ability, and I suspect that nobody
> gives a damn or, even worse, ever thought of this idea.
>
> Now, the importance of the model is not to promote the use of dying media,
> but to be able to secure the content in buffer-cache, from app.-space, or
> user-space to the media, and to flag UID(0) that we are in deep DOO-DOO
> and had better start backing up the content now.

For the filesystems with which I am familiar, this feature would
be quite difficult to implement. Quite difficult indeed. And given
that it only provides recovery for write errors, its value seems
questionable.

Much easier would be for those applications which really care about
data integrity to fsync() the file before closing it. If either of those
calls returns an error, the *application* should take some form of
action to preserve the data. MTAs do this, for example.
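
A minimal sketch of that pattern, using plain POSIX calls (the helper is
illustrative, not from any particular program):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* write data, then force it to the media, checking both fsync()
 * and close() before trusting that the data made it to disk */
int save_file(const char *path, const char *buf, size_t len)
{
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;
        if (write(fd, buf, len) != (ssize_t)len) {
                close(fd);
                return -1;
        }
        if (fsync(fd) < 0) {    /* a media write error can surface here */
                close(fd);
                return -1;
        }
        return close(fd);       /* ...or, on some setups, here */
}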

That being said, our propagation of I/O errors is a bit wobbly in places
(and the loop device hides write IO errors completely). But that's a
separate issue.

On this one, I would be more interested in the opinion of sophisticated
*users* of linux rather than kernel developers, btw: whether this is
a valuable feature.


> I am just waiting to rip the lunch of somebody who disagrees with me on
> the importance of data-recovery and relocation upon media failures.
> Anyone claiming it is not important because the predictive nature of
> device failure is unknown should maybe ask an expert in the industry to
> get you the answer. So let's have some fun and light off a really good
> flame war!
>

umm... Daniel's error was on a read. The data is not, by definition,
in memory. It's gone.


What percentage of disk errors occur on writes, as opposed to reads?
If errors-on-writes preponderate then maybe you're on to something.
But I don't think they do?

2001-12-23 11:18:32

by Peter Osterlund

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

Andrew Morton <[email protected]> writes:

> Andre Hedrick wrote:
> >
> > Wouldn't it be great if Linux did not have such a lame design for handling
> > these problems? Just imagine if we had a properly formed
> > FS/BLOCK/MAIN-LOOP/LOW_LEVEL model, wherein an error of this nature in a
> > write-to-media failure could be passed back up the pathway to the FS,
> > requesting the FS to re-issue the storing of data to a new location.
> >
> > Too bad the global design lacks this ability, and I suspect that nobody
> > gives a damn or, even worse, ever thought of this idea.
>
> For the filesystems with which I am familiar, this feature would
> be quite difficult to implement. Quite difficult indeed. And given
> that it only provides recovery for write errors, its value seems
> questionable.

...

> What percentage of disk errors occur on writes, as opposed to reads?
> If errors-on-writes preponderate then maybe you're on to something.
> But I don't think they do?

One case where write errors are probably much more common than read
errors is packet writing on CDRW media. CDRW disks can only do a
limited number of writes to a given sector, and being able to remap
sectors on write errors can greatly increase the time a CDRW disk
remains usable.

The UDF filesystem has some code for bad block relocation
(udf_relocate_blocks) and the packet writing block "device" (it's a
layered device driver, somewhat like the loop device) has hooks for
requesting block relocation on I/O errors. This code is not working
yet though, and it seems quite complicated to get the relocation to
work properly when the file system is operating in asynchronous mode.
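
A toy sketch of the remap lookup such a layered driver might do before
passing a request down (all names here are hypothetical; this is not the
actual UDF or packet writing code):

#include <stdint.h>

#define MAX_REMAPS 128

/* hypothetical remap table: sector that failed a write -> spare sector */
struct remap_entry {
        uint32_t bad;
        uint32_t spare;
};

static struct remap_entry remap_table[MAX_REMAPS];
static int nr_remaps;

/* translate a sector before issuing the request to the real device */
static uint32_t remap_sector(uint32_t sector)
{
        int i;

        for (i = 0; i < nr_remaps; i++)
                if (remap_table[i].bad == sector)
                        return remap_table[i].spare;
        return sector;
}

The hard parts, as noted above, are allocating the spares, persisting the
table, and keeping it consistent while the file system writes
asynchronously.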

--
Peter Österlund [email protected]
Sköndalsvägen 35 http://w1.894.telia.com/~u89404340
S-128 66 Sköndal +46 8 942647
Sweden

2001-12-23 12:26:02

by T. A.

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

Well, I would find this feature useful. If I'm writing a file to a disk,
I kind of would like the system to try its best to make sure the data gets
written. If the filesystem encounters a bad sector then by all means, mark
the block as bad and write the data over to a good area of the disk. DOS
was able to do this; I'm not sure about Windows; it would be nice if Linux
did as well. Also, from what I've seen of the current way things are done,
at least the last time I encountered a bad sector, it's pretty bad. Many
times it isn't just one file that's lost to the wind, because the system
continuously tries to write to that bad sector until you get a chance to
take the disk offline and run e2fsck with the -c option to mark the block
as bad. And even then the problem is worse, as sometimes a sector only
fails part of the time. So running the -c option on a disk may not catch
all the errors in one pass, and if you run it again, previously marked bad
sectors would not get reflagged.

As for using fsync(), you can't guarantee all programmers will use it.
As for when bad sectors are found: many times a bad sector may not be
found until a write is attempted. And if a bad sector is found during a
read, well, it's obvious the system can't restore data that has already
been lost, and it's time to head for the backups. Just because you can't
recover from a read error doesn't mean you shouldn't recover from a write
error if you could.

This could also enable one very useful feature: a verify option. This
used to exist in DOS, in which the system would read the cluster after it
wrote it, thus verifying that the data was written successfully to the
disk. This feature was removed in Windows and never existed in Linux.
Strangely enough, I never had a problem with losing data when writing to
a floppy in DOS. In both Windows and Linux I have found it quite common
to write stuff to a floppy, then try using that floppy only to find that
the floppy apparently had bad sectors and one or more files that I
thought had been written, weren't.

----- Original Message -----
From: "Andrew Morton" <[email protected]>
To: "Andre Hedrick" <[email protected]>
Cc: "Daniel Stodden" <[email protected]>; <[email protected]>
Sent: Sunday, December 23, 2001 2:53 AM
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }


> [...]

2001-12-23 13:22:35

by Alan

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

> I am just waiting to rip the lunch of somebody who disagrees with me on
> the importance of data-recovery and relocation upon media failures.
> Anyone claiming it is not important because the predictive nature of
> device failure is unknown should maybe ask an expert in the industry to
> get you the answer. So let's have some fun and light off a really good
> flame war!

Why do you want to pass it up to the file system, when the fs probably
doesn't know what to do about it? I can see why you want to pass it back
as far up as you can until someone claims it.

Or are you assuming in most cases file systems would have an "io_failure"
function that reissues the request a few times then marks the fs read only
and screams?
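
A rough sketch of what such a hook could look like, in 2.4-era
pseudo-kernel C (the hook and its wiring into the fs are hypothetical):

#include <linux/fs.h>
#include <linux/locks.h>
#include <linux/kernel.h>
#include <linux/errno.h>

#define MAX_IO_RETRIES 3

/* hypothetical per-fs hook: retry a failed write a few times,
 * then remount read-only and scream */
static int example_io_failure(struct super_block *sb,
                              struct buffer_head *bh)
{
        int i;

        for (i = 0; i < MAX_IO_RETRIES; i++) {
                mark_buffer_dirty(bh);
                ll_rw_block(WRITE, 1, &bh);     /* reissue the request */
                wait_on_buffer(bh);
                if (buffer_uptodate(bh))
                        return 0;               /* it made it this time */
        }
        sb->s_flags |= MS_RDONLY;               /* stop further damage */
        printk(KERN_CRIT "io_failure: persistent write error, "
               "remounting read-only - back up your data now\n");
        return -EIO;
}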

Incidentally the EVMS IBM volume manager code does support bad block
remapping in some situations.

Alan

2001-12-23 22:09:51

by Andre Hedrick

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Sun, 23 Dec 2001, Alan Cox wrote:

> > I am just waiting to rip the lunch of somebody who disagrees with me on
> > the importance of data-recovery and relocation upon media failures.
> > Anyone claiming it is not important because the predictive nature of
> > device failure is unknown should maybe ask an expert in the industry to
> > get you the answer. So let's have some fun and light off a really good
> > flame war!
>
> Why do you want to pass it up to the file system, when the fs probably
> doesn't know what to do about it? I can see why you want to pass it back
> as far up as you can until someone claims it.

Well, that is a design flaw in the general FS models from the beginning,
and it should be fixed. I recall everyone drum-banging SCSI because it
could handle this problem, and surprise, it fails too. So it comes back to
the FS being responsible for forcing the completion of its requests
regardless of the overhead.

> Or are you assuming in most cases file systems would have an "io_failure"
> function that reissues the request a few times then marks the fs read only
> and screams?

Now that is a sensible solution, but I do not know if it would work in all
cases or how best it should be handled. Regardless, the responsibility for
the content is primarily the FS's. Should an APP close a file but it is
still in buffer_cache, there is no way to notify the app or the user or
anything associated with the creation/closing of that file, if a write
error occurs.

So we have user-space believing it is a success.
FS doing an initial ACK of success.
BLOCK generating the request to the low_level.
LOW_LEVEL goes OH CRAP, I am having a problem and can not complete.

LOW_LEVEL goes, HEY BLOCK we have a problem.
BLOCK, that is nice whatever ....

This is a bad model, and worse is

LOW_LEVEL goes, HEY BLOCK we have a problem.
BLOCK goes, HEY FS we have an annoying LOW_LEVEL asking for reissue.
FS, duh which way did the rabbit go ...

So what is the right answer, 'cause blackholing the file is wrong, and I
do not see a way for it to be reissued. We should note the kernel now
owns the responsibility of completion in this case, regardless of what
others think.

> Incidentally the EVMS IBM volume manager code does support bad block
> remapping in some situations.

Well, managing bad blocks can be a major pain, but it is the right thing
to do. Now what is the cost, since there is a surge in journaling FS's
that have logs? The cost is coming up w/ a sane way to manage the mess.
Even before we get to managing the mess, we have to be able to reissue the
request to a reallocated location, and make all kinds of noise that we are
making heroic attempts to save the data. These may include --

periodic message bombing the open terminals.
clearing all outstanding requests, and making RO (caution /)

The issue is we are doing nothing to address the point, and it is arrogant
of the maintainers of the various storage classes and the supported upper
layers to be unwilling to address it.

This reply may seem a little fragmented because I had to answer it in
stages between distractions.

Regards,

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

2001-12-27 14:54:40

by Jens Axboe

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Sun, Dec 23 2001, Andre Hedrick wrote:
> the content is primarily the FS's. Should an APP close a file but it is
> still in buffer_cache, there is no way to notify the app or the user or
> anything associated with the creation/closing of that file, if a write
> error occurs.
>
> So we have user-space believing it is a success.

We have a buggy user-space app believing it is a success -- do you
really believe programs like eg MTAs ignorantly close a file and just
hope for the best? fsync.

> FS doing an initial ACK of success.
> BLOCK generating the request to the low_level.
> LOW_LEVEL goes OH CRAP, I am having a problem and can not complete.
>
> LOW_LEVEL goes, HEY BLOCK we have a problem.
> BLOCK, that is nice whatever ....

What does this _mean_?

> This is a bad model, and worse is
>
> LOW_LEVEL goes, HEY BLOCK we have a problem.
> BLOCK goes, HEY FS we have an annoying LOW_LEVEL asking for reissue.
> FS, duh which way did the rabbit go ...

retries belong at the low level, once you pass up info of failure to the
upper layers it's fatal. time for FS to shut down.

> > Incidentally the EVMS IBM volume manager code does support bad block
> > remapping in some situations.
>
> Well, managing bad blocks can be a major pain, but it is the right thing
> to do. Now what is the cost, since there is a surge in journaling FS's
> that have logs? The cost is coming up w/ a sane way to manage the mess.
> Even before we get to managing the mess, we have to be able to reissue the
> request to a reallocated location, and make all kinds of noise that we are
> making heroic attempts to save the data. These may include --

Irk, software managed bad block remapping is horrible.

> The issue is we are doing nothing to address the point, and it is arrogant
> of the maintainers of the various storage classes and the supported upper
> layers to be unwilling to address it.

How about showing solutions in the form of patches instead of bitching
about this again and again? Frankly, I'm pretty sick of just seeing
pointless talk about the issue.

--
Jens Axboe

2001-12-27 16:32:52

by Alan

Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

> retries belong at the low level, once you pass up info of failure to the
> upper layers it's fatal. time for FS to shut down.

That's definitely not the case. Just because your file system is too dumb to
use the information, please don't assume everyone else's is - in fact one
of the side properties of Daniel Phillips' stuff is that it should be able
to sanely handle a bad block problem.

EVMS, MD, multipath all need to know about and do their own bad block
handling. If the block driver knows how to recover stuff then great, it
can recover it, but we should ensure it's possible for the fs internals
to recover and work around a bad block.

> Irk, software managed bad block remapping is horrible.

IBM have it working, so however horrible it is doesn't matter that much;
someone has done the work for you.

Alan

2001-12-27 16:51:44

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Thu, Dec 27 2001, Alan Cox wrote:
> > retries belong at the low level, once you pass up info of failure to the
> > upper layers it's fatal. time for FS to shut down.
>
> That's definitely not the case. Just because your file system is too dumb to
> use the information, please don't assume everyone else's is - in fact one
> of the side properties of Daniel Phillips' stuff is that it should be able
> to sanely handle a bad block problem.

That's ok too, the fs can do whatever it wants in case of I/O failure.
It's not up to the fs to reissue failed requests, _that's_ stupid.

> EVMS, MD, multipath all need to know about and do their own bad block
> handling. If the block driver knows how to recover stuff then great, it
> can recover it, but we should ensure it's possible for the fs internals
> to recover and work around a bad block.

Need to know, fine, I'm not arguing with that. I don't want to hide
information from anyone.

> > Irk, software managed bad block remapping is horrible.
>
> IBM have it working, so however horrible it is doesn't matter that much;
> someone has done the work for you.

Then it must be The Right Thing.

I've written a block driver that handles (or wants to) bad block
remapping too, which just made me even more sure that this is definitely
a hw issue.

--
Jens Axboe

2001-12-27 17:43:08

by Alan

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

> I've written a block driver that handles (or wants to) bad block
> remapping too, which just made me even more sure that this is definitely
> a hw issue.

And what about when the hardware doesn't handle it? We have fine
demonstrations that in some cases it does a lousy job, or just plain doesn't
cope. With the current price/reliability of IDE disks I'll personally use
raid1, but it should still be possible to do software remapping, either at
block level or in some cases at fs level.

2001-12-27 17:49:29

by Linus Torvalds

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

In article <[email protected]>, Jens Axboe <[email protected]> wrote:
>On Thu, Dec 27 2001, Alan Cox wrote:
>> > retries belong at the low level, once you pass up info of failure to the
>> > upper layers it's fatal. time for FS to shut down.
>>
>> That's definitely not the case. Just because your file system is too dumb to
>> use the information, please don't assume everyone else's is - in fact one
>> of the side properties of Daniel Phillips' stuff is that it should be able
>> to sanely handle a bad block problem.
>
>That's ok too, the fs can do whatever it wants in case of I/O failure.
>It's not up to the fs to reissue failed requests, _that's_ stupid.

Actually, I really think we should move the failure recovery up to the
filesystem: we can fairly easily do it already today, as basically very
few of the filesystems actually do the requests directly, but instead
rely on helper functions like "bread()" and "generic_file_read()".

Moving error handling up has a lot of advantages:

- it simplifies the (often fragile) lower layers, and moves common
problems to common code instead of putting it at a low level and
duplicating it across different drivers.

- it allows "context-sensitive" handling of errors, ie if there is a
read error on a read-ahead request the upper layers can comfortably
just say "f*ck it, I don't need it yet", which can _seriously_ help
interactive feel on bad media (right now we often try to re-read
a failing sector tens of times, because we re-read it during
read-ahead several times, and the lower layers re-read it _too_).

In fact, it would even be mostly _simple_ to do it at a higher level, at
least for reads.

Writes are somewhat harder, mainly because the upper layers have never
had to handle re-issuing of requests, and don't really have the state
information.

For reads, sufficient state information is already there ("uptodate" bit
- just add a counter for retries), but for writes we only have the dirty
bit that gets cleared when the request gets sent off. So for writes
we'd need to add a new bit ("write in progress", and then clear it on
successful completion, and set the "dirty" bit again on error).
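
As a rough sketch of that write-side state (BH_WriteInProgress is a
hypothetical flag, not one that exists today):

/* At submission time the writer would do:
 *     set_bit(BH_WriteInProgress, &bh->b_state);
 * and on completion: */
static void upper_end_write(struct buffer_head *bh, int uptodate)
{
        clear_bit(BH_WriteInProgress, &bh->b_state);
        if (!uptodate)
                mark_buffer_dirty(bh);  /* failed: dirty again for retry/remap */
        unlock_buffer(bh);
}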

So I'd actually _like_ for all IO requests to be clearly "try just
once", and it being up to th eupper layers to retry on error.

(The notion of timeouts is much harder - the upper layers can retry on
errors, but I really don't think that the upper layers want to handle
timeouts and the associated "cancel this request" issues. So low layers
would still have to do _that_ part of error recovery, but at least they
wouldn't have to worry about keeping the request around until it is
successful).

Does anybody see any really fundamental problems with moving retrying to
_above_ ll_rw_block.c instead of below it?

(And once it is above, you can much more easily support filesystems that
can automatically remap blocks on IO failure etc, and even have
interruptible block filesystem mounts for those pesky bad media problems
- allowing user level to tell the kernel to not screw around too much
with errors and just return them early to user space).

Linus

2001-12-27 18:34:28

by Andre Hedrick

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }


There must be an ECHO, 'cause I know that I have been making this
statement for a while, yet your appointed "Storage Architect" still does not
get the point. So maybe, if you would like, I could finish teaching the
model that is started, or request it of somebody with more vision than me,
who has been in the storage industry longer than me.

Regardless, you have a mess that is not easily cleanable and really needs
to be restarted; that is "My Professional Opinion" and that of others
who laugh at the mess you allowed to start without any means of defining
completion, stability, or integrity.

Lately I have found you slow on the uptake, but at least somebody is
getting my message and notes to you, especially since you have blackholed
me.

Regards,

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

On Thu, 27 Dec 2001, Linus Torvalds wrote:

> In article <[email protected]>, Jens Axboe <[email protected]> wrote:
> >On Thu, Dec 27 2001, Alan Cox wrote:
> >> > retries belong at the low level, once you pass up info of failure to the
> >> > upper layers it's fatal. time for FS to shut down.
> >>
> >> Thats definitely not the case. Just because your file system is too dumb to
> >> use the information please don't assume everyone elses isnt - in fact one
> >> of the side properties of Daniel Phillips stuff is that it should be able
> >> to sanely handle a bad block problem.
> >
> >That's ok too, the fs can do whatever it wants in case of I/O failure.
> >It's not up to the fs to reissue failed requests, _that's_ stupid.
>
> Actually, I really think we should move the failure recovery up to the
> filesystem: we can fairly easily do it already today, as basically very
> few of the filesystems actually do the requests directly, but instead
> rely on helper functions like "bread()" and "generic_file_read()".
>
> Moving error handling up has a lot of advantages:
>
> - it simplifies the (often fragile) lower layers, and moves common
> problems to common code instead of putting it at a low level and
> duplicating it across different drivers.
>
> - it allows "context-sensitive" handling of errors, ie if there is a
> read error on a read-ahead request the upper layers can comfortably
> just say "f*ck it, I don't need it yet", which can _seriously_ help
> interactive feel on bad media (right now we often try to re-read
> a failing sector tens of times, because we re-read it during
> read-ahead several times, and the lower layers re-read it _too_).
>
> In fact, it would even be mostly _simple_ to do it at a higher level, at
> least for reads.
>
> Writes are somewhat harder, mainly because the upper layers have never
> had to handle re-issuing of requests, and don't really have the state
> information.
>
> For reads, sufficient state information is already there ("uptodate" bit
> - just add a counter for retries), but for writes we only have the dirty
> bit that gets cleared when the request gets sent off. So for writes
> we'd need to add a new bit ("write in progress", and then clear it on
> successful completion, and set the "dirty" bit again on error).
>
> So I'd actually _like_ for all IO requests to be clearly "try just
> once", and it being up to th eupper layers to retry on error.
>
> (The notion of timeouts is much harder - the upper layers can retry on
> errors, but I really don't think that the upper layers want to handle
> timeouts and the associated "cancel this request" issues. So low layers
> would still have to do _that_ part of error recovery, but at least they
> wouldn't have to worry about keeping the request around until it is
> successful).
>
> Does anybody see any really fundamental problems with moving retrying to
> _above_ ll_rw_block.c instead of below it?
>
> (And once it is above, you can much more easily support filesystems that
> can automatically remap blocks on IO failure etc, and even have
> interruptible block filesystem mounts for those pesky bad media problems
> - allowing user level to tell the kernel to not screw around too much
> with errors and just return them early to user space).
>
> Linus

2001-12-27 21:16:00

by Jeff Garzik

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Thu, Dec 27, 2001 at 05:46:43PM +0000, Linus Torvalds wrote:
> Actually, I really think we should move the failure recovery up to the
> filesystem: we can fairly easily do it already today, as basically very
> few of the filesystems actually do the requests directly, but instead
> rely on helper functions like "bread()" and "generic_file_read()".
>
> Moving error handling up has a lot of advantages:
>
> - it simplifies the (often fragile) lower layers, and moves common
> problems to common code instead of putting it at a low level and
> duplicating it across different drivers.
>
> - it allows "context-sensitive" handling of errors, ie if there is a
> read error on a read-ahead request the upper layers can comfortably
> just say "f*ck it, I don't need it yet", which can _seriously_ help
> interactive feel on bad media (right now we often try to re-read
> a failing sector tens of times, because we re-read it during
> read-ahead several times, and the lower layers re-read it _too_).
>
> In fact, it would even be mostly _simple_ to do it at a higher level, at
> least for reads.
>
> Writes are somewhat harder, mainly because the upper layers have never
> had to handle re-issuing of requests, and don't really have the state
> information.

Call me crazy but IMHO it is clear and logical and easy how to handle
write errors at the filesystem level. I've been thinking about this
for the "ibu fs" I am hacking on... have your own writepage and your
own async-io-completion routine. If M of N bh's indicate write failure
when bh->b_end_io is called, queue those for the filesystem so it can
add those blocks to the bad block list, and allocate some new blocks for
writing. Seems straightforward to me except in the worst case where all
remaining sectors are bad.
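
Roughly, as a sketch (fs_queue_for_remap() is a hypothetical fs-side helper,
not existing code):

/* A filesystem-private completion hook along those lines. */
static void myfs_end_buffer_io(struct buffer_head *bh, int uptodate)
{
        mark_buffer_uptodate(bh, uptodate);
        if (!uptodate)
                fs_queue_for_remap(bh); /* write failed: mark the block bad,
                                         * allocate a new one, rewrite */
        unlock_buffer(bh);
}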

A side effect of this is that I am taking a cue from "NTFS TNG"
[which is nice, 100% page-cache-based code] and simply making the
hard sector size be the logical block size. That way, not only can
write errors be handled on a fine-grained (hard sector) level, there
are no limitations on what the filesystem's blocksize is. Right now,
we cannot have a block size larger than PAGE_CACHE_SIZE AFAICS, but
when blocksize==hard sector size, you can simply fake any blocksize you
want (32K, 64K, ...). You have a bit more overhead with an increased
number of bh's, but since the bh->b_data is pointing into a page,
that's all the overhead is... a buffer head.
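
As a sketch of the arithmetic (all names here are illustrative):

#define MYFS_SECTOR_SIZE        512     /* hard sector size */
#define MYFS_BLOCK_SIZE         (32 * 1024)
#define MYFS_SECTORS_PER_BLOCK  (MYFS_BLOCK_SIZE / MYFS_SECTOR_SIZE)    /* 64 */

/* A faked 32K "block" is just 64 consecutive sector-sized bh's, each
 * with b_data pointing into a page, submitted together. */
static void myfs_write_block(struct buffer_head *bhs[])
{
        ll_rw_block(WRITE, MYFS_SECTORS_PER_BLOCK, bhs);
}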

Right now no filesystems really support big blocksize in this way
because fragment handling hasn't been thought through [again, AFAICS].


> For reads, sufficient state information is already there ("uptodate" bit
> - just add a counter for retries), but for writes we only have the dirty
> bit that gets cleared when the request gets sent off. So for writes
> we'd need to add a new bit ("write in progress", and then clear it on
> successful completion, and set the "dirty" bit again on error).

Handling read errors always seemed uglier than handling write errors,
but I haven't thought through read errors yet...


> So I'd actually _like_ for all IO requests to be clearly "try just
> once", and it being up to th eupper layers to retry on error.

I agree this is a good direction.

Jeff


2001-12-28 02:08:04

by Andre Hedrick

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }


I would provide patches if you had not goat screwed the block layer and
had taken a little more thought in design, but GAWD forbid we have design.
You have made it clear that you do not believe in testing the data
transport layers in storage.

Translation: You do not care if data gets to disk correctly or not.

You have stated you do not believe in standards, thus you believe less in
me because I "Co-Author" one for NCITS.

You have stated the tools of the trade are worthless but you have an ATA
and SCSI analyzer but you refuse to use them. I know you have them
because I know who provided them to you.

Maybe when you get off your high horse and begin to listen, I will quit
being the ASS pointing out your faulty implementation of BIO. Maybe when
you decide it is important we can try to work together again.

One thing you need to remember, BLOCK is everybody's "BITCH".
FileSystems dictate to BLOCK their requirements.
Low_Level Drivers dictate to BLOCK their rulesets.
BLOCK is there to bend over backwards to make the transition work.
BLOCK is not a RULE-SETTER, it is nothing but a SLAVE, and it has to be
damn clever. One of your assets is you are "clever", but that will not
cover your knowledge deficits in pure storage transport.

Now I am tired of this game you are playing, so let's spell it out.

Congratulations on your copying of the rq->special for the CDB transport
out of the ACB driver that myself and two other people authored.

Congratulations on your first attempts to create the "pre-builder" model
that I and one other have described to you; too bad you did not listen
9 months ago or you would have the bigger picture.

BUZZIT on mixing two issues that are volatile in their own right to sort
out. The high memory bounce and block are two different changes, and one
is dependent on the other, regardless of whether you see it or not.

BUZZIT on your short sightedness on the immediate intercept of command
mode

BUZZIT on your himem operations that are not able to perform the vaddr
windowing for the entire request, but choose to suffer the latency of
single-sector transaction repetition.

BUZZIT on your total lack of documentation of the changes to the
request_struct, otherwise I could follow your mindset and it would not be
a pissing contest.

BUZZIT on your moving CDROM operations to create a bogus BLOCK_IOCTL, for
what? Who knows, it may have some valid usage, but CDROM is not it.

BUZZIT on your cut and paste in block to make an effective rabbit warren or
chaotic maze to make it totally opaque as to what is really going on.

Cheers, and Happy New Year.

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

On Thu, 27 Dec 2001, Jens Axboe wrote:

> On Sun, Dec 23 2001, Andre Hedrick wrote:
> > the content is primarily the FS. Should an APP close a file but it is
> > still in buffer_cache, there is no way to notify the app or the user or
> > anything associated with the creation/closing of that file, if a write
> > error occurs.
> >
> > So we have user-space believing it is success.
>
> We have a buggy user-space app believing it is a success -- do you
> really believe programs like eg mta's ignorantly closes a file and just
> hopes for the best? fsync.
>
> > FS doing an initial ACK of success.
> > BLOCK generating the request to the low_level.
> > LOW_LEVEL goes OH CRAP, I am having a problem and can not complete.
> >
> > LOW_LEVEL goes, HEY BLOCK we have a problem.
> > BLOCK, that is nice whatever ....
>
> What does this _mean_?
>
> > This is a bad model, and worse is
> >
> > LOW_LEVEL goes, HEY BLOCK we have a problem.
> > BLOCK goes, HEY FS we have an annoying LOW_LEVEL asking for reissue.
> > FS, duh which way did the rabbit go ...
>
> retries belong at the low level, once you pass up info of failure to the
> upper layers it's fatal. time for FS to shut down.
>
> > > Incidentally the EVMS IBM volume manager code does support bad block
> > > remapping in some situations.
> >
> > Well managing badblock can be a major pain, but it is the right thing to
> > do. Now what is the cost, since there is a surge in journaling FS's that
> > have logs. The cost is coming up w/ a sane way to manage the mess.
> > Even before we get to managing the mess, we have to be able to reissue the
> > request to a reallocated location, and make all kinds of noise that we are
> > doing heroic attempts to save the data. These may include --
>
> Irk, software managed bad block remapping is horrible.
>
> > The issue is we are doing nothing to address the point, and it is arrogant
> > of the maintainers of the various storage classes and the supported upper
> > layers to be unwilling to address this issue.
>
> How about showing solutions in the form of patches instead of bitching about
> this again and again? Frankly, I'm pretty sick of just seeing pointless
> talk about the issue.
>
> --
> Jens Axboe
>

2001-12-28 11:01:45

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Thu, Dec 27 2001, Andre Hedrick wrote:
>
> I would provide patches if you had not goat screwed the block layer and
> had taken a little more thought in design, but GAWD forbid we have design.

You still have zero, absolutely zero facts. _What_ is screwed?

> You have made it clear that you do not believe in testing the data
> transport layers in storage.

That's not true. I would not mind such a framework. What I opposed was
you claiming that you can _prove_ data correctness. Verifying data
integrity and proof are two different things.

> Translation: You do not care if data gets to disk correctly or not.

Bullshit.

> You have stated you do not believe in standards, thus you believe less in
> me because I "Co-Author" one for NCITS.

Please stop misquoting me. _You_ claim that if you have the states down
100% from the specs, then your driver is proven correct. I say that
believing that is an illusion, only in a perfect world with perfect
hardware would that be the case.

> You have stated the tools of the trade are worthless but you have an ATA
> and SCSI analyzer but you refuse to use them. I know you have them
> because I know who provided them to you.

Again, I've _never_ made such claims. I have the tools, yes, and they
are handy at times.

> Maybe when you get off your high horse and begin to listen, I will quit
> being the ASS pointing out your faulty implementation of BIO. Maybe when
> you decide it is important we can try to work together again.

... obviously no need for me to comment on this.

> One thing you need to remember, BLOCK is everybody's "BITCH".
> FileSystems dictate to BLOCK their requirements.
> Low_Level Drivers dictate to BLOCK their rulesets.
> BLOCK is there to bend over backwards to make the transition work.
> BLOCK is not a RULE-SETTER, it is nothing but a SLAVE, and it has to be
> damn clever. One of your assets is you are "clever", but that will not
> cover your knowledge deficits in pure storage transport.

I agree.

> Now I am tired of this game you are playing, so let's spell it out.
>
> Congratulations on your copying of the rq->special for the CDB transport
> out of the ACB driver that myself and two other people authored.

Strictly a cleanup, what's your point?

> Congratulations on your first attempts to create the "pre-builder" model
> that I and one other have described to you; too bad you did not listen
> 9 months ago or you would have the bigger picture.

I did it now because it's easy to do, comprende? It can be done cleanly
because elv_next_request is there now.

> BUZZIT on mixing two issues that are volatile in their own right to sort
> out. The high memory bounce and block are two different changes, and one
> is dependent on the other, regardless of whether you see it or not.

Explain.

> BUZZIT on your short sightedness on the immediate intercept of command
> mode

Explain

> BUZZIT on your himem operations that are not able to perform the vaddr
> windowing for the entire request, but choose to suffer the latency of
> single-sector transaction repetition.

s/single sector/single segment. And that temp mapping is done for _PIO_
for christ sakes, I challenge you to show me that being a bottleneck in
any way at all. Red herring.

> BUZZIT on your total lack of documentation of the changes to the
> request_struct, otherwise I could follow your mindset and it would not be
> a pissing contest.

Tried reading the source?

> BUZZIT on your moving CDROM operations to create a bogus BLOCK_IOCTL, for
> what? Who knows, it may have some valid usage, but CDROM is not it.

That part isn't even started yet, block_ioctl is just to show that
rq->cmd[] and ide-cd do work with passing packet commands around.

> BUZZIT on your cut and paste in block to make an effective rabbit warren or
> chaotic maze to make it totally opaque as to what is really going on.

Again, what are you talking about?

Congratulations on spending so much time writing an email with very
little factual content. Here's an idea -- try backing up your statements
and claims with something besides hot air.

--
Jens Axboe

2001-12-28 12:30:14

by Rik van Riel

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, 28 Dec 2001, Jens Axboe wrote:
> On Thu, Dec 27 2001, Andre Hedrick wrote:

> > BUZZIT on your total lack of documentation of the changes to the
> > request_struct, otherwise I could follow your mindset and it would not be
> > a pissing contest.
>
> Tried reading the source?

As usual, without documentation you only know what the code
does, not what it's supposed to do or why it does it.

Documentation is an essential ingredient when hunting for
bugs in the code, because without the docs you have to guess
whether something is a bug or not, while with docs it's much
easier to identify inconsistencies.

regards,

Rik
--
Shortwave goes a long way: irc.starchat.net #swl

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-28 12:34:24

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, Dec 28 2001, Rik van Riel wrote:
> On Fri, 28 Dec 2001, Jens Axboe wrote:
> > On Thu, Dec 27 2001, Andre Hedrick wrote:
>
> > > BUZZIT on your total lack of documentation of the changes to the
> > > request_struct, otherwise I could follow your mindset and it would not be
> > > a pissing contest.
> >
> > Tried reading the source?
>
> As usual, without documentation you only know what the code
> does, not what it's supposed to do or why it does it.
>
> Documentation is an essential ingredient when hunting for
> bugs in the code, because without the docs you have to guess
> whether something is a bug or not, while with docs it's much
> easier to identify inconsistencies.

please look at the source before making such comments -- it's quite
adequately commented.

--
Jens Axboe

2001-12-28 13:01:08

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, Dec 28 2001, Jens Axboe wrote:
> On Fri, Dec 28 2001, Rik van Riel wrote:
> > On Fri, 28 Dec 2001, Jens Axboe wrote:
> > > On Thu, Dec 27 2001, Andre Hedrick wrote:
> >
> > > > BUZZIT on your total lack of documentation of the changes to the
> > > > request_struct, otherwise I could follow your mindset and it would not be
> > > > a pissing contest.
> > >
> > > Tried reading the source?
> >
> > As usual, without documentation you only know what the code
> > does, not what it's supposed to do or why it does it.
> >
> > Documentation is an essential ingredient when hunting for
> > bugs in the code, because without the docs you have to guess
> > whether something is a bug or not, while with docs it's much
> > easier to identify inconsistencies.
>
> please look at the source before making such comments -- it's quite
> adequately commented.

Lest I forget to mention this again -- also see Suparna's excellent
notes on the new block I/O layer:

http://lse.sourceforge.net/io/bionotes.txt

And my own write-up right here on lkml:

http://marc.theaimsgroup.com/?t=100695031900001&r=1&w=2


--
Jens Axboe

2001-12-28 19:37:41

by Peter Osterlund

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

Jens Axboe <[email protected]> writes:

> On Fri, Dec 28 2001, Rik van Riel wrote:
> > On Fri, 28 Dec 2001, Jens Axboe wrote:
> > >
> > > Tried reading the source?
> >
> > As usual, without documentation you only know what the code
> > does, not what it's supposed to do or why it does it.
>
> please look at the source before making such comments -- it's quite
> adequately commented.

I agree, but I have one specific question. What are the
bi_end_io() functions supposed to return? The return value doesn't
ever seem to be used (yet?), so reading the source code cannot answer
that question.

--
Peter Osterlund - [email protected]
http://w1.894.telia.com/~u89404340

2001-12-28 20:26:10

by Andre Hedrick

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, 28 Dec 2001, Jens Axboe wrote:

> On Thu, Dec 27 2001, Andre Hedrick wrote:
> >
> > I would provide patches if you had not goat screwed the block layer and
> > had taken a little more thought in design, but GAWD forbid we have design.
>
> You still have zero, absolutely zero facts. _What_ is screwed?

Well, your total lack of documentation.
You have changed portions of struct request * with no references.
You have changed the behavior of calculating one's position in the
rq->buffer, wrt "nr_sectors - current_nr_sectors". There is still no
valid reason yet given by you.

> > You have made it clear that you do not believe in testing the data
> > transport layers in storage.
>
> That's not true. I would not mind such a framework. What I opposed was
> you claiming that you can _prove_ data correctness. Verifying data
> integrity and proof are two different things.

You have strange defs.

PROOF it works is done by "Verifying data integrity";
thus
"Verifying data integrity" is PROOF it works.

One of those silly mathematical rules of equality.

You have had access to this code for at least a year (one variation or
another); I can't be responsible if you elect to ignore things.

> > Translation: You do not care if data gets to disk correctly or not.
>
> Bullshit.

Really, then why did you not use the BUS Analyzers given to you to test
your changes to the new BLOCK translation between FS and DRIVER?
Obviously you have the tools, but you neglected to even look.

> > You have stated you do not believe in standards, thus you believe less in
> > me because I "Co-Author" one for NCITS.
>
> Please stop misquoting me. _You_ claim that if you have the states down
> 100% from the specs, then your driver is proven correct. I say that
> believing that is an illusion, only in a perfect world with perfect
> hardware would that be the case.

This is where you fail to understand; I do not know how to teach you
anymore, nor does having you visit the folks in the drive-making
industry help, because even they could not instruct you.

Maybe if you could see that by writing code that is correct to the
state-machine described at the physical layer of the hardware, one would
have the understanding of a base from which to address exceptions.

> > You have stated the tools of the trade are worthless but you have an ATA
> > and SCSI analyzer but you refuse to use them. I know you have them
> > because I know who provided them to you.
>
> Again, I've _never_ made such claims. I have the tools, yes, and they
> are handy at times.

Really! Well, let me replay an irc-log:

*andre* maybe I trust it because it passes signal correctness on an ata-bus analizer
*axboe* who has proven the analyzer? :)
*andre* are you serious ?
*axboe* of course
*andre* that is all I want to know -- sorry I asked

Electrons do not lie.

> > Maybe when you get off your high horse and begin to listen, I will quit
> > being the ASS pointing out your faulty implementation of BIO. Maybe when
> > you decide it is important we can try to work together again.
>
> ... obviously no need for me to comment on this.

Yes, because you have no intention of getting off your high horse.

> > One thing you need to remember, BLOCK is everybody's "BITCH".
> > FileSystems dictate to BLOCK their requirements.
> > Low_Level Drivers dictate to BLOCK their rulesets.
> > BLOCK is there to bend over backwards to make the transition work.
> > BLOCK is not a RULE-SETTER, it is nothing but a SLAVE, and it has to be
> > damn clever. One of your assets is you are "clever", but that will not
> > cover your knowledge deficits in pure storage transport.
>
> I agree.

Really, now that you are backed into a corner I will tell you how the
driver is to work and you will adjust block to conform to that ruleset.

> > Now I am tired of this game you are playing, so let's spell it out.
> >
> > Congratulations on your copying of the rq->special for the CDB transport
> > out of the ACB driver that myself and two other people authored.
>
> Strictly a cleanup, what's your point?
>
> > Congratulations on your first attempts to create the "pre-builder" model
> > that I and one other have described to you; too bad you did not listen
> > 9 months ago or you would have the bigger picture.
>
> I did it now because it's easy to do, comprende? It can be done cleanly
> because elv_next_request is there now.
>
> > BUZZIT on mixing two issues that are volatile in their own right to sort
> > out. The high memory bounce and block are two different changes, and one
> > is dependent on the other, regardless of whether you see it or not.
>
> Explain.

Given the volatile nature of these two areas on their own, why would
anyone in their right mind mix the two?

> > BUZZIT on your short sightedness on the immediate intercept of command
> > mode
>
> Explain

Well, the obvious usage for "immediate immediate" is for domain-validation.
This allows one to test and assess the behavior of the codebase before
assuming it is valid, and it gives you a way to compare expected vs.
observed when using a bus analyzer. This is another way to verify if the
code written is doing what is expected.

> > BUZZIT on your himem operations that are not able to perform the vaddr
> > windowing for the entire request, but choose to suffer the latency of
> > single-sector transaction repetition.
>
> s/single sector/single segment. And that temp mapping is done for _PIO_
> for christ sakes, I challenge you to show me that being a bottleneck in
> any way at all. Red herring.

"bottleneck" no, it breaks a valid and tested method for using the
buffer, and PIO is where we land if we have DMA problems.

> > BUZZIT on your total lack of documentation of the changes to the
> > request_struct, otherwise I could follow your mindset and it would not be
> > a pissing contest.
>
> Tried reading the source?

I would if there were any docs.

Maybe you could finally explain your new recipe for the changes in the
request struct. I have been asking you in private for more than a month
and you have yet to disclose the usage.

Recall above where you agree that BLOCK is a SLAVE to the DRIVER.
Now I require that upon entry the following be set and only released upon
the dequeuing of the request.


to = ide_map_buffer(rq, &flags);

This is to be performed upon entry for the given request.
It is to be released upon a dequeuing.

ide_unmap_buffer(to, &flags);

This will allow me to walk "to" and not "buffer" so that BLOCK can
properly map into the DRIVER.

/*
 * Handler for command with PIO data-in phase
 */
ide_startstop_t task_in_intr (ide_drive_t *drive)
{
        byte stat = GET_STAT();
        byte io_32bit = drive->io_32bit;
        struct request *rq = HWGROUP(drive)->rq;
        char *pBuf = NULL;

        if (!OK_STAT(stat,DATA_READY,BAD_R_STAT)) {
                if (stat & (ERR_STAT|DRQ_STAT)) {
                        return ide_error(drive, "task_in_intr", stat);
                }
                if (!(stat & BUSY_STAT)) {
                        DTF("task_in_intr to Soon wait for next interrupt\n");
                        ide_set_handler(drive, &task_in_intr, WAIT_CMD, NULL);
                        return ide_started;
                }
        }
        DTF("stat: %02x\n", stat);
        pBuf = rq->buffer + ((rq->nr_sectors - rq->current_nr_sectors) * SECTOR_SIZE);
        DTF("Read: %p, rq->current_nr_sectors: %d\n", pBuf, (int) rq->current_nr_sectors);

        drive->io_32bit = 0;
        taskfile_input_data(drive, pBuf, SECTOR_WORDS);
        drive->io_32bit = io_32bit;

        if (--rq->current_nr_sectors <= 0) {
                /* (hs): swapped next 2 lines */
                DTF("Request Ended stat: %02x\n", GET_STAT());
                ide_end_request(1, HWGROUP(drive));
        } else {
                ide_set_handler(drive, &task_in_intr, WAIT_CMD, NULL);
                return ide_started;
        }
        return ide_stopped;
}

As you can see, "rq->nr_sectors" is not touched, nor is "rq->buffer".
This was your primary bender and I fixed it, but you chose not to use it.
I only modify "rq->current_nr_sectors" and no longer bastardize the rest
of the request.

> > BUZZIT on your moving CDROM operations to create a bogus BLOCK_IOCTL, for
> > what? Who knows, it may have some valid usage, but CDROM is not it.
>
> That part isn't even started yet, block_ioctl is just to show that
> rq->cmd[] and ide-cd do work with passing packet commands around.

Well, maybe if you had disclosed, or would disclose, a model, then people
could help and not be left wondering about the mess created, while working
to clean up a bigger mess from the past.

> > BUZZIT on your cut and paste in block to make an effective rabbit warren or
> > chaotic maze to make it totally opaque as to what is really going on.
>
> Again, what are you talking about?

Well, after discussing with Suparna, I now understand that it is related to
the multi-segments, but I am sure that most do not see that because it is
not commented on or explained in any model. Much of what I saw was a glorious
cut-n-paste job to make a readable (if rather large) function now
scattered and run through a whopper-chopper. However, I see some of the
validity of the changes but still much needs explaining.

> Congratulations on spending so much time writing an email with very
> little factual content. Here's an idea -- try backing up your statements
> and claims with something besides hot air.

And from you, a total waste: communication requests from the past have
yet to be answered, thus I have to be a BOHA to get your attention.
However, you still ignore valid questions and concerns.


Regards,

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project



2001-12-29 14:16:19

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, Dec 28 2001, Andre Hedrick wrote:
> > On Thu, Dec 27 2001, Andre Hedrick wrote:
> > >
> > > I would provide patches if you had not goat screwed the block layer and
> > > had taken a little more thought in design, but GAWD forbid we have design.
> >
> > You still have zero, absolutely zero facts. _What_ is screwed?
>
> Well, your total lack of documentation.

Please stop making an even bigger ass of yourself -- there are plenty of
docs, commented source, etc.

> You have changed portions of struct request * with no references.

Ditto

> You have changed the behavior of calculating one's position in the
> rq->buffer, wrt "nr_sectors - current_nr_sectors". There is still no
> valid reason yet given by you.

That you reference ->buffer and nr_sectors in the same sentence shows
that you never had a grasp of how this worked.

> > > You have made it clear that you do not believe in testing the data
> > > transport layers in storage.
> >
> > That's not true. I would not mind such a framework. What I opposed was
> > you claiming that you can _prove_ data correctness. Verifying data
> > integrity and proof are two different things.
>
> You have strange defs.
>
> PROOF it works is done by "Verifying data integrity";
> thus
> "Verifying data integrity" is PROOF it works.
>
> One of those silly mathematical rules of equality.

Whatever

> > > Translation: You do not care if data gets to disk correctly or not.
> >
> > Bullshit.
>
> Really, then why did you not use the BUS Analyzers given to you to test
> your changes to the new BLOCK translation between FS and DRIVER?
> Obviously you have the tools, but you neglected to even look.

Did you see any data corruption in the initial bio patches? If you did,
you failed to report it.

> > > You have stated you do not believe in standards, thus you believe less in
> > > me because I "Co-Author" one for NCITS.
> >
> > Please stop misquoting me. _You_ claim that if you have the states down
> > 100% from the specs, then your driver is proven correct. I say that
> > believing that is an illusion, only in a perfect world with perfect
> > hardware would that be the case.
>
> This is where you fail to understand; I do not know how to teach you
> anymore, nor does having you visit the folks in the drive-making
> industry help, because even they could not instruct you.

You keep trying to teach me industry talk, I keep trying to teach you
how the kernel works. Apparently we are both failing.

> > > You have stated the tools of the trade are worthless but you have an ATA
> > > and SCSI analyzer but you refuse to use them. I know you have them
> > > because I know who provided them to you.
> >
> > Again, I've _never_ made such claims. I have the tools, yes, and they
> > are handy at times.
>
> Really! Well, let me replay an irc-log:
>
> *andre* maybe I trust it because it passes signal correctness on an ata-bus analizer
> *axboe* who has proven the analyzer? :)
> *andre* are you serious ?
> *axboe* of course
> *andre* that is all I want to know -- sorry I asked
>
> Electrons do not lie.

This translates into "I think analyzers are worthless"?

> > > Maybe when you get off your high horse and begin to listen, I will quit
> > > being the ASS pointing out your faulty implementation of BIO. Maybe when
> > > you decide it is important we can try to work together again.
> >
> > ... obviously no need for me to comment on this.
>
> Yes, because you have no intention of getting off your high horse.

No, it means that I refuse to fall back to the petty name calling game
you are now playing. You have yet to single out a single flaw in bio in
general or its design. All you do is bitch and complain without facts --
makes me wonder how much of the stuff you actually understand. Welcome
to my kill file.

> > > One thing you need to remember, BLOCK is everybody's "BITCH".
> > > FileSystems dictate to BLOCK their requirements.
> > > Low_Level Drivers dictate to BLOCK their rulesets.
> > > BLOCK is there to bend over backwards to make the transition work.
> > > BLOCK is not a RULE-SETTER, it is nothing but a SLAVE, and it has to be
> > > damn clever. One of your assets is you are "clever", but that will not
> > > cover your knowledge deficits in pure storage transport.
> >
> > I agree.
>
> Really, now that you are backed into a corner I will tell you how the
> driver is to work and you will adjust block to conform to that ruleset.

Let's rephrase that into "you are free to make suggestions and send
patches", then I agree. Block may be everyone's bitch, I'm certainly not
yours.

> > > BUZZIT on mixing two issues that are volatile in their own right to sort
> > > out. The high memory bounce and block are two different changes, and one
> > > is dependent on the other, regardless of whether you see it or not.
> >
> > Explain.
>
> Given the volatile nature of these two areas on their own, why would
> anyone in their right mind mix the two?

They are two separate layers still. Maybe you need to read this code
again. Let me recommend Suparna's doc again.

> > > BUZZIT on your short sightedness on the immediate intercept of command
> > > mode
> >
> > Explain
>
> Well, the obvious usage for "immediate immediate" is for domain-validation.
> This allows one to test and assess the behavior of the codebase before
> assuming it is valid, and it gives you a way to compare expected vs.
> observed when using a bus analyzer. This is another way to verify if the
> code written is doing what is expected.

That may be so; how this relates to any complaints about bio design or
whatever is not clear. As usual.

> > > BUZZIT on your himem operations that are not able to perform the vaddr
> > > windowing for the entire request, but choose to suffer the latency of
> > > single-sector transaction repetition.
> >
> > s/single sector/single segment. And that temp mapping is done for _PIO_
> > for christ sakes, I challenge you to show me that being a bottleneck in
> > any way at all. Red herring.
>
> "bottleneck" no, it breaks a valid and tested method for using the
> buffer, and PIO is where we land if we have DMA problems.

No it doesn't, you can still use ->buffer any way you want.

> > > BUZZIT on your total lack of documentation of the changes to the
> > > request_struct, otherwise I could follow your mindset and it would not be
> > > a pissing contest.
> >
> > Tried reading the source?
>
> I would if there were any docs.

Andre, want me to comment on that again?

> Maybe you could finally explain your new recipe for the changes in the
> request struct. I have been asking you in private for more than a month
> and you have yet to disclose the usage.

I've _explained_ it to you several times. I have better things to do
than keep doing so. Especially if you either cannot or don't want to
read the source and see for yourself, or the external bio reference that
is Suparna's bio notes.

> Recall above where you agree that BLOCK is a SLAVE to the DRIVER.
> Now I require that upon entry the following be set and only released upon
> the dequeuing of the request.
>
>
> to = ide_map_buffer(rq, &flags);
>
> This is to be performed upon entry for the given request.
> It is to be released upon a dequeuing.
>
> ide_unmap_buffer(to, &flags);
>
> This will allow me to walk "to" and not "buffer" so that BLOCK can
> properly map into the DRIVER.

You don't know how the kmap stuff works, do you?

> As you can see, "rq->nr_sectors" is not touched, nor is "rq->buffer".
> This was your primary bender and I fixed it, but you chose not to use it.
> I only modify "rq->current_nr_sectors" and no longer bastardize the rest
> of the request.

current_nr_sectors was the only problem, hard_nr_sectors and hard_sector
have been in place for some time. hard_cur_sectors is there too in 2.5
now, so you can screw current_nr_sectors as much as you want now too.

> > > BUZZIT on your moving CDROM operations to create a bogus BLOCK_IOCTL, for
> > > what? Who knows, it may have some valid usage, but CDROM is not it.
> >
> > That part isn't even started yet, block_ioctl is just to show that
> > rq->cmd[] and ide-cd do work with passing packet commands around.
>
> Well, maybe if you had disclosed, or would disclose, a model, then people
> could help and not be left wondering about the mess created, while working
> to clean up a bigger mess from the past.

Still babbling about docs?? Please show me a subsystem change as widely
documented as the bio change in recent times. Funny how everyone else
thinks the docs are good. Have you read them? Did you understand them?
Did you read the source? Or are you just too busy moaning on irc or here
to do so?

> > > BUZZIT on your cut and paste in block to make an effective rabbit warren or
> > > chaotic maze to make it totally opaque as to what is really going on.
> >
> > Again, what are you talking about?
>
> Well, after discussing with Suparna, I now understand that it is related to
> the multi-segments, but I am sure that most do not see that because it is
> not commented on or explained in any model. Much of what I saw was a glorious
> cut-n-paste job to make a readable (if rather large) function now
> scattered and run through a whopper-chopper. However, I see some of the
> validity of the changes but still much needs explaining.

There are no big and glorious cut-n-paste jobs. I'm so sick of you
making claims that are nothing but Andre FUD.

> > Congratulations on spending so much time writing an email with very
> > little factual content. Here's an idea -- try backing up your statements
> > and claims with something besides hot air.
>
> And from you, a total waste: communication requests from the past have
> yet to be answered, thus I have to be a BOHA to get your attention.
> However, you still ignore valid questions and concerns.

I don't ignore valid questions and concerns. I ignore _you_ from time to
time; you are time-consuming and I usually have better ways
to spend my time.

--
Jens Axboe

2001-12-29 15:08:10

by Jens Axboe

[permalink] [raw]
Subject: Re: hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }

On Fri, Dec 28 2001, Peter Osterlund wrote:
> Jens Axboe <[email protected]> writes:
>
> > On Fri, Dec 28 2001, Rik van Riel wrote:
> > > On Fri, 28 Dec 2001, Jens Axboe wrote:
> > > >
> > > > Tried reading the source?
> > >
> > > As usual, without documentation you only know what the code
> > > does, not what it's supposed to do or why it does it.
> >
> > please look at the source before making such comments -- it's quite
> > adequately commented.
>
> I agree, but I have one specific question. What are the
> bi_end_io() functions supposed to return? The return value doesn't
> ever seem to be used (yet?), so reading the source code cannot answer
> that question.

They were supposed to return 0 if the bio was completely done, or 1 if
there was remaining I/O to be done. Right now it's unused, so just
return 0 for success.
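
So, as a sketch under that convention (the nr_sectors argument and the use
of bi_private here are assumptions about the still-in-flux 2.5 interface):

/* Wake the submitter; bi_private is assumed to point at a completion
 * set up at submit time. */
static int my_bi_end_io(struct bio *bio, int nr_sectors)
{
        complete((struct completion *) bio->bi_private);
        return 0;       /* completely done; 1 would mean I/O remains */
}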

--
Jens Axboe

2001-12-29 21:01:45

by Andre Hedrick

[permalink] [raw]
Subject: You WIN ...


Jens,

You win -- it is not worth trying to work with you at this time.
All you and I have done is become bitter enemies.

Next, I do not know all the details of the kernel, but what I know is
that neither one of us is willing to listen and learn. However, I have
tried to get answers from you publicly and privately, and nothing. You may
get your wish granted to replace me in the future, as this has been your
stated goal from the past to me directly.

In closing, there were several cases of filesystem corruption based on
partition offsets and other various items. This was totally unacceptable
for the most part. I truly think that you do not see what the object of
"BLOCK" is all about, regardless that you are clever and quick. You have
decided that block will define the interface to the drivers and thus the
drivers can not conform to standards set forth by the people creating the
physical layer.

Finally, I offer a public apology to you and all who have suffered on LKML.

Respectfully,

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

2001-12-31 12:59:06

by Jens Axboe

[permalink] [raw]
Subject: Re: You WIN ...


(sat on this for a few days so as not to boil over)

On Sat, Dec 29 2001, Andre Hedrick wrote:
>
> Jens,
>
> You win -- it is not worth trying to work with you at this time.
> All you and I have done is become bitter enemies.
>
> Next, I do not know all the details of the kernel, but what I know is
> that neither one of us is willing to listen and learn. However, I have
> tried to get answers from you publicly and privately, and nothing. You may

I've answered lots of your mails and inquiries on irc, I frankly cannot
see how I can improve there.

> get your wish granted to replace me in the future, as this has been your
> stated goal from the past to me directly.

I've never stated that I want to replace you. In fact I've stated
several times that I definitely do not want to maintain low level IDE
code (or any other directly hardware-related driver, nothing but
trouble). I stick to core kernel mainly, makes me happy.

> In closing, there were several cases of filesystem corruption based on
> partition offsets and other various items. This was totally unacceptable
> for the most part. I truly think that you do not see what the object of

Please -- there was _one_ case of a one-off in partition handling with
an obscure partition format caused by a missing include, *this was not a
bio problem*. Please list the "various other items" for me. Are you
making stuff up again now?

You seem to naively believe that if your driver passes some ata analyzer
tests or follows the specification state diagrams to the letter, then
it's perfect. That is just so obviously wrong. How about timing-related
bugs in your driver under different circumstances? SMP (or just irq)
related races?

I have seen data corruption several times while developing bio
(expected, I'm not perfect), however _none_ of these could have been
avoided by using an analyzer. The low level block driver just did what
I/bio asked it to do, regardless of whether the data contents or data
direction were right or not. Too bad.

We have seen data corruption in stable kernels before after block or IDE
changes. The former was due to missing locking lately, or head-active
list corruption. The latter was a plugging bug in IDE. Neither of these
could have been caught with an analyzer.

> "BLOCK" is all about, regardless that you are clever and quick. You have
> decided that block will define the interface to the drivers and thus the
> drivers can not conform to standards set forth by the people creating the
> physical layer.

Not so, I'm not telling drivers what to do. I'm making the block
interface as flexible as I can for drivers, while also making it easier
to write a block driver and _get it right_.

> Finally, I offer a public apology to you and all who have suffered on LKML.

Thank you

--
Jens Axboe

2001-12-31 21:08:22

by Willem Riede

[permalink] [raw]
Subject: Re: You WIN ...

On 2001.12.31 07:58 Jens Axboe wrote:
>
> (sat on this for a few days not to boil over)
>
> On Sat, Dec 29 2001, Andre Hedrick wrote:
> >
> > Jens,
> >
> > You win -- it is not worth trying to work with you at this time.
> > All you and I have done is become bitter enemies.
>
I'm not sure that entering this debate is a good idea, but it is
so sad to see two such valuable contributors to Linux bicker
with each other...

On the infrequent occasion that I had to exchange email with Andre,
I've realized it is very hard to communicate with him (sorry Andre),
so I understand how Jens must be feeling.

On the other hand - and that may just be my problem, as I don't claim
to have enough expertise - I've read Suparna's and Jens' notes,
reviewed the code changes and still don't understand where the
whole ide/scsi/block/char i/o re-design is going. (I can see what has
been done - that's well documented, but not what's coming or why - a
discussion of the decomposition into layers, what each one's responsibility
is and what the API between them is, and the simplifications/improvements
that result.)

Likewise, I've not been able to discern from Andre's posts what it is
he's trying to do. No idea how that violates what Jens wants.

But Andre is clearly an unrivaled expert in his area, and the recent
results posted to this list are impressive, so I understand that
Andre feels frustrated not to be able to get those improvements into
the kernel.

As a linux user, I certainly want the benefit of what both parties have
to offer. It would have to satisfy Linus' taste, fit Jens' design, and
realize Andre's improvements. There ought to be a way for that to happen.

Now, if only I was smart enough to figure that one out :-(

Regards, Willem Riede.
(maintainer of osst tape driver)