Date: Fri, 20 Mar 2009 09:09:36 -0400
From: Mark Lord
Organization: Real-Time Remedies Inc.
To: Andrew Morton
Cc: Norman Diamond, linux-kernel@vger.kernel.org, linux-ide@vger.kernel.org
Subject: Re: Overagressive failing of disk reads, both LIBATA and IDE
Message-ID: <49C39590.2040705@rtr.ca>
In-Reply-To: <20090320030012.2f19f709.akpm@linux-foundation.org>

Andrew Morton wrote:
> On Thu, 19 Mar 2009 23:32:55 -0400 Mark Lord wrote:
>
>> Norman Diamond wrote:
>>> For months I was wondering how a disk could do this:
>>>
>>>   dd if=/dev/hda of=/dev/null bs=512 skip=551540 count=4   # succeeds
>>>   dd if=/dev/hda of=/dev/null bs=512 skip=551544 count=4   # succeeds
>>>   dd if=/dev/hda of=/dev/null bs=512 skip=551540 count=8   # fails
>>>
>>> It turns out the disk isn't doing that.  Linux is.  The old IDE drivers
>>> did it, but with LIBATA the same thing happens to /dev/sda.  In later
>>> examples also, the same happens to /dev/sda as to /dev/hda.
>> ..
>>
>> You can blame me for the IDE driver not doing that properly.
>> But for libata, it's the SCSI layer.
>>
>> I've been patching this for years for my clients,
>> and will be updating the patch soon-ish and trying
>> again to get it into upstream kernels.
>>
>> Here's the (now ancient) 2.6.20 version for SLES10:
>>
>> * * *
>>
>> Allow SCSI to continue with the remaining blocks of a request
>> after encountering a media error.  Otherwise, it may just fail
>> the entire request, even though some blocks were fine and needed
>> by a completely different process than the one that wanted the
>> bad block(s).
>>
>> Signed-off-by: Mark Lord
>>
>> --- linux-2.6.16.60-0.6/drivers/scsi/scsi_lib.c	2008-03-10 13:46:03.000000000 -0400
>> +++ linux/drivers/scsi/scsi_lib.c	2008-03-21 11:54:09.000000000 -0400
>> @@ -888,6 +888,12 @@
>>  	 */
>>  	if (sense_valid && !sense_deferred) {
>>  		switch (sshdr.sense_key) {
>> +		case MEDIUM_ERROR:
>> +			/* Bad sector: fail it, then continue with the rest of the request. */
>> +			if (scsi_end_request(cmd, 0, cmd->device->sector_size, 1) == NULL) {
>> +				cmd->retries = 0;	/* go around again */
>> +				return;
>> +			}
>>  		case UNIT_ATTENTION:
>>  			if (cmd->device->removable) {
>>  				/* Detected disc change.  Set a bit
>
> Once upon a time the VFS would fall back to single page reads when a large
> readahead request failed.  That's probably still the case.
>
> It was more by accident than by design, but it had (has) the desired effect?
..

Ahh.. but the block layer efficiently merges adjacent sectors from
multiple processes into single requests.  Generally a good thing, that,
but it does mean that one bad sector can currently trigger I/O failures
for several processes which aren't even interested in the bad sector.

-ml
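[Editorial note: the fallback strategy discussed in this thread — attempt one large read, and on a media error retry sector by sector so that only the genuinely bad sectors are lost — can also be sketched from user space. The sketch below is purely illustrative and not from the thread; the function name and the 512-byte sector size are assumptions.]

```python
import os

SECTOR = 512  # assumed logical sector size; real devices may use 4096


def read_with_fallback(fd, offset, nsectors):
    """Read nsectors starting at byte offset.

    Returns a list of (sector_index, data) pairs; data is None for a
    sector that could not be read. First tries one large read (the fast
    path the block layer would merge); on an I/O error, falls back to
    per-sector reads so good sectors around a bad one are still salvaged.
    """
    try:
        data = os.pread(fd, nsectors * SECTOR, offset)
        return [(i, data[i * SECTOR:(i + 1) * SECTOR])
                for i in range(nsectors)]
    except OSError:
        out = []
        for i in range(nsectors):
            try:
                out.append((i, os.pread(fd, SECTOR, offset + i * SECTOR)))
            except OSError:
                out.append((i, None))  # bad sector: record it, keep going
        return out
```

This mirrors what `dd conv=noerror` does, and what the patch above achieves inside the kernel: the bad sector alone fails, instead of the whole merged request.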