Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934289AbXEUTbF (ORCPT ); Mon, 21 May 2007 15:31:05 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1765442AbXEUTVu (ORCPT ); Mon, 21 May 2007 15:21:50 -0400
Received: from 216-99-217-87.dsl.aracnet.com ([216.99.217.87]:46412 "EHLO sous-sol.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1765440AbXEUTVs (ORCPT ); Mon, 21 May 2007 15:21:48 -0400
Message-Id: <20070521191734.512551000@sous-sol.org>
References: <20070521191612.800400000@sous-sol.org>
User-Agent: quilt/0.46-1
Date: Mon, 21 May 2007 12:16:51 -0700
From: Chris Wright
To: linux-kernel@vger.kernel.org, stable@kernel.org, Andrew Morton
Cc: Justin Forbes, Zwane Mwaikambo, "Theodore Ts'o", Randy Dunlap,
	Dave Jones, Chuck Wolber, Chris Wedgwood, Michael Krufky,
	Chuck Ebbert, torvalds@linux-foundation.org, alan@lxorguk.ukuu.org.uk,
	NeilBrown, linux-raid@vger.kernel.org
Subject: [patch 39/69] md: Avoid a possibility that a read error can wrongly propagate through md/raid1 to a filesystem.
Content-Disposition: inline; filename=md-avoid-a-possibility-that-a-read-error-can-wrongly-propagate-through-md-raid1-to-a-filesystem.patch
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3067
Lines: 89

-stable review patch.  If anyone has any objections, please let us know.
---------------------

From: NeilBrown

When a raid1 has only one working drive, we want a read error to
propagate up to the filesystem, as there is no point failing the last
drive in an array.

Currently the code that performs this check is racy.  If a write and a
read are both submitted to a device on a 2-drive raid1, and the write
fails followed by the read failing, the read will see that there is
only one working drive and will pass the failure up, even though the
one working drive is actually the *other* one.

So, tighten up the locking.

Signed-off-by: Neil Brown
Signed-off-by: Chris Wright

---
 drivers/md/raid1.c |   33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)
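For reviewers, the tightened check can be read in isolation as the small
helper sketched below.  The helper name and its extraction into a separate
function are illustrative only; the fields, the Faulty test and the
device_lock usage are the ones introduced by the first hunk of the patch.

	/* Sketch: should a failed read on 'mirror' be reported upwards
	 * instead of being retried on another mirror?  The degraded count
	 * and the per-device Faulty bit are sampled under device_lock, so
	 * a concurrent write failure cannot make us blame the wrong device.
	 */
	static int read_error_is_final(conf_t *conf, r1bio_t *r1_bio, int mirror)
	{
		unsigned long flags;
		int final = 0;

		spin_lock_irqsave(&conf->device_lock, flags);
		if (r1_bio->mddev->degraded == conf->raid_disks ||
		    /* every device has failed: nothing left to retry on */
		    (r1_bio->mddev->degraded == conf->raid_disks - 1 &&
		     !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
			/* only the device we read from is still non-Faulty,
			 * so it is the last working one */
			final = 1;
		spin_unlock_irqrestore(&conf->device_lock, flags);

		return final;
	}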
diff .prev/drivers/md/raid1.c ./drivers/md/raid1.c
--- linux-2.6.21.1.orig/drivers/md/raid1.c
+++ linux-2.6.21.1/drivers/md/raid1.c
@@ -271,21 +271,25 @@ static int raid1_end_read_request(struct
 	 */
 	update_head_pos(mirror, r1_bio);
 
-	if (uptodate || (conf->raid_disks - conf->mddev->degraded) <= 1) {
-		/*
-		 * Set R1BIO_Uptodate in our master bio, so that
-		 * we will return a good error code for to the higher
-		 * levels even if IO on some other mirrored buffer fails.
-		 *
-		 * The 'master' represents the composite IO operation to
-		 * user-side. So if something waits for IO, then it will
-		 * wait for the 'master' bio.
+	if (uptodate)
+		set_bit(R1BIO_Uptodate, &r1_bio->state);
+	else {
+		/* If all other devices have failed, we want to return
+		 * the error upwards rather than fail the last device.
+		 * Here we redefine "uptodate" to mean "Don't want to retry"
 		 */
-		if (uptodate)
-			set_bit(R1BIO_Uptodate, &r1_bio->state);
+		unsigned long flags;
+		spin_lock_irqsave(&conf->device_lock, flags);
+		if (r1_bio->mddev->degraded == conf->raid_disks ||
+		    (r1_bio->mddev->degraded == conf->raid_disks-1 &&
+		     !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
+			uptodate = 1;
+		spin_unlock_irqrestore(&conf->device_lock, flags);
+	}
 
+	if (uptodate)
 		raid_end_bio_io(r1_bio);
-	} else {
+	else {
 		/*
 		 * oops, read error:
 		 */
@@ -992,13 +996,14 @@ static void error(mddev_t *mddev, mdk_rd
 		unsigned long flags;
 		spin_lock_irqsave(&conf->device_lock, flags);
 		mddev->degraded++;
+		set_bit(Faulty, &rdev->flags);
 		spin_unlock_irqrestore(&conf->device_lock, flags);
 		/*
 		 * if recovery is running, make sure it aborts.
 		 */
 		set_bit(MD_RECOVERY_ERR, &mddev->recovery);
-	}
-	set_bit(Faulty, &rdev->flags);
+	} else
+		set_bit(Faulty, &rdev->flags);
 	set_bit(MD_CHANGE_DEVS, &mddev->flags);
 	printk(KERN_ALERT "raid1: Disk failure on %s, disabling device. \n"
 		"	Operation continuing on %d devices\n",

-- 

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/