2008-01-08 21:23:40

by Al Boldi

Subject: [RFD] Incremental fsck

Andi Kleen wrote:
> Theodore Tso <[email protected]> writes:
> > Now, there are good reasons for doing periodic checks every N mounts
> > and after M months. And it has to do with PC class hardware. (Ted's
> > aphorism: "PC class hardware is cr*p").
>
> If these reasons are good ones (some skepticism here) then the correct
> way to really handle this would be to do regular background scrubbing
> during runtime; ideally with metadata checksums so that you can actually
> detect all corruption.
>
> But since fsck is so slow and disks are so big this whole thing
> is a ticking time bomb now. e.g. it is not uncommon to require tens
> of minutes or even hours of fsck time and some server that reboots
> only every few months will eat that when it happens to reboot.
> This means you get a quite long downtime.

Has there been some thought about an incremental fsck?

You know, somehow fencing a sub-dir to do an online fsck?


Thanks for some thoughts!

--
Al


2008-01-08 21:41:22

by Rik van Riel

Subject: Re: [RFD] Incremental fsck

On Wed, 9 Jan 2008 00:22:55 +0300
Al Boldi <[email protected]> wrote:

> Has there been some thought about an incremental fsck?
>
> You know, somehow fencing a sub-dir to do an online fsck?

Search for "chunkfs"

--
All rights reversed.

2008-01-08 21:42:27

by alan

Subject: Re: [RFD] Incremental fsck

> Andi Kleen wrote:
>> Theodore Tso <[email protected]> writes:
>> > Now, there are good reasons for doing periodic checks every N mounts
>> > and after M months. And it has to do with PC class hardware. (Ted's
>> > aphorism: "PC class hardware is cr*p").
>>
>> If these reasons are good ones (some skepticism here) then the correct
>> way to really handle this would be to do regular background scrubbing
>> during runtime; ideally with metadata checksums so that you can actually
>> detect all corruption.
>>
>> But since fsck is so slow and disks are so big this whole thing
>> is a ticking time bomb now. e.g. it is not uncommon to require tens
>> of minutes or even hours of fsck time and some server that reboots
>> only every few months will eat that when it happens to reboot.
>> This means you get a quite long downtime.
>
> Has there been some thought about an incremental fsck?

Is that anything like a cluster fsck? ]:>

2008-01-09 04:42:26

by Al Boldi

Subject: Re: [RFD] Incremental fsck

Rik van Riel wrote:
> Al Boldi <[email protected]> wrote:
> > Has there been some thought about an incremental fsck?
> >
> > You know, somehow fencing a sub-dir to do an online fsck?
>
> Search for "chunkfs"

Sure, and there is TileFS too.

But why wouldn't it be possible to do this on the current fs infrastructure,
using just a smart fsck, working incrementally on some sub-dir?


Thanks!

--
Al

2008-01-09 07:46:16

by Valerie Henson

Subject: Re: [RFD] Incremental fsck

On Jan 8, 2008 8:40 PM, Al Boldi <[email protected]> wrote:
> Rik van Riel wrote:
> > Al Boldi <[email protected]> wrote:
> > > Has there been some thought about an incremental fsck?
> > >
> > > You know, somehow fencing a sub-dir to do an online fsck?
> >
> > Search for "chunkfs"
>
> Sure, and there is TileFS too.
>
> But why wouldn't it be possible to do this on the current fs infrastructure,
> using just a smart fsck, working incrementally on some sub-dir?

Several data structures are file system wide and require finding every
allocated file and block to check that they are correct. In
particular, block and inode bitmaps can't be checked per subdirectory.

http://infohost.nmt.edu/~val/review/chunkfs.pdf

-VAL

2008-01-09 08:04:52

by Valdis Klētnieks

Subject: Re: [RFD] Incremental fsck

On Wed, 09 Jan 2008 07:40:12 +0300, Al Boldi said:

> But why wouldn't it be possible to do this on the current fs infrastructure,
> using just a smart fsck, working incrementally on some sub-dir?

If you have /home/usera, /home/userb, and /home/userc, the vast majority of
fs screw-ups can't be detected by only looking at one sub-dir. For example,
you can't tell definitively that all blocks referenced by an inode under
/home/usera are properly only allocated to one file until you *also* look at
the inodes under user[bc]. Heck, you can't even tell if the link count for
a file is correct unless you walk the entire filesystem - you can find a file
with a link count of 3 in the inode, and you find one reference under usera,
and a second under userb - you can't tell if the count is one too high or
not until you walk through userc and actually see (or fail to see) a third
directory entry referencing it.



2008-01-09 09:18:00

by Andreas Dilger

Subject: Re: [RFD] Incremental fsck

Andi Kleen wrote:
>> Theodore Tso <[email protected]> writes:
>> > Now, there are good reasons for doing periodic checks every N mounts
>> > and after M months. And it has to do with PC class hardware. (Ted's
>> > aphorism: "PC class hardware is cr*p").
>>
>> If these reasons are good ones (some skepticism here) then the correct
>> way to really handle this would be to do regular background scrubbing
>> during runtime; ideally with metadata checksums so that you can actually
>> detect all corruption.
>>
>> But since fsck is so slow and disks are so big this whole thing
>> is a ticking time bomb now. e.g. it is not uncommon to require tens
>> of minutes or even hours of fsck time and some server that reboots
>> only every few months will eat that when it happens to reboot.
>> This means you get a quite long downtime.
>
> Has there been some thought about an incremental fsck?

While an _incremental_ fsck isn't so easy for existing filesystem types,
what is pretty easy to automate is making a read-only snapshot of a
filesystem via LVM/DM and then running e2fsck against that. The kernel
and filesystem have hooks to flush the changes from cache and make the
on-disk state consistent.

You can then set the ext[234] superblock mount count and last check
time via tune2fs if all is well, or schedule an outage if there are
inconsistencies found.

There is a copy of this script at:
http://osdir.com/ml/linux.lvm.devel/2003-04/msg00001.html

Note that it might need some tweaks to run with DM/LVM2 commands/output,
but is mostly what is needed.
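
For illustration, a minimal sketch of that cycle (this is not the
referenced script; the volume group "vg0", logical volume "home",
snapshot size and error handling are all placeholder assumptions):

#!/usr/bin/env python3
# Sketch only: snapshot, check the snapshot, reset the clean counters.
import subprocess, sys

VG, LV, SNAP = "vg0", "home", "home-fscksnap"

def run(*cmd):
    return subprocess.call(list(cmd))

# 1. Snapshot the live volume; the kernel/fs hooks flush caches so the
#    snapshot is a consistent image of the on-disk state.
run("lvcreate", "--snapshot", "--size", "1G",
    "--name", SNAP, "/dev/%s/%s" % (VG, LV))
try:
    # 2. Check the snapshot read-only (-n answers "no" to all fixes).
    rc = run("e2fsck", "-f", "-n", "/dev/%s/%s" % (VG, SNAP))
finally:
    # 3. Drop the snapshot whatever the outcome.
    run("lvremove", "-f", "/dev/%s/%s" % (VG, SNAP))

if rc == 0:
    # 4. Clean: reset mount count and last-check time on the real volume.
    run("tune2fs", "-C", "0", "-T", "now", "/dev/%s/%s" % (VG, LV))
else:
    sys.stderr.write("e2fsck reported problems; schedule an outage\n")
    sys.exit(1)

The filesystem stays mounted read/write the whole time; only the
snapshot is ever checked.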

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

2008-01-09 11:55:27

by Al Boldi

Subject: Re: [RFD] Incremental fsck

Valerie Henson wrote:
> On Jan 8, 2008 8:40 PM, Al Boldi <[email protected]> wrote:
> > Rik van Riel wrote:
> > > Al Boldi <[email protected]> wrote:
> > > > Has there been some thought about an incremental fsck?
> > > >
> > > > You know, somehow fencing a sub-dir to do an online fsck?
> > >
> > > Search for "chunkfs"
> >
> > Sure, and there is TileFS too.
> >
> > But why wouldn't it be possible to do this on the current fs
> > infrastructure, using just a smart fsck, working incrementally on some
> > sub-dir?
>
> Several data structures are file system wide and require finding every
> allocated file and block to check that they are correct. In
> particular, block and inode bitmaps can't be checked per subdirectory.

Ok, but let's look at this a bit more opportunistic / optimistic.

Even after a black-out shutdown, the corruption is pretty minimal, using
ext3fs at least. So let's take advantage of this fact and do an optimistic
fsck, to assure integrity per-dir, and assume no external corruption. Then
we release this checked dir to the wild (optionally ro), and check the next.
Once we find external inconsistencies we either fix it unconditionally,
based on some preconfigured actions, or present the user with options.

All this could be per-dir or using some form of on-the-fly file-block-zoning.

And there probably is a lot more to it, but it should conceptually be
possible, with more thoughts though...

> http://infohost.nmt.edu/~val/review/chunkfs.pdf


Thanks!

--
Al

2008-01-09 14:44:26

by Rik van Riel

Subject: Re: [RFD] Incremental fsck

On Wed, 9 Jan 2008 14:52:14 +0300
Al Boldi <[email protected]> wrote:

> Ok, but let's look at this a bit more opportunistic / optimistic.

You can't play fast and loose with data integrity.

Besides, if we looked at things optimistically, we would conclude
that no fsck will be needed, ever :)

> > http://infohost.nmt.edu/~val/review/chunkfs.pdf

You will really want to read this paper, if you haven't already.

--
All Rights Reversed

2008-01-10 13:27:41

by Al Boldi

Subject: Re: [RFD] Incremental fsck

Rik van Riel wrote:
> Al Boldi <[email protected]> wrote:
> > Ok, but let's look at this a bit more opportunistic / optimistic.
>
> You can't play fast and loose with data integrity.

Correct, but you have to be realistic...

> Besides, if we looked at things optimistically, we would conclude
> that no fsck will be needed,

And that's the reality, because people are mostly optimistic and feel
extremely tempted to just force-mount a dirty ext3fs, instead of waiting
hours-on-end for a complete fsck, which mostly comes back with some benign
"inode should be zero" warning.

> ever :)

Well not ever, but most people probably fsck during scheduled shutdowns, or
when they are forced to, due to online fs accessibility errors.

> > > http://infohost.nmt.edu/~val/review/chunkfs.pdf
>
> You will really want to read this paper, if you haven't already.

Definitely a good read, but attacking the problem from a completely different
POV.

BTW: Dropped some cc's due to bounces.


Thanks!

--
Al

2008-01-11 14:21:42

by Bodo Eggert

Subject: Re: [RFD] Incremental fsck

Al Boldi <[email protected]> wrote:

> Even after a black-out shutdown, the corruption is pretty minimal, using
> ext3fs at least. So let's take advantage of this fact and do an optimistic
> fsck, to assure integrity per-dir, and assume no external corruption. Then
> we release this checked dir to the wild (optionally ro), and check the next.
> Once we find external inconsistencies we either fix it unconditionally,
> based on some preconfigured actions, or present the user with options.

Maybe we can know the changes that need to be done in order to fix the
filesystem. Let's record this information in - eh - let's call it a journal!

2008-01-12 10:21:53

by Al Boldi

Subject: Re: [RFD] Incremental fsck

Bodo Eggert wrote:
> Al Boldi <[email protected]> wrote:
> > Even after a black-out shutdown, the corruption is pretty minimal, using
> > ext3fs at least. So let's take advantage of this fact and do an
> > optimistic fsck, to assure integrity per-dir, and assume no external
> > corruption. Then we release this checked dir to the wild (optionally
> > ro), and check the next. Once we find external inconsistencies we either
> > fix it unconditionally, based on some preconfigured actions, or present
> > the user with options.
>
> Maybe we can know the changes that need to be done in order to fix the
> filesystem. Let's record this information in - eh - let's call it a
> journal!

Don't mistake data=journal as an fsck replacement.


Thanks!

--
Al

2008-01-12 14:52:38

by Theodore Ts'o

Subject: Re: [RFD] Incremental fsck

On Wed, Jan 09, 2008 at 02:52:14PM +0300, Al Boldi wrote:
>
> Ok, but let's look at this a bit more opportunistic / optimistic.
>
> Even after a black-out shutdown, the corruption is pretty minimal, using
> ext3fs at least.
>

After an unclean shutdown, assuming you have decent hardware that
doesn't lie about when blocks hit iron oxide, you shouldn't have any
corruption at all. If you have crappy hardware, then all bets are off....

> So let's take advantage of this fact and do an optimistic fsck, to
> assure integrity per-dir, and assume no external corruption. Then
> we release this checked dir to the wild (optionally ro), and check
> the next. Once we find external inconsistencies we either fix it
> unconditionally, based on some preconfigured actions, or present the
> user with options.

So what can you check? The *only* thing you can check is whether or
not the directory syntax looks sane, whether the inode structure looks
sane, and whether or not the blocks reported as belonging to an inode
look sane.

What is very hard to check is whether or not the link count on the
inode is correct. Suppose the link count is 1, but there are actually
two directory entries pointing at it. Now when someone unlinks the
file through one of the directory entries, the link count will go
to zero, and the blocks will start to get reused, even though the
inode is still accessible via another pathname. Oops. Data Loss.

This is why doing incremental, on-line fsck'ing is *hard*. You're not
going to find this while doing each directory one at a time, and if
the filesystem is changing out from under you, it gets worse. And
it's not just the hard link count. There is a similar issue with the
block allocation bitmap. Detecting the case where two files are
simultaneously claiming the same block can't be done if you are doing
it incrementally, and if
the filesystem is changing out from under you, it's impossible, unless
you also have the filesystem telling you every single change while it
is happening, and you keep an insane amount of bookkeeping.

One thing that you *might* be able to do is to mount a filesystem readonly,
check it in the background while you allow users to access it
read-only. There are a few caveats, however ---- (1) some filesystem
errors may cause the data to be corrupt, or in the worst case, could
cause the system to panic (that would arguably be a
filesystem/kernel bug, but we've not necessarily done as much testing
here as we should.) (2) if there were any filesystem errors found,
you would need to completely unmount the filesystem to flush the inode
cache and remount it before it would be safe to remount the filesystem
read/write. You can't just do a "mount -o remount" if the filesystem
was modified under the OS's nose.

> All this could be per-dir or using some form of on-the-fly file-block-zoning.
>
> And there probably is a lot more to it, but it should conceptually be
> possible, with more thoughts though...

Many things are possible, in the NASA sense of "with enough thrust,
anything will fly". Whether or not it is *useful* and *worthwhile*
are of course different questions! :-)

- Ted

2008-01-12 23:56:16

by Daniel Phillips

Subject: Re: [RFD] Incremental fsck

On Wednesday 09 January 2008 01:16, Andreas Dilger wrote:
> While an _incremental_ fsck isn't so easy for existing filesystem
> types, what is pretty easy to automate is making a read-only snapshot
> of a filesystem via LVM/DM and then running e2fsck against that. The
> kernel and filesystem have hooks to flush the changes from cache and
> make the on-disk state consistent.
>
> You can then set the ext[234] superblock mount count and last
> check time via tune2fs if all is well, or schedule an outage if there
> are inconsistencies found.
>
> There is a copy of this script at:
> http://osdir.com/ml/linux.lvm.devel/2003-04/msg00001.html
>
> Note that it might need some tweaks to run with DM/LVM2
> commands/output, but is mostly what is needed.

You can do this now with ddsnap (an out-of-tree device mapper target)
either by checking a local snapshot or a replicated snapshot on a
different machine, see:

http://zumastor.org/

Doing the check on a remote machine seems attractive because the fsck
does not create a load on the server.

Regards,

Daniel

2008-01-13 11:06:43

by Al Boldi

Subject: Re: [RFD] Incremental fsck

Theodore Tso wrote:
> On Wed, Jan 09, 2008 at 02:52:14PM +0300, Al Boldi wrote:
> > Ok, but let's look at this a bit more opportunistic / optimistic.
> >
> > Even after a black-out shutdown, the corruption is pretty minimal, using
> > ext3fs at least.
>
> After an unclean shutdown, assuming you have decent hardware that
> doesn't lie about when blocks hit iron oxide, you shouldn't have any
> corruption at all. If you have crappy hardware, then all bets are off....

Maybe with barriers...

> > So let's take advantage of this fact and do an optimistic fsck, to
> > assure integrity per-dir, and assume no external corruption. Then
> > we release this checked dir to the wild (optionally ro), and check
> > the next. Once we find external inconsistencies we either fix it
> > unconditionally, based on some preconfigured actions, or present the
> > user with options.
>
> So what can you check? The *only* thing you can check is whether or
> not the directory syntax looks sane, whether the inode structure looks
> sane, and whether or not the blocks reported as belonging to an inode
> look sane.

Which would make this dir/area ready for read/write access.

> What is very hard to check is whether or not the link count on the
> inode is correct. Suppose the link count is 1, but there are actually
> two directory entries pointing at it. Now when someone unlinks the
> file through one of the directory entries, the link count will go
> to zero, and the blocks will start to get reused, even though the
> inode is still accessible via another pathname. Oops. Data Loss.

We could buffer this, and only actually overwrite when we are completely
finished with the fsck.

> This is why doing incremental, on-line fsck'ing is *hard*. You're not
> going to find this while doing each directory one at a time, and if
> the filesystem is changing out from under you, it gets worse. And
> it's not just the hard link count. There is a similar issue with the
> block allocation bitmap. Detecting the case where two files are
> simultaneously claiming the same block can't be done if you are doing
> it incrementally, and if
> the filesystem is changing out from under you, it's impossible, unless
> you also have the filesystem telling you every single change while it
> is happening, and you keep an insane amount of bookkeeping.

Ok, you have a point, so how about we change the implementation detail a bit,
from external fsck to internal fsck, leveraging the internal fs bookkeeping,
while allowing immediate but controlled read/write access.


Thanks for more thoughts!

--
Al

2008-01-13 17:19:39

by Pavel Machek

Subject: Re: [RFD] Incremental fsck

On Sat 2008-01-12 09:51:40, Theodore Tso wrote:
> On Wed, Jan 09, 2008 at 02:52:14PM +0300, Al Boldi wrote:
> >
> > Ok, but let's look at this a bit more opportunistic / optimistic.
> >
> > Even after a black-out shutdown, the corruption is pretty minimal, using
> > ext3fs at least.
> >
>
> After an unclean shutdown, assuming you have decent hardware that
> doesn't lie about when blocks hit iron oxide, you shouldn't have any
> corruption at all. If you have crappy hardware, then all bets are off....

What hardware is crappy here? Let's say... internal hdd in thinkpad
x60?

What are ext3 expectations of disk (is there doc somewhere)? For
example... if disk does not lie, but powerfail during write damages
the sector -- is ext3 still going to work properly?

If disk does not lie, but powerfail during write may cause random
numbers to be returned on read -- can fsck handle that?

What about a disk that kills 5 sectors around the sector being written during
powerfail; can ext3 survive that?

Pavel

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-13 17:45:31

by Alan

Subject: Re: [RFD] Incremental fsck

> What are ext3 expectations of disk (is there doc somewhere)? For
> example... if disk does not lie, but powerfail during write damages
> the sector -- is ext3 still going to work properly?

Nope. However the few disks that did this rapidly got firmware updates
because there are other OS's that can't cope.

> If disk does not lie, but powerfail during write may cause random
> numbers to be returned on read -- can fsck handle that?

most of the time. and fsck knows about writing sectors to remove read
errors in metadata blocks.

> What about a disk that kills 5 sectors around the sector being written during
> powerfail; can ext3 survive that?

generally. Note btw that for added fun there is nothing that guarantees
the blocks around a block on the media are sequentially numbered. They
usually are, but you never know.

Alan

2008-01-14 00:23:31

by Daniel Phillips

Subject: Re: [RFD] Incremental fsck

Hi Ted,

On Saturday 12 January 2008 06:51, Theodore Tso wrote:
> What is very hard to check is whether or not the link count on the
> inode is correct. Suppose the link count is 1, but there are
> actually two directory entries pointing at it. Now when someone
> unlinks the file through one of the directory entries, the link
> count will go to zero, and the blocks will start to get reused, even
> though the inode is still accessible via another pathname. Oops.
> Data Loss.
>
> This is why doing incremental, on-line fsck'ing is *hard*. You're
> not going to find this while doing each directory one at a time, and
> if the filesystem is changing out from under you, it gets worse. And
> it's not just the hard link count. There is a similar issue with the
> block allocation bitmap. Detecting the case where two files are
> simultaneously claiming the same block can't be done if you are doing
> it incrementally, and
> if the filesystem is changing out from under you, it's impossible,
> unless you also have the filesystem telling you every single change
> while it is happening, and you keep an insane amount of bookkeeping.

In this case I am listening to Chicken Little carefully and really do
believe the sky will fall if we fail to come up with an incremental
online fsck some time in the next few years. I realize the challenge
verges on insane, but I have been slowly chewing away at this question
for some time.

Val proposes to simplify the problem by restricting the scope of block
pointers and hard links. Best of luck with that, the concept of fault
isolation domains has a nice ring to it. I prefer to stick close to
tried and true Ext3 and not change the basic algorithms.

Rather than restricting pointers, I propose to add a small amount of new
metadata to accelerate global checking. The idea is to be able to
build per-group reverse maps very quickly, to support mapping physical
blocks back to inodes that own them, and mapping inodes back to the
directories that reference them.

I see on-the-fly filesystem reverse mapping as useful for more than just
online fsck. For example it would be nice to be able to work backwards
efficiently from a list of changed blocks such as ddsnap produces to a
list of file level changes.

The amount of metadata required to support efficient on-the-fly reverse
mapping is surprisingly small: 2K per block group per terabyte, in a
fixed location at the base of each group. This is consistent with my
goal of producing code that is mergable for Ext4 and backportable to
Ext3.

Building a block reverse map for a given group is easy and efficient.
The first pass walks across the inode table and already maps most of
the physical blocks for typical usage patterns, because most files only
have direct pointers. Index blocks discovered in the first pass go
onto a list to be processed by subsequent passes, which may discover
additional index blocks. Just keep pushing the index blocks back onto
the list and the algorithm terminates when the list is empty. This
builds a reverse map for the group including references to external
groups.
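
As a rough illustration of that worklist loop, here is a toy sketch
operating on an in-memory stand-in for the group's inode table and
index blocks (the data layout and helper names are assumptions for
illustration, not ext3 structures):

BLOCKS_PER_GROUP = 32768          # ext3 default with 4K blocks

def group_of(block):
    return block // BLOCKS_PER_GROUP

def build_block_reverse_map(group_no, inodes, index_blocks):
    # inodes: list of (inode_no, [(block, is_index), ...]) in this group
    # index_blocks: {index_block: [(block, is_index), ...]}
    rmap = {}          # physical block -> owning inode
    external = set()   # other groups referenced from this group
    worklist = []      # index blocks still to be decoded

    # First pass: walk the inode table; most files have only direct
    # pointers, so most blocks are mapped right here.
    for inode_no, pointers in inodes:
        for block, is_index in pointers:
            rmap[block] = inode_no
            if group_of(block) != group_no:
                external.add(group_of(block))
            if is_index:
                worklist.append((block, inode_no))

    # Later passes: index blocks may reference further index blocks, so
    # keep pushing them back onto the list until it is empty.
    while worklist:
        index, owner = worklist.pop()
        for block, is_index in index_blocks.get(index, []):
            rmap[block] = owner
            if group_of(block) != group_no:
                external.add(group_of(block))
            if is_index:
                worklist.append((block, owner))

    return rmap, external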

Note that the recent metadata clustering patch from Abhishek Rai will
speed up this group mapping algorithm significantly because (almost)
all the index blocks can be picked up in one linear read. This should
only take a few milliseconds. One more reason why I think his patch is
an Important Patch[tm].

A data block may be up to four groups removed from its home group,
therefore the reverse mapping process must follow pointers across
groups and map each file entirely to be sure that all pointers to the
group being checked have been discovered. It is possible to construct
a case where a group contains a lot of inodes of big files that are
mostly stored in other groups. Mapping such a group could possibly
require examining all the index blocks on the entire volume. That
would be about 2**18 index blocks per terabyte, which is still within
the realm of practicality.

To generate the inode reverse map for a group, walk each directory in the group,
decoding the index blocks by hand. Strictly speaking, directories
ought to pass block level checking before being reverse mapped, but
there could be many directories in the same group spilling over into a
lot of external groups, so getting all the directory inodes to pass
block level checks at the same time could be difficult with filesystem
writing going on between fsck episodes. Instead, just go ahead and
assume a directory file is ok, and if this is not the case the
directory walk will fail or a block level check will eventually pick up
the problem.

The worst case for directory mapping is much worse than the worst case
for block mapping. A single directory could fill an entire volume.
For such a large directory, reverse mapping is not possible without
keeping the filesystem suspended for an unreasonable time. Either make
the reverse map incremental and maintained on the fly or fall back to a
linear search of the entire directory when doing the checks below, the
latter being easy but very slow. Or just give up on fscking groups
involving the directory. Or maybe I am obsessing about this too much,
because mapping a directory of a million files only requires reading
about 60 MB, and such large directories are very rare.

The group cross reference tables have to be persistently recorded on
disk in order to avoid searching the whole volume for some checks. A
per group bitmap handles this nicely, with as many bits as there are
block groups. Each one bit flags some external group as referencing
the group in which the bitmap is stored. With default settings, a
cross reference bitmap is only 1K per terabyte. Two such bitmaps are
needed per group, one for external block pointers and the other for
external hard links. When needed for processing, a bitmap is converted
into a list or hash table. New cross group references need to be
detected in the filesystem code and saved to disk before the associated
transaction proceeds. Though new cross group references should be
relatively rare, the cross reference bitmaps can be logically
journalled in the commit block of the associated transaction and
batch-updated on journal flush so that there is very little new write
overhead.
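
To put numbers on that, and as a toy model of the two bitmaps
(illustrative only, not an on-disk format):

# With the ext3 default of 128 MiB per block group (4K blocks), one bit
# per group works out to 1 KiB of bitmap per terabyte, as stated above.
GROUP_BYTES = 128 * 1024 * 1024
groups_per_tib = (1024 ** 4) // GROUP_BYTES      # 8192 groups
bitmap_bytes = groups_per_tib // 8               # 1024 bytes = 1 KiB

class CrossRefBitmaps:
    """Bit g in block_refs: group g holds block pointers into this
    group.  Bit g in link_refs: group g holds directory entries (hard
    links) pointing into this group."""
    def __init__(self, ngroups):
        self.block_refs = bytearray((ngroups + 7) // 8)
        self.link_refs = bytearray((ngroups + 7) // 8)

    @staticmethod
    def set_bit(bitmap, group):
        bitmap[group // 8] |= 1 << (group % 8)

    @staticmethod
    def groups_set(bitmap):
        # expand a bitmap into the list of referencing groups
        return [g for g in range(len(bitmap) * 8)
                if bitmap[g // 8] >> (g % 8) & 1]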

Cross group references may be updated lazily on delete, since the only
harm caused by false positives is extra time spent building unneeded
reverse maps. The incremental fsck step "check group cross reference
bitmaps" describes how redundant cross reference bits are detected.
These can be batched up in memory and updated as convenient.

Cached reverse maps can be disturbed by write activity on the
filesystem. The lazy approach is to discard them on any change, which
should work well enough to get started. With additional work, cached
reverse maps can be updated on the fly by the respective get_block,
truncate and directory entry operations.

The incremental fsck algorithm works by checking a volume one block
group at a time, with filesystem operations suspended during the check.
The expected service interruption will be small compared to taking the
volume offline but will occur more often, which might be an issue for
interactive use. Making the algorithm completely bumpless would
require something like a temporary in-memory volume snapshot, followed
by a clever merge of the changed blocks, taking advantage of the
reverse maps to know which checked groups need to be changed back to
unchecked. Beyond the scope of the current effort.

Broadly, there are two layers of integrity to worry about:

1) Block pointers (block level)
2) Directory entries (inode level)

Luckily for lazy programmers, similar techniques and even identical data
structures work for both. Each group has one persistent bitmap to
reference the group via block pointers, and another to show which
groups reference the group via directory entries. In memory, there is
one cached reverse map per group to map blocks to inodes (one to one),
and another to map inodes to directory inodes (one to many).
Algorithms for block and inode level checks are similar, as detailed
below.

With on-demand reverse maps to help us, we do something like:

* Suspend filesystem, flushing dirty page cache to disk
* Build reverse map for this group if needed
* Build reverse maps for groups referencing this group as needed
* Perform checks listed below
* If the checks passed mark the group as checked
* Resume filesystem

The order in which groups are checked can depend opportunistically on
which reverse maps are already cached. Some kind of userspace
interface would let the operator know about checking progress and the
nature of problems detected. Each group records the time last checked,
the time checked successfully, and a few bits indicating the nature of
problems found.

Now for the specific integrity checks, and strategy for each. There are
two interesting kinds of potentially nonlocal checks:

Downward check: this group may reference other groups that must be
examined together with this group to complete the check. Can be
completed immediately when all references are local, otherwise
make a list of groups needed to continue the check and defer
until convenient to map those groups all at the same time.

Upward check: this group may be referenced by other groups that must
be examined together with this group in order to complete the
check. Need to check the maps of all groups referencing this
group to find incoming references.

To prepare for checking a group, its reverse map is constructed if not
already cached, and the reverse maps for all groups marked as
referencing the group. If that is too many reverse maps then just give
up on trying to fsck that group, or do it very slowly by constructing
each reverse map at the point it is actually needed in a check.

As online fsck works its way through groups a list of pending downward
checks for certain inodes will build up. When this list gets long
enough, find a subset of it involving a reasonably small number of
groups, map those groups and perform the needed checks.

A list of checks and correspondence to e2fsck passes follows.

Inode mode field size and block count (e2fsck pass 1)

Downward check. Do the local inode checks, then walk the inode index
structure counting the blocks. Ensure that the rightmost block lies
within the inode size.

Check block references (e2fsck pass 1)

Upward check. Check that each block found to be locally referenced is
not marked free in the block bitmap. For each block found to have no
local reference, check the maps of the groups referencing this group to
ensure that exactly one of them points at the block, or none if the
block is marked free in the group bitmap.
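
A compact way to state that per-block rule (a sketch; the inputs come
from the reverse maps described earlier, reduced to block -> owner):

def block_reference_ok(marked_free, local_owner, external_owners):
    # marked_free: the block's bit in this group's block bitmap
    # local_owner: owning inode from this group's reverse map, or None
    # external_owners: owners found in the maps of referencing groups
    owners = ([local_owner] if local_owner is not None else []) \
             + list(external_owners)
    return len(owners) == (0 if marked_free else 1)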

Check directory structure (e2fsck pass 2)

Downward check. The same set of directory structure tests as e2fsck,
such as properly formed directory entries, htree nodes, etc.

Check directory inode links (e2fsck pass 3)

Upward check. While walking directory entries, ensure that each
directory inode to be added to the reverse map is not already in the
map and is not marked free in the inode bitmap. For each inode
discovered to have no local link after building the reverse map, check
the reverse maps of the groups referring to this group to ensure that
exactly one of them links to the inode, or that there are no external
links if the inode bitmap indicates the inode is free.

Check inode reference counts (e2fsck pass 4)

Upward check. While walking directory entries, ensure that each
non-directory inode to be added to the reverse map is not marked free in
the inode bitmap. Check that the inode reference count is equal to the
number of references to the inode found in the local reverse map plus
the number of references found in the maps of all groups referencing
this group.
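
The same shape works for the link count rule (a sketch; the maps here
are the inode reverse maps, inode -> list of referencing directories):

def link_count_ok(inode_no, recorded_count, local_map, external_maps):
    found = len(local_map.get(inode_no, ()))
    found += sum(len(m.get(inode_no, ())) for m in external_maps)
    return found == recorded_count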

Check block bitmaps (e2fsck pass 5)

Checking block references above includes ensuring that no block in use
is marked free. Now check that no block marked free in the block
bitmap appears in the local or external block reverse map.

Check inode bitmaps (e2fsck pass 5)

Checking inode references above includes ensuring that no inode in use
is marked free. Now check that no inode marked free in the inode
bitmap appears in the local or external inode reverse map.

Check group cross reference bitmaps

Each time a group is mapped, check that for each external reference
discovered the corresponding bit is set in the external bitmap. Check
that for all groups having an external reference bit set for this
group, this group does in fact reference the external group. Because
cross reference bitmaps are so small they should all fit in cache
comfortably. The buffer cache is ideal for this.

Finally...

Total work required to get something working along these lines looks
significant, but the importance is high, so work is quite likely to go
ahead if the approach survives scrutiny.

Regards,

Daniel

2008-01-15 01:06:12

by Ric Wheeler

Subject: Re: [RFD] Incremental fsck

Pavel Machek wrote:
> On Sat 2008-01-12 09:51:40, Theodore Tso wrote:
>> On Wed, Jan 09, 2008 at 02:52:14PM +0300, Al Boldi wrote:
>>> Ok, but let's look at this a bit more opportunistic / optimistic.
>>>
>>> Even after a black-out shutdown, the corruption is pretty minimal, using
>>> ext3fs at least.
>>>
>> After an unclean shutdown, assuming you have decent hardware that
>> doesn't lie about when blocks hit iron oxide, you shouldn't have any
>> corruption at all. If you have crappy hardware, then all bets are off....
>
> What hardware is crappy here? Let's say... internal hdd in thinkpad
> x60?
>
> What are ext3 expectations of disk (is there doc somewhere)? For
> example... if disk does not lie, but powerfail during write damages
> the sector -- is ext3 still going to work properly?
>
> If disk does not lie, but powerfail during write may cause random
> numbers to be returned on read -- can fsck handle that?
>
> What about a disk that kills 5 sectors around the sector being written during
> powerfail; can ext3 survive that?
>
> Pavel
>

I think that you have to keep in mind the way disks (and other media)
fail. You can get media failures after a successful write or errors that
pop up as the media ages.

Not to mention the way most people run with write cache enabled and no
write barriers enabled - a sure recipe for corruption.

Of course, there are always software errors to introduce corruption even
when we get everything else right ;-)

From what I see, media errors are the number one cause of corruption in
file systems. It is critical that fsck (and any other tools) continue
after an IO error since they are fairly common (just assume that sector
is lost and do your best as you continue on).

ric

2008-01-15 20:17:16

by Pavel Machek

Subject: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Hi!

> > What are ext3 expectations of disk (is there doc somewhere)? For
> > example... if disk does not lie, but powerfail during write damages
> > the sector -- is ext3 still going to work properly?
>
> Nope. However the few disks that did this rapidly got firmware updates
> because there are other OS's that can't cope.
>
> > If disk does not lie, but powerfail during write may cause random
> > numbers to be returned on read -- can fsck handle that?
>
> most of the time. and fsck knows about writing sectors to remove read
> errors in metadata blocks.
>
> > What about a disk that kills 5 sectors around the sector being written during
> > powerfail; can ext3 survive that?
>
> generally. Note btw that for added fun there is nothing that guarantees
> the blocks around a block on the media are sequentially numbered. They
> usually are, but you never know.

Ok, should something like this be added to the documentation?

It would be cool to be able to include a few examples (modern SATA disks
support barriers so are safe, any IDE from 1989 is unsafe), but I do
not know enough about hw...

Signed-off-by: Pavel Machek <[email protected]>

diff --git a/Documentation/filesystems/ext3.txt b/Documentation/filesystems/ext3.txt
index b45f3c1..adfcc9d 100644
--- a/Documentation/filesystems/ext3.txt
+++ b/Documentation/filesystems/ext3.txt
@@ -183,6 +183,18 @@ mke2fs: create a ext3 partition with th
debugfs: ext2 and ext3 file system debugger.
ext2online: online (mounted) ext2 and ext3 filesystem resizer

+Requirements
+============
+
+Ext3 needs disk that does not do write-back caching or disk that
+supports barriers and Linux configuration that can use them.
+
+* if disk damages the sector being written during powerfail, ext3
+ can't cope with that. Fortunately, such disks got firmware updates
+ to fix this long time ago.
+
+* if disk writes random data during powerfail, ext3 should survive
+ that most of the time.

References
==========


--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-15 21:44:10

by David Chinner

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Tue, Jan 15, 2008 at 09:16:53PM +0100, Pavel Machek wrote:
> Hi!
>
> > > What are ext3 expectations of disk (is there doc somewhere)? For
> > > example... if disk does not lie, but powerfail during write damages
> > > the sector -- is ext3 still going to work properly?
> >
> > Nope. However the few disks that did this rapidly got firmware updates
> > because there are other OS's that can't cope.
> >
> > > If disk does not lie, but powerfail during write may cause random
> > > numbers to be returned on read -- can fsck handle that?
> >
> > most of the time. and fsck knows about writing sectors to remove read
> > errors in metadata blocks.
> >
> > > What about a disk that kills 5 sectors around the sector being written during
> > > powerfail; can ext3 survive that?
> >
> > generally. Note btw that for added fun there is nothing that guarantees
> > the blocks around a block on the media are sequentially numbered. They
> > usually are, but you never know.
>
> Ok, should something like this be added to the documentation?
>
> It would be cool to be able to include a few examples (modern SATA disks
> support barriers so are safe, any IDE from 1989 is unsafe), but I do
> not know enough about hw...

ext3 is not the only filesystem that will have trouble due to
volatile write caches. We see problems often enough with XFS
due to volatile write caches that it's in our FAQ:

http://oss.sgi.com/projects/xfs/faq.html#wcache

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

2008-01-15 23:07:23

by Pavel Machek

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Hi!

> > > > What are ext3 expectations of disk (is there doc somewhere)? For
> > > > example... if disk does not lie, but powerfail during write damages
> > > > the sector -- is ext3 still going to work properly?
> > >
> > > Nope. However the few disks that did this rapidly got firmware updates
> > > because there are other OS's that can't cope.
> > >
> > > > If disk does not lie, but powerfail during write may cause random
> > > > numbers to be returned on read -- can fsck handle that?
> > >
> > > most of the time. and fsck knows about writing sectors to remove read
> > > errors in metadata blocks.
> > >
> > > > What about a disk that kills 5 sectors around the sector being written during
> > > > powerfail; can ext3 survive that?
> > >
> > > generally. Note btw that for added fun there is nothing that guarantees
> > > the blocks around a block on the media are sequentially numbered. They
> > > usually are, but you never know.
> >
> > Ok, should something like this be added to the documentation?
> >
> > It would be cool to be able to include a few examples (modern SATA disks
> > support barriers so are safe, any IDE from 1989 is unsafe), but I do
> > not know enough about hw...
>
> ext3 is not the only filesystem that will have trouble due to
> volatile write caches. We see problems often enough with XFS
> due to volatile write caches that it's in our FAQ:
>
> http://oss.sgi.com/projects/xfs/faq.html#wcache

Nice FAQ, yep. Perhaps you should move parts of it to Documentation/,
and I could then make the ext3 FAQ point to it?

I had write cache enabled on my main computer. Oops. I guess that
means we do need better documentation.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-15 23:44:41

by Daniel Phillips

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 15, 2008 6:07 PM, Pavel Machek <[email protected]> wrote:
> I had write cache enabled on my main computer. Oops. I guess that
> means we do need better documentation.

Writeback cache on disk in itself is not bad, it only gets bad if the
disk is not engineered to save all its dirty cache on power loss,
using the disk motor as a generator or alternatively a small battery.
It would be awfully nice to know which brands fail here, if any,
because writeback cache is a big performance booster.

Regards,

Daniel

2008-01-16 00:18:12

by Alan

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

> Writeback cache on disk in itself is not bad, it only gets bad if the
> disk is not engineered to save all its dirty cache on power loss,
> using the disk motor as a generator or alternatively a small battery.
> It would be awfully nice to know which brands fail here, if any,
> because writeback cache is a big performance booster.

AFAIK no drive saves the cache. The worst case cache flush for drives is
several seconds with no retries and a couple of minutes if something
really bad happens.

This is why the kernel has some knowledge of barriers and uses them to
issue flushes when needed.

2008-01-16 01:24:45

by Daniel Phillips

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 15, 2008 7:15 PM, Alan Cox <[email protected]> wrote:
> > Writeback cache on disk in itself is not bad, it only gets bad if the
> > disk is not engineered to save all its dirty cache on power loss,
> > using the disk motor as a generator or alternatively a small battery.
> > It would be awfully nice to know which brands fail here, if any,
> > because writeback cache is a big performance booster.
>
> AFAIK no drive saves the cache. The worst case cache flush for drives is
> several seconds with no retries and a couple of minutes if something
> really bad happens.
>
> This is why the kernel has some knowledge of barriers and uses them to
> issue flushes when needed.

Indeed, you are right, which is supported by actual measurements:

http://sr5tech.com/write_back_cache_experiments.htm

Sorry for implying that anybody has engineered a drive that can do
such a nice thing with writeback cache.

The "disk motor as a generator" tale may not be purely folklore. When
an IDE drive is not in writeback mode, something special needs to be done
to ensure the last write to media is not a scribble.

A small UPS can make writeback mode actually reliable, provided the
system is smart enough to take the drives out of writeback mode when
the line power is off.

Regards,

Daniel

2008-01-16 01:38:22

by Chris Mason

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Tue, 15 Jan 2008 20:24:27 -0500
"Daniel Phillips" <[email protected]> wrote:

> On Jan 15, 2008 7:15 PM, Alan Cox <[email protected]> wrote:
> > > Writeback cache on disk in itself is not bad, it only gets bad
> > > if the disk is not engineered to save all its dirty cache on
> > > power loss, using the disk motor as a generator or alternatively
> > > a small battery. It would be awfully nice to know which brands
> > > fail here, if any, because writeback cache is a big performance
> > > booster.
> >
> > AFAIK no drive saves the cache. The worst case cache flush for
> > drives is several seconds with no retries and a couple of minutes
> > if something really bad happens.
> >
> > This is why the kernel has some knowledge of barriers and uses them
> > to issue flushes when needed.
>
> Indeed, you are right, which is supported by actual measurements:
>
> http://sr5tech.com/write_back_cache_experiments.htm
>
> Sorry for implying that anybody has engineered a drive that can do
> such a nice thing with writeback cache.
>
> The "disk motor as a generator" tale may not be purely folklore. When
> an IDE drive is not in writeback mode, something special needs to be done
> to ensure the last write to media is not a scribble.
>
> A small UPS can make writeback mode actually reliable, provided the
> system is smart enough to take the drives out of writeback mode when
> the line power is off.

We've had mount -o barrier=1 for ext3 for a while now, it makes
writeback caching safe. XFS has this on by default, as does reiserfs.

-chris

2008-01-16 01:44:54

by Daniel Phillips

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Hi Pavel,

Along with this effort, could you let me know if the world actually
cares about online fsck? Now we know how to do it I think, but is it
worth the effort.

Regards,

Daniel

2008-01-16 03:06:04

by Rik van Riel

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Tue, 15 Jan 2008 20:44:38 -0500
"Daniel Phillips" <[email protected]> wrote:

> Along with this effort, could you let me know if the world actually
> cares about online fsck? Now we know how to do it I think, but is it
> worth the effort.

With a filesystem that is compartmentalized and checksums metadata,
I believe that an online fsck is absolutely worth having.

Instead of the filesystem resorting to mounting the whole volume
read-only on certain errors, part of the filesystem can be offlined
while an fsck runs. This could even be done automatically in many
situations.

--
All rights reversed.

2008-01-16 11:49:44

by Pavel Machek

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Hi!

> Along with this effort, could you let me know if the world actually
> cares about online fsck?

I'm not the world's spokesperson (yet ;-).

> Now we know how to do it I think, but is it
> worth the effort.

ext3's "lets fsck on every 20 mounts" is good idea, but it can be
annoying when developing. Having option to fsck while filesystem is
online takes that annoyance away.

So yes, it would be very useful for me...

For long-running servers, this may be less of a problem... but OTOH
their filesystems are not checked at all as long as the servers are
online... so online fsck is actually important there, too, but for
other reasons.

So yes, it is very useful for the world.

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-16 11:51:51

by Pavel Machek

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Tue 2008-01-15 18:44:26, Daniel Phillips wrote:
> On Jan 15, 2008 6:07 PM, Pavel Machek <[email protected]> wrote:
> > I had write cache enabled on my main computer. Oops. I guess that
> > means we do need better documentation.
>
> Writeback cache on disk in itself is not bad, it only gets bad if the
> disk is not engineered to save all its dirty cache on power loss,
> using the disk motor as a generator or alternatively a small battery.
> It would be awfully nice to know which brands fail here, if any,
> because writeback cache is a big performance booster.

Is it?

I guess I should try to measure it. (Linux already does writeback
caching, with 2GB of memory. I wonder how important a disk's 2MB of
cache can be).
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-16 12:22:30

by Valdis Klētnieks

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Wed, 16 Jan 2008 12:51:44 +0100, Pavel Machek said:

> I guess I should try to measure it. (Linux already does writeback
> caching, with 2GB of memory. I wonder how important a disk's 2MB of
> cache can be).

It serves essentially the same purpose as the 'async' option in /etc/exports
(i.e. we declare it "done" when the other end of the wire says it's caught
the data, not when it's actually committed), with similar latency wins. Of
course, it's impedance-matching for bursty traffic - the 2M doesn't do much
at all if you're streaming data to it. For what it's worth, the 80G Seagate
drive in my laptop claims it has 8M, so it probably does 4 times as much
good as 2M. ;)



2008-01-16 16:39:40

by Christoph Hellwig

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Wed, Jan 16, 2008 at 08:43:25AM +1100, David Chinner wrote:
> ext3 is not the only filesystem that will have trouble due to
> volatile write caches. We see problems often enough with XFS
> due to volatile write caches that it's in our FAQ:

In fact it will hit every filesystem. A write-back cache that can't
be forced to write back by the filesystem will cause corruption on
uncontained power loss, period.

2008-01-16 19:07:01

by Bryan Henderson

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

>The "disk motor as a generator" tale may not be purely folklore. When
>an IDE drive is not in writeback mode, something special needs to be done
>to ensure the last write to media is not a scribble.

No it doesn't. The last write _is_ a scribble. Systems that make atomic
updates to disk drives use a shadow update mechanism and write the master
sector twice. If the power fails in the middle of writing one, it will
almost certainly be unreadable due to a CRC failure, and the other one
will have either the old or new master block contents.
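
For illustration, a toy simulation of that double-write scheme: two
copies of the master record, each carrying a sequence number and a
checksum, written alternately, so a torn write leaves the other copy
intact. This is a generic sketch, not any particular on-disk format:

import hashlib, struct

def pack(seq, payload):
    body = struct.pack(">Q", seq) + payload
    return hashlib.sha1(body).digest() + body        # checksum + body

def unpack(raw):
    digest, body = raw[:20], raw[20:]
    if hashlib.sha1(body).digest() != digest:
        return None                                  # torn or garbled copy
    return struct.unpack(">Q", body[:8])[0], body[8:]

def write_master(copies, payload):
    newest = max(s for s, _ in filter(None, map(unpack, copies)))
    copies[(newest + 1) % 2] = pack(newest + 1, payload)  # overwrite older

def read_master(copies):
    valid = [v for v in map(unpack, copies) if v is not None]
    return max(valid)[1]                             # newest readable wins

copies = [pack(0, b"state A"), pack(1, b"state B")]
write_master(copies, b"state C")                     # replaces "state A"
assert read_master(copies) == b"state C"
# Had that write been torn, its checksum would fail and read_master()
# would fall back to the intact "state B" copy.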

And I think there's a problem with drives that, upon sensing the
unreadable sector, assign an alternate even though the sector is fine, and
you eventually run out of spares.


Incidentally, while this primitive behavior applies to IDE (ATA et al)
drives, that isn't the only thing people put filesystems on. Many
important filesystems go on higher level storage subsystems that contain
IDE drives and cache memory and batteries. A device like this _does_ make
sure that all data that it says has been written is actually retrievable
even if there's a subsequent power outage, even while giving the
performance of writeback caching.

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems

2008-01-16 20:21:56

by Alan

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

> And I think there's a problem with drives that, upon sensing the
> unreadable sector, assign an alternate even though the sector is fine, and
> you eventually run out of spares.

You are assuming drives can't tell the difference between stray data loss
and sectors that can't be recovered by rewriting and reuse. I was under
the impression modern drives could do this?

Alan

2008-01-16 20:52:52

by Valerie Henson

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 16, 2008 3:49 AM, Pavel Machek <[email protected]> wrote:
>
> ext3's "lets fsck on every 20 mounts" is good idea, but it can be
> annoying when developing. Having option to fsck while filesystem is
> online takes that annoyance away.

I'm sure everyone on cc: knows this, but for the record you can change
ext3's fsck on N mounts or every N days to something that makes sense
for your use case. Usually I just turn it off entirely and run fsck
by hand when I'm worried:

# tune2fs -c 0 -i 0 /dev/whatever

-VAL

2008-01-16 21:29:00

by Eric Sandeen

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Alan Cox wrote:
>> Writeback cache on disk in itself is not bad, it only gets bad if the
>> disk is not engineered to save all its dirty cache on power loss,
>> using the disk motor as a generator or alternatively a small battery.
>> It would be awfully nice to know which brands fail here, if any,
>> because writeback cache is a big performance booster.
>
> AFAIK no drive saves the cache. The worst case cache flush for drives is
> several seconds with no retries and a couple of minutes if something
> really bad happens.
>
> This is why the kernel has some knowledge of barriers and uses them to
> issue flushes when needed.

Problem is, ext3 has barriers off by default so it's not saving most people.

And then if you turn them on, but have your filesystem on an lvm device,
lvm strips them out again.

-Eric

2008-01-17 02:03:17

by Daniel Phillips

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 16, 2008 2:06 PM, Bryan Henderson <[email protected]> wrote:
> >The "disk motor as a generator" tale may not be purely folklore. When
> >an IDE drive is not in writeback mode, something special needs to be done
> >to ensure the last write to media is not a scribble.
>
> No it doesn't. The last write _is_ a scribble.

Have you observed that in the wild? A former engineer of a disk drive
company suggests to me that the capacitors on the board provide enough
power to complete the last sector, even to park the head.

Regards,

Daniel

2008-01-17 07:38:52

by Andreas Dilger

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 15, 2008 22:05 -0500, Rik van Riel wrote:
> With a filesystem that is compartmentalized and checksums metadata,
> I believe that an online fsck is absolutely worth having.
>
> Instead of the filesystem resorting to mounting the whole volume
> read-only on certain errors, part of the filesystem can be offlined
> while an fsck runs. This could even be done automatically in many
> situations.

In ext4 we store per-group state flags in each group, and the group
descriptor is checksummed (to detect spurious flags), so it should
be relatively straightforward to store an "error" flag in a single
group and have it become read-only.

As a starting point, it would be worthwhile to check instances of
ext4_error() to see how many of them can be targetted at a specific
group. I'd guess most of them could be (corrupt inodes, directory
and indirect blocks, incorrect bitmaps).

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

2008-01-17 13:00:28

by Szabolcs Szakacsits

Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)


On Tue, 15 Jan 2008, Daniel Phillips wrote:

> Along with this effort, could you let me know if the world actually
> cares about online fsck? Now we know how to do it I think, but is it
> worth the effort.

Most users seem to care deeply about "things just work". Here is why
ntfs-3g also took the online fsck path some time ago.

NTFS support had a very bad reputation on Linux, so the new code was
written with rigid sanity checks and extensive automatic regression
testing. One of the consequences is that we're detecting way too many
inconsistencies left behind by the Windows and other NTFS drivers,
hardware faults, and device drivers.

To better utilize the non-existent developer resources, the obvious move was
to suggest the already existing Windows fsck (chkdsk) in such cases. Simple
and safe, as most people like us who have never used Windows would think.

However, years of experience show that, depending on several factors, chkdsk
may or may not start, may or may not report the real problems, but on the
other hand may report bogus issues, may run for a long time or just forever,
and may even remove completely valid files. So one could perhaps even
consider a suggestion to run chkdsk a call to play Russian roulette.

Thankfully NTFS has some level of metadata redundancy, with signatures and
weak "checksums", which makes it possible to correct some common and obvious
corruptions on the fly.

Similarly to ZFS, Windows Server 2008 also has self-healing NTFS:
http://technet2.microsoft.com/windowsserver2008/en/library/6f883d0d-3668-4e15-b7ad-4df0f6e6805d1033.mspx?mfr=true

Szaka

--
NTFS-3G: http://ntfs-3g.org

2008-01-17 20:55:03

by Pavel Machek

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Tue 2008-01-15 20:36:16, Chris Mason wrote:
> On Tue, 15 Jan 2008 20:24:27 -0500
> "Daniel Phillips" <[email protected]> wrote:
>
> > On Jan 15, 2008 7:15 PM, Alan Cox <[email protected]> wrote:
> > > > Writeback cache on disk in itself is not bad, it only gets bad
> > > > if the disk is not engineered to save all its dirty cache on
> > > > power loss, using the disk motor as a generator or alternatively
> > > > a small battery. It would be awfully nice to know which brands
> > > > fail here, if any, because writeback cache is a big performance
> > > > booster.
> > >
> > > AFAIK no drive saves the cache. The worst case cache flush for
> > > drives is several seconds with no retries and a couple of minutes
> > > if something really bad happens.
> > >
> > > This is why the kernel has some knowledge of barriers and uses them
> > > to issue flushes when needed.
> >
> > Indeed, you are right, which is supported by actual measurements:
> >
> > http://sr5tech.com/write_back_cache_experiments.htm
> >
> > Sorry for implying that anybody has engineered a drive that can do
> > such a nice thing with writeback cache.
> >
> > The "disk motor as a generator" tale may not be purely folklore. When
> > an IDE drive is not in writeback mode, something special needs to be done
> > to ensure the last write to media is not a scribble.
> >
> > A small UPS can make writeback mode actually reliable, provided the
> > system is smart enough to take the drives out of writeback mode when
> > the line power is off.
>
> We've had mount -o barrier=1 for ext3 for a while now, it makes
> writeback caching safe. XFS has this on by default, as does reiserfs.

Maybe ext3 should do barriers by default? Having ext3 in "let's corrupt
data by default" mode... seems like a bad idea.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-17 21:37:26

by Bryan Henderson

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

"Daniel Phillips" <[email protected]> wrote on 01/16/2008 06:02:50 PM:

> On Jan 16, 2008 2:06 PM, Bryan Henderson <[email protected]> wrote:
> > >The "disk motor as a generator" tale may not be purely folklore. When
> > >an IDE drive is not in writeback mode, something special needs to be done
> > >to ensure the last write to media is not a scribble.
> >
> > No it doesn't. The last write _is_ a scribble.
>
> Have you observed that in the wild? A former engineer of a disk drive
> company suggests to me that the capacitors on the board provide enough
> power to complete the last sector, even to park the head.

No, I haven't. It's hearsay, and from about 3 years ago.

As for parking the head, that's hard to believe, since it's so easy and
more reliable to use a spring and an electromagnet.

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems

2008-01-17 22:54:45

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
>
> Have you observed that in the wild? A former engineer of a disk drive
> company suggests to me that the capacitors on the board provide enough
> power to complete the last sector, even to park the head.
>

The problem isn't with the disk drive; it's from the DRAM, which tends
to be much more voltage sensitive than the hard drive --- so it's
quite likely that you could end up DMA'ing garbage from the memory.
In fact, the fact that the disk drive lasts longer, due to capacitors
on the board, rotational inertia of the platters, etc., is part of the
problem.

It was observed in the wild by SGI, many years ago on their hardware.
They later added extra capacitors on the motherboard and a powerfail
interrupt which caused the Irix to run around frantically shutting
down DMA's for a controlled shutdown. Of course, PC-class hardware
has none of this. My source for this was Jim Mostek, one of the
original Linux XFS porters. He had given me source code to a test
program that would show this; basically it zeroed out a region of the disk,
then started writing a series of patterns to that part of the disk; you
kicked out the power cord, and then checked whether there was any garbage
on the disk. If you saw something that wasn't one of the patterns
being written to the disk, then you knew you had a problem. I can't
find the program any more, but it wouldn't be hard to write.
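Something along these lines would do it -- a rough sketch, not Jim's
original program; the region size and fill patterns are arbitrary, and you
would point it at a scratch partition you can afford to lose. The writer
stamps 512-byte sectors with one of a few known fill bytes until you pull
the plug; the verifier then looks for sectors that contain anything else:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SECTOR   512
#define NSECTORS 4096                   /* 2 MB test region */

static const unsigned char patterns[] = { 0x00, 0x55, 0xAA, 0xFF };

int main(int argc, char **argv)
{
        if (argc != 3 || (strcmp(argv[1], "write") && strcmp(argv[1], "verify"))) {
                fprintf(stderr, "usage: %s write|verify <scratch-device>\n", argv[0]);
                return 1;
        }

        int writing = !strcmp(argv[1], "write");
        int fd = open(argv[2], writing ? O_WRONLY | O_SYNC : O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[SECTOR];

        if (writing) {
                /* Cycle through the patterns forever; cut power while this runs. */
                for (unsigned pass = 0; ; pass++) {
                        memset(buf, patterns[pass % 4], SECTOR);
                        for (unsigned s = 0; s < NSECTORS; s++)
                                if (pwrite(fd, buf, SECTOR, (off_t)s * SECTOR) != SECTOR) {
                                        perror("pwrite");
                                        return 1;
                                }
                }
        }

        /* Verify: every sector must be uniformly filled with one known pattern. */
        unsigned bad = 0;
        for (unsigned s = 0; s < NSECTORS; s++) {
                if (pread(fd, buf, SECTOR, (off_t)s * SECTOR) != SECTOR) {
                        printf("sector %u: read error (possible torn write)\n", s);
                        bad++;
                        continue;
                }
                int ok = 0;
                for (unsigned p = 0; p < 4 && !ok; p++) {
                        ok = 1;
                        for (unsigned i = 0; i < SECTOR; i++)
                                if (buf[i] != patterns[p]) { ok = 0; break; }
                }
                if (!ok) { printf("sector %u: garbage contents\n", s); bad++; }
        }
        printf("%u suspect sectors\n", bad);
        close(fd);
        return bad != 0;
}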

I do know that I have seen reports from many ext2 users in the field
that could only be explained by the hard drive scribbling garbage onto
the inode table. Ext3 solves this problem because of its physical
block journaling.

- Ted

2008-01-17 22:56:05

by Daniel Phillips

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Jan 17, 2008 7:29 AM, Szabolcs Szakacsits <[email protected]> wrote:
> Similarly to ZFS, Windows Server 2008 also has self-healing NTFS:

I guess that is enough votes to justify going ahead and trying an
implementation of the reverse mapping ideas I posted. But of course
more votes for this is better. If online incremental fsck is
something people want, then please speak up here and that will very
definitely help make it happen.

On the walk-before-run principle, it would initially just be
filesystem checking, not repair. But even this would help, by setting
per-group checked flags that offline fsck could use to do a much
quicker repair pass. And it will let you know when a volume needs to
be taken offline without having to build in planned downtime just in
case, which already eats a bunch of nines.

Regards,

Daniel

2008-01-17 23:09:57

by Alan

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)


> interrupt which caused the Irix to run around frantically shutting
> down DMA's for a controlled shutdown. Of course, PC-class hardware
> has none of this. My source for this was Jim Mostek, one of the

PC-class hardware has a power-good signal which drops just before the
rest.

2008-01-17 23:21:38

by Ric Wheeler

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Theodore Tso wrote:
> On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
>> Have you observed that in the wild? A former engineer of a disk drive
>> company suggests to me that the capacitors on the board provide enough
>> power to complete the last sector, even to park the head.
>>

Even if true (which I doubt), this is not implemented.

A modern drive can have 16-32 MB of write cache. Worst case, those
sectors are not sequential, which implies lots of head movement.

>
> The problem isn't with the disk drive; it's from the DRAM, which tends
> to be much more voltage sensitive than the hard drive --- so it's
> quite likely that you could end up DMA'ing garbage from the memory.
> In fact, the fact that the disk drive lasts longer, due to capacitors
> on the board, rotational inertia of the platters, etc., is part of the
> problem.

I can tell you directly that when you drop power to a drive, you will
lose write cache data if the write cache is enabled. With barriers
enabled, our testing shows that file systems survive power failures
which routinely caused corruption without them ;-)

ric

2008-01-18 00:32:11

by Bryan Henderson

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Ric Wheeler <[email protected]> wrote on 01/17/2008 03:18:05 PM:

> Theodore Tso wrote:
> > On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
> >> Have you observed that in the wild? A former engineer of a disk drive
> >> company suggests to me that the capacitors on the board provide enough
> >> power to complete the last sector, even to park the head.
> >>
>
> Even if true (which I doubt), this is not implemented.
>
> A modern drive can have 16-32 MB of write cache. Worst case, those
> sectors are not sequential which implies lots of head movement.

We weren't actually talking about writing out the cache. While that was
part of an earlier thread which ultimately conceded that disk drives most
probably do not use the spinning disk energy to write out the cache, the
claim was then made that the drive at least survives long enough to finish
writing the sector it was writing, thereby maintaining the integrity of
the data at the drive level. People often say that a disk drive
guarantees atomic writes at the sector level even in the face of a power
failure.

But I heard some years ago from a disk drive engineer that that is a myth
just like the rotational energy thing. I added that to the discussion,
but admitted that I haven't actually seen a disk drive write a partial
sector.

Ted brought up the separate issue of the host sending garbage to the disk
device because its own power is failing at the same time, which makes the
integrity at the disk level moot (or even undesirable, as you'd rather
write a bad sector than a good one with the wrong data).

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems

2008-01-18 14:24:24

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
> But I heard some years ago from a disk drive engineer that that is a myth
> just like the rotational energy thing. I added that to the discussion,
> but admitted that I haven't actually seen a disk drive write a partial
> sector.

Well, it would be impossible or at least very hard to see that in
practice, right? My understanding is that drives do sector-level
checksums, so if there was a partially written sector, the checksum
would be bogus and the drive would return an error when you tried to
read from it.

> Ted brought up the separate issue of the host sending garbage to the disk
> device because its own power is failing at the same time, which makes the
> integrity at the disk level moot (or even undesirable, as you'd rather
> write a bad sector than a good one with the wrong data).

Yep, exactly. It would be interesting to see if this happens on
modern hardware; all of the evidence I've had for this is years old at
this point.

- Ted

2008-01-18 15:32:49

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Bryan Henderson wrote:
>
> We weren't actually talking about writing out the cache. While that was
> part of an earlier thread which ultimately conceded that disk drives most
> probably do not use the spinning disk energy to write out the cache, the
> claim was then made that the drive at least survives long enough to finish
> writing the sector it was writing, thereby maintaining the integrity of
> the data at the drive level. People often say that a disk drive
> guarantees atomic writes at the sector level even in the face of a power
> failure.
>
> But I heard some years ago from a disk drive engineer that that is a myth
> just like the rotational energy thing. I added that to the discussion,
> but admitted that I haven't actually seen a disk drive write a partial
> sector.
>

Did he work for Maxtor, by any chance? :-/

A disk drive whose power is cut needs to have enough residual power to
park its heads (or *massive* data loss will occur), and at that point it
might as well keep enough on hand to finish an in-progress sector write.

There are two possible sources of onboard temporary power: a large
enough capacitor, or the rotational energy of the platters (an
electrical motor also being a generator.) I don't care which one they
use, but they need to do something.

-hpa

2008-01-18 15:35:13

by linux-os (Dick Johnson)

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incrementalfsck)


On Fri, 18 Jan 2008, Theodore Tso wrote:

> On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
>> But I heard some years ago from a disk drive engineer that that is a myth
>> just like the rotational energy thing. I added that to the discussion,
>> but admitted that I haven't actually seen a disk drive write a partial
>> sector.
>
> Well, it would be impossible or at least very hard to see that in
> practice, right? My understanding is that drives do sector-level
> checksums, so if there was a partially written sector, the checksum
> would be bogus and the drive would return an error when you tried to
> read from it.
>
>> Ted brought up the separate issue of the host sending garbage to the disk
>> device because its own power is failing at the same time, which makes the
>> integrity at the disk level moot (or even undesirable, as you'd rather
>> write a bad sector than a good one with the wrong data).
>
> Yep, exactly. It would be interesting to see if this happens on
> modern hardware; all of the evidence I've had for this is years old at
> this point.
>
> - Ted

I have a Seagate Barracuda 7200.9 80 Gbyte SATA drive that I
use for experiments. I can permanently destroy an EXT3 file-system
at least 50% of the time by disconnecting the data cable while
a `dd` write to a file is in progress. Something bad happens
making partition information invalid. I have to re-partition
to reuse the drive.

If I try the same experiment by disconnecting power to the drive
the file is no good (naturally), but the rest of the file-system
is fine.

My theory is that the destination offset is present in every
SATA access and some optimization code within the drive sets
the heads to track zero and writes before any CRC or checksum
is done to find out if it was the correct offset with the
correct data!

Cheers,
Dick Johnson
Penguin : Linux version 2.6.22.1 on an i686 machine (5588.29 BogoMips).
My book : http://www.AbominableFirebug.com/

2008-01-18 15:38:29

by Ric Wheeler

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Theodore Tso wrote:
> On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
>> But I heard some years ago from a disk drive engineer that that is a myth
>> just like the rotational energy thing. I added that to the discussion,
>> but admitted that I haven't actually seen a disk drive write a partial
>> sector.
>
> Well, it would be impossible or at least very hard to see that in
> practice, right? My understanding is that drives do sector-level
> checksums, so if there was a partially written sector, the checksum
> would be bogus and the drive would return an error when you tried to
> read from it.

There is extensive per-sector error correction on each sector written.
What you would see in this case (or many, many other possible ways
drives can corrupt media) is a "media error" on the next read.

You would never get back the partially written contents of that sector
at the host.

Having our tools (fsck especially) be resilient in the face of media
errors is really critical. Although I don't think the scenario of a
partially written sector is common, media errors in general are common
and can develop over time.

>
>> Ted brought up the separate issue of the host sending garbage to the disk
>> device because its own power is failing at the same time, which makes the
>> integrity at the disk level moot (or even undesirable, as you'd rather
>> write a bad sector than a good one with the wrong data).
>
> Yep, exactly. It would be interesting to see if this happens on
> modern hardware; all of the evidence I've had for this is years old at
> this point.
>
> - Ted
>

See the NetApp paper from SIGMETRICS 2007 for some interesting analysis...


ric

2008-01-18 17:44:03

by Bryan Henderson

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

"H. Peter Anvin" <[email protected]> wrote on 01/18/2008 07:08:30 AM:

> Bryan Henderson wrote:
> >
> > We weren't actually talking about writing out the cache. While that was
> > part of an earlier thread which ultimately conceded that disk drives most
> > probably do not use the spinning disk energy to write out the cache, the
> > claim was then made that the drive at least survives long enough to finish
> > writing the sector it was writing, thereby maintaining the integrity of
> > the data at the drive level. People often say that a disk drive
> > guarantees atomic writes at the sector level even in the face of a power
> > failure.
> >
> > But I heard some years ago from a disk drive engineer that that is a myth
> > just like the rotational energy thing. I added that to the discussion,
> > but admitted that I haven't actually seen a disk drive write a partial
> > sector.
> >
>
> A disk drive whose power is cut needs to have enough residual power to
> park its heads (or *massive* data loss will occur), and at that point it
> might as well keep enough on hand to finish an in-progress sector write.
>
> There are two possible sources of onboard temporary power: a large
> enough capacitor, or the rotational energy of the platters (an
> electrical motor also being a generator.) I don't care which one they
> use, but they need to do something.

I believe the power for that comes from a third source: a spring. Parking
the heads is too important to leave to active circuits.

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems

2008-01-18 20:35:32

by Jeff Garzik

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Ric Wheeler wrote:
> Theodore Tso wrote:
>> On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
>>> But I heard some years ago from a disk drive engineer that that is a
>>> myth just like the rotational energy thing. I added that to the
>>> discussion, but admitted that I haven't actually seen a disk drive
>>> write a partial sector.
>>
>> Well, it would be impossible or at least very hard to see that in
>> practice, right? My understanding is that drives do sector-level
>> checksums, so if there was a partially written sector, the checksum
>> would be bogus and the drive would return an error when you tried to
>> read from it.
>
> There is extensive per sector error correction on each sector written.
> What you would see in this case (or many, many other possible ways
> drives can corrupt media) is a "media error" on the next read.

Correct.


> You would never get back the partially written contents of that sector
> at the host.

Correct.


> Having our tools (fsck especially) be resilient in the face of media
> errors is really critical. Although I don't think the scenario of a
> partially written sector is common, media errors in general are common
> and can develop over time.

Agreed.

Jeff


2008-01-18 22:36:19

by Bryan Henderson

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

I just had a talk with a colleague, John Palmer, who worked on disk drive
design for about 5 years in the '90s and he gave me a very confident,
credible explanation of some of the things we've been wondering about disk
drive power loss in this thread, complete with demonstrations of various
generations of disk drives, dismantled.

First of all, it is plain to see that there is no spring capable of
parking the head, and there is no capacitor that looks big enough to
possibly supply the energy to park the head, in any of the models I looked
at. Since parking of the heads is essential, we can only conclude that
the myth of the kinetic energy of the disks being used for that (turned
into electricity by the drive motor) is true. The energy required is not
just to move the heads to the parking zone, but to latch them there as
well.

The myth is probably just that that energy is used for anything else; it's
really easy to build a dumb circuit to park the heads using that power;
keeping a computer running is something else.

The drive does drop a write in the middle of the sector if it is writing
at the time of power loss. The designers were too conservative to keep
writing as power fails -- there's no telling what damage you might do. So
the drive cuts the power to the heads at the first sign of power loss. If
a write was in progress, this means there is one garbage sector on the
disk. It can't be read.

Trying to finish writing the sector is something I can imagine some drive
model somewhere trying to do, but if even _some_ take the conservative
approach, everyone has to design for it, so it doesn't matter.

A device might then reassign that sector the next time you try to write to
it (after failing to read it), thinking the medium must be bad. But there
are various algorithms for deciding when to reassign a sector, so it might
not, either.

--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems

2008-01-19 14:51:16

by Pavel Machek

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incremental fsck)

Hi!

> > I guess I should try to measure it. (Linux already does writeback
> > caching, with 2GB of memory. I wonder how important disks's 2MB of
> > cache can be).
>
> It serves essentially the same purpose as the 'async' option in /etc/exports
> (i.e. we declare it "done" when the other end of the wire says it's caught
> the data, not when it's actually committed), with similar latency wins. Of
> course, it's impedance-matching for bursty traffic - the 2M doesn't do much
> at all if you're streaming data to it. For what it's worth, the 80G Seagate
> drive in my laptop claims it has 8M, so it probably does 4 times as much
> good as 2M. ;)

I doubt "impedance-matching" is useful here. SATA link is fast/low
latency, and kernel already does buffering with main memory...

Hmm... what is the way to measure that? Untar the kernel a few
times with the cache on / cache off?
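One crude way to measure it -- just a sketch of an assumed approach, not
something anyone in this thread has run -- is to time a burst of small
fsync'd writes, once with the drive's write cache enabled and once with it
disabled (toggled externally, e.g. "hdparm -W1 /dev/sdX" / "hdparm -W0
/dev/sdX"):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define WRITES 1000

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <scratch-file>\n", argv[0]);
                return 1;
        }
        int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        memset(buf, 'x', sizeof(buf));

        struct timeval start, end;
        gettimeofday(&start, NULL);
        for (int i = 0; i < WRITES; i++) {
                if (write(fd, buf, sizeof(buf)) != sizeof(buf)) { perror("write"); return 1; }
                /* Force each write down to the drive (or at least its cache). */
                if (fsync(fd) != 0) { perror("fsync"); return 1; }
        }
        gettimeofday(&end, NULL);

        double secs = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%d fsync'd 4k writes in %.2f s (%.1f ops/s)\n", WRITES, secs, WRITES / secs);
        close(fd);
        return 0;
}

The difference between the two runs is roughly what the drive's write cache
buys you for small synchronous writes.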
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2008-01-19 14:53:21

by Pavel Machek

[permalink] [raw]
Subject: Re: [Patch] document ext3 requirements (was Re: [RFD] Incrementalfsck)

On Fri 2008-01-18 10:16:30, linux-os (Dick Johnson) wrote:
>
> On Fri, 18 Jan 2008, Theodore Tso wrote:
>
> > On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
> >> But I heard some years ago from a disk drive engineer that that is a myth
> >> just like the rotational energy thing. I added that to the discussion,
> >> but admitted that I haven't actually seen a disk drive write a partial
> >> sector.
> >
> > Well, it would be impossible or at least very hard to see that in
> > practice, right? My understanding is that drives do sector-level
> > checksums, so if there was a partially written sector, the checksum
> > would be bogus and the drive would return an error when you tried to
> > read from it.
> >
> >> Ted brought up the separate issue of the host sending garbage to the disk
> >> device because its own power is failing at the same time, which makes the
> >> integrity at the disk level moot (or even undesirable, as you'd rather
> >> write a bad sector than a good one with the wrong data).
> >
> > Yep, exactly. It would be interesting to see if this happens on
> > modern hardware; all of the evidence I've had for this is years old at
> > this point.
>
> I have a Seagate Barracuda 7200.9 80 Gbyte SATA drive that I
> use for experiments. I can permanently destroy an EXT3 file-system
> at least 50% of the time by disconnecting the data cable while
> a `dd` write to a file is in progress. Something bad happens
> making partition information invalid. I have to re-partition
> to reuse the drive.

Does turning off writeback cache on disk help? This is quite serious,
I'd say...

Pavel


--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html