Situation: An external 500 GB drive holds lots of snapshots using lots of
hard links made by rsync --link-dest. The controller went bad and
destroyed the superblock and directory structures. The drive contains
roughly a million files and four complete directory-tree snapshots,
each with roughly a million hard links.
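What --link-dest does for an unchanged file can be reproduced in miniature with ln (throwaway /tmp paths, purely illustrative): the new snapshot tree gets another hard link to the same inode rather than a copy, which is why four snapshot trees of a million files share roughly a million inodes:

```shell
# Miniature of an rsync --link-dest snapshot for an unchanged file
# (hypothetical /tmp paths): one inode ends up with one directory
# entry per snapshot instead of one copy per snapshot.
mkdir -p /tmp/snapdemo/snap.0 /tmp/snapdemo/snap.1
echo payload > /tmp/snapdemo/snap.0/file
ln /tmp/snapdemo/snap.0/file /tmp/snapdemo/snap.1/file
stat -c %h /tmp/snapdemo/snap.0/file   # link count: 2
```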
Tried:
e2fsck 1.41.12 (17-May-2010)
Using EXT2FS Library version 1.41.12, 17-May-2010
e2fsck 1.41.11 (14-Mar-2010)
Using EXT2FS Library version 1.41.11, 14-Mar-2010
Symptoms: fsck.ext4 -y -f takes nearly a month to fix the structures on
a P4 @ 2.8 GHz, with very little access to the drive and 100% CPU use.
The output of fsck looks much like this:
File ??? (Inode #123456, modify time Wed Jul 22 16:20:23 2009)
block no. 6144: multiply-claimed block(s), shared with four file(s):
<filesystem metadata>
??? (Inode #123457, mod time Wed Jul 22 16:20:23 2009)
??? (Inode #123458, mod time Wed Jul 22 16:20:23 2009)
...
multiply claimed block map? Yes
Is there an ad-hoc method of getting my data back faster?
Is the slow performance with lots of hard links a known issue?
--
Kind regards,
Christian Brandt EDV-Dienstleistungen
Tel 089/89427711, Fax 089/89427712
Kreuzlinger Str.37 82110 Germering
St.Nr. 117/206/91819 Ust-ID DE233256795
On 03/27/2011 07:28 AM, Christian Brandt wrote:
> Situation: External 500GB drive holds lots of snapshots using lots of
> hard links made by rsync --link-dest. The controller went bad and
> destroyed superblock and directory structures. The drive contains
> roughly a million files and four complete directory-tree-snapshots with
> each roughly a million hardlinks.
>
> Tried
>
> e2fsck 1.41.12 (17-May-2010)
> Using EXT2FS Library version 1.41.12, 17-May-2010
>
> e2fsck 1.41.11 (14-Mar-2010)
> Using EXT2FS Library version 1.41.11, 14-Mar-2010
>
> Symptoms: fsck.ext4 -y -f takes nearly a month to fix the structures on
> a P4@2,8Ghz, with very little access to the drive and 100% cpu use.
>
> output of fsck looks much like this:
>
> File ??? (Inode #123456, modify time Wed Jul 22 16:20:23 2009)
> block Nr. 6144 double block(s), used with four file(s):
> <filesystem metadata>
> ??? (Inode #123457, mod time Wed Jul 22 16:20:23 2009)
> ??? (Inode #123458, mod time Wed Jul 22 16:20:23 2009)
> ...
> multiply claimed block map? Yes
>
> Is there an adhoc method of getting my data back faster?
>
> Is the slow performance with lots of hard links a known issue?
>
Sounds like a configuration that might well require lots of memory to cache your
allocated inodes, etc. How much memory do you have in the box running
fsck? Any sense (vmstat, etc.) of what the box is spending its time doing?
Ric
On 3/27/11 6:28 AM, Christian Brandt wrote:
> Situation: External 500GB drive holds lots of snapshots using lots of
> hard links made by rsync --link-dest. The controller went bad and
> destroyed superblock and directory structures. The drive contains
> roughly a million files and four complete directory-tree-snapshots with
> each roughly a million hardlinks.
>
> Tried
>
> e2fsck 1.41.12 (17-May-2010)
> Using EXT2FS Library version 1.41.12, 17-May-2010
>
> e2fsck 1.41.11 (14-Mar-2010)
> Using EXT2FS Library version 1.41.11, 14-Mar-2010
>
> Symptoms: fsck.ext4 -y -f takes nearly a month to fix the structures on
> a P4@2,8Ghz, with very little access to the drive and 100% cpu use.
Does that mean very little access to -this- drive or to -any- drive?
IOW, are you swapping madly?
-Eric
> output of fsck looks much like this:
>
> File ??? (Inode #123456, modify time Wed Jul 22 16:20:23 2009)
> block Nr. 6144 double block(s), used with four file(s):
> <filesystem metadata>
> ??? (Inode #123457, mod time Wed Jul 22 16:20:23 2009)
> ??? (Inode #123458, mod time Wed Jul 22 16:20:23 2009)
> ...
> multiply claimed block map? Yes
>
> Is there an adhoc method of getting my data back faster?
>
> Is the slow performance with lots of hard links a known issue?
>
On Sun, Mar 27, 2011 at 01:28:53PM +0200, Christian Brandt wrote:
> Situation: External 500GB drive holds lots of snapshots using lots of
> hard links made by rsync --link-dest. The controller went bad and
> destroyed superblock and directory structures. The drive contains
> roughly a million files and four complete directory-tree-snapshots with
> each roughly a million hardlinks.
As Ric said, this is a configuration that can take a long time to
fsck, mainly due to swapping (it's fairly memory intensive). But
500GB isn't *that* big. The larger problem is that a lot more than
just superblock and directory structures got destroyed:
> File ??? (Inode #123456, modify time Wed Jul 22 16:20:23 2009)
> block Nr. 6144 double block(s), used with four file(s):
> <filesystem metadata>
> ??? (Inode #123457, mod time Wed Jul 22 16:20:23 2009)
> ??? (Inode #123458, mod time Wed Jul 22 16:20:23 2009)
> ...
> multiply claimed block map? Yes
This means that you have very badly damaged inode tables. You either
have garbage written into the inode table, or inode table blocks
written to the wrong location on disk, or both. (I'd guess most
likely both).
> Is there an adhoc method of getting my data back faster?
What's your high level goal? If this is a backup device, how badly do
you need the old snapshots?
> Is the slow performance with lots of hard links a known issue?
Lots of hard links will cause a large memory usage requirement. This
is a problem primarily on 32-bit systems, particularly (ahem) "value"
NAS systems that don't have a lot of physical memory to begin with.
On 64-bit systems, you can either install enough physical memory that
this won't be a problem, or you can enable swap, in which case you
might end up swapping a lot (which will cause things to be slow) but
it should finish.
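As a purely illustrative back-of-envelope estimate (the per-entry size below is an assumption, not e2fsck's real structure layout): four snapshot trees of roughly a million hard links each mean about four million directory entries whose link counts have to be tracked:

```shell
# Rough memory estimate for link-count tracking (assumed 8 bytes of
# state per directory entry -- NOT e2fsck's actual per-entry cost):
entries=$(( 4 * 1000000 ))        # four snapshots x ~1M hard links each
bytes=$(( entries * 8 ))
echo "$(( bytes / 1024 / 1024 )) MiB"
```

Tiny by itself on a 64-bit box; the real blow-up comes from the multiply-claimed block bookkeeping described in this thread.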
We do have a workaround for people who just can't add the physical
memory, which involves adding a [scratch_files] section to e2fsck.conf,
and that does cause slow performance. There has been some work on
improving that lately, by tuning the use of the tdb library we are
using. But if you haven't specifically enabled this workaround, it's
probably not an issue.
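For reference, that workaround is enabled with a fragment like the following in /etc/e2fsck.conf (the directory path here is an arbitrary example):

```ini
# Hypothetical /etc/e2fsck.conf fragment: keep e2fsck's large tables in
# on-disk (tdb) scratch databases under this directory instead of in RAM.
[scratch_files]
directory = /var/cache/e2fsck
```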
I think what you're running into is a problem caused by very badly
corrupted inode tables, and the work to keep track of the
double-allocated blocks is slowing things down. We've improved things
a lot in this area, so we're O(n log n) in number of multiply claimed
blocks, instead of O(n^2), but if N is sufficiently large, this can
still be problematic.
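A toy illustration of the difference (plain coreutils, nothing to do with e2fsck's actual bitmap code): given a list of block numbers claimed by inodes, sorting first finds the multiply-claimed ones in O(n log n), instead of rescanning the whole list once per block:

```shell
# Toy multiply-claimed block detection: sort the claimed block numbers,
# then report any block number that appears more than once.
printf '%s\n' 6144 8200 6144 9301 6144 > /tmp/claimed.txt
sort -n /tmp/claimed.txt | uniq -d    # prints each multiply-claimed block once
```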
There are patches that I've never had time to vet and merge that will
try to use heuristics to determine whether an inode table block is
hopeless garbage, and if so, skip the inode table block entirely. This
will speed up e2fsck's performance in these situations, at the risk of
skipping some valid data that could otherwise have been recovered.
So where are you at this point? Have you completed running the fsck,
and simply wanted to let us know? Do you need assistance in trying to
recover this disk?
- Ted
On Mon, Mar 28, 2011 at 10:43:30AM -0400, Ric Wheeler wrote:
> On 03/27/2011 07:28 AM, Christian Brandt wrote:
> >Situation: External 500GB drive holds lots of snapshots using lots of
> >hard links made by rsync --link-dest. The controller went bad and
> >destroyed superblock and directory structures. The drive contains
> >roughly a million files and four complete directory-tree-snapshots with
> >each roughly a million hardlinks.
> >
> >Tried
> >
> >e2fsck 1.41.12 (17-May-2010)
> > Using EXT2FS Library version 1.41.12, 17-May-2010
> >
> >e2fsck 1.41.11 (14-Mar-2010)
> > Using EXT2FS Library version 1.41.11, 14-Mar-2010
> >
> >Symptoms: fsck.ext4 -y -f takes nearly a month to fix the structures on
> >a P4@2,8Ghz, with very little access to the drive and 100% cpu use.
> >
> >output of fsck looks much like this:
> >
> >File ??? (Inode #123456, modify time Wed Jul 22 16:20:23 2009)
> > block Nr. 6144 double block(s), used with four file(s):
> > <filesystem metadata>
> > ??? (Inode #123457, mod time Wed Jul 22 16:20:23 2009)
> > ??? (Inode #123458, mod time Wed Jul 22 16:20:23 2009)
> > ...
> >multiply claimed block map? Yes
> >
> >Is there an adhoc method of getting my data back faster?
> >
> >Is the slow performance with lots of hard links a known issue?
Yes, it is a known issue.
You get to test my patch. :-)
I strongly suspect that (just like me) sometime in the past you've
seen e2fsck run out of memory and were advised to enable the
on-disk-databases.
Roger.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ
On 28.03.2011 16:43, Ric Wheeler wrote:
> Sounds like a configuration that might well require lots of memory to
> cache your allocated inodes, etc. How much memory do you have in the
> box running fsck? Any sense (vmstat, etc) what the box is spending
> its time doing?
100% CPU use, as seen in top and ps.
The first system had 1 GB and the process used 680 MB, a plain Ubuntu
with nothing else running.
I tried another system with 4 GB, at least 3 GB free; the process still
uses only 680 MB. Booted from a Knoppix CD, nothing running except fsck;
vmstat says:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 1573456 797988 776484    0    0    13     8    7   18 41  5 53  2
--
Christian Brandt
life is short and in most cases it ends with death
but my tombstone will carry the hiscore
On 29.03.2011 08:03, Rogier Wolff wrote:
>> Is the slow performance with lots of hard links a known issue?
>
> Yes, it is a known issue.
At least it's not my fault :-) Thanks for the info.
> You get to test my patch. :-)
>
> I strongly suspect that (just like me) sometime in the past you've
> seen e2fsck run out of memory and were advised to enable the
> on-disk-databases.
Something like that... The drive had been formatted recently, but a bad
controller corrupted vital information upon mount, and some more on the
next fsck. I hit Ctrl-C pretty quickly when I saw lots of rather
confusing kernel errors in between the fsck output. This could have left
the drive in a similar state, couldn't it?
--
Christian Brandt
On 28.03.2011 17:47, Ted Ts'o wrote:
Hi Ted,
> So where are you at this point? Have you completed running the fsck,
> and simply wanted to let us know? Do you need assistance in trying to
> recover this disk?
The fsck is still running, now in its sixth day.
The data itself would still be very handy, but meanwhile I have
recovered most of it from an older backup.
Can stopping fsck.ext4 now damage things even more?
The patches mentioned, do they apply to the fsck.ext4 source? What do I
need to compile? Are kernel headers, build-essential and basic knowledge
of patch+make enough?
While getting the data back fast would be nifty, I am already at the
point where I'll just try to get experience out of the situation, not
necessarily my data.
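On the compiling question: e2fsprogs is a plain userspace package, so kernel headers should not be needed; build-essential plus patch(1) and make is the usual toolset. The basic patch workflow, demonstrated here on a throwaway stand-in file rather than the real e2fsprogs tree (all names hypothetical):

```shell
# Generic patch(1) workflow on a stand-in file -- names are placeholders,
# not the actual patch from this thread:
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'old line\n' > file.c
printf -- '--- a/file.c\n+++ b/file.c\n@@ -1 +1 @@\n-old line\n+new line\n' > fix.patch
patch -p1 < fix.patch     # -p1 strips the leading a/ and b/ components
cat file.c
```

For e2fsprogs itself the sequence would be the familiar unpack, patch, ./configure, make, then run the freshly built e2fsck binary from the build tree.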
--
Christian Brandt
On Wed, Mar 30, 2011 at 12:02:10AM +0200, Christian Brandt wrote:
> Can stopping fsck.ext4 now damage things even more?
In contrast to fsck.reiser: not likely.
I dropped reiser when it did that to me and cost me six hours of a
night's sleep.
Roger.
On Tue, Mar 29, 2011 at 10:26:54PM +0200, Christian Brandt wrote:
> Am 29.03.2011 08:03, schrieb Rogier Wolff:
>
> >> Is the slow performance with lots of hard links a known issue?
> >
> > Yes, it is a known issue.
>
> At least it's not my fault :-) Thanks for the info.
>
> > You get to test my patch. :-)
> >
> > I strongly suspect that (just like me) sometime in the past you've
> > seen e2fsck run out of memory and were advised to enable the
> > on-disk-databases.
>
> Something like that... The drive has been formatted recently but a bad
> controller corrupted vital information upon mount and some more on the
> next fsck. I Ctrl-C pretty fast when I saw lots of rather confusing
> kernel errors between fsck output. This could have left the drive in a
> similar state, couldn't it?
The code I "fixed" is the code that uses an on-disk database instead
of in-memory data structures.
Those in-memory data structures may move to swap if you have enough of
that and enough address space. In my case, normal fsck memory usage
plus those two flexible data structures would have exceeded 3 GB, which
exceeds the 32-bit Linux process size limit.
So if you haven't touched the config file which specifies to put these
structures on disk, you are not experiencing the same problem that I
was....
Or someone else changed the configuration file for you....
The patch is against a CVS checkout (or whatever SCM is used) of
e2fsprogs.
Roger.