Hi folks.
What I did yesterday night was run:

fsck.ext3 -b 32768 -B 4096 device

There were MANY errors... (nearly all of them of a kind that would not
be corrected automatically, e.g. by -p).
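For reference, the -b 32768 above is the standard location of the first backup superblock on a 4 KB-block ext3 filesystem. With the sparse_super feature (the mke2fs default), backups live in block groups 1 and the powers of 3, 5, and 7, and each group spans 8 * blocksize = 32768 blocks, so the candidate offsets can be computed with a short sketch (values assume 4 KB blocks and default options; `mke2fs -n` on the device would list the real ones):

```shell
# Candidate backup-superblock block numbers for a 4 KB-block ext fs
# with sparse_super. This is just the arithmetic, not a device probe.
blocksize=4096
blocks_per_group=$((8 * blocksize))   # 32768 blocks per group at 4 KB
groups="1"
for base in 3 5 7; do
    g=$base
    while [ "$g" -le 100 ]; do        # enough groups for this sketch
        groups="$groups $g"
        g=$((g * base))
    done
done
candidates=$(for g in $groups; do echo $((g * blocks_per_group)); done | sort -n)
echo "$candidates"
```

The first candidate, 32768, is exactly the value passed to fsck.ext3 -b above; the next ones (98304, 163840, ...) are what you would try if the first backup were damaged too.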
"Something" actually came back, though I cannot say (and probably never
will, as there are millions of files) whether everything came back and/or
whether the files are internally corrupted.
The problem is that the old directory structure was not recovered:
_everything_ went into lost+found, some files directly in it (not with
their correct names but with #xxxxxx numbers) and also many directories
(likewise with #xxxxxx numbers as names).
The directories, however, do contain at least some subtrees of the
original filesystem hierarchy (I mean with the correct names).
I guess there's no (automatic) way now to get back the full directory
structure as before, is there?
Of course I have some backups but unfortunately dating back to last
October (yes, I know, I'm stupid and deserved it ^^)...
I will now try to use fdupes and/or rdfind to sort out all files that
are identical between the backup and the rescued fs, and manually check
and move back the others (probably months of work ^^).
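A minimal sketch of that dedup pass, using only coreutils in case fdupes/rdfind are not at hand (the helper name and the paths are made up, not from either tool): hash both trees, then list the rescued files whose exact content already exists in the backup, since those need no manual inspection.

```shell
# List rescued files that are byte-identical to some file in the
# backup tree. Everything else under the rescue tree still has to be
# checked by hand.
dedup_report() {   # usage: dedup_report BACKUP_DIR RESCUE_DIR OUTFILE
    bkh=$(mktemp); rsh=$(mktemp)
    # backup side: unique content hashes only
    find "$1" -type f -exec sha256sum {} + | awk '{print $1}' | sort -u > "$bkh"
    # rescue side: "hash  path" lines, sorted on the hash for join(1)
    find "$2" -type f -exec sha256sum {} + | sort -k1,1 > "$rsh"
    join "$bkh" "$rsh" > "$3"   # lines: "<hash> <rescued path>"
    rm -f "$bkh" "$rsh"
}

# Example (mount points are placeholders):
# dedup_report /mnt/backup /mnt/rescue/lost+found /tmp/already-have.txt
```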
It's not that I want to blame others (I mean, being stupid is my own
fault), but e2fsprogs' mkfs is really missing a check for whether any
known filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
already on the device (plus a --force switch)... IIRC xfsprogs already
does this, more or less.
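As a sketch of what such a check could look like (ext-only; a real implementation would use libblkid and cover LUKS, LVM, mdadm and friends, and the helper name here is made up): the ext2/3/4 superblock magic is the little-endian value 0xEF53 at byte offset 1080 (1024 + 56) of the device.

```shell
# Refuse-to-format guard: detect an existing ext2/3/4 superblock by
# its magic number. Works on block devices and plain image files.
has_ext_magic() {   # usage: has_ext_magic DEVICE_OR_IMAGE
    # read the two magic bytes at offset 1080; stored little-endian,
    # so 0xEF53 appears on disk as the byte sequence 53 ef
    m=$(dd if="$1" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ "$m" = "53ef" ]
}

# Guarded formatting, roughly what a built-in check could do:
# if has_ext_magic "$dev" && [ "$force" != yes ]; then
#     echo "refusing: $dev already contains an ext filesystem" >&2
# fi
```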
Thanks and cheers,
Chris.
Hi,
On Sat 07-05-11 22:03:39, Christoph Anton Mitterer wrote:
> What I did yesterday night was run:
>
> fsck.ext3 -b 32768 -B 4096 device
>
> There were MANY errors... (nearly all of them of a kind that would not
> be corrected automatically, e.g. by -p).
>
>
> "Something" actually came back, though I cannot say (and probably never
> will, as there are millions of files) whether everything came back and/or
> whether the files are internally corrupted.
>
> The problem is that the old directory structure was not recovered:
> _everything_ went into lost+found, some files directly in it (not with
> their correct names but with #xxxxxx numbers) and also many directories
> (likewise with #xxxxxx numbers as names).
> The directories, however, do contain at least some subtrees of the
> original filesystem hierarchy (I mean with the correct names).
>
> I guess there's no (automatic) way now to get back the full directory
> structure as before, is there?
No, not really. Actually, the contents of the files should generally be
OK, because mkfs overwrites only inodes. So you have lost some files and
directories, but once you have a file, it should be OK.
> Of course I have some backups but unfortunately dating back to last
> October (yes, I know, I'm stupid and deserved it ^^)...
>
> I will now try to use fdupes and/or rdfind to sort out all files that
> are identical between the backup and the rescued fs, and manually check
> and move back the others (probably months of work ^^).
>
> It's not that I want to blame others (I mean, being stupid is my own
> fault), but e2fsprogs' mkfs is really missing a check for whether any
> known filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
> already on the device (plus a --force switch)... IIRC xfsprogs already
> does this, more or less.
Yes, that would be reasonable, although it might break some people's
scripts. But it is probably worth it anyway.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Mon, May 09, 2011 at 02:06:08PM +0200, Jan Kara wrote:
> > e2fsprogs' mkfs is really missing a check for whether any known
> > filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
> > already on the device (plus a --force switch)... IIRC xfsprogs
> > already does this, more or less.
> Yes, that would be reasonable, although it might break some people's
> scripts. But it is probably worth it anyway.
This is a difficult change....
Some GUI install tools already perform this check and, after asking the
user for confirmation, will just execute:
mkfs.ext3 /dev/....
and expect it to make the filesystem.
So you could add an option:
--check_for_existing_filesystem
But then it is quite likely that people running it manually will forget
to add the option....
Roger.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ
On Mon, 2011-05-09 at 14:06 +0200, Jan Kara wrote:
> Actually, the contents of the files should generally be OK,
> because mkfs overwrites only inodes. So you have lost some files and
> directories, but once you have a file, it should be OK.
This is really good news!! :-)
Are you sure about that? Wouldn't that mean that inodes are always
located at the same block locations?
> > It's not that I want to blame others (I mean, being stupid is my own
> > fault), but e2fsprogs' mkfs is really missing a check for whether any
> > known filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
> > already on the device (plus a --force switch)... IIRC xfsprogs already
> > does this, more or less.
> Yes, that would be reasonable, although it might break some people's
> scripts. But it is probably worth it anyway.
IMHO the breakage is really justified then :-)
Especially as some of those scripts might actually do their own checks
for some filesystems, but perhaps completely forget about other
container types (partitions, LUKS, mdadm, etc.).
Cheers,
Chris.
On Mon 09-05-11 14:59:29, Christoph Anton Mitterer wrote:
> On Mon, 2011-05-09 at 14:06 +0200, Jan Kara wrote:
> > Actually, the contents of the files should generally be OK,
> > because mkfs overwrites only inodes. So you have lost some files and
> > directories, but once you have a file, it should be OK.
> This is really good news!! :-)
> Are you sure about that? Wouldn't that mean that inodes are always
> located at the same block locations?
Well, the block size is most likely the same (4 KB) in both the old and
the new fs (unless you tinkered with it, but I don't expect that). That
defines the size of a block group and thus the position of inodes,
bitmaps, etc. Another variable is the number of inodes (per group). If
you have an old superblock, you can compare the old and the new number
of inodes and be sure. Otherwise you are relying on the math in the mkfs
with which you originally created the fs being the same as the math in
your current mkfs (and on your not having specified any special options
regarding this)...
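The arithmetic behind this, as a sketch (assuming mke2fs defaults of 4 KB blocks and inode_ratio = 16384): the group geometry is fixed by the block size, but the size of each group's inode table, and therefore where the data blocks start, shifts with the inode count and inode size.

```shell
# Per-group layout figures for a default 4 KB-block ext3 fs; shows how
# the on-disk inode table doubles when inode_size goes from 128 to 256.
blocksize=4096
blocks_per_group=$((8 * blocksize))                         # 32768
inodes_per_group=$((blocks_per_group * blocksize / 16384))  # 8192 at default inode_ratio
for isize in 128 256; do
    table_blocks=$((inodes_per_group * isize / blocksize))
    echo "inode_size=$isize: inode table = $table_blocks blocks per group"
done
```

With 128-byte inodes the table is 256 blocks per group; with 256-byte inodes it is 512, so everything behind the inode table shifts, which is why a changed inode_size between the original mkfs and the accidental re-run would matter.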
Honza
> > > It's not that I want to blame others (I mean, being stupid is my own
> > > fault), but e2fsprogs' mkfs is really missing a check for whether any
> > > known filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
> > > already on the device (plus a --force switch)... IIRC xfsprogs already
> > > does this, more or less.
> > Yes, that would be reasonable, although it might break some people's
> > scripts. But it is probably worth it anyway.
> IMHO the breakage is really justified then :-)
>
> Especially as some of those scripts might actually do their own checks
> for some filesystems, but perhaps completely forget about other
> container types (partitions, LUKS, mdadm, etc.).
>
>
> Cheers,
> Chris.
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Mon, 9 May 2011, Christoph Anton Mitterer wrote:
> On Mon, 2011-05-09 at 14:06 +0200, Jan Kara wrote:
> > Actually, the contents of the files should generally be OK,
> > because mkfs overwrites only inodes. So you have lost some files and
> > directories, but once you have a file, it should be OK.
> This is really good news!! :-)
> Are you sure about that? Wouldn't that mean that inodes are always
> located at the same block locations?
>
>
> > > It's not that I want to blame others (I mean, being stupid is my own
> > > fault), but e2fsprogs' mkfs is really missing a check for whether any
> > > known filesystem/partition type/container (LUKS, LVM, mdadm, etc.) is
> > > already on the device (plus a --force switch)... IIRC xfsprogs already
> > > does this, more or less.
> > Yes, that would be reasonable, although it might break some people's
> > scripts. But it is probably worth it anyway.
> IMHO the breakage is really justified then :-)
>
> Especially as some of those scripts might actually do their own checks
> for some filesystems, but perhaps completely forget about other
> container types (partitions, LUKS, mdadm, etc.).
Well, it really is not justified! We all have testing scripts that just
create the filesystem over and over again, because that is how it works;
consider xfstests, for example, where some tests even clear MKFS_OPTIONS
completely (which should be fixed, btw).
However, I do agree that the change should be made, but we should
probably just print a warning first, and then after some time, when
people have noticed that something is going to change, make mkfs refuse
to run on an existing filesystem without the "-F" option.
Thanks!
-Lukas
>
>
> Cheers,
> Chris.
On Mon, May 09, 2011 at 02:06:08PM +0200, Jan Kara wrote:
> Yes, that would be reasonable although it might break some people's
> scripts. But probably worth it anyway.
I wouldn't object to a patch to e2fsprogs which changes mke2fs to do
this check and refuse to blow away a file system if:

[defaults]
check_for_preexisting_fs = true

is in /etc/mke2fs.conf.
I'd also suggest that mke2fs -f NOT enable mke2fs to blow away file
systems, but that we add a new utility --- probably to util-linux-ng
--- named destroy_partition, which a system administrator has to
explicitly run to blow away a partition.
I'd really rather not train people that mke2fs -f is the way to get the
utility to !@#@! shut up. It's exactly the same problem as why
"alias rm 'rm -i'" doesn't work: system administrators just get into the
habit of typing return, 'y', return, and in the end things still get
lost.
If we have a separate utility, destroy_filesystem, which checks to see
whether there is a tty, and if so prints the details of what is on the
partition and then forces the user to type "YES<return>", then we have a
unified way of protecting against careless sysadmins. We could also have
that utility do clever things, such as simply telling the hard drive to
reinitialize its crypto keys if it supports FDE, or telling it to TRIM
the entire disk if it supports TRIM/DISCARD, etc.
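A sketch of that confirmation flow (the function name and prompt text are hypothetical, and the actual wiping is deliberately left out): show what is on the device when run on a terminal, then require a literal YES.

```shell
# Interactive guard in the spirit of the proposed destroy_filesystem:
# succeed only if the operator types YES; never wipes anything itself.
confirm_destroy() {   # usage: confirm_destroy DEVICE
    if [ -t 0 ] && command -v blkid >/dev/null 2>&1; then
        blkid "$1"          # show what is currently on the device
    fi
    printf 'Type YES to destroy %s: ' "$1"
    read -r answer
    [ "$answer" = "YES" ]
}

# confirm_destroy /dev/sdXn && echo "would call the real wipe here"
```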
- Ted
On Mon, 9 May 2011 15:10:27 +0200, Jan Kara <[email protected]> wrote:
> Well, the block size is most likely the same (4 KB) in both the old and
> the new fs (unless you tinkered with it, but I don't expect that). That
> defines the size of a block group and thus the position of inodes,
> bitmaps, etc. Another variable is the number of inodes (per group). If
> you have an old superblock, you can compare the old and the new number
> of inodes and be sure. Otherwise you are relying on the math in the mkfs
> with which you originally created the fs being the same as the math in
> your current mkfs (and on your not having specified any special options
> regarding this)...
Well, I didn't change them, but maybe Debian has modified the defaults
in mke2fs.conf since I created the fs initially.
inode_size = 256 could be a candidate. Unfortunately, I don't remember
which Debian/e2fsprogs version I originally used to create the fs.
Was this ever set to 128 (I mean as a default of e2fsprogs itself, when
it was not set in mke2fs.conf)?
If the values had actually changed, wouldn't that mean that all the
data was gone?
Cheers,
Chris
On Mon 09-05-11 13:47:41, Christoph Anton Mitterer wrote:
> On Mon, 9 May 2011 15:10:27 +0200, Jan Kara <[email protected]> wrote:
> > Well, the block size is most likely the same (4 KB) in both the old and
> > the new fs (unless you tinkered with it, but I don't expect that). That
> > defines the size of a block group and thus the position of inodes,
> > bitmaps, etc. Another variable is the number of inodes (per group). If
> > you have an old superblock, you can compare the old and the new number
> > of inodes and be sure. Otherwise you are relying on the math in the mkfs
> > with which you originally created the fs being the same as the math in
> > your current mkfs (and on your not having specified any special options
> > regarding this)...
>
> Well, I didn't change them, but maybe Debian has modified the defaults
> in mke2fs.conf since I created the fs initially.
> inode_size = 256 could be a candidate. Unfortunately, I don't remember
> which Debian/e2fsprogs version I originally used to create the fs.
>
> Was this ever set to 128 (I mean as a default of e2fsprogs itself, when
> it was not set in mke2fs.conf)?
Yes, it was, although a relatively long time ago (several years).
> If the values had actually changed, wouldn't that mean that all the
> data was gone?
Not really, because we store extended attributes in the additional 128
bytes of inode space, and unless we see the proper magic value we ignore
the contents. So you'd just silently lose every second inode, I think.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR