After unintentionally deleting a file, I noticed what appears to be
an inconsistency (or at least a change) in ext3. Running debugfs and
executing the command "lsdel", I saw no inodes listed since I last ran
the partition as ext2. Does ext3 not add its deleted inodes to whatever
list ext2 does? And can this be fixed without compromising the speed or
data-integrity of ext3?
--
-Steven
In a time of universal deceit, telling the truth is a revolutionary act.
-- George Orwell
He's alive. He's alive! Oh, that fellow at RadioShack said I was mad!
Well, who's mad now?
-- Montgomery C. Burns
On Sun, Feb 24, 2002 at 09:27:27PM -0600, Steven Walter wrote:
> After unintentionally deleting a file, I noticed what appears to be
> an inconsistency (or at least a change) in ext3. Running debugfs and
> executing the command "lsdel", I saw no inodes listed since I last ran
> the partition as ext2. Does ext3 not add its deleted inodes to whatever
> list ext2 does? And can this be fixed without compromising the speed or
> data-integrity of ext3?
This issue was discussed on the ext3-users mailing list a few months ago:
https://listman.redhat.com/pipermail/ext3-users/2001-March/000381.html
and more recently:
https://listman.redhat.com/pipermail/ext3-users/2002-February/002950.html
--
fabrice
On Feb 24, 2002 21:27 -0600, Steven Walter wrote:
> After unintentionally deleting a file, I noticed what appears to be
> an inconsistency (or at least a change) in ext3. Running debugfs and
> executing the command "lsdel", I saw no inodes listed since I last ran
> the partition as ext2. Does ext3 not add its deleted inodes to whatever
> list ext2 does? And can this be fixed without compromising the speed or
> data-integrity of ext3?
Known problem. Apparently difficult to fix, unfortunately. It's not so
much that ext2 adds deleted inodes to a list, as that it simply marks the
inode "deleted" and doesn't overwrite any of the inode data on the disk.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
Is there work being done on a filesystem extension that allows admins to
'undelete' files similar to the way Netware does? My company uses Netware
for that reason alone, and I would like to see more Linux boxes here. I have
sold them on a couple of webservers and one application server, but they
hold fast to the Netware file servers because loading a backup copy of some
file from tape is not feasible with the amount of data we have (approaching
1T). I have looked across the web and only found this:
http://www.timpanogas.com/
but I don't want a Netware filesystem running on Linux, I want a *native*
Linux filesystem (i.e. ext3) that has the ability to queue deleted files
should I configure it to. Is there such a thing? If not, do you feel it
would be worth developing into the kernel? I believe this would make Linux
much more attractive to Netware houses.
Billy Rose
-----Original Message-----
From: Andreas Dilger [mailto:[email protected]]
Sent: Sunday, February 24, 2002 11:08 PM
To: Steven Walter; [email protected]
Subject: Re: ext3 and undeletion
On Feb 24, 2002 21:27 -0600, Steven Walter wrote:
> After unintentionally deleting a file, I noticed what appears to be
> an inconsistency (or at least a change) in ext3. Running debugfs and
> executing the command "lsdel", I saw no inodes listed since I last ran
> the partition as ext2. Does ext3 not add its deleted inodes to whatever
> list ext2 does? And can this be fixed without compromising the speed or
> data-integrity of ext3?
Known problem. Apparently difficult to fix, unfortunately. It's not so
much that ext2 adds deleted inodes to a list, as that it simply marks the
inode "deleted" and doesn't overwrite any of the inode data on the disk.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
> but I don't want a Netware filesystem running on Linux, I
> want a *native* Linux filesystem (i.e. ext3) that has the
> ability to queue deleted files should I configure it to.
Rather than implementing this in the filesystem itself, I'd first try
writing a libc shim that overrides unlink(). You could copy files to safety,
or do anything else you want, before they actually get deleted...
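A minimal sketch of what I mean (untested, and every name in it is
made up; it assumes a ~/.deleted directory already exists, and it only
helps while the file stays on the same filesystem, since rename()
cannot cross mount points):

/* trashlink.c - hypothetical LD_PRELOAD shim, just a sketch.
 * Build: gcc -shared -fPIC -o trashlink.so trashlink.c -ldl
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int unlink(const char *path)
{
    static int (*real_unlink)(const char *);
    const char *base, *home;
    char dest[4096];

    if (!real_unlink)
        real_unlink = (int (*)(const char *))dlsym(RTLD_NEXT, "unlink");

    /* rename into the trash dir instead of really deleting; fall
     * back to the real unlink() if that fails (different
     * filesystem, missing trash dir, ...) */
    home = getenv("HOME");
    base = strrchr(path, '/');
    base = base ? base + 1 : path;
    snprintf(dest, sizeof(dest), "%s/.deleted/%s",
             home ? home : "/tmp", base);
    if (rename(path, dest) == 0)
        return 0;
    return real_unlink(path);
}

Preload it per-process with LD_PRELOAD=./trashlink.so, or system-wide
via /etc/ld.so.preload, and every unlink() from dynamically linked
programs goes through it.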
Regards,
Dan
On Mon, Feb 25, 2002 at 12:06:29PM -0500, Dan Maas wrote:
> > but I don't want a Netware filesystem running on Linux, I
> > want a *native* Linux filesystem (i.e. ext3) that has the
> > ability to queue deleted files should I configure it to.
>
> Rather than implementing this in the filesystem itself, I'd first try
> writing a libc shim that overrides unlink(). You could copy files to safety,
> or do anything else you want, before they actually get deleted...
Yep, more portable.
Now the question is: Is there already something written?
On Mon, 25 Feb 2002, Dan Maas wrote:
> > but I don't want a Netware filesystem running on Linux, I
> > want a *native* Linux filesystem (i.e. ext3) that has the
> > ability to queue deleted files should I configure it to.
>
> Rather than implementing this in the filesystem itself, I'd first try
> writing a libc shim that overrides unlink(). You could copy files to safety,
> or do anything else you want, before they actually get deleted...
>
> Regards,
> Dan
>
Yes... unlink() becomes `mv /path/filename /deleted/path/filename`
Simple. For idiot users, you can just make such an alias for those
who insist on doing `rm *` instead of `rm \*` after they have used
a wild-card as a file-name... It happens:
`ls *.* >files`
...is typoed to:
`ls *.>* files`
If somebody then recreates the same file and deletes it again -- tough.
Cheers,
Dick Johnson
Penguin : Linux version 2.4.1 on an i686 machine (797.90 BogoMips).
111,111,111 * 111,111,111 = 12,345,678,987,654,321
On Mon, Feb 25, 2002 at 01:08:23PM -0500, Richard B. Johnson wrote:
> On Mon, 25 Feb 2002, Dan Maas wrote:
>
> > > but I don't want a Netware filesystem running on Linux, I
> > > want a *native* Linux filesystem (i.e. ext3) that has the
> > > ability to queue deleted files should I configure it to.
> >
> > Rather than implementing this in the filesystem itself, I'd first try
> > writing a libc shim that overrides unlink(). You could copy files to safety,
> > or do anything else you want, before they actually get deleted...
> >
> > Regards,
> > Dan
> >
> Yes... unlink() becomes `mv /path/filename /deleted/path/filename`
> Simple. For idiot users, you can just make such an alias for those
It would be nice if there were a 'deleted' dir per mount point, as that would
keep speeds similar to rm. Also, 'deleted' would probably have to be marked
writable, but not readable and would need a suid binary to read that dir and
limit the output to only list files owned by the calling uid. But that's a
bit too offtopic for this list...
Mike
On Feb 25, 2002 10:40 -0800, Mike Fedyk wrote:
> On Mon, Feb 25, 2002 at 01:08:23PM -0500, Richard B. Johnson wrote:
> > > Rather than implementing this in the filesystem itself, I'd first try
> > > writing a libc shim that overrides unlink(). You could copy files to
> > > safety, or do anything else you want, before they actually get deleted...
> >
> > Yes... unlink() becomes `mv /path/filename /deleted/path/filename`
> > Simple. For idiot users, you can just make such an alias for those
>
> It would be nice if there were a 'deleted' dir per mount point, as that would
> keep speeds similar to rm. Also, 'deleted' would probably have to be marked
> writable, but not readable and would need a suid binary to read that dir and
> limit the output to only list files owned by the calling uid. But that's a
> bit too offtopic for this list...
Yes, the deleted (prefer /.deleted or similar) directory would _have_ to
be per mount point for a few reasons:
1) speed - copying all deleted files across mountpoints would be _slow_.
2) space - you would have to have a _huge_ root directory otherwise.
3) locality - need to handle network filesystems properly (e.g. being able
to undelete a file on a network fs if it was deleted on a
different host).
While I've seen this "change unlink in libc" suggestion many, many times
I don't think I've ever seen it implemented. Is it just because the people
who can do it don't want to, and by the time the people who want it can
implement it they don't want it anymore?
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
Quoting Andreas Dilger <[email protected]> on Mon, Feb 25 12:49:
>
> While I've seen this "change unlink in libc" suggestion many, many times
> I don't think I've ever seen it implemented.
libtrash does this: http://www.m-arriaga.net/software/libtrash/
I've never actually used it, but I've been meaning to look at it for
quite a while.
Omen
--
"What is this talk of 'release'? Klingons do not make software
'releases'. Our software 'escapes,' leaving a bloody trail of
designers and quality assurance people in its wake."
On Monday 25 February 2002 12:20, Mike Fedyk wrote:
> On Mon, Feb 25, 2002 at 12:06:29PM -0500, Dan Maas wrote:
> > > but I don't want a Netware filesystem running on Linux, I
> > > want a *native* Linux filesystem (i.e. ext3) that has the
> > > ability to queue deleted files should I configure it to.
> >
> > Rather than implementing this in the filesystem itself, I'd first try
> > writing a libc shim that overrides unlink(). You could copy files to
> > safety, or do anything else you want, before they actually get deleted...
>
> Yep, more portable.
But it only works if everything gets linked with the new library.
>
> Now the question is: Is there already something written?
In article <02022518330103.01161@grumpersII> you wrote:
> But it only works if everything gets linked with the new library.
Nope, just put the new lib into /etc/ld.so.preload.
Actually there is an unlink package out there which does exactly this.
Greetings
Bernd
Followup to: <02022518330103.01161@grumpersII>
By author: Tom Rauschenbach <[email protected]>
In newsgroup: linux.dev.kernel
>
> On Monday 25 February 2002 12:20, Mike Fedyk wrote:
> > On Mon, Feb 25, 2002 at 12:06:29PM -0500, Dan Maas wrote:
> > > > but I don't want a Netware filesystem running on Linux, I
> > > > want a *native* Linux filesystem (i.e. ext3) that has the
> > > > ability to queue deleted files should I configure it to.
> > >
> > > Rather than implementing this in the filesystem itself, I'd first try
> > > writing a libc shim that overrides unlink(). You could copy files to
> > > safety, or do anything else you want, before they actually get deleted...
> >
> > Yep, more portable.
>
> > But it only works if everything gets linked with the new library.
>
What's a lot worse is that the kernel cannot choose to garbage-collect
it. One reason to put undelete in the kernel is that files in
limbo can be reclaimed as the disk space is needed for other users,
and you don't risk getting ENOSPC due to the disk being full with
ghosts.
-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>
On Mon, Feb 25, 2002 at 09:53:08PM -0800, H. Peter Anvin wrote:
> Followup to: <02022518330103.01161@grumpersII>
> By author: Tom Rauschenbach <[email protected]>
> In newsgroup: linux.dev.kernel
> >
> > On Monday 25 February 2002 12:20, Mike Fedyk wrote:
> > > On Mon, Feb 25, 2002 at 12:06:29PM -0500, Dan Maas wrote:
> > > > > but I don't want a Netware filesystem running on Linux, I
> > > > > want a *native* Linux filesystem (i.e. ext3) that has the
> > > > > ability to queue deleted files should I configure it to.
> > > >
> > > > Rather than implementing this in the filesystem itself, I'd first try
> > > > writing a libc shim that overrides unlink(). You could copy files to
> > > > safety, or do anything else you want, before they actually get deleted...
> > >
> > > Yep, more portable.
> >
> > But it only works if everything gets linked with the new library.
> >
>
> What's a lot worse is that the kernel cannot choose to garbage-collect
> it. One reason to put undelete in the kernel is that files in
> limbo can be reclaimed as the disk space is needed for other users,
> and you don't risk getting ENOSPC due to the disk being full with
> ghosts.
>
True, and it could do tricks like listing space used for undelete as "free"
in addition to dynamic garbage collection.
Though, with a daemon checking the dirs often, or with Daniel's idea of a
socket between unlink() in glibc and an undelete daemon, it could work quite
similarly.
Also, there wouldn't be any interaction with filesystem internals, and
userspace would probably work better with non-POSIX type filesystems (vfat,
hfs, etc) too.
IOW, there seems to be little gain to having a kernelspace solution.
Mike
Mike Fedyk wrote:
>
> Though, with a daemon checking the dirs often,
>
Can you say "race condition?"
-hpa
> True, and it could do tricks like listing space used for undelete as "free"
> in addition to dynamic garbage collection.
>
> Though, with a daemon checking the dirs often, or with Daniel's idea of a
> socket between unlink() in glibc and an undelete daemon, it could work quite
> similarly.
>
> Also, there wouldn't be any interaction with filesystem internals, and
> userspace would probably work better with non-POSIX type filesystems (vfat,
> hfs, etc) too.
>
> IOW, there seems to be little gain to having a kernelspace solution.
>
IMNSHO everyone thinking about undeletion in Linux should be
sentenced to 1 year of VMS usage and then asked again if he
still thinks that it's a good idea...
On Tue, Feb 26, 2002 at 08:31:38AM -0800, H. Peter Anvin wrote:
> Mike Fedyk wrote:
> >
> >Though, with a daemon checking the dirs often,
> >
>
> Can you say "race condition?"
>
Uhh, no.
You have a configurable size for the undelete dirs and you delete a file.
Now, that file gets moved to $mountpoint/.undelete. The daemon gets
notified through a socket, and it can check to see if it needs to delete any
older deleted files to make sure .undelete doesn't get bigger than
configured.
We're only scanning the dirs upon daemon startup (reminds me of
quota... ;), and all other daemon actions are triggered by unlink() writing
to a socket. The worst thing that can happen is not seeing your free space
immediately, but a few seconds later.
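Roughly like this (a completely untested sketch; error handling and
the real pruning policy are left out, and the socket path is only an
example for a filesystem mounted at /):

/* undeleted.c - sketch of the cleanup daemon. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

#define SOCK_PATH "/.undelete/daemon"

int main(void)
{
    struct sockaddr_un addr;
    char path[4096];
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCK_PATH);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* scan .undelete once at startup to learn how much space is
     * in use; afterwards we only wake up when unlink() tells us
     * about a newly trashed file */
    for (;;) {
        ssize_t n = recv(fd, path, sizeof(path) - 1, 0);
        if (n <= 0)
            continue;
        path[n] = '\0';
        /* here: add the file's size to the running total and, if
         * over the configured limit, really delete the oldest
         * entries in .undelete until we fit again */
        printf("trashed: %s\n", path);
    }
}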
Mike
On Tue, Feb 26, 2002 at 05:36:51PM +0100, Martin Dalecki wrote:
> >True, and it could do tricks like listing space used for undelete as "free"
> >in addition to dynamic garbage collection.
> >
> >Though, with a daemon checking the dirs often, or with Daniel's idea of a
> >socket between unlink() in glibc and an undelete daemon, it could work quite
> >similarly.
> >
> >Also, there wouldn't be any interaction with filesystem internals, and
> >userspace would probably work better with non-POSIX type filesystems (vfat,
> >hfs, etc) too.
> >
> >IOW, there seems to be little gain to having a kernelspace solution.
> >
>
> IMNSHO everyone thinking about undeletion in Linux should be
> sentenced to 1 year of VMS usage and then asked again if he
> still thinks that it's a good idea...
Can you describe the pitfalls that VMS went through so we can avoid the
problems?
I haven't had the chance to use VMS, and don't have any hardware to try it
out on. Also, just because one implementation was bad (even long ago, and
unix was considered bad then too... ;) doesn't mean the entire idea is bad.
Mike
Followup to: <[email protected]>
By author: Mike Fedyk <[email protected]>
In newsgroup: linux.dev.kernel
>
> Uhh, no.
>
> You have a configurable size for the undelete dirs and you delete a file.
> Now, that file gets moved to $mountpoint/.undelete. The daemon gets
> notified through a socket, and it can check to see if it needs to delete any
> older deleted files to make sure .undelete doesn't get bigger than
> configured.
>
> We're only scanning the dirs upon daemon startup (reminds me of
> quota... ;), and all other daemon actions are triggered by unlink() writing
> to a socket. The worst thing that can happen is not seeing your free space
> immediately, but a few seconds later.
>
Hence race condition. Also, the solution to hard-reserve space seems
to fundamentally defeat the purpose (IMO).
-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>
Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 05:36:51PM +0100, Martin Dalecki wrote:
>
>>>True, and it could do tricks like listing space used for undelete as "free"
>>>in addition to dynamic garbage collection.
>>>
>>>Though, with a daemon checking the dirs often, or with Daniel's idea of a
>>>socket between unlink() in glibc and an undelete daemon, it could work quite
>>>similarly.
>>>
>>>Also, there wouldn't be any interaction with filesystem internals, and
>>>userspace would probably work better with non-POSIX type filesystems (vfat,
>>>hfs, etc) too.
>>>
>>>IOW, there seems to be little gain to having a kernelspace solution.
>>>
>>>
>>IMNSHO everyone thinking about undeletion in Linux should be
>>sentenced to 1 year of VMS usage and then asked again if he
>>still thinks that it's a good idea...
>>
>
> Can you describe the pitfalls that VMS went through so we can avoid the
> problems?
>
> I haven't had the chance to use VMS, and don't have any hardware to try it
> out on. Also, just because one implementation was bad (even long ago, and
> unix was considered bad then too... ;) doesn't mean the entire idea is bad.
Yes I can. The main problem is that most people think that undeletion
is a magical way of getting around stupid users. But the fact is
that the very same users very quickly adapt to the presence of
undeletion facilities. And guess what? They will expect you to
always instantly recover a version of "this" file from the "stone age".
So the pain for the sysadmin will certainly not be decreased. Quite
the contrary of what he expects. For the educated user it was always a pain
in the you know where, to constantly run out of quota space due to
file versioning.
On Tue, Feb 26, 2002 at 05:54:58PM +0100, Martin Dalecki wrote:
> Mike Fedyk wrote:
> >Can you describe the pitfalls that VMS went through so we can avoid the
> >problems?
> >
> >I haven't had the chance to use VMS, and don't have any hardware to try it
> >out on. Also, just because one implementation was bad (even long ago, and
> >unix was considered bad then too... ;) doesn't mean the entire idea is bad.
>
> Yes I can. The main problem is that most people think that undeletion
> is a magical way of getting around stupid users.
That is one use, but not the only use. It is one feature that is missing on
Linux. I don't know what other unix-like systems have, but it'd be nice if
Linux had it.
>But the fact is
> that the very same users very quickly adapt to the presence of
> undeletion facilities. And guess what? They will expect you to
> always instantly recover a version of "this" file from the "stone age".
> So the pain for the sysadmin will certainly not be decreased. Quite
> the contrary of what he expects.
Yes, I can understand this exactly, but it still doesn't negate the
usefulness of undeletion.
>For the educated user it was always a pain
> in the you know where, to constantly run out of quota space due to
> file versioning.
Ahh, so we'd need to chown the files to root (or a configurable user and
group) to get around the quota issue.
Mike
>>For the educated user it was always a pain
>>in the you know where, to constantly run out of quota space due to
>>file versioning.
>>
>
> Ahh, so we'd need to chown the files to root (or a configurable user and
> group) to get around the quota issue.
Welcome to my kill-file. This just shows that you don't even have basic
background.
On Tue, Feb 26, 2002 at 08:55:17AM -0800, H. Peter Anvin wrote:
> Followup to: <[email protected]>
> By author: Mike Fedyk <[email protected]>
> In newsgroup: linux.dev.kernel
> >
> > Uhh, no.
> >
> > You have a configurable size for the undelete dirs and you delete a file.
> > Now, that file gets moved to $mountpoint/.undelete. The daemon gets
> > notified through a socket, and it can check to see if it needs to delete any
> > older deleted files to make sure .undelete doesn't get bigger than
> > configured.
> >
> > We're only scanning the dirs upon daemon startup (reminds me of
> > quota... ;), and all other daemon actions are triggered by unlink() writing
> > to a socket. The worst thing that can happen is not seeing your free space
> > immediately, but a few seconds later.
> >
>
> Hence race condition.
But an acceptable one (it's a small delay), unless the daemon dies. :(
>Also, the solution to hard-reserve space seems
> to fundamentally defeat the purpose (IMO).
>
Do you really think we should be moving files from kernel space? Ok, glibc
could move the files, that'd be ok.
So, (I don't know) how is the kernel going to support undeletion on all
filesystems (ext2/3, reiserfs, vfat, jfs, xfs, and any other writable fs...)
in the exact same way (as seen from userspace...)?
Mike
On Tue, Feb 26, 2002 at 06:07:49PM +0100, Martin Dalecki wrote:
> >>For the educated user it was always a pain
> >>in the you know where, to constantly run out of quota space due to
> >>file versioning.
> >>
> >
> >Ahh, so we'd need to chown the files to root (or a configurable user and
> >group) to get around the quota issue.
>
> Welcome to my kill-file. This just shows that you don't even have basic
> background.
Thank you.
Now, if I'm talking out of my ass, can someone sane say so?
It would only call chown/chgrp on the files *inside* the undelete dir, and
user,group,etc would have to be accounted for in another way. Am I going in
the wrong direction?
Mike
On Tue, Feb 26, 2002 at 06:07:49PM +0100, Martin Dalecki wrote:
> >>For the educated user it was always a pain
> >>in the you know where, to constantly run out of quota space due to
> >>file versioning.
> >>
> >
> >Ahh, so we'd need to chown the files to root (or a configurable user and
> >group) to get around the quota issue.
>
> Welcome to my kill-file. This just shows that you don't even have basic
> background.
>
Ok guys,
Maybe you'll respect the ideas coming from Andreas Dilger, as some of you
obviously don't like some of my ideas.
Here goes:
On Feb 25, 2002 13:45 -0800, Mike Fedyk wrote:
> I don't think we need anything very complicated either.
>
> Here's what it probably should do *in libc*:
> o trap unlink calls
> o if link count >= 2 then act normally
> o if link count == 1 then move file (including directory structure from
> mount point to $mount_point/.deleted/$path/file)
Well, my idea on this is that you don't check the link count at all. Why
should it be special if I delete /home/adilger/foo, when it is also linked
to /home/adilger/bar? If I want to undelete /home/adilger/foo, then I
should be able to. Since it is on the same filesystem, it doesn't take any
more space in the filesystem to keep this link, and in fact avoids the
race condition entirely.
Only the undelete daemon would do real deletions, everything else would
_always_ do something like:
base=`echo $file | sed "s#$mntpt/##"`
mv $file $mntpt/.undelete/$base[.timestamp/username/etc]
> The undelete daemon (undeleted) would do:
> o monitor how full the various deleted directories are (always keep some
> percentage empty to allow new files to be deleted without overflowing
> the space configured for undelete)
> o enforce configurable setting for how much space .undelete will hold
> o delete any single file that will not fit in .undelete's space no matter
> how new it is
> o any other sysadmin notification type of things
>
> Should the glibc routine interact directly with the undelete daemon so that
> the case of a lot of deletion of large files will be handled faster?
> Otherwise, if you delete a lot of files, df won't show the free space
> getting bigger until undeleted did a rescan of it's undelete dirs and freed
> the old deleted files.
Well, my take on this would be that since we are only ever moving files over
to .undelete, then we are never using up more space than what we are already
using, so we do not need to ever synchronously free space to delete a file.
We could always have unlink() write to a socket (e.g. .undelete/daemon or so)
telling it which file it just deleted, and the daemon sleeps on this socket
until woken to see if it needs to do cleanup. Having the name of the
recently deleted file passed in allows it to do things like clean up old
copies of that file quickly, etc.
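The sending side could be as small as this (again only an untested
sketch, nothing like it exists in glibc today):

/* After the wrapper has renamed the file under .undelete, fire the
 * new name at the daemon as one datagram and don't wait for any
 * answer, so nothing blocks even if the daemon is dead. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static void notify_undelete_daemon(const char *mntpt,
                                   const char *trashed)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    if (fd < 0)
        return;         /* no socket, no cleanup - still safe */
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    snprintf(addr.sun_path, sizeof(addr.sun_path),
             "%s/.undelete/daemon", mntpt);
    sendto(fd, trashed, strlen(trashed), 0,
           (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
}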
Note also that the "unrm" command would need some smarts so that it can
match against variations of $base, because programs may rename a file,
make a new copy, and then delete the renamed file (e.g. emacs, vi, etc).
It would probably do some sort of regexp on the basename() of the file.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Tue, 26 Feb 2002, Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 06:07:49PM +0100, Martin Dalecki wrote:
> > >>For the educated user it was always a pain
> > >>in the you know where, to constantly run out of quota space due to
> > >>file versioning.
> > >>
> > >
> > >Ahh, so we'd need to chown the files to root (or a configurable user and
> > >group) to get around the quota issue.
> >
> > Welcome to my kill-file. This just shows that you don't even have basic
> > background.
>
> Thank you.
>
> Now, if I'm talking out of my ass, can someone sane say so?
Your idea should work on deletion, when the inode was
about to be destroyed, but ...
> It would only call chown/chgrp on the files *inside* the undelete dir,
> and user,group,etc would have to be accounted for in another way. Am I
> going in the wrong direction?
... of course, there still is the problem of hard links.
If you unlink a file, it might still be around under
another name.
Consider:
$ ln bigfile newbigfile
$ rm bigfile
Under your scheme, maybe bigfile would be moved to the
undeletion area ... fine.
The problem would start if the ownership was changed
to root, because then 'newbigfile' would also be owned
by root and you could no longer access your file ;)
regards,
Rik
--
Will hack the VM for food.
http://www.surriel.com/ http://distro.conectiva.com/
On Tue, Feb 26, 2002 at 02:22:31PM -0300, Rik van Riel wrote:
> On Tue, 26 Feb 2002, Mike Fedyk wrote:
> > On Tue, Feb 26, 2002 at 06:07:49PM +0100, Martin Dalecki wrote:
> > > >>For the educated user it was always a pain
> > > >>in the you know where, to constantly run out of quota space due to
> > > >>file versioning.
> > > >>
> > > >
> > > >Ahh, so we'd need to chown the files to root (or a configurable user and
> > > >group) to get around the quota issue.
> > >
> > > Welcome to my kill-file. This just shows that you don't even have basic
> > > background.
> >
> > Thank you.
> >
> > Now, if I'm talking out of my ass, can someone sane say so?
>
> Your idea should work on deletion, when the inode was
> about to be destroyed, but ...
>
> > It would only call chown/chgrp on the files *inside* the undelete dir,
> > and user,group,etc would have to be accounted for in another way. Am I
> > going in the wrong direction?
>
> ... of course, there still is the problem of hard links.
>
I had considered hard links. Take a look at another message from me in
this thread and see Daniel's response to it.
Basically, it would only move the files to the undelete area if the link
count == 1. If you just decremented the link, then unlink() in glibc would
work as it does now.
Mike
"So the pain for the sysadmin will certainly not be decreased."
My company can tolerate 0% loss of data (which is why I raised this issue).
The sysadmin's pain would be standing in the unemployment line if a file
could not be recovered (recovery is currently from a heap of tapes that may
take many hours to locate). The issue is not an easier job, but data
integrity. Any sysadmin would state that every user at some point in time
will delete something that is critical. Hell, I've done it myself on my own
workstation after staring at the screen for 15 hours on a Saturday. The
ability to handle situations like a file going "poof" is why my company will
not use Linux on these particular file servers. My aim was to change that by
crushing the only thing holding Netware in my company.
Billy Rose
-----Original Message-----
From: Martin Dalecki [mailto:[email protected]]
Sent: Tuesday, February 26, 2002 10:55 AM
To: Mike Fedyk
Cc: H. Peter Anvin; [email protected]
Subject: Re: ext3 and undeletion
Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 05:36:51PM +0100, Martin Dalecki wrote:
>
>>>True, and it could do tricks like listing space used for undelete as "free"
>>>in addition to dynamic garbage collection.
>>>
>>>Though, with a daemon checking the dirs often, or with Daniel's idea of a
>>>socket between unlink() in glibc and an undelete daemon, it could work quite
>>>similarly.
>>>
>>>Also, there wouldn't be any interaction with filesystem internals, and
>>>userspace would probably work better with non-POSIX type filesystems (vfat,
>>>hfs, etc) too.
>>>
>>>IOW, there seems to be little gain to having a kernelspace solution.
>>>
>>>
>>IMNSHO everyone thinking about undeletion in Linux should be
>>sentenced to 1 year of VMS usage and then asked again if he
>>still thinks that it's a good idea...
>>
>
> Can you describe the pitfalls that VMS went through so we can avoid the
> problems?
>
> I haven't had the chance to use VMS, and don't have any hardware to try it
> out on. Also, just because one implementation was bad (even long ago, and
> unix was considered bad then too... ;) doesn't mean the entire idea is bad.
Yes I can. The main problem is that most people think that undeletion
is a magical way of getting around stupid users. But the fact is
that the very same users very quickly adapt to the presence of
undeletion facilities. And guess what? They will expect you to
always instantly recover a version of "this" file from the "stone age".
So the pain for the sysadmin will certainly not be decreased. Quite
the contrary of what he expects. For the educated user it was always a pain
in the you know where, to constantly run out of quota space due to
file versioning.
Rose, Billy wrote:
> "So the pain for the sysadmin will certainly not be decreased."
>
> My company can tolerate 0% loss of data (which is why I raised this issue).
> The sysadmin's pain would be standing in the unemployment line if a file
> could not be recovered (recovery is currently from a heap of tapes that may
> take many hours to locate). The issue is not an easier job, but data
> integrity. Any sysadmin would state that every user at some point in time
> will delete something that is critical. Hell, I've done it myself on my own
> workstation after staring at the screen for 15 hours on a Saturday. The
> ability to handle situations like a file going "poof" is why my company will
> not use Linux on these particular file servers. My aim was to change that by
> crushing the only thing holding Netware in my company.
Ever thought of adding some *archiving* features to samba - fully
transparent to the users and still no need to mess around with the
kernel? And last but not least - much easier to implement correctly,
if the only thing you want is to crush Netware...
On Feb 26, 2002 09:05 -0800, Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 05:54:58PM +0100, Martin Dalecki wrote:
> > For the educated user it was always a pain
> > in the you know where, to constantly run out of quota space due to
> > file versioning.
>
> Ahh, so we'd need to chown the files to root (or a configurable user and
> group) to get around the quota issue.
Well, I don't agree with changing file ownership, because _any_ way around
the quota system will be exploited by users (e.g. deleting files temporarily
to gain more space, and hope they aren't destroyed before they need them
again). It also opens a huge can of worms security wise, because it may
be possible for one user to undelete files belonging to another user if
you are not super careful.
No, I would have the unlink wrapper/daemon be quota-aware, and if a user
is getting close to filling their quota then it would delete more of that
user's files from the undelete directory, just as if the entire fs was
getting full or the user had hit their preconfigured limit for maximum
undelete size or versions of a file. Since the unlink call will never
_increase_ the amount of disk used by a user (it is simply a rename()
in disguise) this in itself can't be the cause of a quota problem.
The only potential problem would be if the cleanup daemon dies. In that
case, a user should still be able to do something like "unrm --purge" to
manually clean up his files in the undelete tree (or "unrm -ls <filespec>"
to show files and "unrm -d <file...>" to really delete individual ones).
People who don't want undelete at all (for whatever reason)
can always have something like "max_undelete=0" in their .unrmrc file,
or just not use it in the first place.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Tue, 26 Feb 2002, Rose, Billy wrote:
> My company can tolerate 0% loss of data (which is why I raised this issue).
> The ability to handle situations like a file going "poof" is why my
> company will not use Linux on these particular file servers. My aim was
> to change that by crushing the only thing holding Netware in my company.
You could use LVM snapshots.
regards,
Rik
--
Will hack the VM for food.
http://www.surriel.com/ http://distro.conectiva.com/
Rose, Billy wrote:
>
> My company can tolerate 0% loss of data (which is why I raised this issue).
>
There is no such thing as 0% loss of data. You can get some amount of
security with backups, snapshots (really useful!) or undelete, but you
can *NEVER* guarantee 0% loss of data... consider the case when a
(l)user overwrites (not just deletes) a newly created file.
-hpa
On Tue, Feb 26, 2002 at 06:53:31PM +0100, Martin Dalecki wrote:
> Rose, Billy wrote:
> >"So the pain for the sysadmin will certainly not be decreased."
> >
> >My company can tolerate 0% loss of data (which is why I raised this issue).
> >The sysadmin's pain would be standing in the unemployment line if a file
>could not be recovered (recovery is currently from a heap of tapes that may
> >take many hours to locate). The issue is not an easier job, but data
> >integrity. Any sysadmin would state that every user at some point in time
> >will delete something that is critical. Hell, I've done it myself on my own
> >workstation after staring at the screen for 15 hours on a Saturday. The
>ability to handle situations like a file going "poof" is why my company will
>not use Linux on these particular file servers. My aim was to change that by
>crushing the only thing holding Netware in my company.
>
> Ever thought of adding some *archiving* features to samba - fully
> transparent to the users and still no need to mess around with the
> kernel? And last but not least - much easier to implement correctly,
> if the only thing you want is to crush Netware...
Ahh, so now you only get undelete if you deleted it through samba, or nfs or
ftp, but not from anywhere else...
Also, this doesn't have to touch the kernel *at all*.
Mike
On Tue, Feb 26, 2002 at 09:38:22AM -0800, Mike Fedyk wrote:
>
> Basically, it would only move the files to the undelete area if the link
> count == 1. If you just decremented the link, then unlink() in glibc would
> work as it does now.
Always racy if done in userspace, unless you introduce a centralised
"unlink daemon" (hope no glibc developer reads that, they might be
tempted to implement such an abomination...):
proc1                 proc2
---------------------------
stat()
                      stat()
unlink()
                      unlink()
*kaboom*, blackhole opens, file is gone.
/If/ you start doing such a mess (personally I don't want undeletion
anyways), please do it at least in a correct way.
Andreas
--
Andreas Ferber - dev/consulting GmbH - Bielefeld, FRG
---------------------------------------------------------
+49 521 1365800 - [email protected] - http://www.devcon.net
On Feb 26, 2002 10:00 -0800, H. Peter Anvin wrote:
> Rose, Billy wrote:
> > My company can tolerate 0% loss of data (which is why I raised this issue).
>
> There is no such thing as 0% loss of data. You can get some amount of
> security with backups, snapshots (really useful!) or undelete, but you
> can *NEVER* guarantee 0% loss of data... consider the case when a
> (l)user overwrites (not just deletes) a newly created file.
Snapshots at the filesystem level could handle the overwrite case.
However, even then it cannot be 0% loss of data, unless you have snapshots
for _every_ write of the file, which would quickly become prohibitive in
space usage (think autobackup from an editor on a large file). Sometimes
people just have to learn from their mistakes...
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
> So the pain for the sysadmin will certainly not be decreased. Quite
> contrary for what he expects. For the educated user it was always a pain
> in the you know where, to constantly run out of quota space due to
> file versioning.
Netware was somewhat more sensible. Digging out an old file took running
a tool which had a little irritation factor. In addition stuff got
automatically recycled over time and as disk space was needed.
On Tue, Feb 26, 2002 at 11:15:09AM -0700, Andreas Dilger wrote:
> On Feb 26, 2002 10:00 -0800, H. Peter Anvin wrote:
> > Rose, Billy wrote:
> > > My company can tolerate 0% loss of data (which is why I raised this issue).
> >
> > There is no such thing as 0% loss of data. You can get some amount of
> > security with backups, snapshots (really useful!) or undelete, but you
> > can *NEVER* guarantee 0% loss of data... consider the case when a
> > (l)user overwrites (not just deletes) a newly created file.
>
> Snapshots at the filesystem level could handle the overwrite case.
>
> However, even then it cannot be 0% loss of data, unless you have snapshots
> for _every_ write of the file, which would quickly become prohibitive in
> space usage (think autobackup from an editor on a large file). Sometimes
> people just have to learn from their mistakes...
That's a logging filesystem. One that stores the "diff" whenever someone
writes, rm's, truncates, etc. etc.
AFAIK they exist, so it can be done. Don't know much else about them though.
It does seem like the "elegant" solution to the problem though, if ~0%
data loss is the objective. Having undelete is far from a full solution
to that problem.
Anyone know about those devils?
--
................................................................
: [email protected] : And I see the elder races, :
:.........................: putrid forms of man :
: Jakob Østergaard : See him rise and claim the earth, :
: OZ9ABN : his downfall is at hand. :
:.........................:............{Konkhra}...............:
Well, if they run everything on WORM drives or set up a filesystem that
never overwrites blocks, only allocates new ones (and never have a
hardware failure), and are willing to buy huge amounts of storage, it's
possible, but the cost (if only for installing all the new disk drives)
would be huge.
David Lang
On Tue, 26 Feb 2002, H. Peter Anvin wrote:
> Rose, Billy wrote:
>
> >
> > My company can tolerate 0% loss of data (which is why I raised this issue).
> >
>
>
> There is no such thing as 0% loss of data. You can get some amount of
> security with backups, snapshots (really useful!) or undelete, but you
> can *NEVER* guarantee 0% loss of data... consider the case when a
> (l)user overwrites (not just deletes) a newly created file.
>
> -hpa
On Tue, 26 Feb 2002, Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 06:07:49PM +0100, Martin Dalecki wrote:
[SNIPPED...]
> > >group) to get around the quota issue.
> >
> > Welcome to my kill-file. This just shows that you don't even have basic
> > background.
>
> Thank you.
>
> Now, if I'm talking out of my ass, can someone sane say so?
>
> It would only call chown/chgrp on the files *inside* the undelete dir, and
> user,group,etc would have to be accounted for in another way. Am I going in
> the wrong direction?
>
> Mike
No SUID, nothing special to accomplish the following:
johnson[1]$ pwd
/home/users/johnson
johnson[2]$ rm -r *
johnson[3]$ ls -la
total 8
drwxr-xr-x 2 rjohnson guru 4096 Feb 26 13:03 .
drwxr-xr-x 54 root root 4096 Feb 26 13:04 ..
johnson[4]$ ls -laR /home/users/lost+found/rjohnson
total 5428
drwxr-xr-x 17 rjohnson guru 4096 Feb 13 16:54 .
drwxrwxrwx 21 root root 4096 Feb 26 05:34 ..
-rw-r--r-- 1 rjohnson guru 6 Oct 15 1998 .XF86_S3
-rw------- 1 rjohnson guru 0 Aug 10 1998 .Xauthority
-rw-r--r-- 1 rjohnson guru 1557 Dec 31 1997 .acrorc
-rw-r--r-- 1 rjohnson guru 273 May 13 1998 .addressbook
-rw-r--r-- 1 rjohnson guru 2684 May 13 1998 .addressbook.lu
-rw-r--r-- 1 rjohnson guru 0 Feb 13 16:54 .bash_history
-rwxr-xr-x 1 rjohnson guru 343 May 17 1999 .bashrc
-rw-r--r-- 1 rjohnson guru 2938 Sep 26 1994 .emacs
-rw-r--r-- 1 rjohnson guru 47 Aug 5 1996 .forward.SAVED
-rw-r--r-- 1 rjohnson guru 92 Feb 26 1997 .gopherrc
-rw-r--r-- 1 rjohnson guru 130 Nov 2 1998 .ispell_english
-rw-r--r-- 1 rjohnson guru 164 Sep 26 1994 .kermrc
-rw-r--r-- 1 rjohnson guru 34 Sep 26 1994 .less
-rw-r--r-- 1 rjohnson guru 114 Sep 26 1994 .lessrc
-rw-r--r-- 1 rjohnson guru 8 Feb 12 1999 .mosaic-cc-2.7b5
-rw-r--r-- 1 rjohnson guru 1349 Feb 12 1999 .mosaic-hot.html
-rw-r--r-- 1 rjohnson guru 1413 Feb 12 1999 .mosaic-hot.html.backup
-rw-r--r-- 1 rjohnson guru 282 Oct 20 1995 .mosaic-hotlist-default
-rw-r--r-- 1 rjohnson guru 854 Aug 22 1996 .mosaic-hotlist-default.html
drwx------ 2 rjohnson guru 4096 Jan 27 1997 .mosaic-personal-annotations
-rw-r--r-- 1 rjohnson guru 1718 Feb 12 1999 .mosaic-x-history
-rw-r--r-- 1 rjohnson guru 39706 Feb 12 1999 .mosaic-x-history.backup
-rw-r--r-- 1 rjohnson guru 5 Feb 12 1999 .mosaicpid
-rw-r--r-- 1 rjohnson guru 204 Oct 10 1995 .mymail
-rw-r--r-- 1 rjohnson guru 9236 Dec 30 1999 .pine-debug1
-rw-r--r-- 1 rjohnson guru 8978 Feb 19 1999 .pine-debug2
-rw-r--r-- 1 rjohnson guru 8978 Feb 19 1999 .pine-debug3
-rw-r--r-- 1 rjohnson guru 9237 Feb 19 1999 .pine-debug4
-rw-r--r-- 1 rjohnson guru 10682 Dec 30 1999 .pinerc
-rw-r--r-- 1 rjohnson guru 90 Jul 15 1995 .plan
-rw-r--r-- 1 rjohnson guru 625 Oct 15 1998 .procmailrc
-rw-r--r-- 1 rjohnson guru 17 Dec 6 1994 .profile
-rw-r--r-- 1 rjohnson guru 454 Jul 15 1995 .project
-rw-r--r-- 1 rjohnson guru 35 May 16 1995 .rhosts
drwxr-xr-x 2 rjohnson guru 4096 Jan 26 1997 .seyon
-rw-r--r-- 1 rjohnson guru 209 Feb 13 16:54 .signature
-rw-r--r-- 1 rjohnson guru 532 Nov 19 1996 .signature.SAVED
-rw-r--r-- 1 rjohnson guru 275 Sep 10 1997 .signature.bak
-rw-r--r-- 1 rjohnson guru 50 Feb 19 1999 .sversionrc
drwx------ 2 rjohnson guru 4096 Jan 26 1997 .term
-rw-r--r-- 2 rjohnson guru 12288 Sep 11 1996 .vacation.dir
-rw-r--r-- 1 rjohnson guru 247 Sep 11 1996 .vacation.msg
-rw-r--r-- 2 rjohnson guru 12288 Sep 11 1996 .vacation.pag
-rwxr-xr-x 1 rjohnson guru 443 Sep 4 1997 .xinitrc
-rw-r--r-- 1 rjohnson guru 1117 Sep 2 1998 BOMBER.CREW
-rw-r--r-- 1 rjohnson guru 7697 Jul 27 1998 BusLogic.html
-rw-r--r-- 1 rjohnson guru 21 Apr 17 1998 CHECK.KERNEL
-rw-r--r-- 1 rjohnson guru 6519 Mar 30 1998 Computers
-rw-r--r-- 1 rjohnson guru 442 Sep 28 1998 DELAY.PATCH
-rw-r--r-- 1 rjohnson guru 4349 Nov 2 1998 Flying_saucers
-rw-r--r-- 1 rjohnson guru 4349 Mar 17 1998 Flying_saucers.bak
-rw-r--r-- 1 rjohnson guru 576 May 15 1998 IRQ.PATCH
-rw-r--r-- 1 rjohnson guru 69 Apr 24 1998 ME.TXT
-rwxr-xr-x 1 rjohnson guru 131 Aug 3 1998 MIT
drwx--x--x 2 rjohnson guru 4096 Oct 21 1998 Mail
-rw-r--r-- 1 rjohnson guru 135 Jul 11 1998 PATENT
-rw-r--r-- 1 rjohnson guru 625 Jun 3 1998 _procmail
drwxr-xr-x 2 rjohnson guru 4096 Dec 1 1998 asm
-rwxr-xr-x 1 rjohnson guru 1358 Sep 16 1997 backup
-rw-r--r-- 1 rjohnson guru 230454 Jun 5 1998 baldie.bmp
drwxr-xr-x 3 rjohnson guru 4096 Jun 1 1998 bomb
-rwxr-xr-x 1 rjohnson guru 2993 Oct 9 16:06 c.c
drwxr-xr-x 2 rjohnson guru 4096 Sep 28 1998 callback
-rw-r--r-- 1 rjohnson guru 12900 May 18 1998 chaos.txt
-rwxr-xr-x 1 rjohnson guru 57276 Dec 9 1997 chkdev.c
-rw-r--r-- 1 rjohnson guru 5276 Nov 17 1998 cpu.speed
-rw-r--r-- 1 rjohnson guru 895 Sep 29 1998 delay.c
-rw-r--r-- 1 rjohnson guru 964 Sep 30 1998 delay.patch
-rwxr-xr-x 1 rjohnson guru 5414 Mar 9 1998 diskio
-rw-r--r-- 1 rjohnson guru 3364 Nov 22 1998 evil.IDE
-rwxr-xr-x 1 rjohnson guru 5061 Nov 11 1998 fastchk.asm
-rw-r--r-- 1 rjohnson guru 957 Oct 31 1997 firewall.sh
-rwxr-xr-x 1 rjohnson guru 4717 Oct 1 15:09 fpu
-rw-r--r-- 1 rjohnson guru 499 Oct 30 1997 fpu.c
-rw-r--r-- 1 rjohnson guru 257507 Jan 28 08:47 friday.ps
-rw-r--r-- 1 rjohnson guru 5363 Aug 20 1997 ftp.txt
-rwxr-xr-x 1 rjohnson guru 345 Sep 15 1998 get_bomb
-rwxr-xr-x 1 rjohnson guru 219539 Dec 6 1994 gwm
drwxr-xr-x 2 rjohnson guru 4096 Nov 14 1997 homepage
-rwxr-xr-x 1 rjohnson guru 84 Jan 28 1998 hostfile
-rw-r--r-- 1 rjohnson guru 86994 Jan 28 1998 hosts.tmp
-rwxr-xr-x 1 rjohnson guru 22261 Sep 10 1998 iiclink.asm
-rw-r--r-- 1 rjohnson guru 1522 May 25 1998 info.bak
-rw-r--r-- 1 rjohnson guru 392 Aug 17 1998 ioapic-2.1.115-A
-rw-r--r-- 1 rjohnson guru 30547 May 14 1998 irq.c
drwxr-xr-x 2 rjohnson guru 4096 Jan 26 1997 jboot
-rw-r--r-- 1 rjohnson guru 675 Nov 29 1995 jboot-1.13.lsm
-rw-r--r-- 1 rjohnson guru 8263 Oct 4 1998 jitter.c
-rwxr-xr-x 1 rjohnson guru 79 Jun 26 1998 keepalive.sh
-rw-r--r-- 1 rjohnson guru 24348 Dec 10 1997 latierra.c
-rw-r--r-- 1 rjohnson guru 267 Aug 11 1997 lawsuit
drwxr-xr-x 2 rjohnson guru 4096 Mar 19 1997 logs
-rwxr-xr-x 1 rjohnson guru 5904 Nov 9 1998 lookups
-rw-r--r-- 1 rjohnson guru 1533 Nov 9 1998 lookups.c
-rw-r--r-- 1 rjohnson guru 1339467 Aug 31 1998 ls-lR
-rw-r--r-- 1 rjohnson guru 1239040 Jul 28 1998 m4-1.4.tar
drwx------ 2 rjohnson guru 4096 Oct 5 1999 mail
-rwxr-xr-x 1 rjohnson guru 411945 Nov 14 1997 main
-rwxr-xr-x 1 rjohnson guru 469 Dec 16 1996 make_bfloppy
-rwxr-xr-x 1 rjohnson guru 474 Sep 16 1996 make_boot
-rwxr-xr-x 1 rjohnson guru 460 Jun 19 1998 make_floppy
-rw-r--r-- 1 rjohnson guru 1175 Mar 20 1998 mem.c
-rw-r--r-- 1 rjohnson guru 5835 Nov 9 1998 messages
-rwxr-xr-x 1 rjohnson guru 36 Sep 12 1996 modem
-rw-r--r-- 1 rjohnson guru 28084 Oct 3 1998 mptable.c
-rw-r--r-- 1 rjohnson guru 5715 Aug 22 1996 mwmrc
-rwxr-xr-x 1 rjohnson guru 1312 Jan 28 1997 new_backup
-rw-r--r-- 1 rjohnson guru 2343 Apr 21 1998 operating.systems
-rwxr-xr-x 1 rjohnson guru 718 Jan 2 1996 plot
drwxr-xr-x 3 rjohnson guru 4096 Jan 26 1997 pppdc
drwxr-xr-x 2 rjohnson guru 4096 Jan 26 1997 ptools
-rw-r--r-- 1 rjohnson guru 3955 Jun 18 1998 reboot.c
-rwxr-xr-x 1 rjohnson guru 214 Nov 6 1997 restore
-rwxr-xr-x 1 rjohnson guru 2185 Jan 6 1997 rights.h
-rwxr-xr-x 1 rjohnson guru 7293 Dec 26 1996 showserv
-rw-r--r-- 1 rjohnson guru 1765 Aug 13 2001 smile.sh
drwxr-xr-x 19 rjohnson guru 4096 Apr 17 1998 test
-rw-r--r-- 1 rjohnson guru 788903 Jul 22 1998 test.cdcolor
-rwxr-xr-x 1 rjohnson guru 5019 May 25 1998 timer
-rw-r--r-- 1 rjohnson guru 1247 Oct 3 1998 timer.c
drwxr-xr-x 10 rjohnson guru 4096 Nov 18 1998 tools
-rw-r--r-- 1 rjohnson guru 471 Oct 9 16:09 typescript
-rw-r--r-- 1 rjohnson guru 426 Jun 24 1998 udelay.patch
-rwxr-xr-x 1 rjohnson guru 4137 Dec 5 1996 unix2world
-rwxr-xr-x 1 rjohnson guru 171 Jan 27 1998 update_host
-rwxr-xr-x 1 rjohnson guru 84707 Jul 10 1998 utility.asm
-rwxr-xr-x 1 rjohnson guru 16819 Sep 25 1998 vl12ct.asm
-rwxr-xr-x 1 rjohnson guru 3845 Dec 5 1996 world2unix
-rwxr-xr-x 1 rjohnson guru 8141 Oct 9 16:08 xxx
All the deleted files, with the correct path(s), are now in the
file-system's top-level lost+found directory. They
are still owned by the original user, still subject to the same
quota. The disk space can't run out because you have simply moved
files that didn't exceed the disk space before they were moved.
All one needs is a compile-time switch to enable the following:
If the lost+found directory is writable by the current user, blindly
create the needed paths (ignoring any that already exist), using the
lost+found directory as the root, and rename() the file you think you are
unlink()ing unless the file exists in /tmp, or is a sym-link,
special file, pipe, socket or device.
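The library change amounts to something like the following (a sketch
only, untested; it assumes an absolute path and, for brevity, a
file-system mounted at '/', and it skips the /tmp, sym-link and
special-file checks listed above):

/* Recreate the victim's path under lost+found, then rename() the
 * file there instead of unlinking it. */
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

static int move_to_lost_found(const char *path)
{
    char dest[PATH_MAX];
    char *p;

    snprintf(dest, sizeof(dest), "/lost+found%s", path);
    /* blindly create each intermediate directory; mkdir() simply
     * fails for the ones that already exist, which is fine */
    for (p = dest + strlen("/lost+found") + 1; *p; p++) {
        if (*p == '/') {
            *p = '\0';
            mkdir(dest, 0700);
            *p = '/';
        }
    }
    return rename(path, dest);
}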
To enable such a function (after modifying the C library), just make
lost+found world-writable. The permissions of its contents remain
unchanged as does the owner. If the owner didn't want others in
the universe to read his resume.txt, or porno.jpg, they still can't
read it and they can't overwrite it.
The sysadmin deletes the contents (probably via crond), every time it
has been backed up.
There are lots of advantages to this scheme. The most notable is
that it is transparent to normal users. Database programs that
delete a lot of temporary files outside of /tmp need to
have their default directory on a file-system whose lost+found is
not world-writable.
Cheers,
Dick Johnson
Penguin : Linux version 2.4.1 on an i686 machine (797.90 BogoMips).
111,111,111 * 111,111,111 = 12,345,678,987,654,321
On Tue, Feb 26, 2002 at 11:48:49AM -0600, Rose, Billy wrote:
>
> My company can tolerate 0% loss of data (which is why I raised this issue).
There are a zillion ways to accidentally destroy a file.
Ever typed ">" instead of ">>"? Ever cp'ed to the wrong destination?
Ever edited the wrong file with vi and recognized it just after the
":wq"?
All of this just truncates the old file without unlink()ing it first.
So simple undeletion can /never/, by any means, protect against
all accidental data loss.
Backup can, but not for the most recent data. Try shortening the
timespan between two backups (for example using LVM snapshots); that
will give you much more protection than beating the dead horse called
"undeletion".
Andreas
--
Andreas Ferber - dev/consulting GmbH - Bielefeld, FRG
---------------------------------------------------------
+49 521 1365800 - [email protected] - http://www.devcon.net
Richard B. Johnson wrote:
>
> All the deleted files, with the correct path(s), are now in the
> file-system's top-level lost+found directory. They
> are still owned by the original user, still subject to the same
> quota. The disk space can't run out because you have simply moved
> files that didn't exceed the disk space before they were moved.
>
Ummm... it never occurred to you why someone would delete files in the
first place?
-hpa
> Snapshots at the filesystem level could handle the overwrite case.
We need BitKeeperFS! It stores the diff'd history of all changes to all
files!
:)
Dana Lacoste
Ottawa, Canada
On Tue, 26 Feb 2002, H. Peter Anvin wrote:
> Richard B. Johnson wrote:
>
> >
> > All the deleted files, with the correct path(s), are now in the
> > file-system's top-level lost+found directory. They
> > are still owned by the original user, still subject to the same
> > quota. The disk space can't run out because you have simply moved
> > files that didn't exceed the disk space before they were moved.
> >
>
>
> Ummm... it never occurred to you why someone would delete files in the
> first place?
>
> -hpa
Yep. They probably thought they had changed directory to some scratch
file-system and they were cleaning it up! Most wildcard deletions are
truly accidental, like this:
ls .c>* # woops, made a file called '*', I'll fix it..
rm * # Good, now back to work...
ls *.c >files
Cheers,
Dick Johnson
Penguin : Linux version 2.4.1 on an i686 machine (797.90 BogoMips).
111,111,111 * 111,111,111 = 12,345,678,987,654,321
On Tue, 26 Feb 2002 10:39:40 -0800
Dana Lacoste <[email protected]> wrote:
> > Snapshots at the filesystem level could handle the overwrite case.
>
> We need BitKeeperFS! It stores the diff'd history of all changes to all
> files!
>
> :)
>
> Dana Lacoste
> Ottawa, Canada
Plan 9
Daniel
---
Recursion n.:
See Recursion.
-- Random Shack Data Processing Dictionary
Only if you check them in between writes.
David Lang
On Tue, 26 Feb 2002, Dana Lacoste wrote:
> > Snapshots at the filesystem level could handle the overwrite case.
>
> We need BitKeeperFS! It stores the diff'd history of all changes to all
> files!
>
> :)
>
> Dana Lacoste
> Ottawa, Canada
On Feb 26, 2002 13:34 -0500, Richard B. Johnson wrote:
> johnson[4]$ ls -laR /home/users/lost+found/rjohnson
> total 5428
> drwxr-xr-x 17 rjohnson guru 4096 Feb 13 16:54 .
> drwxrwxrwx 21 root root 4096 Feb 26 05:34 ..
> -rw-r--r-- 1 rjohnson guru 6 Oct 15 1998 .XF86_S3
> :
> :
> -rwxr-xr-x 1 rjohnson guru 8141 Oct 9 16:08 xxx
A shorter example would have sufficed...
> All the deleted files, with the correct path(s), are now in the
> file-system's top-level lost+found directory.
> To enable such a function (after modifying the C library), just make
> lost+found world-writable.
Making lost+found world-writable is a terrible idea (even world readable
is bad) because it exposes potentially sensitive files to the world if
it happens that fsck moves a file there after some filesystem problem.
It may be that the sensitive file was in a secure directory, and now it
is world readable.
I would strongly suggest using some other directory for this purpose,
since if you are changing lost+found to be world writable, you could
just as easily do "mkdir .undelete".
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Feb 26, 2002 19:14 +0100, Andreas Ferber wrote:
> On Tue, Feb 26, 2002 at 09:38:22AM -0800, Mike Fedyk wrote:
> > Basically, it would only move the files to the undelete area if the link
> > count == 1. If you just decremented the link, then unlink() in glibc would
> > work as it does now.
>
> Always racy if done in userspace, unless you introduce a centralised
> "unlink daemon" (hope no glibc developer reads that, they might be
> tempted to implement such an abomination...):
>
> proc1                 proc2
> ---------------------------
> stat()
>                       stat()
> unlink()
>                       unlink()
>
> *kaboom*, blackhole opens, file is gone.
I had previously suggested to Mike (but he seems to have missed it) that
you should _always_ move files to the undelete area, regardless of how
many links there are. This avoids all of the races, doesn't increase
space usage at all (because the undelete area is always in the same fs),
and is actually better for the users (because "rm <file>", "unrm <file>"
will always work even if <file> has multiple links).
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Feb 26, 2002 14:56 -0300, Rik van Riel wrote:
> On Tue, 26 Feb 2002, Rose, Billy wrote:
> > My company can tolerate 0% loss of data (which is why I raised this issue).
>
> > The ability to handle situations like a file going "poof" is why my
> > company will not use Linux on these particular file servers. My aim was
> > to change that by crushing the only thing holding Netware in my company.
>
> You could use LVM snapshots.
No, LVM snapshots are not really practical for such applications. The
real problem is that (a) you are limited to 256 LVs with the current
LVM code, so hourly snapshots on 10 filesystems probably aren't
feasible, and (b) you have to make a copy of the data for _each_ LVM
snapshot that you have, which quickly becomes expensive for a large
number of snapshots.
At one time there was a SnapFS project at SourceForge (for which I
wrote the original ext2 snapshot code), but it appears to have gone
into oblivion and the current "maintainers" are not responsive to
requests to make this available.
SnapFS has the benefits of only keeping a single copy of each version
of the file, and you can make a larger number of snapshots than with
LVM with no overhead from adding additional snapshots.
However, I have just realized that even though they deleted everything
in CVS and disabled CVS on that project entirely, I can still download a
copy of the entire CVS repository to get my original code back. It is
clearly documented in the CVS repository that the code is GPL and has
my name in the copyright messages. Some of the code is clearly not
compatible with current enhancements to ext2 (after my time, of course)
so before people start using it again it would need to be cleaned up
(e.g. get non-conflicting ext2 COMPAT feature bits, inode flags, ioctl
numbers, space in the on-disk superblock, etc).
It may also be possible to get the SF site admins to assign control
of the SnapFS project to someone else if they are interested in
working on this, because the current guys are off in proprietary-land,
even though the original code was GPL. Sadly, I probably won't have
any time to look at this in the near future, but maybe a few months
down the road after I get settled into my new job.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Tue, Feb 26, 2002 at 09:38:22AM -0800, Mike Fedyk wrote:
> On Tue, Feb 26, 2002 at 02:22:31PM -0300, Rik van Riel wrote:
> > Your idea should work on deletion, when the inode were
> > about to be destroyed, but ...
> >
> > > It would only call chown/chgrp on the files *inside* the undelete dir,
> > > and user,group,etc would have to be accounted for in another way. Am I
> > > going in the wrong direction?
> >
> > ... of course, there still is the problem of hard links.
> >
>
> I had considered hard links. Take a look at another message from me in
> this thread and see Daniel's response to it.
>
Correction: it was Andreas Dilger who replied offline; his reply has been
posted by me with his permission.
And now the idea of chgrp/chown is out of the question...
Mike
On Tue, Feb 26, 2002 at 11:48:49AM -0600 or thereabouts, Rose, Billy wrote:
It seems to me the undelete could be in the kernel, and could be
beneficial.
Rather than modifying all the different filesystems, or libc, we could
modify the VFS unlink function in the kernel. It would therefore work
with all filesystems working under VFS, and all programs regardless of
whether they are linked against the latest libc or use LD_PRELOAD.
There are obviously some issues that would have to be resolved with the
algorithm, but as far as versioning I think that is the role of backups.
This should be more along the lines of 'whoops I deleted /etc/fstab.
Let me go get it out of /.undelete'. Simply put, if the file is already
in there, just overwrite it. Though, it wouldn't be too hard to tack a
.1 on the end of the old file I suppose.
Also, if the files are just moved to the .undelete directory (and by
moved, I mean a hard link to .undelete, followed by a remove of the
original), disk usage as reported by df and du would still show it
as there. I don't think that is a very big deal. A simple solution
would just be to have a cron job empty out older files. It should be the
sysadmin's job to manage the .undelete directory, not the kernel's
(IMO). Of course, a configurable daemon to monitor the directory could
be implemented, but this especially seems like a userspace problem.
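To make the cron-job idea concrete, the cleaner could be as small as
this sketch (the /.undelete path and the one-week cutoff are
assumptions -- really they are sysadmin policy, exactly as argued
above):

/* Untested sketch of the cron job: delete everything in /.undelete
 * older than a week. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <limits.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define UNDEL_DIR "/.undelete"
#define MAX_AGE   (7 * 24 * 60 * 60)	/* one week, in seconds */

int main(void)
{
	DIR *dir = opendir(UNDEL_DIR);
	struct dirent *de;
	struct stat st;
	char path[PATH_MAX];
	time_t now = time(NULL);

	if (!dir)
		return 1;	/* no .undelete here, nothing to do */
	while ((de = readdir(dir)) != NULL) {
		if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
			continue;
		snprintf(path, sizeof(path), UNDEL_DIR "/%s", de->d_name);
		/* ctime was bumped by the move into .undelete, so it
		 * approximates the deletion time */
		if (lstat(path, &st) == 0 && now - st.st_ctime > MAX_AGE)
			unlink(path);
	}
	closedir(dir);
	return 0;
}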
Undeleting is the harder of these. Users should be able to undelete a
file IMO. Either an suid binary has to be created to list the contents
of the .undelete directory based on the user running it, or they can go
into the directory and get what they need. Rather than having a
world-writable /tmp-like directory, it could be chmod 1755 with root
ownership. That way users could browse the directory and cp out what
they wanted, but they couldn't write to it to overwrite files, do
symlink attacks, etc. This is a security issue in terms of privacy
though, depending on the user's umask. The former (an suid binary) is
probably better, but the latter is easier to implement.
Please comment.
James Strandboge
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
> Rather than modifying all the different filesystems, or libc, we could
> modify the VFS unlink function in the kernel. It would therefore work
What about every data loss caused by truncate, overwriting etc..
> /root/.bashrc /etc/fstab'), wouldn't 'cp' (or most any other app) first
> unlink the first file (/etc/fstab), then create and write the new one?
Unlikely - It will truncate it and write over it. Try strace cp 8)
On Wed, 2002-02-27 at 16:40, Alan Cox wrote:
> > Rather than modifying all the different filesystems, or libc, we could
> > modify the VFS unlink function in the kernel. It would therefore work
>
> What about every data loss caused by truncate, overwriting etc..
>
This is a good point. The easiest answer is 'that is what backups are
for'. :-)
More seriously, undelete-on-truncate could be implemented in the VFS
truncate calls as well, but this would have to be a copy to .undelete
rather than a simple link change. I am not sure implementing truncate in
undelete would be that great an idea though. Many apps will truncate files
only to update them again, which would result in the .undelete directory
filling the disk. This could be controlled by an optional mount option,
with the default being not to copy truncated files to .undelete.
Unless I am missing something, overwrite should be handled by the change
to VFS sys_unlink transparently. If a file is overwritten (eg 'cp
/root/.bashrc /etc/fstab'), wouldn't 'cp' (or most any other app) first
unlink the first file (/etc/fstab), then create and write the new one?
Jamie Strandboge
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
On Wed, 2002-02-27 at 17:33, Alan Cox wrote:
> > /root/.bashrc /etc/fstab'), wouldn't 'cp' (or most any other app) first
> > unlink the first file (/etc/fstab), then create and write the new one?
>
> Unlikely - It will truncate it and write over it. Try strace cp 8)
Excellent, then I am totally missing something!
Then truncate would have to be implemented, for the very limited case of
using 'cp'. ;-)
The mount option ('undeltrunc?') would have to be implemented. However,
I just looked at the strace of vi for fun, and then remembered that it
uses a temporary file which is unlinked after the save. Considering the
number of truncates and unlinks that could potentially happen on a
system, .undelete would undoubtedly fill up quickly. Filtering files
going into .undelete could be an option, but this would be kludgey to
put into the kernel, even for a daemon.
What is your opinion of having a mount option of 'undel' and a mount
option of 'undeltrunc'? The defaults for mount would be to not do
either. This way you could do something like:
mount -o undel / # saves unlink, not truncated
mount /var # does not save truncated or unlink
mount -o undel,undeltrunc /home # saves unlink and truncated
A cron job or user daemon (or filter of some sort) could monitor those
directories that were mounted with undel.
Jamie
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
On Wed, 2002-02-27 at 18:03, James D Strandboge wrote:
> What is your opinion of having a mount option of 'undel' and a mount
> option of 'undeltrunc'? The defaults for mount would be to not do
> either. This way you could do something like:
>
> mount -o undel / # saves unlink, not truncated
> mount /var # does not save truncated or unlink
> mount -o undel,undeltrunc /home # saves unlink and truncated
>
> A cron job or user daemon (or filter of some sort) could monitor those
> directories that were mounted with undel.
In thinking about truncate more (and at least 'cp' overwrite if not
more), IMHO this is an unavoidable delete and should not be implemented
in undelete. It would create too much overhead both in disk I/O and
coding (be it in the kernel or user space). Moving files to a directory
to be deleted later, when they should have just been truncated in the
first place, is too kludgey and backward.
However, for unlink there wouldn't be a big I/O problem in getting the
items into .undelete-- we are just changing links. It should be
relatively easy to implement, not very intrusive, should be useful in
the general case (rm and gui apps) and won't cause the disk to fill up.
Jamie
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
James D Strandboge wrote:
> However, for unlink there wouldn't be a big I/O problem in getting the
> items into .undelete-- we are just changing links. It should be
> relatively easy to implement, not very intrusive, should be useful in
> the general case (rm and gui apps) and won't cause the disk
> to fill up.
>
> Jamie
That's definitely better than nothing. Now all we need to do is keep
track of deletion time and which user did the deletion, which will
give us the same functionality that NetWare offers. Last week I had
to salvage hundreds of files from a Netware 5.1 server after a careless
user had deleted a substantial directory tree. Without being able to
sort by deletion time the job would have been a lot harder. And yes,
I recovered a lot of files which had been changed since the previous
night's backup.
Cheers,
Phil
---------------------------------------------
Phil Randal
Network Engineer
Herefordshire Council
Hereford, UK
On Tue, Feb 26, 2002 at 01:34:27PM -0500, Richard B. Johnson wrote:
> All the deleted files, with the correct path(s), are now in the
> top directory file the file-system ../lost+found directory. They
> are still owned by the original user, still subject to the same
> quota.
And what about:
- Luser rm's "foo.c"
- Luser starts working on new version of "foo.c"
- Luser recognizes, that the old version was better
- Luser rm's new "foo.c"
- Luser tries to unrm the old "foo.c" -> *bang*
Trust me, there /will/ be a luser who tries to do it this way. If
teaching lusers were enough, you'd have no need for an unrm at all.
Everyone would be using version control for important data, and
everything would be fine.
> The disk space can't run out because you have simply moved
> files that didn't exceed the disk space before they were moved.
But a user will end up unable to /free/ any diskspace. User tries
something, generates a /huge/ error log filling up the quota/disk,
oops, has to call sysadmin before work can go on... Five minutes
later, the fix just tried didn't work, oops, has to call admin again,
and so on. Do you /really/ want this?
And how do you want to handle temp files? If you don't exclude them
from undeletion, they will fill up your diskspace soon. For the moment
I can't think of any mechanism that identifies temp files reliably
(without API changes).
> All one needs is a compile-time switch to enable the following:
And a system wide configurable switch, and a user configurable switch
and so on.
Undeletion has /many/ implications, did you think through all of them?
Just as a personal note: personally I would simply /refuse/ to work on
a system where I end up unable to delete even files I /own/, or at
least I would end up implementing my own way of deleting files which
circumvents undeletion (there will /always/ be a way to do it).
If your employer didn't expressly forbid you to keep private data
on your work account, you are allowed to do so, at least here in
Germany, and you can sue your employer if he takes actions to look
into your private data without informing you /before/ doing it (taken
strictly, if you are allowed to keep private data on your work
account, you even have to be informed explicitly that the data may be
backed up and recovered later from backup tapes). So in the end,
undeletion is also a matter of privacy, and the ability to undelete
may even pose legal problems for a company.
Andreas
--
Andreas Ferber - dev/consulting GmbH - Bielefeld, FRG
---------------------------------------------------------
+49 521 1365800 - [email protected] - http://www.devcon.net
On Thu, 2002-02-28 at 10:05, Andreas Ferber wrote:
> On Tue, Feb 26, 2002 at 01:34:27PM -0500, Richard B. Johnson wrote:
>
> > All the deleted files, with the correct path(s), are now in the
> > top directory file the file-system ../lost+found directory. They
> > are still owned by the original user, still subject to the same
> > quota.
>
> And what about:
>
> - Luser rm's "foo.c"
> - Luser starts working on new version of "foo.c"
> - Luser recognizes, that the old version was better
> - Luser rm's new "foo.c"
> - Luser tries to unrm the old "foo.c" -> *bang*
As stated in a later post, a basic versioning could be implemented
where a ".1" or similar could be added to the end of the file. This
could definitely cause problems with the disk filling up though (see
below).
> But a user will end up unable to /free/ any diskspace. User tries
> something, generates a /huge/ error log filling up the quota/disk,
> oops, has to call sysadmin before work can go on... Five minutes
> later, the fix just tried didn't work, oops, has to call admin again,
> and so on. Do you /really/ want this?
A special binary could be created that would be able to read the
contents of the .undelete directory based on the uid of the user running
it. The binary could remove the file if desired. The user space
implementation would require more thought, but there are several
possibilities that I can think of.
>
> And how do you want to handle temp files? If you don't exclude them
> from undeletion, they will fill up your diskspace soon. For the moment
> I can't think of any mechanism that identifies temp files reliably
> (without API changes).
This is a good point. The kernel space code could just filter out
anything being removed from /tmp (and the various .undelete
directories). But there would be other files. A low-priority cron job
or daemon could remedy the others (being configurable in user space).
It could also remove files with a high version number if space was low.
> > All one needs is a compile-time switch to enable the following:
>
> And a system wide configurable switch, and a user configurable switch
> and so on.
>
I am not sure what this is referencing. I was now thinking that a
kernel compile option (CONFIG_UNDELETE) could be used for those who
wanted this, and those that didn't could leave it out. And for those
that do, but only on some mountpoints, the sysadmin could simply have
the .undelete just on those mounts that needed the undelete feature.
The kernel code would simply remove the file as normal if the
.undelete directory was not present.
> Undeletion has /many/ implications, did you think through all of them?
>
I REALLY doubt it! :-) But it is fun getting into the kernel code and
trying some stuff out.
>
> Just as a personal note: personally I would simply /refuse/ to work on
> a system where I end up unable to delete even files I /own/, or at
> least I would end up implementing my own way of deleting files which
> circumvents undeletion (there will /always/ be a way to do it).
>
> If your employer didn't expressively forbid you to keep private data
> on your work account, you are allowed to do so, at least here in
> germany, and you can sue your employer if he takes actions to look
> into your private data without informing you /before/ doing it (taken
> strictly, if you are allowed to keep private data on your work
> account, you even have to be informed explicitly that the data may be
> backuped and recovered later from backup tapes). So in the end,
> undeletion is also a matter of privacy, and the ability to undelete
> may even pose legal problems on a company.
>
Valid points, which lead me to think that a special suid/sgid user space
program would be needed. The .undelete directory would have chmod 700,
owned by root (or possibly 770 with 'chown root.undelete' ownership).
Jamie
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
On Feb 28, 2002 16:05 +0100, Andreas Ferber wrote:
> On Tue, Feb 26, 2002 at 01:34:27PM -0500, Richard B. Johnson wrote:
> > The disk space can't run out because you have simply moved
> > files that didn't exceed the disk space before they were moved.
>
> But a user will end up unable to /free/ any diskspace. User tries
> something, generates a /huge/ error log filling up the quota/disk,
> oops, has to call sysadmin before work can go on... Five minutes
> later, the fix just tried didn't work, oops, has to call admin again,
> and so on. Do you /really/ want this?
This is just being silly. Obviously the user will be able to delete
files from the .undelete directory, and a daemon to do automatic
cleanup was also proposed. Thinking anything else is just being obtuse.
You could have the unlink() wrapper check that there is still some
free space/quota when it is doing the move, and if not it deletes
old files until there is free space/quota. The daemon just does this
for you in the background to avoid slowing things down.
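The free-space half of that check is cheap. A sketch (the 5% threshold
is an assumption, and the quota half would need quotactl() and is
omitted):

/* Sketch: is the filesystem containing `path' nearly full?  The
 * wrapper or daemon would reap the oldest .undelete entries for as
 * long as this returns true. */
#include <sys/vfs.h>

int fs_nearly_full(const char *path)
{
	struct statfs sf;

	if (statfs(path, &sf) != 0)
		return 0;	/* can't tell; don't stall the unlink */
	/* true when fewer than ~5% of the blocks are still available */
	return sf.f_bavail * 20 < sf.f_blocks;
}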
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
On Feb 28, 2002 16:19 -0800, Rick Lindsley wrote:
> A lot of talk about "daemons" ... seems overkill to me. Any reason not
> to let each user do this on their own? I've got an rm/unrm program
> that just stores the "rescued" files in your home directory for a
> period of time based on either the name or pathname.
Well, the daemon could be used to do only background cleanup. You could
always have the unlink wrapper do cleanup synchronously all the time,
but this might make some deletes slow. If you weren't running a daemon,
then eventually the unlink wrapper would do all of the cleanups for you.
However, it would not be possible with the unlink wrapper for one user
to force cleanup of another user's deleted files if the filesystem is
running out of space. I think having a daemon do this is far safer than
making "rm" an suid program (eek).
The other problem with moving files to the owner's home directory instead
of just renaming them into a per-filesystem .undelete directory is that this
takes a long time if they are on different filesystems, or (heaven forbid)
that the home directory is NFS mounted and you just deleted a 500MB GIMP
swap file from /tmp.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
A lot of talk about "daemons" ... seems overkill to me. Any reason not
to let each user do this on their own? I've got an rm/unrm program
that just stores the "rescued" files in your home directory for a
period of time based on either the name or pathname.
It is not fully "hardened" -- it is possible to conceive of filenames
and filename/directory name pairs which will confuse it -- but it is
certainly functional for 98% of cases, which has been good enough for
my personal use.
Disadvantages:
* the mtime is purposefully changed when the file is deleted to make
it easy to tell when a file was "deleted", so you lose that information.
* Directory modes and owners are not maintained.
* File ownerships are maintained only to the extent that a hard
link allows you to. If you couldn't do a hard link (cross file
systems) then the "saved" file will be owned by you unless you
are root.
* There's no way to say "save until X amount is saved then really
delete" (whether "X amount" is %age of file system, a fixed amount,
or some other criteria). The only criteria is age of deletion.
* (true) deletions are only attempted when a subsequent file is rm'ed.
So it's conceivable to delete 300Mb of data that is scheduled to
disappear in 30 seconds but not have it go away for three days,
because you left town for the weekend and neither you nor your
cron scripts ran "rm" again until Monday.
If this sort of program is useful to folks, I'm more than happy to
provide it. No doubt it could be enhanced to address some of the
shortcomings above. I just got tired many years ago of accidentally
typing "rm * .o" and only finding out my typo when I saw "rm: .o: file
not found". It has, for the most part, served with minor modifications
for many years.
Rick
On Thu, Feb 28, 2002 at 04:05:52PM +0100, Andreas Ferber wrote:
> On Tue, Feb 26, 2002 at 01:34:27PM -0500, Richard B. Johnson wrote:
>
> > All the deleted files, with the correct path(s), are now in the
> > top directory file the file-system ../lost+found directory. They
> > are still owned by the original user, still subject to the same
> > quota.
>
> And what about:
>
> - Luser rm's "foo.c"
> - Luser starts working on new version of "foo.c"
> - Luser recognizes, that the old version was better
> - Luser rm's new "foo.c"
> - Luser tries to unrm the old "foo.c" -> *bang*
>
> Trust me, there /will/ be a luser who tries to do it this way.
Yes, users will do that. And this problem is easily solved by keeping a
copy of each deleted file based on the date, so you can have several
versions of the same file in the undelete dir.
>If
> teaching lusers were enough, you'd have no need for an unrm at all.
> Everyone would be using version control for important data, and
> everything would be fine.
Not everyone works with text-only formats.
> > The disk space can't run out because you have simply moved
> > files that didn't exceed the disk space before they were moved.
>
> But a user will end up unable to /free/ any diskspace. User tries
> something, generates a /huge/ error log filling up the quota/disk,
> oops, has to call sysadmin before work can go on... Five minutes
> later, the fix just tried didn't work, oops, has to call admin again,
> and so on. Do you /really/ want this?
>
The undelete daemon will have to be quota-aware. The unfortunate side
effect is that if the user is close to their limit, undelete is effectively
disabled because there won't be enough space left in their quota to keep the
deleted file.
The only way for undelete to fill up your drives is for the undelete daemon
to crash and die. This can be avoided by having init monitor it... or
whatever other mechanism you want...
> And how do you want to handle temp files? If you don't exclude them
> from undeletion, they will fill up your diskspace soon. For the moment
> I can't think of any mechanism that identifies temp files reliably
> (without API changes).
>
Temp files will only cause other, older files in the undelete dir to be
purged...
> > All one needs is a compile-time switch to enable the following:
>
> And a system wide configurable switch, and a user configurable switch
> and so on.
>
> Undeletion has /many/ implications, did you think through all of them?
>
No, but this thread has brought up many considerations.
>
>
> Just as a personal note: personally I would simply /refuse/ to work on
> a system where I end up unable to delete even files I /own/, or at
> least I would end up implementing my own way of deleting files which
> circumvents undeletion (there will /always/ be a way to do it).
Yes, statically compiled binaries would work, a library preload, etc.
> If your employer didn't expressively forbid you to keep private data
> on your work account, you are allowed to do so, at least here in
> germany, and you can sue your employer if he takes actions to look
> into your private data without informing you /before/ doing it (taken
> strictly, if you are allowed to keep private data on your work
> account, you even have to be informed explicitly that the data may be
> backuped and recovered later from backup tapes). So in the end,
> undeletion is also a matter of privacy, and the ability to undelete
> may even pose legal problems on a company.
>
That is a configuration issue. All the implementation will need to do is be
configurable enough to follow the local policy.
Mike
James D Strandboge wrote:
>On Tue, Feb 26, 2002 at 11:48:49AM -0600 or thereabouts, Rose, Billy wrote:
>Rather than modifying all the different filesystems, or libc, we could
>modify the VFS unlink function in the kernel.
>[...]
>Also, if the files are just moved to the .undelete directory (and by
>moved, I mean a hard link to .undelete, followed by a remove of the
>original), disk usage as reported by df and du would still show it
>as there.
>[...]
>Please comment.
>
>James Strandboge
>
My 2ctvs.
An example:
We have a file server with these fs mounted in /mnt:
/
+-mnt
| +-fs1
| | +-dir1
| | | +--rw-r--r-- 1 root root    121 Dec 13 19:47 file1.txt
| | | +--rw-r--r-- 1 paul sales 232121 Dec 13 19:47 file2.txt
| | +-dir2
| | | +--rw-r--r-- 1 root root  72534 Dec 14 20:27 file1.txt
| | | +--rw-r--r-- 1 mary sales   9493 Dec 14 20:27 file2.txt
| +-fs2
| | +-dir1
| | | +--rw-r--r-- 1 root root   2312 Dec 13 19:55 other1.txt
| | | +--rw-r--r-- 1 root root    232 Dec 13 19:55 other2.txt
| | +-dir2
| |   +--rw-r--r-- 1 root root   2534 Dec 14 20:34 file1.txt
| |   +--rw-r--r-- 1 root root    493 Dec 14 20:54 file2.txt
|
.
Then UserA deletes /mnt/fs1/dir1/file2.txt, and UserB deletes
/mnt/fs1/dir2/file2.txt and creates a new one. The state of the file
server will then be:
+-mnt
| +-fs1
| | +-dir1
| | | +--rw-r--r-- 1 root  root    121 Dec 13 19:47 file1.txt
| | +-dir2
| | | +--rw-r--r-- 1 root  root  72534 Dec 14 20:27 file1.txt
| | | +--rw-r--r-- 1 UserB sales  9493 Mar 02 12:22 file2.txt
| | +-.undelete
| |   +--rw-r--r-- 1 paul sales 232121 Dec 13 19:47 2001-12-13 19:47:23+dir1+file2.txt
| |   +--rw-r--r-- 1 mary sales   9493 Dec 14 20:27 2001-12-14 20:27:44+dir2+file2.txt
| +-fs2
| | +-dir1
| | | +--rw-r--r-- 1 root root   2312 Dec 13 19:55 other1.txt
| | | +--rw-r--r-- 1 root root    232 Dec 13 19:55 other2.txt
| | +-dir2
| |   +--rw-r--r-- 1 root root   2534 Dec 14 20:34 file1.txt
| |   +--rw-r--r-- 1 root root    493 Dec 14 20:54 file2.txt
|
.
I mean:
When a user deletes a file, the old version would be moved to the
.undelete directory and renamed:
yyyy-MM-dd hh:mm:ss+directory+from+became+filename.ext
and there would be a '.undelete' directory inside each mounted fs (with
an undelete option in /etc/fstab?).
In this way we can undelete erased files AND complete directory erases.
The date/time attr of the files in the '.undelete' directory could be
set to the delete time.
The rwx attrs do not change.
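For illustration, such a name could be built like this (a sketch only;
the '/'-to-'+' mapping follows the proposal above, while collisions and
per-fs name-length limits are ignored):

/* Sketch only: build a .undelete entry name in the format proposed
 * above.  Mapping '/' to '+' keeps the whole original path in a
 * single directory entry. */
#include <stdio.h>
#include <string.h>
#include <time.h>

void undel_name(char *out, size_t len, const char *origpath)
{
	char stamp[32], *p;
	time_t now = time(NULL);

	strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S",
		 localtime(&now));
	snprintf(out, len, "%s%s", stamp, origpath);
	/* the timestamp itself contains no '/', so we can safely
	 * sweep the whole string */
	for (p = out; *p; p++)
		if (*p == '/')
			*p = '+';
}

/* undel_name(buf, sizeof(buf), "/dir1/file2.txt") then gives
 * something like "2001-12-13 19:47:23+dir1+file2.txt" */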
Another filename version could be:
yyyy-MM-dd hh:mm:ss+owner_user+owner_group+rwxrwxrwx+directory+from+became+filename.ext
The date/time attr of the files in the '.undelete' directory would have
the delete time AND the uid/gid would be the uid/gid of the user that
deleted the files, so he/she will have rights to undelete them.
The rwx attr changes to 440, I think...
If the fs does not support long names, then we could move and rename
them as file1.txt.1, file1.txt.2, etc...
An undelete utility to complete the picture would be useful.
Sorry for my English. ;-)
Pablo
On Wed, Feb 27, 2002 at 10:33:05PM +0000, Alan Cox wrote:
> > /root/.bashrc /etc/fstab'), wouldn't 'cp' (or most any other app) first
> > unlink the first file (/etc/fstab), then create and write the new one?
>
> Unlikely - It will truncate it and write over it. Try strace cp 8)
Doesn't truncate use the same inode after the trunc op? If so, then using
another inode after the trunc op would break unix semantics. In order to
work, you'd have to use a new inode (in .undelete, of course), copy, then do
the actual trunc call.
This would make truncation expensive, whereas before it was pretty fast.
Modifying unlink will probably suffice.
Though, if truncate is modified for undelete, people could claim that our
solution is better than others and can save data in more cases. Also, it
could be used as a rudimentary file versioning system. :) This feature
should be optional, and enabled only at the sysadmin's discretion.
Hmm, truncating a large file would basically copy it, and that is what
usually happens during a save operation. So, this option needs some serious
thinking before proceeding.
Also, it would put more stress on the cleanup facilities of undeletion
because more data would pass through the undelete dir. But communication
(however you want to do that, be it a socket, etc...) between the
unlink/truncate calls and the cleanup system should make most possible races
small or non-existent.
I've probably left something out, so feel free to fill in the blanks.
Mike
> another inode after the trunc op would break unix semantics. In order to
> work, you'd have to use a new inode (in .undelete, of course), copy, then do
> the actual trunc call.
> This would make truncation expensive, whereas before it was pretty fast.
> Modifying unlink will probably suffice.
You would need to hook the truncate/unlink paths in the file system. If
you are doing it within the fs it becomes cheap (at least for ext2) - as
you can simply reassign the data blocks to a new inode, stuff the new inode
into the magic "stuff we deleted" directory and continue.
On Mon, Mar 04, 2002 at 03:12:44PM +0000, Alan Cox wrote:
> > another inode after the trunc op would break unix semantics. In order to
> > work, you'd have to use a new inode (in .undelete, of course), copy, then do
> > the actual trunc call.
> > This would make truncation expensive, whereas before it was pretty fast.
> > Modifying unlink will probably suffice.
>
> You would need to hook the truncate/unlink paths in the file system. If
> you are doing it within the fs it becomes cheap (at least for ext2) - as
> you can simply reassign the data blocks to a new inode, stuff the new inode
> into the magic "stuff we deleted" directory and continue.
It may make it easier to put this part in the kernel, but is there some way
to make it filesystem generic?
Undelete on truncate isn't a high priority, but if we do have it, it would
be nice if all of the filesystems that follow unix semantics (and maybe the
others too) could use generic VFS ops for this feature.
On Mon, 2002-03-04 at 10:12, Alan Cox wrote:
> > Modifying unlink will probably suffice.
I am working on a preliminary patch that does this. My current
implementation (which is not ready to submit -- but works) added a line to
sys_unlink in fs/namei.c that calls my vfs_undel_link(). The
vfs_undel_link() function is based on the logic of sys_link, and creates
a hard link from the deleted file to one in the "stuff we deleted"
directory. Then vfs_undel_link returns to sys_unlink and the original link
is deleted, leaving only the one in the "stuff we deleted" directory.
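Reconstructed as a sketch, the shape of that hook might be something
like the following -- to be clear, this is not the actual patch;
undelete_lookup_target() is an invented placeholder, and locking,
permission checks and dentry refcounting are all omitted:

/* Hypothetical sketch.  vfs_link() is the real 2.4 helper that
 * sys_link() uses; undelete_lookup_target() is an invented
 * placeholder that would resolve ".undelete/<name>" on the same
 * filesystem as the victim. */
#include <linux/fs.h>

struct dentry *undelete_lookup_target(struct dentry *victim,
				      struct inode **dirp);	/* assumed */

int vfs_undel_link(struct dentry *dentry)
{
	struct inode *undel_dir;
	struct dentry *target;

	target = undelete_lookup_target(dentry, &undel_dir);
	if (IS_ERR(target))
		return PTR_ERR(target);	/* no .undelete: unlink as usual */

	/* the same core operation as sys_link(): give the inode a
	 * second name, so the unlink that follows removes only the
	 * original one */
	return vfs_link(dentry, undel_dir, target);
}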
> You would need to hook the truncate/unlink paths in the file system. If
> you are doing it within the fs it becomes cheap (at least for ext2) - as
> you can simply reassign the data blocks to a new inode, stuff the new inode
> into the magic "stuff we deleted" directory and continue.
After much consideration, my implementation does not deal with
truncate/overwrite because it would fill up the filesystem and be very
slow in VFS since there would have to be a full copy. Also, staying
high level in VFS makes the patch work over any fs that uses VFS.
When I submit, I will make sure to add RFC to get more input on the
implementation, and possibly dealing with truncate.
Jamie Strandboge
--
Email: [email protected]
GPG/PGP ID: 26384A3A
Fingerprint: D9FF DF4A 2D46 A353 A289 E8F5 AA75 DCBE 2638 4A3A
James D Strandboge <[email protected]>:
> On Mon, 2002-03-04 at 10:12, Alan Cox wrote:
> > > Modifying unlink will probably suffice.
> I am working on a preliminary patch that does this. My current
> implementation (which is not ready to submit -- but works) added a line to
> sys_unlink in fs/namei.c that calls my vfs_undel_link(). The
> vfs_undel_link() function is based on the logic of sys_link, and creates
> a hard link from the deleted file to one in the "stuff we deleted"
> directory. Then vfs_undel_link returns to sys_unlink and the original link
> is deleted, leaving only the one in the "stuff we deleted" directory.
>
> > You would need to hook the truncate/unlink paths in the file system. If
> > you are doing it within the fs it becomes cheap (at least for ext2) - as
> > you can simply reassign the data blocks to a new inode, stuff the new inode
> > into the magic "stuff we deleted" directory and continue.
> After much consideration, my implementation does not deal with
> truncate/overwrite because it would fill up the filesystem and be very
> slow in VFS since there would have to be a full copy. Also, staying
> high level in VFS makes the patch work over any fs that uses VFS.
>
> When I submit, I will make sure to add RFC to get more input on the
> implementation, and possibly dealing with truncate.
>
> Jamie Strandboge
How do you handle "rm dir1/main.c dir2/main.c" ??? Both files have the
same name. And how about VFAT (no inode numbers...).
If you create a shadow directory tree, how do you handle the quota problem?
What happens to files deleted by fsck? (which depends on the disk
implementation of the FS and not the VFS)
Is there a design document or FAQ somewhere?
(I did have to deal with VMS for a while - our solution: Don't do that...
recovery was just too much of a hassle)
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: [email protected]
Any opinions expressed are solely my own.
Hi!
> > > but I don't want a Netware filesystem running on Linux, I
> > > want a *native* Linux filesystem (i.e. ext3) that has the
> > > ability to queue deleted files should I configure it to.
> >
> > Rather than implementing this in the filesystem itself, I'd first try
> > writing a libc shim that overrides unlink(). You could copy files to safety,
> > or do anything else you want, before they actually get deleted...
>
> Yep, more portable.
>
> Now the question is: Is there already something written?
Yep, I did it at one point. Then I realized I often kill files with
"> file"....
Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.
Hi
> > All the deleted files, with the correct path(s), are now in the
> > top directory file the file-system ../lost+found directory. They
> > are still owned by the original user, still subject to the same
> > quota.
>
> And what about:
>
> - Luser rm's "foo.c"
> - Luser starts working on new version of "foo.c"
> - Luser recognizes, that the old version was better
> - Luser rm's new "foo.c"
> - Luser tries to unrm the old "foo.c" -> *bang*
>
> Trust me, there /will/ be a luser who tries to do it this way. If
> teaching lusers were enough, you'd have no need for an unrm at all.
You don't consider me a luser, right?
Okay, and I *did* need unrm a few times. A few examples:
/dev# rm sbpcd * (simple typo, recovered by immediate powerdown + fsck)
/big$ mp1enc > samotari.mpg (oops, I did it twice, second time by mistake, and
powerswitch was too far away to make it in time)
So yes, unrm is useful. And it would be even more useful if it recovered
truncated files, too. How many times did you do > instead of >>? I made
that mistake many times; it's just easy..
Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.
On Mon, Mar 04, 2002 at 04:26:15PM +0000, Pavel Machek wrote:
> >
> > And what about:
> >
> > - Luser rm's "foo.c"
> > - Luser starts working on new version of "foo.c"
> > - Luser recognizes, that the old version was better
> > - Luser rm's new "foo.c"
> > - Luser tries to unrm the old "foo.c" -> *bang*
> >
> > Trust me, there /will/ be a luser who tries to do it this way. If
> > teaching lusers were enough, you'd have no need for an unrm at all.
> You don't consider me a luser, right?
No, not really. ;-)
> Okay, and I *did* need unrm few times. Few examples:
>
> /dev# rm sbpcd * (simple typo, recovered by immediate powerdown + fsck)
% rm sbpcd *
zsh: sure you want to delete all the files in /dev [yn]? n
rm: remove write-protected file `sbpcd'? n
%
> /big$ mp1enc > samotari.mpg (oops, I did it twice, second time by mistake, and
> powerswitch was too far away to make it in time)
>
> So yes, unrm is usefull. And it would be even more usefull if it recovered
> truncated files, too. How many times did you do > instead of >>? I did that
> mistake many times, its just easy..
% cat > foo
zsh: file exists: foo
% cat >| foo
[Now zsh is quiet. It even "fixes up" the history if you want so, so
that you can simply press "Up"+"Enter" if you really want to overwrite
the file]
Surely, protection against typos etc. has its value. But do it at the
place where the typos happen (i.e. at the shell prompt), not by
messing with lowlevel stuff like the unlink syscall, which
a) catches only very few ways of destroying a file's contents
b) imposes a /great/ deal of complexity on you (like having to
identify tempfiles, managing disk space etc.)
With some simple checks of the commandline you can catch many of the
common typos which end up destroying data. And it comes at nearly no
cost, is much more flexible and avoids any problems with tempfiles,
quota etc. in the first place.
And for the rest, which are not caught by those checks, that's what
backups are for.
Honestly, I also had a few moments where I wished I had an unrm
command available, and where it required a bit of work to fix up the
mess. But those situations are not so common that I really want to
have to manage a full-fledged undeletion system all the time, and work
around the problems that it imposes.
Also, take the "human factor" into account: Users /will/ get more
sloppy with regards to thinking before typing, checking commandlines
before hitting enter etc., if they always have a "safety net" behind
them. And it /will/ bite them in the ass once they are in a situation
(different system, root, ...) where those safety nets are not
available.
This is also the reason why I don't like those "rm -i" aliases some
distros set by default. I have /seen/ lusers typing "rm *" when
they just wanted to delete some of the files in a directory, because
they were used to rm asking for every file whether they want to delete it.
Now guess what happens if such a luser is dropped into an environment
where those aliases are not set...
And yes, I realize that for example those zsh examples above also go
somewhat in the same direction. But they are different from the "rm
-i" example above in that they don't change the semantics of a
command line, so you are still obliged to check the command
line /before/ submitting it, otherwise you get those annoying
questions and still have to resubmit the command.
Andreas
--
Andreas Ferber - dev/consulting GmbH - Bielefeld, FRG
---------------------------------------------------------
+49 521 1365800 - [email protected] - http://www.devcon.net
On Mon, 4 Mar 2002, Pavel Machek wrote:
> Hi
> > > All the deleted files, with the correct path(s), are now in the
> > > top directory file the file-system ../lost+found directory. They
> > > are still owned by the original user, still subject to the same
> > > quota.
> >
> > And what about:
> >
> > - Luser rm's "foo.c"
> > - Luser starts working on new version of "foo.c"
> > - Luser recognizes, that the old version was better
> > - Luser rm's new "foo.c"
> > - Luser tries to unrm the old "foo.c" -> *bang*
> >
> > Trust me, there /will/ be a luser who tries to do it this way. If
> > teaching lusers were enough, you'd have no need for an unrm at all.
>
> You don't consider me a luser, right?
Nope.
Some newbies think that Windoze 'send-to-the-wastebasket' is a kernel-
level "safe-delete". It's just some ^&$)##*@*) program that slows most
of us down.
Even Windows/Professional/2000 (NT) developers knew that it was
garbage. If you've figured out how to get to the CMD prompt, just
type:
cd \
rm -r *.*
| | |______ They still have dots
| |__________ Yes, even "folders" <coff, coff>
|_____________ What do you expect for a stolen OS? Yes, `rm` instead of
del, following the Unix pathname tradition.
Cheers,
Dick Johnson
Penguin : Linux version 2.4.18 on an i686 machine (799.53 BogoMips).
Bill Gates? Who?
To me, Linux is about freedom. As we all know, freedom comes with a price.
My company has users that sit on Win9x boxes and use Explorer to connect to
a set of Netware boxes. These boxes house critical data. I didn't create
this system, it grew to what it is before my time. If it were up to me, all
of the users would be running Emacs or some other great editor (vim is my
favorite), and would connect to a Linux machine via CVS to alter
files, so there is little chance they could crap out the file system, or
delete files without knowing it. Explorer is one of those M$ monsters that,
under the right circumstances, grabs an entire tree and grinds it up into
the digital void. The users in question are at these moments little more
than automatons from editing hundreds, perhaps even a thousand, files in
some 8 hour span of time. These users don't even have the DOS prompt in
their Start menu, let alone the time to mess with it. Bottom line: Linux is
not being used for file servers in my company because this feature is not
present. We are _not_ talking about a windblows trashcan here, we are
talking about short term enterprise class file recovery as implemented in
Netware. This was my intention when I brought the whole issue up on this
list. We don't need a windows garbage can (unless you mean literally :), we
need file recovery at the sysadmin level without going to the tapes as
often. In order for Linux to take over the planet (my dream), all of
the features that keep companies tied to an OS need to be addressed. This
issue is one such company tie.
Billy Rose
P.S. I got 2981.88 BogoMIPS today from a new install of RedHat 7.2 on a P4
1.5Ghz!
Hi!
> > Okay, and I *did* need unrm few times. Few examples:
> >
> > /dev# rm sbpcd * (simple typo, recovered by immediate powerdown + fsck)
>
> % rm sbpcd *
> zsh: sure you want to delete all the files in /dev [yn]? n
> rm: remove write-protected file `sbpcd'? n
> %
Is it decent enough to do rm -rf /usr/src/linux without asking?
> > /big$ mp1enc > samotari.mpg (oops, I did it twice, second time by mistake, and
> > powerswitch was too far away to make it in time)
> >
> > So yes, unrm is usefull. And it would be even more usefull if it recovered
> > truncated files, too. How many times did you do > instead of >>? I did that
> > mistake many times, its just easy..
>
> % cat > foo
> zsh: file exists: foo
> % cat >| foo
> [Now zsh is quiet. It even "fixes up" the history if you want so, so
> that you can simply press "Up"+"Enter" if you really want to overwrite
> the file]
This would not help. mp1enc has a nasty habit of randomly not starting,
so I had to repeat the command above a few times. What was wrong was
that I repeated it one time too many.
> Surely, protection against typos etc. has its value. But do it at the
> place where does typos happen (ie. at the shell prompt), not by
> messing with lowlevel stuff like the unlink syscall, which
Not sure it is possible. What you want is safety nets in every
app.
> a) catches only very few ways of destroying a files contents
> b) poses a /great/ deal of complexity on you (like having to
> identify tempfiles, managing disk space etc.)
Really? ext2 *already* is undelete-capable: in mc, cd /#undel:hda3. So,
where's that great complexity?
Pavel
--
Casualties in World Trade Center: ~3k dead inside the building,
cryptography in U.S.A. and free speech in Czech Republic.
If I had to choose between a file system that guaranteed recovery of
files within a relatively short timespan (60 seconds? 60 minutes?
until it next happens to overwrite the blocks of data that the file
previously used? the latter of the last two?), or a faster file
system with relatively no fragmentation on disk, I would choose the
faster file system with relatively no fragmentation on disk.
Unless you happen to reserve the blocks used by recently unlinked
inodes until some condition is reached (time, free block threshold,
...), there is no guarantee that your data exists 200ms after the user
removes the file. As such, by the time the user calls the
administrator to demand that the administrator restore the file that
they recklessly or negligently removed, there is a possibility that
it is already gone. On a file server with many users all making
changes at the same time, the possibility of this occurring is
significantly greater.
What is the value of having a few users able to remove a few
files and restore them without using backups every so
often, compared to people who find their computer speed limited by
the speed of their hard disk, and the file system that accesses the
hard disk? What of development time for Linux, and bloat for ext2/3?
"It is already bloated" isn't a true argument against. "It is already
bloated" is an argument that the bloat should be removed.
I appreciate your (Billy Rose) situation, however, there are
alternatives. For one, if you can appreciate that your users should
be using some sort of configuration management software (such as CVS),
why not pursue that end? The benefits of using a configuration
management solution are more than being able to restore a file removed
by user error. It also enables parallel development, among other
things.
"Undelete" support gives you nothing that a hammer to the head of
your users wouldn't give you.
So... that leaves... what is "undelete" for? It is a pretty feature
that could be used once in a blue moon, that may not be reliable, that
if extended, would require a great deal of effort to extend, and would
likely affect the performance of the file system that supported it
either in terms of fragmentation, or efficiency, or both.
Usually additional figuring would eliminate the fragmentation concern,
and the reliability (the ability to reliably make use of the feature)
concern, but would degrade the efficiency. Less figuring would
eliminate the efficiency concern, but decrease reliability, and
increase fragmentation. Nothing comes for free.
Companies pay a lot of money for good backup support. They don't
bother assuming that the file system that they use 'might' be able to
restore the data.
On Tue, Mar 05, 2002 at 05:04:11PM -0600, Rose, Billy wrote:
> P.S. I got 2981.88 BogoMIPS today from a new install of RedHat 7.2 on a P4
> 1.5Ghz!
You'll have to update the BogoMIPS mini-howto. They document the highest
value as:
1.2 The highest single-CPU Linux boot sequence BogoMips value
Gary Bridgewater, [email protected]
Intel Pentium 4, at 1500 MHz
2962.23 BogoMips
mark
--
[email protected]/[email protected]/[email protected] __________________________
. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder
|\/| |_| |_| |/ |_ |\/| | |_ | |/ |_ |
| | | | | \ | \ |__ . | | .|. |__ |__ | \ |__ | Ottawa, Ontario, Canada
One ring to rule them all, one ring to find them, one ring to bring them all
and in the darkness bind them...
http://mark.mielke.cc/
I'm taking an idea from the NetApp filer and using LVM to solve these
issues. I'm hoping it will work as planned and not lose too much
performance. I am going to create snapshots of certain partitions with
LVM at different time indexes. I will use these snapshots to recover
lost data and to back up to tape.
On Tue, 2002-03-05 at 18:04, Rose, Billy wrote:
> [...] Bottom line: Linux is
> not being used for file servers in my company because this feature is not
> present. We are _not_ talking about a windblows trashcan here, we are
> talking about short term enterprise class file recovery as implemented in
> Netware. [...]
[email protected]
"Communications without intelligence is noise;
Intelligence without communications is irrelevant."
- Gen. Alfred M. Gray, USMC